Sample records for existing methods results

  1. Lipid Adjustment for Chemical Exposures: Accounting for Concomitant Variables

    PubMed Central

    Li, Daniel; Longnecker, Matthew P.; Dunson, David B.

    2013-01-01

    Background: Some environmental chemical exposures are lipophilic and need to be adjusted by serum lipid levels before data analyses. There are currently various strategies that attempt to account for this problem, but all have their drawbacks. To address such concerns, we propose a new method that uses Box-Cox transformations and a simple Bayesian hierarchical model to adjust for lipophilic chemical exposures. Methods: We compared our Box-Cox method to existing methods. We ran simulation studies in which increasing levels of lipid-adjusted chemical exposure did and did not increase the odds of having a disease, and we looked at both single-exposure and multiple-exposure cases. We also analyzed an epidemiology dataset that examined the effects of various chemical exposures on the risk of birth defects. Results: Compared with existing methods, our Box-Cox method produced unbiased estimates, good coverage, similar power, and lower type-I error rates. This was the case in both single- and multiple-exposure simulation studies. Results from analysis of the birth-defect data differed from results using existing methods. Conclusion: Our Box-Cox method is a novel and intuitive way to account for the lipophilic nature of certain chemical exposures. It addresses some of the problems with existing methods, is easily extendable to multiple exposures, and can be used in any analyses that involve concomitant variables. PMID:24051893
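
    A minimal sketch of the adjustment idea, not the authors' hierarchical Bayesian model: Box-Cox-transform the serum lipid concomitant and include it as a covariate in a disease model instead of dividing the exposure by it. All data and variable names here are synthetic.

      # Hedged sketch: Box-Cox lipid adjustment as a regression covariate.
      import numpy as np
      from scipy import stats
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      lipids = rng.lognormal(mean=0.5, sigma=0.4, size=n)   # serum lipid levels
      exposure = rng.lognormal(size=n) * lipids             # lipophilic chemical
      disease = rng.binomial(1, 0.3, size=n)                # synthetic outcome

      # Box-Cox transform the lipid concomitant; lambda chosen by max likelihood
      lipids_bc, lam = stats.boxcox(lipids)

      # Include transformed lipids as a covariate rather than dividing by them
      X = sm.add_constant(np.column_stack([np.log(exposure), lipids_bc]))
      fit = sm.Logit(disease, X).fit(disp=0)
      print(lam, fit.params)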

  2. 75 FR 49536 - Petitions for Modification of Existing Mandatory Safety Standards

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-13

    ... alternative method of achieving the result of such standard exists which will at all times guarantee no less... for inspection; (8) the proposed alternative method will not be implemented until miners who have been... training plans will apply. The petitioner asserts that the proposed alternative method will at all times...

  3. Quadcopter Control Using Speech Recognition

    NASA Astrophysics Data System (ADS)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of speech recognition systems using two types of databases, an existing database and a newly created one, implemented as motion control for a quadcopter. The speech recognition system used the Mel-frequency cepstral coefficient (MFCC) method for feature extraction and was trained with the recursive neural network (RNN) method. MFCC is one of the feature extraction methods most used for speech recognition, with a success rate of 80%-95%. The existing database was used to measure the success rate of the RNN method. The new database was created in the Indonesian language, and its success rate was compared with results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain characteristic values. The characteristic values were then passed to the trained RNN, whose output was a command. The command became a control input to a single-board computer (SBC), whose output was the movement of the quadcopter. On the SBC, we used the Robot Operating System (ROS) as the operating system.
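
    As a hedged illustration of the front end described above (librosa stands in for the paper's DSP module, and a synthetic tone stands in for a recorded voice command):

      # Sketch of per-utterance MFCC feature extraction; 13 coefficients is a
      # common choice, not necessarily the paper's configuration.
      import numpy as np
      import librosa

      sr = 16000
      y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)   # 1 s synthetic "command"
      mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
      features = mfcc.mean(axis=1)     # one 13-dim vector per utterance
      print(features.shape)            # (13,) -> input to the RNN classifier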

  4. A Novel Method to Identify Differential Pathways in Hippocampus Alzheimer's Disease.

    PubMed

    Liu, Chun-Han; Liu, Lian

    2017-05-08

    BACKGROUND Alzheimer's disease (AD) is the most common type of dementia. The objective of this paper is to propose a novel method to identify differential pathways in hippocampus AD. MATERIAL AND METHODS We propose a combined method that merges existing methods. First, pathways were identified by four known methods (DAVID, the neaGUI package, the pathway-based co-expression method, and the pathway network approach), and differential pathways were selected by setting weight thresholds. Subsequently, we combined all pathways by a rank-based algorithm; we call this the combined method. Finally, common differential pathways across two or more of the five methods were selected. RESULTS The pathways obtained from the different methods differed. The combined method obtained 1639 pathways and 596 differential pathways, which included all pathways gained from the four existing methods; hence, the novel method resolves the problem of inconsistent results. In addition, a total of 13 common pathways were identified, such as metabolism, immune system, and cell cycle. CONCLUSIONS We have proposed a novel method that combines four existing methods based on a rank product algorithm, and identified 13 significant differential pathways with it. These differential pathways might provide insight into the treatment and diagnosis of hippocampus AD.
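
    A minimal sketch of a rank-product style combination, with invented pathway ranks; the paper's actual weighting and thresholding are not reproduced:

      # Combine per-method pathway ranks by their geometric mean (rank product);
      # a smaller combined score indicates a more consistently top-ranked pathway.
      import numpy as np

      rankings = {                       # pathway -> rank under each method
          "cell cycle":    [1, 3, 2, 1],
          "immune system": [2, 1, 4, 3],
          "metabolism":    [3, 2, 1, 2],
      }
      rp = {p: np.prod(r) ** (1.0 / len(r)) for p, r in rankings.items()}
      for pathway, score in sorted(rp.items(), key=lambda kv: kv[1]):
          print(pathway, round(score, 3))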

  5. An overview of data integration methods for regional assessment.

    PubMed

    Locantore, Nicholas W; Tran, Liem T; O'Neill, Robert V; McKinnis, Peter W; Smith, Elizabeth R; O'Connell, Michael

    2004-06-01

    The U.S. Environmental Protection Agency's (U.S. EPA) Regional Vulnerability Assessment (ReVA) program has focused much of its research over the last five years on developing and evaluating integration methods for spatial data. An initial strategic priority was to use existing data from monitoring programs, model results, and other spatial data. Because most of these data were not collected with the intention of being integrated into a regional assessment of conditions and vulnerabilities, issues exist that may preclude the use of some methods or require some form of data preparation. Additionally, to support multi-criteria decision-making, methods need to be able to address a series of assessment questions that provide insight into where environmental risks are a priority. This paper provides an overview of twelve spatial integration methods that can be applied to regional assessment, along with preliminary results on how sensitive each method is to the data issues likely to be encountered with the use of existing data.

  6. Uncertain decision tree inductive inference

    NASA Astrophysics Data System (ADS)

    Zarban, L.; Jafari, S.; Fakhrahmad, S. M.

    2011-10-01

    Induction is the process of reasoning in which general rules are formulated based on limited observations of recurring phenomenal patterns. Decision tree learning is one of the most widely used and practical inductive methods, which represents the results in a tree scheme. Various decision tree algorithms have already been proposed, such as CLS, ID3, Assistant, C4.5, REPTree and Random Tree. These algorithms suffer from some major shortcomings. In this article, after discussing the main limitations of the existing methods, we introduce a new decision tree induction algorithm which overcomes all the problems existing in its counterparts. The new method uses bit strings and maintains important information on them. The use of bit strings and logical operations on them yields high speed during the induction process. The method also has several important features: it deals with inconsistencies in data, avoids overfitting and handles uncertainty. We also illustrate more advantages and the new features of the proposed method. The experimental results show the effectiveness of the method in comparison with other methods existing in the literature.
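
    A toy sketch of the general bit-string bookkeeping idea (Python integers as bitsets; the records and predicates are invented, and this is not the authors' algorithm):

      # Represent which records match an attribute value, and which belong to a
      # class, as integer bitsets; split statistics become AND plus popcount.
      records = [("sunny", "yes"), ("rain", "no"), ("sunny", "no"), ("rain", "yes")]

      def bitset(pred):            # bit i is set iff record i satisfies pred
          return sum(1 << i for i, r in enumerate(records) if pred(r))

      sunny = bitset(lambda r: r[0] == "sunny")
      yes = bitset(lambda r: r[1] == "yes")
      # records that are both "sunny" and class "yes": AND, then count bits
      print(bin(sunny & yes).count("1"), "of", bin(sunny).count("1"))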

  7. Effect of Blast-Induced Vibration from New Railway Tunnel on Existing Adjacent Railway Tunnel in Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Liang, Qingguo; Li, Jie; Li, Dewu; Ou, Erfeng

    2013-01-01

    The vibrations of existing service tunnels induced by blast-excavation of adjacent tunnels have attracted much attention from both academics and engineers during recent decades in China. The blasting vibration velocity (BVV) is the most widely used controlling index for in situ monitoring and safety assessment of existing lining structures. Although numerous in situ tests and simulations have been carried out to investigate blast-induced vibrations of existing tunnels due to excavation of new tunnels (mostly by the bench excavation method), research on the overall dynamical response of existing service tunnels, in terms of not only BVV but also stress/strain, appears limited for new tunnels excavated by the full-section blasting method. In this paper, the impacts of blast-induced vibrations from a new tunnel on an existing railway tunnel in Xinjiang, China were comprehensively investigated by using laboratory tests, in situ monitoring and numerical simulations. The measured data from laboratory tests and in situ monitoring were used to determine the parameters needed for numerical simulations, and were compared with the calculated results. Based on the results from in situ monitoring and numerical simulations, which were consistent with each other, the original blasting design and corresponding parameters were adjusted to reduce the maximum BVV, which proved to be effective and safe. The effect of both the static stress before blasting and the dynamic stress induced by blasting on the total stresses in the existing tunnel lining is also discussed. The methods and related results presented could be applied in projects with similar ground conditions and distance between old and new tunnels, if the new tunnel is to be excavated by the full-section blasting method.

  8. Threshold-free high-power methods for the ontological analysis of genome-wide gene-expression studies

    PubMed Central

    Nilsson, Björn; Håkansson, Petra; Johansson, Mikael; Nelander, Sven; Fioretos, Thoas

    2007-01-01

    Ontological analysis facilitates the interpretation of microarray data. Here we describe new ontological analysis methods which, unlike existing approaches, are threshold-free and statistically powerful. We perform extensive evaluations and introduce a new concept, detection spectra, to characterize methods. We show that different ontological analysis methods exhibit distinct detection spectra, and that it is critical to account for this diversity. Our results argue strongly against the continued use of existing methods, and provide directions towards an enhanced approach. PMID:17488501

  9. Walking on a user similarity network towards personalized recommendations.

    PubMed

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the World Wide Web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network derived from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods not only in recommendation accuracy and diversity, but also in retrieval performance.
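
    A minimal random-walk-with-restart sketch on a tiny user similarity network; the similarity values and restart probability are illustrative, not the paper's tuned settings:

      # Iterate p = (1 - c) * W p + c * seed to a stationary affinity vector.
      import numpy as np

      S = np.array([[0., .8, .1],          # pairwise user similarities
                    [.8, 0., .5],
                    [.1, .5, 0.]])
      W = S / S.sum(axis=0, keepdims=True)   # column-normalized transition matrix
      restart = 0.15
      seed = np.array([1., 0., 0.])          # walk restarts at user 0

      p = seed.copy()
      for _ in range(100):
          p = (1 - restart) * W @ p + restart * seed
      print(p)    # affinity of each user to the seed user, used for ranking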

  10. An Automatic Multidocument Text Summarization Approach Based on Naïve Bayesian Classifier Using Timestamp Strategy

    PubMed Central

    Ramanujam, Nedunchelian; Kaliappan, Manivannan

    2016-01-01

    Nowadays, automatic multidocument text summarization systems can successfully retrieve summary sentences from the input documents, but they have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among the sentences, and redundancy. This paper introduces a new timestamp approach combined with Naïve Bayesian classification for multidocument text summarization. The timestamp gives the summary an ordered look, which achieves a coherent-looking summary, and extracts the more relevant information from the multiple documents. A scoring strategy is also used to calculate scores for the words to obtain word frequencies. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm and the results are compared with the proposed method. The results show that the proposed method takes less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method yields better precision, recall, and F-score than the existing clustering with lexical chaining approach. PMID:27034971
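
    A toy sketch of the two ingredients named above, word-frequency scoring plus timestamp ordering; the sentences and timestamps are invented, and this is far simpler than the paper's system:

      # Score sentences by corpus word frequency, keep the top ones, then order
      # the selected sentences by their document timestamps for coherence.
      from collections import Counter

      docs = [  # (timestamp, sentence) pairs from multiple documents
          (2, "the flood damaged several bridges"),
          (1, "heavy rain caused a flood in the region"),
          (3, "repairs to the bridges began this week"),
      ]
      freq = Counter(w for _, s in docs for w in s.split())
      scored = sorted(docs, key=lambda ts: -sum(freq[w] for w in ts[1].split()))
      summary = sorted(scored[:2])          # keep 2 sentences, order by timestamp
      print(". ".join(s for _, s in summary))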

  11. A uniformly valid approximation algorithm for nonlinear ordinary singular perturbation problems with boundary layer solutions.

    PubMed

    Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin

    2016-01-01

    This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case in which the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to this kind of solution. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact solutions and with solutions existing in the literature. The results confirm that SCEM is superior to other existing methods in terms of ease of application and effectiveness.

  12. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  13. Parametric symplectic partitioned Runge-Kutta methods with energy-preserving properties for Hamiltonian systems

    NASA Astrophysics Data System (ADS)

    Wang, Dongling; Xiao, Aiguo; Li, Xueyang

    2013-02-01

    Based on W-transformation, some parametric symplectic partitioned Runge-Kutta (PRK) methods depending on a real parameter α are developed. For α=0, the corresponding methods become the usual PRK methods, including Radau IA-$\overline{\text{IA}}$ and Lobatto IIIA-IIIB methods as examples. For any α≠0, the corresponding methods are symplectic and there exists a value α∗ such that energy is preserved in the numerical solution at each step. The existence of the parameter and the order of the numerical methods are discussed. Some numerical examples are presented to illustrate these results.

  14. Nonlinear least squares regression for single image scanning electron microscope signal-to-noise ratio estimation.

    PubMed

    Sim, K S; Norhisham, S

    2016-11-01

    A new method based on nonlinear least squares regression (NLLSR) is formulated to estimate the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images. The estimation of the SNR value with the NLLSR method is compared with three existing methods: nearest neighbourhood, first-order interpolation, and the combination of both nearest neighbourhood and first-order interpolation. Samples of SEM images with different textures, contrasts and edges were used to test the performance of the NLLSR method in estimating the SNR values of the SEM images. It is shown that the NLLSR method produces better estimation accuracy than the other three existing methods; according to the SNR results obtained from the experiment, the NLLSR method yields an SNR error difference of approximately less than 1% compared with the other three existing methods.
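
    A hedged sketch of one autocorrelation-based way to realize this idea (a 1-D toy signal stands in for SEM image rows; the exact model the authors fit is not reproduced here):

      # Fit a smooth curve to the off-peak autocorrelation by nonlinear least
      # squares, extrapolate the noise-free value at lag zero, and form an SNR.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)
      signal = np.sin(np.linspace(0, 8 * np.pi, 4096))
      noisy = signal + rng.normal(scale=0.3, size=signal.size)

      def acf(x, k):                      # autocorrelation at lag k
          x = x - x.mean()
          return np.dot(x[:-k or None], x[k:] if k else x) / x.size

      lags = np.arange(1, 8)
      r = np.array([acf(noisy, k) for k in lags])
      gauss = lambda k, a, b: a * np.exp(-b * k ** 2)
      (a, b), _ = curve_fit(gauss, lags, r, p0=(r[0], 0.01))

      r0_total = acf(noisy, 0)            # signal-plus-noise variance
      snr = a / (r0_total - a)            # extrapolated signal var over noise var
      print(round(snr, 2))                # true value here is about 0.5/0.09 ~ 5.6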

  15. Walking on a User Similarity Network towards Personalized Recommendations

    PubMed Central

    Gan, Mingxin

    2014-01-01

    Personalized recommender systems have been receiving more and more attention in addressing the serious problem of information overload accompanying the rapid evolution of the World Wide Web. Although traditional collaborative filtering approaches based on similarities between users have achieved remarkable success, it has been shown that the existence of popular objects may adversely influence the correct scoring of candidate objects, which leads to unreasonable recommendation results. Meanwhile, recent advances have demonstrated that approaches based on diffusion and random walk processes exhibit superior performance over collaborative filtering methods in both recommendation accuracy and diversity. Building on these results, we adopt three strategies (power-law adjustment, nearest neighbor, and threshold filtration) to adjust a user similarity network derived from user similarity scores calculated on historical data, and then propose a random walk with restart model on the constructed network to achieve personalized recommendations. We perform cross-validation experiments on two real data sets (MovieLens and Netflix) and compare the performance of our method against the existing state-of-the-art methods. Results show that our method outperforms existing methods not only in recommendation accuracy and diversity, but also in retrieval performance. PMID:25489942

  16. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose an informative DOM node extraction method from a Web page for preprocessing in Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser; the learning set consists of hundreds of Web pages with annotations of their informative DOM nodes. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM to use the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative DOM node extraction method (either the proposed method or an existing one) with the existing noise-elimination methods: Heur, which removes advertisements and link-lists by some heuristics, and CE, which removes the DOM nodes existing in other Web pages of the same Web site. Experimental results show that 1) LM outperforms other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms other combination methods.

  17. Characterization of background concentrations of contaminants using a mixture of normal distributions.

    PubMed

    Qian, Song S; Lyons, Regan E

    2006-10-01

    We present a Bayesian approach for characterizing background contaminant concentration distributions using data from sites that may have been contaminated. Our method, focused on estimation, resolves several technical problems of the existing methods sanctioned by the U.S. Environmental Protection Agency (USEPA) (a hypothesis-testing based method), resulting in a simple and quick procedure for estimating background contaminant concentrations. The proposed Bayesian method is applied to two data sets from a federal facility regulated under the Resource Conservation and Recovery Act. The results are compared to background distributions identified using existing methods recommended by the USEPA. The two data sets represent low and moderate levels of censoring in the data. Although an unbiased estimator is elusive, we show that the proposed Bayesian estimation method will have a smaller bias than the EPA-recommended method.
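
    Not the paper's Bayesian estimator, but a quick stand-in that conveys the mixture idea (synthetic data; sklearn's frequentist fit replaces the Bayesian machinery):

      # Fit a two-component normal mixture to log concentrations and read the
      # lower-mean component as the "background" distribution.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(2)
      background = rng.lognormal(mean=0.0, sigma=0.3, size=300)
      impacted = rng.lognormal(mean=1.2, sigma=0.4, size=100)
      x = np.log(np.concatenate([background, impacted])).reshape(-1, 1)

      gm = GaussianMixture(n_components=2, random_state=0).fit(x)
      bg = int(np.argmin(gm.means_))       # component with the lower mean
      print("background geometric mean ~", np.exp(gm.means_[bg, 0]))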

  18. Existence, regularity, and concentration phenomenon of nontrivial solitary waves for a class of generalized variable coefficient Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Alves, Claudianor O.; Miyagaki, Olímpio H.

    2017-08-01

    In this paper, we establish some results concerning the existence, regularity, and concentration phenomenon of nontrivial solitary waves for a class of generalized variable coefficient Kadomtsev-Petviashvili equation. Variational methods are used to get an existence result, as well as, to study the concentration phenomenon, while the regularity is more delicate because we are leading with functions in an anisotropic Sobolev space.

  19. Homotopy perturbation method with Laplace Transform (LT-HPM) for solving Lane-Emden type differential equations (LETDEs).

    PubMed

    Tripathi, Rajnee; Mishra, Hradyesh Kumar

    2016-01-01

    In this communication, we describe the Homotopy Perturbation Method with Laplace Transform (LT-HPM), which is used to solve Lane-Emden type differential equations. Lane-Emden type differential equations are very difficult to solve numerically. Here we implemented this method for two linear homogeneous, two linear nonhomogeneous, and four nonlinear homogeneous Lane-Emden type differential equations, with appropriate comparisons against exact solutions. In the current study, the method produces results, in the form of power series, closer to the exact solutions than other existing methods. The Laplace transform is used to accelerate the convergence of the power series, and the results are shown in tables and graphs, which show good agreement with the other existing methods in the literature. The results show that LT-HPM is very effective and easy to implement.

  20. Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model

    NASA Astrophysics Data System (ADS)

    Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie

    2017-11-01

    Aerated flow is a complex hydraulic phenomenon that exists widely in the field of environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Euler methods (the volume-of-fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method has performed well in solving particle motion, but a lack of research on interphase interaction and air concentration has limited the application of SPH models. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force is included in the momentum equation to ensure the accuracy of the air-particle slip velocity. Furthermore, a calculation method for air concentration is developed to analyse the air entrainment characteristics. Two case studies were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental results.

  1. Selecting supplier combination based on fuzzy multicriteria analysis

    NASA Astrophysics Data System (ADS)

    Han, Zhi-Qiu; Luo, Xin-Xing; Chen, Xiao-Hong; Yang, Wu-E.

    2015-07-01

    Existing multicriteria analysis (MCA) methods are probably ineffective in selecting a supplier combination. Thus, an MCA-based fuzzy 0-1 programming method is introduced. The programming builds on a simple MCA matrix of the kind used to select a single supplier. By solving the program, the most feasible combination of suppliers is selected. Importantly, this result differs from selecting suppliers one by one according to a single-selection order, which is how sole suppliers are ranked in existing MCA methods. An example highlights this difference and illustrates the proposed method.
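
    A toy enumeration sketch of selecting a combination rather than ranking single suppliers (scores, capacities, and demand are invented; the paper uses fuzzy 0-1 programming, not brute force):

      # Choose the supplier combination maximizing the average MCA score subject
      # to a capacity constraint, by enumerating all 0-1 selections.
      from itertools import combinations

      scores = {"A": 0.8, "B": 0.6, "C": 0.7, "D": 0.5}   # MCA score per supplier
      capacity = {"A": 40, "B": 70, "C": 50, "D": 90}
      demand = 100

      best = max(
          (combo for r in range(1, len(scores) + 1)
           for combo in combinations(scores, r)
           if sum(capacity[s] for s in combo) >= demand),
          key=lambda combo: sum(scores[s] for s in combo) / len(combo),
      )
      print(best)   # a combination, not a one-by-one ranking of sole suppliers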

  2. Trends in heteroepitaxy of III-Vs on silicon for photonic and photovoltaic applications

    NASA Astrophysics Data System (ADS)

    Lourdudoss, Sebastian; Junesand, Carl; Kataria, Himanshu; Metaferia, Wondwosen; Omanakuttan, Giriprasanth; Sun, Yan-Ting; Wang, Zhechao; Olsson, Fredrik

    2017-02-01

    We present and compare the existing methods of heteroepitaxy of III-Vs on silicon and their trends. We focus on the epitaxial lateral overgrowth (ELOG) method as a means of achieving good-quality III-Vs on silicon. Initially conducted primarily by near-equilibrium epitaxial methods such as liquid phase epitaxy and hydride vapour phase epitaxy, nowadays ELOG is being carried out even by non-equilibrium methods such as metal organic vapour phase epitaxy. In the ELOG method, the intermediate defective seed and mask layers still exist between the laterally grown purer III-V layer and silicon. In a modified ELOG method called corrugated epitaxial lateral overgrowth (CELOG), it is possible to obtain a direct interface between the III-V layer and silicon. In this presentation we exemplify some recent results obtained by these techniques. We assess the potential of these methods, along with the other existing methods, for realizing truly monolithic photonic integration on silicon and III-V/Si heterojunction solar cells.

  3. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation

    PubMed Central

    2011-01-01

    Background: Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. Methods: In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. Results: The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. Conclusions: The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions. PMID:21635749
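
    A minimal sketch of the extraction rule described in the Methods (the citations are invented; real MeSH assignments come from MEDLINE records):

      # Keep a citation for an ambiguous term only when exactly one of the term's
      # candidate MeSH headings is assigned to it; that heading becomes the label.
      citations = [
          {"text": "cold exposure in mice", "mesh": {"Cold Temperature"}},
          {"text": "common cold symptoms", "mesh": {"Common Cold"}},
          {"text": "cold agglutinin findings", "mesh": {"Common Cold", "Cryoglobulins"}},
      ]
      senses = {"Cold Temperature", "Common Cold"}   # candidate senses for "cold"

      for c in citations:
          hits = c["mesh"] & senses
          if len(hits) == 1:                 # unambiguous -> auto-labeled instance
              print(c["text"], "->", hits.pop())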

  4. A study of prediction methods for the high angle-of-attack aerodynamics of straight wings and fighter aircraft

    NASA Technical Reports Server (NTRS)

    Mcmillan, O. J.; Mendenhall, M. R.; Perkins, S. C., Jr.

    1984-01-01

    Work is described dealing with two areas which are dominated by the nonlinear effects of vortex flows. The first area concerns the stall/spin characteristics of a general aviation wing with a modified leading edge. The second area concerns the high-angle-of-attack characteristics of high performance military aircraft. For each area, the governing phenomena are described as identified with the aid of existing experimental data. Existing analytical methods are reviewed, and the most promising method for each area used to perform some preliminary calculations. Based on these results, the strengths and weaknesses of the methods are defined, and research programs recommended to improve the methods as a result of better understanding of the flow mechanisms involved.

  5. Looking for trees in the forest: summary tree from posterior samples

    PubMed Central

    2013-01-01

    Background: Bayesian phylogenetic analysis generates a set of trees which are often condensed into a single tree representing the whole set. Many methods exist for selecting a representative topology for a set of unrooted trees, few exist for assigning branch lengths to a fixed topology, and even fewer for simultaneously setting the topology and branch lengths. However, there is very little research into locating a good representative for a set of rooted time trees like the ones obtained from a BEAST analysis. Results: We empirically compare new and known methods for generating a summary tree. Some new methods are motivated by mathematical constructions such as tree metrics, while the rest employ tree concepts which work well in practice. These use more of the posterior than existing methods, which discard information not directly mapped to the chosen topology. Using results from a large number of simulations we assess the quality of a summary tree, measuring (a) how well it explains the sequence data under the model and (b) how close it is to the “truth”, i.e. to the tree used to generate the sequences. Conclusions: Our simulations indicate that no single method is “best”. Methods producing good divergence time estimates have poor branch lengths and lower model fit, and vice versa. Using the results presented here, a user can choose the appropriate method based on the purpose of the summary tree. PMID:24093883

  6. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation.

    PubMed

    Jimeno-Yepes, Antonio J; McInnes, Bridget T; Aronson, Alan R

    2011-06-02

    Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.

  7. Multiple network alignment via multiMAGNA+.

    PubMed

    Vijayan, Vipin; Milenkovic, Tijana

    2017-08-21

    Network alignment (NA) aims to find a node mapping that identifies topologically or functionally similar network regions between molecular networks of different species. Analogous to genomic sequence alignment, NA can be used to transfer biological knowledge from well- to poorly-studied species between aligned network regions. Pairwise NA (PNA) finds similar regions between two networks while multiple NA (MNA) can align more than two networks. We focus on MNA. Existing MNA methods aim to maximize total similarity over all aligned nodes (node conservation). Then, they evaluate alignment quality by measuring the amount of conserved edges, but only after the alignment is constructed. Directly optimizing edge conservation during alignment construction in addition to node conservation may result in superior alignments. Thus, we present a novel MNA method called multiMAGNA++ that can achieve this. Indeed, multiMAGNA++ outperforms or is on par with existing MNA methods, while often completing faster than existing methods. That is, multiMAGNA++ scales well to larger network data and can be parallelized effectively. During method evaluation, we also introduce new MNA quality measures to allow for more fair MNA method comparison compared to the existing alignment quality measures. MultiMAGNA++ code is available on the method's web page at http://nd.edu/~cone/multiMAGNA++/.

  8. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.

  9. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE PAGES

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason; ...

    2018-04-05

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
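
    A hedged sketch of the feature-learning step (sklearn's DictionaryLearning stands in for the authors' implementation; random bytes and unigram counts keep the example small, where the paper uses larger n-grams):

      # Learn a sparse dictionary over byte-histogram vectors of file fragments;
      # the sparse codes then serve as features for a downstream classifier.
      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      rng = np.random.default_rng(3)
      fragments = [rng.integers(0, 256, 512, dtype=np.uint8) for _ in range(50)]

      def unigram_counts(frag):            # n=1 histogram; larger n in the paper
          return np.bincount(frag, minlength=256).astype(float)

      X = np.stack([unigram_counts(f) for f in fragments])
      dl = DictionaryLearning(n_components=16, max_iter=50,
                              transform_algorithm="lasso_lars", random_state=0)
      codes = dl.fit_transform(X)          # sparse features, e.g. for an SVM
      print(codes.shape)                   # (50, 16)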

  10. Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.

    PubMed

    Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi

    2016-12-01

    Segmentation of infrared (IR) ship images is always a challenging task, because of the intensity inhomogeneity and noise. The fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has some shortcomings, like not considering the spatial information or being sensitive to noise. In this paper, an improved FCM method based on the spatial information is proposed for IR ship target segmentation. The improvements include two parts: 1) adding the nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint by Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than the existing methods, including the existing FCM methods, for segmentation of the IR ship images.
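
    For orientation, a baseline FCM loop on a toy 1-D intensity profile (without the paper's nonlocal spatial term or MRF shape refinement; all numbers invented):

      # Standard fuzzy C-means: alternate membership and center updates.
      import numpy as np

      rng = np.random.default_rng(4)
      x = np.concatenate([rng.normal(0.2, .05, 200), rng.normal(0.8, .05, 50)])
      c, m = 2, 2.0                                   # clusters, fuzzifier
      centers = np.array([0.0, 1.0])

      for _ in range(50):
          d = np.abs(x[:, None] - centers[None, :]) + 1e-12
          u = 1.0 / (d ** (2 / (m - 1)) *
                     np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))
          centers = (u ** m).T @ x / (u ** m).sum(axis=0)
      print(centers)     # ~[0.2, 0.8]: background vs target intensity levels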

  11. Robust signal recovery using the prolate spherical wave functions and maximum correntropy criterion

    NASA Astrophysics Data System (ADS)

    Zou, Cuiming; Kou, Kit Ian

    2018-05-01

    Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spherical wave functions (PSWFs). PSWFs are a kind of special function that has been shown to perform well in signal recovery. However, the existing PSWF-based recovery methods use the mean square error (MSE) criterion, which depends on a Gaussianity assumption on the noise distribution. For non-Gaussian noises, such as impulsive noise or outliers, the MSE criterion is sensitive, which may lead to large reconstruction errors. Unlike the existing PSWF-based recovery methods, our proposed method employs the maximum correntropy criterion (MCC), which is independent of the noise distribution, and can reduce the impact of large and non-Gaussian noises. Experimental results on synthetic signals with various types of noise show that the proposed MCC-based signal recovery method is more robust against various noises than other existing methods.
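
    A toy illustration of why MCC resists impulsive noise, via the standard fixed-point (reweighted mean) iteration; this omits the PSWF basis entirely:

      # Estimate a constant signal level under impulsive noise: MSE vs MCC.
      import numpy as np

      rng = np.random.default_rng(5)
      x = 1.0 + rng.normal(scale=0.05, size=100)
      x[:5] = 50.0                        # impulsive outliers

      mse_est = x.mean()                  # MSE-optimal estimate, pulled by outliers
      theta, sigma = np.median(x), 0.2    # MCC: maximize sum of Gaussian kernels
      for _ in range(50):                 # fixed-point iteration (reweighted mean)
          w = np.exp(-(x - theta) ** 2 / (2 * sigma ** 2))
          theta = (w * x).sum() / w.sum()
      print(round(mse_est, 2), round(theta, 3))   # ~3.45 vs ~1.0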

  12. Development of Image Segmentation Methods for Intracranial Aneurysms

    PubMed Central

    Qian, Yi; Morgan, Michael

    2013-01-01

    Though vascular segmentation provides vital means for the visualization, diagnosis, and quantification of decision-making processes in the treatment of vascular pathologies, it remains a process marred by numerous challenges. In this study, we validate eight aneurysm models via the use of two existing segmentation methods: the Region Growing Threshold and the Chan-Vese model. These methods were evaluated by comparing their results with a manually performed segmentation. Based upon this validation study, we propose a new Threshold-Based Level Set (TLS) method in order to overcome the existing problems. With the divergent segmentation methods, we discovered that the volumes of the aneurysm models differed by as much as 24%, and that the local anatomical shapes of the arteries around the aneurysms significantly influenced the results of these simulations. In contrast, the volume differences calculated via the TLS method remained relatively low, at only around 5%, thereby revealing inherent limitations in the application of cerebrovascular segmentation. The proposed TLS method holds the potential for use in automatic aneurysm segmentation without the setting of a seed point or intensity threshold. This technique will further enable the segmentation of anatomically complex cerebrovascular shapes, thereby allowing for more accurate and efficient simulations from medical imagery. PMID:23606905

  13. Existence and stability of circular orbits in general static and spherically symmetric spacetimes

    NASA Astrophysics Data System (ADS)

    Jia, Junji; Liu, Jiawei; Liu, Xionghui; Mo, Zhongyou; Pang, Xiankai; Wang, Yaoguang; Yang, Nan

    2018-02-01

    The existence and stability of circular orbits (COs) in static and spherically symmetric (SSS) spacetimes are important because of their practical and potential usefulness. In this paper, using the fixed point method, we first prove a necessary and sufficient condition on the metric function for the existence of timelike COs in SSS spacetimes. After analyzing the asymptotic behavior of the metric, we then show that an asymptotically flat SSS spacetime that corresponds to a negative Newtonian potential at large r will always allow the existence of COs. The stability of the COs in a general SSS spacetime is then studied using the Lyapunov exponent method. Two sufficient conditions on the (in)stability of the COs are obtained. For null geodesics, a sufficient condition on the metric function for the (in)stability of null COs is also obtained. We then illustrate one powerful application of these results by showing that three SSS spacetimes whose metric functions are not completely known will allow the existence of timelike and/or null COs. We also use our results to assert the existence and (in)stability of COs in a number of known SSS metrics.
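
    For a concrete special case (an assumption: a metric of the form $ds^2 = -f(r)\,dt^2 + f(r)^{-1}\,dr^2 + r^2\,d\Omega^2$; the paper treats more general SSS metrics), the standard effective-potential criteria behind such existence and stability statements read:

      $$\dot r^2 = E^2 - V_{\rm eff}(r), \qquad V_{\rm eff}(r) = f(r)\left(1 + \frac{L^2}{r^2}\right),$$

    so a timelike circular orbit exists at $r_c$ iff $V_{\rm eff}'(r_c) = 0$ (with $E^2 = V_{\rm eff}(r_c)$), and it is stable iff $V_{\rm eff}''(r_c) > 0$; for null circular orbits, the bracket is replaced by $L^2/r^2$.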

  14. Existence results for degenerate p(x)-Laplace equations with Leray-Lions type operators

    NASA Astrophysics Data System (ADS)

    Ho, Ky; Sim, Inbo

    2017-01-01

    We show various existence results for degenerate $p(x)$-Laplace equations with Leray-Lions type operators. A suitable condition on the degeneracy is discussed, and the proofs are mainly based on direct methods and critical point theories in the calculus of variations. In particular, we investigate various situations for the growth rates between the principal operators and the nonlinearities.

  15. Analysis of Classes of Singular Steady State Reaction Diffusion Equations

    NASA Astrophysics Data System (ADS)

    Son, Byungjae

    We study positive radial solutions to classes of steady state reaction diffusion problems on the exterior of a ball with both Dirichlet and nonlinear boundary conditions. We study both Laplacian as well as p-Laplacian problems with reaction terms that are p-sublinear at infinity. We consider both positone and semipositone reaction terms and establish existence, multiplicity and uniqueness results. Our existence and multiplicity results are achieved by a method of sub-supersolutions and uniqueness results via a combination of maximum principles, comparison principles, energy arguments and a-priori estimates. Our results significantly enhance the literature on p-sublinear positone and semipositone problems. Finally, we provide exact bifurcation curves for several one-dimensional problems. In the autonomous case, we extend and analyze a quadrature method, and in the nonautonomous case, we employ shooting methods. We use numerical solvers in Mathematica to generate the bifurcation curves.

  16. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    NASA Astrophysics Data System (ADS)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  17. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE PAGES

    Simonetto, Andrea; Dall'Anese, Emiliano

    2017-07-26

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.

  18. Prediction-Correction Algorithms for Time-Varying Constrained Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simonetto, Andrea; Dall'Anese, Emiliano

    This article develops online algorithms to track solutions of time-varying constrained optimization problems. Particularly, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
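
    A toy unconstrained prediction-correction loop showing the two-step structure (the drifting cost, step sizes, and derivative model are invented; the paper's constrained, inverse-free steps are more involved):

      # Track the minimizer of f(x; t) = 0.5 * (x - a(t))**2 as a(t) drifts.
      import numpy as np

      a = lambda t: np.sin(t)          # drifting optimum, so x*(t) = a(t)
      h, x, t = 0.1, 0.0, 0.0          # sampling period, iterate, time

      for _ in range(100):
          # prediction: advance along the known drift of the optimum, x*' = cos(t)
          x = x + h * np.cos(t)
          t += h
          # correction: one gradient step on the newly observed cost
          x = x - 0.5 * (x - a(t))
      print(round(x, 3), round(a(t), 3))   # iterate tracks a(t) closely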

  19. An efficient genome-wide association test for multivariate phenotypes based on the Fisher combination function.

    PubMed

    Yang, James J; Li, Jia; Williams, L Keoki; Buu, Anne

    2016-01-05

    In genome-wide association studies (GWAS) for complex diseases, the association between a SNP and each phenotype is usually weak. Combining multiple related phenotypic traits can increase the power of gene search and thus is a practically important area that requires methodology work. This study provides a comprehensive review of existing methods for conducting GWAS on complex diseases with multiple phenotypes, including the multivariate analysis of variance (MANOVA), the principal component analysis (PCA), the generalized estimating equations (GEE), the trait-based association test involving the extended Simes procedure (TATES), and the classical Fisher combination test. We propose a new method that relaxes the unrealistic independence assumption of the classical Fisher combination test and is computationally efficient. To demonstrate applications of the proposed method, we also present the results of statistical analysis on the Study of Addiction: Genetics and Environment (SAGE) data. Our simulation study shows that the proposed method has higher power than existing methods while controlling for the type I error rate. The GEE and the classical Fisher combination test, on the other hand, do not control the type I error rate and thus are not recommended. In general, the power of the competing methods decreases as the correlation between phenotypes increases. All the methods tend to have lower power when the multivariate phenotypes come from long tailed distributions. The real data analysis also demonstrates that the proposed method allows us to compare the marginal results with the multivariate results and specify which SNPs are specific to a particular phenotype or contribute to the common construct. The proposed method outperforms existing methods in most settings and also has great applications in GWAS on complex diseases with multiple phenotypes such as the substance abuse disorders.
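
    For reference, the classical Fisher combination statistic that the paper starts from (the proposed method relaxes the independence assumption this relies on; the p-values here are invented):

      # Fisher's method: T = -2 * sum(log p_i) ~ chi-square with 2k df under
      # independence of the k per-phenotype tests.
      import numpy as np
      from scipy import stats

      pvals = np.array([0.04, 0.20, 0.08])          # one p-value per phenotype
      T = -2 * np.log(pvals).sum()
      p_comb = stats.chi2.sf(T, df=2 * len(pvals))
      print(round(p_comb, 4))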

  20. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinions of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts, using Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors was from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects the public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present the reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
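
    A minimal sketch of the weighting-and-validation arithmetic (the search volumes and the reference weight set are invented):

      # Normalize search volumes into weights, then compare with an existing
      # weight set via Pearson correlation, as the paper's validation does.
      import numpy as np
      from scipy import stats

      volumes = {"global warming": 90, "ozone depletion": 15,
                 "acidification": 10, "eutrophication": 5}
      w_new = np.array(list(volumes.values()), dtype=float)
      w_new /= w_new.sum()                        # weights sum to 1

      w_existing = np.array([0.7, 0.12, 0.1, 0.08])   # hypothetical reference set
      r, _ = stats.pearsonr(w_new, w_existing)
      print(np.round(w_new, 3), round(r, 3))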

  1. Automatic sleep stage classification of single-channel EEG by using complex-valued convolutional neural network.

    PubMed

    Zhang, Junming; Wu, Yan

    2018-03-28

    Many systems have been developed for automatic sleep stage classification. However, nearly all models are based on handcrafted features. Because of the large feature space, feature selection must be used. Meanwhile, designing handcrafted features is a difficult and time-consuming task that requires the domain knowledge of experienced experts, and results vary when different sets of features are chosen to identify sleep stages. Additionally, many features that we may be unaware of exist, and these features may be important for sleep stage classification. Therefore, a new sleep stage classification system, based on the complex-valued convolutional neural network (CCNN), is proposed in this study. Unlike existing sleep stage methods, our method can automatically extract features from raw electroencephalography data and then classify sleep stages based on the learned features. Additionally, we also prove that the decision boundaries for the real and imaginary parts of a complex-valued convolutional neuron intersect orthogonally. The classification performance of handcrafted features is compared with that of features learned via the CCNN. Experimental results show that the proposed method is comparable to existing methods, and that the CCNN obtains better classification performance and a considerably faster convergence speed than a convolutional neural network. Experimental results also show that the proposed method is a useful decision-support tool for automatic sleep stage classification.

  2. Bridge Condition Assessment Using D Numbers

    PubMed Central

    Hu, Yong

    2014-01-01

    Bridge condition assessment is a complex problem influenced by many factors, and an uncertain environment increases its complexity. Due to the uncertainty in the process of assessment, one of the key problems is the representation of assessment results. Though many methods exist that can deal with uncertain information, they all have deficiencies. In this paper, a new representation of uncertain information, called D numbers, is presented; it extends the Dempster-Shafer theory. Using D numbers, a new method is developed for bridge condition assessment. Compared to existing methods, the proposed method is simpler and more effective. An illustrative case is given to show the effectiveness of the new method. PMID:24696639

  3. A Bayesian taxonomic classification method for 16S rRNA gene sequences with improved species-level accuracy.

    PubMed

    Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng

    2017-05-10

    Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences. Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
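
    A simplified sketch of the weighted-vote idea behind such classification (a softmax over alignment similarity stands in for the paper's alignment-based Bayesian posterior; the hits and similarities are invented):

      # Each database hit votes for its taxon, weighted by how sharply its
      # similarity to the query stands out among all hits.
      import numpy as np

      hits = [("E. coli", 0.99), ("E. fergusonii", 0.97), ("S. enterica", 0.90)]
      sims = np.array([s for _, s in hits])
      post = np.exp(sims / 0.01) / np.exp(sims / 0.01).sum()   # assumed weighting

      votes = {}
      for (taxon, _), w in zip(hits, post):
          votes[taxon] = votes.get(taxon, 0.0) + w
      print(max(votes, key=votes.get), votes)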

  4. Multimodal Image Registration through Simultaneous Segmentation.

    PubMed

    Aganj, Iman; Fischl, Bruce

    2017-11-01

    Multimodal image registration facilitates the combination of complementary information from images acquired with different modalities. Most existing methods require computation of the joint histogram of the images, while some perform joint segmentation and registration in alternate iterations. In this work, we introduce a new non-information-theoretical method for pairwise multimodal image registration, in which the segmentation error, computed using both images, is taken as the registration cost function. We empirically evaluate our method via rigid registration of multi-contrast brain magnetic resonance images, and demonstrate an often higher registration accuracy in the results produced by the proposed technique compared to those of several existing methods.

  5. Global antioxidant response of meat.

    PubMed

    Carrillo, Celia; Barrio, Ángela; Del Mar Cavia, María; Alonso-Torre, Sara

    2017-06-01

    The global antioxidant response (GAR) method uses an enzymatic digestion to release antioxidants from foods. Owing to the importance of digestion for protein breakdown and the subsequent release of bioactive compounds, the aim of the present study was to compare the GAR method for meat with the existing methodologies: the extraction-based method and QUENCHER. Seven fresh meats were analyzed using ABTS and FRAP assays. Our results indicated that the GAR of meat was higher than the total antioxidant capacity (TAC) assessed with the traditional extraction-based method. When evaluated with GAR, thermal treatment led to an increase in the TAC of the soluble fraction, contrasting with a decreased TAC after cooking measured using the extraction-based method. The effect of thermal treatment on the TAC assessed by the QUENCHER method appeared to depend on the assay applied, since results from ABTS differed from FRAP. Our results allow us to hypothesize that the activation of latent bioactive peptides along the gastrointestinal tract should be taken into consideration when evaluating the TAC of meat. Therefore, we conclude that the GAR method may be more appropriate for assessing the TAC of meat than the existing, most commonly used methods. © 2016 Society of Chemical Industry.

  6. A Unified Approach to Modeling Multidisciplinary Interactions

    NASA Technical Reports Server (NTRS)

    Samareh, Jamshid A.; Bhatia, Kumar G.

    2000-01-01

    There are a number of existing methods to transfer information among various disciplines. For a multidisciplinary application with n disciplines, the traditional methods may be required to model (n² - n) interactions. This paper presents a unified three-dimensional approach that reduces the number of interactions from (n² - n) to 2n by using a computer-aided design model. The proposed modeling approach unifies the interactions among various disciplines. The approach is independent of specific discipline implementation, and a number of existing methods can be reformulated in the context of the proposed unified approach. This paper provides an overview of the proposed unified approach and reformulations for two existing methods. The unified approach is specially tailored for application environments where the geometry is created and managed through a computer-aided design system. Results are presented for a blended-wing body and a high-speed civil transport.
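
    The scaling benefit is easy to verify numerically; the short sketch below simply evaluates both interaction counts for a few values of n:

    ```python
    # Discipline-coupling bookkeeping: pairwise transfers vs. routing everything
    # through a shared CAD model (two mappings per discipline: to and from it).
    for n in (3, 5, 10):
        pairwise = n * n - n
        via_cad = 2 * n
        print(f"n={n:2d}: pairwise={pairwise:3d}  via CAD model={via_cad:2d}")
    ```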

  7. Looking for trees in the forest: summary tree from posterior samples.

    PubMed

    Heled, Joseph; Bouckaert, Remco R

    2013-10-04

    Bayesian phylogenetic analysis generates a set of trees which are often condensed into a single tree representing the whole set. Many methods exist for selecting a representative topology for a set of unrooted trees, few exist for assigning branch lengths to a fixed topology, and even fewer for simultaneously setting the topology and branch lengths. However, there is very little research into locating a good representative for a set of rooted time trees like the ones obtained from a BEAST analysis. We empirically compare new and known methods for generating a summary tree. Some new methods are motivated by mathematical constructions such as tree metrics, while the rest employ tree concepts which work well in practice. These use more of the posterior than existing methods, which discard information not directly mapped to the chosen topology. Using results from a large number of simulations, we assess the quality of a summary tree, measuring (a) how well it explains the sequence data under the model and (b) how close it is to the "truth", i.e. to the tree used to generate the sequences. Our simulations indicate that no single method is "best": methods producing good divergence time estimates have poor branch lengths and lower model fit, and vice versa. Using the results presented here, a user can choose the appropriate method based on the purpose of the summary tree.

  8. Utility-preserving anonymization for health data publishing.

    PubMed

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

    Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing, and thus satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, our proposed method shows lower information loss than the existing method. In the real-world EHR analysis, the results show only a small degree of error between the data anonymized by the proposed method and the original data. In summary, we propose a new utility-preserving anonymization method and an accompanying anonymization algorithm, and through experiments on various datasets we show that the utility of EHRs anonymized by the proposed method is significantly better than that of previous approaches.

  9. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    PubMed

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that maintenance can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported the use of intelligence methods to predict transformer faults, but it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have not been combined in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. Comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct fault-type identification than the existing diagnosis method and previously reported works.
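
    For readers unfamiliar with PSO, the following minimal sketch shows the generic algorithm driving a loss function (a toy quadratic here; in the paper's setting the loss would be the ANN's error on the dissolved-gas training data). The parameter values are illustrative, and this is the plain algorithm, not the specific Evolutionary PSO variant benchmarked above:

    ```python
    import numpy as np

    def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal generic particle swarm optimiser (a sketch only)."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
        v = np.zeros_like(x)                             # particle velocities
        pbest = x.copy()                                 # personal bests
        pbest_val = np.array([loss(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()         # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = x + v
            val = np.array([loss(p) for p in x])
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], val[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    best, best_val = pso(lambda p: float(np.sum(p ** 2)), dim=5)
    print(best_val)   # ~0: the swarm finds the quadratic's minimum
    ```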

  10. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    PubMed Central

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that maintenance can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported the use of intelligence methods to predict transformer faults, but it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have not been combined in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. Comparison of the results from the proposed methods with previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct fault-type identification than the existing diagnosis method and previously reported works. PMID:26103634

  11. Acoustic-Liner Admittance in a Duct

    NASA Technical Reports Server (NTRS)

    Watson, W. R.

    1986-01-01

    Method calculates admittance from easily obtainable values. New method for calculating acoustic-liner admittance in rectangular duct with grazing flow based on finite-element discretization of acoustic field and recasting of unknown admittance as linear eigenvalue problem. Problem solved by Gaussian elimination. Unlike existing methods, present method also extendable to mean flows with two-dimensional boundary layers. In presence of shear, results of method compared well with results of Runge-Kutta integration technique.

  12. Sampling Frequency Optimisation and Nonlinear Distortion Mitigation in Subsampling Receiver

    NASA Astrophysics Data System (ADS)

    Castanheira, Pedro Xavier Melo Fernandes

    Subsampling receivers use the subsampling method to down-convert signals from radio frequency (RF) to a lower frequency location. Multiple signals can also be down-converted using a subsampling receiver, but choosing an incorrect subsampling frequency can cause the signals to alias one another after down-conversion. The existing method for subsampling multiband signals focused on down-converting all the signals without any aliasing between them; the case considered initially was a dual-band signal, which was then extended to the more general multiband case. In this thesis, a new method is proposed under the assumption that only one target signal needs to remain free of overlap with the other multiband signals down-converted at the same time. Based on this assumption, the proposed method introduces formulas to calculate the valid subsampling frequencies, ensuring that the target signal is not aliased by the other signals. Simulation results show that the proposed method provides lower valid subsampling frequencies for down-conversion compared to the existing methods.
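
    The thesis's multiband formulas are not reproduced in this record, but the underlying constraint can be illustrated with the classic single-band bandpass-sampling result: a band [f_lo, f_hi] folds onto itself unless 2*f_hi/n <= fs <= 2*f_lo/(n-1) for some integer n. A minimal sketch (the band edges are illustrative):

    ```python
    def valid_subsampling_ranges(f_lo, f_hi):
        """Classic single-band bandpass-sampling condition (a textbook result,
        not the thesis's multiband formulas)."""
        bw = f_hi - f_lo
        ranges = []
        for n in range(1, int(f_hi // bw) + 1):
            lo = 2 * f_hi / n
            hi = 2 * f_lo / (n - 1) if n > 1 else float("inf")
            if lo <= hi:
                ranges.append((n, lo, hi))
        return ranges

    # Example: a 25 MHz-wide band at 2.4275-2.4525 GHz; show the three
    # lowest (cheapest) valid subsampling frequency ranges.
    for n, lo, hi in valid_subsampling_ranges(2.4275e9, 2.4525e9)[-3:]:
        print(f"n={n}: fs in [{lo / 1e6:.2f}, {hi / 1e6:.2f}] MHz")
    ```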

  13. Comparison of OpenFOAM and EllipSys3D actuator line methods with (NEW) MEXICO results

    NASA Astrophysics Data System (ADS)

    Nathan, J.; Meyer Forsting, A. R.; Troldborg, N.; Masson, C.

    2017-05-01

    The Actuator Line Method has existed for more than a decade and has become a well-established choice for simulating wind rotors in computational fluid dynamics. Numerous implementations exist and are used in the wind energy research community, and these codes have been validated against experimental data such as the MEXICO experiment. However, comparison against other codes has often been made only on a very broad scale. Therefore, this study first attempts a verification by comparing two different implementations, namely an adapted version of SOWFA/OpenFOAM and EllipSys3D, and then a validation by comparing against experimental results from the MEXICO and NEW MEXICO experiments.

  14. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has become a challenging problem that has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to construct a high-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures. PMID:26007744

  15. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has become a challenging problem that has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to construct a high-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitations of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in terms of objective measures.

  16. Determining Semantically Related Significant Genes.

    PubMed

    Taha, Kamal

    2014-01-01

    A GO relation embodies some aspects of existence dependency. If GO term x is existence-dependent on GO term y, the presence of y implies the presence of x. Therefore, the genes annotated with the function of GO term y are usually functionally and semantically related to the genes annotated with the function of GO term x. A large number of gene set enrichment analysis methods have been developed in recent years. However, most of these methods overlook the structural dependencies between GO terms in the GO graph by not considering the concept of existence dependency. We propose in this paper a biological search engine called RSGSearch that identifies enriched sets of genes annotated with different functions using the concept of existence dependency. We observe that GO term x cannot be existence-dependent on GO term y if x and y have the same specificity (biological characteristics). After encoding into a numeric format the contributions of GO terms annotating target genes to the semantics of their lowest common ancestors (LCAs), RSGSearch uses microarray experiments to identify the most significant LCA that annotates the result genes. We evaluated RSGSearch experimentally and compared it with five gene set enrichment systems. Results showed marked improvement.

  17. Experimental and CFD evidence of multiple solutions in a naturally ventilated building.

    PubMed

    Heiselberg, P; Li, Y; Andersen, A; Bjerre, M; Chen, Z

    2004-02-01

    This paper considers the existence of multiple solutions to natural ventilation of a simple one-zone building, driven by combined thermal and opposing wind forces. The present analysis is an extension of an earlier analytical study of natural ventilation in a fully mixed building, and includes the effect of thermal stratification. Both computational and experimental investigations were carried out in parallel with an analytical investigation. When flow is dominated by thermal buoyancy, it was found experimentally that there is thermal stratification. When the flow is wind-dominated, the room is fully mixed. Results from all three methods have shown that the hysteresis phenomena exist. Under certain conditions, two different stable steady-state solutions are found to exist by all three methods for the same set of parameters. As shown by both the computational fluid dynamics (CFD) and experimental results, one of the solutions can shift to another when there is a sufficient perturbation. These results have probably provided the strongest evidence so far for the conclusion that multiple states exist in natural ventilation of simple buildings. Different initial conditions in the CFD simulations led to different solutions, suggesting that caution must be taken when adopting the commonly used 'zero initialization'.

  18. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-02-08

    Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple measurements obtained, using a radiation detector, from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.

  19. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination

    PubMed Central

    2018-01-01

    Existing remote depth estimation methods for buried radioactive wastes are either limited to depths of less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method based on an approximate three-dimensional linear attenuation model that exploits multiple measurements obtained, using a radiation detector, from the surface of the material in which the contamination is buried. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations of the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods. PMID:29419759

  20. Predictive local receptive fields based respiratory motion tracking for motion-adaptive radiotherapy.

    PubMed

    Yubo Wang; Tatinati, Sivanagaraja; Liyu Huang; Kim Jeong Hong; Shafiq, Ghufran; Veluvolu, Kalyana C; Khong, Andy W H

    2017-07-01

    Extracranial robotic radiotherapy employs external markers and a correlation model to trace the tumor motion caused by respiration. Real-time tracking of tumor motion, however, requires a prediction model to compensate for the latencies induced by the software (image data acquisition and processing) and hardware (mechanical and kinematic) limitations of the treatment system. A new prediction algorithm based on local receptive fields extreme learning machines (pLRF-ELM) is proposed for respiratory motion prediction. Existing respiratory motion prediction methods model the non-stationary respiratory motion traces directly to predict future values. Unlike these methods, pLRF-ELM performs prediction by modeling higher-level features, obtained by mapping the raw respiratory motion into the random feature space of the ELM, instead of directly modeling the raw respiratory motion. The developed method is evaluated using a dataset acquired from 31 patients, for two horizons in line with the latencies of treatment systems such as CyberKnife. Results showed that pLRF-ELM is superior to existing prediction methods, and further highlight that the abstracted higher-level features are suitable for approximating the nonlinear and non-stationary characteristics of respiratory motion for accurate prediction.
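
    The ELM idea underlying the method is compact enough to sketch. Below is a minimal plain-ELM predictor on a synthetic respiratory-like trace (a sketch of the ELM family only; the paper's pLRF-ELM adds local receptive fields, and all sizes, lags, and signal parameters here are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    t = np.arange(0.0, 60.0, 0.1)                   # 60 s sampled at 10 Hz
    trace = np.sin(2 * np.pi * 0.25 * t)            # mock ~15 breaths/min motion

    # Build (lagged window -> future sample) pairs for a 0.5 s prediction horizon.
    lags, horizon = 10, 5
    X = np.array([trace[i:i + lags] for i in range(len(trace) - lags - horizon)])
    y = trace[lags + horizon:]

    # Basic ELM: a fixed random hidden layer, then a least-squares readout
    # solved in one shot (no gradient descent).
    hidden = 100
    W = rng.standard_normal((lags, hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                          # random feature space
    beta = np.linalg.lstsq(H, y, rcond=None)[0]     # output weights

    pred = H @ beta
    print("RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
    ```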

  1. Tempest: Mesoscale test case suite results and the effect of order-of-accuracy on pressure gradient force errors

    NASA Astrophysics Data System (ADS)

    Guerra, J. E.; Ullrich, P. A.

    2014-12-01

    Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order, operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order Implicit/Explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.

  2. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    PubMed

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis, as it identifies important variables associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
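
    As an illustration of the first strategy, here is a minimal sketch using scikit-learn (the imputer, the lasso selector, and the majority-vote threshold are illustrative choices, not ones prescribed by the review):

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import LassoCV

    def select_with_imputation(X, y, n_imputations=5, seed=0):
        """Strategy 1: run a selector on each imputed dataset, then keep the
        variables selected in at least half of them."""
        counts = np.zeros(X.shape[1])
        for m in range(n_imputations):
            imputer = IterativeImputer(sample_posterior=True, random_state=seed + m)
            Xm = imputer.fit_transform(X)                   # one imputed dataset
            counts += LassoCV(cv=5).fit(Xm, y).coef_ != 0   # tally selections
        return np.flatnonzero(counts >= n_imputations / 2)
    ```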

  3. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating, over the feature space, the difference between the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

  4. Further studies using matched filter theory and stochastic simulation for gust loads prediction

    NASA Technical Reports Server (NTRS)

    Scott, Robert C.; Pototzky, Anthony S.; Perry, Boyd III

    1993-01-01

    This paper describes two analysis methods -- one deterministic, the other stochastic -- for computing maximized and time-correlated gust loads for aircraft with nonlinear control systems. The first method is based on matched filter theory; the second is based on stochastic simulation. The paper summarizes the methods, discusses the selection of gust intensity for each method and presents numerical results. A strong similarity between the results from the two methods is seen to exist for both linear and nonlinear configurations.

  5. Design sensitivity analysis with Applicon IFAD using the adjoint variable method

    NASA Technical Reports Server (NTRS)

    Frederick, Marjorie C.; Choi, Kyung K.

    1984-01-01

    A numerical method is presented to implement structural design sensitivity analysis using the versatility and convenience of an existing finite element structural analysis program and the theoretical foundations of structural design sensitivity analysis. Conventional design variables, such as thickness and cross-sectional areas, are considered. Structural performance functionals considered include compliance, displacement, and stress. It is shown that calculations can be carried out outside existing finite element codes, using postprocessing data only; that is, design sensitivity analysis software does not have to be embedded in an existing finite element code. The finite element structural analysis program used in the implementation presented is IFAD. Feasibility of the method is shown through analysis of several problems, including built-up structures. Accurate design sensitivity results are obtained without the uncertainty of numerical accuracy associated with the selection of a finite difference perturbation.

  6. Efficient iterative method for solving the Dirac-Kohn-Sham density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Lin; Shao, Sihong; E, Weinan

    2012-11-06

    We present for the first time an efficient iterative method to directly solve the four-component Dirac-Kohn-Sham (DKS) density functional theory. Due to the existence of the negative energy continuum in the DKS operator, the existing iterative techniques for solving the Kohn-Sham systems cannot be efficiently applied to solve the DKS systems. The key component of our method is a novel filtering step (F) which acts as a preconditioner in the framework of the locally optimal block preconditioned conjugate gradient (LOBPCG) method. The resulting method, dubbed the LOBPCG-F method, is able to compute the desired eigenvalues and eigenvectors in the positive energy band without computing any state in the negative energy band. The LOBPCG-F method introduces mild extra cost compared to the standard LOBPCG method and can be easily implemented. We demonstrate our method in the pseudopotential framework with a planewave basis set which naturally satisfies the kinetic balance prescription. Numerical results for Pt₂, Au₂, TlF, and Bi₂Se₃ indicate that the LOBPCG-F method is a robust and efficient method for investigating the relativistic effect in systems containing heavy elements.

  7. Existence and non-existence of transition fronts in mixed ignition-monostable media

    NASA Astrophysics Data System (ADS)

    Graham, Cole; Shean Lim, Tau; Ma, Andrew; Weber, David

    2018-02-01

    We study transition fronts for one-dimensional reaction-diffusion equations with compactly-perturbed ignition-monostable reactions. We establish an almost sharp condition on reactions which characterizes the existence and non-existence of fronts. In particular, we prove that a strong inhomogeneity in the reaction prevents formation of transition fronts, while a weak inhomogeneity gives rise to a front. Our work extends the results and methods introduced in Nolen et al 2012 (Arch. Ration. Mech. Anal. 203 217-46), which studied the same question in inhomogeneous KPP media.

  8. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE PAGES

    Luce, Timothy C.

    2017-02-23

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  9. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luce, Timothy C.

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  10. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789

  11. Development, evaluation and application of a modified micrometeorological gradient method for long-term estimation of gaseous dry deposition over forest canopies.

    EPA Science Inventory

    Small pollutant concentration gradients between levels above a plant canopy result in large uncertainties in estimated air–surface exchange fluxes when using existing micrometeorological gradient methods, including the aerodynamic gradient method (AGM) and the modified Bowen ratio method.

  12. Comparing and improving reconstruction methods for proxies based on compositional data

    NASA Astrophysics Data System (ADS)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods, but existing methods tend to relate the compositional data and the reconstruction target in very simple ways, and the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500 year long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  13. Why conventional detection methods fail in identifying the existence of contamination events.

    PubMed

    Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han

    2016-04-15

    Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variations. This analysis revealed why the conventional MED and LPF methods fail to identify the existence of contamination events, and also that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Detecting wood surface defects with fusion algorithm of visual saliency and local threshold segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Xuejuan; Wu, Shuhang; Liu, Yunpeng

    2018-04-01

    This paper presents a new method for wood defect detection that solves the over-segmentation problem found in local threshold segmentation methods by effectively combining visual saliency and local threshold segmentation. First, defect areas are coarsely located by using the spectral residual method to calculate their global visual saliency. Then, maximum inter-class variance (Otsu) threshold segmentation is adopted to position and segment the wood surface defects precisely around the coarsely located areas. Finally, mathematical morphology is used to process the binary images after segmentation, which reduces noise and removes small false objects. Experiments on test images of insect holes, dead knots, and sound knots show that the proposed method obtains ideal segmentation results and is superior to existing segmentation methods based on edge detection, Otsu thresholding, and local threshold segmentation.
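
    Both stages are classical and compact enough to sketch. Below is a minimal version of the coarse-localization step (spectral residual saliency, Hou & Zhang 2007) followed by an Otsu threshold; the filter sizes and the toy image are illustrative, and the paper's morphology step is omitted:

    ```python
    import numpy as np
    from scipy import ndimage

    def spectral_residual_saliency(img):
        """Coarse defect localization via the spectral residual of the FFT
        log-amplitude spectrum (filter sizes are illustrative)."""
        f = np.fft.fft2(img)
        log_amp, phase = np.log(np.abs(f) + 1e-8), np.angle(f)
        residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return ndimage.gaussian_filter(sal, sigma=2.5)

    def otsu_threshold(values):
        """Maximum inter-class variance (Otsu) threshold for an intensity set."""
        hist, edges = np.histogram(values, bins=256)
        p = hist / hist.sum()
        centers = (edges[:-1] + edges[1:]) / 2
        w0, mu = np.cumsum(p), np.cumsum(p * centers)
        with np.errstate(divide="ignore", invalid="ignore"):
            between = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
        return centers[np.nanargmax(between)]

    # Toy use: a dark 'knot' on a brighter board.
    img = np.full((128, 128), 200.0)
    img[60:70, 60:70] = 80.0
    sal = spectral_residual_saliency(img)
    coarse = sal > sal.mean()                 # coarse defect region from saliency
    mask = coarse & (img < otsu_threshold(img.ravel()))   # refined defect mask
    print(mask.sum())                         # most of the 10x10 defect pixels
    ```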

  15. Accelerated Dynamic Corrosion Test Method Development

    DTIC Science & Technology

    The existing accelerated test method has poor correlation to outdoor exposures, particularly for non-chromate primers. As a result, more realistic cyclic environmental exposures have been developed to more closely resemble actual atmospheric corrosion damage. Several existing tests correlate well with the outdoor performance.

  16. Image superresolution by midfrequency sparse representation and total variation regularization

    NASA Astrophysics Data System (ADS)

    Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi

    2015-01-01

    Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects: on one hand, memory and time costs should be reduced; on the other, the step edges in the results obtained by existing methods are not sufficiently clear. We make the following contributions. First, we propose a method to extract midfrequency features for dictionary learning, which reduces memory and time complexity without sacrificing performance. Second, we propose a detail-wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off details and artifacts and to sharpen step edges. Finally, the step edges produced by the DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.

  17. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

    A method is described for producing improved maps of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. Using the mean-square deviation to measure the closeness of this fit leads to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods which are shown to converge automatically to the minimum with no user intervention. The resulting brightness distribution furnishes the best fit to the data among all brightness distributions of the given resolution.

  18. Pavement crack detection combining non-negative feature with fast LoG in complex scene

    NASA Astrophysics Data System (ADS)

    Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu

    2015-12-01

    Pavement crack detection is affected by many sources of interference in realistic situations, such as shadows, road signs, oil stains, and salt-and-pepper noise. Because of these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection is proposed that combines a non-negative feature with a fast Laplacian-of-Gaussian (LoG) filter. The two key novelties and benefits of this new approach are that it 1) uses image pixel gray-value compensation to acquire a uniform image, and 2) combines the non-negative feature with the fast LoG to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more correctly than traditional methods.
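
    The two stages map naturally onto a few lines of array code. Below is a minimal sketch of the idea (the background-subtraction form of the compensation, the filter sizes, and the response threshold are illustrative assumptions, not the paper's exact formulation):

    ```python
    import numpy as np
    from scipy import ndimage

    def crack_candidates(gray, sigma=2.0):
        """Gray-value compensation via background subtraction (an assumed
        simple form), then a Laplacian-of-Gaussian response thresholded to
        flag thin dark structures such as cracks."""
        background = ndimage.uniform_filter(gray.astype(float), size=51)
        flat = gray - background              # compensated, roughly uniform image
        log = ndimage.gaussian_laplace(flat, sigma=sigma)
        # Dark thin lines sit at intensity minima, so their LoG response is
        # strongly positive; the 2-sigma threshold is illustrative.
        return log > log.mean() + 2 * log.std()
    ```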

  19. Locally refined block-centred finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  20. Locally refined block-centered finite-difference groundwater models: Evaluation of parameter sensitivity and the consequences for inverse modelling and predictions

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.

    2002-01-01

    Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are (1) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed and (2) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods, and the effect of the accuracy of sensitivity calculations is evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.

  1. Chimpanzees prioritise social information over pre-existing behaviours in a group context but not in dyads.

    PubMed

    Watson, Stuart K; Lambeth, Susan P; Schapiro, Steven J; Whiten, Andrew

    2018-05-01

    How animal communities arrive at homogeneous behavioural preferences is a central question for studies of cultural evolution. Here, we investigated whether chimpanzees (Pan troglodytes) would relinquish a pre-existing behaviour to adopt an alternative demonstrated by an overwhelming majority of group mates; in other words, whether chimpanzees behave in a conformist manner. In each of five groups of chimpanzees (N = 37), one individual was trained on one method of opening a two-action puzzle box to obtain food, while the remaining individuals learned the alternative method. Over 5 h of open access to the apparatus in a group context, it was found that 4/5 'minority' individuals explored the majority method and three of these used this new method in the majority of trials. Those that switched did so after observing only a small subset of their group, thereby not matching conventional definitions of conformity. In a further 'Dyad' condition, six pairs of chimpanzees were trained on alternative methods and then given access to the task together. Only one of these individuals ever switched method. The number of observations that individuals in the minority and Dyad individuals made of their untrained method was not found to influence whether or not they themselves switched to use it. In a final 'Asocial' condition, individuals (N = 10) did not receive social information and did not deviate from their first-learned method. We argue that these results demonstrate an important influence of social context upon prioritisation of social information over pre-existing methods, which can result in group homogeneity of behaviour.

  2. Prediction of heterotrimeric protein complexes by two-phase learning using neighboring kernels

    PubMed Central

    2014-01-01

    Background Protein complexes play important roles in biological systems such as gene regulatory networks and metabolic pathways. Most methods for predicting protein complexes try to find complexes of size greater than three. It is known, however, that complexes of smaller sizes account for a large proportion of all complexes in several species. In our previous work, we developed a method with several feature space mappings and the domain composition kernel for prediction of heterodimeric protein complexes, which outperforms existing methods. Results We propose methods for prediction of heterotrimeric protein complexes by extending techniques in the previous work, on the basis of the idea that most heterotrimeric protein complexes are not likely to share the same protein with each other. We make use of the discriminant function in support vector machines (SVMs), and design novel feature space mappings for the second phase. As the second classifier, we examine SVMs and relevance vector machines (RVMs). We perform 10-fold cross-validation computational experiments. The results suggest that our proposed two-phase methods and SVM with the extended features outperform the existing method NWE, which was reported to outperform other existing methods such as MCL, MCODE, DPClus, CMC, COACH, RRW, and PPSampler for prediction of heterotrimeric protein complexes. Conclusions We propose two-phase prediction methods with the extended features, the domain composition kernel, SVMs, and RVMs. The two-phase method with the extended features and the domain composition kernel using SVM as the second classifier is particularly useful for prediction of heterotrimeric protein complexes. PMID:24564744

  3. A simulation-based evaluation of methods for inferring linear barriers to gene flow

    Treesearch

    Christopher Blair; Dana E. Weigel; Matthew Balazik; Annika T. H. Keeley; Faith M. Walker; Erin Landguth; Sam Cushman; Melanie Murphy; Lisette Waits; Niko Balkenhol

    2012-01-01

    Different analytical techniques used on the same data set may lead to different conclusions about the existence and strength of genetic structure. Therefore, reliable interpretation of the results from different methods depends on the efficacy and reliability of different statistical methods. In this paper, we evaluated the performance of multiple analytical methods to...

  4. Fast frequency domain method to detect skew in a document image

    NASA Astrophysics Data System (ADS)

    Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee

    2015-12-01

    In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is implemented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform; the skew angle error is almost negligible. The proposed method is evaluated on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method is found to work more efficiently than the existing methods, and it handles typed and pictorial documents with different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with picture documents.
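
    A minimal sketch of such a pipeline is shown below (the wavelet choice, peak picking, and angle convention are illustrative assumptions; the paper's exact algorithm may differ):

    ```python
    import numpy as np
    import pywt

    def estimate_skew(img):
        """Shrink the page with one 2-D DWT level, then read the dominant
        text-line orientation off the FFT magnitude spectrum."""
        approx, _ = pywt.dwt2(img.astype(float), "haar")   # half-size approximation
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(approx)))
        h, w = spectrum.shape
        cy, cx = h // 2, w // 2
        spectrum[cy, cx] = 0.0                             # suppress the DC peak
        py, px = np.unravel_index(spectrum.argmax(), spectrum.shape)
        # Periodic text lines put spectral energy perpendicular to their
        # orientation, so the peak angle minus 90 degrees estimates the skew.
        return np.degrees(np.arctan2(py - cy, px - cx)) - 90.0
    ```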

  5. Existing methods for improving the accuracy of digital-to-analog converters

    NASA Astrophysics Data System (ADS)

    Eielsen, Arnfinn A.; Fleming, Andrew J.

    2017-09-01

    The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
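 
    As an illustration of one of these five methods, the sketch below simulates a mismatched mid-resolution DAC and applies large periodic high-frequency dithering; the dither sweeps each sample across several mismatched levels so that a post-DAC lowpass filter averages them out (all component values are illustrative, and this is a toy model rather than the article's experimental setup):

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(0)
    levels = np.arange(-8, 8) + rng.normal(0.0, 0.15, 16)   # 4-bit DAC, mismatched

    def dac(x):
        # Ideal rounding onto the mismatched output levels.
        idx = np.clip(np.round(x).astype(int) + 8, 0, 15)
        return levels[idx]

    fs = 1_000_000
    t = np.arange(200_000) / fs
    x = 6.0 * np.sin(2 * np.pi * 50 * t)                    # slow in-band signal
    tri = 2.0 * signal.sawtooth(2 * np.pi * 20_000 * t, width=0.5)  # HF triangle dither

    # A zero-phase lowpass removes the dither and averages the swept levels,
    # trading mismatch distortion for wideband noise.
    sos = signal.butter(4, 2_000, fs=fs, output="sos")
    plain = signal.sosfiltfilt(sos, dac(x))
    dithered = signal.sosfiltfilt(sos, dac(x + tri))
    rms = lambda y: np.std((y - x)[10_000:-10_000])         # skip filter edges
    print(f"in-band RMS error: plain={rms(plain):.3f}, dithered={rms(dithered):.3f}")
    ```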

  6. Comparison of bulk sediment and sediment elutriate toxicity testing methods

    EPA Science Inventory

    Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...

  7. Airplane detection based on fusion framework by combining saliency model with Deep Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Dou, Hao; Sun, Xiao; Li, Bin; Deng, Qianqian; Yang, Xubo; Liu, Di; Tian, Jinwen

    2018-03-01

    Aircraft detection from very high resolution remote sensing images has gained increasing interest in recent years due to successful civil and military applications. However, several problems still exist: 1) extracting high-level features of aircraft is difficult; 2) locating objects within such large images is difficult and time-consuming; and 3) satellite images commonly come in multiple resolutions. In this paper, inspired by biological visual mechanisms, a fusion detection framework is proposed that combines a top-down visual mechanism (a deep CNN model) with a bottom-up one (GBVS) to detect aircraft. In addition, we use a multi-scale training method for the deep CNN model to address the problem of multiple resolutions. Experimental results demonstrate that our method achieves better detection results than the other methods.

  8. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; a wing- and fuselage-fitted curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse method, as an extension of previous methods for design in Cartesian coordinates, is presented. Results are shown for inviscid wing design cases in supercritical flow regimes. The selected test cases also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.

  9. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors of a memristive oscillator with cubic nonlinearities via the harmonic balance method, also called the describing function method, which was originally proposed to detect chaos in the classical Chua's circuit. We first transform the memristive oscillator system into a Lur'e model and present a prediction of the existence of chaotic behaviors. To ensure the prediction is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  10. Boomerang: A method for recursive reclassification.

    PubMed

    Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat

    2016-09-01

    While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three-category classifier and is validated on an independent data set. © 2016, The International Biometric Society.

  12. Flip-avoiding interpolating surface registration for skull reconstruction.

    PubMed

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  13. Initial Results of an MDO Method Evaluation Study

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Kodiyalam, Srinivas

    1998-01-01

    The NASA Langley MDO method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. In the first phase of the study, three MDO methods were implemented in the SIGHT framework and used to solve a set of ten relatively simple problems. In this paper, we comment on the general considerations for conducting method evaluation studies and report some initial results obtained to date. In particular, although the results are not conclusive because of the small initial test set, they illustrate differences among formulations, their optimality conditions, and the sensitivity of solutions to various perturbations. Optimization algorithms are used to solve a particular MDO formulation. It is then appropriate to speak of local convergence rates and of global convergence properties of an optimization algorithm applied to a specific formulation. An analogous distinction exists in the field of partial differential equations. On the one hand, equations are analyzed in terms of regularity, well-posedness, and the existence and uniqueness of solutions. On the other, one considers numerous algorithms for solving differential equations. The area of MDO methods studies MDO formulations combined with optimization algorithms, although at times the distinction is blurred. It is important to

  14. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    ERIC Educational Resources Information Center

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  15. Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    NASA Astrophysics Data System (ADS)

    Vu, Hai; Echigo, Tomio; Sagawa, Ryusuke; Yagi, Keiko; Shiba, Masatsugu; Higuchi, Kazuhide; Arakawa, Tetsuo; Yagi, Yasushi

    Interpretations by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually require 45 to 120 minutes of extreme concentration. This paper describes a novel method to reduce diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time; then, in regions with rough changes, the speed is decreased so that suspicious findings can be ascertained more conveniently. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree uses these features to classify the states of the image acquisitions. For each classified state, the delay time between frames is calculated by parametric functions. A scheme that selects the optimal parameter set determined from assessments by physicians is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared to a standard view using an existing system. Results from logged-action-based analysis show that, compared with an existing system, the proposed method reduced diagnostic time to around 32.5 ± minutes per full sequence while the number of abnormalities found was similar. In addition, physicians needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.

  16. Identification of influential users by neighbors in online social networks

    NASA Astrophysics Data System (ADS)

    Sheikhahmadi, Amir; Nematbakhsh, Mohammad Ali; Zareie, Ahmad

    2017-11-01

    Identification and ranking of influential users in social networks for the sake of news spreading and advertising has recently become an attractive field of research. Given the large number of users in social networks and the various relations that exist among them, providing an effective method to identify influential users has gradually come to be considered essential. In most of the existing methods, users who are located in an appropriate structural position of the network are regarded as influential. These methods usually pay no attention to the interactions among users and treat relations as binary in nature. This paper therefore proposes a new method to identify influential users in a social network by considering the interactions that exist among the users. Since users tend to act within the frame of communities, the network is initially divided into different communities. Then the amount of interaction among users is used as a parameter to weight the relations within the network. Afterward, by determining the neighbors' role for each user, a two-level method is proposed for both detecting users' influence and ranking them. Simulation and experimental results on Twitter data show that the users selected by the proposed method, compared to those chosen by existing ones, are distributed at more appropriate distances. Moreover, the proposed method outperforms the other ones in terms of both the influence speed and the influence capacity of the users it selects.

  17. n-Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
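
    For orientation, the classic two-dimensional discrete (Arnold) Cat map that this record generalizes can be sketched in a few lines of Python; the 2x2 unimodular matrix and the modular arithmetic are standard, while the lattice size and starting point below are arbitrary choices:

      import numpy as np

      # Classic 2-D discrete (Arnold) Cat map, shown only to illustrate the
      # kind of map the paper generalizes to n dimensions; the
      # Laplace-expansion construction itself is not reproduced here.
      A = np.array([[1, 1],
                    [1, 2]])                  # unimodular: det(A) = 1

      def cat_map_orbit(point, N, steps):
          # Iterate (x, y) -> A @ (x, y) mod N on an N x N integer lattice.
          p = np.asarray(point) % N
          orbit = [tuple(int(v) for v in p)]
          for _ in range(steps):
              p = (A @ p) % N
              orbit.append(tuple(int(v) for v in p))
          return orbit

      print(cat_map_orbit((1, 2), N=101, steps=5))

    Because det(A) = 1, the map is invertible modulo N, which is what makes it usable as a permutation of the lattice.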

  18. Tchebichef moment based restoration of Gaussian blurred images.

    PubMed

    Kumar, Ahlad; Paramesran, Raveendran; Lim, Chern-Loon; Dass, Sarat C

    2016-11-10

    With the knowledge of how edges vary in the presence of a Gaussian blur, a method that uses low-order Tchebichef moments is proposed to estimate the blur parameters: sigma (σ) and size (w). The difference between the Tchebichef moments of the original and the reblurred images is used as a feature vector to train an extreme learning machine for estimating the blur parameters (σ, w). The effectiveness of the proposed method in estimating the blur parameters is examined using cross-database validation. The estimated blur parameters from the proposed method are used in the split Bregman-based image restoration algorithm. A comparative analysis of the proposed method with three existing methods, using all the images from the LIVE database, is carried out. The results show that the proposed method performs better than the three existing methods in most cases in terms of visual quality evaluated using the structural similarity index.

  19. A new evaluation method research for fusion quality of infrared and visible images

    NASA Astrophysics Data System (ADS)

    Ge, Xingguo; Ji, Yiguo; Tao, Zhongxiang; Tian, Chunyan; Ning, Chengda

    2017-03-01

    In order to objectively evaluate the fusion effect of infrared and visible images, a fusion evaluation method based on energy-weighted average structural similarity and an edge information retention value is proposed to address the drawbacks of existing evaluation methods. The evaluation index of this method is given, and evaluation experiments are conducted on the basis of this index for infrared and visible image fusion results obtained under different algorithms and environments. The experimental results show that the objective evaluation index is consistent with the subjective evaluation results, which indicates that the method is a practical and effective way to evaluate fused image quality.
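
    The abstract does not give the exact index definition, but the general idea of an energy-weighted structural similarity can be sketched in Python as follows, assuming a simple squared-intensity energy and synthetic stand-in images; the skimage SSIM map is used in place of the paper's own formulation:

      import numpy as np
      from skimage.metrics import structural_similarity

      def energy_weighted_ssim(fused, src_a, src_b):
          # Weight the per-pixel SSIM of the fused image against each source
          # by the sources' local energy (squared intensity, a crude proxy).
          _, s_a = structural_similarity(src_a, fused, full=True, data_range=1.0)
          _, s_b = structural_similarity(src_b, fused, full=True, data_range=1.0)
          e_a, e_b = src_a ** 2, src_b ** 2
          w = e_a / (e_a + e_b + 1e-12)            # per-pixel weight in [0, 1]
          return float(np.mean(w * s_a + (1.0 - w) * s_b))

      rng = np.random.default_rng(0)
      ir = rng.random((64, 64))       # stand-ins for infrared / visible images
      vis = rng.random((64, 64))
      fused = 0.5 * (ir + vis)
      print(energy_weighted_ssim(fused, ir, vis))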

  20. A Method for the Design and Development of Medical or Health Care Information Websites to Optimize Search Engine Results Page Rankings on Google

    PubMed Central

    Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-01-01

    Background The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. Objective This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). Methods A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. Results The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Conclusions The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion. PMID:23981848

  1. An Improved Aerial Target Localization Method with a Single Vector Sensor

    PubMed Central

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2017-01-01

    This paper focuses on problems encountered in actual data processing with the existing aerial target localization methods, analyzes the causes of the problems, and proposes an improved algorithm. Processing of the sea experiment data shows that the existing algorithms place high requirements on the accuracy of the angle estimation. The improved algorithm reduces the required angle estimation accuracy and obtains robust estimation results. A closest-distance matching estimation algorithm and a horizontal distance estimation compensation algorithm are proposed. Post-processing the data with a forward-backward two-direction double-filtering method improves the smoothing effect, so that the initial-stage data can also be filtered and the filtering results retain more useful information. In this paper, aerial target height measurement methods are studied and estimation results for the aerial target are given, so as to realize three-dimensional localization of the aerial target and improve the underwater platform's awareness of aerial targets, giving the underwater platform better mobility and concealment. PMID:29135956

  2. CLT and AE methods of in-situ load testing : comparison and development of evaluation criteria : in-situ evaluation of post-tensioned parking garage, Kansas City, Missouri

    DOT National Transportation Integrated Search

    2008-02-01

    The objective of the proposed research project is to compare the results of two recently introduced nondestructive load test methods to the existing 24-hour load test method described in Chapter 20 of ACI 318-05. The two new methods of nondestructive...

  3. Enhanced graphene oxide membranes and methods for making same

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Yongsoon; Gotthold, David W.; Fifield, Leonard S.

    A method for making a graphene oxide membrane and a resulting free-standing graphene oxide membrane that provides desired qualities of water permeability and selectivity at larger sizes, thinner cross sections, and with increased ruggedness as compared to existing membranes and processes.

  4. An interlaboratory comparison of sediment elutriate preparation and toxicity test methods

    EPA Science Inventory

    Elutriate bioassays are among numerous methods that exist for assessing the potential toxicity of sediments in aquatic systems. In this study, interlaboratory results were compared from 96-hour Ceriodaphnia dubia and Pimephales promelas static-renewal acute toxicity tests conduct...

  5. BAYESIAN META-ANALYSIS ON MEDICAL DEVICES: APPLICATION TO IMPLANTABLE CARDIOVERTER DEFIBRILLATORS

    PubMed Central

    Youn, Ji-Hee; Lord, Joanne; Hemming, Karla; Girling, Alan; Buxton, Martin

    2012-01-01

    Objectives: The aim of this study is to describe and illustrate a method to obtain early estimates of the effectiveness of a new version of a medical device. Methods: In the absence of empirical data, expert opinion may be elicited on the expected difference between the conventional and modified devices. Bayesian Mixed Treatment Comparison (MTC) meta-analysis can then be used to combine this expert opinion with existing trial data on earlier versions of the device. We illustrate this approach for a new four-pole implantable cardioverter defibrillator (ICD) compared with conventional ICDs, Class III anti-arrhythmic drugs, and conventional drug therapy for the prevention of sudden cardiac death in high risk patients. Existing RCTs were identified from a published systematic review, and we elicited opinion on the difference between four-pole and conventional ICDs from experts recruited at a cardiology conference. Results: Twelve randomized controlled trials were identified. Seven experts provided valid probability distributions for the new ICDs compared with current devices. The MTC model resulted in estimated relative risks of mortality of 0.74 (0.60–0.89) (predictive relative risk [RR] = 0.77 [0.41–1.26]) and 0.83 (0.70–0.97) (predictive RR = 0.84 [0.55–1.22]) with the new ICD therapy compared to Class III anti-arrhythmic drug therapy and conventional drug therapy, respectively. These results showed negligible differences from the preliminary results for the existing ICDs. Conclusions: The proposed method incorporating expert opinion to adjust for a modification made to an existing device may play a useful role in assisting decision makers to make early informed judgments on the effectiveness of frequently modified healthcare technologies. PMID:22559753

  6. The modified Ottawa method to establish the update need of a systematic review: glass-ionomer versus resin sealants for caries prevention

    PubMed Central

    MICKENAUTSCH, Steffen; YENGOPAL, Veerasamy

    2013-01-01

    Objective To demonstrate the application of the modified Ottawa method by establishing the update need of a systematic review focused on the caries-preventive effect of GIC versus resin pit and fissure sealants; to answer the question as to whether the existing conclusions of this systematic review are still current; and to establish whether a new update of this systematic review was needed. Methods: Application of the modified Ottawa method. Application date: April/May 2012. Results Four signals aligned with the criteria of the modified Ottawa method were identified. The content of these signals suggests that higher precision of the current systematic review results might be achieved if an update of the current review were conducted at this point in time. However, these signals further indicate that such a systematic review update, despite its higher precision, would only confirm the existing review conclusion that no statistically significant difference exists in the caries-preventive effect of GIC and resin-based fissure sealants. Conclusion This study demonstrated the modified Ottawa method to be an effective tool for establishing the update need of a systematic review. In addition, it was established that the conclusions of the systematic review in relation to the caries-preventive effect of GIC versus resin-based fissure sealants are still current, and that no update of this systematic review was warranted at the date of application. PMID:24212996

  7. Aerodynamics and performance verifications of test methods for laboratory fume cupboards.

    PubMed

    Tseng, Li-Ching; Huang, Rong Fung; Chen, Chih-Chieh; Chang, Cheng-Ping

    2007-03-01

    The laser-light-sheet-assisted smoke flow visualization technique is performed on a full-size, transparent, commercial grade chemical fume cupboard to diagnose the flow characteristics and to verify the validity of several current containment test methods. The visualized flow patterns identify the recirculation areas that inevitably exist in conventional fume cupboards because of their fundamental configurations and structures. Large-scale vortex structures exist around the side walls, the doorsill of the cupboard and in the vicinity of the near-wake region of the manikin. The identified recirculation areas are taken as the 'dangerous' regions where the risk of turbulent dispersion of contaminants may be high. Several existing tracer gas containment test methods (BS 7258:1994, prEN 14175-3:2003 and ANSI/ASHRAE 110:1995) are conducted to verify the effectiveness of these methods in detecting contaminant leakage. By comparing the results of the flow visualization and the tracer gas tests, it is found that the local recirculation regions are more prone to contaminant leakage because of the complex interaction between the shear layers and the smoke movement through the mechanism of turbulent dispersion. From the point of view of aerodynamics, the present study verifies that the methodology of the prEN 14175-3:2003 protocol can produce more reliable and consistent results because it is based on region-by-region measurement and encompasses most of the area of the cupboard's recirculation zone. A modified test method combining the region-by-region approach with the presence of the manikin shows substantially different containment results. A performance test method that can better describe an operator's exposure and the correlation between flow characteristics and contaminant leakage properties is therefore suggested.

  8. Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness

    PubMed Central

    2015-01-01

    Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation rely on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations, where the strains are highly genetically related. The lack of knowledge of the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. Using this approach, we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how another popular existing QSR method, ShoRAH, can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5%, respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073

  9. Impact, Fire, and Fluid Spread Code Coupling for Complex Transportation Accident Environment Simulation.

    PubMed

    Brown, Alexander L; Wagner, Gregory J; Metzinger, Kurt E

    2012-06-01

    Transportation accidents frequently involve liquids dispersing in the atmosphere. An example is that of aircraft impacts, which often result in spreading fuel and a subsequent fire. Predicting the resulting environment is of interest for design, safety, and forensic applications. This environment is challenging for many reasons, one among them being the disparate time and length scales that are necessary to resolve for an accurate physical representation of the problem. A recent computational method appropriate for this class of problems has been described for modeling the impact and subsequent liquid spread. Because the environment is difficult to instrument and costly to test, the existing validation data are of limited scope and quality. A comparatively well instrumented test involving a rocket propelled cylindrical tank of water was performed, the results of which are helpful to understand the adequacy of the modeling methods. Existing data include estimates of drop sizes at several locations, final liquid surface deposition mass integrated over surface area regions, and video evidence of liquid cloud spread distances. Comparisons are drawn between the experimental observations and the predicted results of the modeling methods to provide evidence regarding the accuracy of the methods, and to provide guidance on the application and use of these methods.

  10. Segmentation of Image Ensembles via Latent Atlases

    PubMed Central

    Van Leemput, Koen; Menze, Bjoern H.; Wells, William M.; Golland, Polina

    2010-01-01

    Spatial priors, such as probabilistic atlases, play an important role in MRI segmentation. However, the availability of comprehensive, reliable and suitable manual segmentations for atlas construction is limited. We therefore propose a method for joint segmentation of corresponding regions of interest in a collection of aligned images that does not require labeled training data. Instead, a latent atlas, initialized by at most a single manual segmentation, is inferred from the evolving segmentations of the ensemble. The algorithm is based on probabilistic principles but is solved using partial differential equations (PDEs) and energy minimization criteria. We evaluate the method on two datasets, segmenting subcortical and cortical structures in a multi-subject study and extracting brain tumors in a single-subject multi-modal longitudinal experiment. We compare the segmentation results to manual segmentations, when those exist, and to the results of a state-of-the-art atlas-based segmentation method. The quality of the results supports the latent atlas as a promising alternative when existing atlases are not compatible with the images to be segmented. PMID:20580305

  11. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    Field measurements with the self potential method using a copper sulfate electrode were performed at the base of a riverbank of the WATARASE River, which has a leakage problem, to examine the leakage characteristics. The measured profiles showed the typical S-shape that indicates the existence of flowing groundwater. The results agreed well with measurements by the Ministry of Land, Infrastructure and Transport. Results of 1 m depth ground temperature detection and chain-array detection also agreed well with the results of the self potential method. The correlation between self potential values and groundwater velocity was examined in a model experiment, which showed a clear correlation. These results indicate that the self potential method is an effective way to examine the groundwater characteristics of the base of a riverbank in leakage problems.

  12. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, structural graph similarity, and K-means clustering (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments, with the size of a sub-segment determined empirically. Secondly, statistical features are extracted, sorted into different sets of features, and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time-domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved by using the proposed method.
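
    A minimal sketch of the feature-extraction stage described above, with an illustrative feature set and sub-segment count (the paper chooses these empirically):

      import numpy as np

      # Split a single-channel EEG segment into sub-segments and compute
      # simple time-domain statistics for each; the exact feature set here
      # is illustrative, not the paper's.
      def time_domain_features(segment, n_sub=10):
          feats = []
          for sub in np.array_split(segment, n_sub):
              feats.append([sub.mean(), sub.std(), sub.min(), sub.max(),
                            np.median(sub), np.mean(np.abs(np.diff(sub)))])
          return np.array(feats)        # n_sub x 6 feature matrix

      rng = np.random.default_rng(0)
      eeg = rng.standard_normal(3000)   # e.g. 30 s at 100 Hz, synthetic
      print(time_domain_features(eeg).shape)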

  13. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumptions about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.

  14. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    PubMed Central

    Howard, Steven J.; Melhuish, Edward

    2016-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years Toolbox (EYT) offers substantial advantages for early assessment of language, EF, self-regulation, and social development. In the current study, results of our large-scale administration of this toolbox to 1,764 preschool and early primary school students indicated very good reliability, convergent validity with existing measures, and developmental sensitivity. Results were also suggestive of better capture of children’s emerging abilities relative to comparison measures. Preliminary norms are presented, showing a clear developmental trajectory across half-year age groups. The accessibility of the EYT, as well as its advantages over existing measures, offers considerably enhanced opportunities for objective measurement of young children’s abilities to enable research and educational applications. PMID:28503022

  15. [Establishment of Assessment Method for Air Bacteria and Fungi Contamination].

    PubMed

    Zhang, Hua-ling; Yao, Da-jun; Zhang, Yu; Fang, Zi-liang

    2016-03-15

    In this paper, in order to address existing problems in the assessment of air bacteria and fungi contamination, indoor and outdoor field concentrations of airborne bacteria and fungi measured by the impact method and the settlement method in existing documents were collected and analyzed. The chi-square goodness-of-fit test was then used to examine whether these concentration data obeyed a normal distribution at the significance level of α = 0.05, and, combined with the 3σ principle of the normal distribution and the current assessment standards, suggested ranges of air microbial concentrations were determined. The research results could provide a reference for developing assessment standards for air bacteria and fungi contamination in the future.
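
    A minimal sketch of the statistical procedure described above, on synthetic stand-in data rather than the paper's field measurements: fit a normal distribution, run a chi-square goodness-of-fit test at α = 0.05, and derive a suggested range from the 3σ principle:

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)
      conc = rng.normal(loc=500.0, scale=120.0, size=200)   # e.g. CFU/m^3

      mu, sigma = conc.mean(), conc.std(ddof=1)
      edges = np.quantile(conc, np.linspace(0.0, 1.0, 11))  # 10 equal-count bins
      observed, _ = np.histogram(conc, bins=edges)
      expected = len(conc) * np.diff(stats.norm.cdf(edges, mu, sigma))
      expected *= observed.sum() / expected.sum()           # match total counts

      chi2, p = stats.chisquare(observed, expected, ddof=2) # 2 fitted parameters
      print(f"chi2 = {chi2:.2f}, p = {p:.3f} (normality retained if p > 0.05)")
      print(f"suggested 3-sigma range: {mu - 3*sigma:.1f} to {mu + 3*sigma:.1f}")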

  16. Curvelet-domain multiple matching method combined with cubic B-spline function

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large number of surface-related multiples in marine data can seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination methods are based on data-driven theory. However, the elimination effect is often unsatisfactory due to amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieves better results, poor computational efficiency has prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, a small number of unknowns are selected as the basis points of the matching coefficient; second, the cubic B-spline function is applied to these basis points to reconstruct the matching array; third, a constrained solving equation is built based on the relationships among the predicted multiples, the matching coefficients, and the actual data; finally, the BFGS algorithm is used to iterate and efficiently solve the sparsity-constrained multiple matching problem. Moreover, a soft-threshold method is used to further improve performance. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1-norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
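
    The coefficient-reconstruction step can be sketched as follows in Python; the number of basis points and the toy coefficient profile are illustrative assumptions, not the paper's values:

      import numpy as np
      from scipy.interpolate import make_interp_spline

      # Instead of solving for a matching coefficient at every sample, solve
      # only at a few basis points and let a cubic B-spline fill in the rest.
      n_samples = 1000
      basis_x = np.linspace(0, n_samples - 1, 12)          # 12 unknowns
      basis_c = 1.0 + 0.3 * np.sin(basis_x / 150.0)        # toy solved values

      spline = make_interp_spline(basis_x, basis_c, k=3)   # cubic B-spline
      coeff = spline(np.arange(n_samples))                 # dense matching array

      # The dense array would then scale the predicted multiples
      # sample-by-sample before subtraction from the recorded data.
      print(coeff.shape, coeff.min(), coeff.max())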

  17. A hybrid frame concealment algorithm for H.264/AVC.

    PubMed

    Yan, Bo; Gharavi, Hamid

    2010-01-01

    In packet-based video transmissions, packet loss due to channel errors may result in the loss of a whole video frame. Recently, many error concealment algorithms have been proposed to combat channel errors; however, most of the existing algorithms can only deal with the loss of macroblocks and are not able to conceal a whole missing frame. To resolve this problem, in this paper we propose a new hybrid motion vector extrapolation (HMVE) algorithm to recover the whole missing frame, which is able to provide more accurate estimation of the motion vectors of the missing frame than other conventional methods. Simulation results show that it is highly effective and significantly outperforms other existing frame recovery methods.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall-Anese, Emiliano; Simonetto, Andrea

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
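
    A toy unconstrained example of the prediction-correction idea (not the authors' constrained algorithm): the prediction step extrapolates the iterate's recent motion, and the correction step applies a gradient step on the current cost:

      import numpy as np

      # Track the minimizer of f(x; t) = 0.5 * ||x - r(t)||^2, whose optimum
      # is the moving target r(t) itself. Step size and horizon are ours.
      def r(t):                                  # slowly moving optimum
          return np.array([np.sin(0.1 * t), np.cos(0.1 * t)])

      x = np.zeros(2)
      x_prev = x.copy()
      step = 0.5
      for t in range(1, 50):
          x_pred = x + (x - x_prev)              # prediction: extrapolate drift
          grad = x_pred - r(t)                   # gradient of f at the prediction
          x_prev, x = x, x_pred - step * grad    # correction: gradient step
      print("tracking error:", np.linalg.norm(x - r(49)))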

  19. Investigation of 2‐stage meta‐analysis methods for joint longitudinal and time‐to‐event data through simulation and real data application

    PubMed Central

    Tudur Smith, Catrin; Gueyffier, François; Kolamunnage‐Dona, Ruwanthi

    2017-01-01

    Background Joint modelling of longitudinal and time‐to‐event data is often preferred over separate longitudinal or time‐to‐event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time‐to‐event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta‐analysis of joint model estimates from multiple studies. Methods We propose a 2‐stage method for meta‐analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta‐analyses of separate longitudinal or time‐to‐event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Results Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta‐analytic setting where association exists between the longitudinal and time‐to‐event outcomes. Conclusions Where evidence of association between longitudinal and time‐to‐event outcomes exists, results from joint models over standalone analyses should be pooled in 2‐stage meta‐analyses. PMID:29250814

  20. Existence of weak solutions to degenerate p-Laplacian equations and integral formulas

    NASA Astrophysics Data System (ADS)

    Chua, Seng-Kee; Wheeden, Richard L.

    2017-12-01

    We study the problem of solving some general integral formulas and then apply the conclusions to obtain results about the existence of weak solutions of various degenerate p-Laplacian equations. We adapt Variational Calculus methods and the Mountain Pass Lemma without the Palais-Smale condition, and we use an abstract version of Lions' Concentration Compactness Principle II.

  1. LEGO-MM: LEarning structured model by probabilistic loGic Ontology tree for MultiMedia.

    PubMed

    Tang, Jinhui; Chang, Shiyu; Qi, Guo-Jun; Tian, Qi; Rui, Yong; Huang, Thomas S

    2016-09-22

    Recent advances in multimedia ontology have resulted in a number of concept models, e.g., LSCOM and Mediamill 101, which are public and accessible to other researchers. However, most current research effort still focuses on building new concepts from scratch; very little work explores appropriate methods for constructing new concepts upon the existing models already in the warehouse. To address this issue, we propose a new framework, termed LEGO-MM, which can seamlessly integrate both new target training examples and existing primitive concept models to infer more complex concept models. LEGO-MM treats the primitive concept models as Lego toys from which an unlimited vocabulary of new concepts can potentially be constructed. Specifically, we first formulate logic operations as the Lego connectors that combine existing concept models hierarchically in probabilistic logic ontology trees. Then, we incorporate new target training information simultaneously to efficiently disambiguate the underlying logic tree and correct error propagation. Extensive experiments are conducted on a large vehicle-domain data set from ImageNet. The results demonstrate that LEGO-MM has significantly superior performance over existing state-of-the-art methods, which build new concept models from scratch.
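
    The Lego-connector idea can be illustrated with elementary probabilistic logic; assuming independence between primitive concept scores (our simplification, not the paper's ontology-tree machinery), AND, OR, and NOT combine as follows:

      # Combine primitive concept probabilities with probabilistic logic.
      # Independence is assumed here purely for illustration.
      def p_and(p, q): return p * q
      def p_or(p, q):  return p + q - p * q
      def p_not(p):    return 1.0 - p

      # e.g. a hypothetical composite "truck" ~ (vehicle AND large) OR trailer,
      # built from primitive detector scores:
      p_vehicle, p_large, p_trailer = 0.9, 0.7, 0.2
      p_truck = p_or(p_and(p_vehicle, p_large), p_trailer)
      print(round(p_truck, 3))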

  2. Simulation of unsteady state performance of a secondary air system by the 1D-3D-Structure coupled method

    NASA Astrophysics Data System (ADS)

    Wu, Hong; Li, Peng; Li, Yulong

    2016-02-01

    This paper describes the calculation method for unsteady state conditions in the secondary air systems in gas turbines. The 1D-3D-Structure coupled method was applied. A 1D code was used to model the standard components that have typical geometric characteristics. Their flow and heat transfer were described by empirical correlations based on experimental data or CFD calculations. A 3D code was used to model the non-standard components that cannot be described by typical geometric languages, while a finite element analysis was carried out to compute the structural deformation and heat conduction at certain important positions. These codes were coupled through their interfaces. Thus, the changes in heat transfer and structure and their interactions caused by exterior disturbances can be reflected. The results of the coupling method in an unsteady state showed an apparent deviation from the existing data, while the results in the steady state were highly consistent with the existing data. The difference in the results in the unsteady state was caused primarily by structural deformation that cannot be predicted by the 1D method. Thus, in order to obtain the unsteady state performance of a secondary air system more accurately and efficiently, the 1D-3D-Structure coupled method should be used.

  3. Digital double random amplitude image encryption method based on the symmetry property of the parametric discrete Fourier transform

    NASA Astrophysics Data System (ADS)

    Bekkouche, Toufik; Bouguezel, Saad

    2018-03-01

    We propose a real-to-real image encryption method. It is a double random amplitude encryption method based on the parametric discrete Fourier transform coupled with chaotic maps to perform the scrambling. The main idea behind this method is the introduction of a complex-to-real conversion by exploiting the inherent symmetry property of the transform in the case of real-valued sequences. This conversion allows the encrypted image to be real-valued instead of being a complex-valued image as in all existing double random phase encryption methods. The advantage is to store or transmit only one image instead of two images (real and imaginary parts). Computer simulation results and comparisons with the existing double random amplitude encryption methods are provided for peak signal-to-noise ratio, correlation coefficient, histogram analysis, and key sensitivity.
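
    The symmetry property being exploited can be illustrated with the ordinary DFT in place of the paper's parametric DFT: symmetrizing a real random amplitude mask keeps the masked spectrum of a real image Hermitian, so the inverse transform (the "encrypted image") is again real-valued. The mask construction below is our own illustration:

      import numpy as np

      # A spectrum with Hermitian symmetry inverse-transforms to a real
      # array; enforcing M[k, l] = M[-k mod N, -l mod N] on the real mask
      # preserves that symmetry, so the cipher can be stored as one real image.
      rng = np.random.default_rng(0)
      img = rng.random((8, 8))                   # toy real-valued plain image

      amp = rng.random((8, 8))                   # random amplitude mask (ours)
      amp_rev = np.roll(np.flip(amp, axis=(0, 1)), 1, axis=(0, 1))
      amp_sym = 0.5 * (amp + amp_rev)            # enforce the index symmetry

      cipher = np.fft.ifft2(np.fft.fft2(img) * amp_sym)
      print(np.max(np.abs(cipher.imag)))         # ~1e-16: effectively real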

  4. General imaging of advanced 3D mask objects based on the fully-vectorial extended Nijboer-Zernike (ENZ) theory

    NASA Astrophysics Data System (ADS)

    van Haver, Sven; Janssen, Olaf T. A.; Braat, Joseph J. M.; Janssen, Augustus J. E. M.; Urbach, H. Paul; Pereira, Silvania F.

    2008-03-01

    In this paper we introduce a new mask imaging algorithm that is based on the source point integration method (or Abbe method). The method presented here distinguishes itself from existing methods by exploiting the through-focus imaging feature of the Extended Nijboer-Zernike (ENZ) theory of diffraction. An introduction to ENZ theory and its application in general imaging is provided, after which we describe the mask imaging scheme that can be derived from it. The remainder of the paper is devoted to illustrating the advantages of the new method over existing (Hopkins-based) methods. To this end, several simulation results are included that illustrate advantages arising from: the accurate incorporation of isolated structures, the rigorous treatment of the object (mask topography) and the fully vectorial through-focus image formation of the ENZ-based algorithm.

  5. Correction of cryptotia using a subcutaneous pedicled flap.

    PubMed

    Nakajima, T; Yoneda, K; Yoshimura, Y

    1991-01-01

    Cryptotia is a relatively common deformity of the ear among orientals. Although many methods for correcting this deformity have been reported, there is no one perfect method. We have developed a method using a subcutaneous pedicle flap raised from the retroauricular region, where relative abundance of skin exists. We have treated 9 ears of 7 patients by the method reported herein. Results are satisfactory in all cases.

  6. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method which is not well known or discussed in the literature while exhibiting great potential is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more ISO 10534-2 standard-compliant. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.

  7. Generalized disequilibrium test for association in qualitative traits incorporating imprinting effects based on extended pedigrees.

    PubMed

    Li, Jian-Long; Wang, Peng; Fung, Wing Kam; Zhou, Ji-Yuan

    2017-10-16

    For dichotomous traits, the generalized disequilibrium test with the moment estimate of the variance (GDT-ME) is a powerful family-based association method. Genomic imprinting is an important epigenetic phenomenon, and there has been increasing interest in incorporating imprinting effects to improve the power of association analysis. However, GDT-ME does not take imprinting effects into account, and it has not been investigated whether it can be used for association analysis when such effects indeed exist. In this article, based on a novel decomposition of the genotype score according to the paternal or maternal source of the allele, we propose the generalized disequilibrium test with imprinting (GDTI) for complete pedigrees without any missing genotypes. We then extend GDTI and GDT-ME to accommodate incomplete pedigrees in which some pedigrees have missing genotypes, by using a Monte Carlo (MC) sampling and estimation scheme to infer missing genotypes from the available genotypes in each pedigree; the resulting tests are denoted MCGDTI and MCGDT-ME, respectively. The proposed GDTI and MCGDTI methods evaluate the differences of the paternal as well as maternal allele scores for all discordant relative pairs in a pedigree, including pairs beyond first-degree relatives. Advantages of the proposed GDTI and MCGDTI test statistics over existing methods are demonstrated by simulation studies under various settings and by application to a rheumatoid arthritis dataset. Simulation results show that the proposed tests control the size well under the null hypothesis of no association and outperform the existing methods under various imprinting effect models. The existing GDT-ME and the proposed MCGDT-ME can be used to test for association even when imprinting effects exist. In the application to the rheumatoid arthritis data, compared to the existing methods, MCGDTI identifies more loci statistically significantly associated with the disease. Under complete and incomplete imprinting effect models, our proposed GDTI and MCGDTI methods, by considering information on imprinting effects and all discordant relative pairs within each pedigree, outperform all the existing test statistics, and MCGDTI can recapture much of the missing information. Therefore, MCGDTI is recommended in practice.

  8. FMLRC: Hybrid long read error correction using an FM-index.

    PubMed

    Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D

    2018-02-09

    Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limit their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with an auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency compared with existing methods will help make better economic use of emerging long read sequencing technologies.

  9. Robust volcano plot: identification of differential metabolites in the presence of outliers.

    PubMed

    Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro

    2018-04-11

    The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel weight based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano .
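
    For contrast with the robust kernel-weighted version proposed in this record, a standard (outlier-sensitive) volcano computation looks as follows on synthetic data; thresholds and sample sizes are arbitrary choices:

      import numpy as np
      from scipy import stats

      # Per-metabolite log2 fold change and t-test p-value, thresholded to
      # call differential metabolites; this is the classical computation,
      # not the paper's kernel-weighted robust variant.
      rng = np.random.default_rng(0)
      ctrl = rng.lognormal(0.0, 0.4, size=(30, 50))  # 30 samples x 50 metabolites
      case = rng.lognormal(0.1, 0.4, size=(30, 50))

      lfc = np.log2(case.mean(axis=0) / ctrl.mean(axis=0))
      p = stats.ttest_ind(case, ctrl, axis=0).pvalue
      hits = (np.abs(lfc) > 1.0) & (p < 0.05)
      print(int(hits.sum()), "differential metabolites")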

  10. Unconstrained and contactless hand geometry biometrics.

    PubMed

    de-Santos-Sierra, Alberto; Sánchez-Ávila, Carmen; Del Pozo, Gonzalo Bailador; Guerra-Casanova, Javier

    2011-01-01

    This paper presents a hand biometric system for contact-less, platform-free scenarios, proposing innovative methods in feature extraction, template creation and template matching. The evaluation of the proposed method considers both the use of three contact-less publicly available hand databases, and the comparison of its performance to two competitive pattern recognition techniques existing in the literature: namely support vector machines (SVM) and k-nearest neighbour (k-NN). Results highlight the fact that the proposed method outperforms existing approaches in the literature in terms of computational cost, accuracy in human identification, number of extracted features and number of samples for template creation. The proposed method is a suitable solution for human identification in contact-less scenarios based on hand biometrics, providing a feasible solution for devices with limited hardware requirements like mobile devices.

  12. A Review On Missing Value Estimation Using Imputation Algorithm

    NASA Astrophysics Data System (ADS)

    Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina

    2017-09-01

    The presence of missing values in a data set has always been a major problem for precise prediction. Methods for imputing missing values need to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed to counter the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on how global or local information in the data set is exploited for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are also described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that it gives readers a better understanding of trends in imputation methods.

  13. Embedded WENO: A design strategy to improve existing WENO schemes

    NASA Astrophysics Data System (ADS)

    van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.

    2017-02-01

    Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be genuine improvements over their standard counterparts by several numerical examples, and all of them require no added computational effort compared to their standard counterparts.

  14. MOISTURE IN COTTON BY THE KARL FISCHER TITRATION REFERENCE METHOD

    USDA-ARS?s Scientific Manuscript database

    Moisture is a critical parameter that influences many aspects of cotton fiber from harvesting and ginning to various fiber properties. Because of their importance, reference moisture methods that are more accurate than the existing oven-drying techniques and relatively easy to generate results are ...

  15. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, regression-based methods are very popular and have been shown to perform better than other types of methods on many testing microarray datasets. To further improve the performance of regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analysis because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values is an essential issue. Since our proposed shrinkage regression-based methods provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
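
    A minimal sketch of the flavour of such an imputation, under simplifying assumptions: the k genes most correlated with the target gene are selected, a least squares fit is shrunk by a fixed illustrative factor (rather than the paper's adjusted coefficients), and the missing entry is predicted.

    ```python
    # Sketch of correlation-based gene selection + shrunken least squares
    # imputation; the shrinkage factor here is a fixed stand-in.
    import numpy as np

    def impute_shrinkage(X, gene, sample, k=5, shrink=0.8):
        """X: genes x samples matrix with np.nan marking missing values."""
        observed = ~np.isnan(X[gene])
        target = X[gene, observed]
        # |Pearson correlation| of every usable gene with the target gene
        cors = np.array([
            abs(np.corrcoef(X[g, observed], target)[0, 1])
            if g != gene and not np.isnan(X[g, observed]).any()
               and not np.isnan(X[g, sample]) else -1.0
            for g in range(X.shape[0])
        ])
        similar = np.argsort(cors)[-k:]                 # k most similar genes
        A = np.column_stack([np.ones(target.size)] + [X[g, observed] for g in similar])
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        beta[1:] *= shrink                              # shrink regression coefficients
        return np.concatenate([[1.0], X[similar, sample]]) @ beta

    rng = np.random.default_rng(1)
    X = rng.normal(size=(50, 12))
    X[0, 3] = np.nan
    print(impute_shrinkage(X, gene=0, sample=3))
    ```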

  16. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift that results from integration of acceleration or velocity, so as to obtain accurate position estimation. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method with the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
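
    The following toy sketch shows only the linear-filtering idea described above, assuming the approximate frequency band of the motion is known; the adaptive WFLC/BMFLC stage of the paper is omitted.

    ```python
    # Band-pass the acceleration to suppress components that integrate into
    # drift, then integrate twice to recover position (toy 8 Hz motion).
    import numpy as np
    from scipy.signal import butter, filtfilt
    from scipy.integrate import cumulative_trapezoid

    fs = 1000.0                                   # sampling rate (Hz)
    t = np.arange(0, 5, 1 / fs)
    accel = -(2 * np.pi * 8) ** 2 * 0.001 * np.sin(2 * np.pi * 8 * t)  # 8 Hz, 1 mm
    accel += 0.05 + 0.02 * t                      # sensor bias + slow drift

    b, a = butter(4, [4, 16], btype="bandpass", fs=fs)  # pass band around the motion
    acc_f = filtfilt(b, a, accel)                 # zero-phase band-pass filtering
    vel = cumulative_trapezoid(acc_f, t, initial=0.0)
    vel = filtfilt(b, a, vel)                     # re-filter residual drift
    pos = cumulative_trapezoid(vel, t, initial=0.0)
    print("peak-to-peak position estimate (m):", pos.max() - pos.min())  # ~0.002
    ```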

  17. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We propose a calculation method for the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of our proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed a correlation between the quasi ventilation thresholds calculated by our proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.

  18. Existence and global attractivity of positive periodic solutions of periodic n-species Lotka-Volterra competition systems with several deviating arguments.

    PubMed

    Fan, M; Wang, K; Jiang, D

    1999-08-01

    In this paper, we study the existence and global attractivity of positive periodic solutions of periodic n-species Lotka-Volterra competition systems. By using the method of coincidence degree and a Lyapunov functional, a set of easily verifiable sufficient conditions is derived for the existence of at least one strictly positive (componentwise) periodic solution of periodic n-species Lotka-Volterra competition systems with several deviating arguments, and for the existence of a unique globally asymptotically stable periodic solution with strictly positive components of the periodic n-species Lotka-Volterra competition system with several delays. Some new results are obtained. As an application, we also examine some special cases of the system, which have been studied extensively in the literature. Some known results are improved and generalized.
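
    The abstract does not reproduce the equations; for orientation, a generic periodic n-species Lotka-Volterra competition system with deviating arguments takes the form below (our notation, not necessarily the paper's exact class).

    ```latex
    % Generic periodic n-species Lotka-Volterra competition system with
    % deviating arguments \tau_{ij}(t); here r_i, a_{ij}, \tau_{ij} are
    % continuous, positive, \omega-periodic functions.
    \begin{equation*}
      \dot{x}_i(t) = x_i(t)\Bigl[r_i(t) - \sum_{j=1}^{n} a_{ij}(t)\,
      x_j\bigl(t - \tau_{ij}(t)\bigr)\Bigr], \qquad i = 1, \dots, n .
    \end{equation*}
    ```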

  19. An improved conjugate gradient scheme to the solution of least squares SVM.

    PubMed

    Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya

    2005-03-01

    The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparison with other existing algorithms.
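
    A sketch of the standard block-elimination reduction for the LS-SVM system (which may differ in detail from the letter's formulation): with A = K + I/γ symmetric positive definite, two conjugate gradient solves recover the bias and the dual variables.

    ```python
    # LS-SVM regression: solve A.eta = 1 and A.nu = y with CG, then
    # b = (1'nu)/(1'eta) and alpha = nu - b*eta satisfy the full KKT system.
    import numpy as np
    from scipy.sparse.linalg import cg

    def rbf_kernel(X, Y, sigma=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    rng = np.random.default_rng(2)
    X = rng.uniform(-3, 3, size=(80, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

    gamma = 10.0
    A = rbf_kernel(X, X) + np.eye(len(X)) / gamma   # symmetric positive definite
    ones = np.ones(len(X))
    eta, _ = cg(A, ones)                 # A eta = 1
    nu, _ = cg(A, y)                     # A nu  = y
    b = (ones @ nu) / (ones @ eta)       # bias from the equality constraint
    alpha = nu - b * eta                 # dual coefficients

    x_test = np.array([[0.5]])
    print(rbf_kernel(x_test, X) @ alpha + b, np.sin(0.5))  # prediction vs truth
    ```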

  20. A New Adaptive Framework for Collaborative Filtering Prediction

    PubMed Central

    Almosallam, Ibrahim A.; Shang, Yi

    2010-01-01

    Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on the data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self-adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved a 4.67% improvement over Netflix's system. PMID:21572924
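
    A bare-bones sketch of user-based prediction with z-scores (the adaptive blend with item-based and global statistics is not reproduced; the toy data and parameter choices are ours):

    ```python
    # Standardise ratings per user, then predict a rating as the target user's
    # mean plus a similarity-weighted average of other users' z-scores.
    import numpy as np

    R = np.array([[5, 3, 0, 1],        # 0 marks a missing rating
                  [4, 0, 0, 1],
                  [1, 1, 0, 5],
                  [1, 0, 4, 4]], dtype=float)
    mask = R > 0

    mean = np.array([r[m].mean() for r, m in zip(R, mask)])
    std = np.array([r[m].std() or 1.0 for r, m in zip(R, mask)])
    Z = np.where(mask, (R - mean[:, None]) / std[:, None], 0.0)

    def predict(u, i):
        raters = mask[:, i] & (np.arange(len(R)) != u)
        # cosine similarity between user u and each user who rated item i
        sims = np.array([
            Z[u] @ Z[v] / (np.linalg.norm(Z[u]) * np.linalg.norm(Z[v]) + 1e-12)
            for v in np.where(raters)[0]
        ])
        z_hat = sims @ Z[raters, i] / (np.abs(sims).sum() + 1e-12)
        return mean[u] + std[u] * z_hat   # map z-score back to u's rating scale

    print(predict(1, 1))
    ```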

  2. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, an experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: the proposed in-place calibration and two existing calibrations, namely array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter. The signals received at the dual receivers contain similar unwanted components, that is, the directly received signal and antenna coupling. In contrast to simulations, the antennas are not perfectly matched and there may be unexpected environmental errors; thus, we used the developed experimental system to demonstrate the proposed method. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were rectified by averaging repeatedly measured signals. According to the experimental results, the tumor was successfully detected using all three calibration methods. The cross correlation was calculated against the reconstructed image of the ideal differential calibration for a quantitative comparison between the existing rotation calibration and the proposed in-place calibration. The mean cross correlation between the in-place calibration and the ideal differential calibration was 0.80, while the mean cross correlation of the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method. A quantitative analysis was also performed, and the experimental results show a tendency similar to the simulation.

  3. A new family of Polak-Ribiere-Polyak conjugate gradient method with the strong-Wolfe line search

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Mamat, Mustafa; Rivaie, Mohd

    2017-08-01

    The conjugate gradient (CG) method is an important technique in unconstrained optimization due to its effectiveness and low memory requirements. The focus of this paper is to introduce a new CG method for solving large-scale unconstrained optimization problems. Theoretical proofs show that the new method fulfills the sufficient descent condition if the strong Wolfe-Powell inexact line search is used. Besides, computational results show that our proposed method outperforms other existing CG methods.
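
    For orientation, a minimal CG loop with the classical Polak-Ribiere-Polyak update and a strong-Wolfe line search via SciPy is sketched below; the new family's exact β formula is not given in the abstract, so the classical PRP+ update is used instead.

    ```python
    # PRP+ conjugate gradient on the Rosenbrock function with a strong-Wolfe
    # line search (scipy.optimize.line_search enforces strong Wolfe conditions).
    import numpy as np
    from scipy.optimize import line_search, rosen, rosen_der

    x = np.array([-1.2, 1.0])
    g = rosen_der(x)
    d = -g
    for k in range(200):
        alpha = line_search(rosen, rosen_der, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:          # line search failed: restart along steepest descent
            d, alpha = -g, 1e-3
        x = x + alpha * d
        g_new = rosen_der(x)
        if np.linalg.norm(g_new) < 1e-8:
            break
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PRP+ (non-negative) beta
        d = -g_new + beta * d
        g = g_new
    print(k, x)                    # converges to the minimiser (1, 1)
    ```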

  4. BinQuasi: a peak detection method for ChIP-sequencing data with biological replicates.

    PubMed

    Goren, Emily; Liu, Peng; Wang, Chao; Wang, Chong

    2018-04-19

    ChIP-seq experiments aimed at detecting DNA-protein interactions require biological replication to draw inferential conclusions; however, there is no current consensus on how to analyze ChIP-seq data with biological replicates. Very few methodologies exist for the joint analysis of replicated ChIP-seq data, with approaches ranging from combining the results of analyzing replicates individually to joint modeling of all replicates. Combining the results of individual replicates analyzed separately can lead to reduced peak classification performance compared to joint modeling. Currently available methods for joint analysis may fail to control the false discovery rate at the nominal level. We propose BinQuasi, a peak caller for replicated ChIP-seq data that jointly models biological replicates using a generalized linear model framework and employs a one-sided quasi-likelihood ratio test to detect peaks. When applied to simulated data and real datasets, BinQuasi performs favorably compared to existing methods, including better control of the false discovery rate than existing joint modeling approaches. BinQuasi offers a flexible approach to joint modeling of replicated ChIP-seq data which is preferable to combining the results of replicates analyzed individually. Source code is freely available for download at https://cran.r-project.org/package=BinQuasi, implemented in R. Contact: pliu@iastate.edu or egoren@iastate.edu. Supplementary material is available at Bioinformatics online.

  5. Spatial Lattice Modulation for MIMO Systems

    NASA Astrophysics Data System (ADS)

    Choi, Jiwook; Nam, Yunseo; Lee, Namyoon

    2018-06-01

    This paper proposes spatial lattice modulation (SLM), a spatial modulation method for multiple-input multiple-output (MIMO) systems. The key idea of SLM is to jointly exploit spatial, in-phase, and quadrature dimensions to modulate information bits into a multi-dimensional signal set that consists of lattice points. One major finding is that SLM achieves a higher spectral efficiency than the existing spatial modulation and spatial multiplexing methods for the MIMO channel under the constraint of M-ary pulse-amplitude-modulation (PAM) input signaling per dimension. In particular, it is shown that when the SLM signal set is constructed by using dense lattices, a significant signal-to-noise-ratio (SNR) gain, i.e., a nominal coding gain, is attainable compared to the existing methods. In addition, closed-form expressions for both the average mutual information and average symbol-vector-error-probability (ASVEP) of generic SLM are derived under Rayleigh-fading environments. To reduce detection complexity, a low-complexity detection method for SLM, which is referred to as lattice sphere decoding, is developed by exploiting lattice theory. Simulation results verify the accuracy of the conducted analysis and demonstrate that the proposed SLM techniques achieve higher average mutual information and lower ASVEP than do existing methods.

  6. On existence of the σ(600): Its physical implications and related problems

    NASA Astrophysics Data System (ADS)

    Ishida, Shin

    1998-05-01

    We re-analyze the I = 0 ππ scattering phase shift δ_0^0 through a new method of S-matrix parametrization (IA: interfering amplitude method), and show a result strongly suggesting the existence of the σ particle, the long-sought chiral partner of the π meson. Furthermore, through phenomenological analyses of typical production processes of the 2π system, the pp central collision and the J/Ψ→ωππ decay, applying an intuitive formula as a sum of Breit-Wigner amplitudes (VMW: variant mass and width method), further evidence for the existence of the σ is given. The validity of the methods used in the above analyses is investigated, using a simple field-theoretical model, from the general viewpoint of unitarity and the applicability of the final state interaction (FSI) theorem, especially in relation to the "universality" argument. It is shown that the IA and VMW are obtained as the physical state representations of scattering and production amplitudes, respectively. The VMW is shown to be an effective method to obtain resonance properties from production processes, which generally have unknown strong phases. The conventional analyses based on "universality" seem to be powerless for this purpose.

  7. A multi-scale network method for two-phase flow in porous media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khayrat, Karim, E-mail: khayratk@ifd.mavt.ethz.ch; Jenny, Patrick

    Pore-network models of porous media are useful in the study of pore-scale flow in porous media. In order to extract macroscopic properties from flow simulations in pore networks, it is crucial that the networks be large enough to be considered representative elementary volumes. However, existing two-phase network flow solvers are limited to relatively small domains. For this purpose, a multi-scale pore-network (MSPN) method, which takes into account flow-rate effects and can simulate larger domains compared to existing methods, was developed. In our solution algorithm, a large pore network is partitioned into several smaller sub-networks. The algorithm to advance the fluid interfaces within each subnetwork consists of three steps. First, a global pressure problem on the network is solved approximately using the multiscale finite volume (MSFV) method. Next, the fluxes across the subnetworks are computed. Lastly, using fluxes as boundary conditions, a dynamic two-phase flow solver is used to advance the solution in time. Simulation results of drainage scenarios at different capillary numbers and unfavourable viscosity ratios are presented and used to validate the MSPN method against solutions obtained by an existing dynamic network flow solver.

  8. Stored grain pack factors for wheat: comparison of three methods to field measurements

    USDA-ARS?s Scientific Manuscript database

    Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...

  9. Comparison of brown sugar, hot water, and salt methods for detecting western cherry fruit fly (Diptera: Tephritidae) larvae in sweet cherry

    USDA-ARS?s Scientific Manuscript database

    Brown sugar or hot water methods have been developed to detect larvae of tephritid fruit flies in post-harvest fruit in order to maintain quarantine security. It would be useful to determine if variations of these methods can yield better results and if less expensive alternatives exist. This stud...

  10. A new gradient shimming method based on undistorted field map of B0 inhomogeneity.

    PubMed

    Bao, Qingjia; Chen, Fang; Chen, Li; Song, Kan; Liu, Zao; Liu, Chaoyang

    2016-04-01

    Most existing gradient shimming methods for NMR spectrometers estimate field maps that resolve B0 inhomogeneity spatially from dual gradient-echo (GRE) images acquired at different echo times. However, the distortions induced by the B0 inhomogeneity that always exists in the GRE images can result in estimated field maps that are distorted in both geometry and intensity, leading to inaccurate shimming. This work proposes a new gradient shimming method based on an undistorted field map of B0 inhomogeneity obtained by a more accurate field map estimation technique. Compared to the traditional field map estimation method, the new method exploits both the positive and negative polarities of the frequency-encoding gradients to eliminate the distortions caused by B0 inhomogeneity in the field map. Next, a corresponding automatic post-processing procedure is introduced to obtain an undistorted B0 field map based on knowledge of the invariant characteristics of the B0 inhomogeneity and the variant polarity of the encoding gradient. The experimental results on both simulated and real gradient shimming tests demonstrate the high performance of this new method.
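
    For contrast with the proposed technique, the conventional dual-echo field-map estimate that this work improves upon can be sketched as follows (toy data; the positive/negative readout-polarity correction is not shown):

    ```python
    # Conventional dual-echo field mapping: the off-resonance map is the phase
    # difference between two gradient-echo images divided by 2*pi*delta_TE.
    import numpy as np

    delta_te = 2.0e-3                              # echo-time difference (s)
    shape = (64, 64)
    b0_true = 40.0 * np.exp(-((np.indices(shape) - 32) ** 2).sum(0) / 300.0)  # Hz

    rho = np.ones(shape)                           # toy magnitude image
    s1 = rho * np.exp(1j * 2 * np.pi * b0_true * 0.0)        # first echo
    s2 = rho * np.exp(1j * 2 * np.pi * b0_true * delta_te)   # second echo
    s2 = s2 + 0.01 * np.random.default_rng(3).normal(size=shape)  # a little noise

    b0_est = np.angle(s2 * np.conj(s1)) / (2 * np.pi * delta_te)
    print("max |error| (Hz):", np.abs(b0_est - b0_true).max())
    ```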

  11. Control optimization of a lifting body entry problem by an improved and a modified method of perturbation function. Ph.D. Thesis - Houston Univ.

    NASA Technical Reports Server (NTRS)

    Garcia, F., Jr.

    1974-01-01

    The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem by using classical calculus of variations methods. Two perturbation methods were devised. These methods attempt to desensitize the solution of this type of problem to the required initial co-state estimates. Numerical results are presented for the optimal solution resulting from a number of different initial co-state estimates. The perturbation methods were compared, and it is found that they are an improvement over existing methods.

  12. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial.

  13. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of the image and the matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.
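
    The baseline this work accelerates can be sketched as follows: box-filtering the per-pixel cost once per disparity hypothesis, which is exactly the repeated filtering across the search range referred to above (toy data and parameters are ours):

    ```python
    # Conventional window-based cost aggregation: one box filter per disparity
    # hypothesis, followed by winner-takes-all selection.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def disparity_map(left, right, d_max=16, win=9):
        h, w = left.shape
        volume = np.full((d_max, h, w), np.inf)
        for d in range(d_max):
            # absolute-difference cost against the right image shifted by d
            cost = np.abs(left[:, d:] - right[:, : w - d])
            volume[d, :, d:] = uniform_filter(cost, size=win)  # aggregate window
        return volume.argmin(axis=0)                           # winner-takes-all

    rng = np.random.default_rng(4)
    right = rng.uniform(size=(60, 80))
    left = np.roll(right, 5, axis=1)               # a pure 5-pixel shift
    print(np.bincount(disparity_map(left, right).ravel()).argmax())  # -> 5
    ```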

  14. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict the sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) the current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  15. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data were acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. Experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
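
    A crude re-implementation sketch of the first and last steps (the morphological cleanup in between is replaced by a simple on-grid/off-grid contrast score, so this is not the paper's MLE):

    ```python
    # 2x2 cross-difference filter highlights blocking artifacts; the block size
    # is the candidate period whose grid lines show the strongest response.
    import numpy as np

    def estimate_block_size(img, candidates=range(2, 33)):
        # |I(i,j) - I(i,j+1) - I(i+1,j) + I(i+1,j+1)|
        d = np.abs(img[:-1, :-1] - img[:-1, 1:] - img[1:, :-1] + img[1:, 1:])
        profile = d.mean(axis=0)                    # column-wise artifact strength
        best, best_score = None, -np.inf
        for b in candidates:
            on = profile[b - 1 :: b].mean()         # response on hypothesised grid
            off = np.delete(profile, np.s_[b - 1 :: b]).mean()
            if on - off > best_score:
                best, best_score = b, on - off
        return best

    rng = np.random.default_rng(5)
    img = rng.uniform(size=(128, 128))
    img[:, 7::8] += 0.3                             # synthetic 8-pixel block grid
    print(estimate_block_size(img))                 # -> 8
    ```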

  16. Max-margin multiattribute learning with low-rank constraint.

    PubMed

    Zhang, Qiang; Chen, Lin; Li, Baoxin

    2014-07-01

    Attribute learning has attracted a lot of interest in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max-margin multiattribute learning with a low-rank constraint, which learns a set of attributes simultaneously using only relative rankings of the attributes for the data. By learning all the attributes simultaneously through the low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative rankings, the method avoids the restrictive binary attribute labels that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data, including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.

  17. A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine

    NASA Astrophysics Data System (ADS)

    Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu

    In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. A selective meta-search engine needs a method for selecting appropriate search engines for users' queries. Most existing methods use statistical data such as document frequency. These methods may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from the documents in a search engine and is used as a source description of that search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection that considers the relationships between terms and overcomes the problems caused by polysemous words. Further, our method does not have a centralized broker maintaining data such as document frequency for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method compared to other existing methods.
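
    A toy sketch of the underlying idea, with illustrative data and scoring choices of our own: each engine's thesaurus is a term co-occurrence table over its documents, and an engine scores highly when the query terms are related within that engine, which disambiguates polysemous words such as "bank".

    ```python
    # Per-engine co-occurrence "thesaurus" + query scoring for engine selection.
    from collections import Counter
    from itertools import combinations

    def build_thesaurus(docs):
        co = Counter()
        for doc in docs:
            terms = set(doc.lower().split())
            for a, b in combinations(sorted(terms), 2):
                co[(a, b)] += 1                      # co-occurrence within a doc
        return co

    def engine_score(thesaurus, query):
        terms = sorted(set(query.lower().split()))
        return sum(thesaurus[(a, b)] for a, b in combinations(terms, 2))

    engines = {
        "finance-engine": ["bank interest loan", "bank credit rate", "loan interest rate"],
        "geo-engine": ["river bank erosion", "bank of the river", "erosion water bank"],
    }
    query = "bank interest"
    scores = {name: engine_score(build_thesaurus(docs), query)
              for name, docs in engines.items()}
    print(max(scores, key=scores.get))   # -> finance-engine ('bank'+'interest' co-occur)
    ```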

  18. Bifurcating fronts for the Taylor-Couette problem in infinite cylinders

    NASA Astrophysics Data System (ADS)

    Hărăguş-Courcelle, M.; Schneider, G.

    We show the existence of bifurcating fronts for the weakly unstable Taylor-Couette problem in an infinite cylinder. These fronts connect a stationary bifurcating pattern, here the Taylor vortices, with the trivial ground state, here the Couette flow. In order to show the existence result, we improve a method which was already used in establishing the existence of bifurcating fronts for the Swift-Hohenberg equation by Collet and Eckmann, 1986, and by Eckmann and Wayne, 1991. The existence proof is based on spatial dynamics and center manifold theory. One of the difficulties in applying center manifold theory comes from an infinite number of eigenvalues on the imaginary axis for vanishing bifurcation parameter. Nevertheless, a finite-dimensional reduction is possible, since the eigenvalues leave the imaginary axis with different velocities if the bifurcation parameter is increased. In contrast to previous work, we have to use normal form methods and a non-standard cut-off function to obtain a center manifold which is large enough to contain the bifurcating fronts.

  19. On the evaluation of segmentation editing tools

    PubMed Central

    Heckel, Frank; Moltz, Jan H.; Meine, Hans; Geisler, Benjamin; Kießling, Andreas; D’Anastasi, Melvin; dos Santos, Daniel Pinto; Theruvath, Ashok Joseph; Hahn, Horst K.

    2014-01-01

    Efficient segmentation editing tools are important components in the segmentation process, as no automatic methods exist that always generate sufficient results. Evaluating segmentation editing algorithms is challenging, because their quality depends on the user’s subjective impression. So far, no established methods for an objective, comprehensive evaluation of such tools exist and, particularly, intermediate segmentation results are not taken into account. We discuss the evaluation of editing algorithms in the context of tumor segmentation in computed tomography. We propose a rating scheme to qualitatively measure the accuracy and efficiency of editing tools in user studies. In order to objectively summarize the overall quality, we propose two scores based on the subjective rating and the quantified segmentation quality over time. Finally, a simulation-based evaluation approach is discussed, which allows a more reproducible evaluation without the need for human input. This automated evaluation complements user studies, allowing a more convincing evaluation, particularly during development, where frequent user studies are not possible. The proposed methods have been used to evaluate two dedicated editing algorithms on 131 representative tumor segmentations. We show how the comparison of editing algorithms benefits from the proposed methods. Our results also show the correlation of the suggested quality score with the qualitative ratings. PMID:26158063

  20. Pseudorange Measurement Method Based on AIS Signals.

    PubMed

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-05-22

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurement solution is presented in this paper. Through mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. A Monte Carlo simulation was carried out to compare the accuracy of zero-crossing and differential-peak detection, which are two timestamp detection methods, in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to fuse the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.

  2. A NEW APPROACH TO THE STUDY OF MUCOADHESIVENESS OF POLYMERIC MEMBRANES USING SILICONE DISCS.

    PubMed

    Nowak, Karolina Maria; Szterk, Arkadiusz; Fiedor, Piotr; Bodek, Kazimiera Henryka

    2016-01-01

    The introduction of new test methods and the modification of existing ones are crucial for obtaining reliable results, which contributes to the development of innovative materials that may have clinical applications. Today, silicone is commonly used in medicine, and the diversity of its applications is continually growing. The aim of this study is to evaluate the mucoadhesiveness of polymeric membranes by a method that modifies existing test methods through the introduction of silicone discs. The matrices were designed for clinical application in the management of diseases within the oral cavity. The use of silicone discs allows reliable and reproducible results to be obtained, which allows various tensometric measurements to be made. In this study, different types of polymeric matrices and their crosslinking were examined, and matrices containing the active pharmaceutical ingredient were compared to the pure dosage form. Lidocaine hydrochloride (Lid(HCl)) was used as a model active substance, owing to its use in dentistry and its clinical safety. The results were characterized by high repeatability (RSD < 10.6%). The advantages of the silicone material, namely its mechanical strength and chemical and physical resistance, allowed a new test method using a texture analyzer to be proposed.

  3. A Hybrid Method to Estimate Specific Differential Phase and Rainfall With Linear Programming and Physics Constraints

    DOE PAGES

    Huang, Hao; Zhang, Guifu; Zhao, Kun; ...

    2016-10-20

    A hybrid method combining linear programming (LP) and physical constraints is developed to estimate the specific differential phase (KDP) and to improve rain estimation. Moreover, the hybrid KDP estimator and the existing estimators based on LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δhv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and KDP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid KDP estimator over existing methods.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cary, J.R.

    During the most recent funding period the authors obtained results important for helical confinement systems and in the use of modern computational methods for modeling of fusion systems. The most recent results include showing that the set of magnetic field functions that are omnigenous (i.e., the bounce-average drift lies within the flux surface) and, therefore, have good transport properties, is much larger than the set of quasihelical systems. This is important as quasihelical systems exist only for large aspect ratio. The authors have also carried out extensive earlier work on developing integrable three-dimensional magnetic fields, on trajectories in three-dimensional configurations, and on the existence of three-dimensional MHD equilibria close to vacuum integrable fields. At the same time they have been investigating the use of object oriented methods for scientific computing.

  5. Simple diffusion can support the pitchfork, the flip bifurcations, and the chaos

    NASA Astrophysics Data System (ADS)

    Meng, Lili; Li, Xinfu; Zhang, Guang

    2017-12-01

    In this paper, a discrete rational fraction population model with Dirichlet boundary conditions is considered. Using the discrete maximum principle and the sub- and super-solution method, necessary and sufficient conditions for the existence and uniqueness of positive steady state solutions are obtained. In addition, the dynamical behavior of a special two-patch metapopulation model is investigated by using the bifurcation method, center manifold theory, bifurcation diagrams and the largest Lyapunov exponent. The results show that the pitchfork bifurcation, the flip bifurcation, and chaos all occur. Clearly, these phenomena are caused by the simple diffusion. The theoretical analysis of this chaos is very important; unfortunately, no results exist in this direction yet. However, some open problems are given.

  6. A method for the design and development of medical or health care information websites to optimize search engine results page rankings on Google.

    PubMed

    Dunne, Suzanne; Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter

    2013-08-27

    The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for design and delivery of such websites, particularly those aimed at the general public. This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus, potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion.
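
    The two readability scores mentioned above have standard closed forms; a self-contained sketch follows (the vowel-group syllable counter is a rough heuristic, so values are approximate and published calculators differ slightly).

    ```python
    # Flesch Reading Ease and Flesch-Kincaid Grade Level from raw text.
    import re

    def count_syllables(word):
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:   # crude silent-e correction
            n -= 1
        return max(n, 1)

    def readability(text):
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        wps, spw = len(words) / sentences, syllables / len(words)
        fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
        fk = 0.39 * wps + 11.8 * spw - 15.59       # Flesch-Kincaid Grade Level
        return fre, fk

    print(readability("Generic medicines contain the same active ingredient as the original brand."))
    ```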

  7. A DFFD simulation method combined with the spectral element method for solid-fluid-interaction problems

    NASA Astrophysics Data System (ADS)

    Chen, Li-Chieh; Huang, Mei-Jiau

    2017-02-01

    A 2D simulation method for a rigid body moving in an incompressible viscous fluid is proposed. It combines one of the immersed-boundary methods, the direct forcing fictitious domain (DFFD) method, with the spectral element method; the former is employed for efficiently capturing the two-way fluid-structure interaction (FSI), while the geometric flexibility of the latter is utilized for any co-existing stationary and complicated solid or flow boundaries. A pseudo body force is imposed within the solid domain to enforce the rigid body motion, and a Lagrangian mesh composed of triangular elements is employed for tracing the rigid body. In particular, a so-called sub-cell scheme is proposed to smooth the discontinuity at the fluid-solid interface and to execute integrations involving Eulerian variables over the moving-solid domain. The accuracy of the proposed method is verified through the agreement of the simulation results for some typical flows with analytical solutions or the existing literature.

  8. A new method for designing dual foil electron beam forming systems. I. Introduction, concept of the method

    NASA Astrophysics Data System (ADS)

    Adrich, Przemysław

    2016-05-01

    In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.

  9. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information to an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method, which turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
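
    A small sketch of a minimum-relative-entropy reweighting under an assumed constraint (a shifted ensemble mean stands in for the forecast information): the minimiser is an exponential tilting of the prior weights, w_i ∝ q_i exp(λ x_i), with λ solved for numerically.

    ```python
    # Minimum-relative-entropy reweighting: minimise KL(w || q) subject to
    # E_w[x] = m_target; the solution is exponential tilting of q.
    import numpy as np
    from scipy.optimize import brentq

    rng = np.random.default_rng(6)
    x = rng.normal(0.0, 1.0, size=500)      # ensemble of historical values
    q = np.full_like(x, 1.0 / x.size)       # prior (equal) member weights
    m_target = 0.5                          # new information: ensemble mean = 0.5

    def tilted_mean(lam):
        w = q * np.exp(lam * x)
        w /= w.sum()
        return w @ x

    lam = brentq(lambda l: tilted_mean(l) - m_target, -20.0, 20.0)
    w = q * np.exp(lam * x)
    w /= w.sum()
    print("reweighted mean:", w @ x)                      # ~0.5
    print("information added (nats):", w @ np.log(w / q))
    ```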

  10. On the existence of global solutions of the one-dimensional cubic NLS for initial data in the modulation space M_{p,q}(R)

    NASA Astrophysics Data System (ADS)

    Chaichenets, Leonid; Hundertmark, Dirk; Kunstmann, Peer; Pattakos, Nikolaos

    2017-10-01

    We prove global existence for the one-dimensional cubic nonlinear Schrödinger equation in modulation spaces M_{p,p'} for p sufficiently close to 2. In contrast to known results, [9] and [14], our result requires no smallness condition on the initial data. The proof adapts a splitting method, inspired by work of Vargas-Vega, Hyakuna-Tsutsumi and Grünrock, to the modulation space setting and exploits polynomial growth of the free Schrödinger group on modulation spaces.

  11. Using iterative cluster merging with improved gap statistics to perform online phenotype discovery in the context of high-throughput RNAi screens

    PubMed Central

    Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC

    2008-01-01

    Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones and clarifying novel phenotypes from each other. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian mixture model (GMM) is employed to estimate the distribution of each existing phenotype, and then used as the reference distribution in the gap statistics. This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images which are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell-cycle phase identification using HeLa cells [Additional files 1, 3, 4] and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively, with an accuracy range of 85%–90%. When our method was implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes to the regulation of cell shape, it efficiently discovered meaningful new phenotypes and provided novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on one-class SVM, so that it can be used for online phenotype discovery. Under different conditions, we compared the SVM-based method with our method using various datasets, and our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variations in starting conditions including the number and composition of existing phenotypes, and datasets from different screens. In our findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
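
    A condensed sketch of the two named ingredients, a Gaussian mixture model per phenotype and a gap-statistic criterion for the number of clusters, using the textbook gap statistic of Tibshirani et al. rather than the paper's improved variant:

    ```python
    # Gap statistic to pick the number of clusters, then a GMM as the
    # phenotype model for the clustered data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.mixture import GaussianMixture

    def log_within_dispersion(X, k, seed=0):
        return np.log(KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X).inertia_)

    def gap(X, k, n_ref=10, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = X.min(axis=0), X.max(axis=0)
        ref = [log_within_dispersion(rng.uniform(lo, hi, size=X.shape), k)
               for _ in range(n_ref)]
        return np.mean(ref) - log_within_dispersion(X, k)

    rng = np.random.default_rng(7)
    X = np.vstack([rng.normal(c, 0.3, size=(60, 2)) for c in ((0, 0), (3, 0), (0, 3))])
    k_best = max(range(1, 7), key=lambda k: gap(X, k))
    print("estimated number of phenotypes:", k_best)          # -> 3

    gmm = GaussianMixture(n_components=k_best, random_state=0).fit(X)
    print("phenotype model means:\n", gmm.means_.round(1))
    ```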

  12. Structured filtering

    NASA Astrophysics Data System (ADS)

    Granade, Christopher; Wiebe, Nathan

    2017-08-01

    A major challenge facing existing sequential Monte Carlo methods for parameter estimation in physics stems from the inability of existing approaches to robustly deal with experiments that have different mechanisms that yield the results with equivalent probability. We address this problem here by proposing a form of particle filtering that clusters the particles that comprise the sequential Monte Carlo approximation to the posterior before applying a resampler. Through a new graphical approach to thinking about such models, we are able to devise an artificial-intelligence based strategy that automatically learns the shape and number of the clusters in the support of the posterior. We demonstrate the power of our approach by applying it to randomized gap estimation and a form of low circuit-depth phase estimation where existing methods from the physics literature either exhibit much worse performance or even fail completely.
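
    A bare-bones sketch of the clustering-before-resampling idea on a bimodal posterior (here with k-means and a fixed number of clusters, whereas the paper learns the shape and number of clusters automatically):

    ```python
    # Cluster the particles, then resample within each cluster so every
    # discovered mode keeps a share of particles matching its posterior mass.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(8)
    # particles approximating a bimodal posterior (e.g. from gap estimation)
    particles = np.concatenate([rng.normal(-2, 0.2, 500), rng.normal(2, 0.2, 500)])
    weights = np.exp(-0.5 * (particles - 1.9) ** 2 / 0.5)   # likelihood of new datum
    weights /= weights.sum()

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(particles[:, None])
    resampled = []
    for c in (0, 1):
        idx = np.where(labels == c)[0]
        n_c = max(int(round(weights[idx].sum() * particles.size)), 1)  # cluster share
        p = weights[idx] / weights[idx].sum()
        resampled.append(rng.choice(particles[idx], size=n_c, p=p))
    print("particles per mode after resampling:", [a.size for a in resampled])
    ```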

  13. Statistical methods to estimate treatment effects from multichannel electroencephalography (EEG) data in clinical trials.

    PubMed

    Ma, Junshui; Wang, Shubing; Raubertas, Richard; Svetnik, Vladimir

    2010-07-15

    With the increasing popularity of using electroencephalography (EEG) to reveal treatment effects in drug development clinical trials, the vast volume and complex nature of EEG data make for an intriguing but challenging topic. In this paper, the statistical analysis methods recommended by the EEG community, along with methods frequently used in the published literature, are first reviewed. A straightforward adjustment of the existing methods to handle multichannel EEG data is then introduced. In addition, based on the spatial smoothness property of EEG data, a new category of statistical methods is proposed. The new methods use a linear combination of low-degree spherical harmonic (SPHARM) basis functions to represent a spatially smoothed version of the EEG data on the scalp, which is close to a sphere in shape. In total, seven statistical methods, including both the existing and the newly proposed methods, are applied to two clinical datasets to compare their power to detect a drug effect. Contrary to the EEG community's recommendation, our results suggest that (1) the nonparametric method does not outperform its parametric counterpart; and (2) including baseline data in the analysis does not always improve the statistical power. In addition, our results recommend that (3) simple paired statistical tests should be avoided due to their poor power; and (4) the proposed spatially smoothed methods perform better than their unsmoothed versions.
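
    A schematic sketch of the SPHARM smoothing step: evaluate low-degree spherical harmonics at the electrode positions, fit coefficients by least squares, and reconstruct a spatially smoothed copy of the data; the electrode coordinates below are random stand-ins for a real montage.

    ```python
    # Least squares projection of multichannel data onto a low-degree
    # real-valued spherical harmonic basis.
    import numpy as np
    from scipy.special import sph_harm

    def design_matrix(theta, phi, l_max=3):
        cols = []
        for l in range(l_max + 1):
            for m in range(-l, l + 1):
                y = sph_harm(m, l, theta, phi)      # theta: azimuth, phi: polar
                cols.append(y.real if m >= 0 else y.imag)  # real-valued basis
        return np.column_stack(cols)

    rng = np.random.default_rng(9)
    n_ch = 64
    theta = rng.uniform(0, 2 * np.pi, n_ch)        # electrode azimuths
    phi = rng.uniform(0, np.pi / 2, n_ch)          # upper-hemisphere polar angles
    data = np.cos(phi) + 0.2 * rng.normal(size=n_ch)   # toy single-time-point EEG

    B = design_matrix(theta, phi, l_max=3)         # 16 basis functions < 64 channels
    coef, *_ = np.linalg.lstsq(B, data, rcond=None)
    smoothed = B @ coef
    print("residual rms:", np.sqrt(np.mean((smoothed - data) ** 2)))
    ```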

  14. Melnikov processes and chaos in randomly perturbed dynamical systems

    NASA Astrophysics Data System (ADS)

    Yagasaki, Kazuyuki

    2018-07-01

    We consider a wide class of randomly perturbed systems subjected to stationary Gaussian processes and show that chaotic orbits exist almost surely under some nondegeneracy condition, no matter how small the random forcing terms are. This result contrasts sharply with the deterministic forcing case, in which chaotic orbits exist only if the influence of the forcing terms overcomes that of the other terms in the perturbations. To obtain the result, we extend Melnikov’s method and prove that the corresponding Melnikov functions, which we call Melnikov processes, have infinitely many zeros, so that infinitely many transverse homoclinic orbits exist. In addition, a theorem on the existence and smoothness of stable and unstable manifolds is given, and the Smale–Birkhoff homoclinic theorem is extended in an appropriate form for randomly perturbed systems. We illustrate our theory with the Duffing oscillator subjected parametrically to the Ornstein–Uhlenbeck process.

  15. Compare diagnostic tests using transformation-invariant smoothed ROC curves

    PubMed Central

    Tang, Liansheng; Du, Pang; Wu, Chengqing

    2012-01-01

    The receiver operating characteristic (ROC) curve, which plots true positive rates against false positive rates as the threshold varies, is an important tool for evaluating biomarkers in diagnostic medicine studies. By definition, the ROC curve is monotone increasing from 0 to 1 and is invariant to any monotone transformation of test results. It is also typically smooth to some degree when test results from the diseased and non-diseased subjects follow continuous distributions. Most existing ROC curve estimation methods do not guarantee all of these properties. One of the exceptions is Du and Tang (2009), which applies a monotone spline regression procedure to empirical ROC estimates. However, their method does not consider the inherent correlations between empirical ROC estimates, which makes the derivation of the asymptotic properties very difficult. In this paper we propose a penalized weighted least-squares estimation method, which incorporates the covariance between empirical ROC estimates as a weight matrix. The resulting estimator satisfies all the aforementioned properties, and we show that it is also consistent. A resampling approach is then used to extend our method to comparisons of two or more diagnostic tests. Our simulations show a significantly improved performance over the existing method, especially for steep ROC curves. We then apply the proposed method to a cancer diagnostic study that compares several newly developed diagnostic biomarkers to a traditional one. PMID:22639484
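
    In spirit, the proposed estimator solves a penalized weighted least-squares problem in which the covariance of the empirical ROC estimates supplies the weight matrix. The sketch below substitutes three simplifications of mine: a truncated-power spline basis, a second-difference roughness penalty, and a diagonal weight approximation; unlike the actual method, monotonicity is not enforced here.

```python
import numpy as np

def penalized_wls_roc(fpr, tpr, var_tpr, n_knots=10, lam=1.0):
    """Fit a smooth ROC curve by minimizing
    (y - B c)' W (y - B c) + lam * c' D'D c,
    where W = diag(1/var) approximates the inverse covariance of the
    empirical ROC estimates."""
    knots = np.linspace(0, 1, n_knots)[1:-1]
    basis = lambda t: np.column_stack(
        [np.ones_like(t), t, t ** 2] + [np.clip(t - k, 0, None) ** 2 for k in knots])
    B = basis(fpr)
    W = np.diag(1.0 / np.maximum(var_tpr, 1e-8))
    D = np.diff(np.eye(B.shape[1]), n=2, axis=0)   # second-difference penalty
    coef = np.linalg.solve(B.T @ W @ B + lam * D.T @ D, B.T @ W @ tpr)
    return lambda t: basis(t) @ coef

# Empirical ROC points, weighted by an (approximate) binomial variance.
fpr = np.linspace(0.01, 0.99, 50)
tpr = np.clip(fpr ** 0.4 + np.random.normal(0, 0.03, 50), 0, 1)
roc = penalized_wls_roc(fpr, tpr, tpr * (1 - tpr) / 100)
print(roc(np.array([0.1, 0.5])))
```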

  16. Determining the semantic similarities among Gene Ontology terms.

    PubMed

    Taha, Kamal

    2013-05-01

    We present in this paper novel techniques that determine the semantic relationships among Gene Ontology (GO) terms. We implemented these techniques in a prototype system called GoSE, which resides between a user application and the GO database. Given a set S of GO terms, GoSE returns another set S' of GO terms, where each term in S' is semantically related to each term in S. Most current research focuses on determining the semantic similarities among GO terms based solely on their IDs and proximity to one another in the GO graph structure, while overlooking the contexts of the terms, which may lead to erroneous results. The context of a GO term T is the set of other terms whose existence in the GO graph structure is dependent on T. We propose novel techniques that determine the contexts of terms based on the concept of existence dependency, and we present a stack-based sort-merge algorithm employing these techniques for determining the semantic similarities among GO terms. We evaluated GoSE experimentally and compared it with three existing methods. The results of measuring the semantic similarities among genes in KEGG and Pfam pathways, retrieved from the DBGET and Sanger Pfam databases, respectively, show that our method outperforms the other three methods in recall and precision.

  17. Implications of neutron star properties for the existence of light dark matter

    NASA Astrophysics Data System (ADS)

    Motta, T. F.; Guichon, P. A. M.; Thomas, A. W.

    2018-05-01

    It was recently suggested that the discrepancy between two methods of measuring the lifetime of the neutron may be a result of an unseen decay mode into a dark matter particle which is almost degenerate with the neutron. We explore the consequences of this for the properties of neutron stars, finding that their known properties are in conflict with the existence of such a particle.

  18. Existence of topological multi-string solutions in Abelian gauge field theories

    NASA Astrophysics Data System (ADS)

    Han, Jongmin; Sohn, Juhee

    2017-11-01

    In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.

  19. Current advances on polynomial resultant formulations

    NASA Astrophysics Data System (ADS)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to the resurrection of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Groebner basis and Ritt-Wu methods, whose complexity and storage requirements are high. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought when the conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
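
    For concreteness, the basic elimination step that such formulations refine can be reproduced with the classical Sylvester resultant available in SymPy: eliminating y from two bivariate polynomials leaves a univariate condition on x. The two example polynomials are mine.

```python
from sympy import symbols, resultant, factor

x, y = symbols('x y')
f = x**2 + y**2 - 1          # circle
g = x*y - 1                  # hyperbola

# The Sylvester resultant eliminates y; the roots in x of the resultant are
# the x-coordinates of the intersection points of f = 0 and g = 0.
R = resultant(f, g, y)
print(factor(R))             # x**4 - x**2 + 1
```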

  20. Optimal Methods to Screen Men and Women for Intimate Partner Violence: Results from an Internal Medicine Residency Continuity Clinic

    ERIC Educational Resources Information Center

    Kapur, Nitin A.; Windish, Donna M.

    2011-01-01

    Contradictory data exist regarding optimal methods and instruments for intimate partner violence (IPV) screening in primary care settings. The purpose of this study was to determine the optimal method and screening instrument for IPV among men and women in a primary-care resident clinic. We conducted a cross-sectional study at an urban, academic,…

  1. A family of conjugate gradient methods for large-scale nonlinear equations.

    PubMed

    Feng, Dexiang; Sun, Min; Wang, Xueyong

    2017-01-01

    In this paper, we present a family of conjugate gradient projection methods for solving large-scale nonlinear equations. At each iteration, the method needs low storage and the subproblem can be solved easily. Compared with existing methods for this problem, its global convergence is established without requiring Lipschitz continuity of the underlying mapping. Preliminary numerical results are reported to show the efficiency of the proposed method.
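
    The projection step that gives such methods their derivative-free global convergence is easy to state in code. Below is a minimal sketch of one generic member of such a family, in the style of Solodov-Svaiter projection methods; the Fletcher-Reeves-type conjugacy parameter and the line-search constants are illustrative choices, not the paper's.

```python
import numpy as np

def cg_projection(F, x0, tol=1e-8, max_iter=500, sigma=1e-4, rho=0.5):
    """Derivative-free conjugate gradient projection method for F(x) = 0
    with F monotone. Each iterate is projected onto a hyperplane that
    separates the current point from the solution set."""
    x = x0.copy()
    Fx = F(x)
    d = -Fx
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        # Backtracking line search: find z = x + a*d with -F(z)'d >= sigma*a*||d||^2.
        a = 1.0
        while -F(x + a * d) @ d < sigma * a * (d @ d):
            a *= rho
        z = x + a * d
        Fz = F(z)
        # Project x onto the separating hyperplane {u : F(z)'(u - z) = 0}.
        x = x - (Fz @ (x - z)) / (Fz @ Fz) * Fz
        Fx_new = F(x)
        beta = (Fx_new @ Fx_new) / (Fx @ Fx)   # Fletcher-Reeves-type parameter
        d = -Fx_new + beta * d
        Fx = Fx_new
    return x

# Example: the monotone nonlinear system F(x) = x + sin(x), solution x = 0.
sol = cg_projection(lambda v: v + np.sin(v), np.full(1000, 2.0))
print(np.linalg.norm(sol))   # ~0
```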

  2. An Investigation of the Overlap Between the Statistical Discrete Gust and the Power Spectral Density Analysis Methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    The results of a NASA investigation of a claimed overlap between two gust response analysis methods, the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method, are presented. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented for several different airplanes at several different flight conditions indicate that such an overlap does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  3. Solving coupled groundwater flow systems using a Jacobian Free Newton Krylov method

    NASA Astrophysics Data System (ADS)

    Mehl, S.

    2012-12-01

    Jacobian-Free Newton-Krylov (JFNK) methods can have several advantages for simulating coupled groundwater flow processes versus conventional methods. Conventional methods are defined here as those based on an iterative coupling (rather than a direct coupling) and/or that use Picard iteration rather than Newton iteration. In an iterative coupling, the systems are solved separately, coupling information is updated and exchanged between the systems, and the systems are re-solved, etc., until convergence is achieved. Trusted simulators, such as Modflow, are based on these conventional methods of coupling and work well in many cases. An advantage of the JFNK method is that it only requires calculation of the residual vector of the system of equations and thus can make use of existing simulators regardless of how the equations are formulated. This opens the possibility of coupling different process models via augmentation of a residual vector by each separate process, which often requires substantially fewer changes to the existing source code than if the processes were directly coupled. However, appropriate perturbation sizes need to be determined for accurate approximations of the Frechet derivative, which is not always straightforward. Furthermore, preconditioning is necessary for reasonable convergence of the linear solution required at each Krylov iteration. Existing preconditioners can be used and applied separately to each process, which maximizes use of existing code and robust preconditioners. In this work, iteratively coupled parent-child local grid refinement models of groundwater flow and groundwater flow models with nonlinear exchanges to streams are used to demonstrate the utility of the JFNK approach for Modflow models. Use of incomplete Cholesky preconditioners with various levels of fill is examined on a suite of nonlinear and linear models to analyze the effect of the preconditioner. Comparisons of convergence and computer simulation time are made between conventional iteratively coupled methods based on Picard iteration and those formulated with JFNK, to gain insights on the types of nonlinearities and system features that make one approach advantageous. Results indicate that nonlinearities associated with stream/aquifer exchanges are more problematic than those resulting from unconfined flow.
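
    The heart of JFNK, needing only a residual function and approximating Jacobian-vector products by finite differences, fits in a few lines with SciPy's matrix-free solvers. A minimal sketch assuming a toy residual in place of an existing simulator, and plain GMRES without the preconditioning that a realistic groundwater model would require.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, x0, tol=1e-8, max_newton=30, eps=1e-7):
    """Jacobian-free Newton-Krylov: solve F(x) = 0 using only residual
    evaluations; J(x) v is approximated by (F(x + h v) - F(x)) / h."""
    x = x0.copy()
    for _ in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        def jv(v):
            h = eps * max(1.0, np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-30)
            return (F(x + h * v) - Fx) / h
        J = LinearOperator((x.size, x.size), matvec=jv)
        dx, info = gmres(J, -Fx)      # inexact Newton step, solved matrix-free
        x = x + dx
    return x

# Toy nonlinear "flow" residual: discrete -Laplacian(h) + h**3 - source = 0.
n = 50
A = np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
residual = lambda h: A @ h + h ** 3 - 1.0
h = jfnk(residual, np.zeros(n))
print(np.linalg.norm(residual(h)))
```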

  4. Forestry sector analysis for developing countries: issues and methods.

    Treesearch

    R.W. Haynes

    1993-01-01

    A satellite meeting of the 10th Forestry World Congress focused on the methods used for forest sector analysis and their applications in both developed and developing countries. The results of that meeting are summarized, and a general approach for forest sector modeling is proposed. The approach includes models derived from the existing...

  5. Processes, Procedures, and Methods to Control Pollution Resulting from Silvicultural Activities.

    ERIC Educational Resources Information Center

    Environmental Protection Agency, Washington, DC. Office of Water Programs.

    This report presents brief documentation of silvicultural practices, both those now in use and those in stages of research and development. A majority of the text is concerned with the specific aspects of silvicultural activities which relate to nonpoint source pollution control methods. Analyzed are existing and near future pollution control…

  6. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods can only deal with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt a simple strategy of transforming multi-location proteins into multiple proteins with single locations, which does not take the correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method that ignores them, indicating that correlations among different subcellular locations really exist and contribute to the improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.

  7. An improved parameter estimation scheme for image modification detection based on DCT coefficient analysis.

    PubMed

    Yu, Liyang; Han, Qi; Niu, Xiamu; Yiu, S M; Fang, Junbin; Zhang, Ye

    2016-02-01

    Most of the existing image modification detection methods based on DCT coefficient analysis model the distribution of DCT coefficients as a mixture of a modified and an unchanged component. To separate the two components, two parameters, the primary quantization step, Q1, and the portion of the modified region, α, have to be estimated, and more accurate estimates of α and Q1 lead to better detection and localization results. Existing methods estimate α and Q1 in a completely blind manner, without considering the characteristics of the mixture model and the constraints to which α should conform. In this paper, we propose a more effective scheme for estimating α and Q1, based on the observations that the curves on the surface of the likelihood function corresponding to the mixture model are largely smooth, and that α can take values only in a discrete set. We conduct extensive experiments to evaluate the proposed method, and the experimental results confirm its efficacy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  8. RAPTR-SV: a hybrid method for the detection of structural variants

    USDA-ARS?s Scientific Manuscript database

    Motivation: Identification of Structural Variants (SV) in sequence data results in a large number of false positive calls using existing software, which overburdens subsequent validation. Results: Simulations using RAPTR-SV and another software package that uses a similar algorithm for SV detection...

  9. The effect of contact angles and capillary dimensions on the burst frequency of super hydrophilic and hydrophilic centrifugal microfluidic platforms, a CFD study.

    PubMed

    Kazemzadeh, Amin; Ganesan, Poo; Ibrahim, Fatimah; He, Shuisheng; Madou, Marc J

    2013-01-01

    This paper employs the volume of fluid (VOF) method to numerically investigate the effect of the width, height, and contact angles on burst frequencies of super hydrophilic and hydrophilic capillary valves in centrifugal microfluidic systems. Existing experimental results in the literature have been used to validate the implementation of the numerical method. The performance of capillary valves in rectangular and circular microfluidic structures on super hydrophilic centrifugal microfluidic platforms is studied. The numerical results are also compared with existing theoretical models, and the differences are discussed. Our experimental and computed results show a minimum burst frequency occurring in square capillaries, a result useful for designing and developing more sophisticated networks of capillary valves. The model also predicts that, in super hydrophilic microfluidics, the fluid leaks consistently from the capillary valve at low pressures, which can disrupt biomedical procedures on centrifugal microfluidic platforms.
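
    For orientation, the theoretical models that such CFD results are compared against balance the centrifugal pressure of the liquid column against the capillary holding pressure of the valve. A back-of-the-envelope sketch under assumptions of my own: a rectangular cross-section, a single effective contact angle, and a simplified Young-Laplace expression; the paper's point is precisely that computed results deviate from such models.

```python
import numpy as np

def burst_rpm(theta_deg, w, h, r_mid, dr, sigma=0.072, rho=1000.0):
    """Rotation speed at which the centrifugal pressure rho*omega^2*r_mid*dr
    exceeds the capillary holding pressure of a rectangular valve, taken here
    as -2*sigma*cos(theta)*(1/w + 1/h); for a hydrophilic channel the relevant
    angle is the effective (advancing) angle at the sudden expansion."""
    dp = -2 * sigma * np.cos(np.radians(theta_deg)) * (1 / w + 1 / h)
    omega = np.sqrt(max(dp, 0.0) / (rho * r_mid * dr))
    return omega * 60 / (2 * np.pi)

# 100 um x 50 um valve, liquid column from 20 mm to 25 mm from the center.
print(burst_rpm(theta_deg=120, w=100e-6, h=50e-6, r_mid=22.5e-3, dr=5e-3))
```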

  11. Improved Filon-type asymptotic methods for highly oscillatory differential equations with multiple time scales

    NASA Astrophysics Data System (ADS)

    Wang, Bin; Wu, Xinyuan

    2014-11-01

    In this paper we consider multi-frequency highly oscillatory second-order differential equations x''(t) + Mx(t) = f(t, x(t), x'(t)), where high-frequency oscillations are generated by the linear part Mx(t), and M is positive semi-definite (not necessarily nonsingular). It is known that Filon-type methods are an effective approach to the numerical solution of highly oscillatory problems. Unfortunately, however, existing Filon-type asymptotic methods fail to apply to these highly oscillatory second-order differential equations when M is singular. We study and propose an efficient improvement of the existing Filon-type asymptotic methods, so that the improved methods can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method which can be performed at lower cost. Accompanying numerical results show the remarkable efficiency.

  12. Development of Advanced Methods of Structural and Trajectory Analysis for Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Ardema, Mark D.

    1996-01-01

    In this report the author describes: (1) development of advanced methods of structural weight estimation, and (2) development of advanced methods of flight path optimization. A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight.

  13. An efficient closed-form solution for acoustic emission source location in three-dimensional structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xibing; Dong, Longjun

    This paper presents an efficient closed-form solution (ECS) for acoustic emission (AE) source location in three-dimensional structures using time difference of arrival (TDOA) measurements from N receivers, N ≥ 6. The nonlinear TDOA location equations are simplified to linear equations. The unique analytical solution for AE sources in an unknown-velocity system is obtained by solving the linear equations. The proposed ECS method successfully solves the problems of location errors resulting from measured deviations of velocity, as well as the existence and multiplicity of solutions induced by the calculation of square roots in existing closed-form methods.
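
    The linearization that makes a closed-form solution possible can be reproduced by expanding the squared-range equations and differencing against a reference receiver: with the source coordinates s, u = v^2, and w = v^2*t0 as unknowns, the system becomes linear, which is consistent with the N ≥ 6 receiver requirement. A minimal sketch of this standard algebraic device, not the paper's exact ECS derivation.

```python
import numpy as np

def locate_ae(receivers, t, ref=0):
    """Closed-form AE source location with unknown wave speed. From
    ||s - r_i||^2 = v^2 (t_i - t0)^2, subtracting the reference equation
    leaves a system linear in (s, u, w), where u = v^2 and w = v^2 * t0."""
    r0, t0r = receivers[ref], t[ref]
    rows, rhs = [], []
    for i in range(len(receivers)):
        if i == ref:
            continue
        ri, ti = receivers[i], t[i]
        rows.append(np.concatenate([-2 * (ri - r0),
                                    [-(ti**2 - t0r**2), 2 * (ti - t0r)]]))
        rhs.append(np.dot(r0, r0) - np.dot(ri, ri))
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    s, u, w = sol[:3], sol[3], sol[4]
    return s, np.sqrt(u), w / u      # source location, velocity, origin time

# Synthetic check: 8 receivers, source at (1, 2, 3), v = 5 km/s, t0 = 0.1 ms.
rng = np.random.default_rng(0)
R = rng.uniform(-10, 10, (8, 3))
src, v, t0 = np.array([1.0, 2.0, 3.0]), 5.0, 1e-4
arrivals = t0 + np.linalg.norm(R - src, axis=1) / v
print(locate_ae(R, arrivals))
```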

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Hao; Zhang, Guifu; Zhao, Kun

    A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (K_DP) and to improve rain estimation. Moreover, the hybrid K_DP estimator and the existing estimators based on LP, least-squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δ_hv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and K_DP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid K_DP estimator over existing methods.
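
    Independently of the LP machinery, K_DP is by definition half the range derivative of the differential phase Φ_DP, so the baseline estimator against which such hybrids are compared is simply a smoothed derivative. A minimal sketch assuming an evenly spaced, unfolded Φ_DP profile; real estimators must also handle the δ_hv contamination that the hybrid method targets.

```python
import numpy as np
from scipy.signal import savgol_filter

def kdp_baseline(phi_dp_deg, gate_km, window=21, polyorder=2):
    """Baseline K_DP (deg/km): half the range derivative of Phi_DP,
    estimated with a Savitzky-Golay smoothing differentiator."""
    dphi = savgol_filter(phi_dp_deg, window, polyorder, deriv=1, delta=gate_km)
    return 0.5 * dphi

# Synthetic profile: 0.25 km gates, noisy Phi_DP ramp through a rain cell.
r = np.arange(0, 50, 0.25)
phi = 2.0 * np.clip(r - 10, 0, 20) + np.random.normal(0, 2.0, r.size)
print(kdp_baseline(phi, 0.25)[80])   # ~1 deg/km inside the ramp
```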

  15. Analytical Fuselage and Wing Weight Estimation of Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Chambers, Mark C.; Ardema, Mark D.; Patron, Anthony P.; Hahn, Andrew S.; Miura, Hirokazu; Moore, Mark D.

    1996-01-01

    A method of estimating the load-bearing fuselage weight and wing weight of transport aircraft based on fundamental structural principles has been developed. This method of weight estimation represents a compromise between the rapid assessment of component weight using empirical methods based on actual weights of existing aircraft, and detailed, but time-consuming, analysis using the finite element method. The method was applied to eight existing subsonic transports for validation and correlation. The resulting computer program, PDCYL, has been integrated into the weights-calculating module of the AirCraft SYNThesis (ACSYNT) computer program. ACSYNT has traditionally used only empirical weight estimation methods; PDCYL adds to ACSYNT a rapid, accurate means of assessing the fuselage and wing weights of unconventional aircraft. PDCYL also allows flexibility in the choice of structural concept, as well as a direct means of determining the impact of advanced materials on structural weight. Using statistical analysis techniques, relations between the load-bearing fuselage and wing weights calculated by PDCYL and the corresponding actual weights were determined.

  16. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of the neural networks. We apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We then provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies of Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).

  17. Visualization and characterization of engineered nanoparticles in complex environmental and food matrices using atmospheric scanning electron microscopy.

    PubMed

    Luo, P; Morrison, I; Dudkiewicz, A; Tiede, K; Boyes, E; O'Toole, P; Park, S; Boxall, A B

    2013-04-01

    Imaging and characterization of engineered nanoparticles (ENPs) in water, soils, sediment and food matrices is very important for research into the risks of ENPs to consumers and the environment. However, these analyses pose a significant challenge, as most existing techniques require some form of sample manipulation prior to imaging and characterization, which can result in changes to the ENPs in a sample and in the introduction of analytical artefacts. This study therefore explored the application of a newly designed instrument, the atmospheric scanning electron microscope (ASEM), which allows the direct characterization of ENPs in liquid matrices and therefore overcomes some of the limitations associated with existing imaging methods. ASEM was used to characterize the size distribution of a range of ENPs in a selection of environmental and food matrices, including supernatant of natural sediment, test medium used in ecotoxicology studies, bovine serum albumin and tomato soup, under atmospheric conditions. The imaging results were compared to results obtained using conventional imaging by transmission electron microscopy (TEM) and SEM, as well as to size distribution data derived from nanoparticle tracking analysis (NTA). ASEM was found to be a complementary technique to existing methods that is able to visualize ENPs in complex liquid matrices and to provide ENP size information without extensive sample preparation. ASEM can detect ENPs in liquids down to 30 nm and at levels down to 1 mg L^-1 (9×10^8 particles mL^-1, 50 nm Au ENPs). The results indicate that ASEM is a highly complementary method to existing approaches for analyzing ENPs in complex media and that its use will allow researchers to study ENP behavior in situ, something that is currently extremely challenging to do. © 2013 The Authors Journal of Microscopy © 2013 Royal Microscopical Society.

  18. Image steganalysis using Artificial Bee Colony algorithm

    NASA Astrophysics Data System (ADS)

    Sajedi, Hedieh

    2017-09-01

    Steganography is the science of secure communication where the presence of the communication cannot be detected, while steganalysis is the art of discovering the existence of the secret communication. Processing a huge amount of information usually takes extensive execution time and computational resources. As a result, a preprocessing phase is needed to moderate the execution time and the computational load. In this paper, we propose a new feature-based blind steganalysis method for distinguishing stego images from cover (clean) images in JPEG format. In this regard, we present a feature selection technique based on an improved Artificial Bee Colony (ABC) algorithm; the ABC algorithm is inspired by honeybees' social behaviour in their search for good food sources. The proposed method is wrapper-based, so classifier performance guides both the selection and the dimension of the selected feature vector. The experiments are performed using two large datasets of JPEG images. Experimental results demonstrate the effectiveness of the proposed steganalysis technique compared to other existing techniques.

  19. Reevaluation of air surveillance station siting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abbott, K.; Jannik, T.

    2016-07-06

    DOE Technical Standard HDBK-1216-2015 (DOE 2015) recommends evaluating air-monitoring station placement using the analytical method developed by Waite. The technique utilizes wind rose and population distribution data in order to determine a weighting factor for each directional sector surrounding a nuclear facility. Based on the available resources (number of stations) and a scaling factor, this weighting factor is used to determine the number of stations recommended to be placed in each sector considered. An assessment utilizing this method was performed in 2003 to evaluate the effectiveness of the existing SRS air-monitoring program. The resulting recommended distribution of air-monitoring stations was then compared to that of the existing site perimeter surveillance program. The assessment demonstrated that the distribution of air-monitoring stations at the time generally agreed with the results obtained using the Waite method; however, at the time new stations were established in Barnwell and in Williston in order to meet requirements of DOE guidance document EH-0173T.
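
    One common reading of the Waite weighting scheme is that each directional sector's weight combines the frequency with which wind blows toward that sector with the population residing there, and stations are allocated in proportion. A schematic sketch under that assumption; the exact weighting formula and scaling factor in HDBK-1216-2015 may differ.

```python
import numpy as np

def allocate_stations(wind_freq, population, n_stations):
    """Allocate air-monitoring stations to the 16 compass sectors in
    proportion to a wind-rose x population weighting factor."""
    w = np.asarray(wind_freq) * np.asarray(population)
    w = w / w.sum()
    alloc = np.floor(w * n_stations).astype(int)
    # Hand out any remaining stations to the largest fractional weights.
    for i in np.argsort(w * n_stations - alloc)[::-1][: n_stations - alloc.sum()]:
        alloc[i] += 1
    return alloc

wind = np.random.dirichlet(np.ones(16))   # fraction of time wind blows toward each sector
pop = np.random.randint(100, 5000, 16)    # population residing in each sector
print(allocate_stations(wind, pop, n_stations=12))
```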

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Santos-Villalobos, Hector J; Barstow, Del R; Karakaya, Mahmut

    Iris recognition has been proven to be an accurate and reliable biometric. However, the recognition of non-ideal iris images, such as off-angle images, is still an unsolved problem. We propose a new biometric-targeted eye model and a method to reconstruct the off-axis eye to its frontal view, allowing for recognition using existing methods and algorithms. This allows existing enterprise-level algorithms and approaches to remain largely unmodified, with our work serving as a pre-processor to improve performance. In addition, we describe the 'Limbus effect' and its importance for an accurate segmentation of off-axis irides. Our method uses an anatomically accurate human eye model and ray-tracing techniques to compute a transformation function, which reconstructs the iris to its frontal, non-refracted state. Then, the same eye model is used to render a frontal view of the reconstructed iris. The proposed method is fully described, and results from synthetic data are shown to establish an upper limit on performance improvement and to establish the importance of the proposed approach over traditional linear elliptical unwrapping methods. Our results with synthetic data demonstrate the ability to perform accurate iris recognition with an image taken as much as 70 degrees off-axis.

  1. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches to main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations: they have a high potential to miss dominant patents from the identified main paths, and the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified using a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than those identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of dominantly important patents.

  3. A robust method using propensity score stratification for correcting verification bias for binary tests

    PubMed Central

    He, Hua; McDermott, Michael P.

    2012-01-01

    Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
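
    A compact illustration of the estimator's logic: model the verification probability separately for test-positives and test-negatives, cut the verified subjects into propensity strata, and reweight stratum-specific disease rates back to the full sample before computing sensitivity and specificity. The logistic propensity model and quintile stratification follow the abstract's description; the variable conventions and simulated data are mine.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def bias_corrected_accuracy(test, verified, disease, X, n_strata=5):
    """Verification-bias-corrected sensitivity/specificity via propensity-score
    stratification, with propensity = P(verified | test result, covariates)
    fitted separately within each test result. `disease` is only read where
    verified == 1, so it may hold any placeholder for unverified subjects."""
    p = np.empty(len(test), dtype=float)
    for t in (0, 1):
        m = test == t
        p[m] = LogisticRegression().fit(X[m], verified[m]).predict_proba(X[m])[:, 1]
    edges = np.quantile(p, np.linspace(0, 1, n_strata + 1))
    strata = np.clip(np.searchsorted(edges, p, side='right') - 1, 0, n_strata - 1)
    num_se = den_se = num_sp = den_sp = 0.0
    for s in range(n_strata):
        for t in (0, 1):
            cell = (strata == s) & (test == t)
            ver = cell & (verified == 1)
            if ver.sum() == 0:
                continue
            pd = disease[ver].mean()   # P(D=1 | stratum, test), verified only
            n = cell.sum()             # weight by the whole cell, not just verified
            num_se += n * pd * t;             den_se += n * pd
            num_sp += n * (1 - pd) * (1 - t); den_sp += n * (1 - pd)
    return num_se / den_se, num_sp / den_sp   # sensitivity, specificity

# Simulated example: verification depends on the test result and a covariate.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 2))
disease = rng.binomial(1, 0.3, n)
test = np.where(rng.random(n) < 0.8, disease, 1 - disease)   # 80% accurate test
verified = rng.binomial(1, np.clip(0.2 + 0.6 * test + 0.05 * X[:, 0], 0, 1))
print(bias_corrected_accuracy(test, verified, disease, X))   # both near 0.8
```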

  4. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposes improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Weighted cosine similarity measures of SNSs are then introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures is proposed to solve medical diagnosis problems with simplified neutrosophic information. We compared the improved cosine similarity measures with existing cosine similarity measures of SNSs on numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis is found from the cosine similarity measures between the symptoms and the considered diseases, each represented by SNSs. The method was applied to two medical diagnosis problems to show its applicability and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in some cases. In the two medical diagnosis problems, the diagnoses obtained using the various similarity measures of SNSs were identical, demonstrating the effectiveness and rationality of the proposed diagnosis method. The improved cosine measures are thus well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
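
    For the single-valued case, one improved measure takes the cosine of a scaled sum of the absolute differences of the truth, indeterminacy, and falsity degrees, so that identical elements give similarity 1 and maximally different ones give 0. This sketch reflects my reading of the construction; the exact scaling (here π/6 applied to the sum of the three differences) should be checked against the paper.

```python
import numpy as np

def improved_cosine_similarity(A, B, weights=None):
    """Improved cosine similarity between two single-valued neutrosophic sets,
    each an (n, 3) array of (truth, indeterminacy, falsity) degrees in [0, 1]:
    weighted mean of cos(pi * (|dT| + |dI| + |dF|) / 6) over the n elements."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    w = np.full(len(A), 1 / len(A)) if weights is None else np.asarray(weights)
    per_element = np.cos(np.pi * np.abs(A - B).sum(axis=1) / 6.0)
    return float(w @ per_element)

# Symptoms of a patient vs. the profile of a considered disease.
patient = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.1)]
disease = [(0.7, 0.2, 0.2), (0.6, 0.2, 0.2)]
print(improved_cosine_similarity(patient, disease))   # close to 1 -> likely match
```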

  5. Optimal chroma-like channel design for passive color image splicing detection

    NASA Astrophysics Data System (ADS)

    Zhao, Xudong; Li, Shenghong; Wang, Shilin; Li, Jianhua; Yang, Kongjin

    2012-12-01

    Image splicing is one of the most common image forgeries in daily life, and owing to powerful image manipulation tools it is becoming ever easier to perform. Several methods have been proposed for image splicing detection, and all of them operate on certain existing color channels. However, splicing artifacts vary across color channels, and the selection of the color model is important for image splicing detection. In this article, instead of choosing an existing color model, we propose a color channel design method that finds the most discriminative channel, referred to as the optimal chroma-like channel, for a given feature extraction method. Experimental results show that both spatial and frequency features extracted from the designed channel achieve a higher detection rate than those extracted from traditional color channels.

  6. Terminal Sliding Mode-Based Consensus Tracking Control for Networked Uncertain Mechanical Systems on Digraphs.

    PubMed

    Chen, Gang; Song, Yongduan; Guan, Yanfeng

    2018-03-01

    This brief investigates the finite-time consensus tracking control problem for networked uncertain mechanical systems on digraphs. A new terminal sliding-mode-based cooperative control scheme is developed to guarantee that the tracking errors converge to an arbitrarily small bound around zero in finite time. All the networked systems can have different dynamics and all the dynamics are unknown. A neural network is used at each node to approximate the local unknown dynamics. The control schemes are implemented in a fully distributed manner. The proposed control method eliminates some limitations in the existing terminal sliding-mode-based consensus control methods and extends the existing analysis methods to the case of directed graphs. Simulation results on networked robot manipulators are provided to show the effectiveness of the proposed control algorithms.

  7. Designing a global monitoring system for pilot introduction of a new contraceptive technology, subcutaneous DMPA (DMPA-SC).

    PubMed

    Stout, Anna; Wood, Siri; Namagembe, Allen; Kaboré, Alain; Siddo, Daouda; Ndione, Ida

    2018-06-01

    In collaboration with ministries of health, PATH and key partners launched the first pilot introductions of subcutaneous depot medroxyprogesterone acetate (DMPA-SC, brand name Sayana® Press) in Burkina Faso, Niger, Senegal, and Uganda from July 2014 through June 2016. While each country implemented a unique introduction strategy, all agreed to track a set of uniform indicators to chart the effect of introducing this new method across settings. Existing national health information systems (HIS) were unable to track new methods or delivery channels introduced for a pilot and thus were not a feasible source of project data. We successfully monitored the four-country pilot introductions by implementing a four-phase approach: 1) developing and defining global indicators, 2) integrating indicators into existing country data collection tools, 3) facilitating consistent reporting and data management, and 4) analyzing and interpreting data and sharing results. Project partners leveraged existing family planning registers to the extent possible and introduced new or modified data collection and reporting tools to generate project-specific data where necessary. We routinely shared monitoring results with global and national stakeholders, informing decisions about future investments in the product and the scale-up of DMPA-SC nationwide. Our process and lessons learned may provide insights for countries planning to introduce DMPA-SC or other new contraceptive methods in settings where stakeholder expectations for measurable results for decision-making are high. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.

    PubMed

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2018-05-01

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  9. Bias Characterization in Probabilistic Genotype Data and Improved Signal Detection with Multiple Imputation

    PubMed Central

    Palmer, Cameron; Pe’er, Itsik

    2016-01-01

    Missing data are an unavoidable component of modern statistical genetics. Different array or sequencing technologies cover different single nucleotide polymorphisms (SNPs), leading to a complicated mosaic pattern of missingness where both individual genotypes and entire SNPs are sporadically absent. Such missing data patterns cannot be ignored without introducing bias, yet cannot be inferred exclusively from nonmissing data. In genome-wide association studies, the accepted solution to missingness is to impute missing data using external reference haplotypes. The resulting probabilistic genotypes may be analyzed in the place of genotype calls. A general-purpose paradigm, called Multiple Imputation (MI), is known to model uncertainty in many contexts, yet it is not widely used in association studies. Here, we undertake a systematic evaluation of existing imputed data analysis methods and MI. We characterize biases related to uncertainty in association studies, and find that bias is introduced both at the imputation level, when imputation algorithms generate inconsistent genotype probabilities, and at the association level, when analysis methods inadequately model genotype uncertainty. We find that MI performs at least as well as existing methods or in some cases much better, and provides a straightforward paradigm for adapting existing genotype association methods to uncertain data. PMID:27310603
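
    The MI paradigm the authors advocate is mechanical: draw M completed genotype datasets from the imputation probabilities, run the association test on each, and pool with Rubin's rules. A minimal sketch for a single SNP with a linear-regression test; the genotype-probability matrix and phenotype below are simulated.

```python
import numpy as np

def rubins_rules(estimates, variances):
    """Pool M imputation-specific estimates: total variance
    T = W + (1 + 1/M) * B (within- plus between-imputation variance)."""
    q_bar = np.mean(estimates)
    w = np.mean(variances)
    b = np.var(estimates, ddof=1)
    return q_bar, w + (1 + 1 / len(estimates)) * b

def mi_association(geno_probs, phenotype, M=20, seed=0):
    """geno_probs: (n, 3) per-subject probabilities of genotypes 0/1/2."""
    rng = np.random.default_rng(seed)
    n = len(phenotype)
    betas, var_betas = [], []
    for _ in range(M):
        # One completed dataset: sample a hard genotype call per subject.
        g = np.array([rng.choice(3, p=p) for p in geno_probs], dtype=float)
        X = np.column_stack([np.ones(n), g])
        beta = np.linalg.lstsq(X, phenotype, rcond=None)[0]
        resid = phenotype - X @ beta
        sigma2 = resid @ resid / (n - 2)
        var_betas.append(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
        betas.append(beta[1])
    return rubins_rules(np.array(betas), np.array(var_betas))

# Simulated SNP with uncertain calls and a weak additive effect.
rng = np.random.default_rng(1)
probs = rng.dirichlet(np.ones(3), size=500)
true_g = np.array([rng.choice(3, p=p) for p in probs])
y = 0.3 * true_g + rng.normal(size=500)
print(mi_association(probs, y))   # (pooled beta, total variance)
```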

  10. A duality approach for solving bounded linear programming problems with fuzzy variables based on ranking functions and its application in bounded transportation problems

    NASA Astrophysics Data System (ADS)

    Ebrahimnejad, Ali

    2015-08-01

    There are several methods in the literature for solving fuzzy variable linear programming problems (fuzzy linear programs in which the right-hand-side vectors and decision variables are represented by trapezoidal fuzzy numbers). In this paper, the shortcomings of some existing methods are pointed out, and to overcome them a new method based on the bounded dual simplex method is proposed for determining the fuzzy optimal solution of fuzzy variable linear programming problems in which some or all variables are restricted to lie within lower and upper bounds. To illustrate the proposed method, an application example is solved and the obtained results are given. The advantages of the proposed method over existing methods are discussed. An application of this algorithm to solving bounded transportation problems with fuzzy supplies and demands is also dealt with. The proposed method is easy to understand and to apply for determining the fuzzy optimal solution of bounded fuzzy variable linear programming problems occurring in real-life situations.

  11. Patient, staff and physician satisfaction: a new model, instrument and their implications.

    PubMed

    York, Anne S; McCarthy, Kim A

    2011-01-01

    Customer satisfaction's importance is well-documented in the marketing literature and is rapidly gaining wide acceptance in the healthcare industry. The purpose of this paper is to introduce a new customer-satisfaction measuring method - Reichheld's ultimate question - and compare it with traditional techniques using data gathered from four healthcare clinics. A new survey method, called the ultimate question, was used to collect patient satisfaction data. It was subsequently compared with the data collected via an existing method. Findings suggest that the ultimate question provides similar ratings to existing models at lower costs. A relatively small sample size may affect the generalizability of the results; it is also possible that potential spill-over effects exist owing to two patient satisfaction surveys administered at the same time. This new ultimate question method greatly improves the process and ease with which hospital or clinic administrators are able to collect patient (as well as staff and physician) satisfaction data in healthcare settings. Also, the feedback gained from this method is actionable and can be used to make strategic improvements that will impact business and ultimately increase profitability. The paper's real value is pinpointing specific quality improvement areas based not just on patient ratings but also physician and staff satisfaction, which often underlie patients' clinical experiences.
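
    Reichheld's 'ultimate question' asks, on a 0-10 scale, how likely the respondent is to recommend the provider; the resulting score is the share of promoters (9-10) minus the share of detractors (0-6). A small sketch of that standard computation; the clinic data are made up.

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 'would you recommend us?' ratings:
    % promoters (9-10) minus % detractors (0-6); passives (7-8) are ignored."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)

clinic_a = [10, 9, 9, 8, 7, 6, 10, 9]   # hypothetical patient responses
clinic_b = [8, 7, 6, 5, 9, 10, 4, 7]
print(net_promoter_score(clinic_a))      # 50.0
print(net_promoter_score(clinic_b))      # -12.5
```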

  12. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation.

    PubMed

    Fan, Bingfei; Li, Qingguo; Liu, Tao

    2017-12-28

    With advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, more lightweight, smaller, and cheaper, which in turn boosts their application in human movement analysis. However, challenges still exist in sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbance affects attitude and heading estimation for a magnetic and inertial sensor. First, we review the four major components dealing with magnetic disturbance, namely decoupling attitude estimation from the magnetometer reading, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyze the features of the existing methods for each component. Second, to understand each component's role in magnetic disturbance rejection, four representative sensor fusion methods were implemented: a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter, and an extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based on the testing results, the strengths and weaknesses of the existing sensor fusion methods are readily examined, and suggestions are presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods.

  13. On existence of the σ(600): its physical implications and related problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ishida, Shin

    1998-05-29

    We make a re-analysis of the I = 0 ππ scattering phase shift δ₀⁰ through a new method of S-matrix parametrization (IA; interfering amplitude method), and show a result strongly suggesting the existence of the σ particle, the long-sought chiral partner of the π meson. Furthermore, through phenomenological analyses of typical production processes of the 2π system, the pp central collision and the J/ψ→ωππ decay, by applying an intuitive formula given as a sum of Breit-Wigner amplitudes (VMW; variant mass and width method), further evidence for the existence of the σ is given. The validity of the methods used in the above analyses is investigated, using a simple field-theoretical model, from the general viewpoint of unitarity and the applicability of the final-state-interaction (FSI) theorem, especially in relation to the "universality" argument. It is shown that the IA and VMW are obtained as the physical-state representations of scattering and production amplitudes, respectively. The VMW is shown to be an effective method for obtaining resonance properties from production processes, which generally have unknown strong phases. The conventional analyses based on "universality" seem to be powerless for this purpose.

  14. Novel Ultrasound Joint Selection Methods Using a Reduced Joint Number Demonstrate Inflammatory Improvement when Compared to Existing Methods and Disease Activity Score at 28 Joints.

    PubMed

    Tan, York Kiat; Allen, John C; Lye, Weng Kit; Conaghan, Philip G; D'Agostino, Maria Antonietta; Chew, Li-Ching; Thumboo, Julian

    2016-01-01

    A pilot study testing novel ultrasound (US) joint-selection methods in rheumatoid arthritis. The responsiveness of the novel methods [individualized US (IUS) and individualized composite US (ICUS)] was compared with that of existing US methods and the Disease Activity Score at 28 joints (DAS28) for 12 patients followed for 3 months. IUS selected up to the 7 or 12 most ultrasonographically inflamed joints, while ICUS additionally incorporated clinically symptomatic joints. The standardized response means for the existing, IUS, and ICUS methods were -0.39, -1.08, and -1.11, respectively, for 7 joints; -0.49, -1.00, and -1.16, respectively, for 12 joints; and -0.94 for DAS28. The novel methods effectively demonstrate inflammatory improvement when compared with existing methods and DAS28.

  15. Cue-based assertion classification for Swedish clinical text – developing a lexicon for pyConTextSwe

    PubMed Central

    Velupillai, Sumithra; Skeppstedt, Maria; Kvist, Maria; Mowery, Danielle; Chapman, Brian E.; Dalianis, Hercules; Chapman, Wendy W.

    2014-01-01

    Objective The ability of a cue-based system to accurately assert whether a disorder is affirmed, negated, or uncertain is dependent, in part, on its cue lexicon. In this paper, we continue our study of porting an assertion system (pyConTextNLP) from English to Swedish (pyConTextSwe) by creating an optimized assertion lexicon for clinical Swedish. Methods and material We integrated cues from four external lexicons, along with generated inflections and combinations. We used subsets of a clinical corpus in Swedish. We applied four assertion classes (definite existence, probable existence, probable negated existence and definite negated existence) and two binary classes (existence yes/no and uncertainty yes/no) to pyConTextSwe. We compared pyConTextSwe’s performance with and without the added cues on a development set, and improved the lexicon further after an error analysis. On a separate evaluation set, we calculated the system’s final performance. Results Following integration steps, we added 454 cues to pyConTextSwe. The optimized lexicon developed after an error analysis resulted in statistically significant improvements on the development set (83% F-score, overall). The system’s final F-scores on an evaluation set were 81% (overall). For the individual assertion classes, F-score results were 88% (definite existence), 81% (probable existence), 55% (probable negated existence), and 63% (definite negated existence). For the binary classifications existence yes/no and uncertainty yes/no, final system performance was 97%/87% and 78%/86% F-score, respectively. Conclusions We have successfully ported pyConTextNLP to Swedish (pyConTextSwe). We have created an extensive and useful assertion lexicon for Swedish clinical text, which could form a valuable resource for similar studies, and which is publicly available. PMID:24556644

  16. Maximum margin multiple instance clustering with applications to image and text clustering.

    PubMed

    Zhang, Dan; Wang, Fei; Si, Luo; Li, Tao

    2011-05-01

    In multiple instance learning problems, patterns are given as bags, each consisting of some instances. Most existing research in the area focuses on multiple instance classification and multiple instance regression, while very limited work has been conducted on multiple instance clustering (MIC). This paper formulates a novel framework, maximum margin multiple instance clustering (M(3)IC), for MIC. However, it is impractical to solve the optimization problem of M(3)IC directly. Therefore, M(3)IC is relaxed in this paper to enable an efficient optimization solution that combines the constrained concave-convex procedure with the cutting plane method. Furthermore, this paper presents some important properties of the proposed method and discusses its relationship with other related methods. An extensive set of empirical results is presented to demonstrate the advantages of the proposed method over existing research in both effectiveness and efficiency.

  17. Analysis and optimization of cross-immunity epidemic model on complex networks

    NASA Astrophysics Data System (ADS)

    Chen, Chao; Zhang, Hao; Wu, Yin-Hua; Feng, Wei-Qiang; Zhang, Jian

    2015-09-01

    There are various infectious diseases in the real world, and these diseases often spread over a population network and compete for a limited pool of hosts. Cross-immunity is an important disease-competing pattern, which has attracted the attention of many researchers. In this paper, we establish an important result for two cross-immunity epidemics on a network. When the infectious ability of the second epidemic takes a fixed value, the infectious ability of the first epidemic has an optimal value which minimizes the sum of the infection sizes of the two epidemics. We also propose a simple mathematical method for analyzing the infection size of the second epidemic using the cavity method. The proposed method and conclusion are verified by simulation results. Minor inaccuracies of the existing mathematical methods for the infection size of the second epidemic are also found and discussed in experiments; these had not been noticed in existing research.

  18. Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.

    PubMed

    Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan

    2014-09-22

    A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM) approach, the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with conventional methods, the SM algorithm simplifies the two-dimensional (2-D) Fourier transforms (FTs) of four N×N arrays into a 1.25-D FT of one N×N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts keep the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
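
    The core of the SM step is straightforward to sketch: each recorded hologram is multiplied by a distinct linear phase ramp (a tilted plane wave) so that its spectrum lands in its own region of a single Fourier plane, and the products are summed before one FFT. The following minimal numpy sketch illustrates the idea only; the array size, carrier frequencies, and random stand-in holograms are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def spatially_multiplex(holograms, carriers):
        """Combine several off-axis holograms into one complex function by
        multiplying each with a tilted plane wave (linear phase carrier) and
        summing, so each hologram's spectrum occupies a distinct region of a
        single Fourier plane."""
        N = holograms[0].shape[0]
        y, x = np.mgrid[0:N, 0:N]
        sm = np.zeros((N, N), dtype=complex)
        for h, (fx, fy) in zip(holograms, carriers):
            ramp = np.exp(2j * np.pi * (fx * x + fy * y) / N)
            sm += h * ramp
        return sm

    # One FFT of the multiplexed function replaces four separate FFTs; each
    # hologram is recovered by windowing its shifted +1-order spectrum.
    rng = np.random.default_rng(0)
    holo = [rng.random((256, 256)) for _ in range(4)]        # stand-in holograms
    carriers = [(64, 64), (-64, 64), (64, -64), (-64, -64)]  # quadrant offsets
    spectrum = np.fft.fftshift(np.fft.fft2(spatially_multiplex(holo, carriers)))
    ```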

  19. Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.

    PubMed

    Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana

    2017-07-01

    Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was demonstrated via comparative analysis with existing methods on a publicly available database, DRIVE.
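
    A minimal sketch of the combination strategy, using scikit-image's Gabor filter and Otsu thresholding. The frequency, orientation count, and the assumption that vessels appear dark in the green channel are illustrative choices, and the paper's dedicated vessel-enhancement step is omitted for brevity.

    ```python
    import numpy as np
    from skimage.filters import gabor, threshold_otsu

    def extract_vessels(green, frequency=0.2, n_orientations=8):
        """Max Gabor magnitude response over several orientations, then
        automatic (Otsu) thresholding of both the raw green channel and the
        Gabor feature image; the two binary maps are combined with a
        logical OR."""
        gabor_feat = np.zeros_like(green, dtype=float)
        for theta in np.linspace(0, np.pi, n_orientations, endpoint=False):
            real, imag = gabor(green, frequency=frequency, theta=theta)
            gabor_feat = np.maximum(gabor_feat, np.hypot(real, imag))
        bin_green = green < threshold_otsu(green)   # vessels assumed dark in green channel
        bin_gabor = gabor_feat > threshold_otsu(gabor_feat)
        return bin_green | bin_gabor

    # Example usage: green = color_retina[..., 1] for an RGB retinal image array.
    ```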

  20. Evolutionary branching under multi-dimensional evolutionary constraints.

    PubMed

    Ito, Hiroshi; Sasaki, Akira

    2016-10-21

    The fitness of an existing phenotype and of a potential mutant should generally depend on the frequencies of other existing phenotypes. Adaptive evolution driven by such frequency-dependent fitness functions can be analyzed effectively using adaptive dynamics theory, assuming rare mutation and asexual reproduction. When possible mutations are restricted to certain directions due to developmental, physiological, or physical constraints, the resulting adaptive evolution may be restricted to subspaces (constraint surfaces) with fewer dimensionalities than the original trait spaces. To analyze such dynamics along constraint surfaces efficiently, we develop a Lagrange multiplier method in the framework of adaptive dynamics theory. On constraint surfaces of arbitrary dimensionalities described with equality constraints, our method efficiently finds local evolutionarily stable strategies, convergence stable points, and evolutionary branching points. We also derive the conditions for the existence of evolutionary branching points on constraint surfaces when the shapes of the surfaces can be chosen freely.
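
    For orientation, a generic statement of the Lagrange condition for a singular point on a constraint surface follows; this is the textbook form of such a condition under a single equality constraint, not necessarily the paper's exact formulation.

    ```latex
    % Generic Lagrange condition for an evolutionarily singular point x* on a
    % constraint surface g(x) = 0 (one equality constraint, for simplicity);
    % s(y; x) denotes the invasion fitness of a rare mutant y in a resident x.
    \exists \lambda \in \mathbb{R}:\quad
      \left. \nabla_{y}\, s(y; x^{*}) \right|_{y = x^{*}}
        = \lambda \, \nabla g(x^{*}),
    \qquad g(x^{*}) = 0 .
    ```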

  1. Strengthening of Existing Bridge Structures for Shear and Bending with Carbon Textile-Reinforced Mortar.

    PubMed

    Herbrand, Martin; Adam, Viviane; Classen, Martin; Kueres, Dominik; Hegger, Josef

    2017-09-19

    Increasing traffic loads and changes in code provisions lead to deficits in shear and flexural capacity of many existing highway bridges. Therefore, a large number of structures are expected to require refurbishment and strengthening in the future. This projection is based on the current condition of many older road bridges. Different strengthening methods for bridges exist to extend their service life, all having specific advantages and disadvantages. By applying a thin layer of carbon textile-reinforced mortar (CTRM) to bridge deck slabs and the webs of pre-stressed concrete bridges, the fatigue and ultimate strength of these members can be increased significantly. The CTRM layer is a combination of a corrosion resistant carbon fiber reinforced polymer (CFRP) fabric and an efficient mortar. In this paper, the strengthening method and the experimental results obtained at RWTH Aachen University are presented.

  2. Strengthening of Existing Bridge Structures for Shear and Bending with Carbon Textile-Reinforced Mortar

    PubMed Central

    Herbrand, Martin; Classen, Martin; Kueres, Dominik; Hegger, Josef

    2017-01-01

    Increasing traffic loads and changes in code provisions lead to deficits in shear and flexural capacity of many existing highway bridges. Therefore, a large number of structures are expected to require refurbishment and strengthening in the future. This projection is based on the current condition of many older road bridges. Different strengthening methods for bridges exist to extend their service life, all having specific advantages and disadvantages. By applying a thin layer of carbon textile-reinforced mortar (CTRM) to bridge deck slabs and the webs of pre-stressed concrete bridges, the fatigue and ultimate strength of these members can be increased significantly. The CTRM layer is a combination of a corrosion resistant carbon fiber reinforced polymer (CFRP) fabric and an efficient mortar. In this paper, the strengthening method and the experimental results obtained at RWTH Aachen University are presented. PMID:28925962

  3. Exploring the notion of space coupling propulsion

    NASA Technical Reports Server (NTRS)

    Millis, Marc G.

    1990-01-01

    All existing methods of space propulsion are based on expelling a reaction mass (propellant) to induce motion. Alternatively, 'space coupling propulsion' refers to speculations about reacting with space-time itself to generate propulsive forces. Conceivably, the resulting increases in payload, range, and velocity would constitute a breakthrough in space propulsion. Such speculations are still considered science fiction for a number of reasons: (1) the concept appears to violate conservation of momentum; (2) no reactive media appear to exist in space; (3) no 'Grand Unified Theories' exist to link gravity, an acceleration field, to other phenomena of nature such as electrodynamics. The rationale behind these objections is the focus of interest. Various methods to either satisfy or explore these issues are presented along with secondary considerations. It is found that it may be useful to consider alternative conventions of science to further explore speculations of space coupling propulsion.

  4. Existence of the sugar-bisulfite adducts and its inhibiting effect on degradation of monosaccharide in acid system.

    PubMed

    Shi, Yan

    2014-02-01

    Degradation of fermentable monosaccharides is one of the primary concerns in acid prehydrolysis of lignocellulosic biomass. Recently, in our research on degradation of pure monosaccharides in aqueous SO₂ solution by gas chromatography (GC) analysis, we found that the detected yield was not the actual yield of each monosaccharide, due to the existence of sugar-bisulfite adducts; we therefore developed a new method that allows accurate determination of the recovery yield of each monosaccharide in aqueous SO₂ solution by GC analysis. Using this method, the degradation of each monosaccharide in aqueous SO₂ was investigated, and results showed that the sugar-bisulfite adducts have different inhibiting effects on the degradation of each monosaccharide in aqueous SO₂ because of their different stabilities. In addition, NMR testing also demonstrated the possible existence of a reaction between the conjugate base HSO₃(-) and the aldehyde group of sugars in acid systems.

  5. Comparative Assessment of Models and Methods To Calculate Grid Electricity Emissions.

    PubMed

    Ryan, Nicole A; Johnson, Jeremiah X; Keoleian, Gregory A

    2016-09-06

    Due to the complexity of power systems, tracking emissions attributable to a specific electrical load is a daunting challenge but essential for many environmental impact studies. Currently, no consensus exists on appropriate methods for quantifying emissions from particular electricity loads. This paper reviews a wide range of the existing methods, detailing their functionality, tractability, and appropriate use. We identified and reviewed 32 methods and models and classified them into two distinct categories: empirical data and relationship models, and power system optimization models. To illustrate the impact of method selection, we calculated the CO2 combustion emissions factors associated with electric-vehicle charging using 10 methods at nine charging station locations around the United States. Across the methods, we found up to a 68% difference from the mean CO2 emissions factor for a given charging site among both marginal and average emissions factors, and up to a 63% difference from the average across average emissions factors. Our results underscore the importance of method selection and the need for a consensus on approaches appropriate for particular loads and research questions, in order to achieve results that are more consistent across studies and allow for soundly supported policy decisions. The paper addresses this issue by offering a set of recommendations for determining an appropriate model type on the basis of load characteristics and study objectives.
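
    To make the average/marginal distinction the paper turns on concrete, here is a minimal numpy sketch contrasting an average emissions factor (total emissions over total generation) with a simple marginal estimate obtained by regressing hour-to-hour changes in emissions on changes in generation. The hourly figures are invented for illustration and do not come from the study.

    ```python
    import numpy as np

    # Hourly system data (hypothetical): generation (MWh) and CO2 (kg).
    gen = np.array([900., 950., 1020., 1100., 1050., 980.])
    co2 = np.array([540e3, 575e3, 630e3, 700e3, 655e3, 595e3])

    # Average emissions factor: total emissions over total generation.
    aef = co2.sum() / gen.sum()                      # kg CO2 per MWh

    # Marginal emissions factor: slope of changes in emissions against
    # changes in generation (a common empirical-relationship approach).
    mef = np.polyfit(np.diff(gen), np.diff(co2), 1)[0]

    print(f"AEF = {aef:.1f} kg/MWh, MEF = {mef:.1f} kg/MWh")
    ```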

  6. Rating curve uncertainty: A comparison of estimation methods

    USGS Publications Warehouse

    Mason, Jr., Robert R.; Kiang, Julie E.; Cohn, Timothy A.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The USGS is engaged in both internal development and collaborative efforts to evaluate existing methods for characterizing the uncertainty of streamflow measurements (gaugings), stage-discharge relations (ratings), and, ultimately, the streamflow records derived from them. This paper provides a brief overview of two candidate methods that may be used to characterize the uncertainty of ratings, and illustrates the results of their application to the ratings of two USGS streamgages.

  7. Push-Broom-Type Very High-Resolution Satellite Sensor Data Correction Using Combined Wavelet-Fourier and Multiscale Non-Local Means Filtering.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki

    2015-09-10

    In very high-resolution (VHR) push-broom-type satellite sensor data, stripe and random noise are chronic problems that have attracted major research effort in the remote sensing field. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the proposed method shows significantly improved enhancement over existing state-of-the-art methods in terms of both qualitative and quantitative assessments.
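
    For the stripe-removal half of such a pipeline, a minimal wavelet-Fourier destriping sketch using PyWavelets follows (the multiscale NLM denoising stage is omitted). The wavelet choice, level count, and damping width are illustrative assumptions, vertical stripes are assumed, and this is a generic wavelet-Fourier filter rather than the authors' exact implementation.

    ```python
    import numpy as np
    import pywt

    def wavelet_fourier_destripe(img, wavelet="db4", levels=4, sigma=2.4):
        """Suppress vertical stripes: decompose with a 2-D DWT, then damp the
        near-zero vertical frequencies of each vertical-detail band with a
        Gaussian notch in the Fourier domain."""
        coeffs = pywt.wavedec2(img, wavelet, level=levels)
        new_coeffs = [coeffs[0]]
        for cH, cV, cD in coeffs[1:]:
            fcV = np.fft.fftshift(np.fft.fft(cV, axis=0), axes=0)
            k = np.arange(fcV.shape[0]) - fcV.shape[0] // 2
            damp = 1.0 - np.exp(-(k ** 2) / (2.0 * sigma ** 2))
            fcV *= damp[:, None]                      # notch out stripe energy
            cV = np.real(np.fft.ifft(np.fft.ifftshift(fcV, axes=0), axis=0))
            new_coeffs.append((cH, cV, cD))
        return pywt.waverec2(new_coeffs, wavelet)

    # Toy test: random image plus column-wise (vertical) stripes.
    rng = np.random.default_rng(0)
    img = rng.random((256, 256)) + 0.3 * np.tile(rng.random(256), (256, 1))
    clean = wavelet_fourier_destripe(img)
    ```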

  8. Component-based subspace linear discriminant analysis method for face recognition with one training sample

    NASA Astrophysics Data System (ADS)

    Huang, Jian; Yuen, Pong C.; Chen, Wen-Sheng; Lai, J. H.

    2005-05-01

    Many face recognition algorithms/systems have been developed in the last decade, and excellent performance has been reported when a sufficient number of representative training samples is available. In many real-life applications such as passport identification, however, only one well-controlled frontal sample image is available for training. Under this situation, the performance of existing algorithms degrades dramatically, or they may not be applicable at all. We propose a component-based linear discriminant analysis (LDA) method to solve the one training sample problem. The basic idea of the proposed method is to construct local facial feature component bunches by moving each local feature region in four directions. In this way, we not only generate more samples with lower dimension than the original image, but also account for face detection localization error during training. After that, we propose a subspace LDA method, tailor-made for a small number of training samples, for the local feature projection to maximize the discrimination power. Theoretical analysis and experimental results show that our proposed subspace LDA is efficient and overcomes the limitations of existing LDA methods. Finally, we combine the contributions of each local component bunch with a weighted combination scheme to reach the recognition decision. The FERET database is used to evaluate the proposed method, and the results are encouraging.

  9. Push-Broom-Type Very High-Resolution Satellite Sensor Data Correction Using Combined Wavelet-Fourier and Multiscale Non-Local Means Filtering

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Seo, Doochun; Jeong, Jaeheon; Paik, Joonki

    2015-01-01

    In very high-resolution (VHR) push-broom-type satellite sensor data, stripe and random noise are chronic problems that have attracted major research effort in the remote sensing field. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared to various existing methods on a set of push-broom-type sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the proposed method shows significantly improved enhancement over existing state-of-the-art methods in terms of both qualitative and quantitative assessments. PMID:26378532

  10. Two types of modes in finite size one-dimensional coaxial photonic crystals: General rules and experimental evidence

    NASA Astrophysics Data System (ADS)

    El Boudouti, E. H.; El Hassouani, Y.; Djafari-Rouhani, B.; Aynaou, H.

    2007-08-01

    We demonstrate analytically and experimentally the existence and behavior of two types of modes in finite size one-dimensional coaxial photonic crystals made of N cells with vanishing magnetic field on both sides. We highlight the existence of N-1 confined modes in each band and one mode per gap, associated with one or the other of the two surfaces surrounding the structure. The latter modes are independent of N. These results generalize our previous findings on the existence of surface modes in two semi-infinite superlattices obtained from the cleavage of an infinite superlattice between two cells. The analytical results are obtained by means of the Green’s function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime.

  11. Stochastic functional evolution equations with monotone nonlinearity: Existence and stability of the mild solutions

    NASA Astrophysics Data System (ADS)

    Jahanipur, Ruhollah

    In this paper, we study a class of semilinear functional evolution equations in which the nonlinearity is demicontinuous and satisfies a semimonotone condition. We prove the existence, uniqueness and exponentially asymptotic stability of the mild solutions. Our approach is to apply a convenient version of the Burkholder inequality for convolution integrals and an iteration method based on the existence and measurability results for functional integral equations in Hilbert spaces. An Itô-type inequality is the main tool used to study the uniqueness, p-th moment and almost sure sample path asymptotic stability of the mild solutions. We also give some examples to illustrate applications of the theorems and compare the results obtained in this paper with others that have appeared in the literature.

  12. Documenting Preservice Teacher Growth through Critical Assessment of Online Lesson Plans

    ERIC Educational Resources Information Center

    Cude, Michelle D.; Haraway, Dana L.

    2017-01-01

    This research explores the question of how students in a social studies methods course improve skills in analyzing and critiquing pre-existing lesson plans. It utilizes a pre-post authentic assessment tool to measure student growth in key skills of lesson plan critique over the course of one semester's methods instruction. The results support the…

  13. Oak regeneration potential increased by shelterwood treatments

    Treesearch

    Richard C. Schlesinger; Ivan L. Sander; Kenneth R. Davidson

    1993-01-01

    In much of the Central Hardwood Forest Region, oak species are not regenerating well, even though large oak trees are common within the existing forests. The shelterwood method has been suggested as a potential tool for establishing and developing advanced regeneration where it is lacking. The 10-yr results from a study of several variants of the shelterwood method...

  14. Toward cost-efficient sampling methods

    NASA Astrophysics Data System (ADS)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small fraction of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method builds on the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulated networks (a scale-free network, a random network, and a small-world network) and in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
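
    Neither proposed algorithm is spelled out in the abstract, but the underlying idea, preferentially sampling high-degree vertices, can be sketched with a simple degree-proportional sample; the networkx calls and the Barabási–Albert test graph below are illustrative assumptions, not the paper's SRS/SBS variants.

    ```python
    import networkx as nx
    import numpy as np

    def degree_biased_sample(G, rate=0.1, rng=None):
        """Sample a fraction of vertices with probability proportional to
        degree, reflecting the idea that high-degree nodes carry most of the
        structural information; returns the induced subgraph."""
        rng = np.random.default_rng(rng)
        nodes = np.array(list(G.nodes()))
        deg = np.array([G.degree(n) for n in nodes], dtype=float)
        k = max(1, int(rate * len(nodes)))
        chosen = rng.choice(nodes, size=k, replace=False, p=deg / deg.sum())
        return G.subgraph(chosen)

    G = nx.barabasi_albert_graph(1000, 3)            # scale-free test network
    S = degree_biased_sample(G, rate=0.05, rng=42)
    print(len(S), nx.average_clustering(S))          # compare against full graph
    ```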

  15. Impact of including or excluding both-armed zero-event studies on using standard meta-analysis methods for rare event outcome: a simulation study

    PubMed Central

    Cheng, Ji; Pullenayegum, Eleanor; Marshall, John K; Thabane, Lehana

    2016-01-01

    Objectives There is no consensus on whether studies with no observed events in the treatment and control arms, the so-called both-armed zero-event studies, should be included in a meta-analysis of randomised controlled trials (RCTs). Current analytic approaches handle them differently depending on the choice of effect measures and authors' discretion. Our objective is to evaluate the impact of including or excluding both-armed zero-event (BA0E) studies in meta-analysis of RCTs with rare outcome events through a simulation study. Method We simulated 2500 data sets for different scenarios varying the parameters of baseline event rate, treatment effect and number of patients in each trial, and between-study variance. We evaluated the performance of commonly used pooling methods in classical meta-analysis—namely, Peto, Mantel-Haenszel with fixed-effects and random-effects models, and inverse variance method with fixed-effects and random-effects models—using bias, root mean square error, length of 95% CI and coverage. Results The overall performance of the approaches of including or excluding BA0E studies in meta-analysis varied according to the magnitude of true treatment effect. Including BA0E studies introduced very little bias, decreased mean square error, narrowed the 95% CI and increased the coverage when no true treatment effect existed. However, when a true treatment effect existed, the estimates from the approach of excluding BA0E studies led to smaller bias than including them. Among all evaluated methods, the Peto method excluding BA0E studies gave the least biased results when a true treatment effect existed. Conclusions We recommend including BA0E studies when treatment effects are unlikely, but excluding them when there is a decisive treatment effect. Providing results of including and excluding BA0E studies to assess the robustness of the pooled estimated effect is a sensible way to communicate the results of a meta-analysis when the treatment effects are unclear. PMID:27531725
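
    As a concrete illustration of one of the evaluated estimators, the following sketch computes the one-step Peto pooled odds ratio; the study tuples are hypothetical, and this is a bare-bones version of the estimator rather than the simulation pipeline used in the paper. Note that a BA0E study contributes O - E = 0 and V = 0 under Peto, which is why inclusion or exclusion leaves this particular estimate unchanged.

    ```python
    import numpy as np

    def peto_pooled_or(studies):
        """One-step Peto pooled odds ratio. Each study is a tuple
        (events_trt, n_trt, events_ctl, n_ctl)."""
        num = den = 0.0
        for a, n1, c, n2 in studies:
            N, m1 = n1 + n2, a + c
            if m1 == 0 or m1 == N:
                continue  # both-armed zero-event study: O - E = 0, V = 0
            E = m1 * n1 / N                                   # expected events
            V = m1 * (N - m1) * n1 * n2 / (N ** 2 * (N - 1))  # hypergeometric variance
            num += a - E
            den += V
        return np.exp(num / den)

    # Hypothetical trials; the middle one is a both-armed zero-event study.
    print(peto_pooled_or([(1, 100, 4, 100), (0, 50, 0, 50), (2, 120, 5, 110)]))
    ```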

  16. An improved strategy for skin lesion detection and classification using uniform segmentation and feature selection based approach.

    PubMed

    Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar

    2018-02-21

    Melanoma is the deadliest type of skin cancer. However, its removal at an early stage implies a high survival rate; it therefore demands early diagnosis. Conventional diagnosis methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and results are fused using the additive law of probability. A serial-based method is applied subsequently to extract and fuse traits such as color, texture, and HOG (shape). The fused features are then selected by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available data set PH2. Our approach achieved promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods available on the same data set. The proposed method thus detects and classifies melanoma significantly better than existing methods.

  17. Spatially Regularized Machine Learning for Task and Resting-state fMRI

    PubMed Central

    Song, Xiaomu; Panych, Lawrence P.; Chen, Nan-kuei

    2015-01-01

    Background Reliable mapping of brain function across sessions and/or subjects in task- and resting-state has been a critical challenge for quantitative fMRI studies although it has been intensively addressed in the past decades. New Method A spatially regularized support vector machine (SVM) technique was developed for the reliable brain mapping in task- and resting-state. Unlike most existing SVM-based brain mapping techniques, which implement supervised classifications of specific brain functional states or disorders, the proposed method performs a semi-supervised classification for the general brain function mapping where spatial correlation of fMRI is integrated into the SVM learning. The method can adapt to intra- and inter-subject variations induced by fMRI nonstationarity, and identify a true boundary between active and inactive voxels, or between functionally connected and unconnected voxels in a feature space. Results The method was evaluated using synthetic and experimental data at the individual and group level. Multiple features were evaluated in terms of their contributions to the spatially regularized SVM learning. Reliable mapping results in both task- and resting-state were obtained from individual subjects and at the group level. Comparison with Existing Methods A comparison study was performed with independent component analysis, general linear model, and correlation analysis methods. Experimental results indicate that the proposed method can provide a better or comparable mapping performance at the individual and group level. Conclusions The proposed method can provide accurate and reliable mapping of brain function in task- and resting-state, and is applicable to a variety of quantitative fMRI studies. PMID:26470627

  18. Reliability and Validity of the Alcohol Consequences Expectations Scale

    ERIC Educational Resources Information Center

    Arriola, Kimberly R. Jacob; Usdan, Stuart; Mays, Darren; Weitzel, Jessica Aungst; Cremeens, Jennifer; Martin, Ryan J.; Borba, Christina; Bernhardt, Jay M.

    2009-01-01

    Objectives: To examine the reliability and validity of a new measure of alcohol outcome expectations for college students, the Alcohol Consequences Expectations Scale (ACES). Methods: College students (N = 169) completed the ACES and several other measures. Results: Results support the existence of 5 internally consistent subscales. Additionally,…

  19. Diagnostic Utility of a Clonality Test for Lymphoproliferative Diseases in Koreans Using the BIOMED-2 PCR Assay

    PubMed Central

    Kim, Young; Choi, Yoo Duk; Choi, Chan

    2013-01-01

    Background A clonality test for immunoglobulin (IG) and T cell receptor (TCR) is a useful adjunctive method for the diagnosis of lymphoproliferative diseases (LPDs). Recently, the BIOMED-2 multiplex polymerase chain reaction (PCR) assay has been established as a standard method for assessing the clonality of LPDs. We tested clonality in LPDs in Koreans using the BIOMED-2 multiplex PCR and compared the results with those obtained in European, Taiwanese, and Thai participants. We also evaluated the usefulness of the test as an ancillary method for diagnosing LPDs. Methods Two hundred and nineteen specimens embedded in paraffin, including 78 B cell lymphomas, 80 T cell lymphomas and 61 cases of reactive lymphadenitis, were used for the clonality test. Results Mature B cell malignancies showed 95.7% clonality for IG, 2.9% co-existing clonality, and 4.3% polyclonality. Mature T cell malignancies exhibited 83.8% clonality for TCR, 8.1% co-existing clonality, and 16.2% polyclonality. Reactive lymphadenitis showed 93.4% polyclonality for IG and TCR. The majority of our results were similar to those obtained in Europeans. However, the clonality for IGK of B cell malignancies and TCRG of T cell malignancies was lower in Koreans than Europeans. Conclusions The BIOMED-2 multiplex PCR assay was a useful adjunctive method for diagnosing LPDs. PMID:24255634

  20. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    PubMed Central

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant approach to feature matching. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method is proposed based on dissimilarity values, which measure the dissimilarity of the features along the path between them based on eigenvector properties. Evidence shows that existing matching techniques using similarity metrics—such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient—are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  1. A Hierarchical Algorithm for Fast Debye Summation with Applications to Small Angle Scattering

    PubMed Central

    Gumerov, Nail A.; Berlin, Konstantin; Fushman, David; Duraiswami, Ramani

    2012-01-01

    Debye summation, which involves the summation of sinc functions of the distances between all pairs of atoms in three-dimensional space, arises in computations performed in crystallography, small/wide angle X-ray scattering (SAXS/WAXS) and small angle neutron scattering (SANS). Direct evaluation of the Debye summation has quadratic complexity, which results in a computational bottleneck when determining crystal properties or running structure refinement protocols that involve SAXS or SANS, even for moderately sized molecules. We present a fast approximation algorithm that efficiently computes the summation to any prescribed accuracy ε in linear time. The algorithm is similar to the fast multipole method (FMM), and is based on a hierarchical spatial decomposition of the molecule coupled with local harmonic expansions and translation of these expansions. An even more efficient implementation is possible when the scattering profile is all that is required, as in small angle scattering (SAS) reconstruction of macromolecules. We examine the relationship of the proposed algorithm to existing approximate methods for profile computations, and show that these methods may result in inaccurate profile computations unless an error bound derived in this paper is used. Our theoretical and computational results show orders of magnitude improvement in computational complexity over existing methods, while maintaining prescribed accuracy. PMID:22707386
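
    For reference, the quadratic-cost baseline that the hierarchical algorithm accelerates is the direct Debye sum, I(q) = sum over i,j of f_i f_j sin(q r_ij)/(q r_ij). A numpy sketch follows; the toy coordinates and unit form factors are assumptions for illustration.

    ```python
    import numpy as np

    def debye_direct(coords, f, q):
        """Direct O(N^2) Debye summation over all atom pairs; np.sinc(x)
        is sin(pi*x)/(pi*x), so sinc(q*d/pi) = sin(q*d)/(q*d) and the
        zero-distance diagonal is handled automatically (sinc(0) = 1)."""
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        I = np.empty_like(q)
        for k, qk in enumerate(q):
            I[k] = (f[:, None] * f[None, :] * np.sinc(qk * d / np.pi)).sum()
        return I

    rng = np.random.default_rng(1)
    xyz = rng.normal(size=(200, 3)) * 10.0   # toy "atom" coordinates
    f = np.ones(200)                         # unit scattering factors
    q = np.linspace(0.01, 0.5, 50)
    profile = debye_direct(xyz, f, q)
    ```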

  2. Fog seal guidelines.

    DOT National Transportation Integrated Search

    2003-10-01

    Fog seals are a method of adding asphalt to an existing pavement surface to improve sealing or waterproofing, prevent further stone loss by holding aggregate in place, or simply improve the surface appearance. However, inappropriate use can result in...

  3. 3D Surface Reconstruction and Automatic Camera Calibration

    NASA Technical Reports Server (NTRS)

    Jalobeanu, Andre

    2004-01-01

    Illustrations in this viewgraph presentation cover a Bayesian approach to 3D surface reconstruction and camera calibration. Existing methods, surface analysis and modeling, preliminary surface reconstruction results, and potential applications are addressed.

  4. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    PubMed

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, report the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spreadsheet including all formulas) that serves as comprehensive guidance for performing meta-analysis in different situations.
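
    To give the flavor of such estimators, here is a short sketch of the widely cited normal-quantile formulas for two of the scenarios discussed (minimum/median/maximum with sample size, and quartiles with median). The constants are the commonly quoted ones and are shown for illustration, not as a verbatim transcription of the paper's final recommendations.

    ```python
    from scipy.stats import norm

    def mean_sd_from_range(a, m, b, n):
        """Scenario with minimum a, median m, maximum b, sample size n."""
        mean = (a + 2 * m + b) / 4.0
        sd = (b - a) / (2 * norm.ppf((n - 0.375) / (n + 0.25)))
        return mean, sd

    def mean_sd_from_iqr(q1, m, q3, n):
        """Scenario with first/third quartiles q1, q3 and median m."""
        mean = (q1 + m + q3) / 3.0
        sd = (q3 - q1) / (2 * norm.ppf((0.75 * n - 0.125) / (n + 0.25)))
        return mean, sd

    print(mean_sd_from_range(10, 25, 45, 50))   # hypothetical trial summary
    print(mean_sd_from_iqr(18, 25, 33, 50))
    ```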

  5. Models of Integrating Physical Therapists into Family Health Teams in Ontario, Canada: Challenges and Opportunities

    PubMed Central

    Mandoda, Shilpa; Landry, Michel D.

    2011-01-01

    ABSTRACT Purpose: To explore the potential for different models of incorporating physical therapy (PT) services within the emerging network of family health teams (FHTs) in Ontario and to identify challenges and opportunities of each model. Methods: A two-phase mixed-methods qualitative descriptive approach was used. First, FHTs were mapped in relation to existing community-based PT practices. Second, semi-structured key-informant interviews were conducted with representatives from urban and rural FHTs and from a variety of community-based PT practices. Interviews were digitally recorded, transcribed verbatim, and analyzed using a categorizing/editing approach. Results: Most participants agreed that the ideal model involves embedding physical therapists directly into FHTs; in some situations, however, partnering with an existing external PT provider may be more feasible and sustainable. Access and funding remain the key issues, regardless of the model adopted. Conclusion: Although there are differences across the urban/rural divide, there exist opportunities to enhance and optimize existing delivery models so as to improve client access and address emerging demand for community-based PT services. PMID:22654231

  6. Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data

    NASA Astrophysics Data System (ADS)

    Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai

    2017-11-01

    Accurate load models are critical for power system analysis and operation. A large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on developing formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis. In addition, not all aspects of the model V&V problem have been addressed by the existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can serve as guidance for utility engineers and researchers in systematically examining load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.

  7. Hemovigilance in Massachusetts and the adoption of statewide hospital blood bank reporting using the National Healthcare Safety Network.

    PubMed

    Cumming, Melissa; Osinski, Anthony; O'Hearn, Lynne; Waksmonski, Pamela; Herman, Michele; Gordon, Deborah; Griffiths, Elzbieta; Knox, Kim; McHale, Eileen; Quillen, Karen; Rios, Jorge; Pisciotto, Patricia; Uhl, Lynne; DeMaria, Alfred; Andrzejewski, Chester

    2017-02-01

    A collaboration that grew over time between local hemovigilance stakeholders and the Massachusetts Department of Public Health (MDPH) resulted in the change from a paper-based method of reporting adverse reactions and monthly transfusion activity for regulatory compliance purposes to statewide adoption of electronic reporting via the National Healthcare Safety Network (NHSN). The NHSN is a web-based surveillance system that offers the capacity to capture transfusion-related adverse events, incidents, and monthly transfusion statistics from participating facilities. Massachusetts' hospital blood banks share the data they enter into NHSN with the MDPH to satisfy reporting requirements. Users of the NHSN Hemovigilance Module adhere to specified data entry guidelines, resulting in data that are comparable and standardized. Keys to successful statewide adoption of this reporting method include the fostering of strong partnerships with local hemovigilance champions and experts, engagement of regulatory and epidemiology divisions at the state health department, the leveraging of existing relationships with hospital NHSN administrators, and the existence of a regulatory deadline for implementation. Although limitations exist, successful implementation of statewide use of the NHSN Hemovigilance Module for hospital blood bank reporting is possible. The result is standardized, actionable data at both the hospital and state level that can facilitate interfacility comparisons, benchmarking, and opportunities for practice improvement.

  8. Assessing Species Diversity Using Metavirome Data: Methods and Challenges.

    PubMed

    Herath, Damayanthi; Jayasundara, Duleepa; Ackland, David; Saeed, Isaam; Tang, Sen-Lin; Halgamuge, Saman

    2017-01-01

    Assessing biodiversity is an important step in the study of microbial ecology associated with a given environment. Multiple indices have been used to quantify species diversity, which is a key biodiversity measure. Measuring the species diversity of viruses in different environments remains a challenge relative to measuring the diversity of other microbial communities. Metagenomics has played an important role in elucidating viral diversity through metavirome studies; however, metavirome data are highly complex, requiring robust data preprocessing and analysis methods. In this review, existing bioinformatics methods for measuring species diversity using metavirome data are categorised broadly as either sequence similarity-dependent methods or sequence similarity-independent methods. The former compare DNA fragments or assemblies generated in the experiment against reference databases to quantify species diversity, whereas estimates from the latter are independent of existing sequence data. Current methods and tools are discussed in detail, including their applications and limitations. Drawbacks of the state-of-the-art method are demonstrated through results from a simulation. In addition, alternative approaches are proposed to overcome the challenges in estimating species diversity measures using metavirome data.
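
    As a concrete example of the kind of species-diversity indices such studies compute, the following sketch evaluates the standard Shannon and Simpson indices from a vector of per-species abundances; the counts are hypothetical.

    ```python
    import numpy as np

    def shannon(counts):
        """Shannon diversity H = -sum(p * ln p) over observed species."""
        p = np.asarray(counts, dtype=float)
        p = p[p > 0] / p.sum()
        return -(p * np.log(p)).sum()

    def simpson(counts):
        """Gini-Simpson diversity 1 - sum(p^2)."""
        p = np.asarray(counts, dtype=float) / np.sum(counts)
        return 1.0 - (p ** 2).sum()

    # Hypothetical per-species read counts from a metavirome binning.
    abundances = [120, 45, 45, 9, 3, 1]
    print(shannon(abundances), simpson(abundances))
    ```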

  9. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    NASA Astrophysics Data System (ADS)

    Yang, Zili

    2017-07-01

    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test this method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  10. AISLE: an automatic volumetric segmentation method for the study of lung allometry.

    PubMed

    Ren, Hongliang; Kazanzides, Peter

    2011-01-01

    We developed a fully automatic segmentation method for volumetric CT (computer tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation from an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. The preliminary experimental results show that the proposed method can achieve accurate segmentation, in terms of volumetric overlap metric, by comparing with the ground-truth segmentation performed by a radiologist.

  11. An investigation of the 'Overlap' between the Statistical-Discrete-Gust and the Power-Spectral-Density analysis methods

    NASA Technical Reports Server (NTRS)

    Perry, Boyd, III; Pototzky, Anthony S.; Woods, Jessica A.

    1989-01-01

    This paper presents the results of a NASA investigation of a claimed 'Overlap' between two gust response analysis methods: the Statistical Discrete Gust (SDG) method and the Power Spectral Density (PSD) method. The claim is that the ratio of an SDG response to the corresponding PSD response is 10.4. Analytical results presented in this paper for several different airplanes at several different flight conditions indicate that such an 'Overlap' does appear to exist. However, the claim was not met precisely: a scatter of up to about 10 percent about the 10.4 factor can be expected.

  12. Testing actinide fission yield treatment in CINDER90 for use in MCNP6 burnup calculations

    DOE PAGES

    Fensin, Michael Lorne; Umbel, Marissa

    2015-09-18

    Most of the development of the MCNPX/6 burnup capability focused on features that were applied to the Boltzmann transport or used to prepare coefficients for use in CINDER90, with little change to CINDER90 or the CINDER90 data. Though a scheme exists for best solving the coupled Boltzmann and Bateman equations, the most significant approximation is that the employed nuclear data are correct and complete. The CINDER90 library file contains 60 different actinide fission yields encompassing 36 fissionable actinides (thermal, fast, high energy and spontaneous fission). Fission reaction data exist for more than 60 actinides, so fission yield data must be approximated for actinides that do not possess fission yield information. Several types of approximations are used for estimating fission yields for actinides that lack explicit fission yield data. The objective of this study is to test whether or not certain approximations in fission yield selection have any impact on the predictability of major actinides and fission products. Further, we assess which other fission products, available in MCNP6 Tier 3, result in the largest difference in production. Because the CINDER90 library file is in ASCII format and therefore easily amendable, we assess the rationale for choosing, and compare actinide and major fission product predictions for the H. B. Robinson benchmark across, three separate fission yield selection methods: (1) the current CINDER90 library file method (Base); (2) the element method (Element); and (3) the isobar method (Isobar). Results show that the three methods tested give similar predictions of the major actinides, Tc-99 and Cs-137; however, certain fission products showed significantly different production depending on the method chosen.

  13. Sizing up arthropod genomes: an evaluation of the impact of environmental variation on genome size estimates by flow cytometry and the use of qPCR as a method of estimation.

    PubMed

    Gregory, T Ryan; Nathwani, Paula; Bonnett, Tiffany R; Huber, Dezene P W

    2013-09-01

    A study was undertaken to evaluate both a pre-existing method and a newly proposed approach for the estimation of nuclear genome sizes in arthropods. First, concerns regarding the reliability of the well-established method of flow cytometry, relating to impacts of rearing conditions on genome size estimates, were examined. Contrary to previous reports, a more carefully controlled test found negligible environmental effects on genome size estimates in the fly Drosophila melanogaster. Second, a more recently touted method based on quantitative real-time PCR (qPCR) was examined in terms of ease of use, efficiency, and (most importantly) accuracy using four test species: the flies Drosophila melanogaster and Musca domestica and the beetles Tribolium castaneum and Dendroctonus ponderosae. The results of this analysis demonstrated that qPCR tends to produce genome size estimates substantially different from those of other established techniques while also being far less efficient than existing methods.

  14. Laser Spot Tracking Based on Modified Circular Hough Transform and Motion Pattern Analysis

    PubMed Central

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-01-01

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas–Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development. PMID:25350502

  15. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  16. Laser spot tracking based on modified circular Hough transform and motion pattern analysis.

    PubMed

    Krstinić, Damir; Skelin, Ana Kuzmanić; Milatić, Ivan

    2014-10-27

    Laser pointers are one of the most widely used interactive and pointing devices in different human-computer interaction systems. Existing approaches to vision-based laser spot tracking are designed for controlled indoor environments with the main assumption that the laser spot is very bright, if not the brightest, spot in images. In this work, we are interested in developing a method for an outdoor, open-space environment, which could be implemented on embedded devices with limited computational resources. Under these circumstances, none of the assumptions of existing methods for laser spot tracking can be applied, yet a novel and fast method with robust performance is required. Throughout the paper, we will propose and evaluate an efficient method based on modified circular Hough transform and Lucas-Kanade motion analysis. Encouraging results on a representative dataset demonstrate the potential of our method in an uncontrolled outdoor environment, while achieving maximal accuracy indoors. Our dataset and ground truth data are made publicly available for further development.
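
    A minimal OpenCV sketch of the two ingredients named here, circular Hough detection of a spot candidate and Lucas-Kanade verification between frames. The parameter values are illustrative guesses, and the authors' modifications to the Hough transform and their motion-pattern analysis are not reproduced.

    ```python
    import cv2
    import numpy as np

    def detect_spot(gray):
        """Candidate laser-spot detection with the circular Hough transform;
        returns the (x, y) of the strongest small circle, or None."""
        circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.5,
                                   minDist=20, param1=120, param2=15,
                                   minRadius=2, maxRadius=12)
        return None if circles is None else circles[0, 0, :2]

    def track_spot(prev_gray, gray, pt):
        """Verify/update the spot position between frames with sparse
        Lucas-Kanade optical flow."""
        p0 = np.array([[pt]], dtype=np.float32)
        p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                                 winSize=(15, 15), maxLevel=2)
        return p1[0, 0] if status[0, 0] == 1 else None
    ```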

  17. On correct evaluation techniques of brightness enhancement effect measurement data

    NASA Astrophysics Data System (ADS)

    Kukačka, Leoš; Dupuis, Pascal; Motomura, Hideki; Rozkovec, Jiří; Kolář, Milan; Zissis, Georges; Jinno, Masafumi

    2017-11-01

    This paper aims to establish confidence intervals for the quantification of brightness enhancement effects resulting from the use of pulsing bright light. It is found that the methods used so far may introduce significant bias into the published results, overestimating or underestimating the enhancement effect. The authors propose to use a linear algebra method called total least squares. On an example dataset, it is shown that this method does not yield biased results. The statistical significance of the results is also computed. It is concluded from an observation set that the currently used linear algebra methods exhibit many patterns of noise sensitivity: changing algorithm details leads to inconsistent results. It is thus recommended to use the method with the lowest noise sensitivity. Moreover, it is shown that this method also permits one to obtain an estimate of the confidence interval. This paper aims neither to publish results about a particular experiment nor to draw any particular conclusion about the existence or nonexistence of the brightness enhancement effect.
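
    Total least squares is classical and can be written in a few lines of numpy via the SVD, treating noise in both variables symmetrically (unlike ordinary least squares). The synthetic stimulus/response data below are an illustrative assumption, not measurements from the paper.

    ```python
    import numpy as np

    def tls_line_fit(x, y):
        """Total least squares (orthogonal regression) fit of y = a*x + b:
        center the data, take the right-singular vector of the smallest
        singular value as the line normal. Assumes the line is not vertical."""
        xm, ym = x.mean(), y.mean()
        A = np.column_stack((x - xm, y - ym))
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        nx, ny = Vt[-1]              # normal vector of the best-fit line
        a = -nx / ny
        return a, ym - a * xm

    rng = np.random.default_rng(3)
    x = np.linspace(0, 1, 200) + rng.normal(0, 0.02, 200)  # noisy stimulus
    y = 1.8 * x + 0.1 + rng.normal(0, 0.02, 200)           # noisy response
    print(tls_line_fit(x, y))
    ```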

  18. An Isometric Mapping Based Co-Location Decision Tree Algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Wei, J.; Zhou, X.; Zhang, R.; Huang, W.; Sha, H.; Chen, J.

    2018-05-01

    Decision tree (DT) induction has been widely used in different pattern classification tasks. However, most traditional DTs have the disadvantage that they consider only non-spatial attributes (i.e., spectral information) when classifying pixels, which can result in objects being misclassified. Therefore, some researchers have proposed a co-location decision tree (Cl-DT) method, which combines co-location mining and decision trees to solve the above-mentioned problems of traditional decision trees. Cl-DT overcomes the shortcomings of the existing DT algorithms, which create a node for each value of a given attribute, and achieves higher accuracy than the existing decision tree approach. However, for non-linearly distributed data instances, the Euclidean distance between instances does not reflect the true positional relationship between them. To overcome this shortcoming, this paper proposes an isometric mapping method based on Cl-DT (called Isomap-based Cl-DT), which combines isometric mapping and Cl-DT. Because isometric mapping methods use geodesic distances instead of Euclidean distances between non-linearly distributed instances, the true distances between instances can be reflected. The experimental results and several comparative analyses show that: (1) the extraction of exposed carbonate rocks achieves high accuracy; and (2) the proposed method has many advantages, because the total number of nodes and the number of leaf nodes are greatly reduced compared to Cl-DT. Therefore, the Isomap-based Cl-DT algorithm can construct a more accurate and faster decision tree.
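
    The geodesic-before-classification idea can be illustrated with scikit-learn: embed non-linearly distributed points with Isomap, then grow an ordinary decision tree on the embedding. This is a generic sketch of that idea, not the Cl-DT algorithm itself, and the swiss-roll data and hyperparameters are assumptions.

    ```python
    from sklearn.datasets import make_swiss_roll
    from sklearn.manifold import Isomap
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Non-linearly distributed data: a swiss roll, labeled by position along it.
    X, t = make_swiss_roll(n_samples=1500, noise=0.3, random_state=0)
    y = (t > t.mean()).astype(int)

    # Replace Euclidean coordinates with a geodesic-preserving embedding first.
    Z = Isomap(n_neighbors=12, n_components=2).fit_transform(X)

    Ztr, Zte, ytr, yte = train_test_split(Z, y, random_state=0)
    clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(Ztr, ytr)
    print("accuracy:", clf.score(Zte, yte))
    ```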

  19. Evaluating Gene Set Enrichment Analysis Via a Hybrid Data Model

    PubMed Central

    Hua, Jianping; Bittner, Michael L.; Dougherty, Edward R.

    2014-01-01

    Gene set enrichment analysis (GSA) methods have been widely adopted by biological labs to analyze data and generate hypotheses for validation. Most of the existing comparison studies focus on whether the existing GSA methods can produce accurate P-values; however, practitioners are often more concerned with the correct gene-set ranking generated by the methods. The ranking performance is closely related to two critical goals associated with GSA methods: the ability to reveal biological themes and ensuring reproducibility, especially for small-sample studies. We have conducted a comprehensive simulation study focusing on the ranking performance of seven representative GSA methods. We overcome the limitation on the availability of real data sets by creating hybrid data models from existing large data sets. To build the data model, we pick a master gene from the data set to form the ground truth and artificially generate the phenotype labels. Multiple hybrid data models can be constructed from one data set and multiple data sets of smaller sizes can be generated by resampling the original data set. This approach enables us to generate a large batch of data sets to check the ranking performance of GSA methods. Our simulation study reveals that for the proposed data model, the Q2 type GSA methods have in general better performance than other GSA methods and the global test has the most robust results. The properties of a data set play a critical role in the performance. For the data sets with highly connected genes, all GSA methods suffer significantly in performance. PMID:24558298

  20. Analysis of Existing Guidelines for the Systematic Planning Process of Clinical Registries.

    PubMed

    Löpprich, Martin; Knaup, Petra

    2016-01-01

    Clinical registries are a powerful method for observing clinical practice and the natural history of disease. In contrast to clinical trials, where guidelines and standardized methods exist and are mandatory, only a few initiatives have published methodological guidelines for clinical registries. The objective of this paper was to review these guidelines and systematically assess their completeness, usability and feasibility using a SWOT analysis. The results show that each guideline has its own strengths and weaknesses: while one supports the systematic planning process, another discusses clinical registries in great detail. However, feasibility was mostly limited, and the special requirement of clinical registries for a flexible, expandable and adaptable technological structure was not addressed consistently.

  1. Management of Dynamic Biomedical Terminologies: Current Status and Future Challenges

    PubMed Central

    Dos Reis, J. C.; Pruski, C.

    2015-01-01

    Objectives Controlled terminologies and their dependent artefacts provide a consensual understanding of a domain while reducing ambiguities and enabling reasoning. However, the evolution of a domain’s knowledge directly impacts these terminologies and generates inconsistencies in the underlying biomedical information systems. In this article, we review existing work addressing the dynamic aspect of terminologies as well as their effects on mappings and semantic annotations. Methods We investigate approaches related to the identification, characterization and propagation of changes in terminologies, mappings and semantic annotations, including techniques to update their content. Results and conclusion Based on the explored issues and existing methods, we outline open research challenges requiring investigation in the near future. PMID:26293859

  2. Optimal trace inequality constants for interior penalty discontinuous Galerkin discretisations of elliptic operators using arbitrary elements with non-constant Jacobians

    NASA Astrophysics Data System (ADS)

    Owens, A. R.; Kópházi, J.; Eaton, M. D.

    2017-12-01

    In this paper, a new method to numerically calculate the trace inequality constants, which arise in the calculation of penalty parameters for interior penalty discretisations of elliptic operators, is presented. These constants are provably optimal for the inequality of interest. As their calculation is based on the solution of a generalised eigenvalue problem involving the volumetric and face stiffness matrices, the method is applicable to any element type for which these matrices can be calculated, including standard finite elements and the non-uniform rational B-splines of isogeometric analysis. In particular, the presented method does not require the Jacobian of the element to be constant, and so can be applied to a much wider variety of element shapes than are currently available in the literature. Numerical results are presented for a variety of finite element and isogeometric cases. When the Jacobian is constant, it is demonstrated that the new method produces lower penalty parameters than existing methods in the literature in all cases, which translates directly into savings in the solution time of the resulting linear system. When the Jacobian is not constant, it is shown that the naive application of existing approaches can result in penalty parameters that do not guarantee coercivity of the bilinear form, and by extension, the stability of the solution. The method of manufactured solutions is applied to a model reaction-diffusion equation with a range of parameters, and it is found that using penalty parameters based on the new trace inequality constants results in better-conditioned linear systems, which can be solved approximately 11% faster than those produced by the methods from the literature.
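
    The core computation named above is a generalised eigenvalue problem. The following is a hedged sketch of that step under stated assumptions: the tiny matrices are made-up stand-ins, whereas in practice they come from the element basis functions:

    ```python
    import numpy as np
    from scipy.linalg import eigh

    # Stand-in symmetric positive-definite volumetric stiffness matrix K_vol and
    # symmetric positive semi-definite face stiffness matrix K_face for one element.
    rng = np.random.default_rng(2)
    B = rng.normal(size=(8, 8))
    K_vol = B @ B.T + 8 * np.eye(8)   # safely positive definite
    C = rng.normal(size=(8, 3))
    K_face = C @ C.T                  # rank-deficient, as face matrices often are

    # Solve K_face v = lambda K_vol v; the trace constant of interest is the
    # largest generalised eigenvalue.
    eigvals = eigh(K_face, K_vol, eigvals_only=True)
    trace_constant = eigvals.max()
    print("trace inequality constant:", trace_constant)
    ```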

  3. Projection-free approximate balanced truncation of large unstable systems

    NASA Astrophysics Data System (ADS)

    Flinois, Thibault L. B.; Morgans, Aimee S.; Schmid, Peter J.

    2015-08-01

    In this article, we show that the projection-free, snapshot-based, balanced truncation method can be applied directly to unstable systems. We prove that even for unstable systems, the unmodified balanced proper orthogonal decomposition algorithm theoretically yields a converged transformation that balances the Gramians (including the unstable subspace). We then apply the method to a spatially developing unstable system and show that it results in reduced-order models of similar quality to the ones obtained with existing methods. Due to the unbounded growth of unstable modes, a practical restriction on the final impulse response simulation time appears, which can be adjusted depending on the desired order of the reduced-order model. Recommendations are given to further reduce the cost of the method if the system is large and to improve the performance of the method if it does not yield acceptable results in its unmodified form. Finally, the method is applied to the linearized flow around a cylinder at Re = 100 to show that it actually is able to accurately reproduce impulse responses for more realistic unstable large-scale systems in practice. The well-established approximate balanced truncation numerical framework therefore can be safely applied to unstable systems without any modifications. Additionally, balanced reduced-order models can readily be obtained even for large systems, where the computational cost of existing methods is prohibitive.

  4. Open-source platform to benchmark fingerprints for ligand-based virtual screening

    PubMed Central

    2013-01-01

    Similarity-search methods using molecular fingerprints are an important tool for ligand-based virtual screening. A huge variety of fingerprints exist and their performance, usually assessed in retrospective benchmarking studies using data sets with known actives and known or assumed inactives, depends largely on the validation data sets and the similarity measure used. Comparing new methods to existing ones in any systematic way is rather difficult due to the lack of standard data sets and evaluation procedures. Here, we present a standard platform for the benchmarking of 2D fingerprints. The open-source platform contains all source code, structural data for the actives and inactives used (drawn from three publicly available collections of data sets), and lists of randomly selected query molecules to be used for statistically valid comparisons of methods. This allows the exact reproduction and comparison of results for future studies. The results for 12 standard fingerprints together with two simple baseline fingerprints assessed by seven evaluation methods are shown together with the correlations between methods. High correlations were found between the 12 fingerprints and a careful statistical analysis showed that only the two baseline fingerprints were different from the others in a statistically significant way. High correlations were also found between six of the seven evaluation methods, indicating that despite their seeming differences, many of these methods are similar to each other. PMID:23721588
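
    For orientation, here is a minimal sketch of the kind of fingerprint similarity search such platforms benchmark, using RDKit; the SMILES strings and fingerprint settings (Morgan radius 2, 2048 bits, Tanimoto similarity) are illustrative assumptions:

    ```python
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    library_smiles = ("CCOc1ccccc1C", "c1ccccc1", "CCN(CC)CC")   # stand-in library
    query = Chem.MolFromSmiles("CCOc1ccccc1")                     # stand-in query

    # Morgan (ECFP-like) bit-vector fingerprints.
    fp_query = AllChem.GetMorganFingerprintAsBitVect(query, 2, nBits=2048)
    fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
           for s in library_smiles]

    # Rank the library by Tanimoto similarity to the query.
    scores = [DataStructs.TanimotoSimilarity(fp_query, fp) for fp in fps]
    for smi, s in sorted(zip(library_smiles, scores), key=lambda t: -t[1]):
        print(f"{smi}: {s:.3f}")
    ```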

  5. A reconsideration of negative ratings for network-based recommendation

    NASA Astrophysics Data System (ADS)

    Hu, Liang; Ren, Liang; Lin, Wenbin

    2018-01-01

    Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
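
    As background for the network-based inference algorithm the abstract builds on, the following is a hedged numpy sketch of the standard mass-diffusion (ProbS) recommender on a toy bipartite network, using positive ratings only; how negative ratings enter, which is the paper's contribution, is not reproduced here:

    ```python
    import numpy as np

    # Binary user x item adjacency matrix (1 = user rated the item positively).
    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]], dtype=float)

    k_user = A.sum(axis=1)                    # user degrees
    k_item = A.sum(axis=0)                    # item degrees

    # Item-to-item resource transfer matrix: w_ij = (1/k_j) * sum_u A_ui A_uj / k_u
    W = (A.T @ (A / k_user[:, None])) / k_item[None, :]

    user = 0
    scores = W @ A[user]                      # diffuse the user's resource
    scores[A[user] > 0] = -np.inf             # exclude items already rated
    print("top recommendation for user 0: item", int(np.argmax(scores)))
    ```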

  6. Magnetically induced rotor vibration in dual-stator permanent magnet motors

    NASA Astrophysics Data System (ADS)

    Xie, Bang; Wang, Shiyu; Wang, Yaoyao; Zhao, Zhifu; Xiu, Jie

    2015-07-01

    Magnetically induced vibration is a major concern in permanent magnet (PM) motors, which is especially true for dual-stator motors. This work develops a two-dimensional model of the rotor using the energy method and employs this model to examine the rigid- and elastic-body vibrations induced by the tooth passage forces of the inner and outer stators. The analytical results imply that there exist three typical vibration modes. Their presence or absence depends on the magnet/slot combination, the force's frequency and amplitude, the relative position between the two stators, and other structural parameters. The combination and relative position affect these modes by altering the force phase. The predicted results are verified by magnetic force wave analysis using the finite element method (FEM) and by comparison with existing results. Potential research directions are also given, with the anticipation of bringing forth more interesting and useful findings. As an engineering application, the magnetically induced vibration can first be reduced via the combination and then via a suitable relative position.

  7. Development and application of a gradient method for solving differential games

    NASA Technical Reports Server (NTRS)

    Roberts, D. A.; Montgomery, R. C.

    1971-01-01

    A technique for solving n-dimensional games is developed and applied to two pursuit-evasion games. The first is a two-dimensional game similar to the homicidal chauffeur but modified to resemble an airplane-helicopter engagement. The second is a five-dimensional game of two airplanes at constant altitude and with thrust and turning controls. The performance function to be optimized by the pursuer and evader was the distance between the evader and a given target point in front of the pursuer. The analytic solution to the first game reveals that both unique and nonunique solutions exist. A comparison between the gradient results and the analytic solution shows a dependence on the nominal controls in regions where nonunique solutions exist. In the unique solution region, the results from the two methods agree closely. The results for the five-dimensional two-airplane game are also shown to be dependent on the nominal controls selected and indicate that initial conditions are in a region of nonunique solutions.

  8. A Method of Retrospective Computerized System Validation for Drug Manufacturing Software Considering Modifications

    NASA Astrophysics Data System (ADS)

    Takahashi, Masakazu; Fukue, Yoshinori

    This paper proposes a Retrospective Computerized System Validation (RCSV) method for Drug Manufacturing Software (DMSW) that takes software modification into account. Because DMSW used for quality management and facility control has a major impact on drug quality, regulatory agencies require proof of the adequacy of DMSW functions and performance, based on development documents and test results. In particular, the work of demonstrating the adequacy of previously developed DMSW based on existing documents and operational records is called RCSV. When DMSW that has already undergone RCSV is modified, it is difficult to maintain consistency between the development documents and test results for the modified parts and the existing documents and operational records for the unmodified parts, which makes conducting RCSV difficult. In this paper, we propose (a) a definition of the document architecture, (b) a definition of the descriptive items and levels in the documents, (c) management of design information using a database, (d) exhaustive testing, and (e) an integrated RCSV procedure. As a result, adequate RCSV could be conducted while securing consistency.

  9. Ensemble-based prediction of RNA secondary structures.

    PubMed

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.

  10. SURVEYS OF FALLOUT SHELTER--A COMPARISON BETWEEN AERIAL PHOTOGRAPHIC AND DOCUMENTARY METHODS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kleinecke, D.C.

    1960-02-01

    In 1959 a large part of Contra Costa County, California, was surveyed for fallout shelter areas. This survey was based on an examination of the tax assessor's records of existing buildings. A portion of this area was also surveyed independently by a method based on aerial photography. A statistical comparison of the results of these two surveys indicates that the aerial photographic method was more efficient than the documentary method in locating potential shelter space in buildings of heavy construction. This result, however, is probably not operationally significant. There is reason to believe that a combination of these two survey methods could be devised which would be operationally preferable to either method. (auth)

  11. A retrospective analysis of in vivo eye irritation, skin irritation and skin sensitisation studies with agrochemical formulations: Setting the scene for development of alternative strategies.

    PubMed

    Corvaro, M; Gehen, S; Andrews, K; Chatfield, R; Macleod, F; Mehta, J

    2017-10-01

    Analysis of the prevalence of health effects in large scale databases is key in defining testing strategies within the context of Integrated Approaches on Testing and Assessment (IATA), and is relevant to drive policy changes in existing regulatory toxicology frameworks towards non-animal approaches. A retrospective analysis of existing results from in vivo skin irritation, eye irritation, and skin sensitisation studies on a database of 223 agrochemical formulations is herein published. For skin or eye effects, high prevalence of mild to non-irritant formulations (i.e. per GHS, CLP or EPA classification) would generally suggest a bottom-up approach. Severity of erythema (for skin effects) and of corneal opacity (for eye effects) were the key drivers for classification, consistent with existing literature. The reciprocal predictivity of skin versus eye irritation and the good negative predictivity of the GHS additivity calculation approach (>85%) provided valuable non-testing evidence for irritation endpoints. For dermal sensitisation, concordance on data from three different methods confirmed the high false negative rate for the Buehler method in this product class. These results have been reviewed together with existing literature on the use of in vitro alternatives for agrochemical formulations, to propose improvements to current regulatory strategies and to identify further research needs. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Two- and three-photon ionization of hydrogen and lithium

    NASA Technical Reports Server (NTRS)

    Chang, T. N.; Poe, R. T.

    1977-01-01

    We present the detailed result of a calculation on two- and three-photon ionization of hydrogen and lithium based on a recently proposed calculational method. Our calculation has demonstrated that this method is capable of retaining the numerical advantages enjoyed by most of the existing calculational methods and, at the same time, circumventing their limitations. In particular, we have concentrated our discussion on the relative contribution from the resonant and nonresonant intermediate states.

  13. Integration of existing systematic reviews into new reviews: identification of guidance needs

    PubMed Central

    2014-01-01

    Background An exponential increase in the number of systematic reviews published, and constrained resources for new reviews, means that there is an urgent need for guidance on explicitly and transparently integrating existing reviews into new systematic reviews. The objectives of this paper are: 1) to identify areas where existing guidance may be adopted or adapted, and 2) to suggest areas for future guidance development. Methods We searched documents and websites from healthcare focused systematic review organizations to identify and, where available, to summarize relevant guidance on the use of existing systematic reviews. We conducted informational interviews with members of Evidence-based Practice Centers (EPCs) to gather experiences in integrating existing systematic reviews, including common issues and challenges, as well as potential solutions. Results There was consensus among systematic review organizations and the EPCs about some aspects of incorporating existing systematic reviews into new reviews. Current guidance may be used in assessing the relevance of prior reviews and in scanning references of prior reviews to identify studies for a new review. However, areas of challenge remain. Areas in need of guidance include how to synthesize, grade the strength of, and present bodies of evidence composed of primary studies and existing systematic reviews. For instance, empiric evidence is needed regarding how to quality check data abstraction and when and how to use study-level risk of bias assessments from prior reviews. Conclusions There remain areas of uncertainty for how to integrate existing systematic reviews into new reviews. Methods research and consensus processes among systematic review organizations are needed to develop guidance to address these challenges. PMID:24956937

  14. CNV-TV: a robust method to discover copy number variation from short sequencing reads.

    PubMed

    Duan, Junbo; Zhang, Ji-Gang; Deng, Hong-Wen; Wang, Yu-Ping

    2013-05-02

    Copy number variation (CNV) is an important structural variation (SV) in the human genome. Various studies have shown that CNVs are associated with complex diseases. Traditional CNV detection methods such as fluorescence in situ hybridization (FISH) and array comparative genomic hybridization (aCGH) suffer from low resolution. The next generation sequencing (NGS) technique promises a higher resolution detection of CNVs and several methods were recently proposed for realizing such a promise. However, the performances of these methods are not robust under some conditions, e.g., some of them may fail to detect CNVs of short sizes. There has been a strong demand for reliable detection of CNVs from high resolution NGS data. A novel and robust method to detect CNV from short sequencing reads is proposed in this study. The detection of CNV is modeled as a change-point detection from the read depth (RD) signal derived from the NGS, which is fitted with a total variation (TV) penalized least squares model. The performance (e.g., sensitivity and specificity) of the proposed approach is evaluated by comparison with several recently published methods on both simulated and real data from the 1000 Genomes Project. The experimental results showed that both the true positive rate and false positive rate of the proposed detection method do not change significantly for CNVs with different copy numbers and lengths, when compared with several existing methods. Therefore, our proposed approach results in a more reliable detection of CNVs than the existing methods.
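
    A hedged sketch of the core model named in the abstract: fitting a piecewise-constant profile to a read-depth signal with a total-variation (fused-lasso) penalised least-squares fit. It assumes cvxpy is available; the synthetic signal, penalty weight and jump threshold are illustrative, and CNV-TV's actual pipeline (RD extraction, model selection) is more involved:

    ```python
    import numpy as np
    import cvxpy as cp

    rng = np.random.default_rng(3)
    true = np.concatenate([np.full(100, 2.0), np.full(40, 3.0), np.full(100, 2.0)])
    rd = true + rng.normal(scale=0.4, size=true.size)   # noisy read-depth signal

    # TV-penalised least squares: min ||rd - x||^2 + lam * sum |x_{i+1} - x_i|
    x = cp.Variable(rd.size)
    lam = 5.0
    cp.Problem(cp.Minimize(cp.sum_squares(rd - x) + lam * cp.norm1(cp.diff(x)))).solve()

    # Change points (candidate CNV boundaries) are where the fit jumps.
    jumps = np.where(np.abs(np.diff(x.value)) > 0.1)[0]
    print("estimated change points near indices:", jumps)
    ```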

  15. Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao

    Previous studies of ocular chromatic aberrations have concentrated on chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. The insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.

  16. Research notes : retrofitting culverts for fish.

    DOT National Transportation Integrated Search

    2005-01-01

    Culverts are a well established method to pass a roadway over a waterway. Standard design criteria exist for meeting the hydraulic requirements for moving the water through the culverts. However, the hydraulic conditions resulting from many culvert d...

  17. A method for evaluating the importance of system state observations to model predictions, with application to the Death Valley regional groundwater flow system

    USGS Publications Warehouse

    Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.; O'Brien, Grady M.

    2004-01-01

    We develop a new observation‐prediction (OPR) statistic for evaluating the importance of system state observations to model predictions. The OPR statistic measures the change in prediction uncertainty produced when an observation is added to or removed from an existing monitoring network, and it can be used to guide refinement and enhancement of the network. Prediction uncertainty is approximated using a first‐order second‐moment method. We apply the OPR statistic to a model of the Death Valley regional groundwater flow system (DVRFS) to evaluate the importance of existing and potential hydraulic head observations to predicted advective transport paths in the saturated zone underlying Yucca Mountain and underground testing areas on the Nevada Test Site. Important existing observations tend to be far from the predicted paths, and many unimportant observations are in areas of high observation density. These results can be used to select locations at which increased observation accuracy would be beneficial and locations that could be removed from the network. Important potential observations are mostly in areas of high hydraulic gradient far from the paths. Results for both existing and potential observations are related to the flow system dynamics and coarse parameter zonation in the DVRFS model. If system properties in different locations are as similar as the zonation assumes, then the OPR results illustrate a data collection opportunity whereby observations in distant, high‐gradient areas can provide information about properties in flatter‐gradient areas near the paths. If this similarity is suspect, then the analysis produces a different type of data collection opportunity involving testing of model assumptions critical to the OPR results.
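
    The OPR statistic itself is defined in the paper; the numpy sketch below only illustrates the underlying first-order second-moment idea, comparing linearised prediction uncertainty with and without one observation. The sensitivity matrices, unit weighting and all numbers are made-up stand-ins, not DVRFS quantities:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    X = rng.normal(size=(30, 4))      # observation sensitivities (30 obs, 4 params)
    z = rng.normal(size=4)            # sensitivity of the prediction to the params

    def prediction_variance(X, z):
        # First-order second-moment: var = z^T (X^T X)^{-1} z (unit obs weights).
        return z @ np.linalg.inv(X.T @ X) @ z

    base = prediction_variance(X, z)
    # OPR-style percent change in prediction standard deviation when
    # observation i is removed from the monitoring network.
    for i in range(3):
        reduced = prediction_variance(np.delete(X, i, axis=0), z)
        opr = 100 * (np.sqrt(reduced) - np.sqrt(base)) / np.sqrt(base)
        print(f"obs {i}: removal changes prediction std by {opr:+.2f}%")
    ```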

  18. Influence of boundary conditions on the existence and stability of minimal surfaces of revolution made of soap films

    NASA Astrophysics Data System (ADS)

    Salkin, Louis; Schmit, Alexandre; Panizza, Pascal; Courbin, Laurent

    2014-09-01

    Because of surface tension, soap films seek the shape that minimizes their surface energy and thus their surface area. This mathematical postulate allows one to predict the existence and stability of simple minimal surfaces. After briefly recalling classical results obtained in the case of symmetric catenoids that span two circular rings with the same radius, we discuss the role of boundary conditions on such shapes, working with two rings having different radii. We then investigate the conditions of existence and stability of other shapes that include two portions of catenoids connected by a planar soap film and half-symmetric catenoids for which we introduce a method of observation. We report a variety of experimental results including metastability—an hysteretic evolution of the shape taken by a soap film—explained using simple physical arguments. Working by analogy with the theory of phase transitions, we conclude by discussing universal behaviors of the studied minimal surfaces in the vicinity of their existence thresholds.
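
    For reference, the classical symmetric-catenoid result recalled above can be summarised as follows; this is the standard textbook derivation, included here as a sketch rather than taken from the paper:

    ```latex
    % Symmetric catenoid spanning two rings of radius R at z = \pm h/2:
    r(z) = c\,\cosh\!\left(\frac{z}{c}\right),
    \qquad R = c\,\cosh\!\left(\frac{h}{2c}\right).
    % Writing t = h/(2c), the separation-to-radius ratio is
    \frac{h}{R} = \frac{2t}{\cosh t},
    % maximised where t\tanh t = 1, i.e. t \approx 1.1997, giving
    \left(\frac{h}{R}\right)_{\max} \approx 1.3255.
    % For h/R above this threshold no catenoid exists and the film jumps to
    % the Goldschmidt solution of two disconnected flat disks.
    ```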

  19. Fitting Formulae and Constraints for the Existence of S-type and P-type Habitable Zones in Binary Systems

    NASA Astrophysics Data System (ADS)

    Wang, Zhaopeng; Cuntz, Manfred

    2017-10-01

    We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.

  20. Fitting Formulae and Constraints for the Existence of S-type and P-type Habitable Zones in Binary Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Zhaopeng; Cuntz, Manfred, E-mail: zhaopeng.wang@mavs.uta.edu, E-mail: cuntz@uta.edu

    We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.

  1. Robust digital image watermarking using distortion-compensated dither modulation

    NASA Astrophysics Data System (ADS)

    Li, Mianjie; Yuan, Xiaochen

    2018-04-01

    In this paper, we propose a robust feature-extraction-based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies entropy-calculation-based filtering. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
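
    In the standard Chen and Wornell formulation, DC-DM extends plain dither modulation (QIM) by adding back a fraction (1 - alpha) of the quantisation error. The numpy sketch below implements only the plain dither-modulation embed/decode cycle; the step size, dither values and host samples are illustrative assumptions:

    ```python
    import numpy as np

    delta = 4.0                                  # quantiser step size (assumed)
    dither = {0: 0.0, 1: delta / 2}              # one dither offset per bit value

    def embed(host, bits):
        """Quantise each host sample onto the lattice selected by its bit."""
        out = np.empty_like(host)
        for i, (x, b) in enumerate(zip(host, bits)):
            d = dither[b]
            out[i] = delta * np.round((x - d) / delta) + d
        return out

    def extract(marked):
        """Decode each sample to the bit whose dithered lattice lies closest."""
        bits = []
        for x in marked:
            dists = {b: abs(x - (delta * np.round((x - d) / delta) + d))
                     for b, d in dither.items()}
            bits.append(min(dists, key=dists.get))
        return bits

    host = np.array([10.3, -2.7, 5.1, 0.4])
    bits = [1, 0, 1, 1]
    marked = embed(host, bits)
    print("recovered:", extract(marked), "embedding distortion:", marked - host)
    ```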

  2. Business Training and Education Needs of Chiropractors

    PubMed Central

    Henson, Steve W; Pressley, Milton; Korfmann, Scott

    2008-01-01

    Objective: This report is an examination of the perceived need for business skills among chiropractors. Methods: An online survey was completed by 64 chiropractors. They assessed the need for business skills and current levels of business skills. Using this information, gaps in business skills are identified. Results: The need for business skills is broad, encompassing all major business functions. Existing business skills are well below needed levels. Conclusion: The chiropractic profession needs significantly greater business and practice management skills. The existing gap between needed business skills and existing skills suggests that current training and education programs are not providing adequate business skills training. PMID:19043535

  3. Global existence and finite time blow-up for a class of thin-film equation

    NASA Astrophysics Data System (ADS)

    Dong, Zhihua; Zhou, Jun

    2017-08-01

    This paper deals with a class of thin-film equation considered in Li et al. (Nonlinear Anal Theory Methods Appl 147:96-109, 2016), where the case of low initial energy (J(u_0) ≤ d, with d a positive constant) was discussed and conditions for global existence or blow-up were given. We extend the results of that paper in two aspects: firstly, we consider the upper and lower bounds of the blow-up time and the asymptotic behavior when J(u_0) < d.

  4. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy

    PubMed Central

    Zhang, Lina; Zhang, Chengjin; Gao, Rui; Yang, Runtao; Song, Qing

    2016-01-01

    Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48 with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthews correlation coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc. PMID:27662651
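
    A hedged sketch of the ensemble idea described above, averaging the predictions of several base classifiers. scikit-learn stand-ins replace the exact classifiers named in the abstract (SMO is roughly an SVM trained by sequential minimal optimisation, J48 a C4.5-style tree, NNA a nearest-neighbour rule), and the data are synthetic:

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=400, n_features=40, random_state=0)

    # Soft voting averages the predicted class probabilities of the members.
    ensemble = VotingClassifier(
        estimators=[("rf", RandomForestClassifier(random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("knn", KNeighborsClassifier()),
                    ("tree", DecisionTreeClassifier(random_state=0))],
        voting="soft")

    print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
    ```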

  5. Khater method for nonlinear Sharma-Tasso-Olver (STO) equation of fractional order

    NASA Astrophysics Data System (ADS)

    Bibi, Sadaf; Mohyud-Din, Syed Tauseef; Khan, Umar; Ahmed, Naveed

    In this work, we have implemented a direct method, known as the Khater method, to establish exact solutions of nonlinear partial differential equations of fractional order. The number of solutions provided by this method is greater than that of other traditional methods. Exact solutions of the nonlinear fractional-order Sharma-Tasso-Olver (STO) equation are expressed in terms of kink, travelling wave, periodic and solitary wave solutions. The modified Riemann-Liouville derivative and the fractional complex transform have been used for compatibility with the fractional-order sense. Solutions have been graphically simulated for understanding the physical aspects and importance of the method. A comparative discussion between our established results and those obtained by existing methods is also presented. Our results clearly reveal that the proposed method is an effective, powerful and straightforward technique to work out new solutions of various types of differential equations of non-integer order in the fields of applied sciences and engineering.

  6. Comparison of several methods for estimating low speed stability derivatives

    NASA Technical Reports Server (NTRS)

    Fletcher, H. S.

    1971-01-01

    Methods presented in five different publications have been used to estimate the low-speed stability derivatives of two unpowered airplane configurations. One configuration had unswept lifting surfaces, the other configuration was the D-558-II swept-wing research airplane. The results of the computations were compared with each other, with existing wind-tunnel data, and with flight-test data for the D-558-II configuration to assess the relative merits of the methods for estimating derivatives. The results of the study indicated that, in general, for low subsonic speeds, no one text appeared consistently better for estimating all derivatives.

  7. Methods and optical fibers that decrease pulse degradation resulting from random chromatic dispersion

    DOEpatents

    Chertkov, Michael; Gabitov, Ildar

    2004-03-02

    The present invention provides methods and optical fibers for periodically pinning an actual (random) accumulated chromatic dispersion of an optical fiber to a predicted accumulated dispersion of the fiber through relatively simple modifications of fiber-optic manufacturing methods or retrofitting of existing fibers. If the pinning occurs with sufficient frequency (at a distance less than or equal to a correlation scale), pulse degradation resulting from random chromatic dispersion is minimized. Alternatively, pinning may occur quasi-periodically, i.e., the pinning distance is distributed between approximately zero and approximately two to three times the correlation scale.

  8. Application of the probabilistic approximate analysis method to a turbopump blade analysis. [for Space Shuttle Main Engine

    NASA Technical Reports Server (NTRS)

    Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.

    1990-01-01

    An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.

  9. Single tree biomass modelling using airborne laser scanning

    NASA Astrophysics Data System (ADS)

    Kankare, Ville; Räty, Minna; Yu, Xiaowei; Holopainen, Markus; Vastaranta, Mikko; Kantola, Tuula; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto

    2013-11-01

    Accurate forest biomass mapping methods would provide the means for e.g. detecting bioenergy potential, biofuel and forest-bound carbon. The demand for practical biomass mapping methods at all forest levels is growing worldwide, and viable options are being developed. Airborne laser scanning (ALS) is a promising forest biomass mapping technique, due to its capability of measuring the three-dimensional forest vegetation structure. The objective of the study was to develop new methods for tree-level biomass estimation using metrics derived from ALS point clouds and to compare the results with field references collected using destructive sampling and with existing biomass models. The study area was located in Evo, southern Finland. ALS data was collected in 2009 with a pulse density of approximately 10 pulses/m². Linear models were developed for the following tree biomass components: total, stem wood, living branch and total canopy biomass. ALS-derived geometric and statistical point metrics were used as explanatory variables when creating the models. The root-mean-square errors for total and stem biomass were 26.3% and 28.4% for Scots pine (Pinus sylvestris L.), and 36.8% and 27.6% for Norway spruce (Picea abies (L.) H. Karst.), respectively. The results showed that higher estimation accuracy for all biomass components can be achieved with the models created in this study compared to existing allometric biomass models when ALS-derived height and diameter were used as input parameters. Best results were achieved when adding field-measured diameter and height as inputs to the existing biomass models. The only exceptions to this were the canopy and living branch biomass estimations for spruce. The achieved results are encouraging for the use of ALS-derived metrics in biomass mapping and for further development of the models.

  10. Optimizing the performance of the amphipod, Hyalella azteca, in chronic toxicity tests: Results of feeding studies with various foods and feeding regimes

    EPA Science Inventory

    The freshwater amphipod, Hyalella azteca, is a common organism used for sediment toxicity testing. Standard methods for 10-d and 42-d sediment toxicity tests with H. azteca were last revised and published by USEPA/ASTM in 2000. While Hyalella azteca methods exist for sediment tox...

  11. One-Year Integrated Mathematics and Mathematics Methods Course for Prospective Elementary School Teachers.

    ERIC Educational Resources Information Center

    Springer, George

    This guide describes the content of a proposed mathematics course for prospective elementary school teachers. It is the result of a two-year study at Indiana University in which three existing courses were integrated and coordinated. For each unit of instruction, there are (1) remarks for motivation of study, (2) remarks on methods of teaching,…

  12. Dynamic Bayesian Networks as a Probabilistic Metamodel for Combat Simulations

    DTIC Science & Technology

    2014-09-18

    test is commonly used for large data sets and is the method of comparison presented in Section 5.5. 4.3.3 Kullback-Leibler Divergence Goodness of Fit ... methods exist that might improve the results. A goodness-of-fit test using the Kullback-Leibler divergence was proposed in the first paper, but still ... Kullback-Leibler Divergence Goodness of Fit Test ...

  13. Student's Perceptions of Quality Learning in a Malaysian University--A Mixed Method Approach

    ERIC Educational Resources Information Center

    Choy, S. Chee; Yim, Joanne Sau-Ching; Tan, Poh Leong

    2017-01-01

    Purpose: This paper aims to examine students' perceptions of quality learning using a mixed-methods approach in a Malaysian university, with an aim to fill existing knowledge gaps in the literature on relationships among relevant quality variables. The study also assesses the extent to which detailed results from a few participants can be…

  14. Assessing biodiversity on the farm scale as basis for ecosystem service payments.

    PubMed

    von Haaren, Christina; Kempa, Daniela; Vogel, Katrin; Rüter, Stefan

    2012-12-30

    Ecosystem service payments must be based on a standardised, transparent assessment of the goods and services provided. This is especially relevant in the context of EU agri-environmental programs, but also for organic-food companies that foster environmental services on their contractor farms. Addressing the farm scale is important because land users/owners are major recipients of payments and they could be more involved in data generation and conservation management. No standardised system for measuring on-farm biodiversity yet exists that concentrates on performance indicators and includes farmers in generating information. A method is required that produces ordinal or metric scaled assessment results as well as management measures. Another requirement is ease of application, which includes ease of gathering input data and understandability. In order to respond to this need, we developed a method designed for automated application in an open source farm assessment system named MANUELA. The method produces an ordinal-scale assessment of biodiversity that includes biotopes, species, biotope connectivity and the influence of land use. In addition, specific measures for biotope types are proposed. The open source geographical information system OpenJump is used for the implementation of MANUELA. The results of the trial applications and robustness tests show that the assessment can be implemented, for the most part, using existing information as well as data available from farmers or advisors. The results are more sensitive for showing on-farm achievements and changes than existing biotope-type classifications. Such a differentiated classification is needed as a basis for ecosystem service payments and for designing effective measures. The robustness of the results with respect to biotope connectivity is comparable to that of complex models, but it should be further improved. Interviews with the test farmers substantiate that the assessment methods can be implemented on farms and that they are understood by farmers. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. De novo protein structure prediction by dynamic fragment assembly and conformational space annealing.

    PubMed

    Lee, Juyong; Lee, Jinhyuk; Sasaki, Takeshi N; Sasai, Masaki; Seok, Chaok; Lee, Jooyoung

    2011-08-01

    Ab initio protein structure prediction is a challenging problem that requires both an accurate energetic representation of a protein structure and an efficient conformational sampling method for successful protein modeling. In this article, we present an ab initio structure prediction method which combines a recently suggested novel way of fragment assembly, dynamic fragment assembly (DFA) and conformational space annealing (CSA) algorithm. In DFA, model structures are scored by continuous functions constructed based on short- and long-range structural restraint information from a fragment library. Here, DFA is represented by the full-atom model by CHARMM with the addition of the empirical potential of DFIRE. The relative contributions between various energy terms are optimized using linear programming. The conformational sampling was carried out with CSA algorithm, which can find low energy conformations more efficiently than simulated annealing used in the existing DFA study. The newly introduced DFA energy function and CSA sampling algorithm are implemented into CHARMM. Test results on 30 small single-domain proteins and 13 template-free modeling targets of the 8th Critical Assessment of protein Structure Prediction show that the current method provides comparable and complementary prediction results to existing top methods. Copyright © 2011 Wiley-Liss, Inc.

  16. An optical method for characterizing carbon content in ceramic pot filters.

    PubMed

    Goodwin, J Y; Elmore, A C; Salvinelli, C; Reidmeyer, Mary R

    2017-08-01

    Ceramic pot filter (CPF) technology is a relatively common means of household water treatment in developing areas, and performance characteristics of CPFs have been characterized using production CPFs, experimental CPFs fabricated in research laboratories, and ceramic disks intended to be CPF surrogates. There is evidence that CPF manufacturers do not always fire their products according to best practices, and the result is incomplete combustion of the pore-forming material and the creation of a carbon core in the final CPFs. Researchers seldom acknowledge the potential existence of carbon cores, and at least one CPF producer has postulated that the carbon may be beneficial in terms of final water quality because of the presence of activated carbon in consumer filters marketed in the Western world. An initial step in characterizing the presence and impact of carbon cores is the characterization of those cores. An optical method, which may be more viable for producers than off-site laboratory analysis of carbon content, has been developed and verified. The use of the optical method is demonstrated via preliminary disinfection and flowrate studies, and the results of these studies indicate that the method may be of use in studying production kiln operation.

  17. Combining Nordtest method and bootstrap resampling for measurement uncertainty estimation of hematology analytes in a medical laboratory.

    PubMed

    Cui, Ming; Xu, Lili; Wang, Huimin; Ju, Shaoqing; Xu, Shuizhu; Jing, Rongrong

    2017-12-01

    Measurement uncertainty (MU) is a metrological concept, which can be used for objectively estimating the quality of test results in medical laboratories. The Nordtest guide recommends an approach that uses both internal quality control (IQC) and external quality assessment (EQA) data to evaluate the MU. Bootstrap resampling is employed to simulate the unknown distribution based on the mathematical statistics method using an existing small sample of data, where the aim is to transform the small sample into a large sample. However, there have been no reports of the utilization of this method in medical laboratories. Thus, this study applied the Nordtest guide approach based on bootstrap resampling for estimating the MU. We estimated the MU for the white blood cell (WBC) count, red blood cell (RBC) count, hemoglobin (Hb), and platelets (Plt). First, we used 6 months of IQC data and 12 months of EQA data to calculate the MU according to the Nordtest method. Second, we combined the Nordtest method and bootstrap resampling with the quality control data and calculated the MU using MATLAB software. We then compared the MU results obtained using the two approaches. The expanded uncertainty results determined for WBC, RBC, Hb, and Plt using the bootstrap resampling method were 4.39%, 2.43%, 3.04%, and 5.92%, respectively, and 4.38%, 2.42%, 3.02%, and 6.00% with the existing quality control data (U [k=2]). For WBC, RBC, Hb, and Plt, the differences between the results obtained using the two methods were lower than 1.33%. The expanded uncertainty values were all less than the target uncertainties. The bootstrap resampling method allows the statistical analysis of the MU. Combining the Nordtest method and bootstrap resampling is considered a suitable alternative method for estimating the MU. Copyright © 2017 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
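
    A hedged numpy sketch of the bootstrap step described above: resampling a small set of IQC results to estimate the within-laboratory component of uncertainty. The Nordtest-style combination with an EQA bias component and the coverage factor k = 2 follow the guide's general recipe; the data, sample sizes and bias figure are invented stand-ins:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    iqc = np.array([6.1, 5.9, 6.3, 6.0, 5.8, 6.2, 6.1, 6.0])  # stand-in IQC values

    # Bootstrap the relative standard deviation (u_Rw) from the small sample.
    n_boot = 10_000
    rsds = np.empty(n_boot)
    for i in range(n_boot):
        s = rng.choice(iqc, size=iqc.size, replace=True)
        rsds[i] = s.std(ddof=1) / s.mean()
    u_rw = 100 * rsds.mean()              # within-lab component, in %

    u_bias = 1.5                          # stand-in bias component from EQA, in %
    u_c = np.hypot(u_rw, u_bias)          # combined standard uncertainty
    print(f"expanded uncertainty U (k=2): {2 * u_c:.2f}%")
    ```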

  18. A retrospective evaluation method for in vitro mammalian genotoxicity tests using cytotoxicity index transformation formulae.

    PubMed

    Fujita, Yurika; Kasamatsu, Toshio; Ikeda, Naohiro; Nishiyama, Naohiro; Honda, Hiroshi

    2016-01-15

    Although in vitro chromosomal aberration tests and micronucleus tests have been widely used for genotoxicity evaluation, false-positive results have been reported under strong cytotoxic conditions. To reduce false-positive results, the new Organization for Economic Co-operation and Development (OECD) test guideline (TG) recommends the use of a new cytotoxicity index, relative increase in cell count or relative population doubling (RICC/RPD), instead of the traditionally used index, relative cell count (RCC). Although the use of the RICC/RPD may result in different outcomes and require re-evaluation of tested substances, it is impractical to re-evaluate all existing data. Therefore, we established a method to estimate test results from existing RCC data. First, we developed formulae to estimate RICC/RPD from RCC without cell counts by considering cell doubling time and experiment time. Next, the accuracy of the cytotoxicity index transformation formulae was verified by comparing estimated RICC/RPD and measured RICC/RPD for 3 major chemicals associated with false-positive genotoxicity test results: ethyl acrylate, eugenol and p-nitrophenol. Moreover, 25 compounds with false-positive in vitro chromosomal aberration (CA) test results were re-evaluated to establish a retrospective evaluation method based on derived estimated RICC/RPD values. The estimated RICC/RPD values were in good agreement with the measured RICC/RPD values for every concentration and chemical, and the estimated RICC suggested the possibility that 12 chemicals (48%) with previously judged false-positive results in fact had negative results. Our method enables transformation of RCC data into RICC/RPD values with a high degree of accuracy and will facilitate comprehensive retrospective evaluation of test results. Copyright © 2015 Elsevier B.V. All rights reserved.
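
    The abstract does not reproduce the transformation formulae, so the following Python sketch is only a plausible reconstruction from the standard RICC and RCC definitions, using the control doubling time and exposure duration as described; it is not necessarily the authors' exact formula:

    ```python
    def estimate_ricc(rcc_percent, doubling_time_h, exposure_time_h):
        """RCC = 100 * N_treated / N_control at harvest.
        Assume untreated cells double every `doubling_time_h`, so the control
        grows from N0 to N0 * 2**D with D = exposure_time_h / doubling_time_h.
        RICC compares *increases* in cell number rather than final counts."""
        d = exposure_time_h / doubling_time_h
        growth = 2.0 ** d                 # control final count / initial count
        n_treated = (rcc_percent / 100.0) * growth
        return 100.0 * (n_treated - 1.0) / (growth - 1.0)

    # Example: an RCC of 60% after 24 h with a 13 h doubling time gives a
    # lower RICC, i.e. RCC understates the cytotoxicity.
    print(f"estimated RICC: {estimate_ricc(60, 13, 24):.1f}%")
    ```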

  19. A Different Web-Based Geocoding Service Using Fuzzy Techniques

    NASA Astrophysics Data System (ADS)

    Pahlavani, P.; Abbaspour, R. A.; Zare Zadiny, A.

    2015-12-01

    Geocoding - the process of finding a position based on descriptive data such as an address or postal code - is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps offer geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when using them. Existing geocoding services do not model the concept of proximity and nearness appropriately, and they search for an address only by address matching based on descriptive data. There are also limitations in how search results are displayed. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques with the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides several capabilities for users, such as the ability to search multi-part addresses, to search for places based on their location, non-point representation of results, and display of search results based on their priority.
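
    A hedged sketch of the fuzzy-nearness idea: each reference place induces a fuzzy "distance map" via a decreasing membership function, and the maps for a multi-part query are combined with a fuzzy overlay (here the minimum, i.e. fuzzy AND). The grid, places and membership shape are illustrative assumptions, not the paper's exact functions:

    ```python
    import numpy as np

    def nearness(dist, half_width=300.0):
        """Membership in [0, 1] that decays with distance (metres)."""
        return 1.0 / (1.0 + (dist / half_width) ** 2)

    # A small grid of candidate locations and two reference places,
    # e.g. "near the station AND near the park".
    xs, ys = np.meshgrid(np.arange(0, 1000, 10), np.arange(0, 1000, 10))
    station = (200.0, 800.0)
    park = (700.0, 650.0)

    d_station = np.hypot(xs - station[0], ys - station[1])
    d_park = np.hypot(xs - park[0], ys - park[1])

    # Fuzzy overlay: intersect the two fuzzy distance maps.
    score = np.minimum(nearness(d_station), nearness(d_park))
    best = np.unravel_index(np.argmax(score), score.shape)
    print("best candidate cell:", (xs[best], ys[best]),
          "score:", round(float(score[best]), 3))
    ```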

  20. Approximate convective heating equations for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Zoby, E. V.; Moss, J. N.; Sutton, K.

    1979-01-01

    Laminar and turbulent heating-rate equations appropriate for engineering predictions of the convective heating rates about blunt reentry spacecraft at hypersonic conditions are developed. The approximate methods are applicable to both nonreacting and reacting gas mixtures for either constant or variable-entropy edge conditions. A procedure which accounts for variable-entropy effects and is not based on mass balancing is presented. Results of the approximate heating methods are in good agreement with existing experimental results as well as boundary-layer and viscous-shock-layer solutions.

  1. Directional virtual backbone based data aggregation scheme for Wireless Visual Sensor Networks.

    PubMed

    Zhang, Jing; Liu, Shi-Jian; Tsai, Pei-Wei; Zou, Fu-Min; Ji, Xiao-Rong

    2018-01-01

    Data gathering is a fundamental task in Wireless Visual Sensor Networks (WVSNs). Features of directional antennas and the visual data make WVSNs more complex than the conventional Wireless Sensor Network (WSN). The virtual backbone is a technique capable of constructing clusters; the version associated with the aggregation operation is also referred to as the virtual backbone tree. In most of the existing literature, the main focus is on the efficiency brought by the construction of clusters, while local-balance problems are generally neglected. To fill this gap, a Directional Virtual Backbone based Data Aggregation Scheme (DVBDAS) for WVSNs is proposed in this paper. In addition, a measurement called the energy consumption density is proposed for evaluating the adequacy of results in cluster-based construction problems. Moreover, a directional virtual backbone construction scheme is proposed that considers the local-balance factor. Furthermore, the associated network coding mechanism is utilized to construct DVBDAS. Finally, both a theoretical analysis of the proposed DVBDAS and simulations are given to evaluate its performance. The experimental results prove that the proposed DVBDAS achieves higher performance than existing methods in terms of both energy preservation and network lifetime extension.

  2. Molecular Mechanisms of Zinc Oxide Nanoparticle-Induced Genotoxicity

    PubMed Central

    Scherzad, Agmal; Meyer, Till; Kleinsasser, Norbert

    2017-01-01

    Background: Zinc oxide nanoparticles (ZnO NPs) are among the most frequently applied nanomaterials in consumer products. Evidence exists regarding the cytotoxic effects of ZnO NPs in mammalian cells; however, knowledge about the potential genotoxicity of ZnO NPs is scarce, and the results presented in the current literature are inconsistent. Objectives: The aim of this review is to summarize the existing data regarding the DNA damage that ZnO NPs induce, with a focus on the possible molecular mechanisms underlying genotoxic events. Methods: Electronic literature databases were systematically searched for studies reporting on the genotoxicity of ZnO NPs. Results: Several methods and different endpoints demonstrate the genotoxic potential of ZnO NPs. Most publications describe in vitro assessments of the oxidative DNA damage triggered by dissolved Zn²⁺ ions, and most genotoxicological investigations of ZnO NPs address acute exposure situations. Conclusion: Existing evidence indicates that ZnO NPs possibly have the potential to damage DNA. However, there is a lack of long-term exposure experiments that would clarify the intracellular bioaccumulation of ZnO NPs and the possible mechanisms of DNA repair and cell survival. PMID:29240707

  3. The use of periodization in exercise prescriptions for inactive adults: A systematic review

    PubMed Central

    Strohacker, Kelley; Fazzino, Daniel; Breslin, Whitney L.; Xu, Xiaomeng

    2015-01-01

    Background Periodization of exercise is a method typically used in sports training, but the impact of periodized exercise on health outcomes in untrained adults is unclear. Purpose This review aims to summarize existing research wherein aerobic or resistance exercise was prescribed to inactive adults using a recognized periodization method. Methods A search of relevant databases, conducted between January and February of 2014, yielded 21 studies published between 2000 and 2013 that assessed the impact of periodized exercise on health outcomes in untrained participants. Results Substantial heterogeneity existed between studies, even under the same periodization method. Compared to baseline values or non-training control groups, prescribing periodized resistance or aerobic exercise yielded significant improvements in health outcomes related to traditional and emerging risk factors for cardiovascular disease, low-back and neck/shoulder pain, disease severity, and quality of life, with mixed results for increasing bone mineral density. Conclusions Although it is premature to conclude that periodized exercise is superior to non-periodized exercise for improving health outcomes, periodization appears to be a feasible means of prescribing exercise to inactive adults within an intervention setting. Further research is necessary to understand the effectiveness of periodizing aerobic exercise, the psychological effects of periodization, and the feasibility of implementing flexible non-linear methods. PMID:26844095

  4. Impact-acoustics inspection of tile-wall bonding integrity via wavelet transform and hidden Markov models

    NASA Astrophysics Data System (ADS)

    Luk, B. L.; Liu, K. P.; Tong, F.; Man, K. F.

    2010-05-01

    The impact-acoustics method utilizes the information contained in the acoustic signals generated by tapping a structure with a small metal object. It offers a convenient and cost-efficient way to inspect tile-wall bonding integrity. However, surface irregularities cause abnormal multiple bounces in practical inspections, and the spectral characteristics of those bounces can easily be confused with the signals obtained from different bonding qualities. As a result, classic frequency-domain feature-based classification methods deteriorate. Another crucial practical difficulty is additive environmental noise, which may also cause feature mismatch and false judgments. To solve these problems, the work described in this paper develops a robust inspection method that applies a model-based strategy, utilizing wavelet-domain features with hidden Markov modeling. It derives a bonding-integrity recognition approach with enhanced immunity to surface roughness as well as environmental noise. With the help of specially designed artificial sample slabs, experiments have been carried out with impact-acoustic signals contaminated by real environmental noises acquired under practical inspection conditions. The results are compared with those of the classic method to demonstrate the effectiveness of the proposed approach.
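
    A minimal sketch of a wavelet-feature/hidden-Markov-model pipeline of the kind described above, assuming the pywt and hmmlearn packages; the sub-band energy features, frame count, and model sizes are illustrative assumptions rather than the authors' exact design, with one HMM trained per bonding class.

    ```python
    import numpy as np
    import pywt
    from hmmlearn import hmm

    def wavelet_features(frame, wavelet="db4", level=4):
        """Log-energy of each wavelet sub-band as a feature vector."""
        coeffs = pywt.wavedec(frame, wavelet, level=level)
        return np.array([np.log(np.sum(c ** 2) + 1e-12) for c in coeffs])

    def train_class_model(signals, n_states=3):
        """One HMM per bonding class, trained on sequences of frame features."""
        feats = [np.array([wavelet_features(f) for f in np.array_split(s, 8)])
                 for s in signals]
        X = np.vstack(feats)
        lengths = [len(f) for f in feats]
        model = hmm.GaussianHMM(n_components=n_states,
                                covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        return model

    def classify(signal, models):
        """Pick the class whose HMM gives the highest log-likelihood."""
        obs = np.array([wavelet_features(f) for f in np.array_split(signal, 8)])
        return max(models, key=lambda label: models[label].score(obs))

    # Usage with hypothetical signals: one HMM per bonding quality.
    good = [np.random.randn(1024) for _ in range(20)]
    void = [np.random.randn(1024) * 1.5 for _ in range(20)]
    models = {"good": train_class_model(good), "void": train_class_model(void)}
    print(classify(np.random.randn(1024), models))
    ```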

  5. Solutions of the two-dimensional Hubbard model: Benchmarks and results from a wide range of numerical algorithms

    DOE PAGES

    LeBlanc, J. P. F.; Antipov, Andrey E.; Becca, Federico; ...

    2015-12-14

    Numerical results for ground-state and excited-state properties (energies, double occupancies, and Matsubara-axis self-energies) of the single-orbital Hubbard model on a two-dimensional square lattice are presented, in order to provide an assessment of our ability to compute accurate results in the thermodynamic limit. Many methods are employed, including auxiliary-field quantum Monte Carlo, bare and bold-line diagrammatic Monte Carlo, method of dual fermions, density matrix embedding theory, density matrix renormalization group, dynamical cluster approximation, diffusion Monte Carlo within a fixed-node approximation, unrestricted coupled cluster theory, and multireference projected Hartree-Fock methods. Comparison of results obtained by different methods allows for the identification of uncertainties and systematic errors. The importance of extrapolation to converged thermodynamic-limit values is emphasized. Furthermore, cases where agreement between different methods is obtained establish benchmark results that may be useful in the validation of new approaches and the improvement of existing methods.

  6. Identification and confirmation of chemical residues by chromatography-mass spectrometry and other techniques

    USDA-ARS?s Scientific Manuscript database

    A quantitative answer cannot exist in an analysis without a qualitative component to give enough confidence that the result meets the analytical needs for the analysis (i.e. the result relates to the analyte and not something else). Just as a quantitative method must typically undergo an empirical ...

  7. Multilabel learning via random label selection for protein subcellular multilocations prediction.

    PubMed

    Wang, Xiao; Li, Guo-Zheng

    2013-01-01

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, namely transforming the multilocation proteins into multiple proteins with single locations, which does not take correlations among different subcellular locations into account. In this paper, a novel method named random label selection (RALS) (multilabel learning via RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which considers label correlations, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.
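
    A minimal sketch of the label-augmentation idea described above, assuming scikit-learn; the base classifier, the number of randomly selected labels, and the two-stage prediction scheme (plain binary relevance models supply the augmented labels at test time) are illustrative assumptions rather than the authors' exact algorithm.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def fit_rals(X, Y, n_aug):
        """Binary relevance with randomly selected labels appended as features.

        X: (n_samples, n_features); Y: (n_samples, n_labels) binary matrix.
        """
        n_labels = Y.shape[1]
        aug = rng.choice(n_labels, size=n_aug, replace=False)  # random label subset
        X_aug = np.hstack([X, Y[:, aug]])                      # augmented feature space
        models = [LogisticRegression(max_iter=1000).fit(X_aug, Y[:, j])
                  for j in range(n_labels)]
        # First-stage BR models (plain features) estimate the augmented labels at test time.
        stage1 = [LogisticRegression(max_iter=1000).fit(X, Y[:, j]) for j in aug]
        return models, stage1, aug

    def predict_rals(X, models, stage1, aug):
        Y_aug = np.column_stack([m.predict(X) for m in stage1])
        X_aug = np.hstack([X, Y_aug])
        return np.column_stack([m.predict(X_aug) for m in models])

    # Usage with hypothetical data: 200 proteins, 50 features, 6 location labels.
    X = rng.normal(size=(200, 50))
    Y = (rng.random((200, 6)) < 0.3).astype(int)
    models, stage1, aug = fit_rals(X, Y, n_aug=2)
    print(predict_rals(X[:5], models, stage1, aug))
    ```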

  8. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs; the unused multiple invariances (MIs) should be exploited simultaneously to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array, and better DOA estimation is achieved by minimising this fitness function. Moreover, the effectiveness of both Newton's method and the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, fewer snapshots, closely spaced sources, and high signal and noise correlation. It is also observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.

  9. Double temporal sparsity based accelerated reconstruction of compressively sensed resting-state fMRI.

    PubMed

    Aggarwal, Priya; Gupta, Anubha

    2017-12-01

    A number of reconstruction methods have been proposed recently for accelerated functional Magnetic Resonance Imaging (fMRI) data collection. However, existing methods suffer from greater artifacts at high acceleration factors. This paper addresses the issue of accelerating fMRI collection via undersampled k-space measurements combined with a method based on l1-l1 norm constraints, wherein we impose the first l1-norm sparsity on the voxel time series (temporal data) in the transformed domain and the second l1-norm sparsity on the successive differences of the same temporal data. Hence, we name the proposed method the Double Temporal Sparsity based Reconstruction (DTSR) method. The robustness of the proposed DTSR method has been thoroughly evaluated both at the subject level and at the group level on real fMRI data. Results are presented at various acceleration factors. Quantitative analysis in terms of Peak Signal-to-Noise Ratio (PSNR) and other metrics, and qualitative analysis in terms of reproducibility of brain Resting State Networks (RSNs), demonstrate that the proposed method is accurate and robust. In addition, the proposed DTSR method preserves brain networks that are important for studying fMRI data. Compared to existing methods, the DTSR method shows promising potential, with an improvement of 10-12 dB in PSNR at acceleration factors up to 3.5 on resting-state fMRI data. Simulation results on real data demonstrate that the DTSR method can be used to acquire accelerated fMRI with accurate detection of RSNs.
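
    A plausible form of the l1-l1 objective described in the abstract, written out under stated assumptions: F_u denotes the undersampled k-space (Fourier) operator, \Psi a sparsifying transform for the voxel time series, and D the temporal finite-difference operator; the exact operators and weights are not specified in the abstract.

    ```latex
    \min_{x}\; \tfrac{1}{2}\,\|F_u x - y\|_2^2
              \;+\; \lambda_1 \,\|\Psi x\|_1
              \;+\; \lambda_2 \,\|D x\|_1
    ```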

  10. A simplified method in comparison with comprehensive interaction incremental dynamic analysis to assess seismic performance of jacket-type offshore platforms

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.

    2015-12-01

    The primary goal of seismic reassessment procedures in oil-platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOPs) in regimes ranging from near-elastic response to global collapse. The simplified-method curve exploits the good agreement between the static pushover (SPO) curve and the entire summarized comprehensive interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome these challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. An existing JTOP in the Persian Gulf is then presented to illustrate the procedure, and finally a comparison is made between the simplified method and CI-IDA results. The simplified method is informative and practical for current engineering purposes; it is able to predict seismic performance from near-elastic response to global dynamic instability with reasonable accuracy and little computational effort.

  11. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest-descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in reconstruction quality and reconstruction time.
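
    A minimal sketch of an SL0-style iteration with the tanh surrogate mentioned above; the sigma schedule, step size, and projection step follow the standard SL0 template, and the paper's Newton-direction update is replaced here by the plain gradient step for brevity.

    ```python
    import numpy as np

    def sl0_tanh(A, y, sigma_min=1e-3, sigma_decrease=0.7, mu=1.0, inner=3):
        """Smoothed-l0 recovery of sparse x from y = A x (underdetermined).

        Approximates ||x||_0 by sum(tanh(x^2 / (2 sigma^2))) and lowers sigma
        gradually; each inner step moves against the surrogate's gradient and
        projects back onto the affine constraint set {x : A x = y}.
        """
        A_pinv = np.linalg.pinv(A)
        x = A_pinv @ y                       # minimum-l2-norm starting point
        sigma = 2.0 * np.max(np.abs(x))
        while sigma > sigma_min:
            for _ in range(inner):
                grad = (x / sigma ** 2) / np.cosh(x ** 2 / (2 * sigma ** 2)) ** 2
                x = x - mu * sigma ** 2 * grad      # descent on the tanh surrogate
                x = x - A_pinv @ (A @ x - y)        # projection onto A x = y
            sigma *= sigma_decrease
        return x

    # Usage with hypothetical data: recover a 10-sparse vector from 80 measurements.
    rng = np.random.default_rng(1)
    n, m, k = 200, 80, 10
    A = rng.normal(size=(m, n)) / np.sqrt(m)
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
    x_hat = sl0_tanh(A, A @ x_true)
    print("reconstruction error:", np.linalg.norm(x_hat - x_true))
    ```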

  12. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods resort to assumptions or approximations; no general exact method exists for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples: an electrostatic actuator with three electrodes and a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method should help promote the application of conformal mapping to the analysis of practical problems. PMID:27830746

  13. WE-FG-207B-05: Iterative Reconstruction Via Prior Image Constrained Total Generalized Variation for Spectral CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, S; Zhang, Y; Ma, J

    Purpose: To investigate iterative reconstruction via prior image constrained total generalized variation (PICTGV) for spectral computed tomography (CT), using fewer projections while achieving greater image quality. Methods: The proposed PICTGV method is formulated as an optimization problem that balances data fidelity and the prior image constrained total generalized variation of the reconstructed images in one framework. The method exploits structure correlations among images in the energy domain and uses high-quality images to guide the reconstruction of energy-specific images. In the PICTGV method, the high-quality image is reconstructed from all detector-collected X-ray signals and is referred to as the broad-spectrum image. Distinct from existing reconstruction methods applied to images with a first-order derivative, the higher-order derivative of the images is incorporated into the PICTGV method. An alternating optimization algorithm is used to minimize the PICTGV objective function. We evaluate the performance of PICTGV for noise and artifact suppression using phantom studies and compare the method with the conventional filtered back-projection method as well as a TGV-based method without a prior image. Results: On the digital phantom, the proposed method outperforms the existing TGV method in terms of noise reduction, artifact suppression, and edge detail preservation. Compared to the TGV-based method without a prior image, the relative root mean square error of the images reconstructed by the proposed method is reduced by over 20%. Conclusion: The authors propose iterative reconstruction via prior image constrained total generalized variation for spectral CT, develop an alternating optimization algorithm, and numerically demonstrate the merits of the approach. Results show that the proposed PICTGV method outperforms the TGV method for spectral CT.

  14. From empirical data to time-inhomogeneous continuous Markov processes.

    PubMed

    Lencastre, Pedro; Raischel, Frank; Rogers, Tim; Lind, Pedro G

    2016-03-01

    We present an approach for testing for the existence of continuous generators of discrete stochastic transition matrices. Typically, existing methods to ascertain the existence of continuous Markov processes are based on the assumption that only time-homogeneous generators exist. Here a systematic extension to time inhomogeneity is presented, based on new mathematical propositions incorporating necessary and sufficient conditions, which are then implemented computationally and applied to numerical data. A discussion concerning the bridge between rigorous mathematical results on the existence of generators and their computational implementation is presented. Our detection algorithm proves effective in more than 60% of tested matrices, typically 80% to 90%, and for those matrices an estimate of the (nonhomogeneous) generator matrix follows. We also solve the embedding problem analytically for the particular case of three-dimensional circulant matrices. Finally, possible applications of our framework to problems in different fields are briefly discussed.
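
    A minimal sketch of the time-homogeneous special case, assuming SciPy: take the principal matrix logarithm of an empirical transition matrix and check the Markov-generator conditions (rows summing to zero, nonnegative off-diagonal entries); the paper's time-inhomogeneous extension is not reproduced here.

    ```python
    import numpy as np
    from scipy.linalg import logm

    def generator_if_exists(P, tol=1e-8):
        """Return a generator Q with expm(Q) = P if the principal matrix
        logarithm satisfies the Markov-generator conditions, else None."""
        Q = logm(P).real
        rows_ok = np.allclose(Q.sum(axis=1), 0.0, atol=tol)
        off_diag = Q[~np.eye(len(Q), dtype=bool)]
        if rows_ok and np.all(off_diag >= -tol):
            return Q
        return None

    # Usage: a 3x3 circulant transition matrix (the case solved analytically above).
    P = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.7, 0.2],
                  [0.2, 0.1, 0.7]])
    Q = generator_if_exists(P)
    print("embeddable" if Q is not None else "no valid generator")
    print(Q)
    ```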

  15. Optimization for routing vehicles of seafood product transportation

    NASA Astrophysics Data System (ADS)

    Soenandi, I. A.; Juan, Y.; Budi, M.

    2017-12-01

    Recently, increasing use of marine products has created new challenges for marine-product businesses in terms of the transportation used to carry products such as seafood to the main warehouse. This becomes a problem when the carrier fleet is limited and there are time constraints related to the freshness of the product. There are many ways to address this problem, including the optimization of vehicle routing. In this study, this strategy is implemented for a marine-product business in Indonesia, with the aim of helping the company optimize its transportation routing under time and capacity windows. Until now, the company has not used a scientific method to manage the routing of its vehicles from the warehouse to the sources of marine products. This study solves a stochastic Vehicle Routing Problem (VRP) with time and capacity windows by comparing six methods and identifying the best result for the optimization, so that the company can choose the method best suited to its existing conditions. In this research, we compared the optimization across methods such as branch and bound, dynamic programming, and Ant Colony Optimization (ACO). The best result was obtained by running the ACO algorithm with existing travel-time data: ACO reduced vehicle travel time by 3189.65 minutes, about 23% less than the existing routing, subject to a time constraint of 2 days (including rest time for the driver), using trucks of 28-ton capacity; the company needs two vehicles for transportation.

  16. How Magnetic Disturbance Influences the Attitude and Heading in Magnetic and Inertial Sensor-Based Orientation Estimation

    PubMed Central

    Li, Qingguo

    2017-01-01

    With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size, and lower in cost, which in turn boosts their application in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbances affect attitude and heading estimation for a magnetic and inertial sensor. First, we review the four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetometer readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and we analyze the features of the existing methods for each component. Second, to understand the role of each component in magnetic disturbance rejection, four representative sensor fusion methods are implemented: gradient descent algorithms, an improved explicit complementary filter, a dual-linear Kalman filter, and an extended Kalman filter. Finally, a new standardized testing procedure is developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods are readily examined, and suggestions are presented for selecting a proper sensor fusion algorithm or developing new ones. PMID:29283432

  17. Corrosion performance tests for reinforcing steel in concrete : test procedures.

    DOT National Transportation Integrated Search

    2009-09-01

    The existing test method to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G109, is labor intensive, time consuming, slow to provide comparative results, and often expensive. However, corrosion of reinforc...

  18. Corrosion performance tests for reinforcing steel in concrete : technical report.

    DOT National Transportation Integrated Search

    2009-10-01

    The existing test method used to assess the corrosion performance of reinforcing steel embedded in concrete, mainly ASTM G 109, is labor intensive, time consuming, slow to provide comparative results, and can be expensive. However, with corrosion...

  19. A Nested PCR Assay to Avoid False Positive Detection of the Microsporidian Enterocytozoon hepatopenaei (EHP) in Environmental Samples in Shrimp Farms

    PubMed Central

    Jaroenlak, Pattana; Sanguanrut, Piyachat; Williams, Bryony A. P.; Stentiford, Grant D.; Flegel, Timothy W.; Sritunyalucksana, Kallaya

    2016-01-01

    Hepatopancreatic microsporidiosis (HPM) caused by Enterocytozoon hepatopenaei (EHP) is an important disease of cultivated shrimp. Heavy infections may lead to retarded growth and unprofitable harvests. Existing PCR detection methods target the EHP small subunit ribosomal RNA (SSU rRNA) gene (SSU-PCR). However, we discovered that they can give false positive test results due to cross reactivity of the SSU-PCR primers with DNA from closely related microsporidia that infect other aquatic organisms. This is problematic for investigating and monitoring EHP infection pathways. To overcome this problem, a sensitive and specific nested PCR method was developed for detection of the spore wall protein (SWP) gene of EHP (SWP-PCR). The new SWP-PCR method did not produce false positive results from closely related microsporidia. The first PCR step of the SWP-PCR method was 100 times (10⁴ plasmid copies per reaction vial) more sensitive than that of the existing SSU-PCR method (10⁶ copies), but sensitivity was equal for both in the nested step (10 copies). Since the hepatopancreas of cultivated shrimp is not currently known to be infected with microsporidia other than EHP, the SSU-PCR methods are still valid for analyzing hepatopancreatic samples despite their lower sensitivity relative to the SWP-PCR method. However, due to its greater specificity and sensitivity, we recommend that the SWP-PCR method be used to screen for EHP in feces, feed and environmental samples for potential EHP carriers. PMID:27832178

  20. A Nested PCR Assay to Avoid False Positive Detection of the Microsporidian Enterocytozoon hepatopenaei (EHP) in Environmental Samples in Shrimp Farms.

    PubMed

    Jaroenlak, Pattana; Sanguanrut, Piyachat; Williams, Bryony A P; Stentiford, Grant D; Flegel, Timothy W; Sritunyalucksana, Kallaya; Itsathitphaisarn, Ornchuma

    2016-01-01

    Hepatopancreatic microsporidiosis (HPM) caused by Enterocytozoon hepatopenaei (EHP) is an important disease of cultivated shrimp. Heavy infections may lead to retarded growth and unprofitable harvests. Existing PCR detection methods target the EHP small subunit ribosomal RNA (SSU rRNA) gene (SSU-PCR). However, we discovered that they can give false positive test results due to cross reactivity of the SSU-PCR primers with DNA from closely related microsporidia that infect other aquatic organisms. This is problematic for investigating and monitoring EHP infection pathways. To overcome this problem, a sensitive and specific nested PCR method was developed for detection of the spore wall protein (SWP) gene of EHP (SWP-PCR). The new SWP-PCR method did not produce false positive results from closely related microsporidia. The first PCR step of the SWP-PCR method was 100 times (10⁴ plasmid copies per reaction vial) more sensitive than that of the existing SSU-PCR method (10⁶ copies), but sensitivity was equal for both in the nested step (10 copies). Since the hepatopancreas of cultivated shrimp is not currently known to be infected with microsporidia other than EHP, the SSU-PCR methods are still valid for analyzing hepatopancreatic samples despite their lower sensitivity relative to the SWP-PCR method. However, due to its greater specificity and sensitivity, we recommend that the SWP-PCR method be used to screen for EHP in feces, feed and environmental samples for potential EHP carriers.

  1. Accuracy of p53 Codon 72 Polymorphism Status Determined by Multiple Laboratory Methods: A Latent Class Model Analysis

    PubMed Central

    Walter, Stephen D.; Riddell, Corinne A.; Rabachini, Tatiana; Villa, Luisa L.; Franco, Eduardo L.

    2013-01-01

    Introduction: Studies on the association of a polymorphism in codon 72 of the p53 tumour suppressor gene (rs1042522) with cervical neoplasia have inconsistent results. While several methods for genotyping p53 exist, they vary in accuracy and are often discrepant. Methods: We used latent class models (LCM) to examine the accuracy of six methods for p53 determination, all conducted by the same laboratory. We also examined the association of p53 with cytological cervical abnormalities, recognising potential test inaccuracy. Results: Pairwise disagreement between laboratory methods occurred approximately 10% of the time. Given the estimated true p53 status of each woman, we found that each laboratory method is most likely to classify a woman into her correct status. Arg/Arg women had the highest risk of squamous intraepithelial lesions (SIL). Test accuracy was independent of cytology, and there was no strong evidence for correlations of test errors. Discussion: Empirical analyses ignore possible laboratory errors and so are inherently biased, whereas test accuracy estimated by the LCM approach is unbiased when the model assumptions are met. LCM analysis avoids ambiguities arising from empirical test discrepancies, obviating the need to regard any one method as a "gold standard" measurement. The methods presented here to analyse the p53 data can be applied in many other situations where multiple tests exist but none is a gold standard. PMID:23441193

  2. Semiparametric methods to contrast gap time survival functions: Application to repeat kidney transplantation.

    PubMed

    Shu, Xu; Schaubel, Douglas E

    2016-06-01

    Times between successive events (i.e., gap times) are of great importance in survival analysis. Although many methods exist for estimating covariate effects on gap times, very few existing methods allow for comparisons between gap times themselves. Motivated by the comparison of primary and repeat transplantation, our interest is specifically in contrasting the gap time survival functions and their integration (restricted mean gap time). Two major challenges in gap time analysis are non-identifiability of the marginal distributions and the existence of dependent censoring (for all but the first gap time). We use Cox regression to estimate the (conditional) survival distributions of each gap time (given the previous gap times). Combining fitted survival functions based on those models, along with multiple imputation applied to censored gap times, we then contrast the first and second gap times with respect to average survival and restricted mean lifetime. Large-sample properties are derived, with simulation studies carried out to evaluate finite-sample performance. We apply the proposed methods to kidney transplant data obtained from a national organ transplant registry. Mean 10-year graft survival of the primary transplant is significantly greater than that of the repeat transplant, by 3.9 months (p=0.023), a result that may lack clinical importance.

  3. Directional Histogram Ratio at Random Probes: A Local Thresholding Criterion for Capillary Images

    PubMed Central

    Lu, Na; Silva, Jharon; Gu, Yu; Gerber, Scott; Wu, Hulin; Gelbard, Harris; Dewhurst, Stephen; Miao, Hongyu

    2013-01-01

    With the development of micron-scale imaging techniques, capillaries can be conveniently visualized using methods such as two-photon and whole mount microscopy. However, the presence of background staining, leaky vessels and the diffusion of small fluorescent molecules can lead to significant complexity in image analysis and loss of information necessary to accurately quantify vascular metrics. One solution to this problem is the development of accurate thresholding algorithms that reliably distinguish blood vessels from surrounding tissue. Although various thresholding algorithms have been proposed, our results suggest that without appropriate pre- or post-processing, the existing approaches may fail to obtain satisfactory results for capillary images that include areas of contamination. In this study, we propose a novel local thresholding algorithm, called directional histogram ratio at random probes (DHR-RP). This method explicitly considers the geometric features of tube-like objects in conducting image binarization, and has a reliable performance in distinguishing small vessels from either clean or contaminated background. Experimental and simulation studies suggest that our DHR-RP algorithm is superior over existing thresholding methods. PMID:23525856

  4. Modern Methods of Rail Welding

    NASA Astrophysics Data System (ADS)

    Kozyrev, Nikolay A.; Kozyreva, Olga A.; Usoltsev, Aleksander A.; Kryukov, Roman E.; Shevchenko, Roman A.

    2017-10-01

    Existing methods of rail welding that make it possible to produce continuous welded rail track are reviewed in this article. Analysis of the existing welding methods allows the issue of continuous rail track to be considered in detail. Metallurgical and welding technologies for rail welding, as well as process technologies that reduce the after-effects of temperature exposure, are important factors determining the quality and reliability of continuous rail track. The analysis also makes it possible to identify research directions for solving this problem.

  5. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
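
    A minimal sketch of the orthogonal Procrustes step described above, assuming NumPy and matched 3D head/feet positions expressed in two cameras' coordinate systems; the RANSAC loop and the reprojection-error refinement are omitted.

    ```python
    import numpy as np

    def procrustes_rigid(P, Q):
        """Least-squares rotation R and translation t with Q ~= R @ P + t.

        P, Q: (3, N) matched 3D points in camera A and camera B coordinates.
        """
        p0, q0 = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
        H = (Q - q0) @ (P - p0).T                       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # enforce a proper rotation
        R = U @ D @ Vt
        t = q0 - R @ p0
        return R, t

    # Usage with hypothetical data: recover a known relative pose from noisy points.
    rng = np.random.default_rng(0)
    P = rng.normal(size=(3, 100))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0],
                       [np.sin(a),  np.cos(a), 0],
                       [0, 0, 1]])
    Q = R_true @ P + np.array([[0.5], [-1.0], [2.0]]) + 0.01 * rng.normal(size=(3, 100))
    R, t = procrustes_rigid(P, Q)
    print(np.allclose(R, R_true, atol=0.05))
    ```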

  6. Predicting protein complexes using a supervised learning method combined with local structural information.

    PubMed

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.

  7. Study on the system-level test method of digital metering in smart substation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Yang, Min; Hu, Juan; Li, Fuchao; Luo, Ruixi; Li, Jinsong; Ai, Bing

    2017-03-01

    Nowadays, the test methods for digital metering systems in smart substations are used to test and evaluate the performance of a single device. These methods can effectively guarantee the accuracy and reliability of the measurement results of a digital metering device in a single run, but they do not fully reflect the performance when the devices operate together as a complete system. This paper introduces the shortcomings of the existing test methods, proposes a system-level test method for digital metering in smart substations, and proves the feasibility of the method by actual tests.

  8. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods are recognized as an interesting technique for solving optimization problems, due to their numerical efficiency, simplicity, and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained optimization, Aus. J. Bas. Appl. Sci. 5 (2011) 947-951). We then show that our method satisfies the sufficient descent condition and converges globally with exact line search. Numerical results show that our proposed method is efficient on the given standard test problems compared to other existing CG methods.
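
    A minimal sketch of a CG iteration of this family on a quadratic test function, where exact line search is available in closed form; the RMIL-type coefficient and the restart safeguard are assumptions modeled on the cited study, not a reproduction of the paper's exact method.

    ```python
    import numpy as np

    def cg_rmil(A, b, x0, tol=1e-8, max_iter=500):
        """Nonlinear-CG-style iteration on f(x) = 0.5 x'Ax - b'x, using an
        RMIL-type coefficient beta = g_new'(g_new - g_old) / ||d||^2
        (assumed form) and the closed-form exact line search for quadratics."""
        x = x0.astype(float)
        g = A @ x - b                       # gradient of f
        d = -g                              # first direction: steepest descent
        for _ in range(max_iter):
            alpha = -(g @ d) / (d @ A @ d)  # exact line search along d
            x = x + alpha * d
            g_new = A @ x - b
            if np.linalg.norm(g_new) < tol:
                break
            beta = g_new @ (g_new - g) / (d @ d)  # RMIL-type coefficient (assumed)
            beta = max(beta, 0.0)                 # restart safeguard
            d = -g_new + beta * d
            g = g_new
        return x

    # Usage: solve a well-conditioned SPD system as an unconstrained problem.
    rng = np.random.default_rng(0)
    M = rng.normal(size=(50, 50))
    A = M @ M.T + 50 * np.eye(50)
    b = rng.normal(size=50)
    x = cg_rmil(A, b, np.zeros(50))
    print("residual:", np.linalg.norm(A @ x - b))
    ```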

  9. Interpretation of ERTS-MSS images of a Savanna area in eastern Colombia

    NASA Technical Reports Server (NTRS)

    Elberson, G. W. W.

    1973-01-01

    The application of ERTS-1 imagery for extrapolating existing soil maps into unmapped areas of the Llanos Orientales of Colombia, South America is discussed. Interpretations of ERTS-1 data were made according to conventional photointerpretation techniques. Most units delineated in the existing reconnaissance soil map at a scale of 1:250,000 could be recognized and delineated in the ERTS image. The methods of interpretation are described and the results obtained for specific areas are analyzed.

  10. Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions

    NASA Astrophysics Data System (ADS)

    Jung, J. Y.; Niemann, J. D.; Greimann, B. P.

    2016-12-01

    Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.

  11. Matrix elements for type 1 unitary irreducible representations of the Lie superalgebra gl(m|n)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gould, Mark D.; Isaac, Phillip S.; Werry, Jason L.

    Using our recent results on eigenvalues of invariants associated with the Lie superalgebra gl(m|n), we apply characteristic identities to derive explicit matrix element formulae for all gl(m|n) generators, particularly non-elementary generators, on finite dimensional type 1 unitary irreducible representations. We compare our results with existing works that deal with only subsets of the class of type 1 unitary representations, all of which present explicit matrix elements only for elementary generators. Our work therefore provides an important extension to existing methods, and thus highlights the strength of our techniques, which exploit the characteristic identities.

  12. Wavenumber selection in Benard convection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Catton, I.

    1988-11-01

    The results of three related studies dealing with wavenumber selection in Rayleigh-Benard convection are reported. The first, an extension of the power integral method, is used to argue for the existence of multiple wavenumbers at all supercritical wavenumbers. Most existing closure schemes are shown to be inadequate. A thermodynamic stability criterion is shown to give reasonable results but requires empirical measurement of one parameter for closure. The third study uses an asymptotic approach based in part on geometric considerations and requires no empiricism to obtain good predictions of the wavenumber. These predictions, however, can only be used for certain planforms of convection.

  13. Three-Dimensional Flow of Nanofluid Induced by an Exponentially Stretching Sheet: An Application to Solar Energy

    PubMed Central

    Khan, Junaid Ahmad; Mustafa, M.; Hayat, T.; Sheikholeslami, M.; Alsaedi, A.

    2015-01-01

    This work deals with the three-dimensional flow of a nanofluid over a bi-directional exponentially stretching sheet. The effects of Brownian motion and thermophoretic diffusion of nanoparticles are considered in the mathematical model. The temperature and nanoparticle volume fraction at the sheet are also distributed exponentially. Local similarity solutions are obtained by an implicit finite difference scheme known as the Keller-box method. The results are compared with existing studies in some limiting cases and found to be in good agreement. The results reveal the existence of interesting Sparrow-Gregg-type hills in the temperature distribution for some ranges of parameter values. PMID:25785857

  14. A sampling framework for incorporating quantitative mass spectrometry data in protein interaction analysis.

    PubMed

    Tucker, George; Loh, Po-Ru; Berger, Bonnie

    2013-10-04

    Comprehensive protein-protein interaction (PPI) maps are a powerful resource for uncovering the molecular basis of genetic interactions and providing mechanistic insights. Over the past decade, high-throughput experimental techniques have been developed to generate PPI maps at proteome scale, first using yeast two-hybrid approaches and more recently via affinity purification combined with mass spectrometry (AP-MS). Unfortunately, data from both protocols are prone to both high false positive and false negative rates. To address these issues, many methods have been developed to post-process raw PPI data. However, with few exceptions, these methods only analyze binary experimental data (in which each potential interaction tested is deemed either observed or unobserved), neglecting quantitative information available from AP-MS such as spectral counts. We propose a novel method for incorporating quantitative information from AP-MS data into existing PPI inference methods that analyze binary interaction data. Our approach introduces a probabilistic framework that models the statistical noise inherent in observations of co-purifications. Using a sampling-based approach, we model the uncertainty of interactions with low spectral counts by generating an ensemble of possible alternative experimental outcomes. We then apply the existing method of choice to each alternative outcome and aggregate results over the ensemble. We validate our approach on three recent AP-MS data sets and demonstrate performance comparable to or better than state-of-the-art methods. Additionally, we provide an in-depth discussion comparing the theoretical bases of existing approaches and identify common aspects that may be key to their performance. Our sampling framework extends the existing body of work on PPI analysis using binary interaction data to apply to the richer quantitative data now commonly available through AP-MS assays. This framework is quite general, and many enhancements are likely possible. Fruitful future directions may include investigating more sophisticated schemes for converting spectral counts to probabilities and applying the framework to direct protein complex prediction methods.
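
    A minimal sketch of the sampling framework described above, under stated assumptions: spectral counts are mapped to interaction probabilities by a hypothetical saturating curve, an ensemble of binary outcomes is drawn, an arbitrary binary-input scoring method is applied to each draw, and the scores are averaged.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def count_to_prob(counts, k=2.0):
        """Map spectral counts to interaction probabilities (illustrative
        saturating curve; the paper's scheme may differ)."""
        return counts / (counts + k)

    def ensemble_scores(counts, binary_method, n_draws=100):
        """Average a binary-data PPI scoring method over sampled outcomes."""
        p = count_to_prob(counts)
        total = np.zeros_like(p, dtype=float)
        for _ in range(n_draws):
            sampled = (rng.random(p.shape) < p).astype(int)  # one alternative outcome
            total += binary_method(sampled)
        return total / n_draws

    # Usage with hypothetical data: a toy "method" that scores each interaction
    # by its co-purification frequency across 5 AP-MS experiments.
    counts = np.array([[0, 1, 9, 0, 2],
                       [5, 4, 6, 7, 3],
                       [0, 0, 1, 0, 0],
                       [8, 9, 7, 9, 8]])
    toy_method = lambda m: m.mean(axis=1, keepdims=True) * np.ones_like(m, dtype=float)
    print(ensemble_scores(counts, toy_method))
    ```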

  15. A collocation-shooting method for solving fractional boundary value problems

    NASA Astrophysics Data System (ADS)

    Al-Mdallal, Qasem M.; Syam, Muhammed I.; Anwar, M. N.

    2010-12-01

    In this paper, we discuss the numerical solution of a special class of fractional boundary value problems of order 2. The solution method is based on combining collocation and spline analysis with a shooting method. A theoretical analysis of the existence and uniqueness of the exact solution for the present class is proven. Two examples involving the Bagley-Torvik equation subject to boundary conditions are also presented; numerical results illustrate the accuracy of the present scheme.

  16. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

    Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames onto selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and the WHT matrix. After the features are extracted, weights are calculated based on the significance of each feature. The weighted features are combined to form a single continuity signal, used as input to a Procedure Based shot transition Identification (PBI) process. Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard cut and gradual transition detection. To evaluate the robustness of the proposed method, a system evaluation is performed. The proposed method yields an F1-score of 97.4% for cuts, 78% for gradual transitions, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than the other existing methods.
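
    A minimal sketch of projecting a frame block onto Walsh-Hadamard basis vectors, assuming SciPy's hadamard kernel; the block size, the choice of retained coefficients, and the distance-based continuity signal are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.linalg import hadamard

    def wht_features(frame_block, n_keep=16):
        """Project a flattened square block onto selected WHT basis vectors.

        frame_block: grayscale block whose flattened length is a power of 2.
        """
        v = frame_block.astype(float).ravel()
        H = hadamard(v.size)          # WHT kernel (entries +1/-1)
        coeffs = (H @ v) / v.size     # projection onto all basis vectors
        return coeffs[:n_keep]        # keep low-order coefficients

    # Frame-to-frame feature distance as a (simplified) continuity signal.
    rng = np.random.default_rng(0)
    f1 = rng.random((16, 16))
    f2 = rng.random((16, 16))
    continuity = np.linalg.norm(wht_features(f1) - wht_features(f2))
    print(continuity)
    ```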

  17. NOTE: Solving the ECG forward problem by means of a meshless finite element method

    NASA Astrophysics Data System (ADS)

    Li, Z. S.; Zhu, S. A.; He, Bin

    2007-07-01

    The conventional numerical computational techniques such as the finite element method (FEM) and the boundary element method (BEM) require laborious and time-consuming model meshing. The new meshless FEM uses only the boundary description and the node distribution, and no meshing of the model is required. This paper presents the fundamentals and implementation of the meshless FEM, which is adapted to solve the electrocardiography (ECG) forward problem. The method is evaluated on a single-layer torso model, for which an analytical solution exists, and tested on a realistic-geometry homogeneous torso model, with satisfactory results obtained. The present results suggest that the meshless FEM may provide an alternative for ECG forward solutions.

  18. Modified harmonic balance method for the solution of nonlinear jerk equations

    NASA Astrophysics Data System (ADS)

    Rahman, M. Saifur; Hasan, A. S. M. Z.

    2018-03-01

    In this paper, a second approximation to the solution of nonlinear jerk equations (third-order differential equations) is obtained using a modified harmonic balance method. The method is simpler and easier to apply than the classical harmonic balance method because fewer nonlinear equations must be solved. The results obtained from this method are compared with those obtained from other existing analytical methods available in the literature and with a numerical method. The solution shows good agreement with the numerical solution as well as with the analytical methods from the literature.

  19. Evaluating variability with atomistic simulations: the effect of potential and calculation methodology on the modeling of lattice and elastic constants

    NASA Astrophysics Data System (ADS)

    Hale, Lucas M.; Trautt, Zachary T.; Becker, Chandler A.

    2018-07-01

    Atomistic simulations using classical interatomic potentials are powerful investigative tools linking atomic structures to dynamic properties and behaviors. It is well known that different interatomic potentials produce different results, thus making it necessary to characterize potentials based on how they predict basic properties. Doing so makes it possible to compare existing interatomic models in order to select those best suited for specific use cases, and to identify any limitations of the models that may lead to unrealistic responses. While the methods for obtaining many of these properties are often thought of as simple calculations, there are many underlying aspects that can lead to variability in the reported property values. For instance, multiple methods may exist for computing the same property and values may be sensitive to certain simulation parameters. Here, we introduce a new high-throughput computational framework that encodes various simulation methodologies as Python calculation scripts. Three distinct methods for evaluating the lattice and elastic constants of bulk crystal structures are implemented and used to evaluate the properties across 120 interatomic potentials, 18 crystal prototypes, and all possible combinations of unique lattice site and elemental model pairings. Analysis of the results reveals which potentials and crystal prototypes are sensitive to the calculation methods and parameters, and it assists with the verification of potentials, methods, and molecular dynamics software. The results, calculation scripts, and computational infrastructure are self-contained and openly available to support researchers in performing meaningful simulations.

  20. Investigation of 2-stage meta-analysis methods for joint longitudinal and time-to-event data through simulation and real data application.

    PubMed

    Sudell, Maria; Tudur Smith, Catrin; Gueyffier, François; Kolamunnage-Dona, Ruwanthi

    2018-04-15

    Joint modelling of longitudinal and time-to-event data is often preferred over separate longitudinal or time-to-event analyses as it can account for study dropout, error in longitudinally measured covariates, and correlation between longitudinal and time-to-event outcomes. The joint modelling literature focuses mainly on the analysis of single studies with no methods currently available for the meta-analysis of joint model estimates from multiple studies. We propose a 2-stage method for meta-analysis of joint model estimates. These methods are applied to the INDANA dataset to combine joint model estimates of systolic blood pressure with time to death, time to myocardial infarction, and time to stroke. Results are compared to meta-analyses of separate longitudinal or time-to-event models. A simulation study is conducted to contrast separate versus joint analyses over a range of scenarios. Using the real dataset, similar results were obtained by using the separate and joint analyses. However, the simulation study indicated a benefit of use of joint rather than separate methods in a meta-analytic setting where association exists between the longitudinal and time-to-event outcomes. Where evidence of association between longitudinal and time-to-event outcomes exists, results from joint models over standalone analyses should be pooled in 2-stage meta-analyses.

  1. Investigation of random walks knee cartilage segmentation model using inter-observer reproducibility: Data from the osteoarthritis initiative.

    PubMed

    Hong-Seng, Gan; Sayuti, Khairil Amir; Karim, Ahmad Helmy Abdul

    2017-01-01

    Existing knee cartilage segmentation methods have several reported technical drawbacks. In essence, graph cuts remains highly susceptible to image noise despite extended research interest; the active shape model is often constrained by the selection of training data; and shortest-path methods demonstrate the shortcut problem in the presence of weak boundaries, a common problem in medical images. The aim of this study is to investigate the capability of random walks as a knee cartilage segmentation method. Experts scribble on the knee cartilage image to initialize random walks segmentation. The reproducibility of the method is then assessed against manual segmentation using the Dice Similarity Index. The evaluation covers normal and diseased cartilage sections, divided into whole-cartilage and single-cartilage categories. A total of 15 normal images and 10 osteoarthritic images were included. The results showed that the random walks method demonstrates high reproducibility in both normal cartilage (observer 1: 0.83±0.028; observer 2: 0.82±0.026) and osteoarthritic cartilage (observer 1: 0.80±0.069; observer 2: 0.83±0.029). Moreover, results from both experts were consistent with each other, suggesting that the inter-observer variation is insignificant (normal: P=0.21; diseased: P=0.15). The proposed segmentation model overcomes technical problems reported for existing semi-automated techniques and demonstrates highly reproducible and consistent results against manual segmentation.
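
    A minimal sketch of seeded random-walker segmentation, using scikit-image's implementation as a stand-in for the paper's model; the synthetic image, the scribble placement, and the beta value are illustrative assumptions.

    ```python
    import numpy as np
    from skimage.segmentation import random_walker

    # Hypothetical grayscale slice: bright band (cartilage-like) on dark background.
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128))
    img[50:60, 20:110] = 1.0
    img += 0.2 * rng.normal(size=img.shape)

    # "Scribbles": label 1 inside the structure, label 2 in background, 0 = unknown.
    labels = np.zeros(img.shape, dtype=int)
    labels[54:56, 40:90] = 1      # foreground scribble
    labels[10:15, :] = 2          # background scribbles
    labels[110:115, :] = 2

    # Random-walker segmentation; beta controls edge sensitivity.
    seg = random_walker(img, labels, beta=130, mode="bf")
    print("foreground pixels:", np.sum(seg == 1))
    ```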

  2. Improved methods to estimate the effective impervious area in urban catchments using rainfall-runoff data

    NASA Astrophysics Data System (ADS)

    Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.

    2016-05-01

    Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), which is the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter in the determination of actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and by reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least square model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least square (OLS) and weighted least square (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) and are shown to be consistent with DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the analysis of rainfall-runoff data of the current method. The WLS method is more robust than the OLS method and generates results that are different and more precise than the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
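
    A minimal sketch of estimating the EIA fraction from event rainfall-runoff pairs by ordinary and weighted least squares, assuming statsmodels; the linear event model (runoff = f * rainfall - initial loss) and the variance model behind the weights are illustrative assumptions, not the authors' exact formulation.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Hypothetical events: rainfall depth P (mm) and runoff depth Q (mm),
    # generated with a true EIA fraction of 0.25 and 2 mm initial loss.
    P = rng.uniform(5, 60, size=80)
    Q = np.clip(0.25 * (P - 2.0), 0, None) + rng.normal(0, 0.5 + 0.02 * P)

    X = sm.add_constant(P)           # intercept captures the initial-loss term
    ols = sm.OLS(Q, X).fit()

    # Heteroscedastic residuals: weight events by an assumed variance ~ P^2.
    wls = sm.WLS(Q, X, weights=1.0 / P ** 2).fit()

    print("OLS EIA fraction:", ols.params[1])
    print("WLS EIA fraction:", wls.params[1])
    ```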

  3. Effects of empty bins on image upscaling in capsule endoscopy

    NASA Astrophysics Data System (ADS)

    Rukundo, Olivier

    2017-07-01

    This paper presents a preliminary study of the effect of empty bins on image upscaling in capsule endoscopy. The study was conducted based on the results of existing contrast enhancement and interpolation methods. A low-contrast enhancement method based on pixel consecutiveness and a modified bilinear weighting scheme was developed to distinguish between necessary and unnecessary empty bins, in an effort to minimize the number of empty bins in the input image before further processing. Linear interpolation methods were used for upscaling input images with stretched histograms. Upscaling error differences and similarity indices between pairs of interpolation methods were quantified using the mean squared error and feature similarity index techniques. Simulation results demonstrated more promising effects for the developed method than for the other contrast enhancement methods considered.
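
    Quantifying upscaling error differences between pairs of interpolation methods reduces to a mean squared error computation. In this minimal sketch, scipy.ndimage.zoom stands in for the linear interpolation methods (order 0 is nearest-neighbour, order 1 is bilinear); the random frame is a stand-in for a capsule endoscopy image:

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      frame = rng.integers(0, 256, size=(32, 32)).astype(float)  # stand-in capsule frame

      nearest = ndimage.zoom(frame, 4, order=0)    # nearest-neighbour upscaling
      bilinear = ndimage.zoom(frame, 4, order=1)   # bilinear upscaling

      mse = np.mean((nearest - bilinear) ** 2)     # upscaling error difference
      print(f"MSE between interpolators: {mse:.2f}")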

  4. Segmentation of malignant lesions in 3D breast ultrasound using a depth-dependent model.

    PubMed

    Tan, Tao; Gubern-Mérida, Albert; Borelli, Cristina; Manniesing, Rashindra; van Zelst, Jan; Wang, Lei; Zhang, Wei; Platel, Bram; Mann, Ritse M; Karssemeijer, Nico

    2016-07-01

    Automated 3D breast ultrasound (ABUS) has been proposed as a complementary screening modality to mammography for early detection of breast cancers. To facilitate the interpretation of ABUS images, automated diagnosis and detection techniques are being developed, in which malignant lesion segmentation plays an important role. However, automated segmentation of cancer in ABUS is challenging since lesion edges might not be well defined. In this study, the authors aim at developing an automated segmentation method for malignant lesions in ABUS that is robust to ill-defined cancer edges and posterior shadowing. A segmentation method using depth-guided dynamic programming based on spiral scanning is proposed. The method automatically adjusts the aggressiveness of the segmentation according to the position of the voxels relative to the lesion center. Segmentation is more aggressive in the upper part of the lesion (close to the transducer) than at the bottom (far away from the transducer), where posterior shadowing is usually visible. The authors used the Dice similarity coefficient (Dice) for evaluation. The proposed method is compared to existing state-of-the-art approaches such as graph cut, level set, and smart opening, and to an existing dynamic programming method without depth dependence. In a dataset of 78 cancers, the proposed segmentation method achieved a mean Dice of 0.73 ± 0.14. The method outperforms an existing dynamic programming method (0.70 ± 0.16) on this task (p = 0.03) and is also significantly (p < 0.001) better than graph cut (0.66 ± 0.18), the level-set-based approach (0.63 ± 0.20) and smart opening (0.65 ± 0.12). The proposed depth-guided dynamic programming method achieves accurate malignant breast lesion segmentation results in automated breast ultrasound.

  5. Fabricating data: How substituting values for nondetects can ruin results, and what can be done about it

    USGS Publications Warehouse

    Helsel, D.R.

    2006-01-01

    The most commonly used method in environmental chemistry to deal with values below detection limits is to substitute a fraction of the detection limit for each nondetect. Two decades of research has shown that this fabrication of values produces poor estimates of statistics, and commonly obscures patterns and trends in the data. Papers using substitution may conclude that significant differences, correlations, and regression relationships do not exist, when in fact they do. The reverse may also be true. Fortunately, good alternative methods for dealing with nondetects already exist, and are summarized here with references to original sources. Substituting values for nondetects should be used rarely, and should generally be considered unacceptable in scientific research. There are better ways.
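
    One of the well-established alternatives summarized in this literature is maximum likelihood estimation for left-censored data, in which detected values contribute density terms and nondetects contribute the probability mass below their detection limits. A minimal sketch for lognormal data, with synthetic values for illustration:

      import numpy as np
      from scipy import optimize, stats

      # Log-transformed detected values, and detection limits for nondetects.
      detects = np.log([0.8, 1.3, 2.1, 3.4, 5.0])
      nd_limits = np.log([0.5, 0.5, 1.0])   # known only to lie below these limits

      def neg_loglik(params):
          mu, log_sigma = params
          sigma = np.exp(log_sigma)                              # keep sigma positive
          ll = stats.norm.logpdf(detects, mu, sigma).sum()       # detected values
          ll += stats.norm.logcdf(nd_limits, mu, sigma).sum()    # censored values
          return -ll

      res = optimize.minimize(neg_loglik, x0=[0.0, 0.0])
      mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
      print(f"geometric mean: {np.exp(mu_hat):.3f}, log-scale sd: {sigma_hat:.3f}")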

  6. Research on periodic orbits in the three-body problem

    NASA Astrophysics Data System (ADS)

    Fernández, S.; Gámez, J.

    In order to investigate the possible existence of small planets in extrasolar systems, a restricted, circular and planar three-body problem is used. One of the two primaries has a mass similar to the Sun and the other has a mass greater than Jupiter's. Periodic and quasi-periodic orbits for the third body with different values of the Jacobi constant (C) are found by numerical methods. One of the three cases studied is fictitious; the others resemble two real systems of extrasolar planets. The Everhart method is used, and the results show the existence of periodic and quasi-periodic orbits for the lesser value of C. Irregular orbits appear for the other values of C, especially in the exterior zone of the secondary body.

  7. TMA Vessel Segmentation Based on Color and Morphological Features: Application to Angiogenesis Research

    PubMed Central

    Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial

    2013-01-01

    Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies, and that the many existing methods provide differing results, we aim to construct a morphometric tool that measures different aspects of the shape and size of vascular vessels in a complete and accurate way. The tool presented is based on vessel closing, an operation essential to properly characterizing the size and shape of vascular and lymphatic vessels. The method is fast and accurate, improving on existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered as a single object. PMID:24489494
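
    Vessel closing corresponds to binary morphological closing (dilation followed by erosion) plus hole filling, so that a ring of endothelial cells is counted as a single object. A minimal sketch with scipy.ndimage on a synthetic mask; the published tool also uses color features, which are omitted here:

      import numpy as np
      from scipy import ndimage

      # Synthetic binary mask: a ring of endothelial "cells" with a gap in the wall.
      mask = np.zeros((40, 40), dtype=bool)
      yy, xx = np.ogrid[:40, :40]
      mask[np.abs(np.hypot(yy - 20, xx - 20) - 12) < 2] = True
      mask[18:22, :] = False   # break the ring to mimic a discontinuous vessel wall

      closed = ndimage.binary_closing(mask, structure=np.ones((7, 7)))
      filled = ndimage.binary_fill_holes(closed)   # treat the whole vessel as one object

      print("vessel count:", ndimage.label(filled)[1])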

  8. Velocity profile, water-surface slope, and bed-material size for selected streams in Colorado

    USGS Publications Warehouse

    Marchand, J.P.; Jarrett, R.D.; Jones, L.L.

    1984-01-01

    Existing methods for determining the mean velocity in a vertical sampling section do not address the conditions present in high-gradient, shallow-depth streams common to mountainous regions such as Colorado. The report presents velocity-profile data that were collected for 11 streamflow-gaging stations in Colorado using both a standard Price type AA current meter and a prototype Price Model PAA current meter. Computational results are compiled that will enable mean velocities calculated from measurements by the two current meters to be compared with each other and with existing methods for determining mean velocity. Water-surface slope, bed-material size, and flow-characteristic data for the 11 sites studied also are presented. (USGS)

  9. Wavelet-based image compression using shuffling and bit plane correlation

    NASA Astrophysics Data System (ADS)

    Kim, Seungjong; Jeong, Jechang

    2000-12-01

    In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on quantized coefficients, and (2) choosing the arithmetic coding context according to the direction of maximum correlation. The experimental results are comparable or superior to those of existing coders, particularly for some images with low correlation.

  10. Using Participatory and Service Design to Identify Emerging Needs and Perceptions of Library Services among Science and Engineering Researchers Based at a Satellite Campus

    ERIC Educational Resources Information Center

    Johnson, Andrew; Kuglitsch, Rebecca; Bresnahan, Megan

    2015-01-01

    This study used participatory and service design methods to identify emerging research needs and existing perceptions of library services among science and engineering faculty, post-graduate, and graduate student researchers based at a satellite campus at the University of Colorado Boulder. These methods, and the results of the study, allowed us…

  11. Evaluation of methods for determining hardware projected life

    NASA Technical Reports Server (NTRS)

    1971-01-01

    An investigation of existing methods of predicting hardware life is summarized by reviewing programs having long life requirements, current research efforts on long life problems, and technical papers reporting work on life prediction techniques. The results indicate that there are no accurate quantitative means to predict hardware life for system-level hardware. The effectiveness of test programs and the causes of hardware failures are also considered.

  12. The lattice Boltzmann method and the problem of turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Djenidi, L.

    2015-03-10

    This paper reports a brief review of numerical simulations of homogeneous isotropic turbulence (HIT) using the lattice Boltzmann method (LBM). The LBM results show that the details of HIT are well captured and in agreement with existing data. This clearly indicates that the LBM is as good as current Navier-Stokes solvers and is entirely adequate for investigating the problem of turbulence.

  13. Droplet Microarray Based on Superhydrophobic-Superhydrophilic Patterns for Single Cell Analysis.

    PubMed

    Jogia, Gabriella E; Tronser, Tina; Popova, Anna A; Levkin, Pavel A

    2016-12-09

    Single-cell analysis provides fundamental information on individual cell responses to different environmental cues and is of growing interest in cancer and stem cell research. However, existing methods still face challenges in performing such analysis in a high-throughput yet cost-effective manner. Here we established the Droplet Microarray (DMA) as a miniaturized screening platform for high-throughput single-cell analysis. Using the method of limiting dilution and varying cell density and seeding time, we optimized the distribution of single cells on the DMA. We established culturing conditions for single cells in individual droplets on the DMA, obtaining survival of nearly 100% of single cells and doubling times comparable with those of cells cultured in bulk using conventional methods. Our results demonstrate that the DMA is a suitable platform for single-cell analysis that carries a number of advantages over existing technologies, allowing for treatment, staining and spot-to-spot analysis of single cells over time using conventional analysis methods such as microscopy.

  14. Multiple Testing of Gene Sets from Gene Ontology: Possibilities and Pitfalls.

    PubMed

    Meijer, Rosa J; Goeman, Jelle J

    2016-09-01

    The use of multiple testing procedures in the context of gene-set testing is an important but relatively underexposed topic. If a multiple testing method is used, this is usually a standard familywise error rate (FWER) or false discovery rate (FDR) controlling procedure in which the logical relationships that exist between the different (self-contained) hypotheses are not taken into account. Taking those relationships into account, however, can lead to more powerful variants of existing multiple testing procedures and can make summarizing and interpreting the final results easier. We will show that, from the perspective of interpretation as well as from the perspective of power improvement, FWER controlling methods are more suitable than FDR controlling methods. As an example of a possible power improvement, we suggest a modified version of the popular method by Holm, which we also implemented in the R package cherry. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
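
    The classical Holm step-down procedure that the authors modify can be written in a few lines. The sketch below is plain Holm, without the logical-relationship improvements implemented in the cherry package:

      import numpy as np

      def holm(pvalues, alpha=0.05):
          """Classical Holm step-down FWER control; returns a boolean rejection mask."""
          p = np.asarray(pvalues, dtype=float)
          m = len(p)
          reject = np.zeros(m, dtype=bool)
          for rank, idx in enumerate(np.argsort(p)):
              if p[idx] <= alpha / (m - rank):   # thresholds alpha/m, alpha/(m-1), ...
                  reject[idx] = True
              else:
                  break                          # step-down: stop at first non-rejection
          return reject

      print(holm([0.001, 0.008, 0.039, 0.041, 0.20]))  # -> [True True False False False]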

  15. Covariance analysis for evaluating head trackers

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon

    2017-10-01

    Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be determined by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can accurately direct one's head toward the camera, this assumption may be unrealistic. Rather than obtaining estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking him to direct his head toward certain directions. Experimental results using real data validate the usefulness of our method.
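
    The proposed uncertainty measure, the Schatten 2-norm (equivalently, the Frobenius norm) of a square root of the error covariance, is easy to compute once error rotations are available. The sketch below assumes small errors represented as rotation vectors; that convention and the synthetic data are illustrative assumptions:

      import numpy as np
      from scipy.linalg import sqrtm

      rng = np.random.default_rng(1)
      # Error rotations as small rotation vectors (radians), one row per frame.
      err_rotvecs = rng.normal(scale=np.radians(2.0), size=(500, 3))

      cov = np.cov(err_rotvecs, rowvar=False)          # 3x3 error covariance
      root = np.real(sqrtm(cov))                       # matrix square root
      uncertainty = np.linalg.norm(root, ord="fro")    # Schatten 2-norm == Frobenius norm
      print(f"tracker uncertainty: {np.degrees(uncertainty):.2f} deg")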

  16. A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.

    PubMed

    Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen

    2015-01-01

    Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that cannot reveal all OPSMs, since the problem is NP-complete. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every pair of row sequences, so that no deep OPSMs are missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to efficiently mine the frequent sequential patterns. Finally, experiments were implemented on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method.

  17. Development of the psychological impact of tinnitus interview: a clinician-administered measure of tinnitus-related distress.

    PubMed

    Henry, J L; Kangas, M; Wilson, P H

    2001-01-01

    The development of valid and reliable methods for assessing psychological aspects of tinnitus continues to be an important goal of research. Such assessment methods are potentially useful in clinical and research contexts. Existing self-report measures have a number of disadvantages, and so a need exists to develop a form of assessment that is less open to response bias and the effects of experimental demand. A new approach, the Psychological Impact of Tinnitus Interview (PITI), is described, and some preliminary data on its psychometric properties are reported. The results suggest that the PITI is capable of providing a measure of separate, relatively independent dimensions of tinnitus-related distress--namely, sleep difficulties, general distress, mood, suicidal aspects, and avoidance of or interference with normal activities. This method may lead to more refined measures of these dimensions of tinnitus-related psychological difficulties. The PITI should be regarded as a promising assessment tool for use in experimental settings, pending further work on its content, coding method, and administration.

  18. CS_TOTR: A new vertex centrality method for directed signed networks based on status theory

    NASA Astrophysics Data System (ADS)

    Ma, Yue; Liu, Min; Zhang, Peng; Qi, Xingqin

    Measuring the importance (or centrality) of vertices in a network is a significant topic in complex network analysis, with important applications in diverse domains such as disease control, the spread of rumors, and viral marketing. Existing studies mainly focus on social networks with only positive (or friendship) relations, while signed networks that also include negative (or enemy) relations are seldom studied. Various signed networks commonly exist in the real world, e.g. networks indicating friendship/enmity, love/hate or trust/mistrust relationships. In this paper, we propose a new centrality method named CS_TOTR to rank vertices in directed signed networks. To design this new method, we use the “status theory” for signed networks, and also adopt the vertex ranking algorithm for a tournament and the topological sorting algorithm for a general directed graph. We apply this new centrality method to the famous Sampson Monastery dataset and obtain a convincing result which shows its validity.

  19. Using self-organizing maps to infill missing data in hydro-meteorological time series from the Logone catchment, Lake Chad basin.

    PubMed

    Nkiaka, E; Nawaz, N R; Lovett, J C

    2016-07-01

    Hydro-meteorological data is an important asset that can enhance management of water resources. But existing data often contains gaps, leading to uncertainties and so compromising their use. Although many methods exist for infilling data gaps in hydro-meteorological time series, many of these methods require inputs from neighbouring stations, which are often not available, while other methods are computationally demanding. Computing techniques such as artificial intelligence can be used to address this challenge. Self-organizing maps (SOMs), which are a type of artificial neural network, were used for infilling gaps in a hydro-meteorological time series in a Sudano-Sahel catchment. The coefficients of determination obtained were all above 0.75 and 0.65 while the average topographic error was 0.008 and 0.02 for rainfall and river discharge time series, respectively. These results further indicate that SOMs are a robust and efficient method for infilling missing gaps in hydro-meteorological time series.
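
    A hedged sketch of SOM-based infilling using the third-party MiniSom package: train a map on complete records, match an incomplete record to its best unit over the observed dimensions only, and read the missing value off the codebook. The column layout, map size, and training parameters are illustrative assumptions, not the paper's configuration:

      import numpy as np
      from minisom import MiniSom  # third-party package: pip install minisom

      rng = np.random.default_rng(2)
      # Synthetic records; columns are [rainfall, discharge] (layout is an assumption).
      rain = rng.gamma(2.0, 5.0, size=300)
      data = np.column_stack([rain, 0.6 * rain + rng.normal(0.0, 1.0, 300)])

      som = MiniSom(8, 8, 2, sigma=1.5, learning_rate=0.5, random_seed=0)
      som.train_random(data, 5000)                  # train on complete records
      codebook = som.get_weights().reshape(-1, 2)   # 64 prototype vectors

      def infill(record):
          """Fill NaNs from the best-matching unit found over observed dimensions."""
          obs = ~np.isnan(record)
          dists = np.linalg.norm(codebook[:, obs] - record[obs], axis=1)
          filled = record.copy()
          filled[~obs] = codebook[np.argmin(dists)][~obs]
          return filled

      print(infill(np.array([12.0, np.nan])))   # discharge estimated from rainfall

    In practice the features would be standardized before training; the raw scales are kept here only to keep the sketch short.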

  20. Guiding Conformation Space Search with an All-Atom Energy Potential

    PubMed Central

    Brunette, TJ; Brock, Oliver

    2009-01-01

    The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015

  1. Adapting a Cancer Literacy Measure for Use among Navajo Women

    PubMed Central

    Yost, Kathleen J.; Bauer, Mark C.; Buki, Lydia P.; Austin-Garrison, Martha; Garcia, Linda V.; Hughes, Christine A.; Patten, Christi A.

    2016-01-01

    Purpose The authors designed a community-based participatory research study to develop and test a family-based behavioral intervention to improve cancer literacy and promote mammography among Navajo women. Methods Using data from focus groups and discussions with a community advisory committee, they adapted an existing questionnaire to assess cancer knowledge, barriers to mammography, and cancer beliefs for use among Navajo women. Questions measuring health literacy, numeracy, self-efficacy, cancer communication, and family support were also adapted. Results The resulting questionnaire was found to have good content validity, and to be culturally and linguistically appropriate for use among Navajo women. Conclusions It is important to consider culture and not just language when adapting existing measures for use with AI/AN populations. English-language versions of existing literacy measures may not be culturally appropriate for AI/AN populations, which could lead to a lack of semantic, technical, idiomatic, and conceptual equivalence, resulting in misinterpretation of study outcomes. PMID:26879319

  2. Signal-to-noise ratio estimation using adaptive tuning on the piecewise cubic Hermite interpolation model for images.

    PubMed

    Sim, K S; Yeap, Z X; Tso, C P

    2016-11-01

    An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique applies adaptive tuning to the PCHIP, and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are then plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, nearest neighbor, first-order interpolation and the original PCHIP, are used to compare against the performance of the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method to estimate SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
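
    The underlying idea, interpolating the autocorrelation through its non-zero lags to recover the noise-free zero-offset value, can be sketched with SciPy's PCHIP interpolator. The adaptive-tuning step of ATPCHIP is omitted; this shows the baseline PCHIP approach on a synthetic 1-D signal:

      import numpy as np
      from scipy.interpolate import PchipInterpolator

      rng = np.random.default_rng(3)
      signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 2000))
      noisy = signal + rng.normal(0.0, 0.3, signal.size)   # known noise for checking

      def autocorr(x, maxlag):
          x = x - x.mean()
          return np.array([np.dot(x[:x.size - k], x[k:]) / x.size for k in range(maxlag)])

      r = autocorr(noisy, 12)
      lags = np.arange(1, 12)    # skip lag 0, which contains the noise power
      r0_hat = PchipInterpolator(lags, r[1:], extrapolate=True)(0.0)

      noise_var = r[0] - r0_hat  # raw zero-lag peak minus noise-free estimate
      print(f"estimated SNR: {10 * np.log10(r0_hat / noise_var):.1f} dB "
            f"(true: {10 * np.log10(0.5 / 0.09):.1f} dB)")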

  3. All-Versus-Nothing Proof of Einstein-Podolsky-Rosen Steering

    PubMed Central

    Chen, Jing-Ling; Ye, Xiang-Jun; Wu, Chunfeng; Su, Hong-Yi; Cabello, Adán; Kwek, L. C.; Oh, C. H.

    2013-01-01

    Einstein-Podolsky-Rosen steering is a form of quantum nonlocality intermediate between entanglement and Bell nonlocality. Although Schrödinger already mooted the idea in 1935, steering still defies a complete understanding. In analogy to “all-versus-nothing” proofs of Bell nonlocality, here we present a proof of steering without inequalities, rendering the detection of correlations leading to a violation of steering inequalities unnecessary. We show that, given any two-qubit entangled state, the existence of a projective measurement by Alice such that Bob's normalized conditional states can be regarded as two different pure states provides a criterion for Alice-to-Bob steerability. A steering inequality equivalent to the all-versus-nothing proof is also obtained. Our result clearly demonstrates that there exist many quantum states which do not violate any previously known steering inequality but are indeed steerable. Our method offers advantages over existing methods for experimentally testing steerability, and sheds new light on the asymmetric steering problem. PMID:23828242

  4. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation amongst species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. The existing tag SNP selection algorithms, however, are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than existing methods, and when the redundant ratio of the block is high, it can even be thousands of times faster. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, were also developed using the proposed algorithm as their computation kernel.

  5. A normalization method for combination of laboratory test results from different electronic healthcare databases in a distributed research network.

    PubMed

    Yoon, Dukyong; Schuemie, Martijn J; Kim, Ju Han; Kim, Dong Ki; Park, Man Young; Ahn, Eun Kyoung; Jung, Eun-Young; Park, Dong Kyun; Cho, Soo Yeon; Shin, Dahye; Hwang, Yeonsoo; Park, Rae Woong

    2016-03-01

    Distributed research networks (DRNs) afford statistical power by integrating observational data from multiple partners for retrospective studies. However, laboratory test results across care sites are derived using different assays from varying patient populations, making it difficult to simply combine data for analysis. Additionally, existing normalization methods are not suitable for retrospective studies. We normalized laboratory results from different data sources by adjusting for heterogeneous clinico-epidemiologic characteristics of the data and called this the subgroup-adjusted normalization (SAN) method. Subgroup-adjusted normalization renders the means and standard deviations of distributions identical under population structure-adjusted conditions. To evaluate its performance, we compared SAN with existing methods for simulated and real datasets consisting of blood urea nitrogen, serum creatinine, hematocrit, hemoglobin, serum potassium, and total bilirubin. Various clinico-epidemiologic characteristics can be applied together in SAN. For simplicity of comparison, age and gender were used to adjust population heterogeneity in this study. In simulations, SAN had the lowest standardized difference in means (SDM) and Kolmogorov-Smirnov values for all tests (p < 0.05). In a real dataset, SAN had the lowest SDM and Kolmogorov-Smirnov values for blood urea nitrogen, hematocrit, hemoglobin, and serum potassium, and the lowest SDM for serum creatinine (p < 0.05). Subgroup-adjusted normalization performed better than normalization using other methods. The SAN method is applicable in a DRN environment and should facilitate analysis of data integrated across DRN partners for retrospective observational studies. Copyright © 2015 John Wiley & Sons, Ltd.
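
    The gist of subgroup-adjusted normalization, making distributions from different sources agree after adjusting for population structure, can be illustrated with a per-subgroup z-score in pandas. This is a simplified stand-in for SAN, not the authors' full procedure:

      import numpy as np
      import pandas as pd

      rng = np.random.default_rng(4)
      n = 240
      df = pd.DataFrame({
          "site": rng.choice(["A", "B"], n),
          "sex": rng.choice(["F", "M"], n),
          "age_band": rng.choice(["<40", "40-65", ">65"], n),
      })
      # Simulate a between-site assay offset for, e.g., serum creatinine.
      df["value"] = rng.normal(1.0, 0.2, n) + (df["site"] == "B") * 0.15

      # Standardize within site using age/sex subgroups, then map to a common scale.
      z = df.groupby(["site", "sex", "age_band"])["value"].transform(
          lambda v: (v - v.mean()) / v.std(ddof=0)
      )
      df["normalized"] = z * df["value"].std(ddof=0) + df["value"].mean()
      print(df.groupby("site")["normalized"].mean())  # site means now nearly agree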

  6. Use of the melting curve assay as a means for high-throughput quantification of Illumina sequencing libraries.

    PubMed

    Shinozuka, Hiroshi; Forster, John W

    2016-01-01

    Background. Multiplexed sequencing is commonly performed on massively parallel short-read sequencing platforms such as Illumina, and the efficiency of library normalisation can affect the quality of the output dataset. Although several library normalisation approaches have been established, none are ideal for highly multiplexed sequencing due to issues of cost and/or processing time. Methods. An inexpensive and high-throughput library quantification method has been developed, based on an adaptation of the melting curve assay. Sequencing libraries were subjected to the assay using the Bio-Rad Laboratories CFX Connect™ Real-Time PCR Detection System. The library quantity was calculated through summation of the reduction of relative fluorescence units between 86 and 95 °C. Results. PCR-enriched sequencing libraries are suitable for this quantification without pre-purification of DNA. Short DNA molecules, which ideally should be eliminated from the library for subsequent processing, were differentiated from the target DNA in a mixture on the basis of differences in melting temperature. Quantification results for long sequences targeted using the melting curve assay were correlated with those from existing methods (R² > 0.77), and with that observed from MiSeq sequencing (R² = 0.82). Discussion. The results of multiplexed sequencing suggested that the normalisation performance of the described method is equivalent to that of another recently reported high-throughput bead-based method, BeNUS. However, costs for the melting curve assay are considerably lower and processing times shorter than those of other existing methods, suggesting greater suitability for highly multiplexed sequencing applications.
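
    The quantification step, summing the reduction in relative fluorescence units (RFU) between 86 and 95 °C, reduces to a few lines; the synthetic melt curve below is illustrative:

      import numpy as np

      rng = np.random.default_rng(5)
      temps = np.arange(70.0, 96.0, 0.5)   # melt-curve temperatures (deg C)
      # Synthetic RFU trace: a sigmoidal melt centred near 90 deg C plus noise.
      rfu = 1000.0 / (1.0 + np.exp((temps - 90.0) / 1.2)) + rng.normal(0.0, 2.0, temps.size)

      window = (temps >= 86.0) & (temps <= 95.0)
      drops = -np.diff(rfu[window])        # RFU reduction per temperature step
      quantity = drops[drops > 0].sum()    # summed RFU reduction in 86-95 deg C
      print(f"library quantity (arbitrary units): {quantity:.0f}")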

  7. Cement bond evaluation method in horizontal wells using segmented bond tool

    NASA Astrophysics Data System (ADS)

    Song, Ruolong; He, Li

    2018-06-01

    Most existing cement evaluation technologies suffer from tool eccentralization due to gravity in highly deviated and horizontal wells. This paper proposes a correction method to lessen the effects of tool eccentralization on cement bond evaluation results obtained with a segmented bond tool, which has an omnidirectional sonic transmitter and eight segmented receivers evenly arranged around the tool 2 ft from the transmitter. Using a 3-D finite-difference parallel numerical simulation method, we investigate the logging responses of centred and eccentred segmented bond tools in a variety of bond conditions. From the numerical results, we find that the tool eccentricity and channel azimuth can be estimated from the measured sector amplitude. The average of the sector amplitude when the tool is eccentred can then be corrected to its value when the tool is centred, and the corrected amplitude is used to calculate the channel size. The proposed method is applied to both synthetic and field data. For synthetic data, the method estimates the tool eccentricity with small error, and the bond map is improved after correction. For field data, the tool eccentricity agrees well with the measured well deviation angle. Though the method still suffers from low accuracy in calculating the channel azimuth, the credibility of the corrected bond map is improved, especially in horizontal wells. This gives us a means to evaluate bond condition in horizontal wells using an existing logging tool. The numerical results in this paper can aid understanding of segmented-tool measurements in both vertical and horizontal wells.

  8. Methodology for Computational Fluid Dynamic Validation for Medical Use: Application to Intracranial Aneurysm.

    PubMed

    Paliwal, Nikhil; Damiano, Robert J; Varble, Nicole A; Tutino, Vincent M; Dou, Zhongwang; Siddiqui, Adnan H; Meng, Hui

    2017-12-01

    Computational fluid dynamics (CFD) is a promising tool to aid in clinical diagnoses of cardiovascular diseases. However, it uses assumptions that simplify the complexities of the real cardiovascular flow. Due to high-stakes in the clinical setting, it is critical to calculate the effect of these assumptions in the CFD simulation results. However, existing CFD validation approaches do not quantify error in the simulation results due to the CFD solver's modeling assumptions. Instead, they directly compare CFD simulation results against validation data. Thus, to quantify the accuracy of a CFD solver, we developed a validation methodology that calculates the CFD model error (arising from modeling assumptions). Our methodology identifies independent error sources in CFD and validation experiments, and calculates the model error by parsing out other sources of error inherent in simulation and experiments. To demonstrate the method, we simulated the flow field of a patient-specific intracranial aneurysm (IA) in the commercial CFD software star-ccm+. Particle image velocimetry (PIV) provided validation datasets for the flow field on two orthogonal planes. The average model error in the star-ccm+ solver was 5.63 ± 5.49% along the intersecting validation line of the orthogonal planes. Furthermore, we demonstrated that our validation method is superior to existing validation approaches by applying three representative existing validation techniques to our CFD and experimental dataset, and comparing the validation results. Our validation methodology offers a streamlined workflow to extract the "true" accuracy of a CFD solver.

  9. Linnorm: improved statistical analysis for single cell RNA-seq expression data

    PubMed Central

    Yip, Shun H.; Wang, Panwen; Kocher, Jean-Pierre A.; Sham, Pak Chung

    2017-01-01

    Abstract Linnorm is a novel normalization and transformation method for the analysis of single cell RNA sequencing (scRNA-seq) data. Linnorm is developed to remove technical noises and simultaneously preserve biological variations in scRNA-seq data, such that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. PMID:28981748

  10. Pull-out fibers from composite materials at high rate of loading

    NASA Technical Reports Server (NTRS)

    Amijima, S.; Fujii, T.

    1981-01-01

    Numerical and experimental results are presented on the pullout phenomenon in composite materials at a high rate of loading. The finite element method was used, taking into account the existence of a virtual shear deformation layer as the interface between fiber and matrix. Experimental results agree well with those obtained by the finite element method. Numerical results show that the interlaminar shear stress is time dependent, in addition, it is shown to depend on the applied load time history. Under step pulse loading, the interlaminar shear stress fluctuates, finally decaying to its value under static loading.

  11. Evaluation of the immunogenicity of the dabigatran reversal agent idarucizumab during Phase I studies

    PubMed Central

    Norris, Stephen; Ramael, Steven; Ikushima, Ippei; Haazen, Wouter; Harada, Akiko; Moschetti, Viktoria; Imazu, Susumu; Reilly, Paul A.; Lang, Benjamin; Stangier, Joachim

    2017-01-01

    Aims Idarucizumab, a humanized monoclonal anti‐dabigatran antibody fragment, is effective in emergency reversal of dabigatran anticoagulation. Pre‐existing and treatment‐emergent anti‐idarucizumab antibodies (antidrug antibodies; ADA) may affect the safety and efficacy of idarucizumab. This analysis characterized the pre‐existing and treatment‐emergent ADA and assessed their impact on the pharmacokinetics and pharmacodynamics (PK/PD) of idarucizumab. Methods Data were pooled from three Phase I, randomized, double‐blind idarucizumab studies in healthy Caucasian subjects; elderly, renally impaired subjects; and healthy Japanese subjects. In plasma sampled before and after idarucizumab dosing, ADA were detected and titrated using a validated electrochemiluminescence method. ADA epitope specificities were examined using idarucizumab and two structurally related molecules. Idarucizumab PK/PD data were compared for subjects with and without pre‐existing ADA. Results Pre‐existing ADA were found in 33 out of 283 individuals (11.7%), seven of whom had intermittent ADA. Titres of pre‐existing and treatment‐emergent ADA were low, estimated equivalent to <0.3% of circulating idarucizumab after a 5 g dose. Pre‐existing ADA had no impact on dose‐normalized idarucizumab maximum plasma levels and exposure and, although data were limited, no impact on the reversal of dabigatran‐induced anticoagulation by idarucizumab. Treatment‐emergent ADA were detected in 20 individuals (19 out of 224 treated [8.5%]; 1 out of 59 received placebo [1.7%]) and were transient in ten. The majority had specificity primarily toward the C‐terminus of idarucizumab. There were no adverse events indicative of immunogenic reactions. Conclusion Pre‐existing and treatment‐emergent ADA were present at extremely low levels relative to the idarucizumab dosage under evaluation. The PK/PD of idarucizumab appeared to be unaffected by the presence of pre‐existing ADA. PMID:28230262

  12. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  13. A Radiation Solver for the National Combustion Code

    NASA Technical Reports Server (NTRS)

    Sockol, Peter M.

    2015-01-01

    A methodology is given that converts an existing finite volume radiative transfer method that requires input of local absorption coefficients to one that can treat a mixture of combustion gases and compute the coefficients on the fly from the local mixture properties. The full-spectrum k-distribution method is used to transform the radiative transfer equation (RTE) to an alternate wavenumber variable, g. The coefficients in the transformed equation are calculated at discrete temperatures and participating species mole fractions that span the values of the problem for each value of g. These results are stored in a table and interpolation is used to find the coefficients at every cell in the field. Finally, the transformed RTE is solved for each g and Gaussian quadrature is used to find the radiant heat flux throughout the field. The present implementation is in an existing cartesian/cylindrical grid radiative transfer code and the local mixture properties are given by a solution of the National Combustion Code (NCC) on the same grid. Based on this work the intention is to apply this method to an existing unstructured grid radiation code which can then be coupled directly to NCC.

  14. A General Symbolic Method with Physical Applications

    NASA Astrophysics Data System (ADS)

    Smith, Gregory M.

    2000-06-01

    A solution to the problem of unifying the General Relativistic and Quantum Theoretical formalisms is given which introduces a new non-axiomatic symbolic method and an algebraic generalization of the Calculus to non-finite symbolisms without reference to the concept of a limit. An essential feature of the non-axiomatic method is the inadequacy of any (finite) statements: Identifying this aspect of the theory with the "existence of an external physical reality" both allows for the consistency of the method with the results of experiments and avoids the so-called "measurement problem" of quantum theory.

  15. Precise Relative Earthquake Magnitudes from Cross Correlation

    DOE PAGES

    Cleveland, K. Michael; Ammon, Charles J.

    2015-04-21

    We present a method to estimate precise relative magnitudes using cross correlation of seismic waveforms. Our method incorporates the intercorrelation of all events in a group of earthquakes, as opposed to individual event pairings relative to a reference event. This method works well when a reliable reference event does not exist. We illustrate the method using vertical strike-slip earthquakes located in the northeast Pacific and Panama fracture zone regions. Our results are generally consistent with the Global Centroid Moment Tensor catalog, which we use to establish a baseline for the relative event sizes.
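
    The core computation behind such methods is an amplitude scale factor between correlated waveforms, converted to a magnitude difference. The sketch below assumes the traces are already aligned at their cross-correlation peak and shows a single pairing; the full method jointly inverts all pairings in the group:

      import numpy as np

      rng = np.random.default_rng(6)
      wavelet = np.convolve(rng.normal(size=256), np.hanning(25), mode="same")
      event_a = 1.0 * wavelet + rng.normal(0.0, 0.02, 256)
      event_b = 2.5 * wavelet + rng.normal(0.0, 0.02, 256)  # co-located, larger event

      # Least-squares amplitude ratio between aligned, highly correlated waveforms.
      ratio = np.dot(event_b, event_a) / np.dot(event_a, event_a)
      dmag = np.log10(ratio)   # amplitude-based relative magnitude difference
      print(f"amplitude ratio: {ratio:.2f}, magnitude difference: {dmag:.2f}")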

  16. A systematic and efficient method to compute multi-loop master integrals

    NASA Astrophysics Data System (ADS)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. It can thus be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method not only achieves results with high precision, but is also much faster than the only existing systematic method, sector decomposition. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  17. Experiment study on RC frame retrofitted by the external structure

    NASA Astrophysics Data System (ADS)

    Liu, Chunyang; Shi, Junji; Hiroshi, Kuramoto; Taguchi, Takashi; Kamiya, Takashi

    2016-09-01

    A new retrofitting method is proposed herein for reinforced concrete (RC) structures through attachment of an external structure. The external structure consists of a fiber-concrete-encased steel frame, a connection slab and transverse beams, and is connected to the existing structure through the connection slab and transverse beams. Pseudostatic experiments were carried out on one unretrofitted specimen and three retrofitted frame specimens. The characteristics, including failure mode, crack pattern, hysteresis loop behavior, and the relationship of strain and displacement of the concrete slab, are demonstrated. The results show that the load-carrying capacity is clearly increased, and that the extension length of the slab and the number of columns within the external frame are important factors influencing the working performance of the existing structure. In addition, the displacement difference between the existing structure and the outer structure was caused mainly by three factors: shear deformation of the slab, extraction of the transverse beams, and drift of the junction between the slab and the existing frame. The deformation from the first two factors accounted for approximately 80% of the total, so these factors should be carefully considered in engineering practice to enhance the effects of this new retrofitting method.

  18. The Correlation between Global Citizenship Perceptions and Cultural Intelligence Levels of Teachers

    ERIC Educational Resources Information Center

    Yüksel, Azize; Eres, Figen

    2018-01-01

    The expansion of communication methods in the globalized world, the reduction of locality to a minimum in the economy and, as a result, the migration from less economically developed countries to developed countries, which in turn results in close interaction between ethnicities, all make it impossible for a homogeneous society to exist and…

  19. Stability analysis of nonlinear systems with slope restricted nonlinearities.

    PubMed

    Liu, Xian; Du, Jiajia; Gao, Qing

    2014-01-01

    The problem of absolute stability of Lur'e systems with sector and slope restricted nonlinearities is revisited. Novel time-domain and frequency-domain criteria are established by using the Lyapunov method and the well-known Kalman-Yakubovich-Popov (KYP) lemma. The criteria strengthen some existing results. Simulations are given to illustrate the efficiency of the results.

  20. Usual and Unusual Care: Existing Practice Control Groups In Randomized Controlled Trials of Behavioral Interventions

    PubMed Central

    Freedland, Kenneth E.; Mohr, David C.; Davidson, Karina W.; Schwartz, Joseph E.

    2011-01-01

    Objective To examine the use of existing practice control groups in randomized controlled trials of behavioral interventions, and the role of extrinsic healthcare services in the design and conduct of behavioral trials. Method Selective qualitative review. Results Extrinsic healthcare services, also known as nonstudy care, have important but under-recognized effects on the design and conduct of behavioral trials. Usual care, treatment as usual, standard of care, and other existing practice control groups pose a variety of methodological and ethical challenges, but they play a vital role in behavioral intervention research. Conclusion This review highlights the need for a scientific consensus statement on control groups in behavioral trials. PMID:21536837

  1. A method for evaluating discoverability and navigability of recommendation algorithms.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measure proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with an evaluation of the discoverability and navigability of recommendation algorithms. The proposed method first evaluates discoverability by investigating structural properties of the resulting recommender systems in terms of bow-tie structure and path lengths. Second, it evaluates navigability by simulating three different models of information-seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method for evaluating the navigability of arbitrary recommendation algorithms.

  2. New method to enhance the extraction yield of rutin from Sophora japonica using a novel ultrasonic extraction system by determining optimum ultrasonic frequency.

    PubMed

    Liao, Jianqing; Qu, Baida; Liu, Da; Zheng, Naiqin

    2015-11-01

    A new method is proposed for enhancing the extraction yield of rutin from Sophora japonica, in which a novel ultrasonic extraction system determines the optimum ultrasonic frequency via a two-step procedure. This study systematically investigated the influence of a continuous frequency range of 20-92 kHz on rutin yields. The effects of different operating conditions on rutin yields were also studied in detail, including solvent concentration, solvent-to-solid ratio, ultrasound power, temperature and particle size. A higher extraction yield was obtained at an ultrasonic frequency of 60-62 kHz, and this optimum was little affected by the other extraction conditions. Comparative studies between existing methods and the present method were done to verify its effectiveness. Results indicated that the new extraction method gave a higher extraction yield than existing ultrasound-assisted extraction (UAE) and Soxhlet extraction (SE). Thus, the potential use of this method may be promising for extraction of natural materials on an industrial scale in the future. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Reinforcement learning for resource allocation in LEO satellite networks.

    PubMed

    Usaha, Wipawee; Barria, Javier A

    2007-06-01

    In this paper, we develop and assess online decision-making algorithms for call admission and routing for low Earth orbit (LEO) satellite networks. It has been shown in a recent paper that, in a LEO satellite system, a semi-Markov decision process formulation of the call admission and routing problem can achieve better performance in terms of an average revenue function than existing routing methods. However, the conventional dynamic programming (DP) numerical solution becomes prohibited as the problem size increases. In this paper, two solution methods based on reinforcement learning (RL) are proposed in order to circumvent the computational burden of DP. The first method is based on an actor-critic method with temporal-difference (TD) learning. The second method is based on a critic-only method, called optimistic TD learning. The algorithms enhance performance in terms of requirements in storage, computational complexity and computational time, and in terms of an overall long-term average revenue function that penalizes blocked calls. Numerical studies are carried out, and the results obtained show that the RL framework can achieve up to 56% higher average revenue over existing routing methods used in LEO satellite networks with reasonable storage and computational requirements.

  4. Identifying self-interstitials of bcc and fcc crystals in molecular dynamics

    NASA Astrophysics Data System (ADS)

    Bukkuru, S.; Bhardwaj, U.; Warrier, M.; Rao, A. D. P.; Valsakumar, M. C.

    2017-02-01

    Identification of self-interstitials in molecular dynamics (MD) simulations is of critical importance. Several criteria exist for identifying self-interstitials, and most use an assumed cut-off value for the displacement of an atom from its lattice position; the results obtained are affected by the chosen cut-off value, and these chosen cut-offs are independent of temperature. We have developed a novel unsupervised learning algorithm called Max-Space Clustering (MSC) to identify an appropriate cut-off value and its dependence on temperature. This method is compared with some widely used methods such as the effective sphere (ES) method and the nearest neighbor sphere (NNS) method. The cut-off radius obtained using our method shows a linear variation with temperature. The value of the cut-off radius and its temperature dependence is derived for five bcc (Cr, Fe, Mo, Nb, W) and six fcc (Ag, Au, Cu, Ni, Pd, Pt) crystals. The ratio of the cut-off value "r" to the lattice constant "a" lies between 0.23 and 0.3 at 300 K, and this ratio is on average smaller for the fcc crystals. Collision cascade simulations are carried out for primary knock-on atom (PKA) energies of 5 keV in Fe (at 300 K and 1000 K) and W (at 300 K and 2500 K), and the results are compared using the various methods.
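
    A hedged reading of the cut-off selection idea: treat atomic displacements from lattice sites as one-dimensional data and place the cut-off in the largest gap between sorted values, separating thermal vibration from genuine interstitial displacements. The sketch below is our illustration of that idea, not the published MSC algorithm:

      import numpy as np

      def max_gap_cutoff(displacements):
          """Place the cut-off in the middle of the largest gap between sorted values."""
          d = np.sort(np.asarray(displacements))
          gaps = np.diff(d)
          i = np.argmax(gaps)
          return 0.5 * (d[i] + d[i + 1])

      rng = np.random.default_rng(7)
      thermal = np.abs(rng.normal(0.05, 0.02, 1000))  # vibrating lattice atoms (units of a)
      defects = np.array([0.45, 0.52, 0.61])          # displaced interstitial candidates
      cut = max_gap_cutoff(np.concatenate([thermal, defects]))
      print(f"cut-off radius: {cut:.2f} a")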

  5. A YinYang bipolar fuzzy cognitive TOPSIS method to bipolar disorder diagnosis.

    PubMed

    Han, Ying; Lu, Zhenyu; Du, Zhenguang; Luo, Qi; Chen, Sheng

    2018-05-01

    Bipolar disorder is often misdiagnosed as unipolar depression in clinical diagnosis. The main reason is that, unlike in other diseases, bipolarity is the norm rather than the exception in bipolar disorder diagnosis. The YinYang bipolar fuzzy set captures bipolarity and has been successfully used to construct a unified inference mathematical modeling method for bipolar disorder clinical diagnosis. Nevertheless, symptoms and their interrelationships are not considered in the existing method, limiting its ability to describe the complexity of bipolar disorder. Thus, in this paper, a YinYang bipolar fuzzy multi-criteria group decision making method for bipolar disorder clinical diagnosis is developed. Compared with the existing method, the new one is more comprehensive. Its merits are as follows: First, multi-criteria group decision making is introduced into bipolar disorder diagnosis to consider different symptoms and multiple doctors' opinions. Second, the discreet diagnosis principle is adopted by the revised TOPSIS method. Last but not least, a YinYang bipolar fuzzy cognitive map is provided for understanding the interrelations among symptoms. The illustrated case demonstrates the feasibility, validity, and necessity of the theoretical results obtained. Moreover, the comparison analysis demonstrates that the diagnosis result is more accurate when interrelations among symptoms are considered in the proposed method. In conclusion, the main contribution of this paper is to provide a comprehensive mathematical approach to improve the accuracy of bipolar disorder clinical diagnosis, in which both bipolarity and complexity are considered. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Photodecomposition Profile of Curcumin in the Existence of Tungsten Trioxide Particles

    NASA Astrophysics Data System (ADS)

    Nandiyanto, A. B. D.; Zaen, R.; Oktiani, R.; Abdullah, A. G.

    2018-02-01

    The purpose of this study was to investigate the stability of curcumin solution in the presence of tungsten trioxide (WO3) particles under light illumination. In the experimental method, curcumin extracted from Indonesian local turmeric was combined with WO3 microparticles and placed in a photoreactor system. The photostability of curcumin was assessed over 22 hours under a 100 W neon lamp. The results showed that the curcumin solution was relatively stable: when curcumin was irradiated without WO3 present, no change in the curcumin concentration was found. However, when the curcumin solution was mixed with WO3 particles, a decrease in the concentration of curcumin was observed; the concentration of curcumin with WO3 after light irradiation was about 73.58% of the initial value. Based on these results, we conclude that curcumin is relatively stable against light, but that its light-irradiation stability decreases with the addition of this inorganic material.

  7. A Bayesian maximum entropy-based methodology for optimal spatiotemporal design of groundwater monitoring networks.

    PubMed

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    This paper presents a new methodology for analyzing the spatiotemporal variability of water table levels and redesigning a groundwater level monitoring network (GLMN) using the Bayesian Maximum Entropy (BME) technique and a multi-criteria decision-making approach based on ordered weighted averaging (OWA). The spatial sampling is determined using a hexagonal gridding pattern and a new method, which is proposed to assign a removal priority number to each pre-existing station. To design temporal sampling, a new approach is also applied to consider uncertainty caused by lack of information. In this approach, different time lag values are tested by regarding another source of information, which is simulation result of a numerical groundwater flow model. Furthermore, to incorporate the existing uncertainties in available monitoring data, the flexibility of the BME interpolation technique is taken into account in applying soft data and improving the accuracy of the calculations. To examine the methodology, it is applied to the Dehgolan plain in northwestern Iran. Based on the results, a configuration of 33 monitoring stations for a regular hexagonal grid of side length 3600 m is proposed, in which the time lag between samples is equal to 5 weeks. Since the variance estimation errors of the BME method are almost identical for redesigned and existing networks, the redesigned monitoring network is more cost-effective and efficient than the existing monitoring network with 52 stations and monthly sampling frequency.

  8. ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations

    NASA Astrophysics Data System (ADS)

    Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil

    2018-04-01

    In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this equation. The results are compared with the existing exact solutions. Thus, the Adomian decomposition method can be the best alternative method for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work. The results obtained by the ADM demonstrate its capability and efficiency for solving these equations.

  9. Compressive Sensing via Nonlocal Smoothed Rank Function

    PubMed Central

    Fan, Ya-Ru; Liu, Jun; Zhao, Xi-Le

    2016-01-01

    Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683
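
    The behavior of a smoothed rank function is easiest to see in how it shrinks singular values. The following is a conceptual sketch under assumed choices (an exponential surrogate f(s) = 1 - exp(-s^2 / (2 delta^2)) and illustrative delta/tau values), not the paper's full alternating-minimization model: small, noise-scale singular values are suppressed hard while large ones are left nearly untouched, unlike the uniform shift of nuclear-norm soft-thresholding.

        import numpy as np

        def srf_shrink(M, delta, tau):
            # One gradient step on the singular values of M using the smoothed
            # rank surrogate; f'(s) = (s / delta^2) exp(-s^2 / (2 delta^2))
            # vanishes for large s, so strong components are preserved.
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            grad = (s / delta**2) * np.exp(-s**2 / (2.0 * delta**2))
            return U @ np.diag(np.maximum(s - tau * grad, 0.0)) @ Vt

        rng = np.random.default_rng(0)
        low_rank = rng.standard_normal((64, 4)) @ rng.standard_normal((4, 64))
        noisy = low_rank + 0.3 * rng.standard_normal((64, 64))
        shrunk = srf_shrink(noisy, delta=5.0, tau=25.0)
        print(np.linalg.svd(noisy, compute_uv=False)[:8].round(1))   # before
        print(np.linalg.svd(shrunk, compute_uv=False)[:8].round(1))  # after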

  10. New method for taking into account finite nuclear mass in the determination of the absence of bound states: Application to e/sup +/H

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Armour, E.A.G.

    1982-06-07

    It has been known since the work of Aronson, Kleinman and Spruch, and Armour that, if the proton is considered to be infinitely massive, no bound state of a system made up of a positron and a hydrogen atom can exist. In this Letter a new method is introduced for taking into account finite nuclear mass. With use of this method it is shown that the inclusion of the finite mass of the proton does not result in the appearance of a bound state. This is the first time that this result has been established.

  11. Analysis of Bonded Joints Between the Facesheet and Flange of Corrugated Composite Panels

    NASA Technical Reports Server (NTRS)

    Yarrington, Phillip W.; Collier, Craig S.; Bednarcyk, Brett A.

    2008-01-01

    This paper outlines a method for the stress analysis of bonded composite corrugated panel facesheet to flange joints. The method relies on the existing HyperSizer Joints software, which analyzes the bonded joint, along with a beam analogy model that provides the necessary boundary loading conditions to the joint analysis. The method is capable of predicting the full multiaxial stress and strain fields within the flange to facesheet joint and thus can determine ply-level margins and evaluate delamination. Results comparing the method to NASTRAN finite element model stress fields are provided illustrating the accuracy of the method.

  12. Secure Indoor Localization Based on Extracting Trusted Fingerprint

    PubMed Central

    Yin, Xixi; Zheng, Yanliu; Wang, Chun

    2018-01-01

    Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms. PMID:29401755
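
    A toy version of the voting idea shows how trust factors gate the fingerprints before matching. Everything below is a loose, assumed illustration rather than the SFL algorithm itself: the per-AP variability is estimated across reference points instead of across a measurement history, and the database, thresholds, and weighted k-NN matching step are hypothetical.

        import numpy as np

        def trust_factors(db, live, sigma_cap=6.0):
            # Each reference point votes an AP "trustworthy" if the live RSSI
            # deviates from its stored value by less than a threshold tied to
            # that AP's standard deviation; the trust factor is the vote share.
            stored = np.array([fp["rssi"] for fp in db], dtype=float)
            thr = 2.0 * np.minimum(stored.std(axis=0) + 1e-6, sigma_cap)
            return (np.abs(stored - live) < thr).mean(axis=0)

        def localize(db, live, k=2, trust_min=0.3):
            keep = trust_factors(db, live) >= trust_min   # drop suspect APs
            d = np.array([np.linalg.norm((np.array(fp["rssi"]) - live)[keep])
                          for fp in db])
            nn = np.argsort(d)[:k]                        # weighted k-NN
            w = 1.0 / (d[nn] + 1e-6)
            pos = np.array([db[i]["pos"] for i in nn], dtype=float)
            return (w[:, None] * pos).sum(axis=0) / w.sum()

        db = [{"pos": [0, 0], "rssi": [-40, -60, -70]},
              {"pos": [0, 5], "rssi": [-55, -48, -72]},
              {"pos": [5, 0], "rssi": [-47, -66, -55]},
              {"pos": [5, 5], "rssi": [-60, -52, -50]}]
        print(localize(db, np.array([-46.0, -62.0, -68.0])))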

  13. Secure Indoor Localization Based on Extracting Trusted Fingerprint.

    PubMed

    Luo, Juan; Yin, Xixi; Zheng, Yanliu; Wang, Chun

    2018-02-05

    Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms.

  14. Solution of the Bagley Torvik equation by fractional DTM

    NASA Astrophysics Data System (ADS)

    Arora, Geeta; Pratiksha

    2017-07-01

    In this paper, the fractional differential transform method (DTM) is implemented on the Bagley-Torvik equation. This equation models the viscoelastic behavior of geological strata, metals, glasses, etc.; it explains the motion of a rigid plate immersed in a Newtonian fluid. DTM is a simple, reliable and efficient method that gives a series solution. The Caputo fractional derivative is considered throughout this work. Two examples are given to demonstrate the validity and applicability of the method, and a comparison is made with existing results.

  15. Nonideal isentropic gas flow through converging-diverging nozzles

    NASA Technical Reports Server (NTRS)

    Bober, W.; Chow, W. L.

    1990-01-01

    A method for treating nonideal gas flows through converging-diverging nozzles is described. The method incorporates the Redlich-Kwong equation of state. The Runge-Kutta method is used to obtain a solution. Numerical results were obtained for methane gas. Typical plots of pressure, temperature, and area ratios as functions of Mach number are given. From the plots, it can be seen that there exists a range of reservoir conditions that require the gas to be treated as nonideal if an accurate solution is to be obtained.
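
    The Redlich-Kwong equation of state at the core of the method is simple to evaluate. The sketch below computes pressure and compressibility factor for methane at an assumed state point; the critical constants are standard literature values, and the point is chosen only so that Z visibly departs from 1 (nonideal behavior).

        import numpy as np

        R = 8.314462                 # J/(mol K)
        Tc, Pc = 190.56, 4.599e6     # critical point of methane (K, Pa)
        a = 0.42748 * R**2 * Tc**2.5 / Pc
        b = 0.08664 * R * Tc / Pc

        def pressure_rk(T, v):
            # Redlich-Kwong EOS; v is the molar volume in m^3/mol.
            return R * T / (v - b) - a / (np.sqrt(T) * v * (v + b))

        T, v = 300.0, 1.0e-4         # assumed reservoir-like state point
        p = pressure_rk(T, v)
        Z = p * v / (R * T)          # Z != 1 signals nonideal behavior
        print(f"p = {p / 1e6:.2f} MPa, Z = {Z:.3f}")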

  16. Theoretical studies of floating-reference method for NIR blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Shi, Zhenzhi; Yang, Yue; Zhao, Huijuan; Chen, Wenliang; Liu, Rong; Xu, Kexin

    2011-03-01

    Non-invasive blood glucose monitoring using NIR light has been suffered from the variety of optical background that is mainly caused by the change of human body, such as the change of temperature, water concentration, and so on. In order to eliminate these internal influence and external interference a so called floating-reference method has been proposed to provide an internal reference. From the analysis of the diffuse reflectance spectrum, a position has been found where diffuse reflection of light is not sensitive to the glucose concentrations. Our previous work has proved the existence of reference position using diffusion equation. However, since glucose monitoring generally use the NIR light in region of 1000-2000nm, diffusion equation is not valid because of the high absorption coefficient and small source-detector separations. In this paper, steady-state high-order approximate model is used to further investigate the existence of the floating reference position in semi-infinite medium. Based on the analysis of different optical parameters on the impact of spatially resolved reflectance of light, we find that the existence of the floating-reference position is the result of the interaction of optical parameters. Comparing to the results of Monte Carlo simulation, the applicable region of diffusion approximation and higher-order approximation for the calculation of floating-reference position is discussed at the wavelength of 1000nm-1800nm, using the intralipid solution of different concentrations. The results indicate that when the reduced albedo is greater than 0.93, diffusion approximation results are more close to simulation results, otherwise the high order approximation is more applicable.

  17. Ensemble framework based real-time respiratory motion prediction for adaptive radiotherapy applications.

    PubMed

    Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C

    2016-08-01

    Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency of the radiotherapy system. Accurate prediction of respiratory motion is, however, a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in the fundamental frequency pattern. In this paper, to enhance the accuracy of respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
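
    The stacked-ensemble structure can be sketched with off-the-shelf tools. The example below is not the paper's framework: the level-0 regressors (ridge and k-nearest neighbours) are mere placeholders for the respiratory-motion predictors the authors select, the trace is synthetic, and the 30 Hz rate, 1 s lag window, and 400 ms prediction horizon are assumptions.

        import numpy as np
        from sklearn.ensemble import StackingRegressor
        from sklearn.linear_model import LinearRegression, Ridge
        from sklearn.neighbors import KNeighborsRegressor

        rng = np.random.default_rng(1)
        t = np.arange(3000) / 30.0                     # 30 Hz samples
        sig = (np.sin(2 * np.pi * 0.3 * t)             # quasi-periodic breathing
               + 0.1 * t / t[-1]                       # baseline drift
               + 0.05 * rng.standard_normal(t.size))   # measurement noise

        lags, horizon = 30, 12                         # 1 s history, 400 ms ahead
        X = np.array([sig[i - lags:i] for i in range(lags, sig.size - horizon)])
        y = sig[lags + horizon:]
        split = int(0.8 * y.size)

        stack = StackingRegressor(
            estimators=[("ridge", Ridge(alpha=1.0)),
                        ("knn", KNeighborsRegressor(n_neighbors=10))],
            final_estimator=LinearRegression())        # level-1 generalizer
        stack.fit(X[:split], y[:split])
        err = stack.predict(X[split:]) - y[split:]
        print(f"RMSE: {np.sqrt(np.mean(err**2)):.4f}")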

  18. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve the reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than both the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.

  19. Role of short-range correlation in facilitation of wave propagation in a long-range ladder chain

    NASA Astrophysics Data System (ADS)

    Farzadian, O.; Niry, M. D.

    2018-09-01

    We extend a new method for generating a random chain that has a kind of short-range correlation, induced by a repeated sequence, while retaining long-range correlation. Three distinct methods are considered to study the localization-delocalization transition of mechanical waves in one-dimensional disordered media with coexisting short- and long-range correlations. First, a transfer-matrix method is used to calculate numerically the localization length of a wave in a binary chain. We find that the existence of short-range correlation in a long-range correlated chain can increase the localization length at the resonance frequency Ωc. Then, we carry out an analytical study of the delocalization properties of the waves in correlated disordered media around Ωc. Finally, we apply a dynamical method based on direct numerical simulation of the wave equation to study the propagation of waves in the correlated chain. Imposing short-range correlation on the long-range background leads to super-diffusive transport. The results obtained with all three methods are in agreement with each other.
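
    The transfer-matrix calculation, the first of the three methods, is compact enough to sketch. The example below estimates the localization length of a binary harmonic chain from the Lyapunov exponent of the transfer-matrix product; for brevity the chain here is uncorrelated, whereas the paper's chains carry the engineered short- and long-range correlations, and the masses and spring constant are assumptions.

        import numpy as np

        def localization_length(masses, omega, k=1.0):
            # Iterate u_{n+1} = (2 - m_n omega^2 / k) u_n - u_{n-1} and extract
            # the Lyapunov exponent, renormalizing to avoid overflow.
            v, log_norm = np.array([1.0, 0.0]), 0.0
            for m in masses:
                v = np.array([(2.0 - m * omega**2 / k) * v[0] - v[1], v[0]])
                n = np.linalg.norm(v)
                log_norm += np.log(n)
                v /= n
            gamma = log_norm / masses.size         # Lyapunov exponent
            return 1.0 / gamma if gamma > 0 else np.inf

        rng = np.random.default_rng(0)
        masses = rng.choice([1.0, 2.0], size=50_000)   # uncorrelated binary chain
        for omega in (0.2, 0.8, 1.4):
            print(omega, round(localization_length(masses, omega), 1))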

  20. End of the chain? Rugosity and fine-scale bathymetry from existing underwater digital imagery using structure-from-motion (SfM) technology

    USGS Publications Warehouse

    Storlazzi, Curt; Dartnell, Peter; Hatcher, Gerry; Gibbs, Ann E.

    2016-01-01

    The rugosity or complexity of the seafloor has been shown to be an important ecological parameter for fish, algae, and corals. Historically, rugosity has been measured either using simple and subjective manual methods such as ‘chain-and-tape’ or complicated and expensive geophysical methods. Here, we demonstrate the application of structure-from-motion (SfM) photogrammetry to generate high-resolution, three-dimensional bathymetric models of a fringing reef from existing underwater video collected to characterize the seafloor. SfM techniques are capable of achieving spatial resolution that can be orders of magnitude greater than large-scale lidar and sonar mapping of coral reef ecosystems. The resulting data provide finer-scale measurements of bathymetry and rugosity that are more applicable to ecological studies of coral reefs than provided by the more expensive and time-consuming geophysical methods. Utilizing SfM techniques for characterizing the benthic habitat proved to be more effective and quantitatively powerful than conventional methods and thus might portend the end of the ‘chain-and-tape’ method for measuring benthic complexity.
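
    Once SfM yields a gridded bathymetric model, rugosity reduces to a surface-to-planar area ratio. The sketch below computes it by splitting every grid cell into two triangles; the 1 cm cell size and the synthetic patches are assumptions for illustration.

        import numpy as np

        def rugosity(z, cell=0.01):
            # Surface area / planar area of a gridded depth patch (metres);
            # 1.0 means perfectly flat, larger means rougher.
            area = 0.0
            for i in range(z.shape[0] - 1):
                for j in range(z.shape[1] - 1):
                    p00 = np.array([0.0, 0.0, z[i, j]])
                    p10 = np.array([cell, 0.0, z[i, j + 1]])
                    p01 = np.array([0.0, cell, z[i + 1, j]])
                    p11 = np.array([cell, cell, z[i + 1, j + 1]])
                    area += 0.5 * np.linalg.norm(np.cross(p10 - p00, p01 - p00))
                    area += 0.5 * np.linalg.norm(np.cross(p10 - p11, p01 - p11))
            planar = (z.shape[0] - 1) * (z.shape[1] - 1) * cell**2
            return area / planar

        rng = np.random.default_rng(2)
        flat = np.zeros((50, 50))
        rough = 0.02 * rng.standard_normal((50, 50))   # ~2 cm relief
        print(rugosity(flat), rugosity(rough))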

  1. Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.

    PubMed

    Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon

    2013-04-15

    The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods, namely identifying the alternatives, defining the criteria, deriving the criteria weights using the analytic hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS); running the PROMETHEE methods on these inputs; and conducting a sensitivity analysis. A case study at a mine site was presented to demonstrate the improved framework. The results showed that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selection. Copyright © 2013 Elsevier Ltd. All rights reserved.
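
    The core PROMETHEE II computation is small enough to sketch. The example below uses linear preference functions and hypothetical alternatives, weights, and thresholds; it illustrates the net-flow ranking only, not the paper's full AHP weighting, Monte Carlo simulation, or sensitivity analysis.

        import numpy as np

        def promethee_net_flows(X, w, p):
            # Linear preference: P_j(d) = clip(d / p_j, 0, 1) for the advantage
            # d of one alternative over another (all criteria maximized).
            n = X.shape[0]
            pi = np.zeros((n, n))
            for a in range(n):
                for b in range(n):
                    pi[a, b] = np.dot(w, np.clip((X[a] - X[b]) / p, 0.0, 1.0))
            return (pi.sum(axis=1) - pi.sum(axis=0)) / (n - 1)   # net flows

        # Hypothetical remedial alternatives scored on three maximized criteria
        # (e.g., water-quality improvement, cost saving, implementability).
        X = np.array([[0.8, 0.3, 0.6],      # A1: cover system
                      [0.6, 0.7, 0.5],      # A2: active water treatment
                      [0.4, 0.9, 0.8]])     # A3: passive wetland
        w = np.array([0.5, 0.3, 0.2])       # AHP-style weights, sum to 1
        p = np.array([0.4, 0.4, 0.4])       # preference thresholds
        print(promethee_net_flows(X, w, p).round(3))   # rank by net flow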

  2. Efficiently computing exact geodesic loops within finite steps.

    PubMed

    Xin, Shi-Qing; He, Ying; Fu, Chi-Wing

    2012-06-01

    Closed geodesics, or geodesic loops, are crucial to the study of differential topology and differential geometry. Although the existence and properties of closed geodesics on smooth surfaces have been widely studied in the mathematics community, relatively little progress has been made on how to compute them on polygonal surfaces. Most existing algorithms simply consider the mesh as a graph, so the resulting loops are restricted to mesh edges, which are far from the actual geodesics. This paper is the first to prove the existence and uniqueness of a geodesic loop restricted to a closed face sequence; it also contributes an efficient algorithm that iteratively evolves an initial closed path on a given mesh into an exact geodesic loop within finitely many steps. Our proposed algorithm requires only O(k) space and, experimentally, O(mk) time, where m is the number of vertices in the region bounded by the initial loop and the resultant geodesic loop, and k is the average number of edges in the edge sequences that the evolving loop passes through. In contrast to existing geodesic curvature flow methods, which compute an approximate geodesic loop within a predefined threshold, our method is exact and applies directly to triangular meshes without solving any differential equation with a numerical solver; it can run at interactive speed, e.g., on the order of milliseconds for a mesh with around 50K vertices, and hence significantly outperforms existing algorithms. In fact, our algorithm can run at interactive speed even for larger meshes. Besides the complexity of the input mesh, the geometric shape can also affect the number of evolving steps, i.e., the performance. We motivate our algorithm with an interactive shape segmentation example shown later in the paper.

  3. Localizing ECoG electrodes on the cortical anatomy without post-implantation imaging

    PubMed Central

    Gupta, Disha; Hill, N. Jeremy; Adamo, Matthew A.; Ritaccio, Anthony; Schalk, Gerwin

    2014-01-01

    Introduction Electrocorticographic (ECoG) grids are placed subdurally on the cortex in people undergoing cortical resection to delineate eloquent cortex. ECoG signals have high spatial and temporal resolution and thus can be valuable for neuroscientific research. The value of these data is highest when they can be related to the cortical anatomy. Existing methods that establish this relationship rely either on post-implantation imaging using computed tomography (CT), magnetic resonance imaging (MRI) or X-Rays, or on intra-operative photographs. For research purposes, it is desirable to localize ECoG electrodes on the brain anatomy even when post-operative imaging is not available or when intra-operative photographs do not readily identify anatomical landmarks. Methods We developed a method to co-register ECoG electrodes to the underlying cortical anatomy using only a pre-operative MRI, a clinical neuronavigation device (such as BrainLab VectorVision), and fiducial markers. To validate our technique, we compared our results to data collected from six subjects who also had post-grid implantation imaging available. We compared the electrode coordinates obtained by our fiducial-based method to those obtained using existing methods, which are based on co-registering pre- and post-grid implantation images. Results Our fiducial-based method agreed with the MRI–CT method to within an average of 8.24 mm (mean, median = 7.10 mm) across 6 subjects in 3 dimensions. It showed an average discrepancy of 2.7 mm when compared to the results of the intra-operative photograph method in a 2D coordinate system. As this method does not require post-operative imaging such as CTs, our technique should prove useful for research in intra-operative single-stage surgery scenarios. To demonstrate the use of our method, we applied our method during real-time mapping of eloquent cortex during a single-stage surgery. The results demonstrated that our method can be applied intra-operatively in the absence of post-operative imaging to acquire ECoG signals that can be valuable for neuroscientific investigations. PMID:25379417

  4. The emergence of an ethical duty to disclose genetic research results: international perspectives.

    PubMed

    Knoppers, Bartha Maria; Joly, Yann; Simard, Jacques; Durocher, Francine

    2006-11-01

    The last decade has witnessed the emergence of international ethics guidelines discussing the importance of disclosing global and also, in certain circumstances, individual genetic research results to participants. This discussion is all the more important considering the advent of pharmacogenomics and the increasing incidence of 'translational' genetic research in the post-genomic era. We surveyed both the literature and the ethical guidelines using selective keywords. We then analyzed our data using a qualitative method approach and singled out countries or policies that were representative of certain positions. From our findings, we conclude that at the international level, there now exists an ethical duty to return individual genetic research results subject to the existence of proof of validity, significance and benefit. Even where these criteria are met, the right of the research participant not to know also has to be taken into consideration. The existence of an ethical duty to return individual genetic research results begs several other questions: Who should have the responsibility of disclosing such results and when? To whom should the results be disclosed? How? Finally, will this ethical 'imperative' become a legally recognized duty as well?

  5. 40 CFR 66.21 - How to calculate the penalty.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...

  6. 40 CFR 66.21 - How to calculate the penalty.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...

  7. 40 CFR 66.21 - How to calculate the penalty.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...

  8. 40 CFR 66.21 - How to calculate the penalty.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...

  9. 40 CFR 66.21 - How to calculate the penalty.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in an EPA approved research and development program where he determines that such participation would be appropriate. Information on appropriate research and development programs will be available from... existing technology or other emissions control method results in emission levels which satisfy the...

  10. Toolbox for Evaluating Residents as Teachers

    ERIC Educational Resources Information Center

    Coverdale, John H.; Ismail, Nadia; Mian, Ayesha; Dewey, Charlene

    2010-01-01

    Objective: The authors review existing assessment tools related to evaluating residents' teaching skills and teaching effectiveness. Methods: PubMed and PsycInfo databases were searched using combinations of keywords including "residents," "residents as teachers," "teaching skills," and "assessments" or "rating scales." Results: Eleven evaluation…

  11. The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A

    2015-12-01

    With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are both important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted; as a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that can be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adapts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited to anonymizing data with transformation models more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
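
    The interplay of the two transformation methods is easy to demonstrate: quasi-identifiers are first generalized, and any equivalence class still smaller than k is then suppressed. The sketch below is a minimal illustration of that coding model, not ARX itself; the columns, generalization hierarchy (age bands, ZIP truncation), and records are hypothetical.

        import pandas as pd

        def anonymize(df, k=2, age_band=10, zip_digits=3):
            # (a) generalize attribute values, then (b) suppress records in
            # equivalence classes that are still smaller than k.
            g = df.copy()
            g["age"] = (g["age"] // age_band * age_band).astype(str) + "s"
            g["zip"] = g["zip"].str[:zip_digits] + "*" * (5 - zip_digits)
            sizes = g.groupby(["age", "zip"])["age"].transform("size")
            return g[sizes >= k]

        df = pd.DataFrame({"age": [34, 37, 52, 36, 58, 23],
                           "zip": ["53711", "53715", "60614",
                                   "53713", "60601", "10001"],
                           "dx": ["flu", "flu", "copd",
                                  "asthma", "copd", "flu"]})
        print(anonymize(df, k=2))   # the lone "20s/100**" record is suppressed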

  12. A low-complexity attitude control method for large-angle agile maneuvers of a spacecraft with control moment gyros

    NASA Astrophysics Data System (ADS)

    Kawajiri, Shota; Matunaga, Saburo

    2017-10-01

    This study examines a low-complexity control method that satisfies mechanical constraints by using control moment gyros (CMGs) for agile maneuvers. The method is designed based on the fact that a simple rotation around the Euler principal axis is a well-approximated solution of a time-optimal rest-to-rest maneuver. For an agile large-angle maneuver using CMGs, it is suggested that there exists a coasting period in which all gimbal angles are constant and the constant body angular velocity is almost aligned with the Euler principal axis. In the proposed method, the gimbals are driven such that this coasting period is generated. This converts the problem into finding only a coasting time and gimbal angles whose combination maximizes the body angular velocity along the rotational axis of the maneuver. The effectiveness of the proposed method is demonstrated using numerical simulations. The results indicate that the proposed method shortens the settling time by 20-70% compared with a traditional feedback method. Additionally, a comparison with an existing path-planning method shows that the proposed method achieves low computational complexity (approximately 150 times faster) while keeping the settling time comparably short.

  13. Clarifying values: an updated review

    PubMed Central

    2013-01-01

    Background Consensus guidelines have recommended that decision aids include a process for helping patients clarify their values. We sought to examine the theoretical and empirical evidence related to the use of values clarification methods in patient decision aids. Methods Building on the International Patient Decision Aid Standards (IPDAS) Collaboration’s 2005 review of values clarification methods in decision aids, we convened a multi-disciplinary expert group to examine key definitions, decision-making process theories, and empirical evidence about the effects of values clarification methods in decision aids. To summarize the current state of theory and evidence about the role of values clarification methods in decision aids, we undertook a process of evidence review and summary. Results Values clarification methods (VCMs) are best defined as methods to help patients think about the desirability of options or attributes of options within a specific decision context, in order to identify which option he/she prefers. Several decision making process theories were identified that can inform the design of values clarification methods, but no single “best” practice for how such methods should be constructed was determined. Our evidence review found that existing VCMs were used for a variety of different decisions, rarely referenced underlying theory for their design, but generally were well described in regard to their development process. Listing the pros and cons of a decision was the most common method used. The 13 trials that compared decision support with or without VCMs reached mixed results: some found that VCMs improved some decision-making processes, while others found no effect. Conclusions Values clarification methods may improve decision-making processes and potentially more distal outcomes. However, the small number of evaluations of VCMs and, where evaluations exist, the heterogeneity in outcome measures makes it difficult to determine their overall effectiveness or the specific characteristics that increase effectiveness. PMID:24625261

  14. A new stationary gridline artifact suppression method based on the 2D discrete wavelet transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Hui, E-mail: corinna@seu.edu.cn; Key Laboratory of Computer Network and Information Integration; Centre de Recherche en Information Biomédicale sino-français, Laboratoire International Associé, Inserm, Université de Rennes 1, Rennes 35000

    2015-04-15

    Purpose: In digital x-ray radiography, an antiscatter grid is inserted between the patient and the image receptor to reduce scattered radiation. If the antiscatter grid is used in a stationary way, gridline artifacts will appear in the final image. In most of the gridline removal image processing methods, the useful information with spatial frequencies close to that of the gridline is usually lost or degraded. In this study, a new stationary gridline suppression method is designed to preserve more of the useful information. Methods: The method is as follows. The input image is first recursively decomposed into several smaller subimages using a multiscale 2D discrete wavelet transform. The decomposition process stops when the gridline signal is found to be greater than a threshold in one or several of these subimages using a gridline detection module. An automatic Gaussian band-stop filter is then applied to the detected subimages to remove the gridline signal. Finally, the restored image is achieved using the corresponding 2D inverse discrete wavelet transform. Results: The processed images show that the proposed method can remove the gridline signal efficiently while maintaining the image details. The spectra of a 1D Fourier transform of the processed images demonstrate that, compared with some existing gridline removal methods, the proposed method has better information preservation after the removal of the gridline artifacts. Additionally, the performance speed is relatively high. Conclusions: The experimental results demonstrate the efficiency of the proposed method. Compared with some existing gridline removal methods, the proposed method can preserve more information within an acceptable execution time.
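
    The filtering step can be sketched at a single decomposition level. The example below is a simplified, assumed version of the pipeline: the full method recursively decomposes and automatically detects which subimages contain the gridline, whereas here one level is used, the gridline is assumed to appear as a strong high-vertical-frequency peak in the approximation subimage, and the test image is synthetic.

        import numpy as np
        import pywt

        def suppress_gridlines(img, wavelet="db2", sigma=1.5, fmin_frac=0.25):
            # Decompose once, notch the detected vertical frequency in the
            # approximation subimage with a Gaussian band-stop, reconstruct.
            cA, details = pywt.dwt2(img, wavelet)
            spec = np.fft.rfft(cA, axis=0)              # column-wise spectra
            profile = np.abs(spec).mean(axis=1)
            fmin = int(fmin_frac * profile.size)        # skip image content
            f0 = np.argmax(profile[fmin:]) + fmin       # detected grid frequency
            f = np.arange(profile.size)
            notch = 1.0 - np.exp(-(f - f0) ** 2 / (2.0 * sigma**2))
            cA = np.fft.irfft(spec * notch[:, None], n=cA.shape[0], axis=0)
            return pywt.idwt2((cA, details), wavelet)

        y, x = np.mgrid[0:256, 0:256]
        scene = np.exp(-((x - 128) ** 2 + (y - 128) ** 2) / 5000.0)
        img = scene + 0.3 * np.sin(2.0 * np.pi * y / 8.0)   # horizontal gridlines
        clean = suppress_gridlines(img)
        print(np.abs(img - scene).mean(), np.abs(clean - scene).mean())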

  15. Joint correction of Nyquist artifact and minuscule motion-induced aliasing artifact in interleaved diffusion weighted EPI data using a composite two-dimensional phase correction procedure

    PubMed Central

    Chang, Hing-Chiu; Chen, Nan-kuei

    2016-01-01

    Diffusion-weighted imaging (DWI) obtained with an interleaved echo-planar imaging (EPI) pulse sequence has great potential for characterizing brain tissue properties at high spatial resolution. However, interleaved EPI based DWI data may be corrupted by various types of aliasing artifacts. First, inconsistencies in k-space data obtained with opposite readout gradient polarities result in Nyquist artifact, which is usually reduced with 1D phase correction in post-processing. When eddy current cross terms exist (e.g., in oblique-plane EPI), 2D phase correction is needed to reduce Nyquist artifact effectively. Second, minuscule motion induced phase inconsistencies in interleaved DWI scans result in image-domain aliasing artifact, which can be removed with reconstruction procedures that take shot-to-shot phase variations into consideration. In existing interleaved DWI reconstruction procedures, Nyquist artifact and minuscule motion-induced aliasing artifact are typically removed sequentially in two stages. Although two-stage phase correction generally performs well for non-oblique-plane EPI data obtained from a well-calibrated system, residual artifacts may still be pronounced in oblique-plane EPI data or when eddy current cross terms exist. To address this challenge, here we report a new composite 2D phase correction procedure, which effectively removes Nyquist artifact and minuscule motion induced aliasing artifact jointly in a single step. Our experimental results demonstrate that the new 2D phase correction method reduces artifacts in interleaved EPI based DWI data much more effectively than the existing two-stage artifact correction procedures. The new method robustly enables high-resolution DWI and should prove highly valuable for clinical use and research studies of DWI. PMID:27114342

  16. Quantitative evaluation of cross correlation between two finite-length time series with applications to single-molecule FRET.

    PubMed

    Hanson, Jeffery A; Yang, Haw

    2008-11-06

    The statistical properties of the cross correlation between two time series have been studied. An analytical expression for the variance of the cross correlation function has been derived. On the basis of these results, a statistically robust method is proposed to detect the existence, and determine the direction, of cross correlation between two time series. The proposed method has been characterized by computer simulations. Applications to single-molecule fluorescence spectroscopy are discussed. The results may also find immediate application in fluorescence correlation spectroscopy (FCS) and its variants.
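
    In the same spirit, a simulation-based check for cross correlation between two finite traces compares the observed correlation peak against a surrogate null distribution. The sketch below is an assumed stand-in for the paper's analytical variance expression: circular shifts preserve each trace's autocorrelation, which is exactly what inflates naive significance tests on finite series.

        import numpy as np

        rng = np.random.default_rng(0)

        def peak_xcorr(x, y, max_lag):
            # Peak normalized cross correlation over lags |L| <= max_lag.
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            best = -np.inf
            for L in range(-max_lag, max_lag + 1):
                a = x[:x.size - L] if L >= 0 else x[-L:]
                b = y[L:] if L >= 0 else y[:y.size + L]
                best = max(best, float(np.mean(a * b)))
            return best

        n = 500                                          # short, finite traces
        common = np.cumsum(rng.standard_normal(n))       # shared slow component
        x = common + rng.standard_normal(n)
        y = np.roll(common, 3) + rng.standard_normal(n)  # y lags x by 3 samples

        obs = peak_xcorr(x, y, max_lag=10)
        null = [peak_xcorr(x, np.roll(y, rng.integers(50, n - 50)), 10)
                for _ in range(200)]
        p = np.mean([v >= obs for v in null])
        print(f"peak r = {obs:.3f}, surrogate p = {p:.3f}")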

  17. New advances in the partial-reflection-drifts experiment using microprocessors

    NASA Technical Reports Server (NTRS)

    Ruggerio, R. L.; Bowhill, S. A.

    1982-01-01

    Improvements to the partial reflection drifts experiment are completed. The results of the improvements include real-time processing and simultaneous measurements of the D region with coherent scatter. Preliminary results indicate a positive correlation between the drift velocities calculated by both methods during a two-day interval. The possibility now exists for extended comparative observations between partial reflection and coherent scatter. In addition, preliminary measurements could be performed comparing partial reflection and meteor radar, to complete a comparison of the methods used to determine velocities in the D region.

  18. Results of a Formal Methods Demonstration Project

    NASA Technical Reports Server (NTRS)

    Kelly, J.; Covington, R.; Hamilton, D.

    1994-01-01

    This paper describes the results of a cooperative study conducted by a team of researchers in formal methods (FM) at three NASA Centers to demonstrate FM techniques and to tailor them to critical NASA software systems. This pilot project applied FM to an existing critical software subsystem, the Shuttle's Jet Select subsystem (Phase I of an ongoing study). The present study shows that FM can be used successfully to uncover hidden issues in a highly critical and mature Functional Subsystem Software Requirements (FSSR) specification, issues which are very difficult to discover by traditional means.

  19. A High Power Density Single-Phase PWM Rectifier with Active Ripple Energy Storage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ning, Puqi; Wang, Ruxi; Wang, Fei

    It is well known that a second-order harmonic current, and a corresponding ripple voltage on the dc bus, exist in single-phase PWM rectifiers. The low-frequency harmonic current is normally filtered using a bulk capacitor on the bus, which results in low power density. This paper proposes an active ripple energy storage method that can effectively reduce the required energy storage capacitance. The feed-forward control method and design considerations are provided. Simulation and 15 kW experimental results are provided for verification.
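
    The size of the problem that active storage removes can be seen from the passive-capacitor requirement. The sketch below works through the standard energy-balance estimate; only the 15 kW rating comes from the abstract, while the dc bus voltage, line frequency, and ripple allowances are assumptions.

        import numpy as np

        # With p(t) = P (1 - cos(2 w t)), the stored ripple energy swings by
        # dE = P / w each half line cycle. A bus capacitor absorbing dE with a
        # peak-to-peak ripple dVpp around Vdc needs C >= P / (w * Vdc * dVpp).
        P = 15e3             # rated power, W (from the 15 kW prototype)
        f = 50.0             # line frequency, Hz (assumed)
        Vdc = 400.0          # dc bus voltage, V (assumed)
        w = 2.0 * np.pi * f

        for ripple in (0.02, 0.05, 0.10):   # allowed peak-to-peak ripple fraction
            C = P / (w * Vdc * (ripple * Vdc))
            print(f"{ripple:4.0%} ripple -> C = {C * 1e3:.1f} mF")

    Even a 10% ripple allowance calls for millifarads of electrolytic capacitance at 15 kW, which is why diverting the second-order harmonic energy into a small actively controlled storage stage raises power density.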

  20. An Artificial Neural Networks Method for Solving Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Alharbi, Abir

    2010-09-01

    While many analytical and numerical techniques already exist for solving PDEs, this paper introduces an approach using artificial neural networks. The approach consists of a technique developed by combining a standard numerical method, finite differences, with the Hopfield neural network; the method is denoted Hopfield-finite-difference (HFD). The architecture of the nets, the energy function, the updating equations, and the algorithms are developed for the method. The HFD method has been used successfully to approximate the solution of classical PDEs, such as the wave, heat, Poisson, and diffusion equations, and of a system of PDEs. The software Matlab is used to obtain the results in both tabular and graphical form. The results are similar in accuracy to those obtained by standard numerical methods. In terms of speed, the parallel nature of Hopfield nets makes the method easy to implement on fast parallel computers, while some numerical methods need extra effort for parallelization.
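
    For reference, the finite-difference half of the HFD combination is the standard explicit scheme sketched below for the heat equation; the Hopfield half (not shown) would map the resulting algebraic equations onto a network energy function to be minimized. The grid sizes and test problem are assumptions.

        import numpy as np

        # Explicit finite differences for u_t = alpha * u_xx on [0, 1],
        # u(0, t) = u(1, t) = 0, u(x, 0) = sin(pi x).
        alpha, nx, nt = 1.0, 51, 2000
        dx = 1.0 / (nx - 1)
        dt = 0.4 * dx**2 / alpha     # stability requires alpha dt / dx^2 <= 1/2
        r = alpha * dt / dx**2

        x = np.linspace(0.0, 1.0, nx)
        u = np.sin(np.pi * x)
        for _ in range(nt):
            u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])

        exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * alpha * nt * dt)
        print(np.max(np.abs(u - exact)))   # small discretization error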

  1. A New Moving Object Detection Method Based on Frame-difference and Background Subtraction

    NASA Astrophysics Data System (ADS)

    Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong

    2017-09-01

    Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. With the complex scenes of the real world, however, false detections, missed detections, and deficiencies resulting from cavities inside the body still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame difference with Gaussian mixture background subtraction is proposed in this paper. To make moving object detection more complete and accurate, image repair and morphological processing techniques, which are spatial compensations, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared with four other moving object detection methods, namely GMM, ViBe, frame difference, and a method from the literature, the proposed method improves the efficiency and accuracy of detection.
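
    The proposed combination can be approximated with standard OpenCV building blocks. The sketch below is a loose reconstruction under assumptions, not the authors' exact pipeline: it unions a frame-difference mask with a Gaussian-mixture background mask and fills cavities with morphological closing (standing in for the paper's image repair step); the input file name and thresholds are hypothetical.

        import cv2

        cap = cv2.VideoCapture("surveillance.mp4")   # hypothetical input video
        mog = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))

        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, diff = cv2.threshold(cv2.absdiff(gray, prev_gray), 25, 255,
                                    cv2.THRESH_BINARY)      # frame difference
            bg = mog.apply(frame)                            # GMM foreground
            mask = cv2.bitwise_or(diff, bg)      # union: slow and fast motion
            mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill holes
            prev_gray = gray
            cv2.imshow("moving objects", mask)
            if cv2.waitKey(1) == 27:             # Esc quits
                break
        cap.release()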

  2. Linnorm: improved statistical analysis for single cell RNA-seq expression data.

    PubMed

    Yip, Shun H; Wang, Panwen; Kocher, Jean-Pierre A; Sham, Pak Chung; Wang, Junwen

    2017-12-15

    Linnorm is a novel normalization and transformation method for the analysis of single-cell RNA sequencing (scRNA-seq) data. Linnorm is developed to remove technical noise and simultaneously preserve biological variation in scRNA-seq data, such that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Prediction and analysis of protein solubility using a novel scoring card method with dipeptide composition

    PubMed Central

    2012-01-01

    Background Existing methods for predicting protein solubility on overexpression in Escherichia coli advance performance by using ensemble classifiers, such as two-stage support vector machine (SVM) based classifiers, and a number of feature types, such as physicochemical properties and amino acid and dipeptide composition, accompanied by feature selection. It is desirable to develop a simple and easily interpretable method for predicting protein solubility, compared to existing complex SVM-based methods. Results This study proposes a novel scoring card method (SCM) that uses dipeptide composition only to estimate solubility scores of sequences for predicting protein solubility. SCM calculates the propensities of 400 individual dipeptides to be soluble using statistical discrimination between soluble and insoluble proteins of a training data set. Consequently, the propensity scores of all dipeptides are further optimized using an intelligent genetic algorithm. The solubility score of a sequence is determined by the weighted sum of all propensity scores and the dipeptide composition. To evaluate SCM by performance comparisons, four data sets with different sizes and degrees of variation in experimental conditions were used. The results show that the simple method SCM, with interpretable propensities of dipeptides, has promising performance compared with existing SVM-based ensemble methods that use a number of feature types. Furthermore, the propensities of dipeptides and the solubility scores of sequences can provide insights into protein solubility. For example, the analysis of dipeptide scores shows a high propensity of α-helix structures and of thermophilic proteins to be soluble. Conclusions The propensities of individual dipeptides to be soluble vary for proteins under different experimental conditions. For accurately predicting protein solubility using SCM, it is better to customize the score card of dipeptide propensities by using a training data set under the same specified experimental conditions. The proposed method SCM, with its solubility scores and dipeptide propensities, can be easily applied to protein function prediction problems in which dipeptide composition features play an important role. Availability The used datasets, source codes of SCM, and supplementary files are available at http://iclab.life.nctu.edu.tw/SCM/. PMID:23282103
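
    The scoring side of SCM is straightforward to express. The sketch below computes a dipeptide-composition-weighted solubility score; the propensity values here are hypothetical stand-ins, whereas SCM initializes them from soluble/insoluble training statistics and then tunes them with a genetic algorithm.

        from itertools import product

        AA = "ACDEFGHIKLMNPQRSTVWY"
        DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]   # 400 pairs

        def dipeptide_composition(seq):
            # Fraction of each of the 400 dipeptides in a protein sequence.
            n = len(seq) - 1
            comp = dict.fromkeys(DIPEPTIDES, 0.0)
            for i in range(n):
                comp[seq[i:i + 2]] += 1.0 / n
            return comp

        def solubility_score(seq, propensity):
            # SCM-style score: weighted sum of composition and propensities.
            comp = dipeptide_composition(seq)
            return sum(comp[dp] * propensity[dp] for dp in DIPEPTIDES)

        propensity = dict.fromkeys(DIPEPTIDES, 500.0)   # hypothetical card
        propensity["KE"] = 800.0   # e.g., a charged pair scored more soluble
        propensity["LL"] = 250.0   # e.g., a hydrophobic pair scored less soluble
        print(solubility_score("MKELLAKQE", propensity))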

  4. An eHealth Capabilities Framework for Graduates and Health Professionals: Mixed-Methods Study

    PubMed Central

    McGregor, Deborah; Keep, Melanie; Janssen, Anna; Spallek, Heiko; Quinn, Deleana; Jones, Aaron; Tseris, Emma; Yeung, Wilson; Togher, Leanne; Solman, Annette; Shaw, Tim

    2018-01-01

    Background The demand for an eHealth-ready and adaptable workforce is placing increasing pressure on universities to deliver eHealth education. At present, eHealth education is largely focused on components of eHealth rather than considering a curriculum-wide approach. Objective This study aimed to develop a framework that could be used to guide health curriculum design based on current evidence, and stakeholder perceptions of eHealth capabilities expected of tertiary health graduates. Methods A 3-phase, mixed-methods approach incorporated the results of a literature review, focus groups, and a Delphi process to develop a framework of eHealth capability statements. Results Participants (N=39) with expertise or experience in eHealth education, practice, or policy provided feedback on the proposed framework, and following the fourth iteration of this process, consensus was achieved. The final framework consisted of 4 higher-level capability statements that describe the learning outcomes expected of university graduates across the domains of (1) digital health technologies, systems, and policies; (2) clinical practice; (3) data analysis and knowledge creation; and (4) technology implementation and codesign. Across the capability statements are 40 performance cues that provide examples of how these capabilities might be demonstrated. Conclusions The results of this study inform a cross-faculty eHealth curriculum that aligns with workforce expectations. There is a need for educational curriculum to reinforce existing eHealth capabilities, adapt existing capabilities to make them transferable to novel eHealth contexts, and introduce new learning opportunities for interactions with technologies within education and practice encounters. As such, the capability framework developed may assist in the application of eHealth by emerging and existing health care professionals. Future research needs to explore the potential for integration of findings into workforce development programs. PMID:29764794

  5. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery.

    PubMed

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-07-19

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits the detection of small buildings, and the use of ground points in generating the building mask prevents the detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; however, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively few empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics.

  6. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery

    PubMed Central

    Siddiqui, Fasahat Ullah; Teng, Shyh Wei; Awrangjeb, Mohammad; Lu, Guojun

    2016-01-01

    Existing automatic building extraction methods are not effective in extracting buildings which are small in size and have transparent roofs. The application of a large area threshold prohibits the detection of small buildings, and the use of ground points in generating the building mask prevents the detection of transparent buildings. In addition, the existing methods use numerous parameters to extract buildings in complex environments, e.g., hilly areas and high vegetation; however, the empirical tuning of a large number of parameters reduces the robustness of building extraction methods. This paper proposes a novel Gradient-based Building Extraction (GBE) method to address these limitations. The proposed method transforms the Light Detection And Ranging (LiDAR) height information into an intensity image without interpolation of point heights and then analyses the gradient information in the image. Generally, building roof planes have a constant height change along the slope of a roof plane, whereas trees have a random height change. With such an analysis, buildings of a greater range of sizes, with transparent or opaque roofs, can be extracted. In addition, a local colour matching approach is introduced as a post-processing stage to eliminate trees. This stage of our proposed method does not require any manual setting, and all parameters are set automatically from the data. The other post-processing stages, including variance, point density and shadow elimination, are also applied to verify the extracted buildings, where comparatively few empirically set parameters are used. The performance of the proposed GBE method is evaluated on two benchmark data sets using object- and pixel-based metrics (completeness, correctness and quality). Our experimental results show the effectiveness of the proposed method in eliminating trees, extracting buildings of all sizes, and extracting buildings with and without transparent roofs. When compared with current state-of-the-art building extraction methods, the proposed method outperforms the existing methods in various evaluation metrics. PMID:27447631

  7. Color normalization of histology slides using graph regularized sparse NMF

    NASA Astrophysics Data System (ADS)

    Sha, Lingdao; Schonfeld, Dan; Sethi, Amit

    2017-03-01

    Computer-based automatic medical image processing and quantification are becoming popular in digital pathology. However, the preparation of histology slides can vary widely due to differences in staining equipment, procedures, and reagents, which can reduce the accuracy of algorithms that analyze their color and texture information. To reduce the unwanted color variations, various supervised and unsupervised color normalization methods have been proposed. Compared with supervised color normalization methods, unsupervised methods have the advantages of time and cost efficiency and universal applicability. Most of the unsupervised color normalization methods for histology are based on stain separation. Based on the fact that stain concentration cannot be negative and different parts of the tissue absorb different stains, nonnegative matrix factorization (NMF), and in particular its sparse version (SNMF), are good candidates for stain separation. However, most of the existing unsupervised color normalization methods, such as PCA, ICA, NMF and SNMF, fail to consider important information about the sparse manifolds that the pixels occupy, which can result in loss of texture information during color normalization. Manifold learning methods like the graph Laplacian have proven to be very effective in interpreting high-dimensional data. In this paper, we propose a novel unsupervised stain separation method called graph regularized sparse nonnegative matrix factorization (GSNMF). By considering the sparse prior of stain concentration together with manifold information from the high-dimensional image data, our method shows better performance in stain color deconvolution than existing unsupervised color deconvolution methods, especially in keeping connected texture information. To utilize the texture information, we construct a nearest neighbor graph between pixels within a spatial area of an image based on their distances, using a heat kernel in lαβ space. The representation of a pixel in the stain density space is constrained to follow the feature distances of the pixel to the pixels in the neighborhood graph. Using the color matrix transfer method with the stain concentrations found by our GSNMF method, the color normalization performance was also better than that of existing methods.
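
    The stain-separation backbone can be sketched with ordinary NMF on optical densities. The example below is not GSNMF: scikit-learn's NMF offers neither the sparsity structure nor the graph-Laplacian regularizer of the paper, the image is synthetic, and the two stain directions used to build it are the standard hematoxylin/eosin optical-density values from the literature.

        import numpy as np
        from sklearn.decomposition import NMF

        def stain_separate(rgb, n_stains=2):
            # Beer-Lambert optical density, then factor OD = conc @ stains.
            od = -np.log((rgb.astype(float) + 1.0) / 256.0)
            X = od.reshape(-1, 3)                          # pixels x channels
            model = NMF(n_components=n_stains, init="nndsvd", max_iter=500)
            conc = model.fit_transform(X)                  # concentrations
            return (conc.reshape(rgb.shape[0], rgb.shape[1], n_stains),
                    model.components_)                     # stain OD vectors

        rng = np.random.default_rng(0)
        true = np.array([[0.65, 0.70, 0.29],   # hematoxylin OD direction
                         [0.07, 0.99, 0.11]])  # eosin OD direction
        c = rng.random((64 * 64, 2))
        img = np.clip(255.0 * np.exp(-(c @ true)), 0, 255).reshape(64, 64, 3)
        conc, stains = stain_separate(img.astype(np.uint8))
        print((stains / np.linalg.norm(stains, axis=1, keepdims=True)).round(2))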

  8. Qualitative Maintenance Experience Handbook

    DTIC Science & Technology

    1975-10-20

    differences in type and location of actuators results. DESIRABLE FEATURES: 1. The simpler assist methods are usually easier to get to and are smaller...the wheels differ somewhat in method of removal; there exist no particular features that would qualify as "undesirable." 3. The AV-8 requires special...different airplanes, this survey identifies desirable and undesirable features evident in the various installations of the same component. In essence

  9. Offline signature verification using convolution Siamese network

    NASA Astrophysics Data System (ADS)

    Xing, Zi-Jian; Yin, Fei; Wu, Yi-Chao; Liu, Cheng-Lin

    2018-04-01

    This paper presents an offline signature verification approach using a convolutional Siamese neural network. Unlike existing methods, which treat feature extraction and metric learning as two independent stages, we adopt a deep-learning based framework that combines the two stages and can be trained end-to-end. The experimental results on two public offline databases (GPDSsynthetic and CEDAR) demonstrate the superiority of our method on the offline signature verification problem.

  10. Shape-from-focus by tensor voting.

    PubMed

    Hariharan, R; Rajagopalan, A N

    2012-07-01

    In this correspondence, we address the task of recovering shape-from-focus (SFF) as a perceptual organization problem in 3-D. Using tensor voting, depth hypotheses from different focus operators are validated based on their likelihood to be part of a coherent 3-D surface, thereby exploiting scene geometry and focus information to generate reliable depth estimates. The proposed method is fast and yields significantly better results compared with existing SFF methods.

  11. Simultaneous Nonrigid Registration, Segmentation, and Tumor Detection in MRI Guided Cervical Cancer Radiation Therapy

    PubMed Central

    Lu, Chao; Chelikani, Sudhakar; Jaffray, David A.; Milosevic, Michael F.; Staib, Lawrence H.; Duncan, James S.

    2013-01-01

    External beam radiation therapy (EBRT) for the treatment of cancer enables accurate placement of radiation dose on the cancerous region. However, the deformation of soft tissue during the course of treatment, such as in cervical cancer, presents significant challenges for the delineation of the target volume and other structures of interest. Furthermore, the presence and regression of pathologies such as tumors may violate registration constraints and cause registration errors. In this paper, automatic segmentation, nonrigid registration and tumor detection in cervical magnetic resonance (MR) data are addressed simultaneously using a unified Bayesian framework. The proposed novel method can generate a tumor probability map while progressively identifying the boundary of an organ of interest based on the achieved nonrigid transformation. The method is able to handle the challenges of significant tumor regression and its effect on surrounding tissues. The new method was compared to various currently existing algorithms on a set of 36 MR data from six patients, each patient has six T2-weighted MR cervical images. The results show that the proposed approach achieves an accuracy comparable to manual segmentation and it significantly outperforms the existing registration algorithms. In addition, the tumor detection result generated by the proposed method has a high agreement with manual delineation by a qualified clinician. PMID:22328178

  12. A Multi-Objective Partition Method for Marine Sensor Networks Based on Degree of Event Correlation.

    PubMed

    Huang, Dongmei; Xu, Chenyixuan; Zhao, Danfeng; Song, Wei; He, Qi

    2017-09-21

    Existing marine sensor networks acquire data from sea areas that are geographically divided, and store the data independently in their affiliated sea area data centers. In the case of marine events across multiple sea areas, the current network structure needs to retrieve data from multiple data centers, and thus severely affects real-time decision making. In this study, in order to provide a fast data retrieval service for a marine sensor network, we use all the marine sensors as the vertices, establish the edge based on marine events, and abstract the marine sensor network as a graph. Then, we construct a multi-objective balanced partition method to partition the abstract graph into multiple regions and store them in the cloud computing platform. This method effectively increases the correlation of the sensors and decreases the retrieval cost. On this basis, an incremental optimization strategy is designed to dynamically optimize existing partitions when new sensors are added into the network. Experimental results show that the proposed method can achieve the optimal layout for distributed storage in the process of disaster data retrieval in the China Sea area, and effectively optimize the result of partitions when new buoys are deployed, which eventually will provide efficient data access service for marine events.
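
    As a rough illustration of the partitioning idea, the sketch below greedily assigns each sensor (vertex) to the region that already holds most of its event-correlated neighbours, while capping region sizes for balance. The simple balancing rule is an assumption; it stands in for the paper's multi-objective formulation and incremental optimization strategy.

        # Greedy event-correlation partition of a sensor graph (networkx).
        import networkx as nx

        def greedy_partition(G, k):
            """Assign each vertex of G to one of k size-capped regions."""
            cap = (G.number_of_nodes() + k - 1) // k
            part = {}
            for v in sorted(G, key=G.degree, reverse=True):  # place busy sensors first
                scores = [
                    (sum(G[v][u].get("weight", 1) for u in G[v] if part.get(u) == r), r)
                    for r in range(k)
                    if sum(1 for p in part.values() if p == r) < cap
                ]
                part[v] = max(scores)[1]  # region with the strongest correlation
            return part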

  13. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  14. Color transfer between high-dynamic-range images

    NASA Astrophysics Data System (ADS)

    Hristova, Hristina; Cozot, Rémi; Le Meur, Olivier; Bouatouch, Kadi

    2015-09-01

    Color transfer methods alter the look of a source image with regard to a reference image. So far, the proposed color transfer methods have been limited to low-dynamic-range (LDR) images. Unlike LDR images, which are display-dependent, high-dynamic-range (HDR) images contain real physical values of world luminance and are able to capture the high luminance variations and finest details of real-world scenes. Therefore, a strong discrepancy exists between the two types of images. In this paper, we bridge the gap between the color transfer domain and HDR imagery by introducing HDR extensions to LDR color transfer methods. We tackle the main issues of applying a color transfer between two HDR images. First, to address the nature of light and color distributions in the context of HDR imagery, we carry out modifications of traditional color spaces. Furthermore, we ensure high precision in the quantization of the dynamic range for histogram computations. As image clustering (based on light and colors) proved to be an important aspect of color transfer, we analyze it and adapt it to the HDR domain. Our framework has been applied to several state-of-the-art color transfer methods. Qualitative experiments have shown that results obtained with the proposed adaptation approach exhibit fewer artifacts and are visually more pleasing than results obtained when straightforwardly applying existing color transfer methods to HDR images.
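
    As one concrete building block, here is a sketch of a Reinhard-style statistics transfer carried out in the log domain, a simple way to respect the wide luminance range of HDR data. It illustrates the general adaptation idea only; the paper's clustered, histogram-based pipeline is not reproduced.

        # Per-channel mean/std matching in log space for linear-radiance HDR images.
        import numpy as np

        def log_stat_transfer(source, reference, eps=1e-6):
            """source, reference: float arrays (H, W, 3) of linear radiance."""
            s, r = np.log(source + eps), np.log(reference + eps)
            out = np.empty_like(s)
            for c in range(3):
                z = (s[..., c] - s[..., c].mean()) / (s[..., c].std() + eps)
                out[..., c] = z * r[..., c].std() + r[..., c].mean()
            return np.exp(out) - eps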

  15. Does post-exercise massage treatment reduce delayed onset muscle soreness? A systematic review

    PubMed Central

    Ernst, E.

    1998-01-01

    BACKGROUND: Delayed onset muscle soreness (DOMS) is a frequent problem after unaccustomed exercise. No universally accepted treatment exists. Massage therapy is often recommended for this condition but uncertainty exists about its effectiveness. AIM: To determine whether post-exercise massage alleviates the symptoms of DOMS after a bout of strenuous exercise. METHOD: Various computerised literature searches were carried out and located seven controlled trials. RESULTS: Most of the trials were burdened with serious methodological flaws, and their results are far from uniform. However, most suggest that post-exercise massage may alleviate symptoms of DOMS. CONCLUSIONS: Massage therapy may be a promising treatment for DOMS. Definitive studies are warranted. PMID:9773168

  16. Making the transition to workload-based staffing: using the Workload Indicators of Staffing Need method in Uganda.

    PubMed

    Namaganda, Grace; Oketcho, Vincent; Maniple, Everd; Viadro, Claire

    2015-08-31

    Uganda's health workforce is characterized by shortages and inequitable distribution of qualified health workers. To ascertain staffing levels, Uganda uses fixed government-approved norms determined by facility type. This approach cannot distinguish between facilities of the same type that have different staffing needs. The Workload Indicators of Staffing Need (WISN) method uses workload to determine the number and type of staff required in a given facility. The national WISN assessment sought to demonstrate the limitations of the existing norms and generate evidence to influence health unit staffing and staff deployment for efficient utilization of available scarce human resources. A national WISN assessment (September 2012) used purposive sampling to select 136 public health facilities in 33/112 districts. The study examined staffing requirements for five cadres (nursing assistants, nurses, midwives, clinical officers, doctors) at health centres II (n = 59), III (n = 53) and IV (n = 13) and hospitals (n = 11). Using health management information system workload data (1 July 2010-30 June 2011), the study compared current and required staff, assessed workload pressure and evaluated the adequacy of the existing staffing norms. By the WISN method, all three types of health centres had fewer nurses (42-70%) and midwives (53-67%) than required and consequently exhibited high workload pressure (30-58%) for those cadres. Health centres IV and hospitals lacked doctors (39-42%) but were adequately staffed with clinical officers. All facilities displayed overstaffing of nursing assistants. For all cadres at health centres III and IV other than nursing assistants, the fixed norms or existing staffing or both fell short of the WISN staffing requirements, with, for example, only half as many nurses and midwives as required. The WISN results demonstrate the inadequacies of existing staffing norms, particularly for health centres III and IV. The results provide an evidence base to reshape policy, adopt workload-based norms, review scopes of practice and target human resource investments. In the near term, the government could redistribute existing health workers to improve staffing equity in line with the WISN results. In the longer term, revision of staffing norms and investments is needed to reflect actual workloads and to ensure provision of quality services at all levels.

  17. CNN-BLPred: a Convolutional neural network based predictor for β-Lactamases (BL) and their classes.

    PubMed

    White, Clarence; Ismail, Hamid D; Saigo, Hiroto; Kc, Dukka B

    2017-12-28

    The β-Lactamase (BL) enzyme family is an important class of enzymes that plays a key role in bacterial resistance to antibiotics. As the number of newly identified BL enzymes is increasing daily, it is imperative to develop a computational tool to classify newly identified BL enzymes into one of their classes. There are two types of classification of BL enzymes: molecular classification and functional classification. Existing computational methods only address molecular classification, and the performance of these existing methods is unsatisfactory. We addressed this unsatisfactory performance by implementing a deep learning approach called the Convolutional Neural Network (CNN). We developed CNN-BLPred, an approach for the classification of BL proteins. CNN-BLPred uses Gradient Boosted Feature Selection (GBFS) in order to select the ideal feature set for each BL classification. Based on rigorous benchmarking of CNN-BLPred using both leave-one-out cross-validation and independent test sets, CNN-BLPred performed better than the other existing algorithms. Compared with other architectures of CNN, Recurrent Neural Network, and Random Forest, the simple CNN architecture with only one convolutional layer performs best. After feature extraction, we were able to remove ~95% of the 10,912 features using Gradient Boosted Trees. During 10-fold cross validation, we increased the accuracy of the classic BL predictions by 7%. We also increased the accuracy of Class A, Class B, Class C, and Class D performance by an average of 25.64%. The independent test results followed a similar trend. We implemented a deep learning algorithm known as the Convolutional Neural Network (CNN) to develop a classifier for BL classification. Combined with feature selection on an exhaustive feature set and using balancing methods such as Random Oversampling (ROS), Random Undersampling (RUS) and the Synthetic Minority Oversampling Technique (SMOTE), CNN-BLPred performs significantly better than existing algorithms for BL classification.
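
    A hedged sketch of the kind of single-convolutional-layer classifier the abstract reports performing best, operating on a one-hot encoded protein sequence; the alphabet size, filter count, kernel width and class count are placeholders, not the published configuration.

        # One-conv-layer sequence classifier (PyTorch), in the spirit of CNN-BLPred.
        import torch
        import torch.nn as nn

        class BLClassifier(nn.Module):
            def __init__(self, n_aa=20, n_filters=64, n_classes=5):
                super().__init__()
                self.conv = nn.Conv1d(n_aa, n_filters, kernel_size=9, padding=4)
                self.pool = nn.AdaptiveMaxPool1d(1)      # global max pooling
                self.out = nn.Linear(n_filters, n_classes)

            def forward(self, x):                        # x: (batch, n_aa, seq_len)
                h = torch.relu(self.conv(x))
                return self.out(self.pool(h).squeeze(-1))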

  18. A new procedure for calculating contact stresses in gear teeth

    NASA Technical Reports Server (NTRS)

    Somprakit, Paisan; Huston, Ronald L.

    1991-01-01

    A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. It is an iterative numerical procedure based upon fundamental solutions from the theory of elasticity. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.

  19. Why are Formal Methods Not Used More Widely?

    NASA Technical Reports Server (NTRS)

    Knight, John C.; DeJong, Colleen L.; Gibble, Matthew S.; Nakano, Luis G.

    1997-01-01

    Despite extensive development over many years and significant demonstrated benefits, formal methods remain poorly accepted by industrial practitioners. Many reasons have been suggested for this situation, such as claims that they extend the development cycle, that they require difficult mathematics, that inadequate tools exist, and that they are incompatible with other software packages. There is little empirical evidence that any of these reasons is valid. The research presented here addresses the question of why formal methods are not used more widely. The approach used was to develop a formal specification for a safety-critical application using several specification notations and to assess the results in a comprehensive evaluation framework. The results of the experiment suggest that there remain many impediments to the routine use of formal methods.

  20. Adaptive identification of vessel's added moments of inertia with program motion

    NASA Astrophysics Data System (ADS)

    Alyshev, A. S.; Melnikov, V. G.

    2018-05-01

    In this paper, we propose a new experimental method for determining the moments of inertia of a ship model. The paper gives a brief review of existing methods and describes the proposed method, the experimental stand, the test procedures, the calculation formulas and the experimental results. The proposed method is based on the energy approach with special program motions. The ship model is fixed in a special rack consisting of a torsion element and a set of additional servo drives with flywheels (reaction wheels), which correct the motion. The servo drives with an adaptive controller provide the symmetry of the motion, which is necessary for the proposed identification procedure. The effectiveness of the proposed approach is confirmed by experimental results.

  1. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
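
    A toy illustration of the final fusion step, assuming already-discretized operators: a model-equation residual and a weighted data residual are minimized together, so measurements with larger weights (more accurate data) pull the solution more strongly. The matrices are placeholders, not the paper's FEM discretization.

        # Solve min_u ||A u - f||^2 + (H u - d)^T W (H u - d) in closed form.
        import numpy as np

        def wls_fusion(A, f, H, d, w):
            """A: model operator, H: observation operator, w: per-datum weights."""
            W = np.diag(w)
            lhs = A.T @ A + H.T @ W @ H
            rhs = A.T @ f + H.T @ W @ d
            return np.linalg.solve(lhs, rhs)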

  2. The Impact of Symptoms and Impairments on Overall Health in US National Health Data

    PubMed Central

    Stewart, Susan T.; Woodward, Rebecca M.; Rosen, Allison B.; Cutler, David M.

    2015-01-01

    Objective To assess the effects on overall self-rated health of the broad range of symptoms and impairments that are routinely asked about in national surveys. Data We use data from adults in the nationally representative Medical Expenditure Panel Survey (MEPS) 2002 with validation in an independent sample from MEPS 2000. Methods Regression analysis is used to relate impairments and symptoms to a 100-point self-rating of general health status. The effect of each impairment and symptom on health-related quality of life (HRQOL) is estimated from regression coefficients, accounting for interactions between them. Results Impairments and symptoms most strongly associated with overall health include pain, self-care limitations, and having little or no energy. The most prevalent are moderate pain, severe anxiety, moderate depressive symptoms, and low energy. Effects are stable across different waves of MEPS, and questions cover a broader range of impairments and symptoms than existing health measurement instruments. Conclusions This method makes use of the rich detail on impairments and symptoms in existing national data, quantifying their independent effects on overall health. Given the ongoing availability of these data and the shortcomings of traditional utility methods, it would be valuable to compare existing HRQOL measures to other methods, such as the one presented herein, for use in tracking population health over time. PMID:18725850

  3. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently, a few researchers have attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
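
    A minimal PyTorch sketch of the idea as described: the low-dimensional latent inputs and the network weights are optimized jointly against only the observed entries, and the missing entries are then read off the network output. The sizes, optimizer and architecture are assumptions.

        # Deep matrix factorization: fit observed entries, read off the rest.
        import torch
        import torch.nn as nn

        def dmf_complete(X, mask, latent_dim=10, steps=2000, lr=1e-2):
            """X: (n, m) float tensor; mask: 1 where X is observed, else 0."""
            n, m = X.shape
            Z = torch.randn(n, latent_dim, requires_grad=True)   # latent inputs
            net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(), nn.Linear(64, m))
            opt = torch.optim.Adam([Z, *net.parameters()], lr=lr)
            for _ in range(steps):
                opt.zero_grad()
                loss = ((net(Z) - X).pow(2) * mask).sum() / mask.sum()
                loss.backward()
                opt.step()
            return net(Z).detach()                               # completed matrix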

  4. TESTING OF INDOOR RADON REDUCTION TECHNIQUES IN 19 MARYLAND HOUSES

    EPA Science Inventory

    The report gives results of testing of indoor radon reduction techniques in 19 existing houses in Maryland. The focus was on passive measures: various passive soil depressurization methods, where natural wind and temperature effects are utilized to develop suction in the system; ...

  5. Military Support for Youth Development: An Exploratory Analysis

    DTIC Science & Technology

    1994-01-01

    This report assesses existing evidence about the potential of military service and training as methods to prepare disadvantaged youth for productive...whether veterans in general receive a positive or negative return to military service; for disadvantaged veterans, it suggests little if any effect. Results

  6. Two phase modeling of nanofluid flow in existence of melting heat transfer by means of HAM

    NASA Astrophysics Data System (ADS)

    Sheikholeslami, M.; Jafaryar, M.; Bateni, K.; Ganji, D. D.

    2018-02-01

    In this article, the Buongiorno model is applied to investigate nanofluid flow over a stretching plate in the presence of a magnetic field. Radiation and melting heat transfer are taken into account. The homotopy analysis method (HAM) is selected to solve the ODEs obtained from the similarity transformation. The roles of Brownian motion, the thermophoretic parameter, the Hartmann number, the porosity parameter, the melting parameter and the Eckert number are presented graphically. Results indicate that nanofluid velocity and concentration increase with a rise of the melting parameter. The Nusselt number decreases with an increase of the porosity and melting parameters.

  7. The potential of genetic algorithms for conceptual design of rotor systems

    NASA Technical Reports Server (NTRS)

    Crossley, William A.; Wells, Valana L.; Laananen, David H.

    1993-01-01

    The capabilities of genetic algorithms as a non-calculus based, global search method make them potentially useful in the conceptual design of rotor systems. Coupling reasonably simple analysis tools to the genetic algorithm was accomplished, and the resulting program was used to generate designs for rotor systems to match requirements similar to those of both an existing helicopter and a proposed helicopter design. This provides a comparison with the existing design and also provides insight into the potential of genetic algorithms in design of new rotors.
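
    A generic sketch of the kind of genetic-algorithm loop such a program couples to its analysis tools: candidate rotors are encoded as bounded parameter vectors and scored by an analysis routine, here the hypothetical evaluate_rotor treated as a cost to minimize. The selection, crossover and mutation choices are illustrative.

        # Real-coded genetic algorithm over bounded design vectors (numpy).
        import numpy as np
        rng = np.random.default_rng(0)

        def ga(evaluate_rotor, lo, hi, pop=40, gens=100, mut=0.1):
            X = rng.uniform(lo, hi, size=(pop, len(lo)))
            for _ in range(gens):
                cost = np.array([evaluate_rotor(x) for x in X])
                parents = X[np.argsort(cost)[: pop // 2]]        # truncation selection
                a = parents[rng.integers(0, len(parents), pop)]
                b = parents[rng.integers(0, len(parents), pop)]
                alpha = rng.random((pop, len(lo)))               # blend crossover
                X = alpha * a + (1 - alpha) * b
                X += mut * (hi - lo) * rng.standard_normal(X.shape)  # mutation
                X = np.clip(X, lo, hi)
            return X[np.argmin([evaluate_rotor(x) for x in X])]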

  8. Boundary cooled rocket engines for space storable propellants

    NASA Technical Reports Server (NTRS)

    Kesselring, R. C.; Mcfarland, B. L.; Knight, R. M.; Gurnitz, R. N.

    1972-01-01

    An evaluation of an existing analytical heat transfer model was made to extend the technology of boundary film/conduction cooled rocket thrust chambers to the space-storable propellant combination oxygen difluoride/diborane. Critical design parameters were identified and their importance determined. Test reduction methods were developed to enable data obtained from short-duration hot firings with a thin-walled (calorimeter) chamber to be used to quantitatively evaluate the heat-absorbing capability of the vapor film. The modification of the existing like-doublet injector was based on the results obtained from the calorimeter firings.

  9. Ergodic channel capacity of spatial correlated multiple-input multiple-output free space optical links using multipulse pulse-position modulation

    NASA Astrophysics Data System (ADS)

    Wang, Huiqin; Wang, Xue; Cao, Minghua

    2017-02-01

    Spatial correlation exists extensively in multiple-input multiple-output (MIMO) free space optical (FSO) communication systems due to channel fading and antenna space limitations. Wilkinson's method was utilized to investigate the impact of spatial correlation on a MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that the existence of spatial correlation reduces the ergodic channel capacity, and that reception diversity is more competent to resist this kind of performance degradation.

  10. Microwave imaging of spinning object using orbital angular momentum

    NASA Astrophysics Data System (ADS)

    Liu, Kang; Li, Xiang; Gao, Yue; Wang, Hongqiang; Cheng, Yongqiang

    2017-09-01

    The linear Doppler shift used for the detection of a spinning object becomes significantly weakened when the line of sight (LOS) is perpendicular to the object, which results in detection failure. In this paper, a new detection and imaging technique for spinning objects is developed. The rotational Doppler phenomenon is observed by using microwaves carrying orbital angular momentum (OAM). To converge the radiation energy on the area where objects might exist, a generation method for OAM beams is proposed based on the frequency diversity principle, and the imaging model is derived accordingly. The detection method for the rotational Doppler shift and the imaging approach for the azimuthal profiles are proposed, and both are verified by proof-of-concept experiments. Simulation and experimental results demonstrate that OAM beams can still be used to obtain the azimuthal profiles of spinning objects even when the LOS is perpendicular to the object. This work remedies an insufficiency in existing microwave sensing technology and offers a new solution to the object identification problem.

  11. Classification of high dimensional multispectral image data

    NASA Technical Reports Server (NTRS)

    Hoffbeck, Joseph P.; Landgrebe, David A.

    1993-01-01

    A method for classifying high dimensional remote sensing data is described. The technique uses a radiometric adjustment to allow a human operator to identify and label training pixels by visually comparing the remotely sensed spectra to laboratory reflectance spectra. Training pixels for materials without obvious spectral features are identified by traditional means. Features which are effective for discriminating between the classes are then derived from the original radiance data and used to classify the scene. This technique is applied to Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data taken over Cuprite, Nevada in 1992, and the results are compared to an existing geologic map. The technique performed well despite noisy data and the fact that some of the materials in the scene lack absorption features. No adjustment for the atmosphere or other scene variables was made to the data classified. While the experimental results compare favorably with an existing geologic map, the primary purpose of this research was to demonstrate the classification method, as compared to the geology of the Cuprite scene.

  12. Sketch Matching on Topology Product Graph.

    PubMed

    Liang, Shuang; Luo, Jun; Liu, Wenyin; Wei, Yichen

    2015-08-01

    Sketch matching is a fundamental problem in sketch-based interfaces. After years of study, it remains challenging when there exist large irregularities and variations in the hand-drawn sketch shapes. While most existing works exploit topology relations and graph representations for this problem, they are usually limited by coarse topology exploration and heuristic (thus suboptimal) similarity metrics between graphs. We present a new sketch matching method with two novel contributions. We introduce a comprehensive definition of topology relations, which results in a rich and informative graph representation of sketches. For graph matching, we propose the topology product graph, which retains the full correspondence for matching two graphs. Based on it, we derive an intuitive sketch similarity metric whose exact solution is easy to compute. In addition, the graph representation and new metric naturally support partial matching, an important practical problem that has received less attention in the literature. Extensive experimental results on a real challenging dataset and the superior performance of our method show that it outperforms the state-of-the-art.

  13. Impact of diet on the design of waste processors in CELSS

    NASA Technical Reports Server (NTRS)

    Waleh, Ahmad; Kanevsky, Valery; Nguyen, Thoi K.; Upadhye, Ravi; Wydeven, Theodore

    1991-01-01

    The preliminary results of a design analysis for a waste processor which employs existing technologies and takes into account the constraints of the human diet are presented. The impact of diet is determined by using a model and an algorithm developed for the control and management of diet in a Controlled Ecological Life Support System (CELSS). A material and energy balance model for thermal oxidation of waste is developed which is consistent with both physical/chemical methods of incineration and supercritical water oxidation. The two models yield quantitative analyses of the diet and waste streams and the specific design parameters for waste processors, respectively. The results demonstrate that existing technologies can meet the demands of waste processing, but the choice and design of the processors or processing methods will be sensitive to the constraints of diet. The numerical examples are chosen to display the nature and extent of the gap in the available experimental information about CELSS requirements.

  14. Mathematical modeling of the aerodynamics of high-angle-of-attack maneuvers

    NASA Technical Reports Server (NTRS)

    Schiff, L. B.; Tobak, M.; Malcolm, G. N.

    1980-01-01

    This paper is a review of the current state of aerodynamic mathematical modeling for aircraft motions at high angles of attack. The mathematical model serves to define a set of characteristic motions from whose known aerodynamic responses the aerodynamic response to an arbitrary high angle-of-attack flight maneuver can be predicted. Means are explored of obtaining stability parameter information in terms of the characteristic motions, whether by wind-tunnel experiments, computational methods, or by parameter-identification methods applied to flight-test data. A rationale is presented for selecting and verifying the aerodynamic mathematical model at the lowest necessary level of complexity. Experimental results describing the wing-rock phenomenon are shown to be accommodated within the most recent mathematical model by admitting the existence of aerodynamic hysteresis in the steady-state variation of the rolling moment with roll angle. Interpretation of the experimental results in terms of bifurcation theory reveals the general conditions under which aerodynamic hysteresis must exist.

  15. [How can the impact of Health Technology Assessment (HTA) in the Austrian healthcare system be assessed? Design of a conceptual framework].

    PubMed

    Schumacher, I; Zechmeister, I

    2012-04-01

    In Austria, research in Health Technology Assessment (HTA) has been conducted since the 1990s. HTA research aims at supporting an adequate and efficient use of health care resources in order to sustain a publicly financed and solidary health care system. Ultimately, HTA research should result in better health of the population. Research results should provide independent information for decision makers. To legitimize further research resources, prioritize future HTA research and guarantee the value of future research, HTA research itself needs to undergo evaluation. The aim of the study is to design a conceptual framework for evaluating the impact of HTA research in Austria on the basis of the existing literature. An existing review presenting methods and concepts for evaluating HTA impact was updated by a systematic search covering the literature from 2004 to January 2010. Results were analysed with regard to 4 categories: definition of the term impact, target groups and system levels, operationalisation of indicators, and evaluation methods. Overall, 19 publications were included. Referring to the 4 categories, an explanation of impact has to take into account HTA's multidisciplinary setting and needs a context-related definition. Target groups, system levels, indicators and methods depend on the impact defined. Studies investigated direct and indirect impact and focused on different target groups such as physicians, nurses and decision makers on the micro and meso levels, as well as politicians and reimbursement institutions on the macro level. Except for one reference, all studies applied already known and mostly qualitative methods for measuring the impact of HTA research. Thus, an appropriate pool of instruments seems to be available. There is a lack of information about the validity of the applied methods and indicators. By adapting adequate methods and concepts, a conceptual framework for the Austrian HTA impact evaluation has been designed. The paper presents an overview of existing methods for the evaluation of HTA research. This has been used to identify useful approaches for measuring HTA impact in Austria. By providing a context-sensitive framework for impact evaluation in Austria, Austrian HTA research contributes to the international trend of impact evaluation. © Georg Thieme Verlag KG Stuttgart · New York.

  16. Use of Information Sources by Cancer Patients: Results of a Systematic Review of the Research Literature

    ERIC Educational Resources Information Center

    Ankem, Kalyani

    2006-01-01

    Objectives: Existing findings on cancer patients' use of information sources were synthesized to (1) rank the most and least used information sources and the most helpful information sources and to (2) find the impact of patient demographics and situations on use of information sources. Methods: To synthesize results found across studies, a…

  17. A Phenomenological Study on the Lived Experience of First and Second Year Teachers in Standards-Based Grading Districts

    ERIC Educational Resources Information Center

    Battistone, William A., Jr.

    2017-01-01

    Problem: There is an existing cycle of questionable grading practices at the K-12 level. As a result, districts continue to search for innovative methods of evaluating and reporting student progress. One result of this effort has been the adoption of a standards-based grading approach. Research concerning standards-based grading implementation has…

  18. A Practical, Robust Methodology for Acquiring New Observation Data Using Computationally Expensive Groundwater Models

    NASA Astrophysics Data System (ADS)

    Siade, Adam J.; Hall, Joel; Karelse, Robert N.

    2017-11-01

    Regional groundwater flow models play an important role in decision making regarding water resources; however, the uncertainty embedded in model parameters and model assumptions can significantly hinder the reliability of model predictions. One way to reduce this uncertainty is to collect new observation data from the field. However, determining where and when to obtain such data is not straightforward. A number of data-worth and experimental design strategies have been developed for this purpose. However, these studies often ignore issues related to real-world groundwater models such as computational expense, existing observation data, and high parameter dimension. In this study, we propose a methodology, based on existing methods and software, to efficiently conduct such analyses for large-scale, complex regional groundwater flow systems for which there is a wealth of available observation data. The method utilizes the well-established d-optimality criterion and the minimax criterion for robust sampling strategies. The so-called Null-Space Monte Carlo method is used to reduce the computational burden associated with uncertainty quantification, and a heuristic methodology based on the concept of the greedy algorithm is proposed for developing robust designs with subsets of the posterior parameter samples. The proposed methodology is tested on a synthetic regional groundwater model, and subsequently applied to an existing, complex, regional groundwater system in the Perth region of Western Australia. The results indicate that robust designs can be obtained efficiently, within reasonable computational resources, for making regional decisions regarding groundwater level sampling.
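
    A schematic of the greedy minimax step, assuming a callable that returns the predictive variance of interest for a candidate design under one posterior parameter sample; at each step the candidate whose worst-case variance across samples is smallest is added. All names are hypothetical.

        # Greedy robust (minimax) selection of observation locations.
        def greedy_design(candidates, variance_after, samples, n_pick):
            """variance_after(design, sample) -> predictive variance."""
            design = []
            for _ in range(n_pick):
                remaining = [c for c in candidates if c not in design]
                best = min(remaining,
                           key=lambda c: max(variance_after(design + [c], s)
                                             for s in samples))
                design.append(best)
            return design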

  19. Ionization potential for the 1s^2 2s^2 of berylliumlike systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chung, K.T.; Zhu, X.W.; Wang, Z.W.

    1993-05-01

    The 1s^2 2s^2 ground-state energies of beryllium-like systems are calculated with a full-core plus correlation method. A partial-saturation-of-basis-functions method is used to extrapolate a better nonrelativistic energy. The 1s^2 2s^2 ionization potentials are calculated by including the relativistic corrections, mass polarization and QED effects. These results are compared with the existing theoretical and experimental data in the literature. The predicted BeI, CIII, NIV, and OV ionization potentials are within the quoted experimental error. Our result for FVI, 1267606.7 cm^-1, supports the recent experiment of Engstrom, 1267606(2) cm^-1, over the datum in the existing data tables. The predicted specific mass polarization contribution to the ionization potential for BeI, 0.00688 a.u., agrees with the 0.00674(100) a.u. from the experiment of Wen. Using the calculated results of Z=4-10, 15, and 20, we extrapolate the results for other Z systems up to Z=25, for which the ionization potentials are not explicitly computed.

  20. Tabu Search enhances network robustness under targeted attacks

    NASA Astrophysics Data System (ADS)

    Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi

    2016-03-01

    We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, the BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as to provide some insights into the design of robust networks.
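
    A simplified sketch of the rewiring move: a double-edge swap keeps every node degree fixed, improving swaps are kept, and rejected moves enter a short tabu list. The robustness callable is a placeholder for the targeted-attack metric, and a full Tabu Search would also accept some worsening moves, which is omitted here.

        # Degree-preserving link rewiring guided by a robustness score (networkx).
        import random
        import networkx as nx

        def tabu_rewire(G, robustness, iters=1000, tabu_len=50):
            tabu, best = [], robustness(G)
            for _ in range(iters):
                (a, b), (c, d) = random.sample(list(G.edges()), 2)
                move = frozenset([(a, b), (c, d)])
                if (len({a, b, c, d}) < 4 or move in tabu
                        or G.has_edge(a, d) or G.has_edge(c, b)):
                    continue
                G.remove_edges_from([(a, b), (c, d)])
                G.add_edges_from([(a, d), (c, b)])   # degrees unchanged
                r = robustness(G)
                if r > best:
                    best = r
                else:                                 # revert and mark tabu
                    G.remove_edges_from([(a, d), (c, b)])
                    G.add_edges_from([(a, b), (c, d)])
                    tabu = (tabu + [move])[-tabu_len:]
            return G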

  1. On the long time behavior of non-autonomous Lotka-Volterra models with diffusion via the sub-supertrajectory method

    NASA Astrophysics Data System (ADS)

    Langa, José A.; Rodríguez-Bernal, Aníbal; Suárez, Antonio

    In this paper we study in detail the geometrical structure of global pullback and forwards attractors associated to non-autonomous Lotka-Volterra systems in all three cases of competition, symbiosis and prey-predator. In particular, under some conditions on the parameters, we prove the existence of a unique nondegenerate global solution for these models, which attracts any other complete bounded trajectory. Thus, we generalize the existence of a unique strictly positive stable (stationary) solution from the autonomous case, and we extend to Lotka-Volterra systems the result for scalar logistic equations. To this end we present the sub-supertrajectory tool as a generalization of the now classical sub-supersolution method. In particular, we also conclude pullback and forwards permanence for the above models.

  2. Research of ceramic matrix for a safe immobilization of radioactive sludge waste

    NASA Astrophysics Data System (ADS)

    Dorofeeva, Ludmila; Orekhov, Dmitry

    2018-03-01

    The research and improvement of an existing method for hardening radioactive waste by fixation in a ceramic matrix was carried out. The radionuclide leaching rate was determined for samples coated with sodium silicate and tested after storage in air. The properties of the clay ceramic and the optimum sintering conditions were defined. Experimental data were obtained on how the sintering temperature regime and the quantities of water, sludge and additives in the samples influence their mechanical durability and water resistance. The comparative analysis of the conducted research is aimed at improving the existing method of hardening radioactive waste by inclusion in a ceramic matrix and reveals the advantages of the obtained results over analogous approaches.

  3. Natural Language Processing Methods and Systems for Biomedical Ontology Learning

    PubMed Central

    Liu, Kaihong; Hogan, William R.; Crowley, Rebecca S.

    2010-01-01

    While the biomedical informatics community widely acknowledges the utility of domain ontologies, there remain many barriers to their effective use. One important requirement of domain ontologies is that they must achieve a high degree of coverage of the domain concepts and concept relationships. However, the development of these ontologies is typically a manual, time-consuming, and often error-prone process. Limited resources result in missing concepts and relationships as well as difficulty in updating the ontology as knowledge changes. Methodologies developed in the fields of natural language processing, information extraction, information retrieval and machine learning provide techniques for automating the enrichment of an ontology from free-text documents. In this article, we review existing methodologies and developed systems, and discuss how existing methods can benefit the development of biomedical ontologies. PMID:20647054

  4. Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?

    PubMed

    Rocha, José C; Passalia, Felipe; Matos, Felipe D; Maserati, Marc P; Alves, Mayra F; Almeida, Tamie G de; Cardoso, Bruna L; Basso, Andrea C; Nogueira, Marcelo F G

    2016-08-01

    Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied to assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect the overall morphological quality of the embryo in cattle, or the quality of the individual embryonic structures, more relevant in human embryo classification. This assessment method is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it is not a method capable of giving reliable and trustworthy results. The latest approaches for the improvement of quality assessment include the use of data from cellular metabolism, a new morphological grading system, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, ion release by the embryo cells, and so forth. Nowadays there exists a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great importance to embryo evaluation by embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this sense, one being the use of digital images of the embryo as the basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment.

  5. Methods for assessing the quality of mammalian embryos: How far we are from the gold standard?

    PubMed Central

    Rocha, José C.; Passalia, Felipe; Matos, Felipe D.; Maserati Jr, Marc P.; Alves, Mayra F.; de Almeida, Tamie G.; Cardoso, Bruna L.; Basso, Andrea C.; Nogueira, Marcelo F. G.

    2016-01-01

    Morphological embryo classification is of great importance for many laboratory techniques, from basic research to those applied to assisted reproductive technology. However, the standard classification method for both human and cattle embryos is based on quality parameters that reflect the overall morphological quality of the embryo in cattle, or the quality of the individual embryonic structures, more relevant in human embryo classification. This assessment method is biased by the subjectivity of the evaluator, and even though several guidelines exist to standardize the classification, it is not a method capable of giving reliable and trustworthy results. The latest approaches for the improvement of quality assessment include the use of data from cellular metabolism, a new morphological grading system, development kinetics and cleavage symmetry, embryo cell biopsy followed by pre-implantation genetic diagnosis, zona pellucida birefringence, ion release by the embryo cells, and so forth. Nowadays there exists a great need for evaluation methods that are practical and non-invasive while being accurate and objective. A method along these lines would be of great importance to embryo evaluation by embryologists, clinicians and other professionals who work with assisted reproductive technology. Several techniques show promising results in this sense, one being the use of digital images of the embryo as the basis for feature extraction and classification by means of artificial intelligence techniques (such as genetic algorithms and artificial neural networks). This process has the potential to become an accurate and objective standard for embryo quality assessment. PMID:27584609

  6. Pairing field methods to improve inference in wildlife surveys while accommodating detection covariance

    USGS Publications Warehouse

    Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.

    2017-01-01

    It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.

  7. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging a hypergraph, which has proved beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  8. Cantilever spring constant calibration using laser Doppler vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohler, Benjamin

    2007-06-15

    Uncertainty in cantilever spring constants is a critical issue in atomic force microscopy (AFM) force measurements. Though numerous methods exist for calibrating cantilever spring constants, the accuracy of these methods can be limited by both the physical models themselves and uncertainties in their experimental implementation. Here we report the results from two of the most common calibration methods, the thermal tune method and the Sader method. These were implemented on a standard AFM system as well as using laser Doppler vibrometry (LDV). Using LDV eliminates some uncertainties associated with optical lever detection on an AFM. It also offers considerably higher signal-to-noise deflection measurements. We find that AFM and LDV result in similar uncertainty in the calibrated spring constants, about 5%, using either the thermal tune or Sader method, provided that certain limitations of the methods and instrumentation are observed.
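
    For reference, the thermal tune method rests on the equipartition relation k = k_B T / <x^2>, where <x^2> is the mean-square thermal deflection of the free cantilever end (in practice obtained from the area under the resonance peak in the deflection power spectrum). A worked example with illustrative numbers:

        # Thermal tune spring constant from the equipartition theorem.
        kB = 1.380649e-23      # Boltzmann constant, J/K
        T = 298.0              # temperature, K
        msd = 1.0e-19          # <x^2> in m^2 (about 0.32 nm rms deflection)
        k = kB * T / msd       # ~0.041 N/m, a typical soft-cantilever value
        print(f"spring constant = {k:.3e} N/m")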

  9. Activity coefficients from molecular simulations using the OPAS method

    NASA Astrophysics Data System (ADS)

    Kohns, Maximilian; Horsch, Martin; Hasse, Hans

    2017-10-01

    A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.

  10. Building Inventory Database on the Urban Scale Using GIS for Earthquake Risk Assessment

    NASA Astrophysics Data System (ADS)

    Kaplan, O.; Avdan, U.; Guney, Y.; Helvaci, C.

    2016-12-01

    In most developing countries, the majority of existing buildings are not safe against earthquakes. Before a devastating earthquake strikes, existing buildings need to be assessed and the vulnerable ones must be identified. Determining the seismic performance of existing buildings, which usually involves collecting the attributes of the buildings, carrying out the analysis and the necessary queries, and producing the result maps, is a hard and complicated procedure that can be simplified with a Geographic Information System (GIS). The aim of this study is to produce a building inventory database using GIS for assessing the earthquake risk of existing buildings. In this paper, a building inventory database for 310 buildings located in Eskisehir, Turkey, was produced in order to assess the earthquake risk of the buildings. The results from this study show that 26% of the buildings have high earthquake risk, 33% have medium earthquake risk and 41% have low earthquake risk. The produced building inventory database can be very useful, especially for governments dealing with the problem of identifying seismically vulnerable buildings in large existing building stocks. With the help of such methods, identification of the buildings that may collapse and cause loss of life and property during a possible future earthquake will be quick, cheap and reliable.

  11. The effectiveness of ground-penetrating radar surveys in the location of unmarked burial sites in modern cemeteries

    NASA Astrophysics Data System (ADS)

    Fiedler, Sabine; Illich, Bernhard; Berger, Jochen; Graw, Matthias

    2009-07-01

    Ground-penetrating radar (GPR) is a geophysical method that is commonly used in archaeological and forensic investigations, including the determination of the exact location of graves. Whilst the method is rapid and does not involve disturbance of the graves, the interpretation of GPR profiles is nevertheless difficult and often leads to incorrect results. Incorrect identifications could hinder criminal investigations and complicate burials in cemeteries that have no information on the location of previously existing graves. In order to increase the number of unmarked graves that are identified, the GPR results need to be verified by comparing them with the soil and vegetation properties of the sites examined. We used a modern cemetery to assess the results obtained with GPR, which we then compared with previously obtained tachymetric data and with an excavation of the graves where doubt existed. Certain soil conditions tended to make the application of GPR difficult on occasions, but a rough estimation of the location of the graves was always possible. The two different methods, GPR survey and tachymetry, both proved suitable for correctly determining the exact location of the majority of graves. The present study thus shows that GPR is a reliable method for determining the exact location of unmarked graves in modern cemeteries. However, the method did not allow statements to be made on the stage of decay of the bodies. Such information would assist in deciding what should be done with graves where ineffective degradation creates a problem for reusing graves following the standard resting time of 25 years.

  12. A mesh generation and machine learning framework for Drosophila gene expression pattern image analysis

    PubMed Central

    2013-01-01

    Background Multicellular organisms consist of cells of many different types that are established during development. Each type of cell is characterized by the unique combination of expressed gene products as a result of spatiotemporal gene regulation. Currently, a fundamental challenge in regulatory biology is to elucidate the gene expression controls that generate the complex body plans during development. Recent advances in high-throughput biotechnologies have generated spatiotemporal expression patterns for thousands of genes in the model organism fruit fly Drosophila melanogaster. Existing qualitative methods enhanced by a quantitative analysis based on computational tools we present in this paper would provide promising ways for addressing key scientific questions. Results We develop a set of computational methods and open source tools for identifying co-expressed embryonic domains and the associated genes simultaneously. To map the expression patterns of many genes into the same coordinate space and account for the embryonic shape variations, we develop a mesh generation method to deform a meshed generic ellipse to each individual embryo. We then develop a co-clustering formulation to cluster the genes and the mesh elements, thereby identifying co-expressed embryonic domains and the associated genes simultaneously. Experimental results indicate that the gene and mesh co-clusters can be correlated to key developmental events during the stages of embryogenesis we study. The open source software tool has been made available at http://compbio.cs.odu.edu/fly/. Conclusions Our mesh generation and machine learning methods and tools improve upon the flexibility, ease-of-use and accuracy of existing methods. PMID:24373308

  13. A novel method of utilizing permeable reactive kiddle (PRK) for the remediation of acid mine drainage.

    PubMed

    Lee, Woo-Chun; Lee, Sang-Woo; Yun, Seong-Taek; Lee, Pyeong-Koo; Hwang, Yu Sik; Kim, Soon-Oh

    2016-01-15

    Numerous technologies have been developed and applied to remediate AMD, but each has specific drawbacks. To overcome the limitations of existing methods and improve their effectiveness, we propose a novel method utilizing a permeable reactive kiddle (PRK). This manuscript explores the performance of the PRK method. In line with the concept of green technology, the PRK method recycles industrial waste, such as steel slag and waste cast iron. Our results demonstrate that the PRK method can be applied to remediate AMD under optimal operational conditions. In particular, this method allows simple installation at low cost compared with established technologies. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Common method biases in behavioral research: a critical review of the literature and recommended remedies.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Lee, Jeong-Yeon; Podsakoff, Nathan P

    2003-10-01

    Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.

  15. Robust iterative method for nonlinear Helmholtz equation

    NASA Astrophysics Data System (ADS)

    Yuan, Lijun; Lu, Ya Yan

    2017-08-01

    A new iterative method is developed for solving the two-dimensional nonlinear Helmholtz equation which governs polarized light in media with the optical Kerr nonlinearity. In the strongly nonlinear regime, the nonlinear Helmholtz equation could have multiple solutions related to phenomena such as optical bistability and symmetry breaking. The new method exhibits a much more robust convergence behavior than existing iterative methods, such as frozen-nonlinearity iteration, Newton's method and damped Newton's method, and it can be used to find solutions when good initial guesses are unavailable. Numerical results are presented for the scattering of light by a nonlinear circular cylinder based on the exact nonlocal boundary condition and a pseudospectral method in the polar coordinate system.
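
    For concreteness, here is a minimal 1-D sketch of the frozen-nonlinearity iteration that the abstract cites as a baseline: at each step the Kerr term is frozen at the previous iterate and a linear Helmholtz problem is solved. The grid, wavenumber, Kerr coefficient, and source below are assumptions, not the paper's setup, and the strongly nonlinear regime is exactly where this simple iteration can fail to converge.

```python
# 1-D frozen-nonlinearity iteration for u'' + k0^2 (1 + gamma |u|^2) u = -f
# with homogeneous Dirichlet boundaries. All parameters are assumed.
import numpy as np

n, length = 400, 1.0
h = length / (n + 1)
x = np.linspace(h, length - h, n)
k0, gamma = 30.0, 0.01                     # wavenumber and Kerr coefficient (assumed)
f = np.exp(-200.0 * (x - 0.5) ** 2)        # localized source (assumed)

# Second-difference operator with homogeneous Dirichlet boundaries.
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h ** 2

u = np.zeros(n)
for _ in range(200):
    # Freeze |u|^2 from the previous iterate, then solve a *linear* problem.
    A = D2 + np.diag(k0 ** 2 * (1.0 + gamma * np.abs(u) ** 2))
    u_new = np.linalg.solve(A, -f)
    if np.linalg.norm(u_new - u) < 1e-10 * (1.0 + np.linalg.norm(u_new)):
        u = u_new
        break
    u = u_new
```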

  16. Dynamically Evolving Sectors for Convective Weather Impact

    NASA Technical Reports Server (NTRS)

    Drew, Michael C.

    2010-01-01

    A new strategy for altering existing sector boundaries in response to blocking convective weather is presented. This method seeks to recover capacity lost in sectors directly affected by weather by moving boundaries in a direction that offers the greatest capacity improvement. The boundary deformations are shared by neighboring sectors within the region in a manner that preserves their shapes and sizes as much as possible. This reduces the controller workload involved with learning new sector designs. The algorithm that produces the altered sectors is based on a force-deflection mesh model that needs only nominal traffic patterns and the shape of the blocking weather for input. It does not require weather-affected traffic patterns that would have to be predicted by simulation. When compared to an existing optimal sector design method, the sectors produced by the new algorithm are more similar to the original sector shapes, resulting in sectors that may be more suitable for operational use because the change is not as drastic. Also, preliminary results show that this method produces sectors that can equitably distribute the workload of rerouted weather-affected traffic throughout the region where inclement weather is present. This is demonstrated by sector aircraft count distributions of simulated traffic in weather-affected regions.
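
    The force-deflection idea can be sketched in a few lines. The toy below, under assumed geometry and constants (none from the paper), repels boundary nodes away from a weather cell while springs between neighboring nodes resist shape change, which is the qualitative behavior described above.

```python
# Toy force-deflection relaxation: weather repels boundary nodes, springs
# spread the deformation over neighbors. All constants are assumptions.
import numpy as np

nodes = np.array([[0., 0.], [1., 0.], [2., 0.], [2., 1.], [1., 1.], [0., 1.]])
edges = [(i, (i + 1) % len(nodes)) for i in range(len(nodes))]   # closed boundary
weather = np.array([[1.0, 0.4]])          # center of a blocking weather cell (assumed)

k_spring, k_weather, step = 1.0, 0.02, 0.1
rest = {e: float(np.linalg.norm(nodes[e[0]] - nodes[e[1]])) for e in edges}

for _ in range(300):
    force = np.zeros_like(nodes)
    for i, j in edges:                    # springs resist changes in edge length
        d = nodes[j] - nodes[i]
        dist = np.linalg.norm(d)
        f = k_spring * (dist - rest[(i, j)]) * d / dist
        force[i] += f
        force[j] -= f
    for w in weather:                     # weather pushes nearby nodes away
        d = nodes - w
        dist = np.linalg.norm(d, axis=1, keepdims=True)
        force += k_weather * d / dist ** 3
    nodes = nodes + step * force
```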

  17. Generating Virtual Patients by Multivariate and Discrete Re-Sampling Techniques.

    PubMed

    Teutonico, D; Musuamba, F; Maas, H J; Facius, A; Yang, S; Danhof, M; Della Pasqua, O

    2015-10-01

    Clinical Trial Simulations (CTS) are a valuable tool for decision-making during drug development. However, to obtain realistic simulation scenarios, the patients included in the CTS must be representative of the target population. This is particularly important when covariate effects exist that may affect the outcome of a trial. The objective of our investigation was to evaluate and compare CTS results using re-sampling from a population pool and multivariate distributions to simulate patient covariates. COPD was selected as a paradigm disease for the purposes of our analysis, FEV1 was used as the response measure, and the effects of a hypothetical intervention were evaluated in different populations in order to assess the predictive performance of the two methods. Our results show that the multivariate distribution method produces realistic covariate correlations, comparable to the real population. Moreover, it allows simulation of patient characteristics beyond the limits of inclusion and exclusion criteria in historical protocols. Both methods, discrete resampling and multivariate distribution, generate realistic pools of virtual patients. However, the use of a multivariate distribution enables more flexible simulation scenarios, since it is not necessarily bound to the existing covariate combinations in the available clinical data sets.
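
    The contrast between the two covariate-simulation strategies is easy to make concrete. The sketch below uses invented toy covariates (age, weight, baseline FEV1) rather than any real COPD data, and draws virtual patients both by discrete row re-sampling and from a fitted multivariate normal.

```python
# Two ways to simulate virtual-patient covariates; all numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
# Observed pool: columns = age (yr), weight (kg), baseline FEV1 (L).
pool = np.column_stack([rng.normal(65, 8, 200),
                        rng.normal(75, 12, 200),
                        rng.normal(1.4, 0.3, 200)])

# (a) Discrete re-sampling: draw whole rows, so only observed combinations occur.
virtual_discrete = pool[rng.integers(0, len(pool), size=1000)]

# (b) Multivariate distribution: fit mean/covariance, then sample new patients.
mu = pool.mean(axis=0)
cov = np.cov(pool, rowvar=False)
virtual_mvn = rng.multivariate_normal(mu, cov, size=1000)
```

    Only the second strategy can produce covariate combinations absent from the observed pool, which is the flexibility the abstract highlights.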

  18. A simple and rapid method for direct determination of Al(III) based on the enhanced resonance Rayleigh scattering of hemin-functionalized graphene-Al(III) system

    NASA Astrophysics Data System (ADS)

    Ling, Yu; Chen, Ling Xiao; Dong, Jiang Xue; Li, Nian Bing; Luo, Hong Qun

    2016-03-01

    A novel method for direct determination of Al(III) by using hemin-functionalized graphene (H-GO) has been established based on the enhancement of resonance Rayleigh scattering (RRS) intensity. The characteristics of RRS spectra, the optimum reaction conditions, and the reaction mechanism have been investigated. In this experiment, the Al(III) would exist as sol-gel Al(OH)3 species at pH 5.9 in aqueous solutions. When H-GO existed in the solution, the sol-gel Al(OH)3 would react with H-GO, resulting in an enhancement of the RRS intensity owing to the increased hydrophobicity of the H-GO surface. Therefore, a simple and rapid sensor for Al(III) was developed. The increased intensity of RRS is directly proportional to the concentration of Al(III) in the range of 10 nM-6 μM, along with a detection limit of 0.87 nM. Moreover, the sensor has been applied to the determination of Al(III) concentration in real water and aspirin tablet samples with satisfactory results. Therefore, the proposed method is promising as an effective means for selective and sensitive determination of Al(III).
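
    The quantitation in a sensor of this kind rests on simple calibration arithmetic. The sketch below, with made-up numbers rather than the paper's data, fits the RRS intensity increase against concentration and estimates a detection limit as three times the blank standard deviation divided by the slope.

```python
# Linear calibration and 3*sigma/slope detection limit; data are illustrative.
import numpy as np

conc = np.array([0.01, 0.1, 0.5, 1.0, 2.0, 4.0, 6.0])      # Al(III), uM (assumed)
delta_rrs = np.array([2.1, 19.8, 101.0, 198.5, 402.0, 801.3, 1195.0])

slope, intercept = np.polyfit(conc, delta_rrs, 1)
sigma_blank = 0.35                   # std dev of repeated blank readings (assumed)
lod = 3 * sigma_blank / slope        # detection limit, same units as conc

unknown = (250.0 - intercept) / slope   # invert the line for a sample reading
print(f"slope={slope:.1f}, LOD={lod:.4f} uM, unknown={unknown:.3f} uM")
```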

  19. Quantification of polyhydroxyalkanoates in mixed and pure cultures biomass by Fourier transform infrared spectroscopy: comparison of different approaches.

    PubMed

    Isak, I; Patel, M; Riddell, M; West, M; Bowers, T; Wijeyekoon, S; Lloyd, J

    2016-08-01

    Fourier transform infrared (FTIR) spectroscopy was used in this study for the rapid quantification of polyhydroxyalkanoates (PHA) in mixed and pure culture bacterial biomass. Three different statistical analysis methods (regression, partial least squares (PLS) and nonlinear) were applied to the FTIR data and the results were plotted against the PHA values measured with the reference gas chromatography technique. All methods predicted PHA content in mixed culture biomass with comparable efficiency, indicated by similar residual values. The PHA in these cultures ranged from low to medium concentration (0-44 wt% of dried biomass content). However, for the analysis of the combined mixed and pure culture biomass with PHA concentration ranging from low to high (0-93% of dried biomass content), the PLS method was most efficient. This paper reports, for the first time, the use of a single calibration model constructed with a combination of mixed and pure cultures covering a wide PHA range, for predicting PHA content in biomass. Currently, no universal method exists for processing FTIR data for polyhydroxyalkanoate (PHA) quantification. This study compares three different methods of analysing FTIR data for quantification of PHAs in biomass. A new data-processing approach was proposed and the results were compared against existing literature methods. Most publications report PHA quantification over a medium range in pure cultures. However, in our study we encompassed both mixed and pure culture biomass containing a broader range of PHA in the calibration curve. The resulting prediction model is useful for rapid quantification of a wider range of PHA content in biomass. © 2016 The Society for Applied Microbiology.
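
    A PLS calibration of this kind is short to sketch. The example below substitutes synthetic spectra and reference values for the study's FTIR and GC data, and cross-validates a PLS model over the full 0-93 wt% range; the component count is an assumption.

```python
# PLS calibration sketch with synthetic stand-ins for FTIR spectra and GC values.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
pha_gc = rng.uniform(0, 93, 60)                      # reference PHA wt% "from GC"
basis = rng.random(900)                              # one spectral signature
spectra = np.outer(pha_gc, basis) + rng.normal(0, 1.0, (60, 900))  # fake spectra

pls = PLSRegression(n_components=8)                  # component count assumed
predicted = cross_val_predict(pls, spectra, pha_gc, cv=10).ravel()
rmse = float(np.sqrt(np.mean((predicted - pha_gc) ** 2)))
print(f"cross-validated RMSE: {rmse:.2f} wt%")
```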

  20. Modelling the monetary value of a QALY: a new approach based on UK data.

    PubMed

    Mason, Helen; Jones-Lee, Michael; Donaldson, Cam

    2009-08-01

    Debate about the monetary value of a quality-adjusted life year (QALY) has existed in the health economics literature for some time. More recently, concern about such a value has arisen in UK health policy. This paper reports on an attempt to 'model' a willingness-to-pay-based value of a QALY from the existing value of preventing a statistical fatality (VPF) currently used in UK public sector decision making. Two methods of deriving the value of a QALY from the existing UK VPF are outlined: one conventional and one new. The advantages and disadvantages of each of the approaches are discussed as well as the implications of the results for policy and health economic evaluation methodology.
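
    The conventional route can be written down compactly. The following is a hedged sketch of that chained calculation, not the paper's new approach; the symbols are generic and the numbers purely illustrative.

```latex
% Sketch of the conventional "chained" derivation: divide the VPF by the
% expected discounted QALYs lost per statistical fatality (generic symbols).
\[
  v_{\mathrm{QALY}} \;=\; \frac{\mathrm{VPF}}{\sum_{t=0}^{T} Q_t / (1+r)^t}
\]
% Here Q_t is the expected quality weight t years hence for the average
% fatality averted, T the remaining life expectancy, and r the discount rate.
% Illustratively, a VPF of 1.5m GBP over about 40 discounted QALYs gives
% roughly 37,500 GBP per QALY (assumed numbers, shown only for the shape of
% the calculation).
```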

  1. Technology Solutions Case Study: Excavationless: Exterior-Side Foundation Insulation for Existing Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Building science research supports installing exterior (soil side) foundation insulation as the optimal method to enhance the hygrothermal performance of new homes. With exterior foundation insulation, water management strategies are maximized while insulating the basement space and ensuring a more even temperature at the foundation wall. This project describes an innovative, minimally invasive foundation insulation upgrade technique on an existing home that uses hydrovac excavation technology combined with a liquid insulating foam. Cost savings over the traditional excavation process ranged from 23% to 50%. The excavationless process could result in even greater savings since replacement of building structures, exterior features, utility meters, and landscaping would be minimal or non-existent in an excavationless process.

  2. Positive-unlabeled learning for disease gene identification

    PubMed Central

    Yang, Peng; Li, Xiao-Li; Mei, Jian-Ping; Kwoh, Chee-Keong; Ng, See-Kiong

    2012-01-01

    Background: Identifying disease genes from human genome is an important but challenging task in biomedical research. Machine learning methods can be applied to discover new disease genes based on the known ones. Existing machine learning methods typically use the known disease genes as the positive training set P and the unknown genes as the negative training set N (non-disease gene set does not exist) to build classifiers to identify new disease genes from the unknown genes. However, such classifiers are actually built from a noisy negative set N, as there can be unknown disease genes in N itself. As a result, the classifiers do not perform as well as they could. Result: Instead of treating the unknown genes as negative examples in N, we treat them as an unlabeled set U. We design a novel positive-unlabeled (PU) learning algorithm PUDI (PU learning for disease gene identification) to build a classifier using P and U. We first partition U into four sets, namely, reliable negative set RN, likely positive set LP, likely negative set LN and weak negative set WN. The weighted support vector machines are then used to build a multi-level classifier based on the four training sets and positive training set P to identify disease genes. Our experimental results demonstrate that our proposed PUDI algorithm outperformed the existing methods significantly. Conclusion: The proposed PUDI algorithm is able to identify disease genes more accurately by treating the unknown data more appropriately as unlabeled set U instead of negative set N. Given that many machine learning problems in biomedical research do involve positive and unlabeled data instead of negative data, it is possible that the machine learning methods for these problems can be further improved by adopting PU learning methods, as we have done here for disease gene identification. Availability and implementation: The executable program and data are available at http://www1.i2r.a-star.edu.sg/∼xlli/PUDI/PUDI.html. Contact: xlli@i2r.a-star.edu.sg or yang0293@e.ntu.edu.sg Supplementary information: Supplementary Data are available at Bioinformatics online. PMID:22923290
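
    A stripped-down two-step PU-learning sketch conveys the core idea. This is a generic simplification, not the multi-level PUDI algorithm: positives are first trained against all of U to identify a reliable negative set, and a final classifier is trained on P versus those reliable negatives. The gene feature matrices are synthetic placeholders.

```python
# Generic two-step PU learning (simplification of the idea, not PUDI itself).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
P = rng.normal(1.0, 1.0, (100, 20))          # known disease genes (synthetic features)
U = rng.normal(0.0, 1.0, (1000, 20))         # unlabeled genes

# Step 1: P vs U classifier; lowest-scoring members of U become reliable negatives.
X = np.vstack([P, U])
y = np.r_[np.ones(len(P)), np.zeros(len(U))]
scores = SVC(probability=True).fit(X, y).predict_proba(U)[:, 1]
RN = U[scores < np.quantile(scores, 0.3)]    # reliable negative set

# Step 2: final classifier trained on P vs RN, applied back to all of U.
X2 = np.vstack([P, RN])
y2 = np.r_[np.ones(len(P)), np.zeros(len(RN))]
clf = SVC(probability=True).fit(X2, y2)
candidate_rank = np.argsort(-clf.predict_proba(U)[:, 1])   # top = likely disease genes
```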

  3. Multi-Optimisation Consensus Clustering

    NASA Astrophysics Data System (ADS)

    Li, Jian; Swift, Stephen; Liu, Xiaohui

    Ensemble Clustering has been developed to provide an alternative way of obtaining more stable and accurate clustering results. It aims to avoid the biases of individual clustering algorithms. However, it is still a challenge to develop an efficient and robust method for Ensemble Clustering. Based on an existing ensemble clustering method, Consensus Clustering (CC), this paper introduces an advanced Consensus Clustering algorithm called Multi-Optimisation Consensus Clustering (MOCC), which utilises an optimised Agreement Separation criterion and a Multi-Optimisation framework to improve the performance of CC. Fifteen different data sets are used for evaluating the performance of MOCC. The results reveal that MOCC can generate more accurate clustering results than the original CC algorithm.
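
    As background, the consensus-clustering machinery that MOCC builds on can be sketched generically: run a base clusterer many times, accumulate a co-association matrix, and extract a consensus partition from it. The data, ensemble size, and cluster counts below are illustrative assumptions, not MOCC itself.

```python
# Generic consensus clustering via a co-association matrix (CC-style baseline).
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 1, (50, 5)), rng.normal(4, 1, (50, 5))])

n = len(X)
coassoc = np.zeros((n, n))
for seed in range(50):                        # ensemble of base clusterings
    labels = KMeans(n_clusters=2, n_init=5, random_state=seed).fit_predict(X)
    coassoc += (labels[:, None] == labels[None, :])
coassoc /= 50

dist = 1 - coassoc                            # co-association -> distance
np.fill_diagonal(dist, 0)
Z = linkage(squareform(dist, checks=False), method="average")
consensus = fcluster(Z, t=2, criterion="maxclust")
```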

  4. Ortho Image and DTM Generation with Intelligent Methods

    NASA Astrophysics Data System (ADS)

    Bagheri, H.; Sadeghian, S.

    2013-10-01

    Artificial intelligence algorithms are nowadays widely considered in GIS and remote sensing. Genetic algorithms and artificial neural networks are two intelligent methods used for optimizing image processing programs such as edge extraction; these algorithms are very useful for solving complex problems. In this paper, the ability and application of genetic algorithms and artificial neural networks in geospatial production processes, such as geometric modelling of satellite images for ortho photo generation and height interpolation in raster Digital Terrain Model production, are discussed. First, the geometric potential of Ikonos-2 and Worldview-2 was tested with rational functions and 2D and 3D polynomials. Comprehensive experiments were also carried out to evaluate the viability of the genetic algorithm for optimization of rational functions and 2D and 3D polynomials. Considering the quality of the Ground Control Points, the accuracy (RMSE) with the genetic algorithm and the 3D polynomial method for the Ikonos-2 Geo image was 0.508 pixel sizes, and the accuracy (RMSE) with the GA algorithm and the rational function method for the Worldview-2 image was 0.930 pixel sizes. As a further artificial intelligence optimization method, neural networks were used. With a perceptron network on the Worldview-2 image, a result of 0.84 pixel sizes was obtained with 4 neurons in the middle layer. The conclusion was that artificial intelligence algorithms make it possible to optimize the existing models and obtain better results than the usual ones. Finally, the artificial intelligence methods, genetic algorithms as well as neural networks, were examined on sample data for optimizing interpolation and for generating Digital Terrain Models. The results were then compared with existing conventional methods, and it appeared that these methods have a high capacity for height interpolation and that using these networks for interpolating and optimizing inverse-distance weighting methods leads to a highly accurate estimation of heights.
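
    A toy genetic algorithm makes the optimization step concrete. The sketch below, with invented ground control points and GA settings, evolves the six coefficients of a first-order 2-D polynomial (affine) mapping from image to ground coordinates so as to minimize RMSE; the paper's rational-function and higher-order cases follow the same pattern.

```python
# Toy GA fitting an affine image-to-ground mapping to GCPs; all data invented.
import numpy as np

rng = np.random.default_rng(5)
img = rng.uniform(0, 1000, (30, 2))                   # GCP image coordinates
A_true, b_true = np.array([[2.0, 0.1], [-0.1, 2.0]]), np.array([500.0, 300.0])
ground = img @ A_true.T + b_true + rng.normal(0, 0.5, (30, 2))

def rmse(coef):                                       # fitness = RMSE over GCPs
    A, b = coef[:4].reshape(2, 2), coef[4:]
    return float(np.sqrt(np.mean((img @ A.T + b - ground) ** 2)))

scale = np.array([1.0, 1.0, 1.0, 1.0, 500.0, 500.0])  # per-gene scale (assumed)
pop = rng.normal(0.0, 1.0, (100, 6)) * scale
for _ in range(300):
    fitness = np.array([rmse(c) for c in pop])
    elite = pop[np.argsort(fitness)[:20]]             # selection with elitism
    children = (elite[rng.integers(0, 20, 80)]
                + rng.normal(0.0, 0.02, (80, 6)) * scale)   # mutation
    pop = np.vstack([elite, children])
best = pop[np.argmin([rmse(c) for c in pop])]
```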

  5. A sampling and classification item selection approach with content balancing.

    PubMed

    Chen, Pei-Hua

    2015-03-01

    Existing automated test assembly methods typically employ constrained combinatorial optimization. Constructing forms sequentially based on an optimization approach usually results in unparallel forms and requires heuristic modifications. Methods based on a random search approach have the major advantage of producing parallel forms sequentially without further adjustment. This study incorporated a flexible content-balancing element into the statistical perspective item selection method of the cell-only method (Chen et al. in Educational and Psychological Measurement, 72(6), 933-953, 2012). The new method was compared with a sequential interitem distance weighted deviation model (IID WDM) (Swanson & Stocking in Applied Psychological Measurement, 17(2), 151-166, 1993), a simultaneous IID WDM, and a big-shadow-test mixed integer programming (BST MIP) method to construct multiple parallel forms based on matching a reference form item-by-item. The results showed that the cell-only method with content balancing and the sequential and simultaneous versions of IID WDM yielded results comparable to those obtained using the BST MIP method. The cell-only method with content balancing is computationally less intensive than the sequential and simultaneous versions of IID WDM.

  6. Expanding Clinical Laboratory Tobacco Product Evaluation Methods to Loose-leaf Tobacco Vaporizers

    PubMed Central

    Lopez, Alexa A.; Hiler, Marzena; Maloney, Sarah; Eissenberg, Thomas; Breland, Alison

    2016-01-01

    Background Novel tobacco products entering the US market include electronic cigarettes (ECIGs) and products advertised to “heat, not burn” tobacco. There is a growing literature regarding the acute effects of ECIGs. Less is known about “heat, not burn” products. This study’s purpose was to expand existing clinical laboratory methods to examine, in cigarette smokers, the acute effects of a “heat, not burn” “loose-leaf tobacco vaporizer” (LLTV). Methods Plasma nicotine and breath carbon monoxide (CO) concentration and tobacco abstinence symptom severity were measured before and after two 10-puff (30-sec interpuff interval) product use bouts separated by 60 minutes. LLTV effects were compared to participants’ own brand (OB) cigarettes and an ECIG (3.3 V; 1.5 Ohm; 18 mg/ml nicotine). Results Relative to OB, LLTV increased plasma nicotine concentration to a lesser degree, did not increase CO, and appeared to not reduce abstinence symptoms as effectively. Relative to ECIG, LLTV nicotine and CO delivery and abstinence symptom suppression did not differ. Participants reported that both the LLTV and ECIG were significantly less satisfying than OB. Conclusions Results demonstrate that LLTVs are capable of delivering nicotine and suppressing tobacco abstinence symptoms partially; acute effects of these products can be evaluated using existing clinical laboratory methods. Results can inform tobacco product regulation and may be predictive of the extent that these products have the potential to benefit or harm overall public health. PMID:27768968

  7. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
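
    The numerical core described here, a Newton iteration safeguarded by a modified Cholesky factorization, can be sketched generically. The version below substitutes the simplest modification (adding tau*I until the factorization succeeds) and an arbitrary nonconvex test objective for FOCUS's coil-design cost function.

```python
# Generic modified-Newton sketch; the tau*I shift is a simple stand-in for a
# full modified Cholesky factorization, and the objective is not a coil cost.
import numpy as np
from scipy.linalg import cho_solve

def modified_newton(grad, hess, x, iters=50, tol=1e-10):
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        H = hess(x)
        tau = 0.0
        while True:                        # inflate tau until H + tau*I is SPD
            try:
                L = np.linalg.cholesky(H + tau * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, 1e-6)
        x = x + cho_solve((L, True), -g)   # (modified) Newton step
    return x

# Example: f(x) = x0^4 + x0*x1 + (1 + x1)^2 has an indefinite Hessian near the
# origin, so the modification is exercised from this starting point.
grad = lambda x: np.array([4 * x[0] ** 3 + x[1], x[0] + 2 * (1 + x[1])])
hess = lambda x: np.array([[12 * x[0] ** 2, 1.0], [1.0, 2.0]])
x_opt = modified_newton(grad, hess, np.array([0.0, 0.0]))
```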

  8. Laplace-transform-based method to calculate back-reflected radiance from an isotropically scattering half-space

    NASA Astrophysics Data System (ADS)

    Rinzema, K.; Hoenders, B. J.; Ferwerda, H. A.

    1997-07-01

    We present a method to determine the back-reflected radiance from an isotropically scattering half-space with matched boundary. This method has the advantage that it leads very quickly to the relevant equations, the numerical solution of which is also quite easy. Essentially, the method is derived from a mathematical criterion that effectively forbids the existence of solutions to the transport equation which grow exponentially as one moves away from the surface and deeper into the medium. Preliminary calculations for infinitely wide beams yield results which agree very well with what is found in the literature.

  9. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
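
    One widely used single-tone estimator of the kind compared in such studies is FFT peak picking refined by parabolic interpolation of the log-magnitude spectrum. The sketch below applies it to an assumed noisy tone; none of the parameters come from the paper.

```python
# FFT-peak single-tone frequency estimation with parabolic refinement.
import numpy as np

fs, n = 1000.0, 512
t = np.arange(n) / fs
f_true = 123.4                                   # tone frequency (assumed)
x = np.cos(2 * np.pi * f_true * t) + 0.05 * np.random.default_rng(6).normal(size=n)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
k = int(np.argmax(spec))
# Parabolic (quadratic) interpolation of log-magnitude around the peak bin.
a, b, c = np.log(spec[k - 1]), np.log(spec[k]), np.log(spec[k + 1])
delta = 0.5 * (a - c) / (a - 2 * b + c)
f_est = (k + delta) * fs / n
print(f"estimated {f_est:.3f} Hz vs true {f_true} Hz")
```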

  10. Search automation of the generalized method of device operational characteristics improvement

    NASA Astrophysics Data System (ADS)

    Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.

    2017-01-01

    The article briefly presents the results of an analysis of existing methods for searching for the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. The most widespread clustering algorithms and metrics for determining the degree of proximity between two documents were reviewed. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of 7 steps. This technique has been implemented in the “Patents search” subsystem of the “Intellect” system. The article also gives an example of the use of the proposed technique.
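
    A minimal sketch of the document-proximity machinery such a subsystem could rest on follows; the patent texts, vectorizer, and cluster count are all assumptions, not the "Intellect" system's actual implementation.

```python
# TF-IDF similarity ranking and clustering of patent texts (all placeholders).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import KMeans

patents = ["piezoelectric pressure sensor with temperature compensation",
           "capacitive pressure transducer for harsh environments",
           "thermoelectric generator using waste heat",
           "mems capacitive accelerometer with self-test"]
query = ["temperature compensated pressure sensing element"]

vec = TfidfVectorizer()
X = vec.fit_transform(patents + query)
sims = cosine_similarity(X[-1], X[:-1]).ravel()     # query vs. each patent
closest = sims.argsort()[::-1]                      # most similar patents first

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:-1])
```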

  11. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  12. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  13. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component has been estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component has been estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
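
    The low-rank-plus-sparse split can be sketched with a convex stand-in: alternating singular-value soft-thresholding for L (in place of the paper's non-convex OptShrink shrinkage) and elementwise soft-thresholding for S. The data below is a synthetic rank-1-plus-outliers matrix, not fMRI, and the thresholds are arbitrary.

```python
# Convex L + S decomposition sketch (soft-thresholding stands in for OptShrink).
import numpy as np

def svt(M, tau):                         # singular-value soft-threshold
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(M, lam):                        # elementwise soft-threshold
    return np.sign(M) * np.maximum(np.abs(M) - lam, 0)

rng = np.random.default_rng(7)
low = np.outer(rng.normal(size=64), rng.normal(size=100))   # rank-1 component
sparse = (rng.random((64, 100)) < 0.05) * rng.normal(0, 5, (64, 100))
Y = low + sparse                          # stand-in for the measured data

L, S = np.zeros_like(Y), np.zeros_like(Y)
for _ in range(100):                      # alternating minimization
    L = svt(Y - S, tau=5.0)
    S = soft(Y - L, lam=0.5)
```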

  14. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but are costly time-wise and can degrade the image quality of bony anatomy for alignment with regularization. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long scan (8–10 mins) CBCT data with corresponding RPM data was collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data was reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low and high regularization TV minimization (λ=2,10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70mm for FDK, 2.50mm for low regularization TV, 1.48mm for high regularization TV, and 2.34mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK and images were less blurred than TV reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high regularization TV minimization. These results along with the reconstruction time and outside VOI image quality advantages suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  15. Knowledge based word-concept model estimation and refinement for biomedical text mining.

    PubMed

    Jimeno Yepes, Antonio; Berlanga, Rafael

    2015-02-01

    Text mining of scientific literature has been essential for setting up large public biomedical databases, which are being widely used by the research community. In the biomedical domain, the existence of a large number of terminological resources and knowledge bases (KB) has enabled a myriad of machine learning methods for different text mining related tasks. Unfortunately, KBs have not been devised for text mining tasks but for human interpretation, thus performance of KB-based methods is usually lower when compared to supervised machine learning methods. The disadvantage of supervised methods, though, is that they require labeled training data and are therefore not useful for large-scale biomedical text mining systems. KB-based methods do not have this limitation. In this paper, we describe a novel method to generate word-concept probabilities from a KB, which can serve as a basis for several text mining tasks. This method not only takes into account the underlying patterns within the descriptions contained in the KB but also those in texts available from large unlabeled corpora such as MEDLINE. The parameters of the model have been estimated without training data. Patterns from MEDLINE have been built using MetaMap for entity recognition and related using co-occurrences. The word-concept probabilities were evaluated on the task of word sense disambiguation (WSD). The results showed that our method obtained a higher degree of accuracy than other state-of-the-art approaches when evaluated on the MSH WSD data set. We also evaluated our method on the task of document ranking using MEDLINE citations. These results also showed an increase in performance over existing baseline retrieval approaches. Copyright © 2014 Elsevier Inc. All rights reserved.
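
    A drastically simplified sketch shows how KB-derived word-concept probabilities can drive WSD: two toy concepts stand in for a real knowledge base, and plain add-one smoothing stands in for the paper's MEDLINE-informed estimation.

```python
# Toy word-concept probabilities P(word | concept) from KB descriptions,
# applied to disambiguate a mention by scoring its context.
from collections import Counter
import math

kb = {"cold_temperature": "low temperature weather freezing chill winter",
      "common_cold": "viral infection nose throat sneezing cough illness"}

vocab = set(w for d in kb.values() for w in d.split())
prob = {}
for concept, desc in kb.items():
    counts = Counter(desc.split())
    total = sum(counts.values()) + len(vocab)
    prob[concept] = {w: (counts[w] + 1) / total for w in vocab}  # add-one smoothing

def disambiguate(context):
    scores = {}
    for concept in kb:
        p = prob[concept]
        floor = 1 / (sum(Counter(kb[concept].split()).values()) + len(vocab))
        scores[concept] = sum(math.log(p.get(w, floor)) for w in context.split())
    return max(scores, key=scores.get)

print(disambiguate("patient presented with cough and sneezing"))  # -> common_cold
```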

  16. Theoretical and experimental investigation of supersonic aerodynamic characteristics of a twin-fuselage concept

    NASA Technical Reports Server (NTRS)

    Wood, R. M.; Miller, D. S.; Brentner, K. S.

    1983-01-01

    A theoretical and experimental investigation has been conducted to evaluate the fundamental supersonic aerodynamic characteristics of a generic twin-body model at a Mach number of 2.70. Results show that existing aerodynamic prediction methods are adequate for making preliminary aerodynamic estimates.

  17. Hard exudates segmentation based on learned initial seeds and iterative graph cut.

    PubMed

    Kusakunniran, Worapan; Wu, Qiang; Ritthipravat, Panrasee; Zhang, Jian

    2018-05-01

    (Background and Objective): The occurrence of hard exudates is one of the early signs of diabetic retinopathy, which is one of the leading causes of blindness. Many patients with diabetic retinopathy lose their vision because of the late detection of the disease. Thus, this paper proposes a novel method for segmenting hard exudates in retinal images in an automatic way. (Methods): The existing methods are based on either supervised or unsupervised learning techniques. In addition, the learned segmentation models may often cause missed detections and/or false detections of hard exudates, due to the lack of rich characteristics, the intra-variations, and the similarity with other components in the retinal image. Thus, in this paper, the supervised learning based on the multilayer perceptron (MLP) is only used to identify initial seeds with high confidences to be hard exudates. Then, the segmentation is finalized by unsupervised learning based on the iterative graph cut (GC) using clusters of initial seeds. Also, in order to reduce color intra-variations of hard exudates in different retinal images, the color transfer (CT) is applied to normalize their color information, in the pre-processing step. (Results): The experiments and comparisons with the other existing methods are based on the two well-known datasets, e_ophtha EX and DIARETDB1. It can be seen that the proposed method outperforms the other existing methods in the literature, with the sensitivity in the pixel-level of 0.891 for the DIARETDB1 dataset and 0.564 for the e_ophtha EX dataset. The cross datasets validation where the training process is performed on one dataset and the testing process is performed on another dataset is also evaluated in this paper, in order to illustrate the robustness of the proposed method. (Conclusions): This newly proposed method integrates the supervised learning and unsupervised learning based techniques. It achieves the improved performance, when compared with the existing methods in the literature. The robustness of the proposed method for the scenario of cross datasets could enhance its practical usage. That is, the trained model could be more practical for unseen data in the real-world situation, especially when the capturing environments of training and testing images are not the same. Copyright © 2018 Elsevier B.V. All rights reserved.
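
    The seed-selection stage can be illustrated compactly. In the sketch below, a small MLP scores synthetic pixel feature vectors (stand-ins for real fundus features), and only high-confidence predictions are kept as foreground/background seeds; the iterative graph-cut refinement and color transfer are not reproduced here.

```python
# MLP-based seed selection sketch; features/labels are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(8)
X_train = np.vstack([rng.normal(1, 1, (500, 6)), rng.normal(-1, 1, (500, 6))])
y_train = np.r_[np.ones(500), np.zeros(500)]        # 1 = hard-exudate pixel

mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
mlp.fit(X_train, y_train)

pixels = rng.normal(0, 1.5, (10000, 6))             # features of one test image
p = mlp.predict_proba(pixels)[:, 1]
fg_seeds = np.where(p > 0.95)[0]                    # confident exudate seeds
bg_seeds = np.where(p < 0.05)[0]                    # confident background seeds
# fg_seeds / bg_seeds would then anchor an iterative graph cut over the image.
```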

  18. Comparison of gamma-oryzanol contents in crude rice bran oils from different sources by various determination methods.

    PubMed

    Yoshie, Ayano; Kanda, Ayato; Nakamura, Takahiro; Igusa, Hisao; Hara, Setsuko

    2009-01-01

    Although there are various determination methods for gamma-oryzanol contained in rice bran oil by absorptiometry, normal-phase HPLC, and reversed-phase HPLC, their accuracies and the correlations among them have not been revealed yet. Chloroform-containing mixed solvents are widely used as mobile phases in some HPLC methods, but researchers have been apprehensive about their use in terms of safety for the human body and the environment. In the present study, a simple and accurate determination method was developed by improving the reversed-phase HPLC method. This novel HPLC method uses methanol/acetonitrile/acetic acid (52/45/3 v/v/v), a non-chlorinated solvent, as the mobile phase, and shows an excellent linearity (y = 0.9527x + 0.1241, R(2) = 0.9974) with absorptiometry. The mean relative errors among the three existing methods and the novel method, determined by adding fixed amounts of gamma-oryzanol into refined rice salad oil, were -4.7% for the absorptiometry, -6.8% for the existing normal-phase HPLC, +4.6% for the existing reversed-phase HPLC, and -1.6% for the novel reversed-phase HPLC method. gamma-Oryzanol contents in 12 kinds of crude rice bran oils obtained from different sources were determined by the four methods. The mean contents of those oils were 1.75+/-0.18% for the absorptiometry, 1.29+/-0.11% for the existing normal-phase HPLC, 1.51+/-0.10% for the existing reversed-phase HPLC, and 1.54+/-0.19% for the novel reversed-phase HPLC method.

  19. Method selection for sustainability assessments: The case of recovery of resources from waste water.

    PubMed

    Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L

    2017-07-15

    Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute to identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exist, due to the multi-dimensional character of the concept, and different methods provide different representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool are tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Development of wheelchair caster testing equipment and preliminary testing of caster models

    PubMed Central

    Mhatre, Anand; Ott, Joseph

    2017-01-01

    Background Because of the adverse environmental conditions present in less-resourced environments (LREs), the World Health Organization (WHO) has recommended that specialised wheelchair test methods may need to be developed to support product quality standards in these environments. A group of experts identified caster test methods as a high priority because of their common failure in LREs, and the insufficiency of existing test methods described in the International Organization for Standardization (ISO) Wheelchair Testing Standards (ISO 7176). Objectives To develop and demonstrate the feasibility of a caster system test method. Method Background literature and expert opinions were collected to identify existing caster test methods, caster failures common in LREs and environmental conditions present in LREs. Several conceptual designs for the caster testing method were developed, and through an iterative process using expert feedback, a final concept and a design were developed and a prototype was fabricated. Feasibility tests were conducted by testing a series of caster systems from wheelchairs used in LREs, and failure modes were recorded and compared to anecdotal reports about field failures. Results The new caster testing system was developed and it provides the flexibility to expose caster systems to typical conditions in LREs. Caster failures such as stem bolt fractures, fork fractures, bearing failures and tire cracking occurred during testing trials and are consistent with field failures. Conclusion The new caster test system has the capability to incorporate necessary test factors that degrade caster quality in LREs. Future work includes developing and validating a testing protocol that results in the failure modes common during wheelchair use in LREs. PMID:29062762
