Sample records for previously existing methods

  1. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    PubMed

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

    It is important to predict incipient faults in transformer oil accurately so that transformer maintenance can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since the combination of artificial neural network (ANN) and particle swarm optimisation (PSO) techniques has never been used in the previously reported work, this work proposes combining ANN with various PSO techniques to predict transformer incipient faults. The advantages of PSO are its simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement achieved. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct transformer fault-type identification than the existing diagnosis method and previously reported works.
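
    The record above describes training an ANN with PSO rather than gradient descent. Below is a minimal sketch of that general idea in Python: a global-best PSO searches the flattened weight vector of a tiny multilayer perceptron. The stand-in DGA features, network size, fitness function, and PSO coefficients are all invented for illustration and are not taken from the paper, which compares several PSO variants.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy stand-in for DGA features (5 gas ratios) and fault classes (3 types).
    X = rng.normal(size=(200, 5))
    y = rng.integers(0, 3, size=200)

    N_HID = 8
    DIM = 5 * N_HID + N_HID + N_HID * 3 + 3   # weights + biases of a 5-8-3 MLP

    def forward(w, X):
        """Unpack a flat weight vector into a 5-8-3 MLP and return class scores."""
        i = 0
        W1 = w[i:i + 5 * N_HID].reshape(5, N_HID); i += 5 * N_HID
        b1 = w[i:i + N_HID]; i += N_HID
        W2 = w[i:i + N_HID * 3].reshape(N_HID, 3); i += N_HID * 3
        b2 = w[i:i + 3]
        h = np.tanh(X @ W1 + b1)
        return h @ W2 + b2

    def loss(w):
        """Misclassification rate used as the PSO fitness (lower is better)."""
        return np.mean(np.argmax(forward(w, X), axis=1) != y)

    # Plain global-best PSO; the paper evaluates several PSO variants instead.
    n_particles, iters = 30, 200
    pos = rng.normal(size=(n_particles, DIM))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_f)].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        f = np.array([loss(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()

    print("training error of PSO-trained ANN:", loss(gbest))
    ```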

  2. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    PubMed Central

    2015-01-01

    It is important to predict incipient faults in transformer oil accurately so that transformer maintenance can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since the combination of artificial neural network (ANN) and particle swarm optimisation (PSO) techniques has never been used in the previously reported work, this work proposes combining ANN with various PSO techniques to predict transformer incipient faults. The advantages of PSO are its simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement achieved. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct transformer fault-type identification than the existing diagnosis method and previously reported works. PMID:26103634

  3. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    ERIC Educational Resources Information Center

    Howard, Steven J.; Melhuish, Edward

    2017-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years…

  4. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation.

    PubMed

    Jimeno-Yepes, Antonio J; McInnes, Bridget T; Aronson, Alan R

    2011-06-02

    Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions.
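
    The citation-screening step described in this abstract is simple enough to sketch directly: keep citations in which an ambiguous term co-occurs with exactly one of its candidate MeSH headings, and label the instance with the CUI linked to that heading. The toy data structures below stand in for the UMLS Metathesaurus and MEDLINE; all values are invented.

    ```python
    # Candidate senses: ambiguous term -> {MeSH heading: UMLS CUI} (toy values).
    senses = {
        "cold": {"Common Cold": "C0009443", "Cold Temperature": "C0009264"},
    }

    # Toy MEDLINE-like citations: text plus manually assigned MeSH headings.
    citations = [
        {"text": "... cold symptoms resolved ...", "mesh": {"Common Cold", "Rhinovirus"}},
        {"text": "... stored at cold temperature ...", "mesh": {"Cold Temperature"}},
        {"text": "... cold exposure and cold virus ...", "mesh": {"Common Cold", "Cold Temperature"}},
    ]

    def build_wsd_instances(term, candidates, citations):
        """Keep citations mentioning `term` whose MeSH indexing contains exactly
        one of the candidate headings; label with that heading's CUI."""
        instances = []
        for cit in citations:
            if term not in cit["text"].lower():
                continue
            hits = [h for h in candidates if h in cit["mesh"]]
            if len(hits) == 1:                      # unambiguous sense assignment
                instances.append((cit["text"], candidates[hits[0]]))
        return instances

    for term, candidates in senses.items():
        for text, cui in build_wsd_instances(term, candidates, citations):
            print(cui, text)
    ```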

  5. Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique

    NASA Astrophysics Data System (ADS)

    Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.

    2018-03-01

    Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.

  6. Exploiting MeSH indexing in MEDLINE to generate a data set for word sense disambiguation

    PubMed Central

    2011-01-01

    Background Evaluation of Word Sense Disambiguation (WSD) methods in the biomedical domain is difficult because the available resources are either too small or too focused on specific types of entities (e.g. diseases or genes). We present a method that can be used to automatically develop a WSD test collection using the Unified Medical Language System (UMLS) Metathesaurus and the manual MeSH indexing of MEDLINE. We demonstrate the use of this method by developing such a data set, called MSH WSD. Methods In our method, the Metathesaurus is first screened to identify ambiguous terms whose possible senses consist of two or more MeSH headings. We then use each ambiguous term and its corresponding MeSH heading to extract MEDLINE citations where the term and only one of the MeSH headings co-occur. The term found in the MEDLINE citation is automatically assigned the UMLS CUI linked to the MeSH heading. Each instance has been assigned a UMLS Concept Unique Identifier (CUI). We compare the characteristics of the MSH WSD data set to the previously existing NLM WSD data set. Results The resulting MSH WSD data set consists of 106 ambiguous abbreviations, 88 ambiguous terms and 9 which are a combination of both, for a total of 203 ambiguous entities. For each ambiguous term/abbreviation, the data set contains a maximum of 100 instances per sense obtained from MEDLINE. We evaluated the reliability of the MSH WSD data set using existing knowledge-based methods and compared their performance to that of the results previously obtained by these algorithms on the pre-existing data set, NLM WSD. We show that the knowledge-based methods achieve different results but keep their relative performance except for the Journal Descriptor Indexing (JDI) method, whose performance is below the other methods. Conclusions The MSH WSD data set allows the evaluation of WSD algorithms in the biomedical domain. Compared to previously existing data sets, MSH WSD contains a larger number of biomedical terms/abbreviations and covers the largest set of UMLS Semantic Types. Furthermore, the MSH WSD data set has been generated automatically reusing already existing annotations and, therefore, can be regenerated from subsequent UMLS versions. PMID:21635749

  7. Analysis of modal behavior at frequency cross-over

    NASA Astrophysics Data System (ADS)

    Costa, Robert N., Jr.

    1994-11-01

    The existence of the mode crossing condition is detected and analyzed in the Active Control of Space Structures Model 4 (ACOSS4). The condition is studied for its contribution to the inability of previous algorithms to successfully optimize the structure and converge to a feasible solution. A new algorithm is developed to detect and correct for mode crossings. The existence of the mode crossing condition is verified in ACOSS4 and found not to have appreciably affected the solution. The structure is then successfully optimized using new analytic methods based on modal expansion. An unrelated error in the optimization algorithm previously used is verified and corrected, thereby equipping the optimization algorithm with a second analytic method for eigenvector differentiation based on Nelson's Method. The second structure is the Control of Flexible Structures (COFS). The COFS structure is successfully reproduced and an initial eigenanalysis completed.

  8. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    PubMed Central

    Howard, Steven J.; Melhuish, Edward

    2016-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years Toolbox (EYT) offers substantial advantages for early assessment of language, EF, self-regulation, and social development. In the current study, results of our large-scale administration of this toolbox to 1,764 preschool and early primary school students indicated very good reliability, convergent validity with existing measures, and developmental sensitivity. Results were also suggestive of better capture of children’s emerging abilities relative to comparison measures. Preliminary norms are presented, showing a clear developmental trajectory across half-year age groups. The accessibility of the EYT, as well as its advantages over existing measures, offers considerably enhanced opportunities for objective measurement of young children’s abilities to enable research and educational applications. PMID:28503022

  9. An improvement of convergence of a dispersion-relation preserving method for the classical Boussinesq equation

    NASA Astrophysics Data System (ADS)

    Jang, T. S.

    2018-03-01

    A dispersion-relation preserving (DRP) method, as a semi-analytic iterative procedure, was proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating a nonlinear dispersive wave system because it preserves the dispersion relation; however, it still has potential flaws, e.g., a restriction on nonlinear wave amplitude and a small region of convergence (ROC). To remedy these flaws, a new DRP method is proposed in this paper, aimed at improving convergence performance. The improved method is proved to have convergence properties and a dispersion-relation preserving nature for small waves; unique existence of the solutions is also proved. In addition, by a numerical experiment, the method is confirmed to be good at observing nonlinear wave phenomena such as moving solitary waves and their binary collision with different wave amplitudes. In particular, it presents a ROC (much) wider than that of the previous method by Jang (2017). Moreover, it gives the numerical simulation of a high (or large-amplitude) nonlinear dispersive wave. In fact, it is demonstrated to simulate a large-amplitude solitary wave and the collision of two solitary waves with large amplitudes that we had failed to simulate with the previous method. Conclusively, better convergence results are achieved compared to Jang (2017); they represent a major improvement in practice over the previous method.

  10. Aligning Person-Centred Methods and Young People's Conceptualizations of Diversity

    ERIC Educational Resources Information Center

    Waite, Sue; Boyask, Ruth; Lawson, Hazel

    2010-01-01

    Many existing studies of diversity are concerned with social groups identified by externally determined factors, for example, ethnicity, gender, or educational attainment, and examine, either quantitatively or qualitatively, issues delineated by these. In evaluating methods used in previous research, we consider ways in which the adoption of…

  11. Prediction of heterotrimeric protein complexes by two-phase learning using neighboring kernels

    PubMed Central

    2014-01-01

    Background Protein complexes play important roles in biological systems such as gene regulatory networks and metabolic pathways. Most methods for predicting protein complexes try to find complexes of size greater than three. It is known, however, that smaller complexes account for a large proportion of all complexes in several species. In our previous work, we developed a method with several feature space mappings and the domain composition kernel for prediction of heterodimeric protein complexes, which outperforms existing methods. Results We propose methods for prediction of heterotrimeric protein complexes by extending techniques in the previous work on the basis of the idea that most heterotrimeric protein complexes are not likely to share the same protein with each other. We make use of the discriminant function in support vector machines (SVMs), and design novel feature space mappings for the second phase. As the second classifier, we examine SVMs and relevance vector machines (RVMs). We perform 10-fold cross-validation computational experiments. The results suggest that our proposed two-phase methods and SVM with the extended features outperform the existing method NWE, which was reported to outperform other existing methods such as MCL, MCODE, DPClus, CMC, COACH, RRW, and PPSampler for prediction of heterotrimeric protein complexes. Conclusions We propose two-phase prediction methods with the extended features, the domain composition kernel, SVMs and RVMs. The two-phase method with the extended features and the domain composition kernel using SVM as the second classifier is particularly useful for prediction of heterotrimeric protein complexes. PMID:24564744
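
    The two-phase structure described here can be illustrated with scikit-learn on synthetic data: a first-phase SVM is trained on base features, and its discriminant (decision-function) value is appended to an extended feature vector for a second-phase classifier. This is only a sketch of the idea; the paper's domain composition kernel, feature mappings, and RVM option are not reproduced, and all data below are invented.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)

    # Synthetic stand-in for candidate-complex feature vectors and labels.
    X = rng.normal(size=(400, 10))
    y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400) > 0.5).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Phase 1: SVM on the base features; its discriminant value becomes a feature.
    svm1 = SVC(kernel="rbf").fit(X_tr, y_tr)
    f_tr = svm1.decision_function(X_tr).reshape(-1, 1)
    f_te = svm1.decision_function(X_te).reshape(-1, 1)

    # Phase 2: second SVM on base features extended with the phase-1 discriminant.
    svm2 = SVC(kernel="rbf").fit(np.hstack([X_tr, f_tr]), y_tr)
    print("two-phase test accuracy:", svm2.score(np.hstack([X_te, f_te]), y_te))
    ```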

  12. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE PAGES

    Luce, Timothy C.

    2017-02-23

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  13. A simplified analytic form for generation of axisymmetric plasma boundaries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luce, Timothy C.

    An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. As a result, examples of generation of boundaries, including tests with an equilibrium solver, are given.

  14. Betweenness-Based Method to Identify Critical Transmission Sectors for Supply Chain Environmental Pressure Mitigation.

    PubMed

    Liang, Sai; Qu, Shen; Xu, Ming

    2016-02-02

    To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (a.k.a. the production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to those sectors that are important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using fewer upstream inputs to produce unitary output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths extracted from structural path analysis that pass through a particular sector. We take China as an example and find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, not obtainable with existing methods, into the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods for guiding sector-level environmental pressure mitigation strategies.
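
    A rough sketch of the path-based idea: in environmentally extended input-output analysis, total pressure f(I - A)^-1 y expands into a series of supply-chain paths, and structural path analysis assigns each path (i0, i1, ..., ik) the contribution f[i0] * A[i0,i1] * ... * A[ik-1,ik] * y[ik]. A sector's transmission score can then be accumulated over paths that pass through it as an interior node. The 3-sector economy, truncation order, and scoring rule below are invented; the paper's exact betweenness definition may differ.

    ```python
    from itertools import product
    import numpy as np

    # Toy 3-sector economy: direct requirements A, final demand y,
    # and direct environmental-pressure intensities f.
    A = np.array([[0.10, 0.20, 0.05],
                  [0.15, 0.05, 0.20],
                  [0.05, 0.10, 0.10]])
    y = np.array([100.0, 50.0, 80.0])
    f = np.array([0.5, 1.2, 0.3])

    MAX_ORDER = 4           # truncate the series expansion at this path length
    n = len(y)
    betweenness = np.zeros(n)

    # Structural path analysis: enumerate paths and credit interior sectors.
    for order in range(2, MAX_ORDER + 1):
        for path in product(range(n), repeat=order + 1):
            contrib = f[path[0]] * y[path[-1]]
            for a, b in zip(path, path[1:]):
                contrib *= A[a, b]
            for sector in set(path[1:-1]):   # interior (transmission) sectors
                betweenness[sector] += contrib

    print("transmission-sector scores:", betweenness)
    ```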

  15. Carbohydrate-Loading: A Safe and Effective Method of Improving Endurance Performance.

    ERIC Educational Resources Information Center

    Beeker, Richard T.; Israel, Richard G.

    Carbohydrate-loading prior to distance events is a common practice among endurance athletes. The purposes of this paper are to review previous research and to clarify misconceptions which may exist concerning carbohydrate-loading. The most effective method of carbohydrate-loading involves a training run of sufficient intensity and duration to…

  16. Arctic lead detection using a waveform mixture algorithm from CryoSat-2 data

    NASA Astrophysics Data System (ADS)

    Lee, Sanggyun; Kim, Hyun-cheol; Im, Jungho

    2018-05-01

    We propose a waveform mixture algorithm to detect leads from CryoSat-2 data, which is novel and different from the existing threshold-based lead detection methods. The waveform mixture algorithm adopts the concept of spectral mixture analysis, which is widely used in the field of hyperspectral image analysis. This lead detection method was evaluated with high-resolution (250 m) MODIS images and showed comparable and promising performance in detecting leads when compared to the previous methods. The robustness of the proposed approach also lies in the fact that it does not require the rescaling of parameters (i.e., stack standard deviation, stack skewness, stack kurtosis, pulse peakiness, and backscatter σ0), as it directly uses L1B waveform data, unlike the existing threshold-based methods. Monthly lead fraction maps were produced by the waveform mixture algorithm, which shows interannual variability of recent sea ice cover during 2011-2016, excluding the summer season (i.e., June to September). We also compared the lead fraction maps to other lead fraction maps generated from previously published data sets, resulting in similar spatiotemporal patterns.
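
    The spectral-mixture idea carried over from hyperspectral analysis can be sketched as linear unmixing: model an observed waveform as a nonnegative combination of "endmember" waveforms (specular lead return vs. diffuse sea-ice return) and classify by the dominant abundance. The endmember shapes, threshold, and use of nonnegative least squares below are illustrative assumptions, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    bins = np.arange(128)

    # Invented endmember waveforms: leads give a sharp specular peak,
    # sea ice a broader diffuse return.
    lead_em = np.exp(-0.5 * ((bins - 64) / 1.5) ** 2)
    ice_em = np.exp(-0.5 * ((bins - 64) / 12.0) ** 2)
    E = np.column_stack([lead_em, ice_em])          # endmember matrix

    def classify_waveform(w, threshold=0.7):
        """Unmix w into nonnegative endmember abundances; call it a lead if
        the lead abundance fraction exceeds `threshold`."""
        abund, _ = nnls(E, w)
        frac = abund[0] / (abund.sum() + 1e-12)
        return ("lead" if frac > threshold else "ice"), frac

    # A synthetic mostly-specular return should be classified as a lead.
    test = 0.9 * lead_em + 0.1 * ice_em + 0.01 * np.random.default_rng(2).random(128)
    print(classify_waveform(test))
    ```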

  17. Examination of economical methods for repairing highway landslides.

    DOT National Transportation Integrated Search

    2005-04-01

    The Kentucky Transportation Cabinet spends millions of dollars each year in the repairs of highway landslides. In previous research, an inventory of highway landslides showed that about 1440 landslides of various sizes exist on major roadways maintai...

  18. Existence of topological multi-string solutions in Abelian gauge field theories

    NASA Astrophysics Data System (ADS)

    Han, Jongmin; Sohn, Juhee

    2017-11-01

    In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.

  19. ASSESSING AND COMBINING RELIABILITY OF PROTEIN INTERACTION SOURCES

    PubMed Central

    LEACH, SONIA; GABOW, AARON; HUNTER, LAWRENCE; GOLDBERG, DEBRA S.

    2008-01-01

    Integrating diverse sources of interaction information to create protein networks requires strategies sensitive to differences in accuracy and coverage of each source. Previous integration approaches calculate reliabilities of protein interaction information sources based on congruity to a designated ‘gold standard.’ In this paper, we provide a comparison of the two most popular existing approaches and propose a novel alternative for assessing reliabilities which does not require a gold standard. We identify a new method for combining the resultant reliabilities and compare it against an existing method. Further, we propose an extrinsic approach to evaluation of reliability estimates, considering their influence on the downstream tasks of inferring protein function and learning regulatory networks from expression data. Results using this evaluation method show 1) our method for reliability estimation is an attractive alternative to those requiring a gold standard and 2) the new method for combining reliabilities is less sensitive to noise in reliability assignments than the similar existing technique. PMID:17990508

  20. Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1989-01-01

    An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.

  1. Combining existing numerical models with data assimilation using weighted least-squares finite element methods.

    PubMed

    Rajaraman, Prathish K; Manteuffel, T A; Belohlavek, M; Heys, Jeffrey J

    2017-01-01

    A new approach has been developed for combining and enhancing the results from an existing computational fluid dynamics model with experimental data using the weighted least-squares finite element method (WLSFEM). Development of the approach was motivated by the existence of both limited experimental blood velocity data in the left ventricle and inexact numerical models of the same flow. Limitations of the experimental data include measurement noise and having data only along a two-dimensional plane. Most numerical modeling approaches do not provide the flexibility to assimilate noisy experimental data. We previously developed an approach that could assimilate experimental data into the process of numerically solving the Navier-Stokes equations, but the approach was limited because it required the use of specific finite element methods for solving all model equations and did not support alternative numerical approximation methods. The new approach presented here allows virtually any numerical method to be used for approximately solving the Navier-Stokes equations, and then the WLSFEM is used to combine the experimental data with the numerical solution of the model equations in a final step. The approach dynamically adjusts the influence of the experimental data on the numerical solution so that more accurate data are more closely matched by the final solution and less accurate data are not closely matched. The new approach is demonstrated on different test problems and provides significantly reduced computational costs compared with many previous methods for data assimilation. Copyright © 2016 John Wiley & Sons, Ltd.
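
    Stripped of the finite-element machinery, the core idea is a weighted least-squares fusion: combine a model solution with sparse, noisy measurements by minimizing a weighted sum of misfits, where the weight encodes confidence in the data. The tiny 1-D linear-algebra sketch below is an assumption-laden analogue, not the paper's Navier-Stokes formulation; the field, observation operator H, and weight w are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 50
    x = np.linspace(0.0, 1.0, n)

    u_model = np.sin(np.pi * x)                     # inexact numerical solution
    u_true = np.sin(np.pi * x) + 0.2 * x            # unknown truth

    # Sparse, noisy observations on a few nodes (H selects observed nodes).
    obs_idx = np.array([5, 15, 25, 35, 45])
    H = np.eye(n)[obs_idx]
    d = u_true[obs_idx] + 0.02 * rng.normal(size=len(obs_idx))

    # Minimize ||u - u_model||^2 + w * ||H u - d||^2; w encodes data confidence.
    w = 25.0
    lhs = np.eye(n) + w * H.T @ H
    rhs = u_model + w * H.T @ d
    u_assim = np.linalg.solve(lhs, rhs)

    print("model error :", np.linalg.norm(u_model - u_true))
    print("assimilated :", np.linalg.norm(u_assim - u_true))
    ```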

  2. Improving Upon String Methods for Transition State Discovery.

    PubMed

    Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker

    2012-02-14

    Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.

  3. Identification of research hypotheses and new knowledge from scientific literature.

    PubMed

    Shardlow, Matthew; Batista-Navarro, Riza; Thompson, Paul; Nawaz, Raheel; McNaught, John; Ananiadou, Sophia

    2018-06-25

    Text mining (TM) methods have been used extensively to extract relations and events from the literature. In addition, TM techniques have been used to extract various types or dimensions of interpretative information, known as Meta-Knowledge (MK), from the context of relations and events, e.g. negation, speculation, certainty and knowledge type. However, most existing methods have focussed on the extraction of individual dimensions of MK, without investigating how they can be combined to obtain even richer contextual information. In this paper, we describe a novel, supervised method to extract new MK dimensions that encode Research Hypotheses (an author's intended knowledge gain) and New Knowledge (an author's findings). The method incorporates various features, including a combination of simple MK dimensions. We identify previously explored dimensions and then use a random forest to combine these with linguistic features into a classification model. To facilitate evaluation of the model, we have enriched two existing corpora annotated with relations and events, i.e., a subset of the GENIA-MK corpus and the EU-ADR corpus, by adding attributes to encode whether each relation or event corresponds to Research Hypothesis or New Knowledge. In the GENIA-MK corpus, these new attributes complement simpler MK dimensions that had previously been annotated. We show that our approach is able to assign different types of MK dimensions to relations and events with a high degree of accuracy. Firstly, our method is able to improve upon the previously reported state of the art performance for an existing dimension, i.e., Knowledge Type. Secondly, we also demonstrate high F1-score in predicting the new dimensions of Research Hypothesis (GENIA: 0.914, EU-ADR 0.802) and New Knowledge (GENIA: 0.829, EU-ADR 0.836). We have presented a novel approach for predicting New Knowledge and Research Hypothesis, which combines simple MK dimensions to achieve high F1-scores. The extraction of such information is valuable for a number of practical TM applications.

  4. New Internet search volume-based weighting method for integrating various environmental impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ji, Changyoon, E-mail: changyoon@yonsei.ac.kr; Hong, Taehoon, E-mail: hong7@yonsei.ac.kr

    Weighting is one of the steps in life cycle impact assessment that integrates various characterized environmental impacts as a single index. Weighting factors should be based on society's preferences. However, most previous studies consider only the opinion of some people. Thus, this research proposes a new weighting method that determines the weighting factors of environmental impact categories by considering public opinion on environmental impacts using the Internet search volumes for relevant terms. To validate the new weighting method, the weighting factors for six environmental impacts calculated by the new weighting method were compared with the existing weighting factors. The resulting Pearson's correlation coefficient between the new and existing weighting factors ranged from 0.8743 to 0.9889. It turned out that the new weighting method presents reasonable weighting factors. It also requires less time and lower cost compared to existing methods and likewise meets the main requirements of weighting methods such as simplicity, transparency, and reproducibility. The new weighting method is expected to be a good alternative for determining the weighting factor. - Highlights: • A new weighting method using Internet search volume is proposed in this research. • The new weighting method reflects public opinion using Internet search volume. • The correlation coefficient between new and existing weighting factors is over 0.87. • The new weighting method can present reasonable weighting factors. • The proposed method can be a good alternative for determining the weighting factors.
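
    The weighting step itself reduces to normalizing relative search volumes into factors and validating them against existing factors with Pearson's correlation. A minimal sketch follows; the impact categories, search volumes, and "existing" factors are all invented placeholder numbers.

    ```python
    import numpy as np

    impacts = ["global warming", "ozone depletion", "acidification",
               "eutrophication", "photochemical smog", "abiotic depletion"]

    # Invented relative Internet search volumes for terms tied to each impact.
    search_volume = np.array([880.0, 140.0, 95.0, 60.0, 110.0, 45.0])

    # New weighting factors: volumes normalized to sum to one.
    w_new = search_volume / search_volume.sum()

    # Invented "existing" panel-based weighting factors for comparison.
    w_existing = np.array([0.60, 0.12, 0.08, 0.05, 0.10, 0.05])

    r = np.corrcoef(w_new, w_existing)[0, 1]
    for name, w in zip(impacts, w_new):
        print(f"{name:20s} {w:.3f}")
    print("Pearson r vs existing factors:", round(r, 4))
    ```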

  5. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation which makes the floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. Then, we present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method makes explicit the amount of term cancellation caused by the existence of approximate linearly dependent relations among the input polynomials.

  6. A Method that Will Captivate U.

    PubMed

    Martin, Sophie; Coller, Jeff

    2015-09-03

    In an age of next-generation sequencing, the ability to purify RNA transcripts has become a critical issue. In this issue, Duffy et al. (2015) improve on a pre-existing technique of RNA labeling and purification by 4-thiouridine tagging. By increasing the efficiency of RNA capture, this method will enhance the ability to study RNA dynamics, especially for transcripts normally inefficiently captured by previous methods. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Solution and reasoning reuse in space planning and scheduling applications

    NASA Technical Reports Server (NTRS)

    Verfaillie, Gerard; Schiex, Thomas

    1994-01-01

    In the space domain, as in other domains, CSP (constraint satisfaction problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs which are composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods which allow a new solution to be rapidly found, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method, which has been developed on the basis of the dynamic backtracking algorithm. This method allows a previous solution and its reasoning to be reused in the framework of a CSP which is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.

  8. Conducting Slug Tests in Mini-Piezometers.

    PubMed

    Fritz, Bradley G; Mackley, Rob D; Arntzen, Evan V

    2016-03-01

    Slug tests performed using mini-piezometers with internal diameters as small as 0.43 cm can provide a cost-effective tool for hydraulic characterization. We evaluated the hydraulic properties of the apparatus in a laboratory environment and compared those results with field tests of mini-piezometers installed into locations with varying hydraulic properties. Based on our evaluation, slug tests conducted in mini-piezometers using the fabrication and installation approach described here are effective within formations where the hydraulic conductivity is less than 1 × 10^-3 cm/s. While these constraints limit the potential application of this method, the benefits of this approach are that the installation, measurement, and analysis are cost effective, and the installation can be completed in areas where other (larger diameter) methods might not be possible. Additionally, this methodology could be applied to existing mini-piezometers previously installed for other purposes. Such analysis of existing installations could be beneficial in interpreting previously collected data (e.g., water-quality data or hydraulic head data). © 2015, National Ground Water Association.

  9. A neural network based reputation bootstrapping approach for service selection

    NASA Astrophysics Data System (ADS)

    Wu, Quanwang; Zhu, Qingsheng; Li, Peng

    2015-10-01

    With the concept of service-oriented computing becoming widely accepted in enterprise application integration, more and more computing resources are encapsulated as services and published online. Reputation mechanisms have been studied to establish trust in previously unknown services. One of the limitations of current reputation mechanisms is that they cannot assess the reputation of newly deployed services, as no record of their previous behaviour exists. Most current bootstrapping approaches merely assign default reputation values to newcomers. However, such methods favour either newcomers or existing services. In this paper, we present a novel reputation bootstrapping approach, where correlations between features and performance of existing services are learned through an artificial neural network (ANN) and then generalised to establish a tentative reputation when evaluating new and unknown services. Reputations of services published previously by the same provider are also incorporated for reputation bootstrapping, if available. The proposed reputation bootstrapping approach is seamlessly embedded into an existing reputation model and implemented in the extended service-oriented architecture. Finally, empirical studies of the proposed approach are presented.
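
    A minimal sketch of the bootstrapping step: fit a small regressor from service features to observed reputation on existing services, then predict a tentative reputation for a newcomer with no interaction history. The feature set, data, and use of scikit-learn's MLPRegressor are illustrative assumptions, not the paper's architecture.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(4)

    # Invented features of existing services: [response_time, cost, provider_age].
    X = rng.random((300, 3))
    # Invented ground truth: faster, cheaper, older-provider services rate higher.
    reputation = (1.0 - 0.5 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]
                  + 0.05 * rng.normal(size=300))

    ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, reputation)

    # A newly published service with no interaction history gets a
    # tentative (bootstrapped) reputation from its observable features.
    newcomer = np.array([[0.2, 0.3, 0.8]])
    print("bootstrapped reputation:", ann.predict(newcomer)[0])
    ```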

  10. A classical Perron method for existence of smooth solutions to boundary value and obstacle problems for degenerate-elliptic operators via holomorphic maps

    NASA Astrophysics Data System (ADS)

    Feehan, Paul M. N.

    2017-09-01

    We prove existence of solutions to boundary value problems and obstacle problems for degenerate-elliptic, linear, second-order partial differential operators with partial Dirichlet boundary conditions using a new version of the Perron method. The elliptic operators considered have a degeneracy along a portion of the domain boundary which is similar to the degeneracy of a model linear operator identified by Daskalopoulos and Hamilton [9] in their study of the porous medium equation or the degeneracy of the Heston operator [21] in mathematical finance. Existence of a solution to the partial Dirichlet problem on a half-ball, where the operator becomes degenerate on the flat boundary and a Dirichlet condition is only imposed on the spherical boundary, provides the key additional ingredient required for our Perron method. Surprisingly, proving existence of a solution to this partial Dirichlet problem with 'mixed' boundary conditions on a half-ball is more challenging than one might expect. Due to the difficulty in developing a global Schauder estimate and due to compatibility conditions arising where the 'degenerate' and 'non-degenerate' boundaries touch, one cannot directly apply the continuity or approximate solution methods. However, in dimension two, there is a holomorphic map from the half-disk onto the infinite strip in the complex plane and one can extend this definition to higher dimensions to give a diffeomorphism from the half-ball onto the infinite 'slab'. The solution to the partial Dirichlet problem on the half-ball can thus be converted to a partial Dirichlet problem on the slab, albeit for an operator which now has exponentially growing coefficients. The required Schauder regularity theory and existence of a solution to the partial Dirichlet problem on the slab can nevertheless be obtained using previous work of the author and C. Pop [16]. Our Perron method relies on weak and strong maximum principles for degenerate-elliptic operators, concepts of continuous subsolutions and supersolutions for boundary value and obstacle problems for degenerate-elliptic operators, and maximum and comparison principle estimates previously developed by the author [13].

  11. Development in Children with Achondroplasia: A Prospective Clinical Cohort Study

    ERIC Educational Resources Information Center

    Ireland, Penelope J.; Donaghey, Samantha; McGill, James; Zankl, Andreas; Ware, Robert S.; Pacey, Verity; Ault, Jenny; Savarirayan, Ravi; Sillence, David; Thompson, Elizabeth; Townshend, Sharron; Johnston, Leanne M.

    2012-01-01

    Aim: Achondroplasia is characterized by delays in the development of communication and motor skills. While previously reported developmental profiles exist across gross motor, fine motor, feeding, and communication skills, there has been no prospective study of development across multiple areas simultaneously. Method: This Australasian…

  12. Joint histogram-based cost aggregation for stereo matching.

    PubMed

    Min, Dongbo; Lu, Jiangbo; Do, Minh N

    2013-10-01

    This paper presents a novel method for performing efficient cost aggregation in stereo matching. The cost aggregation problem is reformulated from the perspective of a histogram, giving us the potential to reduce the complexity of the cost aggregation in stereo matching significantly. Unlike previous methods, which have tried to reduce the complexity in terms of the size of an image and a matching window, our approach focuses on reducing the computational redundancy that exists across the search range, caused by repeated filtering for all the hypotheses. Moreover, we also reduce the complexity of the window-based filtering through an efficient sampling scheme inside the matching window. The tradeoff between accuracy and complexity is extensively investigated by varying the parameters used in the proposed method. Experimental results show that the proposed method provides high-quality disparity maps with low complexity and outperforms existing local methods. This paper also provides new insights into complexity-constrained stereo-matching algorithm design.

  13. Comparison of MRI-based estimates of articular cartilage contact area in the tibiofemoral joint.

    PubMed

    Henderson, Christopher E; Higginson, Jill S; Barrance, Peter J

    2011-01-01

    Knee osteoarthritis (OA) detrimentally impacts the lives of millions of older Americans through pain and decreased functional ability. Unfortunately, the pathomechanics and associated deviations from joint homeostasis that OA patients experience are not well understood. Alterations in mechanical stress in the knee joint may play an essential role in OA; however, existing literature in this area is limited. The purpose of this study was to evaluate the ability of an existing magnetic resonance imaging (MRI)-based modeling method to estimate articular cartilage contact area in vivo. Imaging data of both knees were collected on a single subject with no history of knee pathology at three knee flexion angles. Intra-observer reliability and sensitivity studies were also performed to determine the role of operator-influenced elements of the data processing on the results. The method's articular cartilage contact area estimates were compared with existing contact area estimates in the literature. The method demonstrated an intra-observer reliability of 0.95 when assessed using Pearson's correlation coefficient and was found to be most sensitive to changes in the cartilage tracings on the peripheries of the compartment. The articular cartilage contact area estimates at full extension were similar to those reported in the literature. The relationships between tibiofemoral articular cartilage contact area and knee flexion were also qualitatively and quantitatively similar to those previously reported. The MRI-based knee modeling method was found to have high intra-observer reliability, sensitivity to peripheral articular cartilage tracings, and agreeability with previous investigations when using data from a single healthy adult. Future studies will implement this modeling method to investigate the role that mechanical stress may play in progression of knee OA through estimation of articular cartilage contact area.

  14. Bifurcating fronts for the Taylor-Couette problem in infinite cylinders

    NASA Astrophysics Data System (ADS)

    Hărăguş-Courcelle, M.; Schneider, G.

    We show the existence of bifurcating fronts for the weakly unstable Taylor-Couette problem in an infinite cylinder. These fronts connect a stationary bifurcating pattern, here the Taylor vortices, with the trivial ground state, here the Couette flow. In order to show the existence result we improve a method which was already used in establishing the existence of bifurcating fronts for the Swift-Hohenberg equation by Collet and Eckmann, 1986, and by Eckmann and Wayne, 1991. The existence proof is based on spatial dynamics and center manifold theory. One of the difficulties in applying center manifold theory comes from an infinite number of eigenvalues on the imaginary axis for vanishing bifurcation parameter. But nevertheless, a finite dimensional reduction is possible, since the eigenvalues leave the imaginary axis with different velocities, if the bifurcation parameter is increased. In contrast to previous work we have to use normal-form methods and a non-standard cut-off function to obtain a center manifold which is large enough to contain the bifurcating fronts.

  15. Four applications of permutation methods to testing a single-mediator model.

    PubMed

    Taylor, Aaron B; MacKinnon, David P

    2012-09-01

    Four applications of permutation tests to the single-mediator model are described and evaluated in this study. Permutation tests work by rearranging data in many possible ways in order to estimate the sampling distribution for the test statistic. The four applications to mediation evaluated here are the permutation test of ab, the permutation joint significance test, and the noniterative and iterative permutation confidence intervals for ab. A Monte Carlo simulation study was used to compare these four tests with the four best available tests for mediation found in previous research: the joint significance test, the distribution of the product test, and the percentile and bias-corrected bootstrap tests. We compared the different methods on Type I error, power, and confidence interval coverage. The noniterative permutation confidence interval for ab was the best performer among the new methods. It successfully controlled Type I error, had power nearly as good as the most powerful existing methods, and had better coverage than any existing method. The iterative permutation confidence interval for ab had lower power than do some existing methods, but it performed better than any other method in terms of coverage. The permutation confidence interval methods are recommended when estimating a confidence interval is a primary concern. SPSS and SAS macros that estimate these confidence intervals are provided.
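
    One of the four applications, the permutation test of ab, can be sketched directly: estimate a (from M ~ X) and b (from Y ~ M + X), then build a null distribution for the product by permuting. The single permutation scheme used below (shuffling the mediator, which breaks both paths) and the simulated data are simplifying assumptions; the paper evaluates several variants and confidence-interval constructions.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200

    # Simulated single-mediator data: X -> M -> Y with modest effects.
    X = rng.normal(size=n)
    M = 0.4 * X + rng.normal(size=n)
    Y = 0.4 * M + rng.normal(size=n)

    def ab_estimate(X, M, Y):
        """a from M ~ X; b from Y ~ M + X; both via ordinary least squares."""
        a = np.linalg.lstsq(np.column_stack([np.ones(n), X]), M, rcond=None)[0][1]
        b = np.linalg.lstsq(np.column_stack([np.ones(n), X, M]), Y, rcond=None)[0][2]
        return a * b

    ab_obs = ab_estimate(X, M, Y)

    # Null distribution: shuffling M breaks both the X->M and M->Y paths.
    null = np.array([ab_estimate(X, rng.permutation(M), Y) for _ in range(2000)])
    p = np.mean(np.abs(null) >= np.abs(ab_obs))
    print(f"ab = {ab_obs:.3f}, permutation p = {p:.4f}")
    ```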

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soerensen, M.P.; Davidson, A.; Pedersen, N.F.

    We use the method of cell-to-cell mapping to locate attractors, basins, and saddle nodes in the phase plane of a driven Josephson junction. The cell-mapping method is discussed in some detail, emphasizing its ability to provide a global view of the phase plane. Our computations confirm the existence of a previously reported interior crisis. In addition, we observe a boundary crisis for a small shift in one parameter. The cell-mapping method allows us to show both crises explicitly in the phase plane, at low computational cost.
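
    The cell-mapping procedure itself is easy to sketch: tile the phase plane into cells, map each cell center forward one drive period, then follow the resulting cell sequences until they revisit a cell; the repeating cycle approximates an attractor and the cells leading into it form its basin. Below is a toy simple-cell-mapping sketch for an RCSJ-type driven junction, phi'' + alpha*phi' + sin(phi) = i0 + i1*sin(omega*t); all parameters and the grid are invented, and the record's specific junction model is not reproduced.

    ```python
    import numpy as np

    alpha, i0, i1, omega = 0.5, 0.3, 0.7, 0.6   # invented junction parameters
    T = 2 * np.pi / omega                       # one drive period

    def flow(state, t, steps=400):
        """Integrate the driven-junction equations over one period with RK4."""
        def rhs(s, t):
            phi, v = s
            return np.array([v, -alpha * v - np.sin(phi) + i0 + i1 * np.sin(omega * t)])
        s, dt = np.array(state, float), T / steps
        for _ in range(steps):
            k1 = rhs(s, t); k2 = rhs(s + 0.5 * dt * k1, t + 0.5 * dt)
            k3 = rhs(s + 0.5 * dt * k2, t + 0.5 * dt); k4 = rhs(s + dt * k3, t + dt)
            s += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4); t += dt
        return s

    # Tile the phase plane (phi mod 2*pi, v in [-3, 3]) into N x N cells.
    N = 40
    def cell_of(s):
        i = int((s[0] % (2 * np.pi)) / (2 * np.pi) * N) % N
        j = int((s[1] + 3.0) / 6.0 * N)
        return (i, j) if 0 <= j < N else None   # None = left the window ("sink")

    image = {}
    for i in range(N):
        for j in range(N):
            center = ((i + 0.5) * 2 * np.pi / N, -3.0 + (j + 0.5) * 6.0 / N)
            image[(i, j)] = cell_of(flow(center, 0.0))

    # Follow each cell's sequence until it repeats: the repeating cycle is an
    # attractor cell group; every cell on the way belongs to its basin.
    attractors = set()
    for start in image:
        seen, c = [], start
        while c is not None and c not in seen:
            seen.append(c); c = image[c]
        if c is not None:
            attractors.update(seen[seen.index(c):])
    print(f"{len(attractors)} cells in periodic (attractor) groups")
    ```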

  17. GPU-Q-J, a fast method for calculating root mean square deviation (RMSD) after optimal superposition

    PubMed Central

    2011-01-01

    Background Calculation of the root mean square deviation (RMSD) between the atomic coordinates of two optimally superposed structures is a basic component of structural comparison techniques. We describe a quaternion based method, GPU-Q-J, that is stable with single precision calculations and suitable for graphics processor units (GPUs). The application was implemented on an ATI 4770 graphics card in C/C++ and Brook+ in Linux where it was 260 to 760 times faster than existing unoptimized CPU methods. Source code is available from the Compbio website http://software.compbio.washington.edu/misc/downloads/st_gpu_fit/ or from the author LHH. Findings The Nutritious Rice for the World Project (NRW) on World Community Grid predicted de novo, the structures of over 62,000 small proteins and protein domains returning a total of 10 billion candidate structures. Clustering ensembles of structures on this scale requires calculation of large similarity matrices consisting of RMSDs between each pair of structures in the set. As a real-world test, we calculated the matrices for 6 different ensembles from NRW. The GPU method was 260 times faster than the fastest existing CPU based method and over 500 times faster than the method that had been previously used. Conclusions GPU-Q-J is a significant advance over previous CPU methods. It relieves a major bottleneck in the clustering of large numbers of structures for NRW. It also has applications in structure comparison methods that involve multiple superposition and RMSD determination steps, particularly when such methods are applied on a proteome and genome wide scale. PMID:21453553
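
    The quaternion superposition method underlying this record is standard and compact enough to sketch in plain NumPy (the paper's contribution is the GPU port, which is not reproduced here): center both coordinate sets, build Horn's 4x4 key matrix from the 3x3 correlation matrix, and compute the minimum RMSD from its largest eigenvalue.

    ```python
    import numpy as np

    def quaternion_rmsd(P, Q):
        """Minimum RMSD between N x 3 coordinate sets after optimal
        superposition, via the largest eigenvalue of Horn's key matrix."""
        P = P - P.mean(axis=0)
        Q = Q - Q.mean(axis=0)
        S = P.T @ Q                                   # 3x3 correlation matrix
        Sxx, Sxy, Sxz = S[0]; Syx, Syy, Syz = S[1]; Szx, Szy, Szz = S[2]
        K = np.array([
            [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
            [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
            [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
            [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
        lam_max = np.linalg.eigvalsh(K)[-1]           # eigvalsh sorts ascending
        gp, gq = (P ** 2).sum(), (Q ** 2).sum()
        return np.sqrt(max((gp + gq - 2.0 * lam_max) / len(P), 0.0))

    rng = np.random.default_rng(6)
    A = rng.normal(size=(100, 3))
    # A rigidly rotated copy must give RMSD ~ 0.
    theta = 0.7
    R = np.array([[np.cos(theta), -np.sin(theta), 0],
                  [np.sin(theta),  np.cos(theta), 0],
                  [0, 0, 1]])
    print("RMSD(A, R A):", quaternion_rmsd(A, A @ R.T))
    ```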

  18. Optical Sensors and Methods for Underwater 3D Reconstruction

    PubMed Central

    Massot-Campos, Miquel; Oliver-Codina, Gabriel

    2015-01-01

    This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389

  19. Robust rotational-velocity-Verlet integration methods.

    PubMed

    Rozmanov, Dmitri; Kusalik, Peter G

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  20. Robust rotational-velocity-Verlet integration methods

    NASA Astrophysics Data System (ADS)

    Rozmanov, Dmitri; Kusalik, Peter G.

    2010-05-01

    Two rotational integration algorithms for rigid-body dynamics are proposed in velocity-Verlet formulation. The first method uses quaternion dynamics and was derived from the original rotational leap-frog method by Svanberg [Mol. Phys. 92, 1085 (1997)]; it produces time consistent positions and momenta. The second method is also formulated in terms of quaternions but it is not quaternion specific and can be easily adapted for any other orientational representation. Both the methods are tested extensively and compared to existing rotational integrators. The proposed integrators demonstrated performance at least at the level of previously reported rotational algorithms. The choice of simulation parameters is also discussed.

  1. The Identification and Assessment of Late-Life ADHD in Memory Clinics

    ERIC Educational Resources Information Center

    Fischer, Barbara L.; Gunter-Hunt, Gail; Steinhafel, Courtney Holm; Howell, Timothy

    2012-01-01

    Objective: Little data exist about ADHD in late life. While evaluating patients' memory problems, the memory clinic staff has periodically identified ADHD in previously undiagnosed older adults. The authors conducted a survey to assess the extent to which other memory clinics view ADHD as a relevant clinical issue. Method: The authors developed…

  2. Defining Quality in Undergraduate Education: Directions for Future Research Informed by a Literature Review

    ERIC Educational Resources Information Center

    Bowers, Alison W.; Ranganathan, Shyam; Simmons, Denise R.

    2018-01-01

    Objectives: This research brief explores the literature addressing quality in undergraduate education to identify what previous research has said about quality and to offer future directions for research on quality in undergraduate education. Method: We conducted a scoping review to provide a broad overview of existing research. Using targeted…

  3. Heating and flooding: A unified approach for rapid generation of free energy surfaces

    NASA Astrophysics Data System (ADS)

    Chen, Ming; Cuendet, Michel A.; Tuckerman, Mark E.

    2012-07-01

    We propose a general framework for the efficient sampling of conformational equilibria in complex systems and the generation of associated free energy hypersurfaces in terms of a set of collective variables. The method is a strategic synthesis of the adiabatic free energy dynamics approach, previously introduced by us and others, and existing schemes using Gaussian-based adaptive bias potentials to disfavor previously visited regions. In addition, we suggest sampling the thermodynamic force instead of the probability density to reconstruct the free energy hypersurface. All these elements are combined into a robust extended phase-space formalism that can be easily incorporated into existing molecular dynamics packages. The unified scheme is shown to outperform both metadynamics and adiabatic free energy dynamics in generating two-dimensional free energy surfaces for several example cases including the alanine dipeptide in the gas and aqueous phases and the met-enkephalin oligopeptide. In addition, the method can efficiently generate higher dimensional free energy landscapes, which we demonstrate by calculating a four-dimensional surface in the Ramachandran angles of the gas-phase alanine tripeptide.
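
    One ingredient of this framework, the history-dependent Gaussian bias that disfavors previously visited regions, can be sketched in isolation (a metadynamics-style scheme, not the full extended-phase-space AFED synthesis of the paper). The double-well potential, overdamped Langevin dynamics, and all parameters below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Invented 1-D double-well "free energy" in a collective variable s.
    U = lambda s: (s ** 2 - 1.0) ** 2
    dU = lambda s: 4.0 * s * (s ** 2 - 1.0)

    height, width = 0.05, 0.2          # Gaussian hill parameters (invented)
    dt, gamma, beta = 1e-3, 1.0, 4.0   # overdamped Langevin parameters (invented)
    centers = []                       # centers of deposited Gaussians

    def dbias(s):
        """Gradient of the accumulated history-dependent Gaussian bias at s."""
        if not centers:
            return 0.0
        c = np.asarray(centers)
        return float(np.sum(-height * (s - c) / width ** 2
                            * np.exp(-0.5 * ((s - c) / width) ** 2)))

    s = -1.0
    for step in range(50_000):
        noise = np.sqrt(2.0 * dt / (beta * gamma)) * rng.normal()
        s += -dt / gamma * (dU(s) + dbias(s)) + noise
        if step % 250 == 0:            # periodically deposit a repulsive Gaussian
            centers.append(s)

    # The negative of the deposited bias approximates the free energy surface.
    grid = np.linspace(-2.0, 2.0, 9)
    c = np.asarray(centers)
    bias = height * np.exp(-0.5 * ((grid[:, None] - c) / width) ** 2).sum(axis=1)
    print(np.round(-(bias - bias.max()), 2))
    ```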

  4. Conducting Slug Tests in Mini-Piezometers: B.G. Fritz Ground Water xx, no. x: x-xx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fritz, Bradley G.; Mackley, Rob D.; Arntzen, Evan V.

    Slug tests performed using mini-piezometers with diameters as small as 0.43 cm can provide a cost effective tool for hydraulic characterization. We evaluated the hydraulic properties of the apparatus in an infinite hydraulic conductivity environment and compared those results with field tests of mini-piezometers installed into locations with varying hydraulic properties. Based on our evaluation, slug tests conducted in mini-piezometers using the fabrication and installation approach described here are effective within formations where the hydraulic conductivity is less than 1 × 10^-3 cm/s. While these constraints limit the potential application of this method, the benefits of this approach are that the installation, measurement and analysis are extremely cost effective, and the installation can be completed in areas where other (larger diameter) methods might not be possible. Additionally, this methodology could be applied to existing mini-piezometers previously installed for other purposes. Such analysis of existing installations could be beneficial in interpreting previously collected data (e.g. water quality data or hydraulic head data).

  5. Obtaining tight bounds on higher-order interferences with a 5-path interferometer

    NASA Astrophysics Data System (ADS)

    Kauten, Thomas; Keil, Robert; Kaufmann, Thomas; Pressl, Benedikt; Brukner, Časlav; Weihs, Gregor

    2017-03-01

    Within the established theoretical framework of quantum mechanics, interference always occurs between pairs of paths through an interferometer. Higher order interferences with multiple constituents are excluded by Born’s rule and can only exist in generalized probabilistic theories. Thus, high-precision experiments searching for such higher order interferences are a powerful method to distinguish between quantum mechanics and more general theories. Here, we perform such a test in an optical multi-path interferometer, which avoids crucial systematic errors, has access to the entire phase space and is more stable than previous experiments. Our results are in accordance with quantum mechanics and rule out the existence of higher order interference terms in optical interferometry to an extent that is more than four orders of magnitude smaller than the expected pairwise interference, refining previous bounds by two orders of magnitude.
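
    The quantity such experiments bound is the third-order Sorkin interference term: with I_S the intensity recorded when only the paths in S are open, kappa = I_ABC - I_AB - I_AC - I_BC + I_A + I_B + I_C, which Born's rule forces to zero. A small numeric check with arbitrary complex amplitudes (invented values, ideal noiseless detection assumed):

    ```python
    import numpy as np

    # Complex single-path amplitudes at one detector position (arbitrary values).
    amps = {"A": 1.0 + 0.2j, "B": 0.7 - 0.5j, "C": -0.3 + 0.9j}

    def intensity(open_paths):
        """Born's rule: intensity is |sum of open-path amplitudes|^2."""
        return abs(sum(amps[p] for p in open_paths)) ** 2

    # Third-order Sorkin interference term for three paths.
    kappa = (intensity("ABC")
             - intensity("AB") - intensity("AC") - intensity("BC")
             + intensity("A") + intensity("B") + intensity("C"))
    print("kappa =", kappa)   # exactly 0 under Born's rule, up to rounding
    ```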

  6. BRDF invariant stereo using light transport constancy.

    PubMed

    Wang, Liang; Yang, Ruigang; Davis, James E

    2007-09-01

    Nearly all existing methods for stereo reconstruction assume that scene reflectance is Lambertian and make use of brightness constancy as a matching invariant. We introduce a new invariant for stereo reconstruction called light transport constancy (LTC), which allows completely arbitrary scene reflectance (bidirectional reflectance distribution functions (BRDFs)). This invariant can be used to formulate a rank constraint on multiview stereo matching when the scene is observed by several lighting configurations in which only the lighting intensity varies. In addition, we show that this multiview constraint can be used with as few as two cameras and two lighting configurations. Unlike previous methods for BRDF invariant stereo, LTC does not require precisely configured or calibrated light sources or calibration objects in the scene. Importantly, the new constraint can be used to provide BRDF invariance to any existing stereo method whenever appropriate lighting variation is available.
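
    One way to picture the constancy constraint: if two pixels image the same scene point, their intensity vectors across lighting configurations that differ only in overall intensity should be proportional, so the stacked matrix is near rank one. A toy reading of that idea (not the paper's full multiview formulation):

    ```python
    import numpy as np

    def ltc_match_score(i_left, i_right):
        """Normalized rank-1 residual: near zero when intensity vectors are proportional."""
        m = np.vstack([i_left, i_right])   # 2 x L matrix over L lighting configs
        s = np.linalg.svd(m, compute_uv=False)
        return s[1] / (s[0] + 1e-12)

    # Two lighting configurations at 0.5x and 1.5x intensity: a true correspondence
    left = np.array([10.0, 30.0])
    right = np.array([5.0, 15.0])          # proportional -> score ~ 0
    print(ltc_match_score(left, right))
    ```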

  7. A New Method for Reconstructing Sea-Level and Deep-Sea-Temperature Variability over the Past 5.3 Million Years

    NASA Astrophysics Data System (ADS)

    Rohling, E. J.

    2014-12-01

    Ice volume (and hence sea level) and deep-sea temperature are key measures of global climate change. Sea level has been documented using several independent methods over the past 0.5 million years (Myr). Older periods, however, lack such independent validation; all existing records are related to deep-sea oxygen isotope (δ18O) data that are influenced by processes unrelated to sea level. For deep-sea temperature, only one continuous high-resolution (Mg/Ca-based) record exists, with related sea-level estimates, spanning the past 1.5 Myr. We have recently presented a novel sea-level reconstruction, with associated estimates of deep-sea temperature, which independently validates the previous 0-1.5 Myr reconstruction and extends it back to 5.3 Myr ago. A series of caveats applies to this new method, especially in the older part of the record, as is always the case with new methods. Independent validation exercises are needed to elucidate where consistency exists and where solutions drift away from each other. A key observation from our new method is that a large temporal offset existed during the onset of Plio-Pleistocene ice ages, between a marked cooling step at 2.73 Myr ago and the first major glaciation at 2.15 Myr ago. This observation relies on relative changes within the dataset, which are more robust than absolute values. I will discuss our method, its main caveats, and avenues for improvement.

  8. Implementation of an anonymisation tool for clinical trials using a clinical trial processor integrated with an existing trial patient data information system.

    PubMed

    Aryanto, Kadek Y E; Broekema, André; Oudkerk, Matthijs; van Ooijen, Peter M A

    2012-01-01

    To present an adapted Clinical Trial Processor (CTP) test set-up for receiving, anonymising and saving Digital Imaging and Communications in Medicine (DICOM) data using external input from the original database of an existing clinical study information system to guide the anonymisation process. Two methods are presented for an adapted CTP test set-up. In the first method, images are pushed from the Picture Archiving and Communication System (PACS) using the DICOM protocol through a local network. In the second method, images are transferred through the internet using the HTTPS protocol. In total 25,000 images from 50 patients were moved from the PACS, anonymised and stored within roughly 2 h using the first method. In the second method, an average of 10 images per minute were transferred and processed over a residential connection. In both methods, no duplicated images were stored when previous images were retransferred. The anonymised images are stored in appropriate directories. The CTP can transfer and process DICOM images correctly in a very easy set-up providing a fast, secure and stable environment. The adapted CTP allows easy integration into an environment in which patient data are already included in an existing information system.
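
    A minimal sketch of the anonymisation step using pydicom, with replacement values drawn from an external study database as in the paper's set-up (the tag list and the lookup function are illustrative, not the CTP configuration):

    ```python
    import pydicom

    def anonymise(path_in, path_out, lookup):
        """Replace identifying DICOM tags using values from an external trial database."""
        ds = pydicom.dcmread(path_in)
        trial_id = lookup(str(ds.PatientID))   # hypothetical mapping: hospital ID -> trial ID
        ds.PatientName = trial_id
        ds.PatientID = trial_id
        for keyword in ("PatientBirthDate", "PatientAddress", "OtherPatientIDs"):
            if keyword in ds:
                delattr(ds, keyword)           # drop optional identifying attributes
        ds.save_as(path_out)

    anonymise("image.dcm", "anon.dcm", lookup=lambda pid: "TRIAL-0042")
    ```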

  9. Sizing up arthropod genomes: an evaluation of the impact of environmental variation on genome size estimates by flow cytometry and the use of qPCR as a method of estimation.

    PubMed

    Gregory, T Ryan; Nathwani, Paula; Bonnett, Tiffany R; Huber, Dezene P W

    2013-09-01

    A study was undertaken to evaluate both a pre-existing method and a newly proposed approach for the estimation of nuclear genome sizes in arthropods. First, concerns regarding the reliability of the well-established method of flow cytometry, relating to impacts of rearing conditions on genome size estimates, were examined. Contrary to previous reports, a more carefully controlled test found negligible environmental effects on genome size estimates in the fly Drosophila melanogaster. Second, a more recently touted method based on quantitative real-time PCR (qPCR) was examined in terms of ease of use, efficiency, and (most importantly) accuracy using four test species: the flies Drosophila melanogaster and Musca domestica and the beetles Tribolium castaneum and Dendroctonus ponderosae. The results of this analysis demonstrated that qPCR tends to produce substantially different genome size estimates from other established techniques while also being far less efficient than existing methods.

  10. High temperature gas-cooled reactor (HTGR) graphite pebble fuel: Review of technologies for reprocessing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mcwilliams, A. J.

    2015-09-08

    This report reviews literature on reprocessing high temperature gas-cooled reactor graphite fuel components. A basic review of the various fuel components used in pebble bed type reactors is provided, along with a survey of synthesis methods for the fabrication of the fuel components. Several disposal options are considered for the graphite pebble fuel elements, including the storage of intact pebbles, volume reduction by separating the graphite from fuel kernels, and complete processing of the pebbles for waste storage. Existing methods for graphite removal are presented and generally consist of mechanical separation techniques, such as crushing and grinding, and chemical techniques, such as acid digestion and oxidation. Potential methods for reprocessing the graphite pebbles include improvements to existing methods and novel technologies that have not previously been investigated for nuclear graphite waste applications. The best overall method will depend on the desired final waste form and needs to factor in technical efficiency, political concerns, cost, and implementation.

  11. Enhancing a method for extracting social networks by relation existence

    NASA Astrophysics Data System (ADS)

    Elfida, Maria; Matyuso Nasution, M. K.; Sitompul, O. S.

    2018-01-01

    Obtaining trustworthy information about a social network extracted from the Web requires a reliable method, and optimal results demand a method that can overcome the complexity of the information resources. This paper reveals ways to overcome the constraints of social network extraction that lead to high complexity by identifying relationships among social actors. By changing the treatment of the procedure used, we obtain a complexity smaller than that of the previous procedure. This is also demonstrated in an experiment using the denial sample.

  12. The Filament Sensor for Near Real-Time Detection of Cytoskeletal Fiber Structures

    PubMed Central

    Eltzner, Benjamin; Wollnik, Carina; Gottschlich, Carsten; Huckemann, Stephan; Rehfeldt, Florian

    2015-01-01

    A reliable extraction of filament data from microscopic images is of high interest in the analysis of acto-myosin structures as early morphological markers in mechanically guided differentiation of human mesenchymal stem cells and the understanding of the underlying fiber arrangement processes. In this paper, we propose the filament sensor (FS), a fast and robust processing sequence which detects and records location, orientation, length, and width for each single filament of an image, and thus allows for the above described analysis. The extraction of these features has previously not been possible with existing methods. We evaluate the performance of the proposed FS in terms of accuracy and speed in comparison to three existing methods with respect to their limited output. Further, we provide a benchmark dataset of real cell images along with filaments manually marked by a human expert as well as simulated benchmark images. The FS clearly outperforms existing methods in terms of computational runtime and filament extraction accuracy. The implementation of the FS and the benchmark database are available as open source. PMID:25996921

  13. Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.

    PubMed

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2018-05-01

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  14. Models and theories of prescribing decisions: A review and suggested a new model.

    PubMed

    Murshid, Mohsen Ali; Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than theoretical approach. Therefore, this review is an attempt to suggest a valuable conceptual model that explains the theoretical linkages existing between marketing efforts, patient, pharmacist, and physician decisions to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, the report identifies and uses several valuable perspectives, such as the 'persuasion theory - elaboration likelihood model', the 'stimuli-response marketing model', the 'agency theory', the 'theory of planned behaviour', and the 'social power theory', in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research.

  15. Writing Integrative Reviews of the Literature: Methods and Purposes

    ERIC Educational Resources Information Center

    Torraco, Richard J.

    2016-01-01

    This article discusses the integrative review of the literature as a distinctive form of research that uses existing literature to create new knowledge. As an expansion and update of a previously published article on this topic, it acknowledges the growth and appeal of this form of research to scholars, it identifies the main components of the…

  16. Geometric isomers of sex pheromone components do not affect attractancy of Conopomorpha cramerella in cocoa plantations

    USDA-ARS?s Scientific Manuscript database

    Sex pheromone of cocoa pod borer (CPB), Conopomorpha cramerella, has previously been identified as a blend of (E,Z,Z)- and (E,E,Z)-4,6,10-hexadecatrienyl acetates and the corresponding alcohols. These pheromone components have been synthesized with modification of the existing method and relative at...

  17. Effectiveness of IMPACT:Ability to Improve Safety and Self-Advocacy Skills in Students with Disabilities--Follow-Up Study

    ERIC Educational Resources Information Center

    Dryden, Eileen M.; Desmarais, Jeffrey; Arsenault, Lisa

    2017-01-01

    Background: Research shows that individuals with disabilities are more likely to experience abuse than their peers without disabilities. Yet, few evidenced-based abuse prevention interventions exist. This study examines whether positive outcomes identified previously in an evaluation of IMPACT:Ability were maintained 1 year later. Methods: A…

  18. Overlapping illusions by transformation optics without any negative refraction material.

    PubMed

    Sun, Fei; He, Sailing

    2016-01-11

    A novel method to achieve an overlapping illusion without any negative refraction index material is introduced with the help of the optic-null medium (ONM) designed by an extremely stretching spatial transformation. Unlike the previous methods to achieve such an optical illusion by transformation optics (TO), our method can achieve a power combination and reshape the radiation pattern at the same time. Unlike the overlapping illusion with some negative refraction index material, our method is not sensitive to the loss of the materials. Other advantages over existing methods are discussed. Numerical simulations are given to verify the performance of the proposed devices.

  19. Reconstructed imaging of acoustic cloak using time-lapse reversal method

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Cheng, Ying; Xu, Jian-yi; Li, Bo; Liu, Xiao-jun

    2014-08-01

    We proposed and investigated a solution to the inverse acoustic cloak problem, an anti-stealth technology to make cloaks visible, using the time-lapse reversal (TLR) method. The TLR method reconstructs the image of an unknown acoustic cloak by utilizing scattered acoustic waves. Compared to previous anti-stealth methods, the TLR method can determine not only the existence of a cloak but also its exact geometric information like definite shape, size, and position. Here, we present the process for TLR reconstruction based on time reversal invariance. This technology may have potential applications in detecting various types of cloaks with different geometric parameters.

  20. Pancreaticoduodenectomy following gastrectomy reconstructed with Billroth II or Roux-en-Y method: Case series and literature review.

    PubMed

    Kawamoto, Yusuke; Ome, Yusuke; Kouda, Yusuke; Saga, Kennichi; Park, Taebum; Kawamoto, Kazuyuki

    2017-01-01

    The ideal reconstruction method for pancreaticoduodenectomy following a gastrectomy with Billroth II or Roux-en-Y reconstruction is unclear. We reviewed a series of seven pancreaticoduodenectomies performed after gastrectomy with the Billroth II or Roux-en-Y method. While preserving the existing gastrojejunostomy or esophagojejunostomy, pancreaticojejunostomy and hepaticojejunostomy were performed by the Roux-en-Y method using a new Roux limb in all cases. Four patients experienced postoperative complications, although the specific complications varied. A review of the literature revealed 13 cases of pancreaticoduodenectomy following gastrectomy with Billroth II or Roux-en-Y reconstruction. Three of six patients (50%) in whom the past afferent limb was used for the reconstruction of the pancreaticojejunostomy and hepaticojejunostomy experienced afferent loop syndrome, while 14 previous and current patients in whom a new jejunal limb was used did not experience this complication. The Roux-en-Y method, using the distal intestine of the previous gastrojejunostomy or jejunojejunostomy as a new jejunal limb for pancreaticojejunostomy and hepaticojejunostomy, may be a better reconstruction method to avoid the complication of afferent loop syndrome after previous gastrectomy with Billroth II or Roux-en-Y reconstruction if the afferent limb is less than 40 cm. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Pseudo-simple heteroclinic cycles in R4

    NASA Astrophysics Data System (ADS)

    Chossat, Pascal; Lohse, Alexander; Podvigina, Olga

    2018-06-01

    We study pseudo-simple heteroclinic cycles for a Γ-equivariant system in R4 with finite Γ ⊂ O(4) , and their nearby dynamics. In particular, in a first step towards a full classification - analogous to that which exists already for the class of simple cycles - we identify all finite subgroups of O(4) admitting pseudo-simple cycles. To this end we introduce a constructive method to build equivariant dynamical systems possessing a robust heteroclinic cycle. Extending a previous study we also investigate the existence of periodic orbits close to a pseudo-simple cycle, which depends on the symmetry groups of equilibria in the cycle. Moreover, we identify subgroups Γ ⊂ O(4) , Γ ⊄ SO(4) , admitting fragmentarily asymptotically stable pseudo-simple heteroclinic cycles. (It has been previously shown that for Γ ⊂ SO(4) pseudo-simple cycles generically are completely unstable.) Finally, we study a generalized heteroclinic cycle, which involves a pseudo-simple cycle as a subset.

  2. Cell-fusion method to visualize interphase nuclear pore formation.

    PubMed

    Maeshima, Kazuhiro; Funakoshi, Tomoko; Imamoto, Naoko

    2014-01-01

    In eukaryotic cells, the nucleus is a complex and sophisticated organelle that organizes genomic DNA to support essential cellular functions. The nuclear surface contains many nuclear pore complexes (NPCs), channels for macromolecular transport between the cytoplasm and nucleus. It is well known that the number of NPCs almost doubles during interphase in cycling cells. However, the mechanism of NPC formation is poorly understood, presumably because a practical system for analysis does not exist. The most difficult obstacle in the visualization of interphase NPC formation is that NPCs already exist after nuclear envelope formation, and these existing NPCs interfere with the observation of nascent NPCs. To overcome this obstacle, we developed a novel system using the cell-fusion technique (heterokaryon method), previously also used to analyze the shuttling of macromolecules between the cytoplasm and the nucleus, to visualize the newly synthesized interphase NPCs. In addition, we used a photobleaching approach that validated the cell-fusion method. We recently used these methods to demonstrate the role of cyclin-dependent protein kinases and of Pom121 in interphase NPC formation in cycling human cells. Here, we describe the details of the cell-fusion approach and compare the system with other NPC formation visualization methods. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Quantifying construction and demolition waste: an analytical review.

    PubMed

    Wu, Zezhou; Yu, Ann T W; Shen, Liyin; Liu, Guiwen

    2014-09-01

    Quantifying construction and demolition (C&D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In literature, various methods have been employed to quantify the C&D waste generation at both regional and project levels. However, an integrated review that systemically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria - waste generation activity, estimation level and quantification methodology. Six categories of existing C&D waste quantification methodologies are identified, including site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Automated web service composition supporting conditional branch structures

    NASA Astrophysics Data System (ADS)

    Wang, Pengwei; Ding, Zhijun; Jiang, Changjun; Zhou, Mengchu

    2014-01-01

    The creation of value-added services by automatic composition of existing ones is gaining significant momentum as the potential silver bullet in service-oriented architecture. However, service composition faces two difficulties. First, users' needs present such characteristics as diversity, uncertainty, and personalisation; second, the existing services run in a real-world environment that is highly complex and dynamically changing. These difficulties may cause the emergence of nondeterministic choices in the process of service composition, which goes beyond what existing automated service composition techniques can handle. In most existing methods, the process model of a composite service includes sequence constructs only. This article presents a method to introduce conditional branch structures into the process model of a composite service when needed, in order to satisfy users' diverse and personalised needs and adapt to the dynamic changes of the real-world environment. UML activity diagrams are used to represent dependencies in the composite service. Two types of user preferences, which were ignored by previous work, are considered in this article, and a simple programming-language-style expression is adopted to describe them. Two different algorithms are presented to deal with different situations. A real-life case is provided to illustrate the proposed concepts and methods.

  5. Transport Test Problems for Hybrid Methods Development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Miller, Erin A.; Wittman, Richard S.

    2011-12-28

    This report presents 9 test problems to guide testing and development of hybrid calculations for the ADVANTG code at ORNL. These test cases can be used for comparing different types of radiation transport calculations, as well as for guiding the development of variance reduction methods. Cases are drawn primarily from existing or previous calculations with a preference for cases which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22.

  6. Returns to Scale in the Production of Hospital Services

    PubMed Central

    Berry, Ralph E.

    1967-01-01

    The primary purpose of this article is to investigate whether or not economies of scale exist in the production of hospital services. In previous studies the results have implied the existence of economies of scale, but the question has not been satisfactorily resolved. The factor most responsible for clouding the issue is the overwhelming prevalence of product differences in the outputs of hospitals. In this study a method which avoids the problem of product differentiation is developed. The analysis strongly supports the conclusion that hospital services are produced subject to economies of scale. PMID:6054380

  7. The development of a primary dental care outreach course.

    PubMed

    Waterhouse, P; Maguire, A; Tabari, D; Hind, V; Lloyd, J

    2008-02-01

    The aim of this work was to develop the first north-east based primary dental care outreach (PDCO) course for clinical dental undergraduate students at Newcastle University. The process of course design is described; it involved a review of the existing Bachelor of Dental Surgery (BDS) degree course in relation to previously published learning outcomes. Areas were identified where the existing BDS course did not fully meet these outcomes. This was followed by setting the PDCO course aims and objectives, intended learning outcomes, curriculum and structure. The educational strategy and methods of teaching and learning were subsequently developed, together with a strategy for overall quality control of the teaching and learning experience. The newly developed curriculum was aligned with appropriate student assessment methods, including summative, formative and ipsative elements.

  8. Conformal mapping for multiple terminals

    PubMed Central

    Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao

    2016-01-01

    Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
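
    To illustrate the underlying idea on the two-terminal case the paper starts from: the map w = log((z - a)/(z - b)) sends the field of two point terminals to a strip in which equipotentials become straight lines (a minimal sketch, not the authors' multi-terminal construction):

    ```python
    import numpy as np

    a, b = -1.0 + 0j, 1.0 + 0j   # two point terminals in the z-plane

    def to_strip(z):
        """Conformal map for two terminals: equipotentials become Re(w) = const."""
        return np.log((z - a) / (z - b))

    z = np.array([0.0 + 0.5j, 0.0 - 0.5j])   # points symmetric about the real axis
    print(to_strip(z).real)                   # equal real parts: same equipotential
    ```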

  9. All-Versus-Nothing Proof of Einstein-Podolsky-Rosen Steering

    PubMed Central

    Chen, Jing-Ling; Ye, Xiang-Jun; Wu, Chunfeng; Su, Hong-Yi; Cabello, Adán; Kwek, L. C.; Oh, C. H.

    2013-01-01

    Einstein-Podolsky-Rosen steering is a form of quantum nonlocality intermediate between entanglement and Bell nonlocality. Although Schrödinger already mooted the idea in 1935, steering still defies a complete understanding. In analogy to “all-versus-nothing” proofs of Bell nonlocality, here we present a proof of steering without inequalities rendering the detection of correlations leading to a violation of steering inequalities unnecessary. We show that, given any two-qubit entangled state, the existence of certain projective measurement by Alice so that Bob's normalized conditional states can be regarded as two different pure states provides a criterion for Alice-to-Bob steerability. A steering inequality equivalent to the all-versus-nothing proof is also obtained. Our result clearly demonstrates that there exist many quantum states which do not violate any previously known steering inequality but are indeed steerable. Our method offers advantages over the existing methods for experimentally testing steerability, and sheds new light on the asymmetric steering problem. PMID:23828242

  10. Novel and efficient tag SNPs selection algorithms.

    PubMed

    Chen, Wen-Pei; Hung, Che-Lun; Tsai, Suh-Jen Jane; Lin, Yaw-Ling

    2014-01-01

    SNPs are the most abundant form of genetic variation among species; association studies between complex diseases and SNPs or haplotypes have received great attention. However, these studies are restricted by the cost of genotyping all SNPs; thus, it is necessary to find smaller subsets, or tag SNPs, representing the rest of the SNPs. In fact, the existing tag SNP selection algorithms are notoriously time-consuming. An efficient algorithm for tag SNP selection is presented and applied to analyze the HapMap YRI data. The experimental results show that the proposed algorithm achieves better performance than the existing tag SNP selection algorithms; in most cases, it is at least ten times faster than existing methods. In many cases, when the redundant ratio of the block is high, the proposed algorithm can even be thousands of times faster than previously known methods. Tools and web services for haplotype block analysis, integrated via the Hadoop MapReduce framework, are also developed using the proposed algorithm as the computation kernel.
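
    The flavor of tag SNP selection can be conveyed with a toy greedy set-cover pass over a pairwise LD matrix (the r² values and the 0.8 threshold are illustrative; this is not the authors' algorithm):

    ```python
    import numpy as np

    def greedy_tag_snps(r2, threshold=0.8):
        """Pick tag SNPs so that every SNP is in LD (r2 >= threshold) with some tag."""
        n = r2.shape[0]
        uncovered, tags = set(range(n)), []
        while uncovered:
            # choose the SNP that covers the most still-uncovered SNPs
            best = max(range(n), key=lambda s: sum(r2[s, t] >= threshold for t in uncovered))
            tags.append(best)
            uncovered -= {t for t in uncovered if r2[best, t] >= threshold}
        return tags

    r2 = np.array([[1.0, 0.9, 0.2],
                   [0.9, 1.0, 0.3],
                   [0.2, 0.3, 1.0]])
    print(greedy_tag_snps(r2))   # [0, 2]: SNP 0 tags SNP 1; SNP 2 tags itself
    ```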

  11. Validation of a new ELISA method for in vitro potency testing of hepatitis A vaccines.

    PubMed

    Morgeaux, S; Variot, P; Daas, A; Costanzo, A

    2013-01-01

    The goal of the project was to standardise a new in vitro method as a replacement for the existing standard method for the determination of hepatitis A virus antigen content in hepatitis A vaccines (HAV) marketed in Europe. This became necessary due to issues with the previously used method, which required the use of commercial test kits. The selected candidate method, not based on commercial kits, had already been used for many years by an Official Medicines Control Laboratory (OMCL) for routine testing and batch release of HAV. After a pre-qualification phase (Phase 1) that showed the suitability of the commercially available critical ELISA reagents for the determination of antigen content in marketed HAV present on the European market, an international collaborative study (Phase 2) was carried out in order to fully validate the method. Eleven laboratories took part in the collaborative study. They performed assays with the candidate standard method and, in parallel, for comparison purposes, with their own in-house validated methods where these were available. The study demonstrated that the new assay provides a more reliable and reproducible method when compared to the existing standard method. A good correlation of the candidate standard method with the in vivo immunogenicity assay in mice was shown previously for both potent and sub-potent (stressed) vaccines. Thus, the new standard method validated during the collaborative study may be readily implemented by manufacturers and OMCLs for routine batch release, but also for in-process control or consistency testing. The new method was approved in October 2012 by Group of Experts 15 of the European Pharmacopoeia (Ph. Eur.) as the standard method for in vitro potency testing of HAV. The relevant texts will be revised accordingly. Critical reagents such as the coating reagent and detection antibodies have been adopted by the Ph. Eur. Commission and are available from the EDQM as Ph. Eur. Biological Reference Reagents (BRRs).

  12. Utility-preserving anonymization for health data publishing.

    PubMed

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

    Publishing raw electronic health records (EHRs) may be considered a breach of the privacy of individuals because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing, and thus satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and thus various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. Across all quality metrics, our proposed method shows lower information loss than the existing method. In real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. In summary, we propose a new utility-preserving anonymization method and an anonymization algorithm using it. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of data anonymized by previous approaches.
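
    For reference, the privacy model that the generalization targets can be audited directly; a minimal k-anonymity check over quasi-identifiers with pandas (column names and records are invented):

    ```python
    import pandas as pd

    def is_k_anonymous(df, quasi_identifiers, k):
        """True if every combination of quasi-identifier values occurs at least k times."""
        return int(df.groupby(quasi_identifiers).size().min()) >= k

    ehr = pd.DataFrame({
        "age_range": ["30-39", "30-39", "40-49", "40-49"],
        "zip3":      ["123**", "123**", "456**", "456**"],
        "diagnosis": ["flu", "asthma", "flu", "diabetes"],   # sensitive attribute
    })
    print(is_k_anonymous(ehr, ["age_range", "zip3"], k=2))   # True
    ```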

  13. Determining a carbohydrate profile for Hansenula polymorpha

    NASA Technical Reports Server (NTRS)

    Petersen, G. R.

    1985-01-01

    The determination of the levels of carbohydrates in the yeast Hansenula polymorpha required the development of new analytical procedures. Existing fractionation and analytical methods were adapted to deal with the problems involved with the lysis of whole cells. Using these new procedures, the complete carbohydrate profiles of H. polymorpha and selected mutant strains were determined and shown to correlate favourably with previously published results.

  14. The Impact of a Therapy Dog Program on Children's Reading Skills and Attitudes toward Reading

    ERIC Educational Resources Information Center

    Kirnan, Jean; Siminerio, Steven; Wong, Zachary

    2016-01-01

    An existing school program in which therapy dogs are integrated into the reading curriculum was analyzed to determine the effect on student reading. Previous literature suggests an improvement in both reading skills and attitudes towards reading when students read in the presence of a therapy dog. Using a mixed method model, the researchers…

  15. Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao

    Previous studies of ocular chromatic aberrations have concentrated on the chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. This insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? and (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by an interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF, and the entrance pupil of the eye. In the retinal image after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine the ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) a simple relationship exists between the two major chromatic aberrations and the location of the pupil; (2) the ocular CDM is measurable and varies among individuals; (3) all existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; and (4) ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.

  16. Environmental stress cracking of polymers

    NASA Technical Reports Server (NTRS)

    Mahan, K. I.

    1980-01-01

    A two-point bending method for studying the environmental stress cracking and crazing phenomena is described and demonstrated for a variety of polymer/solvent systems. Critical strain values obtained from these curves are reported for various polymer/solvent systems, including a considerable number of systems for which critical strain values have not previously been reported. Polymers studied using this technique include polycarbonate (PC), ABS, high impact styrene (HIS), polyphenylene oxide (PPO), and polymethyl methacrylate (PMMA). Critical strain values obtained using this method compared favorably with existing data. The major advantage of the technique is the ability to obtain time vs. strain curves over a short period of time. The data obtained suggest that, over a short period of time, the transition in most of the polymer/solvent systems is more gradual than previously believed.

  17. 3D motion and strain estimation of the heart: initial clinical findings

    NASA Astrophysics Data System (ADS)

    Barbosa, Daniel; Hristova, Krassimira; Loeckx, Dirk; Rademakers, Frank; Claus, Piet; D'hooge, Jan

    2010-03-01

    The quantitative assessment of regional myocardial function remains an important goal in clinical cardiology. As such, tissue Doppler imaging and speckle tracking based methods have been introduced to estimate local myocardial strain. Recently, volumetric ultrasound has become more readily available, allowing the 3D estimation of motion and myocardial deformation. Our lab has previously presented a method based on spatio-temporal elastic registration of ultrasound volumes to estimate myocardial motion and deformation in 3D, overcoming the spatial limitations of existing methods. This method was optimized on simulated data sets in previous work and is currently being tested in a clinical setting. In this manuscript, 10 healthy volunteers, 10 patients with myocardial infarction and 10 patients with arterial hypertension were included. The cardiac strain values extracted with the proposed method were compared with those estimated with 1D tissue Doppler imaging and 2D speckle tracking in all patient groups. Although the absolute values of the 3D strain components assessed by this new methodology were not identical to those of the reference methods, the relationship between the different patient groups was similar.

  18. An improved, robust, axial line singularity method for bodies of revolution

    NASA Technical Reports Server (NTRS)

    Hemsch, Michael J.

    1989-01-01

    The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for the numerical solution of the governing equations; this technique is easily retrofitted to existing codes and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.

  19. A unified framework for unraveling the functional interaction structure of a biomolecular network based on stimulus-response experimental data.

    PubMed

    Cho, Kwang-Hyun; Choo, Sang-Mok; Wellstead, Peter; Wolkenhauer, Olaf

    2005-08-15

    We propose a unified framework for the identification of functional interaction structures of biomolecular networks in a way that leads to a new experimental design procedure. In developing our approach, we have built upon previous work. Thus we begin by pointing out some of the restrictions associated with existing structure identification methods and point out how these restrictions may be eased. In particular, existing methods use specific forms of experimental algebraic equations with which to identify the functional interaction structure of a biomolecular network. In our work, we employ an extended form of these experimental algebraic equations which, while retaining their merits, also overcome some of their disadvantages. Experimental data are required in order to estimate the coefficients of the experimental algebraic equation set associated with the structure identification task. However, experimentalists are rarely provided with guidance on which parameters to perturb, and to what extent, to perturb them. When a model of network dynamics is required then there is also the vexed question of sample rate and sample time selection to be resolved. Supplying some answers to these questions is the main motivation of this paper. The approach is based on stationary and/or temporal data obtained from parameter perturbations, and unifies the previous approaches of Kholodenko et al. (PNAS 99 (2002) 12841-12846) and Sontag et al. (Bioinformatics 20 (2004) 1877-1886). By way of demonstration, we apply our unified approach to a network model which cannot be properly identified by existing methods. Finally, we propose an experiment design methodology, which is not limited by the amount of parameter perturbations, and illustrate its use with an in numero example.

  20. The effectiveness of ground-penetrating radar surveys in the location of unmarked burial sites in modern cemeteries

    NASA Astrophysics Data System (ADS)

    Fiedler, Sabine; Illich, Bernhard; Berger, Jochen; Graw, Matthias

    2009-07-01

    Ground-penetrating radar (GPR) is a geophysical method that is commonly used in archaeological and forensic investigations, including the determination of the exact location of graves. Whilst the method is rapid and does not involve disturbance of the graves, the interpretation of GPR profiles is nevertheless difficult and often leads to incorrect results. Incorrect identifications could hinder criminal investigations and complicate burials in cemeteries that have no information on the location of previously existing graves. In order to increase the number of unmarked graves that are identified, the GPR results need to be verified by comparing them with the soil and vegetation properties of the sites examined. We used a modern cemetery to assess the results obtained with GPR, which we then compared with previously obtained tachymetric data and with an excavation of the graves where doubt existed. Certain soil conditions tended to make the application of GPR difficult on occasion, but a rough estimation of the location of the graves was always possible. The two different methods, GPR survey and tachymetry, both proved suitable for correctly determining the exact location of the majority of graves. The present study thus shows that GPR is a reliable method for determining the exact location of unmarked graves in modern cemeteries. However, the method did not allow statements to be made on the stage of decay of the bodies. Such information would assist in deciding what should be done with graves where ineffective degradation creates a problem for reusing graves after the standard resting time of 25 years.

  1. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    NASA Astrophysics Data System (ADS)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve the reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
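
    As background for the kernel design, the classic HYPR denoising step constrains a noisy time frame by a high-count composite image; a minimal 2D sketch with a uniform filter standing in for the low-pass kernel F:

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def hypr_denoise(frame, composite, size=5):
        """HYPR: I = composite * (F*frame) / (F*composite), with F a low-pass filter."""
        eps = 1e-12
        weight = uniform_filter(frame, size) / (uniform_filter(composite, size) + eps)
        return composite * weight

    rng = np.random.default_rng(0)
    composite = rng.poisson(100.0, size=(64, 64)).astype(float)  # high-count composite
    frame = rng.poisson(composite / 20.0).astype(float)          # noisy low-count frame
    denoised = hypr_denoise(frame, composite)
    ```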

  2. Semiparametric methods to contrast gap time survival functions: Application to repeat kidney transplantation.

    PubMed

    Shu, Xu; Schaubel, Douglas E

    2016-06-01

    Times between successive events (i.e., gap times) are of great importance in survival analysis. Although many methods exist for estimating covariate effects on gap times, very few existing methods allow for comparisons between gap times themselves. Motivated by the comparison of primary and repeat transplantation, our interest is specifically in contrasting the gap time survival functions and their integration (restricted mean gap time). Two major challenges in gap time analysis are non-identifiability of the marginal distributions and the existence of dependent censoring (for all but the first gap time). We use Cox regression to estimate the (conditional) survival distributions of each gap time (given the previous gap times). Combining fitted survival functions based on those models, along with multiple imputation applied to censored gap times, we then contrast the first and second gap times with respect to average survival and restricted mean lifetime. Large-sample properties are derived, with simulation studies carried out to evaluate finite-sample performance. We apply the proposed methods to kidney transplant data obtained from a national organ transplant registry. Mean 10-year graft survival of the primary transplant is significantly greater than that of the repeat transplant, by 3.9 months (p=0.023), a result that may lack clinical importance. © 2015, The International Biometric Society.
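
    The contrast of interest, the restricted mean gap time, is just the area under a survival curve up to a horizon; a minimal sketch from a Kaplan-Meier step function (the survival values are invented, and the paper's Cox and multiple-imputation machinery is omitted):

    ```python
    import numpy as np

    def restricted_mean(times, survival, tau):
        """Area under a right-continuous Kaplan-Meier curve S(t) from 0 to tau."""
        keep = times <= tau
        t = np.concatenate(([0.0], times[keep], [tau]))
        s = np.concatenate(([1.0], survival[keep]))   # S(t) holds on [t_i, t_{i+1})
        return float(np.sum(np.diff(t) * s))

    times = np.array([1.0, 2.5, 4.0, 7.0])    # distinct event times (years)
    surv = np.array([0.9, 0.8, 0.65, 0.5])    # KM estimate just after each event
    print(restricted_mean(times, surv, tau=10.0))   # restricted mean over 10 years
    ```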

  3. Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions

    PubMed Central

    Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi

    2015-01-01

    In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Extending the existing research and improving upon previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on the new definitions, we propose a combinational reasoning method to compute the topological relations between simple fuzzy regions. This study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss several examples to demonstrate the validity of the new method; through comparisons with existing fuzzy models, we show that the proposed method can compute more relations than the existing models, as it is more expressive. PMID:25775452

  4. Metal artifact reduction for CT-based luggage screening.

    PubMed

    Karimi, Seemeen; Martz, Harry; Cosman, Pamela

    2015-01-01

    In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Though metal artifact reduction (MAR) methods exist, mainly in the medical imaging literature, they require knowledge of the materials in the scan or are outlier rejection methods. Our purpose is to improve and evaluate a MAR method we previously introduced, which does not require knowledge of the materials in the scan and gives good results on data with large quantities and different kinds of metal. We describe in detail an optimization which de-emphasizes metal projections and includes a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm on luggage data containing multiple and large metal objects. We define measures of artifact reduction and compare this method against others in the MAR literature. Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. Our MAR method outperforms the methods with which we compared it. Our approach does not make assumptions about image content, nor does it discard metal projections.

  5. Two types of modes in finite size one-dimensional coaxial photonic crystals: General rules and experimental evidence

    NASA Astrophysics Data System (ADS)

    El Boudouti, E. H.; El Hassouani, Y.; Djafari-Rouhani, B.; Aynaou, H.

    2007-08-01

    We demonstrate analytically and experimentally the existence and behavior of two types of modes in finite size one-dimensional coaxial photonic crystals made of N cells with vanishing magnetic field on both sides. We highlight the existence of N-1 confined modes in each band and one mode per gap associated with either one or the other of the two surfaces surrounding the structure. The latter modes are independent of N. These results generalize our previous findings on the existence of surface modes in two semi-infinite superlattices obtained from the cleavage of an infinite superlattice between two cells. The analytical results are obtained by means of the Green’s function method, whereas the experiments are carried out using coaxial cables in the radio-frequency regime.

  6. Adaptation of Decoy Fusion Strategy for Existing Multi-Stage Search Workflows

    NASA Astrophysics Data System (ADS)

    Ivanov, Mark V.; Levitsky, Lev I.; Gorshkov, Mikhail V.

    2016-09-01

    A number of proteomic database search engines implement multi-stage strategies aimed at increasing the sensitivity of proteome analysis. These approaches often employ a subset of the original database for the secondary stage of analysis. However, if the target-decoy approach (TDA) is used for false discovery rate (FDR) estimation, multi-stage strategies may violate the underlying assumption of TDA that false matches are distributed uniformly across the target and decoy databases. This violation occurs if the numbers of target and decoy proteins selected for the second search are not equal. Here, we propose a method of decoy database generation based on the previously reported decoy fusion strategy. This method allows unbiased TDA-based FDR estimation in multi-stage searches and can be easily integrated into existing workflows utilizing popular search engines and post-search algorithms.
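
    The TDA estimate that the strategy protects is simple to state; a minimal sketch of score-threshold FDR estimation, under the usual assumption that decoy matches proxy false target matches:

    ```python
    def tda_fdr(psms, threshold):
        """Estimate FDR at a score threshold: decoys above it proxy false targets."""
        targets = sum(1 for score, is_decoy in psms if score >= threshold and not is_decoy)
        decoys = sum(1 for score, is_decoy in psms if score >= threshold and is_decoy)
        return decoys / targets if targets else 0.0

    # (score, is_decoy) pairs for peptide-spectrum matches; values are illustrative
    psms = [(52.1, False), (48.7, False), (47.0, True), (45.2, False), (30.5, True)]
    print(tda_fdr(psms, threshold=40.0))   # 1 decoy / 3 targets ~ 0.33
    ```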

  7. Oral aniracetam treatment in C57BL/6J mice without pre-existing cognitive dysfunction reveals no changes in learning, memory, anxiety or stereotypy

    PubMed Central

    Reynolds, Conner D.; Jefferson, Taylor S.; Volquardsen, Meagan; Pandian, Ashvini; Smith, Gregory D.; Holley, Andrew J.; Lugo, Joaquin N.

    2017-01-01

    Background: The piracetam analog, aniracetam, has recently received attention for its cognition enhancing potential, with minimal reported side effects.  Previous studies report the drug to be effective in both human and non-human models with pre-existing cognitive dysfunction, but few studies have evaluated its efficacy in healthy subjects. A previous study performed in our laboratory found no cognitive enhancing effects of oral aniracetam administration 1-hour prior to behavioral testing in naïve C57BL/6J mice. Methods: The current study aims to further evaluate this drug by administration of aniracetam 30 minutes prior to testing in order to optimize any cognitive enhancing effects. In this study, all naïve C57BL/6J mice were tested in tasks of delayed fear conditioning, novel object recognition, rotarod, open field, elevated plus maze, and marble burying. Results: Across all tasks, animals in the treatment group failed to show enhanced learning when compared to controls. Conclusions: These results provide further evidence suggesting that aniracetam conveys no therapeutic benefit to subjects without pre-existing cognitive dysfunction. PMID:29946420

  8. Models and theories of prescribing decisions: A review and suggested a new model

    PubMed Central

    Mohaidin, Zurina

    2017-01-01

    To date, research on the prescribing decisions of physicians lacks sound theoretical foundations. In fact, drug prescribing by doctors is a complex phenomenon influenced by various factors. Most of the existing studies in the area of drug prescription explain the process of decision-making by physicians via an exploratory rather than theoretical approach. Therefore, this review is an attempt to suggest a valuable conceptual model that explains the theoretical linkages existing between marketing efforts, patient, pharmacist, and physician decisions to prescribe drugs. The paper follows an inclusive review approach and applies previous theoretical models of prescribing behaviour to identify the relational factors. More specifically, the report identifies and uses several valuable perspectives, such as the ‘persuasion theory – elaboration likelihood model’, the ‘stimuli–response marketing model’, the ‘agency theory’, the ‘theory of planned behaviour’, and the ‘social power theory’, in developing an innovative conceptual paradigm. Based on the combination of existing methods and previous models, this paper suggests a new conceptual model of the physician decision-making process. This unique model has the potential for use in further research. PMID:28690701

  9. Evaluation of cancer mortality in a cohort of workers exposed to low-level radiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lea, C.S.

    1995-12-01

    The purpose of this dissertation was to re-analyze existing data to explore methodologic approaches that may determine whether excess cancer mortality in the ORNL cohort can be explained by time-related factors not previously considered; grouping of cancer outcomes; selection bias due to choice of method selected to incorporate an empirical induction period; or the type of statistical model chosen.

  10. Patrol force allocation for law enforcement: An introductory planning guide

    NASA Technical Reports Server (NTRS)

    Sohn, R. L.; Kennedy, R. D.

    1976-01-01

    Previous and current methods for analyzing police patrol forces are reviewed and discussed. The steps in developing an allocation analysis procedure are defined, including the prediction of the rate of calls for service, determination of the number of patrol units needed, designing sectors, and analyzing dispatch strategies. Existing computer programs used for this purpose are briefly described, and some results of their application are given.

  11. Exponential Stability of Almost Periodic Solutions for Memristor-Based Neural Networks with Distributed Leakage Delays.

    PubMed

    Xu, Changjin; Li, Peiluan; Pang, Yicheng

    2016-12-01

    In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of the neural networks. We then apply these results to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients, and provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies of Chen, Zeng, and Jiang (2014) and Jiang, Zeng, and Chen (2015).

  12. Classifying medical relations in clinical text via convolutional neural networks.

    PubMed

    He, Bin; Guan, Yi; Dai, Rui

    2018-05-16

    Deep learning research on relation classification has achieved solid performance in the general domain. This study proposes a convolutional neural network (CNN) architecture with a multi-pooling operation for medical relation classification on clinical records and explores a loss function with a category-level constraint matrix. Experiments using the 2010 i2b2/VA relation corpus demonstrate that these models, which do not depend on any external features, outperform previous single-model methods, and our best model is competitive with the existing ensemble-based method. Copyright © 2018. Published by Elsevier B.V.
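
    As a rough illustration of the kind of architecture described above (not the authors' exact model), the sketch below shows a 1-D convolution whose feature maps are max-pooled separately over three sentence segments delimited by the two concept mentions; the segment boundaries, vocabulary size, and hyperparameters are illustrative assumptions.

        import torch
        import torch.nn as nn

        class MultiPoolCNN(nn.Module):
            # 1-D CNN whose feature maps are max-pooled separately over three
            # sentence segments: before concept 1, between the concepts, and
            # after concept 2 (assumes 0 <= e1 <= e2 < sequence length).
            def __init__(self, vocab=5000, emb=100, filters=128, classes=8):
                super().__init__()
                self.emb = nn.Embedding(vocab, emb)
                self.conv = nn.Conv1d(emb, filters, kernel_size=3, padding=1)
                self.fc = nn.Linear(3 * filters, classes)

            def forward(self, tokens, e1_pos, e2_pos):
                h = torch.relu(self.conv(self.emb(tokens).transpose(1, 2)))
                pooled = []
                for b in range(h.size(0)):
                    s1, s2 = e1_pos[b].item(), e2_pos[b].item()
                    segs = [h[b, :, :s1 + 1], h[b, :, s1:s2 + 1], h[b, :, s2:]]
                    pooled.append(torch.cat([s.max(dim=1).values for s in segs]))
                return self.fc(torch.stack(pooled))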

  13. A stage structure pest management model with impulsive state feedback control

    NASA Astrophysics Data System (ADS)

    Pang, Guoping; Chen, Lansun; Xu, Weijian; Fu, Gang

    2015-06-01

    A stage structure pest management model with impulsive state feedback control is investigated. We obtain a sufficient condition for the existence of the order-1 periodic solution by differential equation geometry theory and the successor function. Further, we obtain a new judgement method for the stability of the order-1 periodic solution of semi-continuous systems by adapting the stability analysis for limit cycles of continuous systems, which differs from the previous method based on an analog of the Poincaré criterion. Finally, we numerically analyze the theoretical results obtained.

  14. Sparse Matrix for ECG Identification with Two-Lead Features.

    PubMed

    Tseng, Kuo-Kun; Luo, Jiao; Hegarty, Robert; Wang, Wenmin; Haiting, Dong

    2015-01-01

    Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.
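
    The abstract does not spell out how the two leads are mapped onto a matrix, so the following sketch is only one plausible reading: quantized amplitudes of the two simultaneous leads index the rows and columns of a joint-occurrence matrix, which is naturally sparse and can be compared between subjects with a cosine score. The bin count and similarity measure are assumptions.

        import numpy as np
        from scipy import sparse

        def two_lead_matrix(lead1, lead2, bins=64):
            # Normalize each lead to [0, 1], then quantize to bin indices.
            def q(x):
                x = np.asarray(x, dtype=float)
                x = (x - x.min()) / (x.max() - x.min() + 1e-12)
                return np.minimum((x * bins).astype(int), bins - 1)
            i, j = q(lead1), q(lead2)
            data = np.ones_like(i, dtype=float)
            # Joint-amplitude occurrence matrix; most cells stay empty.
            return sparse.coo_matrix((data, (i, j)), shape=(bins, bins)).tocsr()

        def similarity(a, b):
            # Cosine similarity between two sparse feature matrices.
            num = a.multiply(b).sum()
            den = np.sqrt(a.multiply(a).sum() * b.multiply(b).sum()) + 1e-12
            return num / den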

  15. Direct measurement of resonance strengths in 34S(α,γ)38Ar at astrophysically relevant energies using the DRAGON recoil separator

    NASA Astrophysics Data System (ADS)

    Connolly, D.; O'Malley, P. D.; Akers, C.; Chen, A. A.; Christian, G.; Davids, B.; Erikson, L.; Fallis, J.; Fulton, B. R.; Greife, U.; Hager, U.; Hutcheon, D. A.; Ilyushkin, S.; Laird, A. M.; Mahl, A.; Ruiz, C.

    2018-03-01

    Background: Nucleosynthesis of mid-mass elements is thought to occur under hot and explosive astrophysical conditions. Radiative α capture on 34S has been shown to impact nucleosynthesis in several such conditions, including core and shell oxygen burning, explosive oxygen burning, and type Ia supernovae. Purpose: Broad uncertainties exist in the literature for the strengths of three resonances within the astrophysically relevant energy range (E_CM = 1.94–3.42 MeV at T = 2.2 GK). Further, there are several states in 38Ar within this energy range which have not been previously measured. This work aimed to remeasure the resonance strengths of states for which broad uncertainty existed as well as to measure the resonance strengths and energies of previously unmeasured states. Methods: Resonance strengths and energies of eight narrow resonances (five of which had not been previously studied) were measured in inverse kinematics with the DRAGON facility at TRIUMF by impinging an isotopically pure beam of 34S ions on a windowless 4He gas target. Prompt γ emissions of de-exciting 38Ar recoils were detected in an array of bismuth germanate scintillators in coincidence with recoil nuclei, which were separated from unreacted beam ions by an electromagnetic mass separator and detected by a time-of-flight system and a multianode ionization chamber. Results: The present measurements agree with previous results. Broad uncertainty in the resonance strength of the E_CM = 2709 keV resonance persists. Resonance strengths and energies were determined for five low-energy resonances which had not been studied previously, and their strengths were determined to be significantly weaker than those of previously measured resonances. Conclusions: The five previously unmeasured resonances were found not to contribute significantly to the total thermonuclear reaction rate. A median total thermonuclear reaction rate calculated using data from the present work along with existing literature values using the STARLIB rate calculator agrees with the NON-SMOKER statistical model calculation as well as the REACLIB and STARLIB library rates at explosive and nonexplosive oxygen-burning temperatures (T = 3–4 GK and T = 1.5–2.7 GK, respectively).

  16. Stable and low diffusive hybrid upwind splitting methods

    NASA Technical Reports Server (NTRS)

    Coquel, Frederic; Liou, Meng-Sing

    1992-01-01

    We introduce in this paper a new concept for upwinding: the Hybrid Upwind Splitting (HUS). This original strategy for upwinding is achieved by combining the two previously existing approaches, Flux Vector Splitting (FVS) and Flux Difference Splitting (FDS), while retaining their own interesting features. Indeed, our approach yields upwind methods that share the robustness of FVS schemes in the capture of nonlinear waves and the accuracy of some FDS schemes in the capture of linear waves. We describe here some examples of such HUS methods obtained by hybridizing the Osher approach with FVS schemes. Numerical illustrations are displayed and demonstrate in particular the relevance of the HUS methods we propose for viscous calculations.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sano, Yukio; Sano, Tomokazu

    A quadratic equation for the temperature-independent Grueneisen coefficient γ was derived by a method in which the Walsh-Christian and Mie-Grueneisen equations are combined. Some previously existing ab initio temperature Hugoniots for hexagonal close-packed solid Fe are inaccurate because the constant-volume specific heats on the Hugoniots, C_VH, which are related uniquely to the solutions of the quadratic equation, have values that are too small. A C_VH distribution in the solid phase range was demonstrated to agree approximately with a previous ab initio distribution. In contrast, the corresponding γ distribution was significantly different from the ab initio distribution in the lower pressure region. The causes of these disagreements are clarified.

  18. Drift-Free Position Estimation of Periodic or Quasi-Periodic Motion Using Inertial Sensors

    PubMed Central

    Latt, Win Tun; Veluvolu, Kalyana Chakravarthy; Ang, Wei Tech

    2011-01-01

    Position sensing with inertial sensors such as accelerometers and gyroscopes usually requires other aiding sensors or prior knowledge of motion characteristics to remove the position drift resulting from integration of acceleration or velocity, so as to obtain accurate position estimates. A method based on analytical integration has previously been developed to obtain accurate position estimates of periodic or quasi-periodic motion from inertial sensors using prior knowledge of the motion but without aiding sensors. In this paper, a new method is proposed which employs a linear filtering stage coupled with an adaptive filtering stage to remove drift and attenuation. The only prior knowledge of the motion the proposed method requires is an approximate band of frequencies of the motion. Existing adaptive filtering methods based on Fourier series, such as the weighted-frequency Fourier linear combiner (WFLC) and the band-limited multiple Fourier linear combiner (BMFLC), are modified to combine with the proposed method. To validate and compare the performance of the proposed method against the method based on analytical integration, a simulation study is performed using periodic signals as well as real physiological tremor data, and real-time experiments are conducted using an ADXL-203 accelerometer. Results demonstrate that the proposed method outperforms the existing analytical integration method. PMID:22163935
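
    A minimal sketch of the band-limited Fourier linear combiner idea the abstract builds on: the motion is modeled as sin/cos terms on a fixed grid of frequencies inside the assumed band, with amplitudes adapted sample by sample via an LMS rule. The band limits, grid spacing, and step size below are illustrative assumptions, and the sketch omits the paper's linear pre-filtering stage.

        import numpy as np

        def bmflc_estimate(signal, dt, f_lo=6.0, f_hi=14.0, df=0.5, mu=0.01):
            # Model the motion as sin/cos terms on a fixed frequency grid
            # spanning the assumed band; adapt amplitudes with an LMS rule.
            freqs = np.arange(f_lo, f_hi + df, df)
            w = np.zeros(2 * len(freqs))              # adaptive amplitudes
            est = np.zeros(len(signal))
            for n, s in enumerate(np.asarray(signal, dtype=float)):
                t = n * dt
                x = np.concatenate([np.sin(2 * np.pi * freqs * t),
                                    np.cos(2 * np.pi * freqs * t)])
                est[n] = w @ x                        # current reconstruction
                w += 2.0 * mu * (s - est[n]) * x      # LMS weight update
            return est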

  19. Scalable detection of statistically significant communities and hierarchies, using message passing for modularity

    PubMed Central

    Zhang, Pan; Moore, Cristopher

    2014-01-01

    Modularity is a popular measure of community structure. However, maximizing the modularity can lead to many competing partitions, with almost the same modularity, that are poorly correlated with each other. It can also produce illusory "communities" in random graphs where none exist. We address this problem by using the modularity as a Hamiltonian at finite temperature and using an efficient belief propagation algorithm to obtain the consensus of many partitions with high modularity, rather than looking for a single partition that maximizes it. We show analytically and numerically that the proposed algorithm works all of the way down to the detectability transition in networks generated by the stochastic block model. It also performs well on real-world networks, revealing large communities in some networks where previous work has claimed no communities exist. Finally we show that by applying our algorithm recursively, subdividing communities until no statistically significant subcommunities can be found, we can detect hierarchical structure in real-world networks more efficiently than previous methods. PMID:25489096
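
    For reference, the quantity used as a Hamiltonian here is the standard Newman modularity; a minimal sketch for a dense, unweighted graph is below. Treating -Q as an energy, the paper samples partitions at finite temperature with belief propagation rather than maximizing Q directly.

        import numpy as np

        def modularity(adj, labels):
            # Q = (1/2m) * sum_ij (A_ij - k_i * k_j / 2m) * [c_i == c_j]
            adj = np.asarray(adj, dtype=float)
            labels = np.asarray(labels)
            k = adj.sum(axis=1)                       # node degrees
            two_m = k.sum()                           # 2m = total degree
            same = labels[:, None] == labels[None, :]
            return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m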

  20. Differentiation and Exploration of Model MACP for HE VER 1.0 on Prototype Performance Measurement Application for Higher Education

    NASA Astrophysics Data System (ADS)

    El Akbar, R. Reza; Anshary, Muhammad Adi Khairul; Hariadi, Dennis

    2018-02-01

    Model MACP for HE ver. 1 is a model that describes how to measure and monitor performance in higher education. Based on a review of the research related to the model, several components of the model remain to be developed in further research, so this research has four main objectives. The first objective is to differentiate the CSF (critical success factor) components in the previous model; the second is to explore the KPIs (key performance indicators) in the previous model; the third, building on the previous two objectives, is to design a new and more detailed model. The fourth and final objective is to design a prototype application for performance measurement in higher education, based on the new model. The method used is the explorative research method, with the application designed using the prototype method. The results of this study are, first, a new and more detailed model for measurement and monitoring of performance in higher education, obtained through differentiation and exploration of the Model MACP for HE Ver.1. The second result is a dictionary of college performance measurement, compiled by re-evaluating the existing indicators. The third result is the design of a prototype application for performance measurement in higher education.

  1. A novel load balanced energy conservation approach in WSN using biogeography based optimization

    NASA Astrophysics Data System (ADS)

    Kaushik, Ajay; Indu, S.; Gupta, Daya

    2017-09-01

    Clustering sensor nodes is an effective technique to reduce the energy consumption of sensor nodes and maximize the lifetime of wireless sensor networks (WSNs). Balancing the load of the cluster heads is an important factor in the long-run operation of WSNs. In this paper we propose a novel load balancing approach using biogeography-based optimization (LB-BBO). LB-BBO uses two separate fitness functions to perform load balancing of equal and unequal loads, respectively. The proposed method is simulated using MATLAB and compared with existing methods, and it shows better performance than all previous works on energy conservation in WSNs.
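
    Neither fitness function is given in the abstract, so the sketch below is purely schematic: an illustrative equal-load fitness (negative variance of cluster-head loads) and one generic BBO migration/mutation generation over candidate node-to-head assignments. The encoding, rates, and fitness are assumptions, not the paper's LB-BBO.

        import numpy as np

        def load_balance_fitness(assignment, n_heads):
            # Illustrative "equal load" fitness: perfectly balanced cluster
            # heads (equal member counts) give the highest score.
            loads = np.bincount(assignment, minlength=n_heads)
            return -loads.var()

        def bbo_generation(pop, fitness, n_heads, p_mutate=0.02, seed=0):
            # One schematic BBO generation: rank habitats by fitness, let
            # features (node-to-head assignments) migrate from fit habitats
            # to less fit ones, then apply light random mutation.
            rng = np.random.default_rng(seed)
            n, d = pop.shape
            order = np.argsort(fitness)[::-1]         # best habitat first
            new_pop = pop[order].copy()
            mu = np.linspace(1.0, 0.0, n)             # emigration rate by rank
            lam = 1.0 - mu                            # immigration rate by rank
            for i in range(n):
                for j in range(d):
                    if rng.random() < lam[i]:         # habitat i imports feature j
                        src = rng.choice(n, p=mu / mu.sum())
                        new_pop[i, j] = new_pop[src, j]
                    if rng.random() < p_mutate:
                        new_pop[i, j] = rng.integers(n_heads)
            return new_pop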

  2. Computing the Evans function via solving a linear boundary value ODE

    NASA Astrophysics Data System (ADS)

    Wahl, Colin; Nguyen, Rose; Ventura, Nathaniel; Barker, Blake; Sandstede, Bjorn

    2015-11-01

    Determining the stability of traveling wave solutions to partial differential equations can oftentimes be computationally intensive but of great importance to understanding the effects of perturbations on the physical systems (chemical reactions, hydrodynamics, etc.) they model. For waves in one spatial dimension, one may linearize around the wave and form an Evans function - an analytic Wronskian-like function which has zeros that correspond in multiplicity to the eigenvalues of the linearized system. If eigenvalues with a positive real part do not exist, the traveling wave will be stable. Two methods exist for calculating the Evans function numerically: the exterior-product method and the method of continuous orthogonalization. The first is numerically expensive, and the second reformulates the originally linear system as a nonlinear system. We develop a new algorithm for computing the Evans function through appropriate linear boundary-value problems. This algorithm is cheaper than the previous methods, and we prove that it preserves analyticity of the Evans function. We also provide error estimates and implement it on some classical one- and two-dimensional systems, one being the Swift-Hohenberg equation in a channel, to show the advantages.

  3. Underwater psychophysical audiogram of a young male California sea lion (Zalophus californianus).

    PubMed

    Mulsow, Jason; Houser, Dorian S; Finneran, James J

    2012-05-01

    Auditory evoked potential (AEP) data are commonly obtained in air while sea lions are under gas anesthesia, a procedure that precludes the measurement of underwater hearing sensitivity. This is a substantial limitation considering the importance of underwater hearing data in designing criteria aimed at mitigating the effects of anthropogenic noise exposure. To determine if some aspects of underwater hearing sensitivity can be predicted using rapid aerial AEP methods, this study measured underwater psychophysical thresholds for a young male California sea lion (Zalophus californianus) for which previously published aerial AEP thresholds exist. Underwater thresholds were measured in an aboveground pool at frequencies between 1 and 38 kHz. The underwater audiogram was very similar to those previously published for California sea lions, suggesting that the current and previously obtained psychophysical data are representative for this species. The psychophysical and previously measured AEP audiograms were most similar in terms of high-frequency hearing limit (HFHL), although the underwater HFHL was sharper and occurred at a higher frequency. Aerial AEP methods are useful for predicting reductions in the HFHL that are potentially independent of the testing medium, such as those due to age-related sensorineural hearing loss.

  4. Nonlinear PET parametric image reconstruction with MRI information using kernel method

    NASA Astrophysics Data System (ADS)

    Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2017-03-01

    Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction of multiplier method (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
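
    A minimal sketch of the kernel idea referred to above, under simplifying assumptions (dense matrices, Poisson ML-EM for a static image rather than the paper's nonlinear compartment model with ADMM): the image is represented as x = K·alpha, with K built from anatomical (e.g., MRI) feature vectors, and the standard EM update is applied to the composite matrix P·K. Neighbor count and kernel width are illustrative.

        import numpy as np

        def knn_gaussian_kernel(features, k=8, sigma=1.0):
            # Kernel matrix from anatomical feature vectors: each voxel
            # connects to its k nearest neighbours with Gaussian weights;
            # rows are normalized.
            n = features.shape[0]
            K = np.zeros((n, n))
            for i in range(n):
                d2 = ((features - features[i]) ** 2).sum(axis=1)
                nbr = np.argsort(d2)[:k]
                K[i, nbr] = np.exp(-d2[nbr] / (2 * sigma ** 2))
            return K / K.sum(axis=1, keepdims=True)

        def kernel_mlem(y, P, K, n_iter=50):
            # Image represented as x = K @ alpha; standard Poisson ML-EM
            # update applied to the composite system matrix P @ K.
            PK = P @ K
            alpha = np.ones(K.shape[1])
            sens = PK.sum(axis=0) + 1e-12             # (P K)^T 1
            for _ in range(n_iter):
                ybar = PK @ alpha + 1e-12
                alpha *= (PK.T @ (y / ybar)) / sens
            return K @ alpha                          # reconstructed image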

  5. Experiences Using Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1996-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  6. Advanced Waveform Simulation for Seismic Monitoring

    DTIC Science & Technology

    2008-09-01

    velocity model. The method separates the main arrivals of the regional waveform into 5 windows: Pnl (vertical and radial components), Rayleigh (vertical and...ranges out to 10°, including extensive observations of crustal thinning and thickening and various Pnl complexities. Broadband modeling in 1D, 2D...existing models perform in predicting the various regional phases, Rayleigh waves, Love waves, and Pnl waves. Previous events from this Basin-and-Range

  7. Reverse engineering of integrated circuits

    DOEpatents

    Chisholm, Gregory H.; Eckmann, Steven T.; Lain, Christopher M.; Veroff, Robert L.

    2003-01-01

    Software and a method therein to analyze circuits. The software comprises several tools, each of which performs a particular function in the Reverse Engineering process. The analyst, through a standard interface, directs each tool to the portion of the task to which it is best suited, rendering previously intractable problems solvable. The tools are generally used iteratively to produce a successively more abstract picture of a circuit, about which incomplete a priori knowledge exists.

  8. Issues associated with Galilean invariance on a moving solid boundary in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2017-01-01

    In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.

  9. Automated railroad reconstruction from remote sensing image based on texture filter

    NASA Astrophysics Data System (ADS)

    Xiao, Jie; Lu, Kaixia

    2018-03-01

    Techniques of remote sensing have improved remarkably in recent years, and very accurate results and high-resolution images can be acquired, making it possible to use such data to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. Firstly, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using a Gabor filter. Secondly, two response images with filtering orientations perpendicular to each other are fused to suppress noise and acquire a long, smooth stripe region of railroads. Thirdly, a set of smooth regions is extracted by computing a global threshold for the fused image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and was found to deliver very accurate results in a quick and highly automated manner.
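
    A compact sketch of the three-step pipeline using scikit-image; the fusion rule (sum of response magnitudes) and the filter parameters are assumptions, since the paper does not fix them.

        import numpy as np
        from skimage.filters import gabor, threshold_otsu

        def railroad_mask(image, frequency=0.1, theta=0.0):
            # Step 1: Gabor responses at two perpendicular orientations.
            r1, _ = gabor(image, frequency=frequency, theta=theta)
            r2, _ = gabor(image, frequency=frequency, theta=theta + np.pi / 2)
            # Step 2: fuse the two responses to suppress noise.
            fused = np.abs(r1) + np.abs(r2)
            # Step 3: global Otsu threshold -> binary railroad mask.
            return fused > threshold_otsu(fused)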

  10. A Temperature-Dependent Phase-Field Model for Phase Separation and Damage

    NASA Astrophysics Data System (ADS)

    Heinemann, Christian; Kraus, Christiane; Rocca, Elisabetta; Rossi, Riccarda

    2017-07-01

    In this paper we study a model for phase separation and damage in thermoviscoelastic materials. The main novelty of the paper consists in the fact that, in contrast with previous works in the literature concerning phase separation and damage processes in elastic media, in our model we encompass thermal processes, nonlinearly coupled with the damage, concentration and displacement evolutions. More particularly, we prove the existence of "entropic weak solutions", resorting to a solvability concept first introduced in Feireisl (Comput Math Appl 53:461-490, 2007) in the framework of Fourier-Navier-Stokes systems and then recently employed in Feireisl et al. (Math Methods Appl Sci 32:1345-1369, 2009) and Rocca and Rossi (Math Models Methods Appl Sci 24:1265-1341, 2014) for the study of PDE systems for phase transition and damage. Our global-in-time existence result is obtained by passing to the limit in a carefully devised time-discretization scheme.

  11. Analytical solutions for solute transport in groundwater and riverine flow using Green's Function Method and pertinent coordinate transformation method

    NASA Astrophysics Data System (ADS)

    Sanskrityayn, Abhishek; Suk, Heejun; Kumar, Naveen

    2017-04-01

    In this study, analytical solutions of one-dimensional pollutant transport originating from instantaneous and continuous point sources were developed for groundwater and riverine flow using both the Green's Function Method (GFM) and a pertinent coordinate transformation method. The dispersion coefficient and flow velocity are considered spatially and temporally dependent. The spatial dependence of the velocity is linear and non-homogeneous, and that of the dispersion coefficient is the square of that of the velocity, while the temporal dependence is considered linear, or exponentially or asymptotically decelerating and accelerating. Our proposed analytical solutions are derived for three different situations, depending on the variations of the dispersion coefficient and velocity, which can represent real physical processes occurring in groundwater and riverine systems. The first case refers to steady solute transport in steady flow, in which the dispersion coefficient and velocity are only spatially dependent. The second case represents transient solute transport in steady flow, in which the dispersion coefficient is spatially and temporally dependent while the velocity is spatially dependent. Finally, the third case indicates transient solute transport in unsteady flow, in which both the dispersion coefficient and velocity are spatially and temporally dependent. The present paper demonstrates the concentration distribution behavior from a point source in realistically occurring flow domains of hydrological systems, including groundwater and riverine water, in which the dispersivity of the pollutant's mass is affected by the heterogeneity of the medium as well as by other factors such as velocity fluctuations, while the velocity is influenced by the water table slope and recharge rate. Such capabilities give the proposed method an advantage over previously existing analytical solutions in its application to various hydrological problems. In particular, to the authors' knowledge, no other solution exists for both spatial and temporal variation of the dispersion coefficient and velocity. In this study, existing analytical solutions from widely known previous studies are used as validation tools to verify the proposed analytical solutions, as well as the numerical code of the Two-Dimensional Subsurface Flow, Fate and Transport of Microbes and Chemicals (2DFATMIC) code and the 1D finite difference code (FDM) developed here. All such solutions show a perfect match with the respective proposed solutions.
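
    As a point of reference for the validation just described, the constant-coefficient special case of the 1-D advection-dispersion equation has a classical closed-form solution for an instantaneous point source, which more general solutions must reduce to; a minimal sketch with illustrative parameter values:

        import numpy as np

        def ade_point_source(x, t, M=1.0, v=1.0, D=0.5):
            # C(x, t) = M / sqrt(4 pi D t) * exp(-(x - v t)^2 / (4 D t)),
            # the response to an instantaneous unit source released at x = 0
            # with constant velocity v and dispersion coefficient D.
            return (M / np.sqrt(4 * np.pi * D * t)
                    * np.exp(-(x - v * t) ** 2 / (4 * D * t)))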

  12. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  13. Water Mapping Using Multispectral Airborne LIDAR Data

    NASA Astrophysics Data System (ADS)

    Yan, W. Y.; Shaker, A.; LaRocque, P. E.

    2018-04-01

    This study investigates the use of the world's first multispectral airborne LiDAR sensor, Optech Titan, manufactured by Teledyne Optech, for automatic land-water classification, with a particular focus on near-shore regions and river environments. Although recent studies have utilized airborne LiDAR data for shoreline detection and water surface mapping, the majority of them only perform experimental testing on clipped data subsets or rely on data fusion with aerial/satellite images. In addition, most of the existing approaches require manual intervention or existing tidal/datum data for collecting training samples. To tackle the drawbacks of previous approaches, we propose and develop an automatic data processing workflow for land-water classification using multispectral airborne LiDAR data. Depending on the nature of the study scene, two methods are proposed for automatic training data selection. The first method utilizes the elevation/intensity histogram fitted with a Gaussian mixture model (GMM) to preliminarily split the land and water bodies. The second method mainly relies on the use of a newly developed scan line elevation intensity ratio (SLIER) to estimate the water surface data points. Regardless of the training method being used, feature spaces can be constructed using the multispectral LiDAR intensity, elevation, and other features derived from these parameters. The comprehensive workflow was tested with two datasets collected for different near-shore regions and river environments, where the overall accuracy yielded better than 96%.
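
    A minimal sketch of the first training-data selection method, assuming per-point elevation and intensity samples and a two-component mixture, with water taken as the component of lower mean elevation (an assumption; the paper's actual labeling rule is not given):

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def split_land_water(elevation, intensity):
            # Fit a two-component mixture to (elevation, intensity) samples;
            # call the component with the lower mean elevation "water".
            X = np.column_stack([elevation, intensity])
            gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
            water = np.argmin(gmm.means_[:, 0])
            return gmm.predict(X) == water        # True where point is water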

  14. Respiratory Artefact Removal in Forced Oscillation Measurements: A Machine Learning Approach.

    PubMed

    Pham, Thuy T; Thamrin, Cindy; Robinson, Paul D; McEwan, Alistair L; Leong, Philip H W

    2017-08-01

    Respiratory artefact removal for the forced oscillation technique can be treated as an anomaly detection problem. Manual removal is currently considered the gold standard, but this approach is laborious and subjective. Most existing automated techniques used simple statistics and/or rejected anomalous data points. Unfortunately, simple statistics are insensitive to numerous artefacts, leading to low reproducibility of results. Furthermore, rejecting anomalous data points causes an imbalance between the inspiratory and expiratory contributions. From a machine learning perspective, such methods are unsupervised and can be considered simple feature extraction. We hypothesize that supervised techniques can be used to find improved features that are more discriminative and more highly correlated with the desired output. Features thus found are then used for anomaly detection by applying quartile thresholding, which rejects complete breaths if one of their features is out of range. The thresholds are determined by both saliency and performance metrics rather than qualitative assumptions as in previous works. Feature ranking indicates that our new landmark features are among the highest-scoring candidates regardless of age across saliency criteria. F1-scores, receiver operating characteristic, and variability of the mean resistance metrics show that the proposed scheme outperforms previous simple feature extraction approaches. Our subject-independent detector, 1IQR-SU, demonstrated approval rates of 80.6% for adults and 98% for children, higher than existing methods. Our new features are more relevant, and our removal is objective and comparable to the manual method. This work is a critical step toward automating quality control for the forced oscillation technique.
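
    The quartile-thresholding step can be sketched as follows; k = 1.0 echoes the "1IQR" in the detector's name but is an assumption, as is the per-breath feature layout:

        import numpy as np

        def accept_breaths(features, k=1.0):
            # features: (n_breaths, n_features). A breath is rejected if ANY
            # of its features falls outside [Q1 - k*IQR, Q3 + k*IQR].
            q1, q3 = np.percentile(features, [25, 75], axis=0)
            iqr = q3 - q1
            ok = (features >= q1 - k * iqr) & (features <= q3 + k * iqr)
            return ok.all(axis=1)                # True = keep complete breath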

  15. Participatory methods for the assessment of the ownership status of free-roaming dogs in Bali, Indonesia, for disease control and animal welfare.

    PubMed

    Morters, M K; Bharadwaj, S; Whay, H R; Cleaveland, S; Damriyasa, I Md; Wood, J L N

    2014-09-01

    The existence of unowned, free-roaming dogs capable of maintaining adequate body condition without direct human oversight has serious implications for disease control and animal welfare, including reducing effective vaccination coverage against rabies through limiting access for vaccination, and absolving humans from the responsibility of providing adequate care for a domesticated species. Mark-recapture methods previously used to estimate the fraction of unowned dogs in free-roaming populations have limitations, particularly when most of the dogs are owned. We used participatory methods, described as Participatory Rural Appraisal (PRA), as a novel alternative to mark-recapture methods in two villages in Bali, Indonesia. PRA was implemented at the banjar (or sub-village)-level to obtain consensus on the food sources of the free-roaming dogs. Specific methods included semi-structured discussion, visualisation tools and ranking. The PRA results agreed with the preceding household surveys and direct observations, designed to evaluate the same variables, and confirmed that a population of unowned, free-roaming dogs in sufficiently good condition to be sustained independently of direct human support was unlikely to exist. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  16. A reference estimator based on composite sensor pattern noise for source device identification

    NASA Astrophysics Data System (ADS)

    Li, Ruizhe; Li, Chang-Tsun; Guan, Yu

    2014-02-01

    It has been shown that Sensor Pattern Noise (SPN) can serve as an imaging-device fingerprint for source camera identification. Reference SPN estimation is a very important procedure within this framework. Most previous works built the reference SPN by averaging the SPNs extracted from 50 images of blue sky. However, this method can be problematic. Firstly, in practice we may face the problem of source camera identification in the absence of the imaging cameras and reference SPNs, which means only natural images with scene details are available for reference SPN estimation rather than blue-sky images. This is challenging because the reference SPN can be severely contaminated by image content. Secondly, the number of available reference images is sometimes too small for existing methods to estimate a reliable reference SPN. In fact, existing methods lack consideration of the number of available reference images, as they were designed for datasets with abundant images for estimating the reference SPN. To deal with the aforementioned problems, a novel reference estimator is proposed in this work. Experimental results show that our proposed method achieves better performance than methods based on the averaged reference SPN, especially when few reference images are used.
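
    A baseline version of the averaging approach the paper improves on might look like the following sketch, with a Gaussian filter standing in for the wavelet denoiser usually used to extract the SPN residual; filter choice and matching score are assumptions.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def spn(image, sigma=1.0):
            # Noise residual: image minus a denoised version (a Gaussian
            # filter stands in for the usual wavelet denoiser).
            return image - gaussian_filter(image, sigma)

        def reference_spn(images):
            # Baseline reference: average the residuals of many images.
            return np.mean([spn(im) for im in images], axis=0)

        def ncc(a, b):
            # Normalized cross-correlation, used to match a test image's
            # SPN against a camera's reference SPN.
            a, b = a - a.mean(), b - b.mean()
            return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)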

  17. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations. They have a high potential to miss some dominant patents from the identified main paths; moreover, the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from the high-persistence patents which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and the solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, but the main paths identified by the existing approach miss about 20% of dominantly important patents.
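
    A schematic of the search strategy only, not the actual genetic knowledge persistence algorithm: below, a persistence-like score is approximated by the number of descendants reachable in the citation DAG (edges assumed to point from cited to citing patent, i.e., in the direction of knowledge flow), and a forward path is traced greedily from a high-persistence patent.

        import networkx as nx

        def persistence_scores(g):
            # Proxy persistence: number of later patents a node can reach
            # through citation paths (stand-in for the genetic measure).
            return {n: len(nx.descendants(g, n)) for n in g.nodes}

        def main_path_from(g, start, scores):
            # Greedy forward trace from a high-persistence patent: repeatedly
            # follow the successor with the highest persistence score.
            path = [start]
            while True:
                succ = list(g.successors(path[-1]))
                if not succ:
                    return path
                path.append(max(succ, key=scores.get))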

  18. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach

    PubMed Central

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations. They have a high potential to miss some dominant patents from the identified main paths; moreover, the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from the high-persistence patents which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and the solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, but the main paths identified by the existing approach miss about 20% of dominantly important patents. PMID:28135304

  19. Ensemble-based prediction of RNA secondary structures.

    PubMed

    Aghaeepour, Nima; Hoos, Holger H

    2013-04-24

    Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach. In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
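
    A schematic of the ensemble step only (AveRNA's actual combination weights are optimized on training data): each component method votes for a set of base pairs, and a pair is kept when its weighted vote fraction reaches a threshold tau, which directly controls the false-positive/false-negative trade-off mentioned above.

        from collections import Counter

        def ensemble_pairs(predictions, weights=None, tau=0.5):
            # predictions: list of sets of (i, j) base pairs, one per method.
            # Keep a pair when its weighted vote fraction reaches tau;
            # raising tau trades false positives for false negatives.
            weights = weights or [1.0] * len(predictions)
            votes = Counter()
            for pred, w in zip(predictions, weights):
                for pair in pred:
                    votes[pair] += w
            total = sum(weights)
            return {p for p, v in votes.items() if v / total >= tau}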

  20. Anatomically-Aided PET Reconstruction Using the Kernel Method

    PubMed Central

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-01-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810

  1. Anatomically-aided PET reconstruction using the kernel method.

    PubMed

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  2. Anatomically-aided PET reconstruction using the kernel method

    NASA Astrophysics Data System (ADS)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  3. Fitting Formulae and Constraints for the Existence of S-type and P-type Habitable Zones in Binary Systems

    NASA Astrophysics Data System (ADS)

    Wang, Zhaopeng; Cuntz, Manfred

    2017-10-01

    We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.

  4. Fitting Formulae and Constraints for the Existence of S-type and P-type Habitable Zones in Binary Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang Zhaopeng; Cuntz, Manfred, E-mail: zhaopeng.wang@mavs.uta.edu, E-mail: cuntz@uta.edu

    We derive fitting formulae for the quick determination of the existence of S-type and P-type habitable zones (HZs) in binary systems. Based on previous work, we consider the limits of the climatological HZ in binary systems (which sensitively depend on the system parameters) based on a joint constraint encompassing planetary orbital stability and a habitable region for a possible system planet. Additionally, we employ updated results on planetary climate models obtained by Kopparapu and collaborators. Our results are applied to four P-type systems (Kepler-34, Kepler-35, Kepler-413, and Kepler-1647) and two S-type systems (TrES-2 and KOI-1257). Our method allows us to gauge the existence of climatological HZs for these systems in a straightforward manner with detailed consideration of the observational uncertainties. Further applications may include studies of other existing systems as well as systems to be identified through future observational campaigns.

  5. English semantic word-pair norms and a searchable Web portal for experimental stimulus creation.

    PubMed

    Buchanan, Erin M; Holmes, Jessica L; Teasley, Marilee L; Hutchison, Keith A

    2013-09-01

    As researchers explore the complexity of memory and language hierarchies, the need to expand normed stimulus databases is growing. Therefore, we present 1,808 words, paired with their features and concept-concept information, that were collected using previously established norming methods (McRae, Cree, Seidenberg, & McNorgan Behavior Research Methods 37:547-559, 2005). This database supplements existing stimuli and complements the Semantic Priming Project (Hutchison, Balota, Cortese, Neely, Niemeyer, Bengson, & Cohen-Shikora 2010). The data set includes many types of words (including nouns, verbs, adjectives, etc.), expanding on previous collections of nouns and verbs (Vinson & Vigliocco Journal of Neurolinguistics 15:317-351, 2008). We describe the relation between our and other semantic norms, as well as giving a short review of word-pair norms. The stimuli are provided in conjunction with a searchable Web portal that allows researchers to create a set of experimental stimuli without prior programming knowledge. When researchers use this new database in tandem with previous norming efforts, precise stimuli sets can be created for future research endeavors.

  6. Optic-null space medium for cover-up cloaking without any negative refraction index materials

    PubMed Central

    Sun, Fei; He, Sailing

    2016-01-01

    With the help of optic-null medium, we propose a new way to achieve invisibility by covering up the scattering without using any negative refraction index materials. Compared with previous methods to achieve invisibility, the function of our cloak is to cover up the scattering of the objects to be concealed by a background object of strong scattering. The concealed object can receive information from the outside world without being detected. Numerical simulations verify the performance of our cloak. The proposed method will be a great addition to existing invisibility technology. PMID:27383833

  7. Finite-time synchronization control of a class of memristor-based recurrent neural networks.

    PubMed

    Jiang, Minghui; Wang, Shuangtao; Mei, Jun; Shen, Yanjun

    2015-03-01

    This paper presents a global and local finite-time synchronization control law for memristor neural networks. By utilizing the drive-response concept, differential inclusions theory, and the Lyapunov functional method, we establish several sufficient conditions for finite-time synchronization between the master and the corresponding slave memristor-based neural network with the designed controller. In comparison with the existing results, the proposed stability conditions are new, and the obtained results extend some previous works on conventional recurrent neural networks. Two numerical examples are provided to illustrate the effectiveness of the design method. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Optic-null space medium for cover-up cloaking without any negative refraction index materials.

    PubMed

    Sun, Fei; He, Sailing

    2016-07-07

    With the help of optic-null medium, we propose a new way to achieve invisibility by covering up the scattering without using any negative refraction index materials. Compared with previous methods to achieve invisibility, the function of our cloak is to cover up the scattering of the objects to be concealed by a background object of strong scattering. The concealed object can receive information from the outside world without being detected. Numerical simulations verify the performance of our cloak. The proposed method will be a great addition to existing invisibility technology.

  9. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    NASA Astrophysics Data System (ADS)

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which a positive topological entropy.

  10. A topological proof of chaos for two nonlinear heterogeneous triopoly game models.

    PubMed

    Pireddu, Marina

    2016-08-01

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which in the existing literature the presence of complex phenomena and strange attractors has been shown via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows us to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which a positive topological entropy.

  11. Method of Testing Oxygen Regulators

    NASA Technical Reports Server (NTRS)

    Sontag, Harcourt; Borlik, E L

    1935-01-01

    Oxygen regulators are used in aircraft to regulate automatically the flow of oxygen to the pilot from a cylinder at pressures ranging up to 150 atmospheres. The instruments are adjusted to open at an altitude of about 15,000 ft. and thereafter to deliver oxygen at a rate which increases with the altitude. The instruments are tested to determine the rate of flow of oxygen delivered at various altitudes and to detect any mechanical defects which may exist. A method of testing oxygen regulators was desired in which the rate of flow could be determined more accurately than by the test method previously used (reference 1) and by which instruments defective mechanically could be detected. The new method of test fulfills these requirements.

  12. Peopling the past: new perspectives on the ancient Maya.

    PubMed

    Robin, C

    2001-01-02

    The new direction in Maya archaeology is toward achieving a greater understanding of people and their roles and their relations in the past. To answer emerging humanistic questions about ancient people's lives Mayanists are increasingly making use of new and existing scientific methods from archaeology and other disciplines. Maya archaeology is bridging the divide between the humanities and sciences to answer questions about ancient people previously considered beyond the realm of archaeological knowledge.

  13. A Thermal Dehydrogenative Diels–Alder Reaction of Styrenes for the Concise Synthesis of Functionalized Naphthalenes

    PubMed Central

    Kocsis, Laura S.; Benedetti, Erica

    2012-01-01

    Functionalized naphthalenes are valuable building blocks in many important areas. A microwave-assisted, intramolecular dehydrogenative Diels-Alder reaction of styrenyl derivatives to provide cyclopenta[b]naphthalene substructures not previously accessible using existing synthetic methods is described. The synthetic utility of these uniquely functionalized naphthalenes was demonstrated by a single-step conversion of one of these cycloadducts to a fluorophore bearing a structural resemblance to Prodan. PMID:22913473

  14. A thermal dehydrogenative Diels-Alder reaction of styrenes for the concise synthesis of functionalized naphthalenes.

    PubMed

    Kocsis, Laura S; Benedetti, Erica; Brummond, Kay M

    2012-09-07

    Functionalized naphthalenes are valuable building blocks in many important areas. A microwave-assisted, intramolecular dehydrogenative Diels-Alder reaction of styrenyl derivatives to provide cyclopenta[b]naphthalene substructures not previously accessible using existing synthetic methods is described. The synthetic utility of these uniquely functionalized naphthalenes was demonstrated by a single-step conversion of one of these cycloadducts to a fluorophore bearing a structural resemblance to Prodan.

  15. Hazards and Possibilities of Optical Breakdown Effects Below the Threshold for Shockwave and Bubble Formation

    DTIC Science & Technology

    2006-07-01

    precision of the determination of Rmax, we established a refined method based on the model of bubble formation described above in section 3.6.1 and the...development can be modeled by hydrodynamic codes based on tabulated equation-of-state data. This has previously been demonstrated on ps optical breakdown...

  16. Research progress on expansive soil cracks under changing environment.

    PubMed

    Shi, Bei-xiao; Zheng, Cheng-feng; Wu, Jin-kun

    2014-01-01

    Engineering problems previously set aside are gradually coming to the fore as human activity reshapes the natural environment in depth; cracking of expansive soil under a changing environment has become a controlling factor in expansive soil slope stability. Expansive soil cracking has gradually become a research hotspot. This review traces the occurrence and development of cracks from the basic properties of expansive soil and points out the role of cracks in controlling expansive soil strength. We summarize the existing research methods and results on expansive soil crack characteristics. Improving crack measurement and calculation methods, and investigating crack depth measurement, statistical analysis methods, and the relationship between crack depth and surface features, will be the future directions.

  17. Direct 2-D reconstructions of conductivity and permittivity from EIT data on a human chest.

    PubMed

    Herrera, Claudia N L; Vallejo, Miguel F M; Mueller, Jennifer L; Lima, Raul G

    2015-01-01

    A novel direct D-bar reconstruction algorithm is presented for reconstructing a complex conductivity distribution from 2-D EIT data. The method is applied to simulated data and archival human chest data. Permittivity reconstructions with the aforementioned method and conductivity reconstructions with the previously existing nonlinear D-bar method for real-valued conductivities depicting ventilation and perfusion in the human chest are presented. This constitutes the first fully nonlinear D-bar reconstructions of human chest data and the first D-bar permittivity reconstructions of experimental data. The results of the human chest data reconstructions are compared on a circular domain versus a chest-shaped domain.
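
    For background, the complex admittivity the abstract refers to enters EIT through the standard generalized Laplace equation; this is textbook context rather than the paper's own derivation:

    ```latex
    \nabla \cdot \big( \gamma(z)\, \nabla u(z) \big) = 0, \quad z \in \Omega,
    \qquad \gamma(z) = \sigma(z) + i\,\omega\,\epsilon(z)
    ```

    Here σ is the conductivity, ε the permittivity, and ω the angular frequency of the applied current.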

  18. GalaxyGPCRloop: Template-Based and Ab Initio Structure Sampling of the Extracellular Loops of G-Protein-Coupled Receptors.

    PubMed

    Won, Jonghun; Lee, Gyu Rie; Park, Hahnbeom; Seok, Chaok

    2018-06-07

    The second extracellular loops (ECL2s) of G-protein-coupled receptors (GPCRs) are often involved in GPCR functions, and their structures have important implications in drug discovery. However, structure prediction of ECL2 is difficult because of its long length and the structural diversity among different GPCRs. In this study, a new ECL2 conformational sampling method involving both template-based and ab initio sampling was developed. Inspired by the observation of similar ECL2 structures of closely related GPCRs, a template-based sampling method employing loop structure templates selected from the structure database was developed. A new metric for evaluating similarity of the target loop to templates was introduced for template selection. An ab initio loop sampling method was also developed to treat cases without highly similar templates. The ab initio method is based on the previously developed fragment assembly and loop closure method. A new sampling component that takes advantage of secondary structure prediction was added. In addition, a conserved disulfide bridge restraining ECL2 conformation was predicted and analytically incorporated into sampling, reducing the effective dimension of the conformational search space. The sampling method was combined with an existing energy function for comparison with previously reported loop structure prediction methods, and the benchmark test demonstrated outstanding performance.

  19. Frequency-area distribution of earthquake-induced landslides

    NASA Astrophysics Data System (ADS)

    Tanyas, H.; Allstadt, K.; Westen, C. J. V.

    2016-12-01

    Discovering the physical explanations behind the power-law distribution of landslides can provide valuable information for quantifying triggered-landslide events and, as a consequence, for understanding the relation between landslide causes and impacts in terms of the environmental settings of the affected area. In previous studies, the probability of landslide size was used for this quantification, and the resulting parameter was called the landslide magnitude (mL). The frequency-area distributions (FADs) of several landslide inventories were modelled and theoretical curves were established to identify the mL of any landslide inventory. In the observed inventories, a divergence from the power-law distribution was recognized for small landslides, referred to as the rollover, and this feature was taken into account in the established model. However, these analyses are based on a relatively limited number of inventories, each with a different triggering mechanism. The existing definition of mL includes some subjectivity, since it is based on a visual comparison between the theoretical curves and the FAD of the medium and large landslides. Additionally, the existing definition of mL introduces uncertainty due to the ambiguity in both the physical explanation of the rollover and its functional form. Here we focus on earthquake-induced landslides (EQIL) and aim to provide a rigorous method to estimate the mL and total landslide area of EQIL. We have gathered 36 EQIL inventories from around the globe. Using these inventories, we have evaluated existing explanations of the rollover and propose an alternative explanation given the new data. Next, we propose a method to define the EQIL FAD curves and mL and to estimate the total landslide area. We use the total landslide areas obtained from the inventories to compare against our estimates and to validate the methodology. The results show that we calculate landslide magnitudes more accurately than previous methods.
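
    As a rough illustration of the power-law tail fitting the abstract discusses (not the authors' mL estimator, which is not reproduced here), the exponent above a rollover cutoff can be estimated by maximum likelihood; the inventory and cutoff values below are hypothetical:

    ```python
    import numpy as np

    def powerlaw_exponent(areas, a_min):
        """MLE of the power-law exponent for landslide areas >= a_min
        (continuous-data estimator of Clauset et al., 2009)."""
        a = np.asarray(areas, dtype=float)
        tail = a[a >= a_min]
        n = tail.size
        alpha = 1.0 + n / np.sum(np.log(tail / a_min))
        return alpha, n

    # Hypothetical inventory of landslide areas in m^2
    areas = np.random.pareto(1.4, 5000) * 1e3 + 1e3
    alpha, n_tail = powerlaw_exponent(areas, a_min=5e3)
    print(f"tail exponent alpha = {alpha:.2f} from {n_tail} landslides")
    ```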

  20. Improvement of determinating seafloor benchmark position with large-scale horizontal heterogeneity in the ocean area

    NASA Astrophysics Data System (ADS)

    Uemura, Y.; Tadokoro, K.; Matsuhiro, K.; Ikuta, R.

    2015-12-01

    The most critical issue reducing the accuracy of the GPS/Acoustic seafloor positioning technique is the large-scale thermal gradient of the sound-speed structure [Muto et al., 2008] caused by ocean currents; for example, the Kuroshio Current, near our observation station, forms such a structure. To improve the accuracy of the seafloor benchmark position (SBP), we need either to measure the structure directly and frequently, or to estimate it from travel-time residuals. For the former, we repeatedly measure the sound speed at the Kuroshio axis using an Underway CTD and apply it in the seafloor positioning analysis [Yasuda et al., 2015 AGU meeting]. For the latter, however, the structure has so far not been estimable from travel-time residuals. Accordingly, in this study we focus on the azimuthal dependence of the Estimated Mean Sound-Speed (EMSS), defined as the distance between the vessel position and the estimated SBP divided by the travel time. If a thermal gradient exists and the SBP is true, the EMSS should show azimuthal dependence under the assumption of a horizontally layered sound-speed structure used in our previous analysis method. We use data from station KMC, located on the central part of the Nankai Trough, Japan, on Jan. 28, 2015, because on that day KMC was on the northern edge of the Kuroshio, where a thermal gradient is expected. In our analysis method, a hyperparameter (the μ value) weights the travel-time residual against the rate of change of the sound-speed structure. However, the EMSS derived from the μ value determined by Ikuta et al. [2008] shows no azimuthal dependence, that is, the thermal gradient cannot be estimated, and we therefore expect the SBP to carry a large bias. In this study we therefore use another μ value and examine whether the EMSS shows azimuthal dependence. With the μ value of this study, which is one order of magnitude smaller than the previous value, the EMSS shows an azimuthal dependence consistent with the thermal gradient on the observation day. This result shows that the thermal gradient can be estimated adequately. The resulting SBP is displaced 25.6 cm to the north and 11.8 cm to the east relative to the previous SBP, and this displacement reduces the SBP bias and the RMS of the horizontal component of the time series to one third. Redetermining the μ value is therefore appropriate when a thermal gradient exists on the observation day and the EMSS shows azimuthal dependence.
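
    Since the abstract defines EMSS explicitly (vessel-to-benchmark distance divided by travel time), a minimal sketch of computing it per shot and binning it by azimuth might look as follows; the array layout (east, north, up columns) and azimuth convention are assumptions:

    ```python
    import numpy as np

    def emss_by_azimuth(vessel_xyz, sbp_xyz, travel_time, n_bins=12):
        """Per-shot Estimated Mean Sound-Speed (EMSS), averaged in azimuth
        bins around the benchmark. vessel_xyz: (N, 3) east/north/up in m,
        sbp_xyz: (3,), travel_time: (N,) in s. Empty bins yield NaN."""
        d = vessel_xyz - sbp_xyz                          # shot geometry (m)
        slant = np.linalg.norm(d, axis=1)
        emss = slant / travel_time                        # m/s per shot
        az = np.degrees(np.arctan2(d[:, 0], d[:, 1])) % 360.0
        bins = (az / (360.0 / n_bins)).astype(int)
        return np.array([emss[bins == b].mean() for b in range(n_bins)])
    ```

    A clear trend of the binned means with azimuth would indicate the kind of thermal gradient described above.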

  1. Comprehensive Numerical Analysis of Finite Difference Time Domain Methods for Improving Optical Waveguide Sensor Accuracy

    PubMed Central

    Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly

    2016-01-01

    This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time saving in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic operations, which results in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.

  2. Lactase persistence genotyping on whole blood by loop-mediated isothermal amplification and melting curve analysis.

    PubMed

    Abildgaard, Anders; Tovbjerg, Sara K; Giltay, Axel; Detemmerman, Liselot; Nissen, Peter H

    2018-03-26

    The lactase persistence phenotype is controlled by a regulatory enhancer region upstream of the lactase (LCT) gene. In northern Europe, specifically the -13910C>T variant has been associated with lactase persistence, whereas other persistence variants, e.g. -13907C>G and -13915T>G, have been identified in Africa and the Middle East. The aim of the present study was to compare a previously developed high-resolution melting (HRM) assay with a novel method based on loop-mediated isothermal amplification and melting curve analysis (LAMP-MC), with both whole blood and DNA as input material. To evaluate the LAMP-MC method, we used 100 whole blood samples and 93 DNA samples in a two-tiered study. First, we studied the ability of the LAMP-MC method to produce specific melting curves for several variants of the LCT enhancer region. Next, we performed a blinded comparison between the LAMP-MC method and our existing HRM method with clinical samples of unknown genotype. The LAMP-MC method produced specific melting curves for the variants at positions -13909, -13910 and -13913, whereas the -13907C>G and -13915T>G variants produced indistinguishable melting profiles. The LAMP-MC assay is a simple method for lactase persistence genotyping and compares well with our existing HRM method.

  3. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small satellite sensors, high-performance estimation theory remains the most active research topic in attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residual to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residual. In addition, uncertain model errors always exist in the attitude dynamics, which places higher performance requirements on the classical KF for the attitude estimation problem. Therefore, a novel robust double-gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small satellite attitude estimation with low-precision sensors. It is assumed that the system state estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain Kk2 so as to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequence orthogonal principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the filter and to reduce the influence of the uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors for small satellite attitude estimation.
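
    The double-gain idea can be illustrated on a plain linear Kalman filter: keep the previous residual and feed it back through a second gain. The sketch below is a heuristic reading of that idea; the paper's actual derivation of Kk2 and its unscented-transform machinery are not reproduced:

    ```python
    import numpy as np

    def double_gain_update(x_pred, P_pred, z, H, R, r_prev, beta=0.3):
        """One measurement update illustrating the double-gain idea on a
        linear KF: reuse the previous residual r_prev through a second gain.
        beta and the K2 construction are illustrative assumptions."""
        r = z - H @ x_pred                           # current (a priori) residual
        S = H @ P_pred @ H.T + R
        K1 = P_pred @ H.T @ np.linalg.inv(S)         # standard Kalman gain
        K2 = beta * K1                               # heuristic second gain
        x = x_pred + K1 @ r + K2 @ r_prev            # update using both residuals
        P = (np.eye(len(x_pred)) - K1 @ H) @ P_pred  # covariance as in the standard KF
        return x, P, r                               # r becomes r_prev next step
    ```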

  4. Improvement of the Threespine Stickleback Genome Using a Hi-C-Based Proximity-Guided Assembly.

    PubMed

    Peichel, Catherine L; Sullivan, Shawn T; Liachko, Ivan; White, Michael A

    2017-09-01

    Scaffolding genomes into complete chromosome assemblies remains challenging even with the rapidly increasing sequence coverage generated by current next-generation sequence technologies. Even with scaffolding information, many genome assemblies remain incomplete. The genome of the threespine stickleback (Gasterosteus aculeatus), a fish model system in evolutionary genetics and genomics, is not completely assembled despite scaffolding with high-density linkage maps. Here, we first test the ability of a Hi-C based proximity-guided assembly (PGA) to perform a de novo genome assembly from relatively short contigs. Using Hi-C based PGA, we generated complete chromosome assemblies from a distribution of short contigs (20-100 kb). We found that 96.40% of contigs were correctly assigned to linkage groups (LGs), with ordering nearly identical to the previous genome assembly. Using available bacterial artificial chromosome (BAC) end sequences, we provide evidence that some of the few discrepancies between the Hi-C assembly and the existing assembly are due to structural variation between the populations used for the 2 assemblies or errors in the existing assembly. This Hi-C assembly also allowed us to improve the existing assembly, assigning over 60% (13.35 Mb) of the previously unassigned (~21.7 Mb) contigs to LGs. Together, our results highlight the potential of the Hi-C based PGA method to be used in combination with short read data to perform relatively inexpensive de novo genome assemblies. This approach will be particularly useful in organisms in which it is difficult to perform linkage mapping or to obtain high molecular weight DNA required for other scaffolding methods.

  5. The Effects of Magnetic Anomalies Discovered at Mars on the Structure of the Martian Ionosphere and the Solar Wind Interaction as Follows from Radio Occultation Experiments

    NASA Technical Reports Server (NTRS)

    Ness, N. F.; Acuna, M. H.; Connerney, J. E. P.; Cloutier, P.; Kliore, A. J.; Breus, T. K.; Krymskii, A. M.; Bauer, S. J.

    1999-01-01

    The electron density distribution in the ionosphere of a nonmagnetic (or weakly magnetized) planet depends not only on the solar ultraviolet intensity, but also on the nature of the SW interaction with the planet. Two scenarios have previously been developed based on observations of the bow shock crossings and on the electron density distribution within the ionosphere. According to one of them, Mars has an intrinsic magnetosphere produced by a dipole magnetic field, and the Martian ionosphere is protected from the SW flow except during "overpressure" conditions, when the planetary magnetic field cannot balance the SW dynamic pressure. In the second scenario, the Martian intrinsic magnetic dipole field is so weak that Mars has mainly an induced magnetosphere and a Venus-like SW/ionosphere interaction. Today the possible existence of a sufficiently strong global magnetic field that participates in the SW/Mars interaction can no longer be supported. The results obtained by the Mars Global Surveyor (MGS) spacecraft show the existence of highly variable, but also very localized, magnetic fields of crustal origin at Mars as high as 400-1500 nT. The absence of a large-scale global magnetic field at Mars makes it similar to Venus, except for possible effects of the magnetic anomalies associated with the remnant crustal magnetization. However, previous results on the Martian ionosphere, obtained mainly by radio occultation methods, show that there appears to be a permanently existing global horizontal magnetic field in the Martian ionosphere. Moreover, the global induced magnetic field in the Venus ionosphere is not typical at the solar zenith angles explored by the radio occultation methods. Additional information is contained in the original extended abstract.

  6. An improved cellular automaton method to model multispecies biofilms.

    PubMed

    Tang, Youneng; Valocchi, Albert J

    2013-10-01

    Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilms introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentrations and distributions; this caused results based on the cellular automaton methods to deviate from experimental results and from those of the more computationally intensive continuous method. To overcome these problems, we propose new biomass-spreading rules in this work: excess biomass spreads by pushing a line of grid cells that lie on the shortest path from the source grid cell to the destination grid cell, and the fractions of the different biomass species in the grid cells on the path change as a result of the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules.
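
    A minimal single-species sketch of the shortest-path spreading rule might look like the following, with BFS standing in for the shortest-path search and a simple halving standing in for the division step; the multispecies fraction bookkeeping described in the abstract is omitted:

    ```python
    from collections import deque
    import numpy as np

    def push_shortest_path(grid, src, cap):
        """Relieve excess biomass at src by pushing the line of cells on a
        shortest grid path (BFS) toward the nearest cell with spare capacity.
        grid is a 2-D float array of biomass amounts; a one-species sketch."""
        rows, cols = grid.shape
        prev, q, dest = {src: None}, deque([src]), None
        while q:
            r, c = q.popleft()
            if grid[r, c] < cap and (r, c) != src:
                dest = (r, c)
                break
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in prev:
                    prev[(nr, nc)] = (r, c)
                    q.append((nr, nc))
        if dest is None:
            return grid                      # no spare capacity anywhere
        path = [dest]                        # reconstruct dest -> src
        while prev[path[-1]] is not None:
            path.append(prev[path[-1]])
        path.reverse()                       # now src ... dest
        for a, b in zip(reversed(path[:-1]), reversed(path[1:])):
            grid[b] = grid[a]                # push contents one step outward
        grid[src] /= 2.0                     # stand-in for the division step
        return grid
    ```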

  7. Gravity Compensation Method for Combined Accelerometer and Gyro Sensors Used in Cardiac Motion Measurements.

    PubMed

    Krogh, Magnus Reinsfelt; Nghiem, Giang M; Halvorsen, Per Steinar; Elle, Ole Jakob; Grymyr, Ole-Johannes; Hoff, Lars; Remme, Espen W

    2017-05-01

    A miniaturized accelerometer fixed to the heart can be used for monitoring cardiac function. However, an accelerometer cannot differentiate between acceleration caused by motion and acceleration due to gravity. The accuracy of motion measurements is therefore dependent on how well the gravity component can be estimated and filtered from the measured signal. In this study we propose a new method for estimating the gravity component, based on strapdown inertial navigation, using a combined accelerometer and gyro. The gyro was used to estimate the orientation of the gravity field and thereby remove it. We compared this method with two previously proposed gravity filtering methods in three experimental models using: (1) in silico computer-simulated heart motion; (2) robot-mimicked heart motion; and (3) in vivo measured motion on the heart in an animal model. The new method correlated excellently with the reference (r² > 0.93) and had a deviation from the reference peak systolic displacement (6.3 ± 3.9 mm) below 0.2 ± 0.5 mm in the robot experiment. The new method performed significantly better than the two previously proposed methods (p < 0.001). The results show that the proposed method using a gyro can measure cardiac motion with high accuracy and performs better than existing methods for filtering the gravity component from the accelerometer signal.
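
    A minimal sketch of the strapdown idea, assuming an accelerometer measuring specific force in the sensor frame and a gyro giving body rates; the frame and sign conventions are assumptions, and this is not the paper's exact filter:

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    G_W = np.array([0.0, 0.0, -9.81])       # world-frame gravity (m/s^2), assumed convention

    def motion_acceleration(acc, gyro, dt, R0=None):
        """Gyro-aided gravity removal: propagate the sensor-to-world attitude
        with the angular rate, rotate the specific force into the world frame,
        and add back gravity. acc, gyro: (N, 3) arrays; dt: sample period."""
        Rk = R.identity() if R0 is None else R0
        acc, gyro = np.asarray(acc, float), np.asarray(gyro, float)
        out = np.empty_like(acc)
        for k in range(len(acc)):
            Rk = Rk * R.from_rotvec(gyro[k] * dt)   # strapdown attitude update
            out[k] = Rk.apply(acc[k]) + G_W         # a_world = R f_sensor + g_world
        return out
    ```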

  8. Neutrinos and the age of the universe

    NASA Technical Reports Server (NTRS)

    Symbalisty, E. M. D.; Yang, J.; Schramm, D. N.

    1980-01-01

    The age of the universe should be calculable by independent methods with similar results. Previous calculations using nucleochronometers, globular clusters and dynamical measurements coupled with Friedmann models and nucleosynthesis constraints have given different values of the age. A consistent age is reported, whose implications for the constituent mass density are very interesting and are affected by the existence of a third neutrino flavor, and by allowing the possibility that neutrinos may have a non-zero rest mass.

  9. Peopling the past: New perspectives on the ancient Maya

    PubMed Central

    Robin, Cynthia

    2001-01-01

    The new direction in Maya archaeology is toward achieving a greater understanding of people and their roles and their relations in the past. To answer emerging humanistic questions about ancient people's lives, Mayanists are increasingly making use of new and existing scientific methods from archaeology and other disciplines. Maya archaeology is bridging the divide between the humanities and sciences to answer questions about ancient people previously considered beyond the realm of archaeological knowledge. PMID:11136245

  10. A review of statistical updating methods for clinical prediction models.

    PubMed

    Su, Ting-Li; Jaki, Thomas; Hickey, Graeme L; Buchan, Iain; Sperrin, Matthew

    2018-01-01

    A clinical prediction model is a tool for predicting healthcare outcomes, usually within a specific population and context. A common approach is to develop a new clinical prediction model for each population and context; however, this wastes potentially useful historical information. A better approach is to update or incorporate existing clinical prediction models already developed for use in similar contexts or populations. In addition, clinical prediction models commonly become miscalibrated over time and need replacing or updating. In this article, we review a range of approaches for re-using and updating clinical prediction models; these fall into three main categories: simple coefficient updating, combining multiple previous clinical prediction models in a meta-model, and dynamic updating of models. We evaluated the performance (discrimination and calibration) of the different strategies using data on mortality following cardiac surgery in the United Kingdom. We found that no single strategy performed sufficiently well to be used to the exclusion of the others. In conclusion, useful tools exist for updating existing clinical prediction models to a new population or context, and these should be implemented, rather than developing a new clinical prediction model from scratch, using a breadth of complementary statistical methods.
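
    Of the simple coefficient-updating strategies, logistic recalibration is the easiest to sketch: refit only an intercept and a calibration slope on the old model's linear predictor. A minimal sketch with synthetic data (not the authors' code or dataset):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def recalibrate(lp_old, y_new):
        """Logistic recalibration: fit intercept + slope on the existing
        model's linear predictor, evaluated on the new population."""
        model = LogisticRegression(C=1e6)              # near-unpenalised fit
        model.fit(lp_old.reshape(-1, 1), y_new)
        return model.intercept_[0], model.coef_[0, 0]  # updated intercept, slope

    rng = np.random.default_rng(0)
    lp_old = rng.normal(size=500)                      # old model's linear predictor
    y_new = rng.binomial(1, 1 / (1 + np.exp(-(0.4 + 0.8 * lp_old))))
    print(recalibrate(lp_old, y_new))                  # ~ (0.4, 0.8) on this toy data
    ```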

  11. What Comes after Stable Octet? Stable Sub-Shell!

    ERIC Educational Resources Information Center

    Tan, Kim Chwee Daniel; Taber, Keith S.

    2005-01-01

    Previous research has shown that students' existing conceptions are critical to subsequent learning because there is interaction between the new knowledge that the students encounter and their existing knowledge from previous lessons. Taber (1999a) found A-level students in the UK had difficulty in understanding the principles determining the…

  12. Support vector machine-based facial-expression recognition method combining shape and appearance

    NASA Astrophysics Data System (ADS)

    Han, Eun Jung; Kang, Byung Jun; Park, Kang Ryoung; Lee, Sangyoun

    2010-11-01

    Facial expression recognition can be widely used for various applications, such as emotion-based human-machine interaction, intelligent robot interfaces, face recognition robust to expression variation, etc. Previous studies can be classified as either shape- or appearance-based recognition. The shape-based method has the disadvantage that individual variation in facial feature points exists even for similar expressions, which can reduce recognition accuracy. The appearance-based method has the limitation that the textural information of the face is very sensitive to variations in illumination. To overcome these problems, a new facial-expression recognition method is proposed, which combines both shape and appearance information based on the support vector machine (SVM). This research is novel in the following three ways compared to previous work. First, the facial feature points are automatically detected by using an active appearance model. From these, shape-based recognition is performed by using the ratios between the facial feature points based on the facial action coding system. Second, an SVM, which is trained to recognize same and different expression classes, is proposed to combine the two matching scores obtained from the shape- and appearance-based recognition. Finally, a single SVM is trained to discriminate four different expressions: neutral, smile, anger, and scream. By determining the expression of the input facial image as the one whose SVM output is at a minimum, the accuracy of the expression recognition is much enhanced. The experimental results showed that the recognition accuracy of the proposed method was better than that of previous research and other fusion methods.
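
    The score-level fusion step can be sketched with an off-the-shelf SVM taking the two matching scores as a 2-D feature; the data below are synthetic stand-ins, not the paper's features or kernel settings:

    ```python
    import numpy as np
    from sklearn.svm import SVC

    # Hypothetical matching scores for N image pairs:
    # column 0 = shape-based score, column 1 = appearance-based score.
    rng = np.random.default_rng(1)
    scores = rng.normal(size=(200, 2))
    same_expr = (scores.sum(axis=1) + rng.normal(scale=0.5, size=200)) > 0

    # SVM that fuses the two scores into a same/different-expression decision,
    # mirroring the score-level fusion described above (illustrative only).
    fusion_svm = SVC(kernel="rbf").fit(scores, same_expr)
    print("fusion accuracy:", fusion_svm.score(scores, same_expr))
    ```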

  13. Emerging applications of nanoparticles: Biomedical and environmental

    NASA Astrophysics Data System (ADS)

    Gulati, Shivani; Sachdeva, M.; Bhasin, K. K.

    2018-05-01

    Nanotechnology finds a wide range of applications, from energy production to industrial fabrication processes to biomedical applications. Nanoparticles (NPs) can be engineered to possess unique compositions and functionalities to empower novel tools and techniques that have not existed previously in biomedical research. Their unique size- and shape-dependent physicochemical properties, along with their distinctive spectral and optical properties, have prompted the development of a wide variety of potential applications in diagnostics and medicine. Across this plethora of scientific and technological fields, environmental safety is also a major concern. For this purpose, nanomaterials have been functionalized to cope with existing pollution, to improve manufacturing methods so as to reduce the generation of new pollution, and to enable alternative, more cost-effective energy sources.

  14. Stability of phases of a square-well fluid within superposition approximation

    NASA Astrophysics Data System (ADS)

    Piasecki, Jarosław; Szymczak, Piotr; Kozak, John J.

    2013-04-01

    The analytic and numerical methods introduced previously to study the phase behavior of hard sphere fluids starting from the Yvon-Born-Green (YBG) equation under the Kirkwood superposition approximation (KSA) are adapted to the square-well fluid. We are able to show conclusively that the YBG equation under the KSA closure when applied to the square-well fluid: (i) predicts the existence of an absolute stability limit corresponding to freezing where undamped oscillations appear in the long-distance behavior of correlations, (ii) in accordance with earlier studies reveals the existence of a liquid-vapor transition by the appearance of a "near-critical region" where monotonically decaying correlations acquire very long range, although the system never loses stability.
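
    For reference, the square-well pair potential has the standard form below, with hard-core diameter σ, well depth ε and range parameter λ:

    ```latex
    u(r) =
    \begin{cases}
    \infty, & r < \sigma \\[2pt]
    -\varepsilon, & \sigma \le r < \lambda\sigma \\[2pt]
    0, & r \ge \lambda\sigma
    \end{cases}
    ```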

  15. Existence and Regularity Results for the Inviscid Primitive Equations with Lateral Periodicity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamouda, Makram, E-mail: mahamoud@indiana.edu; Jung, Chang-Yeol, E-mail: changyeoljung@gmail.com; Temam, Roger, E-mail: temam@indiana.edu

    2016-06-15

    The article is devoted to proving the existence and regularity of the solutions of the 3D inviscid Linearized Primitive Equations (LPEs) in a channel with lateral periodicity. This was assumed in a previous work (Hamouda et al. in Discret Contin Dyn Syst Ser S 6(2):401-422, 2013) concerned with the boundary layers generated by the corresponding viscous problem. Although the equations under investigation here are of hyperbolic type, the standard methods do not apply because of the specificity of the hyperbolic system. A set of non-local boundary conditions for the inviscid LPEs has to be imposed at the lateral boundary of the channel, thus making the system well-posed.

  16. Array tomography of physiologically-characterized CNS synapses.

    PubMed

    Valenzuela, Ricardo A; Micheva, Kristina D; Kiraly, Marianna; Li, Dong; Madison, Daniel V

    2016-08-01

    The ability to correlate plastic changes in synaptic physiology with changes in synaptic anatomy has been very limited in the central nervous system because of shortcomings in existing methods for recording the activity of specific CNS synapses and then identifying and studying the same individual synapses on an anatomical level. We introduce here a novel approach that combines two existing methods: paired neuron electrophysiological recording and array tomography, allowing for the detailed molecular and anatomical study of synapses with known physiological properties. The complete mapping of a neuronal pair allows determining the exact number of synapses in the pair and their location. We have found that the majority of close appositions between the presynaptic axon and the postsynaptic dendrite in the pair contain synaptic specializations. The average release probability of the synapses between the two neurons in the pair is low, below 0.2, consistent with previous studies of these connections. Other questions, such as receptor distribution within synapses, can be addressed more efficiently by identifying only a subset of synapses using targeted partial reconstructions. In addition, time sensitive events can be captured with fast chemical fixation. Compared to existing methods, the present approach is the only one that can provide detailed molecular and anatomical information of electrophysiologically-characterized individual synapses. This method will allow for addressing specific questions about the properties of identified CNS synapses, even when they are buried within a cloud of millions of other brain circuit elements.

  17. Activity coefficients from molecular simulations using the OPAS method

    NASA Astrophysics Data System (ADS)

    Kohns, Maximilian; Horsch, Martin; Hasse, Hans

    2017-10-01

    A method for determining activity coefficients by molecular dynamics simulations is presented. It is an extension of the OPAS (osmotic pressure for the activity of the solvent) method in previous work for studying the solvent activity in electrolyte solutions. That method is extended here to study activities of all components in mixtures of molecular species. As an example, activity coefficients in liquid mixtures of water and methanol are calculated for 298.15 K and 323.15 K at 1 bar using molecular models from the literature. These dense and strongly interacting mixtures pose a significant challenge to existing methods for determining activity coefficients by molecular simulation. It is shown that the new method yields accurate results for the activity coefficients which are in agreement with results obtained with a thermodynamic integration technique. As the partial molar volumes are needed in the proposed method, the molar excess volume of the system water + methanol is also investigated.
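
    The osmotic-pressure route to the solvent activity rests on a classical thermodynamic relation; one standard form, assuming an incompressible solvent with partial molar volume V̄s, is:

    ```latex
    \ln a_{\mathrm{s}} \;=\; -\,\frac{\Pi\, \bar{V}_{\mathrm{s}}}{R\,T}
    ```

    Here Π is the osmotic pressure, R the gas constant and T the temperature; the activity coefficient then follows from γs = as/xs.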

  18. A bicycle safety index for evaluating urban street facilities.

    PubMed

    Asadi-Shekari, Zohreh; Moeinaddini, Mehdi; Zaly Shah, Muhammad

    2015-01-01

    The objectives of this research are to conceptualize a Bicycle Safety Index (BSI) that considers all parts of the street and to propose a universal guideline with microscale details. A point-system method comparing existing safety facilities to a defined standard is proposed to estimate the BSI. Two streets, in Singapore and Malaysia, are chosen to examine the model. Most previous measures for evaluating street conditions for cyclists cannot cover all parts of the street, including both segments and intersections, and previous models did not consider all safety indicators and cycling facilities, particularly at the micro level. This study introduces a new concept of a practical BSI that complements previous studies with practical, easy-to-follow, point-system-based outputs. The model can be used in different urban settings to estimate the level of safety for cycling and to suggest improvements based on the standards.
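
    A point-system index of this kind can be sketched in a few lines: score each street element against a guideline value and normalise. The indicator names and values below are hypothetical, not the published BSI:

    ```python
    # Hypothetical guideline values for a few safety indicators.
    STANDARD = {"bike_lane_width_m": 1.8, "buffer_m": 0.6, "intersection_marking": 1.0}

    def safety_index(street):
        """Fraction of the standard met, averaged over indicators (0..1)."""
        pts = sum(min(street.get(k, 0) / v, 1.0) for k, v in STANDARD.items())
        return pts / len(STANDARD)

    print(safety_index({"bike_lane_width_m": 1.5, "buffer_m": 0.6}))
    ```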

  19. When the Mannequin Dies, Creation and Exploration of a Theoretical Framework Using a Mixed Methods Approach.

    PubMed

    Tripathy, Shreepada; Miller, Karen H; Berkenbosch, John W; McKinley, Tara F; Boland, Kimberly A; Brown, Seth A; Calhoun, Aaron W

    2016-06-01

    Controversy exists in the simulation community as to the emotional and educational ramifications of mannequin death due to learner action or inaction. No theoretical framework to guide future investigations of learner actions currently exists. The purpose of our study was to generate a model of the learner experience of mannequin death using a mixed methods approach. The study consisted of an initial focus group phase composed of 11 learners who had previously experienced mannequin death due to action or inaction on the part of learners as defined by Leighton (Clin Simul Nurs. 2009;5(2):e59-e62). Transcripts were analyzed using grounded theory to generate a list of relevant themes that were further organized into a theoretical framework. With the use of this framework, a survey was generated and distributed to additional learners who had experienced mannequin death due to action or inaction. Results were analyzed using a mixed methods approach. Forty-one clinicians completed the survey. A correlation was found between the emotional experience of mannequin death and degree of presession anxiety (P < 0.001). Debriefing was found to significantly reduce negative emotion and enhance satisfaction. Sixty-nine percent of respondents indicated that mannequin death enhanced learning. These results were used to modify our framework. Using the previous approach, we created a model of the effect of mannequin death on the educational and psychological state of learners. We offer the final model as a guide to future research regarding the learner experience of mannequin death.

  20. Blurred image recognition by legendre moment invariants

    PubMed Central

    Zhang, Hui; Shu, Huazhong; Han, Guo-Niu; Coatrieux, Gouenou; Luo, Limin; Coatrieux, Jean-Louis

    2010-01-01

    Processing blurred images is a key problem in many image applications. Existing methods to obtain blur invariants which are invariant with respect to centrally symmetric blur are based on geometric moments or complex moments. In this paper, we propose a new method to construct a set of blur invariants using the orthogonal Legendre moments. Some important properties of Legendre moments for the blurred image are presented and proved. The performance of the proposed descriptors is evaluated with various point-spread functions and different image noises. The comparison of the present approach with previous methods in terms of pattern recognition accuracy is also provided. The experimental results show that the proposed descriptors are more robust to noise and have better discriminative power than the methods based on geometric or complex moments. PMID:19933003
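
    For reference, the orthogonal Legendre moments on which the invariants are built are conventionally defined, for an image f supported on [-1, 1]², as:

    ```latex
    \lambda_{pq} \;=\; \frac{(2p+1)(2q+1)}{4}
    \int_{-1}^{1}\!\!\int_{-1}^{1} P_p(x)\, P_q(y)\, f(x, y)\, dx\, dy
    ```

    where P_p is the Legendre polynomial of degree p; the blur model considered above is the convolution g = f * h with a centrally symmetric point-spread function h.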

  1. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as those for color calibration and profiling, "patch codes" have been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The tradeoff between decoding robustness and the number of available code levels is optimized in terms of printing and measurement effort, and of decoding robustness against noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
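
    The CIE Lab placement idea can be illustrated on a 1-D slice: spread the levels along L* and compare the worst-case separation with a device-noise estimate. All numbers below are illustrative assumptions, not the paper's design procedure:

    ```python
    import numpy as np

    # Place K code levels uniformly in L* (a 1-D slice of Lab); in this slice
    # the Euclidean Lab distance between neighbours is just the L* difference.
    K = 8
    levels_L = np.linspace(15.0, 95.0, K)     # candidate L* values
    delta_e = np.diff(levels_L)               # neighbour separations (dE units)
    sigma_noise = 2.0                         # assumed print+measure noise (dE)
    print("min separation / noise:", delta_e.min() / sigma_noise)
    ```

    A full design would spread levels in all three Lab dimensions and use a measured device characterization in place of the assumed noise figure.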

  2. Integrated Path Differential Absorption Lidar Optimizations Based on Pre-Analyzed Atmospheric Data for ASCENDS Mission Applications

    NASA Technical Reports Server (NTRS)

    Pliutau, Denis; Prasad, Narasimha S.

    2012-01-01

    In this paper a modeling method based on data reduction is investigated, which uses pre-analyzed MERRA atmospheric fields for quantitative estimates of the uncertainties introduced in integrated path differential absorption methods for the sensing of various molecules, including CO2. This approach extends our previously developed lidar modeling framework and allows effective on- and offline wavelength optimization and weighting-function analysis to minimize interference effects such as those due to temperature sensitivity and water vapor absorption. The new simulation methodology differs from the previous implementation in that it allows analysis of atmospheric effects over annual spans and full Earth coverage, which was achieved through the data reduction methods employed. The effectiveness of the proposed simulation approach is demonstrated with application to mixing ratio retrievals for the future ASCENDS mission. Independent analysis of multiple accuracy-limiting factors, including temperature, water vapor interference, and selected system parameters, is further used to identify favorable spectral regions as well as wavelength combinations facilitating the reduction of total errors in the retrieved XCO2 values.

  3. Wavelet based detection of manatee vocalizations

    NASA Astrophysics Data System (ADS)

    Gur, Berke M.; Niezrecki, Christopher

    2005-04-01

    The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
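
    A toy version of such a detector, thresholding the energy of one detail band of a discrete wavelet decomposition, could look as follows; the wavelet family, level, band and threshold are all assumptions rather than the study's settings:

    ```python
    import numpy as np
    import pywt

    def wavelet_call_detector(frame, threshold, wavelet="db4", level=4, band=2):
        """Flag an audio frame as a possible manatee call when the energy of
        one detail band of the DWT exceeds a threshold (illustrative sketch)."""
        coeffs = pywt.wavedec(np.asarray(frame, float), wavelet, level=level)
        energy = float(np.sum(coeffs[band] ** 2))   # energy in the chosen band
        return energy > threshold, energy
    ```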

  4. Transcutaneous closure of chronic broncho-pleuro-cutaneous fistula by duct occluder device

    PubMed Central

    Marwah, Vikas; Ravikumar, R; Rajput, Ashok Kumar; Singh, Amandeep

    2016-01-01

    Bronchopleural fistula (BPF) is a well-known complication of several pulmonary conditions, posing a challenging management problem, and is often associated with high morbidity and mortality. Though no consensus exists on a definite closure management algorithm, strategies for closure include various methods such as tube thoracostomy with suction, open surgical closure, bronchoscopically directed glue, coiling and sealants, and now also the use of occlusion devices. We report a case in which a recurrent post-operative broncho-pleuro-cutaneous fistula was closed transcutaneously by a novel method of delivery of a duct occluder device, an approach not previously described in the literature. PMID:27051115

  5. Erratum: Denoising Phase Unwrapping Algorithm for Precise Phase Shifting Interferometry

    NASA Astrophysics Data System (ADS)

    Phuc, Phan Huy; Rhee, Hyug-Gyo; Ghim, Young-Sik

    2018-06-01

    This is a revision of the reference list reported in the original article. In order to clarify the contribution of the previous work on the incremental breadth-first search (IBFS) method applied to the PUMA algorithm, we add one more reference to the existing reference list, as in this erratum. Page 83: In this paper, we propose an algorithm that modifies the Boykov-Kolmogorov (BK) algorithm using the incremental breadth-first search (IBFS) method [27, 28] to find paths from the source to the sink of a graph. [28] S. Ali, H. Khan, I. Shaik and F. Ali, Int. J. Eng. and Technol. 7, 254 (2015).

  6. Revisiting the Schönbein ozone measurement methodology

    NASA Astrophysics Data System (ADS)

    Ramírez-González, Ignacio A.; Añel, Juan A.; Saiz-López, Alfonso; García-Feal, Orlando; Cid, Antonio; Mejuto, Juan Carlos; Gimeno, Luis

    2017-04-01

    Through the 19th century the Schönbein method gained considerable popularity as an easy way to measure tropospheric ozone. Traditionally it has been considered that Schönbein measurements are not accurate enough to be useful; detractors of the method argue that it is sensitive to meteorological conditions, the most important being the influence of relative humidity. As a consequence, the data obtained by this method have usually been discarded. Here we revisit this method, taking into account that values measured during the 19th century were taken using different measurement papers. We explore several concentrations of starch and potassium iodide, the basis of this measurement method. Our results are compared with previous results in the literature. The validity of the Schönbein methodology is discussed, taking into account humidity and other meteorological variables.

  7. On piecewise interpolation techniques for estimating solar radiation missing values in Kedah

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saaban, Azizan; Zainudin, Lutfi; Bakar, Mohd Nazari Abu

    2014-12-04

    This paper discusses the use of a piecewise interpolation method based on cubic Ball and Bézier curve representations to estimate missing values of solar radiation in Kedah. An hourly solar radiation dataset was collected at Alor Setar Meteorology Station and obtained from the Malaysian Meteorological Department. The piecewise cubic Ball and Bézier functions that interpolate the data points are defined on each hourly interval of solar radiation measurement and are obtained by prescribing first-order derivatives at the start and end of each interval. We compare the performance of our proposed method with existing methods using the Root Mean Squared Error (RMSE) and the Coefficient of Determination (CoD), based on simulated missing-value datasets. The results show that our method outperforms the previous methods.
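
    The construction the abstract describes, matching endpoint values and first derivatives on each hourly interval, corresponds to the standard Hermite-to-Bézier conversion sketched below (the cubic Ball variant differs only in the basis used); the sample values are hypothetical:

    ```python
    import numpy as np

    def bezier_segment(y0, y1, m0, m1, h):
        """Cubic Bezier control ordinates matching endpoint values y0, y1 and
        first derivatives m0, m1 on an interval of width h."""
        return np.array([y0, y0 + h * m0 / 3.0, y1 - h * m1 / 3.0, y1])

    def bezier_eval(P, u):
        """Evaluate the cubic Bernstein form at u in [0, 1]."""
        b = np.array([(1 - u)**3, 3*u*(1 - u)**2, 3*u**2*(1 - u), u**3])
        return P @ b

    # Hypothetical hourly endpoints (W/m^2) and slopes; estimate mid-interval.
    P = bezier_segment(y0=120.0, y1=310.0, m0=40.0, m1=25.0, h=1.0)
    print(bezier_eval(P, 0.5))
    ```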

  8. Differentially Private Histogram Publication For Dynamic Datasets: An Adaptive Sampling Approach

    PubMed Central

    Li, Haoran; Jiang, Xiaoqian; Xiong, Li; Liu, Jinfei

    2016-01-01

    Differential privacy has recently become a de facto standard for private statistical data release. Many algorithms have been proposed to generate differentially private histograms or synthetic data. However, most of them focus on “one-time” release of a static dataset and do not adequately address the increasing need to release series of dynamic datasets in real time. A straightforward application of existing histogram methods on each snapshot of such dynamic datasets will incur high accumulated error due to the composability of differential privacy and correlations or overlapping users between the snapshots. In this paper, we address the problem of releasing series of dynamic datasets in real time with differential privacy, using a novel adaptive distance-based sampling approach. Our first method, DSFT, uses a fixed distance threshold and releases a differentially private histogram only when the current snapshot is sufficiently different from the previous one, i.e., with a distance greater than a predefined threshold. Our second method, DSAT, further improves DSFT and uses a dynamic threshold adaptively adjusted by a feedback control mechanism to capture the data dynamics. Extensive experiments on real and synthetic datasets demonstrate that our approach achieves better utility than baseline methods and existing state-of-the-art methods. PMID:26973795
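
    The fixed-threshold variant (DSFT) can be sketched as follows, with Laplace noise at unit sensitivity and deliberately simplified budget accounting and distance testing; this illustrates the release logic, not the paper's algorithm in full:

    ```python
    import numpy as np

    def dsft_release(snapshots, eps, threshold):
        """Publish a fresh Laplace-noised histogram only when the current
        snapshot differs from the last release by more than a threshold;
        otherwise re-publish the previous release (DSFT-like sketch)."""
        out, last = [], None
        for hist in snapshots:
            if last is None or np.abs(hist - last).sum() > threshold:
                last = hist + np.random.laplace(scale=1.0 / eps, size=hist.shape)
            out.append(last)
        return out
    ```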

  9. Manifold Regularized Experimental Design for Active Learning.

    PubMed

    Zhang, Lining; Shum, Hubert P H; Shao, Ling

    2016-12-02

    Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples for alleviating the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training data are small-sized. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel method of active learning called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation for the selected samples to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.

  10. The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau EquationII. Contraction Methods

    NASA Astrophysics Data System (ADS)

    Ginibre, J.; Velo, G.

    We continue the study of the initial value problem for the complex Ginzburg-Landau equation (with a > 0, b > 0, g ≥ 0) in ℝⁿ initiated in a previous paper [I]. We treat the case where the initial data and the solutions belong to local uniform spaces, more precisely to spaces of functions satisfying local regularity conditions and uniform bounds in local norms, but no decay conditions (or arbitrarily weak decay conditions) at infinity in ℝⁿ. In [I] we used compactness methods and an extended version of recent local estimates [3] and proved in particular the existence of solutions globally defined in time with local regularity of the initial data corresponding to the spaces L^r for r ≥ 2 or H^1. Here we treat the same problem by contraction methods. This allows us in particular to prove that the solutions obtained in [I] are unique under suitable subcriticality conditions, and to obtain for them additional regularity properties and uniform bounds. The method extends some of those previously applied to the nonlinear heat equation in global spaces to the framework of local uniform spaces.
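
    For orientation, a commonly studied form of the complex Ginzburg-Landau equation is given below; the correspondence of these parameter names to the (a, b, g) of the abstract is an assumption, as the paper's own notation is not reproduced here:

    ```latex
    \partial_t u \;=\; \gamma\, u \;+\; (a + i\alpha)\,\Delta u \;-\; (b + i\beta)\,|u|^{2\sigma}\, u
    ```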

  11. Washing with contaminated bar soap is unlikely to transfer bacteria.

    PubMed Central

    Heinze, J. E.; Yackovich, F.

    1988-01-01

    Recent reports of the isolation of microorganisms from used soap bars have raised the concern that bacteria may be transferred from contaminated soap bars during handwashing. Since only one study addressing this question has been published, we developed an additional procedure to test this concern. In our new method, prewashed and softened commercial deodorant soap bars (0.8% triclocarban, which is not active against Gram-negative bacteria) were inoculated with Escherichia coli and Pseudomonas aeruginosa to give mean total survival levels of 4.4 × 10⁵ c.f.u. per bar, 70-fold higher than those reported on used soap bars. Sixteen panelists were instructed to wash with the inoculated bars using their normal handwashing procedure. After washing, none of the 16 panelists had detectable levels of either test bacterium on their hands. Thus, the results obtained using our new method were in complete agreement with those obtained with the previously published method, even though the two methods differ in a number of procedural aspects. These findings, along with other published reports, show that little hazard exists in routine handwashing with previously used soap bars and support the frequent use of soap and water for handwashing to prevent the spread of disease. PMID:3402545

  12. Model Predictive Control considering Reachable Range of Wheels for Leg / Wheel Mobile Robots

    NASA Astrophysics Data System (ADS)

    Suzuki, Naito; Nonaka, Kenichiro; Sekiguchi, Kazuma

    2016-09-01

    Obstacle avoidance is one of the important tasks for mobile robots. In this paper, we study obstacle avoidance control for mobile robots equipped with four legs, each comprising a three-DoF SCARA leg/wheel mechanism, which enables the robot to change its shape to adapt to the environment. Our previous method achieves obstacle avoidance by model predictive control (MPC) considering obstacle size and lateral wheel positions. However, that method does not ensure the existence of joint angles that achieve the reference wheel positions calculated by the MPC. In this study, we propose a model predictive control that considers the reachable ranges of the wheel positions by combining multiple linear constraints, where each reachable range is approximated as a convex trapezoid. We thus formulate the MPC as a quadratic program with linear constraints for the nonlinear problem of longitudinal and lateral wheel position control. The MPC optimization yields the reference wheel positions, while each joint angle is determined by inverse kinematics. By considering the reachable ranges explicitly, the optimal joint angles are calculated, which enables the wheels to reach the reference wheel positions. We verify the advantages of the proposed method by comparing it with the previous method through numerical simulations.
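
    One step of the wheel-position QP can be sketched with a generic convex solver: a quadratic tracking cost subject to four half-planes approximating the trapezoidal reachable range. The matrices and reference below are illustrative numbers, not the robot's model:

    ```python
    import cvxpy as cp
    import numpy as np

    p = cp.Variable(2)                         # wheel position (x, y) in m
    p_ref = np.array([0.30, 0.10])             # reference from the planner (assumed)
    A = np.array([[ 1.0,  0.0],                # four half-planes A p <= b forming
                  [-1.0,  0.0],                # a convex trapezoid around the
                  [ 0.0,  1.0],                # reachable range (illustrative)
                  [ 0.3, -1.0]])
    b = np.array([0.35, 0.05, 0.15, 0.02])

    prob = cp.Problem(cp.Minimize(cp.sum_squares(p - p_ref)), [A @ p <= b])
    prob.solve()
    print(p.value)                             # joint angles then follow by IK
    ```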

  13. Quantifying construction and demolition waste: An analytical review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zezhou; Yu, Ann T.W., E-mail: bsannyu@polyu.edu.hk; Shen, Liyin

    2014-09-15

    Highlights: • Prevailing C and D waste quantification methodologies are identified and compared. • One specific methodology cannot fulfill all waste quantification scenarios. • A relevance tree for appropriate quantification methodology selection is proposed. • More attention should be paid to civil and infrastructural works. • Classified information is suggested for making an effective waste management plan. - Abstract: Quantifying construction and demolition (C and D) waste generation is regarded as a prerequisite for the implementation of successful waste management. In the literature, various methods have been employed to quantify C and D waste generation at both regional and project levels. However, an integrated review that systematically describes and analyses all the existing methods has yet to be conducted. To bridge this research gap, an analytical review is conducted. Fifty-seven papers are retrieved based on a set of rigorous procedures. The characteristics of the selected papers are classified according to the following criteria: waste generation activity, estimation level and quantification methodology. Six categories of existing C and D waste quantification methodologies are identified, including the site visit method, waste generation rate method, lifetime analysis method, classification system accumulation method, variables modelling method and other particular methods. A critical comparison of the identified methods is given according to their characteristics and implementation constraints. Moreover, a decision tree is proposed for aiding the selection of the most appropriate quantification method in different scenarios. Based on the analytical review, limitations of previous studies and recommendations of potential future research directions are further suggested.

  14. A Method of Retrospective Computerized System Validation for Drug Manufacturing Software Considering Modifications

    NASA Astrophysics Data System (ADS)

    Takahashi, Masakazu; Fukue, Yoshinori

    This paper proposes a Retrospective Computerized System Validation (RCSV) method for Drug Manufacturing Software (DMSW) that takes software modification into account. Because DMSW used for quality management and facility control has a large impact on drug quality, regulatory agencies require proof of the adequacy of DMSW functions and performance based on development documents and test results. In particular, the work of demonstrating the adequacy of previously developed DMSW based on existing documents and operational records is called RCSV. When modifying DMSW that has already undergone RCSV, it was difficult to secure consistency between the development documents and test results for the modified DMSW parts and the existing documents and operational records for the non-modified parts. This made conducting RCSV difficult. In this paper, we propose (a) a definition of the document architecture, (b) a definition of the descriptive items and levels in the documents, (c) management of design information using a database, (d) exhaustive testing, and (e) an integrated RCSV procedure. As a result, we could conduct adequate RCSV while securing consistency.

  15. Cardiac Rehabilitation Online Pilot: Extending Reach of Cardiac Rehabilitation.

    PubMed

    Higgins, Rosemary O; Rogerson, Michelle; Murphy, Barbara M; Navaratnam, Hema; Butler, Michael V; Barker, Lauren; Turner, Alyna; Lefkovits, Jeffrey; Jackson, Alun C

    While cardiac rehabilitation (CR) is recommended for all patients after an acute cardiac event, limitations exist in its reach. The purpose of the current study was to develop and pilot a flexible online CR program based on self-management principles, "Help Yourself Online." The program was designed as an alternative to group-based CR as well as a complement to traditional CR, and was based on self-management resources developed previously by the Heart Research Centre. Twenty-one patients admitted to Cabrini Health for an acute cardiac event were recruited to test the program, which was evaluated using qualitative and quantitative methods. Quantitative results demonstrated that patients believed the program would assist them in their self-management. Qualitative evaluation, using focus group and interview methods with 15 patients, showed that patients perceived the online CR approach to be a useful instrument for self-management. Broader implications of the data include the acceptability of the intervention, the timing of intervention delivery, and patients' desire for additional online community support.

  16. A general method for the inclusion of radiation chemistry in astrochemical models.

    PubMed

    Shingledecker, Christopher N; Herbst, Eric

    2018-02-21

    In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid phase chemistry. Such a theory can help increase the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks for which little radiochemical data exist; however, the method can also be used as a starting point for considering better studied species. We here apply our theory to the irradiation of H2O ice and compare the results with previous experimental data.

  17. SLIC superpixels compared to state-of-the-art superpixel methods.

    PubMed

    Achanta, Radhakrishna; Shaji, Appu; Smith, Kevin; Lucchi, Aurelien; Fua, Pascal; Süsstrunk, Sabine

    2012-11-01

    Computer vision applications have come to rely increasingly on superpixels in recent years, but it is not always clear what constitutes a good superpixel algorithm. In an effort to understand the benefits and drawbacks of existing methods, we empirically compare five state-of-the-art superpixel algorithms for their ability to adhere to image boundaries, speed, memory efficiency, and their impact on segmentation performance. We then introduce a new superpixel algorithm, simple linear iterative clustering (SLIC), which adapts a k-means clustering approach to efficiently generate superpixels. Despite its simplicity, SLIC adheres to boundaries as well as or better than previous methods. At the same time, it is faster and more memory efficient, improves segmentation performance, and is straightforward to extend to supervoxel generation.
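
    The SLIC algorithm described above has a widely used open implementation; a minimal sketch with scikit-image, using illustrative parameter values:

    ```python
    # Generating SLIC superpixels with scikit-image's implementation.
    from skimage import color, data, segmentation

    image = data.astronaut()                        # sample RGB image
    labels = segmentation.slic(image, n_segments=250, compactness=10.0,
                               start_label=1)       # k-means in (L, a, b, x, y) space
    averaged = color.label2rgb(labels, image, kind="avg")  # mean color per superpixel
    ```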

  18. Terrain and refractivity effects on non-optical paths

    NASA Astrophysics Data System (ADS)

    Barrios, Amalia E.

    1994-07-01

    The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has previously been shown to compare favorably with measured data and other existing terrain models, two alternative methods to model radiowave propagation over terrain, implemented within TPEM, are presented that give a two- to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.
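
    For readers unfamiliar with the split-step approach, each range step alternates a Fourier-domain diffraction step with a spatial-domain refraction phase screen. The sketch below is a generic narrow-angle SSPE marching step, not TPEM's terrain-handling code:

    ```python
    import numpy as np

    def sspe_step(u, dx, k0, n, dz):
        """One range step of a narrow-angle split-step parabolic equation.

        u  : complex field sampled on a uniform height grid
        dx : range step (m)
        k0 : free-space wavenumber (rad/m)
        n  : refractive index profile on the same height grid
        dz : height grid spacing (m)
        """
        p = 2.0 * np.pi * np.fft.fftfreq(u.size, d=dz)        # vertical wavenumbers
        u = np.fft.ifft(np.fft.fft(u) * np.exp(-1j * p**2 * dx / (2.0 * k0)))
        return u * np.exp(1j * k0 * (n**2 - 1.0) / 2.0 * dx)  # refraction phase screen
    ```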

  19. The plant Polycomb repressive complex 1 (PRC1) existed in the ancestor of seed plants and has a complex duplication history.

    PubMed

    Berke, Lidija; Snel, Berend

    2015-03-13

    Polycomb repressive complex 1 (PRC1) is an essential protein complex for plant development. It catalyzes ubiquitination of histone H2A that is an important part of the transcription repression machinery. Absence of PRC1 subunits in Arabidopsis thaliana plants causes severe developmental defects. Many aspects of the plant PRC1 are elusive, including its origin and phylogenetic distribution. We established the evolutionary history of the plant PRC1 subunits (LHP1, Ring1a-b, Bmi1a-c, EMF1, and VRN1), enabled by sensitive phylogenetic methods and newly sequenced plant genomes from previously unsampled taxonomic groups. We showed that all PRC1 core subunits exist in gymnosperms, earlier than previously thought, and that VRN1 is a recent addition, found exclusively in eudicots. The retention of individual subunits in chlorophytes, mosses, lycophytes and monilophytes indicates that they can moonlight as part of other complexes or processes. Moreover, we showed that most PRC1 subunits underwent a complex, duplication-rich history that differs significantly between Brassicaceae and other eudicots. PRC1 existed in the last common ancestor of seed plants where it likely played an important regulatory role, aiding their radiation. The presence of LHP1, Ring1 and Bmi1 in mosses, lycophytes and monilophytes also suggests the presence of a primitive yet functional PRC1.

  20. A topological proof of chaos for two nonlinear heterogeneous triopoly game models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pireddu, Marina, E-mail: marina.pireddu@unimib.it

    We rigorously prove the existence of chaotic dynamics for two nonlinear Cournot triopoly game models with heterogeneous players, for which the presence of complex phenomena and strange attractors has previously been shown in the literature via numerical simulations. In the first model that we analyze, costs are linear but the demand function is isoelastic, while, in the second model, the demand function is linear and production costs are quadratic. As concerns the decisional mechanisms adopted by the firms, in both models one firm adopts a myopic adjustment mechanism, considering the marginal profit of the last period; the second firm maximizes its own expected profit under the assumption that the competitors' production levels will not vary with respect to the previous period; the third firm acts adaptively, changing its output proportionally to the difference between its own output in the previous period and the naive expectation value. The topological method we employ in our analysis is the so-called "Stretching Along the Paths" technique, based on the Poincaré-Miranda Theorem and the properties of the cutting surfaces, which allows one to prove the existence of a semi-conjugacy between the system under consideration and the Bernoulli shift, so that the former inherits from the latter several crucial chaotic features, among which is a positive topological entropy.

  1. Robust and Imperceptible Watermarking of Video Streams for Low Power Devices

    NASA Astrophysics Data System (ADS)

    Ishtiaq, Muhammad; Jaffar, M. Arfan; Khan, Muhammad A.; Jan, Zahoor; Mirza, Anwar M.

    With the advent of the internet, every aspect of life is going online. From online working to watching videos, everything is now available on the internet. With the greater business benefits, increased availability and other advantages of being online comes the major challenge of data security and ownership. Videos downloaded from an online store can easily be shared among non-intended or unauthorized users. Invisible watermarking is used to hide copyright protection information in videos. Existing watermarking methods are limited in robustness and imperceptibility, and their computational complexity does not suit low-power devices. In this paper, we propose a new method to address the problems of robustness and imperceptibility. Experiments have shown that our method has better robustness and imperceptibility and is more computationally efficient than previous approaches in practice. Hence, our method can easily be applied on low-power devices.

  2. Statistical Algorithms Accounting for Background Density in the Detection of UXO Target Areas at DoD Munitions Sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzke, Brett D.; Wilson, John E.; Hathaway, J.

    2008-02-12

    Statistically defensible methods are presented for developing geophysical detector sampling plans and analyzing data for munitions response sites where unexploded ordnance (UXO) may exist. Detection methods for distinguishing areas of elevated anomaly density from background density are shown. Additionally, methods are described which aid in the choice of transect pattern and spacing to assure, with a specified degree of confidence, that a target area (TA) of specific size, shape, and anomaly density will be identified using the detection methods. Methods for evaluating the sensitivity of designs to variation in certain parameters are also discussed. The methods presented have been incorporated into the Visual Sample Plan (VSP) software (free at http://dqo.pnl.gov/vsp) and demonstrated at multiple sites in the United States. Application examples from actual transect designs and surveys from the previous two years are presented.

  3. The complexity of classical music networks

    NASA Astrophysics Data System (ADS)

    Rolla, Vitor; Kestenberg, Juliano; Velho, Luiz

    2018-02-01

    Previous works suggest that musical networks often present the scale-free and the small-world properties. From a musician's perspective, the most important aspect missing in those studies was harmony. In addition to that, the previous works made use of outdated statistical methods. Traditionally, least-squares linear regression is utilised to fit a power law to a given data set. However, according to Clauset et al. such a traditional method can produce inaccurate estimates for the power law exponent. In this paper, we present an analysis of musical networks which considers the existence of chords (an essential element of harmony). Here we show that only 52.5% of music in our database presents the scale-free property, while 62.5% of those pieces present the small-world property. Previous works argue that music is highly scale-free; consequently, it sounds appealing and coherent. In contrast, our results show that not all pieces of music present the scale-free and the small-world properties. In summary, this research is focused on the relationship between musical notes (Do, Re, Mi, Fa, Sol, La, Si, and their sharps) and accompaniment in classical music compositions. More information about this research project is available at https://eden.dei.uc.pt/~vitorgr/MS.html.
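
    The Clauset et al. procedure alluded to above (maximum-likelihood estimation of the exponent with a principled choice of lower cutoff, instead of least-squares fitting) is available in the `powerlaw` Python package. A minimal sketch, where `degree_sequence` is an assumed list of node degrees from a musical network:

    ```python
    import powerlaw

    degrees = [d for d in degree_sequence if d > 0]   # degree_sequence assumed given
    fit = powerlaw.Fit(degrees, discrete=True)        # MLE fit per Clauset et al.
    print(fit.power_law.alpha, fit.power_law.xmin)
    # Likelihood-ratio comparison against an alternative heavy-tailed model:
    R, p = fit.distribution_compare("power_law", "lognormal")
    ```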

  4. A visualization framework for design and evaluation

    NASA Astrophysics Data System (ADS)

    Blundell, Benjamin J.; Ng, Gary; Pettifer, Steve

    2006-01-01

    The creation of compelling visualisation paradigms is a craft often dominated by intuition and issues of aesthetics, with relatively few models to support good design. The majority of problem cases are approached by simply applying a previously evaluated visualisation technique. A large body of work exists covering individual aspects of visualisation design, such as human cognition aspects, visualisation methods for specific problem areas, psychology studies and so forth, yet most frameworks regarding visualisation are applied after the fact as an evaluation measure. We present an extensible framework for visualisation aimed at structuring the design process, increasing decision traceability and delineating the notions of function, aesthetics and usability. The framework can be used to derive a set of requirements for good visualisation design and to evaluate existing visualisations, presenting possible improvements. Our framework achieves this by being both broad and general, built on top of existing works, with hooks for extensions and customisations. This paper shows how existing theories of information visualisation fit into the scheme, presents our experience in the application of this framework to several designs, and offers our evaluation of the framework and the designs studied.

  5. Launch team training system

    NASA Technical Reports Server (NTRS)

    Webb, J. T.

    1988-01-01

    A new approach to the training, certification, recertification, and proficiency maintenance of the Shuttle launch team is proposed. Previous training approaches are first reviewed. Short term program goals include expanding current training methods, improving the existing simulation capability, and scheduling training exercises with the same priority as hardware tests. Long-term goals include developing user requirements which would take advantage of state-of-the-art tools and techniques. Training requirements for the different groups of people to be trained are identified, and future goals are outlined.

  6. Research on Capturing of Customer Requirements Based on Innovation Theory

    NASA Astrophysics Data System (ADS)

    junwu, Ding; dongtao, Yang; zhenqiang, Bao

    To capture customer requirements information exactly and effectively, a new customer requirements capturing and modeling method is proposed. Based on the analysis of the functional requirement models of previous products and the application of the technology system evolution laws of the Theory of Inventive Problem Solving (TRIZ), customer requirements can be evolved from existing product designs by modifying the functional requirement units and confirming the direction of evolutionary design. Finally, a case study is provided to illustrate the feasibility of the proposed approach.

  7. Aids in designing laboratory flumes

    USGS Publications Warehouse

    Williams, Garnett P.

    1971-01-01

    The upsurge of interest in our environment has caused research and instruction in the flow of water along open channels to become increasingly popular in universities and institutes. This, in turn, has brought a greater demand for properly-designed laboratory flumes. Whatever the reason for your interest, designing and building the flume will take a little preparation. You may choose a pattern exactly like a previous design, or you may follow the more time-consuming method of studying several existing flumes and combine the most desirable features of each.

  8. Spherical thin-shell wormholes and modified Chaplygin gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharif, M.; Azam, M., E-mail: msharif.math@pu.edu.pk, E-mail: azammath@gmail.com

    2013-05-01

    The purpose of this paper is to construct spherical thin-shell wormhole solutions through cut and paste technique and investigate the stability of these solutions in the vicinity of modified Chaplygin gas. The Darmois-Israel formalism is used to formulate the stresses of the surface concentrating the exotic matter. We explore the stability of the wormhole solutions by using the standard potential method. We conclude that there exist more stable as well as unstable solutions than the previous study with generalized Chaplygin gas [19].
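
    For context, the standard potential method mentioned above reduces the shell's equation of motion to an energy-like form; a generic sketch of the linearized criterion (not the paper's specific potential) is:

    ```latex
    % Linearized stability of a static throat radius a_0 in the potential method:
    \dot{a}^{2} + V(a) = 0, \qquad V(a_{0}) = V'(a_{0}) = 0,
    \qquad \text{linearly stable} \iff V''(a_{0}) > 0 .
    ```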

  9. Automated detection of pulmonary nodules in PET/CT images: Ensemble false-positive reduction using a convolutional neural network technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teramoto, Atsushi, E-mail: teramoto@fujita-hu.ac.jp; Fujita, Hiroshi; Yamamuro, Osamu

    Purpose: Automated detection of solitary pulmonary nodules using positron emission tomography (PET) and computed tomography (CT) images shows good sensitivity; however, it is difficult to detect nodules in contact with normal organs, and additional efforts are needed so that the number of false positives (FPs) can be further reduced. In this paper, the authors propose an improved FP-reduction method for the detection of pulmonary nodules in PET/CT images by means of convolutional neural networks (CNNs). Methods: The overall scheme detects pulmonary nodules using both CT and PET images. In the CT images, a massive region is first detected using an active contour filter, which is a type of contrast enhancement filter that has a deformable kernel shape. Subsequently, high-uptake regions detected by the PET images are merged with the regions detected by the CT images. FP candidates are eliminated using an ensemble method; it consists of two feature extractions, one by shape/metabolic feature analysis and the other by a CNN, followed by a two-step classifier, one step being rule based and the other being based on support vector machines. Results: The authors evaluated the detection performance using 104 PET/CT images collected by a cancer-screening program. The sensitivity in detecting candidates at an initial stage was 97.2%, with 72.8 FPs/case. After performing the proposed FP-reduction method, the sensitivity of detection was 90.1%, with 4.9 FPs/case; the proposed method eliminated approximately half the FPs existing in the previous study. Conclusions: An improved FP-reduction scheme using CNN technique has been developed for the detection of pulmonary nodules in PET/CT images. The authors’ ensemble FP-reduction method eliminated 93% of the FPs; their proposed method using CNN technique eliminates approximately half the FPs existing in the previous study. These results indicate that their method may be useful in the computer-aided detection of pulmonary nodules using PET/CT images.

  10. Predicting hot spots in protein interfaces based on protrusion index, pseudo hydrophobicity and electron-ion interaction pseudopotential features

    PubMed Central

    Xia, Junfeng; Yue, Zhenyu; Di, Yunqiang; Zhu, Xiaolei; Zheng, Chun-Hou

    2016-01-01

    The identification of hot spots, a small subset of protein interfaces that accounts for the majority of binding free energy, is becoming more important for research in drug design and cancer development. Based on our previous methods (APIS and KFC2), here we propose a novel hot spot prediction method. For each residue, we first constructed a wide variety of 108 sequence, structural, and neighborhood features to characterize potential hot spot residues, including conventional ones and a new one (pseudo hydrophobicity) introduced in this study. We then selected the 3 top-ranking features that contribute the most to the classification by a two-step feature selection process consisting of the minimal-redundancy-maximal-relevance algorithm and an exhaustive search method. We used support vector machines to build our final prediction model. When testing our model on an independent test set, our method showed the highest F1-score of 0.70 and MCC of 0.46 compared with the existing state-of-the-art hot spot prediction methods. Our results indicate that these features are more effective than the conventional features considered previously, and that the combination of our new and traditional features may support the creation of a discriminative feature set for efficient prediction of hot spots in protein interfaces. PMID:26934646
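
    The two-step pipeline described above (filter-style relevance ranking, an exhaustive search over small feature subsets, then an SVM) can be sketched with scikit-learn. Mutual-information ranking stands in for the mRMR criterion here, and `X`, `y` are an assumed feature matrix and hot-spot labels:

    ```python
    from itertools import combinations
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Step 1: rank features by relevance (stand-in for mRMR).
    scores = mutual_info_classif(X, y)            # X, y assumed given
    shortlist = np.argsort(scores)[::-1][:10]     # shortlist before exhaustive search

    # Step 2: exhaustive search over 3-feature subsets of the shortlist.
    best = max(combinations(shortlist, 3),
               key=lambda idx: cross_val_score(SVC(), X[:, list(idx)], y, cv=5).mean())
    model = SVC().fit(X[:, list(best)], y)
    ```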

  11. Space Suit Joint Torque Testing

    NASA Technical Reports Server (NTRS)

    Valish, Dana J.

    2011-01-01

    In 2009 and early 2010, a test was performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design meets the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future space suits. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis and a variance in torque values for some of the tested joints was apparent. Potential variables that could have affected the data were identified and re-testing was conducted in an attempt to eliminate these variables. The results of the retest will be used to determine if further testing and modification is necessary before the method can be validated.

  12. Evaluating Hierarchical Structure in Music Annotations

    PubMed Central

    McFee, Brian; Nieto, Oriol; Farbood, Morwaread M.; Bello, Juan Pablo

    2017-01-01

    Music exhibits structure at multiple scales, ranging from motifs to large-scale functional components. When inferring the structure of a piece, different listeners may attend to different temporal scales, which can result in disagreements when they describe the same piece. In the field of music informatics research (MIR), it is common to use corpora annotated with structural boundaries at different levels. By quantifying disagreements between multiple annotators, previous research has yielded several insights relevant to the study of music cognition. First, annotators tend to agree when structural boundaries are ambiguous. Second, this ambiguity seems to depend on musical features, time scale, and genre. Furthermore, it is possible to tune current annotation evaluation metrics to better align with these perceptual differences. However, previous work has not directly analyzed the effects of hierarchical structure because the existing methods for comparing structural annotations are designed for “flat” descriptions, and do not readily generalize to hierarchical annotations. In this paper, we extend and generalize previous work on the evaluation of hierarchical descriptions of musical structure. We derive an evaluation metric which can compare hierarchical annotations holistically across multiple levels. Using this metric, we investigate inter-annotator agreement on the multilevel annotations of two different music corpora, investigate the influence of acoustic properties on hierarchical annotations, and evaluate existing hierarchical segmentation algorithms against the distribution of inter-annotator agreement. PMID:28824514

  13. The Identification and Assessment of Late-life ADHD in Memory Clinics

    PubMed Central

    Fischer, Barbara L.; Gunter-Hunt, Gail; Steinhafel, Courtney Holm; Howell, Timothy

    2013-01-01

    INTRODUCTION Little data exists about attention deficit hyperactivity disorder (ADHD) in late life. While evaluating patients’ memory problems, our Memory Clinic staff has periodically identified ADHD in previously undiagnosed adults. We conducted a survey to assess the extent to which other memory clinics view ADHD as a relevant clinical issue. METHOD We developed and sent a questionnaire to memory clinics in the United States to ascertain how ADHD was identified and addressed. The measurements for this study were the percentages of responding memory clinics reporting various means of assessing and managing late-life ADHD. RESULTS Approximately one-half of responding memory clinics reported seeing ADHD patients. Of these, one-half reported identifying previously diagnosed cases, and almost one-half reported diagnosing ADHD themselves. One-fifth of clinics reported screening regularly for ADHD, and few clinics described treatment methods. CONCLUSION Our results suggest that U.S. memory clinics may not adequately identify and address ADHD in late life. PMID:22173147

  14. Maximal use of kinematic information for the extraction of the mass of the top quark in single-lepton tt̄ events at DØ

    NASA Astrophysics Data System (ADS)

    Estrada Vigil, Juan Cruz

    The mass of the top (t) quark has been measured in the lepton+jets channel of tt̄ final states studied by the DØ and CDF experiments at Fermilab using data from Run I of the Tevatron pp̄ collider. The result published by DØ is 173.3 ± 5.6 (stat) ± 5.5 (syst) GeV. We present a different method to perform this measurement using the existing data. The new technique uses all available kinematic information in an event, and provides a significantly smaller statistical uncertainty than achieved in previous analyses. The preliminary results presented in this thesis indicate a statistical uncertainty for the extracted mass of the top quark of 3.5 GeV, which represents a significant improvement over the previous value of 5.6 GeV. The method of analysis is very general, and may be particularly useful in situations where there is a small signal and a large background.
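
    The use of "all available kinematic information" is in the spirit of a matrix-element likelihood, in which a per-event probability is built from the differential cross section and detector transfer functions and then multiplied over events. Schematically (our notation, not necessarily the thesis's):

    ```latex
    P(x \mid m_t) = \frac{1}{\sigma(m_t)} \int \frac{d\sigma(y; m_t)}{dy}\, W(x, y)\, dy ,
    \qquad
    L(m_t) = \prod_{i=1}^{N} P(x_i \mid m_t) ,
    ```

    where x denotes the measured jet and lepton momenta, y the parton-level kinematics, and W(x, y) the detector transfer function; the mass estimate maximizes L(m_t).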

  15. DISSOCIATIVE RECOMBINATION MEASUREMENTS OF HCl⁺ USING AN ION STORAGE RING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novotný, O.; Stützel, J.; Savin, D. W.

    We have measured dissociative recombination (DR) of HCl⁺ with electrons using a merged beams configuration at the TSR heavy-ion storage ring located at the Max Planck Institute for Nuclear Physics in Heidelberg, Germany. We present the measured absolute merged beams recombination rate coefficient for collision energies from 0 to 4.5 eV. We have also developed a new method for deriving the cross section from the measurements. Our approach does not suffer from approximations made by previously used methods. The cross section was transformed to a plasma rate coefficient for the electron temperature range from T = 10 to 5000 K. We show that the previously used HCl⁺ DR data underestimate the plasma rate coefficient by a factor of 1.5 at T = 10 K and overestimate it by a factor of three at T = 300 K. We also find that the new data may partly explain existing discrepancies between observed abundances of chlorine-bearing molecules and their astrochemical models.
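
    The cross-section-to-plasma-rate transformation mentioned above is, in its standard form, a thermal average of the cross section over a Maxwell-Boltzmann electron energy distribution:

    ```latex
    \alpha(T) = \langle \sigma v \rangle
              = \int_{0}^{\infty} \sigma(E)\, v(E)\, f_{\mathrm{MB}}(E; T)\, dE ,
    \qquad
    f_{\mathrm{MB}}(E; T) = \frac{2}{\sqrt{\pi}}\,
                            \frac{\sqrt{E}}{(k_{\mathrm{B}}T)^{3/2}}\,
                            e^{-E / k_{\mathrm{B}} T} ,
    ```

    with v(E) = \sqrt{2E/m_e} the electron velocity.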

  16. Incorporating temporal and clinical reasoning in a new measure of continuity of care.

    PubMed Central

    Spooner, S. A.

    1994-01-01

    Previously described quantitative methods for measuring continuity of care have assumed that perfect continuity exists when a patient sees only one provider, regardless of the temporal pattern and clinical context of the visits. This paper describes an implementation of a new operational model of continuity--the Temporal Continuity Index (TCI)--that takes into account time intervals between well visits in a pediatric residency continuity clinic. Ideal continuity in this model is achieved when intervals between visits are appropriate based on the age of the patient and the clinical context of the encounters. The fundamental concept in this model is the expectation interval, which contains the length of the maximum ideal follow-up interval for a visit and the maximum follow-up interval. This paper describes an initial implementation of the TCI model, compares TCI calculations to previous quantitative methods, and proposes its use as part of the assessment of resident education in outpatient settings. PMID:7950019

  17. An automated detection for axonal boutons in vivo two-photon imaging of mouse

    NASA Astrophysics Data System (ADS)

    Li, Weifu; Zhang, Dandan; Xie, Qiwei; Chen, Xi; Han, Hua

    2017-02-01

    Activity-dependent changes in the synaptic connections of the brain are tightly related to learning and memory. Previous studies have shown that essentially all new synaptic contacts are made by adding new partners to existing synaptic elements. To further explore synaptic dynamics in specific pathways, concurrent imaging of pre- and postsynaptic structures in identified connections is required. Consequently, considerable attention has been paid to the automated detection of axonal boutons. Unlike most previous methods, which were proposed for in vitro data, this paper considers the more practical case of in vivo neuron images, which can provide real-time information and direct observation of the dynamics of a disease process in the mouse. We present an automated approach for detecting axonal boutons that starts by deconvolving the original images, then thresholds the enhanced images, and finally retains the regions fulfilling a series of criteria. Experimental results on in vivo two-photon imaging of the mouse demonstrate the effectiveness of our proposed method.
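
    The deconvolve-threshold-filter pipeline can be sketched with scikit-image; the point-spread function and the size criterion below are placeholders rather than the paper's calibrated values:

    ```python
    from skimage import filters, measure, restoration

    def detect_boutons(image, psf, min_area=5, max_area=200):
        """Sketch: deconvolve, threshold, keep regions meeting size criteria."""
        deconv = restoration.richardson_lucy(image, psf, num_iter=20)
        mask = deconv > filters.threshold_otsu(deconv)      # global threshold
        labels = measure.label(mask)
        return [r for r in measure.regionprops(labels, intensity_image=deconv)
                if min_area <= r.area <= max_area]          # placeholder criterion
    ```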

  18. Design issues for grid-connected photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Ropp, Michael Eugene

    1998-08-01

    Photovoltaics (PV) is the direct conversion of sunlight to electrical energy. In areas without centralized utility grids, the benefits of PV easily overshadow the present shortcomings of the technology. However, in locations with centralized utility systems, significant technical challenges remain before utility-interactive PV (UIPV) systems can be integrated into the mix of electricity sources. One challenge is that the computer design tools needed for optimal design of PV systems with curved PV arrays are not available, and even those that are available do not facilitate monitoring of the system once it is built. Another arises from the issue of islanding. Islanding occurs when a UIPV system continues to energize a section of a utility system after that section has been isolated from the utility voltage source. Islanding, which is potentially dangerous to both personnel and equipment, is difficult to prevent completely. The work contained within this thesis targets both of these technical challenges. In Task 1, a method for modeling a PV system with a curved PV array using only existing computer software is developed. This methodology also facilitates comparison of measured and modeled data for use in system monitoring. The procedure is applied to the Georgia Tech Aquatic Center (GTAC) PV system. In the work contained under Task 2, islanding prevention is considered. The existing state-of-the-art is thoroughly reviewed. In Subtask 2.1, an analysis is performed which suggests that standard protective relays are in fact insufficient to guarantee protection against islanding. In Subtask 2.2, several existing islanding prevention methods are compared in a novel way. The superiority of this new comparison over those used previously is demonstrated. A new islanding prevention method is the subject of Subtask 2.3. It is shown that it does not compare favorably with other existing techniques. However, in Subtask 2.4, a novel method for dramatically improving this new islanding prevention method is described. It is shown, both by computer modeling and experiment, that this new method is one of the most effective available today. Finally, under Subtask 2.5, the effects of certain types of loads on the effectiveness of islanding prevention methods are discussed.

  19. Fast Principal-Component Analysis Reveals Convergent Evolution of ADH1B in Europe and East Asia

    PubMed Central

    Galinsky, Kevin J.; Bhatia, Gaurav; Loh, Po-Ru; Georgiev, Stoyan; Mukherjee, Sayan; Patterson, Nick J.; Price, Alkes L.

    2016-01-01

    Searching for genetic variants with unusual differentiation between subpopulations is an established approach for identifying signals of natural selection. However, existing methods generally require discrete subpopulations. We introduce a method that infers selection using principal components (PCs) by identifying variants whose differentiation along top PCs is significantly greater than the null distribution of genetic drift. To enable the application of this method to large datasets, we developed the FastPCA software, which employs recent advances in random matrix theory to accurately approximate top PCs while reducing time and memory cost from quadratic to linear in the number of individuals, a computational improvement of many orders of magnitude. We apply FastPCA to a cohort of 54,734 European Americans, identifying 5 distinct subpopulations spanning the top 4 PCs. Using the PC-based test for natural selection, we replicate previously known selected loci and identify three new genome-wide significant signals of selection, including selection in Europeans at ADH1B. The coding variant rs1229984*T has previously been associated with a decreased risk of alcoholism and shown to be under selection in East Asians; we show that it is a rare example of independent evolution on two continents. We also detect selection signals at IGFBP3 and IGH, which have also previously been associated with human disease. PMID:26924531
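
    The linear-time PC computation underlying FastPCA belongs to the same family of randomized matrix approximations exposed by scikit-learn's randomized SVD solver. A minimal sketch, with `genotypes` an assumed (samples x variants) matrix:

    ```python
    from sklearn.decomposition import PCA

    # Randomized SVD approximates the top PCs with time and memory linear in
    # the number of individuals, the same class of algorithm FastPCA builds on.
    pca = PCA(n_components=4, svd_solver="randomized", random_state=0)
    pcs = pca.fit_transform(genotypes)   # genotypes assumed given
    ```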

  20. Modeling Aromatic Liquids:  Toluene, Phenol, and Pyridine.

    PubMed

    Baker, Christopher M; Grant, Guy H

    2007-03-01

    Aromatic groups are now acknowledged to play an important role in many systems of interest. However, existing molecular mechanics methods provide a poor representation of these groups. In a previous paper, we have shown that the molecular mechanics treatment of benzene can be improved by the incorporation of an explicit representation of the aromatic π electrons. Here, we develop this concept further, developing charge-separation models for toluene, phenol, and pyridine. Monte Carlo simulations are used to parametrize the models, via the reproduction of experimental thermodynamic data, and our models are shown to outperform an existing atom-centered model. The models are then used to make predictions about the structures of the liquids at the molecular level and are tested further through their application to the modeling of gas-phase dimers and cation-π interactions.

  1. Job Performance as Multivariate Dynamic Criteria: Experience Sampling and Multiway Component Analysis.

    PubMed

    Spain, Seth M; Miner, Andrew G; Kroonenberg, Pieter M; Drasgow, Fritz

    2010-08-06

    Questions about the dynamic processes that drive behavior at work have been the focus of increasing attention in recent years. Models describing behavior at work and research on momentary behavior indicate that substantial variation exists within individuals. This article examines the rationale behind this body of work and explores a method of analyzing momentary work behavior using experience sampling methods. The article also examines a previously unused set of methods for analyzing data produced by experience sampling. These methods are known collectively as multiway component analysis. Two archetypal techniques of multimode factor analysis, the Parallel factor analysis and the Tucker3 models, are used to analyze data from Miner, Glomb, and Hulin's (2010) experience sampling study of work behavior. The efficacy of these techniques for analyzing experience sampling data is discussed as are the substantive multimode component models obtained.
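
    Both multiway models named above have open implementations; a minimal sketch using the TensorLy library, with `X_array` an assumed persons x items x occasions data array:

    ```python
    import tensorly as tl
    from tensorly.decomposition import parafac, tucker

    X = tl.tensor(X_array)                           # X_array assumed given
    cp_weights, cp_factors = parafac(X, rank=3)      # Parallel factor analysis model
    core, tucker_factors = tucker(X, rank=(3, 2, 2)) # Tucker3 model
    ```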

  2. Ambient fine particulate air pollution triggers ST-elevation myocardial infarction, but not non-ST elevation myocardial infarction: a case-crossover study.

    PubMed

    Gardner, Blake; Ling, Frederick; Hopke, Philip K; Frampton, Mark W; Utell, Mark J; Zareba, Wojciech; Cameron, Scott J; Chalupa, David; Kane, Cathleen; Kulandhaisamy, Suresh; Topf, Michael C; Rich, David Q

    2014-01-02

    We and others have shown that increases in particulate air pollutant (PM) concentrations in the previous hours and days have been associated with increased risks of myocardial infarction, but little is known about the relationships between air pollution and specific subsets of myocardial infarction, such as ST-elevation myocardial infarction (STEMI) and non-ST-elevation myocardial infarction (NSTEMI). Using data from acute coronary syndrome patients with STEMI (n = 338) and NSTEMI (n = 339) and case-crossover methods, we estimated the risk of STEMI and NSTEMI associated with increased ambient fine particle (<2.5 μm) concentrations, ultrafine particle (10-100 nm) number concentrations, and accumulation mode particle (100-500 nm) number concentrations in the previous few hours and days. We found a significant 18% increase in the risk of STEMI associated with each 7.1 μg/m³ increase in PM₂.₅ concentration in the previous hour prior to acute coronary syndrome onset, with smaller, non-significantly increased risks associated with increased fine particle concentrations in the previous 3, 12, and 24 hours. We found no pattern with NSTEMI. Estimates of the risk of STEMI associated with interquartile range increases in ultrafine particle and accumulation mode particle number concentrations in the previous 1 to 96 hours were all greater than 1.0, but not statistically significant. Patients with pre-existing hypertension had a significantly greater risk of STEMI associated with increased fine particle concentration in the previous hour than patients without hypertension. Increased fine particle concentrations in the hour prior to acute coronary syndrome onset were associated with an increased risk of STEMI, but not NSTEMI. Patients with pre-existing hypertension and other cardiovascular disease appeared particularly susceptible. Further investigation into mechanisms by which PM can preferentially trigger STEMI over NSTEMI within this rapid time scale is needed.

  3. How High is that Dune? A Comparison of Methods Used to Constrain the Morphometry of Aeolian Bedforms on Mars

    NASA Technical Reports Server (NTRS)

    Bourke, M.; Balme, M.; Beyer, R. A.; Williams, K. K.

    2004-01-01

    Methods traditionally used to estimate the relative height of surface features on Mars include photoclinometry, shadow length, and stereography. The MOLA data set enables a more accurate assessment of the surface topography of Mars. However, many small-scale aeolian bedforms remain below the sample resolution of the MOLA data set. In response, a number of research teams have adopted and refined existing methods and applied them to high resolution (2-6 m/pixel) narrow angle MOC satellite images. Collectively, the methods provide data on a range of morphometric parameters, many not previously available for dunes on Mars, including dune height, width, length, surface area, volume, and longitudinal and cross profiles. These data will facilitate a more accurate analysis of aeolian bedforms on Mars. In this paper we undertake a comparative analysis of methods used to determine the height of aeolian dunes and ripples.
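
    Of the listed techniques, the shadow-length method is the simplest to state: with the Sun at elevation angle theta, a feature casting a shadow of length L has height h ≈ L tan(theta). A minimal sketch with illustrative values:

    ```python
    import math

    def height_from_shadow(shadow_length_m: float, sun_elevation_deg: float) -> float:
        """Shadow-length method: h = L * tan(solar elevation angle)."""
        return shadow_length_m * math.tan(math.radians(sun_elevation_deg))

    print(height_from_shadow(85.0, 12.0))  # 85 m shadow at 12 deg sun: ~18 m dune
    ```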

  4. Bounding the moment deficit rate on crustal faults using geodetic data: Methods

    DOE PAGES

    Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael

    2017-08-19

    Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.
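
    The flavor of COBE, though not its exact formulation, can be conveyed as bounding a linear functional of slip rate under box constraints and a data-misfit tolerance. The sketch below uses SciPy linear programming, with `G` (Green's functions), `d` (observed velocities), `w` (per-patch moment weights) and `s_max` (geologic slip-rate bounds) all assumed given:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Bound MDR = w @ s subject to |G s - d| <= tol (elementwise) and 0 <= s <= s_max.
    tol = 1e-3                                  # illustrative misfit tolerance
    A_ub = np.vstack([G, -G])                   # encodes both misfit inequalities
    b_ub = np.concatenate([d + tol, tol - d])
    bounds = [(0.0, s) for s in s_max]

    lo = linprog(w, A_ub=A_ub, b_ub=b_ub, bounds=bounds)    # minimum MDR
    hi = linprog(-w, A_ub=A_ub, b_ub=b_ub, bounds=bounds)   # maximum MDR
    print(lo.fun, -hi.fun)
    ```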

  5. Bounding the moment deficit rate on crustal faults using geodetic data: Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, Jeremy; Segall, Paul; Bradley, Andrew Michael

    Here, the geodetically derived interseismic moment deficit rate (MDR) provides a first-order constraint on earthquake potential and can play an important role in seismic hazard assessment, but quantifying uncertainty in MDR is a challenging problem that has not been fully addressed. We establish criteria for reliable MDR estimators, evaluate existing methods for determining the probability density of MDR, and propose and evaluate new methods. Geodetic measurements moderately far from the fault provide tighter constraints on MDR than those nearby. Previously used methods can fail catastrophically under predictable circumstances. The bootstrap method works well with strong data constraints on MDR, but can be strongly biased when network geometry is poor. We propose two new methods: the Constrained Optimization Bounding Estimator (COBE) assumes uniform priors on slip rate (from geologic information) and MDR, and can be shown through synthetic tests to be a useful, albeit conservative estimator; the Constrained Optimization Bounding Linear Estimator (COBLE) is the corresponding linear estimator with Gaussian priors rather than point-wise bounds on slip rates. COBE matches COBLE with strong data constraints on MDR. We compare results from COBE and COBLE to previously published results for the interseismic MDR at Parkfield, on the San Andreas Fault, and find similar results; thus, the apparent discrepancy between MDR and the total moment release (seismic and afterslip) in the 2004 Parkfield earthquake remains.

  6. Space Suit Joint Torque Measurement Method Validation

    NASA Technical Reports Server (NTRS)

    Valish, Dana; Eversley, Karina

    2012-01-01

    In 2009 and early 2010, a test method was developed and performed to quantify the torque required to manipulate joints in several existing operational and prototype space suits. This was done in an effort to develop joint torque requirements appropriate for a new Constellation Program space suit system. The same test method was levied on the Constellation space suit contractors to verify that their suit design met the requirements. However, because the original test was set up and conducted by a single test operator there was some question as to whether this method was repeatable enough to be considered a standard verification method for Constellation or other future development programs. In order to validate the method itself, a representative subset of the previous test was repeated, using the same information that would be available to space suit contractors, but set up and conducted by someone not familiar with the previous test. The resultant data was compared using graphical and statistical analysis; the results indicated a significant variance in values reported for a subset of the re-tested joints. Potential variables that could have affected the data were identified and a third round of testing was conducted in an attempt to eliminate and/or quantify the effects of these variables. The results of the third test effort will be used to determine whether or not the proposed joint torque methodology can be applied to future space suit development contracts.

  7. Quantitation of peptides from non-invasive skin tapings using isotope dilution and tandem mass spectrometry.

    PubMed

    Reisdorph, Nichole; Armstrong, Michael; Powell, Roger; Quinn, Kevin; Legg, Kevin; Leung, Donald; Reisdorph, Rick

    2018-05-01

    Previous work from our laboratories utilized a novel skin taping method and mass spectrometry-based proteomics to discover clinical biomarkers of skin conditions; these included atopic dermatitis, Staphylococcus aureus colonization, and eczema herpeticum. While suitable for discovery purposes, semi-quantitative proteomics is generally time-consuming and expensive. Furthermore, depending on the method used, discovery-based proteomics can result in high variation and inadequate sensitivity to detect low abundant peptides. Therefore, we strove to develop a rapid, sensitive, and reproducible method to quantitate disease-related proteins from skin tapings. We utilized isotopically-labeled peptides and tandem mass spectrometry to obtain absolute quantitation values on 14 peptides from 7 proteins; these proteins had shown previous importance in skin disease. The method demonstrated good reproducibility, dynamic range, and linearity (R² > 0.993) when n = 3 standards were analyzed across 0.05-2.5 pmol. The method was used to determine if differences exist between skin proteins in a small group of atopic versus non-atopic individuals (n = 12). While only minimal differences were found, peptides were detected in all samples and exhibited good correlation between peptides for 5 of the 7 proteins (R² = 0.71-0.98). This method can be applied to larger cohorts to further establish the relationships of these proteins to skin disease. Copyright © 2017. Published by Elsevier B.V.
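
    The quantitation step in isotope dilution reduces to a ratio against the spiked, isotopically heavy standard. A minimal single-point sketch (the study itself calibrates across 0.05-2.5 pmol):

    ```python
    def isotope_dilution_pmol(area_light: float, area_heavy: float,
                              heavy_spike_pmol: float) -> float:
        """Single-point isotope dilution: analyte = (light/heavy area ratio) x spike."""
        return (area_light / area_heavy) * heavy_spike_pmol

    print(isotope_dilution_pmol(2.4e6, 1.2e6, 0.5))  # illustrative areas -> 1.0 pmol
    ```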

  8. Standard methods for sampling freshwater fishes: Opportunities for international collaboration

    USGS Publications Warehouse

    Bonar, Scott A.; Mercado-Silva, Norman; Hubert, Wayne A.; Beard, Douglas; Dave, Göran; Kubečka, Jan; Graeb, Brian D. S.; Lester, Nigel P.; Porath, Mark T.; Winfield, Ian J.

    2017-01-01

    With publication of Standard Methods for Sampling North American Freshwater Fishes in 2009, the American Fisheries Society (AFS) recommended standard procedures for North America. To explore interest in standardizing at intercontinental scales, a symposium attended by international specialists in freshwater fish sampling was convened at the 145th Annual AFS Meeting in Portland, Oregon, in August 2015. Participants represented all continents except Australia and Antarctica and were employed by state and federal agencies, universities, nongovernmental organizations, and consulting businesses. Currently, standardization is practiced mostly in North America and Europe. Participants described how standardization has been important for management of long-term data sets, promoting fundamental scientific understanding, and assessing efficacy of large spatial scale management strategies. Academics indicated that standardization has been useful in fisheries education because time previously used to teach how sampling methods are developed is now more devoted to diagnosis and treatment of problem fish communities. Researchers reported that standardization allowed increased sample size for method validation and calibration. Group consensus was to retain continental standards where they currently exist but to further explore international and intercontinental standardization, specifically identifying where synergies and bridges exist, and identifying means to collaborate with scientists where standardization is limited but interest and need occur.

  9. A comparison of confidence interval methods for the intraclass correlation coefficient in community-based cluster randomization trials with a binary outcome.

    PubMed

    Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan

    2016-04-01

    Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
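
    The recommended workflow (split each large cluster into sub-clusters, e.g. of size 5, then estimate the intraclass correlation by one-way ANOVA) can be sketched as below. The confidence-interval step via Smith's standard-error approximation is omitted, and `clusters` is an assumed list of per-cluster binary outcome lists:

    ```python
    import numpy as np

    def anova_icc(groups):
        """One-way ANOVA estimator of the ICC for equal-sized groups."""
        m, k = len(groups[0]), len(groups)
        means = np.array([np.mean(g) for g in groups])
        msb = m * np.sum((means - means.mean()) ** 2) / (k - 1)
        msw = sum(np.sum((np.array(g) - np.mean(g)) ** 2) for g in groups) / (k * (m - 1))
        return (msb - msw) / (msb + (m - 1) * msw)

    # Split each cluster's outcomes into sub-clusters of 5, dropping remainders:
    subclusters = [c[i:i + 5] for c in clusters for i in range(0, len(c) - 4, 5)]
    print(anova_icc(subclusters))   # clusters assumed given
    ```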

  10. A Bayesian method to quantify azimuthal anisotropy model uncertainties: application to global azimuthal anisotropy in the upper mantle and transition zone

    NASA Astrophysics Data System (ADS)

    Yuan, K.; Beghein, C.

    2018-04-01

    Seismic anisotropy is a powerful tool to constrain mantle deformation, but its existence in the deep upper mantle and topmost lower mantle is still uncertain. Recent results from higher mode Rayleigh waves have, however, revealed the presence of 1 per cent azimuthal anisotropy between 300 and 800 km depth, and changes in azimuthal anisotropy across the mantle transition zone boundaries. This has important consequences for our understanding of mantle convection patterns and deformation of deep mantle material. Here, we propose a Bayesian method to model depth variations in azimuthal anisotropy and to obtain quantitative uncertainties on the fast seismic direction and anisotropy amplitude from phase velocity dispersion maps. We applied this new method to existing global fundamental and higher mode Rayleigh wave phase velocity maps to assess the likelihood of azimuthal anisotropy in the deep upper mantle and to determine whether previously detected changes in anisotropy at the transition zone boundaries are robustly constrained by those data. Our results confirm that deep upper-mantle azimuthal anisotropy is favoured and well constrained by the higher mode data employed. The fast seismic directions are in agreement with our previously published model. The data favour a model characterized, on average, by changes in azimuthal anisotropy at the top and bottom of the transition zone. However, this change in fast axes is not a global feature as there are regions of the model where the azimuthal anisotropy direction is unlikely to change across depths in the deep upper mantle. We were, however, unable to detect any clear pattern or connection with surface tectonics. Future studies will be needed to further improve the lateral resolution of this type of model at transition zone depths.
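
    For orientation, the azimuthal dependence being modeled is the standard 2-psi parametrization of surface-wave phase velocity (4-psi terms, usually small for Rayleigh waves, are omitted):

    ```latex
    \frac{\delta c}{c}(\psi) = A_{0} + A_{2}\cos 2\psi + B_{2}\sin 2\psi ,
    \qquad
    \text{amplitude} = \sqrt{A_{2}^{2} + B_{2}^{2}} , \quad
    \Psi_{\mathrm{fast}} = \tfrac{1}{2}\,\operatorname{atan2}(B_{2}, A_{2}) ,
    ```

    where psi is the propagation azimuth.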

  11. Fossils matter: improved estimates of divergence times in Pinus reveal older diversification.

    PubMed

    Saladin, Bianca; Leslie, Andrew B; Wüest, Rafael O; Litsios, Glenn; Conti, Elena; Salamin, Nicolas; Zimmermann, Niklaus E

    2017-04-04

    The taxonomy of pines (genus Pinus) is widely accepted and a robust gene tree based on entire plastome sequences exists. However, there is a large discrepancy in estimated divergence times of major pine clades among existing studies, mainly due to differences in fossil placement and dating methods used. We currently lack a dated molecular phylogeny that makes use of the rich pine fossil record, and this study is the first to estimate the divergence dates of pines based on a large number of fossils (21) evenly distributed across all major clades, in combination with applying both node and tip dating methods. We present a range of molecular phylogenetic trees of Pinus generated within a Bayesian framework. We find the origin of crown Pinus is likely up to 30 Myr older (Early Cretaceous) than inferred in most previous studies (Late Cretaceous) and propose generally older divergence times for major clades within Pinus than previously thought. Our age estimates vary significantly between the different dating approaches, but the results generally agree on older divergence times. We present a revised list of 21 fossils that are suitable to use in dating or comparative analyses of pines. Reliable estimates of divergence times in pines are essential if we are to link diversification processes and functional adaptation of this genus to geological events or to changing climates. In addition to older divergence times in Pinus, our results also indicate that node age estimates in pines depend on dating approaches and the specific fossil sets used, reflecting inherent differences in various dating approaches. The sets of dated phylogenetic trees of pines presented here provide a way to account for uncertainties in age estimations when applying comparative phylogenetic methods.

  12. Transition zone structure beneath Ethiopia from 3-D fast marching pseudo-migration stacking

    NASA Astrophysics Data System (ADS)

    Benoit, M. H.; Lopez, A.; Levin, V.

    2008-12-01

    Several models for the origin of the Afar hotspot have been put forth over the last decade, but much ambiguity remains as to whether the hotspot tectonism found there is due to a shallow or deeply seated feature. Additionally, there has been much debate as to whether the hotspot owes its existence to a 'classic' mantle plume feature or is part of the African Superplume complex. To further understand the origin of the hotspot, we employ a new receiver function stacking method that incorporates a fast-marching three-dimensional ray tracing algorithm to improve upon existing studies of mantle transition zone structure. Using teleseismic data from the Ethiopia Broadband Seismic Experiment and the EAGLE (Ethiopia Afar Grand Lithospheric Experiment) experiment, we stack receiver functions using a three-dimensional pseudo-migration technique to examine topography on the 410 and 660 km discontinuities. Previous receiver function pseudo-migration methods incorporated ray tracing that could not handle highly complicated 3-D structure, or produced only the 3-D time perturbations associated with 1-D rays in a 3-D velocity medium. These previous techniques yielded confusing and incomplete results when applied to the exceedingly complicated mantle structure beneath Ethiopia. Indeed, comparisons of the 1-D and 3-D ray tracing techniques show that the 1-D technique mislocated structure laterally in the mantle by over 100 km. Preliminary results using our new technique show a shallower than average 410 km discontinuity and a deeper than average 660 km discontinuity over much of the region, suggesting that the hotspot has a deep-seated origin.

  13. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    PubMed

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
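
    The core WBMS loop (sample binary inclusion vectors with per-variable weights, score the resulting sub-models, then update each variable's weight from its frequency in the best sub-models) can be sketched as below, with `score(subset)` an assumed cross-validated error evaluator such as RMSECV:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_vars, n_models, keep = 200, 1000, 100
    weights = np.full(n_vars, 0.5)           # inclusion probability per variable

    for step in range(10):                   # the variable space shrinks each step
        B = rng.random((n_models, n_vars)) < weights       # weighted binary sampling
        errs = np.array([score(np.flatnonzero(row)) for row in B])  # score() assumed
        best = B[np.argsort(errs)[:keep]]    # keep sub-models with lowest error
        weights = best.mean(axis=0)          # frequency of each variable in the best
    selected = np.flatnonzero(weights > 0.5)
    ```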

  14. Molecular Dynamics Information Improves cis-Peptide-Based Function Annotation of Proteins.

    PubMed

    Das, Sreetama; Bhadra, Pratiti; Ramakumar, Suryanarayanarao; Pal, Debnath

    2017-08-04

    cis-Peptide bonds, whose occurrence in proteins is rare but evolutionarily conserved, are implicated to play an important role in protein function. This has led to their previous use in a homology-independent, fragment-match-based protein function annotation method. However, proteins are not static molecules; dynamics is integral to their activity. This is nicely epitomized by the geometric isomerization of cis-peptide to trans form for molecular activity. Hence we have incorporated both static (cis-peptide) and dynamics information to improve the prediction of protein molecular function. Our results show that cis-peptide information alone cannot detect functional matches in cases where cis-trans isomerization exists but 3D coordinates have been obtained for only the trans isomer or when the cis-peptide bond is incorrectly assigned as trans. On the contrary, use of dynamics information alone includes false-positive matches for cases where fragments with similar secondary structure show similar dynamics, but the proteins do not share a common function. Combining the two methods reduces errors while detecting the true matches, thereby enhancing the utility of our method in function annotation. A combined approach, therefore, opens up new avenues of improving existing automated function annotation methodologies.

  15. Integrative genetic risk prediction using non-parametric empirical Bayes classification.

    PubMed

    Zhao, Sihai Dave

    2017-06-01

    Genetic risk prediction is an important component of individualized medicine, but prediction accuracies remain low for many complex diseases. A fundamental limitation is the sample sizes of the studies on which the prediction algorithms are trained. One way to increase the effective sample size is to integrate information from previously existing studies. However, it can be difficult to find existing data that examine the target disease of interest, especially if that disease is rare or poorly studied. Furthermore, individual-level genotype data from these auxiliary studies are typically difficult to obtain. This article proposes a new approach to integrative genetic risk prediction of complex diseases with binary phenotypes. It accommodates possible heterogeneity in the genetic etiologies of the target and auxiliary diseases using a tuning parameter-free non-parametric empirical Bayes procedure, and can be trained using only auxiliary summary statistics. Simulation studies show that the proposed method can provide superior predictive accuracy relative to non-integrative as well as integrative classifiers. The method is applied to a recent study of pediatric autoimmune diseases, where it substantially reduces prediction error for certain target/auxiliary disease combinations. The proposed method is implemented in the R package ssa. © 2016, The International Biometric Society.

  16. Global optimization method based on ray tracing to achieve optimum figure error compensation

    NASA Astrophysics Data System (ADS)

    Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin

    2017-02-01

    Figure error degrades the performance of an optical system. When predicting system performance and performing assembly, compensation by clocking of optical components around the optical axis is a conventional but user-dependent method. Commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation errors, and building the optimization model is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. This method is based on precise ray tracing, not approximate calculation, to compute the wavefront error for a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function. A simulated annealing algorithm is used to seek the optimal combination of rotation angles of the optical elements. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
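
    The following sketch shows a simulated annealing loop over element clocking angles under a stand-in cost function; in the paper the cost would be the composite wavefront-error RMS returned by exact ray tracing, so `wavefront_rms` here is purely a hypothetical placeholder.

    ```python
    # Simulated annealing over clocking angles. The cost function is a
    # stand-in for the ray-traced composite wavefront RMS; all names and
    # parameters below are illustrative assumptions.
    import math
    import random

    random.seed(1)
    N_ELEMENTS = 4

    def wavefront_rms(angles):
        # Hypothetical placeholder: a real implementation would ray trace
        # the system with element i rotated by angles[i] and return the RMS.
        return sum(1.0 + math.cos(a + 0.3 * i) for i, a in enumerate(angles))

    angles = [0.0] * N_ELEMENTS
    cost = wavefront_rms(angles)
    T = 1.0
    while T > 1e-4:
        trial = [(a + random.gauss(0.0, 0.5)) % (2 * math.pi) for a in angles]
        c = wavefront_rms(trial)
        # Metropolis criterion: accept improvements, occasionally accept
        # worse states to escape local minima of the figure-error landscape.
        if c < cost or random.random() < math.exp((cost - c) / T):
            angles, cost = trial, c
        T *= 0.995                          # geometric cooling schedule
    print(round(cost, 4), [round(a, 2) for a in angles])
    ```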

  17. Analysis of regional brain mitochondrial bioenergetics and susceptibility to mitochondrial inhibition utilizing a microplate based system

    PubMed Central

    Sauerbeck, Andrew; Pandya, Jignesh; Singh, Indrapal; Bittman, Kevin; Readnower, Ryan; Bing, Guoying; Sullivan, Patrick

    2012-01-01

    The analysis of mitochondrial bioenergetic function typically has required 50–100 μg of protein per sample and at least 15 min per run when utilizing a Clark-type oxygen electrode. In the present work we describe a method utilizing the Seahorse Biosciences XF24 Flux Analyzer for measuring mitochondrial oxygen consumption simultaneously from multiple samples and utilizing only 5 μg of protein per sample. Utilizing this method we have investigated whether regionally based differences exist in mitochondria isolated from the cortex, striatum, hippocampus, and cerebellum. Analysis of basal mitochondrial bioenergetics revealed that minimal differences exist between the cortex, striatum, and hippocampus. However, the cerebellum exhibited significantly slower basal rates of Complex I and Complex II dependent oxygen consumption (p < 0.05). Mitochondrial inhibitors affected enzyme activity proportionally across all samples tested and only small differences existed in the effect of inhibitors on oxygen consumption. Investigation of the effect of rotenone administration on Complex I dependent oxygen consumption revealed that exposure to 10 pM rotenone led to a clear time dependent decrease in oxygen consumption beginning 12 min after administration (p < 0.05). These studies show that the utilization of this microplate based method for analysis of mitochondrial bioenergetics is effective at quantifying oxygen consumption simultaneously from multiple samples. Additionally, these studies indicate that minimal regional differences exist in mitochondria isolated from the cortex, striatum, or hippocampus. Furthermore, utilization of the mitochondrial inhibitors suggests that previous work indicating regionally specific deficits following systemic mitochondrial toxin exposure may not be the result of differences in the individual mitochondria from the affected regions. PMID:21402103

  18. Consistent forcing scheme in the cascaded lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Fei, Linlin; Luo, Kai Hong

    2017-11-01

    In this paper, we give an alternative derivation for the cascaded lattice Boltzmann method (CLBM) within a general multiple-relaxation-time (MRT) framework by introducing a shift matrix. When the shift matrix is the unit matrix, the CLBM reduces to an MRT LBM. Based on this, a consistent forcing scheme is developed for the CLBM. The consistency of the nonslip rule, the second-order convergence rate in space, and the property of isotropy for the consistent forcing scheme are demonstrated through numerical simulations of several canonical problems. Several existing forcing schemes previously used in the CLBM are also examined. The study clarifies the relation between the MRT LBM and the CLBM under a general framework.

  19. Constraining f(R) theories with cosmography

    NASA Astrophysics Data System (ADS)

    Anabella Teppa Pannia, Florencia; Esteban Perez Bergliaffa, Santiago

    2013-08-01

    A method to set constraints on the parameters of extended theories of gravitation is presented. It is based on the comparison of two series expansions of any observable that depends on H(z). The first expansion is of the cosmographical type, while the second uses the dependence of H on z furnished by a given type of extended theory. When applied to f(R) theories together with the redshift drift, the method yields limits on the parameters of two examples (the theory of Hu and Sawicki [1], and the exponential gravity introduced by Linder [2]) that are compatible with or more stringent than the existing ones, as well as a limit for a previously unconstrained parameter.

  20. Practical implementation of spectral-intensity dispersion-canceled optical coherence tomography with artifact suppression

    NASA Astrophysics Data System (ADS)

    Shirai, Tomohiro; Friberg, Ari T.

    2018-04-01

    Dispersion-canceled optical coherence tomography (OCT) based on spectral intensity interferometry was devised as a classical counterpart of quantum OCT to enhance the basic performance of conventional OCT. In this paper, we demonstrate experimentally that an alternative method of realizing this kind of OCT by means of two optical fiber couplers and a single spectrometer is a more practical and reliable option than previously proposed methods. Furthermore, we develop a recipe for reducing multiple artifacts simultaneously on the basis of simple averaging, and verify experimentally that it works successfully, in the sense that all the artifacts are mitigated effectively and only the true signals carrying structural information about the sample survive.

  1. Which stocks are profitable? A network method to investigate the effects of network structure on stock returns

    NASA Astrophysics Data System (ADS)

    Chen, Kun; Luo, Peng; Sun, Bianxia; Wang, Huaiqing

    2015-10-01

    According to asset pricing theory, a stock's expected returns are determined by its exposure to systematic risk. In this paper, we propose a new method for analyzing the interaction effects among industries and stocks on stock returns. We construct a complex network based on correlations of abnormal stock returns and use centrality and modularity, two popular measures in social science, to determine the effect of interconnections on industry and stock returns. Supported by previous studies, our findings indicate that a relationship exists between inter-industry closeness and industry returns and between stock centrality and stock returns. The theoretical and practical contributions of these findings are discussed.
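
    A minimal sketch of the network construction follows: correlate (synthetic) abnormal returns, keep links above an arbitrary threshold, and compute centrality and modularity-based communities with networkx; the data and threshold are illustrative, not the paper's estimation procedure.

    ```python
    # Build a correlation network from synthetic abnormal returns, then
    # measure centrality and modularity structure. Threshold is arbitrary.
    import numpy as np
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    rng = np.random.default_rng(7)
    returns = rng.normal(size=(250, 30))   # 250 days x 30 stocks (synthetic)
    C = np.corrcoef(returns.T)

    G = nx.Graph()
    G.add_nodes_from(range(30))
    for i in range(30):
        for j in range(i + 1, 30):
            if abs(C[i, j]) > 0.08:        # arbitrary correlation threshold
                G.add_edge(i, j, weight=abs(C[i, j]))

    centrality = nx.degree_centrality(G)   # proxy for a stock's connectedness
    communities = greedy_modularity_communities(G)  # industry-like modules
    print(max(centrality, key=centrality.get), len(communities))
    ```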

  2. Cheating prevention in visual cryptography.

    PubMed

    Hu, Chih-Ming; Tzeng, Wen-Guey

    2007-01-01

    Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented in transparencies. Each participant holds a transparency. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we study the cheating problem in VC and extended VC. We consider attacks by malicious adversaries who may deviate from the scheme in any way. We present three cheating methods and apply them to existing VC and extended VC schemes. We improve one cheat-preventing scheme. We propose a generic method that converts a VCS to another VCS that has the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
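
    For readers unfamiliar with the underlying primitive, the sketch below implements the textbook (2, 2) visual cryptography scheme, illustrating pixel expansion and stacking; it is not the cheat-preventing conversion proposed in the paper.

    ```python
    # Textbook (2,2) visual cryptography: each secret pixel expands to two
    # subpixels per share; stacking (OR-ing) the shares reveals the secret.
    import numpy as np

    rng = np.random.default_rng(3)
    secret = rng.integers(0, 2, size=(4, 4))       # 1 = black, 0 = white

    patterns = np.array([[1, 0], [0, 1]])          # the two subpixel patterns
    share1 = np.zeros((4, 8), dtype=int)
    share2 = np.zeros((4, 8), dtype=int)
    for r in range(4):
        for c in range(4):
            p = patterns[rng.integers(0, 2)]       # random pattern per pixel
            share1[r, 2*c:2*c+2] = p
            # white: same pattern (stack is half black); black: complement
            share2[r, 2*c:2*c+2] = 1 - p if secret[r, c] else p

    stacked = share1 | share2                      # transparency stacking = OR
    # Black pixels stack to 2 black subpixels, white to 1: that is the contrast.
    print((stacked.reshape(4, 4, 2).sum(axis=2) == 2).astype(int))
    print(secret)
    ```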

  3. Library fingerprints: a novel approach to the screening of virtual libraries.

    PubMed

    Klon, Anthony E; Diller, David J

    2007-01-01

    We propose a novel method to prioritize libraries for combinatorial synthesis and high-throughput screening that assesses the viability of a particular library on the basis of the aggregate physical-chemical properties of the compounds using a naïve Bayesian classifier. This approach prioritizes collections of related compounds according to the aggregate values of their physical-chemical parameters in contrast to single-compound screening. The method is also shown to be useful in screening existing noncombinatorial libraries when the compounds in these libraries have been previously clustered according to their molecular graphs. We show that the method used here is comparable or superior to the single-compound virtual screening of combinatorial libraries and noncombinatorial libraries and is superior to the pairwise Tanimoto similarity searching of a collection of combinatorial libraries.

  4. Physiological constraints on deceleration during the aerocapture of manned vehicles

    NASA Technical Reports Server (NTRS)

    Lyne, J. E.

    1992-01-01

    The peak deceleration load allowed for aerobraking of manned vehicles is a critical parameter in planning future excursions to Mars. However, considerable variation exists in the limits used by various investigators. The goal of this study was to determine the most appropriate level for this limit. Methods: Since previous U.S. space flights had been limited to 84 days in duration, Soviet flight results were examined. Published details of Soviet entry trajectories were not available. However, personal communication with Soviet cosmonauts suggested that peak entry loads of 5-6 G had been encountered upon return from 8 months in orbit. The Soyuz entry capsule's characteristics were established and its entry trajectory was numerically calculated. The results confirm a peak load of 5 to 6 G. Results: Although the Soviet flights were of shorter duration than expected Mars missions, evidence exists that the deceleration experience is applicable. G tolerance has been shown to stabilize after 1 to 3 months in space if adequate countermeasures are used. The calculated Soyuz deceleration histories are graphically compared with those expected for Mars aerobraking. Conclusions: Previous spaceflight experience supports the use of a 5 G limit for the aerocapture of a manned vehicle at Mars.

  5. Publication bias and the limited strength model of self-control: has the evidence for ego depletion been overestimated?

    PubMed

    Carter, Evan C; McCullough, Michael E

    2014-01-01

    Few models of self-control have generated as much scientific interest as has the limited strength model. One of the entailments of this model, the depletion effect, is the expectation that acts of self-control will be less effective when they follow prior acts of self-control. A previous meta-analysis concluded that the depletion effect is robust and medium in magnitude (d = 0.62). However, when we applied methods for estimating and correcting for small-study effects (such as publication bias) to the data from this previous meta-analytic effort, we found very strong signals of publication bias, along with an indication that the depletion effect is actually no different from zero. We conclude that until greater certainty about the size of the depletion effect can be established, circumspection about the existence of this phenomenon is warranted, and that rather than elaborating on the model, research efforts should focus on establishing whether the basic effect exists. We argue that the evidence for the depletion effect is a useful case study for illustrating the dangers of small-study effects as well as some of the possible tools for mitigating their influence in psychological science.

  6. Direct detection of metal-insulator phase transitions using the modified Backus-Gilbert method

    NASA Astrophysics Data System (ADS)

    Ulybyshev, Maksim; Winterowd, Christopher; Zafeiropoulos, Savvas

    2018-03-01

    The detection of the (semi)metal-insulator phase transition can be extremely difficult if the local order parameter which characterizes the ordered phase is unknown. In some cases, it is even impossible to define a local order parameter: the most prominent example of such a system is the spin liquid state. This state was proposed to exist in the Hubbard model on the hexagonal lattice in a region between the semimetal phase and the antiferromagnetic insulator phase. The existence of this phase has been the subject of a long debate. In order to detect these exotic phases, we must use methods alternative to those used for more familiar examples of spontaneous symmetry breaking. We have modified the Backus-Gilbert method of analytic continuation, which was previously used in the calculation of the pion quasiparticle mass in lattice QCD. The modification consists of the introduction of the Tikhonov regularization scheme, which is used to treat the ill-conditioned kernel. This modified Backus-Gilbert method is applied to the Euclidean propagators in momentum space calculated using the hybrid Monte Carlo algorithm. In this way, it is possible to reconstruct the full dispersion relation and to estimate the mass gap, which is a direct signal of the transition to the insulating state. We demonstrate the utility of this method in our calculations for the Hubbard model on the hexagonal lattice. We also apply the method to the metal-insulator phase transition in the Hubbard-Coulomb model on the square lattice.
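
    The Tikhonov step itself fits in a few lines: an ill-conditioned inversion K x = b is stabilized by minimizing ||Kx - b||^2 + lam*||x||^2, which has a closed-form solution. The kernel and "spectral function" below are synthetic stand-ins for the Euclidean-propagator setting, not the lattice data.

    ```python
    # Tikhonov regularization for an ill-conditioned kernel K:
    # minimize ||K x - b||^2 + lam * ||x||^2, solved in closed form.
    import numpy as np

    rng = np.random.default_rng(5)
    t = np.linspace(0.0, 1.0, 40)
    w = np.linspace(0.1, 20.0, 60)
    K = np.exp(-np.outer(t, w))              # smoothing kernel: badly conditioned
    x_true = np.exp(-0.5 * (w - 8.0) ** 2)   # synthetic "spectral function"
    b = K @ x_true + 1e-4 * rng.normal(size=t.size)

    lam = 1e-6
    # Normal equations with a damping term; lam trades bias for stability.
    x_hat = np.linalg.solve(K.T @ K + lam * np.eye(w.size), K.T @ b)
    print(np.linalg.cond(K.T @ K), np.abs(x_hat - x_true).max())
    ```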

  7. Human systems dynamics: Toward a computational model

    NASA Astrophysics Data System (ADS)

    Eoyang, Glenda H.

    2012-09-01

    A robust and reliable computational model of complex human systems dynamics could support advancements in theory and practice for social systems at all levels, from intrapersonal experience to global politics and economics. Models of human interactions have evolved from traditional, Newtonian systems assumptions, which served a variety of practical and theoretical needs of the past. Another class of models has been inspired and informed by models and methods from nonlinear dynamics, chaos, and complexity science. None of the existing models, however, is able to represent the open, high-dimensional, and nonlinear self-organizing dynamics of social systems. An effective model will represent interactions at multiple levels to generate emergent patterns of social and political life of individuals and groups. Existing models and modeling methods are considered and assessed against characteristic pattern-forming processes in observed and experienced phenomena of human systems. A conceptual model, the CDE Model, based on the conditions for self-organizing in human systems, is explored as an alternative to existing models and methods. While the new model overcomes the limitations of previous models, it also provides an explanatory base and foundation for prospective analysis to inform real-time meaning making and action taking in response to complex conditions in the real world. An invitation is extended to readers to engage in developing a computational model that incorporates the assumptions, meta-variables, and relationships of this open, high-dimensional, and nonlinear conceptual model of the complex dynamics of human systems.

  8. Estimating the size of hidden populations using respondent-driven sampling data: Case examples from Morocco

    PubMed Central

    Johnston, Lisa G; McLaughlin, Katherine R; Rhilani, Houssine El; Latifi, Amina; Toufik, Abdalla; Bennani, Aziza; Alami, Kamal; Elomari, Boutaina; Handcock, Mark S

    2015-01-01

    Background: Respondent-driven sampling is used worldwide to estimate the population prevalence of characteristics such as HIV/AIDS and associated risk factors in hard-to-reach populations. Estimating the total size of these populations is of great interest to national and international organizations; however, reliable measures of population size often do not exist. Methods: Successive Sampling-Population Size Estimation (SS-PSE) along with network size imputation allows population size estimates to be made without relying on separate studies or additional data (as in network scale-up, multiplier and capture-recapture methods), which may be biased. Results: Ten population size estimates were calculated for people who inject drugs, female sex workers, men who have sex with other men, and migrants from sub-Saharan Africa in six different cities in Morocco. SS-PSE estimates fell within or very close to the likely values provided by experts and the estimates from previous studies using other methods. Conclusions: SS-PSE is an effective method for estimating the size of hard-to-reach populations that leverages important information within respondent-driven sampling studies. The addition of a network size imputation method helps to smooth network sizes, allowing for more accurate results. However, caution should be used, particularly when there is reason to believe that clustered subgroups may exist within the population of interest or when the sample size is small in relation to the population. PMID:26258908

  9. Enriching plausible new hypothesis generation in PubMed.

    PubMed

    Baek, Seung Han; Lee, Dahee; Kim, Minjoo; Lee, Jong Ho; Song, Min

    2017-01-01

    Most earlier studies in the field of literature-based discovery have adopted Swanson's ABC model, which links pieces of knowledge entailed in disjoint literatures. However, the issue concerning their practicability remains to be solved, since most of them did not deal with the context surrounding the discovered associations and were usually not accompanied by clinical confirmation. In this study, we aim to propose a method that expands and elaborates the existing hypothesis by advanced text mining techniques for capturing contexts. We extend the ABC model to allow for multiple B terms with various biological types. We were able to concretize a specific, metabolite-related hypothesis with abundant contextual information by using the proposed method. Starting from explaining the relationship between lactosylceramide and arterial stiffness, the hypothesis was extended to suggest a potential pathway consisting of lactosylceramide, nitric oxide, malondialdehyde, and arterial stiffness. Experiments with domain experts showed that it is clinically valid. The proposed method is designed to provide plausible candidates for the concretized hypothesis, based on extracted heterogeneous entities and detailed relation information, along with a reliable ranking criterion. Statistical tests conducted collaboratively with biomedical experts establish the validity and practical usefulness of the method, unlike previous studies. Applied to other cases, the proposed method would help biologists support existing hypotheses and readily trace the logical processes within them.
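
    The core ABC linking step can be sketched with plain set operations: a B term qualifies if it co-occurs with the A term in one literature and with the C term in another, while A and C never co-occur directly. The co-occurrence table below is fabricated for illustration (reusing terms from this abstract), not mined from PubMed.

    ```python
    # Toy sketch of ABC-model bridging. The co-occurrence table is a
    # fabricated placeholder, not real literature data.
    cooccur = {
        "lactosylceramide": {"nitric oxide", "oxidative stress"},
        "nitric oxide": {"lactosylceramide", "malondialdehyde", "arterial stiffness"},
        "malondialdehyde": {"nitric oxide", "arterial stiffness"},
        "arterial stiffness": {"nitric oxide", "malondialdehyde"},
    }

    def abc_bridges(a, c, table):
        # B candidates: linked to both A and C, while A-C have no direct link.
        if c in table.get(a, set()):
            return set()
        return {b for b in table.get(a, set()) if c in table.get(b, set())}

    print(abc_bridges("lactosylceramide", "arterial stiffness", cooccur))
    # -> {'nitric oxide'}; chaining bridges again yields multi-step pathways
    # such as lactosylceramide -> nitric oxide -> malondialdehyde -> stiffness.
    ```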

  10. Analog self-powered harvester achieving switching pause control to increase harvested energy

    NASA Astrophysics Data System (ADS)

    Makihara, Kanjuro; Asahina, Kei

    2017-05-01

    In this paper, we propose a self-powered analog controller circuit to increase the efficiency of electrical energy harvesting from vibrational energy using piezoelectric materials. Although the existing synchronized switch harvesting on inductor (SSHI) method is designed to produce efficient harvesting, its switching operation generates a vibration-suppression effect that reduces the harvested levels of electrical energy. To solve this problem, the authors previously proposed a switching method that takes this vibration-suppression effect into account. This method temporarily pauses the switching operation, allowing the recovery of the mechanical displacement and, therefore, of the piezoelectric voltage. In this paper, we propose a self-powered analog circuit to implement this switching control method. Self-powered vibration harvesting is achieved in this study by attaching a newly designed circuit to an existing analog controller for SSHI. This circuit effectively implements the new switching control strategy, in which switching is paused at some vibration peaks to allow motion recovery and a consequent increase in the harvested energy. Harvesting experiments performed using the proposed circuit reveal that the proposed method can increase the energy stored in the storage capacitor by a factor of 8.5 relative to the conventional SSHI circuit. This technique is especially useful for increasing the harvested energy of piezoelectric systems with a large coupling factor.

  11. Correlation, evaluation, and extension of linearized theories for tire motion and wheel shimmy

    NASA Technical Reports Server (NTRS)

    Smiley, Robert F

    1957-01-01

    An evaluation is made of the existing theories of linearized tire motion and wheel shimmy. It is demonstrated that most of the previously published theories represent varying degrees of approximation to a summary theory developed in this report, which is a minor modification of the basic theory of Von Schlippe and Dietrich. In most cases where strong differences exist between the previously published theories and the summary theory, the previously published theories are shown to possess certain deficiencies. A series of systematic approximations to the summary theory is developed for the treatment of problems too simple to merit the use of the complete summary theory, and procedures are discussed for applying the summary theory and its systematic approximations to the shimmy of more complex landing-gear structures than have previously been considered. Comparisons of the existing experimental data with the predictions of the summary theory and the systematic approximations provide a fair substantiation of the more detailed approximate theories.

  12. Empirical Estimates of 0Day Vulnerabilities in Control Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miles A. McQueen; Wayne F. Boyer; Sean M. McBride

    2009-01-01

    We define a 0Day vulnerability to be any vulnerability, in deployed software, which has been discovered by at least one person but has not yet been publicly announced or patched. These 0Day vulnerabilities are of particular interest when assessing the risk to well managed control systems which have already effectively mitigated the publicly known vulnerabilities. In these well managed systems the risk contribution from 0Days will have proportionally increased. To aid understanding of how great a risk 0Days may pose to control systems, an estimate of how many are in existence is needed. Consequently, using the 0Day definition given above, we developed and applied a method for estimating how many 0Day vulnerabilities are in existence on any given day. The estimate is made by: empirically characterizing the distribution of the lifespans, measured in days, of 0Day vulnerabilities; determining the number of vulnerabilities publicly announced each day; and applying a novel method for estimating the number of 0Day vulnerabilities in existence on any given day using the number of vulnerabilities publicly announced each day and the previously derived distribution of 0Day lifespans. The method was first applied to a general set of software applications by analyzing the 0Day lifespans of 491 software vulnerabilities and using the daily rate of vulnerability announcements in the National Vulnerability Database. This led to a conservative estimate that in the worst year there were, on average, 2500 0Day software related vulnerabilities in existence on any given day. Using a smaller but intriguing set of 15 0Day software vulnerability lifespans representing the actual time from discovery to public disclosure, we then made a more aggressive estimate. In this case, we estimated that in the worst year there were, on average, 4500 0Day software vulnerabilities in existence on any given day. We then proceeded to identify the subset of software applications likely to be used in some control systems, analyzed the associated subset of vulnerabilities, and characterized their lifespans. Using the previously developed method of analysis, we very conservatively estimated 250 control system related 0Day vulnerabilities in existence on any given day. While reasonable, this first order estimate for control systems is probably far more conservative than those made for general software systems since the estimate did not include vulnerabilities unique to control system specific components. These control system specific vulnerabilities were unable to be included in the estimate for a variety of reasons with the most problematic being that the public announcement of unique control system vulnerabilities is very sparse. Consequently, with the intent to improve the above 0Day estimate for control systems, we first identified the additional, unique to control systems, vulnerability estimation constraints and then investigated new mechanisms which may be useful for estimating the number of unique 0Day software vulnerabilities found in control system components. We proceeded to identify a number of new mechanisms and approaches for estimating and incorporating control system specific vulnerabilities into an improved 0Day estimation method. These new mechanisms and approaches appear promising and will be more rigorously evaluated during the course of the next year.
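
    The counting logic has the flavor of Little's law: the expected number of 0Days in existence equals the disclosure rate multiplied by the mean 0Day lifespan. The sketch below uses made-up numbers, not the paper's empirical distributions.

    ```python
    # Back-of-envelope sketch of the counting logic: items in the system =
    # arrival rate x mean time in system. Rates and lifespans are invented.
    import statistics

    announcements_per_day = 25.0          # hypothetical disclosure rate
    lifespans_days = [30, 90, 180, 365, 40, 500, 120]  # hypothetical lifespans

    mean_lifespan = statistics.mean(lifespans_days)
    in_existence = announcements_per_day * mean_lifespan
    print(f"~{in_existence:.0f} 0Days in existence on an average day")
    ```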

  13. Exclusion-Based Capture and Enumeration of CD4+ T Cells from Whole Blood for Low-Resource Settings.

    PubMed

    Howard, Alexander L; Pezzi, Hannah M; Beebe, David J; Berry, Scott M

    2014-06-01

    In developing countries, demand exists for a cost-effective method to evaluate human immunodeficiency virus patients' CD4(+) T-helper cell count. The TH (CD4) cell count is the current marker used to identify when an HIV patient has progressed to acquired immunodeficiency syndrome, which results when the immune system can no longer prevent certain opportunistic infections. A system to perform TH count that obviates the use of costly flow cytometry will enable physicians to more closely follow patients' disease progression and response to therapy in areas where such advanced equipment is unavailable. Our system of two serially-operated immiscible phase exclusion-based cell isolations coupled with a rapid fluorescent readout enables exclusion-based isolation and accurate counting of T-helper cells at lower cost and from a smaller volume of blood than previous methods. TH cell isolation via immiscible filtration assisted by surface tension (IFAST) compares well against the established Dynal T4 Quant Kit and is sensitive at CD4 counts representative of immunocompromised patients (less than 200 TH cells per microliter of blood). Our technique retains use of open, simple-to-operate devices that enable IFAST as a high-throughput, automatable sample preparation method, improving throughput over previous low-resource methods. © 2013 Society for Laboratory Automation and Screening.

  14. Comparison of anatomical, functional and regression methods for estimating the rotation axes of the forearm.

    PubMed

    Fraysse, François; Thewlis, Dominic

    2014-11-07

    Numerous methods exist to estimate the pose of the axes of rotation of the forearm. These include anatomical definitions, such as the conventions proposed by the ISB, and functional methods based on instantaneous helical axes, which are commonly accepted as the modelling gold standard for non-invasive, in-vivo studies. We investigated the validity of a third method, based on regression equations, to estimate the rotation axes of the forearm. We also assessed the accuracy of both ISB methods. Axes obtained from a functional method were considered as the reference. Results indicate a large inter-subject variability in the axes positions, in accordance with previous studies. Both ISB methods gave the same level of accuracy in axes position estimations. Regression equations seem to improve estimation of the flexion-extension axis but not the pronation-supination axis. Overall, given the large inter-subject variability, the use of regression equations cannot be recommended. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. 2D photonic crystal complete band gap search using a cyclic cellular automaton refination

    NASA Astrophysics Data System (ADS)

    González-García, R.; Castañón, G.; Hernández-Figueroa, H. E.

    2014-11-01

    We present a refinement method based on a cyclic cellular automaton (CCA) that simulates a crystallization-like process, aided by a heuristic evolutionary method called differential evolution (DE), to perform an ordered search for full photonic band gaps (FPBGs) in a 2D photonic crystal (PC). The solution is posed as a combinatorial optimization of the elements in a binary array. These elements represent the existence or absence of a dielectric material surrounded by air, thus representing a general geometry whose search space is defined by the number of elements in the array. A block-iterative frequency-domain method was used to compute the FPBGs of a PC, when present. DE has proved useful in combinatorial problems, and we also present an implementation feature that takes advantage of the periodic nature of PCs to enhance the convergence of this algorithm. Finally, we used this methodology to find a PC structure with a 19% bandgap-to-midgap ratio without requiring prior information about suboptimal configurations, and we made a statistical study of how it is affected by disorder at the borders of the structure, compared with a previous work that uses a genetic algorithm.

  16. Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction

    PubMed Central

    Cruz-Cano, Raul; Chew, David S.H.; Kwok-Pui, Choi; Ming-Ying, Leung

    2010-01-01

    Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach by least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped find the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications. PMID:20729987
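
    A minimal numpy sketch of LS-SVM training follows: with labels in {-1, +1}, the common function-estimation formulation reduces training to a single linear (KKT) system with an RBF kernel. The synthetic 2-D data, kernel width, and regularization value are illustrative assumptions, not the genomic features or tuning used in the paper.

    ```python
    # LS-SVM sketch: training is one linear solve instead of a QP.
    import numpy as np

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(+1, 0.5, (20, 2))])
    y = np.r_[-np.ones(20), np.ones(20)]

    sigma, gam = 1.0, 10.0
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2))

    K = rbf(X, X)
    n = len(y)
    # KKT system: [[0, 1^T], [1, K + I/gam]] @ [b, alpha] = [0, y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gam
    sol = np.linalg.solve(A, np.r_[0.0, y])
    b, alpha = sol[0], sol[1:]

    pred = np.sign(K @ alpha + b)          # training-set predictions
    print("training accuracy:", (pred == y).mean())
    ```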

  17. Least-Squares Support Vector Machine Approach to Viral Replication Origin Prediction.

    PubMed

    Cruz-Cano, Raul; Chew, David S H; Kwok-Pui, Choi; Ming-Ying, Leung

    2010-06-01

    Replication of their DNA genomes is a central step in the reproduction of many viruses. Procedures to find replication origins, which are initiation sites of the DNA replication process, are therefore of great importance for controlling the growth and spread of such viruses. Existing computational methods for viral replication origin prediction have mostly been tested within the family of herpesviruses. This paper proposes a new approach by least-squares support vector machines (LS-SVMs) and tests its performance not only on the herpes family but also on a collection of caudoviruses coming from three viral families under the order of caudovirales. The LS-SVM approach provides sensitivities and positive predictive values superior or comparable to those given by the previous methods. When suitably combined with previous methods, the LS-SVM approach further improves the prediction accuracy for the herpesvirus replication origins. Furthermore, by recursive feature elimination, the LS-SVM has also helped find the most significant features of the data sets. The results suggest that the LS-SVMs will be a highly useful addition to the set of computational tools for viral replication origin prediction and illustrate the value of optimization-based computing techniques in biomedical applications.

  18. Application of the Gini correlation coefficient to infer regulatory relationships in transcriptome analysis.

    PubMed

    Ma, Chuang; Wang, Xiangfeng

    2012-09-01

    One of the computational challenges in plant systems biology is to accurately infer transcriptional regulation relationships based on correlation analyses of gene expression patterns. Despite several correlation methods that are applied in biology to analyze microarray data, concerns regarding the compatibility of these methods with the gene expression data profiled by high-throughput RNA transcriptome sequencing (RNA-Seq) technology have been raised. These concerns are mainly due to the fact that the distribution of read counts in RNA-Seq experiments is different from that of fluorescence intensities in microarray experiments. Therefore, a comprehensive evaluation of the existing correlation methods and, if necessary, introduction of novel methods into biology is appropriate. In this study, we compared four existing correlation methods used in microarray analysis and one novel method called the Gini correlation coefficient on previously published microarray-based and sequencing-based gene expression data in Arabidopsis (Arabidopsis thaliana) and maize (Zea mays). The comparisons were performed on more than 11,000 regulatory relationships in Arabidopsis, including 8,929 pairs of transcription factors and target genes. Our analyses pinpointed the strengths and weaknesses of each method and indicated that the Gini correlation can compensate for the shortcomings of the Pearson correlation, the Spearman correlation, the Kendall correlation, and the Tukey's biweight correlation. The Gini correlation method, with the other four evaluated methods in this study, was implemented as an R package named rsgcc that can be utilized as an alternative option for biologists to perform clustering analyses of gene expression patterns or transcriptional network analyses.
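
    A generic implementation of the Gini correlation is short: it is the covariance of one variable with the rank (empirical CDF) of the other, normalized by the covariance of the variable with its own rank, and it is deliberately asymmetric in its arguments. The sketch below is not the rsgcc package code, and the expression values are synthetic.

    ```python
    # Generic Gini correlation: cov(x, rank(y)) / cov(x, rank(x)).
    import numpy as np
    from scipy.stats import rankdata

    def gini_correlation(x, y):
        x, y = np.asarray(x, float), np.asarray(y, float)
        rx, ry = rankdata(x), rankdata(y)
        return np.cov(x, ry)[0, 1] / np.cov(x, rx)[0, 1]

    rng = np.random.default_rng(4)
    expr_tf = rng.lognormal(size=100)                 # TF expression (synthetic)
    expr_tg = 2 * expr_tf + rng.normal(0, 0.5, 100)   # putative target gene
    # Asymmetry: the two orderings generally give different values.
    print(gini_correlation(expr_tf, expr_tg), gini_correlation(expr_tg, expr_tf))
    ```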

  19. Application of the Gini Correlation Coefficient to Infer Regulatory Relationships in Transcriptome Analysis

    PubMed Central

    Ma, Chuang; Wang, Xiangfeng

    2012-01-01

    One of the computational challenges in plant systems biology is to accurately infer transcriptional regulation relationships based on correlation analyses of gene expression patterns. Despite several correlation methods that are applied in biology to analyze microarray data, concerns regarding the compatibility of these methods with the gene expression data profiled by high-throughput RNA transcriptome sequencing (RNA-Seq) technology have been raised. These concerns are mainly due to the fact that the distribution of read counts in RNA-Seq experiments is different from that of fluorescence intensities in microarray experiments. Therefore, a comprehensive evaluation of the existing correlation methods and, if necessary, introduction of novel methods into biology is appropriate. In this study, we compared four existing correlation methods used in microarray analysis and one novel method called the Gini correlation coefficient on previously published microarray-based and sequencing-based gene expression data in Arabidopsis (Arabidopsis thaliana) and maize (Zea mays). The comparisons were performed on more than 11,000 regulatory relationships in Arabidopsis, including 8,929 pairs of transcription factors and target genes. Our analyses pinpointed the strengths and weaknesses of each method and indicated that the Gini correlation can compensate for the shortcomings of the Pearson correlation, the Spearman correlation, the Kendall correlation, and the Tukey’s biweight correlation. The Gini correlation method, with the other four evaluated methods in this study, was implemented as an R package named rsgcc that can be utilized as an alternative option for biologists to perform clustering analyses of gene expression patterns or transcriptional network analyses. PMID:22797655

  20. A Particle Batch Smoother Approach to Snow Water Equivalent Estimation

    NASA Technical Reports Server (NTRS)

    Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael

    2015-01-01

    This paper presents a newly proposed data assimilation method for historical snow water equivalent (SWE) estimation using remotely sensed fractional snow-covered area (fSCA). The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBS RMSE was approx. 54% of that seen in the EnBS, while for snow courses the PBS RMSE was approx. 79% of the EnBS. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.
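
    A toy version of the particle batch smoother conveys the mechanics: each prior replicate is weighted by the Gaussian likelihood of the entire batch of observations at once, and posterior statistics are taken over the weighted ensemble. The scalar "SWE" state, the fSCA observation operator, and all numbers below are invented.

    ```python
    # Minimal particle batch smoother sketch with a scalar state.
    import numpy as np

    rng = np.random.default_rng(11)
    n_particles = 500
    swe = rng.lognormal(mean=0.0, sigma=0.5, size=n_particles)  # prior replicates

    def predicted_fsca(s):
        # Hypothetical observation operator mapping SWE to snow-covered fraction.
        return 1.0 - np.exp(-2.0 * s)

    obs = np.array([0.9, 0.8, 0.6])        # fSCA "measurements" over a season
    sigma_obs = 0.1
    like = np.ones(n_particles)
    for z in obs:
        resid = z - predicted_fsca(swe)    # static state kept for simplicity
        like *= np.exp(-0.5 * (resid / sigma_obs) ** 2)

    w = like / like.sum()                  # batch weights (the "smoother" step)
    posterior_mean = np.sum(w * swe)
    print(posterior_mean, swe.mean())
    ```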

  1. Statistical testing of association between menstruation and migraine.

    PubMed

    Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G

    2015-02-01

    To repair and refine a previously proposed method for statistical analysis of the association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to act through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrency of migraine attacks with menstruation. Methods aiming to exclude spurious associations are needed, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling), and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing association between menstruation and migraine by statistical methods. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended with at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
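
    The core test is compact enough to sketch: a one-sided Fisher exact test with mid-p correction on a 2 × 2 diary table, computed from the hypergeometric distribution. The cell counts below are invented; the clinical cutoff values come from the paper's tables, not from this code.

    ```python
    # One-sided Fisher exact test with mid-p correction for a 2x2 diary table.
    from scipy.stats import hypergeom

    # 2x2 table: a = attacks on perimenstrual days, b = attacks on other days,
    # c = perimenstrual days without attack, d = other days without attack.
    a, b, c, d = 6, 4, 8, 42               # invented diary counts
    N = a + b + c + d                      # total diary days
    rv = hypergeom(M=N, n=a + b, N=a + c)  # attacks falling on perimenstrual days

    p_upper = rv.sf(a - 1)                 # P(X >= a)
    mid_p = p_upper - 0.5 * rv.pmf(a)      # mid-p: half-weight on observed table
    print(mid_p)
    ```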

  2. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  3. The Hubbard Model and Piezoresistivity

    NASA Astrophysics Data System (ADS)

    Celebonovic, V.; Nikolic, M. G.

    2018-02-01

    Piezoresistivity was discovered in the nineteenth century. Numerous applications of this phenomenon exist nowadays. The aim of the present paper is to explore the possibility of applying the Hubbard model to theoretical work on piezoresistivity. Results are encouraging, in the sense that numerical values of the strain gauge obtained by using the Hubbard model agree with results obtained by other methods. The calculation is simplified by the fact that it uses results for the electrical conductivity of 1D systems previously obtained within the Hubbard model by one of the present authors.

  4. Statistical significance of the rich-club phenomenon in complex networks

    NASA Astrophysics Data System (ADS)

    Jiang, Zhi-Qiang; Zhou, Wei-Xing

    2008-04-01

    We propose that the rich-club phenomenon in complex networks should be defined in the spirit of bootstrapping, in which a null model is adopted to assess the statistical significance of the rich-club detected. Our method can serve as a definition of the rich-club phenomenon and is applied to analyze three real networks and three model networks. The results show significant improvement compared with previously reported results. We report a dilemma with an exceptional example, showing that there does not exist an omnipotent definition for the rich-club phenomenon.
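
    For reference, the null-model comparison can be sketched directly with networkx: measure the rich-club density on the observed graph and on degree-preserving rewired copies, in the same bootstrap spirit as the proposed definition. The toy graph, degree cutoff, and number of null replicates are illustrative choices.

    ```python
    # Rich-club coefficient versus a degree-preserving null model.
    import networkx as nx

    G = nx.barabasi_albert_graph(200, 3, seed=42)   # toy scale-free network

    def rich_club(G, k):
        # Edge density among nodes with degree greater than k.
        rich = [n for n, d in G.degree() if d > k]
        m = len(rich)
        sub = G.subgraph(rich)
        return 2 * sub.number_of_edges() / (m * (m - 1)) if m > 1 else 0.0

    k = 10
    observed = rich_club(G, k)
    null_vals = []
    for seed in range(20):                 # 20 null replicates (arbitrary)
        R = G.copy()
        nx.double_edge_swap(R, nswap=4 * R.number_of_edges(),
                            max_tries=10**6, seed=seed)
        null_vals.append(rich_club(R, k))

    ratio = observed / (sum(null_vals) / len(null_vals))
    print(f"rich-club ratio at k={k}: {ratio:.2f}  (>1 suggests a rich club)")
    ```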

  5. [Comparison of microdilution and disk diffusion methods for the detection of fluconazole and voriconazole susceptibility against clinical Candida glabrata isolates and determination of changing susceptibility with new CLSI breakpoints].

    PubMed

    Hazırolan, Gülşen; Sarıbaş, Zeynep; Arıkan Akdağlı, Sevtap

    2016-07-01

    Candida albicans is the most frequently isolated species as the causative agent of Candida infections. However, in recent years, the isolation rate of non-albicans Candida species has increased. In many centers, Candida glabrata is one of the most commonly isolated non-albicans species; C.glabrata infections are difficult to treat owing to decreased susceptibility to fluconazole and cross-resistance to other azoles. The aims of this study were to determine the in vitro susceptibility profiles of clinical C.glabrata isolates against fluconazole and voriconazole by microdilution and disk diffusion methods, and to evaluate the results with both the previous and the current species-specific CLSI (Clinical and Laboratory Standards Institute) clinical breakpoints. A total of 70 C.glabrata strains isolated from clinical samples were included in the study. The identification of the isolates was performed by morphologic examination on cornmeal Tween 80 agar and assimilation profiles obtained by using ID32C (BioMérieux, France). Broth microdilution and disk diffusion methods were performed according to the CLSI M27-A3 and CLSI M44-A2 documents, respectively. The results were evaluated according to the CLSI M27-A3 and M44-A2 documents, using both the previous and the new species-specific breakpoints. With both the previous and the new CLSI breakpoints, broth microdilution test results showed that voriconazole has greater in vitro activity than fluconazole against C.glabrata isolates. For the two drugs tested, no very major errors were observed with the disk diffusion method when the microdilution method was considered as the reference method. Since the "susceptible" category no longer exists for fluconazole against C.glabrata, isolates that were interpreted as susceptible by the previous breakpoints were evaluated as susceptible-dose dependent by the current CLSI breakpoints. Since species-specific breakpoints have not yet been determined for voriconazole, a comparative analysis was not possible for this agent. The results obtained at 24 hours by the disk diffusion method were evaluated using both the previous and the current CLSI breakpoints; the agreement rates for fluconazole and voriconazole were 80% and 92.8% with the previous breakpoints, and 87.1% and 94.2% with the new breakpoints, respectively. The high agreement rates between the two methods obtained with the new breakpoints in particular suggest that disk diffusion is a reliable alternative method for in vitro susceptibility testing of fluconazole and voriconazole against C.glabrata isolates.

  6. Simulation of the shallow groundwater-flow system in the Forest County Potawatomi Community, Forest County, Wisconsin

    USGS Publications Warehouse

    Fienen, Michael N.; Saad, David A.; Juckem, Paul F.

    2013-01-01

    The shallow groundwater system in the Forest County Potawatomi Community, Forest County, Wisconsin, was simulated by expanding and recalibrating a previously calibrated regional model. The existing model was updated using newly collected water-level measurements, inclusion of surface-water features beyond the previous near-field boundary, and refinements to surface-water features. The updated model then was used to calculate the area contributing recharge for seven existing and three proposed pumping locations on lands of the Forest County Potawatomi Community. The existing wells were the subject of a 2004 source-water evaluation in which areas contributing recharge were calculated using the fixed-radius method. The motivation for the present (2012) project was to improve the level of detail of areas contributing recharge for the existing wells and to provide similar analysis for the proposed wells. The delineated 5- and 10-year areas contributing recharge extend from the pumping locations to the land-surface areas that contribute recharge to the existing and proposed wells. Steady-state pumping was simulated for two scenarios: a base-pumping scenario using pumping rates that reflect what the Community currently (2012) pumps (or plans to, in the case of proposed wells), and a high-pumping scenario in which the rate was set to the maximum expected from wells installed in this area, according to the Forest County Potawatomi Community Natural Resources Department. In general, the 10-year areas contributing recharge did not intersect surface-water bodies. The 5- and 10-year areas contributing recharge simulated at the maximum pumping rate at Bug Lake Road may intersect Bug Lake. At the casino near the Town of Carter, Wisconsin, the 10-year areas contributing recharge intersect infiltration ponds. At the Devils Lake and Lois Crow Drive wells, areas contributing recharge are near cultural features, including residences.

  7. Aggravation of Pre-Existing Atrioventricular Block, Wenckebach Type, Provoked by Application of X-Ray Contrast Medium

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brodmann, Marianne, E-mail: marianne.brodmann@meduni-graz.at; Seinost, Gerald; Stark, Gerhard

    2006-12-15

    Background. Significant bradycardia followed by cardiac arrest related to single bolus administration of X-ray contrast medium into a peripheral artery has not, to our knowledge, been described in the literature. Methods and Results. While performing a percutaneous transluminal angioplasty of the left superficial femoral artery in a 68-year-old patient with a pre-existing atrioventricular (AV) block, Wenckebach type, he developed an AV block III after a single bolus injection of intra-arterial X-ray contrast medium. Conclusion. We believe that application of contrast medium causes a transitory ischemia in the obstructed vessel and therefore an elevation of endogenous adenosine. In the case of a previously damaged AV node, this elevation of endogenous adenosine may be responsible for the development of a short period of third-degree AV block.

  8. Multistable orientation in a nematic liquid crystal cell induced by external field and interfacial interaction

    NASA Astrophysics Data System (ADS)

    Ong, Hiap Liew; Meyer, Robert B.; Hurd, Alan J.

    1984-04-01

    The effects of a short-range, arbitrary strength interfacial potential on the magnetic field, electric field, and optical field induced Freedericksz transition in a nematic liquid crystal cell are examined and the exact solution is obtained. By generalizing the criterion for the existence of a first-order optical field induced Freedericksz transition that was obtained previously [H. L. Ong, Phys. Rev. A 28, 2393 (1983)], the general criterion for the transition to be first order is obtained. Based on the existing experimental results, the possibility of surface induced first-order transitions is discussed and three simple empirical approaches are suggested for observing multistable orientation. The early results on the magnetic and electric fields induced Freedericksz transition and the inadequacy of the usual experimental observation methods (phase shift and capacitance measurements) are also discussed.

  9. Development, Testing, and Validation of a Model-Based Tool to Predict Operator Responses in Unexpected Workload Transitions

    NASA Technical Reports Server (NTRS)

    Sebok, Angelia; Wickens, Christopher; Sargent, Robert

    2015-01-01

    One human factors challenge is predicting operator performance in novel situations. Approaches such as drawing on relevant previous experience, and developing computational models to predict operator performance in complex situations, offer potential methods to address this challenge. A few concerns with modeling operator performance are that models need to be realistic, and that they need to be tested empirically and validated. In addition, many existing human performance modeling tools are complex and require that an analyst gain significant experience to be able to develop models for meaningful data collection. This paper describes an effort to address these challenges by developing an easy-to-use model-based tool, using models that were developed from a review of the existing human performance literature and targeted experimental studies, and performing an empirical validation of key model predictions.

  10. The effects of spatial dynamics on a wormhole throat

    NASA Astrophysics Data System (ADS)

    Alias, Anuar; Wan Abdullah, Wan Ahmad Tajuddin

    2018-02-01

    Previous studies on dynamic wormholes have focused on the dynamics of the wormhole itself, be it rotating or evolutionary in character, and in various frameworks from classical to braneworld cosmological models. In this work, we model a dynamic factor that represents the spatial dynamics, in terms of spacetime expansion and contraction, surrounding the wormhole itself. Using an RS2-based braneworld cosmological model, we modified the spacetime metric of Wong and subsequently employed the method of Bronnikov, observing that a traversable wormhole can more readily exist in an expanding brane universe, but is difficult to sustain in a contracting brane universe because of the stress-energy tensor requirements. This model of a spatial dynamic factor affecting the wormhole throat can also be applied to the cyclic or bounce universe models.

  11. Comment on 'Shang S. 2012. Calculating actual crop evapotranspiration under soil water stress conditions with appropriate numerical methods and time step. Hydrological Processes 26: 3338-3343. DOI: 10.1002/hyp.8405'

    NASA Technical Reports Server (NTRS)

    Yatheendradas, Soni; Narapusetty, Balachandrudu; Peters-Lidard, Christa; Funk, Christopher; Verdin, James

    2014-01-01

    A previous study analyzed errors in the numerical calculation of actual crop evapotranspiration (ETa) under soil water stress. Assuming no irrigation or precipitation, it constructed equations for ETa over limited soil-water ranges in a root zone drying out due to evapotranspiration. It then used a single crop-soil composite to provide recommendations about the appropriate usage of numerical methods under different values of the time step and the maximum crop evapotranspiration (ETc). This comment reformulates those ETa equations for applicability over the full range of soil water values, revealing a dependence of the relative error in numerical ETa on the initial soil water that was not seen in the previous study. It is shown that the recommendations based on a single crop-soil composite can be invalid for other crop-soil composites. Finally, the numerical error in the time-cumulative value of ETa is considered, in addition to the existing consideration of that error over individual time steps in the previous study. This cumulative ETa is more relevant to the final crop yield.
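
    A toy computation illustrates the time-step issue under discussion: in the linearly stressed range, soil-water depletion follows dW/dt = -(ETc/Wc) * W (an assumed linear stress function), whose exact solution is exponential; explicit Euler with a coarse step overstates depletion, and the error carries into cumulative ETa. All parameter values are arbitrary stand-ins.

    ```python
    # Explicit Euler vs the exact exponential depletion solution, and the
    # resulting error in cumulative ETa. Parameters are arbitrary.
    import math

    ETc, Wc, W0, T = 5.0, 100.0, 80.0, 30.0   # mm/day, mm, mm, days
    k = ETc / Wc

    def euler_cumulative_eta(dt):
        w = W0
        for _ in range(int(T / dt)):
            w -= k * w * dt                   # explicit Euler depletion step
        return W0 - w                         # cumulative ETa over the window

    exact = W0 - W0 * math.exp(-k * T)
    for dt in (0.1, 1.0, 10.0):
        approx = euler_cumulative_eta(dt)
        print(f"dt={dt:>4}: cumulative ETa={approx:6.2f} mm "
              f"(rel. error {100*(approx-exact)/exact:+.1f}%)")
    ```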

  12. Clustering of Farsi sub-word images for whole-book recognition

    NASA Astrophysics Data System (ADS)

    Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier

    2015-01-01

    Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use the important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm to measure the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different typeface. We also show that the number of newly created clusters on a page can be used as a criterion for assessing print quality and evaluating preprocessing phases.

  13. Evaluation of the PCR method for identification of Bifidobacterium species.

    PubMed

    Youn, S Y; Seo, J M; Ji, G E

    2008-01-01

    Bifidobacterium species are known for their beneficial effects on health and their wide use as probiotics. Although various polymerase chain reaction (PCR) methods for the identification of Bifidobacterium species have been published, the reliability of these methods remains open to question. In this study, we evaluated 37 previously reported PCR primer sets designed to amplify 16S rDNA, 23S rDNA, intergenic spacer regions, or repetitive DNA sequences of various Bifidobacterium species. Ten of 37 experimental primer sets showed specificity for B. adolescentis, B. angulatum, B. pseudocatenulatum, B. breve, B. bifidum, B. longum, B. longum biovar infantis and B. dentium. The results suggest that published Bifidobacterium primer sets should be re-evaluated for both reproducibility and specificity for the identification of Bifidobacterium species using PCR. Improvement of existing PCR methods will be needed to facilitate identification of other Bifidobacterium strains, such as B. animalis, B. catenulatum, B. thermophilum and B. subtile.

  14. Improved regulatory element prediction based on tissue-specific local epigenomic signatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Yupeng; Gorkin, David U.; Dickel, Diane E.

    Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.

  15. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.
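
    As a concrete illustration of the Taylor series (exhaustive summary) approach mentioned here, the sketch below applies it to a toy one-compartment model. The model, symbols, and sympy workflow are our own illustration rather than the paper's mixed-effects examples, which additionally push the random-effect distributions through these coefficients.

    ```python
    import sympy as sp

    # Toy model x'(t) = -theta*x(t), x(0) = d, observation y(t) = x(t)/V.
    t, th, V, d = sp.symbols('t theta V d', positive=True)
    x = d * sp.exp(-th * t)
    y = x / V

    # The exhaustive summary is the set of Taylor coefficients of the
    # observation function at t = 0.
    summary = [sp.simplify(sp.diff(y, t, k).subs(t, 0)) for k in range(3)]
    print(summary)   # [d/V, -d*theta/V, d*theta**2/V]

    # Structural identifiability: do the coefficients determine (theta, V)
    # uniquely, given a known dose d?  Solve summary(p) = summary(p*):
    th2, V2 = sp.symbols('theta2 V2', positive=True)
    eqs = [sp.Eq(c, c.subs({th: th2, V: V2})) for c in summary]
    print(sp.solve(eqs, [th2, V2]))   # unique solution: theta2=theta, V2=V
    ```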

  16. Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks

    PubMed Central

    Zhang, Fu-Guo; Zeng, An

    2015-01-01

    The rapid expansion of the Internet brings us overwhelming online information, far more than any individual can go through. Therefore, recommender systems were created to help people dig through this abundance of information. In networks composed of users and objects, recommender algorithms based on diffusion have been proven to be among the best performing methods. Previous works considered the diffusion process from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify the state-of-the-art recommendation methods. The simulation results show that the new methods can outperform the existing methods in both recommendation accuracy and diversity. Finally, this modification is shown to improve the recommendation in a realistic case. PMID:26125631
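
    A minimal numpy sketch of a two-step diffusion recommender with asymmetric normalisation exponents follows; the classical symmetric mass diffusion (ProbS) is recovered when both exponents equal one. The specific way the asymmetry is parameterised here is our assumption for illustration, not necessarily the paper's formulation.

    ```python
    import numpy as np

    def diffusion_scores(A, lam_obj=1.0, lam_user=1.0):
        """Mass-diffusion recommendation scores on a user-object network.

        A: binary user-object matrix (n_users x n_objects).
        lam_obj / lam_user weight the degree normalisation of the
        object->user and user->object diffusion steps; unequal values
        make the two directions of the process heterogeneous.
        """
        k_user = np.maximum(A.sum(axis=1), 1.0)        # user degrees
        k_obj = np.maximum(A.sum(axis=0), 1.0)         # object degrees
        G = A @ (A / k_obj ** lam_obj).T               # objects -> users
        S = G @ (A / k_user[:, None] ** lam_user)      # users -> objects
        return S                                       # S[u, j]: score of object j for user u

    A = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 1, 1]], dtype=float)
    scores = diffusion_scores(A, lam_obj=1.0, lam_user=0.8)
    # rank only objects the user has not collected yet
    print(np.argsort(-scores * (1 - A), axis=1))
    ```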

  17. Airfoil self-noise and prediction

    NASA Technical Reports Server (NTRS)

    Brooks, Thomas F.; Pope, D. Stuart; Marcolini, Michael A.

    1989-01-01

    A prediction method is developed for the self-generated noise of an airfoil blade encountering smooth flow. The prediction methods for the individual self-noise mechanisms are semiempirical and are based on previous theoretical studies and data obtained from tests of two- and three-dimensional airfoil blade sections. The self-noise mechanisms are due to specific boundary-layer phenomena, that is, the boundary-layer turbulence passing the trailing edge, separated-boundary-layer and stalled flow over an airfoil, vortex shedding due to laminar boundary layer instabilities, vortex shedding from blunt trailing edges, and the turbulent vortex flow existing near the tip of lifting blades. The predictions are compared successfully with published data from three self-noise studies of different airfoil shapes. An application of the prediction method is reported for a large scale-model helicopter rotor, and the predictions compared well with experimental broadband noise measurements. A computer code of the method is given.
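
    Each mechanism's one-third-octave spectrum comes from the report's semiempirical relations; the final step, summing the mechanisms incoherently on an energy basis, is simple enough to sketch (Python, with the per-mechanism levels treated as given):

    ```python
    import numpy as np

    def total_spl(spl_components):
        """Incoherent (energy) sum of self-noise mechanism spectra, in dB.

        spl_components: iterable of arrays, one 1/3-octave SPL spectrum
        per mechanism (e.g. turbulent-boundary-layer trailing edge,
        separation/stall, laminar vortex shedding, bluntness, tip).
        Only this combination step is sketched; the individual spectra
        come from the report's semiempirical relations.
        """
        levels = np.asarray(list(spl_components), dtype=float)
        return 10.0 * np.log10(np.sum(10.0 ** (levels / 10.0), axis=0))

    # e.g. two mechanisms contributing 60 dB and 63 dB in the same band:
    print(total_spl([np.array([60.0]), np.array([63.0])]))  # ~64.76 dB
    ```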

  18. Improved regulatory element prediction based on tissue-specific local epigenomic signatures

    DOE PAGES

    He, Yupeng; Gorkin, David U.; Dickel, Diane E.; ...

    2017-02-13

    Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.

  19. Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks.

    PubMed

    Zhang, Fu-Guo; Zeng, An

    2015-01-01

    The rapid expansion of the Internet brings us overwhelming online information, far more than any individual can go through. Therefore, recommender systems were created to help people dig through this abundance of information. In networks composed of users and objects, recommender algorithms based on diffusion have been proven to be among the best performing methods. Previous works considered the diffusion process from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify the state-of-the-art recommendation methods. The simulation results show that the new methods can outperform the existing methods in both recommendation accuracy and diversity. Finally, this modification is shown to improve the recommendation in a realistic case.

  20. Spotting the difference in molecular dynamics simulations of biomolecules

    NASA Astrophysics Data System (ADS)

    Sakuraba, Shun; Kono, Hidetoshi

    2016-08-01

    Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
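
    For orientation, a plain (non-iterative) linear discriminant projection of two labelled trajectories can be sketched with scikit-learn; LDA-ITER adds an iterative procedure on top of this basic step, so treat the following as a baseline illustration with synthetic stand-in coordinates:

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # X_a, X_b: (n_frames, n_features) internal coordinates of two
    # trajectories recorded under different conditions (synthetic here).
    rng = np.random.default_rng(0)
    X_a = rng.normal(0.0, 1.0, size=(500, 10))   # condition A
    X_b = rng.normal(0.3, 1.0, size=(500, 10))   # condition B
    X = np.vstack([X_a, X_b])
    labels = np.array([0] * len(X_a) + [1] * len(X_b))

    lda = LinearDiscriminantAnalysis(n_components=1)
    projection = lda.fit_transform(X, labels)    # 1D discriminant coordinate
    # the discriminant vector highlights the coordinates that differ most
    print(lda.scalings_.ravel())
    ```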

  1. Binding ligand prediction for proteins using partial matching of local surface patches.

    PubMed

    Sael, Lee; Kihara, Daisuke

    2010-01-01

    Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group.
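
    The weighted bipartite matching step lends itself to a short sketch: with per-patch descriptor vectors in hand (3D Zernike descriptors in the paper, random stand-ins below), a minimum-cost assignment pairs patches across two pockets and the matched costs score pocket similarity. The Hungarian-algorithm routine used here is a generic substitute for the paper's matching procedure.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from scipy.spatial.distance import cdist

    def pocket_distance(patches_a, patches_b):
        """Partial-matching distance between two binding pockets.

        patches_a, patches_b: (n_patches, n_descriptor) arrays of
        per-patch shape descriptors.  A minimum-cost bipartite matching
        pairs patches (rectangular matrices match the smaller side),
        and the mean matched cost scores pocket similarity.
        """
        cost = cdist(patches_a, patches_b, metric='euclidean')
        rows, cols = linear_sum_assignment(cost)
        return cost[rows, cols].mean()

    a = np.random.rand(12, 121)    # pocket with 12 surface patches
    b = np.random.rand(15, 121)    # pocket with 15 surface patches
    print(pocket_distance(a, b))   # lower = more similar pockets
    ```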

  2. Binding Ligand Prediction for Proteins Using Partial Matching of Local Surface Patches

    PubMed Central

    Sael, Lee; Kihara, Daisuke

    2010-01-01

    Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group. PMID:21614188

  3. A comparison of students' achievement and attitude as a function of lecture/lab sequencing in a non-science majors introductory biology course

    NASA Astrophysics Data System (ADS)

    Hurst March, Robin Denise

    This investigation compared student achievement and attitudes toward science under three different sequencing approaches used in teaching biology to nonscience students. The three sequencing approaches were the lecture course only, the lecture and laboratory courses taken together, and the laboratory with a previously taken lecture. The purposes of this study were to determine whether (1) a relationship exists between Attitude Towards Science in School Assessment (ATSSA) scores (Germann, 1988) and biology achievement, (2) a difference exists among ATSSA scores across sequences, (3) a difference exists among biology achievement scores across sequences, and (4) the ATSSA is a reliable instrument of science attitude assessment for undergraduate students in introductory biology nonmajors laboratory and lecture courses at a Research I institution during the fall semester of 1996. Fifty-four students comprised the lecture-only group, 90 students the lecture-and-laboratory group, and 23 students the laboratory-only group. The research questions addressed were (1) What are the differences in student biology achievement as a function of the three different methods of instruction? (2) What are the differences in student attitude towards science as a function of the three different methods of instruction? (3) What is the relationship between post-attitude (ATSSA) and biology achievement for each of the three methods of instruction? An analysis of variance used the mean posttest scores on the ATSSA and the mean achievement scores as the dependent variables. The independent variable was the sequence of enrollment in introductory biology, with three levels. At the .05 level of significance, no significant difference was found between ATSSA scores and laboratory/lecture sequence. At the .05 level of significance, no significant difference was found between achievement and laboratory/lecture sequence. A Pearson product-moment correlation was used to test whether a relationship existed between posttest ATSSA scores and achievement totals in each sequence. A significant relationship was noted between the ATSSA and achievement in each sequence that involved a laboratory component.

  4. Barriers to asymptomatic screening and other STD services for adolescents and young adults: focus group discussions

    PubMed Central

    Tilson, Elizabeth C; Sanchez, Victoria; Ford, Chandra L; Smurzynski, Marlene; Leone, Peter A; Fox, Kimberley K; Irwin, Kathleen; Miller, William C

    2004-01-01

    Background Sexually transmitted diseases (STDs) are a major public health problem among young people and can lead to the spread of HIV. Previous studies have primarily addressed barriers to STD care for symptomatic patients. The purpose of our study was to identify perceptions about existing barriers to and ideal services for STDs, especially asymptomatic screening, among young people in a southeastern community. Methods Eight focus group discussions including 53 White, African American, and Latino youth (age 14–24) were conducted. Results Perceived barriers to care included lack of knowledge of STDs and available services, cost, shame associated with seeking services, long clinic waiting times, discrimination, and urethral specimen collection methods. Perceived features of ideal STD services included locations close to familiar places, extended hours, and urine-based screening. Television was perceived as the most effective route of disseminating STD information. Conclusions Further research is warranted to evaluate improving convenience, efficiency, and privacy of existing services; adding urine-based screening and new services closer to neighborhoods; and using mass media to disseminate STD information as strategies to increase STD screening. PMID:15189565

  5. a Theoretical and Experimental Investigation of 1/F Noise in the Alpha Decay Rates of AMERICIUM-241.

    NASA Astrophysics Data System (ADS)

    Pepper, Gary T.

    New experimental methods and data analysis techniques were used to investigate the hypothesis of the existence of 1/f noise in alpha-particle emission rates of Am-241. Experimental estimates of the flicker floor were found to be almost two orders of magnitude less than Handel's theoretical prediction and previous measurements. The existence of a flicker floor for Co-57 decay, a process in which no charged particles are emitted, indicates that instrumental instability is likely responsible for the values of the flicker floor obtained. The experimental results and the theoretical arguments presented indicate that a re-examination of Handel's theory of 1/f noise is appropriate. Methods of numerical simulation of noise processes with a 1/f^n power spectral density were developed. These were used to investigate various statistical aspects of 1/f^n noise. The probability density function of the Allan variance was investigated in order to establish confidence limits for the observations made. The effect of using grouped (correlated) data for evaluating the Allan variance was also investigated.
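
    The Allan variance used throughout this work is straightforward to compute; a minimal sketch for a counting experiment follows. For pure Poisson (white) counting noise the Allan variance keeps averaging down as the bin size grows, so a floor that refuses to average down is the flicker-floor signature discussed above. The non-overlapping estimator and the rate values below are illustrative choices.

    ```python
    import numpy as np

    def allan_variance(counts, m):
        """Non-overlapping Allan variance of a count series.

        counts: decay counts in successive equal intervals; m: number of
        intervals averaged per bin.  Returns 0.5 * <(y_{k+1} - y_k)^2>
        over adjacent bin means y_k.
        """
        n_bins = len(counts) // m
        y = counts[:n_bins * m].reshape(n_bins, m).mean(axis=1)
        return 0.5 * np.mean(np.diff(y) ** 2)

    rng = np.random.default_rng(1)
    rates = rng.poisson(lam=10_000, size=2 ** 14).astype(float)
    for m in (1, 4, 16, 64):
        print(m, allan_variance(rates, m))   # ~ lam/m for white noise
    ```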

  6. The exponentiated Hencky-logarithmic strain energy. Part II: Coercivity, planar polyconvexity and existence of minimizers

    NASA Astrophysics Data System (ADS)

    Neff, Patrizio; Lankeit, Johannes; Ghiba, Ionel-Dumitrel; Martin, Robert; Steigmann, David

    2015-08-01

    We consider a family of isotropic volumetric-isochoric decoupled strain energies based on the Hencky-logarithmic (true, natural) strain tensor log U, where μ > 0 is the infinitesimal shear modulus, κ is the infinitesimal bulk modulus, λ is the first Lamé constant, k and k̂ are dimensionless parameters, F = ∇φ is the gradient of deformation, U = √(FᵀF) is the right stretch tensor and dev log U is the deviatoric part (the projection onto the traceless tensors) of the strain tensor log U. For small elastic strains, the energies reduce to first order to the classical quadratic Hencky energy, which is known not to be rank-one convex. The main result of this paper is that in plane elastostatics the energies of the family are polyconvex for suitable values of the dimensionless parameters k and k̂, extending a previous finding on rank-one convexity. Our method uses a judicious application of Steigmann's polyconvexity criteria based on the representation of the energy in terms of the principal invariants of the stretch tensor U. These energies also satisfy suitable growth and coercivity conditions. We formulate the equilibrium equations, and we prove the existence of minimizers by the direct methods of the calculus of variations.
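
    The family in question is the exponentiated Hencky energy introduced in Part I of this series. The displayed formula was lost in extraction; a hedged reconstruction of its standard form follows (LaTeX), with the precise parameter constraints to be checked against the paper itself:

    ```latex
    W_{\mathrm{eH}}(F) \,=\, \widehat{W}_{\mathrm{eH}}(U)
      \,=\, \frac{\mu}{k}\, e^{\,k\,\lVert \mathrm{dev}_n \log U \rVert^{2}}
      \,+\, \frac{\kappa}{2\hat{k}}\, e^{\,\hat{k}\,[\mathrm{tr}(\log U)]^{2}},
    \qquad
    \mathrm{dev}_n \log U \,=\, \log U - \frac{\mathrm{tr}(\log U)}{n}\,\mathbb{1}.
    ```

    Expanding to leading order in the strain recovers the quadratic Hencky energy μ‖dev_n log U‖² + (κ/2)[tr(log U)]², which is the small-strain reduction mentioned in the abstract.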

  7. A Community Publication and Dissemination System for Hydrology Education Materials

    NASA Astrophysics Data System (ADS)

    Ruddell, B. L.

    2015-12-01

    Hosted by CUAHSI and the Science Education Resource Center (SERC), federated by the National Science Digital Library (NSDL), and allied with the Water Data Center (WDC), Hydrologic Information System (HIS), and HydroShare projects, a simple cyberinfrastructure has been launched for the publication and dissemination of data and model driven university hydrology education materials. This lightweight system's metadata describes learning content as a data-driven module with defined data inputs and outputs. This structure allows a user to mix and match modules to create sequences of content that teach both hydrology and computer learning outcomes. Importantly, this modular infrastructure allows an instructor to substitute a module based on updated computer methods for one based on outdated computer methods, hopefully solving the problem of rapid obsolescence that has hampered previous community efforts. The prototype system is now available from CUAHSI and SERC, with some example content. The system is designed to catalog, link to, make visible, and make accessible the existing and future contributions of the community; this system does not create content. Submissions from hydrology educators are eagerly solicited, especially for existing content.

  8. Testing mapping algorithms of the cancer-specific EORTC QLQ-C30 onto EQ-5D in malignant mesothelioma.

    PubMed

    Arnold, David T; Rowen, Donna; Versteegh, Matthijs M; Morley, Anna; Hooper, Clare E; Maskell, Nicholas A

    2015-01-23

    In order to estimate utilities for cancer studies where the EQ-5D was not used, the EORTC QLQ-C30 can be used to estimate EQ-5D values via existing mapping algorithms. Several mapping algorithms exist for this transformation; however, algorithms tend to lose accuracy for patients in poor health states. The aim of this study was to test all existing mapping algorithms of the QLQ-C30 onto the EQ-5D in a dataset of patients with malignant pleural mesothelioma, an invariably fatal malignancy for which no previous mapping estimation has been published. Health-related quality of life (HRQoL) data in which both the EQ-5D and QLQ-C30 were administered simultaneously were obtained from the UK-based prospective observational SWAMP (South West Area Mesothelioma and Pemetrexed) trial. In the original trial, 73 patients with pleural mesothelioma were offered palliative chemotherapy and their HRQoL was assessed across five time points. These data were used to test the nine available mapping algorithms found in the literature, comparing predicted against observed EQ-5D values. The ability of the algorithms to predict the mean, minimise error and detect clinically significant differences was assessed. The dataset had a total of 250 observations across the five time points. The linear regression mapping algorithms tested generally performed poorly, over-estimating the predicted compared to observed EQ-5D values, especially when the observed EQ-5D was below 0.5. The best performing algorithm used a response mapping method and predicted the mean EQ-5D accurately, with an average root mean squared error of 0.17 (standard deviation 0.22). This algorithm reliably discriminated between the clinically distinct subgroups seen in the primary dataset. This study tested mapping algorithms in a population with poor health states, where they have previously been shown to perform poorly. Further research into EQ-5D estimation should be directed at response mapping methods, given their superior performance in this study.

  9. Evaluation of free modeling targets in CASP11 and ROLL.

    PubMed

    Kinch, Lisa N; Li, Wenlin; Monastyrskyy, Bohdan; Kryshtafovych, Andriy; Grishin, Nick V

    2016-09-01

    We present an assessment of 'template-free modeling' (FM) in CASP11 and ROLL. Community-wide server performance suggested that the use of automated scores similar to previous CASPs would provide a good system of evaluating performance, even in the absence of comprehensive manual assessment. The CASP11 FM category included several outstanding examples, including successful prediction by the Baker group of a 256-residue target (T0806-D1) that lacked sequence similarity to any existing template. The top server model prediction by Zhang's Quark, which was apparently selected and refined by several manual groups, encompassed the entire fold of target T0837-D1. Methods from the same two groups tended to dominate overall CASP11 FM and ROLL rankings. Comparison of top FM predictions with those from the previous CASP experiment revealed progress in the category, particularly reflected in high prediction accuracy for larger protein domains. FM prediction models for two cases were sufficient to provide functional insights that were otherwise not obtainable by traditional sequence analysis methods. Importantly, CASP11 abstracts revealed that alignment-based contact prediction methods brought about much of the CASP11 progress, producing both of the functionally relevant models as well as several of the other outstanding structure predictions. These methodological advances enabled de novo modeling of much larger domain structures than was previously possible and allowed prediction of functional sites. Proteins 2016; 84(Suppl 1):51-66. © 2015 Wiley Periodicals, Inc.

  10. Rapid quantification of neutral lipids and triglycerides during zebrafish embryogenesis.

    PubMed

    Yoganantharjah, Prusothman; Byreddy, Avinesh R; Fraher, Daniel; Puri, Munish; Gibert, Yann

    2017-01-01

    The zebrafish is a useful vertebrate model to study lipid metabolism. Oil Red-O (ORO) staining of zebrafish embryos, though sufficient for visualizing the localization of triglycerides, was previously inadequate to quantify neutral lipid abundance. For metabolic studies, it is crucial to be able to quantify lipids during embryogenesis. Currently no cost effective, rapid and reliable method exists to quantify the deposition of neutral lipids and triglycerides. Thin layer chromatography (TLC), gas chromatography and mass spectrometry can be used to accurately measure lipid levels, but are time consuming and costly in their use. Hence, we developed a rapid and reliable method to quantify neutral lipids and triglycerides. Zebrafish embryos were exposed to Rimonabant (Rimo) or WIN 55,212-2 mesylate (WIN), compounds previously shown to modify lipid content during zebrafish embryogenesis. Following this, ORO stain was extracted out of both the zebrafish body and yolk sac and optical density was measured to give an indication of neutral lipid and triglyceride accumulation. Embryos treated with 0.3 μM WIN resulted in increased lipid accumulation, whereas 3 μM Rimo caused a decrease in lipid accumulation during embryogenesis. TLC was performed on zebrafish bodies to validate the developed method. In addition, BODIPY free fatty acids were injected into zebrafish embryos to confirm quantification of changes in lipid content in the embryo. Previously, ORO was limited to qualitative assessment; now ORO can be used as a quantitative tool to directly determine changes in the levels of neutral lipids and triglycerides.

  11. Probabilistic Determination of Green Infrastructure Pollutant Removal Rates from the International Stormwater BMP Database

    NASA Astrophysics Data System (ADS)

    Gilliom, R.; Hogue, T. S.; McCray, J. E.

    2017-12-01

    There is a need for improved parameterization of stormwater best management practice (BMP) performance estimates to improve modeling of urban hydrology, planning and design of green infrastructure projects, and water quality crediting for stormwater management. Percent removal is commonly used to estimate BMP pollutant removal efficiency, but there is general agreement that this approach has significant uncertainties and is easily affected by site-specific factors. Additionally, some fraction of monitored BMPs have negative percent removal, so it is important to understand the probability that a BMP will provide the desired water quality function versus exacerbating water quality problems. The widely used k-C* equation has been shown to provide a more adaptable and accurate method to model BMP contaminant attenuation, and previous work has begun to evaluate the strengths and weaknesses of the k-C* method. However, no systematic method exists for obtaining the first-order removal rate constants needed to apply the k-C* equation to stormwater BMPs; thus the method sees minimal application. The current research analyzes existing water quality data in the International Stormwater BMP Database to provide screening-level parameterization of the k-C* equation for selected BMP types and analysis of factors that skew the distribution of efficiency estimates from the database. Results illustrate that while certain BMPs are more likely to provide desired contaminant removal than others, site- and design-specific factors strongly influence performance. For example, bioretention systems show both the highest and lowest removal rates of dissolved copper, total phosphorous, and total nitrogen. Exploration and discussion of this and other findings will inform the application of the probabilistic pollutant removal rate constants. Though data limitations exist, this research will facilitate improved accuracy of BMP modeling and ultimately aid decision-making for stormwater quality management in urban systems.
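
    For reference, the k-C* relation itself is compact; a Python sketch of the standard first-order form follows, with C* the irreducible background concentration, k the first-order areal removal rate constant and q the hydraulic loading rate. The numbers in the example are placeholders, not rate constants fitted from the BMP Database.

    ```python
    import numpy as np

    def kcstar_outflow(c_in, k, q, c_star=0.0):
        """First-order k-C* model of BMP pollutant attenuation:
        C_out = C* + (C_in - C*) * exp(-k / q),
        with k and q in consistent units (e.g. m/yr)."""
        return c_star + (c_in - c_star) * np.exp(-k / q)

    # illustrative values only: inflow 2.0 mg/L, k = 30 m/yr, q = 50 m/yr
    print(kcstar_outflow(c_in=2.0, k=30.0, q=50.0, c_star=0.1))  # ~1.14 mg/L
    ```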

  12. Generational forecasting in academic medicine: a unique method of planning for success in the next two decades.

    PubMed

    Howell, Lydia Pleotis; Joad, Jesse P; Callahan, Edward; Servis, Gregg; Bonham, Ann C

    2009-08-01

    Multigenerational teams are essential to the missions of academic health centers (AHCs). Generational forecasting using Strauss and Howe's predictive model, "the generational diagonal," can be useful for anticipating and addressing issues so that each generation is effective. Forecasts are based on the observation that cyclical historical events are experienced by all generations, but the response of each generation differs according to its phase of life and previous defining experiences. This article relates Strauss and Howe's generational forecasts to AHCs. Predicted issues such as work-life balance, indebtedness, and succession planning have existed previously, but they now have different causes or consequences because of the unique experiences and life stages of current generations. Efforts to address these issues at the authors' AHC include a work-life balance workgroup, expanded leave, and intramural grants.

  13. Erratum: A Comparison of Closures for Stochastic Advection-Diffusion Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarman, Kenneth D.; Tartakovsky, Alexandre M.

    2015-01-01

    This note corrects an error in the authors' article [SIAM/ASA J. Uncertain. Quantif., 1 (2013), pp. 319-347] in which the cited work [Neuman, Water Resour. Res., 29(3) (1993), pp. 633-645] was incorrectly represented and attributed. Concentration covariance equations presented in our article as new were in fact previously derived in the latter work. In the original abstract, the phrase "...we propose a closed-form approximation to two-point covariance as a measure of uncertainty..." should be replaced by the phrase "...we study a closed-form approximation to two-point covariance, previously derived in [Neuman 1993], as a measure of uncertainty." The primary results in our article, the analytical and numerical comparison of existing closure methods for specific example problems, are not changed by this correction.

  14. Coupled Aerodynamic and Structural Sensitivity Analysis of a High-Speed Civil Transport

    NASA Technical Reports Server (NTRS)

    Mason, B. H.; Walsh, J. L.

    2001-01-01

    An objective of the High Performance Computing and Communication Program at the NASA Langley Research Center is to demonstrate multidisciplinary shape and sizing optimization of a complete aerospace vehicle configuration by using high-fidelity, finite-element structural analysis and computational fluid dynamics aerodynamic analysis. In a previous study, a multidisciplinary analysis system for a high-speed civil transport was formulated to integrate a set of existing discipline analysis codes, some of them computationally intensive. This paper is an extension of the previous study, in which the sensitivity analysis for the coupled aerodynamic and structural analysis problem is formulated and implemented. Uncoupled stress sensitivities computed with a constant load vector in a commercial finite element analysis code are compared to coupled aeroelastic sensitivities computed by finite differences. The computational expense of these sensitivity calculation methods is discussed.

  15. Global Solutions for the zero-energy Novikov–Veselov equation by inverse scattering

    NASA Astrophysics Data System (ADS)

    Music, Michael; Perry, Peter

    2018-07-01

    Using the inverse scattering method, we construct global solutions to the Novikov–Veselov equation for real-valued decaying initial data q_0 with the property that the associated Schrödinger operator is nonnegative. Such initial data are either critical (an arbitrarily small perturbation of the potential makes the operator nonpositive) or subcritical (sufficiently small perturbations of the potential preserve non-negativity of the operator). Previously, Lassas, Mueller, Siltanen and Stahel proved global existence for critical potentials, also called potentials of conductivity type. We extend their results to include the much larger class of subcritical potentials. We show that the subcritical potentials form an open set and that the critical potentials form the nowhere dense boundary of this open set. Our analysis draws on previous work of the first author and on ideas of Grinevich and Manakov.

  16. Virtual fringe projection system with nonparallel illumination based on iteration

    NASA Astrophysics Data System (ADS)

    Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian

    2017-06-01

    Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
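
    The iterative ray-surface intersection at the heart of such a nonparallel-illumination simulator can be sketched compactly. The fixed-point scheme below (intersect the ray with a horizontal plane, re-evaluate the surface height, repeat) is one plausible reading of the method described, not the paper's exact algorithm; the projector position and surface are made up for the example.

    ```python
    import numpy as np

    def ray_surface_intersection(origin, direction, height_fn,
                                 tol=1e-9, max_iter=100):
        """Iteratively intersect a diverging projector ray with a surface.

        origin, direction: 3-vectors of the ray; height_fn(x, y) -> z
        gives the object surface (a constant recovers a reference plane).
        Starting from z = 0, the hit point is refined by re-intersecting
        the ray with the plane at the previous estimate's surface height.
        """
        o = np.asarray(origin, dtype=float)
        d = np.asarray(direction, dtype=float)
        z = 0.0
        for _ in range(max_iter):
            t = (z - o[2]) / d[2]          # ray / plane {Z = z} intersection
            p = o + t * d
            z_new = height_fn(p[0], p[1])
            if abs(z_new - z) < tol:
                break
            z = z_new
        return p

    # diverging ray from a projector at (0, 0, 500) onto a gentle surface
    surface = lambda x, y: 5.0 * np.sin(0.1 * x)
    print(ray_surface_intersection([0, 0, 500], [0.2, 0.1, -1.0], surface))
    ```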

  17. Efficient visualization of urban spaces

    NASA Astrophysics Data System (ADS)

    Stamps, A. E.

    2012-10-01

    This chapter presents a new method for calculating efficiency and applies that method to the issues of selecting simulation media and evaluating the contextual fit of new buildings in urban spaces. The new method is called "meta-analysis". A meta-analytic review of 967 environments indicated that static color simulations are the most efficient media for visualizing urban spaces. For contextual fit, four original experiments are reported on how strongly five factors influence visual appeal of a street: architectural style, trees, height of a new building relative to the heights of existing buildings, setting back a third story, and distance. A meta-analysis of these four experiments and previous findings, covering 461 environments, indicated that architectural style, trees, and height had effects strong enough to warrant implementation, but the effects of setting back third stories and distance were too small to warrant implementation.

  18. Retinal artery-vein classification via topology estimation

    PubMed Central

    Estrada, Rolando; Allingham, Michael J.; Mettu, Priyatham S.; Cousins, Scott W.; Tomasi, Carlo; Farsiu, Sina

    2015-01-01

    We propose a novel, graph-theoretic framework for distinguishing arteries from veins in a fundus image. We make use of the underlying vessel topology to better classify small and midsized vessels. We extend our previously proposed tree topology estimation framework by incorporating expert, domain-specific features to construct a simple, yet powerful global likelihood model. We efficiently maximize this model by iteratively exploring the space of possible solutions consistent with the projected vessels. We tested our method on four retinal datasets and achieved classification accuracies of 91.0%, 93.5%, 91.7%, and 90.9%, outperforming existing methods. Our results show the effectiveness of our approach, which is capable of analyzing the entire vasculature, including peripheral vessels, in wide field-of-view fundus photographs. This topology-based method is a potentially important tool for diagnosing diseases with retinal vascular manifestation. PMID:26068204

  19. Ensemble-Based Parameter Estimation in a Coupled GCM Using the Adaptive Spatial Average Method

    DOE PAGES

    Liu, Y.; Liu, Z.; Zhang, S.; ...

    2014-05-29

    Ensemble-based parameter estimation for a climate model is emerging as an important topic in climate research. For a complex system such as a coupled ocean–atmosphere general circulation model, the sensitivity and response of a model variable to a model parameter can vary spatially and temporally. An adaptive spatial average (ASA) algorithm is proposed to increase the efficiency of parameter estimation. Refined from a previous spatial average method, the ASA uses the ensemble spread as the criterion for selecting “good” values from the spatially varying posterior estimated parameter values; these good values are then averaged to give the final global uniform posterior parameter. In comparison with existing methods, the ASA parameter estimation has a superior performance: faster convergence and enhanced signal-to-noise ratio.
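
    The selection-then-average idea of the ASA is easy to sketch. In the toy version below, grid points whose ensemble spread falls in the lower half are deemed "good" and averaged; the exact spread criterion (here, a quantile cut) is our assumption, as is the synthetic data.

    ```python
    import numpy as np

    def adaptive_spatial_average(param_post, spread, quantile=0.5):
        """Average a spatially varying posterior parameter over the grid
        points with the smallest ensemble spread.

        param_post: (n_grid,) posterior parameter values;
        spread: (n_grid,) ensemble spread at the same points.
        """
        good = spread <= np.quantile(spread, quantile)
        return param_post[good].mean()

    rng = np.random.default_rng(2)
    truth = 0.7
    spread = rng.uniform(0.05, 0.5, size=1000)
    param_post = truth + rng.normal(0.0, spread)  # noisier where spread is large
    print(adaptive_spatial_average(param_post, spread))  # close to 0.7
    ```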

  20. The effect of texture granularity on texture synthesis quality

    NASA Astrophysics Data System (ADS)

    Golestaneh, S. Alireza; Subedar, Mahesh M.; Karam, Lina J.

    2015-09-01

    Natural and artificial textures occur frequently in images and in video sequences. Image/video coding systems based on texture synthesis can make use of a reliable texture synthesis quality assessment method in order to improve the compression performance in terms of perceived quality and bit-rate. Existing objective visual quality assessment methods do not perform satisfactorily when predicting the synthesized texture quality. In our previous work, we showed that texture regularity can be used as an attribute for estimating the quality of synthesized textures. In this paper, we study the effect of another texture attribute, namely texture granularity, on the quality of synthesized textures. For this purpose, subjective studies are conducted to assess the quality of synthesized textures with different levels (low, medium, high) of perceived texture granularity using different types of texture synthesis methods.

  1. Compact illumination optic with three freeform surfaces for improved beam control.

    PubMed

    Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Hernández, Maikel; Blen, José; Grabovičkić, Dejan; Benítez, Pablo; Miñano, Juan Carlos; Thienpont, Hugo; Duerr, Fabian

    2017-11-27

    Multi-chip and large-size LEDs dominate the lighting market in developed countries these days. Nevertheless, a general optical design method to create prescribed intensity patterns for this type of extended source does not exist. We present a design strategy in which the source and the target pattern are described by means of "edge wavefronts" of the system. The goal is then to find an optic coupling these wavefronts, which in the current work is a monolithic part comprising up to three freeform surfaces calculated with the simultaneous multiple surface (SMS) method. The resulting optic fully controls, for the first time, three freeform wavefronts, one more than previous SMS designs. Simulations with extended LEDs demonstrate improved intensity-tailoring capabilities, confirming the effectiveness of our method and suggesting that enhanced performance features can be achieved by controlling additional wavefronts.

  2. A study on the application of topic models to motif finding algorithms.

    PubMed

    Basha Gutierrez, Josep; Nakai, Kenta

    2016-12-22

    Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we apply this approach to the discovery of the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, a fundamental problem in molecular biology research for the understanding of transcriptional regulation. We present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as a collection of text documents, and the k-mers contained in them as words, in order to build a correlated topic model (CTM) and iteratively reduce its perplexity. We also used the perplexity measurement of CTMs to improve our previous algorithm, based on a genetic algorithm and several statistical coefficients. The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients, both at the nucleotide and site levels. The results of our first approach showed a performance comparable to the other methods studied, especially at the site level and in sensitivity scores, in which it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at the nucleotide and site levels, and in overall performance at the site level. The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfying enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.
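
    The document/word analogy can be sketched directly: sequences become documents of overlapping k-mers, a topic model is fitted, and perplexity scores the fit. scikit-learn ships a plain LDA rather than the correlated topic model used in the paper, so the sketch below is a simplified stand-in with toy sequences.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    def kmerize(seq, k=6):
        """Represent a DNA sequence as a 'document' of overlapping k-mers."""
        return ' '.join(seq[i:i + k] for i in range(len(seq) - k + 1))

    # Toy promoter set; real input would be hundreds of upstream sequences.
    seqs = ['ACGTACGTTGACGTCAATGC', 'TTGACGTCAACGGTACGTAC',
            'GGGTGACGTCATTTACGTGC', 'CCCTTGACGTCAGGGACGTA']
    docs = [kmerize(s) for s in seqs]

    X = CountVectorizer().fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
    # Lower perplexity = the topic structure explains k-mer usage better;
    # the paper iterates on this kind of measure (with a CTM instead of
    # the plain LDA used here).
    print(lda.perplexity(X))
    ```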

  3. Computation and measurement of cell decision making errors using single cell data

    PubMed Central

    Habibi, Iman; Cheong, Raymond; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali

    2017-01-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF—NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950
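
    The false-alarm and miss metrics have a compact form once a decision threshold and the noisy response distributions are specified. The sketch below uses Gaussian responses purely for illustration; the paper estimates these quantities from single-cell NF-κB response data rather than from an assumed parametric model.

    ```python
    import numpy as np
    from scipy import stats

    def decision_error_probs(threshold, mu0=0.0, mu1=2.0, sigma=1.0):
        """False-alarm and miss probabilities for a thresholded readout.

        mu0 / mu1: mean pathway response with the input signal absent /
        present; sigma: transduction noise level (values illustrative).
        False alarm: declaring a signal when none exists; miss: the
        converse.
        """
        p_false_alarm = 1.0 - stats.norm.cdf(threshold, loc=mu0, scale=sigma)
        p_miss = stats.norm.cdf(threshold, loc=mu1, scale=sigma)
        return p_false_alarm, p_miss

    for noise in (0.5, 1.0, 2.0):
        fa, miss = decision_error_probs(threshold=1.0, sigma=noise)
        print(f"sigma={noise}: P(false alarm)={fa:.3f}, P(miss)={miss:.3f}")
    ```

    As in the study, both error probabilities grow with the transduction noise level, since the two response distributions overlap more.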

  4. Computation and measurement of cell decision making errors using single cell data.

    PubMed

    Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S; Abdi, Ali

    2017-04-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell's inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves.

  5. Quantum money with nearly optimal error tolerance

    NASA Astrophysics Data System (ADS)

    Amiri, Ryan; Arrazola, Juan Miguel

    2017-06-01

    We present a family of quantum money schemes with classical verification which display a number of benefits over previous proposals. Our schemes are based on hidden matching quantum retrieval games and they tolerate noise up to 23%, which we conjecture reaches 25% asymptotically as the dimension of the underlying hidden matching states is increased. Furthermore, we prove that 25% is the maximum tolerable noise for a wide class of quantum money schemes with classical verification, meaning our schemes are almost optimally noise tolerant. We use methods in semidefinite programming to prove security in a substantially different manner to previous proposals, leading to two main advantages: first, coin verification involves only a constant number of states (with respect to coin size), thereby allowing for smaller coins; second, the reusability of coins within our scheme grows linearly with the size of the coin, which is known to be optimal. Finally, we suggest methods by which the coins in our protocol could be implemented using weak coherent states and verified using existing experimental techniques, even in the presence of detector inefficiencies.

  6. Flow resistance and suspended load in sand-bed rivers: Simplified stratification model

    USGS Publications Warehouse

    Wright, S.; Parker, G.

    2004-01-01

    New methods are presented for the prediction of the flow depth, grain-size specific near-bed concentration, and bed-material suspended sediment transport rate in sand-bed rivers. The salient improvements delineated here all relate to the need to modify existing formulations in order to encompass the full range of sand-bed rivers, and in particular large, low-slope sand-bed rivers. They can be summarized as follows: (1) the inclusion of density stratification effects in a simplified manner, which have been shown in the companion paper to be particularly relevant for large, low-slope, sand-bed rivers; (2) a new predictor for near-bed entrainment rate into suspension which extends a previous relation to the range of large, low-slope sand-bed rivers; and (3) a new predictor for form drag which again extends a previous relation to include large, low-slope sand-bed rivers. Finally, every attempt has been made to cast the relations in the simplest form possible, including the development of software, so that practicing engineers may easily use the methods. ?? ASCE.

  7. Projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random networks.

    PubMed

    Feng, Cun-Fang; Xu, Xin-Jian; Wang, Sheng-Jun; Wang, Ying-Hai

    2008-06-01

    We study projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random networks. We relax some limitations of previous work, where projective-anticipating and projective-lag synchronization could be achieved only between two coupled chaotic systems. In this paper, we realize projective-anticipating and projective-lag synchronization on complex dynamical networks composed of a large number of interconnected components. Moreover, although previous work studied projective synchronization on complex dynamical networks, the node dynamics there were coupled partially linear chaotic systems. In this paper, the node dynamics are time-delayed chaotic systems without the limitation of partial linearity. Based on Lyapunov stability theory, we suggest a generic method to achieve projective-anticipating, projective, and projective-lag synchronization of time-delayed chaotic systems on random dynamical networks, and we establish both existence and sufficient stability conditions. The validity of the proposed method is demonstrated and verified by examining specific examples using Ikeda and Mackey-Glass systems on Erdős–Rényi networks.

  8. The Chandra Source Catalog: X-ray Aperture Photometry

    NASA Astrophysics Data System (ADS)

    Kashyap, Vinay; Primini, F. A.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, I. N.; Evans, J. D.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    The Chandra Source Catalog (CSC) represents a reanalysis of the entire ACIS and HRC imaging observations over the 9-year Chandra mission. We describe here the method by which fluxes are measured for detected sources. Source detection is carried out on a uniform basis, using the CIAO tool wavdetect. Source fluxes are estimated post facto using a Bayesian method that accounts for background, spatial resolution effects, and contamination from nearby sources. We use gamma-function prior distributions, which can be either non-informative or, when previous observations of the same source exist, strongly informative. The current implementation is, however, limited to non-informative priors. The resulting posterior probability density functions allow us to report the flux and a robust credible range on it.
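
    The gamma-Poisson conjugacy at the core of this scheme can be sketched in a few lines. The version below assumes a known background rate and omits the PSF aperture corrections and nearby-source contamination terms that the catalog pipeline includes; all numbers are illustrative.

    ```python
    from scipy import stats

    def source_flux_interval(counts, bkg_rate, exposure,
                             alpha0=1.0, beta0=0.0, cred=0.68):
        """Bayesian aperture-photometry sketch with a gamma prior.

        counts: total counts in the source aperture; bkg_rate: known
        background count rate; exposure: exposure time.  With a
        Gamma(alpha0, beta0) prior on the total count rate, the Poisson
        likelihood yields a Gamma(alpha0 + counts, beta0 + exposure)
        posterior; the source rate is the total rate minus background,
        floored at zero.
        """
        post = stats.gamma(a=alpha0 + counts, scale=1.0 / (beta0 + exposure))
        lo, med, hi = post.ppf([(1 - cred) / 2, 0.5, (1 + cred) / 2])
        return tuple(max(x - bkg_rate, 0.0) for x in (lo, med, hi))

    # 42 counts in 10 ks with a 0.001 count/s background (made-up values)
    print(source_flux_interval(counts=42, bkg_rate=0.001, exposure=10_000.0))
    ```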

  9. Body-Earth Mover's Distance: A Matching-Based Approach for Sleep Posture Recognition.

    PubMed

    Xu, Xiaowei; Lin, Feng; Wang, Aosen; Hu, Yu; Huang, Ming-Chun; Xu, Wenyao

    2016-10-01

    Sleep posture is a key component in sleep quality assessment and pressure ulcer prevention. Currently, body pressure analysis is a popular method for sleep posture recognition. In this paper, a matching-based approach, Body-Earth Mover's Distance (BEMD), for sleep posture recognition is proposed. BEMD treats pressure images as weighted 2D shapes and combines EMD and Euclidean distance as a similarity measure. Compared with existing work, sleep posture recognition is achieved with posture similarity rather than multiple features for specific postures. A pilot study was performed with 14 persons and six different postures. The experimental results show that the proposed BEMD achieves 91.21% accuracy, outperforming the previous method by 8.01%.

  10. Studies of silicon pn junction solar cells

    NASA Technical Reports Server (NTRS)

    Lindholm, F. A.; Neugroschel, A.

    1977-01-01

    Modifications of the basic Shockley equations that result from the random and nonrandom spatial variations of the chemical composition of a semiconductor were developed. These modifications underlie the existence of the extensive emitter recombination current that limits the open-circuit voltage of solar cells. The measurement of parameters such as the series resistance and the base diffusion length is discussed. Two methods are presented for establishing the energy bandgap narrowing in the heavily doped emitter region. Corrections that can be important in the application of one of these methods to small test cells are examined. Oxide-charge-induced high-low-junction emitter (OCI-HLE) test cells, which exhibit considerably higher open-circuit voltage than was previously seen in n-on-p solar cells, are described.

  11. Single point estimation of phenytoin dosing: a reappraisal.

    PubMed

    Koup, J R; Gibaldi, M; Godolphin, W

    1981-11-01

    A previously proposed method for estimating the phenytoin dosing requirement from a single serum sample obtained 24 hours after an intravenous loading dose (18 mg/kg) has been re-evaluated. Using more realistic values for the volume of distribution of phenytoin (0.4 to 1.2 L/kg), simulations indicate that the proposed method will fail to consistently predict dosage requirements. Additional simulations indicate that two samples obtained during the 24-hour interval following the iv loading dose could be used to predict the phenytoin dose requirement more reliably. Because of the nonlinear relationship between the phenytoin dose administration rate (RO) and the mean steady-state serum concentration (CSS), small errors in the predicted required RO result in much larger errors in CSS.
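
    The nonlinearity in question is phenytoin's Michaelis-Menten elimination, RO = Vmax * Css / (Km + Css), which inverts to Css = Km * RO / (Vmax - RO). A short Python sketch of the resulting error amplification follows; the Vmax and Km values are typical adult magnitudes chosen for illustration, not the paper's fitted values.

    ```python
    def css_at_dose(ro, vmax=500.0, km=4.0):
        """Steady-state phenytoin concentration from Michaelis-Menten
        kinetics: Css = Km * RO / (Vmax - RO).

        ro and vmax in mg/day, km in mg/L (illustrative magnitudes)."""
        if ro >= vmax:
            raise ValueError("dose rate must be below Vmax")
        return km * ro / (vmax - ro)

    # A modest error in the estimated required RO produces a much larger
    # error in Css because of the nonlinearity:
    for ro in (350.0, 385.0):      # a +10% dosing error
        print(ro, round(css_at_dose(ro), 1), "mg/L")
    ```

    Here a 10% change in the dosing rate moves Css from about 9.3 to 13.4 mg/L, roughly a 44% change, which is the amplification the abstract warns about.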

  12. Built-up index methods and their applications for urban extraction from Sentinel 2A satellite data: discussion.

    PubMed

    Valdiviezo-N, Juan C; Téllez-Quiñones, Alejandro; Salazar-Garibay, Adan; López-Caloca, Alejandra A

    2018-01-01

    Several built-up indices have been proposed in the literature to extract urban sprawl from satellite data. Given their relative simplicity and easy implementation, such methods have been widely adopted for urban growth monitoring. Previous research has shown that built-up indices are sensitive to different factors related to image resolution, seasonality, and study area location. Also, most of them confuse urban surfaces with bare soil and barren land covers. By gathering the existing built-up indices, this paper discusses some of their advantages, difficulties, and limitations. To illustrate the study, application examples using Sentinel 2A data are provided.
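
    As one concrete example of the indices under discussion, the widely used Normalized Difference Built-up Index (NDBI) contrasts shortwave-infrared and near-infrared reflectance. The band pairing below (Sentinel-2 B11 and B8) is standard, though the reflectance values are synthetic.

    ```python
    import numpy as np

    def ndbi(swir, nir):
        """Normalized Difference Built-up Index:
        NDBI = (SWIR - NIR) / (SWIR + NIR).

        For Sentinel-2A, SWIR is band B11 (resampled to 10 m) and NIR is
        band B8.  Positive values tend to flag built-up surfaces, but, as
        discussed above, bare soil can respond similarly."""
        swir = swir.astype(float)
        nir = nir.astype(float)
        return (swir - nir) / np.maximum(swir + nir, 1e-9)

    b8 = np.array([[0.30, 0.25], [0.28, 0.26]])    # NIR reflectance
    b11 = np.array([[0.35, 0.20], [0.33, 0.22]])   # SWIR reflectance
    print(ndbi(b11, b8) > 0.0)                     # crude built-up mask
    ```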

  13. The correlation structure of several popular pseudorandom number generators

    NASA Technical Reports Server (NTRS)

    Neuman, F.; Merrick, R.; Martin, C. F.

    1973-01-01

    One of the desirable properties of a pseudorandom number generator is that the sequence of numbers it generates should have very low autocorrelation for all shifts except zero shift and multiples of its cycle length. Because of the simple methods used to construct random numbers, this ideal is often not quite fulfilled. A simple method of examining any random generator for previously unsuspected regularities is discussed. Once such regularities are discovered, it is often easy to derive the mathematical relationships that describe the regular behavior. As examples, it is shown that high correlation exists in mixed and multiplicative congruential random number generators and in prime-modulus Lehmer generators for shifts that are a fraction of their cycle lengths.
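
    The "simple method of examining any random generator" amounts to estimating autocorrelations at many shifts. A minimal sketch, with illustrative LCG parameters:

      # Autocorrelation check for a mixed congruential generator
      # (parameters are illustrative).
      import numpy as np

      m, a, c, n = 2**31, 1103515245, 12345, 2**16
      x = np.empty(n, dtype=np.int64)
      x[0] = 1
      for i in range(1, n):
          x[i] = (a * x[i - 1] + c) % m

      u = (x - x.mean()) / x.std()
      for shift in (1, 64, 4096, n // 4):
          r = np.mean(u[:-shift] * u[shift:])  # autocorrelation estimate
          print(shift, round(r, 4))
      # Unexpectedly large |r| at particular shifts is the kind of
      # "previously unsuspected regularity" the paper describes.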

  14. Prediction of spatially explicit rainfall intensity–duration thresholds for post-fire debris-flow generation in the western United States

    USGS Publications Warehouse

    Staley, Dennis M.; Negri, Jacquelyn; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-01-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity–duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity–duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity–duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity–duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity–duration thresholds do not currently exist.
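
    The published model couples rainfall with geospatial predictors through a statistical classifier. The sketch below is schematic only: it shows how a spatially explicit intensity threshold can be recovered from a fitted logistic model by solving for the rainfall intensity at a chosen debris-flow likelihood. The predictor names, toy data, and the 0.5 level are assumptions, not the authors' published coefficients.

      # Schematic sketch (assumptions, not the published model): fit a
      # logistic model of debris-flow occurrence on rainfall intensity and
      # geospatial predictors, then invert it for a rainfall threshold.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # X columns: [peak 15-min rainfall intensity, burn severity, soil index]
      X = np.array([[40., 0.8, 0.3], [10., 0.2, 0.1],
                    [55., 0.9, 0.5], [15., 0.5, 0.2]])
      y = np.array([1, 0, 1, 0])  # debris flow observed?

      model = LogisticRegression().fit(X, y)
      b0 = model.intercept_[0]
      b_i, b_sev, b_soil = model.coef_[0]

      def intensity_threshold(sev, soil, p=0.5):
          """Rainfall intensity at which the predicted likelihood equals p."""
          logit = np.log(p / (1 - p))
          return (logit - b0 - b_sev * sev - b_soil * soil) / b_i

      # print(intensity_threshold(sev=0.7, soil=0.25))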

  15. The Separation of Between-person and Within-person Components of Individual Change Over Time: A Latent Curve Model with Structured Residuals

    PubMed Central

    Curran, Patrick J.; Howard, Andrea L.; Bainter, Sierra; Lane, Stephanie T.; McGinley, James S.

    2014-01-01

    Objective Although recent statistical and computational developments allow for the empirical testing of psychological theories in ways not previously possible, one particularly vexing challenge remains: how to optimally model the prospective, reciprocal relations between two constructs as they developmentally unfold over time. Several analytic methods currently exist that attempt to model these types of relations, and each approach is successful to varying degrees. However, none provide the unambiguous separation of between-person and within-person components of stability and change over time, components that are often hypothesized to exist in the psychological sciences. The goal of our paper is to propose and demonstrate a novel extension of the multivariate latent curve model to allow for the disaggregation of these effects. Method We begin with a review of the standard latent curve models and describe how these primarily capture between-person differences in change. We then extend this model to allow for regression structures among the time-specific residuals to capture within-person differences in change. Results We demonstrate this model using an artificial data set generated to mimic the developmental relation between alcohol use and depressive symptomatology spanning five repeated measures. Conclusions We obtain a specificity of results from the proposed analytic strategy that are not available from other existing methodologies. We conclude with potential limitations of our approach and directions for future research. PMID:24364798

  16. Prediction of spatially explicit rainfall intensity-duration thresholds for post-fire debris-flow generation in the western United States

    NASA Astrophysics Data System (ADS)

    Staley, Dennis M.; Negri, Jacquelyn A.; Kean, Jason W.; Laber, Jayme L.; Tillery, Anne C.; Youberg, Ann M.

    2017-02-01

    Early warning of post-fire debris-flow occurrence during intense rainfall has traditionally relied upon a library of regionally specific empirical rainfall intensity-duration thresholds. Development of this library and the calculation of rainfall intensity-duration thresholds often require several years of monitoring local rainfall and hydrologic response to rainstorms, a time-consuming approach where results are often only applicable to the specific region where data were collected. Here, we present a new, fully predictive approach that utilizes rainfall, hydrologic response, and readily available geospatial data to predict rainfall intensity-duration thresholds for debris-flow generation in recently burned locations in the western United States. Unlike the traditional approach to defining regional thresholds from historical data, the proposed methodology permits the direct calculation of rainfall intensity-duration thresholds for areas where no such data exist. The thresholds calculated by this method are demonstrated to provide predictions that are of similar accuracy, and in some cases outperform, previously published regional intensity-duration thresholds. The method also provides improved predictions of debris-flow likelihood, which can be incorporated into existing approaches for post-fire debris-flow hazard assessment. Our results also provide guidance for the operational expansion of post-fire debris-flow early warning systems in areas where empirically defined regional rainfall intensity-duration thresholds do not currently exist.

  17. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms.

    PubMed

    Li, Le; Yip, Kevin Y

    2016-12-15

    Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency, and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as the training part to learn parameters for integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well supported by the literature. Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/.

  18. Identifying potential engaging leaders within medical education: The role of positive influence on peers.

    PubMed

    Michalec, Barret; Veloski, J Jon; Hojat, Mohammadreza; Tykocinski, Mark L

    2014-08-26

    Background: Previous research has paid little to no attention to exploring methods of identifying existing medical student leaders. Aim: Focusing on the role of influence and employing the tenets of the engaging leadership model, this study examines demographic and academic performance-related differences of positive influencers and whether students who have been peer-identified as positive influencers also demonstrate high levels of genuine concern for others. Methods: Three separate fourth-year classes were asked to designate classmates who had significant positive influences on their professional and personal development. The top 10% of students receiving positive influence nominations were compared with the other students on demographics, academic performance, and genuine concern for others. Results: Besides age, no demographic differences were found between positive influencers and other students. High positive influencers were not found to have higher standardized exam scores but did receive significantly higher clinical clerkship ratings. High positive influencers were found to possess a higher degree of genuine concern for others. Conclusion: The findings lend support to (a) utilizing the engaging model to explore leaders and leadership within medical education, (b) this particular method of identifying existing medical student leaders, and (c) returning the focus of leadership research to the power of influence.

  19. A Graph-Embedding Approach to Hierarchical Visual Word Mergence.

    PubMed

    Wang, Lei; Liu, Lingqiao; Zhou, Luping

    2017-02-01

    Appropriately merging visual words is an effective dimension reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and it can therefore effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first to address hierarchical visual word merging in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and it outperforms all existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.

  20. Increasing the utility of regional water table maps: a new method for estimating groundwater recharge

    NASA Astrophysics Data System (ADS)

    Gilmore, T. E.; Zlotnik, V. A.; Johnson, M.

    2017-12-01

    Groundwater table elevations are one of the most fundamental measurements used to characterize unconfined aquifers, groundwater flow patterns, and aquifer sustainability over time. In this study, we developed an analytical model that relies on analysis of groundwater elevation contour (equipotential) shape, aquifer transmissivity, and streambed gradient between two parallel, perennial streams. Using two existing regional water table maps, created at different times using different methods, our analysis of groundwater elevation contours, transmissivity, and streambed gradient produced groundwater recharge rates (42-218 mm yr⁻¹) that were consistent with previous independent recharge estimates from different methods. The three regions we investigated overlie the High Plains Aquifer in Nebraska and included some areas where groundwater is used for irrigation. The three regions ranged from 1,500 to 3,300 km², with either Sand Hills surficial geology or Sand Hills transitioning to loess. Based on our results, the approach may be used to increase the value of existing water table maps, and it may be useful as a diagnostic tool to evaluate the quality of groundwater table maps, identify areas in need of detailed aquifer characterization and expansion of groundwater monitoring networks, and/or serve as a first approximation before investing in more complex approaches to groundwater recharge estimation.

  1. Density matters: Review of approaches to setting organism-based ballast water discharge standards

    USGS Publications Warehouse

    Lee II; Frazier; Ruiz

    2010-01-01

    As part of their effort to develop national ballast water discharge standards under NPDES permitting, the Office of Water requested that WED scientists identify and review existing approaches to generating organism-based discharge standards for ballast water. Six potential approaches were identified, and the utility and uncertainties of each approach were evaluated. During the process of reviewing the existing approaches, the WED scientists, in conjunction with scientists at the USGS and the Smithsonian Institution, developed a new approach (per capita invasion probability, or "PCIP") that addresses many of the limitations of the previous methodologies. The PCIP approach allows risk managers to generate quantitative discharge standards using historical invasion rates, ballast water discharge volumes, and ballast water organism concentrations. The statistical power of sampling ballast water, both for the validation of ballast water treatment systems and for ship-board compliance monitoring, is limited with the existing methods, though it should be possible to obtain sufficient samples during treatment validation. The report will go to a National Academy of Sciences expert panel that will use it in their evaluation of approaches to developing ballast water discharge standards for the Office of Water.
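
    In schematic form, the per capita invasion probability ties the quantities named above together with simple arithmetic; every number below is invented for illustration.

      # Back-of-envelope sketch of a PCIP-style calculation as described
      # in the abstract; all numbers are invented for illustration.
      invasions_per_year = 0.5        # historical invasion rate (assumed)
      discharge_m3_per_year = 1.0e7   # ballast water discharged (assumed)
      organisms_per_m3 = 1.0e4        # organism concentration (assumed)

      organisms_per_year = discharge_m3_per_year * organisms_per_m3
      pcip = invasions_per_year / organisms_per_year  # invasions per organism

      # A discharge standard can then cap concentration so the expected
      # invasion rate stays below a chosen risk target:
      target_invasions_per_year = 0.01
      max_concentration = target_invasions_per_year / (pcip * discharge_m3_per_year)
      print(pcip, max_concentration)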

  2. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy

    PubMed Central

    Zhang, Lina; Zhang, Chengjin; Gao, Rui; Yang, Runtao; Song, Qing

    2016-01-01

    Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48 with an accuracy of 0.925. A Relief combined with IFS (Incremental Feature Selection) method is adopted to obtain optimal features from hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthew’s Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc. PMID:27662651
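
    The ensemble described averages the outputs of four base learners. Below is a minimal sketch with scikit-learn stand-ins for the named Weka classifiers (SMO approximated by a linear SVM, NNA by a 1-nearest-neighbour classifier, J48 by a decision tree); the feature matrix would hold the hybrid SSI/PSSM/RSA/CTD vectors.

      # Sketch of the averaged ensemble with scikit-learn stand-ins for
      # the Weka learners named in the abstract.
      from sklearn.ensemble import RandomForestClassifier, VotingClassifier
      from sklearn.svm import SVC
      from sklearn.neighbors import KNeighborsClassifier
      from sklearn.tree import DecisionTreeClassifier

      ensemble = VotingClassifier(
          estimators=[
              ("rf", RandomForestClassifier(n_estimators=200)),
              ("smo", SVC(kernel="linear", probability=True)),
              ("nna", KNeighborsClassifier(n_neighbors=1)),
              ("j48", DecisionTreeClassifier()),
          ],
          voting="soft",  # average predicted probabilities across learners
      )
      # ensemble.fit(X_train, y_train); ensemble.predict(X_test)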

  3. Existence of the Stark-Wannier quantum resonances

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sacchetti, Andrea, E-mail: andrea.sacchetti@unimore.it

    2014-12-15

    In this paper, we prove the existence of the Stark-Wannier quantum resonances for one-dimensional Schrödinger operators with smooth periodic potential and small external homogeneous electric field. Such a result extends the existence result previously obtained in the case of periodic potentials with a finite number of open gaps.

  4. A novel alignment-free method to classify protein folding types by combining spectral graph clustering with Chou's pseudo amino acid composition.

    PubMed

    Tripathi, Pooja; Pandey, Paras N

    2017-07-07

    The present work employs pseudo amino acid composition (PseAAC) for encoding protein sequences in numeric form. These encodings are arranged in a similarity matrix, which serves as input for a spectral graph clustering method. Spectral methods have been used previously for clustering protein sequences, but they use pairwise alignment scores in the similarity matrix. The alignment score depends on the length of the sequences, so clustering short and long sequences together may not be a good idea. This motivates combining PseAAC with the spectral clustering algorithm. We extensively tested our method and compared its performance with other existing machine learning methods. It is consistently observed that the number of clusters we obtain for a given set of proteins is close to the number of superfamilies in that set, and PseAAC combined with spectral graph clustering shows the best classification results.
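
    The pipeline, in outline: PseAAC feature vectors, a similarity matrix, then spectral clustering on that matrix. A minimal sketch under stated assumptions (a Gaussian similarity and placeholder features; the real PseAAC encoding is computed elsewhere):

      # Sketch: PseAAC vectors -> similarity matrix -> spectral clustering.
      # The Gaussian similarity is an assumed choice.
      import numpy as np
      from sklearn.cluster import SpectralClustering
      from sklearn.metrics.pairwise import rbf_kernel

      feats = np.random.rand(50, 20)   # placeholder for real PseAAC encodings
      S = rbf_kernel(feats, gamma=1.0) # alignment-free similarity matrix

      labels = SpectralClustering(
          n_clusters=4, affinity="precomputed"  # use S directly, no alignment
      ).fit_predict(S)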

  5. GOTHiC, a probabilistic model to resolve complex biases and to identify real interactions in Hi-C data.

    PubMed

    Mifsud, Borbala; Martincorena, Inigo; Darbo, Elodie; Sugar, Robert; Schoenfelder, Stefan; Fraser, Peter; Luscombe, Nicholas M

    2017-01-01

    Hi-C is one of the main methods for investigating spatial co-localisation of DNA in the nucleus. However, the raw sequencing data obtained from Hi-C experiments suffer from large biases and spurious contacts, making it difficult to identify true interactions. Existing methods use complex models to account for biases and do not provide a significance threshold for detecting interactions. Here we introduce a simple binomial probabilistic model that resolves complex biases and distinguishes between true and false interactions. The model corrects biases of known and unknown origin and yields a p-value for each interaction, providing a reliable threshold based on significance. We demonstrate this experimentally by testing the method against a random ligation dataset. Our method outperforms previous methods and provides a statistical framework for further data analysis, such as comparisons of Hi-C interactions between different conditions. GOTHiC is available as a BioConductor package (http://www.bioconductor.org/packages/release/bioc/html/GOTHiC.html).
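
    The binomial idea can be stated in a few lines. A minimal sketch, not the GOTHiC implementation: given an expected contact probability for a fragment pair under the bias-corrected noise model and the total number of read pairs, test whether the observed count exceeds expectation; the numbers are invented.

      # Binomial significance sketch for one fragment pair (illustrative).
      from scipy.stats import binomtest

      N = 1_000_000   # total interaction read pairs (assumed)
      p_ij = 2.0e-6   # expected probability for this pair under noise (assumed)
      k = 12          # observed read pairs for this pair (assumed)

      res = binomtest(k, n=N, p=p_ij, alternative="greater")
      print(res.pvalue)  # small p-value -> candidate true interaction
      # p-values across all pairs would then be multiple-testing corrected.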

  6. Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy

    PubMed Central

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, a need that cannot always be met. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734

  7. Fault diagnosis for micro-gas turbine engine sensors via wavelet entropy.

    PubMed

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, the existing methods require too many resources, a need that cannot always be met. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is utilized to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Then, experiments on this method are carried out on a real micro gas turbine engine. In the experiment, four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient.
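
    A minimal sketch of a wavelet energy entropy in the spirit of the IWEE defined above, using PyWavelets; the exact windowing and entropy definitions in the paper may differ.

      # Wavelet energy entropy sketch (PyWavelets); an approximation of
      # the IWEE idea, not the paper's exact definition.
      import numpy as np
      import pywt

      def wavelet_energy_entropy(signal, wavelet="db4", level=5):
          coeffs = pywt.wavedec(signal, wavelet, level=level)
          energies = np.array([np.sum(c**2) for c in coeffs])
          p = energies / energies.sum()          # energy distribution by scale
          return -np.sum(p * np.log(p + 1e-12))  # Shannon entropy over scales

      # A sliding window over the sensor reading yields an entropy time
      # series; abrupt changes flag candidate sensor faults.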

  8. A study of methods to estimate debris flow velocity

    USGS Publications Warehouse

    Prochaska, A.B.; Santi, P.M.; Higgins, J.D.; Cannon, S.H.

    2008-01-01

    Debris flow velocities are commonly back-calculated from superelevation events which require subjective estimates of radii of curvature of bends in the debris flow channel or predicted using flow equations that require the selection of appropriate rheological models and material property inputs. This research investigated difficulties associated with the use of these conventional velocity estimation methods. Radii of curvature estimates were found to vary with the extent of the channel investigated and with the scale of the media used, and back-calculated velocities varied among different investigated locations along a channel. Distinct populations of Bingham properties were found to exist between those measured by laboratory tests and those back-calculated from field data; thus, laboratory-obtained values would not be representative of field-scale debris flow behavior. To avoid these difficulties with conventional methods, a new preliminary velocity estimation method is presented that statistically relates flow velocity to the channel slope and the flow depth. This method presents ranges of reasonable velocity predictions based on 30 previously measured velocities.
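
    For reference, the superelevation back-calculation alluded to is commonly the forced-vortex equation, stated here as background (the paper's point being that the radius of curvature is the subjective input):

      % Forced-vortex superelevation equation (background assumption):
      %   v        mean flow velocity
      %   g        gravitational acceleration
      %   R_c      radius of curvature of the bend (subjective, as noted)
      %   \Delta h superelevation of the flow surface across the bend
      %   b        flow width
      \[
        v = \sqrt{\frac{g \, R_c \, \Delta h}{b}}
      \]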

  9. Reliability apportionment approach for spacecraft solar array using fuzzy reasoning Petri net and fuzzy comprehensive evaluation

    NASA Astrophysics Data System (ADS)

    Wu, Jianing; Yan, Shaoze; Xie, Liyang; Gao, Peng

    2012-07-01

    The reliability apportionment of a spacecraft solar array is of significant importance for spacecraft designers in the early stage of design. However, it is difficult to use the existing methods to resolve the reliability apportionment problem because of data insufficiency and the uncertainty of the relations among the components in the mechanical system. This paper proposes a new method which combines fuzzy comprehensive evaluation with a fuzzy reasoning Petri net (FRPN) to accomplish the reliability apportionment of the solar array. The proposed method extends previous fuzzy methods and focuses on the characteristics of the subsystems and the intrinsic associations among the components. The analysis results show that the synchronization mechanism may be apportioned the highest reliability value, while the solar panels and hinges may receive the lowest, before design and manufacturing. Our developed method is of practical significance for the reliability apportionment of solar arrays where the design information has not been clearly identified, particularly in the early stage of design.

  10. Surface-from-gradients without discrete integrability enforcement: A Gaussian kernel approach.

    PubMed

    Ng, Heung-Sun; Wu, Tai-Pang; Tang, Chi-Keung

    2010-11-01

    Representative surface reconstruction algorithms taking a gradient field as input enforce the integrability constraint in a discrete manner. While enforcing integrability allows the subsequent integration to produce surface heights, existing algorithms have one or more of the following disadvantages: they can only handle dense per-pixel gradient fields, smooth out sharp features in a partially integrable field, or produce severe surface distortion in the results. In this paper, we present a method which does not enforce discrete integrability and reconstructs a 3D continuous surface from a gradient or a height field, or a combination of both, which can be dense or sparse. The key to our approach is the use of kernel basis functions, which transfer the continuous surface reconstruction problem into a high-dimensional space where a closed-form solution exists. By using the Gaussian kernel, we can derive a straightforward implementation which produces better results than traditional techniques. In general, an important advantage of our kernel-based method is that it does not suffer from discretization and finite approximation, both of which lead to surface distortion and are typical of the Fourier or wavelet bases widely adopted by previous representative approaches. We perform comparisons with classical and recent methods on benchmark as well as challenging data sets to demonstrate that our method produces accurate surface reconstruction that preserves salient and sharp features. The source code and executable of the system are available for downloading.

  11. The doctrine of the two depressions in historical perspective

    PubMed Central

    Shorter, E.

    2013-01-01

    Objective To determine if the concept of two separate depressions – melancholia and non-melancholia – has existed in the writings of the main previous thinkers about mood disorders. Method Representative contributions to writing on mood disorders over the past hundred years have been systematically evaluated. Results The concept of two separate depressions does indeed emerge in the psychiatric literature from the very beginning of modern writing about the concept of ‘melancholia’. For the principal nosologists of psychiatry, melancholic depression has always meant something quite different from non-melancholic depression. Exceptions to this include Aubrey Lewis and Karl Leonhard. Yet the balance of opinion among the chief theorists overwhelmingly favors the existence of two quite different illnesses. Conclusion The concept of ‘major depression’ popularized in DSM-III in 1980 is a historical anomaly. It mixes together psychopathologic entities that previous generations of experienced clinicians and thoughtful nosologists had been at pains to keep separate. Recently, there has been a tendency to return to the concept of two depressions: melancholic and non-melancholic illness. ‘Major depression’ is coming into increasing disfavor. In the next edition of DSM (DSM-V), major depression should be abolished; melancholic mood disorder (MMD) and non-melancholic mood disorder (NMMD) should become two of the principal entities in the mood disorder section. PMID:17280565

  12. Multi-tasking computer control of video related equipment

    NASA Technical Reports Server (NTRS)

    Molina, Rod; Gilbert, Bob

    1989-01-01

    The flexibility, cost-effectiveness, and widespread availability of personal computers now make it possible to completely integrate the previously separate elements of video post-production into a single device. Specifically, a personal computer such as the Commodore-Amiga can perform multiple and simultaneous tasks from an individual unit. Relatively low cost, minimal space requirements, and user-friendliness provide the most favorable environment for the many phases of video post-production. Computers are well known for their basic abilities to process numbers, text, and graphics and to reliably perform repetitive and tedious functions efficiently. These capabilities can now apply as either additions or alternatives to existing video post-production methods. A present example of computer-based video post-production technology is the RGB CVC (Computer and Video Creations) WorkSystem. A wide variety of integrated functions are made possible with an Amiga computer at the heart of the system.

  13. Symbolically Modeling Concurrent MCAPI Executions

    NASA Technical Reports Server (NTRS)

    Fischer, Topher; Mercer, Eric; Rungta, Neha

    2011-01-01

    Improper use of Inter-Process Communication (IPC) within concurrent systems often creates data races which can lead to bugs that are challenging to discover. Techniques that use Satisfiability Modulo Theories (SMT) problems to symbolically model possible executions of concurrent software have recently been proposed for use in the formal verification of software. In this work we describe a new technique for modeling executions of concurrent software that use a message passing API called MCAPI. Our technique uses an execution trace to create an SMT problem that symbolically models all possible concurrent executions and follows the same sequence of conditional branch outcomes as the provided execution trace. We check if there exists a satisfying assignment to the SMT problem with respect to specific safety properties. If such an assignment exists, it provides the conditions that lead to the violation of the property. We show how our method models behaviors of MCAPI applications that are ignored in previously published techniques.
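
    To make the encoding concrete, here is a toy sketch in Z3's Python API, not the paper's MCAPI encoding: integer variables order trace events, and the solver searches for an ordering that violates the intended message pairing.

      # Toy SMT sketch with Z3: integer "clock" variables order events
      # from a trace; the solver looks for a schedule reaching a bad state.
      from z3 import Ints, Solver, Or, sat

      s1, s2, r = Ints("s1 s2 r")       # two sends and one receive event
      solver = Solver()
      solver.add(s1 >= 0, s2 >= 0, r >= 0)
      solver.add(Or(s1 < s2, s2 < s1))  # sends are totally ordered
      solver.add(r > s1, r > s2)        # receive happens after both sends
      # Property violation to search for: a schedule where the send order
      # flips relative to the observed trace (a message race).
      solver.add(s2 < s1)

      if solver.check() == sat:
          print("race witness:", solver.model())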

  14. A Pilot Study Investigating the Effects of Advanced Nuclear Power Plant Control Room Technologies: Methods and Qualitative Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le Blanc, Katya; Powers, David; Joe, Jeffrey

    2015-08-01

    Control room modernization is an important part of life extension for the existing light water reactor fleet. None of the 99 currently operating commercial nuclear power plants in the U.S. has completed a full-scale control room modernization to date. Nuclear power plant main control rooms for the existing commercial reactor fleet remain significantly analog, with only limited digital modernizations. Upgrades in the U.S. do not achieve the full potential of newer technologies that might otherwise enhance plant and operator performance. The goal of the control room upgrade benefits research is to identify previously overlooked benefits of modernization, identify candidate technologies that may facilitate such benefits, and demonstrate these technologies through human factors research. This report describes a pilot study to test upgrades to the Human Systems Simulation Laboratory at INL.

  15. Existence and global exponential stability of periodic solution of memristor-based BAM neural networks with time-varying delays.

    PubMed

    Li, Hongfei; Jiang, Haijun; Hu, Cheng

    2016-03-01

    In this paper, we investigate a class of memristor-based BAM neural networks with time-varying delays. Under the framework of Filippov solutions, boundedness and ultimate boundedness of solutions of memristor-based BAM neural networks are guaranteed by the chain rule and inequality techniques. Moreover, a new method involving a Yoshizawa-like theorem is employed to establish the existence of a periodic solution. By applying the theory of set-valued maps and functional differential inclusions, an available Lyapunov functional and some new testable algebraic criteria are derived for ensuring the uniqueness and global exponential stability of the periodic solution of memristor-based BAM neural networks. The obtained results expand and complement some previous work on memristor-based BAM neural networks. Finally, a numerical example is provided to show the applicability and effectiveness of our theoretical results.

  16. Plate on plate osteosynthesis for the treatment of nonhealed periplate fractures.

    PubMed

    Arealis, Georgios; Nikolaou, Vassilios S; Lacon, Andrew; Ashwood, Neil; Hamlet, Mark

    2014-01-01

    Purpose. The purpose of this paper is to present our technique for the treatment of periplate fractures. Methods. From 2009 to 2012 we treated three patients. In all cases the existing plate was left in place and the new one placed over it. Locking screws were placed through both plates. The other screws in the new plate were used as best suited the fracture. Results. In all cases less than 6 months had passed between fractures. None of the original fractures had healed. Mean follow-up was 2 years. All fractures proceeded to union within 7 months. No complications were recorded. All the patients returned to their normal activities and were satisfied with the results of their treatment. Conclusion. Our plate-on-plate technique is effective for the treatment of periplate fractures. A solid fusion can be achieved at the new fracture site without disturbing the previous fixation.

  17. Creatinine elevation associated with nitromethane exposure: a marker of potential methanol toxicity.

    PubMed

    Cook, Matthew D; Clark, Richard F

    2007-10-01

    Nitromethane, methanol, and oil are the common components of radio-controlled (R/C) vehicle fuels. Nitromethane can cause a false elevation of serum creatinine concentration as measured by the widely used Jaffe colorimetric method. We gathered data from our poison control system and from previously published case reports to see if a correlation exists between serum methanol concentrations and spuriously elevated serum creatinine concentrations after human exposures to R/C fuel. The California Poison Control System (CPCS) computerized database was queried for all cases of human exposure to R/C vehicle fuel reported between December 1, 2002 and December 1, 2004. Serum creatinine and methanol concentrations were recorded when available, as was the method used to determine serum creatinine. A MEDLINE search was used to obtain previously published cases of human nitromethane exposure associated with falsely elevated creatinine concentrations. During the 2-year period, serum creatinine concentrations were recorded in 7 of 26 R/C fuel exposures (all ingestions), and 6 of these were abnormal (range of 1.9-11.5 mg/dL). In this series, the higher the serum creatinine concentration measured by Jaffe method, the higher the serum methanol concentration. The MEDLINE search yielded data from six previously published case reports on this topic. The data from these case reports seem to follow the trend seen in our case series. These data suggest that a spuriously elevated serum creatinine (by Jaffe method) may have value as an early surrogate marker of methanol poisoning in those who ingest R/C fuel. Also, the degree to which the serum creatinine is elevated may indicate the severity of methanol poisoning.

  18. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation

    PubMed Central

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B.; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package “DensParcorr” can be downloaded from CRAN for implementing the proposed statistical methods. PMID:27242395

  19. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    PubMed

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promises in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via Constrained L1-minimization Approach (CLIME), which is a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant direct connections are between homologous brain locations in the left and right hemisphere. When comparing partial correlation derived under different sparse tuning parameters, an important finding is that the sparse regularization has more shrinkage effects on negative functional connections than on positive connections, which supports previous findings that many of the negative brain connections are due to non-neurophysiological effects. An R package "DensParcorr" can be downloaded from CRAN for implementing the proposed statistical methods.
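
    The core derivation is short: partial correlations follow directly from the estimated precision matrix. A minimal sketch in which scikit-learn's GraphicalLassoCV stands in for the CLIME estimator (an assumed substitution) and random data stand in for node time series:

      # Partial correlation from a sparse precision matrix estimate.
      # GraphicalLassoCV is an assumed stand-in for CLIME.
      import numpy as np
      from sklearn.covariance import GraphicalLassoCV

      ts = np.random.randn(200, 10)     # placeholder (timepoints, nodes) data
      theta = GraphicalLassoCV().fit(ts).precision_

      d = np.sqrt(np.diag(theta))
      parcorr = -theta / np.outer(d, d) # p_ij = -theta_ij / sqrt(theta_ii * theta_jj)
      np.fill_diagonal(parcorr, 1.0)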

  20. Unimolecular Reaction Pathways of a γ-Ketohydroperoxide from Combined Application of Automated Reaction Discovery Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grambow, Colin A.; Jamal, Adeel; Li, Yi-Pei

    Ketohydroperoxides are important in liquid-phase autoxidation and in gas-phase partial oxidation and pre-ignition chemistry, but because of their low concentration, instability, and various analytical chemistry limitations, it has been challenging to experimentally determine their reactivity, and only a few pathways are known. In the present work, 75 elementary-step unimolecular reactions of the simplest γ-ketohydroperoxide, 3-hydroperoxypropanal, were discovered by a combination of density functional theory with several automated transition-state search algorithms: the Berny algorithm coupled with the freezing string method, single- and double-ended growing string methods, the heuristic KinBot algorithm, and the single-component artificial force induced reaction method (SC-AFIR). The present joint approach significantly outperforms previous manual and automated transition-state searches – 68 of the reactions of γ-ketohydroperoxide discovered here were previously unknown and completely unexpected. All of the methods found the lowest-energy transition state, which corresponds to the first step of the Korcek mechanism, but each algorithm except for SC-AFIR detected several reactions not found by any of the other methods. We show that the low-barrier chemical reactions involve promising new chemistry that may be relevant in atmospheric and combustion systems. Our study highlights the complexity of chemical space exploration and the advantage of combined application of several approaches. Altogether, the present work demonstrates both the power and the weaknesses of existing fully automated approaches for reaction discovery, which suggests possible directions for further method development and assessment in order to enable reliable discovery of all important reactions of any specified reactant(s).

  1. Unimolecular Reaction Pathways of a γ-Ketohydroperoxide from Combined Application of Automated Reaction Discovery Methods

    DOE PAGES

    Grambow, Colin A.; Jamal, Adeel; Li, Yi-Pei; ...

    2017-12-22

    Ketohydroperoxides are important in liquid-phase autoxidation and in gas-phase partial oxidation and pre-ignition chemistry, but because of their low concentration, instability, and various analytical chemistry limitations, it has been challenging to experimentally determine their reactivity, and only a few pathways are known. In the present work, 75 elementary-step unimolecular reactions of the simplest γ-ketohydroperoxide, 3-hydroperoxypropanal, were discovered by a combination of density functional theory with several automated transition-state search algorithms: the Berny algorithm coupled with the freezing string method, single- and double-ended growing string methods, the heuristic KinBot algorithm, and the single-component artificial force induced reaction method (SC-AFIR). The present joint approach significantly outperforms previous manual and automated transition-state searches – 68 of the reactions of γ-ketohydroperoxide discovered here were previously unknown and completely unexpected. All of the methods found the lowest-energy transition state, which corresponds to the first step of the Korcek mechanism, but each algorithm except for SC-AFIR detected several reactions not found by any of the other methods. We show that the low-barrier chemical reactions involve promising new chemistry that may be relevant in atmospheric and combustion systems. Our study highlights the complexity of chemical space exploration and the advantage of combined application of several approaches. Altogether, the present work demonstrates both the power and the weaknesses of existing fully automated approaches for reaction discovery, which suggests possible directions for further method development and assessment in order to enable reliable discovery of all important reactions of any specified reactant(s).

  2. Exploring student learning profiles in algebra-based studio physics: A person-centered approach

    NASA Astrophysics Data System (ADS)

    Pond, Jarrad W. T.; Chini, Jacquelyn J.

    2017-06-01

    In this study, we explore the strategic self-regulatory and motivational characteristics of students in studio-mode physics courses at three universities with varying student populations and varying levels of success in their studio-mode courses. We survey students using questions compiled from several existing questionnaires designed to measure students' study strategies, attitudes toward and motivations for learning physics, organization of scientific knowledge, experiences outside the classroom, and demographics. Using a person-centered approach, we utilize cluster analysis methods to group students into learning profiles based on their individual responses to better understand the strategies and motives of algebra-based studio physics students. Previous studies have identified five distinct learning profiles across several student populations using similar methods. We present results from first-semester and second-semester studio-mode introductory physics courses across three universities. We identify these five distinct learning profiles found in previous studies to be present within our population of introductory physics students. In addition, we investigate interactions between these learning profiles and student demographics. We find significant interactions between a student's learning profile and their experience with high school physics, major, gender, grade expectation, and institution. Ultimately, we aim to use this method of analysis to take the characteristics of students into account in the investigation of successful strategies for using studio methods of physics instruction within and across institutions.
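
    A minimal sketch of the person-centered clustering step under stated assumptions (k-means with k = 5, matching the five reported profiles; the published studies may use other cluster algorithms):

      # Person-centered clustering sketch: standardize each student's
      # strategy/motivation scale scores, then cluster students (not items).
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans

      scores = np.random.rand(300, 8)   # placeholder (students, scales) data
      z = StandardScaler().fit_transform(scores)
      profiles = KMeans(n_clusters=5, n_init=10).fit_predict(z)
      # Cross-tabulating `profiles` against demographics tests the
      # interactions described in the abstract.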

  3. Detection of ischemic dyssynchrony in patients with normal QRS duration at rest and during exercise echocardiography (Dyssynchrony in coronary artery disease patients during exercise).

    PubMed

    Zagatina, A; Guseva, O; Bartosh-Zelenaya, S Y; Zhuravskaya, N

    2014-04-01

    Ischemic segments cannot develop sufficient strength during systole, so theoretically they begin to contract later than non-ischemic zones. There is a lack of information about methods that can detect dyssynchrony during exercise in patients with a QRS no longer than 100 ms. The aim of the study was to compare different methods for detecting left ventricular dyssynchrony in patients with significant coronary artery stenosis: pulsed-wave tissue Doppler imaging (PW-TDI), strain (S), and strain rate (SR). The study included 133 subjects: 106 consecutive patients who were scheduled for coronary angiography with previous stress echocardiography, and 27 healthy persons. All the patients underwent a supine bicycle exercise test. Seventy-six patients had stenoses and 30 subjects had no significant lesions by coronary angiography. There was a detectable difference in the time parameters of left ventricular contraction between the two groups and controls, both before and during exercise, using all Doppler methods. Subgroups of patients without previous myocardial infarction and without left ventricular hypertrophy showed the same results. The maximal difference was observed using the strain method. There was a moderate correlation between time parameters and the existence of significant lesions of the coronary arteries. Patients without QRS prolongation who have significant lesions of the coronary arteries show detectable left ventricular dyssynchrony before and during exercise.

  4. A novel method for identifying disease associated protein complexes based on functional similarity protein complex networks.

    PubMed

    Le, Duc-Hau

    2015-01-01

    Protein complexes formed by non-covalent interactions among proteins play important roles in cellular functions. Computational and purification methods have been used to identify many protein complexes and their cellular functions. However, their roles in causing disease have not yet been well characterized. Only a few studies exist for the identification of disease-associated protein complexes, and they mostly utilize complicated heterogeneous networks constructed from an out-of-date database of phenotype similarity collected from the literature. In addition, they apply only to diseases for which tissue-specific data exist. In this study, we propose a method to identify novel disease-protein complex associations. First, we introduce a framework to construct functional similarity protein complex networks, in which two protein complexes are functionally connected by shared protein elements, by shared annotating GO terms, or based on protein interactions between elements in each complex. Second, we propose a simple but effective neighborhood-based algorithm, which yields a local similarity measure, to rank disease candidate protein complexes. Comparing the predictive performance of our proposed algorithm with that of two state-of-the-art network propagation algorithms, including one we used in our previous study, we found that it performed statistically significantly better for all the constructed functional similarity protein complex networks. In addition, it ran about 32 times faster than these two algorithms. Moreover, our method always achieved high performance in terms of AUC values irrespective of the way the functional similarity protein complex networks were constructed and the algorithms used. The performance of our method was also higher than that reported for some existing methods based on complicated heterogeneous networks. Finally, we tested our method on prostate cancer and selected the top 100 highly ranked candidate protein complexes. Interestingly, 69 of them were supported by evidence, since at least one of their protein elements is known to be associated with prostate cancer. Our proposed method, including the framework to construct functional similarity protein complex networks and the neighborhood-based algorithm on these networks, can be used for the identification of novel disease-protein complex associations.

  5. Brake System Design Optimization : Volume 2. Supplemental Data.

    DOT National Transportation Integrated Search

    1981-04-01

    Existing freight car braking systems, components, and subsystems are characterized both physically and functionally, and life-cycle costs are examined. Potential improvements to existing systems previously proposed or available are identified and des...

  6. Brake System Design Optimization. Volume II : Supplemental Data.

    DOT National Transportation Integrated Search

    1981-06-01

    Existing freight car braking systems, components, and subsystems are characterized both physically and functionally, and life-cycle costs are examined. Potential improvements to existing systems previously proposed or available are identified and des...

  7. Inferring Gene Regulatory Networks by Singular Value Decomposition and Gravitation Field Algorithm

    PubMed Central

    Zheng, Ming; Wu, Jia-nan; Huang, Yan-xin; Liu, Gui-xia; Zhou, You; Zhou, Chun-guang

    2012-01-01

    Reconstruction of gene regulatory networks (GRNs) is of utmost interest and has become a challenging computational problem in systems biology. However, every existing inference algorithm based on gene expression profiles has its own advantages and disadvantages; in particular, the effectiveness and efficiency of previous algorithms remain limited. In this work, we proposed a novel inference algorithm for gene expression data based on a differential equation model. This algorithm includes two methods for inferring GRNs. Before reconstructing GRNs, the singular value decomposition method is used to decompose the gene expression data, determine the algorithm solution space, and obtain all candidate solutions of GRNs. Within this generated family of candidate solutions, the gravitation field algorithm is modified to infer GRNs, optimizing the criteria of the differential equation model and searching for the best network structure. The proposed algorithm is validated on both a simulated scale-free network and a real benchmark gene regulatory network from a networks database. Both the Bayesian method and the traditional differential equation model were also used to infer GRNs, and the results were compared with those of the proposed algorithm. Genetic algorithms and simulated annealing were also used to evaluate the gravitation field algorithm. The cross-validation results confirmed the effectiveness of our algorithm, which significantly outperforms previous algorithms. PMID:23226565
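
    A schematic sketch of the SVD step for a linear model X' = AX: the data constrain A only up to the null space of the expression matrix, and the SVD exposes that family of candidate solutions over which the (modified) gravitation field algorithm then searches. Shapes and the rank tolerance are assumptions.

      # SVD sketch of the candidate-solution family for X' = A X
      # (schematic shapes; not the paper's exact procedure).
      import numpy as np

      X = np.random.rand(10, 8)            # (genes, timepoints) expression
      Xdot = np.gradient(X, axis=1)        # finite-difference slopes

      U, s, Vt = np.linalg.svd(X.T, full_matrices=True)
      A0 = Xdot @ np.linalg.pinv(X)        # particular (minimum-norm) solution
      null_basis = Vt[np.sum(s > 1e-10):]  # directions unconstrained by data
      # Every A = A0 + C @ null_basis (arbitrary C) fits the data equally
      # well; an optimizer then searches this family for the best network.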

  8. Matched-filtering line search methods applied to Suzaku data

    NASA Astrophysics Data System (ADS)

    Miyazaki, Naoto; Yamada, Shin'ya; Enoto, Teruaki; Axelsson, Magnus; Ohashi, Takaya

    2016-12-01

    A detailed search for emission and absorption lines and an assessment of their upper limits are performed for Suzaku data. The method utilizes a matched-filtering approach to maximize the signal-to-noise ratio for a given energy resolution, and it could be applicable to many types of line search. We first applied it to well-known active galactic nuclei spectra that have been reported to have ultra-fast outflows, and we find that our results are consistent with previous findings at the ~3σ level. We proceeded to search for emission and absorption features in two bright magnetars, 4U 0142+61 and 1RXS J1708-4009, applying the filtering method to Suzaku data. We found that neither source showed any significant indication of line features, even using long-term Suzaku observations or dividing their spectra into spin phases. The upper limits on the equivalent width of emission/absorption lines are constrained to be a few eV at ~1 keV and a few hundred eV at ~10 keV. This strengthens previous reports that persistently bright magnetars do not show proton cyclotron absorption features in soft X-rays and that, even if such features exist, they would be broadened or weaker than the detection limit of X-ray CCDs.
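
    A minimal sketch of the matched-filtering scan, schematic rather than the authors' pipeline: correlate continuum-divided residuals with a Gaussian line template whose width tracks the energy resolution, using inverse-variance weighting so the statistic is the template's signal-to-noise ratio.

      # Matched-filter line scan sketch (schematic assumptions).
      import numpy as np

      def matched_filter_scan(energies, residuals, errors, sigma_of_e):
          """Return S/N of a Gaussian line template centred at each energy."""
          sn = np.empty_like(energies)
          for i, e0 in enumerate(energies):
              t = np.exp(-0.5 * ((energies - e0) / sigma_of_e(e0)) ** 2)
              w = t / errors**2                 # inverse-variance weighting
              norm = np.sqrt(np.sum(t * w))     # template norm in noise units
              sn[i] = np.sum(residuals * w) / norm if norm > 0 else 0.0
          return sn

      # Peaks (dips) in the returned S/N above ~3 sigma flag candidate
      # emission (absorption) features; upper limits follow from the
      # non-detections.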

  9. The effects of time on the capacity of pipe piles in dense marine sand

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chow, F.C.; Jardine, R.J.; Brucy, F.

    Investigations into pile behavior in dense marine sand have been performed by IFP and IC at Dunkirk, North France. In the most recent series of tests, strain-gauged, open-ended pipe piles, driven and statically load tested in 1989, were retested in 1994. Surprisingly large increases in shaft capacity were measured. The possible causes are evaluated in relation to previous case histories, laboratory soil tests, pile corrosion and new effective stress analyses developed using smaller, more intensively instrumented piles. The shaft capacities predicted by existing design methods are also assessed. 51 refs., 12 figs., 4 tabs.

  10. Optical spectroscopy of laser-produced plasmas for standoff isotopic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harilal, Sivanandan S.; Brumfield, Brian E.; LaHaye, Nicole L.

    2018-04-20

    This review article covers the present status of isotope detection through emission, absorption, and fluorescence spectroscopy of atoms and molecules in a laser-produced plasma formed from a solid sample. A description of the physics behind isotope shifts in atoms and molecules is presented, followed by the physics behind solid sampling of laser ablation plumes, optical methods for isotope measurements, the suitable physical conditions of laser-produced plasma plumes for isotopic analysis, and the current status. Finally, concluding remarks will be made on the existing gaps between previous works in the literature and suggestions for future work.

  11. Optical spectroscopy of laser-produced plasmas for standoff isotopic analysis

    DOE PAGES

    Harilal, S. S.; Brumfield, B. E.; LaHaye, N. L.; ...

    2018-04-20

    This review article covers the present status of isotope detection through emission, absorption, and fluorescence spectroscopy of atoms and molecules in a laser-produced plasma formed from a solid sample. A description of the physics behind isotope shifts in atoms and molecules is presented, followed by the physics behind solid sampling of laser ablation plumes, optical methods for isotope measurements, the suitable physical conditions of laser-produced plasma plumes for isotopic analysis, and the current status. Lastly, concluding remarks will be made on the existing gaps between previous works in the literature and suggestions for future work.

  12. Optical spectroscopy of laser-produced plasmas for standoff isotopic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harilal, S. S.; Brumfield, B. E.; LaHaye, N. L.

    This review article covers the present status of isotope detection through emission, absorption, and fluorescence spectroscopy of atoms and molecules in a laser-produced plasma formed from a solid sample. A description of the physics behind isotope shifts in atoms and molecules is presented, followed by the physics behind solid sampling of laser ablation plumes, optical methods for isotope measurements, the suitable physical conditions of laser-produced plasma plumes for isotopic analysis, and the current status. Finally, concluding remarks will be made on the existing gaps between previous works in the literature and suggestions for future work.

  13. Marketing your expertise.

    PubMed

    Czaplewski, L M

    1999-01-01

    Marketing an existing or new venture is a vital part of business. For the nurse entrepreneur, marketing involves applying previously learned skills to new situations. The methods used to market a service may mean the difference between success and failure. Unfortunately, many entrepreneurs think that because they have a great idea, clients will beat a path to their door. Marketing requires planning, creativity, time, and money. It is an ongoing process that must be evaluated regularly. When marketing achieves results, clients commit to using the entrepreneur's services and profits are realized. Basic marketing concepts are considered, and strategies for developing a workable marketing plan are presented.

  14. Laser-welded Dissimilar Steel-aluminum Seams for Automotive Lightweight Construction

    NASA Astrophysics Data System (ADS)

    Schimek, M.; Springer, A.; Kaierle, S.; Kracht, D.; Wesling, V.

    By reducing vehicle weight, a significant increase in fuel efficiency and consequently a reduction in CO2 emissions can be achieved. There is currently high interest in producing hybrid weld seams between steel and aluminum. Previous methods such as laser brazing are possible only with fluxes and additional materials. Laser welding can be used to join steel and aluminum without the use of additives. With a low penetration depth, increases in tensile strength can be achieved. Recent results from laser-welded overlap seams show that there is no compromise in strength when the penetration depth into the aluminum is decreased.

  15. Optical spectroscopy of laser-produced plasmas for standoff isotopic analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harilal, S. S.; Brumfield, B. E.; LaHaye, N. L.

    This review article covers the present status of isotope detection through emission, absorption, and fluorescence spectroscopy of atoms and molecules in a laser-produced plasma formed from a solid sample. A description of the physics behind isotope shifts in atoms and molecules is presented, followed by the physics behind solid sampling of laser ablation plumes, optical methods for isotope measurements, the suitable physical conditions of laser-produced plasma plumes for isotopic analysis, and the current status. Lastly, concluding remarks will be made on the existing gaps between previous works in the literature and suggestions for future work.

  16. Optical spectroscopy of laser-produced plasmas for standoff isotopic analysis

    DOE PAGES

    Harilal, S. S.; Brumfield, B. E.; LaHaye, N. L.; ...

    2018-06-01

    This review article covers the present status of isotope detection through emission, absorption, and fluorescence spectroscopy of atoms and molecules in a laser-produced plasma formed from a solid sample. A description of the physics behind isotope shifts in atoms and molecules is presented, followed by the physics behind solid sampling of laser ablation plumes, optical methods for isotope measurements, the suitable physical conditions of laser-produced plasma plumes for isotopic analysis, and the current status. Finally, concluding remarks will be made on the existing gaps between previous works in the literature and suggestions for future work.

  17. Carbide fuel pin and capsule design for irradiations at thermionic temperatures

    NASA Technical Reports Server (NTRS)

    Siegel, B. L.; Slaby, J. G.; Mattson, W. F.; Dilanni, D. C.

    1973-01-01

    The design of a capsule assembly to evaluate tungsten-emitter - carbide-fuel combinations for thermionic fuel elements is presented. An in-pile fuel pin evaluation program concerned with clad temperature, neutron spectrum, carbide fuel composition, fuel geometry, fuel density, and clad thickness is discussed. The capsule design was a compromise involving considerations between heat transfer, instrumentation, materials compatibility, and test location. Heat-transfer calculations were instrumental in determining the method of support of the fuel pin to minimize axial temperature variations. The capsule design was easily fabricable and utilized existing state-of-the-art experience from previous programs.

  18. The kinship2 R package for pedigree data.

    PubMed

    Sinnwell, Jason P; Therneau, Terry M; Schaid, Daniel J

    2014-01-01

    The kinship2 package is restructured from the previous kinship package. Existing features are now enhanced and new features added for handling pedigree objects. Pedigree plotting features have been updated to display features on complex pedigrees while adhering to pedigree plotting standards. Kinship matrices can now be calculated for the X chromosome. Other methods have been added to subset and trim pedigrees while maintaining the pedigree structure. We make the kinship2 package available for R on the Contributed R Archives Network (CRAN), where data management is built-in and other packages can use the pedigree object.
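
    The kinship coefficient itself obeys a simple recursion: self-kinship is 1/2 plus half the kinship between the parents, and kinship with any earlier individual is the average of that individual's kinship with one's parents. A hedged Python sketch of the autosomal case (not the package's R implementation; it assumes individuals are ordered parents-before-children and that both parents are either known or unknown):

      import numpy as np

      def kinship_matrix(parents):
          """parents: list of (father, mother) index pairs, (None, None) for founders."""
          n = len(parents)
          K = np.zeros((n, n))
          for i in range(n):
              f, m = parents[i]
              # self-kinship: 1/2, plus half the parents' kinship if known
              K[i, i] = 0.5 if f is None else 0.5 + 0.5 * K[f, m]
              for j in range(i):
                  # kinship with an earlier individual averages over i's parents
                  K[i, j] = 0.0 if f is None else 0.5 * (K[f, j] + K[m, j])
                  K[j, i] = K[i, j]
          return K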

  19. Quantum coupled mutation finder: predicting functionally or structurally important sites in proteins using quantum Jensen-Shannon divergence and CUDA programming.

    PubMed

    Gültas, Mehmet; Düzgün, Güncel; Herzog, Sebastian; Jäger, Sven Joachim; Meckbach, Cornelia; Wingender, Edgar; Waack, Stephan

    2014-04-03

    The identification of functionally or structurally important non-conserved residue sites in protein multiple sequence alignments (MSAs) is an important challenge for understanding the structural basis and molecular mechanism of protein functions. Despite the rich literature on compensatory mutations as well as sequence conservation analysis for the detection of those important residues, previous methods often rely on classical information-theoretic measures. However, these measures usually do not take into account dis/similarities of amino acids, which are likely to be crucial for those residues. In this study, we present a new method, the Quantum Coupled Mutation Finder (QCMF), that incorporates significant dis/similar amino acid pair signals in the prediction of functionally or structurally important sites. The results of this study are twofold. First, using the essential sites of two human proteins, namely epidermal growth factor receptor (EGFR) and glucokinase (GCK), we tested the QCMF method. QCMF includes two metrics based on quantum Jensen-Shannon divergence to measure both sequence conservation and compensatory mutations. We found that QCMF reaches an improved performance in identifying essential sites from MSAs of both proteins, with a significantly higher Matthews correlation coefficient (MCC) value in comparison to previous methods. Second, using a data set of 153 proteins, we made a pairwise comparison between QCMF and three conventional methods. This comparison study strongly suggests that QCMF complements the conventional methods for the identification of correlated mutations in MSAs. QCMF utilizes the notion of entanglement, which is a major resource of quantum information, to model significant dissimilar and similar amino acid pair signals in the detection of functionally or structurally important sites. Our results suggest that, on the one hand, QCMF significantly outperforms the previous method, which mainly focuses on dissimilar amino acid signals, in detecting essential sites in proteins. On the other hand, it is complementary to the existing methods for the identification of correlated mutations. The method of QCMF is computationally intensive. To ensure a feasible computation time of the QCMF algorithm, we leveraged the Compute Unified Device Architecture (CUDA). The QCMF server is freely accessible at http://qcmf.informatik.uni-goettingen.de/.
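
    For orientation, the quantum Jensen-Shannon divergence between density matrices rho and sigma is the von Neumann entropy of their mixture minus the mean of their individual entropies. A minimal numpy sketch of that underlying quantity (the QCMF metrics build on it; this is not the authors' code):

      import numpy as np

      def von_neumann_entropy(rho):
          """Entropy -tr(rho log2 rho) via the eigenvalues of a density matrix."""
          ev = np.linalg.eigvalsh(rho)
          ev = ev[ev > 1e-12]            # drop numerically zero eigenvalues
          return float(-np.sum(ev * np.log2(ev)))

      def qjsd(rho, sigma):
          """Quantum Jensen-Shannon divergence between two density matrices."""
          mix = 0.5 * (rho + sigma)
          return von_neumann_entropy(mix) - 0.5 * (
              von_neumann_entropy(rho) + von_neumann_entropy(sigma))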

  20. The cost of quality: Implementing generalization and suppression for anonymizing biomedical data with minimal information loss.

    PubMed

    Kohlmayer, Florian; Prasser, Fabian; Kuhn, Klaus A

    2015-12-01

    With the ARX data anonymization tool, structured biomedical data can be de-identified using syntactic privacy models, such as k-anonymity. Data is transformed with two methods: (a) generalization of attribute values, followed by (b) suppression of data records. The former method results in data that is well suited for analyses by epidemiologists, while the latter method significantly reduces loss of information. Our tool uses an optimal anonymization algorithm that maximizes output utility according to a given measure. To achieve scalability, existing optimal anonymization algorithms exclude parts of the search space by predicting the outcome of data transformations regarding privacy and utility without explicitly applying them to the input dataset. These optimizations cannot be used if data is transformed with generalization and suppression. As optimal data utility and scalability are important for anonymizing biomedical data, we had to develop a novel method. In this article, we first confirm experimentally that combining generalization with suppression significantly increases data utility. Next, we prove that, within this coding model, the outcome of data transformations regarding privacy and utility cannot be predicted. As a consequence, existing algorithms fail to deliver optimal data utility. We confirm this finding experimentally. The limitation of previous work can be overcome at the cost of increased computational complexity. However, scalability is important for anonymizing data with user feedback. Consequently, we identify properties of datasets that may be predicted in our context and propose a novel and efficient algorithm. Finally, we evaluate our solution with multiple datasets and privacy models. This work presents the first thorough investigation of which properties of datasets can be predicted when data is anonymized with generalization and suppression. Our novel approach adopts existing optimization strategies to our context and combines different search methods. The experiments show that our method is able to efficiently solve a broad spectrum of anonymization problems. Our work shows that implementing syntactic privacy models is challenging and that existing algorithms are not well suited for anonymizing data with transformation models which are more complex than generalization alone. As such models have been recommended for use in the biomedical domain, our results are of general relevance for de-identifying structured biomedical data. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
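
    To make the coding model concrete: generalization coarsens quasi-identifier values, and suppression then removes any record whose equivalence class is still smaller than k. A toy sketch with a hypothetical age/ZIP hierarchy (illustrative only, not the ARX implementation):

      from collections import Counter

      def generalize(record, level):
          """Coarsen quasi-identifiers; 'age' to a bucket, 'zip' by masking digits."""
          width = {0: 1, 1: 10, 2: 100}[level]
          lo = (record['age'] // width) * width
          age = str(record['age']) if width == 1 else f"{lo}-{lo + width - 1}"
          zip5 = record['zip'][:5 - level] + '*' * level   # assumes 5-digit ZIPs
          return (age, zip5)

      def anonymize(records, level, k):
          """Generalize, then suppress records in equivalence classes smaller than k."""
          gen = [generalize(r, level) for r in records]
          counts = Counter(gen)
          return [g if counts[g] >= k else None for g in gen]   # None = suppressed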

  1. Automatically generated acceptance test: A software reliability experiment

    NASA Technical Reports Server (NTRS)

    Protzel, Peter W.

    1988-01-01

    This study presents results of a software reliability experiment investigating the feasibility of a new error detection method. The method can be used as an acceptance test and is solely based on empirical data about the behavior of internal states of a program. The experimental design uses the existing environment of a multi-version experiment previously conducted at the NASA Langley Research Center, in which the launch interceptor problem is used as a model. This allows the controlled experimental investigation of versions with well-known single and multiple faults, and the availability of an oracle permits the determination of the error detection performance of the test. Fault interaction phenomena are observed that have an amplifying effect on the number of error occurrences. Preliminary results indicate that all faults examined so far are detected by the acceptance test. This shows promise for further investigations, and for the employment of this test method on other applications.

  2. Reviewing and piloting methods for decreasing discount rates; someone, somewhere in time.

    PubMed

    Parouty, Mehraj B Y; Krooshof, Daan G M; Westra, Tjalke A; Pechlivanoglou, Petros; Postma, Maarten J

    2013-08-01

    There has been substantial debate on the need for decreasing discounting for monetary and health gains in economic evaluations. Next to the discussion on differential discounting, a way to identify the need for such discounting strategies is through eliciting the time preferences for monetary and health outcomes. In this article, the authors investigate the perceived time preference for money and health gains through a pilot survey of Dutch university students, using previously suggested methods based on functional forms. Formal objectives of the study were to review such existing methods and to pilot them on a convenience sample using a questionnaire designed for this specific purpose. Indeed, a negative relation between the length of delay and the variance of the discounting rate was observed for all models. This study was intended as a pilot for a large-scale population-based investigation, using the findings from this pilot on wording of the questionnaire, interpretation, scope, and analytic framework.
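
    As an illustration of the kind of functional forms such a review compares, three textbook discount functions (generic forms, not necessarily the exact models piloted here):

      import numpy as np

      # discount factor applied to a gain delayed by t years; r, k, beta are
      # parameters to be elicited from survey responses
      exponential = lambda t, r: np.exp(-r * t)             # constant rate
      hyperbolic = lambda t, k: 1.0 / (1.0 + k * t)         # rate declines with delay
      quasi_hyperbolic = lambda t, r, beta: np.where(t == 0, 1.0, beta * np.exp(-r * t))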

  3. Scene analysis for effective visual search in rough three-dimensional-modeling scenes

    NASA Astrophysics Data System (ADS)

    Wang, Qi; Hu, Xiaopeng

    2016-11-01

    Visual search is a fundamental technology in the computer vision community. It is difficult to find an object in complex scenes when similar distracters exist in the background. We propose a target search method for rough three-dimensional-modeling scenes based on a vision salience theory and a camera imaging model. We define the salience of objects (or features) and explain how salience measurements of objects are calculated. We also present a type of search path that guides to the target through salient objects. Along the search path, as each previous object is localized, the search region of the subsequent object decreases; this region is calculated using the imaging model and an optimization method. The experimental results indicate that the proposed method is capable of resolving the ambiguities resulting from distracters that share similar visual features with the target, leading to an improvement in search speed of over 50%.

  4. Correlations of stock price fluctuations under multi-scale and multi-threshold scenarios

    NASA Astrophysics Data System (ADS)

    Sui, Guo; Li, Huajiao; Feng, Sida; Liu, Xueyong; Jiang, Meihui

    2018-01-01

    The multi-scale method is widely used in analyzing time series of financial markets and can provide market information for different economic entities who focus on different periods. By constructing multi-scale networks of price fluctuation correlation in the stock market, we can detect the topological relationships between the time series. Previous research has not addressed the problem that the original fluctuation correlation networks are fully connected networks, and that more information exists within these networks than is currently being utilized. Here we use listed coal companies as a case study. First, we decompose the original stock price fluctuation series into different time scales. Second, we construct the stock price fluctuation correlation networks at different time scales. Third, we delete the edges of the network based on thresholds and analyze the network indicators. By combining the multi-scale method with the multi-threshold method, we bring to light the implicit information of fully connected networks.
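
    The thresholding step amounts to sweeping a cutoff over the correlation matrix of the (scale-decomposed) series and keeping only the strong edges, which turns each fully connected network into a sparse graph whose indicators can then be compared across scales. A minimal numpy sketch (illustrative, not the authors' code):

      import numpy as np

      def threshold_network(series, threshold):
          """series: n_series x n_timepoints array for one time scale.

          Returns the edge list (i, j, correlation) kept at this threshold."""
          corr = np.corrcoef(series)
          n = corr.shape[0]
          return [(i, j, corr[i, j])
                  for i in range(n) for j in range(i + 1, n)
                  if abs(corr[i, j]) >= threshold]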

  5. Catalytic, Enantioselective Sulfenofunctionalisation of Alkenes: Mechanistic, Crystallographic, and Computational Studies

    PubMed Central

    Denmark, Scott E.; Hartmann, Eduard; Kornfilt, David J. P.; Wang, Hao

    2015-01-01

    The stereocontrolled introduction of vicinal heteroatomic substituents into organic molecules is one of the most powerful ways of adding value and function. Whereas many methods exist for the introduction of oxygen- and nitrogen-containing substituents, the number of stereocontrolled methods for the introduction of sulfur-containing substituents pales by comparison. Previous reports from these laboratories have described sulfenofunctionalization reactions of alkenes that construct vicinal carbon-sulfur and carbon-oxygen, carbon-nitrogen, as well as carbon-carbon bonds with high levels of diastereospecificity and enantioselectivity. This process is enabled by the concept of Lewis base activation of Lewis acids, which provides activation of Group 16 electrophiles. To provide a foundation for expansion of substrate scope and improved selectivities, we have undertaken a comprehensive study of the catalytically active species. Insights gleaned from kinetic, crystallographic and computational methods have led to the introduction of a new family of sulfenylating agents that provide significantly enhanced selectivities. PMID:25411883

  6. The riddle of Tasmanian languages

    PubMed Central

    Bowern, Claire

    2012-01-01

    Recent work which combines methods from linguistics and evolutionary biology has been fruitful in discovering the history of major language families because of similarities in evolutionary processes. Such work opens up new possibilities for language research on previously unsolvable problems, especially in areas where information from other sources may be lacking. I use phylogenetic methods to investigate Tasmanian languages. Existing materials are so fragmentary that scholars have been unable to discover how many languages are represented in the sources. Using a clustering algorithm which identifies admixture, source materials representing more than one language are identified. Using the Neighbor-Net algorithm, 12 languages are identified in five clusters. Bayesian phylogenetic methods reveal that the families are not demonstrably related; an important result, given the importance of Tasmanian Aborigines for information about how societies have responded to population collapse in prehistory. This work provides insight into the societies of prehistoric Tasmania and illustrates a new utility of phylogenetics in reconstructing linguistic history. PMID:23015621

  7. shinyGISPA: A web application for characterizing phenotype by gene sets using multiple omics data combinations.

    PubMed

    Dwivedi, Bhakti; Kowalski, Jeanne

    2018-01-01

    While many methods exist for integrating multi-omics data or defining gene sets, no single tool defines gene sets based on the merging of multiple omics data sets. We present shinyGISPA, an open-source application with a user-friendly web-based interface to define genes according to their similarity in several molecular changes that are driving a disease phenotype. This tool was developed to help facilitate the usability of a previously published method, Gene Integrated Set Profile Analysis (GISPA), among researchers with limited computer-programming skills. The GISPA method allows the identification of multiple gene sets that may play a role in the characterization, clinical application, or functional relevance of a disease phenotype. The tool provides an automated workflow that is highly scalable and adaptable to applications that go beyond genomic data merging analysis. It is available at http://shinygispa.winship.emory.edu/shinyGISPA/.

  8. shinyGISPA: A web application for characterizing phenotype by gene sets using multiple omics data combinations

    PubMed Central

    Dwivedi, Bhakti

    2018-01-01

    While many methods exist for integrating multi-omics data or defining gene sets, no single tool defines gene sets based on the merging of multiple omics data sets. We present shinyGISPA, an open-source application with a user-friendly web-based interface to define genes according to their similarity in several molecular changes that are driving a disease phenotype. This tool was developed to help facilitate the usability of a previously published method, Gene Integrated Set Profile Analysis (GISPA), among researchers with limited computer-programming skills. The GISPA method allows the identification of multiple gene sets that may play a role in the characterization, clinical application, or functional relevance of a disease phenotype. The tool provides an automated workflow that is highly scalable and adaptable to applications that go beyond genomic data merging analysis. It is available at http://shinygispa.winship.emory.edu/shinyGISPA/. PMID:29415010

  9. Convex formulation of multiple instance learning from positive and unlabeled bags.

    PubMed

    Bao, Han; Sakai, Tomoya; Sato, Issei; Sugiyama, Masashi

    2018-05-24

    Multiple instance learning (MIL) is a variation of traditional supervised learning problems in which data (referred to as bags) are composed of sub-elements (referred to as instances) and only bag labels are available. MIL has a variety of applications such as content-based image retrieval, text categorization, and medical diagnosis. Most previous work on MIL assumes that training bags are fully labeled. However, it is often difficult to obtain a sufficient number of labeled bags in practical situations, while many unlabeled bags are available. A learning framework called PU classification (positive and unlabeled classification) can address this problem. In this paper, we propose a convex PU classification method to solve an MIL problem. We experimentally show that the proposed method achieves better performance with significantly lower computation costs than an existing method for PU-MIL. Copyright © 2018 Elsevier Ltd. All rights reserved.
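
    A generic unbiased PU risk minimizer in the spirit of this line of work, with a logistic loss and a linear scorer over precomputed bag-level feature vectors (an assumption for illustration; this is not the paper's exact double-hinge MIL formulation). For a loss with l(z) - l(-z) = -z, such as the logistic loss, the PU risk pi*E_P[l(f)] - pi*E_P[l(-f)] + E_U[l(-f)] reduces to the convex objective -pi*E_P[f] + E_U[l(-f)]:

      import numpy as np

      def train_pu_linear(Xp, Xu, prior, lr=0.05, epochs=2000):
          """Xp: positive bag features; Xu: unlabeled bag features;
          prior: assumed class prior pi of positives among unlabeled bags."""
          sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
          w = np.zeros(Xp.shape[1])
          for _ in range(epochs):
              # gradient of -pi*mean(Xp @ w) + mean(log(1 + exp(Xu @ w)))
              g = -prior * Xp.mean(axis=0) + Xu.T @ sigmoid(Xu @ w) / len(Xu)
              w -= lr * g
          return w                      # classify a bag x as positive if x @ w > 0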

  10. The hydrogen sulfide metabolite trimethylsulfonium is found in human urine

    NASA Astrophysics Data System (ADS)

    Lajin, Bassam; Francesconi, Kevin A.

    2016-06-01

    Hydrogen sulfide is the third and most recently discovered gaseous signaling molecule, following nitric oxide and carbon monoxide, and plays important roles both in normal physiology and in disease progression. The trimethylsulfonium ion (TMS) can result from successive methylation reactions of hydrogen sulfide. No report has so far described the presence or quantity of TMS in human urine. We developed a method for determining TMS in urine using liquid chromatography-electrospray ionization-triple quadrupole mass spectrometry (LC-ESI-QQQ) and applied the method to establish the urinary levels of TMS in a group of human volunteers. The measured urinary levels of TMS were in the nanomolar range, which is commensurate with the steady-state tissue concentrations of hydrogen sulfide previously reported in the literature. The developed method can be used in future studies for the quantification of urinary TMS as a potential biomarker for hydrogen sulfide body pools.

  11. Native conflict awared layout decomposition in triple patterning lithography using bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography has become a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm. In TP, a layout is decomposed into three masks, each followed by its own exposure and etch/freeze process. Previous works mostly focus on layout decomposition that simultaneously minimizes conflicts and stitches. However, since the existence of any native conflict forces layout re-design/modification and re-running of the time-consuming decomposition, an effective method that is aware of native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, we match each conflict graph against a prebuilt colored library, and as a result the NCs can be located and highlighted quickly.
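
    Underneath, TP decomposition is 3-coloring of the conflict graph, and a native conflict is exactly a sub-graph that admits no legal 3-mask assignment. A small backtracking checker makes the notion concrete (illustrative; practical decomposers, and the library matching proposed here, are far more scalable than brute-force search):

      def three_color(adj):
          """adj: adjacency lists of a conflict graph.

          Returns a mask index (0-2) per node, or None if the graph
          contains a native conflict (is not 3-colorable)."""
          n = len(adj)
          colors = [None] * n

          def solve(v):
              if v == n:
                  return True
              for c in range(3):
                  if all(colors[u] != c for u in adj[v]):
                      colors[v] = c
                      if solve(v + 1):
                          return True
              colors[v] = None
              return False

          return colors if solve(0) else None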

  12. Depositing aluminum as sacrificial metal to reduce metal-graphene contact resistance

    NASA Astrophysics Data System (ADS)

    Da-cheng, Mao; Zhi, Jin; Shao-qing, Wang; Da-yong, Zhang; Jing-yuan, Shi; Song-ang, Peng; Xuan-yun, Wang

    2016-07-01

    Reducing the contact resistance without degrading the mobility is crucial to achieving high-performance graphene field effect transistors. The idea of modifying the graphene surface by etching away a deposited metal provides a new angle from which to achieve this goal. We exploit this idea by providing a new process method which reduces the contact resistance from 597 Ω·μm to below 200 Ω·μm while no degradation of mobility is observed in the devices. This simple process method avoids the drawbacks of uncontrollability, ineffectiveness, and trade-off with mobility which often exist in previously proposed methods. Project supported by the National Science and Technology Major Project, China (Grant No. 2011ZX02707.3), the National Natural Science Foundation of China (Grant No. 61136005), the Chinese Academy of Sciences (Grant No. KGZD-EW-303), and the Project of Beijing Municipal Science and Technology Commission, China (Grant No. Z151100003515003).

  13. Translation of Genotype to Phenotype by a Hierarchy of Cell Subsystems.

    PubMed

    Yu, Michael Ku; Kramer, Michael; Dutkowski, Janusz; Srivas, Rohith; Licon, Katherine; Kreisberg, Jason; Ng, Cherie T; Krogan, Nevan; Sharan, Roded; Ideker, Trey

    2016-02-24

    Accurately translating genotype to phenotype requires accounting for the functional impact of genetic variation at many biological scales. Here we present a strategy for genotype-phenotype reasoning based on existing knowledge of cellular subsystems. These subsystems and their hierarchical organization are defined by the Gene Ontology or a complementary ontology inferred directly from previously published datasets. Guided by the ontology's hierarchical structure, we organize genotype data into an "ontotype," that is, a hierarchy of perturbations representing the effects of genetic variation at multiple cellular scales. The ontotype is then interpreted using logical rules generated by machine learning to predict phenotype. This approach substantially outperforms previous, non-hierarchical methods for translating yeast genotype to cell growth phenotype, and it accurately predicts the growth outcomes of two new screens of 2,503 double gene knockouts impacting DNA repair or nuclear lumen. Ontotypes also generalize to larger knockout combinations, setting the stage for interpreting the complex genetics of disease.
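
    The ontotype construction can be sketched compactly: propagate the set of perturbed genes through the subsystem hierarchy so that each subsystem records how many perturbed genes are annotated at or below it. A hedged Python sketch with hypothetical data structures (not the authors' implementation):

      def ontotype(perturbed_genes, subsystem_genes, children):
          """subsystem_genes: subsystem -> set of directly annotated genes;
          children: subsystem -> list of child subsystems (a DAG).

          Returns subsystem -> count of perturbed genes at or below it."""
          perturbed = set(perturbed_genes)
          memo = {}

          def genes_under(s):
              # all genes annotated to s or to any descendant (memoized DAG walk)
              if s not in memo:
                  g = set(subsystem_genes.get(s, ()))
                  for c in children.get(s, ()):
                      g |= genes_under(c)
                  memo[s] = g
              return memo[s]

          return {s: len(genes_under(s) & perturbed) for s in subsystem_genes}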

  14. The Problem of Confounding in Studies of the Effect of Maternal Drug Use on Pregnancy Outcome

    PubMed Central

    Källén, Bengt

    2012-01-01

    In most epidemiological studies, the problem of confounding adds to the uncertainty in the conclusions drawn. This is also true for studies on the effect of maternal drug use on birth defect risks. This paper describes various types of such confounders and discusses methods to identify and adjust for them. Such confounders include maternal characteristics such as age, parity, smoking, alcohol use, and body mass index; subfertility and previous pregnancies, including the previous birth of a malformed child; socioeconomic status; and race/ethnicity or country of birth. Confounding by concomitant maternal drug use may occur. Geographical or seasonal confounding can exist. In rare instances, infant sex and multiple birth can appear as confounders. The most difficult problem to solve is often confounding by indication. The problem of confounding is less important for congenital malformations than for many other pregnancy outcomes. PMID:22190949

  15. Apparatus for investigating the reactions of soft-bodied invertebrates to controlled humidity gradients

    PubMed Central

    Russell, Joshua; Pierce-Shimomura, Jonathan T.

    2015-01-01

    Background While many studies have assayed behavioral responses of animals to chemical, temperature and light gradients, fewer studies have assayed how animals respond to humidity gradients. Our novel humidity chamber has allowed us to study the neuromolecular basis of humidity sensation in the nematode Caenorhabditis elegans (Russell et al. 2014). New Method We describe an easy-to-construct, low-cost humidity chamber to assay the behavior of small animals, including soft-bodied invertebrates, in controlled humidity gradients. Results We show that our humidity-chamber design is amenable to soft-bodied invertebrates and can produce reliable gradients ranging 0.3–8% RH/cm across a 9 cm long × 7.5 cm wide gel-covered arena. Comparison with Existing Method(s) Previous humidity chambers relied on circulating dry and moist air to produce a steep humidity gradient in a small arena (e.g. Sayeed & Benzer, 1996). To remove the confound of moving air that may elicit mechanical responses independent of humidity responses, our chamber controlled the humidity gradient using reservoirs of hygroscopic materials. Additionally, to better observe the behavioral mechanisms for humidity responses, our chamber provided a larger arena. Although similar chambers have been described previously, these approaches were not suitable for soft-bodied invertebrates or for easy imaging of behavior because they required that animals move across wire or fabric mesh. Conclusion The general applicability of our humidity chamber overcomes limitations of previous designs and opens the door to observe the behavioral responses of soft-bodied invertebrates, including genetically powerful C. elegans and Drosophila larvae. PMID:25176025

  16. Optimization of Photoactive Protein Z for Fast and Efficient Site-Specific Conjugation of Native IgG

    PubMed Central

    2015-01-01

    Antibody conjugates have been used in a variety of applications from immunoassays to drug conjugates. However, it is becoming increasingly clear that in order to maximize an antibody’s antigen binding ability and to produce homogeneous antibody-conjugates, the conjugated molecule should be attached onto IgG site-specifically. We previously developed a facile method for the site-specific modification of full length, native IgGs by engineering a recombinant Protein Z that forms a covalent link to the Fc domain of IgG upon exposure to long wavelength UV light. To further improve the efficiency of Protein Z production and IgG conjugation, we constructed a panel of 13 different Protein Z variants with the UV-active amino acid benzoylphenylalanine (BPA) in different locations. By using this panel of Protein Z to cross-link a range of IgGs from different hosts, including human, mouse, and rat, we discovered two previously unknown Protein Z variants, L17BPA and K35BPA, that are capable of cross-linking many commonly used IgG isotypes with efficiencies ranging from 60% to 95% after only 1 h of UV exposure. When compared to existing site-specific methods, which often require cloning or enzymatic reactions, the Protein Z-based method described here, utilizing the L17BPA, K35BPA, and the previously described Q32BPA variants, represents a vastly more accessible and efficient approach that is compatible with nearly all native IgGs, thus making site-specific conjugation more accessible to the general research community. PMID:25121619

  17. Optimization of photoactive protein Z for fast and efficient site-specific conjugation of native IgG.

    PubMed

    Hui, James Z; Tsourkas, Andrew

    2014-09-17

    Antibody conjugates have been used in a variety of applications from immunoassays to drug conjugates. However, it is becoming increasingly clear that in order to maximize an antibody's antigen binding ability and to produce homogeneous antibody-conjugates, the conjugated molecule should be attached onto IgG site-specifically. We previously developed a facile method for the site-specific modification of full length, native IgGs by engineering a recombinant Protein Z that forms a covalent link to the Fc domain of IgG upon exposure to long wavelength UV light. To further improve the efficiency of Protein Z production and IgG conjugation, we constructed a panel of 13 different Protein Z variants with the UV-active amino acid benzoylphenylalanine (BPA) in different locations. By using this panel of Protein Z to cross-link a range of IgGs from different hosts, including human, mouse, and rat, we discovered two previously unknown Protein Z variants, L17BPA and K35BPA, that are capable of cross-linking many commonly used IgG isotypes with efficiencies ranging from 60% to 95% after only 1 h of UV exposure. When compared to existing site-specific methods, which often require cloning or enzymatic reactions, the Protein Z-based method described here, utilizing the L17BPA, K35BPA, and the previously described Q32BPA variants, represents a vastly more accessible and efficient approach that is compatible with nearly all native IgGs, thus making site-specific conjugation more accessible to the general research community.

  18. Brake System Design Optimization : Volume 1. A Survey and Assessment.

    DOT National Transportation Integrated Search

    1978-06-01

    Existing freight car braking systems, components, and subsystems are characterized both physically and functionally, and life-cycle costs are examined. Potential improvements to existing systems previously proposed or available are identified and des...

  19. Directed differentiation of embryonic stem cells using a bead-based combinatorial screening method.

    PubMed

    Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen

    2014-01-01

    We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported.

  20. A dual-catalysis approach to enantioselective [2 + 2] photocycloadditions using visible light.

    PubMed

    Du, Juana; Skubi, Kazimer L; Schultz, Danielle M; Yoon, Tehshik P

    2014-04-25

    In contrast to the wealth of catalytic systems that are available to control the stereochemistry of thermally promoted cycloadditions, few similarly effective methods exist for the stereocontrol of photochemical cycloadditions. A major unsolved challenge in the design of enantioselective catalytic photocycloaddition reactions has been the difficulty of controlling racemic background reactions that occur by direct photoexcitation of substrates while unbound to catalyst. Here, we describe a strategy for eliminating the racemic background reaction in asymmetric [2 + 2] photocycloadditions of α,β-unsaturated ketones to the corresponding cyclobutanes by using a dual-catalyst system consisting of a visible light-absorbing transition-metal photocatalyst and a stereocontrolling Lewis acid cocatalyst. The independence of these two catalysts enables broader scope, greater stereochemical flexibility, and better efficiency than previously reported methods for enantioselective photochemical cycloadditions.

  1. A nonlinear model for gas chromatograph systems

    NASA Technical Reports Server (NTRS)

    Feinberg, M. P.

    1975-01-01

    Fundamental engineering design techniques and concepts were studied for the optimization of a gas chromatograph-mass spectrometer chemical analysis system suitable for use on an unmanned Martian roving vehicle. Previously developed mathematical models of the gas chromatograph are found to be inadequate for predicting peak heights and spreading under some experimental conditions and for some chemical systems. A modification to the existing equilibrium adsorption model is required: the Langmuir isotherm replaces the linear isotherm. The Crank-Nicolson numerical technique was studied for use with the linear isotherm to determine the utility of the method. Modifications are made to the method to eliminate unnecessary calculations, resulting in an overall reduction of the computation time of about 42 percent. The Langmuir isotherm, which takes into account composition-dependent effects on the thermodynamic parameter mRo, is then considered.
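
    For reference, the Crank-Nicolson scheme studied here averages the implicit and explicit second differences, giving an unconditionally stable tridiagonal update. A minimal sketch for pure diffusion with fixed ends (the chromatograph model additionally carries advection and the adsorption isotherm):

      import numpy as np

      def crank_nicolson_diffusion(u0, D, dx, dt, steps):
          """Advance u_t = D u_xx by `steps` Crank-Nicolson time steps."""
          n = len(u0)
          r = D * dt / (2.0 * dx ** 2)
          # (I - rT) u_new = (I + rT) u_old, with T the second-difference matrix
          A = (np.diag(np.full(n, 1 + 2 * r))
               + np.diag(np.full(n - 1, -r), 1) + np.diag(np.full(n - 1, -r), -1))
          B = (np.diag(np.full(n, 1 - 2 * r))
               + np.diag(np.full(n - 1, r), 1) + np.diag(np.full(n - 1, r), -1))
          for M in (A, B):                    # pin Dirichlet boundary values
              M[0, :], M[-1, :] = 0.0, 0.0
              M[0, 0] = M[-1, -1] = 1.0
          u = np.asarray(u0, dtype=float)
          for _ in range(steps):
              u = np.linalg.solve(A, B @ u)
          return u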

  2. A Conceptual and Empirical Review of the Meaning, Measurement, Development, and Teaching of Intervention Competence in Clinical Psychology

    PubMed Central

    Barber, Jacques P.

    2009-01-01

    Through the course of this paper we discuss several fundamental issues related to the intervention competence of psychologists. Following definitional clarification and proposals for more strictly distinguishing competence from adherence, we interpret Dreyfus and Dreyfus's (1986) five-stage theory of competence development (from novice to expert) within a strictly clinical framework. Existing methods of competence assessment are then evaluated, and we argue for the use of new and multiple assessment modalities. Next, we utilize the previous sections as a foundation to propose methods for training and evaluating competent psychologists. Lastly, we discuss several potential impediments to large-scale competence assessment and education, such as the heterogeneity of therapeutic orientations and what could be termed a lack of transparency in clinical training. PMID:18952334

  3. The classical limit of minimal length uncertainty relation: revisit with the Hamilton-Jacobi method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Xiaobo; Wang, Peng; Yang, Haitang, E-mail: guoxiaobo@swust.edu.cn, E-mail: pengw@scu.edu.cn, E-mail: hyanga@scu.edu.cn

    2016-05-01

    The existence of a minimum measurable length could deform not only the standard quantum mechanics but also classical physics. The effects of the minimal length on classical orbits of particles in a gravitation field have been investigated before, using the deformed Poisson bracket or Schwarzschild metric. In this paper, we first use the Hamilton-Jacobi method to derive the deformed equations of motion in the context of Newtonian mechanics and general relativity. We then employ them to study the precession of planetary orbits, deflection of light, and time delay in radar propagation. We also set limits on the deformation parameter by comparing our results with the observational measurements. Finally, comparison with results from previous papers is given at the end of this paper.

  4. Reliable Acquisition of RAM Dumps from Intel-Based Apple Mac Computers over FireWire

    NASA Astrophysics Data System (ADS)

    Gladyshev, Pavel; Almansoori, Afrah

    RAM content acquisition is an important step in live forensic analysis of computer systems. FireWire offers an attractive way to acquire RAM content of Apple Mac computers equipped with a FireWire connection. However, the existing techniques for doing so require substantial knowledge of the target computer configuration and cannot be used reliably on a previously unknown computer in a crime scene. This paper proposes a novel method for acquiring RAM content of Apple Mac computers over FireWire, which automatically discovers necessary information about the target computer and can be used in the crime scene setting. As an application of the developed method, the techniques for recovery of AOL Instant Messenger (AIM) conversation fragments from RAM dumps are also discussed in this paper.

  5. Formal methods for test case generation

    NASA Technical Reports Server (NTRS)

    Rushby, John (Inventor); De Moura, Leonardo Mendonga (Inventor); Hamon, Gregoire (Inventor)

    2011-01-01

    The invention relates to the use of model checkers to generate efficient test sets for hardware and software systems. The method provides for extending existing tests to reach new coverage targets; searching *to* some or all of the uncovered targets in parallel; searching in parallel *from* some or all of the states reached in previous tests; and slicing the model relative to the current set of coverage targets. The invention provides efficient test case generation and test set formation. Deep regions of the state space can be reached within allotted time and memory. The approach has been applied to the model checkers of SRI's SAL system and to model-based designs developed in Stateflow. Stateflow models achieving complete state and transition coverage in a single test case are reported.

  6. Further investigations of the W-test for pairwise epistasis testing.

    PubMed

    Howey, Richard; Cordell, Heather J

    2017-01-01

    Background: In a recent paper, a novel W-test for pairwise epistasis testing was proposed that appeared, in computer simulations, to have higher power than competing alternatives. Application to genome-wide bipolar data detected significant epistasis between SNPs in genes of relevant biological function. Network analysis indicated that the implicated genes formed two separate interaction networks, each containing genes highly related to autism and neurodegenerative disorders. Methods: Here we investigate further the properties and performance of the W-test via theoretical evaluation, computer simulations and application to real data. Results: We demonstrate that, for common variants, the W-test is closely related to several existing tests of association allowing for interaction, including logistic regression on 8 degrees of freedom, although logistic regression can show inflated type I error for low minor allele frequencies, whereas the W-test shows good/conservative type I error control. Although in some situations the W-test can show higher power, logistic regression is not limited to tests on 8 degrees of freedom but can instead be tailored to impose greater structure on the assumed alternative hypothesis, offering a power advantage when the imposed structure matches the true structure. Conclusions: The W-test is a potentially useful method for testing for association - without necessarily implying interaction - between genetic variants and disease, particularly when one or more of the genetic variants are rare. For common variants, the advantages of the W-test are less clear, and, indeed, there are situations where existing methods perform better. In our investigations, we further uncover a number of problems with the practical implementation and application of the W-test (to bipolar disorder) previously described, apparently due to inadequate use of standard data quality-control procedures. This observation leads us to urge caution in interpretation of the previously-presented results, most of which we consider are highly likely to be artefacts.
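
    For orientation, the 8-degree-of-freedom association test that the W-test is compared against can be sketched as a chi-square test on the 2 x 9 table of case/control counts over joint genotypes (a generic formulation for context, not the W-test itself):

      import numpy as np
      from scipy.stats import chi2_contingency

      def pairwise_8df_test(g1, g2, y):
          """g1, g2: genotypes coded 0/1/2; y: case/control status coded 0/1."""
          combo = np.asarray(g1) * 3 + np.asarray(g2)   # joint genotype 0..8
          table = np.zeros((2, 9))
          for c, label in zip(combo, y):
              table[label, c] += 1
          table = table[:, table.sum(axis=0) > 0]       # drop empty genotype cells
          chi2, p, dof, _ = chi2_contingency(table)
          return chi2, p, dof                           # dof = 8 if all cells occupied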

  7. SU-G-BRA-11: Tumor Tracking in An Iterative Volume of Interest Based 4D CBCT Reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martin, R; Pan, T; Ahmad, M

    2016-06-15

    Purpose: 4D CBCT can allow evaluation of tumor motion immediately prior to radiation therapy, but suffers from heavy artifacts that limit its ability to track tumors. Various iterative and compressed sensing reconstructions have been proposed to reduce these artifacts, but are costly time-wise and can degrade the image quality of bony anatomy for alignment with regularization. We have previously proposed an iterative volume of interest (I4D VOI) method which minimizes reconstruction time and maintains image quality of bony anatomy by focusing a 4D reconstruction within a VOI. The purpose of this study is to test the tumor tracking accuracy of this method compared to existing methods. Methods: Long scan (8–10 mins) CBCT data with corresponding RPM data was collected for 12 lung cancer patients. The full data set was sorted into 8 phases and reconstructed using FDK cone beam reconstruction to serve as a gold standard. The data was reduced in a way that maintains a normal breathing pattern and used to reconstruct 4D images using FDK, low and high regularization TV minimization (λ=2,10), and the proposed I4D VOI method with PTVs used for the VOI. Tumor trajectories were found using rigid registration within the VOI for each reconstruction and compared to the gold standard. Results: The root mean square error (RMSE) values were 2.70mm for FDK, 2.50mm for low regularization TV, 1.48mm for high regularization TV, and 2.34mm for I4D VOI. Streak artifacts in I4D VOI were reduced compared to FDK and images were less blurred than TV reconstructed images. Conclusion: I4D VOI performed at least as well as existing methods in tumor tracking, with the exception of high regularization TV minimization. These results along with the reconstruction time and outside VOI image quality advantages suggest I4D VOI to be an improvement over existing methods. Funding support provided by CPRIT grant RP110562-P2-01.

  8. A lifelong learning hyper-heuristic method for bin packing.

    PubMed

    Sim, Kevin; Hart, Emma; Paechter, Ben

    2015-01-01

    We describe a novel hyper-heuristic system that continuously learns over time to solve a combinatorial optimisation problem. The system continuously generates new heuristics and samples problems from its environment; and representative problems and heuristics are incorporated into a self-sustaining network of interacting entities inspired by methods in artificial immune systems. The network is plastic in both its structure and content, leading to the following properties: it exploits existing knowledge captured in the network to rapidly produce solutions; it can adapt to new problems with widely differing characteristics; and it is capable of generalising over the problem space. The system is tested on a large corpus of 3,968 new instances of 1D bin-packing problems as well as on 1,370 existing problems from the literature; it shows excellent performance in terms of the quality of solutions obtained across the datasets and in adapting to dynamically changing sets of problem instances compared to previous approaches. As the network self-adapts to sustain a minimal repertoire of both problems and heuristics that form a representative map of the problem space, the system is further shown to be computationally efficient and therefore scalable.
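
    For background on the low-level heuristics such a system selects among, here are two classic 1D bin-packing rules plus a toy selector that simply keeps whichever packing uses fewer bins (the actual system learns and generates heuristics rather than enumerating a fixed pair):

      def first_fit_decreasing(items, cap):
          bins = []
          for it in sorted(items, reverse=True):
              for b in bins:                     # first bin with room
                  if sum(b) + it <= cap:
                      b.append(it)
                      break
              else:
                  bins.append([it])
          return bins

      def best_fit_decreasing(items, cap):
          bins = []
          for it in sorted(items, reverse=True):
              fits = [b for b in bins if sum(b) + it <= cap]
              if fits:                           # tightest bin that still fits
                  min(fits, key=lambda b: cap - sum(b)).append(it)
              else:
                  bins.append([it])
          return bins

      def select_heuristic(items, cap):
          """Toy 'hyper-heuristic': keep the packing with the fewest bins."""
          return min((h(items, cap) for h in
                      (first_fit_decreasing, best_fit_decreasing)), key=len)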

  9. Link-Based Similarity Measures Using Reachability Vectors

    PubMed Central

    Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin

    2014-01-01

    We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach each target object is represented by a vector. Each element of the vector corresponds to all the objects in the given data, and the value of each element denotes the weight for the corresponding object. As for this weight value, we propose to utilize the probability of reaching from the target object to the specific object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
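
    A compact numpy sketch of the measure as described: build each object's reachability vector by random walk with restart from the target node, then compare two objects by the cosine of their vectors (restart probability and iteration count are illustrative; a column-stochastic transition matrix is assumed):

      import numpy as np

      def rwr_vector(P, target, restart=0.15, iters=100):
          """Reachability vector: element i is the steady-state probability of
          being at node i for a walk restarting at `target`."""
          e = np.zeros(P.shape[0]); e[target] = 1.0
          v = e.copy()
          for _ in range(iters):
              v = (1 - restart) * P @ v + restart * e
          return v

      def link_similarity(P, a, b):
          va, vb = rwr_vector(P, a), rwr_vector(P, b)
          return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))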

  10. Modified nucleoside triphosphates exist in mammals

    PubMed Central

    Jiang, Han-Peng; Xiong, Jun; Liu, Fei-Long; Ma, Cheng-Jie; Tang, Xing-Lin; Feng, Yu-Qi

    2018-01-01

    DNA and RNA contain diverse chemical modifications that exert important influences on a variety of cellular processes. In addition to enzyme-mediated modifications of DNA and RNA, previous in vitro studies showed that pre-modified nucleoside triphosphates (NTPs) can be incorporated into DNA and RNA during replication and transcription. Herein, we established a chemical labeling method in combination with liquid chromatography-electrospray ionization-mass spectrometry (LC-ESI-MS) analysis for the determination of endogenous NTPs in mammalian cells and tissues. We synthesized 8-(diazomethyl)quinoline (8-DMQ), which efficiently reacts with the phosphate group under mild conditions to label NTPs. The developed method allowed sensitive detection of NTPs, with detection limits improved 56- to 137-fold. The results showed that 12 types of endogenous modified NTPs were distinctly detected in mammalian cells and tissues. In addition, the majority of these modified NTPs exhibited significantly decreased contents in human hepatocellular carcinoma (HCC) tissues compared with tumor-adjacent normal tissues. Taken together, our study reveals the widespread existence of various modified NTPs in eukaryotes. PMID:29780546

  11. The influence of previous subject experience on interactions during peer instruction in an introductory physics course: A mixed methods analysis

    NASA Astrophysics Data System (ADS)

    Vondruska, Judy A.

    Over the past decade, peer instruction and the introduction of student response systems have provided a means of improving student engagement and achievement in large-lecture settings. While the nature of the student discourse occurring during peer instruction is less understood, existing studies have shown that student ideas about the subject, extraneous cues, and confidence level appear to matter in the student-student discourse. Using a mixed methods research design, this study examined the influence of previous subject experience on peer instruction in an introductory, one-semester Survey of Physics course. Quantitative results indicated that students in discussion pairs where both had previous subject experience were more likely to answer clicker questions correctly both before and after peer discussion compared to student groups where neither partner had previous subject experience. Students in mixed discussion pairs were not statistically different in correct response rates from the other pairings. There was no statistically significant difference between the experience pairs on unit exam scores or the Peer Instruction Partner Survey. Although there was a statistically significant difference between the pre-MPEX and post-MPEX scores, there was no difference between the members of the various subject experience peer discussion pairs. The qualitative study, conducted after the quantitative study, helped to inform the quantitative results by exploring the nature of the peer interactions through survey questions and a series of focus group discussions. While the majority of participants described a benefit to the use of clickers in the lecture, their experiences with their discussion partners varied. Students with previous subject experience tended to describe peer instruction more positively than students without previous subject experience, regardless of the experience level of their partner. They were also more likely to report favorable levels of comfort with the peer instruction experience. Students with no previous subject experience were more likely to describe discomfort at being assigned a stranger as a discussion partner and were more likely to report communication issues with their partner. Most group members, regardless of previous subject experience, related deeper discussions occurring when partners did not initially have the same answer to the clicker questions.

  12. Time to stabilization in single leg drop jump landings: an examination of calculation methods and assessment of differences in sample rate, filter settings and trial length on outcome values.

    PubMed

    Fransz, Duncan P; Huurnink, Arnold; de Boode, Vosse A; Kingma, Idsart; van Dieën, Jaap H

    2015-01-01

    Time to stabilization (TTS) is the time it takes for an individual to return to a baseline or stable state following a jump or hop landing. A large variety of methods exists for calculating the TTS. These methods can be described in terms of four aspects: (1) the input signal used (vertical, anteroposterior, or mediolateral ground reaction force), (2) the signal processing (smoothing by sequential averaging, a moving root-mean-square window, or fitting an unbounded third-order polynomial), (3) the stable state (threshold), and (4) the definition of when the (processed) signal is considered stable. Furthermore, differences exist with regard to sample rate, filter settings, and trial length. Twenty-five healthy volunteers performed ten 'single leg drop jump landing' trials. For each trial, TTS was calculated according to 18 previously reported methods. Additionally, the effects of sample rate (1000, 500, 200 and 100 samples/s), filter settings (no filter, 40, 15 and 10 Hz), and trial length (20, 14, 10, 7, 5 and 3 s) were assessed. The TTS values varied considerably across the calculation methods. The maximum effects of alterations in the processing settings, averaged over calculation methods, were 2.8% (SD 3.3%) for sample rate, 8.8% (SD 7.7%) for filter settings, and 100.5% (SD 100.9%) for trial length. Different TTS calculation methods are affected differently by sample rate, filter settings, and trial length. The effects of differences in sample rate and filter settings are generally small, while trial length has a large effect on TTS values. Copyright © 2014 Elsevier B.V. All rights reserved.
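
    One of the reported variants — moving-RMS processing of the vertical ground reaction force with a baseline-derived threshold — can be sketched as follows; the baseline window and the threshold multiple are assumptions for illustration, since the paper compares 18 such combinations:

      import numpy as np

      def time_to_stabilization(force, fs, window=0.25, thresh_sd=3.0):
          """force: vertical GRF after landing (N), sampled at fs Hz.

          Returns TTS in seconds, or None if the trial never stabilizes."""
          base = force[-int(fs):]                 # final second taken as stable state
          signal = force - base.mean()
          w = int(window * fs)
          rms = np.sqrt(np.convolve(signal ** 2, np.ones(w) / w, mode='valid'))
          limit = thresh_sd * base.std()
          above = np.nonzero(rms >= limit)[0]     # samples still above threshold
          if len(above) == 0:
              return 0.0
          if above[-1] == len(rms) - 1:
              return None
          return (above[-1] + 1) / fs             # first sample of the stable tail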

  13. A Ranking Approach to Genomic Selection.

    PubMed

    Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori

    2015-01-01

    Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies have formulated GS as the problem of modeling an individual's breeding value for a particular trait of interest, i.e., as a regression problem. To assess the predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess the ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG more strongly rewards models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS.
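
    For readers unfamiliar with the measure, the following is a minimal sketch of NDCG, assuming nonnegative trait values, linear gains and the usual logarithmic position discount; the gain and discount conventions used in the paper may differ in detail.

      import numpy as np

      def ndcg(y_true, y_score, k=None):
          """NDCG for ranking individuals by predicted breeding value.

          y_true: observed trait values (the gains); y_score: model predictions;
          k: evaluate the top-k ranked individuals (defaults to all).
          """
          y_true = np.asarray(y_true, dtype=float)
          k = len(y_true) if k is None else k
          order = np.argsort(y_score)[::-1]               # rank by prediction, best first
          discounts = 1.0 / np.log2(np.arange(2, k + 2))  # positions 1..k
          dcg = np.sum(y_true[order][:k] * discounts)
          idcg = np.sum(np.sort(y_true)[::-1][:k] * discounts)  # ideal ordering
          return dcg / idcg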

  14. A new method for calculating differential distributions directly in Mellin space

    NASA Astrophysics Data System (ADS)

    Mitov, Alexander

    2006-12-01

    We present a new method for the calculation of differential distributions directly in Mellin space without recourse to the usual momentum-fraction (or z-) space. The method is completely general and can be applied to any process. It is based on solving the integration-by-parts (IBP) identities when one of the powers of the propagators is an abstract number. The method retains the full dependence on the Mellin variable and can be implemented in any program for solving the IBP identities based on algebraic elimination, such as the Laporta algorithm. General features of the method are: (1) faster reduction, (2) a smaller number of master integrals compared to the usual z-space approach, and (3) master integrals that satisfy difference instead of differential equations. This approach generalizes previous results related to fully inclusive observables, such as the recently calculated three-loop space-like anomalous dimensions and coefficient functions in inclusive DIS, to more general processes requiring separate treatment of the various physical cuts. Many possible applications of this method exist, the most notable being the direct evaluation of the three-loop time-like splitting functions in QCD.
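
    For orientation, the Mellin moments in question are related to a z-space distribution f(z) by

      \tilde{f}(N) = \int_0^1 dz \, z^{N-1} f(z),

    and the point of the method is to keep N as an abstract symbol throughout the IBP reduction, so that, as the abstract notes, the master integrals obey difference equations in N rather than differential equations in z.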

  15. De novo peptide sequencing using CID and HCD spectra pairs.

    PubMed

    Yan, Yan; Kusalik, Anthony J; Wu, Fang-Xiang

    2016-10-01

    In tandem mass spectrometry (MS/MS), several different fragmentation techniques are possible, including collision-induced dissociation (CID), higher-energy collisional dissociation (HCD), electron-capture dissociation (ECD), and electron-transfer dissociation (ETD). When using pairs of spectra for de novo peptide sequencing, the most popular methods are designed for CID (or HCD) and ECD (or ETD) spectra because of the complementarity between them. Less attention has been paid to the use of CID and HCD spectra pairs. In this study, a new de novo peptide sequencing method is proposed for these spectra pairs. This method includes a CID and HCD spectra merging criterion and a parent mass correction step, along with improvements to our previously proposed algorithm for sequencing merged spectra. Three pairs of spectral datasets were used to investigate and compare the performance of the proposed method with other existing methods designed for single spectrum (HCD or CID) sequencing. Experimental results showed that full-length peptide sequencing accuracy was increased significantly by using spectra pairs in the proposed method, with the highest accuracy reaching 81.31%. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  16. A Novel Deployment Method for Communication-Intensive Applications in Service Clouds

    PubMed Central

    Liu, Chuanchang; Yang, Jingqi

    2014-01-01

    Service platforms are migrating to clouds to reasonably address long construction periods, low resource utilization, and the isolated construction of service platforms. However, previous deployment methods have paid little attention to deploying communication-intensive applications in service clouds. To address this problem, this paper proposed a combination of online deployment and offline deployment for deploying communication-intensive applications in service clouds. Firstly, the system architecture was designed for implementing the communication-aware deployment method for communication-intensive applications in service clouds. Secondly, in the online-deployment algorithm and the offline-deployment algorithm, service instances were deployed on an optimal cloud node based on the communication overhead, which is determined by the communication traffic between services as well as the communication performance between cloud nodes. Finally, the experimental results demonstrated that the proposed methods deployed communication-intensive applications effectively, with lower latency and lower load compared with existing algorithms. PMID:25140331
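
    The node-selection step can be made concrete with a short sketch. The snippet below is a hedged illustration of the communication-overhead criterion described above, not the authors' algorithm; the names (best_node, traffic, link_cost) are hypothetical, and link_cost is assumed to be defined for every ordered pair of nodes.

      def best_node(new_service, placement, traffic, link_cost, nodes):
          """Pick the cloud node that minimizes communication overhead for one service.

          placement: dict service -> node for already-deployed services.
          traffic: dict (service_a, service_b) -> traffic volume between them.
          link_cost: dict (node_a, node_b) -> communication cost between nodes.
          """
          def overhead(node):
              total = 0.0
              for (a, b), volume in traffic.items():
                  if a == new_service and b in placement:
                      other = placement[b]
                  elif b == new_service and a in placement:
                      other = placement[a]
                  else:
                      continue
                  if other != node:           # co-located services incur no network cost
                      total += volume * link_cost[(node, other)]
              return total
          return min(nodes, key=overhead)

    An online pass can apply such a rule greedily as service instances arrive, while an offline pass can revisit existing placements in bulk once traffic statistics have accumulated.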

  17. Effect of analytical conditions in wavelength dispersive electron microprobe analysis on the measurement of strontium-to-calcium (Sr/Ca) ratios in otoliths of anadromous salmonids

    USGS Publications Warehouse

    Zimmerman, Christian E.; Nielsen, Roger L.

    2003-01-01

    The use of strontium-to-calcium (Sr/Ca) ratios in otoliths is becoming a standard method to describe life history type and the chronology of migrations between freshwater and seawater habitats in teleosts (e.g. Kalish, 1990; Radtke et al., 1990; Secor, 1992; Rieman et al., 1994; Radtke, 1995; Limburg, 1995; Tzeng et al. 1997; Volk et al., 2000; Zimmerman, 2000; Zimmerman and Reeves, 2000, 2002). This method provides critical information concerning the relationship and ecology of species exhibiting phenotypic variation in migratory behavior (Kalish, 1990; Secor, 1999). Methods and procedures, however, vary among laboratories because a standard method or protocol for measurement of Sr in otoliths does not exist. In this note, we examine the variations in analytical conditions in an effort to increase precision of Sr/Ca measurements. From these findings we argue that precision can be maximized with higher beam current (although there is specimen damage) than previously recommended by Gunn et al. (1992).

  18. Improved regulatory element prediction based on tissue-specific local epigenomic signatures

    PubMed Central

    He, Yupeng; Gorkin, David U.; Dickel, Diane E.; Nery, Joseph R.; Castanon, Rosa G.; Lee, Ah Young; Shen, Yin; Visel, Axel; Pennacchio, Len A.; Ren, Bing; Ecker, Joseph R.

    2017-01-01

    Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types. REPTILE is available at https://github.com/yupenghe/REPTILE/. PMID:28193886

  19. A Study of Deep CNN-Based Classification of Open and Closed Eyes Using a Visible Light Camera Sensor.

    PubMed

    Kim, Ki Wan; Hong, Hyung Gil; Nam, Gi Pyo; Park, Kang Ryoung

    2017-06-30

    The necessity for the classification of open and closed eyes is increasing in various fields, including analysis of eye fatigue in 3D TVs, analysis of the psychological states of test subjects, and eye status tracking-based driver drowsiness detection. Previous studies have used various methods to distinguish between open and closed eyes, such as classifiers based on the features obtained from image binarization, edge operators, or texture analysis. However, when it comes to eye images with different lighting conditions and resolutions, it can be difficult to find an optimal threshold for image binarization or optimal filters for edge and texture extraction. In order to address this issue, we propose a method to classify open and closed eye images with different conditions, acquired by a visible light camera, using a deep residual convolutional neural network. After conducting performance analysis on both self-collected and open databases, we have determined that the classification accuracy of the proposed method is superior to that of existing methods.
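
    As a toy illustration of the residual idea, far shallower than the deep residual network the authors evaluate, the following PyTorch sketch classifies a grayscale eye crop as open or closed; the architecture and all names are assumptions made for the example.

      import torch
      import torch.nn as nn

      class ResidualBlock(nn.Module):
          """3x3 conv -> BN -> ReLU -> 3x3 conv -> BN, with an identity skip."""
          def __init__(self, channels):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                  nn.BatchNorm2d(channels),
                  nn.ReLU(inplace=True),
                  nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                  nn.BatchNorm2d(channels),
              )

          def forward(self, x):
              return torch.relu(x + self.body(x))   # skip connection

      class EyeStateNet(nn.Module):
          """Tiny residual CNN: grayscale eye crop -> open/closed logits."""
          def __init__(self):
              super().__init__()
              self.net = nn.Sequential(
                  nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                  ResidualBlock(32),
                  nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                  ResidualBlock(64),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(64, 2),                 # logits for {closed, open}
              )

          def forward(self, x):
              return self.net(x)

      # e.g.: logits = EyeStateNet()(torch.randn(8, 1, 64, 64))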

  20. Transition probabilities for general birth-death processes with applications in ecology, genetics, and evolution

    PubMed Central

    Crawford, Forrest W.; Suchard, Marc A.

    2011-01-01

    A birth-death process is a continuous-time Markov chain that counts the number of particles in a system over time. In the general process with n current particles, a new particle is born with instantaneous rate λ_n and a particle dies with instantaneous rate μ_n. Currently no robust and efficient method exists to evaluate the finite-time transition probabilities in a general birth-death process with arbitrary birth and death rates. In this paper, we first revisit the theory of continued fractions to obtain expressions for the Laplace transforms of these transition probabilities and make explicit an important derivation connecting transition probabilities and continued fractions. We then develop an efficient algorithm for computing these probabilities that analyzes the error associated with approximations in the method. We demonstrate that this error-controlled method agrees with known solutions and outperforms previous approaches to computing these probabilities. Finally, we apply our novel method to several important problems in ecology, evolution, and genetics. PMID:21984359
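
    For context, the classical result the authors make explicit takes, in one widely quoted form, the shape

      f_{00}(s) = \int_0^\infty e^{-st} p_{00}(t) \, dt
                = \cfrac{1}{s + \lambda_0 + \mu_0 - \cfrac{\lambda_0 \mu_1}{s + \lambda_1 + \mu_1 - \cfrac{\lambda_1 \mu_2}{s + \lambda_2 + \mu_2 - \cdots}}}

    for the Laplace transform of the transition probability p_{00}(t); truncating such a continued fraction at a controlled depth and numerically inverting the transform is the core of the error-controlled algorithm. The fraction above is quoted from the general continued-fraction literature, so the paper's exact notation and depth-control scheme may differ.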

  1. A DNA microarray-based methylation-sensitive (MS)-AFLP hybridization method for genetic and epigenetic analyses.

    PubMed

    Yamamoto, F; Yamamoto, M

    2004-07-01

    We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.

  2. Optimal economic order quantity for buyer-distributor-vendor supply chain with backlogging derived without derivatives

    NASA Astrophysics Data System (ADS)

    Teng, Jinn-Tsair; Cárdenas-Barrón, Leopoldo Eduardo; Lou, Kuo-Ren; Wee, Hui Ming

    2013-05-01

    In this article, we first correct a mathematical error in the total cost of the previously published paper by Chung and Wee [2007, 'Optimizing the Economic Lot Size of a Three-stage Supply Chain with Backlogging Derived Without Derivatives', European Journal of Operational Research, 183, 933-943], concerning a buyer-distributor-vendor three-stage supply chain with backlogging derived without derivatives. Then, an arithmetic-geometric inequality method is proposed not only to simplify the algebraic method of completing perfect squares, but also to remedy its shortcomings. In addition, we provide a closed-form solution for the integral number of deliveries for the distributor and the vendor without using complex derivatives. Furthermore, our method can solve many cases which theirs cannot, because they did not consider that the square root of a negative number does not exist. Finally, we use some numerical examples to show that our proposed optimal solution is cheaper to operate than theirs.
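
    To see the arithmetic-geometric inequality technique in miniature, consider the basic single-stage lot-size cost, a far simpler model than the three-stage chain treated here, used only to illustrate the derivative-free step. With ordering cost A, demand D and holding cost h,

      TC(Q) = \frac{AD}{Q} + \frac{hQ}{2} \ge 2\sqrt{\frac{AD}{Q} \cdot \frac{hQ}{2}} = \sqrt{2ADh},

    with equality exactly when \frac{AD}{Q} = \frac{hQ}{2}, i.e. at Q^* = \sqrt{2AD/h}: the classical economic order quantity recovered without derivatives.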

  3. A novel deployment method for communication-intensive applications in service clouds.

    PubMed

    Liu, Chuanchang; Yang, Jingqi

    2014-01-01

    Service platforms are migrating to clouds to reasonably address long construction periods, low resource utilization, and the isolated construction of service platforms. However, previous deployment methods have paid little attention to deploying communication-intensive applications in service clouds. To address this problem, this paper proposed a combination of online deployment and offline deployment for deploying communication-intensive applications in service clouds. Firstly, the system architecture was designed for implementing the communication-aware deployment method for communication-intensive applications in service clouds. Secondly, in the online-deployment algorithm and the offline-deployment algorithm, service instances were deployed on an optimal cloud node based on the communication overhead, which is determined by the communication traffic between services as well as the communication performance between cloud nodes. Finally, the experimental results demonstrated that the proposed methods deployed communication-intensive applications effectively, with lower latency and lower load compared with existing algorithms.

  4. Descriptive Question Answering with Answer Type Independent Features

    NASA Astrophysics Data System (ADS)

    Yoon, Yeo-Chan; Lee, Chang-Ki; Kim, Hyun-Ki; Jang, Myung-Gil; Ryu, Pum Mo; Park, So-Young

    In this paper, we present a supervised learning method to seek out answers to the most frequently asked descriptive questions: reason, method, and definition questions. Most of the previous systems for question answering focus on factoids, lists or definitional questions. However, descriptive questions such as reason questions and method questions are also frequently asked by users. We propose a system for these types of questions. The system conducts an answer search as follows. First, we analyze the user's question and extract search keywords and the expected answer type. Second, information retrieval results are obtained from an existing search engine such as Yahoo or Google. Finally, we rank the results to find snippets containing answers to the questions based on a ranking SVM algorithm. We also propose features to identify snippets containing answers for descriptive questions. The features are adaptable and thus are not dependent on answer type. Experimental results show that the proposed method and features are clearly effective for the task.

  5. Grayscale inhomogeneity correction method for multiple mosaicked electron microscope images

    NASA Astrophysics Data System (ADS)

    Zhou, Fangxu; Chen, Xi; Sun, Rong; Han, Hua

    2018-04-01

    Electron microscope image stitching is highly desired for acquiring microscopic-resolution images of large target scenes in neuroscience. However, the result of mosaicking multiple electron microscope images may exhibit severe grayscale inhomogeneity due to the instability of the electron microscope system and registration errors, which degrades the visual effect of the mosaicked EM images and aggravates the difficulty of follow-up treatment, such as automatic object recognition. Consequently, a grayscale correction method for multiple mosaicked electron microscope images is indispensable in these areas. Different from most previous grayscale correction methods, this paper designs a grayscale correction process for multiple EM images which tackles the difficulty of correcting grayscale across multiple images and achieves consistency of grayscale in the overlap regions. We adjust the overall grayscale of the mosaicked images with the location and grayscale information of manually selected seed images, and then fuse local overlap regions between adjacent images using Poisson image editing. Experimental results demonstrate the effectiveness of our proposed method.

  6. LOCALIZER: subcellular localization prediction of both plant and effector proteins in the plant cell

    PubMed Central

    Sperschneider, Jana; Catanzariti, Ann-Maree; DeBoer, Kathleen; Petre, Benjamin; Gardiner, Donald M.; Singh, Karam B.; Dodds, Peter N.; Taylor, Jennifer M.

    2017-01-01

    Pathogens secrete effector proteins and many operate inside plant cells to enable infection. Some effectors have been found to enter subcellular compartments by mimicking host targeting sequences. Although many computational methods exist to predict plant protein subcellular localization, they perform poorly for effectors. We introduce LOCALIZER for predicting plant and effector protein localization to chloroplasts, mitochondria, and nuclei. LOCALIZER shows greater prediction accuracy for chloroplast and mitochondrial targeting compared to other methods for 652 plant proteins. For 107 eukaryotic effectors, LOCALIZER outperforms other methods and predicts a previously unrecognized chloroplast transit peptide for the ToxA effector, which we show translocates into tobacco chloroplasts. Secretome-wide predictions and confocal microscopy reveal that rust fungi might have evolved multiple effectors that target chloroplasts or nuclei. LOCALIZER is the first method for predicting effector localisation in plants and is a valuable tool for prioritizing effector candidates for functional investigations. LOCALIZER is available at http://localizer.csiro.au/. PMID:28300209

  7. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data compared to a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in their spectral characteristics and in the shape domain of panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) in the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data where every pixel is considered a vector over the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when we apply the HEVC. Every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are reported for three types of HS dataset with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  8. Radiation Detection Computational Benchmark Scenarios

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaver, Mark W.; Casella, Andrew M.; Wittman, Richard S.

    2013-09-24

    Modeling forms an important component of radiation detection development, allowing for testing of new detector designs, evaluation of existing equipment against a wide variety of potential threat sources, and assessing operation performance of radiation detection systems. This can, however, result in large and complex scenarios which are time consuming to model. A variety of approaches to radiation transport modeling exist with complementary strengths and weaknesses for different problems. This variety of approaches, and the development of promising new tools (such as ORNL’s ADVANTG) which combine benefits of multiple approaches, illustrates the need for a means of evaluating or comparing different techniques for radiation detection problems. This report presents a set of 9 benchmark problems for comparing different types of radiation transport calculations, identifying appropriate tools for classes of problems, and testing and guiding the development of new methods. The benchmarks were drawn primarily from existing or previous calculations, with a preference for scenarios which include experimental data, or otherwise have results with a high level of confidence, are non-sensitive, and represent problem sets of interest to NA-22. From a technical perspective, the benchmarks were chosen to span a range of difficulty, to include gamma transport, neutron transport, or both, and to represent different important physical processes and a range of sensitivity to angular or energy fidelity. Following benchmark identification, existing information about geometry, measurements, and previous calculations was assembled. Monte Carlo results (MCNP decks) were reviewed or created and re-run in order to attain accurate computational times and to verify agreement with experimental data, when present. Benchmark information was then conveyed to ORNL in order to guide testing and development of hybrid calculations. The results of those ADVANTG calculations were then sent to PNNL for compilation. This report describes the details of the selected benchmarks and the results from the various transport codes.

  9. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2006-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reductions in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  10. Rapid Structured Volume Grid Smoothing and Adaption Technique

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.

    2004-01-01

    A rapid, structured volume grid smoothing and adaption technique, based on signal processing methods, was developed and applied to the Shuttle Orbiter at hypervelocity flight conditions in support of the Columbia Accident Investigation. Because of the fast pace of the investigation, computational aerothermodynamicists, applying hypersonic viscous flow solving computational fluid dynamic (CFD) codes, refined and enhanced a grid for an undamaged baseline vehicle to assess a variety of damage scenarios. Of the many methods available to modify a structured grid, most are time-consuming and require significant user interaction. By casting the grid data into different coordinate systems, specifically two computational coordinates with arclength as the third coordinate, signal processing methods are used for filtering the data [Taubin, CG v/29 1995]. Using a reverse transformation, the processed data are used to smooth the Cartesian coordinates of the structured grids. By coupling the signal processing method with existing grid operations within the Volume Grid Manipulator tool, problems related to grid smoothing are solved efficiently and with minimal user interaction. Examples of these smoothing operations are illustrated for reduction in grid stretching and volume grid adaptation. In each of these examples, other techniques existed at the time of the Columbia accident, but the incorporation of signal processing techniques reduced the time to perform the corrections by nearly 60%. This reduction in time to perform the corrections therefore enabled the assessment of approximately twice the number of damage scenarios than previously possible during the allocated investigation time.

  11. Validation of protein carbonyl measurement: A multi-centre study

    PubMed Central

    Augustyniak, Edyta; Adam, Aisha; Wojdyla, Katarzyna; Rogowska-Wrzesinska, Adelina; Willetts, Rachel; Korkmaz, Ayhan; Atalay, Mustafa; Weber, Daniela; Grune, Tilman; Borsa, Claudia; Gradinaru, Daniela; Chand Bollineni, Ravi; Fedorova, Maria; Griffiths, Helen R.

    2014-01-01

    Protein carbonyls are widely analysed as a measure of protein oxidation. Several different methods exist for their determination. A previous study had described orders of magnitude variance that existed when protein carbonyls were analysed in a single laboratory by ELISA using different commercial kits. We have further explored the potential causes of variance in carbonyl analysis in a ring study. A soluble protein fraction was prepared from rat liver and exposed to 0, 5 and 15 min of UV irradiation. Lyophilised preparations were distributed to six different laboratories that routinely undertook protein carbonyl analysis across Europe. ELISA and Western blotting techniques detected an increase in protein carbonyl formation between 0 and 5 min of UV irradiation irrespective of method used. After irradiation for 15 min, less oxidation was detected by half of the laboratories than after 5 min irradiation. Three of the four ELISA carbonyl results fell within 95% confidence intervals. Likely errors in calculating absolute carbonyl values may be attributed to differences in standardisation. Out of up to 88 proteins identified as containing carbonyl groups after tryptic cleavage of irradiated and control liver proteins, only seven were common in all three liver preparations. Lysine and arginine residues modified by carbonyls are likely to be resistant to tryptic proteolysis. Use of a cocktail of proteases may increase the recovery of oxidised peptides. In conclusion, standardisation is critical for carbonyl analysis and heavily oxidised proteins may not be effectively analysed by any existing technique. PMID:25560243

  12. Using Lin's method to solve Bykov's problems

    NASA Astrophysics Data System (ADS)

    Knobloch, Jürgen; Lamb, Jeroen S. W.; Webster, Kevin N.

    2014-10-01

    We consider nonwandering dynamics near heteroclinic cycles between two hyperbolic equilibria. The constituting heteroclinic connections are assumed to be such that one of them is transverse and isolated. Such heteroclinic cycles are associated with the termination of a branch of homoclinic solutions, and are called T-points in this context. We study codimension-two T-points and their unfoldings in R^n. In our consideration we distinguish between cases with real and complex leading eigenvalues of the equilibria. In doing so we establish Lin's method as a unified approach to (re)gain and extend results of Bykov's seminal studies and related works. To a large extent our approach reduces the study to the discussion of intersections of lines and spirals in the plane. Case (RR): under open conditions on the eigenvalues, there exist open sets in parameter space for which there exist periodic orbits close to the heteroclinic cycle; in addition, there exist two one-parameter families of homoclinic orbits to each of the saddle points p_1 and p_2 (see Theorem 2.1 and Proposition 2.2 for precise statements and Fig. 2 for bifurcation diagrams). Cases (RC) and (CC): at the bifurcation point μ = 0 and for each N ≥ 2, there exists an invariant set S_0^N close to the heteroclinic cycle on which the first return map is topologically conjugate to a full shift on N symbols; for any fixed N ≥ 2, the invariant set S_μ^N persists for |μ| sufficiently small. In addition, there exist infinitely many transversal and non-transversal heteroclinic orbits connecting the saddle points p_1 and p_2 in a neighbourhood of μ = 0, as well as infinitely many one-parameter families of homoclinic orbits to each of the saddle points (for full statements see Theorem 2.3, Propositions 2.4 and 2.5, and Fig. 3 for bifurcation diagrams). The dynamics near T-points has been studied previously by Bykov [6-10], Glendinning and Sparrow [20], Kokubu [27,28] and Labouriau and Rodrigues [30,31,38]; see also the surveys by Homburg and Sandstede [24], Shilnikov et al. [43] and Fiedler [18]. The occurrence of T-points in local bifurcations has been discussed by Barrientos et al. [4], and by Lamb et al. [32] in the context of reversible systems. All these studies consider dynamics in R^3 using a geometric return-map approach, and their results reflect the description of the types of nonwandering dynamics given above. Further related studies concerning T-points can be found in [34] and [37], where inclination flips were considered in this context. In [5], numerical studies of T-points are performed using kneading invariants. The main aim of this paper is to present a comprehensive study of dynamics near T-points, including detailed proofs of all results, employing a unified functional-analytic approach, without making any assumption on the dimension of the phase space. In the process, we recover and generalise to higher-dimensional settings all previously reported results for T-points in R^3. In addition, we reveal the existence of richer dynamics in the (RC) and (CC) cases. A detailed discussion of our results is contained in Section 2. The functional-analytic approach we follow is commonly referred to as Lin's method, after the seminal paper by Lin [33], and employs a reduction on an appropriate Banach space of piecewise continuous functions approximating the initial heteroclinic cycle to yield bifurcation equations whose solutions represent orbits of the nonwandering set. The development of such an approach is typical for the school of Hale, and is in contrast to the analysis contained in previous T-point studies, which relies on the construction of a first return map. Our choice of analytical framework is motivated by the fact that Lin's method provides a unified approach to study global bifurcations in arbitrary dimension, and has been shown to extend to a larger class of settings, such as delay and advance-delay equations [19,33].

  13. Easy and accurate variance estimation of the nonparametric estimator of the partial area under the ROC curve and its application.

    PubMed

    Yu, Jihnhee; Yang, Luge; Vexler, Albert; Hutson, Alan D

    2016-06-15

    The receiver operating characteristic (ROC) curve is a popular technique with applications, for example, in investigating the accuracy of a biomarker to delineate between disease and non-disease groups. A common measure of accuracy of a given diagnostic marker is the area under the ROC curve (AUC). In contrast with the AUC, the partial area under the ROC curve (pAUC) looks into the area with certain specificities (i.e., true negative rate) only, and it can often be clinically more relevant than examining the entire ROC curve. The pAUC is commonly estimated based on a U-statistic with the plug-in sample quantile, making the estimator a non-traditional U-statistic. In this article, we propose an accurate and easy method to obtain the variance of the nonparametric pAUC estimator. The proposed method is easy to implement for both a one-biomarker test and the comparison of two correlated biomarkers, because it simply adapts the existing variance estimator of U-statistics. We show the accuracy and other advantages of the proposed variance estimation method by broadly comparing it with previously existing methods. Further, we develop an empirical likelihood inference method based on the proposed variance estimator through a simple implementation. In an application, we demonstrate that, depending on whether inference is based on the AUC or the pAUC, we can reach different decisions about the prognostic ability of the same set of biomarkers. Copyright © 2016 John Wiley & Sons, Ltd.
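
    A minimal sketch of the non-traditional U-statistic mentioned above follows, assuming larger marker values indicate disease and ignoring ties; the variable names and the false-positive-rate convention are illustrative, not the authors'.

      import numpy as np

      def pauc_hat(cases, controls, fpr_max=0.1):
          """Nonparametric pAUC over false-positive rates in [0, fpr_max].

          U-statistic kernel I(case > control), restricted to controls above the
          plug-in (1 - fpr_max) sample quantile, i.e. the high-specificity region.
          Assumes enough controls that the restricted region is non-empty.
          """
          cases = np.asarray(cases, dtype=float)
          controls = np.asarray(controls, dtype=float)
          c = np.quantile(controls, 1.0 - fpr_max)   # plug-in sample quantile
          mask = controls > c
          # (1/(m*n)) * sum_i sum_{j in region} I(X_i > Y_j)
          wins = (cases[:, None] > controls[None, mask]).mean(axis=1) * mask.mean()
          return wins.mean()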

  14. Bridging Human Reliability Analysis and Psychology, Part 1: The Psychological Literature Review for the IDHEAS Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring

    In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing limitations of current HRA models, in particular issues related to intra- and inter-method variability in results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have looked at elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. However, this literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers and associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize it, and the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resultant cognitive framework.

  15. Development of a defect stream function, law of the wall/wake method for compressible turbulent boundary layers. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wahls, Richard A.

    1990-01-01

    The method presented is designed to improve the accuracy and computational efficiency of existing numerical methods for the solution of flows with compressible turbulent boundary layers. A compressible defect stream function formulation of the governing equations assuming an arbitrary turbulence model is derived. This formulation is advantageous because it has a constrained zero-order approximation with respect to the wall shear stress and the tangential momentum equation has a first integral. Previous problems with this type of formulation near the wall are eliminated by using empirically based analytic expressions to define the flow near the wall. The van Driest law of the wall for velocity and the modified Crocco temperature-velocity relationship are used. The associated compressible law of the wake is determined and it extends the valid range of the analytical expressions beyond the logarithmic region of the boundary layer. The need for an inner-region eddy viscosity model is completely avoided. The near-wall analytic expressions are patched to numerically computed outer region solutions at a point determined during the computation. A new boundary condition on the normal derivative of the tangential velocity at the surface is presented; this condition replaces the no-slip condition and enables numerical integration to the surface with a relatively coarse grid using only an outer region turbulence model. The method was evaluated for incompressible and compressible equilibrium flows and was implemented into an existing Navier-Stokes code using the assumption of local equilibrium flow with respect to the patching. The method has proven to be accurate and efficient.

  16. Efficient sampling of parsimonious inversion histories with application to genome rearrangement in Yersinia.

    PubMed

    Miklós, István; Darling, Aaron E

    2009-06-22

    Inversions are among the most common mutations acting on the order and orientation of genes in a genome, and polynomial-time algorithms exist to obtain a minimal length series of inversions that transform one genome arrangement to another. However, the minimum length series of inversions (the optimal sorting path) is often not unique as many such optimal sorting paths exist. If we assume that all optimal sorting paths are equally likely, then statistical inference on genome arrangement history must account for all such sorting paths and not just a single estimate. No deterministic polynomial algorithm is known to count the number of optimal sorting paths nor sample from the uniform distribution of optimal sorting paths. Here, we propose a stochastic method that uniformly samples the set of all optimal sorting paths. Our method uses a novel formulation of parallel Markov chain Monte Carlo. In practice, our method can quickly estimate the total number of optimal sorting paths. We introduce a variant of our approach in which short inversions are modeled to be more likely, and we show how the method can be used to estimate the distribution of inversion lengths and breakpoint usage in pathogenic Yersinia pestis. The proposed method has been implemented in a program called "MC4Inversion." We draw comparison of MC4Inversion to the sampler implemented in BADGER and a previously described importance sampling (IS) technique. We find that on high-divergence data sets, MC4Inversion finds more optimal sorting paths per second than BADGER and the IS technique and simultaneously avoids bias inherent in the IS technique.

  17. Efficient Planning of Wind-Optimal Routes in North Atlantic Oceanic Airspace

    NASA Technical Reports Server (NTRS)

    Rodionova, Olga; Sridhar, Banavar

    2017-01-01

    The North Atlantic oceanic airspace (NAT) is crossed daily by more than a thousand flights, which are greatly affected by strong jet stream air currents. Several studies devoted to generating wind-optimal (WO) aircraft trajectories in the NAT demonstrated great efficiency of such an approach for individual flights. However, because of the large separation norms imposed in the NAT, previously proposed WO trajectories induce a large number of potential conflicts. Much work has been done on strategic conflict detection and resolution (CDR) in the NAT. The work presented here extends previous methods and attempts to take advantage of the NAT traffic structure to simplify the problem and improve the results of CDR. Four approaches are studied in this work: 1) subdividing the existing CDR problem into sub-problems of smaller sizes, which are easier to handle; 2) more efficient data reorganization within the considered time period; 3) problem localization, i.e. concentrating the resolution effort in the most conflicted regions; 4) applying CDR to the pre-tactical decision horizon (a couple of hours in advance). Obtained results show that these methods efficiently resolve potential conflicts at the strategic and pre-tactical levels by keeping the resulting trajectories close to the initial WO ones.

  18. 10Be dating of late-glacial moraines near the Cordillera Vilcanota and the Quelccaya Ice Cap, Peru

    NASA Astrophysics Data System (ADS)

    Kelly, M. A.; Thompson, L. G.

    2004-12-01

    The surface exposure method, based on the measurement of cosmogenic 10Be produced in quartz, is applied to determine the age of deposition of glacial moraines near the Cordillera Vilcanota and the Quelccaya Ice Cap (about 13° S, 70° W) in southeastern Peru. These data are useful for examining the timing of past glaciation in the tropical Andes and for comparison with chronologies of glaciation at higher latitudes. The preliminary data set consists of more than ten surface exposure ages. Samples used for dating are from the surfaces of boulders on a set of prominent moraines about four kilometers away from the present ice margins. The age of the moraine set was previously bracketed by radiocarbon dating of peat associated with the glacial deposits. Based on radiocarbon ages, these moraines were formed during the late-glacial period, just prior to the last glacial-interglacial transition. The surface exposure dating method enables the direct dating of the moraines. Surface exposure dates are cross-checked with the previously existing radiocarbon dates and provide a means to improve the chronology of past glaciation in the tropical Andes.

  19. Modification of the Integrated Sasang Constitutional Diagnostic Model

    PubMed Central

    Nam, Jiho

    2017-01-01

    In 2012, the Korea Institute of Oriental Medicine proposed an objective and comprehensive physical diagnostic model to address quantification problems in the existing Sasang constitutional diagnostic method. However, certain issues have been raised regarding a revision of the proposed diagnostic model. In this paper, we propose various methodological approaches to address the problems of the previous diagnostic model. Firstly, more useful variables are selected in each component. Secondly, the least absolute shrinkage and selection operator is used to reduce multicollinearity without the modification of explanatory variables. Thirdly, proportions of SC types and age are considered to construct individual diagnostic models and classify the training set and the test set for reflecting the characteristics of the entire dataset. Finally, an integrated model is constructed with explanatory variables of individual diagnosis models. The proposed integrated diagnostic model significantly improves the sensitivities for both the male SY type (36.4% → 62.0%) and the female SE type (43.7% → 64.5%), which were areas of limitation of the previous integrated diagnostic model. The ideas of these new algorithms are expected to contribute not only to the scientific development of Sasang constitutional medicine in Korea but also to that of other diagnostic methods for traditional medicine. PMID:29317897

  20. Critical speeds and forced response solutions for active magnetic bearing turbomachinery, part 2

    NASA Technical Reports Server (NTRS)

    Rawal, D.; Keesee, J.; Kirk, R. Gordon

    1991-01-01

    The need for better performance of turbomachinery with active magnetic bearings has necessitated a study of such systems for accurate prediction of their vibrational characteristics. A modification of existing transfer matrix methods for rotor analysis is presented to predict the response of rotor systems with active magnetic bearings. The position of the magnetic bearing sensors is taken into account and the effect of changing sensor position on the vibrational characteristics of the rotor system is studied. The modified algorithm is validated using a simpler Jeffcott model described previously. The effect of changing from a rotating unbalance excitation to a constant excitation in a single plane is also studied. A typical eight stage centrifugal compressor rotor is analyzed using the modified transfer matrix code. The results for a two mass Jeffcott model were presented previously. The results obtained by running this model with the transfer matrix method were compared with the results of the Jeffcott analysis for the purposes of verification. Also included are plots of amplitude versus frequency for the eight stage centrifugal compressor rotor. These plots demonstrate the significant influence that sensor location has on the amplitude and critical frequencies of the rotor system.

  1. Improved cryoEM-Guided Iterative Molecular Dynamics–Rosetta Protein Structure Refinement Protocol for High Precision Protein Structure Prediction

    PubMed Central

    2016-01-01

    Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538

  2. Integrating Information in Biological Ontologies and Molecular Networks to Infer Novel Terms

    PubMed Central

    Li, Le; Yip, Kevin Y.

    2016-01-01

    Currently most terms and term-term relationships in Gene Ontology (GO) are defined manually, which creates cost, consistency and completeness issues. Recent studies have demonstrated the feasibility of inferring GO automatically from biological networks, which represents an important complementary approach to GO construction. These methods (NeXO and CliXO) are unsupervised, which means 1) they cannot use the information contained in existing GO, 2) the way they integrate biological networks may not optimize the accuracy, and 3) they are not customized to infer the three different sub-ontologies of GO. Here we present a semi-supervised method called Unicorn that extends these previous methods to tackle the three problems. Unicorn uses a sub-tree of an existing GO sub-ontology as a training part to learn parameters in integrating multiple networks. Cross-validation results show that Unicorn reliably inferred the left-out parts of each specific GO sub-ontology. In addition, by training Unicorn with an old version of GO together with biological networks, it successfully re-discovered some terms and term-term relationships present only in a new version of GO. Unicorn also successfully inferred some novel terms that were not contained in GO but have biological meanings well-supported by the literature. Availability: Source code of Unicorn is available at http://yiplab.cse.cuhk.edu.hk/unicorn/. PMID:27976738

  3. Electronic characterization of lithographically patterned microcoils for high sensitivity NMR detection.

    PubMed

    Demas, Vasiliki; Bernhardt, Anthony; Malba, Vince; Adams, Kristl L; Evans, Lee; Harvey, Christopher; Maxwell, Robert S; Herberg, Julie L

    2009-09-01

    Nuclear magnetic resonance (NMR) offers a non-destructive, powerful, structure-specific analytical method for the identification of chemical and biological systems. The use of radio frequency (RF) microcoils has been shown to increase the sensitivity in mass-limited samples. Recent advances in micro-receiver technology have further demonstrated a substantial increase in mass sensitivity [D.L. Olson, T.L. Peck, A.G. Webb, R.L. Magin, J.V. Sweedler, High-resolution microcoil 1H-NMR for mass-limited, nanoliter-volume samples, Science 270 (5244) (1995) 1967-1970]. Lithographic methods for producing solenoid microcoils possess a level of flexibility and reproducibility that exceeds previous production methods, such as hand-winding microcoils. This paper presents electrical characterizations of RF microcoils produced by a unique laser lithography system that can pattern three-dimensional surfaces, and compares calculated and experimental results to those for wire-wound RF microcoils. We show that existing optimization conditions for RF coil design still hold true for RF microcoils produced by lithography. Current lithographic microcoils show somewhat inferior performance to wire-wound RF microcoils due to limitations in the existing electroplating technique. In principle, however, when the pitch of the RF microcoil is less than 100 μm, lithographic coils should show comparable performance to wire-wound coils. In the case of larger pitch, wire cross sections can be significantly larger and resistances lower than those of microfabricated conductors.

  4. An auxiliary optimization method for complex public transit route network based on link prediction

    NASA Astrophysics Data System (ADS)

    Zhang, Lin; Lu, Jian; Yue, Xianfei; Zhou, Jialin; Li, Yunxuan; Wan, Qian

    2018-02-01

    Inspired by the missing (new) link prediction and the spurious existing link identification in link prediction theory, this paper establishes an auxiliary optimization method for public transit route network (PTRN) based on link prediction. First, link prediction applied to PTRN is described, and based on reviewing the previous studies, the summary indices set and its algorithms set are collected for the link prediction experiment. Second, through analyzing the topological properties of Jinan’s PTRN established by the Space R method, we found that this is a typical small-world network with a relatively large average clustering coefficient. This phenomenon indicates that the structural similarity-based link prediction will show a good performance in this network. Then, based on the link prediction experiment of the summary indices set, three indices with maximum accuracy are selected for auxiliary optimization of Jinan’s PTRN. Furthermore, these link prediction results show that the overall layout of Jinan’s PTRN is stable and orderly, except for a partial area that requires optimization and reconstruction. The above pattern conforms to the general pattern of the optimal development stage of PTRN in China. Finally, based on the missing (new) link prediction and the spurious existing link identification, we propose optimization schemes that can be used not only to optimize current PTRN but also to evaluate PTRN planning.
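
    As a hedged illustration of structural similarity-based link prediction on such a network, the sketch below scores candidate links with three standard indices via networkx; these are common choices from the link prediction literature, not necessarily the three indices the study selected.

      import networkx as nx

      def top_predicted_links(G, index="resource_allocation", k=10):
          """Return the k highest-scoring non-adjacent node pairs in G.

          G: undirected graph, e.g. a Space-R transit network in which stops
          sharing a route are connected.
          """
          scorers = {
              "resource_allocation": nx.resource_allocation_index,
              "jaccard": nx.jaccard_coefficient,
              "adamic_adar": nx.adamic_adar_index,
          }
          scored = scorers[index](G)            # iterates over all non-edges
          return sorted(scored, key=lambda t: t[2], reverse=True)[:k]

    High-scoring non-edges suggest missing (new) links worth adding, while existing edges whose endpoints score poorly under the same indices are candidates for spurious-link review.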

  5. Dementia ascertainment using existing data in UK longitudinal and cohort studies: a systematic review of methodology.

    PubMed

    Sibbett, Ruth A; Russ, Tom C; Deary, Ian J; Starr, John M

    2017-07-03

    Studies investigating the risk factors for or causation of dementia must consider subjects prior to disease onset. To overcome the limitations of prospective studies and self-reported recall of information, the use of existing data is key. This review provides a narrative account of dementia ascertainment methods using sources of existing data. The literature search was performed using: MEDLINE, EMBASE, PsychInfo and Web of Science. Included articles reported a UK-based study of dementia in which cases were ascertained using existing data. Existing data included that which was routinely collected and that which was collected for previous research. After removing duplicates, abstracts were screened and the remaining articles were included for full-text review. A quality tool was used to evaluate the description of the ascertainment methodology. Of the 3545 abstracts screened, 360 articles were selected for full-text review. 47 articles were included for final consideration. Data sources for ascertainment included: death records, national datasets, research databases and hospital records among others. 36 articles used existing data alone for ascertainment, of which 27 used only a single data source. The most frequently used source was a research database. Quality scores ranged from 7/16 to 16/16. Quality scores were better for articles with dementia ascertainment as an outcome. Some papers performed validation studies of dementia ascertainment and most indicated that observed rates of dementia were lower than expected. We identified a lack of consistency in dementia ascertainment methodology using existing data. With no data source identified as a "gold-standard", we suggest the use of multiple sources. Where possible, studies should access records with evidence to confirm the diagnosis. Studies should also calculate the dementia ascertainment rate for the population being studied to enable a comparison with an expected rate.

  6. Previous Violent Events and Mental Health Outcomes in Guatemala

    PubMed Central

    Puac-Polanco, Victor D.; Lopez-Soto, Victor A.; Kohn, Robert; Xie, Dawei; Richmond, Therese S.

    2015-01-01

    Objectives. We analyzed a probability sample of Guatemalans to determine if a relationship exists between previous violent events and development of mental health outcomes in various sociodemographic groups, as well as during and after the Guatemalan Civil War. Methods. We used regression modeling, an interaction test, and complex survey design adjustments to estimate prevalences and test potential relationships between previous violent events and mental health. Results. Many (20.6%) participants experienced at least 1 previous serious violent event. Witnessing someone severely injured or killed was the most common event. Depression was experienced by 4.2% of participants, with 6.5% experiencing anxiety, 6.4% an alcohol-related disorder, and 1.9% posttraumatic stress disorder (PTSD). Persons who experienced violence during the war had 4.3 times the adjusted odds of alcohol-related disorders (P < .05) and 4.0 times the adjusted odds of PTSD (P < .05) compared with the postwar period. Women, indigenous Maya, and urban dwellers had greater odds of experiencing postviolence mental health outcomes. Conclusions. Violence that began during the civil war and continues today has had a significant effect on the mental health of Guatemalans. However, mental health outcomes resulting from violent events decreased in the postwar period, suggesting a nation in recovery. PMID:25713973

  7. 12 CFR 325.5 - Miscellaneous.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... assets arising from deductible temporary differences that exceed the amount of taxes previously paid that could be recovered through loss carrybacks if existing temporary differences (both deductible and.... (ii) For purposes of this limitation, all existing temporary differences should be assumed to fully...

  8. 12 CFR 325.5 - Miscellaneous.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... assets arising from deductible temporary differences that exceed the amount of taxes previously paid that could be recovered through loss carrybacks if existing temporary differences (both deductible and.... (ii) For purposes of this limitation, all existing temporary differences should be assumed to fully...

  9. Using DDGS in industrial materials

    USDA-ARS?s Scientific Manuscript database

    Adding biological materials as fillers to plastics can enhance any existing biodegradability or provide biodegradability where none had previously existed. One potential biofiller is DDGS. In fact, several studies have been conducted recently that have investigated the use of DDGS in various plast...

  10. Developing and validating a nutrition knowledge questionnaire: key methods and considerations.

    PubMed

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2017-10-01

    To outline key statistical considerations and detailed methodologies for the development and evaluation of a valid and reliable nutrition knowledge questionnaire. Literature on questionnaire development in a range of fields was reviewed and a set of evidence-based guidelines specific to the creation of a nutrition knowledge questionnaire have been developed. The recommendations describe key qualitative methods and statistical considerations, and include relevant examples from previous papers and existing nutrition knowledge questionnaires. Where details have been omitted for the sake of brevity, the reader has been directed to suitable references. We recommend an eight-step methodology for nutrition knowledge questionnaire development as follows: (i) definition of the construct and development of a test plan; (ii) generation of the item pool; (iii) choice of the scoring system and response format; (iv) assessment of content validity; (v) assessment of face validity; (vi) purification of the scale using item analysis, including item characteristics, difficulty and discrimination; (vii) evaluation of the scale including its factor structure and internal reliability, or Rasch analysis, including assessment of dimensionality and internal reliability; and (viii) gathering of data to re-examine the questionnaire's properties, assess temporal stability and confirm construct validity. Several of these methods have previously been overlooked. The measurement of nutrition knowledge is an important consideration for individuals working in the nutrition field. Improved methods in the development of nutrition knowledge questionnaires, such as the use of factor analysis or Rasch analysis, will enable more confidence in reported measures of nutrition knowledge.
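
    As a concrete illustration of step (vi), item difficulty (the proportion answering correctly) and discrimination (the corrected item-total correlation) can be computed directly from a matrix of scored responses. A minimal Python sketch, assuming dichotomous 0/1 scoring; the response matrix is a placeholder, not data from any questionnaire:

        import numpy as np

        # responses: one row per respondent, one column per item, scored 0/1
        responses = np.array([[1, 0, 1, 1],
                              [1, 1, 0, 1],
                              [0, 0, 0, 1],
                              [1, 1, 1, 0]])

        difficulty = responses.mean(axis=0)  # proportion answering each item correctly

        # corrected item-total (point-biserial) discrimination:
        # correlate each item with the total score of the remaining items
        totals = responses.sum(axis=1)
        discrimination = np.array([
            np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
            for j in range(responses.shape[1])
        ])
        print(difficulty, discrimination)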

  11. Testing the existence of optical linear polarization in young brown dwarfs

    NASA Astrophysics Data System (ADS)

    Manjavacas, E.; Miles-Páez, P. A.; Zapatero-Osorio, M. R.; Goldman, B.; Buenzli, E.; Henning, T.; Pallé, E.; Fang, M.

    2017-07-01

    Linear polarization can be used as a probe of the existence of atmospheric condensates in ultracool dwarfs. Models predict that the observed linear polarization increases with the degree of oblateness, which is inversely proportional to the surface gravity. We aimed to test the existence of optical linear polarization in a sample of bright young brown dwarfs, with spectral types between M6 and L2, observable from the Calar Alto Observatory, and previously catalogued as low-gravity objects using spectroscopy. Linear polarimetric images were collected in the I and R bands using CAFOS at the 2.2-m telescope of the Calar Alto Observatory (Spain). The flux ratio method was employed to determine the linear polarization degrees. With a confidence of 3σ, our data indicate that all targets have an average linear polarization degree below 0.69 per cent in the I band, and below 1.0 per cent in the R band, at the time they were observed. We detected significant (i.e., P/σ ≥ 3) linear polarization for the young M6 dwarf 2MASS J04221413+1530525 in the R band, with a degree of p* = 0.81 ± 0.17 per cent.
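
    For reference, the dual-beam flux ratio method combines ordinary (o) and extraordinary (e) beam fluxes measured at two retarder positions into a normalized Stokes parameter. A minimal Python sketch of one common form of the estimator; the retarder angles, flux values and function name are illustrative assumptions, not values from the paper:

        import numpy as np

        def stokes_from_flux_ratio(fo_a, fe_a, fo_b, fe_b):
            # normalized Stokes parameter from o/e beam fluxes at two
            # half-wave-plate positions (dual-beam flux ratio method)
            R2 = (fo_a / fe_a) / (fo_b / fe_b)
            R = np.sqrt(R2)
            return (R - 1.0) / (R + 1.0)

        q = stokes_from_flux_ratio(1020.0, 1000.0, 998.0, 1005.0)   # e.g. 0 and 45 deg
        u = stokes_from_flux_ratio(1003.0, 1001.0, 1011.0, 996.0)   # e.g. 22.5 and 67.5 deg
        p = np.hypot(q, u)            # linear polarization degree
        print(f"p = {100 * p:.2f} per cent")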

  12. Assessment of Completeness and Positional Accuracy of Linear Features in Volunteered Geographic Information (vgi)

    NASA Astrophysics Data System (ADS)

    Eshghi, M.; Alesheikh, A. A.

    2015-12-01

    Recent advances in spatial data collection technologies and online services have dramatically increased the contribution of ordinary people to producing, sharing, and using geographic information. The collection of spatial data by citizens, and its dissemination on the internet, has led to a huge source of spatial data termed Volunteered Geographic Information (VGI) by Mike Goodchild. Although VGI has produced previously unavailable data assets and enriched existing ones, its quality can be highly variable and open to challenge. This presents several challenges to potential end users who are concerned about the validation and quality assurance of the collected data. Almost all existing research assesses the accuracy of VGI data either by (a) comparing the VGI data with accurate official data, or (b) where no access to correct data exists, seeking an alternative way to determine the quality of the VGI data. In this paper we attempt to develop a useful method to reach this goal. In this process, the completeness and positional accuracy of linear features in OpenStreetMap (OSM) data for Tehran, Iran, are analysed.

  13. Natural image sequences constrain dynamic receptive fields and imply a sparse code.

    PubMed

    Häusler, Chris; Susemihl, Alex; Nawrot, Martin P

    2013-11-06

    In their natural environment, animals experience a complex and dynamic visual scenery. Under such natural stimulus conditions, neurons in the visual cortex employ a spatially and temporally sparse code. For the input scenario of natural still images, previous work demonstrated that unsupervised feature learning combined with the constraint of sparse coding can predict physiologically measured receptive fields of simple cells in the primary visual cortex. This convincingly indicated that the mammalian visual system is adapted to the natural spatial input statistics. Here, we extend this approach to the time domain in order to predict dynamic receptive fields that can account for both spatial and temporal sparse activation in biological neurons. We rely on temporal restricted Boltzmann machines and suggest a novel temporal autoencoding training procedure. When tested on a dynamic multi-variate benchmark dataset this method outperformed existing models of this class. Learning features on a large dataset of natural movies allowed us to model spatio-temporal receptive fields for single neurons. They resemble temporally smooth transformations of previously obtained static receptive fields and are thus consistent with existing theories. A neuronal spike response model demonstrates how the dynamic receptive field facilitates temporal and population sparseness. We discuss the potential mechanisms and benefits of a spatially and temporally sparse representation of natural visual input. Copyright © 2013 The Authors. Published by Elsevier B.V. All rights reserved.

  14. Evolutions of fluctuation modes and inner structures of global stock markets

    NASA Astrophysics Data System (ADS)

    Yan, Yan; Wang, Lei; Liu, Maoxin; Chen, Xiaosong

    2016-09-01

    The paper uses empirical data, comprising 42 of the main global stock indices over the period 1996-2014, to systematically study the evolution of fluctuation modes and inner structures of global stock markets. The data are large in scale in both time and space. A covariance matrix-based principal fluctuation mode analysis (PFMA) is used to explore the properties of the global stock markets. Previous studies have overlooked that the covariance matrix is more suitable than the correlation matrix as the basis of PFMA. It is found that the principal fluctuation modes of global stock markets are in the same directions, and that global stock markets are divided into three clusters, which are found to be closely related to the countries’ locations, with the exceptions of China, Russia and the Czech Republic. A time-stable correlation network construction method is proposed to solve the problem of high statistical uncertainty when the estimation periods are very short, and the complex dynamic network (CDN) is constructed to investigate the evolution of inner structures. The results show when the clusters emerge and how long they persist. When the 2008 financial crisis broke out, the indices formed one cluster. After these crises, only the European cluster still existed. These findings complement previous studies and can help investors and regulators to understand the global stock markets.
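
    The core computation of a covariance-based PFMA is an eigen-decomposition of the covariance matrix of index returns, with the leading eigenvector giving the dominant co-movement mode. A minimal Python sketch with synthetic returns standing in for the 42 indices (the data and factor structure are assumptions):

        import numpy as np

        rng = np.random.default_rng(0)
        # synthetic daily returns: 1000 days x 42 indices with a common factor
        common = rng.normal(size=(1000, 1))
        returns = 0.7 * common + 0.3 * rng.normal(size=(1000, 42))

        C = np.cov(returns, rowvar=False)       # covariance (not correlation) matrix
        eigvals, eigvecs = np.linalg.eigh(C)    # eigenvalues in ascending order
        mode1 = eigvecs[:, -1]                  # principal fluctuation mode

        # all components sharing one sign indicates a market-wide mode
        print(np.all(mode1 > 0) or np.all(mode1 < 0))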

  15. Usability-driven pruning of large ontologies: the case of SNOMED CT

    PubMed Central

    Boeker, Martin; Illarramendi, Arantza; Schulz, Stefan

    2012-01-01

    Objectives To study ontology modularization techniques when applied to SNOMED CT in a scenario in which no previous corpus of information exists and to examine if frequency-based filtering using MEDLINE can reduce subset size without discarding relevant concepts. Materials and Methods Subsets were first extracted using four graph-traversal heuristics and one logic-based technique, and were subsequently filtered with frequency information from MEDLINE. Twenty manually coded discharge summaries from cardiology patients were used as signatures and test sets. The coverage, size, and precision of extracted subsets were measured. Results Graph-traversal heuristics provided high coverage (71–96% of terms in the test sets of discharge summaries) at the expense of subset size (17–51% of the size of SNOMED CT). Pre-computed subsets and logic-based techniques extracted small subsets (1%), but coverage was limited (24–55%). Filtering reduced the size of large subsets to 10% while still providing 80% coverage. Discussion Extracting subsets to annotate discharge summaries is challenging when no previous corpus exists. Ontology modularization provides valuable techniques, but the resulting modules grow as signatures spread across subhierarchies, yielding a very low precision. Conclusion Graph-traversal strategies and frequency data from an authoritative source can prune large biomedical ontologies and produce useful subsets that still exhibit acceptable coverage. However, a clinical corpus closer to the specific use case is preferred when available. PMID:22268217
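
    The graph-traversal extraction heuristics compared here can be sketched, in their simplest form, as an upward closure over is-a links: start from the signature concepts and repeatedly add parents until a fixed point is reached. A minimal Python sketch on a toy hierarchy; the dictionary encoding and concept names are illustrative, not SNOMED CT content:

        def upward_closure(signature, parents):
            # smallest subset containing the signature and closed under is-a
            subset, frontier = set(signature), list(signature)
            while frontier:
                concept = frontier.pop()
                for parent in parents.get(concept, ()):
                    if parent not in subset:
                        subset.add(parent)
                        frontier.append(parent)
            return subset

        parents = {"myocardial infarction": ["heart disease"],
                   "heart disease": ["disorder"],
                   "chest pain": ["finding"]}
        print(upward_closure({"myocardial infarction", "chest pain"}, parents))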

  16. Using component technologies for web based wavelet enhanced mammographic image visualization.

    PubMed

    Sakellaropoulos, P; Costaridou, L; Panayiotakis, G

    2000-01-01

    The poor contrast detectability of mammography can be dealt with by domain specific software visualization tools. Remote desktop client access and time performance limitations of a previously reported visualization tool are addressed, aiming at more efficient visualization of mammographic image resources existing in web or PACS image servers. This effort is also motivated by the fact that at present, web browsers do not support domain-specific medical image visualization. To deal with desktop client access the tool was redesigned by exploring component technologies, enabling the integration of stand alone domain specific mammographic image functionality in a web browsing environment (web adaptation). The integration method is based on ActiveX Document Server technology. ActiveX Document is a part of Object Linking and Embedding (OLE) extensible systems object technology, offering new services in existing applications. The standard DICOM 3.0 part 10 compatible image-format specification Papyrus 3.0 is supported, in addition to standard digitization formats such as TIFF. The visualization functionality of the tool has been enhanced by including a fast wavelet transform implementation, which allows for real time wavelet based contrast enhancement and denoising operations. Initial use of the tool with mammograms of various breast structures demonstrated its potential in improving visualization of diagnostic mammographic features. Web adaptation and real time wavelet processing enhance the potential of the previously reported tool in remote diagnosis and education in mammography.
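
    The decompose-modify-reconstruct cycle behind the wavelet-based denoising described above can be sketched with the PyWavelets package (an assumption made for illustration; the original tool used its own fast wavelet transform implementation). Contrast enhancement would amplify selected detail levels instead of shrinking them:

        import numpy as np
        import pywt

        image = np.random.rand(256, 256)        # stand-in for a digitized mammogram

        # decompose, shrink detail coefficients (denoise), reconstruct
        coeffs = pywt.wavedec2(image, "db4", level=3)
        approx, details = coeffs[0], coeffs[1:]
        details = [tuple(pywt.threshold(d, value=0.05, mode="soft") for d in lvl)
                   for lvl in details]
        denoised = pywt.waverec2([approx] + details, "db4")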

  17. Existence of equilibria in articulated bearings

    NASA Astrophysics Data System (ADS)

    Buscaglia, G.; Ciuperca, I.; Hafidi, I.; Jai, M.

    2007-04-01

    The existence of equilibrium solutions for a lubricated system consisting of an articulated body sliding over a flat plate is considered. Though this configuration is very common (it corresponds to the popular tilting-pad thrust bearings), the existence problem has only been addressed in extremely simplified cases, such as planar sliders of infinite width. Our results show the existence of at least one equilibrium for a quite general class of (nonplanar) slider shapes. We also extend previous results concerning planar sliders.

  18. Scoring Methods for Building Genotypic Scores: An Application to Didanosine Resistance in a Large Derivation Set

    PubMed Central

    Houssaini, Allal; Assoumou, Lambert; Miller, Veronica; Calvez, Vincent; Marcelin, Anne-Geneviève; Flandre, Philippe

    2013-01-01

    Background Several attempts have been made to determine HIV-1 resistance from genotype resistance testing. We compare scoring methods for building weighted genotyping scores and commonly used systems to determine whether the virus of a HIV-infected patient is resistant. Methods and Principal Findings Three statistical methods (linear discriminant analysis, support vector machine and logistic regression) are used to determine the weight of mutations involved in HIV resistance. We compared these weighted scores with known interpretation systems (ANRS, REGA and Stanford HIV-db) to classify patients as resistant or not. Our methodology is illustrated on the Forum for Collaborative HIV Research didanosine database (N = 1453). The database was divided into four samples according to the country of enrolment (France, USA/Canada, Italy and Spain/UK/Switzerland). The total sample and the four country-based samples allow external validation (one sample is used to estimate a score and the other samples are used to validate it). We used the observed precision to compare the performance of newly derived scores with other interpretation systems. Our results show that newly derived scores performed better than or similar to existing interpretation systems, even with external validation sets. No difference was found between the three methods investigated. Our analysis identified four new mutations associated with didanosine resistance: D123S, Q207K, H208Y and K223Q. Conclusions We explored the potential of three statistical methods to construct weighted scores for didanosine resistance. Our proposed scores performed at least as well as already existing interpretation systems and previously unrecognized didanosine-resistance associated mutations were identified. This approach could be used for building scores of genotypic resistance to other antiretroviral drugs. PMID:23555613
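
    Of the three statistical methods compared, logistic regression is the most direct to illustrate: fit resistance status against a binary mutation-presence matrix and read the score weights off the fitted coefficients. A minimal scikit-learn sketch on synthetic data; the labels, effect sizes and sample size are assumptions, with only the mutation names taken from the abstract:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(1)
        mutations = ["D123S", "Q207K", "H208Y", "K223Q"]
        X = rng.integers(0, 2, size=(500, 4))      # presence/absence of each mutation
        logit = -1.0 + X @ np.array([1.2, 0.8, 0.5, 0.9])
        y = rng.random(500) < 1 / (1 + np.exp(-logit))   # synthetic resistance labels

        model = LogisticRegression().fit(X, y)
        score_weights = dict(zip(mutations, model.coef_[0]))
        print(score_weights)   # weighted genotypic score = sum of weights present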

  19. Do Current Recommendations for Upper Instrumented Vertebra Predict Shoulder Imbalance? An Attempted Validation of Level Selection for Adolescent Idiopathic Scoliosis.

    PubMed

    Bjerke, Benjamin T; Cheung, Zoe B; Shifflett, Grant D; Iyer, Sravisht; Derman, Peter B; Cunningham, Matthew E

    2015-10-01

    Shoulder balance in adolescent idiopathic scoliosis (AIS) patients is associated with patient satisfaction and self-image. However, few validated systems exist for selecting the upper instrumented vertebra (UIV) to achieve post-surgical shoulder balance. The purpose of this study is to examine the existing UIV selection criteria and correlate them with post-surgical shoulder balance in AIS patients. Patients who underwent spinal fusion at age 10-18 years for AIS over a 6-year period were reviewed. All patients with a minimum of 1 year of radiographic follow-up were included. Imbalance was defined as a radiographic shoulder height |RSH| ≥ 15 mm at latest follow-up. Three UIV selection methods were considered: Lenke, Ilharreborde, and Trobisch. A recommended UIV was determined using each method from pre-surgical radiographs. The recommended UIV for each method was compared to the actual UIV instrumented; concordance between these levels was defined as "Correct" UIV selection, and discordance as "Incorrect" selection. One hundred seventy-one patients were included with 2.3 ± 1.1 years of follow-up. For all methods, "Correct" UIV selection resulted in more shoulder imbalance than "Incorrect" UIV selection. Overall shoulder imbalance incidence improved from 31.0% (53/171) to 15.2% (26/171). The incidence of new shoulder imbalance for patients with previously level shoulders was 8.8%. We could not identify a set of UIV selection criteria that accurately predicted post-surgical shoulder balance. Further validated measures are needed in this area. The complexity of proximal thoracic curve correction is underscored by a case example in which shoulder imbalance occurred despite "Correct" UIV selection by all methods.

  20. What methods are used to apply positive deviance within healthcare organisations? A systematic review

    PubMed Central

    Baxter, Ruth; Taylor, Natalie; Kellar, Ian; Lawton, Rebecca

    2016-01-01

    Background The positive deviance approach focuses on those who demonstrate exceptional performance, despite facing the same constraints as others. ‘Positive deviants’ are identified and hypotheses about how they succeed are generated. These hypotheses are tested and then disseminated within the wider community. The positive deviance approach is being increasingly applied within healthcare organisations, although limited guidance exists and different methods, of varying quality, are used. This paper systematically reviews healthcare applications of the positive deviance approach to explore how positive deviance is defined, the quality of existing applications and the methods used within them, including the extent to which staff and patients are involved. Methods Peer-reviewed articles, published prior to September 2014, reporting empirical research on the use of the positive deviance approach within healthcare, were identified from seven electronic databases. A previously defined four-stage process for positive deviance in healthcare was used as the basis for data extraction. Quality assessments were conducted using a validated tool, and a narrative synthesis approach was followed. Results 37 of 818 articles met the inclusion criteria. The positive deviance approach was most frequently applied within North America, in secondary care, and to address healthcare-associated infections. Research predominantly identified positive deviants and generated hypotheses about how they succeeded. The approach and processes followed were poorly defined. Research quality was low, articles lacked detail and comparison groups were rarely included. Applications of positive deviance typically lacked staff and/or patient involvement, and the methods used often required extensive resources. Conclusion Further research is required to develop high quality yet practical methods which involve staff and patients in all stages of the positive deviance approach. The efficacy and efficiency of positive deviance must be assessed and compared with other quality improvement approaches. PROSPERO registration number CRD42014009365. PMID:26590198

  1. Measurement of airborne ultrasonic slow waves in calcaneal cancellous bone.

    PubMed

    Strelitzki, R; Paech, V; Nicholson, P H

    1999-05-01

    Measurements of an airborne ultrasonic wave were made in defatted cancellous bone from the human calcaneus using standard ultrasonic equipment. The wave propagating under these conditions was consistent with a decoupled Biot slow wave travelling in the air alone, as previously reported in gas-saturated foams. Reproducible measurements of phase velocity and attenuation coefficient were possible, and an estimate of the tortuosity of the trabecular framework was derived from the high frequency limit of the phase velocity. Thus the method offers a new approach to the acoustic characterisation of bone in vitro which, in contrast to existing techniques, has the potential to yield information directly characterising the trabecular structure.

  2. Interfacing External Quantum Devices to a Universal Quantum Computer

    PubMed Central

    Lagana, Antonio A.; Lohe, Max A.; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. PMID:22216276

  3. On the exterior Dirichlet problem for Hessian quotient equations

    NASA Astrophysics Data System (ADS)

    Li, Dongsheng; Li, Zhisu

    2018-06-01

    In this paper, we establish the existence and uniqueness theorem for solutions of the exterior Dirichlet problem for Hessian quotient equations with prescribed asymptotic behavior at infinity. This extends the previous related results on the Monge-Ampère equations and on the Hessian equations, and rearranges them in a systematic way. Based on the Perron's method, the main ingredient of this paper is to construct some appropriate subsolutions of the Hessian quotient equation, which is realized by introducing some new quantities about the elementary symmetric polynomials and using them to analyze the corresponding ordinary differential equation related to the generalized radially symmetric subsolutions of the original equation.

  4. Automated Measurement and Verification and Innovative Occupancy Detection Technologies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Phillip; Bruce, Nordman; Piette, Mary Ann

    In support of DOE’s sensors and controls research, the goal of this project is to move toward integrated building-to-grid systems by building on previous work to develop and demonstrate a set of load characterization measurement and evaluation tools that are envisioned to be part of a suite of applications for transactive efficient buildings, built upon data-driven load characterization and prediction models. This will include the ability to include occupancy data in the models, plus data collection and archival methods to include different types of occupancy data with existing networks and a taxonomy for naming these data within a Volttron agent platform.

  5. Interfacing external quantum devices to a universal quantum computer.

    PubMed

    Lagana, Antonio A; Lohe, Max A; von Smekal, Lorenz

    2011-01-01

    We present a scheme to use external quantum devices using the universal quantum computer previously constructed. We thereby show how the universal quantum computer can utilize networked quantum information resources to carry out local computations. Such information may come from specialized quantum devices or even from remote universal quantum computers. We show how to accomplish this by devising universal quantum computer programs that implement well known oracle based quantum algorithms, namely the Deutsch, Deutsch-Jozsa, and the Grover algorithms using external black-box quantum oracle devices. In the process, we demonstrate a method to map existing quantum algorithms onto the universal quantum computer. © 2011 Lagana et al.
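
    As a concrete example of the oracle-based algorithms mentioned, the two-qubit Deutsch algorithm can be simulated directly with state vectors: one oracle query distinguishes a constant from a balanced f. A minimal numpy sketch of a classical simulation (not the authors' universal-quantum-computer encoding):

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

        def deutsch(f):
            # returns 0 if f: {0,1} -> {0,1} is constant, 1 if balanced
            # oracle U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix
            U = np.zeros((4, 4))
            for x in (0, 1):
                for y in (0, 1):
                    U[2 * x + (y ^ f(x)), 2 * x + y] = 1
            state = np.kron([1, 0], [0, 1])          # |0>|1>
            state = np.kron(H, H) @ state            # Hadamard both qubits
            state = U @ state                        # single oracle query
            state = np.kron(H, np.eye(2)) @ state    # Hadamard the input qubit
            prob1 = state[2] ** 2 + state[3] ** 2    # P(first qubit = 1)
            return int(round(prob1))

        print(deutsch(lambda x: 0), deutsch(lambda x: x))   # -> 0 1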

  6. Breaking Megrelishvili protocol using matrix diagonalization

    NASA Astrophysics Data System (ADS)

    Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio

    2018-03-01

    In this article we conduct a theoretical security analysis of Megrelishvili protocol—a linear algebra-based key agreement between two participants. We study the computational complexity of Megrelishvili vector-matrix problem (MVMP) as a mathematical problem that strongly relates to the security of Megrelishvili protocol. In particular, we investigate the asymptotic upper bounds for the running time and memory requirement of the MVMP that involves diagonalizable public matrix. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all of the previously existing algorithms. We also found an important counterintuitive result: the utilization of primitive matrix in Megrelishvili protocol makes the protocol more vulnerable to attacks.
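
    The leverage that diagonalization provides is that high matrix powers become cheap: with M = PDP^-1, computing M^k = P D^k P^-1 needs one eigen-decomposition plus element-wise exponentiation. A minimal numpy sketch of this idea only (not the paper's full MVMP-solving procedure); the matrix and exponent are placeholders:

        import numpy as np

        rng = np.random.default_rng(7)
        M = rng.random((5, 5))       # public matrix (assumed diagonalizable)
        k = 12

        eigvals, P = np.linalg.eig(M)
        Mk = (P * eigvals**k) @ np.linalg.inv(P)   # M^k = P diag(lambda^k) P^-1

        assert np.allclose(Mk, np.linalg.matrix_power(M, k))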

  7. Genetic Optimization and Simulation of a Piezoelectric Pipe-Crawling Inspection Robot

    NASA Technical Reports Server (NTRS)

    Hollinger, Geoffrey A.; Briscoe, Jeri M.

    2004-01-01

    Using the DarwinZk development software, a genetic algorithm (GA) was used to design and optimize a pipe-crawling robot for parameters such as mass, power consumption, and joint extension to further the research of the Miniature Inspection Systems Technology (MIST) team. In an attempt to improve on existing designs, a new robot was developed, the piezo robot. The final proposed design uses piezoelectric expansion actuators to move the robot with a 'chimneying' method employed by mountain climbers and greatly improves on previous designs in load bearing ability, pipe traversing specifications, and field usability. This research shows the advantages of GA assisted design in the field of robotics.
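
    The optimization loop behind such GA-assisted design can be sketched generically: encode the design parameters as a vector, score candidates with a fitness function, then select, recombine and mutate. A minimal Python sketch with a placeholder fitness surrogate (the actual objective and encoding used by the MIST team are not given in this record):

        import numpy as np

        rng = np.random.default_rng(3)

        def fitness(x):      # placeholder surrogate, e.g. mass + power penalty
            return -(x[0] ** 2 + (x[1] - 1) ** 2)

        pop = rng.uniform(-5, 5, size=(40, 2))
        for gen in range(100):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[-20:]]         # truncation selection
            mates = parents[rng.integers(0, 20, size=(40, 2))]
            pop = (mates[:, 0] + mates[:, 1]) / 2           # arithmetic crossover
            pop += rng.normal(0, 0.1, size=pop.shape)       # Gaussian mutation

        print(pop[np.argmax([fitness(ind) for ind in pop])])   # near (0, 1)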

  8. Voltammetry Method Evaluation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoyt, N.; Pereira, C.; Willit, J.

    2016-07-29

    The purpose of the ANL MPACT Voltammetry project is to evaluate the suitability of previously developed cyclic voltammetry techniques to provide electroanalytical measurements of actinide concentrations in realistic used fuel processing scenarios. The molten salts in these scenarios are very challenging as they include high concentrations of multiple electrochemically active species, thereby creating a variety of complications. Some of the problems that arise therein include issues related to uncompensated resistance, cylindrical diffusion, and alloying of the electrodeposited metals. Improvements to the existing voltammetry technique to account for these issues have been implemented, resulting in good measurements of actinide concentrations across a wide range of adverse conditions.

  9. Antarctic ice shelf thickness from CryoSat-2 radar altimetry

    NASA Astrophysics Data System (ADS)

    Chuter, Stephen; Bamber, Jonathan

    2016-04-01

    The Antarctic ice shelves provide buttressing to the inland grounded ice sheet, and therefore play a controlling role in regulating ice dynamics and mass imbalance. Accurate knowledge of ice shelf thickness is essential for input-output method mass balance calculations, sub-ice shelf ocean models and buttressing parameterisations in ice sheet models. Ice shelf thickness has previously been inferred from satellite altimetry elevation measurements using the assumption of hydrostatic equilibrium, as direct measurements of ice thickness do not provide the spatial coverage necessary for these applications. The sensor limitations of previous radar altimeters have led to poor data coverage and a lack of accuracy, particularly in the grounding zone where a break in slope exists. We present a new ice shelf thickness dataset using four years (2011-2014) of CryoSat-2 elevation measurements, with its SARIn dual-antenna mode of operation alleviating the issues affecting previous sensors. These improvements and the dense across-track spacing of the satellite have resulted in ~92% coverage of the ice shelves, with substantial improvements, for example, of over 50% across the Venable and Totten Ice Shelves in comparison to the previous dataset. Significant improvements in coverage and accuracy are also seen south of 81.5° for the Ross and Filchner-Ronne Ice Shelves. Validation of the surface elevation measurements, used to derive ice thickness, against NASA ICESat laser altimetry data shows a mean bias of less than 1 m (equivalent to less than 9 m in ice thickness) and a fourfold decrease in standard deviation in comparison to the previous continental dataset. Importantly, the most substantial improvements are found in the grounding zone. Validation of the derived thickness data has been carried out using multiple Radio Echo Sounding (RES) campaigns across the continent. Over the Amery Ice Shelf, where extensive RES measurements exist, the mean difference between the datasets is 3.3% and 4.7% across the whole shelf and within 10 km of the grounding line, respectively. These represent a two- to threefold improvement in accuracy when compared to the previous data product. The impact of these improvements on input-output estimates of mass balance is illustrated for the Abbot Ice Shelf. Our new product shows a mean reduction of 29% in thickness at the grounding line when compared to the previous dataset, as well as the elimination of non-physical 'data spikes' that were prevalent in the previous product in areas of complex terrain. The reduction in grounding line thickness equates to a change in mass balance for the area from -14±9 Gt yr-1 to -4±9 Gt yr-1. The updated estimate is more consistent with the positive surface elevation rate in this region obtained from satellite altimetry. We show examples from other sectors including the Getz and George VI ice shelves. The new thickness dataset will greatly reduce the uncertainty in input-output estimates of mass balance for the ~30% of the grounding line of Antarctica where direct ice thickness measurements do not exist.
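
    The hydrostatic conversion from freeboard to thickness that underlies such datasets is, in its simplest form, H = (e - d) * rho_w / (rho_w - rho_i), with surface elevation e above sea level and firn-air correction d. A minimal Python sketch; the densities and correction are typical textbook values, not those used in this study:

        def shelf_thickness(elevation_m, firn_air_m=15.0,
                            rho_ice=917.0, rho_water=1028.0):
            # ice shelf thickness from freeboard, assuming hydrostatic equilibrium
            return (elevation_m - firn_air_m) * rho_water / (rho_water - rho_ice)

        print(shelf_thickness(50.0))   # ~324 m of ice for 50 m of surface elevation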

  10. Model-based registration of multi-rigid-body for augmented reality

    NASA Astrophysics Data System (ADS)

    Ikeda, Sei; Hori, Hajime; Imura, Masataka; Manabe, Yoshitsugu; Chihara, Kunihiro

    2009-02-01

    Geometric registration between a virtual object and the real space is the most basic problem in augmented reality. Model-based tracking methods allow us to estimate the three-dimensional (3-D) position and orientation of a real object by using a textured 3-D model instead of a visual marker. However, it is difficult to apply existing model-based tracking methods to objects that have movable parts, such as the display of a mobile phone, because these methods assume a single, rigid-body model. In this research, we propose a novel model-based registration method for multi-rigid-body objects. For each frame, the 3-D models of each rigid part of the object are first rendered according to the motion and transformation estimated from the previous frame. Second, control points are determined by detecting the edges of the rendered image and sampling pixels on these edges. Motion and transformation are then calculated simultaneously from the distances between the edges and the control points. The validity of the proposed method is demonstrated through experiments using synthetic videos.

  11. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  12. General methods for sensitivity analysis of equilibrium dynamics in patch occupancy models

    USGS Publications Warehouse

    Miller, David A.W.

    2012-01-01

    Sensitivity analysis is a useful tool for the study of ecological models that has many potential applications for patch occupancy modeling. Drawing from the rich foundation of existing methods for Markov chain models, I demonstrate new methods for sensitivity analysis of the equilibrium state dynamics of occupancy models. Estimates from three previous studies are used to illustrate the utility of the sensitivity calculations: a joint occupancy model for a prey species, its predators, and habitat used by both; occurrence dynamics from a well-known metapopulation study of three butterfly species; and Golden Eagle occupancy and reproductive dynamics. I show how to deal efficiently with multistate models and how to calculate sensitivities involving derived state variables and lower-level parameters. In addition, I extend methods to incorporate environmental variation by allowing for spatial and temporal variability in transition probabilities. The approach used here is concise and general and can fully account for environmental variability in transition parameters. The methods can be used to improve inferences in occupancy studies by quantifying the effects of underlying parameters, aiding prediction of future system states, and identifying priorities for sampling effort.
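
    For the simplest single-species case, the equilibrium occupancy of the underlying Markov chain is psi* = gamma / (gamma + epsilon), with colonization probability gamma and extinction probability epsilon, so equilibrium sensitivities follow by direct differentiation. A minimal Python sketch of this one-state case (the paper's multistate and lower-level-parameter machinery generalizes it):

        def equilibrium_occupancy(gamma, eps):
            return gamma / (gamma + eps)

        def sensitivities(gamma, eps):
            # d(psi*)/d(gamma) and d(psi*)/d(eps), by the quotient rule
            denom = (gamma + eps) ** 2
            return eps / denom, -gamma / denom

        print(equilibrium_occupancy(0.2, 0.1))   # 0.667
        print(sensitivities(0.2, 0.1))           # (1.11, -2.22)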

  13. Linearized self-consistent quasiparticle GW method: Application to semiconductors and simple metals

    DOE PAGES

    Kutepov, A. L.; Oudovenko, V. S.; Kotliar, G.

    2017-06-23

    We present a code implementing the linearized self-consistent quasiparticle GW method (QSGW) in the LAPW basis. Our approach is based on the linearization of the self-energy around zero frequency, which distinguishes it from the existing implementations of the QSGW method. The linearization allows us to use Matsubara frequencies instead of working on the real axis. This results in efficiency gains by switching to the imaginary time representation in the same way as in the space time method. The all-electron LAPW basis set eliminates the need for pseudopotentials. We discuss the advantages of our approach, such as its N³ scaling with the system size N, as well as its shortcomings. We apply our approach to study the electronic properties of selected semiconductors, insulators, and simple metals and show that our code produces results very close to the previously published QSGW data. Our implementation is a good platform for further many-body diagrammatic resummations such as the vertex-corrected GW approach and the GW+DMFT method.

  14. Eye-tracking the time-course of novel word learning and lexical competition in adults and children.

    PubMed

    Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G

    2017-04-01

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree. Copyright © 2016. Published by Elsevier Inc.

  15. The quality of diagnosis and management of migraine and tension-type headache in three social groups in Russia.

    PubMed

    Lebedeva, Elena R; Kobzeva, Natalia R; Gilev, Denis V; Olesen, Jes

    2017-03-01

    Background Three successive editions of the International Classification of Headache Disorders and multiple guideline papers on headache care have described evidence based diagnosis and treatment of headache disorders. It remains unknown, however, to which extent this has improved the diagnosis and management of headache. That was the aim of our study in which we also analysed differences between three social groups in Russia. Methods We studied 1042 students (719 females, 323 males, mean age 20.6, age range 17-40), 1075 workers (146 females, 929 males, mean age 40.4, age range 21-67) and 1007 blood donors (484 females, 523 males, mean age 34.1, age range 18-64). We conducted a semi-structured, validated, face-to-face professional interview. Data on prevalence and associated factors have previously been published. A section of the interview focused on previous diagnosis and treatment, the topic of this paper. Results Only 496 of 2110 participants (23%) with headache in Russia had consulted because of headache. Students consulted more frequently (35%), workers and blood donors less often (13% and 14%). Only 12% of the patients with ICHD-3beta diagnosis of migraine and 11.7% with ICHD-3beta diagnosis of tension-type headache (TTH) had previously been correctly diagnosed. Triptans were used by only 6% of migraine patients. Only 0.4% of migraine patients and no TTH patients had received prophylactic treatment. Conclusion Despite existing guidelines about diagnosis and treatment, both remain poor in Russia. According to the literature this is only slightly better in Europe and America. Dissemination of existing knowledge should have higher priority in the future.

  16. Myocardium tracking via matching distributions.

    PubMed

    Ben Ayed, Ismail; Li, Shuo; Ross, Ian; Islam, Ali

    2009-01-01

    The goal of this study is to investigate automatic myocardium tracking in cardiac Magnetic Resonance (MR) sequences using global distribution matching via level-set curve evolution. Rather than relying on pixelwise information as in existing approaches, distribution matching compares intensity distributions and, consequently, is well suited to the myocardium tracking problem. Starting from a manual segmentation of the first frame, two curves are evolved in order to recover the endocardium (inner myocardium boundary) and the epicardium (outer myocardium boundary) in all the frames. For each curve, the evolution equation is sought following the maximization of a functional containing two terms: (1) a distribution matching term measuring the similarity between the non-parametric intensity distributions sampled from inside and outside the curve and the model distributions of the corresponding regions estimated from the previous frame; (2) a gradient term for smoothing the curve and biasing it toward high intensity gradients. The Bhattacharyya coefficient is used as a similarity measure between distributions. The functional maximization is obtained by the Euler-Lagrange ascent equation of curve evolution, implemented efficiently via level-sets. The performance of the proposed distribution matching was quantitatively evaluated by comparison with independent manual segmentations approved by an experienced cardiologist. The method was applied to ten 2D mid-cavity MR sequences corresponding to ten different subjects. Although neither shape prior knowledge nor curve coupling were used, quantitative evaluation demonstrated that the results were consistent with manual segmentations. The proposed method compares well with existing methods, and the algorithm also yields satisfying reproducibility. Distribution matching leads to myocardium tracking that is more flexible and applicable than existing methods because the algorithm uses only the current data, i.e., it does not require training, and consequently the solution is not bound to shape/intensity prior information learned from a finite training set.
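
    The similarity measure at the heart of the matching term, the Bhattacharyya coefficient B(p, q) = sum over bins of sqrt(p*q), is easy to state concretely on histograms. A minimal Python sketch (the level-set curve evolution itself is not reproduced); the data and bin settings are placeholders:

        import numpy as np

        def bhattacharyya(samples_a, samples_b, bins=64, lo=0.0, hi=255.0):
            # Bhattacharyya coefficient between two intensity samples (1 = identical)
            p, _ = np.histogram(samples_a, bins=bins, range=(lo, hi), density=True)
            q, _ = np.histogram(samples_b, bins=bins, range=(lo, hi), density=True)
            binw = (hi - lo) / bins
            return np.sum(np.sqrt(p * q)) * binw

        rng = np.random.default_rng(2)
        inside = rng.normal(120, 15, 5000)    # stand-in for myocardium intensities
        model = rng.normal(122, 15, 5000)     # model distribution from previous frame
        print(bhattacharyya(inside, model))   # close to 1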

  17. Identifying sources of tick blood meals using unidentified tandem mass spectral libraries.

    PubMed

    Önder, Özlem; Shao, Wenguang; Kemps, Brian D; Lam, Henry; Brisson, Dustin

    2013-01-01

    Rapid and reliable identification of the vertebrate species on which a disease vector previously parasitized is imperative to study ecological factors that affect pathogen distribution and can aid the development of public health programs. Here we describe a proteome profiling technique designed to identify the source of blood meals of haematophagous arthropods. This method employs direct spectral matching and thus does not require a priori knowledge of any genetic or protein sequence information. Using this technology, we detect remnants of blood in blacklegged ticks (Ixodes scapularis) and correctly determine the vertebrate species from which the blood was derived, even 6 months after the tick had fed. This biological fingerprinting methodology is sensitive, fast, cost-effective and can potentially be adapted for other biological and medical applications when existing genome-based methods are impractical or ineffective.

  18. A Dual-Catalysis Approach to Enantioselective [2+2] Photocycloadditions Using Visible Light

    PubMed Central

    Du, Juana; Skubi, Kazimer L.; Schultz, Danielle M.; Yoon, Tehshik P.

    2015-01-01

    In contrast to the wealth of catalytic systems that are available to control the stereochemistry of thermally promoted cycloadditions, few similarly effective methods exist for the stereocontrol of photochemical cycloadditions. A major unsolved challenge in the design of enantioselective catalytic photocycloaddition reactions has been the difficulty of controlling racemic background reactions that occur by direct photoexcitation of substrates while unbound to catalyst. Here we describe a strategy for eliminating the racemic background reaction in asymmetric [2+2] photocycloadditions of α,β-unsaturated ketones to the corresponding cyclobutanes by employing a dual-catalyst system consisting of a visible light-absorbing transition metal photocatalyst and a stereocontrolling Lewis acid co-catalyst. The independence of these two catalysts enables broader scope, greater stereochemical flexibility, and better efficiency than previously reported methods for enantioselective photochemical cycloadditions. PMID:24763585

  19. Directed Differentiation of Embryonic Stem Cells Using a Bead-Based Combinatorial Screening Method

    PubMed Central

    Tarunina, Marina; Hernandez, Diana; Johnson, Christopher J.; Rybtsov, Stanislav; Ramathas, Vidya; Jeyakumar, Mylvaganam; Watson, Thomas; Hook, Lilian; Medvinsky, Alexander; Mason, Chris; Choo, Yen

    2014-01-01

    We have developed a rapid, bead-based combinatorial screening method to determine optimal combinations of variables that direct stem cell differentiation to produce known or novel cell types having pre-determined characteristics. Here we describe three experiments comprising stepwise exposure of mouse or human embryonic cells to 10,000 combinations of serum-free differentiation media, through which we discovered multiple novel, efficient and robust protocols to generate a number of specific hematopoietic and neural lineages. We further demonstrate that the technology can be used to optimize existing protocols in order to substitute costly growth factors with bioactive small molecules and/or increase cell yield, and to identify in vitro conditions for the production of rare developmental intermediates such as an embryonic lymphoid progenitor cell that has not previously been reported. PMID:25251366

  20. Clustering of self-organizing map identifies five distinct medulloblastoma subgroups.

    PubMed

    Cao, Changjun; Wang, Wei; Jiang, Pucha

    2016-01-01

    Medulloblastoma is one of the most malignant paediatric brain tumours. Molecular subgrouping of these medulloblastomas will not only help identify specific cohorts for certain treatments but also improve confidence in prognostic prediction. Currently, there is a consensus that four distinct subtypes of medulloblastoma exist. We proposed a novel bioinformatics method, clustering of the self-organizing map (SOM), to determine the subgroups and their molecular diversity. Microarray expression profiles of 46 medulloblastoma samples were analysed, and five clusters with distinct demographics, clinical outcomes and transcriptional profiles were identified. The previously reported Wnt subgroup was identified as expected. Three other novel subgroups were proposed for later investigation. Our findings underscore the value of SOM clustering for discovering medulloblastoma subgroups. Once the suggested subdivision has been confirmed in large cohorts, this method should serve as part of the routine classification of clinical samples.

  1. The magnetic universe through vector potential SPMHD simulations

    NASA Astrophysics Data System (ADS)

    Stasyszyn, F. A.

    2017-10-01

    The use of Smoothed Particle Magnetohydrodynamics (SPMHD) is nowadays more and more common in astrophysics. From galaxy clusters to neutron stars, there are already multiple applications in the literature. I will review some of the common methods used and highlight the successful approach of using vector potentials to describe the evolution of the magnetic fields. The latter have some interesting advantages, and their results challenge previous findings, with the magnetic divergence problem vanishing naturally. We select a few examples to discuss some areas of interest. First, we show some galaxy clusters from the MUSIC project. These cosmological simulations are done with the usual sub-grid recipes, such as radiative cooling and star formation, and are the first obtained with an SPH code in a self-consistent way. This demonstrates the robustness of the new method in a variety of astrophysical scenarios.
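
    The appeal of evolving the vector potential is that B = curl A keeps div B = 0 by construction. A small numpy finite-difference check of this identity (purely illustrative, and unrelated to the SPMHD discretization itself); because the per-axis difference operators commute, the divergence vanishes to round-off:

        import numpy as np

        rng = np.random.default_rng(5)
        n, h = 32, 1.0 / 32
        A = rng.random((3, n, n, n))            # arbitrary vector potential A

        d = lambda f, axis: np.gradient(f, h, axis=axis)
        Bx = d(A[2], 1) - d(A[1], 2)            # B = curl A, component by component
        By = d(A[0], 2) - d(A[2], 0)
        Bz = d(A[1], 0) - d(A[0], 1)

        div_B = d(Bx, 0) + d(By, 1) + d(Bz, 2)
        print(np.abs(div_B).max())              # ~1e-12: divergence-free to round-off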

  2. Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Nielsen, Eric J.; Anderson, W. Kyle

    2000-01-01

    Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.

  3. Runge-Kutta method for wall shear stress of blood flow in stenosed artery

    NASA Astrophysics Data System (ADS)

    Awaludin, Izyan Syazana; Ahmad, Rokiah@Rozita

    2014-06-01

    A mathematical model of blood flow through a stenotic artery is considered. A stenosis is defined as a partial occlusion of a blood vessel due to the accumulation of cholesterol, fats and abnormal tissue growth on the artery walls. The development of stenosis in an artery is one of the factors that cause problems in the blood circulation system. This study was conducted to determine the wall shear stress of blood flow in a stenosed artery. A modified mathematical model is used to analyse the relationship of the wall shear stress to the length and height of the stenosis. The existing models created by previous researchers are solved using the fourth-order Runge-Kutta method. Numerical results show that the wall shear stress is proportional to the length and height of the stenosis.
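
    The fourth-order Runge-Kutta scheme used to solve the model advances an ODE y' = f(x, y) in fixed steps. A generic Python sketch of the classic method (the stenosis model itself, i.e. the specific f, is not reproduced here):

        def rk4(f, x0, y0, h, steps):
            # classic fourth-order Runge-Kutta for y' = f(x, y)
            x, y = x0, y0
            for _ in range(steps):
                k1 = f(x, y)
                k2 = f(x + h / 2, y + h * k1 / 2)
                k3 = f(x + h / 2, y + h * k2 / 2)
                k4 = f(x + h, y + h * k3)
                y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
                x += h
            return y

        print(rk4(lambda x, y: y, 0.0, 1.0, 0.01, 100))   # e ~ 2.71828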

  4. BiFC Assay to Detect Calmodulin Binding to Plant Receptor Kinases.

    PubMed

    Fischer, Cornelia; Sauter, Margret; Dietrich, Petra

    2017-01-01

    Plant receptor-like kinases (RLKs) are regulated at various levels, including posttranscriptional modification and interaction with regulatory proteins. Calmodulin (CaM) is a calcium-sensing protein that has been shown to bind to some RLKs, such as the PHYTOSULFOKINE RECEPTOR1 (PSKR1). The CaM-binding site is embedded in subdomain VIa of the kinase domain. It is possible that many more RLKs interact with CaM than previously described. Several methods exist to unequivocally confirm CaM binding. Bimolecular fluorescence complementation (BiFC) and pull-down assays have been successfully used to study CaM binding to PSKR1 and are described in this chapter (BiFC) and in Chapter 15 (pull down). The two methods are complementary. BiFC is useful for showing localization and interaction of soluble as well as membrane-bound proteins in planta.

  5. Complete mechanical behavior analysis of FG Nano Beam under non-uniform loading using non-local theory

    NASA Astrophysics Data System (ADS)

    Ghaffari, I.; Parhizkar Yaghoobi, M.; Ghannad, M.

    2018-01-01

    The purpose of this study is to offer a complete solution for analyzing the mechanical behavior (bending, buckling and vibration) of a nano-beam under non-uniform loading. Furthermore, the effects of size (non-local parameters), non-homogeneity constants, and different boundary conditions are investigated using this method. The exact solution presented here reduces the costs incurred by experiments. In this research, the displacement field obeys the kinematics of the Euler-Bernoulli beam theory, and non-local elasticity theory is used. The governing equations and general boundary conditions are derived for a beam using the energy method. The presented solution enables us to analyze any kind of loading profile and boundary conditions without limitation. Furthermore, this solution, unlike previous studies, is not a series solution; hence, it is free of the limitations that exist with series solutions and does not need a convergence check. Based on the developed analytical solution, the influence of size, non-homogeneity and non-uniform loads on bending, buckling and vibration behavior is discussed. The obtained results are also highly accurate and in good agreement with previous research. With this theoretical method, the allowable range for the non-local parameters can be determined, making a major contribution to reducing the cost of the experiments that determine the value of non-local parameters.

  6. Random-breakage mapping method applied to human DNA sequences

    NASA Technical Reports Server (NTRS)

    Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)

    1996-01-01

    The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single-copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site from each end of the restriction fragment. By analyzing the positions of these discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping method.

  7. Meta-heuristic algorithms as tools for hydrological science

    NASA Astrophysics Data System (ADS)

    Yoo, Do Guen; Kim, Joong Hoon

    2014-12-01

    In this paper, meta-heuristic optimization techniques are introduced and their applications to water resources engineering, particularly in hydrological science, are reviewed. In recent years, meta-heuristic optimization techniques have been introduced that can overcome the problems inherent in iterative simulations. These methods are able to find good solutions and require limited computation time and memory without requiring complex derivatives. Simulation-based meta-heuristic methods such as genetic algorithms (GAs) and Harmony Search (HS) have powerful searching abilities, which can occasionally overcome the drawbacks of traditional mathematical methods. For example, the HS algorithm is conceptualized from the process of musical performance seeking better harmony; such optimization algorithms seek a near-global optimum determined by the value of an objective function rather than by the aesthetic estimation typical of musical performance. Meta-heuristic algorithms and their applications (with a focus on GAs and HS) in hydrological science are discussed by subject, including a review of the existing literature in the field. Then, recent trends in optimization are presented and a relatively new technique, the Smallest Small World Cellular Harmony Search (SSWCHS), is briefly introduced, with a summary of promising results obtained in previous studies. Overall, previous studies have demonstrated that meta-heuristic algorithms are effective tools for the development of hydrological models and the management of water resources.
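
    The HS loop referred to above keeps a memory of candidate solutions and builds each new "harmony" by memory consideration (rate HMCR), pitch adjustment (rate PAR with bandwidth bw) or random selection, replacing the worst member when the new one is better. A minimal Python sketch on a toy objective; all parameter values are illustrative:

        import numpy as np

        rng = np.random.default_rng(4)
        dim, hms, hmcr, par, bw = 2, 20, 0.9, 0.3, 0.05
        f = lambda x: np.sum((x - 1.0) ** 2)          # toy objective to minimize

        memory = rng.uniform(-5, 5, size=(hms, dim))  # harmony memory
        for _ in range(2000):
            new = np.empty(dim)
            for j in range(dim):
                if rng.random() < hmcr:                    # memory consideration
                    new[j] = memory[rng.integers(hms), j]
                    if rng.random() < par:                 # pitch adjustment
                        new[j] += bw * rng.uniform(-1, 1)
                else:                                      # random selection
                    new[j] = rng.uniform(-5, 5)
            worst = np.argmax([f(x) for x in memory])
            if f(new) < f(memory[worst]):
                memory[worst] = new

        print(memory[np.argmin([f(x) for x in memory])])   # near (1, 1)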

  8. Position Estimation and Local Mapping Using Omnidirectional Images and Global Appearance Descriptors

    PubMed Central

    Berenguer, Yerai; Payá, Luis; Ballesta, Mónica; Reinoso, Oscar

    2015-01-01

    This work presents some methods to create local maps and to estimate the position of a mobile robot, using the global appearance of omnidirectional images. We use a robot that carries an omnidirectional vision system on it. Every omnidirectional image acquired by the robot is described only with one global appearance descriptor, based on the Radon transform. In the work presented in this paper, two different possibilities have been considered. In the first one, we assume the existence of a map previously built composed of omnidirectional images that have been captured from previously-known positions. The purpose in this case consists of estimating the nearest position of the map to the current position of the robot, making use of the visual information acquired by the robot from its current (unknown) position. In the second one, we assume that we have a model of the environment composed of omnidirectional images, but with no information about the location of where the images were acquired. The purpose in this case consists of building a local map and estimating the position of the robot within this map. Both methods are tested with different databases (including virtual and real images) taking into consideration the changes of the position of different objects in the environment, different lighting conditions and occlusions. The results show the effectiveness and the robustness of both methods. PMID:26501289
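
    A Radon-transform-based global appearance descriptor can be sketched with scikit-image: project the image over a set of angles and flatten the sinogram into one normalized vector, then compare vectors by distance. The library choice, angle count and normalization are assumptions, not the authors' implementation:

        import numpy as np
        from skimage.transform import radon

        def radon_descriptor(image, n_angles=90):
            theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
            sinogram = radon(image, theta=theta, circle=False)   # one column per angle
            desc = sinogram.ravel()
            return desc / np.linalg.norm(desc)    # normalize for comparability

        img_a = np.random.rand(64, 64)            # stand-ins for omnidirectional images
        img_b = np.random.rand(64, 64)
        dist = np.linalg.norm(radon_descriptor(img_a) - radon_descriptor(img_b))
        print(dist)    # smaller distance -> more similar global appearance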

  9. Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs

    NASA Astrophysics Data System (ADS)

    Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure

    Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.
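
    To make the ranking-function idea concrete, here is a small self-contained illustration of ours (not the authors' tool): a two-variable loop whose termination is witnessed by a two-dimensional lexicographic ranking, exactly the situation where multi-dimensional rankings pay off, since the loop's quadratic step count rules out any one-dimensional affine ranking.

        def run(x, y):
            # Toy flowchart loop: while x > 0: if y > 0: y -= 1 else: x -= 1; y = x
            steps = 0
            while x > 0:
                old = (x, y)                # 2-D ranking value before the step
                if y > 0:
                    y -= 1
                else:
                    x -= 1
                    y = x
                assert (x, y) < old         # strict lexicographic decrease
                assert x >= 0 and y >= 0    # values stay in a well-founded set
                steps += 1
            return steps

        print(run(5, 5))   # 20 transitions: the count grows quadratically in x

    Every step strictly decreases the pair (x, y) in lexicographic order over the nonnegative integers, so the loop must terminate; counting transitions also recovers a polynomial (here quadratic) complexity bound of the kind the paper derives.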

  10. Robust Path Planning and Feedback Design Under Stochastic Uncertainty

    NASA Technical Reports Server (NTRS)

    Blackmore, Lars

    2008-01-01

    Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value. In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as well as the reference trajectory. The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
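
    The chance-constraint notion used above has a standard deterministic reformulation in the Gaussian case, which the following sketch (our illustration, with made-up numbers) checks by Monte Carlo: requiring P(x > x_max) <= delta for x ~ N(mu, sigma^2) is equivalent to the tightened bound mu <= x_max - z*sigma with z = Phi^{-1}(1 - delta).

        import numpy as np
        from scipy.stats import norm

        delta, x_max, sigma = 0.05, 10.0, 1.5    # illustrative values
        z = norm.ppf(1.0 - delta)                # Gaussian quantile Phi^{-1}(1 - delta)
        mu_allowed = x_max - z * sigma           # largest mean satisfying the constraint

        # Monte Carlo check: violation probability at the tightened mean is ~delta.
        samples = np.random.default_rng(0).normal(mu_allowed, sigma, 1_000_000)
        print(mu_allowed, (samples > x_max).mean())

    Tightenings of this form are what let a chance-constrained planner be posed as a deterministic convex program over a convex feasible region; the nonconvex obstacle constraints are what the paper's alternation between robust and nonrobust solvers is designed to handle.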

  11. Cochlear anatomy using micro computed tomography (μCT) imaging

    NASA Astrophysics Data System (ADS)

    Kim, Namkeun; Yoon, Yongjin; Steele, Charles; Puria, Sunil

    2008-02-01

    A novel micro computed tomography (μCT) image processing method was implemented to measure anatomical features of the gerbil and chinchilla cochleas, taking into account the bent modiolar axis. Measurements were made of the scala vestibuli (SV) area, the scala tympani (ST) area, and the basilar membrane (BM) width using prepared cadaveric temporal bones. 3-D cochlear structures were obtained from the scanned images using a process described in this study. It was necessary to consider the sharp curvature of the modiolar axis near the basal region. The SV and ST areas were calculated from the μCT reconstructions and compared with existing data obtained by Magnetic Resonance Microscopy (MRM), showing both qualitative and quantitative agreement. In addition, the width of the BM, which is the distance between the primary and secondary osseous spiral laminae, was calculated for the two animals and compared with previous data from the MRM method. For the gerbil cochlea, which does not have much cartilage in the osseous spiral lamina, the μCT-based BM width measurements show good agreement with previous data. The chinchilla BM, which contains more cartilage in the osseous spiral lamina than the gerbil, shows a large difference in BM widths between the μCT and MRM methods. The SV area, ST area, and BM width measurements from this study can be used in building an anatomically based mathematical cochlear model.

  12. An experimental study of gully sidewall expansion

    USDA-ARS?s Scientific Manuscript database

    Soil erosion, in its myriad forms, devastates arable land and infrastructure and strains the balance between economic stability and viability. Gullies may form in existing channels or where no previous channel drainage existed. Typically, gullies are a result of a disequilibrium between the eroding ...

  13. Event Networks and the Identification of Crime Pattern Motifs

    PubMed Central

    2015-01-01

    In this paper we demonstrate the use of network analysis to characterise patterns of clustering in spatio-temporal events. Such clustering is of both theoretical and practical importance in the study of crime, and forms the basis for a number of preventative strategies. However, existing analytical methods show only that clustering is present in data, while offering little insight into the nature of the patterns present. Here, we show how the classification of pairs of events as close in space and time can be used to define a network, thereby generalising previous approaches. The application of graph-theoretic techniques to these networks can then offer significantly deeper insight into the structure of the data than previously possible. In particular, we focus on the identification of network motifs, which have clear interpretation in terms of spatio-temporal behaviour. Statistical analysis is complicated by the nature of the underlying data, and we provide a method by which appropriate randomised graphs can be generated. Two datasets are used as case studies: maritime piracy at the global scale, and residential burglary in an urban area. In both cases, the same significant 3-vertex motif is found; this result suggests that incidents tend to occur not just in pairs, but in fact in larger groups within a restricted spatio-temporal domain. In the 4-vertex case, different motifs are found to be significant in each case, suggesting that this technique is capable of discriminating between clustering patterns at a finer granularity than previously possible. PMID:26605544
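
    The construction described, linking two events whenever they are close in both space and time and then searching for motifs, is easy to prototype. The sketch below (illustrative thresholds and coordinates, using networkx) builds such an event network and counts closed 3-vertex motifs, i.e. triangles, the motif found significant in both case studies.

        import itertools
        import networkx as nx

        def event_network(events, d_max, t_max):
            # events: list of (x, y, t). Link two events when they fall within
            # d_max in space and t_max in time, generalising near-repeat pairs.
            g = nx.Graph()
            g.add_nodes_from(range(len(events)))
            for i, j in itertools.combinations(range(len(events)), 2):
                (x1, y1, t1), (x2, y2, t2) = events[i], events[j]
                if abs(t1 - t2) <= t_max and ((x1 - x2)**2 + (y1 - y2)**2)**0.5 <= d_max:
                    g.add_edge(i, j)
            return g

        g = event_network([(0, 0, 0), (1, 0, 1), (0, 1, 2), (9, 9, 50)], d_max=2.0, t_max=3.0)
        print(sum(nx.triangles(g).values()) // 3)   # closed 3-vertex motifs -> 1

    Statistical significance would then be judged against the randomised graphs the paper describes, since raw motif counts are meaningless without an appropriate null model.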

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Foreman-Mackey, Daniel; Hogg, David W.; Morton, Timothy D., E-mail: danfm@nyu.edu

    No true extrasolar Earth analog is known. Hundreds of planets have been found around Sun-like stars that are either Earth-sized but on shorter periods, or else on year-long orbits but somewhat larger. Under strong assumptions, exoplanet catalogs have been used to make an extrapolated estimate of the rate at which Sun-like stars host Earth analogs. These studies are complicated by the fact that every catalog is censored by non-trivial selection effects and detection efficiencies, and every property (period, radius, etc.) is measured noisily. Here we present a general hierarchical probabilistic framework for making justified inferences about the population of exoplanets, taking into account survey completeness and, for the first time, observational uncertainties. We are able to make fewer assumptions about the distribution than previous studies; we only require that the occurrence rate density be a smooth function of period and radius (employing a Gaussian process). By applying our method to synthetic catalogs, we demonstrate that it produces more accurate estimates of the whole population than standard procedures based on weighting by inverse detection efficiency. We apply the method to an existing catalog of small planet candidates around G dwarf stars. We confirm a previous result that the radius distribution changes slope near Earth's radius. We find that the rate density of Earth analogs is about 0.02 (per star per natural logarithmic bin in period and radius) with large uncertainty. This number is much smaller than previous estimates made with the same data but stronger assumptions.
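
    For contrast with the hierarchical approach, the "standard procedure" the abstract benchmarks against can be sketched in a few lines: each detected planet is weighted by the inverse of the survey completeness at its period and radius, and the weighted counts are normalised per star per bin. All argument names below are ours.

        import numpy as np

        def occurrence_rate(periods, radii, completeness, n_stars, p_bins, r_bins):
            # Inverse-detection-efficiency estimate: each detected planet
            # contributes 1/completeness to its (period, radius) bin.
            rate = np.zeros((len(p_bins) - 1, len(r_bins) - 1))
            for p, r, c in zip(periods, radii, completeness):
                i = np.searchsorted(p_bins, p) - 1
                j = np.searchsorted(r_bins, r) - 1
                if 0 <= i < rate.shape[0] and 0 <= j < rate.shape[1]:
                    rate[i, j] += 1.0 / c
            return rate / n_stars

    The paper's point is that this estimator ignores the noise in each planet's measured period and radius, which the hierarchical Gaussian-process model propagates explicitly.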

  15. Evaluating user reputation in online rating systems via an iterative group-based ranking method

    NASA Astrophysics Data System (ADS)

    Gao, Jian; Zhou, Tao

    2017-05-01

    Reputation is a valuable asset in online social lives and has drawn increasing attention. Given the existence of noisy ratings and spamming attacks, how to evaluate user reputation in online rating systems is especially significant. However, most previous ranking-based methods either rest on a debatable assumption or show unsatisfactory robustness. In this paper, we propose an iterative group-based ranking method by introducing an iterative reputation-allocation process into the original group-based ranking method. More specifically, the reputation of users is calculated based on the weighted sizes of the user rating groups after grouping all users by their rating similarities, and the ratings of high-reputation users have larger weights in dominating the corresponding user rating groups. The reputation of users and the user rating group sizes are iteratively updated until they become stable. Results on two real data sets with artificial spammers suggest that the proposed method performs better than the state-of-the-art methods and that its robustness is considerably improved compared with the original group-based ranking method. Our work highlights the positive role of considering users' grouping behaviors in better online user reputation evaluation.
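
    A strongly simplified sketch of the iterative reputation-allocation idea follows; it is our reading of the abstract, not the authors' exact algorithm. Users who give an item the same rating form a group, a user's reputation is the average reputation-weighted relative size of the groups he or she falls into, and the two quantities are iterated to a fixed point.

        import numpy as np

        def iterative_group_reputation(R, iters=50, tol=1e-8):
            # R: (users x items) rating matrix with np.nan for missing ratings.
            n_users, n_items = R.shape
            rep = np.ones(n_users)
            for _ in range(iters):
                new, counts = np.zeros(n_users), np.zeros(n_users)
                for j in range(n_items):
                    raters = np.where(~np.isnan(R[:, j]))[0]
                    if raters.size == 0:
                        continue
                    total = rep[raters].sum()
                    for v in np.unique(R[raters, j]):
                        group = raters[R[raters, j] == v]
                        w = rep[group].sum() / total   # reputation-weighted group size
                        new[group] += w
                        counts[group] += 1
                new = np.where(counts > 0, new / counts, 0.0)
                if np.abs(new - rep).max() < tol:      # stop when reputations stabilise
                    break
                rep = new
            return rep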

  16. Hyper-Parallel Tempering Monte Carlo Method and Its Applications

    NASA Astrophysics Data System (ADS)

    Yan, Qiliang; de Pablo, Juan

    2000-03-01

    A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
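
    At the core of any parallel-tempering scheme is the replica-swap acceptance rule, shown below in minimal form; the paper's hyper-parallel method augments this with expanded-ensemble, open-ensemble and configurational-bias moves, which are not sketched here.

        import math
        import random

        def attempt_swap(betas, energies, i, j):
            # Swap configurations between inverse temperatures beta_i and beta_j
            # with probability min(1, exp[(beta_i - beta_j) * (E_i - E_j)]),
            # which preserves detailed balance across the replica ladder.
            delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
            return delta >= 0 or random.random() < math.exp(delta)

    Swaps let configurations trapped behind free-energy barriers at low temperature escape through the high-temperature replicas, which is the mechanism behind the accelerated phase-space diffusion reported above.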

  17. A finite element method to compute three-dimensional equilibrium configurations of fluid membranes: Optimal parameterization, variational formulation and applications

    NASA Astrophysics Data System (ADS)

    Rangarajan, Ramsharan; Gao, Huajian

    2015-09-01

    We introduce a finite element method to compute equilibrium configurations of fluid membranes, identified as stationary points of a curvature-dependent bending energy functional under certain geometric constraints. The reparameterization symmetries in the problem pose a challenge in designing parametric finite element methods, and existing methods commonly resort to Lagrange multipliers or penalty parameters. In contrast, we exploit these symmetries by representing solution surfaces as normal offsets of given reference surfaces and entirely bypass the need for artificial constraints. We then resort to a Galerkin finite element method to compute discrete C1 approximations of the normal offset coordinate. The variational framework presented is suitable for computing deformations of three-dimensional membranes subject to a broad range of external interactions. We provide a systematic algorithm for computing large deformations, wherein solutions at subsequent load steps are identified as perturbations of previously computed ones. We discuss the numerical implementation of the method in detail and demonstrate its optimal convergence properties using examples. We discuss applications of the method to studying adhesive interactions of fluid membranes with rigid substrates and to investigating the influence of membrane tension on tether formation.

  18. Evaluation and comparison of predictive individual-level general surrogates.

    PubMed

    Gabriel, Erin E; Sachs, Michael C; Halloran, M Elizabeth

    2018-07-01

    An intermediate response measure that accurately predicts efficacy in a new setting at the individual level could be used both for prediction and personalized medical decisions. In this article, we define a predictive individual-level general surrogate (PIGS), which is an individual-level intermediate response that can be used to accurately predict individual efficacy in a new setting. While methods for evaluating trial-level general surrogates, which are predictors of trial-level efficacy, have been developed previously, few, if any, methods have been developed to evaluate individual-level general surrogates, and no methods have formalized the use of cross-validation to quantify the expected prediction error. Our proposed method uses existing methods of individual-level surrogate evaluation within a given clinical trial setting in combination with cross-validation over a set of clinical trials to evaluate surrogate quality and to estimate the absolute prediction error that is expected in a new trial setting when using a PIGS. Simulations show that our method performs well across a variety of scenarios. We use our method to evaluate and to compare candidate individual-level general surrogates over a set of multi-national trials of a pentavalent rotavirus vaccine.
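
    The cross-validation component can be illustrated with a leave-one-trial-out loop; this is a deliberately simplified stand-in for the authors' procedure, with an arbitrary linear working model linking surrogate and efficacy, and all names below are ours.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def loto_prediction_error(surrogate, efficacy, trial_id):
            # Fit efficacy ~ surrogate on all trials but one, predict the
            # held-out trial, and average the absolute prediction errors.
            errors = []
            for t in np.unique(trial_id):
                train, test = trial_id != t, trial_id == t
                model = LinearRegression().fit(surrogate[train].reshape(-1, 1),
                                               efficacy[train])
                pred = model.predict(surrogate[test].reshape(-1, 1))
                errors.append(np.abs(pred - efficacy[test]).mean())
            return float(np.mean(errors))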

  19. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
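
    The resampling scheme itself is simple to state: draw seeds with replacement, then within each recruitment tree recursively draw every node's recruits with replacement. A minimal sketch with our own data structures (not the authors' code) follows.

        import random

        def tree_bootstrap(seeds, children):
            # seeds: list of seed ids; children: dict node -> list of recruits.
            def resample(node):
                kids = children.get(node, [])
                picked = [random.choice(kids) for _ in kids] if kids else []
                branch = [node]
                for k in picked:                  # recurse into resampled recruits
                    branch.extend(resample(k))
                return branch
            sample = []
            for s in [random.choice(seeds) for _ in seeds]:
                sample.extend(resample(s))
            return sample

    Repeating this B times and recomputing the RDS estimate on each bootstrap sample yields the uncertainty interval; because the procedure touches only the tree structure, never the measured attributes, correlations between attributes can be estimated as well.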

  20. Determination of wind tunnel constraint effects by a unified pressure signature method. Part 2: Application to jet-in-crossflow

    NASA Technical Reports Server (NTRS)

    Hackett, J. E.; Sampath, S.; Phillips, C. G.

    1981-01-01

    The development of an improved jet-in-crossflow model for estimating wind tunnel blockage and angle-of-attack interference is described. Experiments showed that the simpler existing models fall seriously short of representing far-field flows properly. A new, vortex-source-doublet (VSD) model was therefore developed which employs curved trajectories and experimentally-based singularity strengths. The new model is consistent with existing and new experimental data and it predicts tunnel wall (i.e. far-field) pressures properly. It is implemented as a preprocessor to the wall-pressure-signature-based tunnel interference predictor. The supporting experiments and theoretical studies revealed some new results. Comparative flow field measurements with 1-inch "free-air" and 3-inch impinging jets showed that vortex penetration into the flow, in diameters, was almost unaltered until 'hard' impingement occurred. In modeling impinging cases, a 'plume redirection' term was introduced which is apparently absent in previous models. The effects of this term were found to be very significant.

  1. Intermittent sea-level acceleration

    NASA Astrophysics Data System (ADS)

    Olivieri, M.; Spada, G.

    2013-10-01

    Using instrumental observations from the Permanent Service for Mean Sea Level (PSMSL), we provide a new assessment of the global sea-level acceleration for the last ~2 centuries (1820-2010). Our results, obtained by a stack of tide gauge time series, confirm the existence of a global sea-level acceleration (GSLA) and, consistently with independent assessments so far, point to a value close to 0.01 mm/yr². However, unlike previous studies, we discuss how change points or abrupt inflections in individual sea-level time series have contributed to the GSLA. Our analysis, based on methods borrowed from econometrics, suggests the existence of two distinct driving mechanisms for the GSLA, both involving a minority of tide gauges globally. The first effectively implies a gradual increase in the rate of sea-level rise at individual tide gauges, while the second is manifest through a sequence of catastrophic variations of the sea-level trend. These occurred intermittently since the end of the 19th century and became more frequent during the last four decades.
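
    For readers who want to reproduce the headline number, a conventional operational definition of the acceleration is twice the quadratic coefficient of a second-order fit to the stacked series; a minimal sketch (with the input arrays left to the reader) is below.

        import numpy as np

        def gsla(years, stacked_mm):
            # Fit h(t) = c2*t^2 + c1*t + c0 to the stacked tide-gauge series;
            # the acceleration is 2*c2, in mm/yr^2 if h is in mm and t in years.
            t = np.asarray(years, dtype=float)
            h = np.asarray(stacked_mm, dtype=float)
            c2, c1, c0 = np.polyfit(t - t.mean(), h, deg=2)   # highest degree first
            return 2.0 * c2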

  2. From "Where" to "What": Distributed Representations of Brand Associations in the Human Brain.

    PubMed

    Chen, Yu-Ping; Nelson, Leif D; Hsu, Ming

    2015-08-01

    Considerable attention has been given to the notion that there exists a set of human-like characteristics associated with brands, referred to as brand personality. Here we combine newly available machine learning techniques with functional neuroimaging data to characterize the set of processes that give rise to these associations. We show that brand personality traits can be captured by the weighted activity across a widely distributed set of brain regions previously implicated in reasoning, imagery, and affective processing. That is, as opposed to being constructed via reflective processes, brand personality traits appear to exist a priori inside the minds of consumers, such that we were able to predict what brand a person is thinking about based solely on the relationship between brand personality associations and brain activity. These findings represent an important advance in the application of neuroscientific methods to consumer research, moving from work focused on cataloguing brain regions associated with marketing stimuli to testing and refining mental constructs central to theories of consumer behavior.

  3. Retrofitting existing chemical scrubbers to biotrickling filters for H2S emission control

    PubMed Central

    Gabriel, David; Deshusses, Marc A.

    2003-01-01

    Biological treatment is a promising alternative to conventional air-pollution control methods, but thus far biotreatment processes for odor control have always required much larger reactor volumes than chemical scrubbers. We converted an existing full-scale chemical scrubber to a biological trickling filter and showed that effective treatment of hydrogen sulfide (H2S) in the converted scrubber was possible even at gas contact times as low as 1.6 s. That is 8–20 times shorter than previous biotrickling filtration reports and comparable to usual contact times in chemical scrubbers. Significant removal of reduced sulfur compounds, ammonia, and volatile organic compounds present in traces in the air was also observed. Continuous operation for >8 months showed stable performance and robust behavior for H2S treatment, with pollutant-removal performance comparable to that achieved by using a chemical scrubber. Our study demonstrates that biotrickling filters can replace chemical scrubbers and be a safer, more economical technique for odor control. PMID:12740445

  4. Retrofitting existing chemical scrubbers to biotrickling filters for H2S emission control.

    PubMed

    Gabriel, David; Deshusses, Marc A

    2003-05-27

    Biological treatment is a promising alternative to conventional air-pollution control methods, but thus far biotreatment processes for odor control have always required much larger reactor volumes than chemical scrubbers. We converted an existing full-scale chemical scrubber to a biological trickling filter and showed that effective treatment of hydrogen sulfide (H2S) in the converted scrubber was possible even at gas contact times as low as 1.6 s. That is 8-20 times shorter than previous biotrickling filtration reports and comparable to usual contact times in chemical scrubbers. Significant removal of reduced sulfur compounds, ammonia, and volatile organic compounds present in traces in the air was also observed. Continuous operation for >8 months showed stable performance and robust behavior for H2S treatment, with pollutant-removal performance comparable to that achieved by using a chemical scrubber. Our study demonstrates that biotrickling filters can replace chemical scrubbers and be a safer, more economical technique for odor control.

  5. Building the United States National Vegetation Classification

    USGS Publications Warehouse

    Franklin, S.B.; Faber-Langendoen, D.; Jennings, M.; Keeler-Wolf, T.; Loucks, O.; Peet, R.; Roberts, D.; McKerrow, A.

    2012-01-01

    The Federal Geographic Data Committee (FGDC) Vegetation Subcommittee, the Ecological Society of America Panel on Vegetation Classification, and NatureServe have worked together to develop the United States National Vegetation Classification (USNVC). The current standard was accepted in 2008 and fosters consistency across Federal agencies and non-federal partners for the description of each vegetation concept and its hierarchical classification. The USNVC is structured as a dynamic standard, where changes to types at any level may be proposed at any time as new information comes in. However, because much information already exists from previous work, the NVC partners first established methods for screening existing types to determine their acceptability with respect to the 2008 standard. Current efforts include a screening process to assign confidence to Association- and Group-level descriptions, and a review of the upper three levels of the classification. For the upper levels especially, the expectation is that the review process will include international scientists. Immediate future efforts include the review of the remaining levels and the development of a proposal review process.

  6. Advantage of hole stimulus in rivalry competition.

    PubMed

    Meng, Qianli; Cui, Ding; Zhou, Ke; Chen, Lin; Ma, Yuanye

    2012-01-01

    Mounting psychophysical evidence suggests that early visual computations are sensitive to the topological properties of stimuli, such as whether an object has a hole or not. Previous studies have demonstrated that the hole feature confers certain advantages during conscious perception. In this study, we investigate whether privileged processing of hole stimuli also exists during unconscious perception. Applying a continuous flash suppression paradigm, the target was gradually introduced to one eye to compete against a flashed full-contrast Mondrian pattern presented to the other eye. This method ensured that the target image was suppressed during the initial perceptual period. We compared the initial suppressed duration between stimuli with and without the hole feature and found that hole stimuli required less time than no-hole stimuli to gain dominance against identical suppression noise. These results suggest that the hole feature can be processed in the absence of awareness, and that a privileged detection of hole stimuli exists during the suppressed phase of interocular rivalry.

  7. From “Where” to “What”: Distributed Representations of Brand Associations in the Human Brain

    PubMed Central

    Chen, Yu-Ping; Nelson, Leif D.; Hsu, Ming

    2015-01-01

    Considerable attention has been given to the notion that there exists a set of human-like characteristics associated with brands, referred to as brand personality. Here we combine newly available machine learning techniques with functional neuroimaging data to characterize the set of processes that give rise to these associations. We show that brand personality traits can be captured by the weighted activity across a widely distributed set of brain regions previously implicated in reasoning, imagery, and affective processing. That is, as opposed to being constructed via reflective processes, brand personality traits appear to exist a priori inside the minds of consumers, such that we were able to predict what brand a person is thinking about based solely on the relationship between brand personality associations and brain activity. These findings represent an important advance in the application of neuroscientific methods to consumer research, moving from work focused on cataloguing brain regions associated with marketing stimuli to testing and refining mental constructs central to theories of consumer behavior. PMID:27065490

  8. Internal rotation of 13 low-mass low-luminosity red giants in the Kepler field

    NASA Astrophysics Data System (ADS)

    Triana, S. A.; Corsaro, E.; De Ridder, J.; Bonanno, A.; Pérez Hernández, F.; García, R. A.

    2017-06-01

    Context. The Kepler space telescope has provided time series of red giants of such unprecedented quality that a detailed asteroseismic analysis becomes possible. For a limited set of about a dozen red giants, the observed oscillation frequencies obtained by peak-bagging together with the most recent pulsation codes allowed us to reliably determine the core/envelope rotation ratio. The results so far show that the current models are unable to reproduce the rotation ratios, predicting higher values than what is observed and thus indicating that an efficient angular momentum transport mechanism should be at work. Here we provide an asteroseismic analysis of a sample of 13 low-luminosity low-mass red giant stars observed by Kepler during its first nominal mission. These targets form a subsample of the 19 red giants studied previously, which not only have a large number of extracted oscillation frequencies, but also unambiguous mode identifications. Aims: We aim to extend the sample of red giants for which internal rotation ratios obtained by theoretical modeling of peak-bagged frequencies are available. We also derive the rotation ratios using different methods, and compare the results of these methods with each other. Methods: We built seismic models using a grid search combined with a Nelder-Mead simplex algorithm and obtained rotation averages employing Bayesian inference and inversion methods. We compared these averages with those obtained using a previously developed model-independent method. Results: We find that the cores of the red giants in this sample are rotating 5 to 10 times faster than their envelopes, which is consistent with earlier results. The rotation rates computed from the different methods show good agreement for some targets, while some discrepancies exist for others.

  9. Differences in quantitative assessment of myocardial scar and gray zone by LGE-CMR imaging using established gray zone protocols.

    PubMed

    Mesubi, Olurotimi; Ego-Osuala, Kelechi; Jeudy, Jean; Purtilo, James; Synowski, Stephen; Abutaleb, Ameer; Niekoop, Michelle; Abdulghani, Mohammed; Asoglu, Ramazan; See, Vincent; Saliaris, Anastasios; Shorofsky, Stephen; Dickfeld, Timm

    2015-02-01

    Late gadolinium enhancement cardiac magnetic resonance (LGE-CMR) imaging is the gold standard for myocardial scar evaluation. Heterogeneous areas of scar ('gray zone') may serve as arrhythmogenic substrate. Various gray zone protocols have been correlated to clinical outcomes and ventricular tachycardia channels. This study assessed the quantitative differences in gray zone and scar core sizes as defined by previously validated signal intensity (SI) threshold algorithms. High-quality LGE-CMR images acquired in 41 cardiomyopathy patients [ischemic (33) or non-ischemic (8)] were analyzed using previously validated SI threshold methods [Full Width at Half Maximum (FWHM), n-standard deviation (NSD) and modified-FWHM]. Myocardial scar was defined as scar core and gray zone using SI thresholds based on these methods. Scar core, gray zone and total scar sizes were then computed and compared among these models. The median gray zone mass was 2-3 times larger with FWHM (15 g, IQR: 8-26 g) compared to NSD or modified-FWHM (5 g, IQR: 3-9 g; and 8 g, IQR: 6-12 g, respectively; p < 0.001). Conversely, infarct core mass was 2.3 times larger with NSD (30 g, IQR: 17-53 g) versus FWHM and modified-FWHM (13 g, IQR: 7-23 g; p < 0.001). The gray zone extent (percentage of total scar that was gray zone) also varied significantly among the three methods: 51% (IQR: 42-61%), 17% (IQR: 11-21%) and 38% (IQR: 33-43%) for FWHM, NSD and modified-FWHM, respectively (p < 0.001). Considerable variability exists among the current methods for MRI-defined gray zone and scar core. Infarct core and total myocardial scar mass also differ using these methods. Further evaluation of the most accurate quantification method is needed.
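
    The threshold families compared in the study can be sketched as mask computations on the signal-intensity (SI) image. The exact cut-offs differ between published protocols, so the percentages and SD multiples below are one common parameterisation, not the study's definitive settings.

        import numpy as np

        def scar_masks_fwhm(si, remote):
            # FWHM-style thresholds: core is SI >= 50% of the maximum; gray zone
            # lies between the peak remote-myocardium SI and that 50% level.
            core = si >= 0.5 * si.max()
            gray = (si > remote.max()) & ~core
            return core, gray

        def scar_masks_nsd(si, remote, n_core=3.0, n_gray=2.0):
            # n-SD-style thresholds: core >= mean + n_core*SD of remote tissue;
            # gray zone between mean + n_gray*SD and the core threshold.
            mu, sd = remote.mean(), remote.std()
            core = si >= mu + n_core * sd
            gray = (si >= mu + n_gray * sd) & ~core
            return core, gray

    Summing the voxel volumes under each mask gives the core and gray zone masses whose divergence across methods the study quantifies.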

  10. Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach

    PubMed Central

    Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie

    2012-01-01

    Purpose To propose a method to correct Optical Coherence Tomography (OCT) images of posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution and explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Methods 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division, refraction correction: assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley and lens thickness shifts from the nominal data. Results Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6μm and 13μm respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8μm. Conclusions The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique. The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations. PMID:22466105

  11. A fluorescence high throughput screening method for the detection of reactive electrophiles as potential skin sensitizers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avonto, Cristina; Chittiboyina, Amar G.; Rua, Diego

    2015-12-01

    Skin sensitization is an important toxicological end-point in the risk assessment of chemical allergens. Because of the complexity of the biological mechanisms associated with skin sensitization, integrated approaches combining different chemical, biological and in silico methods are recommended to replace conventional animal tests. Chemical methods are intended to characterize the potential of a sensitizer to induce earlier molecular initiating events. The presence of an electrophilic mechanistic domain is considered one of the essential chemical features to covalently bind to the biological target and induce further haptenation processes. Current in chemico assays rely on the quantification of unreacted model nucleophiles after incubation with the candidate sensitizer. In the current study, a new fluorescence-based method, the 'HTS-DCYA assay', is proposed. The assay aims at the identification of reactive electrophiles based on their chemical reactivity toward a model fluorescent thiol. The reaction workflow enabled the development of a High Throughput Screening (HTS) method to directly quantify the reaction adducts. The reaction conditions have been optimized to minimize solubility issues and oxidative side reactions and to increase the throughput of the assay while minimizing the reaction time, which are common issues with existing methods. Thirty-six chemicals previously classified with LLNA, DPRA or KeratinoSens™ were tested as a proof of concept. Preliminary results gave an estimated 82% accuracy, 78% sensitivity and 90% specificity, comparable to other in chemico methods such as Cys-DPRA. In addition to validated chemicals, six natural products were analyzed and a prediction of their sensitization potential is presented for the first time. - Highlights: • A novel fluorescence-based method to detect electrophilic sensitizers is proposed. • A model fluorescent thiol was used to directly quantify the reaction products. • A discussion of the reaction workflow and critical parameters is presented. • The method could provide a useful tool to complement existing chemical assays.

  12. 50 CFR 253.11 - Guarantee policy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., except: (1) Vessel construction. The Program will not finance this project cost. The Program will only refinance this project cost for an existing vessel whose previous construction cost has already been financed (or otherwise paid). Refinancing this project cost for a vessel that already exists is not...

  13. 75 FR 52253 - Airworthiness Directives; GA 8 Airvan (Pty) Ltd Models GA8 and GA8-TC320 Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-25

    ... amendment is issued to update the service bulletin to remove any ambiguities that could have existed in the... ambiguities that could have existed in the previous revision to the referenced service bulletin. It also...

  14. The ACCE method: an approach for obtaining quantitative or qualitative estimates of residual confounding that includes unmeasured confounding

    PubMed Central

    Smith, Eric G.

    2015-01-01

    Background:  Nonrandomized studies typically cannot account for confounding from unmeasured factors.  Method:  A method is presented that exploits the recently-identified phenomenon of  “confounding amplification” to produce, in principle, a quantitative estimate of total residual confounding resulting from both measured and unmeasured factors.  Two nested propensity score models are constructed that differ only in the deliberate introduction of an additional variable(s) that substantially predicts treatment exposure.  Residual confounding is then estimated by dividing the change in treatment effect estimate between models by the degree of confounding amplification estimated to occur, adjusting for any association between the additional variable(s) and outcome. Results:  Several hypothetical examples are provided to illustrate how the method produces a quantitative estimate of residual confounding if the method’s requirements and assumptions are met.  Previously published data is used to illustrate that, whether or not the method routinely provides precise quantitative estimates of residual confounding, the method appears to produce a valuable qualitative estimate of the likely direction and general size of residual confounding. Limitations:  Uncertainties exist, including identifying the best approaches for: 1) predicting the amount of confounding amplification, 2) minimizing changes between the nested models unrelated to confounding amplification, 3) adjusting for the association of the introduced variable(s) with outcome, and 4) deriving confidence intervals for the method’s estimates (although bootstrapping is one plausible approach). Conclusions:  To this author’s knowledge, it has not been previously suggested that the phenomenon of confounding amplification, if such amplification is as predictable as suggested by a recent simulation, provides a logical basis for estimating total residual confounding. The method's basic approach is straightforward.  The method's routine usefulness, however, has not yet been established, nor has the method been fully validated. Rapid further investigation of this novel method is clearly indicated, given the potential value of its quantitative or qualitative output. PMID:25580226

  15. POLYBROMINATED DIPHENYL ETHERS (PBDES) IN AMERICAN MOTHERS' MILK

    EPA Science Inventory

    No previous reports exist on polybrominated diphenyl ether (PBDE) congeners in individual American mothers' milk. This report on PBDEs is an extension of our previous studies on concentrations of dioxins, dibenzofurans, PCBs, and other chlorinated organics in human milk in a num...

  16. Analysis of Over 10,000 Cases Finds No Association between Previously-Reported Candidate Polymorphisms and Ovarian Cancer Outcome

    PubMed Central

    White, Kristin L.; Vierkant, Robert A.; Fogarty, Zachary C.; Charbonneau, Bridget; Block, Matthew S.; Pharoah, Paul D.P.; Chenevix-Trench, Georgia; Rossing, Mary Anne; Cramer, Daniel W.; Pearce, C. Leigh; Schildkraut, Joellen M.; Menon, Usha; Kjaer, Susanne Kruger; Levine, Douglas A.; Gronwald, Jacek; Culver, Hoda Anton; Whittemore, Alice S.; Karlan, Beth Y.; Lambrechts, Diether; Wentzensen, Nicolas; Kupryjanczyk, Jolanta; Chang-Claude, Jenny; Bandera, Elisa V.; Hogdall, Estrid; Heitz, Florian; Kaye, Stanley B.; Fasching, Peter A.; Campbell, Ian; Goodman, Marc T.; Pejovic, Tanja; Bean, Yukie; Lurie, Galina; Eccles, Diana; Hein, Alexander; Beckmann, Matthias W.; Ekici, Arif B.; Paul, James; Brown, Robert; Flanagan, James; Harter, Philipp; du Bois, Andreas; Schwaab, Ira; Hogdall, Claus K.; Lundvall, Lene; Olson, Sara H.; Orlow, Irene; Paddock, Lisa E.; Rudolph, Anja; Eilber, Ursula; Dansonka-Mieszkowska, Agnieszka; Rzepecka, Iwona K.; Ziolkowska-Seta, Izabela; Brinton, Louise; Yang, Hannah; Garcia-Closas, Montserrat; Despierre, Evelyn; Lambrechts, Sandrina; Vergote, Ignace; Walsh, Christine; Lester, Jenny; Sieh, Weiva; McGuire, Valerie; Rothstein, Joseph H.; Ziogas, Argyrios; Lubiński, Jan; Cybulski, Cezary; Menkiszak, Janusz; Jensen, Allan; Gayther, Simon A.; Ramus, Susan J.; Gentry-Maharaj, Aleksandra; Berchuck, Andrew; Wu, Anna H.; Pike, Malcolm C.; Van Den Berg, David; Terry, Kathryn L.; Vitonis, Allison F.; Doherty, Jennifer A.; Johnatty, Sharon; deFazio, Anna; Song, Honglin; Tyrer, Jonathan; Sellers, Thomas A.; Phelan, Catherine M.; Kalli, Kimberly R.; Cunningham, Julie M.; Fridley, Brooke L.; Goode, Ellen L.

    2013-01-01

    Background Ovarian cancer is a leading cause of cancer-related death among women. In an effort to understand contributors to disease outcome, we evaluated single-nucleotide polymorphisms (SNPs) previously associated with ovarian cancer recurrence or survival, specifically in angiogenesis, inflammation, mitosis, and drug disposition genes. Methods Twenty-seven SNPs in VHL, HGF, IL18, PRKACB, ABCB1, CYP2C8, ERCC2, and ERCC1 previously associated with ovarian cancer outcome were genotyped in 10,084 invasive cases from 28 studies from the Ovarian Cancer Association Consortium with over 37,000 observed person-years and 4,478 deaths. Cox proportional hazards models were used to examine the association between candidate SNPs and ovarian cancer recurrence or survival with and without adjustment for key covariates. Results We observed no association between genotype and ovarian cancer recurrence or survival for any of the SNPs examined. Conclusions These results refute prior associations between these SNPs and ovarian cancer outcome and underscore the importance of maximally powered genetic association studies. Impact These variants should not be used in prognostic models. Alternate approaches to uncovering inherited prognostic factors, if they exist, are needed. PMID:23513043

  17. Connections between residence time distributions and watershed characteristics across the continental US

    NASA Astrophysics Data System (ADS)

    Condon, L. E.; Maxwell, R. M.; Kollet, S. J.; Maher, K.; Haggerty, R.; Forrester, M. M.

    2016-12-01

    Although previous studies have demonstrated fractal residence time distributions in small watersheds, analyzing residence time scaling over large spatial areas is difficult with existing observational methods. For this study we use a fully integrated groundwater surface water simulation combined with Lagrangian particle tracking to evaluate connections between residence time distributions and watershed characteristics such as geology, topography and climate. Our simulation spans more than six million square kilometers of the continental US, encompassing a broad range of watershed sizes and physiographic settings. Simulated results demonstrate power law residence time distributions with peak ages ranging from 1.5 to 10.5 years. These ranges agree well with previous observational work and demonstrate the feasibility of using integrated models to simulate residence times. Comparing behavior between eight major watersheds, we show spatial variability in both the peak and the variance of the residence time distributions that can be related to model inputs. Peak age is well correlated with basin-averaged hydraulic conductivity, and the semi-variance corresponds to aridity. While power law age distributions have previously been attributed to fractal topography, these results illustrate the importance of subsurface characteristics and macro-climate as additional controls on groundwater configuration and residence times.

  18. We should ban the OPCAB approach in CABG, just as we should ban jetliners and bicycles, or maybe not!

    PubMed Central

    2016-01-01

    Implementing a new technical process demands complex preparation. In cardiac surgery this complex preparation is often reduced to visiting a surgeon who is familiar with the technique. The science of learning has identified several steps needed for a successful implementation. The first step is the creation of a complete conceptual approach; this demands setting down in writing the actions and reactions of every involved party in the new approach. By definition, a successful implementation starts with the creation of a group of involved individuals willing to collaborate towards a new goal. Then every teachable component described in this concept needs to be worked out in simulation training, from the smallest manual step to complete scenario training for complex situations. Finally, optimal organisational learning requires an existing database of the previous situation, a clear goal and objective, and a new database in which every new approach is restudied against the previous one, using appropriate methods of correction for variability. A complete implementation will always be more successful than a partial one, owing to the tendency in partial implementations to return to previous routines. PMID:27942400

  19. Quasi-linear theory of electron density and temperature fluctuations with application to MHD generators and MPD arc thrusters

    NASA Technical Reports Server (NTRS)

    Smith, M.

    1972-01-01

    Fluctuations in electron density and temperature coupled through Ohm's law are studied for an ionizable medium. The nonlinear effects are considered in the limit of a third order quasi-linear treatment. Equations are derived for the amplitude of the fluctuation. Conditions under which a steady state can exist in the presence of the fluctuation are examined and effective transport properties are determined. A comparison is made to previously considered second order theory. The effect of third order terms indicates the possibility of fluctuations existing in regions predicted stable by previous analysis.

  20. Using remote sensing and GIS techniques to estimate discharge and recharge. fluxes for the Death Valley regional groundwater flow system, USA

    USGS Publications Warehouse

    D'Agnese, F. A.; Faunt, C.C.; Keith, Turner A.

    1996-01-01

    The recharge and discharge components of the Death Valley regional groundwater flow system were defined by remote sensing and GIS techniques that integrated disparate data types to develop a spatially complex representation of near-surface hydrological processes. Image classification methods were applied to multispectral satellite data to produce a vegetation map. This map provided a basis for subsequent evapotranspiration and infiltration estimations. The vegetation map was combined with ancillary data in a GIS to delineate different types of wetlands, phreatophytes and wet playa areas. Existing evapotranspiration-rate estimates were then used to calculate discharge volumes for these areas. A previously used empirical method of groundwater recharge estimation was modified by GIS methods to incorporate data describing soil-moisture conditions, and a recharge potential map was produced. These discharge and recharge maps were readily converted to data arrays for numerical modelling codes. Inverse parameter estimation techniques also used these data to evaluate the reliability and sensitivity of estimated values.

  1. Recoil distance method lifetime measurement of the 2₁⁺ state in 94Sr and implications for the structure of neutron-rich Sr isotopes

    NASA Astrophysics Data System (ADS)

    Chester, A.; Ball, G. C.; Caballero-Folch, R.; Cross, D. S.; Cruz, S.; Domingo, T.; Drake, T. E.; Garnsworthy, A. B.; Hackman, G.; Hallam, S.; Henderson, J.; Henderson, R.; Korten, W.; Krücken, R.; Moukaddam, M.; Olaizola, B.; Ruotsalainen, P.; Smallcombe, J.; Starosta, K.; Svensson, C. E.; Williams, J.; Wimmer, K.

    2017-07-01

    A high precision lifetime measurement of the 2₁⁺ state in 94Sr was performed at TRIUMF's ISAC-II facility by coupling the recoil distance method, implemented via the TIGRESS integrated plunger, with unsafe Coulomb excitation in inverse kinematics. Due to limited statistics imposed by the use of a radioactive 94Sr beam, a likelihood-ratio χ² method was derived and used to compare experimental data to Geant4 simulations. The B(E2; 2₁⁺ → 0₁⁺) value extracted from the lifetime measurement of 7.80 (+0.50/−0.40) (stat.) ± 0.07 (sys.) ps is approximately 25% larger than previously reported, while the relative error has been reduced by a factor of approximately 8. A baseline deformation has been established for Sr isotopes with N ≤ 58, which is a necessary condition for the quantum phase transition interpretation of the onset of deformation in this region. A comparison to existing theoretical models is presented.

  2. A Review of Computational Methods for Finding Non-Coding RNA Genes

    PubMed Central

    Abbas, Qaisar; Raza, Syed Mansoor; Biyabani, Azizuddin Ahmed; Jaffar, Muhammad Arfan

    2016-01-01

    Finding non-coding RNA (ncRNA) genes has emerged over the past few years as a cutting-edge trend in bioinformatics. There are numerous computational intelligence (CI) challenges in the annotation and interpretation of ncRNAs because the task requires domain-related expert knowledge of CI techniques. Moreover, many predicted classes have not yet been experimentally verified. Recently, researchers have applied many CI methods to predict the classes of ncRNAs. However, the diverse CI approaches lack a definitive classification framework that takes advantage of past studies. A few review papers have attempted to summarize CI approaches, but focused on particular methodological viewpoints. Accordingly, in this article, we summarize, in greater detail than previously available, the CI techniques for finding ncRNA genes. We differentiate our review from existing bodies of research and concisely discuss the technical merits of the various techniques. Lastly, we review the limitations of ncRNA gene-finding CI methods with a view towards the development of new computational tools. PMID:27918472

  3. A Comparative Analysis of Perceptions of Pharmacy Students’ Stress and Stressors across Two Multicampus Universities

    PubMed Central

    Gaither, Caroline A.; Crawford, Stephanie Y.; Tieman, Jami

    2016-01-01

    Objective. To compare perceived levels of stress, stressors, and academic self-efficacy among students at two multicampus colleges of pharmacy. Methods. A survey instrument using previously validated items was developed and administered to first-year, second-year, and third-year pharmacy students at two universities with multiple campuses in spring 2013. Results. Eight hundred twenty students out of 1115 responded (73.5% response rate). Institutional differences were found in perceived student stress levels, self-efficacy, and stress-related causes. An interaction effect was demonstrated between institution and campus type (main or branch) for perceived stress and self-efficacy although campus type alone did not demonstrate a direct effect. Institutional and campus differences existed in awareness of campus counseling services, as did a few differences in coping methods. Conclusion. Stress measures were similar for pharmacy students at main or branch campuses. Institutional differences in student stress might be explained by instructional methods, campus support services, institutional climate, and nonuniversity factors. PMID:27402985

  4. System identification for Space Station Freedom using observer/Kalman filter Markov parameters. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Papadopoulos, Michael; Tolson, Robert H.

    1993-01-01

    The Modal Identification Experiment (MIE) is a proposed experiment to define the dynamic characteristics of Space Station Freedom. Previous studies emphasized free-decay modal identification. The feasibility of using a forced response method (Observer/Kalman Filter Identification (OKID)) is addressed. The interest in using OKID is to determine the input mode shape matrix which can be used for controller design or control-structure interaction analysis, and investigate if forced response methods may aid in separating closely spaced modes. A model of the SC-7 configuration of Space Station Freedom was excited using simulated control system thrusters to obtain acceleration output. It is shown that an 'optimum' number of outputs exists for OKID. To recover global mode shapes, a modified method called Global-Local OKID was developed. This study shows that using data from a long forced response followed by free-decay leads to the 'best' modal identification. Twelve out of the thirteen target modes were identified for such an output.
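
    The first step of an OKID-style identification is recovering Markov (impulse-response) parameters from input/output records. The sketch below fits plain FIR Markov parameters by least squares for a single-input/single-output record; OKID proper estimates observer Markov parameters instead (to cope with the light damping of large structures) and then recovers the system ones, a refinement omitted here. All names are ours.

        import numpy as np

        def markov_parameters(u, y, n_params):
            # Model y[k] = sum_{i=0}^{n_params-1} h[i] * u[k-i]; solve for h
            # by least squares over all samples with a full input history.
            u, y = np.asarray(u, float), np.asarray(y, float)
            rows = [u[k::-1][:n_params] for k in range(n_params - 1, len(u))]
            h, *_ = np.linalg.lstsq(np.array(rows), y[n_params - 1:], rcond=None)
            return h

    In a full pipeline the estimated Markov parameters would typically be passed to the Eigensystem Realization Algorithm to extract the frequencies and mode shapes discussed above.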

  5. Early‐Stage Capital Cost Estimation of Biorefinery Processes: A Comparative Study of Heuristic Techniques

    PubMed Central

    Couturier, Jean‐Luc; Kokossis, Antonis; Dubois, Jean‐Luc

    2016-01-01

    Biorefineries offer a promising alternative to fossil‐based processing industries and have undergone rapid development in recent years. Limited financial resources and stringent company budgets necessitate quick capital estimation of pioneering biorefinery projects at the early stages of their conception to screen process alternatives, decide on project viability, and allocate resources to the most promising cases. Biorefineries are capital‐intensive projects that involve state‐of‐the‐art technologies for which there is no prior experience or sufficient historical data. This work reviews existing rapid cost estimation practices, which can be used by researchers with no previous cost estimating experience. It also comprises a comparative study of six cost methods on three well‐documented biorefinery processes to evaluate their accuracy and precision. The results illustrate discrepancies among the methods because their extrapolation on biorefinery data often violates inherent assumptions. This study recommends the most appropriate rapid cost methods and urges the development of an improved early‐stage capital cost estimation tool suitable for biorefinery processes. PMID:27484398
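
    To give a flavour of the heuristic techniques reviewed (this particular method is our illustrative pick, not necessarily one of the six benchmarked), the classical Lang-factor estimate multiplies the sum of purchased-equipment costs by a single plant-type factor; all numbers below are hypothetical.

        # Lang-factor rapid capital estimate: total capital investment is a
        # plant-type multiplier times the sum of purchased-equipment costs.
        LANG_FACTOR_FLUID = 4.74                   # classical fluid-processing factor

        equipment_costs_musd = [2.1, 0.8, 1.5]     # purchased equipment, in M$ (made up)
        total_capital = LANG_FACTOR_FLUID * sum(equipment_costs_musd)
        print(f"Estimated total capital investment: {total_capital:.1f} M$")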

  6. The online community based decision making support system for mitigating biased decision making

    NASA Astrophysics Data System (ADS)

    Kang, Sunghyun; Seo, Jiwan; Choi, Seungjin; Kim, Junho; Han, Sangyong

    2016-10-01

    As Internet technology and social media advance, various information and opinions are shared and distributed through online communities. However, the implicit and explicit bias of opinions may influence the outcomes of community decisions. Compared to the importance of mitigating biased information, study in this field is relatively young and leaves many important issues unaddressed. In this paper we propose a novel approach to mitigating biased opinions using conventional machine learning methods. The proposed method extracts useful features, such as the inclination and sentiment of community members, classifies members based on their previous behavior, and thereby characterizes each member's propensity. This information on each community and its members improves the ability to make an unbiased decision. The proposed method is shown to assist optimal and fair decision making while reducing the influence of implicit bias.
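
    One plausible reading of the pipeline, sketched with scikit-learn, is the following; the feature choices, labels and weighting rule are all our assumptions. Each member's partisanship is scored from behavioural features, and opinions from strongly partisan members are down-weighted during aggregation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical training data: features are (mean sentiment polarity of
        # past posts, fraction of one-sided votes); 1 = partisan, 0 = balanced.
        X_train = np.array([[0.9, 0.95], [0.8, 0.9], [0.1, 0.2], [0.0, 0.1]])
        y_train = np.array([1, 1, 0, 0])
        clf = LogisticRegression().fit(X_train, y_train)

        def weighted_opinion(opinions, features):
            # Down-weight members the classifier considers partisan.
            p_partisan = clf.predict_proba(np.asarray(features))[:, 1]
            return float(np.average(opinions, weights=1.0 - p_partisan))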

  7. Finite Element Analysis of Flexural Vibrations in Hard Disk Drive Spindle Systems

    NASA Astrophysics Data System (ADS)

    LIM, SEUNGCHUL

    2000-06-01

    This paper is concerned with the flexural vibration analysis of the hard disk drive (HDD) spindle system by means of the finite element method. In contrast to previous research, every system component is here analytically modelled taking into account its structural flexibility and also the centrifugal effect particularly on the disk. To prove the effectiveness and accuracy of the formulated models, commercial HDD systems with two and three identical disks are selected as examples. Then their major natural modes are computed with only a small number of element meshes as the shaft rotational speed is varied, and subsequently compared with the existing numerical results obtained using other methods and newly acquired experimental ones. Based on such a series of studies, the proposed method can be concluded as a very promising tool for the design of HDDs and various other high-performance computer disk drives such as floppy disk drives, CD ROM drives, and their variations having spindle mechanisms similar to those of HDDs.

  8. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    Real-time, accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model that incorporates the previously unconsidered geomagnetic daily variation field. It then proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the statistically optimal solution. The experimental results show that the compensated geomagnetic-field strength remains close to the true value, with measurement error essentially controlled within 5 nT. In addition, the compensation method is widely applicable because data collection is easy and it removes the dependence on a high-precision measurement instrument. PMID:28445508
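
    A minimal sketch of the estimation idea, reduced to a linear Kalman filter that tracks a constant scalar bias in field-strength readings; the paper's actual filter is an extended Kalman filter over a much richer error model, so all values below are illustrative.

      # Scalar Kalman filter estimating a constant magnetometer bias (nT).
      # The field value, bias, and noise level are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      true_field, true_bias, noise_std = 50000.0, 120.0, 10.0
      z = true_field + true_bias + noise_std * rng.standard_normal(500)

      x, P, R = 0.0, 1e6, noise_std**2     # bias estimate, its variance, meas. variance
      for zk in z:
          y = zk - true_field - x          # innovation (true_field assumed known here)
          Kg = P / (P + R)                 # Kalman gain
          x, P = x + Kg * y, (1 - Kg) * P
      print(f"estimated bias: {x:.1f} nT")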

  9. Eulerian-Lagrangian Simulations of Transonic Flutter Instabilities

    NASA Technical Reports Server (NTRS)

    Bendiksen, Oddvar O.

    1994-01-01

    This paper presents an overview of recent applications of Eulerian-Lagrangian computational schemes to the simulation of transonic flutter instabilities. In this approach, the fluid-structure system is treated as a single continuum dynamics problem by switching from an Eulerian to a Lagrangian formulation at the fluid-structure boundary. This computational approach effectively eliminates the phase integration errors associated with previous methods, in which the fluid and structure are integrated sequentially using different schemes. The formulation is based on Hamilton's principle in mixed coordinates, and both finite volume and finite element discretization schemes are considered. Results from numerical simulations of transonic flutter instabilities are presented for isolated wings, thin panels, and turbomachinery blades. The results suggest that the method is capable of reproducing the energy exchange between the fluid and the structure with significantly less error than existing methods. Localized flutter modes and panel flutter modes involving traveling waves can also be simulated effectively with no a priori knowledge of the type of instability involved.
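
    To make the contrast concrete, the sketch below integrates a toy two-field linear system both monolithically (one scheme for the whole state, as in the single-continuum approach) and in a sequential, staggered fashion; the toy system is our assumption, not the paper's aeroelastic formulation.

      # Monolithic vs. staggered explicit-Euler integration of a coupled system.
      # The 2x2 system is a toy stand-in for a fluid-structure pair.
      import numpy as np

      A = np.array([[0.0, 1.0], [-1.0, -0.1]])
      dt, steps = 0.01, 1000

      x = np.array([1.0, 0.0])             # monolithic: both states advanced together
      for _ in range(steps):
          x = x + dt * A @ x

      y = np.array([1.0, 0.0])             # staggered: each field uses a lagged partner
      for _ in range(steps):
          y0 = y.copy()
          y[0] = y0[0] + dt * (A[0] @ y0)
          y[1] = y0[1] + dt * (A[1] @ np.array([y[0], y0[1]]))
      print(x, y)                          # the gap between them is the splitting error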

  10. Long-term effects of user preference-oriented recommendation method on the evolution of online system

    NASA Astrophysics Data System (ADS)

    Shi, Xiaoyu; Shang, Ming-Sheng; Luo, Xin; Khushnood, Abbas; Li, Jian

    2017-02-01

    With the explosive growth of the Internet economy, recommender systems have become an important technology for addressing information overload. However, recommenders are not one-size-fits-all: different recommenders have different virtues, making them suitable for different users. In this paper, we propose a novel personalized recommendation scheme based on user preferences, which allows multiple recommenders to coexist in an E-commerce system. We find that the output presented to a given user differs considerably across recommenders, and that recommendation accuracy can be significantly improved if each user is assigned his or her optimal personalized recommender. Furthermore, unlike previous works focusing on short-term effects, we also evaluate the long-term effect of the proposed method by modeling the evolution of the mutual feedback between users and the online system. Compared with a single recommender running on the online system, the proposed method improves recommendation accuracy significantly and achieves a better trade-off between short- and long-term recommendation performance.
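
    A minimal sketch of the selection step, assuming each user is assigned whichever candidate recommender scores best on that user's held-out interactions. The toy recommenders and the hit-rate metric are stand-ins, not the paper's algorithms.

      # Assign each user the recommender with the best held-out accuracy.
      # Recommenders, held-out data, and the metric are hypothetical.
      def pick_recommender(user, recommenders, heldout, accuracy):
          return max(recommenders, key=lambda rec: accuracy(rec(user), heldout[user]))

      recs = [lambda u: ["item1", "item2"],    # e.g. popularity-based
              lambda u: ["item3", "item1"]]    # e.g. similarity-based
      heldout = {"alice": {"item3"}}
      hit_rate = lambda ranked, truth: len(set(ranked) & truth) / max(len(truth), 1)

      best = pick_recommender("alice", recs, heldout, hit_rate)
      print(best("alice"))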

  11. A Comparative Analysis of Perceptions of Pharmacy Students' Stress and Stressors across Two Multicampus Universities.

    PubMed

    Awé, Clara; Gaither, Caroline A; Crawford, Stephanie Y; Tieman, Jami

    2016-06-25

    Objective. To compare perceived levels of stress, stressors, and academic self-efficacy among students at two multicampus colleges of pharmacy. Methods. A survey instrument using previously validated items was developed and administered to first-year, second-year, and third-year pharmacy students at two universities with multiple campuses in spring 2013. Results. Eight hundred twenty students out of 1115 responded (73.5% response rate). Institutional differences were found in perceived student stress levels, self-efficacy, and stress-related causes. An interaction effect was demonstrated between institution and campus type (main or branch) for perceived stress and self-efficacy although campus type alone did not demonstrate a direct effect. Institutional and campus differences existed in awareness of campus counseling services, as did a few differences in coping methods. Conclusion. Stress measures were similar for pharmacy students at main or branch campuses. Institutional differences in student stress might be explained by instructional methods, campus support services, institutional climate, and nonuniversity factors.

  12. Dynamics of Disagreement: Large-Scale Temporal Network Analysis Reveals Negative Interactions in Online Collaboration

    NASA Astrophysics Data System (ADS)

    Tsvetkova, Milena; García-Gavilanes, Ruth; Yasseri, Taha

    2016-11-01

    Disagreement and conflict are a fact of social life. However, negative interactions are rarely explicitly declared and recorded, which makes them hard for scientists to study. In an attempt to understand the structural and temporal features of negative interactions in an online community, we use complex network methods to analyze patterns in the timing and configuration of reverts of article edits on Wikipedia. We investigate how often and how fast pairs of reverts occur compared to a null model, in order to control for patterns that are natural to content production or that are due to the internal rules of Wikipedia. Our results suggest that Wikipedia editors systematically revert the same person, revert back their reverter, and come to defend a reverted editor. We further relate these interactions to the status of the editors involved. Even though individual reverts might not necessarily be negative social interactions, our analysis points to the existence of certain patterns of negative social dynamics within the community of editors. Some of these patterns have not been explored previously and carry implications for the knowledge-collection practice conducted on Wikipedia. Our method can be applied to other large-scale temporal collaboration networks to identify the existence of negative social interactions and other social processes.
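
    A minimal sketch of the null-model comparison, assuming a simple permutation null that shuffles who gets reverted while keeping each editor's activity level fixed; the toy event list and the choice of null are illustrative, not the authors' exact construction.

      # Count mutual-revert pairs in observed events vs. a permutation null.
      import random

      events = [("A", "B"), ("B", "A"), ("C", "B"), ("B", "C")]  # (reverter, reverted)

      def mutual_pairs(evts):
          pairs = {(r, d) for r, d in evts if r != d}
          return sum(1 for r, d in pairs if (d, r) in pairs) // 2

      def null_mean(evts, trials=1000, seed=0):
          rng = random.Random(seed)
          reverters = [r for r, _ in evts]
          reverted = [d for _, d in evts]
          total = 0
          for _ in range(trials):
              rng.shuffle(reverted)               # break correlations only
              total += mutual_pairs(zip(reverters, reverted))
          return total / trials

      print(mutual_pairs(events), null_mean(events))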

  13. Stochastic unilateral free vibration of an in-plane cable network

    NASA Astrophysics Data System (ADS)

    Giaccu, Gian Felice; Barbiellini, Bernardo; Caracoglia, Luca

    2015-03-01

    Cross-ties are often used on cable-stayed bridges to mitigate wind-induced stay vibration, since they can be easily installed on existing systems. The system obtained by connecting two (or more) stays with a transverse restrainer is designated an "in-plane cable network". Failures in the restrainers of an existing network have been observed. In a previous study [1] a model was proposed that explains the cross-tie failures as being related to a loss of the initial pre-tensioning force imparted to the connector. This effect leads to "unilateral" free vibration of the network. Deterministic free vibrations of a three-cable network were investigated using the equivalent linearization method. Since the initial vibration amplitude is often not well known, owing to the complex aeroelastic vibration regimes the stays can experience, the stochastic nature of the problem must be considered. This issue is investigated in the present paper. The free-vibration dynamics of the cable network, driven by an initial stochastic disturbance associated with uncertain vibration amplitudes, is examined. The corresponding random eigenvalue problem for the vibration frequencies is solved through an implementation of stochastic approximation (SA) based on the Robbins-Monro theorem. Monte Carlo methods are also used to validate the SA results.
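
    A minimal sketch of the Robbins-Monro iteration itself, here finding the root of a noisy function's expectation, which is the essence of the stochastic approximation scheme the authors apply to the random eigenvalue problem; the test function is an assumption.

      # Robbins-Monro stochastic approximation: find x with E[f(x)] = 0.
      import numpy as np

      rng = np.random.default_rng(1)
      noisy_f = lambda x: (x - 2.0) + 0.5 * rng.standard_normal()  # E[f] = x - 2

      x = 0.0
      for n in range(1, 5001):
          a_n = 1.0 / n          # steps with sum a_n = inf, sum a_n^2 < inf
          x -= a_n * noisy_f(x)
      print(f"converged near {x:.3f} (true root 2.0)")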

  14. Inferring the Limit Behavior of Some Elementary Cellular Automata

    NASA Astrophysics Data System (ADS)

    Ruivo, Eurico L. P.; de Oliveira, Pedro P. B.

    Cellular automata locally define dynamical systems, discrete in space, time and in the state variables, capable of displaying arbitrarily complex global emergent behavior. One core question in the study of cellular automata refers to their limit behavior, that is, to the global dynamical features in an infinite time evolution. Previous works have shown that for finite time evolutions, the dynamics of one-dimensional cellular automata can be described by regular languages and, therefore, by finite automata. Such studies have shown the existence of growth patterns in the evolution of such finite automata for some elementary cellular automata rules and also inferred the limit behavior of such rules based upon the growth patterns; however, the results on the limit behavior were obtained manually, by direct inspection of the structures that arise during the time evolution. Here we present the formalization of an automatic method to compute such structures. Based on this, the rules of the elementary cellular automata space were classified according to the existence of a growth pattern in their finite automata. Also, we present a method to infer the limit graph of some elementary cellular automata rules, derived from the analysis of the regular expressions that describe their behavior in finite time. Finally, we analyze some attractors of two rules for which we could not compute the whole limit set.
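
    For readers unfamiliar with the objects involved, the sketch below implements one time step of an elementary cellular automaton, with rule 110 chosen purely as a familiar example; the limit-behavior machinery described in the abstract operates on the regular languages generated by such evolutions.

      # One synchronous update of an elementary CA on a periodic ring.
      def eca_step(cells, rule=110):
          table = [(rule >> i) & 1 for i in range(8)]   # rule number -> lookup table
          n = len(cells)
          return [table[(cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]]
                  for i in range(n)]

      row = [0] * 20 + [1] + [0] * 20                   # single seeded cell
      for _ in range(10):
          print("".join(".#"[c] for c in row))
          row = eca_step(row)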

  15. Neutron-induced fission cross section measurements for uranium isotopes 236U and 234U at LANSCE

    NASA Astrophysics Data System (ADS)

    Laptev, A. B.; Tovesson, F.; Hill, T. S.

    2013-04-01

    A well-established program of neutron-induced fission cross-section measurements at the Los Alamos Neutron Science Center (LANSCE) supports the Fuel Cycle Research and Development program (FC R&D). The incident neutron energy range spans from sub-thermal up to 200 MeV by combining two LANSCE facilities, the Lujan Center and the Weapons Neutron Research facility (WNR). The time-of-flight method is used to measure the incident neutron energy, and a parallel-plate fission ionization chamber serves as the fission-fragment detector. The event-rate ratio between the investigated foil and a standard 235U foil is converted into a fission cross-section ratio. In addition to previously measured data, the new measurements include 236U data, which are being analyzed, and 234U data acquired in the 2011-2012 LANSCE run cycle. The new data complete the full suite of uranium isotopes investigated with this experimental approach. The data obtained are presented in comparison with existing evaluations and previous measurements.
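
    A minimal sketch of the ratio method described above: the unknown cross section follows from the event-rate ratio to the 235U standard, corrected for the number of target atoms in each foil. All numbers below are illustrative, not measured values.

      # sigma_x = sigma_235 * (R_x / R_235) * (N_235 / N_x)
      def sigma_from_ratio(rate_x, rate_std, atoms_x, atoms_std, sigma_std):
          return sigma_std * (rate_x / rate_std) * (atoms_std / atoms_x)

      # Hypothetical counts in one energy bin, sigma_std in barns:
      print(sigma_from_ratio(rate_x=1200, rate_std=5000,
                             atoms_x=2.0e18, atoms_std=2.5e18, sigma_std=1.2))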

  16. A stochastic optimization model under modeling uncertainty and parameter certainty for groundwater remediation design--part I. Model development.

    PubMed

    He, L; Huang, G H; Lu, H W

    2010-04-15

    Solving groundwater remediation optimization problems based on proxy simulators can yield optimal solutions that differ from the "true" ones. This study presents a new stochastic optimization model under modeling uncertainty and parameter certainty (SOMUM), together with the associated solution method, for simultaneously addressing the modeling uncertainty associated with simulator residuals and optimizing groundwater remediation processes. This is a new departure from previous modeling efforts, which focused on uncertainty in physical parameters (e.g., soil porosity); the present work instead addresses uncertainty in the mathematical simulator itself, arising from model residuals. Compared to existing modeling approaches, in which only parameter uncertainty is considered, the model has the advantages of providing mean-variance analysis of contaminant concentrations, mitigating the effects of modeling uncertainty on optimal remediation strategies, offering system designers a confidence level for those strategies, and reducing the computational cost of the optimization process.
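
    A minimal sketch of how the mean-variance output might feed a design decision, assuming a normal-quantile chance constraint on the predicted concentration; this is one plausible formulation, not necessarily the paper's exact model.

      # Accept a remediation design only if the concentration limit holds at the
      # chosen confidence level; normality of residuals is an assumption here.
      from statistics import NormalDist

      def meets_limit(pred_mean, resid_std, limit, confidence=0.95):
          z = NormalDist().inv_cdf(confidence)
          return pred_mean + z * resid_std <= limit

      print(meets_limit(pred_mean=4.2, resid_std=0.6, limit=5.0))
      # False: 4.2 + 1.645 * 0.6 = 5.19 exceeds the limit of 5.0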

  17. CACTI: free, open-source software for the sequential coding of behavioral interactions.

    PubMed

    Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B

    2012-01-01

    The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.

  18. Epitaxial Relationships between Calcium Carbonate and Inorganic Substrates

    PubMed Central

    Yang, Taewook; Jho, Jae Young; Kim, Il Won

    2014-01-01

    The polymorph-selective crystallization of calcium carbonate has been studied in terms of the epitaxial relationships between inorganic substrates and the aragonite/calcite polymorphs, with implications for bioinspired mineralization. EpiCalc software was employed to assess previously published experimental results on two different groups of inorganic substrates: aragonitic carbonate crystals (SrCO3, PbCO3, and BaCO3) and a hexagonal crystal family (α-Al2O3, α-SiO2, and LiNbO3). The maximum size of the overlayer (aragonite or calcite) was calculated for each substrate based on a threshold value of the dimensionless potential, to estimate the relative nucleation preference of the calcium carbonate polymorphs. The results were in good agreement with previous experimental observations, although stereochemical effects between the overlayer and substrate had to be considered separately where they existed. In assessing polymorph-selective nucleation, the current method appears to provide a better tool than oversimplified mismatch parameters, without invoking time-consuming molecular simulation. PMID:25226539
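
    For contrast with EpiCalc's dimensionless-potential approach, the sketch below computes the "oversimplified mismatch parameter" the abstract refers to, as a simple one-dimensional lattice mismatch; the lattice constants are illustrative placeholders.

      # Simple percentage lattice mismatch between overlayer and substrate.
      def mismatch_pct(overlayer_a, substrate_a):
          return 100.0 * (overlayer_a - substrate_a) / substrate_a

      # Hypothetical lattice constants (angstroms):
      print(f"{mismatch_pct(7.97, 8.10):+.1f}%")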

  19. A new genus and species of vespertilionid bat from the Indomalayan Region

    PubMed Central

    Ruedi, Manuel; Eger, Judith L; Lim, Burton K

    2018-01-01

    Abstract Bats belonging to the subfamily Vespertilioninae are diverse and cosmopolitan, but their systematic arrangement remains a challenge. Previous molecular surveys suggested new and unexpected relationships of some members compared to more traditional, morphology-based classifications, and revealed the existence of taxonomically undefined lineages. We describe here a new genus and species corresponding to an enigmatic lineage that was previously identified within the genus Eptesicus in the Indomalayan Region. Phylogenetic reconstructions based on mitochondrial and nuclear genes relate the new taxon to Tylonycteris and Philetor, and show that specimens associated with this new genus represent 2 genetically distinct species. Although little is known about their ecology, locations of capture and wing morphology suggest that members of this new genus are tree-dwelling, open-space aerial insect predators. The new species has only been documented from Yok Don National Park in Vietnam, so its conservation status is uncertain until more surveying methods target the bat fauna of the dipterocarp forest in Southeast Asia. PMID:29674788

  20. Study on improving the turbidity measurement of the absolute coagulation rate constant.

    PubMed

    Sun, Zhiwei; Liu, Jie; Xu, Shenghua

    2006-05-23

    The existing theories for evaluating the absolute coagulation rate constant by turbidity measurement were experimentally tested for suspensions of different particle sizes (radius a) at incident wavelengths (lambda) ranging from near-infrared to ultraviolet light. When the size parameter alpha = 2pi a/lambda > 3, the rate-constant data from previous theories for fixed-size particles show significant inconsistencies at different light wavelengths. We attribute this problem to the imperfection of these theories in describing the light scattering from doublets, through their evaluation of the extinction cross section. The evaluations of the rate constant by all previous theories become untenable as the size parameter increases, which limits the applicable range of the turbidity measurement. Using the T-matrix method, we present a robust solution for evaluating the extinction cross section of the doublets formed during aggregation. Our experiments show that this new approach is effective in extending the applicable range of the turbidity methodology and in increasing measurement accuracy.
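
    A minimal sketch of the regime check discussed above: the size parameter alpha = 2pi a/lambda determines where the older turbidity theories break down (alpha > 3). The radii and wavelengths below are illustrative.

      # Size parameter alpha = 2*pi*a/lambda for sample radius/wavelength pairs.
      import math

      def size_parameter(radius_nm, wavelength_nm):
          return 2 * math.pi * radius_nm / wavelength_nm

      for a, lam in [(100, 800), (300, 500), (500, 350)]:
          alpha = size_parameter(a, lam)
          flag = "  (beyond the older theories)" if alpha > 3 else ""
          print(f"a={a} nm, lambda={lam} nm -> alpha={alpha:.2f}{flag}")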
