Sample records for local approach methods

  1. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
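
    As a rough, self-contained illustration of the decomposition idea described above (not the authors' code: the Gaussian correlation model, grid sizes, length scales and mode counts are all assumptions), the sketch below builds 1-D correlation matrices, takes their leading EOFs at low resolution, spline-interpolates them to a finer grid, and forms a square-root factor of the 3-D correlation matrix as a Kronecker product, so that C is approximated by B Bᵀ without ever being decomposed directly.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def corr_1d(coords, length_scale):
    """Gaussian 1-D correlation matrix (an assumed correlation model)."""
    d = coords[:, None] - coords[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def leading_eofs(C, n_modes):
    """Leading eigenvalue/EOF pairs of a symmetric correlation matrix."""
    vals, vecs = np.linalg.eigh(C)
    order = np.argsort(vals)[::-1][:n_modes]
    return vals[order], vecs[:, order]

n_modes = 4
grids = [(np.linspace(0, 1, 20), np.linspace(0, 1, 40), 0.2),   # x: low-res grid, high-res grid, length scale
         (np.linspace(0, 1, 20), np.linspace(0, 1, 40), 0.2),   # y
         (np.linspace(0, 1, 10), np.linspace(0, 1, 20), 0.3)]   # z

factors = []
for lo, hi, ell in grids:
    vals, vecs = leading_eofs(corr_1d(lo, ell), n_modes)
    # Spline-interpolate each low-resolution EOF to the high-resolution grid
    # and scale by sqrt(eigenvalue) to obtain a 1-D square-root factor.
    B = np.column_stack([CubicSpline(lo, vecs[:, k])(hi) * np.sqrt(vals[k])
                         for k in range(n_modes)])
    factors.append(B)

# Mixed-product property: (Bx Bx^T) kron (By By^T) kron (Bz Bz^T)
# = (Bx kron By kron Bz)(Bx kron By kron Bz)^T, so the Kronecker product of
# the 1-D factors is a square-root factor of the separable 3-D correlation.
B3 = np.kron(np.kron(factors[0], factors[1]), factors[2])
print("3-D square-root factor:", B3.shape)        # (40*40*20, 4**3)
```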

  2. An Examination of Rater Performance on a Local Oral English Proficiency Test: A Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Yan, Xun

    2014-01-01

    This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performance from different perspectives. Quantitative results were also triangulated with qualitative rater comments to arrive at a more representative picture of…

  3. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. By global methods we mean those that construct a model for the whole problem space using all available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells using an unsupervised vector quantization algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the resulting assemblage of much easier problems is solved locally with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machines (SVM) and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method showed quite competitive performance compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.
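
    A minimal sketch of this local-global idea follows; it is an illustration under stated assumptions, not the authors' implementation. k-means stands in for the LBG vector quantizer, Gaussian naive Bayes plays the role of the per-cell Bayes-rule step, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: unsupervised vector quantization into local cells
# (k-means used as a stand-in for LBG).
n_cells = 8
vq = KMeans(n_clusters=n_cells, n_init=10, random_state=0).fit(X_tr)

# Stage 2: a simple Bayes-rule classifier per cell; cells containing a single
# class fall back to that class label.
local_models, fallback = {}, {}
for c in range(n_cells):
    mask = vq.labels_ == c
    if len(np.unique(y_tr[mask])) > 1:
        local_models[c] = GaussianNB().fit(X_tr[mask], y_tr[mask])
    else:
        fallback[c] = y_tr[mask][0]

cells_te = vq.predict(X_te)
y_pred = np.array([local_models[c].predict(x[None, :])[0] if c in local_models
                   else fallback[c] for x, c in zip(X_te, cells_te)])
print("local-global accuracy:", (y_pred == y_te).mean())
```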

  4. Evaluation Methodology between Globalization and Localization Features Approaches for Skin Cancer Lesions Classification

    NASA Astrophysics Data System (ADS)

    Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.

    2018-05-01

    Huge efforts have been put into developing diagnostic methods for skin cancer. In this paper, two different approaches are addressed for detecting skin cancer in dermoscopy images. The first approach uses a global method that classifies skin lesions from global features, whereas the second approach uses a local method that classifies skin lesions from local features. The aim of this paper is to select the better approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The achieved results are a sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97% for the globalization approach, while the localization approach achieves a sensitivity of about 100%, specificity of about 100%, precision of about 100%, and accuracy of about 100%. These results show that the localization approach achieves acceptable accuracy and performs better than the globalization approach for skin cancer lesion classification.

  5. Global/local methods for probabilistic structural analysis

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.

    1993-01-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate significant computer savings with minimal loss in accuracy.
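
    The following sketch illustrates the global/local idea in a very stripped-down form: a cheap "global" limit state locates the MPP by a FORM-style search, and a refined "local" model is evaluated only along that direction to adjust the reliability index (and hence the cdf value). Both limit states are invented placeholders and the correction rule is one simple reading of "adjust the cdf value", not the NESSUS algorithm.

```python
import numpy as np
from scipy.optimize import minimize, brentq
from scipy.stats import norm

def g_global(u):          # coarse model in standard normal space (illustrative)
    return 3.0 - u[0] - 0.9 * u[1]

def g_local(u):           # refined model with a slightly different response
    return 3.2 - u[0] - 0.95 * u[1] - 0.05 * u[0] * u[1]

# FORM with the global model: the MPP is the closest point on g=0 to the origin.
cons = {"type": "eq", "fun": g_global}
res = minimize(lambda u: np.dot(u, u), x0=[1.0, 1.0], constraints=[cons])
u_mpp = res.x
beta_global = np.linalg.norm(u_mpp)

# Local correction: rescale the MPP direction so that the refined model is zero.
direction = u_mpp / beta_global
beta_local = brentq(lambda t: g_local(t * direction), 0.1, 10.0)

print("global Pf  ~", norm.cdf(-beta_global))
print("adjusted Pf ~", norm.cdf(-beta_local))
```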

  6. Global/local methods for probabilistic structural analysis

    NASA Astrophysics Data System (ADS)

    Millwater, H. R.; Wu, Y.-T.

    1993-04-01

    A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined with respect to the global model in terms of finer mesh, smaller time step, tighter tolerances, etc. and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program with the finite element method used for the structural modeling. The results clearly indicate significant computer savings with minimal loss in accuracy.

  7. Local Stakeholder Perception on Community Participation in Marine Protected Area Management: A Q-Method Approach

    NASA Astrophysics Data System (ADS)

    Megat Jamual Fawaeed, P. S.; Daim, M. S.

    2018-02-01

    Local stakeholder involvement in Marine Protected Area (MPA) management can lead to a successful MPA. Generally, participatory research in MPA management explores the relationship between the management approach adopted by the management agencies and the level of participation of the local stakeholders who reside within the marine protected areas. However, the state of local community participation in MPA management in Malaysia appears discouraging and does not align with the International Aichi Biodiversity Target 2020. To work towards that target, this paper explores a participatory research methodology applied to the local stakeholders of Pulau Perhentian Marine Park (PPMP), Terengganu, Malaysia. Q-methodology is used to investigate the perspectives of local stakeholders who represent different stances on the issues, by having participants rank and sort a series of statements, combining quantitative and qualitative methods of data collection. A structured questionnaire is employed throughout the study by means of face-to-face interviews, with 210 respondents from Kampung Pasir Hantu selected at random. In addition, a workshop with the responsible agency (Department of Marine Park) was held to discuss the issues faced by the management of the PPMP. Using the Q-method, the researchers identified distinct viewpoints reflecting the stakeholders' perceptions and opinions about community participation and highlighting its current level in MPA management. The paper describes the phases of the study and the methodology and analysis used to reach its conclusions.

  8. SubCellProt: predicting protein subcellular localization using machine learning approaches.

    PubMed

    Garg, Prabha; Sharma, Virag; Chaudhari, Pradeep; Roy, Nilanjan

    2009-01-01

    High-throughput genome sequencing projects continue to churn out enormous amounts of raw sequence data. However, most of this raw sequence data is unannotated and, hence, not very useful. Among the various approaches to decipher the function of a protein, one is to determine its localization. Experimental approaches for proteome annotation including determination of a protein's subcellular localizations are very costly and labor intensive. Besides the available experimental methods, in silico methods present alternative approaches to accomplish this task. Here, we present two machine learning approaches for prediction of the subcellular localization of a protein from the primary sequence information. Two machine learning algorithms, k Nearest Neighbor (k-NN) and Probabilistic Neural Network (PNN) were used to classify an unknown protein into one of the 11 subcellular localizations. The final prediction is made on the basis of a consensus of the predictions made by two algorithms and a probability is assigned to it. The results indicate that the primary sequence derived features like amino acid composition, sequence order and physicochemical properties can be used to assign subcellular localization with a fair degree of accuracy. Moreover, with the enhanced accuracy of our approach and the definition of a prediction domain, this method can be used for proteome annotation in a high throughput manner. SubCellProt is available at www.databases.niper.ac.in/SubCellProt.
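
    The sketch below illustrates the general recipe described in the abstract: amino-acid-composition features, a k-NN classifier, a Parzen-window (PNN-style) classifier, and a consensus of the two. The toy sequences, labels, kernel width and consensus rule are all assumptions for illustration; this is not the SubCellProt implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

AA = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """20-dimensional amino acid composition feature vector."""
    seq = seq.upper()
    return np.array([seq.count(a) for a in AA], dtype=float) / max(len(seq), 1)

def pnn_predict(X_train, y_train, x, sigma=0.1):
    """Parzen-window (PNN-style) class scores with a Gaussian kernel."""
    scores = {}
    for c in np.unique(y_train):
        d2 = np.sum((X_train[y_train == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2 * sigma ** 2)))
    return max(scores, key=scores.get), scores

# Toy training data with hypothetical localization classes.
seqs = ["MKKLLPTAAAGLLLLAAQPAMA", "MLRTSSLFTRRVQPSLFRNILRLQST",
        "MAEGEITTFTALTEKFNLPPGNYKKPKLLY", "MSFTTRSTFSTNYRSLGSVQAPSYGARPVS"]
labels = np.array(["secreted", "mitochondrial", "cytoplasmic", "cytoplasmic"])
X = np.vstack([composition(s) for s in seqs])

knn = KNeighborsClassifier(n_neighbors=1).fit(X, labels)

query = composition("MKKTAAALLLGAAQPALA")
knn_pred = knn.predict(query[None, :])[0]
pnn_pred, _ = pnn_predict(X, labels, query)

# Consensus: agreement -> accept with high confidence; otherwise flag as low.
final = knn_pred if knn_pred == pnn_pred else pnn_pred
confidence = "high" if knn_pred == pnn_pred else "low"
print(final, confidence)
```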

  9. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.
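
    As a minimal illustration of one of the four robustified estimators (a robust weighted centroid for the vertical coordinate), the sketch below down-weights outlying RSS readings with a median/MAD rule before forming the centroid. Access point heights, readings and the screening threshold are invented, not the paper's measurement campaign.

```python
import numpy as np

ap_z = np.array([0.0, 0.0, 3.0, 3.0, 6.0, 6.0, 9.0])       # AP heights (m)
rss = np.array([-55., -58., -62., -60., -75., -78., -30.])  # last reading is an outlier

# Robust screening: reject readings far from the median in MAD units.
med = np.median(rss)
mad = np.median(np.abs(rss - med)) + 1e-9
keep = np.abs(rss - med) / (1.4826 * mad) < 3.0

# RSS-based weights (stronger signal -> larger weight) on the kept readings.
w = 10 ** (rss[keep] / 20.0)
z_hat = np.sum(w * ap_z[keep]) / np.sum(w)

floor_height = 3.0   # assumed storey height
print("estimated height:", z_hat, "-> floor", int(round(z_hat / floor_height)))
```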

  10. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first method is the maximum vessel crossing method, which finds the region with the most number of blood vessel crossing points. The second one is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal region-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database can achieve 81.5%, while a higher result of 99% can be obtained for the ORIGAlight database. The proposed method outperforms each individual approach and state-of-the-art method which utilizes an intensity-based approach. The result demonstrates a high potential for this method to be used in retinal CAD systems.

  11. Enhanced Methods for Local Ancestry Assignment in Sequenced Admixed Individuals

    PubMed Central

    Brown, Robert; Pasaniuc, Bogdan

    2014-01-01

    Inferring the ancestry at each locus in the genome of recently admixed individuals (e.g., Latino Americans) plays a major role in medical and population genetic inferences, ranging from finding disease-risk loci, to inferring recombination rates, to mapping missing contigs in the human genome. Although many methods for local ancestry inference have been proposed, most are designed for use with genotyping arrays and fail to make use of the full spectrum of data available from sequencing. In addition, current haplotype-based approaches are very computationally demanding, requiring large computational time for moderately large sample sizes. Here we present new methods for local ancestry inference that leverage continent-specific variants (CSVs) to attain increased performance over existing approaches in sequenced admixed genomes. A key feature of our approach is that it incorporates the admixed genomes themselves jointly with public datasets, such as 1000 Genomes, to improve the accuracy of CSV calling. We use simulations to show that our approach attains accuracy similar to widely used computationally intensive haplotype-based approaches with large decreases in runtime. Most importantly, we show that our method recovers comparable local ancestries, as the 1000 Genomes consensus local ancestry calls in the real admixed individuals from the 1000 Genomes Project. We extend our approach to account for low-coverage sequencing and show that accurate local ancestry inference can be attained at low sequencing coverage. Finally, we generalize CSVs to sub-continental population-specific variants (sCSVs) and show that in some cases it is possible to determine the sub-continental ancestry for short chromosomal segments on the basis of sCSVs. PMID:24743331
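
    A toy sketch of the continent-specific-variant (CSV) idea follows: slide a window along a chromosome, count observed variants that are private to each reference continental population, and assign the window to the ancestry with the largest count. The catalogue, positions and window size are made up; the method's joint CSV calling and low-coverage handling are not reproduced.

```python
# Toy CSV catalogue: position -> ancestry that privately carries the variant.
csv_catalogue = {1050: "EUR", 2300: "AFR", 2450: "AFR", 3900: "EUR",
                 5100: "NAT", 5250: "NAT", 5600: "NAT", 7800: "EUR"}

# Observed non-reference variant positions in one admixed haplotype.
observed = [1050, 2300, 2450, 5100, 5250, 5600]

window, chrom_len = 2000, 8000
for start in range(0, chrom_len, window):
    counts = {}
    for pos in observed:
        if start <= pos < start + window and pos in csv_catalogue:
            anc = csv_catalogue[pos]
            counts[anc] = counts.get(anc, 0) + 1
    call = max(counts, key=counts.get) if counts else "unassigned"
    print(f"[{start:>5}-{start + window:>5})  CSV counts {counts}  ->  {call}")
```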

  12. Formulation analysis and computation of an optimization-based local-to-nonlocal coupling method.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D'Elia, Marta; Bochev, Pavel Blagoveston

    2017-01-01

    In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach couches the coupling of the models into a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of Local-to-Nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.

  13. Localized-overlap approach to calculations of intermolecular interactions

    NASA Astrophysics Data System (ADS)

    Rob, Fazle

    Symmetry-adapted perturbation theory (SAPT) based on the density functional theory (DFT) description of the monomers [SAPT(DFT)] is one of the most robust tools for computing intermolecular interaction energies. Currently, one can use the SAPT(DFT) method to calculate interaction energies of dimers consisting of about a hundred atoms. To remove the methodological and technical limits and extend the size of the systems that can be calculated with the method, a novel approach has been proposed that redefines the electron densities and polarizabilities in a localized way. In the new method, accurate but computationally expensive quantum-chemical calculations are only applied for the regions where it is necessary and for other regions, where overlap effects of the wave functions are negligible, inexpensive asymptotic techniques are used. Unlike other hybrid methods, this new approach is mathematically rigorous. The main benefit of this method is that with the increasing size of the system the calculation scales linearly and, therefore, this approach will be denoted as local-overlap SAPT(DFT) or LSAPT(DFT). As a byproduct of developing LSAPT(DFT), some important problems concerning distributed molecular response, in particular, the unphysical charge-flow terms were eliminated. Additionally, to illustrate the capabilities of SAPT(DFT), a potential energy function has been developed for an energetic molecular crystal of 1,1-diamino-2,2-dinitroethylene (FOX-7), where an excellent agreement with the experimental data has been found.

  14. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.

  15. LOCALIZING THE RANGELAND HEALTH METHOD FOR SOUTHEASTERN ARIZONA

    EPA Science Inventory

    The interagency manual Interpreting Indicators of Rangeland Health, Version 4 (Technical Reference 1734-6) provides a method for making rangeland health assessments. The manual recommends that the rangeland health assessment approach be adapted to local conditions. This technica...

  16. An efficient linear-scaling CCSD(T) method based on local natural orbitals.

    PubMed

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-07

    An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.

  17. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method

    NASA Astrophysics Data System (ADS)

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-01

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500

  18. Efficient and accurate local approximations to coupled-electron pair approaches: An attempt to revive the pair natural orbital method.

    PubMed

    Neese, Frank; Wennmohs, Frank; Hansen, Andreas

    2009-03-21

    Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Moller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol(-1). Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500

  19. Comparison and combination of "direct" and fragment based local correlation methods: Cluster in molecules and domain based local pair natural orbital perturbation and coupled cluster theories

    NASA Astrophysics Data System (ADS)

    Guo, Yang; Becker, Ute; Neese, Frank

    2018-03-01

    Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximation to the canonical equations and (2) fragment based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster in molecule (CIM) approach as the fragment based approach. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single-, double- and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for performing the subsystem calculations. Our cluster-in-molecule approach is closely related to but slightly deviates from approaches in the literature since we have avoided real space cutoffs. Moreover, the neglected distant pair correlations in the previous CIM approach are considered approximately. Six very large molecules (503-2380 atoms) were studied. At both MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for 3-dimensional systems. While we have found only little incentive for the combination of CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) the better parallelization opportunities offered by CIM; (2) the methodology is less memory intensive than the genuine DLPNO-CCSD(T) method and, hence, allows for large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently met cases, where the largest subsystem calculation is too large for the canonical CCSD(T) method.

  20. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE PAGES

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    2018-03-27

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

  1. Nonlocal and Mixed-Locality Multiscale Finite Element Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.

    In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. Here, in this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.

  2. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many of such contests, the robot is supposed to navigate on the ground with a grid layout. Based on this observation, we present a localization error correction method by exploring the geometric feature of the tile patterns. On top of the classical inertia-based positioning, our approach employs three fiber-optic sensors that are assembled under the bottom of the robot, presenting an equilateral triangle layout. The sensor apparatus, together with the proposed supporting algorithm, are designed to detect a line's direction (vertical or horizontal) by monitoring the grid crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation from inertia positioning. The proposed method is analyzed theoretically in terms of its error bound and also has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
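
    A minimal sketch of the fusion step described above: when a downward-looking sensor reports crossing a grid line of known pitch, the corresponding coordinate of the dead-reckoned estimate is snapped to the nearest line, bounding the accumulated drift. The tile pitch, pose values and the simple snapping rule are assumptions; the paper's triangular sensor geometry and error-bound analysis are not reproduced.

```python
import numpy as np

GRID = 0.30   # tile pitch in metres (assumption)

def correct_on_crossing(estimate, axis):
    """Snap one coordinate of the pose estimate to the nearest grid line.

    axis = 0 when a line perpendicular to x was crossed, 1 for y.
    """
    corrected = estimate.copy()
    corrected[axis] = GRID * round(estimate[axis] / GRID)
    return corrected

# Dead-reckoned position has accumulated some drift...
pose = np.array([1.237, 0.912])
# ...and the fibre-optic sensors report that a line perpendicular to x was crossed.
pose = correct_on_crossing(pose, axis=0)
print(pose)   # x snapped to 1.20, y untouched
```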

  3. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures which include both uncoupled and coupled methods are described. In addition, global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  4. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in future Wi-Fi standards (e.g., 802.11ah) and the growing number of wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
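
    The sketch below shows, in a deliberately simplified way, how readings from two signal types can be pooled in one position estimate: all transmitters are described by a log-distance path-loss model (with frequency-dependent parameters) and the position is found by nonlinear least squares. Anchor positions, reference powers and path-loss exponents are illustrative guesses, not the MFAM calibration or its adaptive architectural model.

```python
import numpy as np
from scipy.optimize import least_squares

# (x, y, P0 at 1 m [dBm], path-loss exponent) per transmitter; the first three
# rows stand for 2.4 GHz Wi-Fi APs, the last two for a lower-frequency system.
anchors = np.array([[0.0, 0.0, -40.0, 2.2],
                    [8.0, 0.0, -40.0, 2.2],
                    [0.0, 6.0, -40.0, 2.2],
                    [8.0, 6.0, -30.0, 1.8],
                    [4.0, 3.0, -30.0, 1.8]])
rss_obs = np.array([-62.0, -66.0, -64.0, -46.0, -33.0])

def residuals(p):
    d = np.hypot(anchors[:, 0] - p[0], anchors[:, 1] - p[1]) + 1e-6
    rss_model = anchors[:, 2] - 10.0 * anchors[:, 3] * np.log10(d)
    return rss_model - rss_obs

sol = least_squares(residuals, x0=[3.0, 2.0])
print("estimated position:", sol.x)
```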

  5. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method

    PubMed Central

    Juric, Matjaz B.

    2018-01-01

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes indoor architectural model and physical properties of wireless signal propagation through objects and space. The motivation for developing multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in the buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals, already present in the indoors. Due to the unavailability of the 802.11ah hardware, we have evaluated proposed method with similar signals; we have used 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured mean localization error 2.0 to 2.3 m and median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage. PMID:29587352

  6. A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.

    PubMed

    Huang, Shiping; Wu, Zhifeng; Misra, Anil

    2017-12-11

    Location localization technology is used in a number of industrial and civil applications. Real time location localization accuracy is highly dependent on the quality of the distance measurements and efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently and simultaneously eliminate the bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but also it can self-correct bad measurement data. The Direct Search Method is useful for the coarse localization or small target search domain, while the Newton's Method can be used for accurate localization. For accurate localization, by utilizing the proposed Modified Newton's Method (MNM), challenges of avoiding the local extrema, singularities, and initial value choice are addressed. The applicability and robustness of the developed method has been demonstrated by experiments with an indoor system.
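
    For illustration, the sketch below solves the range-based localization equations with a plain Gauss-Newton iteration; it stands in for (and is simpler than) the paper's Modified Newton's Method, and it omits the geometric-intersection pre-step and the bad-measurement rejection. Beacon layout and measured ranges are invented.

```python
import numpy as np

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
ranges = np.array([5.66, 7.21, 7.21, 8.49])      # measured distances (m)

def gauss_newton(x0, n_iter=20):
    x = np.array(x0, dtype=float)
    for _ in range(n_iter):
        d = np.linalg.norm(beacons - x, axis=1)
        r = d - ranges                           # range residuals
        J = (x - beacons) / d[:, None]           # Jacobian of the distances w.r.t. x
        dx = np.linalg.lstsq(J, -r, rcond=None)[0]
        x += dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x

print("estimated position:", gauss_newton([5.0, 5.0]))
```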

  7. AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT

    EPA Science Inventory

    A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. pplication of this approach to two bench-mark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...

  8. Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach

    DOE PAGES

    Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim; ...

    2017-07-20

    The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.

  9. Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim

    The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.

  10. A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Songhua; Tourassi, Georgia

    2012-01-01

    The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. Our new local learning-based algorithm, which uses linear logistic regression as its base learner, is described. Overall, the algorithm performs a stochastic (random walk) search, until the total allowed computing time is used up, to identify the most suitable population subdivision scheme and the corresponding individual base learners. The proposed local learning-based approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnoses (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods. Predictive performance for all classifiers was evaluated using 10-fold cross validation and Receiver Operating Characteristic (ROC) analysis. Figure 1 reports the performance of 54 machine learning methods implemented in the machine learning toolkit Weka (version 3.0). We introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of other well-established machine learning techniques. Our conclusion complements the existing understanding in the machine learning field that local learning may capture complicated, non-linear relationships exhibited by real-world datasets.
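
    A toy sketch of the local-learning idea follows: candidate population subdivisions are searched stochastically, a logistic-regression base learner is fitted in each cell, and the subdivision with the best cross-validated AUC is kept. k-means subdivisions, the search budget and the synthetic data are all stand-ins; the study's random-walk subdivision scheme and BI-RADS features are not used.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=850, n_features=11, random_state=0)

rng = np.random.default_rng(0)
best_auc, best_k = -np.inf, None
for _ in range(10):                       # stochastic search budget (assumption)
    k = int(rng.integers(2, 6))           # candidate number of local cells
    cells = KMeans(n_clusters=k, n_init=5,
                   random_state=int(rng.integers(1_000_000))).fit_predict(X)
    scores = np.zeros(len(y))
    for c in range(k):
        m = cells == c
        counts = np.bincount(y[m], minlength=2)
        if counts.min() < 3:              # degenerate cell: constant score
            scores[m] = counts[1] / counts.sum()
            continue
        scores[m] = cross_val_predict(LogisticRegression(max_iter=1000),
                                      X[m], y[m], cv=3,
                                      method="predict_proba")[:, 1]
    auc = roc_auc_score(y, scores)
    if auc > best_auc:
        best_auc, best_k = auc, k
print(f"best subdivision: k={best_k}, cross-validated AUC={best_auc:.3f}")
```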

  11. Localized 2D COSY sequences: Method and experimental evaluation for a whole metabolite quantification approach

    NASA Astrophysics Data System (ADS)

    Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène

    2015-11-01

    Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading out the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole metabolite quantification approach, correlation spectroscopy quantification performances are studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable the computation of the lower bound for error variance - generally known as the Cramér-Rao bounds (CRBs), a standard of precision - on the parameters estimated from these 2D MRS signal fittings. LCOSY has a theoretical net signal loss of two per unit of acquisition time compared to JPRESS. A rapid analysis could suggest that the relative CRBs of LCOSY compared to JPRESS (expressed as a percentage of the concentration values) should be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure has been applied to data acquired in vivo on a mouse brain.
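
    To make the Fisher-matrix/CRB reasoning concrete, the sketch below computes Cramér-Rao bounds for a single damped cosine via F = JᵀJ/σ² with a numerical Jacobian; a real 2D MRS model is far richer, and the signal model, noise level and parameter values here are assumptions for illustration only.

```python
import numpy as np

t = np.arange(0, 0.5, 1e-3)                      # acquisition grid (s)
sigma = 0.05                                     # assumed noise standard deviation

def model(theta):
    a, T2, f = theta
    return a * np.exp(-t / T2) * np.cos(2 * np.pi * f * t)

def numerical_jacobian(theta, eps=1e-6):
    J = np.zeros((t.size, len(theta)))
    for k in range(len(theta)):
        d = np.zeros(len(theta)); d[k] = eps
        J[:, k] = (model(theta + d) - model(theta - d)) / (2 * eps)
    return J

theta0 = np.array([1.0, 0.08, 150.0])            # amplitude, T2 (s), frequency (Hz)
J = numerical_jacobian(theta0)
fisher = J.T @ J / sigma**2
crb = np.sqrt(np.diag(np.linalg.inv(fisher)))    # lower bounds on the std deviations
print("relative CRBs (%):", 100 * crb / np.abs(theta0))
```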

  12. A novel approach for SEMG signal classification with adaptive local binary patterns.

    PubMed

    Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan

    2016-07-01

    Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, adaptive local binary pattern (aLBP). aLBP is built on the local binary pattern (LBP), which is an image processing method, and the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors. Similarly, in 1D-LBP, each data point in the raw signal is judged against its neighbors. 1D-LBP extracts features based on local changes in the signal. Therefore, it has a high potential to be employed for medical purposes: each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed, determined by the position of the data point in the raw signal. Also, both LBP and 1D-LBP are very sensitive to noise; therefore, their capacity for detecting hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via the down-sampling and smoothing coefficients. Therefore, the potential to detect (hidden) patterns, which may express an illness or an action, is greatly increased. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than those obtained with popular feature extraction approaches and the results reported in the literature. These accuracy results indicate that the proposed method can be employed to investigate SEMG signals. In summary, this work attempts to develop an adaptive feature extraction scheme that can be utilized for extracting features from local changes in different categories of time-varying signals.
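
    The sketch below implements a 1D-LBP-style feature extractor in which each sample is compared with P neighbours, optionally taken with a stride and after smoothing, loosely in the spirit of the adaptive variant. The parameter choices, the toy signal and the stride/smoothing scheme are assumptions, not the paper's aLBP definition.

```python
import numpy as np

def lbp_1d_features(signal, p=8, stride=1, smooth=1):
    """Histogram of 1D-LBP codes; stride/smooth loosely mimic aLBP's adaptivity."""
    if smooth > 1:                                   # moving-average smoothing
        kernel = np.ones(smooth) / smooth
        signal = np.convolve(signal, kernel, mode="same")
    half, codes = p // 2, []
    offsets = stride * np.concatenate([np.arange(-half, 0), np.arange(1, half + 1)])
    for i in range(half * stride, len(signal) - half * stride):
        bits = (signal[i + offsets] >= signal[i]).astype(int)
        codes.append(int("".join(map(str, bits)), 2))
    hist, _ = np.histogram(codes, bins=2 ** p, range=(0, 2 ** p))
    return hist / max(len(codes), 1)                 # normalised code histogram

# Toy "SEMG-like" signal: noise with slowly varying amplitude bursts.
rng = np.random.default_rng(0)
sig = rng.normal(0, 1, 2000) * (1 + np.sin(np.linspace(0, 20, 2000)) ** 2)
features = lbp_1d_features(sig, p=8, stride=2, smooth=3)
print(features.shape, features[:8])
```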

  13. A Bayesian network approach for modeling local failure in lung cancer

    NASA Astrophysics Data System (ADS)

    Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam

    2011-03-01

    Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early detection of tumor local failure, there was no reported significant improvement in their application prospectively. Based on recent studies of biomarker proteins' role in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors with a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively, which comprises clinical and dosimetric variables only. The second dataset was collected prospectively in which in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can be used as an efficient method to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve the model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had a slightly higher performance compared to individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.

  14. An integrated approach to model strain localization bands in magnesium alloys

    NASA Astrophysics Data System (ADS)

    Baxevanakis, K. P.; Mo, C.; Cabal, M.; Kontsos, A.

    2018-02-01

    Strain localization bands (SLBs) that appear at early stages of deformation of magnesium alloys have been recently associated with heterogeneous activation of deformation twinning. Experimental evidence has demonstrated that such "Lüders-type" band formations dominate the overall mechanical behavior of these alloys resulting in sigmoidal type stress-strain curves with a distinct plateau followed by pronounced anisotropic hardening. To evaluate the role of SLB formation on the local and global mechanical behavior of magnesium alloys, an integrated experimental/computational approach is presented. The computational part is developed based on custom subroutines implemented in a finite element method that combine a plasticity model with a stiffness degradation approach. Specific inputs from the characterization and testing measurements to the computational approach are discussed while the numerical results are validated against such available experimental information, confirming the existence of load drops and the intensification of strain accumulation at the time of SLB initiation.

  15. Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxies

    NASA Astrophysics Data System (ADS)

    Lawlor, David; Budavári, Tamás; Mahoney, Michael W.

    2016-12-01

    We present a novel approach to studying the diversity of galaxies. It is based on a novel spectral graph technique, that of locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify numerous astronomical phenomena of interest at a very fine level. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated in the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.
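
    As a rough illustration of the global step only (the locally-biased, semi-supervised variant of the paper is not reproduced), the sketch below builds a kNN similarity graph over spectra, so that only the most similar spectra are connected, and uses the bottom eigenvectors of the normalised graph Laplacian as new coordinates. The spectra are synthetic stand-ins for SDSS data, and the neighbourhood size is an assumption.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse import csgraph

rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 100))          # 500 toy "spectra", 100 wavelength bins

# Symmetric kNN adjacency: similarity is only trusted between the most similar spectra.
A = kneighbors_graph(spectra, n_neighbors=10, mode="connectivity")
A = 0.5 * (A + A.T)

# Bottom eigenvectors of the normalised graph Laplacian as new spectral coordinates.
L = csgraph.laplacian(A, normed=True).toarray()
vals, vecs = np.linalg.eigh(L)
coords = vecs[:, 1:4]                          # skip the trivial mode, keep 3 coordinates
print("embedding coordinates:", coords.shape)
```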

  16. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
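
    The sketch below illustrates the data-driven idea in a drastically simplified 1-D setting: coarse-cell basis functions are generated by solving local problems -(k(x) u')' = 0 with u(0)=1, u(1)=0 for random permeability fields, and a neural-network regressor is then fitted to map the field to the basis function, bypassing the local solve at prediction time. The 1-D setting, field statistics and network size are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells, n_samples = 32, 1000

def basis_from_permeability(k):
    """Exact 1-D local solution: u(x) = 1 - cumulative(1/k) / total(1/k)."""
    resistance = np.cumsum(1.0 / k) / n_cells
    return 1.0 - resistance / resistance[-1]

K = np.exp(rng.normal(0.0, 1.0, size=(n_samples, n_cells)))   # log-normal fields
U = np.array([basis_from_permeability(k) for k in K])          # "exact" basis functions

K_tr, K_te, U_tr, U_te = train_test_split(K, U, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
mlp.fit(np.log(K_tr), U_tr)                                    # log-permeability features

err = np.abs(mlp.predict(np.log(K_te)) - U_te).mean()
print("mean absolute error of predicted basis functions:", err)
```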

  17. Community-Based Outdoor Education Using a Local Approach to Conservation

    ERIC Educational Resources Information Center

    Maeda, Kazushi

    2005-01-01

    Local people of a community interact with nature in a way that is mediated by their local cultures and shape their own environment. We need a local approach to conservation for the local environment adding to the political or technological approaches for global environmental problems such as the destruction of the ozone layer or global warming.…

  18. Localization of MEG human brain responses to retinotopic visual stimuli with contrasting source reconstruction approaches

    PubMed Central

    Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine

    2014-01-01

    Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
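
    For concreteness, the regularised minimum-norm step (one of the three reconstruction approaches compared above) is sketched below: given a lead-field matrix L and sensor data y, the source estimate is x = Lᵀ(LLᵀ + λI)⁻¹y. The lead field here is random and the regularisation choice is an assumption; a real MEG study would use a forward model derived from the head and sensor geometry.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_sources = 248, 5000
L = rng.normal(size=(n_sensors, n_sources))      # stand-in lead field (assumption)

# Simulate data from a few focal sources plus sensor noise.
x_true = np.zeros(n_sources)
x_true[[120, 121, 122]] = 1.0
y = L @ x_true + 0.05 * rng.normal(size=n_sensors)

lam = 0.1 * np.trace(L @ L.T) / n_sensors        # simple regularisation choice
x_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

print("largest estimated sources:", np.argsort(np.abs(x_mne))[-5:])
```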

  19. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-11-01

    Using a locally enriched strategy to enrich a small/local part of the problem with the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch, or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, together with the well-conditioned stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and the computational cost. Numerical experiments show that using geometrical enrichment along with stable G/XFEM in the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  20. Sound source localization method in an environment with flow based on Amiet-IMACS

    NASA Astrophysics Data System (ADS)

    Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin

    2017-05-01

    A sound source localization method is proposed to localize and analyze sound sources in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources in the presence of airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds is conducted. The experiment demonstrates the advantage of Amiet-IMACS in localizing the sound source position more accurately than IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.

  1. Markovian master equations for quantum thermal machines: local versus global approach

    NASA Astrophysics Data System (ADS)

    Hofer, Patrick P.; Perarnau-Llobet, Martí; Miranda, L. David M.; Haack, Géraldine; Silva, Ralph; Bohr Brask, Jonatan; Brunner, Nicolas

    2017-12-01

    The study of quantum thermal machines, and more generally of open quantum systems, often relies on master equations. Two approaches are mainly followed. On the one hand, there is the widely used, but often criticized, local approach, where machine sub-systems locally couple to thermal baths. On the other hand, in the more established global approach, thermal baths couple to global degrees of freedom of the machine. There has been debate as to which of these two conceptually different approaches should be used in situations out of thermal equilibrium. Here we compare the local and global approaches against an exact solution for a particular class of thermal machines. We consider thermodynamically relevant observables, such as heat currents, as well as the quantum state of the machine. Our results show that the use of a local master equation is generally well justified. In particular, for weak inter-system coupling, the local approach agrees with the exact solution, whereas the global approach fails for non-equilibrium situations. For intermediate coupling, the local and the global approach both agree with the exact solution and for strong coupling, the global approach is preferable. These results are backed by detailed derivations of the regimes of validity for the respective approaches.
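
    For reference, both approaches lead to a master equation of the standard GKSL (Lindblad) form sketched below; the difference lies in the jump operators. In the local approach each bath couples through an operator acting on a single sub-system, whereas in the global approach the jump operators are eigenoperators of the full machine Hamiltonian. The notation is generic, not the specific machine model of the paper.

    ```latex
    % Generic GKSL (Lindblad) master equation; the A_\alpha are the jump operators
    % (local sub-system operators in the local approach, eigenoperators of the
    % full Hamiltonian in the global approach).
    \dot{\rho} = -\frac{i}{\hbar}\,[H,\rho]
      + \sum_{\alpha} \gamma_{\alpha}
        \Bigl( A_{\alpha}\rho A_{\alpha}^{\dagger}
             - \tfrac{1}{2}\{ A_{\alpha}^{\dagger}A_{\alpha}, \rho \} \Bigr)
    ```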

  2. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.

  3. Speeding up local correlation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kats, Daniel

    2014-12-28

    We present two techniques that can substantially speed up local correlation methods. The first allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to the virtual space. The second introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of the local MP2 method is faster and requires less memory than highly optimized variants of the conventional algorithms.

  4. Fluctuating local field method probed for a description of small classical correlated lattices

    NASA Astrophysics Data System (ADS)

    Rubtsov, Alexey N.

    2018-05-01

    Thermally equilibrated finite classical lattices are considered as a minimal model of systems showing an interplay between low-energy collective fluctuations and single-site degrees of freedom. The standard local field approach, as well as the classical limit of the bosonic DMFT method, does not provide a satisfactory description of Ising and Heisenberg small lattices subjected to an external polarizing field. We show that a dramatic improvement can be achieved within a simple approach, in which the local field appears to be a fluctuating quantity related to the low-energy degree(s) of freedom.

  5. Localization of multiple defects using the compact phased array (CPA) method

    NASA Astrophysics Data System (ADS)

    Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.

    2018-01-01

    Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
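
    The paper's updated TFM incorporates wave-propagation information that is not reproduced here; the following is only a sketch of the conventional delay-and-sum total focusing method it builds on, with hypothetical geometry, wave speed, and full-matrix-capture signals.

    ```python
    # Minimal sketch of a conventional delay-and-sum total focusing method (TFM)
    # image; the paper's updated TFM adds propagation information not shown here.
    # All geometry, wave speed, and signals are hypothetical placeholders.
    import numpy as np

    fs, c = 1.0e6, 5400.0                        # sample rate [Hz], wave speed [m/s]
    elems = np.array([[0.0, 0.0], [0.05, 0.0], [0.10, 0.0]])   # 3 PZT positions [m]
    n_el = len(elems)
    signals = np.random.randn(n_el, n_el, 4000)  # full matrix capture: tx, rx, time

    xs = np.linspace(0.0, 0.10, 101)
    ys = np.linspace(0.01, 0.10, 91)
    image = np.zeros((len(ys), len(xs)))

    for iy, y in enumerate(ys):
        for ix, x in enumerate(xs):
            # Time of flight from each element to the focal point (x, y).
            tof = np.hypot(elems[:, 0] - x, elems[:, 1] - y) / c
            val = 0.0
            for tx in range(n_el):
                for rx in range(n_el):
                    idx = int((tof[tx] + tof[rx]) * fs)
                    if idx < signals.shape[2]:
                        val += signals[tx, rx, idx]
            image[iy, ix] = abs(val)

    # Peaks of `image` indicate candidate defect locations.
    ```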

  6. Many-body-localization: strong disorder perturbative approach for the local integrals of motion

    NASA Astrophysics Data System (ADS)

    Monthus, Cécile

    2018-05-01

    For random quantum spin models, the strong disorder perturbative expansion of the local integrals of motion around the real-spin operators is revisited. The emphasis is on the links with other properties of the many-body-localized phase, in particular the memory in the dynamics of the local magnetizations and the statistics of matrix elements of local operators in the eigenstate basis. Finally, this approach is applied to analyze the many-body-localization transition in a toy model studied previously from the point of view of the entanglement entropy.

  7. Stochastic seismic inversion based on an improved local gradual deformation method

    NASA Astrophysics Data System (ADS)

    Yang, Xiuwei; Zhu, Peimin

    2017-12-01

    A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology, and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method. Two improved strategies are proposed to make the method suitable for seismic inversion. The first strategy is to select and update local areas where the fit between synthetic and real seismic data is poor. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. The applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
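
    As background, the basic (global) gradual deformation step combines two independent Gaussian realizations with cosine/sine weights so that the result remains Gaussian while a single parameter deforms it continuously; the paper's improved local variant applies such updates only in poorly fitting sub-areas. The sketch below illustrates this idea with a hypothetical mask; it is not the authors' full inversion workflow.

    ```python
    # Minimal sketch of the gradual deformation idea: combining two independent
    # standard-Gaussian realizations with cos/sin weights preserves the Gaussian
    # statistics while a single parameter t continuously deforms the field.
    # The *local* variant applies this only inside poorly fitting areas (mask).
    import numpy as np

    rng = np.random.default_rng(42)
    m1 = rng.standard_normal((100, 100))     # current realization
    m2 = rng.standard_normal((100, 100))     # complementary realization

    def deform(t, mask=None):
        """Gradual deformation; `mask` (assumed) restricts the update locally."""
        combined = m1 * np.cos(np.pi * t) + m2 * np.sin(np.pi * t)
        if mask is None:
            return combined
        return np.where(mask, combined, m1)

    mask = np.zeros((100, 100), dtype=bool)
    mask[20:40, 30:60] = True                # hypothetical badly fitting sub-area
    # One optimization step would search t to best match the seismic data.
    candidate = deform(t=0.2, mask=mask)
    ```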

  8. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.

  9. Using economic analyses for local priority setting: the population cost-impact approach.

    PubMed

    Heller, Richard F; Gemmell, Islay; Wilson, Edward C F; Fordham, Richard; Smith, Richard D

    2006-01-01

    Standard methods of economic analysis may not be suitable for local decision making that is specific to a particular population. We describe a new three-step methodology, termed 'population cost-impact analysis', which provides a population perspective to the costs and benefits of alternative interventions. The first two steps involve calculating the population impact and the costs of the proposed interventions relevant to local conditions. This involves the calculation of population impact measures (which have been previously described but are not currently used extensively) - measures of absolute risk and risk reduction, applied to a population denominator. In step three, preferences of policy-makers are obtained. This is in contrast to the QALY approach in which quality weights are obtained as a part of the measurement of benefit. We applied the population cost-impact analysis method to a comparison of two interventions - increasing the use of beta-adrenoceptor antagonists (beta-blockers) and smoking cessation - after myocardial infarction in a scaled-back notional local population of 100,000 people in England. Twenty-two public health professionals were asked via a questionnaire to rank the order in which they would implement four interventions. They were given information on both population cost impact and QALYs for each intervention. In a population of 100,000 people, moving from current to best practice for beta-adrenoceptor antagonists and smoking cessation will prevent 11 and 4 deaths (or gain of 127 or 42 life-years), respectively. The cost per event prevented in the next year, or life-year gained, is less for beta-adrenoceptor antagonists than for smoking cessation. Public health professionals were found to be more inclined to rank alternative interventions according to the population cost impact than the QALY approach. The use of the population cost-impact approach allows information on the benefits of moving from current to best practice to be

  10. A global/local analysis method for treating details in structural design

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.

    1993-01-01

    A method for analyzing global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced since tedious and complex transitioning need not be performed. Second, errors due to mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as adaptive refinement.

  11. Local discretization method for overdamped Brownian motion on a potential with multiple deep wells.

    PubMed

    Nguyen, P T T; Challis, K J; Jack, M W

    2016-11-01

    We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deep compared to five times the thermal energy can be associated with a discrete localized state while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.

  12. Local discretization method for overdamped Brownian motion on a potential with multiple deep wells

    NASA Astrophysics Data System (ADS)

    Nguyen, P. T. T.; Challis, K. J.; Jack, M. W.

    2016-11-01

    We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deep compared to five times the thermal energy can be associated with a discrete localized state while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.
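
    For contrast with the localized-basis construction described above, the classic Kramers (high-friction) escape rate is a common way to assign hopping rates between deep wells in a discrete master equation; the sketch below uses that textbook approximation with an illustrative double-well potential and is not the paper's method.

    ```python
    # For contrast only: the classic Kramers (high-friction) escape rate, a common
    # way to assign hopping rates between deep wells in a discrete master equation.
    # This is NOT the paper's localized-basis construction; potential and
    # parameters below are illustrative.
    import numpy as np

    gamma, kT = 1.0, 1.0

    def U(x):                      # double-well potential (hypothetical)
        return (x**2 - 1.0)**2

    def U2(x, h=1e-4):             # numerical second derivative
        return (U(x + h) - 2.0 * U(x) + U(x - h)) / h**2

    x_min, x_barrier = 1.0, 0.0
    rate = (np.sqrt(U2(x_min) * abs(U2(x_barrier))) / (2.0 * np.pi * gamma)
            ) * np.exp(-(U(x_barrier) - U(x_min)) / kT)

    # Two-state master equation dp/dt = W p for the symmetric double well.
    W = np.array([[-rate,  rate],
                  [ rate, -rate]])
    ```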

  13. A Synthetic Comparator Approach to Local Evaluation of School-Based Substance Use Prevention Programming.

    PubMed

    Hansen, William B; Derzon, James H; Reese, Eric L

    2014-06-01

    We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity of all groups using these variables is calculated as standardized proximities having values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.

  14. Estimating the financial resources needed for local public health departments in Minnesota: a multimethod approach.

    PubMed

    Riley, William; Briggs, Jill; McCullough, Mac

    2011-01-01

    This study presents a model for determining total funding needed for individual local health departments. The aim is to determine the financial resources needed to provide services for statewide local public health departments in Minnesota based on a gaps analysis done to estimate the funding needs. We used a multimethod analysis consisting of 3 approaches to estimate gaps in local public health funding consisting of (1) interviews of selected local public health leaders, (2) a Delphi panel, and (3) a Nominal Group Technique. On the basis of these 3 approaches, a consensus estimate of funding gaps was generated for statewide projections. The study includes an analysis of cost, performance, and outcomes from 2005 to 2007 for all 87 local governmental health departments in Minnesota. For each of the methods, we selected a panel to represent a profile of Minnesota health departments. The 2 main outcome measures were local-level gaps in financial resources and total resources needed to provide public health services at the local level. The total public health expenditure in Minnesota for local governmental public health departments was $302 million in 2007 ($58.92 per person). The consensus estimate of the financial gaps in local public health departments indicates that an additional $32.5 million (a 10.7% increase or $6.32 per person) is needed to adequately serve public health needs in the local communities. It is possible to make informed estimates of funding gaps for public health activities on the basis of a combination of quantitative methods. There is a wide variation in public health expenditure at the local levels, and methods are needed to establish minimum baseline expenditure levels to adequately treat a population. The gaps analysis can be used by stakeholders to inform policy makers of the need for improved funding of the public health system.

  15. Local electric dipole moments: A generalized approach.

    PubMed

    Groß, Lynn; Herrmann, Carmen

    2016-09-30

    We present an approach for calculating local electric dipole moments for fragments of molecular or supramolecular systems. This is important for understanding chemical gating and solvent effects in nanoelectronics, atomic force microscopy, and intensities in infrared spectroscopy. Owing to the nonzero partial charge of most fragments, "naively" defined local dipole moments are origin-dependent. Inspired by previous work based on Bader's atoms-in-molecules (AIM) partitioning, we derive a definition of fragment dipole moments which achieves origin-independence by relying on internal reference points. Instead of bond critical points (BCPs) as in existing approaches, we use as few reference points as possible, which are located between the fragment and the remainder(s) of the system and may be chosen based on chemical intuition. This allows our approach to be used with AIM implementations that circumvent the calculation of critical points for reasons of computational efficiency, for cases where no BCPs are found due to large interfragment distances, and with local partitioning schemes other than AIM which do not provide BCPs. It is applicable to both covalently and noncovalently bound systems. © 2016 Wiley Periodicals, Inc.

  16. Methods and strategies of object localization

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.

    1989-01-01

    An important property of an intelligent robot is to be able to determine the location of an object in 3-D space. A general object localization system structure is proposed, some important issues on localization discussed, and an overview given for current available object localization algorithms and systems. The algorithms reviewed are characterized by their feature extracting and matching strategies; the range finding methods; the types of locatable objects; and the mathematical formulating methods.

  17. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.

  18. Design local exhaust ventilation on sieve machine at PT.Perkebunan Nusantara VIII Ciater using design for assembly (DFA) approach with Boothroyd and Dewhurst method

    NASA Astrophysics Data System (ADS)

    Khalqihi, K. I.; Rahayu, M.; Rendra, M.

    2017-12-01

    PT Perkebunan Nusantara VIII Ciater is a company that produces roughly 4 tons of orthodox black tea every day. In the production section, PT Perkebunan Nusantara VIII will use local exhaust ventilation (LEV), specifically in the sortation area on the sieve machine. To maintain the quality of the orthodox black tea, every machine must be scheduled for maintenance once a month, which takes 2 hours of working time; the additional local exhaust ventilation will lengthen the maintenance process, and if maintenance takes more than 2 hours the production process is delayed. To support the maintenance process at PT Perkebunan Nusantara VIII Ciater, the local exhaust ventilation is designed using the design for assembly (DFA) approach with the Boothroyd and Dewhurst method; the DFA approach was chosen to simplify the maintenance steps that require assembly. Two LEV designs are considered in this research. Design 1 has 94 components, an assembly time of 647.88 seconds, and an assembly efficiency of 23.62%. Design 2 has 82 components, an assembly time of 567.84 seconds, and an assembly efficiency of 24.83%. Design 2 is chosen based on the DFA goals: minimum part count, reduced assembly time, and higher assembly efficiency.
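
    The reported efficiencies are consistent with the standard Boothroyd-Dewhurst design efficiency index, E = 3 × Nmin / T, where T is the total assembly time and Nmin is the theoretical minimum part count (about 51 and 47 here, inferred from the reported numbers rather than stated in the abstract):

    ```python
    # Back-of-envelope check of the Boothroyd-Dewhurst design efficiency index
    # E = 3 * N_min / T_assembly (3 s is the nominal ideal time per part).
    # The N_min values below are inferred from the reported numbers, not stated
    # in the abstract.
    designs = {
        "Design 1": {"t_assembly": 647.88, "n_min": 51},   # 3*51/647.88 ~ 23.6 %
        "Design 2": {"t_assembly": 567.84, "n_min": 47},   # 3*47/567.84 ~ 24.8 %
    }
    for name, d in designs.items():
        eff = 3.0 * d["n_min"] / d["t_assembly"]
        print(f"{name}: efficiency = {eff:.1%}")
    ```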

  19. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS

    PubMed Central

    Einstein, Daniel R.; Dyedov, Vladimir

    2010-01-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  20. Geometrical optics in the near field: local plane-interface approach with evanescent waves.

    PubMed

    Bose, Gaurav; Hyvärinen, Heikki J; Tervo, Jani; Turunen, Jari

    2015-01-12

    We show that geometrical models may provide useful information on light propagation in wavelength-scale structures even if evanescent fields are present. We apply the so-called local plane-wave and local plane-interface methods to study a geometry that resembles a scanning near-field microscope. We show that fair agreement between the geometrical approach and rigorous electromagnetic theory can be achieved in the case where evanescent waves are required to predict any transmission through the structure.

  1. Local phase method for designing and optimizing metasurface devices.

    PubMed

    Hsu, Liyi; Dupré, Matthieu; Ndao, Abdoulaye; Yellowhair, Julius; Kanté, Boubacar

    2017-10-16

    Metasurfaces have attracted significant attention due to their novel designs for flat optics. However, the approach usually used to engineer metasurface devices assumes that neighboring elements are identical, by extracting the phase information from simulations with periodic boundaries, or that near-field coupling between particles is negligible, by extracting the phase from single particle simulations. This is not the case most of the time and the approach thus prevents the optimization of devices that operate away from their optimum. Here, we propose a versatile numerical method to obtain the phase of each element within the metasurface (meta-atoms) while accounting for near-field coupling. Quantifying the phase error of each element of the metasurfaces with the proposed local phase method paves the way to the design of highly efficient metasurface devices including, but not limited to, deflectors, high numerical aperture metasurface concentrators, lenses, cloaks, and modulators.

  2. Local Approximation and Hierarchical Methods for Stochastic Optimization

    NASA Astrophysics Data System (ADS)

    Cheng, Bolong

    In this thesis, we present local and hierarchical approximation methods for two classes of stochastic optimization problems: optimal learning and Markov decision processes. For the optimal learning problem class, we introduce a locally linear model with radial basis functions for estimating the posterior mean of the unknown objective function. The method uses a compact representation of the function which avoids storing the entire history, as is typically required by nonparametric methods. We derive a knowledge gradient policy with the locally parametric model, which maximizes the expected value of information. We show the policy is asymptotically optimal in theory, and experimental work suggests that the method can reliably find the optimal solution on a range of test functions. For the Markov decision process problem class, we are motivated by an application in which we want to co-optimize a battery for multiple revenue streams, in particular energy arbitrage and frequency regulation. The nature of this problem requires the battery to make charging and discharging decisions at different time scales while accounting for stochastic information such as load demand, electricity prices, and regulation signals. Computing the exact optimal policy becomes intractable due to the large state space and the number of time steps. We propose two methods to circumvent the computational bottleneck. First, we propose a nested MDP model that structures the co-optimization problem into smaller sub-problems with reduced state space. This new model allows us to understand how the battery behaves down to the two-second dynamics (that of the frequency regulation market). Second, we introduce a low-rank value function approximation for backward dynamic programming. This new method only requires computing the exact value function for a small subset of the state space and approximates the entire value function via low-rank matrix completion. We test these methods on historical price data from the
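
    As one generic illustration of the low-rank idea (not the thesis's exact algorithm), a soft-impute style iteration can fill in a value function that has been computed exactly only on a small subset of (state, time) entries:

    ```python
    # Generic soft-impute style matrix completion sketch: given exact values on a
    # small subset of (state, time) entries, fill the rest with a low-rank fit.
    # This is a stand-in illustration, not the thesis's exact algorithm.
    import numpy as np

    rng = np.random.default_rng(7)
    n_states, n_steps, rank = 200, 96, 3
    V_true = rng.standard_normal((n_states, rank)) @ rng.standard_normal((rank, n_steps))

    observed = rng.random((n_states, n_steps)) < 0.15     # ~15% of entries computed exactly
    V = np.where(observed, V_true, 0.0)

    for _ in range(200):
        U, s, Vt = np.linalg.svd(V, full_matrices=False)
        s = np.maximum(s - 1.0, 0.0)                      # soft-threshold singular values
        V_low = (U * s) @ Vt
        V = np.where(observed, V_true, V_low)             # keep exact entries fixed

    # V now approximates the full value function from the sparse exact evaluations.
    ```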

  3. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    PubMed

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments-some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models-free space path loss and ITU models-which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements.

  4. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method

    PubMed Central

    Tuta, Jure; Juric, Matjaz B.

    2016-01-01

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments—some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive thus maintenance free and based on Wi-Fi only. We have employed two well-known propagation models—free space path loss and ITU models—which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi only self-adaptive approaches that do not require the mobile terminal to be in the access-point mode. The only input requirements of the method are Wi-Fi access point positions, and positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with measured mean error of 2–3 and 3–4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method that relies on simple hardware and software requirements. PMID:27929453
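
    As a minimal illustration of the model-based ingredients, the sketch below combines a log-distance path-loss model with a least-squares position fit from RSSI at known access points; the path-loss parameters are assumed already calibrated, whereas the paper's method self-calibrates them using two propagation models and wall data, which is not reproduced here.

    ```python
    # Minimal sketch: log-distance path-loss model plus a least-squares position
    # fit from RSSI at known access points. The paper's method also self-calibrates
    # with two propagation models and wall data, which is not reproduced here.
    import numpy as np
    from scipy.optimize import least_squares

    aps = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 8.0], [10.0, 8.0]])  # AP positions [m]
    rssi = np.array([-48.0, -62.0, -60.0, -70.0])                        # measured RSSI [dBm]
    p0, n = -40.0, 2.2     # RSSI at 1 m and path-loss exponent (assumed calibrated)

    def residuals(pos):
        d = np.linalg.norm(aps - pos, axis=1)
        predicted = p0 - 10.0 * n * np.log10(np.maximum(d, 0.1))
        return predicted - rssi

    estimate = least_squares(residuals, x0=np.array([5.0, 4.0])).x
    print(estimate)   # estimated (x, y) of the mobile terminal
    ```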

  5. Towards a bulk approach to local interactions of hydrometeors

    NASA Astrophysics Data System (ADS)

    Baumgartner, Manuel; Spichtinger, Peter

    2018-02-01

    The growth of small cloud droplets and ice crystals is dominated by the diffusion of water vapor. Usually, Maxwell's approach to growth for isolated particles is used in describing this process. However, recent investigations show that local interactions between particles can change diffusion properties of cloud particles. In this study we develop an approach for including these local interactions into a bulk model approach. For this purpose, a simplified framework of local interaction is proposed and governing equations are derived from this setup. The new model is tested against direct simulations and incorporated into a parcel model framework. Using the parcel model, possible implications of the new model approach for clouds are investigated. The results indicate that for specific scenarios the lifetime of cloud droplets in subsaturated air may be longer (e.g., for an initially water supersaturated air parcel within a downdraft). These effects might have an impact on mixed-phase clouds, for example in terms of riming efficiencies.

  6. Non-parametric identification of multivariable systems: A local rational modeling approach with application to a vibration isolation benchmark

    NASA Astrophysics Data System (ADS)

    Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom

    2018-05-01

    Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly-damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated on simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly less parameters compared with alternative approaches.

  7. A high-resolution computational localization method for transcranial magnetic stimulation mapping.

    PubMed

    Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro

    2018-05-15

    Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain makes it difficult to determine the exact location of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of a so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by this proposal and direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by well-known positions of the "hand-knob" in brains for the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Confocal laser induced fluorescence with comparable spatial localization to the conventional method

    NASA Astrophysics Data System (ADS)

    Thompson, Derek S.; Henriquez, Miguel F.; Scime, Earl E.; Good, Timothy N.

    2017-10-01

    We present measurements of ion velocity distributions obtained by laser induced fluorescence (LIF) using a single viewport in an argon plasma. A patent pending design, which we refer to as the confocal fluorescence telescope, combines large objective lenses with a large central obscuration and a spatial filter to achieve high spatial localization along the laser injection direction. Models of the injection and collection optics of the two assemblies are used to provide a theoretical estimate of the spatial localization of the confocal arrangement, which is taken to be the full width at half maximum of the spatial optical response. The new design achieves approximately 1.4 mm localization at a focal length of 148.7 mm, improving on previously published designs by an order of magnitude and approaching the localization achieved by the conventional method. The confocal method, however, does so without requiring a pair of separated, perpendicular optical paths. The confocal technique therefore eases the two window access requirement of the conventional method, extending the application of LIF to experiments where conventional LIF measurements have been impossible or difficult, or where multiple viewports are scarce.

  9. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

    The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  10. VLPs of HCV local isolates for HCV immunoassay diagnostic approach in Indonesia

    NASA Astrophysics Data System (ADS)

    Prasetyo, Afiono Agung

    2017-01-01

    Hepatitis C virus (HCV) infection is a major global disease which often leads to morbidity and mortality. Low survival is related to the lack of adequate diagnostics, because HCV infection is frequently asymptomatic and there are no specific diagnostic tests owing to the rapid variation of the virus. Here, we investigated the VLPs (virus-like particles) of HCV local isolates as an immunoassay diagnostic approach to detect HCV infection, especially in Indonesia. The core, E1, and E2 genes of the HCV local isolates were cloned and analyzed at the molecular level, either singly or in recombinant VLP form, to determine the molecular and chemical characteristics of each VLP related to its potential use as an immunoassay detection method for HCV infection. The results indicated that the molecular and chemical characters of the VLPs are comparable. Conclusion: VLPs of HCV have potential as an immunoassay diagnostic approach to detect HCV infection.

  11. OCT-based approach to local relaxations discrimination from translational relaxation motions

    NASA Astrophysics Data System (ADS)

    Matveev, Lev A.; Matveyev, Alexandr L.; Gubarkova, Ekaterina V.; Gelikonov, Grigory V.; Sirotkina, Marina A.; Kiseleva, Elena B.; Gelikonov, Valentin M.; Gladkova, Natalia D.; Vitkin, Alex; Zaitsev, Vladimir Y.

    2016-04-01

    Multimodal optical coherence tomography (OCT) is an emerging tool for tissue state characterization. Optical coherence elastography (OCE) is an approach to mapping mechanical properties of tissue based on OCT. One of the challenging problems in OCE is elimination of the influence of residual local tissue relaxation, which complicates obtaining information on the elastic properties of the tissue. Alternatively, parameters of the local relaxation itself can be used as an additional informative characteristic for distinguishing tissue in normal and pathological states over the OCT image area. Here we briefly present an OCT-based approach to the evaluation of local relaxation processes in the tissue bulk after sudden unloading of its initial pre-compression. For extracting the local relaxation rate we evaluate the temporal dependence of local strains that are mapped using our recently developed hybrid phase-resolved/displacement-tracking (HPRDT) approach. This approach allows one to subtract the contribution of global displacements of scatterers in OCT scans and separate the temporal evolution of local strains. Using a sample excised from a coronary artery, we demonstrate that the observed relaxation of local strains can be reasonably fitted by an exponential law, which opens the possibility of characterizing the tissue by a single relaxation time. The estimated local relaxation times are assumed to be related to local biologically relevant processes inside the tissue, such as diffusion, leaking/draining of fluids, local folding/unfolding of fibers, etc. In general, studies of the evolution of such features can provide new metrics for biologically relevant changes in tissue, e.g., in problems of treatment monitoring.
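
    Extracting a single relaxation time from such data amounts to fitting an exponential to the local strain versus time; a minimal sketch with synthetic data and assumed parameter values:

    ```python
    # Minimal sketch of extracting a single relaxation time by fitting an
    # exponential to a local-strain-versus-time curve (synthetic data here).
    import numpy as np
    from scipy.optimize import curve_fit

    t = np.linspace(0.0, 2.0, 200)                          # seconds after unloading
    strain = 0.03 * np.exp(-t / 0.45) + 0.001 * np.random.randn(t.size)

    def model(t, eps0, tau):
        return eps0 * np.exp(-t / tau)

    (eps0_fit, tau_fit), _ = curve_fit(model, t, strain, p0=(0.02, 0.5))
    print(f"estimated relaxation time: {tau_fit:.3f} s")
    ```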

  12. A local approach for focussed Bayesian fusion

    NASA Astrophysics Data System (ADS)

    Sander, Jennifer; Heizmann, Michael; Goussev, Igor; Beyerer, Jürgen

    2009-04-01

    Local Bayesian fusion approaches aim to reduce the high storage and computational costs of Bayesian fusion, which is separated from fixed modeling assumptions. Using the small world formalism, we argue why this procedure conforms to Bayesian theory. Then, we concentrate on the realization of local Bayesian fusion by focussing the fusion process solely on local regions that are task relevant with high probability. The resulting local models then correspond to restricted versions of the original one. In a previous publication, we used bounds for the probability of misleading evidence to show the validity of the pre-evaluation of task-specific knowledge and prior information which we perform to build local models. In this paper, we prove the validity of this procedure using information-theoretic arguments. For additional efficiency, local Bayesian fusion can be realized in a distributed manner. Here, several local Bayesian fusion tasks are evaluated and unified after the actual fusion process. Software agents are well suited for the practical realization of distributed local Bayesian fusion. There is a natural analogy between the resulting agent-based architecture and criminal investigations in real life. We show how this analogy can be used to further improve the efficiency of distributed local Bayesian fusion. Using a landscape model, we present an experimental study of distributed local Bayesian fusion in the field of reconnaissance, which highlights its high potential.

  13. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  14. Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction

    NASA Astrophysics Data System (ADS)

    Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo

    2014-12-01

    To achieve a higher level of seismic random noise suppression, the Radon transform has been adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies involved performing TFPF in full-aperture Radon domain, including linear Radon and parabolic Radon. Although the superiority of this method to the conventional TFPF has been tested through processing on synthetic seismic models and field seismic data, there are still some limitations in the method. Both full-aperture linear Radon and parabolic Radon are applicable and effective for some relatively simple situations (e.g., curve reflection events with regular geometry) but inapplicable for complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. Therefore, a localized approach to the application of the Radon transform must be applied. It would serve the filter method better by adapting the transform to the local character of the data variations. In this article, we propose an idea that adopts the local Radon transform referred to as piecewise full-aperture Radon to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated situations of seismic data.

  15. The Robin Hood method - A novel numerical method for electrostatic problems based on a non-local charge transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje

    2006-03-20

    We introduce a novel numerical method, named the Robin Hood method, of solving electrostatic problems. The approach of the method is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each of the conducting surfaces, non-local charge transfers are performed between surface elements, which differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For the problem with N surface elements, the computational complexity of the method essentially scales with N^α, where α < 2, the required computer memory scales with N, while the error of the potential decreases exponentially with the number of iterations for many orders of magnitude of the error, without the presence of the Critical Slowing Down. The Robin Hood method could prove useful in other classical or even quantum problems. Some future development ideas for possible applications outside electrostatics are addressed.
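
    A heavily simplified toy version of the non-local charge-transfer idea is sketched below: charge is repeatedly moved between the two surface elements whose potentials deviate most, driving the conductor toward equipotentiality. The point-element potentials, the crude self-term, and the transfer formula are illustrative assumptions, not the published implementation.

    ```python
    # Toy sketch of the non-local charge-transfer idea on a single conductor
    # discretized into point-like elements. The self-term and transfer formula
    # are crude illustrative assumptions, not the published implementation.
    import numpy as np

    rng = np.random.default_rng(3)
    pts = rng.random((200, 3))               # element centroids on a conductor
    a = 0.02                                 # effective element radius (self-term)
    q = np.full(len(pts), 1.0 / len(pts))    # start from uniform charge, total = 1

    def potentials(q):
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, a)               # crude self-interaction distance
        return (q / d).sum(axis=1)

    for _ in range(2000):
        phi = potentials(q)
        hi, lo = np.argmax(phi), np.argmin(phi)
        d_ij = np.linalg.norm(pts[hi] - pts[lo])
        # Transfer amount that equalizes the pair's potentials in this toy model.
        dq = (phi[hi] - phi[lo]) / (2.0 / a - 2.0 / d_ij)
        q[hi] -= dq
        q[lo] += dq                          # total charge is conserved

    print(potentials(q).std())               # spread shrinks as iterations proceed
    ```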

  16. An improved local radial point interpolation method for transient heat conduction analysis

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied, so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.

  17. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy

    PubMed Central

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-01-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
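
    The iterative step can be illustrated by a generic damped Gauss-Newton update on a pose estimate; the field model, damping factor, and numerical Jacobian below are placeholders, whereas the paper uses finite-element-based field interpolations with a closed-form Jacobian rather than this stand-in.

    ```python
    # Generic damped Gauss-Newton update sketch for iterative magnetic localization.
    # The field model below is a placeholder; the paper uses finite-element-based
    # field interpolations with a closed-form Jacobian rather than this stand-in.
    import numpy as np

    def field_model(pose):
        """Hypothetical map from capsule position (x, y, z) to 6 sensor readings."""
        x, y, z = pose
        return np.array([x + y, y * z, x * z, x - z, y + 2 * z, x * y])

    def numerical_jacobian(f, pose, h=1e-6):
        J = np.zeros((6, 3))
        for k in range(3):
            dp = np.zeros(3)
            dp[k] = h
            J[:, k] = (f(pose + dp) - f(pose - dp)) / (2 * h)
        return J

    measured = field_model(np.array([0.10, -0.05, 0.20]))   # synthetic "sensor" data
    pose = np.array([0.0, 0.0, 0.1])                         # initialization from absolute localizer
    for _ in range(20):
        r = measured - field_model(pose)
        J = numerical_jacobian(field_model, pose)
        pose = pose + 0.5 * np.linalg.pinv(J) @ r            # damped pseudo-inverse step

    print(pose)   # converges toward the pose that generated `measured`
    ```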

  18. Locating damage using integrated global-local approach with wireless sensing system and single-chip impedance measurement device.

    PubMed

    Lin, Tzu-Hsuan; Lu, Yung-Chi; Hung, Shih-Lin

    2014-01-01

    This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building.

  19. The regional approach and regional studies method in the process of geography teaching

    NASA Astrophysics Data System (ADS)

    Dermendzhieva, Stela; Doikov, Martin

    2017-03-01

    We define the regional approach as a manner of relating the global trends of development of the "Society-man-nature" system to the local, differentiating level of knowledge. Interactions interlace under the influence of the character of Geography as a science, its education, approaches, goals and teaching methods. Global, national and local development is differentiated in three concentric circles at the level of knowledge. It is conceived as a modern, complex and effective mechanism for young people, through which knowledge develops in a regional historical and cultural perspective, and self-consciousness of socio-economic and cultural integration is formed as a part of the historical-geographical image of the native land. In this way an attitude to the native land is formed as a connecting construct between patriotism towards the motherland and the same in a global aspect. The possibility for integration and cooperation of the educational geographical content with all the local historical-geographical, regional, profession-orienting, artistic, municipal and district institutions is outlined. Contemporary geographical education appears to be a powerful and indispensable mechanism for the organization of the human sciences, while the regional approach and the application of the regional studies method stimulate and motivate the development and realization of optimal capacities for direct connection with local structures and environments.

  20. Localized Principal Component Analysis based Curve Evolution: A Divide and Conquer Approach

    PubMed Central

    Appia, Vikram; Ganapathy, Balaji; Yezzi, Anthony; Faber, Tracy

    2014-01-01

    We propose a novel localized principal component analysis (PCA) based curve evolution approach which evolves the segmenting curve semi-locally within various target regions (divisions) in an image and then combines these locally accurate segmentation curves to obtain a global segmentation. The training data for our approach consists of training shapes and associated auxiliary (target) masks. The masks indicate the various regions of the shape exhibiting highly correlated variations locally which may be rather independent of the variations in the distant parts of the global shape. Thus, in a sense, we are clustering the variations exhibited in the training data set. We then use a parametric model to implicitly represent each localized segmentation curve as a combination of the local shape priors obtained by representing the training shapes and the masks as a collection of signed distance functions. We also propose a parametric model to combine the locally evolved segmentation curves into a single hybrid (global) segmentation. Finally, we combine the evolution of these semilocal and global parameters to minimize an objective energy function. The resulting algorithm thus provides a globally accurate solution, which retains the local variations in shape. We present some results to illustrate how our approach performs better than the traditional approach with fully global PCA. PMID:25520901

  1. Impact localization on composite structures using time difference and MUSIC approach

    NASA Astrophysics Data System (ADS)

    Zhong, Yongteng; Xiang, Jiawei

    2017-05-01

    A 1-D uniform linear array (ULA) suffers from the half-plane mirror effect, which does not allow discriminating between a target placed above the array and a target placed below it. This paper presents time difference (TD) and multiple signal classification (MUSIC) based omni-directional impact localization on a large stiffened composite structure using an improved linear array, which is able to perform omni-directional 360° localization. This array contains 2M+3 PZT sensors, where 2M+1 PZT sensors are arranged as a uniform linear array and the other two PZT sensors are placed above and below the array. Firstly, the arrival times of the impact signals observed by these two additional sensors are determined using the wavelet transform. By comparing them, the general direction range of the impact source can be decided: 0° to 180° or 180° to 360°. Then, a two-dimensional multiple signal classification (2D-MUSIC) based spatial spectrum formula using the uniform linear array is applied for impact localization within that direction range. When the arrival time of the impact signal observed by the upper PZT equals that of the lower PZT, the direction lies on the x axis (0° or 180°), and a time difference based MUSIC method is presented to locate the impact position. To verify it, the proposed approach is applied to a composite structure. The localization results are in good agreement with the actual impact positions.
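
    The following sketch illustrates a generic narrowband 2D-MUSIC pseudo-spectrum of the kind referred to above; it assumes a single non-dispersive wave speed and pre-extracted complex snapshots at one frequency, which is a simplification of guided-wave propagation in a stiffened composite, and all variable names are illustrative only.

```python
import numpy as np

def music_spectrum(X, sensor_xy, grid_xy, freq, wave_speed, n_sources=1):
    """Narrowband 2D-MUSIC pseudo-spectrum over candidate impact positions.

    X: (n_sensors, n_snapshots) complex snapshots at the chosen frequency.
    sensor_xy: (n_sensors, 2) sensor coordinates; grid_xy: (n_points, 2) candidates.
    """
    R = X @ X.conj().T / X.shape[1]                  # sample covariance matrix
    w, V = np.linalg.eigh(R)                         # eigenvalues in ascending order
    En = V[:, : X.shape[0] - n_sources]              # noise subspace
    spectrum = np.empty(grid_xy.shape[0])
    for k, s in enumerate(grid_xy):
        d = np.linalg.norm(sensor_xy - s, axis=1)    # source-to-sensor distances
        a = np.exp(-2j * np.pi * freq * d / wave_speed)  # near-field steering vector
        proj = En.conj().T @ a
        spectrum[k] = 1.0 / np.real(proj.conj() @ proj)
    return spectrum                                  # the peak indicates the impact location
```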

  2. 3-D Localization Method for a Magnetically Actuated Soft Capsule Endoscope and Its Applications

    PubMed Central

    Yim, Sehyuk; Sitti, Metin

    2014-01-01

    In this paper, we present a 3-D localization method for a magnetically actuated soft capsule endoscope (MASCE). The proposed localization scheme consists of three steps. First, MASCE is oriented to be coaxially aligned with an external permanent magnet (EPM). Second, MASCE is axially contracted by the enhanced magnetic attraction of the approaching EPM. Third, MASCE recovers its initial shape by the retracting EPM as the magnetic attraction weakens. The combination of the estimated direction in the coaxial alignment step and the estimated distance in the shape deformation (recovery) step provides the position of MASCE in 3-D. It is experimentally shown that the proposed localization method could provide 2.0–3.7 mm of distance error in 3-D. This study also introduces two new applications of the proposed localization method. First, based on the trace of contact points between the MASCE and the surface of the stomach, the 3-D geometrical model of a synthetic stomach was reconstructed. Next, the relative tissue compliance at each local contact point in the stomach was characterized by measuring the local tissue deformation at each point due to the preloading force. Finally, the characterized relative tissue compliance parameter was mapped onto the geometrical model of the stomach toward future use in disease diagnosis. PMID:25383064

  3. Local regression type methods applied to the study of geophysics and high frequency financial data

    NASA Astrophysics Data System (ADS)

    Mariani, M. C.; Basu, K.

    2014-09-01

    In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is preferable to the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysical data, providing different information. For the high frequency data, our models estimate the curve of best fit where the data depend on time.
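
    A minimal sketch of a degree-1 Loess-type smoother (tricube-weighted local linear regression); the synthetic data and the `frac` value are illustrative, and the robustness reweighting iterations that distinguish Lowess are omitted.

```python
import numpy as np

def loess_1d(x, y, x_eval, frac=0.3):
    """Local linear regression with tricube weights (Loess, degree 1).

    For each evaluation point, fit a weighted straight line to the nearest
    `frac` fraction of the data and return the fitted value at that point.
    """
    n = len(x)
    k = max(2, int(np.ceil(frac * n)))
    fitted = np.empty(len(x_eval))
    for i, x0 in enumerate(x_eval):
        d = np.abs(x - x0)
        idx = np.argsort(d)[:k]                      # k nearest neighbours
        h = d[idx].max() + 1e-12
        w = (1 - (d[idx] / h) ** 3) ** 3             # tricube kernel
        A = np.vstack([np.ones(k), x[idx] - x0]).T
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        fitted[i] = beta[0]                          # local intercept = fit at x0
    return fitted

# Example on noisy synthetic data.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.sin(x) + 0.3 * rng.standard_normal(200)
smooth = loess_1d(x, y, x)
```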

  4. An Observationally-Centred Method to Quantify the Changing Shape of Local Temperature Distributions

    NASA Astrophysics Data System (ADS)

    Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.

    2014-12-01

    For climate-sensitive decisions and adaptation planning, guidance on how local climate is changing is needed at the specific thresholds relevant to particular impacts or policy endeavours. This requires the quantification of how the distributions of variables, such as daily temperature, are changing at specific quantiles. These temperature distributions are non-normal and vary both geographically and in time. We present a method[1,2] for analysing local climatic time series data to assess which quantiles of the local climatic distribution show the greatest and most robust changes. We have demonstrated this approach using the E-OBS gridded dataset[3], which consists of time series of local daily temperature across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. The change in temperature can be tracked at a temperature threshold, at a likelihood, or at a given return time, independently for each geographical location. Geographical correlations are thus an output of our method and reflect both climatic properties (local and synoptic) and spatial correlations inherent in the observation methodology. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. For instance, in a band from Northern France to Denmark the hottest days in the summer temperature distribution have seen changes of at least 2°C over a 43-year period, over four times the global mean change over the same period. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This approach also quantifies the level of detail at which one might wish to see agreement between climate models and
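
    As a small illustration of quantile tracking of the kind described above, the sketch below computes the change in empirical quantiles of daily temperature between two periods for one location; the synthetic Gumbel data are purely illustrative, and separating secular change from sampling variability (as the method above does) would require an additional noise model or resampling.

```python
import numpy as np

def quantile_change(temps_early, temps_late, probs):
    """Change in temperature at fixed quantiles between two observation periods.

    Returns, for each probability in `probs`, the difference between the
    empirical quantiles of the later and earlier daily-temperature samples.
    """
    return np.quantile(temps_late, probs) - np.quantile(temps_early, probs)

# Illustrative use on synthetic daily summer temperatures for one grid cell.
rng = np.random.default_rng(2)
early = rng.gumbel(loc=22.0, scale=3.0, size=20 * 92)   # ~20 summers of daily values
late = rng.gumbel(loc=23.0, scale=3.5, size=20 * 92)
probs = np.array([0.5, 0.9, 0.99])                      # median, hot, hottest days
print(quantile_change(early, late, probs))
```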

  5. Localized Multiple Kernel Learning A Convex Approach

    DTIC Science & Technology

    2016-11-22

    data. All the aforementioned approaches to localized MKL are formulated in terms of non-convex optimization problems, and deep theoretical...

  6. Approaches for Studying the Subcellular Localization, Interactions, and Regulation of Histone Deacetylase 5 (HDAC5)

    PubMed Central

    Guise, Amanda J.; Cristea, Ileana M.

    2017-01-01

    As a member of the class IIa family of histone deacetylases, the histone deacetylase 5 (HDAC5) is known to undergo nuclear–cytoplasmic shuttling and to be a critical transcriptional regulator. Its misregulation has been linked to prominent human diseases, including cardiac diseases and tumorigenesis. In this chapter, we describe several experimental methods that have proven effective for studying the functions and regulatory features of HDAC5. We present methods for assessing the subcellular localization, protein interactions, posttranslational modifications (PTMs), and activity of HDAC5 from the standpoint of investigating either the endogenous protein or tagged protein forms in human cells. Specifically, given that at the heart of HDAC5 regulation lie its dynamic localization, interactions, and PTMs, we present methods for assessing HDAC5 localization in fixed and live cells, for isolating HDAC5-containing protein complexes to identify its interactions and modifications, and for determining how these PTMs map to predicted HDAC5 structural motifs. Lastly, we provide examples of approaches for studying HDAC5 functions with a focus on its regulation during cell-cycle progression. These methods can readily be adapted for the study of other HDACs or non-HDAC-proteins of interest. Individually, these techniques capture temporal and spatial snapshots of HDAC5 functions; yet together, these approaches provide powerful tools for investigating both the regulation and regulatory roles of HDAC5 in different cell contexts relevant to health and disease. PMID:27246208

  7. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complementary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their space resolution (hundreds of kilometres) is too coarse and not adequate to describe the variability of extreme events at the basin scale (Burlando and Rosso, 2002). Bridging the space-time gap between climate scenarios and the usual scale of the inputs of hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit the climate observations. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenarios by downscaling GCM or RCM results. Among the output correction methods we consider the statistical approach by Déqué (2007), reported as a 'variable correction method', in which the correction of the model outputs is obtained by a function built from the observation dataset that operates a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal properties of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes, in coherence with the Poissonian Rectangular Pulse (PRP) scheme (Eagleson, 1972). In this approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. Consequently, the corrected PRP parameters are used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of the daily observations for the reference period. Then the PRP parameters are forced through the GCM scenarios to generate local scale rainfall records for the 21st century. The
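
    A minimal sketch of an empirical quantile-quantile (Q-Q) correction of model output against reference-period observations; note that the approach described above applies the transform to PRP parameters rather than directly to daily values, and the Gaussian toy data below are purely illustrative.

```python
import numpy as np

def qq_correct(model_series, model_ref, obs_ref, n_quantiles=100):
    """Empirical quantile-quantile correction of model output.

    Maps each model value to the observed value at the same empirical
    quantile, using reference-period model and observation samples.
    """
    q = np.linspace(0.0, 1.0, n_quantiles)
    model_q = np.quantile(model_ref, q)
    obs_q = np.quantile(obs_ref, q)
    # Locate each model value on the model reference quantiles, then map it
    # to the corresponding observed quantile (linear interpolation).
    return np.interp(model_series, model_q, obs_q)

# Example: a model that is 2 degrees too warm and too spread out.
rng = np.random.default_rng(1)
obs_ref = rng.normal(10.0, 3.0, 5000)
model_ref = rng.normal(12.0, 4.0, 5000)
future = rng.normal(13.0, 4.0, 1000)
corrected = qq_correct(future, model_ref, obs_ref)
```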

  8. A Variational Level Set Approach Based on Local Entropy for Image Segmentation and Bias Field Correction.

    PubMed

    Tang, Jian; Jiang, Xiaoliang

    2017-01-01

    Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. Firstly, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey level distribution of the local image. The means of this objective function include a multiplicative factor that estimates the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.

  9. Plasmonics simulations including nonlocal effects using a boundary element method approach

    NASA Astrophysics Data System (ADS)

    Trügler, Andreas; Hohenester, Ulrich; García de Abajo, F. Javier

    2017-09-01

    Spatial nonlocality in the photonic response of metallic nanoparticles is known to produce near-field quenching and significant plasmon frequency shifts relative to local descriptions. As the control over the size and morphology of fabricated nanostructures reaches the nanometer scale, understanding and accounting for nonlocal phenomena is becoming increasingly important, and recent advances point out the need to go beyond the local theory. We here present a general formalism for incorporating spatial dispersion effects through the hydrodynamic model and its generalizations for arbitrary surface morphologies. Our method relies on the boundary element method, which we supplement with a nonlocal interaction potential. We provide numerical examples in excellent agreement with the literature for individual and paired gold nanospheres, and critically examine the accuracy of our approach. The present method involves marginal extra computational cost relative to local descriptions and facilitates the simulation of spatial dispersion effects in the photonic response of complex nanoplasmonic structures.

  10. Analysis of Non Local Image Denoising Methods

    NASA Astrophysics Data System (ADS)

    Pardo, Álvaro

    Image denoising is probably one of the most studied problems in the image processing community. Recently a new paradigm of non-local denoising was introduced. The Non-Local Means method proposed by Buades, Morel and Coll attracted the attention of other researchers, who proposed improvements and modifications to their proposal. In this work we analyze those methods, trying to understand their properties while connecting them to segmentation based on spectral graph properties. We also propose some improvements to automatically estimate the parameters used in these methods.
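
    For reference, a minimal (and deliberately slow) pixelwise Non-Local Means sketch in the spirit of Buades, Morel and Coll; the patch size, search window and filtering parameter `h` are illustrative defaults, not values recommended by the paper.

```python
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    """Minimal Non-Local Means denoising of a 2-D float image.

    Each pixel is replaced by a weighted average of pixels in a search
    window, with weights given by the similarity of surrounding patches.
    O(N * search^2 * patch^2): intended for illustration, not for speed.
    """
    p, s = patch // 2, search // 2
    padded = np.pad(img, p + s, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ci, cj = i + p + s, j + p + s
            ref = padded[ci - p:ci + p + 1, cj - p:cj + p + 1]
            weights, values = [], []
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - p:ni + p + 1, nj - p:nj + p + 1]
                    d2 = np.mean((ref - cand) ** 2)       # patch distance
                    weights.append(np.exp(-d2 / h ** 2))
                    values.append(padded[ni, nj])
            w = np.array(weights)
            out[i, j] = np.dot(w, values) / w.sum()
    return out
```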

  11. A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework

    NASA Astrophysics Data System (ADS)

    Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco

    2017-11-01

    We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique especially designed to be effective for LES applications is proposed and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.

  12. Training NOAA Staff on Effective Communication Methods with Local Climate Users

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Mayes, B.

    2011-12-01

    Since 2002 the NOAA National Weather Service (NWS) Climate Services Division (CSD) has offered training opportunities to NWS staff. As a result of the eight-year-long development of the training program, NWS offers three training courses and about 25 online distance learning modules covering various climate topics: climate data and observations, climate variability and change, and NWS national and local climate products, their tools, skill, and interpretation. Leveraging climate information and expertise available at all NOAA line offices and partners allows delivery of the most advanced knowledge and is a very critical aspect of the training program. One of the NWS challenges in providing local climate services is communicating highly technical scientific information effectively to local users. Addressing this challenge requires a well-trained, climate-literate workforce at the local level capable of communicating the NOAA climate products and services as well as providing climate-sensitive decision support. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically unimpaired messages and amiable communication techniques, such as the storytelling approach, are important in developing an engaged dialog between climate service providers and users. Several pilot projects NWS CSD conducted in the past year applied the NWS climate services training program to training events for NOAA technical user groups. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training of professional user groups required tailoring the instruction to the potential applications of each group of users. Training technical users identified the following critical issues: (1) knowledge of target audience expectations, initial knowledge status, and potential use of climate

  13. Using the Storypath Approach to Make Local Government Understandable

    ERIC Educational Resources Information Center

    McGuire, Margit E.; Cole, Bronwyn

    2008-01-01

    Learning about local government seems boring and irrelevant to most young people, particularly to students from high-poverty backgrounds. The authors explore a promising approach for solving this problem, Storypath, which engages students in authentic learning and active citizenship. The Storypath approach is based on a narrative in which students…

  14. Global/local methods research using the CSM testbed

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. Hayden, Jr.; Thompson, Danniella M.

    1990-01-01

    Research activities in global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  15. Dynamical self-arrest in symmetric and asymmetric diblock copolymer melts using a replica approach within a local theory.

    PubMed

    Wu, Sangwook

    2009-03-01

    We investigate dynamical self-arrest in a diblock copolymer melt using a replica approach within a self-consistent local method based on dynamical mean-field theory (DMFT). The local replica approach effectively predicts (χN)_A for dynamical self-arrest in a block copolymer melt for the symmetric and asymmetric cases. We discuss the competition of the cubic and quartic interactions in the Landau free energy for a block copolymer melt in stabilizing a glassy state depending on the chain length. Our local replica theory provides a universal value for the dynamical self-arrest in block copolymer melts with (χN)_A ≈ 10.5 + 64N^(-3/10) for the symmetric case.

  16. Combining local scaling and global methods to detect soil pore space

    NASA Astrophysics Data System (ADS)

    Martin-Sotoca, Juan Jose; Saa-Requejo, Antonio; Grau, Juan B.; Tarquis, Ana M.

    2017-04-01

    The characterization of the spatial distribution of soil pore structures is essential to obtain different parameters that influence several models related to water flow and/or microbial growth processes. The first step in pore structure characterization is obtaining soil images that best approximate reality. Over the last decade, major technological advances in X-ray computed tomography (CT) have allowed for the investigation and reconstruction of natural porous media architectures at very fine scales. The subsequent step is delimiting the pore structure (pore space) from the CT soil images by applying a threshold. CT-scan images often show low contrast at the solid-void interface, which makes this step difficult. Different delimitation methods can result in different spatial distributions of pores, influencing the parameters used in the models. Recently, a new local segmentation method using local greyscale value (GV) concentration variabilities, based on fractal concepts, was presented. This method creates singularity maps to measure the GV concentration at each point. The C-A method was combined with the singularity map approach (Singularity-CA method) to define local thresholds that can be applied to binarize CT images. Comparing this method with classical methods, such as Otsu and Maximum Entropy, we observed that more pores can be detected, mainly due to its ability to amplify anomalous concentrations; however, it also delineated many small pores that were incorrect. In this work, we present an improved version of the Singularity-CA method that avoids this problem by combining it with the classical global methods. References: Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. New segmentation method based on fractal properties using singularity maps. Geoderma, 287, 40-53, 2017. Martín-Sotoca, J.J., A. Saa-Requejo, J.B. Grau, A.M. Tarquis. Local 3D segmentation of soil pore space based on fractal properties using singularity

  17. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-12-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (element-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables that exist at the same locations has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.
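
    A minimal sketch of the Schur-product localization described above for a single state variable on a 1-D grid, using the Gaspari-Cohn taper as the distance-dependent correlation function; the multivariate construction proposed in the paper is not reproduced here.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order piecewise rational correlation function.

    r is distance divided by the localization half-width; the function is
    1 at r = 0 and exactly 0 for r >= 2.
    """
    r = np.abs(r)
    f = np.zeros_like(r, dtype=float)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    x = r[m1]
    f[m1] = -0.25 * x**5 + 0.5 * x**4 + 0.625 * x**3 - (5.0 / 3.0) * x**2 + 1.0
    x = r[m2]
    f[m2] = (x**5 / 12.0 - 0.5 * x**4 + 0.625 * x**3 + (5.0 / 3.0) * x**2
             - 5.0 * x + 4.0 - (2.0 / 3.0) / x)
    return f

def localize_covariance(ensemble, grid, half_width):
    """Schur (element-wise) product of sample covariance and taper matrix.

    ensemble: (n_members, n_state) array; grid: (n_state,) coordinates.
    """
    anomalies = ensemble - ensemble.mean(axis=0)
    P = anomalies.T @ anomalies / (ensemble.shape[0] - 1)   # sample covariance
    dist = np.abs(grid[:, None] - grid[None, :])
    C = gaspari_cohn(dist / half_width)                      # correlation taper
    return P * C                                             # Schur product
```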

  18. Multivariate localization methods for ensemble Kalman filtering

    NASA Astrophysics Data System (ADS)

    Roh, S.; Jun, M.; Szunyogh, I.; Genton, M. G.

    2015-05-01

    In ensemble Kalman filtering (EnKF), the small number of ensemble members that is feasible to use in a practical data assimilation application leads to sampling variability of the estimates of the background error covariances. The standard approach to reducing the effects of this sampling variability, which has also been found to be highly efficient in improving the performance of EnKF, is the localization of the estimates of the covariances. One family of localization techniques is based on taking the Schur (entry-wise) product of the ensemble-based sample covariance matrix and a correlation matrix whose entries are obtained by the discretization of a distance-dependent correlation function. While the proper definition of the localization function for a single state variable has been extensively investigated, a rigorous definition of the localization function for multiple state variables has been seldom considered. This paper introduces two strategies for the construction of localization functions for multiple state variables. The proposed localization functions are tested in experiments that assimilate simulated observations into the bivariate Lorenz 95 model.

  19. A finite-volume Eulerian-Lagrangian Localized Adjoint Method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.

  20. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches like the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite element method (LMBS-FEM) does not suffer from these weaknesses. To validate our method, the propagation of waves in various linear, nonlinear, lossless and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide angle beam propagation method (BSFEFD-WABPM). The comparisons show that our numerical approach is not only more accurate and computationally efficient than conventional approaches but also provides physical insight into the nonlinear nature of the propagation modes.

  1. Global and Local Approaches Describing Critical Phenomena on the Developing and Developed Financial Markets

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz

    We define and confront global and local methods to analyze crash-like events on financial markets from the critical phenomena point of view. These methods are based, respectively, on the analysis of log-periodicity and on the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The log-periodicity analysis is made in a daily time horizon, for the whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), connected with the largest developing financial market in Europe. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the power-law-divergent price model usually discussed in log-periodic scenarios for developed markets. Predictions coming from the log-periodicity scenario are verified for all main crashes that took place in WIG history. It is argued that crash predictions within the log-periodicity model strongly depend on the amount of data used in the fit and are therefore likely to contain large inaccuracies. Next, this global analysis is confronted with the local fractal description. To do so, we calculate the so-called local (time-dependent) Hurst exponent H_loc for the WIG time series and for main US stock market indices like the DJIA and S&P 500. We point out a dependence between the behavior of the local fractal properties of financial time series and the appearance of crashes on financial markets. We conclude that the local fractal method seems to work better than the global approach, both for developing and developed markets. The very recent situation on the market, particularly related to the Fed intervention in September 2007 and the situation immediately afterwards, is also analyzed within the fractal approach. It is shown in this context how the financial market evolves through different phases of fractional Brownian motion. Finally, the current situation on the American market is
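
    A minimal sketch of a time-dependent Hurst exponent computed in a sliding window with the rescaled-range (R/S) statistic; the window and step sizes are illustrative, and the exact estimator used for H_loc in the study may differ.

```python
import numpy as np

def rs_hurst(x):
    """Hurst exponent of a series via the rescaled-range (R/S) statistic."""
    n = len(x)
    sizes = np.unique(np.floor(np.logspace(np.log10(8), np.log10(n // 2), 10)).astype(int))
    rs = []
    for m in sizes:
        vals = []
        for start in range(0, n - m + 1, m):        # non-overlapping blocks of length m
            c = x[start:start + m]
            z = np.cumsum(c - c.mean())
            s = c.std(ddof=1)
            if s > 0:
                vals.append((z.max() - z.min()) / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)  # log(R/S) ~ H * log(m)
    return slope

def local_hurst(series, window=240, step=20):
    """Time-dependent Hurst exponent from a sliding window over the series."""
    centers, h = [], []
    for start in range(0, len(series) - window + 1, step):
        centers.append(start + window // 2)
        h.append(rs_hurst(series[start:start + window]))
    return np.array(centers), np.array(h)
```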

  2. Method for localizing heating in tumor tissue

    DOEpatents

    Doss, James D.; McCabe, Charles W.

    1977-04-12

    A method for localized tissue heating of tumors is disclosed. Localized radio frequency current fields are produced with specific electrode configurations. Several electrode configurations are disclosed, enabling variations in the electrical and thermal properties of tissues to be exploited.

  3. A geometric method for nipple localization

    PubMed Central

    Khan, Humayun Ayub; Bayat, Ardeshir

    2008-01-01

    BACKGROUND: An important part of preoperative assessment in breast reduction surgery is to locate the site of the nipple-areola complex for the newly structured breast. Inappropriate location is difficult to correct secondarily. Traditional methods of nipple localization taught and practiced suggest the nipple to be located anterior to the inframammary fold. Trying to project this point on the anterior surface of the breast requires either large calipers or feeling the posteriorly placed finger on the anterior surface of a large breast. This certainly introduces some subjectivity to the calculation. OBJECTIVES: To introduce an easy and accurate method of nipple localization to reduce the learning curve for trainee surgeons. METHODS: Aesthetic placement of the nipples is at the lower angles of an equilateral or a short isosceles triangle on the chest with its apex at the sternal angle. This triangle can be thought of as two right-angled triangles with their Y-axis on the median plane. The base and vertical limb are measured, and the hypotenuse is calculated. The location of the lower angle is marked on the anterior surface of the breast and represents the new position of the nipple. RESULTS: Forty patients had nipple localization performed in the above-described manner, with satisfactory placement of the nipple-areola complex. CONCLUSIONS: The above technique introduces some objective measurements to the localization of the nipple in breast reduction surgery. It is easy to practice, and infuses confidence in trainees marking their initial breast reductions. PMID:19554165
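
    A worked form of the construction described above, assuming the measured base b (horizontal distance from the median plane) and vertical limb v of each right-angled triangle; the symbols and the example numbers are illustrative, not values from the study.

```latex
\[
  h = \sqrt{b^{2} + v^{2}}, \qquad
  \text{e.g. } b = 9~\text{cm},\; v = 16~\text{cm}
  \;\Rightarrow\; h = \sqrt{81 + 256}~\text{cm} \approx 18.4~\text{cm}.
\]
```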

  4. Preoperative computed tomography-guided percutaneous hookwire localization of metallic marker clips in the breast with a radial approach: initial experience.

    PubMed

    Uematsu, T; Kasami, M; Uchida, Y; Sanuki, J; Kimura, K; Tanaka, K; Takahashi, K

    2007-06-01

    Hookwire localization is the current standard technique for the radiological marking of nonpalpable breast lesions. Stereotactic directional vacuum-assisted breast biopsy (SVAB) has sufficient sensitivity and specificity to replace surgical biopsy. Wire localization of the metallic marker clips placed after SVAB is therefore needed. To describe a method for performing computed tomography (CT)-guided hookwire localization using a radial approach for metallic marker clips placed percutaneously after SVAB. Nineteen women scheduled for SVAB with marker-clip placement, CT-guided wire localization of the marker clips, and, eventually, surgical excision were prospectively entered into the study. CT-guided wire localization of the marker clips was performed with a radial approach, followed by surgical excision. The feasibility and reliability of the procedure and the incidence of complications were examined. CT-guided wire localization and surgical excision were successfully performed in all 19 women without any complications. The mean total procedure time was 15 min. The median distance on CT images from marker clip to hookwire was 2 mm (range 0-3 mm). CT-guided preoperative hookwire localization with a radial approach for marker clips placed after SVAB is technically feasible.

  5. Local Authority Approaches to the School Admissions Process. LG Group Research Report

    ERIC Educational Resources Information Center

    Rudd, Peter; Gardiner, Clare; Marson-Smith, Helen

    2010-01-01

    What are the challenges, barriers and facilitating factors connected to the various school admissions approaches used by local authorities? This report gathers the views of local authority admissions officers on the strengths and weaknesses of different approaches, as well as the issues and challenges they face in this important area. It covers:…

  6. Block-localized wavefunction (BLW) method at the density functional theory (DFT) level.

    PubMed

    Mo, Yirong; Song, Lingchun; Lin, Yuchun

    2007-08-30

    The block-localized wavefunction (BLW) approach is an ab initio valence bond (VB) method incorporating the efficiency of molecular orbital (MO) theory. It can generate the wavefunction for a resonance structure or diabatic state self-consistently by partitioning the overall electrons and primitive orbitals into several subgroups and expanding each block-localized molecular orbital in only one subspace. Although block-localized molecular orbitals in the same subspace are constrained to be orthogonal (a feature of MO theory), orbitals between different subspaces are generally nonorthogonal (a feature of VB theory). The BLW method is particularly useful in the quantification of the electron delocalization (resonance) effect within a molecule and the charge-transfer effect between molecules. In this paper, we extend the BLW method to the density functional theory (DFT) level and implement the BLW-DFT method in the quantum mechanical software GAMESS. Test applications to the pi conjugation in the planar allyl radical and ions with the basis sets of 6-31G(d), 6-31+G(d), 6-311+G(d,p), and cc-pVTZ show that the basis set dependency is insignificant. In addition, the BLW-DFT method can also be used to elucidate the nature of intermolecular interactions. Examples of pi-cation interactions and solute-solvent interactions will be presented and discussed. By expressing each diabatic state with one BLW, the BLW method can be further used to study chemical reactions and electron-transfer processes whose potential energy surfaces are typically described by two or more diabatic states.

  7. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the method often yields excellent results in practice when compared to other more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results, good data gridding algorithms are essential; in practice, truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, where the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to those of other methods.

  8. Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems

    NASA Astrophysics Data System (ADS)

    Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric

    We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.

  9. Locally Compact Quantum Groups. A von Neumann Algebra Approach

    NASA Astrophysics Data System (ADS)

    Van Daele, Alfons

    2014-08-01

    In this paper, we give an alternative approach to the theory of locally compact quantum groups, as developed by Kustermans and Vaes. We start with a von Neumann algebra and a comultiplication on this von Neumann algebra. We assume that there exist faithful left and right Haar weights. Then we develop the theory within this von Neumann algebra setting. In [Math. Scand. 92 (2003), 68-92] locally compact quantum groups are also studied in the von Neumann algebraic context. This approach is independent of the original C^*-algebraic approach in the sense that the earlier results are not used. However, this paper is not really independent because for many proofs, the reader is referred to the original paper where the C^*-version is developed. In this paper, we give a completely self-contained approach. Moreover, at various points, we do things differently. We have a different treatment of the antipode. It is similar to the original treatment in [Ann. Sci. École Norm. Sup. (4) 33 (2000), 837-934]. But together with the fact that we work in the von Neumann algebra framework, it allows us to use an idea from [Rev. Roumaine Math. Pures Appl. 21 (1976), 1411-1449] to obtain the uniqueness of the Haar weights in an early stage. We take advantage of this fact when deriving the other main results in the theory. We also give a slightly different approach to duality. Finally, we collect, in a systematic way, several important formulas. In an appendix, we indicate very briefly how the C^*-approach and the von Neumann algebra approach eventually yield the same objects. The passage from the von Neumann algebra setting to the C^*-algebra setting is more or less standard. For the other direction, we use a new method. It is based on the observation that the Haar weights on the C^*-algebra extend to weights on the double dual with central support and that all these supports are the same. Of course, we get the von Neumann algebra by cutting down the double dual with this unique

  10. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles

    PubMed Central

    Meng, Xiaoli

    2017-01-01

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results have verified the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization. PMID:28926996

  11. A Robust Vehicle Localization Approach Based on GNSS/IMU/DMI/LiDAR Sensor Fusion for Autonomous Vehicles.

    PubMed

    Meng, Xiaoli; Wang, Heng; Liu, Bingbing

    2017-09-18

    Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experimental results have verified the proposed algorithms, showing that they are capable of providing precise and robust vehicle localization.

  12. Machine-learning approach for local classification of crystalline structures in multiphase systems

    NASA Astrophysics Data System (ADS)

    Dietz, C.; Kretz, T.; Thoma, M. H.

    2017-07-01

    Machine learning is one of the most popular fields in computer science and has a vast number of applications. In this work we propose a method that uses a neural network to locally identify crystal structures in a mixed-phase Yukawa system consisting of fcc, hcp, and bcc clusters and disordered particles, similar to plasma crystals. We compare our approach to previously used methods and show that the quality of identification increases significantly. The technique works very well for highly disturbed lattices and provides a flexible and robust way to classify crystalline structures using only particle positions. This leads to insights into highly disturbed crystalline structures.

  13. [Classification of local anesthesia methods].

    PubMed

    Petricas, A Zh; Medvedev, D V; Olkhovskaya, E B

    The traditional classification of dental local anesthesia methods must be modified. In this paper we proved that the vascular mechanism is the leading component of spongy injection. It is necessary to take into account the high effectiveness and relative safety of spongy anesthesia, as well as its versatility, ease of implementation and growing prevalence in the world. The essence of the proposed modification is to divide the methods into diffusive (including surface anesthesia, infiltration and conductive anesthesia) and vascular-diffusive (including intraosseous, intraligamentary, intraseptal and intrapulpal anesthesia). For the last four methods the common term «spongy (intraosseous) anesthesia» may be used.

  14. Acoustic localization at large scales: a promising method for grey wolf monitoring.

    PubMed

    Papin, Morgane; Pichenot, Julian; Guérold, François; Germain, Estelle

    2018-01-01

    The grey wolf (Canis lupus) is naturally recolonizing its former habitats in Europe, where it was extirpated during the previous two centuries. The management of this protected species is often controversial and its monitoring is a challenge for conservation purposes. However, this elusive carnivore can disperse over long distances in various natural contexts, making its monitoring difficult. Moreover, methods used for collecting signs of presence are usually time-consuming and/or costly. Currently, new acoustic recording tools are contributing to the development of passive acoustic methods as alternative approaches for detecting, monitoring, or identifying species that produce sounds in nature, such as the grey wolf. In the present study, we conducted field experiments to investigate the possibility of using a low-density microphone array to localize wolves at a large scale in two contrasting natural environments in north-eastern France. For scientific and social reasons, the experiments were based on a synthetic sound with similar acoustic properties to howls. This sound was broadcast at several sites. Then, localization estimates and their accuracy were calculated. Finally, linear mixed-effects models were used to identify the factors that influenced the localization accuracy. Among 354 nocturnal broadcasts in total, 269 were recorded by at least one autonomous recorder, thereby demonstrating the potential of this tool. In addition, 59 broadcasts were recorded by at least four microphones and used for acoustic localization. The broadcast sites were localized with an overall mean error of 315 ± 617 (standard deviation) m. After setting a threshold for the temporal error value associated with the estimated coordinates, some unreliable values were excluded and the mean error decreased to 167 ± 308 m. The number of broadcasts recorded was higher in the lowland environment, but the localization accuracy was similar in both environments, although it varied
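
    A minimal sketch of time-difference-of-arrival (TDOA) source localization of the kind used for acoustic localization above; the microphone layout, sound speed and initial guess are illustrative assumptions, and the study's actual processing chain (detection, cross-correlation, error thresholding) is not reproduced.

```python
import numpy as np
from scipy.optimize import least_squares

def tdoa_residuals(xy, mic_xy, tdoa, c=343.0):
    """Residuals between measured and predicted time differences of arrival.

    tdoa[i] is the arrival-time difference between microphone i+1 and
    microphone 0; c is the speed of sound in air (m/s).
    """
    d = np.linalg.norm(mic_xy - xy, axis=1)
    predicted = (d[1:] - d[0]) / c
    return predicted - tdoa

def localize_source(mic_xy, tdoa, guess):
    """Least-squares source position from TDOAs at four or more microphones."""
    return least_squares(tdoa_residuals, guess, args=(mic_xy, tdoa)).x

# Synthetic check: four recorders around a 2 x 2 km plot, source inside.
mics = np.array([[0.0, 0.0], [2000.0, 0.0], [2000.0, 2000.0], [0.0, 2000.0]])
source = np.array([650.0, 1200.0])
times = np.linalg.norm(mics - source, axis=1) / 343.0
estimate = localize_source(mics, times[1:] - times[0], guess=np.array([1000.0, 1000.0]))
```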

  15. Rupture Predictions of Notched Ti-6Al-4V Using Local Approaches

    PubMed Central

    Peron, Mirco; Berto, Filippo

    2018-01-01

    Ti-6Al-4V has been extensively used in structural applications in various engineering fields, from naval to automotive and from aerospace to biomedical. Structural applications are characterized by geometrical discontinuities such as notches, which are widely known to harmfully affect tensile strength. In recent years, many attempts have been made to define solid criteria with which to reliably predict the tensile strength of materials. Among these criteria, two local approaches are worth mentioning due to the accuracy of their predictions, i.e., the strain energy density (SED) approach and the theory of critical distances (TCD) method. In this manuscript, the robustness of these two methods in predicting the tensile behavior of notched Ti-6Al-4V specimens has been compared. To this aim, two very dissimilar notch geometries have been tested, i.e., semi-circular and blunt V-notch with a notch root radius equal to 1 mm, and the experimental results have been compared with those predicted by the two models. The experimental values are estimated with low discrepancy by both the SED approach and the TCD method, but the former yields better predictions: the deviations for the SED are lower than 1.3%, while the TCD provides predictions with errors of up to almost 8.5%. Finally, the weaknesses and the strengths of the two models are reported. PMID:29693565

  16. An MRI denoising method using image data redundancy and local SNR estimation.

    PubMed

    Golshan, Hosein M; Hasanzadeh, Reza P R; Yousefzadeh, Shahrokh C

    2013-09-01

    This paper presents an LMMSE-based method for the three-dimensional (3D) denoising of MR images assuming a Rician noise model. Conventionally, the LMMSE method estimates the noise-free signal values using the observed MR data samples within local neighborhoods. This is not an efficient procedure, because 3D MR data intrinsically include many similar samples that can be used to improve the estimation results. To overcome this problem, we model MR data as random fields and establish a principled way of choosing samples not only from a local neighborhood but also from a large portion of the given data. To find similar samples within the MR data, an effective similarity measure based on the local statistical moments of images is presented. The parameters of the proposed filter are chosen automatically from the estimated local signal-to-noise ratio. To further enhance the denoising performance, a recursive version of the introduced approach is also addressed. The proposed filter is compared with related state-of-the-art filters using both synthetic and real MR datasets. The experimental results demonstrate the superior performance of our proposal in removing noise and preserving the anatomical structures of MR images. Copyright © 2013 Elsevier Inc. All rights reserved.
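
    A minimal sketch of a purely local LMMSE (Wiener-type) shrinkage toward the neighbourhood mean, under an additive-noise simplification; the Rician signal model, the redundancy-based sample selection and the automatic parameter choice described above are not reproduced, and `sigma` and `size` are user-supplied assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_lmmse(volume, sigma, size=5):
    """Local LMMSE estimate of a noisy 3-D volume (additive-noise simplification).

    Shrinks each voxel toward its local mean by an amount that depends on
    the ratio of the assumed noise variance to the local signal variance.
    """
    volume = volume.astype(float)
    mean = uniform_filter(volume, size)
    mean_sq = uniform_filter(volume ** 2, size)
    var = np.maximum(mean_sq - mean ** 2, 1e-12)             # local variance
    gain = np.clip((var - sigma ** 2) / var, 0.0, 1.0)        # Wiener-type gain
    return mean + gain * (volume - mean)
```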

  17. The effective local potential method: Implementation for molecules and relation to approximate optimized effective potential techniques

    NASA Astrophysics Data System (ADS)

    Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric

    2007-02-01

    We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.

  18. New Methods for Crafting Locally Decision-Relevant Scenarios

    NASA Astrophysics Data System (ADS)

    Lempert, R. J.

    2015-12-01

    Scenarios can play an important role in helping decision makers to imagine future worlds, both good and bad, different than the one with which we are familiar and to take concrete steps now to address the risks generated by climate change. At their best, scenarios can effectively represent deep uncertainty; integrate over multiple domains; and enable parties with different expectation and values to expand the range of futures they consider, to see the world from different points of view, and to grapple seriously with the potential implications of surprising or inconvenient futures. These attributes of scenario processes can prove crucial in helping craft effective responses to climate change. But traditional scenario methods can also fail to overcome difficulties related to choosing, communicating, and using scenarios to identify, evaluate, and reach consensus on appropriate policies. Such challenges can limit scenario's impact in broad public discourse. This talk will demonstrate how new decision support approaches can employ new quantitative tools that allow scenarios to emerge from a process of deliberation with analysis among stakeholders, rather than serve as inputs to it, thereby increasing the impacts of scenarios on decision making. This talk will demonstrate these methods in the design of a decision support tool to help residents of low lying coastal cities grapple with the long-term risks of sea level rise. In particular, this talk will show how information from the IPCC SSP's can be combined with local information to provide a rich set of locally decision-relevant information.

  19. LuciPHOr: Algorithm for Phosphorylation Site Localization with False Localization Rate Estimation Using Modified Target-Decoy Approach*

    PubMed Central

    Fermin, Damian; Walmsley, Scott J.; Gingras, Anne-Claude; Choi, Hyungwon; Nesvizhskii, Alexey I.

    2013-01-01

    The localization of phosphorylation sites in peptide sequences is a challenging problem in large-scale phosphoproteomics analysis. The intense neutral loss peaks and the coexistence of multiple serine/threonine and/or tyrosine residues are limiting factors for objectively scoring site patterns across thousands of peptides. Various computational approaches for phosphorylation site localization have been proposed, including Ascore, Mascot Delta score, and ProteinProspector, yet few address direct estimation of the false localization rate (FLR) in each experiment. Here we propose LuciPHOr, a modified target-decoy-based approach that uses mass accuracy and peak intensities for site localization scoring and FLR estimation. Accurate estimation of the FLR is a difficult task at the individual-site level because the degree of uncertainty in localization varies significantly across different peptides. LuciPHOr carries out simultaneous localization on all candidate sites in each peptide and estimates the FLR based on the target-decoy framework, where decoy phosphopeptides generated by placing artificial phosphorylation(s) on non-candidate residues compete with the non-decoy phosphopeptides. LuciPHOr also reports approximate site-level confidence scores for all candidate sites as a means to localize additional sites from multiphosphorylated peptides in which localization can be partially achieved. Unlike the existing tools, LuciPHOr is compatible with any search engine output processed through the Trans-Proteomic Pipeline. We evaluated the performance of LuciPHOr in terms of the sensitivity and accuracy of FLR estimates using two synthetic phosphopeptide libraries and a phosphoproteomic dataset generated from complex mouse brain samples. PMID:23918812
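
    A minimal sketch of target-decoy false localization rate (FLR) estimation at a score cutoff, in the spirit of the framework described above; the scoring model itself and LuciPHOr's simultaneous per-site treatment are not reproduced, and the function names are illustrative.

```python
import numpy as np

def estimate_flr(target_scores, decoy_scores, threshold):
    """Target-decoy estimate of the false localization rate at a score cutoff.

    Decoy localizations (phosphates placed on non-candidate residues) that
    pass the cutoff estimate how many accepted target localizations are wrong.
    """
    n_target = np.sum(np.asarray(target_scores) >= threshold)
    n_decoy = np.sum(np.asarray(decoy_scores) >= threshold)
    return min(1.0, n_decoy / max(n_target, 1))

def threshold_for_flr(target_scores, decoy_scores, max_flr=0.01):
    """Smallest score cutoff whose estimated FLR stays below max_flr."""
    for t in sorted(set(target_scores)):
        if estimate_flr(target_scores, decoy_scores, t) <= max_flr:
            return t
    return None
```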

  20. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Moreover, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instructions, reached real-time processing speed on a mid-level desktop CPU. A complete C++ implementation based on OpenCV is available online.
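    The per-pixel feedback idea can be illustrated with a toy grayscale background subtractor in which each pixel carries its own decision threshold R and update rate T, nudged up or down based on local residuals. The feature model (plain intensity differences rather than binary spatiotemporal features plus color), the parameter names, and the update constants are simplifications for illustration, not the actual SuBSENSE rules.

```python
import numpy as np

def update(frame, bg_model, R, T, r_inc=0.05, r_dec=0.02):
    """One step of a toy per-pixel feedback background subtractor.

    frame    : current grayscale frame (H, W)
    bg_model : running background estimate (H, W)
    R        : per-pixel decision threshold (feedback-controlled)
    T        : per-pixel update rate in [0, 1] (feedback-controlled)
    A simplified illustration of pixel-level feedback, not SuBSENSE itself.
    """
    diff = np.abs(frame.astype(np.float32) - bg_model)
    fg = diff > R                                  # foreground mask

    # feedback: raise the threshold where background residuals are large ("noisy"
    # regions such as dynamic backgrounds), lower it slowly elsewhere
    noisy = (~fg) & (diff > 0.5 * R)
    R = np.where(noisy, R * (1.0 + r_inc), np.maximum(R * (1.0 - r_dec), 10.0))

    # adapt faster where the model tracks well, slower where foreground persists
    T = np.clip(np.where(fg, T * 0.95, T * 1.05), 0.01, 0.25)

    # conservative model update: only background pixels feed the model
    bg_model = np.where(fg, bg_model, (1.0 - T) * bg_model + T * frame)
    return fg, bg_model, R, T
```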

  1. Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun

    2018-07-01

    People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selection of reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures, and the proposed method improves the median error from 256.7 m to 69.0 m and the percentage of geo-localized query pictures within a 50 m error from 17.2% to 43.2% compared with the previous method. Regarding the causes of reconstruction error, we also find that shorter distances from the cameras to the main objects in the query pictures tend to produce lower errors, and that the error component parallel to the road contributes more to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.

  2. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  3. Finding fossils in new ways: an artificial neural network approach to predicting the location of productive fossil localities.

    PubMed

    Anemone, Robert; Emerson, Charles; Conroy, Glenn

    2011-01-01

    Chance and serendipity have long played a role in the location of productive fossil localities by vertebrate paleontologists and paleoanthropologists. We offer an alternative approach, informed by methods borrowed from the geographic information sciences and using recent advances in computer science, to more efficiently predict where fossil localities might be found. Our model uses an artificial neural network (ANN) that is trained to recognize the spectral characteristics of known productive localities and other land cover classes, such as forest, wetlands, and scrubland, within a study area based on the analysis of remotely sensed (RS) imagery. Using these spectral signatures, the model then classifies other pixels throughout the study area. The results of the neural network classification can be examined and further manipulated within a geographic information systems (GIS) software package. While we have developed and tested this model on fossil mammal localities in deposits of Paleocene and Eocene age in the Great Divide Basin of southwestern Wyoming, a similar analytical approach can be easily applied to fossil-bearing sedimentary deposits of any age in any part of the world. We suggest that new analytical tools and methods of the geographic sciences, including remote sensing and geographic information systems, are poised to greatly enrich paleoanthropological investigations, and that these new methods should be embraced by field workers in the search for, and geospatial analysis of, fossil primates and hominins. Copyright © 2011 Wiley-Liss, Inc.
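    A minimal sketch of the workflow described above, assuming per-pixel spectral band values and land-cover labels are available as arrays (here generated synthetically): a small feed-forward network is trained to separate "productive locality" pixels from other cover classes, and its class probability can then be mapped over a scene. The network architecture and band count are illustrative, not those of the published model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical training data: one row per pixel, columns are spectral band values,
# labels are land-cover classes (0 = productive fossil locality, 1-3 = other cover).
rng = np.random.default_rng(1)
X = rng.random((2000, 7))                      # 7 spectral bands, synthetic values
y = rng.integers(0, 4, size=2000)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A small feed-forward ANN stands in for the paper's network; the architecture is arbitrary.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

# The class-0 probability of each pixel can then be mapped as a "fossil potential" surface.
fossil_potential = clf.predict_proba(X_test)[:, 0]
print("held-out accuracy (synthetic data):", round(clf.score(X_test, y_test), 3))
```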

  4. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach.

    PubMed

    Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin

    2017-12-08

    After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuned method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for
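    The measurement-update step of such a fingerprinting particle filter can be sketched as follows, assuming hypothetical fingerprint lookup functions (built in the offline stage) that return the expected WiFi RSSI vector and magnetic field magnitude at a candidate position; the Gaussian likelihoods and multinomial resampling are illustrative choices rather than the paper's exact formulation.

```python
import numpy as np

def pf_update(particles, weights, wifi_obs, mag_obs,
              wifi_fingerprint, mag_fingerprint, sigma_wifi=4.0, sigma_mag=2.0):
    """One measurement update of a particle filter fusing WiFi RSSI and magnetic field.

    particles        : (N, 2) candidate positions
    weights          : (N,) particle weights
    wifi_fingerprint : callable position -> expected RSSI vector   (assumed, offline survey)
    mag_fingerprint  : callable position -> expected magnetic norm (assumed, offline survey)
    """
    for i, p in enumerate(particles):
        r_wifi = wifi_obs - wifi_fingerprint(p)
        r_mag = mag_obs - mag_fingerprint(p)
        lik = np.exp(-0.5 * np.sum(r_wifi ** 2) / sigma_wifi ** 2)
        lik *= np.exp(-0.5 * r_mag ** 2 / sigma_mag ** 2)
        weights[i] *= lik
    weights = weights + 1e-300                 # avoid an all-zero weight vector
    weights /= weights.sum()

    # resample when the effective sample size drops too low
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```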

  5. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    PubMed Central

    Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng

    2017-01-01

    After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuned method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for

  6. Demonstration of lysosomal localization for the mammalian ependymin-related protein using classical approaches combined with a novel density shift method.

    PubMed

    Della Valle, Maria Cecilia; Sleat, David E; Sohar, Istvan; Wen, Ting; Pintar, John E; Jadot, Michel; Lobel, Peter

    2006-11-17

    Most newly synthesized soluble lysosomal proteins are delivered to the lysosome via the mannose 6-phosphate (Man-6-P)-targeting pathway. The presence of the Man-6-P post-translational modification allows these proteins to be affinity-purified on immobilized Man-6-P receptors. This approach has formed the basis for a number of proteomic studies that identified multiple as yet uncharacterized Man-6-P glycoproteins that may represent new lysosomal proteins. Although the presence of Man-6-P is suggestive of lysosomal function, the subcellular localization of such candidates requires experimental verification. Here, we have investigated one such candidate, ependymin-related protein (EPDR). EPDR is a protein of unknown function with some sequence similarity to ependymin, a fish protein thought to play a role in memory consolidation and learning. Using classical subcellular fractionation on rat brain, EPDR co-distributes with lysosomal proteins, but there is significant overlap between lysosomal and mitochondrial markers. For more definitive localization, we have developed a novel approach based upon a selective buoyant density shift of the brain lysosomes in a mutant mouse lacking NPC2, a lysosomal protein involved in lipid transport. EPDR, in parallel with lysosomal markers, shows this density shift in gradient centrifugation experiments comparing mutant and wild type mice. This approach, combined with morphological analyses, demonstrates that EPDR resides in the lysosome. In addition, the lipidosis-induced density shift approach represents a valuable tool for identification and validation of both luminal and membrane lysosomal proteins that should be applicable to high throughput proteomic studies.

  7. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  8. Localization and cooperative communication methods for cognitive radio

    NASA Astrophysics Data System (ADS)

    Duval, Olivier

    We study localization of nearby nodes and cooperative communication for cognitive radios. Cognitive radios sensing their environment to estimate the channel gain between nodes can cooperate and adapt their transmission power to maximize the capacity of the communication between two nodes. We study the end-to-end capacity of a cooperative relaying scheme using orthogonal frequency-division multiplexing (OFDM) modulation, under power constraints for both the base station and the relay station. The relay uses amplify-and-forward and decode-and-forward cooperative relaying techniques to retransmit messages on a subset of the available subcarriers. The power used in the base station and the relay station transmitters is allocated to maximize the overall system capacity. The subcarrier selection and power allocation are obtained based on convex optimization formulations and an iterative algorithm. Additionally, decode-and-forward relaying schemes are allowed to pair source and relayed subcarriers to further increase the capacity of the system. The proposed techniques outperform non-selective relaying schemes over a range of relay power budgets. Cognitive radios can be used for opportunistic access of the radio spectrum by detecting spectrum holes left unused by licensed primary users. We introduce a spectrum hole detection approach, which combines blind modulation classification, angle of arrival estimation and number of sources detection. We perform eigenspace analysis to determine the number of sources, and estimate their angles of arrival (AOA). In addition, we classify detected sources as primary or secondary users with their distinct second-order one-conjugate cyclostationarity features. Extensive simulations carried out indicate that the proposed system identifies and locates individual sources correctly, even at -4 dB signal-to-noise ratios (SNR). In environments with a high density of scatterers, several wireless channels experience non-line-of-sight (NLOS
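    The convex power-allocation building block can be illustrated with classic water-filling over parallel subcarriers, solved by bisection on the water level; the joint source/relay allocation and subcarrier pairing described above are more involved and are not reproduced here.

```python
import numpy as np

def water_filling(gains, p_total, tol=1e-9):
    """Classic water-filling power allocation over parallel subcarriers.

    Maximizes sum(log2(1 + p_k * g_k)) subject to sum(p_k) <= p_total, p_k >= 0,
    by bisecting on the water level mu. An illustrative building block only.
    """
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, p_total + 1.0 / gains.min()     # bracket the water level mu
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        power = np.maximum(mu - 1.0 / gains, 0.0)
        if power.sum() > p_total:
            hi = mu
        else:
            lo = mu
    power = np.maximum(lo - 1.0 / gains, 0.0)
    return power, np.sum(np.log2(1.0 + power * gains))

if __name__ == "__main__":
    g = np.array([2.0, 1.0, 0.5, 0.1])            # subcarrier channel gains (illustrative)
    p, capacity = water_filling(g, p_total=4.0)
    print("per-subcarrier power:", np.round(p, 3), "capacity (bit/s/Hz):", round(capacity, 3))
```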

  9. Solution of the advection-dispersion equation in two dimensions by a finite-volume Eulerian-Lagrangian localized adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1998-01-01

    We extend the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) for solution of the advection-dispersion equation to two dimensions. The method can conserve mass globally and is not limited by restrictions on the size of the grid Peclet or Courant number. Therefore, it is well suited for solution of advection-dominated ground-water solute transport problems. In test problem comparisons with standard finite differences, FVELLAM is able to attain accurate solutions on much coarser space and time grids. On fine grids, the accuracy of the two methods is comparable. A critical aspect of FVELLAM (and all other ELLAMs) is evaluation of the mass storage integral from the preceding time level. In FVELLAM this may be accomplished with either a forward or backtracking approach. The forward tracking approach conserves mass globally and is the preferred approach. The backtracking approach is less computationally intensive, but not globally mass conservative. Boundary terms are systematically represented as integrals in space and time which are evaluated by a common integration scheme in conjunction with forward tracking through time. Unlike the one-dimensional case, local mass conservation cannot be guaranteed, so slight oscillations in concentration can develop, particularly in the vicinity of inflow or outflow boundaries. Published by Elsevier Science Ltd.

  10. Millimeter-Wave Localizers for Aircraft-to-Aircraft Approach Navigation

    NASA Technical Reports Server (NTRS)

    Tang, Adrian J.

    2013-01-01

    Aerial refueling technology for both manned and unmanned aircraft is critical for operations where extended aircraft flight time is required. Existing refueling assets are typically manned aircraft, which couple to a second aircraft through the use of a refueling boom. Alignment and mating of the two aircraft continues to rely on human control with use of high-resolution cameras. With the recent advances in unmanned aircraft, it would be highly advantageous to remove/reduce human control from the refueling process, simplifying the amount of remote mission management and enabling new operational scenarios. Existing aerial refueling uses a camera, making it non-autonomous and prone to human error. Existing commercial localizer technology has proven robust and reliable, but is not suited for aircraft-to-aircraft approaches such as aerial refueling scenarios since the resolution is too coarse (approximately one meter). A localizer approach system for aircraft-to-aircraft docking can be constructed using the same modulation with a millimeter-wave carrier to provide high resolution. One technology used to remotely align commercial aircraft on approach to a runway is the instrument landing system (ILS). ILS has been in service within the U.S. for almost 50 years. In a commercial ILS, two partially overlapping beams of UHF (109 to 126 MHz) are broadcast from an antenna array so that their overlapping region defines the centerline of the runway. This is called a localizer system and is responsible for horizontal alignment of the approach. One beam is modulated with a 150-Hz tone, and the other with a 90-Hz tone. Through comparison of the modulation depths of both tones, an autopilot system aligns the approaching aircraft with the runway centerline. A similar system called a glide-slope (GS) exists in the 320-to-330 MHz band for vertical alignment of the approach. While this technology has been proven reliable for millions of commercial flights annually, its UHF nature limits
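    The localizer principle, comparing the depth of modulation of the 90 Hz and 150 Hz tones, can be sketched as below: single-bin DFT projections estimate each tone's depth from the demodulated envelope, and their difference (the DDM) carries the sign of the lateral deviation. The sample rate, tone-depth values, and estimator are illustrative assumptions.

```python
import numpy as np

def ddm(envelope, fs):
    """Difference in depth of modulation (DDM) between the 90 Hz and 150 Hz tones.

    envelope : demodulated envelope samples from the localizer receiver
    fs       : sample rate in Hz
    A positive DDM means the 90 Hz lobe dominates; its sign steers the autopilot
    back toward the beam centreline. Single-bin DFT amplitudes are used here as
    a simple way to measure the tone depths.
    """
    n = len(envelope)
    t = np.arange(n) / fs
    carrier = np.mean(envelope)                       # DC term of the envelope

    def tone_amp(f):
        # projection onto a complex exponential ~ single-bin DFT at frequency f
        return 2.0 * np.abs(np.sum(envelope * np.exp(-2j * np.pi * f * t))) / n

    depth_90 = tone_amp(90.0) / carrier
    depth_150 = tone_amp(150.0) / carrier
    return depth_90 - depth_150

if __name__ == "__main__":
    fs = 8000.0
    t = np.arange(int(fs)) / fs
    # synthetic on-course signal: equal 20% depth for both tones -> DDM ~ 0
    env = 1.0 + 0.2 * np.sin(2 * np.pi * 90 * t) + 0.2 * np.sin(2 * np.pi * 150 * t)
    print("DDM on centreline:", round(ddm(env, fs), 4))
```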

  11. Reactive Gas transport in soil: Kinetics versus Local Equilibrium Approach

    NASA Astrophysics Data System (ADS)

    Geistlinger, Helmut; Jia, Ruijan

    2010-05-01

    Gas transport through the unsaturated soil zone was studied using an analytical solution of the gas transport model that is mathematically equivalent to the Two-Region model. The gas transport model includes diffusive and convective gas fluxes, interphase mass transfer between the gas and water phase, and biodegradation. The influence of non-equilibrium phenomena, spatially variable initial conditions, and transient boundary conditions are studied. The objective of this paper is to compare the kinetic approach for interphase mass transfer with the standard local equilibrium approach and to find conditions and time-scales under which the local equilibrium approach is justified. The time-scale of investigation was limited to the day-scale, because this is the relevant scale for understanding gas emission from the soil zone with transient water saturation. For the first time a generalized mass transfer coefficient is proposed that justifies the often used steady-state Thin-Film mass transfer coefficient for small and medium water-saturated aggregates of about 10 mm. The main conclusion from this study is that non-equilibrium mass transfer depends strongly on the temporal and small-scale spatial distribution of water within the unsaturated soil zone. For regions with low water saturation and small water-saturated aggregates (radius about 1 mm) the local equilibrium approach can be used as a first approximation for diffusive gas transport. For higher water saturation and medium radii of water-saturated aggregates (radius about 10 mm) and for convective gas transport, the non-equilibrium effect becomes more and more important if the hydraulic residence time and the Damköhler number decrease. Relative errors can range up to 100% and more. While for medium radii the local equilibrium approach describes the main features both of the spatial concentration profile and the time-dependence of the emission rate, it fails completely for larger aggregates (radius about 100 mm

  12. A new experimental method for determining local airloads on rotor blades in forward flight

    NASA Astrophysics Data System (ADS)

    Berton, E.; Maresca, C.; Favier, D.

    This paper presents a new approach for determining local airloads on helicopter rotor blade sections in forward flight. The method is based on the momentum equation in which all the terms are expressed by means of the velocity field measured by a laser Doppler velocimeter. The relative magnitude of the different terms involved in the momentum and Bernoulli equations is estimated and the results are encouraging.

  13. Local Table Condensation in Rough Set Approach for Jumping Emerging Pattern Induction

    NASA Astrophysics Data System (ADS)

    Terlecki, Pawel; Walczak, Krzysztof

    This paper extends the rough set approach for JEP induction based on the notion of a condensed decision table. The original transaction database is transformed to a relational form and patterns are induced by means of local reducts. The transformation employs an item aggregation obtained by coloring a graph that reflects conflicts among items. For efficiency reasons we propose to perform this preprocessing locally, i.e. at the transaction level, to achieve a higher dimensionality gain. A special maintenance strategy is also used to avoid graph rebuilds. Both the global and local approaches have been tested and discussed for dense and synthetically generated sparse datasets.
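    Assuming that "conflicts" correspond to items co-occurring in at least one transaction (so that non-conflicting items can share a single relational attribute), the aggregation step might be sketched with a greedy coloring of the conflict graph, as below; networkx's greedy coloring stands in for whatever coloring heuristic the paper actually uses.

```python
import networkx as nx

def aggregate_items(transactions):
    """Aggregate items via graph coloring of their conflict graph.

    Two items are assumed to conflict if they co-occur in at least one transaction;
    conflicting items must stay in different aggregates so that the relational form
    remains unambiguous. An illustrative sketch, not the paper's exact procedure.
    """
    g = nx.Graph()
    for t in transactions:
        items = list(t)
        g.add_nodes_from(items)
        for i in range(len(items)):
            for j in range(i + 1, len(items)):
                g.add_edge(items[i], items[j])     # co-occurrence = conflict
    coloring = nx.coloring.greedy_color(g, strategy="largest_first")
    groups = {}
    for item, color in coloring.items():
        groups.setdefault(color, set()).add(item)
    return groups                                   # color -> aggregated item set

if __name__ == "__main__":
    db = [{"a", "b"}, {"b", "c"}, {"d"}, {"a", "c"}]
    print(aggregate_items(db))                      # e.g. {0: {'a', 'd'}, 1: {'b'}, 2: {'c'}}
```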

  14. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  15. Radiographic localization of unerupted teeth: further findings about the vertical tube shift method and other localization techniques.

    PubMed

    Jacobs, S G

    2000-10-01

    The parallax method (image/tube shift method, Clark's rule, Richards' buccal object rule) is recommended to localize unerupted teeth. Richards' contribution to the development of the parallax method is discussed. The favored method for localization uses a rotational panoramic radiograph in combination with an occlusal radiograph involving a vertical shift of the x-ray tube. The use of this combination when localizing teeth and supernumeraries in the premolar region is illustrated. When taking an occlusal radiograph to localize an unerupted maxillary canine, clinical situations are presented where modification of the vertical angulation of the tube of 70 degrees to 75 degrees or of the horizontal position of the tube is warranted. The limitations of axial (true, cross-sectional, vertex) occlusal radiographs are also explored.

  16. Supplementary routes to local anaesthesia.

    PubMed

    Meechan, J G

    2002-11-01

    The satisfactory provision of many dental treatments, particularly endodontics, relies on achieving excellent pain control. Unfortunately, the administration of a local anaesthetic solution does not always produce satisfactory anaesthesia of the dental pulp. This may be distressing for both patient and operator. Fortunately, failure of local anaesthetic injections can be overcome. This is often achieved by using alternative routes of approach for subsequent injections. Nerves such as the inferior alveolar nerve can be anaesthetized by a variety of block methods. However, techniques of anaesthesia other than the standard infiltration and regional block injections may be employed successfully when these former methods have failed to produce adequate pain control. This paper describes some supplementary local anaesthetic techniques that may be used to achieve pulpal anaesthesia for endodontic procedures when conventional approaches have failed. Although some of these techniques can be used as the primary form of anaesthesia, these are normally employed as 'back-up'. The methods described are intraligamentary (periodontal ligament) injections, intraosseous anaesthesia and the intrapulpal approach. The factors that influence the success of these methods and the advantages and disadvantages of each technique are discussed. The advent of new instrumentation, which permits the slow delivery of local anaesthetic solution has led to the development of novel methods of anaesthesia in dentistry. These new approaches are discussed.

  17. Optoelectronic scanning system upgrade by energy center localization methods

    NASA Astrophysics Data System (ADS)

    Flores-Fuentes, W.; Sergiyenko, O.; Rodriguez-Quiñonez, J. C.; Rivas-López, M.; Hernández-Balbuena, D.; Básaca-Preciado, L. C.; Lindner, L.; González-Navarro, F. F.

    2016-11-01

    A problem of upgrading an optoelectronic scanning system with digital post-processing of the signal based on adequate methods of energy center localization is considered. An improved dynamic triangulation analysis technique is proposed by an example of industrial infrastructure damage detection. A modification of our previously published method aimed at searching for the energy center of an optoelectronic signal is described. Application of the artificial intelligence algorithm of compensation for the error of determining the angular coordinate in calculating the spatial coordinate through dynamic triangulation is demonstrated. Five energy center localization methods are developed and tested to select the best method. After implementation of these methods, digital compensation for the measurement error, and statistical data analysis, a non-parametric behavior of the data is identified. The Wilcoxon signed rank test is applied to improve the result further. For optical scanning systems, it is necessary to detect a light emitter mounted on the infrastructure being investigated to calculate its spatial coordinate by the energy center localization method.
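    One of the simplest energy-center estimators, the power-weighted centroid of the scanner signal, can be sketched as follows; it is only an illustration of the general idea, not one of the five specific methods compared in the paper.

```python
import numpy as np

def energy_center(signal, positions=None):
    """Locate the energy center of an optoelectronic scanner signal.

    Computes the power-weighted centroid of the samples, one of the simplest
    energy-center estimators. The noise model and pulse shape in the demo
    below are synthetic.
    """
    signal = np.asarray(signal, dtype=float)
    if positions is None:
        positions = np.arange(signal.size)
    power = signal ** 2
    return np.sum(positions * power) / np.sum(power)

if __name__ == "__main__":
    x = np.linspace(-5, 5, 1001)
    pulse = np.exp(-((x - 1.2) ** 2) / 0.5) + 0.02 * np.random.default_rng(0).normal(size=x.size)
    print("estimated center:", round(energy_center(pulse, x), 3))   # close to 1.2
```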

  18. Improving the local wavenumber method by automatic DEXP transformation

    NASA Astrophysics Data System (ADS)

    Abbas, Mahmoud Ahmed; Fedi, Maurizio; Florio, Giovanni

    2014-12-01

    In this paper we present a new method for source parameter estimation, based on the local wavenumber function. We make use of the stable properties of the Depth from EXtreme Points (DEXP) method, in which the depth to the source is determined at the extreme points of the field scaled with a power law of the altitude. The method is thus particularly well suited to dealing with high-order local wavenumbers, as it is able to overcome their known instability caused by the use of high-order derivatives. The DEXP transformation enjoys a relevant feature when applied to the local wavenumber function: the scaling law is in fact independent of the structural index. Thus, unlike the DEXP transformation applied directly to potential fields, the Local Wavenumber DEXP transformation is fully automatic and may be implemented as a very fast imaging method, mapping every kind of source at the correct depth. The simultaneous presence of sources with different homogeneity degrees can also be easily and correctly treated. The method was applied to synthetic and real examples from Bulgaria and Italy, and the results agree well with known information about the causative sources.

  19. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by a special-purpose edge detection, then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a high level of accuracy that outperforms existing general-purpose face detection methods.

  20. RFMix: A Discriminative Modeling Approach for Rapid and Robust Local-Ancestry Inference

    PubMed Central

    Maples, Brian K.; Gravel, Simon; Kenny, Eimear E.; Bustamante, Carlos D.

    2013-01-01

    Local-ancestry inference is an important step in the genetic analysis of fully sequenced human genomes. Current methods can only detect continental-level ancestry (i.e., European versus African versus Asian) accurately even when using millions of markers. Here, we present RFMix, a powerful discriminative modeling approach that is faster (∼30×) and more accurate than existing methods. We accomplish this by using a conditional random field parameterized by random forests trained on reference panels. RFMix is capable of learning from the admixed samples themselves to boost performance and autocorrect phasing errors. RFMix shows high sensitivity and specificity in simulated Hispanics/Latinos and African Americans and admixed Europeans, Africans, and Asians. Finally, we demonstrate that African Americans in HapMap contain modest (but nonzero) levels of Native American ancestry (∼0.4%). PMID:23910464

  1. GRAM-CNN: a deep learning approach with local context for named entity recognition in biomedical text.

    PubMed

    Zhu, Qile; Li, Xiaolin; Conesa, Ana; Pereira, Cécile

    2018-05-01

    Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models. We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems. The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN. andyli@ece.ufl.edu or aconesa@ufl.edu. Supplementary data are available at Bioinformatics online.
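    A stripped-down skeleton of the idea, per-token tag scores computed by convolutions of several n-gram widths over the local word-embedding context, is sketched below in PyTorch. The character-level CNN, the CRF output layer, and all hyper-parameters of the released GRAM-CNN model are omitted; the dimensions used here are arbitrary.

```python
import torch
import torch.nn as nn

class LocalContextCNN(nn.Module):
    """Simplified skeleton of a GRAM-CNN-style tagger (illustrative only).

    Per-token tag scores are computed from convolutions over the local context
    of word embeddings with several kernel sizes (n-grams).
    """
    def __init__(self, vocab_size, emb_dim=100, n_tags=5, kernel_sizes=(3, 5), n_filters=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # 'same'-style padding keeps one output position per token
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, k, padding=k // 2) for k in kernel_sizes]
        )
        self.out = nn.Linear(n_filters * len(kernel_sizes), n_tags)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.emb(token_ids).transpose(1, 2)   # (batch, emb_dim, seq_len)
        feats = [torch.relu(conv(x)) for conv in self.convs]
        h = torch.cat(feats, dim=1).transpose(1, 2)   # (batch, seq_len, features)
        return self.out(h)                        # per-token tag logits

if __name__ == "__main__":
    model = LocalContextCNN(vocab_size=1000)
    logits = model(torch.randint(1, 1000, (2, 12)))
    print(logits.shape)                           # torch.Size([2, 12, 5])
```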

  2. GRAM-CNN: a deep learning approach with local context for named entity recognition in biomedical text

    PubMed Central

    Zhu, Qile; Li, Xiaolin; Conesa, Ana; Pereira, Cécile

    2018-01-01

    Abstract Motivation Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models. Results We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems. Availability and implementation The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN. Contact andyli@ece.ufl.edu or aconesa@ufl.edu Supplementary information Supplementary data are available at Bioinformatics online. PMID:29272325

  3. Local cell metrics: a novel method for analysis of cell-cell interactions.

    PubMed

    Su, Jing; Zapata, Pedro J; Chen, Chien-Chiang; Meredith, J Carson

    2009-10-23

    The regulation of many cell functions is inherently linked to cell-cell contact interactions. However, effects of contact interactions among adherent cells can be difficult to detect with global summary statistics due to the localized nature and noise inherent to cell-cell interactions. The lack of informatics approaches specific for detecting cell-cell interactions is a limitation in the analysis of large sets of cell image data, including traditional and combinatorial or high-throughput studies. Here we introduce a novel histogram-based data analysis strategy, termed local cell metrics (LCMs), which addresses this shortcoming. The new LCM method is demonstrated via a study of contact inhibition of proliferation of MC3T3-E1 osteoblasts. We describe how LCMs can be used to quantify the local environment of cells and how LCMs are decomposed mathematically into metrics specific to each cell type in a culture, e.g., differently-labelled cells in fluorescence imaging. Using this approach, a quantitative, probabilistic description of the contact inhibition effects in MC3T3-E1 cultures has been achieved. We also show how LCMs are related to the naïve Bayes model. Namely, LCMs are Bayes class-conditional probability functions, suggesting their use for data mining and classification. LCMs are successful in robust detection of cell contact inhibition in situations where conventional global statistics fail to do so. The noise due to the random features of cell behavior was suppressed significantly as a result of the focus on local distances, providing sensitive detection of cell-cell contact effects. The methodology can be extended to any quantifiable feature that can be obtained from imaging of cell cultures or tissue samples, including optical, fluorescent, and confocal microscopy. This approach may prove useful in interpreting culture and histological data in fields where cell-cell interactions play a critical role in determining cell fate, e.g., cancer, developmental
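    A simplified version of the local-metric idea, per-cell counts of neighbours of each type within a fixed radius, whose class-conditional histograms approximate the LCM statistics, might look like the sketch below; the radius, the synthetic data, and the reduction to raw counts are illustrative assumptions rather than the paper's full decomposition.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_cell_metrics(positions, labels, radius):
    """Per-cell counts of neighbours of each type within a radius.

    positions : (N, 2) cell centroid coordinates from segmented images
    labels    : (N,) cell type of each cell (e.g. 0/1 for two fluorescent labels)
    Returns an (N, n_types) array; histograms of these counts, conditioned on
    the centre cell's type, give a simplified version of the LCM statistics.
    """
    positions = np.asarray(positions, float)
    labels = np.asarray(labels)
    types = np.unique(labels)
    tree = cKDTree(positions)
    neighbours = tree.query_ball_point(positions, r=radius)
    counts = np.zeros((len(positions), len(types)), dtype=int)
    for i, idx in enumerate(neighbours):
        idx = [j for j in idx if j != i]              # exclude the cell itself
        for k, t in enumerate(types):
            counts[i, k] = np.sum(labels[idx] == t)
    return counts

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pos = rng.random((500, 2)) * 1000.0               # synthetic field, micrometres
    lab = rng.integers(0, 2, 500)
    lcm = local_cell_metrics(pos, lab, radius=75.0)
    print("mean same-type vs other-type neighbour counts for type 0:",
          lcm[lab == 0, 0].mean(), lcm[lab == 0, 1].mean())
```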

  4. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a high real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by the asphericity errors and the inaccurate value of position leads to larger errors in the estimation of magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than the traditional STAR method. PMID:27999322

  5. A Method for Spatially Resolved Local Intracellular Mechanochemical Sensing and Organelle Manipulation

    PubMed Central

    Shekhar, S.; Cambi, A.; Figdor, C.G.; Subramaniam, V.; Kanger, J.S.

    2012-01-01

    Because both the chemical and mechanical properties of living cells play crucial functional roles, there is a strong need for biophysical methods to address these properties simultaneously. Here we present a novel (to our knowledge) approach to measure local intracellular micromechanical and chemical properties using a hybrid magnetic chemical biosensor. We coupled a fluorescent dye, which serves as a chemical sensor, to a magnetic particle that is used for measurement of the viscoelastic environment by studying the response of the particle to magnetic force pulses. As a demonstration of the potential of this approach, we applied the method to study the process of phagocytosis, wherein cytoskeletal reorganization occurs in parallel with acidification of the phagosome. During this process, we measured the shear modulus and viscosity of the phagosomal environment concurrently with the phagosomal pH. We found that it is possible to manipulate phagocytosis by stalling the centripetal movement of the phagosome using magnetic force. Our results suggest that preventing centripetal phagosomal transport delays the onset of acidification. To our knowledge, this is the first report of manipulation of intracellular phagosomal transport without interfering with the underlying motor proteins or cytoskeletal network through biochemical methods. PMID:22947855

  6. Global/local stress analysis of composite panels

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Knight, Norman F., Jr.

    1989-01-01

    A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independent of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  7. Method Engineering: A Service-Oriented Approach

    NASA Astrophysics Data System (ADS)

    Cauvet, Corine

    In the past, a large variety of methods have been published ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering. The meta modeling approach provides means for building methods by instantiation, the component-based approach aims at supporting the development of methods by using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed for processing a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of service dynamic composition support method construction and method adaptation to different development contexts.

  8. Global/local methods research using a common structural analysis framework

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.

    1991-01-01

    Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  9. A potential-of-mean-force approach for fracture mechanics of heterogeneous materials using the lattice element method

    NASA Astrophysics Data System (ADS)

    Laubie, Hadrien; Radjaï, Farhang; Pellenq, Roland; Ulm, Franz-Josef

    2017-08-01

    Fracture of heterogeneous materials has emerged as a critical issue in many engineering applications, ranging from subsurface energy to biomedical applications, and requires a rational framework that allows linking local fracture processes with global fracture descriptors such as the energy release rate, fracture energy and fracture toughness. This is achieved here by means of a local and a global potential-of-mean-force (PMF) inspired Lattice Element Method (LEM) approach. In the local approach, fracture-strength criteria derived from the effective interaction potentials between mass points are shown to exhibit a scaling commensurable with the energy dissipation of fracture processes. In the global PMF-approach, fracture is considered as a sequence of equilibrium states associated with minimum potential energy states analogous to Griffith's approach. It is found that this global approach has much in common with a Grand Canonical Monte Carlo (GCMC) approach, in which mass points are randomly removed following a maximum dissipation criterion until the energy release rate reaches the fracture energy. The duality of the two approaches is illustrated through the application of the PMF-inspired LEM for fracture propagation in a homogeneous linear elastic solid using different means of evaluating the energy release rate. Finally, by application of the method to a textbook example of fracture propagation in a heterogeneous material, it is shown that the proposed PMF-inspired LEM approach captures some well-known toughening mechanisms related to fracture energy contrast, elasticity contrast and crack deflection in the considered two-phase layered composite material.

  10. Experiences of using a participatory action research approach to strengthen district local capacity in Eastern Uganda

    PubMed Central

    Tetui, Moses; Coe, Anna-Britt; Hurtig, Anna-Karin; Ekirapa-Kiracho, Elizabeth; Kiwanuka, Suzanne N.

    2017-01-01

    ABSTRACT Background: To achieve a sustained improvement in health outcomes, the way health interventions are designed and implemented is critical. A participatory action research (PAR) approach is applauded for building local capacity, such as health management, thereby increasing the chances of sustaining health interventions. Objective: This study explored stakeholder experiences of using PAR to implement an intervention meant to strengthen the local district capacity. Methods: This was a qualitative study featuring 18 informant interviews and a focus group discussion. Respondents included politicians, administrators, health managers and external researchers in three rural districts of eastern Uganda where PAR was used. Qualitative content analysis was used to explore stakeholders’ experiences. Results: ‘Being awakened’ emerged as an overarching category capturing stakeholder experiences of using PAR. This was described in four interrelated and sequential categories, which included: stakeholder involvement, being invigorated, the risk of wide stakeholder engagement and balancing the risk of wide stakeholder engagement. In terms of involvement, the stakeholders felt engaged, experienced a sense of ownership, and felt valued and responsible during the implementation of the project. Being invigorated meant being awakened, inspired and supported. On the other hand, risks such as conflict, stress and uncertainty were reported, and finally these risks were balanced through tolerance, risk-awareness and collaboration. Conclusions: The PAR approach was desirable because it created opportunities for building local capacity and enhancing continuity of interventions. Stakeholders were awakened by the approach, as it made them more responsive to systems challenges and possible local solutions. Nonetheless, the use of PAR should be considered in full knowledge of the undesirable and complex experiences, such as uncertainty, conflict and stress. This will enable adequate preparation and

  11. An observationally centred method to quantify local climate change as a distribution

    NASA Astrophysics Data System (ADS)

    Stainforth, David; Chapman, Sandra; Watkins, Nicholas

    2013-04-01

    For planning and adaptation, guidance on trends in local climate is needed at the specific thresholds relevant to particular impact or policy endeavours. This requires quantifying trends at specific quantiles in distributions of variables such as daily temperature or precipitation. These non-normal distributions vary both geographically and in time. The trends in the relevant quantiles may not simply follow the trend in the distribution mean. We present a method[1] for analysing local climatic timeseries data to assess which quantiles of the local climatic distribution show the greatest and most robust trends. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily temperature from specific locations across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of the sensitivity of different quantiles of the distributions to changing climate. Geographical location and temperature are treated as independent variables, we thus obtain as outputs how the trend or sensitivity varies with temperature (or occurrence likelihood), and with geographical location. These sensitivities are found to be geographically varying across Europe; as one would expect given the different influences on local climate between, say, Western Scotland and central Italy. We find as an output many regionally consistent patterns of response of potential value in adaptation planning. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. This also quantifies the level of detail needed from climate models if they are to be used as tools to assess climate change impact. [1] S C
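    A stripped-down version of the quantile-level analysis, comparing empirical quantiles of a local daily-temperature series between two multi-year windows, is sketched below; the full method's deconstruction of the changing cumulative distribution function into natural variability and secular change is not reproduced, and the synthetic data are illustrative.

```python
import numpy as np

def quantile_changes(early, late, quantiles=np.linspace(0.05, 0.95, 19)):
    """Change in each quantile of a local climate variable between two periods.

    early, late : daily values (e.g. temperature) for two multi-year windows
    Returns the quantile levels and the late-minus-early difference at each level.
    A trend that varies across quantiles signals that different parts of the
    distribution (e.g. warm extremes vs. the median) are shifting at different rates.
    """
    q_early = np.quantile(early, quantiles)
    q_late = np.quantile(late, quantiles)
    return quantiles, q_late - q_early

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # synthetic example: the warm tail shifts more than the median
    early = rng.normal(10.0, 5.0, 30 * 365)
    late = rng.normal(10.5, 5.8, 30 * 365)
    q, dq = quantile_changes(early, late)
    for level, change in zip(q[::6], dq[::6]):
        print(f"quantile {level:.2f}: change {change:+.2f} degC")
```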

  12. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.

  13. Algebraic Algorithm Design and Local Search

    DTIC Science & Technology

    1996-12-01

    method for performing algorithm design that is more purely algebraic than that of KIDS. This method is then applied to local search. Local search is a ... synthesis. Our approach was to follow KIDS in spirit, but to adopt a pure algebraic formalism, supported by Kestrel's SPECWARE environment (79), that ... design was developed that is more purely algebraic than that of KIDS. This method was then applied to local search. A general theory of local search was

  14. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    Use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, whereas automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and segmentation performance suffers because variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method based on relative intensity values of tumor and normal tissues to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, with a specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than the alternative methods.
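    The relative-intensity idea might be sketched as an alternating loop that scales each MR channel by the median intensity of an assumed normal-tissue region and then re-estimates that region from the normalized values; the 70th-percentile rule, the fixed iteration count, and the median reference below are illustrative assumptions, not the published algorithm.

```python
import numpy as np

def iterative_relative_normalization(channels, n_iter=5):
    """Normalize multispectral MR channels by an estimate of normal-tissue intensity.

    channels : (C, N) array, one row per MR channel, one column per voxel.
    The loop alternates between (1) scaling each channel by the median intensity
    of the currently assumed normal region and (2) re-estimating that region as
    the voxels whose relative intensities are closest to 1. A simplified sketch
    of the relative-intensity idea only.
    """
    channels = np.asarray(channels, dtype=float)
    normal_mask = np.ones(channels.shape[1], dtype=bool)   # start: everything "normal"
    normalized = channels.copy()
    for _ in range(n_iter):
        ref = np.median(channels[:, normal_mask], axis=1, keepdims=True)
        normalized = channels / ref                        # relative intensities
        # voxels whose joint deviation from the reference (== 1 after scaling) is small
        dist = np.sqrt(np.mean((normalized - 1.0) ** 2, axis=0))
        normal_mask = dist < np.percentile(dist, 70)       # keep the most "typical" 70%
    return normalized, normal_mask
```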

  15. A "New" Approach to Local Labor Market Analysis: A Feasibility Study.

    ERIC Educational Resources Information Center

    Goldfarb, Robert; Hamermesh, Daniel

    This report describes research on the New Haven labor market carried out during the summer and fall of 1969 and the spring of 1970. The aims of the research were to develop further the theoretical approach to micro-labor economics in a local labor market and to test the feasibility of collecting data from local firms which could be used to test…

  16. Damage detection of rotating wind turbine blades using local flexibility method and long-gauge fiber Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Hsu, Ting-Yu; Shiao, Shen-Yuan; Liao, Wen-I.

    2018-01-01

    Wind turbines are a cost-effective alternative energy source; however, their blades are susceptible to damage. Therefore, damage detection of wind turbine blades is of great importance for condition monitoring of wind turbines. Many vibration-based structural damage detection techniques have been proposed in the last two decades. The local flexibility method, which can determine local stiffness variations of beam-like structures from measured modal parameters, is one of the most promising vibration-based approaches. The local flexibility method does not require a finite element model of the structure; only a few structural modal parameters identified from the ambient vibration signals both before and after damage are required. In this study, we propose a damage detection approach for rotating wind turbine blades using the local flexibility method based on the dynamic macro-strain signals measured by long-gauge fiber Bragg grating (FBG)-based sensors. A small wind turbine structure was constructed and excited using a shaking table to generate vibration signals. The structure was designed to have natural frequencies as close as possible to those of a typical 1.5 MW wind turbine in real scale. The optical fiber signal of the rotating blades was transmitted to the data acquisition system through a rotary joint fixed inside the hollow shaft of the wind turbine. Reversible damage was simulated by aluminum plates attached to some sections of the wind turbine blades. The damaged locations of the rotating blades were successfully detected using the proposed approach, with the extent of damage somewhat over-estimated. Although the specimen cannot fully represent a real wind turbine blade, the results still show that FBG-based macro-strain measurement has the potential to provide the modal parameters of rotating wind turbines, from which the locations of blade segments with a change of rigidity can be estimated effectively by…

  17. A discrete-time localization method for capsule endoscopy based on on-board magnetic sensing

    NASA Astrophysics Data System (ADS)

    Salerno, Marco; Ciuti, Gastone; Lucarini, Gioia; Rizzo, Rocco; Valdastri, Pietro; Menciassi, Arianna; Landi, Alberto; Dario, Paolo

    2012-01-01

    Recent achievements in active capsule endoscopy have allowed controlled inspection of the bowel by magnetic guidance. Capsule localization represents an important enabling technology for such platforms. In this paper, the authors present a localization method, applied as a first step in discrete-time capsule position detection, that is useful for establishing a magnetic link at the beginning of an endoscopic procedure or for re-linking the capsule in the case of loss due to locomotion. The novelty of this approach consists in using magnetic sensors on board the capsule, whose output is combined with pre-calculated analytical solutions of a magnetic field model. A magnetic field triangulation algorithm is used to obtain the position of the capsule inside the gastrointestinal tract. Experimental validation has demonstrated that the proposed procedure is stable, accurate and has a wide localization range in a volume of about 18 × 10³ cm³. Position errors of 14 mm along the X direction, 11 mm along the Y direction and 19 mm along the Z direction were obtained in less than 27 s of processing time. The proposed approach, being compatible with the magnetic fields used for locomotion, can be easily extended to other platforms for active capsule endoscopy.

  18. Comparison study of global and local approaches describing critical phenomena on the Polish stock exchange market

    NASA Astrophysics Data System (ADS)

    Czarnecki, Łukasz; Grech, Dariusz; Pamuła, Grzegorz

    2008-12-01

    We confront global and local methods of analyzing financial crash-like events on the Polish financial market from the critical phenomena point of view. These methods are based on the analysis of log-periodicity and of the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), describing the largest developing financial market in Europe, is analyzed in a daily time horizon. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the corresponding power-law-divergent price model. Predictions coming from the log-periodicity scenario are verified for all main crashes that took place in WIG history. It is argued that crash predictions within the log-periodicity model strongly depend on the amount of data taken to make a fit and are therefore likely to contain large inaccuracies. Turning to the local fractal description, we calculate the so-called local (time-dependent) Hurst exponent H for the WIG time series and find a relation between the behavior of the local fractal properties of the WIG time series and the appearance of crashes on the financial market. The latter method seems to work better than the global approach, both for developing and for developed markets. The current market situation, particularly the Fed intervention in September 2007 and the market behavior immediately after this intervention, is also analyzed from the fractional Brownian motion point of view.
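
    For readers unfamiliar with the local (time-dependent) Hurst exponent, the sketch below estimates H in a sliding window using a generic rescaled-range (R/S) estimator; this is a textbook procedure, not necessarily the one used in the study, and the price series is a placeholder.

      import numpy as np

      def rs_hurst(x):
          """Rescaled-range (R/S) estimate of the Hurst exponent for a short series."""
          x = np.asarray(x, dtype=float)
          log_s, log_rs = [], []
          for s in (8, 16, 32, 64):
              if s > len(x):
                  continue
              rs_vals = []
              for start in range(0, len(x) - s + 1, s):
                  seg = x[start:start + s]
                  dev = np.cumsum(seg - seg.mean())
                  sd = seg.std()
                  if sd > 0:
                      rs_vals.append((dev.max() - dev.min()) / sd)
              if rs_vals:
                  log_s.append(np.log(s))
                  log_rs.append(np.log(np.mean(rs_vals)))
          return np.polyfit(log_s, log_rs, 1)[0]         # slope ~ H

      def local_hurst(prices, window=240):
          """Time-dependent H over a sliding window of log-returns."""
          r = np.diff(np.log(prices))
          return [rs_hurst(r[i - window:i]) for i in range(window, len(r) + 1)]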

  19. A local segmentation parameter optimization approach for mapping heterogeneous urban environments using VHR imagery

    NASA Astrophysics Data System (ADS)

    Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore

    2017-10-01

    Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, a single segmentation parameter may be unable to give satisfying segmentation results for the whole scene. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to the optimization of the segmentation parameter compared to a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density. Each local zone is then segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.
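
    A common USPO formulation scores each candidate segmentation with a normalized combination of area-weighted variance (intra-object homogeneity, lower is better) and spatial autocorrelation between neighbouring objects (lower is better, meaning neighbours differ). The sketch below only illustrates that selection loop; the segmentation and scoring callables stand in for an OBIA toolchain and are assumptions, not part of this record.

      def optimize_scale(image, candidate_scales, segment_image,
                         weighted_variance, morans_i):
          """Pick the segmentation scale with the best combined USPO-style score.
          All callables are hypothetical placeholders for an OBIA toolchain."""
          wv, mi = [], []
          for scale in candidate_scales:
              objects = segment_image(image, scale)
              wv.append(weighted_variance(image, objects))   # intra-object homogeneity
              mi.append(morans_i(image, objects))            # similarity between objects
          def to_score(values):                              # low raw value -> high score
              lo, hi = min(values), max(values)
              return [(hi - v) / (hi - lo) if hi > lo else 1.0 for v in values]
          scores = [0.5 * a + 0.5 * b for a, b in zip(to_score(wv), to_score(mi))]
          best = max(range(len(candidate_scales)), key=scores.__getitem__)
          return candidate_scales[best], scores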

  20. The demonstration of a theory-based approach to the design of localized patient safety interventions

    PubMed Central

    2013-01-01

    Background There is evidence of unsafe care in healthcare systems globally. Interventions to implement recommended practice often have modest and variable effects. Ideally, selecting and adapting interventions according to local contexts should enhance effects. However, the means by which this can happen is seldom systematic, based on theory, or made transparent. This work aimed to demonstrate the applicability, feasibility, and acceptability of a theoretical domains framework implementation (TDFI) approach for co-designing patient safety interventions. Methods We worked with three hospitals to support the implementation of evidence-based guidance to reduce the risk of feeding into misplaced nasogastric feeding tubes. Our stepped process, informed by the TDF and key principles from implementation literature, entailed: involving stakeholders; identifying target behaviors; identifying local factors (barriers and levers) affecting behavior change using a TDF-based questionnaire; working with stakeholders to generate specific local strategies to address key barriers; and supporting stakeholders to implement strategies. Exit interviews and audit data collection were undertaken to assess the feasibility and acceptability of this approach. Results Following audit and discussion, implementation teams for each Trust identified the process of checking the positioning of nasogastric tubes prior to feeding as the key behavior to target. Questionnaire results indicated differences in key barriers between organizations. Focus groups generated innovative, generalizable, and adaptable strategies for overcoming barriers, such as awareness events, screensavers, equipment modifications, and interactive learning resources. Exit interviews identified themes relating to the benefits, challenges, and sustainability of this approach. Time trend audit data were collected for 301 patients over an 18-month period for one Trust, suggesting clinically significant improved use of pH and

  1. Local Discontinuous Galerkin Methods for the Cahn-Hilliard Type Equations

    DTIC Science & Technology

    2007-01-01

    Kuramoto-Sivashinsky equations, the Ito-type coupled KdV equations, the Kadomtsev-Petviashvili equation, and the Zakharov-Kuznetsov equation. A common ... Local discontinuous Galerkin methods for the Cahn-Hilliard type equations, Yinhua Xia, Yan Xu and Chi-Wang Shu. Abstract: In this paper we develop ... local discontinuous Galerkin (LDG) methods for the fourth-order nonlinear Cahn-Hilliard equation and system. The energy stability of the LDG methods is

  2. A method for spatially resolved local intracellular mechanochemical sensing and organelle manipulation.

    PubMed

    Shekhar, S; Cambi, A; Figdor, C G; Subramaniam, V; Kanger, J S

    2012-08-08

    Because both the chemical and mechanical properties of living cells play crucial functional roles, there is a strong need for biophysical methods to address these properties simultaneously. Here we present a novel (to our knowledge) approach to measure local intracellular micromechanical and chemical properties using a hybrid magnetic chemical biosensor. We coupled a fluorescent dye, which serves as a chemical sensor, to a magnetic particle that is used for measurement of the viscoelastic environment by studying the response of the particle to magnetic force pulses. As a demonstration of the potential of this approach, we applied the method to study the process of phagocytosis, wherein cytoskeletal reorganization occurs in parallel with acidification of the phagosome. During this process, we measured the shear modulus and viscosity of the phagosomal environment concurrently with the phagosomal pH. We found that it is possible to manipulate phagocytosis by stalling the centripetal movement of the phagosome using magnetic force. Our results suggest that preventing centripetal phagosomal transport delays the onset of acidification. To our knowledge, this is the first report of manipulation of intracellular phagosomal transport without interfering with the underlying motor proteins or cytoskeletal network through biochemical methods. Copyright © 2012 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  3. Locally adaptive parallel temperature accelerated dynamics method

    NASA Astrophysics Data System (ADS)

    Shim, Yunsic; Amar, Jacques G.

    2010-03-01

    The recently-developed temperature-accelerated dynamics (TAD) method [M. Sørensen and A.F. Voter, J. Chem. Phys. 112, 9599 (2000)] along with the more recently developed parallel TAD (parTAD) method [Y. Shim et al, Phys. Rev. B 76, 205439 (2007)] allow one to carry out non-equilibrium simulations over extended time and length scales. The basic idea behind TAD is to speed up transitions by carrying out a high-temperature MD simulation and then use the resulting information to obtain event times at the desired low temperature. In a typical implementation, a fixed high temperature T_high is used. However, in general one expects that for each configuration there exists an optimal value of T_high which depends on the particular transition pathways and activation energies for that configuration. Here we present a locally adaptive high-temperature TAD method in which, instead of using a fixed T_high, the high temperature is dynamically adjusted in order to maximize simulation efficiency. Preliminary results of the performance obtained from parTAD simulations of Cu/Cu(100) growth using the locally adaptive T_high method will also be presented.

  4. Wlan-Based Indoor Localization Using Neural Networks

    NASA Astrophysics Data System (ADS)

    Saleem, Fasiha; Wyne, Shurjeel

    2016-07-01

    Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi-based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with a probabilistic fingerprinting method based on maximum likelihood estimation (MLE) of the user location. We incorporate the spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. The localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling the case of missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.
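
    A minimal sketch of ANN-based fingerprinting in the spirit of this record is shown below, with scikit-learn standing in for whatever network architecture the authors used; the RSSI fingerprints, positions, and the floor value used to impute missing APs are all placeholder assumptions.

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # Offline stage: RSSI fingerprints collected at known positions (synthetic here).
      rng = np.random.default_rng(0)
      train_rssi = rng.uniform(-90, -30, size=(500, 6))   # 6 hypothetical access points
      train_xy = rng.uniform(0, 20, size=(500, 2))        # positions in metres

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
      )
      model.fit(train_rssi, train_xy)

      # Online stage: a missing AP is imputed with a floor value (one possible strategy)
      # before the network regresses the user's position.
      new_rssi = np.array([[-55.0, -70.0, -100.0, -62.0, -80.0, -45.0]])
      print(model.predict(new_rssi))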

  5. Robust 3D face landmark localization based on local coordinate coding.

    PubMed

    Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J

    2014-12-01

    In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under the variations in the pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state of the art methods in terms of its robustness, flexibility, and accuracy.

  6. Measuring Filament Orientation: A New Quantitative, Local Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Green, C.-E.; Cunningham, M. R.; Jones, P. A.

    The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the “Sobel-gradient method.” The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of “big data.”

  7. Measuring Filament Orientation: A New Quantitative, Local Approach

    NASA Astrophysics Data System (ADS)

    Green, C.-E.; Dawson, J. R.; Cunningham, M. R.; Jones, P. A.; Novak, G.; Fissel, L. M.

    2017-09-01

    The relative orientation between filamentary structures in molecular clouds and the ambient magnetic field provides insight into filament formation and stability. To calculate the relative orientation, a measurement of filament orientation is first required. We propose a new method to calculate the orientation of the one-pixel-wide filament skeleton that is output by filament identification algorithms such as filfinder. We derive the local filament orientation from the direction of the intensity gradient in the skeleton image using the Sobel filter and a few simple post-processing steps. We call this the “Sobel-gradient method.” The resulting filament orientation map can be compared quantitatively on a local scale with the magnetic field orientation map to then find the relative orientation of the filament with respect to the magnetic field at each point along the filament. It can also be used for constructing radial profiles for filament width fitting. The proposed method facilitates automation in analyses of filament skeletons, which is imperative in this era of “big data.”

  8. Single-particle dynamics of the Anderson model: a local moment approach

    NASA Astrophysics Data System (ADS)

    Glossop, Matthew T.; Logan, David E.

    2002-07-01

    A non-perturbative local moment approach to single-particle dynamics of the general asymmetric Anderson impurity model is developed. The approach encompasses all energy scales and interaction strengths. It thereby captures strong coupling Kondo behaviour, including the resultant universal scaling behaviour of the single-particle spectrum, as well as the mixed valence and essentially perturbative empty orbital regimes. The underlying approach is physically transparent and innately simple, and as such is capable of practical extension to lattice-based models within the framework of dynamical mean-field theory.

  9. Level set method with automatic selective local statistics for brain tumor segmentation in MR images.

    PubMed

    Thapaliya, Kiran; Pyun, Jae-Young; Park, Chun-Su; Kwon, Goo-Rak

    2013-01-01

    The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images were calculated and used to identify the tumor among the different objects. In level set methods, the calculation of the parameters is a challenging task; here, the different parameters were calculated automatically for different types of images. The basic thresholding value was updated and adjusted automatically for different MR images and used to calculate the different parameters of the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlighted the efficiency and robustness of the method. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
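
    The generic form of a signed pressure function used in SPF-driven level sets can be written from the mean intensities inside and outside the current contour, as in the sketch below; the paper's modified SPF and its automatic parameter selection are not reproduced here, and the image and mask are placeholders.

      import numpy as np

      def spf(image, mask):
          """Generic signed pressure function from inside/outside contour means."""
          c1 = image[mask].mean()              # mean intensity inside the contour
          c2 = image[~mask].mean()             # mean intensity outside the contour
          p = image - (c1 + c2) / 2.0
          return p / (np.abs(p).max() + 1e-12) # values in [-1, 1] steer the contour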

  10. Semi-supervised protein subcellular localization.

    PubMed

    Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang

    2009-01-30

    Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate prediction of protein subcellular localization can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. A major drawback of these machine learning-based approaches, however, is that a large amount of data must be labeled in order for the prediction system to learn a classifier of good generalization ability. In real-world cases, it is laborious, expensive and time-consuming to experimentally determine the subcellular localization of a protein and prepare instances of labeled data. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use far fewer labeled instances to construct a high-quality prediction model. We first construct an initial classifier using a small set of labeled examples, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our method can effectively reduce the labeling workload by exploiting unlabeled data, and it enhances the state-of-the-art prediction results of SVM classifiers by more than 10%.
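
    One standard way to realize the semi-supervised idea is self-training: fit a classifier on the small labeled set, then repeatedly add its most confident predictions on unlabeled proteins to the training pool. The sketch below uses a scikit-learn SVM as a stand-in; the feature matrices are placeholders, and the paper's specific semi-supervised framework may differ.

      import numpy as np
      from sklearn.svm import SVC

      def self_train(X_lab, y_lab, X_unlab, rounds=5, per_round=50):
          """Self-training loop: grow the labeled set with confident pseudo-labels."""
          clf = SVC(probability=True)
          for _ in range(rounds):
              clf.fit(X_lab, y_lab)
              if len(X_unlab) == 0:
                  break
              proba = clf.predict_proba(X_unlab)
              take = np.argsort(proba.max(axis=1))[-per_round:]   # most confident
              pseudo = clf.classes_[proba[take].argmax(axis=1)]
              X_lab = np.vstack([X_lab, X_unlab[take]])
              y_lab = np.concatenate([y_lab, pseudo])
              X_unlab = np.delete(X_unlab, take, axis=0)
          return clf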

  11. Density matrix modeling of quantum cascade lasers without an artificially localized basis: A generalized scattering approach

    NASA Astrophysics Data System (ADS)

    Pan, Andrew; Burnett, Benjamin A.; Chui, Chi On; Williams, Benjamin S.

    2017-08-01

    We derive a density matrix (DM) theory for quantum cascade lasers (QCLs) that describes the influence of scattering on coherences through a generalized scattering superoperator. The theory enables quantitative modeling of QCLs, including localization and tunneling effects, using the well-defined energy eigenstates rather than the ad hoc localized basis states required by most previous DM models. Our microscopic approach to scattering also eliminates the need for phenomenological transition or dephasing rates. We discuss the physical interpretation and numerical implementation of the theory, presenting sets of both energy-resolved and thermally averaged equations, which can be used for detailed or compact device modeling. We illustrate the theory's applications by simulating a high performance resonant-phonon terahertz (THz) QCL design, which cannot be easily or accurately modeled using conventional DM methods. We show that the theory's inclusion of coherences is crucial for describing localization and tunneling effects consistent with experiment.

  12. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.

  13. DLocalMotif: a discriminative approach for discovering local motifs in protein sequences.

    PubMed

    Mehdi, Ahmed M; Sehgal, Muhammad Shoaib B; Kobe, Bostjan; Bailey, Timothy L; Bodén, Mikael

    2013-01-01

    Local motifs are patterns of DNA or protein sequences that occur within a sequence interval relative to a biologically defined anchor or landmark. Current protein motif discovery methods do not adequately consider such constraints to identify biologically significant motifs that are only weakly over-represented but spatially confined. Using negatives, i.e. sequences known to not contain a local motif, can further increase the specificity of their discovery. This article introduces the method DLocalMotif that makes use of positional information and negative data for local motif discovery in protein sequences. DLocalMotif combines three scoring functions, measuring degrees of motif over-representation, entropy and spatial confinement, specifically designed to discriminatively exploit the availability of negative data. The method is shown to outperform current methods that use only a subset of these motif characteristics. We apply the method to several biological datasets. The analysis of peroxisomal targeting signals uncovers several novel motifs that occur immediately upstream of the dominant peroxisomal targeting signal-1 signal. The analysis of proline-tyrosine nuclear localization signals uncovers multiple novel motifs that overlap with C2H2 zinc finger domains. We also evaluate the method on classical nuclear localization signals and endoplasmic reticulum retention signals and find that DLocalMotif successfully recovers biologically relevant sequence properties. http://bioinf.scmb.uq.edu.au/dlocalmotif/

  14. Localized diabatization applied to excitons in molecular crystals

    NASA Astrophysics Data System (ADS)

    Jin, Zuxin; Subotnik, Joseph E.

    2017-06-01

    Traditional ab initio electronic structure calculations of periodic systems yield delocalized eigenstates that should be understood as adiabatic states. For example, excitons are bands of extended states which superimpose localized excitations on every lattice site. However, in general, in order to study the effects of nuclear motion on exciton transport, it is standard to work with a localized description of excitons, especially in a hopping regime; even in a band regime, a localized description can be helpful. To extract localized excitons from a band requires essentially a diabatization procedure. In this paper, three distinct methods are proposed for such localized diabatization: (i) a simple projection method, (ii) a more general Pipek-Mezey localization scheme, and (iii) a variant of Boys diabatization. Approaches (i) and (ii) require localized, single-particle Wannier orbitals, while approach (iii) has no such dependence. These methods should be very useful for studying energy transfer through solids with ab initio calculations.

  15. A locally adaptive kernel regression method for facies delineation

    NASA Astrophysics Data System (ADS)

    Fernàndez-Garcia, D.; Barahona-Palomo, M.; Henri, C. V.; Sanchez-Vila, X.

    2015-12-01

    Facies delineation is defined as the separation of geological units with distinct intrinsic characteristics (grain size, hydraulic conductivity, mineralogical composition). A major challenge in this area stems from the fact that only a few scattered pieces of hydrogeological information are available to delineate geological facies. Several methods to delineate facies are available in the literature, ranging from those based only on existing hard data to those including secondary data or external knowledge about sedimentological patterns. This paper describes a methodology that uses kernel regression methods as an effective tool for facies delineation. The method uses both the spatial locations and the actual sampled values to produce, for each individual hard data point, a locally adaptive steering kernel function, self-adjusting the principal directions of the local anisotropic kernels to the direction of highest local spatial correlation. The method is shown to outperform the nearest-neighbor classification method in a number of synthetic aquifers whenever the available number of hard data is small and randomly distributed in space. In the case of exhaustive sampling, the steering kernel regression method converges to the true solution. Simulations run on a suite of synthetic examples are used to explore the selection of kernel parameters in typical field settings. It is shown that, in practice, a rule of thumb can be used to obtain suboptimal results. The performance of the method improves significantly when external information regarding facies proportions is incorporated. Remarkably, the method allows for a reasonable reconstruction of the facies connectivity patterns, shown in terms of breakthrough curve performance.
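
    The locally adaptive, anisotropic kernel idea can be illustrated with a toy classifier that shapes a Gaussian kernel from the covariance of each estimation point's nearest hard-data coordinates, so the kernel stretches along the direction of strongest local continuity. This is a schematic reading of the approach with synthetic placeholders, not the paper's exact estimator.

      import numpy as np

      def classify_point(x0, coords, labels, k=10):
          """Weighted facies vote with a locally adaptive anisotropic Gaussian kernel.
          coords: (n, 2) hard-data locations; labels: (n,) facies codes (placeholders)."""
          d = np.linalg.norm(coords - x0, axis=1)
          nn = np.argsort(d)[:k]
          # Covariance of neighbouring coordinates sets the kernel's shape and
          # orientation, aligning it with the main direction of spatial continuity.
          C = np.cov(coords[nn].T) + 1e-6 * np.eye(2)
          Cinv = np.linalg.inv(C)
          diffs = coords - x0
          w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', diffs, Cinv, diffs))
          votes = {}
          for lab, wi in zip(labels, w):
              votes[lab] = votes.get(lab, 0.0) + wi
          return max(votes, key=votes.get)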

  16. Two-step superresolution approach for surveillance face image through radial basis function-partial least squares regression and locality-induced sparse representation

    NASA Astrophysics Data System (ADS)

    Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun

    2013-10-01

    Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison of some state-of-the-art SR methods shows the superiority of the proposed two-step approach, RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate the effectiveness under both simulation conditions and some real conditions.

  17. Conventional and reciprocal approaches to the inverse dipole localization problem for N(20)-P (20) somatosensory evoked potentials.

    PubMed

    Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre

    2013-01-01

    The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are then obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.

  18. An Update on Modern Approaches to Localized Esophageal Cancer

    PubMed Central

    Welsh, James; Amini, Arya; Likhacheva, Anna; Erasmus, Jeremy; Gomez, Daniel; Davila, Marta; Mehran, Reza J; Komaki, Ritsuko; Liao, Zhongxing; Hofstetter, Wayne L; Bhutani, Manoop; Ajani, Jaffer A

    2014-01-01

    Esophageal cancer treatment continues to be a topic of wide debate. Based on improvements in chemotherapy drugs, surgical techniques, and radiotherapy advances, esophageal cancer treatment approaches are becoming more specific to the stage of the tumor and the overall performance status of the patient. While surgery continues to be the standard treatment option for localized disease, the current direction favors multimodality treatment including both radiation and chemotherapy with surgery. In the next few years, we will continue to see improvements in radiation techniques and proton treatment, with more minimally invasive surgical approaches minimizing postoperative side effects, and the discovery of molecular biomarkers to help deliver more specifically targeted medication to treat esophageal cancers. PMID:21365188

  19. Method of preliminary localization of the iris in biometric access control systems

    NASA Astrophysics Data System (ADS)

    Minacova, N.; Petrov, I.

    2015-10-01

    This paper presents a method for preliminary localization of the iris, based on stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.

  20. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  1. Use of the Support Group Method to Tackle Bullying, and Evaluation from Schools and Local Authorities in England

    ERIC Educational Resources Information Center

    Smith, Peter K.; Howard, Sharon; Thompson, Fran

    2007-01-01

    The Support Group Method (SGM), formerly the No Blame Approach, is widely used as an anti-bullying intervention in schools, but has aroused some controversy. There is little evidence from users regarding its effectiveness. We aimed to ascertain the use of and support for the SGM in Local Authorities (LAs) and schools; and obtain ratings of…

  2. Localized diabatization applied to excitons in molecular crystals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Zuxin; Subotnik, Joseph E.

    Traditional ab initio electronic structure calculations of periodic systems yield delocalized eigenstates that should be understood as adiabatic states. For example, excitons are bands of extended states which superimpose localized excitations on every lattice site. However, in general, in order to study the effects of nuclear motion on exciton transport, it is standard to work with a localized description of excitons, especially in a hopping regime; even in a band regime, a localized description can be helpful. To extract localized excitons from a band requires essentially a diabatization procedure. In this paper, three distinct methods are proposed for such localized diabatization: (i) a simple projection method, (ii) a more general Pipek-Mezey localization scheme, and (iii) a variant of Boys diabatization. Approaches (i) and (ii) require localized, single-particle Wannier orbitals, while approach (iii) has no such dependence. Lastly, these methods should be very useful for studying energy transfer through solids with ab initio calculations.

  3. Localized diabatization applied to excitons in molecular crystals

    DOE PAGES

    Jin, Zuxin; Subotnik, Joseph E.

    2017-06-28

    Traditional ab initio electronic structure calculations of periodic systems yield delocalized eigenstates that should be understood as adiabatic states. For example, excitons are bands of extended states which superimpose localized excitations on every lattice site. However, in general, in order to study the effects of nuclear motion on exciton transport, it is standard to work with a localized description of excitons, especially in a hopping regime; even in a band regime, a localized description can be helpful. To extract localized excitons from a band requires essentially a diabatization procedure. In this paper, three distinct methods are proposed for such localized diabatization: (i) a simple projection method, (ii) a more general Pipek-Mezey localization scheme, and (iii) a variant of Boys diabatization. Approaches (i) and (ii) require localized, single-particle Wannier orbitals, while approach (iii) has no such dependence. Lastly, these methods should be very useful for studying energy transfer through solids with ab initio calculations.

  4. Local Discontinuous Galerkin Methods for Partial Differential Equations with Higher Order Derivatives

    NASA Technical Reports Server (NTRS)

    Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    In this paper we review existing local discontinuous Galerkin methods and develop new ones for solving time-dependent partial differential equations with higher order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection-diffusion equations involving second derivatives and for KdV-type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for time-dependent bi-harmonic type equations involving fourth derivatives, and for partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L^2 stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, applied to the local discontinuous Galerkin methods for equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher-derivative cases, effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, with a locally uniform mesh.

  5. General Approach to Quantum Channel Impossibility by Local Operations and Classical Communication.

    PubMed

    Cohen, Scott M

    2017-01-13

    We describe a general approach to proving the impossibility of implementing a quantum channel by local operations and classical communication (LOCC), even with an infinite number of rounds, and find that this can often be demonstrated by solving a set of linear equations. The method also allows one to design a LOCC protocol to implement the channel whenever such a protocol exists in any finite number of rounds. Perhaps surprisingly, the computational expense for analyzing LOCC channels is not much greater than that for LOCC measurements. We apply the method to several examples, two of which provide numerical evidence that the set of quantum channels that are not LOCC is not closed and that there exist channels that can be implemented by LOCC either in one round or in three rounds that are on the boundary of the set of all LOCC channels. Although every LOCC protocol must implement a separable quantum channel, it is a very difficult task to determine whether or not a given channel is separable. Fortunately, prior knowledge that the channel is separable is not required for application of our method.

  6. Identity, Intersectionality, and Mixed-Methods Approaches

    ERIC Educational Resources Information Center

    Harper, Casandra E.

    2011-01-01

    In this article, the author argues that current strategies to study and understand students' identities fall short of fully capturing their complexity. A multi-dimensional perspective and a mixed-methods approach can reveal nuance that is missed with current approaches. The author offers an illustration of how mixed-methods research can promote a…

  7. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities.

    PubMed

    Green, Carla A; Duan, Naihua; Gibbons, Robert D; Hoagwood, Kimberly E; Palinkas, Lawrence A; Wisdom, Jennifer P

    2015-09-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings.

  8. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

    PubMed Central

    Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.

    2015-01-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814

  9. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than the global measurement typically used in existing methods. The weights obtained by our method are very robust to the degradation caused by noisy features, even in very high dimensions. To demonstrate the performance of our method, extensive classification experiments have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
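
    For orientation, the sketch below shows the classical RELIEF update on which LHR builds: a feature's weight grows when it separates a sample from its nearest miss and shrinks when it separates the sample from its nearest hit. LHR's local hyperplane approximation is not reproduced here, and the data arrays are placeholders.

      import numpy as np

      def relief_weights(X, y, n_iters=100, seed=0):
          """Classical RELIEF feature weighting (binary classes, placeholder data)."""
          rng = np.random.default_rng(seed)
          n, d = X.shape
          w = np.zeros(d)
          for _ in range(n_iters):
              i = rng.integers(n)
              same = (y == y[i])
              same[i] = False                           # exclude the sample itself
              diff = ~(y == y[i])
              hit = X[same][np.argmin(np.linalg.norm(X[same] - X[i], axis=1))]
              miss = X[diff][np.argmin(np.linalg.norm(X[diff] - X[i], axis=1))]
              w += np.abs(X[i] - miss) - np.abs(X[i] - hit)   # margin-style update
          return w / n_iters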

  10. The modal surface interpolation method for damage localization

    NASA Astrophysics Data System (ADS)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has previously been proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in both the original and the damaged configurations. If this is not the case, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated here. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequencies, hence the relevant ODSs are estimated with higher reliability. Several methods have also been proposed to reliably estimate mode shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. To reduce or eliminate the drawbacks related to the estimation of ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature computed from the interpolation error of the mode shapes only, rather than of all the operational shapes in the significant frequency range. The results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) are compared with those of the new proposed version (with the interpolation error limited to the modal shapes).
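
    The interpolation error at the heart of the IM can be illustrated as a leave-one-out spline fit along a modal (or operational) shape: each point is predicted from the remaining points, and the prediction error, which grows where the shape loses smoothness, is the damage feature. The sketch below uses SciPy's cubic spline on placeholder data and is only a schematic rendering of the idea.

      import numpy as np
      from scipy.interpolate import CubicSpline

      def interpolation_error(x, shape):
          """Leave-one-out cubic-spline error along a modal/operational shape.
          Peaks in the error flag localized reductions of smoothness (damage)."""
          shape = np.asarray(shape, dtype=float)
          err = np.zeros_like(shape)
          for i in range(1, len(x) - 1):               # interior points only
              keep = np.ones(len(x), dtype=bool)
              keep[i] = False
              spline = CubicSpline(x[keep], shape[keep])
              err[i] = abs(spline(x[i]) - shape[i])
          return err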

  11. Predicting protein complexes using a supervised learning method combined with local structural information.

    PubMed

    Dong, Yadong; Sun, Yongqi; Qin, Chao

    2018-01-01

    The existing protein complex detection methods can be broadly divided into two categories: unsupervised and supervised learning methods. Most of the unsupervised learning methods assume that protein complexes are in dense regions of protein-protein interaction (PPI) networks even though many true complexes are not dense subgraphs. Supervised learning methods utilize the informative properties of known complexes; they often extract features from existing complexes and then use the features to train a classification model. The trained model is used to guide the search process for new complexes. However, insufficient extracted features, noise in the PPI data and the incompleteness of complex data make the classification model imprecise. Consequently, the classification model is not sufficient for guiding the detection of complexes. Therefore, we propose a new robust score function that combines the classification model with local structural information. Based on the score function, we provide a search method that works both forwards and backwards. The results from experiments on six benchmark PPI datasets and three protein complex datasets show that our approach can achieve better performance compared with the state-of-the-art supervised, semi-supervised and unsupervised methods for protein complex detection, occasionally significantly outperforming such methods.

  12. A Piecewise Local Partial Least Squares (PLS) Method for the Quantitative Analysis of Plutonium Nitrate Solutions

    DOE PAGES

    Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.

    2017-10-05

    Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.

  13. A Piecewise Local Partial Least Squares (PLS) Method for the Quantitative Analysis of Plutonium Nitrate Solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lascola, Robert; O'Rourke, Patrick E.; Kyser, Edward A.

    Here, we have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.

  14. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
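
    The block-DCT feature extraction described here can be sketched directly with SciPy's type-II DCT: the normalized palm region is cut into 8×8 blocks, each block is transformed, and the lowest-frequency coefficients are kept in JPEG-style zigzag order and concatenated into one feature vector. The block size, the number of retained coefficients, and the input array are illustrative assumptions.

      import numpy as np
      from scipy.fft import dctn

      def zigzag_indices(n):
          """(row, col) pairs of an n x n block in JPEG zigzag order."""
          return sorted(((r, c) for r in range(n) for c in range(n)),
                        key=lambda rc: (rc[0] + rc[1],
                                        rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

      def block_dct_features(roi, block=8, keep=10):
          """Concatenate the first `keep` zigzag DCT-II coefficients of each block."""
          h = (roi.shape[0] // block) * block
          w = (roi.shape[1] // block) * block
          zz = zigzag_indices(block)[:keep]
          feats = []
          for r in range(0, h, block):
              for c in range(0, w, block):
                  coeffs = dctn(roi[r:r + block, c:c + block], norm='ortho')
                  feats.extend(coeffs[i, j] for i, j in zz)
          return np.array(feats)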

  15. A tensor network approach to many-body localization

    NASA Astrophysics Data System (ADS)

    Yu, Xiongjie; Pekker, David; Clark, Bryan

    Understanding the many-body localized phase requires access to eigenstates in the middle of the many-body spectrum. While exact diagonalization is able to access these eigenstates, it is restricted to system sizes of about 22 spins. To overcome this limitation, we develop tensor network algorithms which increase the accessible system size by an order of magnitude. We describe both our new algorithms and the additional physics about MBL we can extract from them. For example, we demonstrate the power of these methods by verifying the breakdown of the Eigenstate Thermalization Hypothesis (ETH) in the many-body localized phase of the random-field Heisenberg model, show the saturation of entanglement in the MBL phase, and generate eigenstates that differ by local excitations. Work was supported by AFOSR FA9550-10-1-0524 and FA9550-12-1-0057, the Kaufmann foundation, and SciDAC FG02-12ER46875.

  16. Global/local stress analysis of composite structures. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.

    1989-01-01

    A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.

  17. Distributed Power Allocation for Wireless Sensor Network Localization: A Potential Game Approach.

    PubMed

    Ke, Mingxing; Li, Ding; Tian, Shiwei; Zhang, Yuli; Tong, Kaixiang; Xu, Yuhua

    2018-05-08

    The problem of distributed power allocation in wireless sensor network (WSN) localization systems is investigated in this paper, using the game theoretic approach. Existing research focuses on the minimization of the localization errors of individual agent nodes over all anchor nodes subject to power budgets. When the service area and the distribution of target nodes are considered, finding the optimal trade-off between localization accuracy and power consumption is a new critical task. To cope with this issue, we propose a power allocation game where each anchor node minimizes the squared position error bound (SPEB) of the service area penalized by its individual power. Meanwhile, it is proven that the power allocation game is an exact potential game which has at least one pure Nash equilibrium (NE). In addition, we prove the existence of an ϵ-equilibrium point, a refinement of the NE, and show that a better-response dynamic can reach it. Analytical and simulation results demonstrate that: (i) when prior distribution information is available, the proposed strategies have better localization accuracy than the uniform strategies; (ii) when prior distribution information is unknown, the proposed strategies outperform power management strategies based on the second-order cone program (SOCP) for particular agent nodes once the distribution of agent nodes has been estimated. The proposed strategies also provide an instructive trade-off between power consumption and localization accuracy.
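
    A better-response dynamic of the kind used above can be illustrated with a toy potential game. In the sketch below, the common cost term is a simple inverse-received-power surrogate for the service-area error bound rather than the SPEB derived in the paper, and the anchor layout, candidate power grid, and power price are illustrative.

```python
# Toy potential game for power allocation: each anchor repeatedly plays a better
# response to a cost made of a shared error surrogate plus a price on its own
# power. The surrogate and all parameters are illustrative, not the paper's SPEB.
import numpy as np

rng = np.random.default_rng(2)
anchors = rng.uniform(0, 100, (5, 2))          # anchor positions
grid = rng.uniform(0, 100, (50, 2))            # sample points of the service area
p_max, lam = 10.0, 0.05                        # per-anchor power budget, power price

def error_surrogate(powers):
    """Area-averaged error surrogate: more received power means lower error."""
    d2 = ((grid[:, None, :] - anchors[None, :, :]) ** 2).sum(-1) + 1.0
    info = (powers[None, :] / d2).sum(axis=1)  # Fisher-information-like quantity
    return (1.0 / info).mean()

def cost(i, powers):
    return error_surrogate(powers) + lam * powers[i]   # anchor i's penalized cost

powers = np.full(len(anchors), p_max / 2)
candidates = np.linspace(0.0, p_max, 41)
for sweep in range(20):                        # asynchronous better responses
    changed = False
    for i in range(len(anchors)):
        trial = powers.copy()
        best = cost(i, powers)
        for c in candidates:
            trial[i] = c
            if cost(i, trial) < best - 1e-9:
                best, powers = cost(i, trial), trial.copy()
                changed = True
    if not changed:                            # no anchor can improve: equilibrium
        break

print(np.round(powers, 2), round(error_surrogate(powers), 4))
```

    Because each anchor's cost is the shared surrogate plus a price on its own power, the surrogate plus the total power price is an exact potential for this toy game: every accepted better response strictly decreases it, so the sweep terminates at an approximate pure equilibrium.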

  18. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822

  19. Face Alignment via Regressing Local Binary Features.

    PubMed

    Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian

    2016-03-01

    This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves the state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features are computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform a quantitative evaluation of how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.

  20. A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles

    DTIC Science & Technology

    1994-05-02

    AD-A282 787. A Feedforward Control Approach to the Local Navigation Problem for Autonomous Vehicles. Alonzo Kelly, CMU-RI-TR-94-17, The Robotics... follow, or a direction to prefer, it cannot generate its own strategic goals. Therefore, it solves the local planning problem for autonomous vehicles. The... autonomous vehicles. It is intelligent because it uses range images that are generated from either a laser rangefinder or a stereo triangulation

  1. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    PubMed Central

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  2. Robust video copy detection approach based on local tangent space alignment

    NASA Astrophysics Data System (ADS)

    Nie, Xiushan; Qiao, Qianping

    2012-04-01

    We propose a robust content-based video copy detection approach based on local tangent space alignment (LTSA), an efficient dimensionality reduction algorithm. The approach is motivated by the fact that video content is becoming richer and its dimensionality higher, which leaves few natural tools for direct video analysis and understanding. The proposed approach reduces the dimensionality of video content using LTSA, and then generates video fingerprints in the low-dimensional space for video copy detection. Furthermore, a dynamic sliding window is applied to fingerprint matching. Experimental results show that the video copy detection approach has good robustness and discrimination.
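
    A minimal sketch of the pipeline, assuming per-frame descriptors are already available: LTSA reduces their dimensionality and the resulting low-dimensional trajectory is matched with a sliding window. The descriptor source, window length, and matching rule are illustrative.

```python
# Hedged sketch: LTSA-based fingerprints for video copy detection, followed by a
# simple sliding-window match. Frame descriptors are synthetic placeholders.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(3)
frames = rng.standard_normal((300, 512))          # high-dimensional frame features

# Local tangent space alignment (LTSA) as the dimensionality-reduction step
ltsa = LocallyLinearEmbedding(n_neighbors=12, n_components=8, method='ltsa')
fingerprint = ltsa.fit_transform(frames)          # one low-dimensional row per frame

def sliding_match(query, reference, win=30):
    """Return the reference offset whose window best matches the query clip."""
    best_off, best_dist = -1, np.inf
    for off in range(len(reference) - win + 1):
        dist = np.linalg.norm(query[:win] - reference[off:off + win])
        if dist < best_dist:
            best_off, best_dist = off, dist
    return best_off, best_dist

query = fingerprint[100:160] + 0.01 * rng.standard_normal((60, 8))
print(sliding_match(query, fingerprint))          # expected offset near 100
```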

  3. Individual tree crown delineation using localized contour tree method and airborne LiDAR data in coniferous forests

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Yu, Bailang; Wu, Qiusheng; Huang, Yan; Chen, Zuoqi; Wu, Jianping

    2016-10-01

    Individual tree crown delineation is of great importance for forest inventory and management. The increasing availability of high-resolution airborne light detection and ranging (LiDAR) data makes it possible to delineate the crown structure of individual trees and deduce their geometric properties with high accuracy. In this study, we developed an automated segmentation method that is able to fully utilize high-resolution LiDAR data for detecting, extracting, and characterizing individual tree crowns with a multitude of geometric and topological properties. The proposed approach captures topological structure of forest and quantifies topological relationships of tree crowns by using a graph theory-based localized contour tree method, and finally segments individual tree crowns by analogy of recognizing hills from a topographic map. This approach consists of five key technical components: (1) derivation of canopy height model from airborne LiDAR data; (2) generation of contours based on the canopy height model; (3) extraction of hierarchical structures of tree crowns using the localized contour tree method; (4) delineation of individual tree crowns by segmenting hierarchical crown structure; and (5) calculation of geometric and topological properties of individual trees. We applied our new method to the Medicine Bow National Forest in the southwest of Laramie, Wyoming and the HJ Andrews Experimental Forest in the central portion of the Cascade Range of Oregon, U.S. The results reveal that the overall accuracy of individual tree crown delineation for the two study areas achieved 94.21% and 75.07%, respectively. Our method holds great potential for segmenting individual tree crowns under various forest conditions. Furthermore, the geometric and topological attributes derived from our method provide comprehensive and essential information for forest management.

  4. International Students' Motivation and Learning Approach: A Comparison with Local Students

    ERIC Educational Resources Information Center

    Chue, Kah Loong; Nie, Youyan

    2016-01-01

    Psychological factors contribute to motivation and learning for international students as much as teaching strategies. 254 international students and 144 local students enrolled in a private education institute were surveyed regarding their perception of psychological needs support, their motivation and learning approach. The results from this…

  5. Hole localization in Fe2O3 from density functional theory and wave-function-based methods

    NASA Astrophysics Data System (ADS)

    Ansari, Narjes; Ulman, Kanchan; Camellone, Matteo Farnesi; Seriani, Nicola; Gebauer, Ralph; Piccinin, Simone

    2017-08-01

    Hematite (α -Fe2O3 ) is a promising photocatalyst material for water splitting, where photoinduced holes lead to the oxidation of water and the release of molecular oxygen. In this work, we investigate the properties of holes in hematite using density functional theory (DFT) calculations with hybrid functionals. We find that holes form small polarons and, depending on the fraction of exact exchange included in the PBE0 functional, the site where the holes localize changes from Fe to O. We find this result to be independent of the size and structure of the system: small Fe2O3 clusters with tetrahedral coordination, larger clusters with octahedral coordination, Fe2O3 (001) surfaces in contact with water, and bulk Fe2O3 display a very similar behavior in terms of hole localization as a function of the fraction of exact exchange. We then use wave-function-based methods such as coupled cluster with single and double excitations and Møller-Plesset second-order perturbation theory applied on a cluster model of Fe2O3 to shed light on which of the two solutions is correct. We find that these high-level quantum chemistry methods suggest holes in hematite are localized on oxygen atoms. We also explore the use of the DFT +U approach as a computationally convenient way to overcome the known limitations of generalized gradient approximation functionals and recover a gap in line with experiments and hole localization on oxygen in agreement with quantum chemistry methods.

  6. Local quantum thermal susceptibility

    NASA Astrophysics Data System (ADS)

    de Pasquale, Antonella; Rossini, Davide; Fazio, Rosario; Giovannetti, Vittorio

    2016-09-01

    Thermodynamics relies on the possibility to describe systems composed of a large number of constituents in terms of a few macroscopic variables. Its foundations are rooted in the paradigm of statistical mechanics, where thermal properties originate from averaging procedures which smoothen out local details. While undoubtedly successful, elegant and formally correct, this approach carries an operational problem, namely determining the precision with which such variables can be inferred when technical/practical limitations restrict our capabilities to local probing. Here we introduce the local quantum thermal susceptibility, a quantifier for the best achievable accuracy for temperature estimation via local measurements. Our method relies on basic concepts of quantum estimation theory, providing an operative strategy to address the local thermal response of arbitrary quantum systems at equilibrium. At low temperatures, it highlights the local distinguishability of the ground state from the excited sub-manifolds, thus providing a method to locate quantum phase transitions.
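
    The estimation-theoretic content of such a quantifier can be stated compactly. The following is a hedged sketch based on the standard quantum Cramér-Rao bound; the notation and normalization are illustrative and not necessarily those of the paper.

```latex
% Hedged sketch: temperature estimation from local measurements on a subsystem A
% of a global thermal state is limited by the quantum Fisher information of the
% reduced state (standard quantum estimation theory; normalization illustrative).
\[
  \rho_A(T) \;=\; \operatorname{Tr}_{\bar A}
      \frac{e^{-H/k_B T}}{\operatorname{Tr}\, e^{-H/k_B T}},
  \qquad
  \delta T \;\ge\; \frac{1}{\sqrt{\nu\,\mathcal{F}_A(T)}},
\]
% Here \mathcal{F}_A(T) is the quantum Fisher information of \rho_A(T) with
% respect to T, \nu is the number of measurement repetitions, and the local
% quantum thermal susceptibility quantifies this best achievable local accuracy.
```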

  7. Local blur analysis and phase error correction method for fringe projection profilometry systems.

    PubMed

    Rao, Li; Da, Feipeng

    2018-05-20

    We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.

  8. A Hidden Markov Model Approach for Simultaneously Estimating Local Ancestry and Admixture Time Using Next Generation Sequence Data in Samples of Arbitrary Ploidy

    PubMed Central

    Nielsen, Rasmus

    2017-01-01

    Admixture, the mixing of genomes from divergent populations, is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have a number of shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which is not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data, rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy, i.e., 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than has previously been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is least efficient in low-recombination regions. Finally, we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in ecological adaptation of admixed D. melanogaster populations. Our results illustrate the potential of local ancestry
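
    The core of such a model can be condensed into a small forward-algorithm routine. The fragment below is a hedged, haploid two-ancestry simplification operating directly on allele read counts; the binomial read model, the exponential ancestry-switch probability parameterized by the time since admixture, and all variable names are illustrative rather than the paper's exact formulation.

```python
# Hedged sketch: a haploid two-ancestry HMM evaluated directly on allele read
# counts with the forward algorithm. Binomial emissions and an exponential
# switch probability parameterized by time since admixture t are simplifications.
import numpy as np
from scipy.stats import binom

def forward_loglik(ref, alt, f1, f2, r, t, d, eps=1e-3):
    """ref/alt: per-site read counts; f1/f2: allele frequencies in the two source
    populations; r: inter-site distances (Morgans); t: generations since
    admixture; d: genome-wide proportion of ancestry 1."""
    p1 = np.clip(f1, eps, 1 - eps)
    p2 = np.clip(f2, eps, 1 - eps)
    em = np.vstack([binom.pmf(alt, ref + alt, p1),
                    binom.pmf(alt, ref + alt, p2)]).T        # (n_sites, 2)
    alpha = np.array([d, 1 - d]) * em[0]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for s in range(1, len(em)):
        switch = 1.0 - np.exp(-r[s] * t)       # chance of an ancestry breakpoint
        T = np.array([[1 - switch * (1 - d), switch * (1 - d)],
                      [switch * d, 1 - switch * d]])
        alpha = (alpha @ T) * em[s]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

rng = np.random.default_rng(9)
sites = 300
f1 = rng.uniform(0.05, 0.95, sites)
f2 = np.clip(f1 + rng.normal(0, 0.3, sites), 0.01, 0.99)
cov = rng.poisson(10, sites)
alt = rng.binomial(cov, f1)                    # reads drawn as if pure ancestry 1
ref = cov - alt
r = np.full(sites, 1e-4)

for t in (10, 100, 1000):                      # profile the likelihood over t
    print(t, round(forward_loglik(ref, alt, f1, f2, r, t, d=0.5), 2))
```

    Maximizing this likelihood over t, and decoding the hidden states with the corresponding forward-backward pass, is the sense in which the time since admixture and local ancestry can be estimated jointly.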

  9. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. As for feature extraction, we propose a new feature called Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and depth map. Then, we obtain two features: CLDP-Gabor and CLDP-Depth. The two features weighted by the corresponding coefficients are combined together in the decision level to compute the total classification distance. At last, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290

  10. System and method for object localization

    NASA Technical Reports Server (NTRS)

    Kelly, Alonzo J. (Inventor); Zhong, Yu (Inventor)

    2005-01-01

    A computer-assisted method for localizing a rack, including sensing an image of the rack, detecting line segments in the sensed image, recognizing a candidate arrangement of line segments in the sensed image indicative of a predetermined feature of the rack, generating a matrix of correspondence between the candidate arrangement of line segments and an expected position and orientation of the predetermined feature of the rack, and estimating a position and orientation of the rack based on the matrix of correspondence.

  11. Local motion compensation in image sequences degraded by atmospheric turbulence: a comparative analysis of optical flow vs. block matching methods

    NASA Astrophysics Data System (ADS)

    Huebner, Claudia S.

    2016-10-01

    As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
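
    The two families of motion estimators can be compared directly in a few lines. The sketch below pairs OpenCV's dense Farnebäck optical flow with a simple exhaustive-search block matcher on synthetic frames; Horn-Schunck, Lucas-Kanade, ARPS or Correlation based Search would be substituted in the same way, and all parameters are illustrative.

```python
# Hedged comparison sketch: dense Farnebäck optical flow (OpenCV) versus a simple
# exhaustive-search (ES) block matcher on synthetic frames with a known shift.
import numpy as np
import cv2

def block_matching(prev, curr, block=16, search=7):
    """One motion vector (dx, dy) per block by exhaustive SAD search."""
    h, w = prev.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(float)
            best, best_v = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    sad = np.abs(ref - curr[yy:yy + block, xx:xx + block]).sum()
                    if sad < best:
                        best, best_v = sad, (dx, dy)
            vectors[by, bx] = best_v
    return vectors

rng = np.random.default_rng(4)
prev = (rng.random((128, 128)) * 255).astype(np.uint8)
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))           # known synthetic motion

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)
bm = block_matching(prev, curr)
print(flow.shape, bm.shape, bm[4, 4])        # dense field vs per-block vectors
```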

  12. Real-time localization of mobile device by filtering method for sensor fusion

    NASA Astrophysics Data System (ADS)

    Fuse, Takashi; Nagara, Keita

    2017-06-01

    Most of the applications on mobile devices require self-localization of the device. GPS cannot be used in indoor environments, so the positions of mobile devices must be estimated autonomously using an IMU. Because the IMU has low accuracy, self-localization in indoor environments remains challenging. Self-localization methods using images have been developed, and their accuracy is increasing. This paper develops a self-localization method without GPS in indoor environments by simultaneously integrating sensors on mobile devices, such as the IMU and cameras. The proposed method consists of observation, forecasting, and filtering. The position and velocity of the mobile device are defined as a state vector. In the self-localization, observations correspond to observation data from the IMU and camera (observation vector), forecasting to the mobile device motion model (system model), and filtering to a tracking method based on inertial surveying, the coplanarity condition, and an inverse depth model (observation model). Positions of the mobile device being tracked are estimated by the system model (forecasting step), which is assumed to be a linear motion model. The estimated positions are then optimized with respect to the new observation data based on likelihood (filtering step). The optimization at the filtering step corresponds to estimation of the maximum a posteriori probability. A particle filter is utilized for the calculation through the forecasting and filtering steps. The proposed method is applied to data acquired by mobile devices in an indoor environment. Through the experiments, the high performance of the method is confirmed.
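
    The forecasting/filtering loop can be condensed into a short particle-filter sketch. The 2-D state, the IMU-like acceleration noise, and the Gaussian camera-position likelihood below are illustrative simplifications of the system and observation models described above.

```python
# Minimal particle-filter sketch: an IMU-like motion model forecasts particles,
# a camera-like position observation reweights them, and systematic resampling
# keeps the sample representative. State and noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps, dt = 500, 50, 0.1
particles = np.zeros((n_particles, 4))                 # state: x, y, vx, vy
particles[:, :2] = rng.normal(0.0, 0.1, (n_particles, 2))
particles[:, 2:] = rng.normal(0.0, 1.0, (n_particles, 2))
weights = np.full(n_particles, 1.0 / n_particles)

true_state = np.array([0.0, 0.0, 1.0, 0.5])
for k in range(n_steps):
    true_state[:2] += true_state[2:] * dt
    z = true_state[:2] + rng.normal(0, 0.05, 2)        # camera-like observation

    # forecasting step: linear motion model driven by IMU-like acceleration noise
    accel = rng.normal(0, 0.2, (n_particles, 2))
    particles[:, :2] += particles[:, 2:] * dt
    particles[:, 2:] += accel * dt

    # filtering step: reweight by the observation likelihood (Gaussian model)
    resid = particles[:, :2] - z
    weights *= np.exp(-0.5 * (resid ** 2).sum(axis=1) / 0.05 ** 2) + 1e-300
    weights /= weights.sum()

    # systematic resampling
    cum = np.cumsum(weights)
    cum[-1] = 1.0
    idx = np.searchsorted(cum, (rng.random() + np.arange(n_particles)) / n_particles)
    particles = particles[idx]
    weights = np.full(n_particles, 1.0 / n_particles)

print("estimate:", particles[:, :2].mean(axis=0), "truth:", true_state[:2])
```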

  13. Measuring walking and cycling using the PABS (pedestrian and bicycling survey) approach : a low-cost survey method for local communities [research brief].

    DOT National Transportation Integrated Search

    2010-10-01

    Many communities want to promote walking and cycling. However, few know how much nonmotorized travel already occurs in their communities. This research project developed the Pedestrian and Bicycling Survey (PABS), a method that local governments can ...

  14. First-principles modeling of localized d states with the GW@LDA+U approach

    NASA Astrophysics Data System (ADS)

    Jiang, Hong; Gomez-Abal, Ricardo I.; Rinke, Patrick; Scheffler, Matthias

    2010-07-01

    First-principles modeling of systems with localized d states is currently a great challenge in condensed-matter physics. Density-functional theory in the standard local-density approximation (LDA) proves to be problematic. This can be partly overcome by including local Hubbard U corrections (LDA+U) but itinerant states are still treated on the LDA level. Many-body perturbation theory in the GW approach offers both a quasiparticle perspective (appropriate for itinerant states) and an exact treatment of exchange (appropriate for localized states), and is therefore promising for these systems. LDA+U has previously been viewed as an approximate GW scheme. We present here a derivation that is simpler and more general, starting from the static Coulomb-hole and screened exchange approximation to the GW self-energy. Following our previous work for f -electron systems [H. Jiang, R. I. Gomez-Abal, P. Rinke, and M. Scheffler, Phys. Rev. Lett. 102, 126403 (2009)10.1103/PhysRevLett.102.126403] we conduct a systematic investigation of the GW method based on LDA+U(GW@LDA+U) , as implemented in our recently developed all-electron GW code FHI-gap (Green’s function with augmented plane waves) for a series of prototypical d -electron systems: (1) ScN with empty d states, (2) ZnS with semicore d states, and (3) late transition-metal oxides (MnO, FeO, CoO, and NiO) with partially occupied d states. We show that for ZnS and ScN, the GW band gaps only weakly depend on U but for the other transition-metal oxides the dependence on U is as strong as in LDA+U . These different trends can be understood in terms of changes in the hybridization and screening. Our work demonstrates that GW@LDA+U with “physical” values of U provides a balanced and accurate description of both localized and itinerant states.

  15. A Novel Method of Localization for Moving Objects with an Alternating Magnetic Field

    PubMed Central

    Gao, Xiang; Yan, Shenggang; Li, Bin

    2017-01-01

    Magnetic detection technology has wide applications in the fields of geological exploration, biomedical treatment, wreck removal and localization of unexploded ordnance. A large number of methods have been developed to locate targets with static magnetic fields; however, the relation between the localization of moving objects with alternating magnetic fields and localization with a static magnetic field is rarely studied. A novel method of target localization based on coherent demodulation is proposed in this paper. The problem of localizing moving objects with an alternating magnetic field is transformed into a localization problem with a static magnetic field. The Levenberg-Marquardt (L-M) algorithm is applied to calculate the position of the target from magnetic field data measured by a single three-component magnetic sensor. Theoretical simulation and experimental results demonstrate the effectiveness of the proposed method. PMID:28430153
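
    The final fitting step can be sketched once coherent demodulation is assumed to have produced equivalent static dipole amplitudes at the sensor. In the fragment below, the point-dipole model, the straight-line target track, and the single fixed three-component sensor are illustrative assumptions.

```python
# Hedged sketch: fit the trajectory and moment of a dipole target to demodulated
# three-component field data with Levenberg-Marquardt (scipy least_squares).
import numpy as np
from scipy.optimize import least_squares

def dipole_field(r, m):
    """Field of a point dipole at displacement r (constants folded into m)."""
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    return 3.0 * r * (r * m).sum(-1, keepdims=True) / rn ** 5 - m / rn ** 3

rng = np.random.default_rng(6)
n_t, dt = 60, 0.1
t = np.arange(n_t) * dt
true_p0, true_v = np.array([2.0, -1.0, 3.0]), np.array([-0.8, 0.5, 0.0])
true_m = np.array([0.5, -0.2, 1.0])
src = true_p0 + t[:, None] * true_v                      # straight-line target track
meas = dipole_field(-src, true_m)                        # sensor fixed at the origin
meas += 1e-5 * rng.standard_normal(meas.shape)           # demodulated field data

def residuals(theta):
    p0, v, m = theta[:3], theta[3:6], theta[6:]
    pred = dipole_field(-(p0 + t[:, None] * v), m)
    return (pred - meas).ravel()

guess = np.array([1.0, 0.0, 2.0, -0.5, 0.0, 0.0, 0.3, 0.0, 0.8])
fit = least_squares(residuals, guess, method='lm')       # Levenberg-Marquardt
print(np.round(fit.x[:3], 2), np.round(fit.x[3:6], 2))   # recovered p0 and v
```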

  16. Omni-Directional Scanning Localization Method of a Mobile Robot Based on Ultrasonic Sensors.

    PubMed

    Mu, Wei-Yi; Zhang, Guang-Peng; Huang, Yu-Mei; Yang, Xin-Gang; Liu, Hong-Yan; Yan, Wen

    2016-12-20

    Improved ranging accuracy is obtained through the development of a novel ultrasonic sensor ranging algorithm which, unlike the conventional ranging algorithm, considers the divergence angle and the incidence angle of the ultrasonic sensor simultaneously. An ultrasonic sensor scanning method is developed based on this algorithm for the recognition of an inclined plate and for obtaining the localization of the ultrasonic sensor relative to the inclined plate reference frame. The ultrasonic sensor scanning method is then leveraged for the omni-directional localization of a mobile robot: the ultrasonic sensors are installed on the mobile robot and follow its spin, the inclined plate is recognized, and the position and posture of the robot are acquired with respect to the coordinate system of the inclined plate, realizing the localization of the robot. Finally, the localization method is implemented in an omni-directional scanning localization experiment with the independently researched and developed mobile robot. Localization accuracies of up to ±3.33 mm for the front, ±6.21 mm for the lateral, and ±0.20° for the posture are obtained, verifying the correctness and effectiveness of the proposed localization method.

  17. The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems

    NASA Technical Reports Server (NTRS)

    Cockburn, Bernardo; Shu, Chi-Wang

    1997-01-01

    In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods are an extension of the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries, for convection dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L(sup 2)-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.
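
    The step behind the method's name is the introduction of an auxiliary variable that removes the second-order diffusion term and can be eliminated locally, element by element. A scalar one-dimensional sketch of that reformulation is given below; the notation is illustrative, and both equations are then discretized with discontinuous Galerkin in space and Runge-Kutta in time using suitably chosen numerical fluxes at cell interfaces.

```latex
% A scalar 1-D sketch of the first-order-system rewrite behind the LDG method
% (notation illustrative, not necessarily the paper's exact choice of variables).
\[
  u_t + f(u)_x = \big(a(u)\,u_x\big)_x, \quad a(u)\ge 0
  \;\;\Longrightarrow\;\;
  \begin{cases}
    u_t + \big(f(u) - \sqrt{a(u)}\,q\big)_x = 0,\\[2pt]
    q - g(u)_x = 0, \qquad g'(u) = \sqrt{a(u)},
  \end{cases}
\]
```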

  18. A Bayesian cluster analysis method for single-molecule localization microscopy data.

    PubMed

    Griffié, Juliette; Shannon, Michael; Bromley, Claire L; Boelen, Lies; Burn, Garth L; Williamson, David J; Heard, Nicholas A; Cope, Andrew P; Owen, Dylan M; Rubin-Delanchy, Patrick

    2016-12-01

    Cell function is regulated by the spatiotemporal organization of the signaling machinery, and a key facet of this is molecular clustering. Here, we present a protocol for the analysis of clustering in data generated by 2D single-molecule localization microscopy (SMLM)-for example, photoactivated localization microscopy (PALM) or stochastic optical reconstruction microscopy (STORM). Three features of such data can cause standard cluster analysis approaches to be ineffective: (i) the data take the form of a list of points rather than a pixel array; (ii) there is a non-negligible unclustered background density of points that must be accounted for; and (iii) each localization has an associated uncertainty in regard to its position. These issues are overcome using a Bayesian, model-based approach. Many possible cluster configurations are proposed and scored against a generative model, which assumes Gaussian clusters overlaid on a completely spatially random (CSR) background, before every point is scrambled by its localization precision. We present the process of generating simulated and experimental data that are suitable to our algorithm, the analysis itself, and the extraction and interpretation of key cluster descriptors such as the number of clusters, cluster radii and the number of localizations per cluster. Variations in these descriptors can be interpreted as arising from changes in the organization of the cellular nanoarchitecture. The protocol requires no specific programming ability, and the processing time for one data set, typically containing 30 regions of interest, is ∼18 h; user input takes ∼1 h.

  19. Developmental differences in auditory detection and localization of approaching vehicles.

    PubMed

    Barton, Benjamin K; Lew, Roger; Kovesdi, Casey; Cottrell, Nicholas D; Ulrich, Thomas

    2013-04-01

    Pedestrian safety is a significant problem in the United States, with thousands being injured each year. Multiple risk factors exist, but one poorly understood factor is pedestrians' ability to attend to vehicles using auditory cues. Auditory information in the pedestrian setting is increasing in importance with the growing number of quieter hybrid and all-electric vehicles on America's roadways that do not emit sound cues pedestrians expect from an approaching vehicle. Our study explored developmental differences in pedestrians' detection and localization of approaching vehicles. Fifty children ages 6-9 years, and 35 adults participated. Participants' performance varied significantly by age, and with increasing speed and direction of the vehicle's approach. Results underscore the importance of understanding children's and adults' use of auditory cues for pedestrian safety and highlight the need for further research. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Local adaptive capacity as an alternative approach in dealing with hydrometeorological risk at Depok Peri-Urban City

    NASA Astrophysics Data System (ADS)

    Fitrinitia, I. S.; Junadi, P.; Sutanto, E.; Nugroho, D. A.; Zubair, A.; Suyanti, E.

    2018-03-01

    Located in a tropical area, cities in Indonesia are vulnerable to hydrometeorological risks such as floods and landslides and are thus prone to the effects of climate change. Moreover, peri-urban cities carry a double burden: the spillover from the main city and a lack of urban facilities for coping with disasters. From another perspective, a city has many alternative resources for recovery, which create urban resiliency. Depok City is the case study of this research, given its development under the impact of Jakarta's growth. This research aims to capture how local city dwellers anticipate and adapt to floods and landslides with their own version of mitigation. Through mixed methods and spatial analysis using GIS techniques, it compares two approaches: the normative approach and the alternative approach practiced by the city dwellers. Spatial analysis is used to obtain a broad picture of Depok and its environmental change. The city dwellers are also divided into four local community groups, representative of settlement characteristics and levels of risk. The results identify the settlement types and characteristics that influence local adaptive capacity, from the establishment of infrastructure to health fulfillment and social livelihood, through different kinds of methods.

  1. Brain networks, structural realism, and local approaches to the scientific realism debate.

    PubMed

    Yan, Karen; Hricko, Jonathon

    2017-08-01

    We examine recent work in cognitive neuroscience that investigates brain networks. Brain networks are characterized by the ways in which brain regions are functionally and anatomically connected to one another. Cognitive neuroscientists use various noninvasive techniques (e.g., fMRI) to investigate these networks. They represent them formally as graphs. And they use various graph theoretic techniques to analyze them further. We distinguish between knowledge of the graph theoretic structure of such networks (structural knowledge) and knowledge of what instantiates that structure (nonstructural knowledge). And we argue that this work provides structural knowledge of brain networks. We explore the significance of this conclusion for the scientific realism debate. We argue that our conclusion should not be understood as an instance of a global structural realist claim regarding the structure of the unobservable part of the world, but instead, as a local structural realist attitude towards brain networks in particular. And we argue that various local approaches to the realism debate, i.e., approaches that restrict realist commitments to particular theories and/or entities, are problematic insofar as they don't allow for the possibility of such a local structural realist attitude. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Local quantum thermal susceptibility

    PubMed Central

    De Pasquale, Antonella; Rossini, Davide; Fazio, Rosario; Giovannetti, Vittorio

    2016-01-01

    Thermodynamics relies on the possibility to describe systems composed of a large number of constituents in terms of a few macroscopic variables. Its foundations are rooted in the paradigm of statistical mechanics, where thermal properties originate from averaging procedures which smoothen out local details. While undoubtedly successful, elegant and formally correct, this approach carries an operational problem, namely determining the precision with which such variables can be inferred when technical/practical limitations restrict our capabilities to local probing. Here we introduce the local quantum thermal susceptibility, a quantifier for the best achievable accuracy for temperature estimation via local measurements. Our method relies on basic concepts of quantum estimation theory, providing an operative strategy to address the local thermal response of arbitrary quantum systems at equilibrium. At low temperatures, it highlights the local distinguishability of the ground state from the excited sub-manifolds, thus providing a method to locate quantum phase transitions. PMID:27681458

  3. Electron correlation at the MgF2(110) surface: a comparison of incremental and local correlation methods.

    PubMed

    Hammerschmidt, Lukas; Maschio, Lorenzo; Müller, Carsten; Paulus, Beate

    2015-01-13

    We have applied the Method of Increments and the periodic Local-MP2 approach to the study of the (110) surface of magnesium fluoride, a system of significant interest in heterogeneous catalysis. After careful assessment of the approximations inherent in both methods, the two schemes, though conceptually different, are shown to yield nearly identical results. This remains true even when analyzed in fine detail through partition of the individual contribution to the total energy. This kind of partitioning also provides thorough insight into the electron correlation effects underlying the surface formation process, which are discussed in detail.

  4. Calculation of wave-functions with frozen orbitals in mixed quantum mechanics/molecular mechanics methods. II. Application of the local basis equation.

    PubMed

    Ferenczy, György G

    2013-04-05

    The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable to derive local basis nonorthogonal orbitals that minimize the energy of the system and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate to be used in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent with those of a related approach [G. G. Ferenczy, previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, the deformation of the wave-function near the boundary is observed without bond charges and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.

  5. Adaptively Reevaluated Bayesian Localization (ARBL). A Novel Technique for Radiological Source Localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.

    2015-01-19

    Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement over earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
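
    The grid-likelihood core of this kind of Bayesian localization can be sketched briefly. The fragment below compares a measured count-rate time series against predicted rates from a grid of pre-computed test-source positions under a Poisson likelihood; the inverse-square detector response, known source strength, and flight path are illustrative, and the limited-FOV treatment and likelihood-ratio reevaluation of the paper are not included.

```python
# Hedged sketch: Bayesian grid localization of a point source from an aerial
# count-rate time series via a Poisson likelihood. Response model, background,
# source strength and flight path are illustrative.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(7)
path = np.column_stack([np.linspace(0, 200, 80),            # aerial platform track
                        np.full(80, 50.0), np.full(80, 30.0)])
background = 20.0                                            # counts per sample

def predicted_counts(src_xy, strength):
    d2 = ((path[:, :2] - src_xy) ** 2).sum(axis=1) + path[:, 2] ** 2
    return background + strength / d2                        # isotropic 1/r^2 model

true_counts = predicted_counts(np.array([120.0, 80.0]), 5.0e5)
measured = rng.poisson(true_counts)                          # simulated flyover data

# Pre-calculated test sources on a ground grid; log-likelihood of each hypothesis
xs, ys = np.meshgrid(np.linspace(0, 200, 81), np.linspace(0, 160, 65))
loglik = np.array([[poisson.logpmf(measured,
                                   predicted_counts(np.array([x, y]), 5.0e5)).sum()
                    for x in xs[0]] for y in ys[:, 0]])
posterior = np.exp(loglik - loglik.max())
posterior /= posterior.sum()
best = np.unravel_index(posterior.argmax(), posterior.shape)
print("MAP source estimate:", xs[0][best[1]], ys[:, 0][best[0]])
```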

  6. Adaptive Local Realignment of Protein Sequences.

    PubMed

    DeBlasio, Dan; Kececioglu, John

    2018-06-11

    While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.

  7. A transfer matrix approach to vibration localization in mistuned blade assemblies

    NASA Technical Reports Server (NTRS)

    Ottarson, Gisli; Pierre, Chritophe

    1993-01-01

    A study of mode localization in mistuned bladed disks is performed using transfer matrices. The transfer matrix approach yields the free response of a general, mono-coupled, perfectly cyclic assembly in closed form. A mistuned structure is represented by random transfer matrices, and the expansion of these matrices in terms of the small mistuning parameter leads to the definition of a measure of sensitivity to mistuning. An approximation of the localization factor, the spatially averaged rate of exponential attenuation per blade-disk sector, is obtained through perturbation techniques in the limits of high and low sensitivity. The methodology is applied to a common model of a bladed disk and the results verified by Monte Carlo simulations. The easily calculated sensitivity measure may prove to be a valuable design tool due to its system-independent quantification of mistuning effects such as mode localization.

  8. On dynamical systems approaches and methods in f ( R ) cosmology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alho, Artur; Carloni, Sante; Uggla, Claes, E-mail: aalho@math.ist.utl.pt, E-mail: sante.carloni@tecnico.ulisboa.pt, E-mail: claes.uggla@kau.se

    We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f ( R )-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f ( R ) = R + α R^2, α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f ( R ) cosmology and discuss their advantages and disadvantages.

  9. A Novel Locally Linear KNN Method With Applications to Visual Recognition.

    PubMed

    Liu, Qingfeng; Liu, Chengjun

    2017-09-01

    A locally linear K Nearest Neighbor (LLK) method is presented in this paper with applications to robust visual recognition. Specifically, the concept of an ideal representation is first presented, which improves upon the traditional sparse representation in many ways. The objective function based on a host of criteria for sparsity, locality, and reconstruction is then optimized to derive a novel representation, which is an approximation to the ideal representation. The novel representation is further processed by two classifiers, namely, an LLK-based classifier and a locally linear nearest mean-based classifier, for visual recognition. The proposed classifiers are shown to connect to the Bayes decision rule for minimum error. Additional new theoretical analysis is presented, such as the nonnegative constraint, the group regularization, and the computational efficiency of the proposed LLK method. New methods such as a shifted power transformation for improving reliability, a coefficients' truncating method for enhancing generalization, and an improved marginal Fisher analysis method for feature extraction are proposed to further improve visual recognition performance. Extensive experiments are implemented to evaluate the proposed LLK method for robust visual recognition. In particular, eight representative data sets are applied for assessing the performance of the LLK method for various visual recognition applications, such as action recognition, scene recognition, object recognition, and face recognition.
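
    A stripped-down variant of the locally linear reconstruction idea can be written compactly. The sketch below reconstructs each test sample from its K nearest training neighbors under a nonnegativity constraint and assigns the class whose neighbors explain it best; this stands in for the paper's full objective (sparsity, locality, and reconstruction terms) and its two classifiers.

```python
# Hedged approximation of a locally linear KNN classifier: nonnegative local
# reconstruction followed by class-wise residual comparison. Simplified stand-in
# for the paper's LLK representation and classifiers.
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

def llk_predict(X_train, y_train, x, k=15):
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]                          # K nearest neighbors
    coef, _ = nnls(X_train[nn].T, x)                # nonnegative reconstruction
    best_cls, best_res = None, np.inf
    for cls in np.unique(y_train[nn]):
        mask = y_train[nn] == cls
        recon = X_train[nn][mask].T @ coef[mask]    # class-wise partial reconstruction
        res = np.linalg.norm(x - recon)
        if res < best_res:
            best_cls, best_res = cls, res
    return best_cls

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pred = np.array([llk_predict(X_tr, y_tr, x) for x in X_te])
print("accuracy:", (pred == y_te).mean())
```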

  10. Scanning tunneling microscopy image simulation of the rutile (110) TiO2 surface with hybrid functionals and the localized basis set approach

    NASA Astrophysics Data System (ADS)

    Di Valentin, Cristiana

    2007-10-01

    In this work we present a simplified procedure to use hybrid functionals and localized atomic basis sets to simulate scanning tunneling microscopy (STM) images of stoichiometric, reduced and hydroxylated rutile (110) TiO2 surface. For the two defective systems it is necessary to introduce some exact Hartree-Fock exchange in the exchange functional in order to correctly describe the details of the electronic structure. Results are compared to the standard density functional theory and planewave basis set approach. Both methods have advantages and drawbacks that are analyzed in detail. In particular, for the localized basis set approach, it is necessary to introduce a number of Gaussian function in the vacuum region above the surface in order to correctly describe the exponential decay of the integrated local density of states from the surface. In the planewave periodic approach, a thick vacuum region is required to achieve correct results. Simulated STM images are obtained for both the reduced and hydroxylated surface which nicely compare with experimental findings. A direct comparison of the two defects as displayed in the simulated STM images indicates that the OH groups should appear brighter than oxygen vacancies in perfect agreement with the experimental STM data.

  11. On the equivalence between traction- and stress-based approaches for the modeling of localized failure in solids

    NASA Astrophysics Data System (ADS)

    Wu, Jian-Ying; Cervera, Miguel

    2015-09-01

    This work investigates systematically traction- and stress-based approaches for the modeling of strong and regularized discontinuities induced by localized failure in solids. Two complementary methodologies, i.e., discontinuities localized in an elastic solid and strain localization of an inelastic softening solid, are addressed. In the former it is assumed a priori that the discontinuity forms with a continuous stress field and along the known orientation. A traction-based failure criterion is introduced to characterize the discontinuity and the orientation is determined from Mohr's maximization postulate. If the displacement jumps are retained as independent variables, the strong/regularized discontinuity approaches follow, requiring constitutive models for both the bulk and discontinuity. Elimination of the displacement jumps at the material point level results in the embedded/smeared discontinuity approaches in which an overall inelastic constitutive model fulfilling the static constraint suffices. The second methodology is then adopted to check whether the assumed strain localization can occur and identify its consequences on the resulting approaches. The kinematic constraint guaranteeing stress boundedness and continuity upon strain localization is established for general inelastic softening solids. Application to a unified stress-based elastoplastic damage model naturally yields all the ingredients of a localized model for the discontinuity (band), justifying the first methodology. Two dual but not necessarily equivalent approaches, i.e., the traction-based elastoplastic damage model and the stress-based projected discontinuity model, are identified. The former is equivalent to the embedded and smeared discontinuity approaches, whereas in the later the discontinuity orientation and associated failure criterion are determined consistently from the kinematic constraint rather than given a priori. The bi-directional connections and equivalence conditions

  12. A new approach for beam hardening correction based on the local spectrum distributions

    NASA Astrophysics Data System (ADS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-09-01

    Energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for arbitrary thickness intervals of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. Then, the linear attenuation coefficients with respect to the mean energies of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since the correction function converts the polychromatic data into monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed on phantoms involving less than two materials, but the correction function has been extended for use in phantoms constructed with more than two materials. The relative mean energy difference in the LSD estimations based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The amount of cupping artifact in the proposed reconstruction method is effectively reduced, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
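
    The idea of a correction function that maps polychromatic data onto monochromatic equivalents can be illustrated numerically. The sketch below uses the classic single-material linearization form of beam-hardening correction with a toy two-bin spectrum; it is a simpler stand-in for the LSD-based correction function of the paper.

```python
# Hedged numerical sketch: polychromatic log-attenuation is mapped onto its
# monochromatic equivalent using calibration through known thicknesses of a
# single material (classic linearization; toy two-bin spectrum).
import numpy as np

weights = np.array([0.6, 0.4])          # low- and high-energy photon fractions
mu = np.array([0.40, 0.20])             # material attenuation in each bin (1/cm)
mu_ref = 0.25                           # reference (monochromatic) coefficient

thick = np.linspace(0.0, 20.0, 50)      # calibration thicknesses (cm)
I = (weights[None, :] * np.exp(-mu[None, :] * thick[:, None])).sum(axis=1)
poly_atten = -np.log(I)                 # measured polychromatic log-attenuation
mono_atten = mu_ref * thick             # ideal monochromatic log-attenuation

# Correction function: low-order polynomial fitted on the calibration pairs
correct = np.poly1d(np.polyfit(poly_atten, mono_atten, deg=3))

test = 12.3                             # cm of material in an unknown sample
raw = -np.log((weights * np.exp(-mu * test)).sum())
print("raw:", round(raw, 3), "corrected:", round(correct(raw), 3),
      "ideal:", round(mu_ref * test, 3))
```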

  13. Anchor-Free Localization Method for Mobile Targets in Coal Mine Wireless Sensor Networks

    PubMed Central

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (Multi-dimensional Scaling: MDS) and rank sequence. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is imported to estimate the reference nodes’ location. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and the experiments in the mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines. PMID:22574048
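
    The non-metric MDS step can be sketched in a few lines. In the fragment below, noisy pairwise dissimilarities between reference nodes stand in for the hop-count or signal-strength information available underground, and non-metric MDS recovers a relative node map from their rank order only; the subsequent rank-sequence localization of mobile targets is not shown.

```python
# Hedged sketch: non-metric MDS turns noisy pairwise dissimilarities between
# reference nodes into relative coordinates (up to rotation and scale).
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(8)
true_pos = rng.uniform(0, 100, (15, 2))                       # reference node layout
d = np.linalg.norm(true_pos[:, None] - true_pos[None, :], axis=-1)

noisy = d * (1 + 0.2 * rng.standard_normal(d.shape))          # corrupted measurements
noisy = (noisy + noisy.T) / 2                                 # keep the matrix symmetric
np.fill_diagonal(noisy, 0.0)

mds = MDS(n_components=2, metric=False, dissimilarity='precomputed', random_state=0)
rel_coords = mds.fit_transform(noisy)                         # relative map of nodes
print(rel_coords.shape)
```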

  14. Anchor-free localization method for mobile targets in coal mine wireless sensor networks.

    PubMed

    Pei, Zhongmin; Deng, Zhidong; Xu, Shuo; Xu, Xiao

    2009-01-01

    Severe natural conditions and complex terrain make it difficult to apply precise localization in underground mines. In this paper, an anchor-free localization method for mobile targets is proposed based on non-metric multi-dimensional scaling (Multi-dimensional Scaling: MDS) and rank sequence. Firstly, a coal mine wireless sensor network is constructed in underground mines based on the ZigBee technology. Then a non-metric MDS algorithm is imported to estimate the reference nodes' location. Finally, an improved sequence-based localization algorithm is presented to complete precise localization for mobile targets. The proposed method is tested through simulations with 100 nodes, outdoor experiments with 15 ZigBee physical nodes, and the experiments in the mine gas explosion laboratory with 12 ZigBee nodes. Experimental results show that our method has better localization accuracy and is more robust in underground mines.

  15. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  16. Semi-classical approach to transitionless quantum driving: Explicitness and Locality

    NASA Astrophysics Data System (ADS)

    Loewe, Benjamin; Hipolito, Rafael; Goldbart, Paul M.

    Berry has shown that, via a reverse-engineering strategy, non-adiabatic transitions in time-dependent quantum systems can be stifled through the introduction of a specific auxiliary Hamiltonian. This Hamiltonian is, however, expressed as a formal sum of outer products of the original instantaneous eigenstates and their time-derivatives. Generically, how to create such an operator in the laboratory is thus not evident. Furthermore, the operator may be non-local. By following a semi-classical approach, we obtain a recipe that yields the auxiliary Hamiltonian explicitly in terms of the fundamental operators of the system (e.g., position and momentum). By using this formalism, we are able to ascertain criteria for the locality of the auxiliary Hamiltonian, and also to determine its exact form in certain special cases.
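
    For orientation, the auxiliary (counterdiabatic) term produced by Berry's construction is usually quoted in terms of the instantaneous eigenstates |n(t)>; the expression below is the standard textbook form, reproduced here as background rather than taken from this abstract:

    H_{1}(t) \;=\; i\hbar \sum_{n} \Bigl( \lvert \partial_t n \rangle\langle n \rvert
                   \;-\; \langle n \vert \partial_t n \rangle \, \lvert n \rangle\langle n \rvert \Bigr),
    \qquad H(t) \;=\; H_{0}(t) + H_{1}(t).

    The second term only removes the diagonal (gauge) part; it is the sum over outer products of eigenstates and their time-derivatives that, once re-expressed in position and momentum operators, generally makes H_1 non-local.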

  17. A new method of automatic landmark tagging for shape model construction via local curvature scale

    NASA Astrophysics Data System (ADS)

    Rueda, Sylvia; Udupa, Jayaram K.; Bai, Li

    2008-03-01

    Segmentation of organs in medical images is a difficult task that very often requires the use of model-based approaches. To build the model, we need an annotated training set of shape examples with correspondences indicated among shapes. Manual positioning of landmarks is a tedious, time-consuming, and error-prone task, and almost impossible in 3D space. To overcome some of these drawbacks, we devised an automatic method based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the largest homogeneous curvature region connected to b is estimated, as well as the orientation of the tangent at b. With this shape description method, we can automatically locate mathematical landmarks selected at different levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of landmarks on the mean shape is done automatically using the c-scale method. Then, these landmarks are propagated to each shape in the training set, thereby defining the correspondences among the shapes. Altogether 12 strategies are described along these lines. The methods are evaluated on 40 MRI foot data sets, the object of interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any dimensionality, although we have focused in this paper on 2D shapes.

  18. The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment

    PubMed Central

    Margaria, Davide; Falletti, Emanuela

    2016-01-01

    A novel cooperative integrity monitoring concept, called “local integrity”, suitable for automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called “Local Protection Levels”, taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting the effectiveness and performance of the proposed approach. PMID:26821028

  19. Mixed Methods Approaches in Family Science Research

    ERIC Educational Resources Information Center

    Plano Clark, Vicki L.; Huddleston-Casas, Catherine A.; Churchill, Susan L.; Green, Denise O'Neil; Garrett, Amanda L.

    2008-01-01

    The complex phenomena of interest to family scientists require the use of quantitative and qualitative approaches. Researchers across the social sciences are now turning to mixed methods designs that combine these two approaches. Mixed methods research has great promise for addressing family science topics, but only if researchers understand the…

  20. Performance of local orbital basis sets in the self-consistent Sternheimer method for dielectric matrices of extended systems

    NASA Astrophysics Data System (ADS)

    Hübener, H.; Pérez-Osorio, M. A.; Ordejón, P.; Giustino, F.

    2012-09-01

    We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of [F. Giustino et al., Phys. Rev. B 81, 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices of silicon and diamond calculated using our method fall within 1% of reference plane-wave calculations, demonstrating that this method is promising. We find that polarization orbitals are critical for achieving good agreement with plane-wave calculations, and that only a few additional ζ's are required for obtaining converged results, provided the split norm is properly optimized. Our present work establishes the validity of local orbital basis sets and the self-consistent Sternheimer approach for the calculation of dielectric matrices in extended systems, and prepares the ground for future studies of electronic excitations using these methods.

  1. Dissecting local circuits in vivo: integrated optogenetic and electrophysiology approaches for exploring inhibitory regulation of cortical activity.

    PubMed

    Cardin, Jessica A

    2012-01-01

    Local cortical circuit activity in vivo comprises a complex and flexible series of interactions between excitatory and inhibitory neurons. Our understanding of the functional interactions between these different neural populations has been limited by the difficulty of identifying and selectively manipulating the diverse and sparsely represented inhibitory interneuron classes in the intact brain. The integration of recently developed optical tools with traditional electrophysiological techniques provides a powerful window into the role of inhibition in regulating the activity of excitatory neurons. In particular, optogenetic targeting of specific cell classes reveals the distinct impacts of local inhibitory populations on other neurons in the surrounding local network. In addition to providing the ability to activate or suppress spiking in target cells, optogenetic activation identifies extracellularly recorded neurons by class, even when naturally occurring spike rates are extremely low. However, there are several important limitations on the use of these tools and the interpretation of resulting data. The purpose of this article is to outline the uses and limitations of optogenetic tools, along with current methods for achieving cell type-specific expression, and to highlight the advantages of an experimental approach combining optogenetics and electrophysiology to explore the role of inhibition in active networks. To illustrate the efficacy of these combined approaches, I present data comparing targeted manipulations of cortical fast-spiking, parvalbumin-expressing and low threshold-spiking, somatostatin-expressing interneurons in vivo. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. An off-lattice, self-learning kinetic Monte Carlo method using local environments.

    PubMed

    Konwar, Dhrubajit; Bhute, Vijesh J; Chatterjee, Abhijit

    2011-11-07

    We present a method called local environment kinetic Monte Carlo (LE-KMC) for efficiently performing off-lattice, self-learning kinetic Monte Carlo (KMC) simulations of activated processes in material systems. Like other off-lattice KMC schemes, new atomic processes can be found on-the-fly in LE-KMC. However, a unique feature of LE-KMC is that, as long as the assumption that all processes and rates depend only on the local environment is satisfied, LE-KMC provides a general algorithm for (i) unambiguously describing a process in terms of its local atomic environments, (ii) storing new processes and environments in a catalog for later use with standard KMC, and (iii) updating the system based on the local information once a process has been selected for a KMC move. Search, classification, storage and retrieval steps needed while employing local environments and processes in the LE-KMC method are discussed, as are the advantages and computational cost of LE-KMC. We assess the performance of the LE-KMC algorithm by considering test systems involving diffusion in submonolayer Ag and Ag-Cu alloy films on the Ag(001) surface.

  3. Methods of localization of Lamb wave sources on thin plates

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2015-04-01

    Signal localization techniques are ubiquitous in both industry and academic communities. We propose a new localization method on plates which is based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a Lamb wave propagation direct model and on an experimental dataset recorded with four Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers (1-26 kHz frequency range). We compare the performance of the technique to classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. Furthermore, we measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, geometry, and signal-to-noise ratio, and we show that this very versatile technique works better than the classical ones over sampling rates of 100 kHz - 1 MHz. The experimental setup consists of a glass plate of 80 cm x 40 cm with a thickness of 1 cm. Signals generated by a wooden hammer hit or a steel ball hit are captured by the above-mentioned sensors placed at different locations on the plate. Numerical simulations are performed using a dispersive far-field approximation of plate waves. Signals are generated using a Hertzian loading on the plate. The effect of reflections is also included by placing imaginary sources outside the plate boundaries. The proposed method can be modified for implementation in three-dimensional environments and used to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
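
    As a rough illustration of the amplitude-attenuation idea (not the exact inversion used in this work), one can scan candidate source positions and keep the one for which the source amplitudes back-projected from all sensors agree best; the power-law attenuation exponent, the sensor layout, and the noise level below are all assumptions.

    # Hypothetical sketch: locate a source on a plate from received energy amplitudes,
    # assuming A_received = A_source / r**k (k, geometry, and noise are made up here).
    import numpy as np

    rng = np.random.default_rng(1)
    k = 1.0                                        # assumed geometric attenuation exponent
    sensors = np.array([[0.05, 0.05], [0.75, 0.05], [0.05, 0.35], [0.75, 0.35]])  # metres
    src_true, A0 = np.array([0.52, 0.18]), 3.0

    r_true = np.linalg.norm(sensors - src_true, axis=1)
    A_meas = A0 / r_true**k * rng.normal(1.0, 0.05, size=4)   # noisy measured amplitudes

    # Grid search: the best candidate minimizes the spread of the back-projected
    # source amplitudes A_meas * r**k across sensors.
    xs, ys = np.linspace(0, 0.8, 161), np.linspace(0, 0.4, 81)
    best, best_spread = None, np.inf
    for x in xs:
        for y in ys:
            r = np.linalg.norm(sensors - np.array([x, y]), axis=1)
            r = np.maximum(r, 1e-3)                # avoid the sensor positions themselves
            A_src = A_meas * r**k                  # implied source amplitude per sensor
            spread = A_src.std() / A_src.mean()
            if spread < best_spread:
                best, best_spread = (x, y), spread

    print("true:", src_true, "estimated:", best)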

  4. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information

    NASA Astrophysics Data System (ADS)

    Unke, Oliver T.; Meuwly, Markus

    2018-06-01

    Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
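
    The decomposition can be written schematically as E_total = sum_i E_NN(d_i), where d_i is a descriptor of atom i's local neighbourhood. The sketch below only makes that linear-scaling structure explicit: the Gaussian distance descriptor and the untrained two-layer network are placeholders, not the published architecture.

    # Schematic only: total energy as a sum of per-atom neural-network contributions.
    # The descriptor (sum of Gaussians over neighbour distances) and the untrained
    # two-layer network are illustrative placeholders, not the published model.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)    # toy network weights
    W2, b2 = rng.normal(size=8), 0.0

    def descriptor(i, coords, cutoff=4.0, widths=(0.5, 1.0, 1.5, 2.0)):
        """Per-atom descriptor built only from neighbours inside the cutoff."""
        d = np.linalg.norm(coords - coords[i], axis=1)
        d = d[(d > 1e-8) & (d < cutoff)]
        return np.array([np.exp(-(d / w) ** 2).sum() for w in widths])

    def atomic_energy(x):
        h = np.tanh(W1 @ x + b1)                     # hidden layer
        return W2 @ h + b2                           # scalar contribution of one atom

    def total_energy(coords):
        # Linear scaling: one local evaluation per atom, summed over all atoms.
        return sum(atomic_energy(descriptor(i, coords)) for i in range(len(coords)))

    coords = rng.uniform(0, 6.0, size=(20, 3))       # 20 atoms in a toy box
    print(total_energy(coords))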

  5. Exact method for numerically analyzing a model of local denaturation in superhelically stressed DNA

    NASA Astrophysics Data System (ADS)

    Fye, Richard M.; Benham, Craig J.

    1999-03-01

    Local denaturation, the separation at specific sites of the two strands comprising the DNA double helix, is one of the most fundamental processes in biology, required to allow the base sequence to be read both in DNA transcription and in replication. In living organisms this process can be mediated by enzymes which regulate the amount of superhelical stress imposed on the DNA. We present a numerically exact technique for analyzing a model of denaturation in superhelically stressed DNA. This approach is capable of predicting the locations and extents of transition in circular superhelical DNA molecules of kilobase lengths and specified base pair sequences. It can also be used for closed loops of DNA which are typically found in vivo to be kilobases long. The analytic method consists of an integration over the DNA twist degrees of freedom followed by the introduction of auxiliary variables to decouple the remaining degrees of freedom, which allows the use of the transfer matrix method. The algorithm implementing our technique requires O(N2) operations and O(N) memory to analyze a DNA domain containing N base pairs. However, to analyze kilobase length DNA molecules it must be implemented in high precision floating point arithmetic. An accelerated algorithm is constructed by imposing an upper bound M on the number of base pairs that can simultaneously denature in a state. This accelerated algorithm requires O(MN) operations, and has an analytically bounded error. Sample calculations show that it achieves high accuracy (greater than 15 decimal digits) with relatively small values of M (M<0.05N) for kilobase length molecules under physiologically relevant conditions. Calculations are performed on the superhelical pBR322 DNA sequence to test the accuracy of the method. With no free parameters in the model, the locations and extents of local denaturation predicted by this analysis are in quantitatively precise agreement with in vitro experimental measurements. Calculations
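
    The cap M on the number of simultaneously denatured base pairs maps naturally onto a dynamic program over (position, number of open sites), which is where the O(MN) cost comes from. The toy two-state chain below, with made-up opening and junction energies and no superhelical or sequence dependence, is only meant to show that bookkeeping, not the authors' energy model.

    # Partition function of a toy two-state (intact/open) chain with at most M open
    # sites, evaluated in O(N*M) operations. Energies (in kT units) are arbitrary.
    import math

    N, M = 200, 10
    beta, a_open, b_junc = 1.0, 1.2, 3.0     # opening cost per site, junction penalty

    # dp[c][s]: partial partition function for the first i sites, c sites open so far,
    # and site i in state s (0 = intact, 1 = open).
    dp = [[0.0, 0.0] for _ in range(M + 1)]
    dp[0][0] = 1.0
    dp[1][1] = math.exp(-beta * a_open) if M >= 1 else 0.0
    for i in range(1, N):
        new = [[0.0, 0.0] for _ in range(M + 1)]
        for c in range(M + 1):
            for s in (0, 1):
                w = dp[c][s]
                if w == 0.0:
                    continue
                # next site intact (junction penalty if the state changes)
                new[c][0] += w * math.exp(-beta * (b_junc if s == 1 else 0.0))
                # next site open (counts toward the cap M)
                if c < M:
                    new[c + 1][1] += w * math.exp(-beta * (a_open + (b_junc if s == 0 else 0.0)))
        dp = new

    Z = sum(dp[c][s] for c in range(M + 1) for s in (0, 1))
    print(Z)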

  6. Improving Empirical Approaches to Estimating Local Greenhouse Gas Emissions

    NASA Astrophysics Data System (ADS)

    Blackhurst, M.; Azevedo, I. L.; Lattanzi, A.

    2016-12-01

    Evidence increasingly indicates our changing climate will have significant global impacts on public health, economies, and ecosystems. As a result, local governments have become increasingly interested in climate change mitigation. In the U.S., cities and counties representing nearly 15% of the domestic population plan to reduce greenhouse gas emissions by 300 million metric tons over the next 40 years (or approximately 1 ton per capita). Local governments estimate greenhouse gas emissions to establish greenhouse gas mitigation goals and select supporting mitigation measures. However, current practices produce greenhouse gas estimates - also known as a "greenhouse gas inventory" - of empirical quality often insufficient for robust mitigation decision making. Namely, current mitigation planning uses sporadic, annual, and deterministic estimates disaggregated by broad end-use sector, obscuring sources of emissions uncertainty, variability, and exogeneity that influence mitigation opportunities. As part of AGU's Thriving Earth Exchange, Ari Lattanzi of the City of Pittsburgh, PA recently partnered with Dr. Inez Lima Azevedo (Carnegie Mellon University) and Dr. Michael Blackhurst (University of Pittsburgh) to improve the empirical approach to characterizing Pittsburgh's greenhouse gas emissions. The project will produce first-order estimates of the underlying sources of uncertainty, variability, and exogeneity influencing Pittsburgh's greenhouse gases and discuss the implications for mitigation decision making. The results of the project will enable local governments to collect more robust greenhouse gas inventories to better support their mitigation goals and improve measurement and verification efforts.

  7. Spatiotemporal integration for tactile localization during arm movements: a probabilistic approach.

    PubMed

    Maij, Femke; Wing, Alan M; Medendorp, W Pieter

    2013-12-01

    It has been shown that people make systematic errors in the localization of a brief tactile stimulus that is delivered to the index finger while they are making an arm movement. Here we modeled these spatial errors with a probabilistic approach, assuming that they follow from temporal uncertainty about the occurrence of the stimulus. In the model, this temporal uncertainty converts into a spatial likelihood about the external stimulus location, depending on arm velocity. We tested the prediction of the model that the localization errors depend on arm velocity. Participants (n = 8) were instructed to localize a tactile stimulus that was presented to their index finger while they were making either slow- or fast-targeted arm movements. Our results confirm the model's prediction that participants make larger localization errors when making faster arm movements. The model, which was used to fit the errors for both slow and fast arm movements simultaneously, accounted very well for all the characteristics of these data with temporal uncertainty in stimulus processing as the only free parameter. We conclude that spatial errors in dynamic tactile perception stem from the temporal precision with which tactile inputs are processed.
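
    The core of such a model, temporal uncertainty about the stimulus converting into a spatial bias through the finger trajectory, can be sketched numerically. The Gaussian timing uncertainty, the minimum-jerk velocity profile, and every parameter value below are assumptions chosen only to reproduce the qualitative velocity dependence; this is not the authors' fitted model.

    # Illustrative sketch: temporal uncertainty (sigma_t) about when the tactile
    # stimulus occurred turns into a spatial bias through the finger trajectory x(t).
    import numpy as np

    def min_jerk(t, T, amplitude=0.3):
        """Finger position (m) along an assumed minimum-jerk reach of duration T (s)."""
        s = np.clip(t / T, 0.0, 1.0)
        return amplitude * (10 * s**3 - 15 * s**4 + 6 * s**5)

    def predicted_error(t_stim, T, sigma_t=0.06, amplitude=0.3):
        # Perceived location = expectation of x(t) under Gaussian timing uncertainty.
        ts = np.linspace(t_stim - 4 * sigma_t, t_stim + 4 * sigma_t, 2001)
        w = np.exp(-0.5 * ((ts - t_stim) / sigma_t) ** 2)
        w /= w.sum()
        perceived = np.sum(w * min_jerk(ts, T, amplitude))
        return perceived - min_jerk(np.array([t_stim]), T, amplitude)[0]

    for T in (1.2, 0.4):                       # slow vs fast movement of equal length
        errs = [predicted_error(t, T) for t in np.linspace(0.0, T, 9)]
        print(f"duration {T:.1f} s, max |error| = {max(abs(e) for e in errs)*100:.1f} cm")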

  8. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment

    PubMed Central

    2014-01-01

    Background Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. Results MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Conclusions Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy. PMID:24731387
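
    The clustering-style scores can be illustrated directly from this description: global quality from average pairwise similarity between models, local quality from per-residue distances to a few top-ranked models. The GDT-like 4 Å cutoff, the assumption that all models are already superposed in a common frame, and the toy coordinates are simplifications, not the MULTICOM implementation.

    # Sketch of pairwise (clustering) model quality assessment. Assumes models share
    # residue numbering and are already superposed; the 4 A cutoff is an arbitrary choice.
    import numpy as np

    rng = np.random.default_rng(0)
    n_models, n_res = 6, 50
    native = np.cumsum(rng.normal(size=(n_res, 3)), axis=0)        # toy reference fold
    models = [native + rng.normal(scale=s, size=(n_res, 3))        # decoys of varying quality
              for s in (0.5, 0.8, 1.0, 1.5, 3.0, 5.0)]

    def similarity(a, b, cutoff=4.0):
        """Fraction of residues whose corresponding atoms lie within the cutoff."""
        return float(np.mean(np.linalg.norm(a - b, axis=1) < cutoff))

    # Global quality: mean pairwise similarity of a model to all the others.
    global_q = [np.mean([similarity(models[i], models[j])
                         for j in range(n_models) if j != i]) for i in range(n_models)]

    # Local quality: per-residue mean distance to the top-ranked models.
    top = np.argsort(global_q)[::-1][:3]
    local_q = [np.mean([np.linalg.norm(m - models[t], axis=1) for t in top], axis=0)
               for m in models]

    print(np.round(global_q, 2))          # higher = more consistent with the ensemble
    print(np.round(local_q[0][:5], 2))    # per-residue error estimate, first 5 residues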

  9. Designing and evaluating the MULTICOM protein local and global model quality prediction methods in the CASP10 experiment.

    PubMed

    Cao, Renzhi; Wang, Zheng; Cheng, Jianlin

    2014-04-15

    Protein model quality assessment is an essential component of generating and using protein structural models. During the Tenth Critical Assessment of Techniques for Protein Structure Prediction (CASP10), we developed and tested four automated methods (MULTICOM-REFINE, MULTICOM-CLUSTER, MULTICOM-NOVEL, and MULTICOM-CONSTRUCT) that predicted both local and global quality of protein structural models. MULTICOM-REFINE was a clustering approach that used the average pairwise structural similarity between models to measure the global quality and the average Euclidean distance between a model and several top ranked models to measure the local quality. MULTICOM-CLUSTER and MULTICOM-NOVEL were two new support vector machine-based methods of predicting both the local and global quality of a single protein model. MULTICOM-CONSTRUCT was a new weighted pairwise model comparison (clustering) method that used the weighted average similarity between models in a pool to measure the global model quality. Our experiments showed that the pairwise model assessment methods worked better when a large portion of models in the pool were of good quality, whereas single-model quality assessment methods performed better on some hard targets when only a small portion of models in the pool were of reasonable quality. Since digging out a few good models from a large pool of low-quality models is a major challenge in protein structure prediction, single model quality assessment methods appear to be poised to make important contributions to protein structure modeling. The other interesting finding was that single-model quality assessment scores could be used to weight the models by the consensus pairwise model comparison method to improve its accuracy.

  10. Sustainability in Health care by Allocating Resources Effectively (SHARE) 11: reporting outcomes of an evidence-driven approach to disinvestment in a local healthcare setting.

    PubMed

    Harris, Claire; Allen, Kelly; Ramsey, Wayne; King, Richard; Green, Sally

    2018-05-30

    This is the final paper in a thematic series reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The SHARE Program was established to explore a systematic, integrated, evidence-based organisation-wide approach to disinvestment in a large Australian health service network. This paper summarises the findings, discusses the contribution of the SHARE Program to the body of knowledge and understanding of disinvestment in the local healthcare setting, and considers implications for policy, practice and research. The SHARE program was conducted in three phases. Phase One was undertaken to understand concepts and practices related to disinvestment and the implications for a local health service and, based on this information, to identify potential settings and methods for decision-making about disinvestment. The aim of Phase Two was to implement and evaluate the proposed methods to determine which were sustainable, effective and appropriate in a local health service. A review of the current literature incorporating the SHARE findings was conducted in Phase Three to contribute to the understanding of systematic approaches to disinvestment in the local healthcare context. SHARE differed from many other published examples of disinvestment in several ways: by seeking to identify and implement disinvestment opportunities within organisational infrastructure rather than as standalone projects; considering disinvestment in the context of all resource allocation decisions rather than in isolation; including allocation of non-monetary resources as well as financial decisions; and focusing on effective use of limited resources to optimise healthcare outcomes. The SHARE findings provide a rich source of new information about local health service decision-making, in a level of detail not previously reported, to inform others in similar situations. Multiple innovations related to disinvestment were found to be

  11. A Discourse Based Approach to the Language Documentation of Local Ecological Knowledge

    ERIC Educational Resources Information Center

    Odango, Emerson Lopez

    2016-01-01

    This paper proposes a discourse-based approach to the language documentation of local ecological knowledge (LEK). The knowledge, skills, beliefs, cultural worldviews, and ideologies that shape the way a community interacts with its environment can be examined through the discourse in which LEK emerges. 'Discourse-based' refers to two components:…

  12. Local constitutive behavior of paper determined by an inverse method

    Treesearch

    John M. Considine; C. Tim Scott; Roland Gleisner; Junyong Zhu

    2006-01-01

    The macroscopic behavior of paper is governed by small-scale behavior. Intuitively, we know that a small-scale defect within a paper sheet effectively determines the global behavior of the sheet. In this work, we describe a method to evaluate the local constitutive behavior of paper by using an inverse method.

  13. A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem

    We develop a local, implicit crack tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Finally, several three-dimensional benchmark problems are studied and the results are compared with the available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.

  14. A local crack-tracking strategy to model three-dimensional crack propagation with embedded methods

    DOE PAGES

    Annavarapu, Chandrasekhar; Settgast, Randolph R.; Vitali, Efrem; ...

    2016-09-29

    We develop a local, implicit crack tracking approach to propagate embedded failure surfaces in three dimensions. We build on the global crack-tracking strategy of Oliver et al. (Int J. Numer. Anal. Meth. Geomech., 2004; 28:609–632) that tracks all potential failure surfaces in a problem at once by solving a Laplace equation with anisotropic conductivity. We discuss important modifications to this algorithm with a particular emphasis on the effect of the Dirichlet boundary conditions for the Laplace equation on the resultant crack path. Algorithmic and implementational details of the proposed method are provided. Finally, several three-dimensional benchmark problems are studied and the results are compared with the available literature. The results indicate that the proposed method addresses pathological cases, exhibits better behavior in the presence of closely interacting fractures, and provides a viable strategy to robustly evolve embedded failure surfaces in 3D.

  15. (Non-) homomorphic approaches to denoise intensity SAR images with non-local means and stochastic distances

    NASA Astrophysics Data System (ADS)

    Penna, Pedro A. A.; Mascarenhas, Nelson D. A.

    2018-02-01

    The development of new methods to denoise images still attracts researchers, who seek to combat the noise with minimal loss of resolution and details, like edges and fine structures. Many algorithms have the goal of removing additive white Gaussian noise (AWGN). However, it is not the only type of noise which interferes in the analysis and interpretation of images. Therefore, it is extremely important to extend the filters' capability to the different noise models present in the literature, for example the multiplicative noise called speckle that is present in synthetic aperture radar (SAR) images. The state-of-the-art algorithms in the remote sensing area work with similarity between patches. This paper aims to develop two approaches using the non-local means (NLM) filter, originally developed for AWGN. In our research, we expanded its capacity to handle speckle in intensity SAR images. The first approach is grounded in the use of stochastic distances based on the G0 distribution, without transforming the data to the logarithmic domain via a homomorphic transformation. It takes into account the speckle and backscatter to estimate the parameters necessary to compute the stochastic distances in the NLM. The second method uses an a priori NLM denoising step with a homomorphic transformation and applies the inverse Gamma distribution to estimate the parameters that are then used in the NLM with stochastic distances. The latter method also presents a new alternative to compute the parameters for the G0 distribution. Finally, this work compares and analyzes the synthetic and real results of the proposed methods with some recent filters from the literature.

  16. Explosion localization via infrasound.

    PubMed

    Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M

    2009-11-01

    Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, with the solution obtained by triangulating the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.

  17. Waves on Thin Plates: A New (Energy Based) Method on Localization

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2016-04-01

    Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method applicable to thin plates which is based on energy amplitude attenuation and inverted source amplitude comparison. This inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with four Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1 - 26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass/plexiglass plate with dimensions of 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by a quasi-perpendicular hit on the plate with steel, glass, or polyamide balls of different sizes dropped from a height of 2-3 cm. Signals are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio, and computational time. We show that this new technique, which is very versatile, works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. It is possible to obtain a decent resolution (3 cm mean error) using very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by using imaginary sources outside the plate boundaries. The proposed method can easily be extended to three-dimensional environments, to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).

  18. Local and global approaches to the problem of Poincaré recurrences. Applications in nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Anishchenko, V. S.; Boev, Ya. I.; Semenova, N. I.; Strelkova, G. I.

    2015-07-01

    We review rigorous and numerical results on the statistics of Poincaré recurrences which are related to the modern development of the Poincaré recurrence problem. We analyze and describe the rigorous results which are achieved both in the classical (local) approach and in the recently developed global approach. These results are illustrated by numerical simulation data for simple chaotic and ergodic systems. It is shown that the basic theoretical laws can be applied to noisy systems if the probability measure is ergodic and stationary. Poincaré recurrences are studied numerically in nonautonomous systems. Statistical characteristics of recurrences are analyzed in the framework of the global approach for the cases of positive and zero topological entropy. We show that for positive entropy, there is a relationship between the Afraimovich-Pesin dimension, Lyapunov exponents and the Kolmogorov-Sinai entropy, both without and in the presence of external noise. The case of zero topological entropy is exemplified by numerical results for the Poincaré recurrence statistics in the circle map. We show and prove that the dependence of minimal recurrence times on the return region size demonstrates universal properties for the golden and the silver ratio. The behavior of Poincaré recurrences is analyzed at the critical point of the Feigenbaum attractor's birth. We explore Poincaré recurrences for an ergodic set which is generated in the stroboscopic section of a nonautonomous oscillator and is similar to a circle shift. Based on the obtained results we show how the Poincaré recurrence statistics can be applied for solving a number of nonlinear dynamics issues. We propose and illustrate alternative methods for diagnosing effects of external and mutual synchronization of chaotic systems in the context of the local and global approaches. The properties of the recurrence time probability density can be used to detect the stochastic resonance phenomenon. We also discuss how
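
    A standard anchor for the local (distribution-based) statistics is Kac's lemma, which for an ergodic, stationary probability measure mu ties the mean recurrence time of a set A to its measure; it is quoted here as background, not as a result of this review:

    \langle \tau_A \rangle \;=\; \frac{1}{\mu(A)} \int_A \tau_A(x)\, d\mu(x) \;=\; \frac{1}{\mu(A)}.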

  19. Analysis on accuracy improvement of rotor-stator rubbing localization based on acoustic emission beamforming method.

    PubMed

    He, Tian; Xiao, Denghong; Pan, Qiang; Liu, Xiandong; Shan, Yingchun

    2014-01-01

    This paper introduces an improved acoustic emission (AE) beamforming method to localize rotor-stator rubbing faults in rotating machinery. To investigate the propagation characteristics of acoustic emission signals in the casing shell plate of rotating machinery, plate wave theory is applied to a thin plate. A simulation is conducted, and its results show that the localization accuracy of beamforming depends on multiple propagation modes, dispersion, velocity, and array dimension. In order to reduce the effect of these propagation characteristics on the source localization, an AE signal pre-processing method is introduced by combining plate wave theory and the wavelet packet transform. A revised localization velocity is also presented to reduce the effect of array size. The localization accuracy of the baseline beamforming approach and of the improved method of the present paper are compared in a rubbing test carried out on a rotating machinery test table. The results indicate that the improved method can localize rub faults effectively. Copyright © 2013 Elsevier B.V. All rights reserved.
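
    The underlying delay-and-sum step can be sketched independently of the plate-wave corrections: scan candidate source points, time-shift each channel by the predicted travel time, and pick the point with the largest summed energy. The single non-dispersive wave speed and the synthetic waveforms below are simplifications of the casing-plate problem, not the paper's improved method.

    # Minimal delay-and-sum source scan (no dispersion or mode handling, unlike the
    # paper): the candidate point whose predicted delays best align the channels wins.
    import numpy as np

    rng = np.random.default_rng(2)
    c, fs = 3000.0, 200_000                      # assumed wave speed (m/s), sampling rate (Hz)
    sensors = np.array([[0.0, 0.0], [0.6, 0.0], [0.0, 0.4], [0.6, 0.4]])
    src = np.array([0.42, 0.13])

    t = np.arange(0, 0.003, 1 / fs)
    pulse = np.sin(2 * np.pi * 15e3 * t) * np.exp(-((t - 3e-4) / 1e-4) ** 2)

    # Build synthetic channel signals: delayed pulse plus noise.
    signals = []
    for s in sensors:
        delay = np.linalg.norm(src - s) / c
        sig = np.interp(t - delay, t, pulse, left=0.0, right=0.0)
        signals.append(sig + 0.05 * rng.normal(size=t.size))
    signals = np.array(signals)

    def beam_power(p):
        delays = np.linalg.norm(sensors - p, axis=1) / c
        aligned = [np.interp(t + d, t, sig, left=0.0, right=0.0)
                   for d, sig in zip(delays, signals)]      # undo each propagation delay
        return np.sum(np.sum(aligned, axis=0) ** 2)

    xs, ys = np.linspace(0, 0.6, 61), np.linspace(0, 0.4, 41)
    grid = [(beam_power(np.array([x, y])), x, y) for x in xs for y in ys]
    print("true:", src, "estimated:", max(grid)[1:])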

  20. A Voxel-Based Approach to Explore Local Dose Differences Associated With Radiation-Induced Lung Damage

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Palma, Giuseppe; Monti, Serena; D'Avino, Vittoria

    Purpose: To apply a voxel-based (VB) approach aimed at exploring local dose differences associated with late radiation-induced lung damage (RILD). Methods and Materials: An interinstitutional database of 98 patients who were Hodgkin lymphoma (HL) survivors treated with postchemotherapy supradiaphragmatic radiation therapy was analyzed in the study. Eighteen patients experienced late RILD, classified according to the Radiation Therapy Oncology Group scoring system. Each patient's computed tomographic (CT) scan was normalized to a single reference case anatomy (common coordinate system, CCS) through a log-diffeomorphic approach. The obtained deformation fields were used to map the dose of each patient into the CCS. The coregistration robustness and the dose mapping accuracy were evaluated by geometric and dose scores. Two different statistical mapping schemes for nonparametric multiple permutation inference on dose maps were applied, and the corresponding P<.05 significance lung subregions were generated. A receiver operating characteristic (ROC)-based test was performed on the mean dose extracted from each subregion. Results: The coregistration process resulted in a geometrically robust and accurate dose warping. A significantly higher dose was consistently delivered to RILD patients in voxel clusters near the peripheral medial-basal portion of the lungs. The area under the ROC curves (AUC) from the mean dose of the voxel clusters was higher than the corresponding AUC derived from the total lung mean dose. Conclusions: We implemented a framework including a robust registration process and a VB approach accounting for the multiple comparison problem in dose-response modeling, and applied it to a cohort of HL survivors to explore a local dose–RILD relationship in the lungs. Patients with RILD received a significantly greater dose in parenchymal regions where low doses (∼6 Gy) were delivered. Interestingly, the relation between differences in the high

  1. A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems

    DTIC Science & Technology

    2005-05-01

    A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems. Dissertation by Gary W. Kinney Jr., The University of Texas at Austin, May 2005. Distribution Statement A: approved for public release, distribution unlimited.

  2. New approaches for automatic three-dimensional source localization of acoustic emissions--Applications to concrete specimens.

    PubMed

    Kurz, Jochen H

    2015-12-01

    The task of locating a source in space by measuring travel time differences of elastic or electromagnetic waves from the source to several sensors is evident in varying fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given, with the focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy, extended by a statistical approach, is described which allows stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen, leading to travel paths of up to 1 m and therefore to noisy data with errors in the determined onsets. The adaptation of the algorithms from geodesy to the localization of elastic wave sources offers new possibilities concerning the stability, automation and performance of localization results. Fracture processes can be assessed more accurately. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Local Field Response Method Phenomenologically Introducing Spin Correlations

    NASA Astrophysics Data System (ADS)

    Tomaru, Tatsuya

    2018-03-01

    The local field response (LFR) method is a way of searching for the ground state in a similar manner to quantum annealing. However, the LFR method operates on a classical machine, and quantum effects are introduced through a priori information and through phenomenological means reflecting the states during the computations. The LFR method has been treated with a one-body approximation, and therefore, the effect of entanglement has not been sufficiently taken into account. In this report, spin correlations are phenomenologically introduced as one of the effects of entanglement, by which multiple tunneling at anticrossing points is taken into account. As a result, the accuracy of solutions for a 128-bit system increases by 31% compared with that without spin correlations.

  4. How Nectar-Feeding Bats Localize their Food: Echolocation Behavior of Leptonycteris yerbabuenae Approaching Cactus Flowers

    PubMed Central

    Koblitz, Jens C.; Fleming, Theodore H.; Medellín, Rodrigo A.; Kalko, Elisabeth K. V.; Schnitzler, Hans-Ulrich; Tschapka, Marco

    2016-01-01

    Nectar-feeding bats show morphological, physiological, and behavioral adaptations for feeding on nectar. How they find and localize flowers is still poorly understood. While scent cues alone allow no precise localization of a floral target, the spatial properties of flower echoes are very precise and could play a major role, particularly at close range. The aim of this study is to understand the role of echolocation for the classification and localization of flowers. We compared the approach behavior of Leptonycteris yerbabuenae to flowers of a columnar cactus, Pachycereus pringlei, with that to an acrylic hollow hemisphere that is acoustically conspicuous to bats, but has different acoustic properties and, contrary to the cactus flower, presents no scent. For recording the flight and echolocation behavior we used two infrared video cameras under stroboscopic illumination synchronized with ultrasound recordings. During search flights all individuals identified both targets as a possible food source and initiated an approach flight; however, they visited only the cactus flower. In experiments with the acrylic hemisphere, bats aborted the approach at ca. 40–50 cm. In the last instant before the flower visit the bats emitted a long terminal group of 10–20 calls. This is the first report of this behavior for a nectar-feeding bat. Our findings suggest that L. yerbabuenae use echolocation for the classification and localization of cactus flowers and that the echo-acoustic characteristics of the flower guide the bats directly to the flower opening. PMID:27684373

  5. Exact density functional and wave function embedding schemes based on orbital localization

    NASA Astrophysics Data System (ADS)

    Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály

    2016-08-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  6. Local tolerance testing under REACH: Accepted non-animal methods are not on equal footing with animal tests.

    PubMed

    Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert

    2016-07-01

    In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required. 2016 FRAME.

  7. A new approach to enforce element-wise mass/species balance using the augmented Lagrangian method

    NASA Astrophysics Data System (ADS)

    Chang, J.; Nakshatrala, K.

    2015-12-01

    The least-squares finite element method (LSFEM) is one of many ways in which one can discretize and express a set of first-order partial differential equations as a mixed formulation. However, the standard LSFEM is not locally conservative by design. The absence of this physical property can have serious implications in the numerical simulation of subsurface flow and transport. Two commonly employed ways to circumvent this issue are the Lagrange multiplier method, which explicitly satisfies the element-wise divergence constraint by introducing new unknowns, and appending a penalty factor to the continuity constraint, which reduces the violation in the mass balance. However, these methodologies have some well-known drawbacks. Herein, we propose a new approach to improve the local species/mass balance. The approach augments the least-squares functional with constraints built from a novel mathematical construction of the local species/mass balance, which differs from the conventional ways. The resulting constrained optimization problem is solved using the augmented Lagrangian method, which corrects the balance errors in an iterative fashion. The advantages of this methodology are that the problem size is not increased (thus preserving the symmetry and positive-definiteness) and that one need not provide an accurate guess for the initial penalty to reach a prescribed mass balance tolerance. We derive the least-squares weighting needed to ensure accurate solutions. We also demonstrate the robustness of the weighted LSFEM coupled with the augmented Lagrangian by solving large-scale heterogeneous and variably saturated flow through porous media problems. The performance of the iterative solvers with respect to various user-defined augmented Lagrangian parameters will be documented.
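
    The augmented Lagrangian update described here (iteratively correcting the constraint violation without enlarging the system) can be shown on a generic equality-constrained least-squares problem; the random matrices below merely stand in for the least-squares operator and the element-wise balance constraints, and the penalty schedule is an arbitrary choice.

    # Generic augmented-Lagrangian loop for  min ||A x - b||^2  subject to  C x = d,
    # standing in for a least-squares FEM functional with element-wise balance
    # constraints. A, C, d, and the penalty schedule are illustrative only.
    import numpy as np

    rng = np.random.default_rng(3)
    A, b = rng.normal(size=(40, 12)), rng.normal(size=40)
    C, d = rng.normal(size=(4, 12)), rng.normal(size=4)   # "balance" constraints C x = d

    rho, lam, x = 10.0, np.zeros(4), np.zeros(12)
    for it in range(50):
        # x-update: unconstrained least squares including the augmented terms.
        H = 2 * A.T @ A + rho * C.T @ C            # system size never grows (12 x 12)
        g = 2 * A.T @ b + C.T @ (rho * d - lam)
        x = np.linalg.solve(H, g)
        r = C @ x - d                              # constraint (balance) residual
        lam += rho * r                             # multiplier update drives r -> 0
        rho *= 1.5                                 # mild penalty ramp (a modeling choice)
        if np.linalg.norm(r) < 1e-10:
            break

    print("constraint violation:", np.linalg.norm(C @ x - d))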

  8. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing efforts as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function method produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  9. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
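
    For orientation, the classical global Otsu step and the centroid extraction can be condensed as below; this is the textbook between-class-variance criterion plus a connected-component centroid pass with SciPy, not the improved TSM proposed in the paper, and the synthetic image stands in for real sonar data.

    # Baseline only: classical Otsu threshold plus centroids of the resulting
    # foreground components (the paper's improved TSM and sonar specifics are omitted).
    import numpy as np
    from scipy import ndimage

    def otsu_threshold(img):
        """Return the grey level maximizing between-class variance (256-bin histogram)."""
        hist, _ = np.histogram(img, bins=256, range=(0, 256))
        p = hist / hist.sum()
        omega = np.cumsum(p)                      # class-0 probability
        mu = np.cumsum(p * np.arange(256))        # class-0 cumulative mean
        mu_t = mu[-1]
        with np.errstate(divide="ignore", invalid="ignore"):
            sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
        return int(np.nanargmax(sigma_b))

    rng = np.random.default_rng(4)
    img = rng.normal(60, 12, size=(128, 128))     # toy "sonar" background
    img[30:50, 40:70] += 90                       # two bright objects
    img[80:95, 90:110] += 110
    img = np.clip(img, 0, 255)

    t = otsu_threshold(img)
    mask = img > t
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    print("threshold:", t, "landmark centroids:", np.round(centroids, 1))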

  10. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, in this paper, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process, which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM. Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is

  11. Description of the atomic disorder (local order) in crystals by the mixed-symmetry method

    NASA Astrophysics Data System (ADS)

    Dudka, A. P.; Novikova, N. E.

    2017-11-01

    An approach to the description of local atomic disorder (short-range order) in single crystals by the mixed-symmetry method based on Bragg scattering data is proposed, and the corresponding software is developed. In defect-containing crystals, each atom in the unit cell can be described by its own symmetry space group. The expression for the calculated structural factor includes summation over different sets of symmetry operations for different atoms. To facilitate the search for new symmetry elements, an "atomic disorder expert" was developed, which estimates the significance of tested models. It is shown that the symmetry lowering for some atoms correlates with the existence of phase transitions (in langasite family crystals) and the anisotropy of physical properties (in rare-earth dodecaborides RB12).
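
    The key change relative to the standard structure factor is that the symmetry sum may run over a different space group for each atom. Schematically (per-atom groups G_j of operations (R_s, t_s), scattering factors f_j, displacement factors omitted), the calculated structure factor then reads, as a sketch of the idea rather than the authors' exact expression:

    F(\mathbf{h}) \;=\; \sum_{j} f_j(\mathbf{h}) \sum_{(R_s,\,\mathbf{t}_s)\,\in\, G_j}
    \exp\!\bigl[\, 2\pi i\, \mathbf{h}\cdot\bigl(R_s \mathbf{r}_j + \mathbf{t}_s\bigr) \bigr].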

  12. Non-animal methods to predict skin sensitization (II): an assessment of defined approaches *.

    PubMed

    Kleinstreuer, Nicole C; Hoffmann, Sebastian; Alépée, Nathalie; Allen, David; Ashikaga, Takao; Casey, Warren; Clouet, Elodie; Cluzel, Magalie; Desprez, Bertrand; Gellatly, Nichola; Göbel, Carsten; Kern, Petra S; Klaric, Martina; Kühnl, Jochen; Martinozzi-Teissier, Silvia; Mewes, Karsten; Miyazawa, Masaaki; Strickland, Judy; van Vliet, Erwin; Zang, Qingda; Petersohn, Dirk

    2018-05-01

    Skin sensitization is a toxicity endpoint of widespread concern, for which the mechanistic understanding and concurrent necessity for non-animal testing approaches have evolved to a critical juncture, with many available options for predicting sensitization without using animals. Cosmetics Europe and the National Toxicology Program Interagency Center for the Evaluation of Alternative Toxicological Methods collaborated to analyze the performance of multiple non-animal data integration approaches for the skin sensitization safety assessment of cosmetics ingredients. The Cosmetics Europe Skin Tolerance Task Force (STTF) collected and generated data on 128 substances in multiple in vitro and in chemico skin sensitization assays selected based on a systematic assessment by the STTF. These assays, together with certain in silico predictions, are key components of various non-animal testing strategies that have been submitted to the Organization for Economic Cooperation and Development as case studies for skin sensitization. Curated murine local lymph node assay (LLNA) and human skin sensitization data were used to evaluate the performance of six defined approaches, comprising eight non-animal testing strategies, for both hazard and potency characterization. Defined approaches examined included consensus methods, artificial neural networks, support vector machine models, Bayesian networks, and decision trees, most of which were reproduced using open source software tools. Multiple non-animal testing strategies incorporating in vitro, in chemico, and in silico inputs demonstrated equivalent or superior performance to the LLNA when compared to both animal and human data for skin sensitization.
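
    As a deliberately simplified example of a consensus-type defined approach, a "2-out-of-3"-style majority vote over binary assay calls can be written in a few lines; the assay names (commonly used key-event assays in this field) and the example substance calls are placeholders, not data from the analysis described here.

    # Simplified consensus ("2 out of 3"-style) defined approach for hazard calls.
    # Assay names and the example substance calls are illustrative placeholders.
    from typing import Dict

    ASSAYS = ("DPRA", "KeratinoSens", "h-CLAT")   # assays addressing successive key events

    def consensus_hazard(calls: Dict[str, bool]) -> bool:
        """Classify as sensitiser if at least two of the three assay calls are positive."""
        votes = [calls[a] for a in ASSAYS]
        return sum(votes) >= 2

    examples = {
        "substance_A": {"DPRA": True,  "KeratinoSens": True,  "h-CLAT": False},
        "substance_B": {"DPRA": False, "KeratinoSens": False, "h-CLAT": True},
    }
    for name, calls in examples.items():
        print(name, "->", "sensitiser" if consensus_hazard(calls) else "non-sensitiser")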

  13. A fictitious domain approach for the Stokes problem based on the extended finite element method

    NASA Astrophysics Data System (ADS)

    Court, Sébastien; Fournié, Michel; Lozinski, Alexei

    2014-01-01

    In the present work, we propose to extend to the Stokes problem a fictitious domain approach inspired by the eXtended Finite Element Method and studied for the Poisson problem in [Renard]. The method allows computations in domains whose boundaries do not match the mesh. A mixed finite element method is used for the fluid flow. The interface between the fluid and the structure is localized by a level-set function. Dirichlet boundary conditions are taken into account using a Lagrange multiplier. A stabilization term is introduced to improve the approximation of the normal trace of the Cauchy stress tensor at the interface and to avoid the inf-sup condition between the spaces for the velocity and the Lagrange multiplier. A convergence analysis is given and several numerical tests are performed to illustrate the capabilities of the method.

  14. 3-D localization of virtual sound sources: effects of visual environment, pointing method, and training.

    PubMed

    Majdak, Piotr; Goupell, Matthew J; Laback, Bernhard

    2010-02-01

    The ability to localize sound sources in three-dimensional space was tested in humans. In Experiment 1, naive subjects listened to noises filtered with subject-specific head-related transfer functions. The tested conditions included the pointing method (head or manual pointing) and the visual environment (VE; darkness or virtual VE). The localization performance was not significantly different between the pointing methods. The virtual VE significantly improved the horizontal precision and reduced the number of front-back confusions. These results show the benefit of using a virtual VE in sound localization tasks. In Experiment 2, subjects were provided with sound localization training. Over the course of training, the performance improved for all subjects, with the largest improvements occurring during the first 400 trials. The improvements beyond the first 400 trials were smaller. After the training, there was still no significant effect of pointing method, showing that the choice of either head- or manual-pointing method plays a minor role in sound localization performance. The results of Experiment 2 reinforce the importance of perceptual training for at least 400 trials in sound localization studies.

  15. Locally Weighted Ensemble Clustering.

    PubMed

    Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang

    2018-05-01

    Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite the significant success, one limitation of most existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially in the case where there is no access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and a local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
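
    As a rough illustration of the entropic uncertainty idea described above, the sketch below scores each cluster by the entropy of how its members are spread over the clusters of the other base clusterings and uses exp(-uncertainty) as its local weight when accumulating a co-association matrix. This is a simplified reading of the approach, not the authors' exact validity measure or consensus functions; the function names and the exponential weighting form are assumptions.

    import numpy as np

    def cluster_uncertainty(members, other_labels):
        """Entropy of how a cluster's members are distributed over the clusters
        of another base clustering (0 = perfectly consistent)."""
        counts = np.bincount(other_labels[members])
        p = counts[counts > 0] / counts.sum()
        return float(-np.sum(p * np.log(p)))

    def locally_weighted_coassociation(base_labels):
        """base_labels: (m, n) integer array, m base clusterings over n samples."""
        m, n = base_labels.shape
        ca = np.zeros((n, n))
        for i in range(m):
            others = [base_labels[j] for j in range(m) if j != i]
            for c in np.unique(base_labels[i]):
                members = np.where(base_labels[i] == c)[0]
                # average uncertainty of this cluster w.r.t. the other base clusterings
                u = np.mean([cluster_uncertainty(members, ol) for ol in others]) if others else 0.0
                w = np.exp(-u)                   # more certain cluster -> larger weight
                ca[np.ix_(members, members)] += w
        return ca / m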

  16. Approaches to local climate action in Colorado

    NASA Astrophysics Data System (ADS)

    Huang, Y. D.

    2011-12-01

    Though climate change is a global problem, the impacts are felt on the local scale; it follows that the solutions must come at the local level. Fortunately, many cities and municipalities are implementing climate mitigation (or climate action) policies and programs. However, they face many procedural and institutional barriers to their efforts, such as lack of expertise or data, limited human and financial resources, and lack of community engagement (Krause 2011). To address the first obstacle, thirteen in-depth case studies were done of successful model practices ("best practices") of climate action programs carried out by various cities, counties, and organizations in Colorado, and one outside Colorado, and developed into "how-to guides" for other municipalities to use. Research was conducted by reading documents (e.g. annual reports, community guides, city websites), by email correspondence with program managers and city officials, and via phone interviews. The information gathered was then compiled into a series of reports containing a narrative description of the initiative; an overview of the plan elements (target audience and goals); implementation strategies and any indicators of success to date (e.g. GHG emissions reductions, cost savings); and the adoption or approval process, as well as community engagement efforts and marketing or messaging strategies. The types of programs covered were energy action plans, energy efficiency programs, renewable energy programs, and transportation and land use programs. Across the thirteen case studies, there was a range of approaches to implementing local climate action programs, examined along two dimensions: focus on climate change (whether it was direct/explicit or indirect/implicit) and extent of government authority. This benchmarking exercise affirmed the conventional wisdom propounded by Pitt (2010), that peer pressure (that is, the presence of neighboring jurisdictions with climate initiatives), the level of

  17. RENT+: an improved method for inferring local genealogical trees from haplotypes with recombination

    PubMed Central

    Mirzaei, Sajad; Wu, Yufeng

    2017-01-01

    Abstract Motivation: Haplotypes from one or multiple related populations share a common genealogical history. If this shared genealogy can be inferred from haplotypes, it can be very useful for many population genetics problems. However, with the presence of recombination, the genealogical history of haplotypes is complex and cannot be represented by a single genealogical tree. Therefore, inference of genealogical history with recombination is much more challenging than the case of no recombination. Results: In this paper, we present a new approach called RENT+ for the inference of local genealogical trees from haplotypes with the presence of recombination. RENT+ builds on a previous genealogy inference approach called RENT, which infers a set of related genealogical trees at different genomic positions. RENT+ represents a significant improvement over RENT in the sense that it is more effective in extracting information contained in the haplotype data about the underlying genealogy than RENT. The key components of RENT+ are several greatly enhanced genealogy inference rules. Through simulation, we show that RENT+ is more efficient and accurate than several existing genealogy inference methods. As an application, we apply RENT+ in the inference of population demographic history from haplotypes, which outperforms several existing methods. Availability and Implementation: RENT+ is implemented in Java, and is freely available for download from: https://github.com/SajadMirzaei/RentPlus. Contacts: sajad@engr.uconn.edu or ywu@engr.uconn.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28065901

  18. Improving Biomaterials Imaging for Nanotechnology: Rapid Methods for Protein Localization at Ultrastructural Level.

    PubMed

    Cano-Garrido, Olivia; Garcia-Fruitós, Elena; Villaverde, Antonio; Sánchez-Chardi, Alejandro

    2018-04-01

    The preparation of biological samples for electron microscopy is material- and time-consuming because it is often based on long protocols that also may produce artifacts. Protein labeling for transmission electron microscopy (TEM) is such an example, taking several days. However, for protein-based nanotechnology, high resolution imaging techniques are unique and crucial tools for studying the spatial distribution of these molecules, either alone or as components of biomaterials. In this paper, we tested two new short methods of immunolocalization for TEM, and compared them with a standard protocol in qualitative and quantitative approaches by using four protein-based nanoparticles. We reported a significant increase of labeling per area of nanoparticle in both new methodologies (H = 19.811; p < 0.001) with all the model antigens tested: GFP (H = 22.115; p < 0.001), MMP-2 (H = 19.579; p < 0.001), MMP-9 (H = 7.567; p < 0.023), and IFN-γ (H = 62.110; p < 0.001). We also found that the most suitable protocol for labeling depends on the nanoparticle's tendency to aggregate. Moreover, the shorter methods reduce artifacts, time (by 30%), residues, and reagents that can hinder, lose, or alter antigens, and achieve a significant increase in protein localization (of about 200%). Overall, this study makes a step forward in the development of optimized protocols for the nanoscale localization of peptides and proteins within new biomaterials. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Educational Resource Multipliers for Use in Local Public Finance: An Input-Output Approach.

    ERIC Educational Resources Information Center

    Boardman, A. E.; Schinnar, A. P.

    1982-01-01

    Develops an input-output model, with related multipliers, showing how changes in earmarked and discretionary educational funds (whether local, state, or federal) affect all of a state's districts and educational programs. Illustrates the model with Pennsylvania data and relates it to the usual educational finance approach, which uses demand…

  20. Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu

    This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.

  1. Gaussian process regression for sensor networks under localization uncertainty

    USGS Publications Warehouse

    Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming

    2013-01-01

    In this paper, we formulate Gaussian process regression with observations under localization uncertainty due to resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. The analytically intractable posterior predictive statistics are approximated by two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly account for the localization uncertainty. Finally, we have applied the proposed approaches to experimentally collected real data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof-of-concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
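
    A minimal sketch of the Monte Carlo variant mentioned above, under simplifying assumptions (an RBF-kernel GP and isotropic Gaussian localization noise): sensor positions are repeatedly perturbed, the GP posterior predictive is computed for each draw, and the draws are combined with the law of total variance. The names and defaults are illustrative, not the authors' formulation.

    import numpy as np

    def rbf(a, b, ell=1.0, sig=1.0):
        """Squared-exponential kernel between two sets of points."""
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return sig ** 2 * np.exp(-0.5 * d2 / ell ** 2)

    def gp_predict(x_train, y, x_test, noise=0.1):
        """Standard GP posterior predictive mean and variance."""
        K = rbf(x_train, x_train) + noise ** 2 * np.eye(len(x_train))
        Ks = rbf(x_test, x_train)
        alpha = np.linalg.solve(K, y)
        mean = Ks @ alpha
        var = np.diag(rbf(x_test, x_test) - Ks @ np.linalg.solve(K, Ks.T))
        return mean, var

    def mc_predict(nominal_pos, pos_std, y, x_test, n_samples=200, rng=None):
        """Average the GP predictive over sampled sensor positions (localization noise)."""
        rng = np.random.default_rng(rng)
        means, vars_ = [], []
        for _ in range(n_samples):
            perturbed = nominal_pos + rng.normal(0.0, pos_std, size=nominal_pos.shape)
            m, v = gp_predict(perturbed, y, x_test)
            means.append(m); vars_.append(v)
        means = np.array(means)
        # law of total variance: E[var] + var[mean]
        return means.mean(axis=0), np.array(vars_).mean(axis=0) + means.var(axis=0)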

  2. An Amplitude-Based Estimation Method for International Space Station (ISS) Leak Detection and Localization Using Acoustic Sensor Networks

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Madaras, Eric I.

    2009-01-01

    The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.

  3. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade, methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  4. A data centred method to estimate and map how the local distribution of daily precipitation is changing

    NASA Astrophysics Data System (ADS)

    Chapman, Sandra; Stainforth, David; Watkins, Nick

    2014-05-01

    Estimates of how our climate is changing are needed locally in order to inform adaptation planning decisions. This requires quantifying the geographical patterns in changes at specific quantiles in distributions of variables such as daily temperature or precipitation. Here we focus on these local changes and on a method to transform daily observations of precipitation into patterns of local climate change. We develop a method[1] for analysing local climatic timeseries to assess which quantiles of the local climatic distribution show the greatest and most robust changes, to specifically address the challenges presented by daily precipitation data. We extract from the data quantities that characterize the changes in time of the likelihood of daily precipitation above a threshold and of the relative amount of precipitation on those days. Our method is a simple mathematical deconstruction of how the difference between two observations from two different time periods can be assigned to the combination of natural statistical variability and/or the consequences of secular climate change. This deconstruction facilitates an assessment of how fast different quantiles of precipitation distributions are changing. This involves determining which quantiles and geographical locations show the greatest change, but also those at which any change is highly uncertain. We demonstrate this approach using E-OBS gridded data[2] timeseries of local daily precipitation from specific locations across Europe over the last 60 years. We treat geographical location and precipitation as independent variables and thus obtain as outputs the pattern of change at a given threshold of precipitation and with geographical location. This is model-independent, thus providing data of direct value in model calibration and assessment. Our results show regionally consistent patterns of systematic increase in precipitation on the wettest days, and of drying across all days, which is of potential value in
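
    A simplified illustration of the kind of quantity involved (not the authors' full deconstruction into natural variability and secular change): for one location, compare two periods of daily precipitation in terms of the frequency of days above a wet-day threshold and the mean amount on those days. The threshold value and function name are assumptions.

    import numpy as np

    def wet_day_change(precip_early, precip_late, threshold_mm=1.0):
        """Compare two periods of daily precipitation at one location."""
        def summarize(p):
            p = np.asarray(p, float)
            wet = p[p >= threshold_mm]
            return len(wet) / len(p), (wet.mean() if len(wet) else 0.0)
        f0, m0 = summarize(precip_early)
        f1, m1 = summarize(precip_late)
        return {"wet_day_freq_change": f1 - f0, "wet_day_mean_change_mm": m1 - m0}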

  5. Methods of Farm Guidance

    ERIC Educational Resources Information Center

    Vir, Dharm

    1971-01-01

    A survey of teaching methods for farm guidance workers in India, outlining some approaches developed by and used in other nations. Discusses mass educational methods, group educational methods, and the local leadership method. (JB)

  6. Communication: Exact analytical derivatives for the domain-based local pair natural orbital MP2 method (DLPNO-MP2)

    NASA Astrophysics Data System (ADS)

    Pinski, Peter; Neese, Frank

    2018-01-01

    Electron correlation methods based on pair natural orbitals (PNOs) have gained an increasing degree of interest in recent years, as they permit energy calculations to be performed on systems containing up to many hundred atoms, while maintaining chemical accuracy for reaction energies. We present an approach for taking exact analytical first derivatives of the energy contributions in the simplest method of the family of Domain-based Local Pair Natural Orbital (DLPNO) methods, closed-shell DLPNO-MP2. The Lagrangian function contains constraints to account for the relaxation of PNOs. RI-MP2 reference geometries are reproduced accurately, as exemplified for four systems with a substantial degree of nonbonding interactions. By the example of electric field gradients, we demonstrate that omitting PNO-specific constraints can lead to dramatic errors for orbital-relaxed properties.

  7. Application of meteorology-based methods to determine local and external contributions to particulate matter pollution: A case study in Venice (Italy)

    NASA Astrophysics Data System (ADS)

    Squizzato, Stefania; Masiol, Mauro

    2015-10-01

    The air quality is influenced by the potential effects of meteorology at meso- and synoptic scales. While local weather and mixing layer dynamics mainly drive the dispersion of sources at small scales, long-range transports affect the movements of air masses over regional, transboundary and even continental scales. Long-range transport may advect polluted air masses from hot-spots by increasing the levels of pollution at nearby or remote locations, or may further raise air pollution levels where external air masses originate from other hot-spots. Therefore, the knowledge of ground-wind circulation and potential long-range transports is fundamental not only to evaluate how local or external sources may affect the air quality at a receptor site but also to quantify it. This review is focussed on establishing the relationships among PM2.5 sources, meteorological conditions and air mass origin in the Po Valley, which is one of the most polluted areas in Europe. We have chosen the results from a recent study carried out in Venice (Eastern Po Valley) and have analysed them using different statistical approaches to understand the influence of external and local contributions of PM2.5 sources. External contributions were evaluated by applying Trajectory Statistical Methods (TSMs) based on back-trajectory analysis, including (i) back-trajectory cluster analysis, (ii) the potential source contribution function (PSCF) and (iii) the concentration weighted trajectory (CWT). Furthermore, the relationships between the source contributions and ground-wind circulation patterns were investigated by using (iv) cluster analysis on wind data and (v) the conditional probability function (CPF). Finally, local source contributions have been estimated by applying the Lenschow approach. In summary, the integrated approach of different techniques has successfully identified both local and external sources of particulate matter pollution in a European hot-spot affected by the worst air quality.
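
    As an illustration of one of the trajectory statistical methods named above, the sketch below computes a basic potential source contribution function on a longitude-latitude grid: n_ij counts all back-trajectory endpoints falling in a cell, m_ij counts the endpoints of trajectories associated with high receptor concentrations, and PSCF = m_ij / n_ij. The weighting usually applied to cells with few endpoints is omitted, and the data layout is an assumption.

    import numpy as np

    def pscf(endpoints, polluted, lon_edges, lat_edges):
        """endpoints: list of (n_i, 2) arrays of (lon, lat) per back-trajectory;
        polluted: boolean per trajectory (receptor concentration above a chosen percentile)."""
        n_ij = np.zeros((len(lon_edges) - 1, len(lat_edges) - 1))
        m_ij = np.zeros_like(n_ij)
        for pts, hot in zip(endpoints, polluted):
            h, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=[lon_edges, lat_edges])
            n_ij += h
            if hot:
                m_ij += h
        with np.errstate(invalid="ignore", divide="ignore"):
            return np.where(n_ij > 0, m_ij / n_ij, np.nan)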

  8. On the effect of local barrier height in scanning tunneling microscopy: Measurement methods and control implications

    NASA Astrophysics Data System (ADS)

    Tajaddodianfar, Farid; Moheimani, S. O. Reza; Owen, James; Randall, John N.

    2018-01-01

    A common cause of tip-sample crashes in a Scanning Tunneling Microscope (STM) operating in constant current mode is the poor performance of its feedback control system. We show that there is a direct link between the Local Barrier Height (LBH) and the robustness of the feedback control loop. A method known as the "gap modulation method" was proposed in the early STM studies for estimating the LBH. We show that the obtained measurements are affected by the controller parameters and propose an alternative method which we prove to produce LBH measurements independent of the controller dynamics. We use the obtained LBH estimate to continuously update the gains of an STM proportional-integral (PI) controller and show that, with the PI gains tuned in this way, the closed-loop system tolerates larger variations of LBH without experiencing instability. We report experimental results, conducted on two STM scanners, to establish the efficiency of the proposed PI tuning approach. Improved feedback stability is believed to help in avoiding the tip/sample crash in STMs.
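
    A hedged sketch of the gain-scheduling idea: in constant-current STM the tunneling current depends exponentially on the gap with a decay constant proportional to the square root of the barrier height, so the loop gain rises with the LBH; scaling the PI gains by the inverse square root of the current LBH estimate keeps the loop gain roughly constant. The reference barrier height and function name are assumptions, and this is not the controller described in the paper.

    import math

    def scheduled_pi_gains(phi_est_eV, kp0, ki0, phi_ref_eV=4.0):
        """Scale nominal PI gains so the loop gain stays roughly constant as the
        local barrier height (LBH) estimate changes: the tunneling current goes as
        exp(-2*kappa*sqrt(phi)*z), so the plant gain grows with sqrt(phi) and the
        controller gains are scaled by sqrt(phi_ref/phi) to compensate."""
        scale = math.sqrt(phi_ref_eV / max(phi_est_eV, 1e-6))
        return kp0 * scale, ki0 * scale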

  9. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure

    PubMed Central

    Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng

    2014-01-01

    Quantifying three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, such as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach is capable of dramatically improving the spatial resolution and of revealing far finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrated that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and understanding the transformation of the minerals during coal processing. The method is generic and can be applied for three-dimensional compositional characterization of other materials. PMID:24763649

  10. Local or global? How to choose the training set for principal component compression of hyperspectral satellite measurements: a hybrid approach

    NASA Astrophysics Data System (ADS)

    Hultberg, Tim; August, Thomas; Lenti, Flavia

    2017-09-01

    Principal Component (PC) compression is the method of choice to achieve band-width reduction for dissemination of hyperspectral (HS) satellite measurements and will become increasingly important with the advent of future HS missions (such as IASI-NG and MTG-IRS) with ever higher data-rates. It is a linear transformation defined by a truncated set of the leading eigenvectors of the covariance of the measurements as well as the mean of the measurements. We discuss the strategy for generation of the eigenvectors, based on the operational experience made with IASI. To compute the covariance and mean, a so-called training set of measurements is needed, which ideally should include all relevant spectral features. For the dissemination of IASI PC scores a global static training set consisting of a large sample of measured spectra covering all seasons and all regions is used. This training set was updated once after the start of the dissemination of IASI PC scores in April 2010 by adding spectra from the 2010 Russian wildfires, in which spectral features not captured by the previous training set were identified. An alternative approach, which has sometimes been proposed, is to compute the eigenvectors on the fly from a local training set, for example consisting of all measurements in the current processing granule. It might naively be thought that this local approach would improve the compression rate by reducing the number of PC scores needed to represent the measurements within each granule. This false belief is apparently confirmed if the reconstruction score (root mean square of the reconstruction residuals) is used as the sole criterion for choosing the number of PC scores to retain, which would overlook the fact that the decrease in reconstruction score (for the same number of PCs) is achieved only by the retention of an increased amount of random noise. We demonstrate that the local eigenvectors retain a higher amount of noise and a lower amount of atmospheric
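
    The sketch below illustrates the basic machinery discussed above (training-set mean and leading eigenvectors, PC scores, and the reconstruction score as an RMS residual). It is a generic PCA compression example, not the operational IASI processing, and the function names are assumptions.

    import numpy as np

    def train_pcs(training_spectra, n_pcs):
        """Mean and leading eigenvectors of the training-set covariance."""
        mean = training_spectra.mean(axis=0)
        cov = np.cov(training_spectra - mean, rowvar=False)
        w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
        return mean, v[:, ::-1][:, :n_pcs]      # leading eigenvectors as columns

    def compress(spectra, mean, eig):
        """Project measurements onto the truncated eigenvector basis (PC scores)."""
        return (spectra - mean) @ eig

    def reconstruction_score(spectra, mean, eig):
        """RMS reconstruction residual per spectrum."""
        scores = compress(spectra, mean, eig)
        recon = scores @ eig.T + mean
        return np.sqrt(np.mean((spectra - recon) ** 2, axis=1))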

  11. Geometric approach to segmentation and protein localization in cell culture assays.

    PubMed

    Raman, S; Maxwell, C A; Barcellos-Hoff, M H; Parvin, B

    2007-01-01

    Cell-based fluorescence imaging assays are heterogeneous and require the collection of a large number of images for detailed quantitative analysis. Complexities arise as a result of variation in spatial nonuniformity, shape, overlapping compartments and scale (size). A new technique and methodology have been developed and tested for delineating subcellular morphology and partitioning overlapping compartments at multiple scales. This system is packaged as an integrated software platform for quantifying images that are obtained through fluorescence microscopy. The proposed methods are model based, leveraging geometric shape properties of subcellular compartments and the corresponding protein localization. From the morphological perspective, a convexity constraint is imposed to delineate and partition nuclear compartments. From the protein localization perspective, radial symmetry is imposed to localize punctate protein events at submicron resolution. The convexity constraint is imposed against boundary information, which is extracted through a combination of zero-crossing and gradient operators. If the convexity constraint fails for the boundary, then positive curvature maxima are localized along the contour and the entire blob is partitioned into disjoint convex objects representing individual nuclear compartments, by enforcing geometric constraints. Nuclear compartments provide the context for protein localization, which may be diffuse or punctate. Punctate signals are localized through iterative voting and radial symmetries for improved reliability and robustness. The technique has been tested against 196 images that were generated to study centrosome abnormalities. The corresponding computed representations are compared against manual counts for validation.

  12. MEG source localization of spatially extended generators of epileptic activity: comparing entropic and hierarchical bayesian approaches.

    PubMed

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm(2) to 30 cm(2), whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered.

  13. MEG Source Localization of Spatially Extended Generators of Epileptic Activity: Comparing Entropic and Hierarchical Bayesian Approaches

    PubMed Central

    Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe

    2013-01-01

    Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm2 to 30 cm2, whatever were the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485

  14. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  15. Adaptation Method for Overall and Local Performances of Gas Turbine Engine Model

    NASA Astrophysics Data System (ADS)

    Kim, Sangjo; Kim, Kuisoon; Son, Changmin

    2018-04-01

    An adaptation method was proposed to improve the modeling accuracy of the overall and local performances of a gas turbine engine. The adaptation method was divided into two steps. First, the overall performance parameters such as engine thrust, thermal efficiency, and pressure ratio were adapted by calibrating compressor maps, and second, the local performance parameters such as the temperature at component intersections and shaft speed were adjusted by additional adaptation factors. An optimization technique was used to find the correlation equation of adaptation factors for the compressor performance maps. The multi-island genetic algorithm (MIGA) was employed in the present optimization. The correlations of local adaptation factors were generated based on the difference between the first adapted engine model and the performance test data. The proposed adaptation method was applied to a low-bypass-ratio turbofan engine of 12,000 lb thrust. The gas turbine engine model was generated and validated based on the performance test data in the sea-level static condition. In a flight condition at 20,000 ft and Mach 0.9, the adapted engine model showed improved prediction of engine thrust (an overall performance parameter), reducing the difference from 14.5 to 3.3%. Moreover, there was further improvement in the comparison of low-pressure turbine exit temperature (a local performance parameter), as the difference was reduced from 3.2 to 0.4%.

  16. Effects of local anaesthesia or local anaesthesia plus a non-steroidal anti-inflammatory drug on the acute cortisol response of calves to five different methods of castration.

    PubMed

    Stafford, K J; Mellor, D J; Todd, S E; Bruce, R A; Ward, R N

    2002-08-01

    The cortisol responses of calves to different methods of castration (ring, band, surgical, clamp), with or without local anaesthetic, or with local anaesthetic plus a non-steroidal anti-inflammatory drug, were recorded. All methods of castration caused a significant cortisol response and, by inference, pain and distress. Band castration caused a greater cortisol response than ring castration, but the responses were eliminated by local anaesthetic. The cortisol response to surgical castration, by traction on the spermatic cords or by cutting across them with an emasculator, was not diminished by local anaesthetic, but when ketoprofen was given with local anaesthetic the cortisol response was eliminated. Local anaesthetic did reduce the behavioural response to cutting the scrotum and handling the testes. Clamp castration caused the smallest cortisol response, which was reduced or eliminated by local anaesthetic or local anaesthetic plus ketoprofen respectively, but this method of castration was not always successful.

  17. Combining global and local approximations

    NASA Technical Reports Server (NTRS)

    Haftka, Raphael T.

    1991-01-01

    A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
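
    A minimal sketch of the idea, under assumed names: the conventional approach multiplies the crude model by a constant scaling factor beta = f_refined(x0)/f_crude(x0), while the GLA surrogate lets beta vary linearly with the design variables using the gradients of both models at x0 (quotient rule: grad beta = (grad f_refined - beta * grad f_crude)/f_crude). This is an illustrative reconstruction, not the paper's beam example.

    import numpy as np

    def gla_surrogate(f_crude, grad_crude, f_refined_x0, grad_refined_x0, x0):
        """Global-local approximation: scale the crude model by a linearly varying
        factor beta(x) = beta0 + grad_beta . (x - x0), built from one refined
        evaluation (value + gradient) at the design point x0."""
        x0 = np.asarray(x0, dtype=float)
        fc0 = f_crude(x0)
        beta0 = f_refined_x0 / fc0
        grad_beta = (np.asarray(grad_refined_x0) - beta0 * np.asarray(grad_crude(x0))) / fc0

        def surrogate(x):
            x = np.asarray(x, dtype=float)
            return (beta0 + grad_beta @ (x - x0)) * f_crude(x)

        return surrogate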

  18. Local Minima Free Parameterized Appearance Models

    PubMed Central

    Nguyen, Minh Hoai; De la Torre, Fernando

    2010-01-01

    Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750

  19. A mixed-methods approach to systematic reviews.

    PubMed

    Pearson, Alan; White, Heath; Bath-Hextall, Fiona; Salmond, Susan; Apostolo, Joao; Kirkpatrick, Pamela

    2015-09-01

    There are an increasing number of published single-method systematic reviews that focus on different types of evidence related to a particular topic. As policy makers and practitioners seek clear directions for decision-making from systematic reviews, it is likely that it will be increasingly difficult for them to identify 'what to do' if they are required to find and understand a plethora of syntheses related to a particular topic. Mixed-methods systematic reviews are designed to address this issue and have the potential to produce systematic reviews of direct relevance to policy makers and practitioners. On the basis of the recommendations of the Joanna Briggs Institute International Mixed Methods Reviews Methodology Group in 2012, the Institute adopted a segregated approach to mixed-methods synthesis as described by Sandelowski et al., which consists of separate syntheses of each component method of the review. The Joanna Briggs Institute's mixed-methods synthesis of the findings of the separate syntheses uses a Bayesian approach to translate the findings of the initial quantitative synthesis into qualitative themes and pool these with the findings of the initial qualitative synthesis.

  20. Hierarchical Leak Detection and Localization Method in Natural Gas Pipeline Monitoring Sensor Networks

    PubMed Central

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point’s position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate. PMID:22368464
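
    As an illustration of the final localization stage, the sketch below estimates a leak position on a one-dimensional pipeline from pairwise time differences of arrival and combines the pairwise estimates as a weighted average. It is a generic TDOA construction under the assumption that the leak lies between each sensor pair, not the paper's exact algorithm, and the weighting scheme is an assumption.

    import numpy as np

    def tdoa_leak_position(sensor_pos, arrival_times, wave_speed, weights=None):
        """Estimate a leak position on a 1-D pipeline from pairwise time differences
        of arrival, combined as a weighted average over sensor pairs."""
        sensor_pos = np.asarray(sensor_pos, float)
        arrival_times = np.asarray(arrival_times, float)
        n = len(sensor_pos)
        if weights is None:
            weights = np.ones(n)
        estimates, w = [], []
        for i in range(n):
            for j in range(i + 1, n):
                dt = arrival_times[i] - arrival_times[j]
                # leak assumed between sensors i and j:
                # (x - xi)/v - (xj - x)/v = dt  =>  x = (xi + xj + v*dt)/2
                estimates.append(0.5 * (sensor_pos[i] + sensor_pos[j] + wave_speed * dt))
                w.append(weights[i] * weights[j])
        return np.average(estimates, weights=w)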

  1. Hierarchical leak detection and localization method in natural gas pipeline monitoring sensor networks.

    PubMed

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point's position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate.

  2. Method for local temperature measurement in a nanoreactor for in situ high-resolution electron microscopy.

    PubMed

    Vendelbo, S B; Kooyman, P J; Creemer, J F; Morana, B; Mele, L; Dona, P; Nelissen, B J; Helveg, S

    2013-10-01

    In situ high-resolution transmission electron microscopy (TEM) of solids under reactive gas conditions can be facilitated by microelectromechanical system devices called nanoreactors. These nanoreactors are windowed cells containing nanoliter volumes of gas at ambient pressures and elevated temperatures. However, due to the high spatial confinement of the reaction environment, traditional methods for measuring process parameters, such as the local temperature, are difficult to apply. To address this issue, we devise an electron energy loss spectroscopy (EELS) method that probes the local temperature of the reaction volume under inspection by the electron beam. The local gas density, as measured using quantitative EELS, is combined with the inherent relation between gas density and temperature, as described by the ideal gas law, to obtain the local temperature. Using this method we determined the temperature gradient in a nanoreactor in situ, while the average, global temperature was monitored by a traditional measurement of the electrical resistivity of the heater. The local gas temperatures had a maximum of 56 °C deviation from the global heater values under the applied conditions. The local temperatures, obtained with the proposed method, are in good agreement with predictions from an analytical model. Copyright © 2013 Elsevier B.V. All rights reserved.
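
    The density-to-temperature mapping described above follows from the ideal gas law at (assumed) uniform pressure in the nanoreactor: n = P/(k_B T), so T_local = n_ref * T_ref / n_local. A one-line sketch with illustrative numbers only:

    def local_temperature(n_local, n_ref, T_ref_K):
        """Ideal-gas relation at uniform pressure: P = n*k_B*T is constant across the
        nanoreactor, so a lower measured number density implies a higher local temperature."""
        return n_ref * T_ref_K / n_local

    # e.g. a 10% lower density than at a 300 K reference spot implies roughly 333 K locally
    print(local_temperature(n_local=0.9, n_ref=1.0, T_ref_K=300.0))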

  3. RENT+: an improved method for inferring local genealogical trees from haplotypes with recombination.

    PubMed

    Mirzaei, Sajad; Wu, Yufeng

    2017-04-01

    : Haplotypes from one or multiple related populations share a common genealogical history. If this shared genealogy can be inferred from haplotypes, it can be very useful for many population genetics problems. However, with the presence of recombination, the genealogical history of haplotypes is complex and cannot be represented by a single genealogical tree. Therefore, inference of genealogical history with recombination is much more challenging than the case of no recombination. : In this paper, we present a new approach called RENT+  for the inference of local genealogical trees from haplotypes with the presence of recombination. RENT+  builds on a previous genealogy inference approach called RENT , which infers a set of related genealogical trees at different genomic positions. RENT+  represents a significant improvement over RENT in the sense that it is more effective in extracting information contained in the haplotype data about the underlying genealogy than RENT . The key components of RENT+  are several greatly enhanced genealogy inference rules. Through simulation, we show that RENT+  is more efficient and accurate than several existing genealogy inference methods. As an application, we apply RENT+  in the inference of population demographic history from haplotypes, which outperforms several existing methods. : RENT+  is implemented in Java, and is freely available for download from: https://github.com/SajadMirzaei/RentPlus . : sajad@engr.uconn.edu or ywu@engr.uconn.edu. : Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  4. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse

  5. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
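
    A toy illustration of the sparse map abstraction and two of the elementary operations named above (chaining and intersection), with a sparse map represented simply as a dictionary from an index to a set of connected indices. The representation and the example index sets are assumptions for illustration, not the authors' code library.

    from collections import defaultdict

    # A sparse map is represented here as {source index: set(target indices)}.

    def chain(map_ab, map_bc):
        """Compose two sparse maps: a -> union of all c reachable through any b."""
        out = defaultdict(set)
        for a, bs in map_ab.items():
            for b in bs:
                out[a] |= map_bc.get(b, set())
        return dict(out)

    def intersect(map1, map2):
        """Keep only targets present in both maps for each shared source index."""
        return {a: map1[a] & map2[a] for a in map1.keys() & map2.keys()}

    # e.g. atoms -> basis shells, basis shells -> auxiliary fitting shells
    atom_to_shell = {0: {0, 1}, 1: {2}}
    shell_to_aux = {0: {10}, 1: {10, 11}, 2: {12}}
    print(chain(atom_to_shell, shell_to_aux))   # {0: {10, 11}, 1: {12}}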

  6. Remotely actuated localized pressure and heat apparatus and method of use

    NASA Technical Reports Server (NTRS)

    Merret, John B. (Inventor); Taylor, DeVor R. (Inventor); Wheeler, Mark M. (Inventor); Gale, Dan R. (Inventor)

    2004-01-01

    Apparatus and method for the use of a remotely actuated localized pressure and heat apparatus for the consolidation and curing of fiber elements in structures. The apparatus includes members for clamping the desired portion of the fiber elements to be joined, pressure members and/or heat members. The method is directed to the application and use of the apparatus.

  7. Atomistic cluster alignment method for local order mining in liquids and glasses

    NASA Astrophysics Data System (ADS)

    Fang, X. W.; Wang, C. Z.; Yao, Y. X.; Ding, Z. J.; Ho, K. M.

    2010-11-01

    An atomistic cluster alignment method is developed to identify and characterize the local atomic structural order in liquids and glasses. With the “order mining” idea for structurally disordered systems, the method can detect the presence of any type of local order in the system and can quantify the structural similarity between a given set of templates and the aligned clusters in a systematic and unbiased manner. Moreover, population analysis can also be carried out for various types of clusters in the system. The advantages of the method in comparison with other previously developed analysis methods are illustrated by performing the structural analysis for four prototype systems (i.e., pure Al, pure Zr, Zr35Cu65 , and Zr36Ni64 ). The results show that the cluster alignment method can identify various types of short-range orders (SROs) in these systems correctly while some of these SROs are difficult to capture by most of the currently available analysis methods (e.g., Voronoi tessellation method). Such a full three-dimensional atomistic analysis method is generic and can be applied to describe the magnitude and nature of noncrystalline ordering in many disordered systems.

  8. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global; it misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address the above deficiencies, different global and local histogram methods have been proposed in recent research. Different ways of extracting local histograms to provide spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and the fuzzy linking method to reduce the size of the histogram are found in recent papers. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three different color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.
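
    For reference, the sketch below shows a plain global RGB histogram and two of the distance measures named in the survey: histogram intersection (a similarity on L1-normalized histograms) and the quadratic-form distance with a cross-bin color-similarity matrix A. The bin count and function names are illustrative assumptions.

    import numpy as np

    def global_color_histogram(img, bins=8):
        """Simple global RGB histogram, L1-normalized (spatial layout is discarded)."""
        h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins, bins, bins),
                              range=[(0, 256)] * 3)
        h = h.ravel()
        return h / h.sum()

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1] for L1-normalized histograms."""
        return float(np.minimum(h1, h2).sum())

    def quadratic_form_distance(h1, h2, A):
        """d(h1, h2) = sqrt((h1-h2)^T A (h1-h2)); A_ij encodes cross-bin color similarity."""
        d = h1 - h2
        return float(np.sqrt(d @ A @ d))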

  9. Determining localized garment insulation values from manikin studies: computational method and results.

    PubMed

    Nelson, D A; Curlee, J S; Curran, A R; Ziriax, J M; Mason, P A

    2005-12-01

    The localized thermal insulation value expresses a garment's thermal resistance over the region which is covered by the garment, rather than over the entire surface of a subject or manikin. The determination of localized garment insulation values is critical to the development of high-resolution models of sensible heat exchange. A method is presented for determining and validating localized garment insulation values, based on whole-body insulation values (clo units) and using computer-aided design and thermal analysis software. Localized insulation values are presented for a catalog consisting of 106 garments and verified using computer-generated models. The values presented are suitable for use on volume element-based or surface element-based models of heat transfer involving clothed subjects.

  10. A Green's function method for local and non-local parallel transport in general magnetic fields

    NASA Astrophysics Data System (ADS)

    Del-Castillo-Negrete, Diego; Chacón, Luis

    2009-11-01

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity χ_∥ and the perpendicular conductivity χ_⊥ (χ_∥/χ_⊥ may exceed 10^10 in fusion plasmas); (ii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates; and (iii) nonlocal parallel transport in the limit of small collisionality. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation, applicable to integrable and chaotic magnetic fields. The numerical implementation employs a volume-preserving field-line integrator [Finn and Chacón, Phys. Plasmas 12 (2005)] for an accurate representation of the magnetic field lines regardless of the level of stochasticity. The general formalism and its algorithmic properties are discussed along with illustrative analytical and numerical examples. Problems of particular interest include: the departure from the Rechester-Rosenbluth diffusive scaling in the weak magnetic chaos regime, the interplay between non-locality and chaos, and the robustness of transport barriers in reverse shear configurations.
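
    For orientation, the local (collisional) limit of the anisotropic transport equation referred to above is commonly written in the form below. This is a standard textbook statement, with b the unit vector along the magnetic field, and is not necessarily the exact equation solved in the paper.

```latex
% Standard local anisotropic heat-transport equation (assumed form):
% chi_par and chi_perp are the parallel and perpendicular conductivities,
% b is the unit vector along the magnetic field.
\frac{\partial T}{\partial t}
  = \nabla \cdot \left[ \chi_\parallel \, \mathbf{b}\,(\mathbf{b}\cdot\nabla T)
  + \chi_\perp \left( \nabla T - \mathbf{b}\,(\mathbf{b}\cdot\nabla T) \right) \right]
```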

  11. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

    Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water-manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
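
    A minimal sketch of the global-then-local idea described above is given below: a population-based global search supplies a near-optimal seed, which a gradient-based local search then refines. SciPy's differential evolution stands in for the genetic algorithm stage, and a toy multimodal function stands in for the water-delivery cost model; neither is taken from the study itself.

```python
# Hybrid global + local optimization sketch (illustrative, not the study's MINLP).
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    """Toy multimodal cost (Rastrigin); stands in for the management objective."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2

# Stage 1: global search finds a near-optimal solution.
global_result = differential_evolution(objective, bounds, seed=0, tol=1e-2)

# Stage 2: gradient-based search refines the near-optimum.
local_result = minimize(objective, global_result.x, method="L-BFGS-B",
                        bounds=bounds)

print("global stage:", global_result.x, objective(global_result.x))
print("refined:     ", local_result.x, local_result.fun)
```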

  12. An Unsupervised kNN Method to Systematically Detect Changes in Protein Localization in High-Throughput Microscopy Images.

    PubMed

    Lu, Alex Xijie; Moses, Alan M

    2016-01-01

    Despite the importance of characterizing genes that exhibit subcellular localization changes between conditions in proteome-wide imaging experiments, many recent studies still rely upon manual evaluation to assess the results of high-throughput imaging experiments. We describe and demonstrate an unsupervised k-nearest neighbours method for the detection of localization changes. Compared to previous classification-based supervised change detection methods, our method is much simpler and faster, and operates directly on the feature space to overcome limitations in needing to manually curate training sets that may not generalize well between screens. In addition, the output of our method is flexible in its utility, generating both a quantitatively ranked list of localization changes that permit user-defined cut-offs, and a vector for each gene describing feature-wise direction and magnitude of localization changes. We demonstrate that our method is effective at the detection of localization changes using the Δrpd3 perturbation in Saccharomyces cerevisiae, where we capture 71.4% of previously known changes within the top 10% of ranked genes, and find at least four new localization changes within the top 1% of ranked genes. The results of our analysis indicate that simple unsupervised methods may be able to identify localization changes in images without laborious manual image labelling steps.
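
    The sketch below illustrates one way an unsupervised kNN change score of this kind can be computed: each gene's cross-condition feature distance is normalized by the spread of its k nearest neighbours in the reference condition, and genes are ranked by that ratio. This is a simplified reading of the idea rather than the authors' exact formulation, and the feature matrices are synthetic placeholders.

```python
# Unsupervised kNN change-score sketch (simplified, not the paper's exact method).
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_change_scores(features_wt, features_mut, k=10):
    """features_wt, features_mut: (n_genes, n_features), same gene order."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(features_wt)
    dists, _ = nn.kneighbors(features_wt)            # column 0 is the gene itself
    local_scale = dists[:, 1:].mean(axis=1) + 1e-12  # mean kNN distance
    cross = np.linalg.norm(features_mut - features_wt, axis=1)
    return cross / local_scale                       # higher = stronger change

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    wt = rng.normal(size=(500, 20))
    mut = wt.copy()
    mut[:5] += 3.0                                   # simulate 5 changed genes
    ranked = np.argsort(-knn_change_scores(wt, mut))
    print("top-ranked genes:", ranked[:10])
```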

  13. Using Online Dialogues to Connect Local Leaders and Climate Experts: Methods, Feedback and Lessons Learned from the Resilience Dialogues

    NASA Astrophysics Data System (ADS)

    Goodwin, M.; Pandya, R.; Weaver, C. P.; Zerbonne, S.; Bennett, N.; Spangler, B.

    2017-12-01

    Inclusive, multi-stakeholder dialogue, participatory planning and actionable science are necessary for just and effective climate resilience outcomes. How can we support that in practice? The Resilience Dialogues launched a public Beta in 2016-2017 to allow scientists and resilience practitioners to engage with local leaders from 10 communities around the US through a series of facilitated, online dialogues. We developed two, one-week dialogues for each community: one to consider ways to respond to observed and anticipated climate impacts through a resilience lens, and one to identify next steps and resources to advance key priorities. We divided the communities into three cohorts and refined the structure and facilitation strategy for these dialogues from one to the next based on participant feedback. This adaptive method helped participants engage in the dialogues more effectively and develop useful results. We distributed a survey to all participants following each cohort to capture feedback on the use and utility of the dialogues. While there was room for improvement in the program's technical interface, survey participants valued the dialogues and the opportunity to engage as equals. Local leaders said the dialogues helped identify new local pathways to approach resilience priorities. They felt they benefited from focused conversation and personalized introductions to best-matched resources. Practitioners learned how local leaders seek to apply climate science, and how to effectively communicate their expertise to community leaders in support of local planning efforts. We learned there is demand for specialized dialogues on issues like communication, financing and extreme weather. Overall, the desire of participants to continue to engage through this program, and others to enter, indicates that facilitated, open conversations between experts and local leaders can break down communication and access barriers between climate services providers and end

  14. Graph-Based Cooperative Localization Using Symmetric Measurement Equations.

    PubMed

    Gulati, Dhiraj; Zhang, Feihu; Clarke, Daniel; Knoll, Alois

    2017-06-17

    Precise localization is a key requirement for the success of highly assisted or autonomous vehicles. The diminishing cost of hardware has resulted in a proliferation of the number of sensors in the environment. Cooperative localization (CL) presents itself as a feasible and effective solution for localizing the ego-vehicle and its neighboring vehicles. However, one of the major challenges to fully realize the effective use of infrastructure sensors for jointly estimating the state of a vehicle in cooperative vehicle-infrastructure localization is an effective data association. In this paper, we propose a method which implements symmetric measurement equations within factor graphs in order to overcome the data association challenge with a reduced bandwidth overhead. Simulated results demonstrate the benefits of the proposed approach in comparison with our previously proposed approach of topology factors.

  15. Graph-Based Cooperative Localization Using Symmetric Measurement Equations

    PubMed Central

    Gulati, Dhiraj; Zhang, Feihu; Clarke, Daniel; Knoll, Alois

    2017-01-01

    Precise localization is a key requirement for the success of highly assisted or autonomous vehicles. The diminishing cost of hardware has resulted in a proliferation of the number of sensors in the environment. Cooperative localization (CL) presents itself as a feasible and effective solution for localizing the ego-vehicle and its neighboring vehicles. However, one of the major challenges to fully realize the effective use of infrastructure sensors for jointly estimating the state of a vehicle in cooperative vehicle-infrastructure localization is an effective data association. In this paper, we propose a method which implements symmetric measurement equations within factor graphs in order to overcome the data association challenge with a reduced bandwidth overhead. Simulated results demonstrate the benefits of the proposed approach in comparison with our previously proposed approach of topology factors. PMID:28629141

  16. Kinematic Localization for Global Navigation Satellite Systems: A Kalman Filtering Approach

    NASA Astrophysics Data System (ADS)

    Tabatabaee, Mohammad Hadi

    Use of Global Navigation Satellite Systems (GNSS) has expanded significantly in the past decade, especially with advances in embedded systems and the emergence of smartphones and the Internet of Things (IoT). The growing demand has stimulated research on the development of GNSS techniques and programming tools. The focus of much of this research effort has been on high-level algorithms and augmentations. This dissertation focuses on the low-level methods at the heart of GNSS systems and proposes new methods for GNSS positioning problems based on concepts of distance geometry and the use of Kalman filters. The methods presented in this dissertation provide algebraic solutions to problems that have predominantly been solved using iterative methods. The proposed methods are highly efficient, provide accurate estimates, and exhibit a degree of robustness in the presence of unfavorable satellite geometry. The algorithm operates in two stages: an estimation of the receiver clock bias and removal of the bias from the pseudorange observables, followed by the localization of the GNSS receiver. The use of a Kalman filter in between the two stages allows for an improvement of the clock bias estimate with a noticeable impact on the position estimates. The receiver localization step has also been formulated in a linear manner, allowing for the direct application of a Kalman filter without any need for linearization. The methodology has also been extended to double differential observables for high accuracy pseudorange and carrier phase position estimates.
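
    The sketch below illustrates the role of a Kalman filter between the two stages: per-epoch clock-bias estimates are smoothed with a two-state (bias, drift) filter before being removed from the pseudoranges. The clock model and noise values are illustrative assumptions only, not the dissertation's actual settings.

```python
# Two-state (bias, drift) Kalman filter sketch for smoothing clock-bias estimates.
import numpy as np

def clock_bias_filter(raw_bias, dt=1.0, q_bias=1e-2, q_drift=1e-4, r=1.0):
    """raw_bias: per-epoch algebraic clock-bias estimates (metres)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])            # state transition
    H = np.array([[1.0, 0.0]])                       # only the bias is observed
    Q = np.diag([q_bias, q_drift])
    x = np.array([raw_bias[0], 0.0])
    P = np.eye(2) * 10.0
    smoothed = []
    for z in raw_bias:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - H @ x
        S = H @ P @ H.T + r
        K = P @ H.T / S
        x = x + (K * y).ravel()
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(x[0])
    return np.array(smoothed)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = 5.0 + 0.3 * np.arange(100)               # slowly drifting clock bias
    noisy = truth + rng.normal(scale=2.0, size=100)
    print("mean abs error:", np.abs(clock_bias_filter(noisy) - truth).mean())
```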

  17. Determining Coastal Hazards Risk Perception to Enhance Local Mitigation Planning through a Participatory Mapping Approach

    NASA Astrophysics Data System (ADS)

    Bethel, M.; Braud, D.; Lambeth, T.; Biber, P.; Wu, W.

    2017-12-01

    Coastal community leaders, government officials, and natural resource managers must be able to accurately assess and predict a given coastal landscape's sustainability and/or vulnerability as coastal habitat continues to undergo rapid and dramatic changes associated with natural and anthropogenic activities such as accelerated relative sea level rise (SLR). To help address this information need, a multi-disciplinary project team conducted Sea Grant sponsored research in Louisiana and Mississippi with traditional ecosystem users and natural resource managers to determine a method for producing localized vulnerability and sustainability maps for projected SLR and storm surge impacts, and determine how and whether the results of such an approach can provide more useful information to enhance hazard mitigation planning. The goals of the project are to develop and refine SLR visualization tools for local implementation in areas experiencing subsidence and erosion, and discover the different ways stakeholder groups evaluate risk and plan mitigation strategies associated with projected SLR and storm surge. Results from physical information derived from data and modeling of subsidence, erosion, engineered restoration and coastal protection features, historical land loss, and future land projections under SLR are integrated with complementary traditional ecological knowledge (TEK) offered by the collaborating local ecosystem users for these assessments. The data analysis involves interviewing stakeholders, coding the interviews for themes, and then converting the themes into vulnerability and sustainability factors. Each factor is weighted according to emphasis by the TEK experts and number of experts who mention it to determine which factors are the highest priority. The priority factors are then mapped with emphasis on the perception of contributing to local community vulnerability or sustainability to SLR and storm surge. The maps are used by the collaborators to benefit

  18. Computationally efficient method for localizing the spiral rotor source using synthetic intracardiac electrograms during atrial fibrillation.

    PubMed

    Shariat, M H; Gazor, S; Redfearn, D

    2015-08-01

    Atrial fibrillation (AF), the most common sustained cardiac arrhythmia, is an extremely costly public health problem. Catheter-based ablation is a common minimally invasive procedure to treat AF. Contemporary mapping methods are highly dependent on the accuracy of anatomic localization of rotor sources within the atria. In this paper, using simulated atrial intracardiac electrograms (IEGMs) during AF, we propose a computationally efficient method for localizing the tip of an electrical rotor with an Archimedean/arithmetic spiral wavefront. The proposed method uses the catheter electrode locations and their IEGM activation times to estimate the unknown parameters of the spiral wavefront, including its tip location. The proposed method is able to localize the spiral as soon as the wave hits three electrodes of the catheter. Our simulation results show that the method can efficiently localize the spiral wavefront that rotates either clockwise or counterclockwise.

  19. Qualitative Epidemiologic Methods Can Improve Local Prevention Programming among Adolescents

    ERIC Educational Resources Information Center

    Daniulaityte, Raminta; Siegal, Harvey A.; Carlson, Robert G.; Kenne, Deric R.; Starr, Sanford; DeCamp, Brad

    2004-01-01

    The Ohio Substance Abuse Monitoring Network (OSAM) is designed to provide accurate, timely, qualitatively-oriented epidemiologic descriptions of substance abuse trends and emerging problems in the state's major urban and rural areas. Use of qualitative methods in identifying and assessing substance abuse practices in local communities is one of…

  20. Kinetically reduced local Navier-Stokes equations: an alternative approach to hydrodynamics.

    PubMed

    Karlin, Iliya V; Tomboulides, Ananias G; Frouzakis, Christos E; Ansumali, Santosh

    2006-09-01

    An alternative approach, the kinetically reduced local Navier-Stokes (KRLNS) equations for the grand potential and the momentum, is proposed for the simulation of low Mach number flows. The Taylor-Green vortex flow is considered in the KRLNS framework, and compared to the results of the direct numerical simulation of the incompressible Navier-Stokes equations. The excellent agreement between the KRLNS equations and the incompressible nonlocal Navier-Stokes equations for this nontrivial time-dependent flow indicates that the former is a viable alternative for computational fluid dynamics at low Mach numbers.

  1. A novel crystallization method for visualizing the membrane localization of potassium channels.

    PubMed Central

    Lopatin, A N; Makhina, E N; Nichols, C G

    1998-01-01

    The high permeability of K+ channels to monovalent thallium (Tl+) ions and the low solubility of thallium bromide salt were used to develop a simple yet very sensitive approach to the study of membrane localization of potassium channels. K+ channels (Kir1.1, Kir2.1, Kir2.3, Kv2.1) were expressed in Xenopus oocytes and loaded with Br ions by microinjection. Oocytes were then exposed to extracellular thallium. Under conditions favoring influx of Tl+ ions (negative membrane potential under voltage clamp, or high concentration of extracellular Tl+), crystals of TlBr, visible under low-power microscopy, formed under the membrane in places of high density of K+ channels. Crystals were not formed in uninjected oocytes, but were formed in oocytes expressing as little as 5 microS K+ conductance. The number of observed crystals was much lower than the estimated number of functional channels. Based on the pattern of crystal formation, K+ channels appear to be expressed mostly around the point of cRNA injection when injected either into the animal or vegetal hemisphere. In addition to this pseudopolarized distribution of K+ channels due to localized microinjection of cRNA, a naturally polarized (animal/vegetal side) distribution of K+ channels was also frequently observed when K+ channel cRNA was injected at the equator. A second novel "agarose-hemiclamp" technique was developed to permit direct measurements of K+ currents from different hemispheres of oocytes under two-microelectrode voltage clamp. This technique, together with direct patch-clamping of patches of membrane in regions of high crystal density, confirmed that the localization of TlBr crystals corresponded to the localization of functional K+ channels and suggested a clustered organization of functional channels. With appropriate permeant ion/counterion pairs, this approach may be applicable to the visualization of the membrane distribution of any functional ion channel. PMID:9591643

  2. A novel crystallization method for visualizing the membrane localization of potassium channels.

    PubMed

    Lopatin, A N; Makhina, E N; Nichols, C G

    1998-05-01

    The high permeability of K+ channels to monovalent thallium (Tl+) ions and the low solubility of thallium bromide salt were used to develop a simple yet very sensitive approach to the study of membrane localization of potassium channels. K+ channels (Kir1.1, Kir2.1, Kir2.3, Kv2.1) were expressed in Xenopus oocytes and loaded with Br ions by microinjection. Oocytes were then exposed to extracellular thallium. Under conditions favoring influx of Tl+ ions (negative membrane potential under voltage clamp, or high concentration of extracellular Tl+), crystals of TlBr, visible under low-power microscopy, formed under the membrane in places of high density of K+ channels. Crystals were not formed in uninjected oocytes, but were formed in oocytes expressing as little as 5 microS K+ conductance. The number of observed crystals was much lower than the estimated number of functional channels. Based on the pattern of crystal formation, K+ channels appear to be expressed mostly around the point of cRNA injection when injected either into the animal or vegetal hemisphere. In addition to this pseudopolarized distribution of K+ channels due to localized microinjection of cRNA, a naturally polarized (animal/vegetal side) distribution of K+ channels was also frequently observed when K+ channel cRNA was injected at the equator. A second novel "agarose-hemiclamp" technique was developed to permit direct measurements of K+ currents from different hemispheres of oocytes under two-microelectrode voltage clamp. This technique, together with direct patch-clamping of patches of membrane in regions of high crystal density, confirmed that the localization of TlBr crystals corresponded to the localization of functional K+ channels and suggested a clustered organization of functional channels. With appropriate permeant ion/counterion pairs, this approach may be applicable to the visualization of the membrane distribution of any functional ion channel.

  3. A maximally stable extremal region based scene text localization method

    NASA Astrophysics Data System (ADS)

    Xiao, Chengqiu; Ji, Lixin; Gao, Chao; Li, Shaomei

    2015-07-01

    Text localization in natural scene images is an important prerequisite for many content-based image analysis tasks. This paper proposes a novel text localization algorithm. Firstly, a fast pruning algorithm is designed to extract Maximally Stable Extremal Regions (MSER) as basic character candidates. Secondly, these candidates are filtered by using the properties of the fitted ellipse and the distribution properties of characters to exclude most non-characters. Finally, a new extremal regions projection merging algorithm is designed to group character candidates into words. Experimental results show that the proposed method has an advantage in speed and achieves higher precision and recall rates than the latest published algorithms.
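
    The snippet below sketches the first two stages (MSER extraction and geometric filtering) with OpenCV. The simple bounding-box area and aspect-ratio rules stand in for the paper's fitted-ellipse and character-distribution criteria, and the image path is a placeholder.

```python
# MSER character-candidate extraction sketch (simplified filtering rules).
import cv2

def mser_text_candidates(image_path, min_area=60, max_area=10000):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)  # placeholder path
    mser = cv2.MSER_create()
    regions, bboxes = mser.detectRegions(gray)
    candidates = []
    for (x, y, w, h) in bboxes:
        area = w * h
        aspect = w / float(h)
        if min_area <= area <= max_area and 0.1 <= aspect <= 3.0:
            candidates.append((x, y, w, h))
    return candidates

if __name__ == "__main__":
    boxes = mser_text_candidates("scene.jpg")
    print(f"{len(boxes)} character candidates kept")
```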

  4. The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau Equation. II. Contraction Methods

    NASA Astrophysics Data System (ADS)

    Ginibre, J.; Velo, G.

    We continue the study of the initial value problem for the complex Ginzburg-Landau equation (with a > 0, b > 0, g ≥ 0) initiated in a previous paper [I]. We treat the case where the initial data and the solutions belong to local uniform spaces, more precisely to spaces of functions satisfying local regularity conditions and uniform bounds in local norms, but no decay conditions (or arbitrarily weak decay conditions) at infinity. In [I] we used compactness methods and an extended version of recent local estimates [3] and proved in particular the existence of solutions globally defined in time with local regularity of the initial data corresponding to the spaces L^r for r ≥ 2 or H^1. Here we treat the same problem by contraction methods. This allows us in particular to prove that the solutions obtained in [I] are unique under suitable subcriticality conditions, and to obtain for them additional regularity properties and uniform bounds. The method extends some of those previously applied to the nonlinear heat equation in global spaces to the framework of local uniform spaces.
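
    For reference, the complex Ginzburg-Landau equation studied in this series is commonly written in the form below. The parameter names (γ for the linear growth term, σ for the nonlinearity exponent) follow the standard convention and are assumed here; the correspondence with the a, b, g of the abstract is presumed rather than taken from the paper.

```latex
% Standard form of the complex Ginzburg-Landau equation (assumed notation):
\partial_t u = \gamma u + (a + i\alpha)\,\Delta u - (b + i\beta)\,|u|^{2\sigma} u ,
\qquad a > 0, \; b > 0, \; \gamma \ge 0 .
```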

  5. A novel image registration approach via combining local features and geometric invariants

    PubMed Central

    Lu, Yan; Gao, Kun; Zhang, Tinghua; Xu, Tingfa

    2018-01-01

    Image registration is widely used in many fields, but the adaptability of the existing methods is limited. This work proposes a novel image registration method with high precision for various complex applications. In this framework, the registration problem is divided into two stages. First, we detect and describe scale-invariant feature points using a modified oriented FAST and rotated BRIEF (ORB) algorithm, and a simple method to increase the performance of feature point matching is proposed. Second, we develop a new local constraint of rough selection according to the feature distances. Evidence shows that the existing matching techniques based on image features are insufficient for images with sparse image details. Then, we propose a novel matching algorithm via geometric constraints, and establish local feature descriptions based on geometric invariances for the selected feature points. Subsequently, a new cost function is constructed to evaluate the similarities between points and obtain exact matching pairs. Finally, we employ the progressive sample consensus method to remove wrong matches and calculate the space transform parameters. Experimental results on various complex image datasets verify that the proposed method is more robust and significantly reduces the rate of false matches while retaining more high-quality feature points. PMID:29293595
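
    The sketch below shows the basic pipeline the paper builds on: ORB feature matching followed by robust estimation of the spatial transform. OpenCV's standard RANSAC homography stands in for the paper's modified descriptors, geometric-invariant constraints, and progressive sample consensus step, and the image paths are placeholders.

```python
# ORB matching + robust transform estimation sketch (RANSAC as a stand-in).
import cv2
import numpy as np

def register(img1_path, img2_path):
    img1 = cv2.imread(img1_path, cv2.IMREAD_GRAYSCALE)  # placeholder paths
    img2 = cv2.imread(img2_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inliers.sum())

if __name__ == "__main__":
    H, n_inliers = register("reference.png", "sensed.png")
    print("estimated transform:\n", H, "\ninliers:", n_inliers)
```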

  6. Explaining Andean Potato Weevils in Relation to Local and Landscape Features: A Facilitated Ecoinformatics Approach

    PubMed Central

    Parsa, Soroush; Ccanto, Raúl; Olivera, Edgar; Scurrah, María; Alcázar, Jesús; Rosenheim, Jay A.

    2012-01-01

    Background Pest impact on an agricultural field is jointly influenced by local and landscape features. Rarely, however, are these features studied together. The present study applies a “facilitated ecoinformatics” approach to jointly screen many local and landscape features of suspected importance to Andean potato weevils (Premnotrypes spp.), the most serious pests of potatoes in the high Andes. Methodology/Principal Findings We generated a comprehensive list of predictors of weevil damage, including both local and landscape features deemed important by farmers and researchers. To test their importance, we assembled an observational dataset measuring these features across 138 randomly-selected potato fields in Huancavelica, Peru. Data for local features were generated primarily by participating farmers who were trained to maintain records of their management operations. An information theoretic approach to modeling the data resulted in 131,071 models, the best of which explained 40.2–46.4% of the observed variance in infestations. The best model considering both local and landscape features strongly outperformed the best models considering them in isolation. Multi-model inferences confirmed many, but not all of the expected patterns, and suggested gaps in local knowledge for Andean potato weevils. The most important predictors were the field's perimeter-to-area ratio, the number of nearby potato storage units, the amount of potatoes planted in close proximity to the field, and the number of insecticide treatments made early in the season. Conclusions/Significance Results underscored the need to refine the timing of insecticide applications and to explore adjustments in potato hilling as potential control tactics for Andean weevils. We believe our study illustrates the potential of ecoinformatics research to help streamline IPM learning in agricultural learning collaboratives. PMID:22693551

  7. Local Feature Selection for Data Classification.

    PubMed

    Armanfard, Narges; Reilly, James P; Komeili, Majid

    2016-06-01

    Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.

  8. Boosting instance prototypes to detect local dermoscopic features.

    PubMed

    Situ, Ning; Yuan, Xiaojing; Zouridakis, George

    2010-01-01

    Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.
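
    The sketch below isolates the boosting-with-SVM-base-classifier stage, after the multi-instance problem has already been reduced to single-instance feature vectors. Synthetic data stands in for the diverse-density / evidence-confidence features of the paper, and the scikit-learn keyword `estimator` assumes a recent release (older versions use `base_estimator`).

```python
# AdaBoost with an SVM base learner, applied to single-instance feature vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=360, n_features=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# probability=True lets AdaBoost use the SVM's class probabilities.
clf = AdaBoostClassifier(
    estimator=SVC(kernel="rbf", C=1.0, gamma="scale", probability=True),
    n_estimators=20,
    random_state=0,
)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```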

  9. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field sources localization with a bistatic multiple input, multiple output (MIMO) radar system, and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the use of approximated model in most existing near-field sources localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.

  10. A combined volume-of-fluid method and low-Mach-number approach for DNS of evaporating droplets in turbulence

    NASA Astrophysics Data System (ADS)

    Dodd, Michael; Ferrante, Antonino

    2017-11-01

    Our objective is to perform DNS of finite-size droplets that are evaporating in isotropic turbulence. This requires fully resolving the process of momentum, heat, and mass transfer between the droplets and surrounding gas. We developed a combined volume-of-fluid (VOF) method and low-Mach-number approach to simulate this flow. The two main novelties of the method are: (i) the VOF algorithm captures the motion of the liquid-gas interface in the presence of mass transfer due to evaporation and condensation without requiring a projection step for the liquid velocity, and (ii) the low-Mach-number approach allows for local volume changes caused by phase change while the total volume of the liquid-gas system is constant. The method is verified against an analytical solution for a Stefan flow problem, and the D2 law is verified for a single droplet in quiescent gas. We also demonstrate the scheme's robustness when performing DNS of an evaporating droplet in forced isotropic turbulence.

  11. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application.

    PubMed

    Vivacqua, Rafael; Vassallo, Raquel; Martins, Felipe

    2017-10-16

    Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
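
    The sketch below illustrates the dead-reckoning half of the approach: integrating speed and yaw rate to propagate the vehicle pose, which is what lets a short-range lane-marking detection be extended backwards along the travelled path. The simple kinematic update and the noise-free inputs are illustrative assumptions only.

```python
# Dead-reckoning pose integration sketch (illustrative kinematic model).
import math

def dead_reckoning(pose, speed, yaw_rate, dt):
    """pose = (x, y, heading); returns the pose after one time step."""
    x, y, theta = pose
    x += speed * math.cos(theta) * dt
    y += speed * math.sin(theta) * dt
    theta += yaw_rate * dt
    return (x, y, theta)

if __name__ == "__main__":
    pose = (0.0, 0.0, 0.0)
    trajectory = [pose]
    for _ in range(50):                      # 5 s at 10 Hz, gentle left turn
        pose = dead_reckoning(pose, speed=10.0, yaw_rate=0.05, dt=0.1)
        trajectory.append(pose)
    print("final pose:", trajectory[-1])
```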

  12. A Low Cost Sensors Approach for Accurate Vehicle Localization and Autonomous Driving Application

    PubMed Central

    Vassallo, Raquel

    2017-01-01

    Autonomous driving on public roads requires precise localization within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in an urban environment, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive and stereo vision requires powerful dedicated hardware to process the camera information. In this context, this article presents a low-cost architecture of sensors and a data fusion algorithm capable of autonomous driving in narrow two-way roads. Our approach exploits a combination of a short-range visual lane marking detector and a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation. PMID:29035334

  13. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    NASA Astrophysics Data System (ADS)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
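
    The sketch below illustrates the local-basis idea in a stripped-down form: the K nearest dictionary entries (by parameter distance) supply model-error snapshots, the residual's component in their span is removed, and only then is the likelihood evaluated. All arrays are synthetic placeholders and the Gaussian likelihood is an assumption; this is not the authors' implementation.

```python
# K-nearest-neighbour local model-error basis sketch (synthetic data).
import numpy as np

def corrected_residual(residual, params, dict_params, dict_errors, k=8):
    """residual: data - approx_model(params); dict_errors: detailed - approx."""
    d = np.linalg.norm(dict_params - params, axis=1)
    idx = np.argsort(d)[:k]                      # K nearest dictionary entries
    B = dict_errors[idx].T                       # (n_data, k) local error basis
    Q, _ = np.linalg.qr(B)                       # orthonormalize the basis
    return residual - Q @ (Q.T @ residual)       # remove model-error component

def log_likelihood(residual, sigma):
    return -0.5 * np.sum((residual / sigma) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_dict, n_param, n_data = 200, 5, 60
    dict_params = rng.uniform(size=(n_dict, n_param))
    dict_errors = rng.normal(size=(n_dict, n_data)) * 0.5
    proposal = rng.uniform(size=n_param)
    raw_residual = rng.normal(size=n_data)
    r = corrected_residual(raw_residual, proposal, dict_params, dict_errors)
    print("log-likelihood with correction:", log_likelihood(r, sigma=1.0))
```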

  14. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    PubMed

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
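
    A minimal sketch of the local modelling idea is given below: for each query, the k most similar calibration samples are selected and a PLS-DA model (PLS regression on dummy-coded classes) is fitted on that subset only. Distance-based weighting is omitted because scikit-learn's PLSRegression does not accept sample weights, so plain nearest-neighbour selection stands in for it; the data are synthetic.

```python
# Local PLS-DA sketch: fit a PLS-DA model on the k nearest calibration samples.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neighbors import NearestNeighbors

def lw_plsda_predict(X_train, y_train, x_query, k=30, n_components=2):
    classes = np.unique(y_train)
    Y_dummy = (y_train[:, None] == classes[None, :]).astype(float)
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    _, idx = nn.kneighbors(x_query.reshape(1, -1))
    idx = idx.ravel()
    local_classes = np.unique(y_train[idx])
    if local_classes.size == 1:                  # neighbourhood is pure
        return local_classes[0]
    pls = PLSRegression(n_components=n_components)
    pls.fit(X_train[idx], Y_dummy[idx])
    scores = pls.predict(x_query.reshape(1, -1)).ravel()
    return classes[int(np.argmax(scores))]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X0 = rng.normal(0.0, 1.0, size=(100, 10))
    X1 = rng.normal(1.5, 1.0, size=(100, 10))
    X = np.vstack([X0, X1])
    y = np.array([0] * 100 + [1] * 100)
    print("predicted class:", lw_plsda_predict(X, y, X1[0]))
```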

  15. Noise source separation of diesel engine by combining binaural sound localization method and blind source separation method

    NASA Astrophysics Data System (ADS)

    Yao, Jiachi; Xiang, Yang; Qian, Sichong; Li, Shengyang; Wu, Shaowei

    2017-11-01

    In order to separate and identify the combustion noise and the piston slap noise of a diesel engine, a noise source separation and identification method that combines a binaural sound localization method and a blind source separation method is proposed. During a diesel engine noise and vibration test, because a diesel engine has many complex noise sources, a lead covering method was applied to the engine to isolate interference noise from the No. 1-5 cylinders. Only the No. 6 cylinder parts were left bare. Two microphones that simulated the human ears were utilized to measure the radiated noise signals 1 m away from the diesel engine. First, a binaural sound localization method was adopted to separate the noise sources that are in different places. Then, for noise sources that are in the same place, a blind source separation method is utilized to further separate and identify the noise sources. Finally, a coherence function method, continuous wavelet time-frequency analysis method, and prior knowledge of the diesel engine are combined to further identify the separation results. The results show that the proposed method can effectively separate and identify the combustion noise and the piston slap noise of a diesel engine. The frequencies of the combustion noise and the piston slap noise are concentrated at 4350 Hz and 1988 Hz, respectively. Compared with the blind source separation method alone, the proposed method has superior separation and identification effects, and the separation results have fewer interference components from other noise.
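
    The snippet below sketches only the blind-source-separation stage, using FastICA on a synthetic two-channel instantaneous mixture. Real engine noise would call for convolutive separation and the binaural localization stage described above, so this is an illustration of the principle rather than the paper's pipeline.

```python
# FastICA blind source separation sketch on a synthetic two-channel mixture.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 8000)
s1 = np.sign(np.sin(2 * np.pi * 13 * t))          # stand-in for one source
s2 = np.sin(2 * np.pi * 37 * t)                   # stand-in for the other source
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(t.size, 2))

A = np.array([[1.0, 0.6], [0.4, 1.0]])            # unknown mixing matrix
X = S @ A.T                                       # two "microphone" channels

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                      # estimated source signals
print("estimated mixing matrix:\n", ica.mixing_)
```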

  16. New approach to predict photoallergic potentials of chemicals based on murine local lymph node assay.

    PubMed

    Maeda, Yosuke; Hirosaki, Haruka; Yamanaka, Hidenori; Takeyoshi, Masahiro

    2018-05-23

    Photoallergic dermatitis, caused by pharmaceuticals and other consumer products, is a very important issue in human health. However, the S10 guidelines of the International Conference on Harmonization do not recommend the existing prediction methods for photoallergy because of their low predictability in human cases. We applied the local lymph node assay (LLNA), a reliable, quantitative skin sensitization prediction test, to develop a new photoallergy prediction method. This method involves a three-step approach: (1) ultraviolet (UV) absorption analysis; (2) determination of the no-observed-adverse-effect level for skin phototoxicity based on LLNA; and (3) photoallergy evaluation based on LLNA. The photoallergic potential of chemicals was evaluated by comparing lymph node cell proliferation among groups treated with chemicals at minimal effect levels of skin sensitization and skin phototoxicity under UV irradiation (UV+) or non-UV irradiation (UV-). A case showing a significant difference (P < .05) in lymph node cell proliferation rates between UV- and UV+ groups was considered positive for a photoallergic reaction. Of the 13 chemicals tested, seven human photoallergens tested positive and the other six, with no evidence of causing photoallergic dermatitis or UV absorption, tested negative. Among these chemicals, both doxycycline hydrochloride and minocycline hydrochloride were tetracycline antibiotics with different photoallergic properties, and the new method clearly distinguished between the photoallergic properties of these chemicals. These findings suggested high predictability of our method; therefore, it is promising and effective in predicting human photoallergens. Copyright © 2018 John Wiley & Sons, Ltd.

  17. Localization of causal locus in the genome of the brown macroalga Ectocarpus: NGS-based mapping and positional cloning approaches

    PubMed Central

    Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte

    2015-01-01

    Mutagenesis is the only process by which unpredicted biological gene function can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental conditions were not in place at the time. Today, progress in macroalgal genomics and judicious choices of suitable genetic models make identification of mutated genes possible. This article presents a comparative study of two methods aimed at identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were assembled, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how a narrower localization results from the combination of the two methods. Advantages and drawbacks of these two approaches, as well as their potential transfer to other macroalgae, are discussed. PMID:25745426

  18. Percutaneous Irreversible Electroporation of Locally Advanced Pancreatic Carcinoma Using the Dorsal Approach: A Case Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scheffer, Hester J., E-mail: hj.scheffer@vumc.nl; Melenhorst, Marleen C. A. M., E-mail: m.melenhorst@vumc.nl; Vogel, Jantien A., E-mail: j.a.vogel@amc.uva.nl

    Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to the vicinity of the tumor to collateral vessels and the duodenum, the dorsal approach was chosen. Under CT guidance, six electrodes were advanced into the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case illustrates that when ventral electrode placement for pancreatic IRE is impaired, the dorsal approach can be considered as an alternative.

  19. System and method for bullet tracking and shooter localization

    DOEpatents

    Roberts, Randy S. [Livermore, CA]; Breitfeller, Eric F. [Dublin, CA]

    2011-06-21

    A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.

  20. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data–Driven, Machine Learning Approach

    PubMed Central

    Taylor, R. Andrew; Pare, Joseph R.; Venkatesh, Arjun K.; Mowafi, Hani; Melnick, Edward R.; Fleischman, William; Hall, M. Kennedy

    2018-01-01

    Objectives Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data–driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. Methods This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. Results There were 5,278 visits among 4,676 unique patients who
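
    A minimal sketch of the kind of comparison described above is given below: a random forest and a logistic regression baseline trained on a wide feature matrix and compared by AUC on a held-out 20% split. The synthetic data, feature count, and hyperparameters are placeholders for the EHR-derived sepsis cohort, and the CART and clinical-score comparators are omitted.

```python
# Random forest vs. logistic regression AUC comparison sketch (synthetic data).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=500, n_informative=40,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0, n_jobs=-1)
lr = LogisticRegression(max_iter=2000)

rf.fit(X_tr, y_tr)
lr.fit(X_tr, y_tr)

print("random forest AUC:", roc_auc_score(y_val, rf.predict_proba(X_val)[:, 1]))
print("logistic reg. AUC:", roc_auc_score(y_val, lr.predict_proba(X_val)[:, 1]))
```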

  1. Costs and effects of a 'healthy living' approach to community development in two deprived communities: findings from a mixed methods study

    PubMed Central

    2011-01-01

    Background Inequalities in health have proved resistant to 'top down' approaches. It is increasingly recognised that health promotion initiatives are unlikely to succeed without strong local involvement at all stages of the process, and many programmes now use grass roots approaches. A healthy living approach to community development (HLA) was developed as an innovative response to local concerns about a lack of appropriate services in two deprived communities in Pembrokeshire, West Wales. We sought to assess the feasibility, costs, benefits and working relationships of this HLA. Methods The HLA intervention operated through existing community forums and focused on the whole community and its relationship with the statutory and voluntary sectors. Local people were trained as community researchers and gathered views about local needs through resident interviews. Forums used interview results to write action plans, disseminated to commissioning organisations. The process was supported throughout by the project. The evaluation used a multi-method before-and-after study design, including process and outcome formative and summative evaluation, with data gathered through documentary evidence, diaries and reflective accounts, semi-structured interviews, focus groups and costing proformas. Main outcome measures were processes and timelines of implementation of the HLA; self-reported impact on communities and participants; community-agency processes of liaison; and costs. Results Communities were able to produce and disseminate action plans based on locally-identified needs. The process was slower than anticipated: few community changes had occurred but expectations were high. Community participants gained skills and confidence. Cross-sector partnership working developed. The process had credibility within service provider organisations but mechanisms for refocusing commissioning were patchy. Intervention costs averaged £58,304 per community per annum. Conclusions The intervention was

  2. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used with arrays of arbitrary planar geometry. Second, a subspace model-errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model-errors estimation algorithm estimates the unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm, based on the improved array model, is implemented to locate sound sources. These two algorithms together constitute the robust sound source localization approach. The more accurate steering vectors can be provided for further processing such as adaptive beamforming. Numerical examples confirm the effectiveness of the proposed approach.
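
    The sketch below shows the underlying MUSIC principle in its simplest setting, a narrowband far-field uniform linear array. The paper's W2D-MUSIC additionally handles broad-band near-field sources, arbitrary planar arrays, and estimated gain/phase/position errors, none of which are reproduced here.

```python
# Narrowband far-field MUSIC sketch (simplified stand-in for W2D-MUSIC).
import numpy as np
from scipy.signal import find_peaks

def music_spectrum(X, n_sources, d=0.5, angles=np.linspace(-90, 90, 361)):
    """X: (n_sensors, n_snapshots) complex data; d: element spacing in wavelengths."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]               # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = eigvecs[:, : n_sensors - n_sources]      # noise subspace
    k = np.arange(n_sensors)
    spectrum = []
    for theta in np.deg2rad(angles):
        a = np.exp(-2j * np.pi * d * k * np.sin(theta))   # steering vector
        spectrum.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(spectrum)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_sensors, n_snap = 8, 400
    k = np.arange(n_sensors)
    doa = np.deg2rad([-20.0, 30.0])
    A = np.exp(-2j * np.pi * 0.5 * np.outer(k, np.sin(doa)))
    S = rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))
    X = A @ S + 0.1 * (rng.normal(size=(n_sensors, n_snap))
                       + 1j * rng.normal(size=(n_sensors, n_snap)))
    angles, p = music_spectrum(X, n_sources=2)
    peaks, _ = find_peaks(p)
    top = peaks[np.argsort(p[peaks])[-2:]]
    print("estimated DOAs (deg):", np.sort(angles[top]))
```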

  3. Calculation of smooth potential energy surfaces using local electron correlation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-14

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl⁻ with alkyl chlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  4. The Rise and Attenuation of the Basic Education Programme (BEP) in Botswana: A Global-Local Dialectic Approach

    ERIC Educational Resources Information Center

    Tabulawa, Richard

    2011-01-01

    Using a global-local dialectic approach, this paper traces the rise of the basic education programme in the 1980s and 1990s in Botswana and its subsequent attenuation in the 2000s. Amongst the local forces that led to the rise of BEP were Botswana's political project of nation-building; the country's dire human resources situation in the decades…

  5. Local synchronization of a complex network model.

    PubMed

    Yu, Wenwu; Cao, Jinde; Chen, Guanrong; Lü, Jinhu; Han, Jian; Wei, Wei

    2009-02-01

    This paper introduces a novel complex network model to evaluate the reputation of virtual organizations. By using the Lyapunov function and linear matrix inequality approaches, the local synchronization of the proposed model is further investigated. Here, local synchronization refers to synchronization within a group, rather than synchronization between different groups. Moreover, several sufficient conditions are derived to ensure the local synchronization of the proposed network model. Finally, several representative examples are given to show the effectiveness of the proposed methods and theories.

  6. Local unitary transformation method toward practical electron correlation calculations with scalar relativistic effect in large-scale molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seino, Junji; Nakai, Hiromi, E-mail: nakai@waseda.jp; Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555

    In order to perform practical electron correlation calculations, the local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas–Kroll–Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012); J. Seino and H. Nakai, J. Chem. Phys. 137, 144101 (2012)], which is based on the locality of relativistic effects, has been combined with the linear-scaling divide-and-conquer (DC)-based Hartree–Fock (HF) and electron correlation methods, such as the second-order Møller–Plesset (MP2) and the coupled cluster theories with single and double excitations (CCSD). Numerical applications in hydrogen halide molecules, (HX)_n (X = F, Cl, Br, and I), coinage metal chain systems, M_n (M = Cu and Ag), and the platinum-terminated polyynediyl chain, trans,trans-((p-CH3C6H4)3P)2(C6H5)Pt(C≡C)4Pt(C6H5)((p-CH3C6H4)3P)2, clarified that the present methods, namely DC-HF, MP2, and CCSD with the LUT-IODKH Hamiltonian, reproduce the results obtained using conventional methods with small computational costs. The combination of both LUT and DC techniques could be the first approach that achieves overall quasi-linear-scaling with a small prefactor for relativistic electron correlation calculations.

  7. The Health Role of Local Area Coordinators in Scotland: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Brown, Michael; Karatzias, Thanos; O'Leary, Lisa

    2013-01-01

    The study set out to explore whether local area coordinators (LACs) and their managers view the health role of LACs as an essential component of their work and identify the health-related activities undertaken by LACs in Scotland. A mixed methods cross-sectional phenomenological study involving local authority service managers (n = 25) and LACs (n…

  8. Parametric and non-parametric approach for sensory RATA (Rate-All-That-Apply) method of ledre profile attributes

    NASA Astrophysics Data System (ADS)

    Hastuti, S.; Harijono; Murtini, E. S.; Fibrianto, K.

    2018-03-01

    This study investigates the use of parametric and non-parametric approaches for the sensory RATA (Rate-All-That-Apply) method. Ledre, a unique local food product of Bojonegoro, was used as the point of interest, and 319 panelists were involved in the study. The results showed that ledre is characterized by an easily crushed texture, stickiness in the mouth, a stinging sensation and ease of swallowing. It also has a strong banana flavour and a brown colour. Compared to eggroll and semprong, ledre shows more variation in taste as well as in roll length. As the RATA questionnaire is designed to collect categorical data, a non-parametric approach is the common statistical procedure. However, similar results were obtained with the parametric approach, despite the non-normally distributed data. This suggests that the parametric approach can be applicable to consumer studies with large numbers of respondents, even though the data may not satisfy the assumptions of ANOVA (Analysis of Variance).
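
    As an illustration of the comparison described above, the following minimal sketch (not the authors' analysis; the data are simulated stand-ins for RATA ratings) runs a parametric one-way ANOVA and its non-parametric counterpart, the Kruskal-Wallis test, on the same attribute ratings for three products:

      # A minimal sketch with hypothetical RATA ratings (0 = not applicable,
      # 1-5 = perceived intensity) for one attribute across three products.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      ledre = rng.integers(3, 6, size=100)      # stand-in for e.g. "sticky in mouth"
      eggroll = rng.integers(0, 4, size=100)
      semprong = rng.integers(0, 4, size=100)

      # Parametric approach: one-way ANOVA on the ordinal ratings.
      f_stat, p_anova = stats.f_oneway(ledre, eggroll, semprong)

      # Non-parametric approach: Kruskal-Wallis test on the same data.
      h_stat, p_kw = stats.kruskal(ledre, eggroll, semprong)

      print(f"ANOVA:          F = {f_stat:.2f}, p = {p_anova:.4g}")
      print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_kw:.4g}")

    With large samples such as the 319 panelists used here, the two tests typically lead to the same conclusion, which is the point made in the abstract.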

  9. Local reaction kinetics by imaging

    PubMed Central

    Suchorski, Yuri; Rupprechter, Günther

    2016-01-01

    In the present contribution we give an overview of our recent studies using the “kinetics by imaging” approach for CO oxidation on heterogeneous model systems. The method is based on the correlation of the PEEM image intensity with catalytic activity: scaled down to μm-sized surface regions, this correlation allows simultaneous local kinetic measurements on differently oriented individual domains of a polycrystalline metal foil, including the construction of local kinetic phase diagrams. This enables spatially and component-resolved kinetic studies, e.g., a direct comparison of the inherent catalytic properties of Pt(hkl) and Pd(hkl) domains or of supported μm-sized Pd-powder agglomerates, studies of local catalytic ignition, and analysis of the role of defects and grain boundaries in the local reaction kinetics. PMID:26865736

  10. A state space based approach to localizing single molecules from multi-emitter images.

    PubMed

    Vahid, Milad R; Chao, Jerry; Ward, E Sally; Ober, Raimund J

    2017-01-28

    Single molecule super-resolution microscopy is a powerful tool that enables imaging at sub-diffraction-limit resolution. In this technique, subsets of stochastically photoactivated fluorophores are imaged over a sequence of frames and accurately localized, and the estimated locations are used to construct a high-resolution image of the cellular structures labeled by the fluorophores. Available localization methods typically first determine the regions of the image that contain emitting fluorophores through a process referred to as detection. Then, the locations of the fluorophores are estimated accurately in an estimation step. We propose a novel localization method which combines the detection and estimation steps. The method models the given image as the frequency response of a multi-order system obtained with a balanced state space realization algorithm based on the singular value decomposition of a Hankel matrix, and determines the locations of intensity peaks in the image as the pole locations of the resulting system. The locations of the most significant peaks correspond to the locations of single molecules in the original image. Although the accuracy of the location estimates is reasonably good, we demonstrate that, by using the estimates as the initial conditions for a maximum likelihood estimator, refined estimates can be obtained that have a standard deviation close to the Cramér-Rao lower bound-based limit of accuracy. We validate our method using both simulated and experimental multi-emitter images.

  11. An Integrated Approach to Indoor and Outdoor Localization

    DTIC Science & Technology

    2017-04-17

    A two-step process is proposed that performs an initial localization estimate, followed by particle-filter-based tracking. Initial localization is performed using WiFi and image observations. Once mapped, such fields can be used for localization [20, 21, 22]; Haverinen et al. show that these fields can be used with a particle filter for tracking.

  12. Localization microscopy of DNA in situ using Vybrant(®) DyeCycle™ Violet fluorescent probe: A new approach to study nuclear nanostructure at single molecule resolution.

    PubMed

    Żurek-Biesiada, Dominika; Szczurek, Aleksander T; Prakash, Kirti; Mohana, Giriram K; Lee, Hyun-Keun; Roignant, Jean-Yves; Birk, Udo J; Dobrucki, Jurek W; Cremer, Christoph

    2016-05-01

    Higher-order chromatin structure is not only required to compact and spatially arrange long chromatids within a nucleus, but also has important functional roles, including control of gene expression and DNA processing. However, studies of chromatin nanostructures cannot be performed using conventional widefield and confocal microscopy because of the limited optical resolution. Various methods of superresolution microscopy have been described to overcome this difficulty, such as structured illumination and single-molecule localization microscopy. We report here that the standard DNA dye Vybrant(®) DyeCycle™ Violet can be used to provide single-molecule localization microscopy (SMLM) images of DNA in nuclei of fixed mammalian cells. This SMLM method enabled optical isolation and localization of large numbers of DNA-bound molecules, usually in excess of 10^6 signals in one cell nucleus. The technique yielded high-quality images of nuclear DNA density, revealing subdiffraction chromatin structures on the order of 100 nm in size; the interchromatin compartment was visualized at unprecedented optical resolution. The approach offers several advantages over previously described high-resolution DNA imaging methods, including high specificity, the ability to record images using single-wavelength excitation, and a higher density of single-molecule signals than reported in previous SMLM studies. The method is compatible with DNA/multicolor SMLM imaging, which employs simple staining methods also suited to conventional optical microscopy. Copyright © 2016. Published by Elsevier Inc.

  13. The Green's functions for peridynamic non-local diffusion.

    PubMed

    Wang, L J; Xu, J F; Wang, J X

    2016-09-01

    In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gaussian source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
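
    For orientation, a generic peridynamic (non-local) diffusion law of the kind described above can be written as follows (a sketch of the general form only; the specific response function and volume constraints used in the paper are not reproduced here):

      % Generic peridynamic non-local diffusion over a horizon H_x of radius delta
      \frac{\partial u}{\partial t}(\mathbf{x},t)
        = \int_{H_{\mathbf{x}}} K(\mathbf{x}'-\mathbf{x})\,
          \bigl[u(\mathbf{x}',t)-u(\mathbf{x},t)\bigr]\, \mathrm{d}V_{\mathbf{x}'}
          + s(\mathbf{x},t)

    As the horizon radius tends to zero, this integral operator reduces to the classical local form with a Laplacian, which is the convergence property checked in the paper.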

  14. On some variational acceleration techniques and related methods for local refinement

    NASA Astrophysics Data System (ADS)

    Teigland, Rune

    1998-10-01

    This paper shows that the well-known variational acceleration method described by Wachspress (E. Wachspress, Iterative Solution of Elliptic Systems and Applications to the Neutron Diffusion Equations of Reactor Physics, Prentice-Hall, Englewood Cliffs, NJ, 1966) and later generalized to multiple levels, known as the additive correction multigrid method (B.R. Hutchinson and G.D. Raithby, Numer. Heat Transf., 9, 511-537 (1986)), is similar to the FAC method of McCormick and Thomas (S.F. McCormick and J.W. Thomas, Math. Comput., 46, 439-456 (1986)) and related multilevel methods. The performance of the method is demonstrated for some simple model problems using local refinement, and suggestions for improving the performance of the method are given.

  15. [Spectral scatter correction of coal samples based on quasi-linear local weighted method].

    PubMed

    Lei, Meng; Li, Ming; Ma, Xiao-Ping; Miao, Yan-Zi; Wang, Jian-Sheng

    2014-07-01

    This paper puts forward a new spectral correction method based on quasi-linear expressions and a local weighted function. The first stage of the method is to search for three quasi-linear expressions to replace the original linear expression in the MSC method, namely quadratic, cubic and growth-curve expressions. A local weighted function is then constructed by introducing four kernel functions: the Gaussian, Epanechnikov, Biweight and Triweight kernels. After adding this function to the basic estimation equation, the dependency between the original and ideal spectra is described more accurately and meticulously at each wavelength point. Furthermore, two analytical models were established based on PLS and a PCA-BP neural network, which can be used to estimate the accuracy of the corrected spectra. Finally, the optimal correction mode was determined from the analytical results for different combinations of quasi-linear expression and local weighted function. Spectra of the same coal sample have different noise ratios when the sample is prepared at different particle sizes. To validate the effectiveness of the method, the experiment analyzed the correction results of three spectral data sets with particle sizes of 0.2, 1 and 3 mm. The results show that the proposed method can eliminate the scattering influence and enhance the information in the spectral peaks. The paper thus provides a more efficient way to enhance the correlation between corrected spectra and coal qualities significantly, and to improve the accuracy and stability of the analytical model substantially.

  16. An approach to the language discrimination in different scripts using adjacent local binary pattern

    NASA Astrophysics Data System (ADS)

    Brodić, D.; Amelio, A.; Milivojević, Z. N.

    2017-09-01

    The paper proposes a method for discriminating the language of documents. First, each letter is encoded with a certain script type according to its status in the baseline area. The resulting cipher text is subjected to a feature extraction process, in which the local binary pattern as well as its expanded version, the adjacent local binary pattern, are extracted. Because of differences in language characteristics, this analysis shows significant diversity, and this diversity is a key aspect in differentiating between the languages. The proposed method is tested on example documents, and the experiments give encouraging results.
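
    The following minimal sketch (with a hypothetical letter-to-code mapping; the paper's baseline-status encoding and the adjacent variant are not reproduced here) illustrates the general idea of computing a local-binary-pattern histogram over an encoded text and using it as a language feature:

      # Encode letters into coarse script codes, then build a 1-D LBP histogram.
      import numpy as np

      def encode_letters(text):
          """Toy stand-in for the baseline-status coding: ascenders, descenders, rest."""
          ascenders, descenders = set("bdfhklt"), set("gjpqy")
          codes = []
          for ch in text.lower():
              if ch.isalpha():
                  codes.append(2 if ch in ascenders else (0 if ch in descenders else 1))
          return np.array(codes)

      def lbp_1d(codes, radius=2):
          """Compare each symbol with its neighbours and pack the result into a pattern."""
          patterns = []
          for i in range(radius, len(codes) - radius):
              neighbours = np.concatenate([codes[i - radius:i], codes[i + 1:i + 1 + radius]])
              bits = (neighbours >= codes[i]).astype(int)
              patterns.append(int("".join(map(str, bits)), 2))
          hist = np.bincount(patterns, minlength=2 ** (2 * radius)).astype(float)
          return hist / hist.sum()            # normalized pattern histogram (the feature)

      feature = lbp_1d(encode_letters("The quick brown fox jumps over the lazy dog"))
      print(feature)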

  17. Local region power spectrum-based unfocused ship detection method in synthetic aperture radar images

    NASA Astrophysics Data System (ADS)

    Wei, Xiangfei; Wang, Xiaoqing; Chong, Jinsong

    2018-01-01

    Ships in synthetic aperture radar (SAR) images are severely defocused and their energy disperses into numerous resolution cells under long SAR integration times. Therefore, the image intensity of ships is weak and sometimes even overwhelmed by sea clutter in the SAR image. Consequently, it is hard to detect ships from SAR intensity images. A ship detection method based on the local region power spectrum of the SAR complex image is proposed. Although the energies of the ships are dispersed in SAR intensity images, their spectral energies are rather concentrated, or at least cause the power spectra of local areas of SAR images to deviate from that of the sea-surface background. Therefore, the key idea of the proposed method is to detect ships via the power spectrum distortion of local areas of SAR images. The local region power spectrum of a moving target in a SAR image is analyzed, and the way to obtain the detection threshold from the probability density function (pdf) of the power spectrum is illustrated. Numerical P- and L-band airborne SAR ocean data are utilized and the detection results are illustrated. Results show that the proposed method can detect unfocused ships well, with a detection rate of 93.6% and a false-alarm rate of 8.6%. Moreover, comparison with other algorithms indicates that the proposed method performs better under long SAR integration times. Finally, the applicability of the proposed method and the selection of its parameters are discussed.

  18. A Localization Method for Underwater Wireless Sensor Networks Based on Mobility Prediction and Particle Swarm Optimization Algorithms

    PubMed Central

    Zhang, Ying; Liang, Jixing; Jiang, Shengming; Chen, Wei

    2016-01-01

    Due to their special environment, Underwater Wireless Sensor Networks (UWSNs) are usually deployed over a large sea area and the nodes are usually floating. This results in a lower beacon node distribution density, a longer time for localization, and more energy consumption. Currently, most localization algorithms in this field do not take sufficient account of node mobility. In this paper, by analyzing the mobility patterns of water near the seashore, a localization method for UWSNs based on a Mobility Prediction and a Particle Swarm Optimization algorithm (MP-PSO) is proposed. In this method, the range-based PSO algorithm is used to locate the beacon nodes, and their velocities can be calculated. The velocity of an unknown node is calculated using the spatial correlation of underwater objects' mobility, and its location can then be predicted. The range-based PSO algorithm entails considerable energy consumption and fairly high computational complexity; however, because the number of beacon nodes is relatively small, the calculation for the much larger number of unknown nodes remains light, and the method clearly decreases the energy consumption and time cost of localizing these mobile nodes. The simulation results indicate that this method has higher localization accuracy and a better localization coverage rate compared with some other widely used localization methods in this field. PMID:26861348
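
    As a rough illustration of the range-based localization step (a generic particle swarm sketch under simplified 2-D assumptions, not the MP-PSO algorithm of the paper), one can estimate a node position from noisy distances to known beacons as follows:

      # Generic PSO for range-based localization: minimize the sum of squared
      # range residuals to known beacon positions.
      import numpy as np

      rng = np.random.default_rng(1)
      beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
      true_pos = np.array([62.0, 35.0])
      ranges = np.linalg.norm(beacons - true_pos, axis=1) + rng.normal(0, 1.0, len(beacons))

      def cost(p):
          """Sum of squared range residuals for candidate position p."""
          return np.sum((np.linalg.norm(beacons - p, axis=1) - ranges) ** 2)

      # Standard PSO loop with inertia, cognitive and social terms.
      n, w, c1, c2 = 30, 0.7, 1.5, 1.5
      pos = rng.uniform(0, 100, (n, 2))
      vel = np.zeros((n, 2))
      pbest = pos.copy()
      pbest_cost = np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      for _ in range(100):
          r1, r2 = rng.random((n, 1)), rng.random((n, 1))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          costs = np.array([cost(p) for p in pos])
          improved = costs < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("estimated position:", gbest, "true position:", true_pos)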

  19. An integrated lean-methods approach to hospital facilities redesign.

    PubMed

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  20. An evaluation of methods for estimating the number of local optima in combinatorial optimization problems.

    PubMed

    Hernando, Leticia; Mendiburu, Alexander; Lozano, Jose A

    2013-01-01

    The solution of many combinatorial optimization problems is carried out by metaheuristics, which generally make use of local search algorithms. These algorithms use some kind of neighborhood structure over the search space. The performance of the algorithms strongly depends on the properties that the neighborhood imposes on the search space. One of these properties is the number of local optima. Given an instance of a combinatorial optimization problem and a neighborhood, the estimation of the number of local optima can help not only to measure the complexity of the instance, but also to choose the most convenient neighborhood to solve it. In this paper we review and evaluate several methods to estimate the number of local optima in combinatorial optimization problems. The methods reviewed come not only from the combinatorial optimization literature, but also from the statistical literature. A thorough evaluation on synthetic as well as real problems is given. We conclude by providing recommendations of methods for several scenarios.
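
    One simple instance of the statistical estimators discussed in this line of work looks as follows (a sketch on a toy random landscape, with the Chao1 estimator standing in for the more elaborate methods reviewed in the paper):

      # Run many hill climbs from random starts, record which local optimum each
      # reaches, and apply a Chao1-type estimator to the observed frequencies.
      import numpy as np
      from collections import Counter

      rng = np.random.default_rng(2)
      n_bits = 12
      values = rng.random(2 ** n_bits)          # random objective over bit strings

      def neighbours(x):
          return [x ^ (1 << i) for i in range(n_bits)]   # single bit-flip neighborhood

      def hill_climb(x):
          while True:
              best = max(neighbours(x), key=lambda y: values[y])
              if values[best] <= values[x]:
                  return x                       # x is a local optimum
              x = best

      optima = Counter(hill_climb(int(rng.integers(2 ** n_bits))) for _ in range(500))
      f1 = sum(1 for c in optima.values() if c == 1)    # optima seen exactly once
      f2 = sum(1 for c in optima.values() if c == 2)    # optima seen exactly twice
      observed = len(optima)
      chao1 = observed + (f1 * (f1 - 1)) / (2 * (f2 + 1))   # bias-corrected Chao1
      print(f"observed optima: {observed}, Chao1 estimate: {chao1:.1f}")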

  1. Earth Observation and Indicators Pertaining to Determinants of Health- An Approach to Support Local Scale Characterization of Environmental Determinants of Vector-Borne Diseases

    NASA Astrophysics Data System (ADS)

    Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Pilippe

    2016-08-01

    Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at the local scale and for large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework for overcoming this challenge. However, the EO image analysis methods commonly used to estimate EVDs are time- and resource-consuming. Moreover, variations in microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we present what DoH and EVDs are, the impacts of EVDs on vector-borne diseases in the context of global environmental change, and the need to characterize EVDs of vector-borne diseases at the local scale together with its challenges, and finally we propose an approach based on EO images to estimate, at the local scale, indicators pertaining to EVDs of vector-borne diseases.

  2. A local quasicontinuum method for 3D multilattice crystalline materials: Application to shape-memory alloys

    NASA Astrophysics Data System (ADS)

    Sorkin, V.; Elliott, R. S.; Tadmor, E. B.

    2014-07-01

    The quasicontinuum (QC) method, in its local (continuum) limit, is applied to materials with a multilattice crystal structure. Cauchy-Born (CB) kinematics, which accounts for the shifts of the crystal motif, is used to relate atomic motions to continuum deformation gradients. To avoid failures of CB kinematics, QC is augmented with a phonon stability analysis that detects lattice period extensions and identifies the minimum required periodic cell size. This approach is referred to as Cascading Cauchy-Born kinematics (CCB). In this paper, the method is described and developed. It is then used, along with an effective interaction potential (EIP) model for shape-memory alloys, to simulate the shape-memory effect and pseudoelasticity in a finite specimen. The results of these simulations show that (i) the CCB methodology is an essential tool that is required in order for QC-type simulations to correctly capture the first-order phase transitions responsible for these material behaviors, and (ii) that the EIP model adopted in this work coupled with the QC/CCB methodology is capable of predicting the characteristic behavior found in shape-memory alloys.

  3. Local and global evaluation for remote sensing image segmentation

    NASA Astrophysics Data System (ADS)

    Su, Tengfei; Zhang, Shengwei

    2017-08-01

    In object-based image analysis, producing an accurate segmentation is usually a critical issue that must be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, such methods are ineffective when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that can quantify segmentation incorrectness both locally and globally. Region-overlapping metrics are used to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which effectively delineate local segmentation error patterns. The error values for all reference geo-objects are aggregated using an area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach is almost as effective as two other global evaluation methods, and that the local part is a useful complement for comparing different segmentation results.
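
    The following sketch (assumed error definitions based on best-overlap matching; not necessarily the exact metrics of the paper) shows how per-object over- and under-segmentation errors can be computed from label images and aggregated with area weighting into global indicators:

      # Per-object over/under-segmentation from label images, plus an
      # area-weighted global aggregation.
      import numpy as np

      def per_object_errors(reference, segmentation):
          errors = {}
          for ref_id in np.unique(reference):
              ref_mask = reference == ref_id
              seg_ids, counts = np.unique(segmentation[ref_mask], return_counts=True)
              seg_mask = segmentation == seg_ids[np.argmax(counts)]   # best-matching segment
              overlap = np.logical_and(ref_mask, seg_mask).sum()
              under = 1.0 - overlap / ref_mask.sum()    # reference area missed by the segment
              over = 1.0 - overlap / seg_mask.sum()     # segment area outside the reference
              errors[int(ref_id)] = (under, over, int(ref_mask.sum()))
          return errors

      def global_index(errors):
          areas = np.array([a for _, _, a in errors.values()], dtype=float)
          under = np.array([u for u, _, _ in errors.values()])
          over = np.array([o for _, o, _ in errors.values()])
          w = areas / areas.sum()                       # area weights
          return float((w * under).sum()), float((w * over).sum())

      reference = np.array([[1, 1, 2], [1, 1, 2], [3, 3, 2]])
      segmentation = np.array([[1, 1, 1], [1, 1, 2], [3, 2, 2]])
      errs = per_object_errors(reference, segmentation)
      print(errs)                    # local error values per reference object
      print(global_index(errs))      # global under- and over-segmentation indicators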

  4. Localized Surface Plasmon Resonance Biosensing: Current Challenges and Approaches

    PubMed Central

    Unser, Sarah; Bruzas, Ian; He, Jie; Sagle, Laura

    2015-01-01

    Localized surface plasmon resonance (LSPR) has emerged as a leader among label-free biosensing techniques in that it offers sensitive, robust, and facile detection. Traditional LSPR-based biosensing utilizes the sensitivity of the plasmon frequency to changes in local index of refraction at the nanoparticle surface. Although surface plasmon resonance technologies are now widely used to measure biomolecular interactions, several challenges remain. In this article, we have categorized these challenges into four categories: improving sensitivity and limit of detection, selectivity in complex biological solutions, sensitive detection of membrane-associated species, and the adaptation of sensing elements for point-of-care diagnostic devices. The first section of this article will involve a conceptual discussion of surface plasmon resonance and the factors affecting changes in optical signal detected. The following sections will discuss applications of LSPR biosensing with an emphasis on recent advances and approaches to overcome the four limitations mentioned above. First, improvements in limit of detection through various amplification strategies will be highlighted. The second section will involve advances to improve selectivity in complex media through self-assembled monolayers, “plasmon ruler” devices involving plasmonic coupling, and shape complementarity on the nanoparticle surface. The following section will describe various LSPR platforms designed for the sensitive detection of membrane-associated species. Finally, recent advances towards multiplexed and microfluidic LSPR-based devices for inexpensive, rapid, point-of-care diagnostics will be discussed. PMID:26147727

  5. A Cross-Layer User Centric Vertical Handover Decision Approach Based on MIH Local Triggers

    NASA Astrophysics Data System (ADS)

    Rehan, Maaz; Yousaf, Muhammad; Qayyum, Amir; Malik, Shahzad

    Vertical handover decision algorithms that are based on user preferences and coupled with Media Independent Handover (MIH) local triggers have not been explored much in the literature. We have developed a comprehensive cross-layer solution, called the Vertical Handover Decision (VHOD) approach, which consists of three parts: a mechanism for collecting and storing user preferences, the VHOD algorithm, and the MIH Function (MIHF). The MIHF triggers the VHOD algorithm, which operates on user preferences to issue handover commands to the mobility management protocol. The VHOD algorithm is an MIH User and therefore needs to subscribe to events and configure thresholds for receiving triggers from the MIHF. In this regard, we have performed experiments in WLAN to suggest thresholds for the Link Going Down trigger. We have also critically evaluated the handover decision process, proposed a just-in-time interface activation technique, compared our proposed approach with prominent user-centric approaches, and analyzed our approach from different aspects.

  6. Full-Field Strain Measurement On Titanium Welds And Local Elasto-Plastic Identification With The Virtual Fields Method

    NASA Astrophysics Data System (ADS)

    Tattoli, F.; Pierron, F.; Rotinat, R.; Casavola, C.; Pappalettere, C.

    2011-01-01

    One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat-affected zones is often associated with changes in local material properties. The present work deals with the identification of the material parameters governing the elasto-plastic behaviour of the fused and heat-affected zones as well as the base material for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work and has been shown to be useful and much less time-consuming than classical finite element model updating approaches applied to similar problems. The paper presents results and discusses the problem of selecting the weld zones for the identification.

  7. Robust method to detect and locate local earthquakes by means of amplitude measurements.

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Brückl, Ewald

    2016-04-01

    In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic
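
    A minimal sketch of the back-projection idea reads as follows (the empirical amplitude-distance relation and its coefficients below are hypothetical placeholders, not the calibrated relation of the study):

      # Back-project station amplitudes to a grid, keep the minimum pseudo-magnitude
      # per grid point, and take its spatial maximum as the candidate epicentre.
      import numpy as np

      k, c = 1.66, -2.0                      # hypothetical attenuation coefficients
      stations = np.array([[0, 0], [40, 5], [10, 35], [45, 40]])   # station positions, km
      v_max = np.array([2.1e-4, 8.0e-5, 1.5e-4, 6.0e-5])           # max ground velocity, m/s

      xs = np.linspace(0, 50, 101)
      ys = np.linspace(0, 50, 101)
      X, Y = np.meshgrid(xs, ys)

      # Pseudo-magnitude of each station's amplitude back-projected to every grid point.
      pseudo = np.empty((len(stations),) + X.shape)
      for i, (sx, sy) in enumerate(stations):
          r = np.hypot(X - sx, Y - sy) + 1.0         # km; small offset avoids log(0)
          pseudo[i] = np.log10(v_max[i]) + k * np.log10(r) + c

      # Minimum over stations per grid point; its maximum marks the most plausible
      # epicentre (or bounds hidden events when nothing is detected).
      min_pseudo = pseudo.min(axis=0)
      iy, ix = np.unravel_index(np.argmax(min_pseudo), min_pseudo.shape)
      print(f"candidate epicentre: x = {xs[ix]:.1f} km, y = {ys[iy]:.1f} km, "
            f"magnitude ~ {min_pseudo[iy, ix]:.2f}")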

  8. A new Method for Determining the Interplanetary Current-Sheet Local Orientation

    NASA Astrophysics Data System (ADS)

    Blanco, J. J.; Rodríguez-pacheco, J.; Sequeiros, J.

    2003-03-01

    In this work we have developed a new method for determining the interplanetary current sheet local parameters. The method, called `HYTARO' (from Hyperbolic Tangent Rotation), is based on a modified Harris magnetic field. It has been applied to a pool of 57 events, all of them recorded during solar minimum conditions. The model performance has been tested by comparing both its outputs and its noise response with those of the `classic' MVM (Minimum Variance Method). The results suggest that, although in many cases the two methods behave in a similar way, there are specific crossing conditions that produce an erroneous MVM response. Moreover, our method shows a lower sensitivity to noise than MVM.
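
    For reference, the classical Harris current-sheet profile on which such hyperbolic-tangent models are based has the form below; the specific modification used in HYTARO is not reproduced here.

      % Classical Harris sheet; B_0 is the asymptotic field and L the half-thickness
      \mathbf{B}(z) = B_0 \tanh\!\left(\frac{z}{L}\right)\hat{\mathbf{x}}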

  9. Methods for measuring denitrification: Diverse approaches to a difficult problem

    USGS Publications Warehouse

    Groffman, Peter M; Altabet, Mary A.; Böhlke, J.K.; Butterbach-Bahl, Klaus; David, Mary B.; Firestone, Mary K.; Giblin, Anne E.; Kana, Todd M.; Nielsen , Lars Peter; Voytek, Mary A.

    2006-01-01

    Denitrification, the reduction of the nitrogen (N) oxides, nitrate (NO3−) and nitrite (NO2−), to the gases nitric oxide (NO), nitrous oxide (N2O), and dinitrogen (N2), is important to primary production, water quality, and the chemistry and physics of the atmosphere at ecosystem, landscape, regional, and global scales. Unfortunately, this process is very difficult to measure, and existing methods are problematic for different reasons in different places at different times. In this paper, we review the major approaches that have been taken to measure denitrification in terrestrial and aquatic environments and discuss the strengths, weaknesses, and future prospects for the different methods. Methodological approaches covered include (1) acetylene-based methods, (2) 15N tracers, (3) direct N2 quantification, (4) N2:Ar ratio quantification, (5) mass balance approaches, (6) stoichiometric approaches, (7) methods based on stable isotopes, (8) in situ gradients with atmospheric environmental tracers, and (9) molecular approaches. Our review makes it clear that the prospects for improved quantification of denitrification vary greatly in different environments and at different scales. While current methodology allows for the production of accurate estimates of denitrification at scales relevant to water and air quality and ecosystem fertility questions in some systems (e.g., aquatic sediments, well-defined aquifers), methodology for other systems, especially upland terrestrial areas, still needs development. Comparison of mass balance and stoichiometric approaches that constrain estimates of denitrification at large scales with point measurements (made using multiple methods), in multiple systems, is likely to propel more improvement in denitrification methods over the next few years.

  10. Novel Approach for Prediction of Localized Necking in Case of Nonlinear Strain Paths

    NASA Astrophysics Data System (ADS)

    Drotleff, K.; Liewald, M.

    2017-09-01

    Rising customer expectations regarding design complexity and weight reduction of sheet metal components, alongside further reduced time to market, imply an increased demand for process validation using numerical forming simulation. Formability prediction, however, is often still based on the forming limit diagram first presented in the 1960s. Despite many drawbacks in the case of nonlinear strain paths and major research advances in recent years, the forming limit curve (FLC) is still one of the most commonly used criteria for assessing the formability of sheet metal materials. Especially when forming complex part geometries, nonlinear strain paths may occur, which cannot be predicted using the conventional FLC concept. In this paper a novel approach for calculating FLCs for nonlinear strain paths is presented. By combining an approach for predicting the FLC from tensile test data with the IFU-FLC-Criterion, a model for predicting localized necking under nonlinear strain paths can be derived. The presented model is based purely on experimental tensile test data, making it easy to calibrate for any given material. The resulting prediction of localized necking is validated using an experimental deep-drawing specimen made of AA6014 material with a sheet thickness of 1.04 mm. The results are compared to the IFU-FLC-Criterion based on data from pre-stretched Nakajima specimens.

  11. Hippocampus Segmentation Based on Local Linear Mapping

    PubMed Central

    Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin

    2017-01-01

    We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF) to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively. PMID:28368016
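
    A simplified sketch of the local-linear-mapping step (assumed patch sizes and a plain ridge-regularized least-squares fit; the full LLM pipeline with k-means dictionaries and confidence-based fusion is not reproduced here) is shown below:

      # Represent a test MR patch by its k nearest MR dictionary atoms and apply
      # the same coefficients to the paired distance-field (DF) atoms.
      import numpy as np

      def predict_df(mr_test, mr_dict, df_dict, k=10, reg=1e-3):
          idx = np.argsort(np.linalg.norm(mr_dict - mr_test, axis=1))[:k]
          A = mr_dict[idx].T                                  # (d, k) local MR basis
          # Ridge-regularized least squares for the local linear coefficients.
          w = np.linalg.solve(A.T @ A + reg * np.eye(k), A.T @ mr_test)
          return df_dict[idx].T @ w                           # same combination of DF atoms

      rng = np.random.default_rng(3)
      mr_dict = rng.random((200, 27))     # 200 flattened 3x3x3 MR patches (hypothetical)
      df_dict = rng.random((200, 27))     # paired DF patches
      print(predict_df(rng.random(27), mr_dict, df_dict)[:5])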

  12. Hippocampus Segmentation Based on Local Linear Mapping.

    PubMed

    Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin

    2017-04-03

    We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF) to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.

  13. Hippocampus Segmentation Based on Local Linear Mapping

    NASA Astrophysics Data System (ADS)

    Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin

    2017-04-01

    We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF) to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.

  14. Experimental Validation of Normalized Uniform Load Surface Curvature Method for Damage Localization

    PubMed Central

    Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo

    2015-01-01

    In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity, irrespective of the damage location. Damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under ambient excitation. Two damage scenarios were considered, a single-damage case and a multiple-damage case, by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests was performed. It was found that the damage locations could be identified successfully without any false-positive or false-negative detections using the proposed method. For comparison, the damage detection performance was compared with that of two other well-known methods based on the modal flexibility matrix, namely the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for locating damage in simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise. PMID:26501286
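
    The quantities involved can be sketched as follows (a simplified, illustrative beam example with assumed normalization; not the paper's experimental pipeline):

      # Modal flexibility, uniform load surface (ULS), its curvature, and a simple
      # damage index from the change in normalized curvature.
      import numpy as np

      def uls_curvature(phis, omegas, dx):
          """phis: (n_modes, n_points) mass-normalized mode shapes; omegas in rad/s."""
          F = sum(np.outer(p, p) / w**2 for p, w in zip(phis, omegas))   # modal flexibility
          uls = F @ np.ones(F.shape[0])                                  # uniform load surface
          curv = np.gradient(np.gradient(uls, dx), dx)                   # second derivative
          return curv / np.linalg.norm(curv)                             # normalization (assumed)

      # Hypothetical identified modes of a simply supported beam (analytic shapes).
      x = np.linspace(0.0, 1.0, 21)
      phis = np.array([np.sin(np.pi * k * x) for k in (1, 2, 3)])
      healthy = uls_curvature(phis, np.array([10.0, 40.0, 90.0]), x[1] - x[0])
      damaged = uls_curvature(phis, np.array([9.5, 38.5, 87.0]), x[1] - x[0])

      damage_index = np.abs(damaged - healthy)      # peaks indicate candidate damage zones
      print("largest curvature change near x =", x[np.argmax(damage_index)])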

  15. Restoration of STORM images from sparse subset of localizations (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.

    2016-02-01

    To construct a Stochastic Optical Reconstruction Microscopy (STORM) image, one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than required by the Nyquist criterion. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model that allows us to predict the probability for every location to be occupied by a fluorophore at the end of a hypothetical acquisition, taking as input the distribution of already localized fluorophores in the proximity of this location. We show that the probability map obtained from a number of fluorophores 3-4 times smaller than required by the Nyquist criterion may be used as a superresolution image itself. Thus we are able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than required by the Nyquist criterion, proportionally decreasing the STORM data acquisition time. This method may be used in combination with other approaches designed to increase the time resolution of STORM.

  16. Digital Sequences and a Time Reversal-Based Impact Region Imaging and Localization Method

    PubMed Central

    Qiu, Lei; Yuan, Shenfang; Mei, Hanfei; Qian, Weifeng

    2013-01-01

    To reduce the time and cost of damage inspection, on-line impact monitoring of aircraft composite structures is needed. A digital monitor based on an array of piezoelectric transducers (PZTs) has been developed to record the regions of impacts on-line. It is small in size, lightweight and has low power consumption, but there are two problems with its impact alarm region localization method at the current stage. The first is that the accuracy rate of the impact alarm region localization is low, especially on complex composite structures. The second is that the area of the impact alarm region is large when a large-scale structure is monitored with a limited number of PZTs, which increases the time and cost of damage inspections. To solve these two problems, an impact alarm region imaging and localization method based on digital sequences and time reversal is proposed. In this method, the frequency band of the impact response signals is first estimated from the digital sequences. Then, characteristic signals of the impact response signals are constructed by sinusoidal modulation signals. Finally, the phase synthesis time reversal impact imaging method is adopted to obtain the impact region image. Based on this image, an error ellipse is generated to give the final impact alarm region. A validation experiment was implemented on a complex composite wing box of a real aircraft. The validation results show that the accuracy rate of impact alarm region localization is approximately 100%. The area of the impact alarm region can be reduced, and the number of PZTs needed to cover the same impact monitoring region is reduced by more than half. PMID:24084123

  17. Test particle propagation in magnetostatic turbulence. 2: The local approximation method

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.

    1976-01-01

    An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.

  18. An Improved Aerial Target Localization Method with a Single Vector Sensor

    PubMed Central

    Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin

    2017-01-01

    This paper focuses on problems encountered when processing real data with existing aerial target localization methods, analyzes their causes, and proposes an improved algorithm. Processing of sea-experiment data shows that the existing algorithms place high demands on the accuracy of the angle estimation. The improved algorithm relaxes these requirements and obtains robust estimation results. A closest-distance matching estimation algorithm and a horizontal distance estimation compensation algorithm are proposed. The smoothing effect of the data post-processed using the forward-backward double-filtering method is improved, so that the initial-stage data can be filtered and the filtering results retain more useful information. Aerial target height measurement methods are studied and estimation results for the aerial target are given, so as to realize three-dimensional localization of the aerial target and increase the underwater platform's awareness of the aerial target, giving the underwater platform better mobility and concealment. PMID:29135956

  19. Time-Accurate Local Time Stepping and High-Order Time CESE Methods for Multi-Dimensional Flows Using Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary

    2013-01-01

    With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) the high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.

  20. Charge localization and ordering in A2Mn8O16 hollandite group oxides: Impact of density functional theory approaches

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kaltak, Merzuk; Fernandez-Serra, Marivi; Hybertsen, Mark S.

    The phases of A2Mn8O16 hollandite group oxides emerge from the competition between ionic interactions, Jahn-Teller effects, charge ordering, and magnetic interactions. Their balanced treatment with feasible computational approaches can be challenging for commonly used approximations in density functional theory. Three examples (A = Ag, Li, and K) are studied with a sequence of different approximate exchange-correlation functionals. Starting from a generalized gradient approximation (GGA), an extension to include van der Waals interactions and a recently proposed meta-GGA are considered. Then local Coulomb interactions for the Mn 3d electrons are more explicitly considered with the DFT + U approach. Finally, selected results from a hybrid functional approach provide a reference. Results for the binding energy of the A species in the parent oxide highlight the role of van der Waals interactions. Relatively accurate results for insertion energies can be achieved with a low-U and a high-U approach. In the low-U case, the materials are described as band metals with a high-symmetry, tetragonal crystal structure. In the high-U case, the electrons donated by A result in formation of local Mn3+ centers and corresponding Jahn-Teller distortions characterized by a local order parameter. The resulting degree of monoclinic distortion depends on charge ordering and magnetic interactions in the phase formed. The reference hybrid functional results show charge localization and ordering. Comparison to low-temperature experiments of related compounds suggests that charge localization is the physically correct result for the hollandite group oxides studied here. Lastly, while competing effects in the local magnetic coupling are subtle, the fully anisotropic implementation of DFT + U gives the best overall agreement with results from the hybrid functional.

  1. Charge localization and ordering in A2Mn8O16 hollandite group oxides: Impact of density functional theory approaches

    DOE PAGES

    Kaltak, Merzuk; Fernandez-Serra, Marivi; Hybertsen, Mark S.

    2017-12-01

    The phases of A2Mn8O16 hollandite group oxides emerge from the competition between ionic interactions, Jahn-Teller effects, charge ordering, and magnetic interactions. Their balanced treatment with feasible computational approaches can be challenging for commonly used approximations in density functional theory. Three examples (A = Ag, Li, and K) are studied with a sequence of different approximate exchange-correlation functionals. Starting from a generalized gradient approximation (GGA), an extension to include van der Waals interactions and a recently proposed meta-GGA are considered. Then local Coulomb interactions for the Mn 3d electrons are more explicitly considered with the DFT + U approach. Finally, selected results from a hybrid functional approach provide a reference. Results for the binding energy of the A species in the parent oxide highlight the role of van der Waals interactions. Relatively accurate results for insertion energies can be achieved with a low-U and a high-U approach. In the low-U case, the materials are described as band metals with a high-symmetry, tetragonal crystal structure. In the high-U case, the electrons donated by A result in formation of local Mn3+ centers and corresponding Jahn-Teller distortions characterized by a local order parameter. The resulting degree of monoclinic distortion depends on charge ordering and magnetic interactions in the phase formed. The reference hybrid functional results show charge localization and ordering. Comparison to low-temperature experiments of related compounds suggests that charge localization is the physically correct result for the hollandite group oxides studied here. Lastly, while competing effects in the local magnetic coupling are subtle, the fully anisotropic implementation of DFT + U gives the best overall agreement with results from the hybrid functional.

  2. Charge localization and ordering in A2Mn8O16 hollandite group oxides: Impact of density functional theory approaches

    NASA Astrophysics Data System (ADS)

    Kaltak, Merzuk; Fernández-Serra, Marivi; Hybertsen, Mark S.

    2017-12-01

    The phases of A2Mn8O16 hollandite group oxides emerge from the competition between ionic interactions, Jahn-Teller effects, charge ordering, and magnetic interactions. Their balanced treatment with feasible computational approaches can be challenging for commonly used approximations in density functional theory. Three examples (A = Ag, Li, and K) are studied with a sequence of different approximate exchange-correlation functionals. Starting from a generalized gradient approximation (GGA), an extension to include van der Waals interactions and a recently proposed meta-GGA are considered. Then local Coulomb interactions for the Mn 3d electrons are more explicitly considered with the DFT + U approach. Finally, selected results from a hybrid functional approach provide a reference. Results for the binding energy of the A species in the parent oxide highlight the role of van der Waals interactions. Relatively accurate results for insertion energies can be achieved with a low-U and a high-U approach. In the low-U case, the materials are described as band metals with a high-symmetry, tetragonal crystal structure. In the high-U case, the electrons donated by A result in formation of local Mn3+ centers and corresponding Jahn-Teller distortions characterized by a local order parameter. The resulting degree of monoclinic distortion depends on charge ordering and magnetic interactions in the phase formed. The reference hybrid functional results show charge localization and ordering. Comparison to low-temperature experiments of related compounds suggests that charge localization is the physically correct result for the hollandite group oxides studied here. Finally, while competing effects in the local magnetic coupling are subtle, the fully anisotropic implementation of DFT + U gives the best overall agreement with results from the hybrid functional.

  3. Dynamic texture recognition using local binary patterns with an application to facial expressions.

    PubMed

    Zhao, Guoying; Pietikäinen, Matti

    2007-06-01

    Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
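
    As a minimal illustration of the underlying LBP operator (a single synthetic frame via scikit-image; the volume LBP and LBP-TOP extensions to the temporal domain are not reproduced here):

      # Uniform LBP codes for one frame and their histogram as a texture descriptor.
      import numpy as np
      from skimage.feature import local_binary_pattern

      rng = np.random.default_rng(4)
      frame = rng.integers(0, 256, (64, 64)).astype(np.uint8)   # stand-in for one video frame

      P, R = 8, 1                                  # 8 neighbours on a circle of radius 1
      codes = local_binary_pattern(frame, P, R, method="uniform")
      hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
      print(hist)                                  # per-frame texture feature

    LBP-TOP repeats this on the XY, XT and YT planes of a video volume and concatenates the three histograms.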

  4. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach

    PubMed Central

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-01-01

    One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is well suited for landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resources and feeds back the aircraft’s real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to provide an extendable baseline and a wide-angle field of view (FOV), in contrast with traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach. PMID:28629189

  5. Localization Framework for Real-Time UAV Autonomous Landing: An On-Ground Deployed Visual Approach.

    PubMed

    Kong, Weiwei; Hu, Tianjiang; Zhang, Daibing; Shen, Lincheng; Zhang, Jianwei

    2017-06-19

    One of the greatest challenges for fixed-wing unmanned aircraft vehicles (UAVs) is safe landing. Hereafter, an on-ground deployed visual approach is developed in this paper. This approach is well suited for landing within global navigation satellite system (GNSS)-denied environments. As for applications, the deployed guidance system makes full use of the ground computing resources and feeds back the aircraft's real-time localization to its on-board autopilot. Under such circumstances, a separate long baseline stereo architecture is proposed to provide an extendable baseline and a wide-angle field of view (FOV), in contrast with traditional fixed-baseline schemes. Furthermore, an accuracy evaluation of the new type of architecture is conducted by theoretical modeling and computational analysis. Dataset-driven experimental results demonstrate the feasibility and effectiveness of the developed approach.

  6. Multi-step approach for comparing the local air pollution contributions of conventional and innovative MSW thermo-chemical treatments.

    PubMed

    Ragazzi, M; Rada, E C

    2012-10-01

    In the sector of municipal solid waste management, the debate on the performance of conventional and novel thermo-chemical technologies is still relevant. When a plant must be constructed, decision makers often select a technology prior to analyzing the local environmental impact of the available options, as this type of study is generally developed once the design of the plant has been carried out. Additionally, the literature lacks comparative analyses of the contributions to local air pollution from different technologies. The present study offers a multi-step approach, based on pollutant emission factors and atmospheric dilution coefficients, for a local comparative analysis. With this approach it is possible to check whether some assumptions related to the advantages of the novel thermo-chemical technologies, in terms of local direct impact on air quality, can be applied to municipal solid waste treatment. The selected processes concern combustion, gasification and pyrolysis, alone or in combination. The pollutants considered are both carcinogenic and non-carcinogenic. A case study is presented concerning the location of a plant in an alpine region and its contribution to local air pollution. Results show that the differences among technologies are smaller than expected. The performance of each technology is discussed in detail. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computational procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation facilitate LISA's superior computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced, and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation, and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.
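
    The following is a minimal 1-D sketch of the penalty-contact idea described above (an interpenetration-dependent restoring force inside an explicit time-stepping loop). It is not the authors' LISA/CUDA code; the masses, stiffnesses, the contact stiffness and the time step are assumed values chosen only so the loop is stable.

```python
import numpy as np

# Minimal 1-D sketch of penalty contact between two crack faces in an
# explicit time-stepping loop. NOT the authors' LISA/CUDA implementation;
# all parameter values below are assumptions.
m = 1.0e-3            # nodal mass [kg]
k = 1.0e5             # spring tying each crack-face node to its support [N/m]
k_contact = 1.0e7     # penalty contact stiffness (the tuning parameter) [N/m]
gap0 = 1.0e-5         # initial crack opening [m]
dt = 1.0e-7           # explicit time step, well below the stability limit [s]

u = np.zeros(2)                 # displacements of the two crack-face nodes
v = np.array([0.5, -0.5])       # initial velocities driving the faces together [m/s]

for _ in range(2000):
    # Elastic restoring forces of the surrounding material.
    f = -k * u
    # Penalty contact: if the faces interpenetrate, push them apart.
    gap = gap0 + (u[1] - u[0])
    if gap < 0.0:
        f_pen = k_contact * (-gap)
        f[0] -= f_pen           # left face pushed back to the left
        f[1] += f_pen           # right face pushed back to the right
    # Explicit (symplectic Euler) update.
    v += dt * f / m
    u += dt * v
```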

  8. Water-sanitation-hygiene mapping: an improved approach for data collection at local level.

    PubMed

    Giné-Garriga, Ricard; de Palencia, Alejandro Jiménez-Fernández; Pérez-Foguet, Agustí

    2013-10-01

    Strategic planning and appropriate development and management of water and sanitation services are strongly supported by accurate and accessible data. If adequately exploited, these data might assist water managers with performance monitoring, benchmarking comparisons, policy progress evaluation, resources allocation, and decision making. A variety of tools and techniques are in place to collect such information. However, some methodological weaknesses arise when developing an instrument for routine data collection, particularly at local level: i) comparability problems due to heterogeneity of indicators, ii) poor reliability of collected data, iii) inadequate combination of different information sources, and iv) statistical validity of produced estimates when disaggregated into small geographic subareas. This study proposes an improved approach for water, sanitation and hygiene (WASH) data collection at decentralised level in low income settings, as an attempt to overcome previous shortcomings. The ultimate aim is to provide local policymakers with strong evidence to inform their planning decisions. The survey design takes the Water Point Mapping (WPM) as a starting point to record all available water sources at a particular location. This information is then linked to data produced by a household survey. Different survey instruments are implemented to collect reliable data by employing a variety of techniques, such as structured questionnaires, direct observation and water quality testing. The collected data are finally validated through simple statistical analysis, which in turn produces valuable outputs that might feed into the decision-making process. In order to demonstrate the applicability of the method, outcomes produced from three different case studies (Homa Bay District, Kenya; Kibondo District, Tanzania; and the Municipality of Manhiça, Mozambique) are presented. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. Bridging theory and practice: Mixed methods approach to instruction of law and ethics within the pharmaceutical sciences.

    PubMed

    Wilby, Kyle John; Nasr, Ziad Ghantous

    2016-11-01

    Background: Professional responsibilities are guided by laws and ethics that must be introduced and mastered within pharmaceutical sciences training. Instructional design to teaching typically introduces concepts in a traditional didactic approach and requires student memorization prior to application within practice settings. Additionally, many centers rely on best practices from abroad, due to lack of locally published laws and guidance documents. Objectives: The aim of this paper was to summarize and critically evaluate a professional skills laboratory designed to enhance learning through diversity in instructional methods relating to pharmacy law and best practices regarding narcotics, controlled medications, and benzodiazepines. Setting: This study took place within the Professional Skills Laboratory at the College of Pharmacy at Qatar University. Method: A total of 25 students participated in a redesigned laboratory session administered by a faculty member, clinical lecturer, teaching assistant, and a professional skills laboratory technician. The laboratory consisted of eight independent stations that students rotated during the 3-h session. Stations were highly interactive in nature and were designed using non-traditional approaches such as charades, role-plays, and reflective drawings. All stations attempted to have students relate learned concepts to practice within Qatar. Main outcome measures: Student perceptions of the laboratory were measured on a post-questionnaire and were summarized descriptively. Using reflection and consensus techniques, two faculty members completed a SWOC (Strengths, Weaknesses, Opportunities, and Challenges) analysis in preparation for future cycles. Results: 100% (25/25) of students somewhat or strongly agreed that their knowledge regarding laws and best practices increased and that their learning experience was enhanced by a mixed-methods approach. A total of 96% (24/25) of students stated that the mixed-methods

  10. Computing wave functions in multichannel collisions with non-local potentials using the R-matrix method

    NASA Astrophysics Data System (ADS)

    Bonitati, Joey; Slimmer, Ben; Li, Weichuan; Potel, Gregory; Nunes, Filomena

    2017-09-01

    The calculable form of the R-matrix method has been previously shown to be a useful tool in approximately solving the Schrodinger equation in nuclear scattering problems. We use this technique combined with the Gauss quadrature for the Lagrange-mesh method to efficiently solve for the wave functions of projectile nuclei in low energy collisions (1-100 MeV) involving an arbitrary number of channels. We include the local Woods-Saxon potential, the non-local potential of Perey and Buck, a Coulomb potential, and a coupling potential to computationally solve for the wave function of two nuclei at short distances. Object oriented programming is used to increase modularity, and parallel programming techniques are introduced to reduce computation time. We conclude that the R-matrix method is an effective method to predict the wave functions of nuclei in scattering problems involving both multiple channels and non-local potentials. Michigan State University iCER ACRES REU.
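
    As a hedged illustration of the quadrature building block mentioned above (not the R-matrix solver itself), the snippet below integrates a Woods-Saxon-like radial shape with Gauss-Legendre nodes and weights from NumPy; the potential parameters are illustrative.

```python
import numpy as np

# Building block only: Gauss-Legendre quadrature as used by Lagrange-mesh
# methods. This is not the R-matrix scattering solver described above.
def gauss_legendre_integral(f, a, b, n=40):
    """Approximate the integral of f on [a, b] with n Gauss-Legendre points."""
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)      # map the nodes to [a, b]
    return 0.5 * (b - a) * np.dot(w, f(xm))

# Example: a Woods-Saxon-like radial shape (parameters are illustrative).
V0, R0, a0 = 50.0, 3.0, 0.65
ws = lambda r: -V0 / (1.0 + np.exp((r - R0) / a0))
print(gauss_legendre_integral(ws, 0.0, 20.0))
```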

  11. Promising ethical arguments for product differentiation in the organic food sector. A mixed methods research approach.

    PubMed

    Zander, Katrin; Stolz, Hanna; Hamm, Ulrich

    2013-03-01

    Ethical consumerism is a growing trend worldwide. Ethical consumers' expectations are increasing and neither the Fairtrade nor the organic farming concept covers all the ethical concerns of consumers. Against this background the aim of this research is to elicit consumers' preferences regarding organic food with additional ethical attributes and their relevance at the market place. A mixed methods research approach was applied by combining an Information Display Matrix, Focus Group Discussions and Choice Experiments in five European countries. According to the results of the Information Display Matrix, 'higher animal welfare', 'local production' and 'fair producer prices' were preferred in all countries. These three attributes were discussed with Focus Groups in depth, using rather emotive ways of labelling. While the ranking of the attributes was the same, the emotive way of communicating these attributes was, for the most part, disliked by participants. The same attributes were then used in Choice Experiments, but with completely revised communication arguments. According to the results of the Focus Groups, the arguments were presented in a factual manner, using short and concise statements. In this research step, consumers in all countries except Austria gave priority to 'local production'. 'Higher animal welfare' and 'fair producer prices' turned out to be relevant for buying decisions only in Germany and Switzerland. According to our results, there is substantial potential for product differentiation in the organic sector through making use of production standards that exceed existing minimum regulations. The combination of different research methods in a mixed methods approach proved to be very helpful. The results of earlier research steps provided the basis from which to learn - findings could be applied in subsequent steps, and used to adjust and deepen the research design. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.

    PubMed

    Xiao Yang; Jianjiang Feng; Jie Zhou

    2014-05-01

    Dictionary-based orientation field estimation has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve the performance. Realizing that ridge orientations at different locations of fingerprints have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which the noisy orientation patch output at a location by a local estimation approach is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint needs to be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show that the proposed method outperforms previous ones markedly.

  13. Localized Arm Volume Index: A New Method for Body Type-Corrected Evaluation of Localized Arm Lymphedematous Volume Change.

    PubMed

    Yamamoto, Takumi; Yamamoto, Nana; Yoshimatsu, Hidehiko

    2017-10-01

    Volume measurement is a common evaluation for upper extremity lymphedema. However, volume comparison between different patients with different body types may be inappropriate, and it is difficult to evaluate localized limb volume change using arm volume. Localized arm volumes (Vk, k = 1-5) and localized arm volume indices (LAVIk) at 5 points (1, upper arm; 2, elbow; 3, forearm; 4, wrist; 5, hand) of 106 arms of 53 examinees with no arm edema were calculated based on physical measurements (arm circumferences and lengths and body mass index [BMI]). Interrater and intrarater reliabilities of LAVIk were assessed, and Vk and LAVIk were compared between lower BMI (BMI, <22 kg/m²) group and higher BMI (BMI, ≥22 kg/m²) group. Interrater and intrarater reliabilities of LAVIk were all high (all, r > 0.98). Between lower and higher BMI groups, significant differences were observed in all Vk (V1 [P = 6.8 × 10], V2 [P = 3.1 × 10], V3 [P = 1.1 × 10], V4 [P = 8.3 × 10], and V5 [P = 3.0 × 10]). Regarding localized arm volume index (LAVI) between groups, significant differences were seen in LAVI1 (P = 9.7 × 10) and LAVI5 (P = 1.2 × 10); there was no significant difference in LAVI2 (P = 0.60), LAVI3 (P = 0.61), or LAVI4 (P = 0.22). Localized arm volume index is a convenient and highly reproducible method for evaluation of localized arm volume change, which is less affected by body physique compared with arm volumetry.
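
    The abstract does not state the exact LAVI formula, so the sketch below is illustrative only: it approximates a segment volume from two circumference measurements with the truncated-cone (frustum) formula and then applies a crude BMI normalization. The measurement values, variable names and the normalization are assumptions, not the published definition.

```python
import numpy as np

def frustum_volume(c1, c2, h):
    """Volume of a truncated cone from its two end circumferences c1, c2
    and segment length h (all in cm; result in cm^3)."""
    return h * (c1**2 + c1 * c2 + c2**2) / (12.0 * np.pi)

# Illustrative measurements (cm) for one arm segment; these numbers and the
# BMI normalization below are assumptions, not the published LAVI definition.
circumferences = [28.0, 26.5]   # proximal and distal circumference
length = 12.0                   # segment length
bmi = 23.4                      # body mass index [kg/m^2]

v_segment = frustum_volume(circumferences[0], circumferences[1], length)
localized_index = v_segment / bmi   # crude body-type-corrected index
print(v_segment, localized_index)
```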

  14. Goal oriented soil mapping: applying modern methods supported by local knowledge: A review

    NASA Astrophysics Data System (ADS)

    Pereira, Paulo; Brevik, Eric; Oliva, Marc; Estebaranz, Ferran; Depellegrin, Daniel; Novara, Agata; Cerda, Artemi; Menshov, Oleksandr

    2017-04-01

    In recent years, the amount of available soil data has increased substantially. This has facilitated the production of better and more accurate maps, which are important for sustainable land management (Pereira et al., 2017). Despite these advances, human knowledge remains extremely important for understanding the natural characteristics of the landscape. The knowledge accumulated and transmitted generation after generation is priceless and should be considered a valuable data source for soil mapping and modelling. Local knowledge and wisdom can complement the new advances in soil analysis. In addition, farmers have a strong interest in participating and in having their knowledge incorporated into the models, since they are the end-users of the studies that soil scientists produce. Integrating local communities' vision and understanding of nature is considered an important step towards the implementation of decision makers' policies. Despite this, many challenges arise regarding the integration of local and scientific knowledge, since in some cases there is no spatial correlation between folk and scientific classifications, which may be attributed to the different cultural variables that influence local soil classification. The objective of this work is to review how modern soil mapping methods have incorporated local knowledge into their models. References Pereira, P., Brevik, E., Oliva, M., Estebaranz, F., Depellegrin, D., Novara, A., Cerda, A., Menshov, O. (2017) Goal Oriented soil mapping: applying modern methods supported by local knowledge. In: Pereira, P., Brevik, E., Munoz-Rojas, M., Miller, B. (Eds.) Soil mapping and process modelling for sustainable land use management (Elsevier Publishing House) ISBN: 9780128052006

  15. Localized N20 Component of Somatosensory Evoked Magnetic Fields in Frontoparietal Brain Tumor Patients Using Noise-Normalized Approaches.

    PubMed

    Elaina, Nor Safira; Malik, Aamir Saeed; Shams, Wafaa Khazaal; Badruddin, Nasreen; Abdullah, Jafri Malin; Reza, Mohammad Faruque

    2018-06-01

    This study aimed to localize sensorimotor cortical activation in 10 patients with frontoparietal tumors using quantitative magnetoencephalography (MEG) with noise-normalized approaches. Somatosensory evoked magnetic fields (SEFs) were elicited in 10 patients with somatosensory tumors and in 10 control participants using electrical stimulation of the median nerve via the right and left wrists. We localized the N20m component of the SEFs using dynamic statistical parametric mapping (dSPM) and standardized low-resolution brain electromagnetic tomography (sLORETA) combined with 3D magnetic resonance imaging (MRI). The obtained coordinates were compared between groups. Finally, we statistically evaluated the N20m parameters across hemispheres using non-parametric statistical tests. The N20m sources were accurately localized to Brodmann area 3b in all members of the control group and in seven of the patients; in the remaining three patients, however, the sources were shifted to locations outside the primary somatosensory cortex (SI). Compared with the affected (tumor) hemispheres in the patient group, N20m amplitudes and the strengths of the current sources were significantly lower in the unaffected hemispheres and in both hemispheres of the control group. These results were consistent for both dSPM and sLORETA approaches. Tumors in the sensorimotor cortex lead to cortical functional reorganization and an increase in N20m amplitude and current-source strengths. Noise-normalized approaches for MEG analysis that are integrated with MRI show accurate and reliable localization of sensorimotor function.

  16. Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.

    PubMed

    Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo

    Unsupervised object discovery and localization is the task of discovering dominant object classes and localizing all of their instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, in those methods no prior knowledge about the given image collection is exploited to facilitate object discovery. On the other hand, the topic models used in those methods suffer from the topic coherence issue: some inferred topics do not have clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in terms of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; thus the must-links in our approach are semantic-specific, which allows discriminative prior knowledge from Web images to be exploited more efficiently. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.

  17. High-precision approach to localization scheme of visible light communication based on artificial neural networks and modified genetic algorithms

    NASA Astrophysics Data System (ADS)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Chen, Hao; Cai, Ye; Chen, Yingcong

    2017-10-01

    An indoor positioning algorithm based on visible light communication (VLC) is presented. This algorithm is used to calculate a three-dimensional (3-D) coordinate in an indoor optical wireless environment, which includes sufficient orders of multipath reflections from the reflecting surfaces of the room. Leveraging the global optimization ability of the genetic algorithm (GA), an innovative framework for 3-D position estimation based on a modified genetic algorithm is proposed. Unlike other techniques using VLC for positioning, the proposed system can achieve indoor 3-D localization without making assumptions about the height or acquiring the orientation angle of the mobile terminal. Simulation results show that an average localization error of less than 1.02 cm can be achieved. In addition, in most VLC-positioning systems the effect of reflection is neglected, which limits performance, makes the results less accurate in a real scenario, and leads to positioning errors at the corners that are relatively larger than elsewhere. We therefore take the first-order reflection into consideration and use an artificial neural network to match the model of the nonlinear channel. The studies show that, with nonlinear matching of the direct and reflected channels, the average positioning error at the four corners decreases from 11.94 to 0.95 cm. The employed algorithm thus emerges as an effective and practical method for indoor localization and outperforms other existing indoor wireless localization approaches.
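
    For orientation, the snippet below evaluates the standard Lambertian line-of-sight VLC channel gain (a textbook model, with optical filter and concentrator gains omitted); the paper's full model additionally includes first-order reflections and a neural-network channel fit, which are not reproduced here. The LED power, half-angle, detector area and field of view are assumed values.

```python
import numpy as np

# Standard Lambertian line-of-sight VLC channel gain (textbook model).
# First-order reflections and the neural-network channel matching used in
# the paper are not reproduced here; all parameters below are assumptions.
def los_gain(d, phi, psi, half_angle_deg=60.0, area=1e-4, fov_deg=70.0):
    """d: LED-receiver distance [m]; phi: irradiance angle; psi: incidence
    angle [rad]. Optical filter/concentrator gains are omitted."""
    m = -np.log(2.0) / np.log(np.cos(np.radians(half_angle_deg)))  # Lambertian order
    if psi > np.radians(fov_deg):
        return 0.0                                                 # outside the FOV
    return (m + 1) * area / (2 * np.pi * d**2) * np.cos(phi)**m * np.cos(psi)

# Received power from an assumed 1 W LED directly above the receiver.
p_tx = 1.0
print(p_tx * los_gain(d=2.0, phi=0.0, psi=0.0))
```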

  18. EPODE approach for childhood obesity prevention: methods, progress and international development

    PubMed Central

    Borys, J-M; Le Bodo, Y; Jebb, S A; Seidell, J C; Summerbell, C; Richard, D; De Henauw, S; Moreno, L A; Romon, M; Visscher, T L S; Raffin, S; Swinburn, B

    2012-01-01

    Summary Childhood obesity is a complex issue and needs multistakeholder involvement at all levels to foster healthier lifestyles in a sustainable way. ‘Ensemble Prévenons l'Obésité Des Enfants’ (EPODE, Together Let's Prevent Childhood Obesity) is a large-scale, coordinated, capacity-building approach for communities to implement effective and sustainable strategies to prevent childhood obesity. This paper describes EPODE methodology and its objective of preventing childhood obesity. At a central level, a coordination team, using social marketing and organizational techniques, trains and coaches a local project manager nominated in each EPODE community by the local authorities. The local project manager is also provided with tools to mobilize local stakeholders through a local steering committee and local networks. The added value of the methodology is to mobilize stakeholders at all levels across the public and the private sectors. Its critical components include political commitment, sustainable resources, support services and a strong scientific input – drawing on the evidence-base – together with evaluation of the programme. Since 2004, EPODE methodology has been implemented in more than 500 communities in six countries. Community-based interventions are integral to childhood obesity prevention. EPODE provides a valuable model to address this challenge. PMID:22106871

  19. An iterative method for the localization of a neutron source in a large box (container)

    NASA Astrophysics Data System (ADS)

    Dubinski, S.; Presler, O.; Alfassi, Z. B.

    2007-12-01

    The localization of an unknown neutron source in a bulky box was studied. This can be used for the inspection of cargo, to prevent the smuggling of neutron and α emitters. It is important to localize the source from the outside for safety reasons. Source localization is necessary in order to determine its activity. A previous study showed that, by using six detectors, three on each parallel face of the box (460×420×200 mm³), the location of the source can be found with an average distance of 4.73 cm between the real source position and the calculated one and a maximal distance of about 9 cm. Accuracy was improved in this work by applying an iteration method based on four fixed detectors and the successive iteration of positioning of an external calibrating source. The initial positioning of the calibrating source is the plane of detectors 1 and 2. This method finds the unknown source location with an average distance of 0.78 cm between the real source position and the calculated one and a maximum distance of 3.66 cm for the same box. For larger boxes, localization without iterations requires an increase in the number of detectors, while localization with iterations requires only an increase in the number of iteration steps. In addition to source localization, two methods for determining the activity of the unknown source were also studied.

  20. An infrared small target detection method based on multiscale local homogeneity measure

    NASA Astrophysics Data System (ADS)

    Nie, Jinyan; Qu, Shaocheng; Wei, Yantao; Zhang, Liming; Deng, Lizhen

    2018-05-01

    Infrared (IR) small target detection plays an important role in the field of image detection owing to its intrinsic characteristics. This paper presents a multiscale local homogeneity measure (MLHM) for infrared small target detection, which can enhance the performance of IR small target detection systems. Firstly, the intra-patch homogeneity of the target itself and the inter-patch heterogeneity between the target and the local background regions are integrated to enhance the significance of small targets. Secondly, a multiscale measure based on local regions is proposed to obtain the most appropriate response. Finally, an adaptive threshold method is applied to small target segmentation. Experimental results on three different scenarios indicate that the MLHM has good performance under the interference of strong noise.
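
    The exact MLHM formula is not given in the abstract; as an illustrative single-scale sketch only, the function below combines a centre-versus-surround contrast (inter-patch heterogeneity) with an intra-patch homogeneity term for one pixel. A multiscale version would repeat this for several patch radii and keep the strongest response. The scene and target are synthetic.

```python
import numpy as np

def centre_surround_response(image, y, x, r=2):
    """Single-scale centre-vs-surround contrast at pixel (y, x).

    Illustrative only; the published MLHM combines intra-patch homogeneity
    and inter-patch heterogeneity over several scales with a formula that
    the abstract does not spell out.
    """
    img = np.asarray(image, dtype=float)
    centre = img[y - r:y + r + 1, x - r:x + r + 1]
    surround = img[y - 3 * r:y + 3 * r + 1, x - 3 * r:x + 3 * r + 1].copy()
    surround[2 * r:4 * r + 1, 2 * r:4 * r + 1] = np.nan  # mask out the centre patch
    contrast = centre.mean() - np.nanmean(surround)       # inter-patch heterogeneity
    homogeneity = 1.0 / (1.0 + centre.std())              # intra-patch homogeneity
    return max(contrast, 0.0) * homogeneity

scene = np.random.rand(64, 64)
scene[30:33, 30:33] += 2.0      # synthetic small "target"
print(centre_surround_response(scene, 31, 31))
```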

  1. The Green’s functions for peridynamic non-local diffusion

    PubMed Central

    Wang, L. J.; Xu, J. F.

    2016-01-01

    In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
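
    As a hedged illustration of the replacement described above (the spatial gradient term replaced by an integral over a horizon), one common form of the peridynamic non-local diffusion equation is shown below; the paper's specific kernel and notation may differ. As the horizon radius δ tends to zero, the classical local diffusion equation is recovered, consistent with the convergence result stated in the abstract.

```latex
% One common form of peridynamic non-local diffusion (notation may differ
% from the paper): the divergence of the flux is replaced by an integral
% over the horizon H_x of radius \delta around each point x.
\[
\frac{\partial u}{\partial t}(\mathbf{x},t)
  = \int_{H_{\mathbf{x}}} K(\mathbf{x}',\mathbf{x})
      \bigl[u(\mathbf{x}',t)-u(\mathbf{x},t)\bigr]\,\mathrm{d}V_{\mathbf{x}'}
    + s(\mathbf{x},t),
\qquad
H_{\mathbf{x}}=\{\mathbf{x}' : \lVert \mathbf{x}'-\mathbf{x}\rVert \le \delta\}.
\]
```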

  2. Comparison of two methods for detection of strain localization in sheet forming

    NASA Astrophysics Data System (ADS)

    Lumelskyj, Dmytro; Lazarescu, Lucian; Banabic, Dorel; Rojek, Jerzy

    2018-05-01

    This paper presents a comparison of two criteria of strain localization in experimental research and numerical simulation of sheet metal forming. The first criterion is based on the analysis of the through-thickness thinning (through-thickness strain) and its first time derivative in the most strained zone. The limit strain in the second method is determined by the maximum of the strain acceleration. Experimental and numerical investigations have been carried out for the Nakajima test performed on different specimens of DC04 grade steel sheet. The strain localization has been identified by analysis of experimental and numerical curves showing the evolution of strains and their derivatives in failure zones. The numerical and experimental limit strains calculated from both criteria have been compared with the experimental FLC evaluated according to the ISO 12004-2 norm. It has been shown that the first method predicts formability limits closer to the experimental FLC. The second criterion predicts values of strains higher than the FLC determined according to the ISO norm. These values are closer to the strains corresponding to the fracture limit. The results show that analysis of strain evolution allows us to determine strain localization in numerical simulation and experimental studies.
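
    The sketch below illustrates the two detection ideas described above on a synthetic through-thickness strain history: the first criterion watches the thinning strain and its first time derivative, the second takes the limit strain at the maximum of the strain acceleration. The synthetic curve and the threshold rule are assumptions, not the paper's data or exact procedure.

```python
import numpy as np

# Illustrative sketch only: a synthetic thinning history with slow uniform
# thinning followed by rapid necking near the end of the test.
t = np.linspace(0.0, 1.0, 400)
thinning = 0.05 * t + 0.25 * np.exp((t - 1.0) / 0.05)

rate = np.gradient(thinning, t)      # first time derivative (criterion 1)
accel = np.gradient(rate, t)         # strain acceleration (criterion 2)

# Criterion 1: onset when the thinning rate leaves its steady plateau
# (the factor 3 is an assumed threshold, not the paper's rule).
onset_1 = t[np.argmax(rate > 3.0 * np.median(rate))]
# Criterion 2: limit strain taken at the maximum of the strain acceleration.
onset_2 = t[np.argmax(accel)]

print(onset_1, onset_2, thinning[np.argmax(accel)])
```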

  3. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 3 2012-10-01 2012-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  4. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 3 2013-10-01 2013-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  5. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  6. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 3 2014-10-01 2014-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  7. 47 CFR Appendix to Part 52 - Deployment Schedule for Long-Term Database Methods for Local Number Portability

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Deployment Schedule for Long-Term Database Methods for Local Number Portability Appendix to Part 52 Telecommunication FEDERAL COMMUNICATIONS...—Deployment Schedule for Long-Term Database Methods for Local Number Portability Implementation must be...

  8. Multiview Locally Linear Embedding for Effective Medical Image Retrieval

    PubMed Central

    Shen, Hualei; Tao, Dacheng; Ma, Dianfu

    2013-01-01

    Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histogram play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and thus traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can be employed to reduce the “curse of dimensionality”. Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
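
    The multiview MLLE proposed in the paper (per-feature patches, adaptive weights, global coordinate alignment) is not available in standard libraries; as a baseline sketch only, the single-view LLE step it builds on can be run with scikit-learn, here on random stand-ins for concatenated image features.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Baseline sketch only: single-view LLE on naively concatenated features.
# The multiview MLLE of the paper is not reproduced here, and the feature
# matrices below are random stand-ins rather than real descriptors.
rng = np.random.default_rng(0)
sift_like = rng.normal(size=(500, 128))       # stand-in for SIFT descriptors
lbp_like = rng.normal(size=(500, 59))         # stand-in for LBP histograms
features = np.hstack([sift_like, lbp_like])   # naive feature concatenation

lle = LocallyLinearEmbedding(n_neighbors=12, n_components=10, random_state=0)
embedding = lle.fit_transform(features)       # low-dimensional representation
print(embedding.shape)                        # (500, 10)
```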

  9. Local functional descriptors for surface comparison based binding prediction

    PubMed Central

    2012-01-01

    Background Molecular recognition in proteins occurs due to appropriate arrangements of physical, chemical, and geometric properties of an atomic surface. Similar surface regions should create similar binding interfaces. Effective methods for comparing surface regions can be used in identifying similar regions, and to predict interactions without regard to the underlying structural scaffold that creates the surface. Results We present a new descriptor for protein functional surfaces and algorithms for using these descriptors to compare protein surface regions to identify ligand binding interfaces. Our approach uses descriptors of local regions of the surface, and assembles collections of matches to compare larger regions. Our approach uses a variety of physical, chemical, and geometric properties, adaptively weighting these properties as appropriate for different regions of the interface. Our approach builds a classifier based on a training corpus of examples of binding sites of the target ligand. The constructed classifiers can be applied to a query protein providing a probability for each position on the protein that the position is part of a binding interface. We demonstrate the effectiveness of the approach on a number of benchmarks, demonstrating performance that is comparable to the state-of-the-art, with an approach with more generality than these prior methods. Conclusions Local functional descriptors offer a new method for protein surface comparison that is sufficiently flexible to serve in a variety of applications. PMID:23176080

  10. Prediction of In-hospital Mortality in Emergency Department Patients With Sepsis: A Local Big Data-Driven, Machine Learning Approach.

    PubMed

    Taylor, R Andrew; Pare, Joseph R; Venkatesh, Arjun K; Mowafi, Hani; Melnick, Edward R; Fleischman, William; Hall, M Kennedy

    2016-03-01

    Predictive analytics in emergency care has mostly been limited to the use of clinical decision rules (CDRs) in the form of simple heuristics and scoring systems. In the development of CDRs, limitations in analytic methods and concerns with usability have generally constrained models to a preselected small set of variables judged to be clinically relevant and to rules that are easily calculated. Furthermore, CDRs frequently suffer from questions of generalizability, take years to develop, and lack the ability to be updated as new information becomes available. Newer analytic and machine learning techniques capable of harnessing the large number of variables that are already available through electronic health records (EHRs) may better predict patient outcomes and facilitate automation and deployment within clinical decision support systems. In this proof-of-concept study, a local, big data-driven, machine learning approach is compared to existing CDRs and traditional analytic methods using the prediction of sepsis in-hospital mortality as the use case. This was a retrospective study of adult ED visits admitted to the hospital meeting criteria for sepsis from October 2013 to October 2014. Sepsis was defined as meeting criteria for systemic inflammatory response syndrome with an infectious admitting diagnosis in the ED. ED visits were randomly partitioned into an 80%/20% split for training and validation. A random forest model (machine learning approach) was constructed using over 500 clinical variables from data available within the EHRs of four hospitals to predict in-hospital mortality. The machine learning prediction model was then compared to a classification and regression tree (CART) model, logistic regression model, and previously developed prediction tools on the validation data set using area under the receiver operating characteristic curve (AUC) and chi-square statistics. There were 5,278 visits among 4,676 unique patients who met criteria for sepsis. Of
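
    A hedged sketch of the modeling step described above (a random forest on EHR-derived variables with an 80/20 split evaluated by AUC) is given below. The feature matrix and outcome are synthetic stand-ins; the study used more than 500 real clinical variables.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Sketch of the modeling step only: a random forest with an 80/20 split and
# AUC evaluation, as described above. The data here are synthetic stand-ins
# for EHR variables and in-hospital mortality labels.
rng = np.random.default_rng(42)
n_visits, n_features = 5278, 60
X = rng.normal(size=(n_visits, n_features))
logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n_visits)
y = (logits > 1.5).astype(int)                 # synthetic mortality outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.3f}")
```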

  11. BAUM: improving genome assembly by adaptive unique mapping and local overlap-layout-consensus approach.

    PubMed

    Wang, Anqi; Wang, Zhanyu; Li, Zheng; Li, Lei M

    2018-06-15

    It is highly desirable to assemble genomes of high continuity and consistency at low cost. The current bottleneck in draft genome continuity using second generation sequencing (SGS) reads is primarily caused by uncertainty among repetitive sequences. Even though single-molecule real-time sequencing technology is very promising for overcoming the uncertainty issue, its relatively high cost and error rate add a burden on budget and computation. Many long-read assemblers adopt the overlap-layout-consensus (OLC) paradigm, which is less sensitive to sequencing errors, heterozygosity and variability of coverage. However, current assemblers of SGS data do not sufficiently take advantage of the OLC approach. Aiming at minimizing uncertainty, the proposed method, BAUM, breaks the whole genome into regions by adaptive unique mapping; local OLC is then used to assemble each region in parallel. BAUM can (i) perform reference-assisted assembly based on the genome of a closely related species, or (ii) improve the results of existing assemblies obtained from short or long sequencing reads. The tests on two eukaryote genomes, a wild rice Oryza longistaminata and a parrot Melopsittacus undulatus, show that BAUM achieved substantial improvement in genome size and continuity. Besides, BAUM reconstructed a considerable amount of repetitive regions that failed to be assembled by existing short read assemblers. We also propose statistical approaches to control the uncertainty in different steps of BAUM. http://www.zhanyuwang.xin/wordpress/index.php/2017/07/21/baum. Supplementary data are available at Bioinformatics online.

  12. PLPD: reliable protein localization prediction from imbalanced and overlapped datasets

    PubMed Central

    Lee, KiYoung; Kim, Dae-Won; Na, DoKyun; Lee, Kwang H.; Lee, Doheon

    2006-01-01

    Subcellular localization is one of the key functional characteristics of proteins. An automatic and efficient prediction method for the protein subcellular localization is highly required owing to the need for large-scale genome analysis. From a machine learning point of view, a dataset of protein localization has several characteristics: the dataset has too many classes (there are more than 10 localizations in a cell), it is a multi-label dataset (a protein may occur in several different subcellular locations), and it is too imbalanced (the number of proteins in each localization is remarkably different). Even though many previous works have been done for the prediction of protein subcellular localization, none of them effectively tackles these characteristics at the same time. Thus, a new computational method for protein localization is eventually needed for more reliable outcomes. To address the issue, we present a protein localization predictor based on D-SVDD (PLPD) for the prediction of protein localization, which can find the likelihood of a specific localization of a protein more easily and more correctly. Moreover, we introduce three measurements for the more precise evaluation of a protein localization predictor. On the basis of results for various datasets constructed from the experiments of Huh et al. (2003), the proposed PLPD method represents a different approach that might play a complementary role to the existing methods, such as the Nearest Neighbor method and the discriminate covariant method. Finally, after finding a good boundary for each localization using the 5184 classified proteins as training data, we predicted 138 proteins whose subcellular localizations could not be clearly observed by the experiments of Huh et al. (2003). PMID:16966337

  13. Capsule endoscope localization based on computer vision technique.

    PubMed

    Liu, Li; Hu, Chao; Cai, Wentao; Meng, Max Q H

    2009-01-01

    To build a new type of wireless capsule endoscope with interactive gastrointestinal tract examination, a localization and orientation system is needed for tracking the 3D location and 3D orientation of the capsule movement. The magnetic localization and orientation method provides only 5 DOF and misses the rotation angle about the capsule's main axis. In this paper, we present a complementary orientation approach for the capsule endoscope, in which the 3D rotation is determined by applying computer vision techniques to the captured endoscopic images. The experimental results show that the complementary orientation method has good accuracy and high feasibility.

  14. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    DOE PAGES

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
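
    The abstract does not write out the governing equations; a commonly studied coupled nonlinear Schrödinger system of the kind such LDG/exponential-time-differencing schemes target is shown below as a hedged illustration (the paper's exact system and coefficients may differ), with β the cross-coupling coefficient.

```latex
% A commonly studied coupled nonlinear Schrodinger system of the type such
% schemes target (the paper's exact system and coefficients may differ):
\[
i\,\frac{\partial u}{\partial t} + \frac{\partial^2 u}{\partial x^2}
  + \bigl(|u|^2 + \beta\,|v|^2\bigr)u = 0,
\qquad
i\,\frac{\partial v}{\partial t} + \frac{\partial^2 v}{\partial x^2}
  + \bigl(|v|^2 + \beta\,|u|^2\bigr)v = 0.
\]
```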

  15. Illustrating a Mixed-Method Approach for Validating Culturally Specific Constructs

    ERIC Educational Resources Information Center

    Hitchcock, J.H.; Nastasi, B.K.; Dai, D.Y.; Newman, J.; Jayasena, A.; Bernstein-Moore, R.; Sarkar, S.; Varjas, K.

    2005-01-01

    The purpose of this article is to illustrate a mixed-method approach (i.e., combining qualitative and quantitative methods) for advancing the study of construct validation in cross-cultural research. The article offers a detailed illustration of the approach using the responses 612 Sri Lankan adolescents provided to an ethnographic survey. Such…

  16. Research on vibration signal analysis and extraction method of gear local fault

    NASA Astrophysics Data System (ADS)

    Yang, X. F.; Wang, D.; Ma, J. F.; Shao, W.

    2018-02-01

    Gears are the main connecting and power transmission parts in mechanical equipment. If a fault occurs, it directly affects the running state of the whole machine and may even endanger personal safety. It therefore has important theoretical significance and practical value to study the extraction of the gear fault signal and the fault diagnosis of gears. In this paper, taking the local gear fault as the research object, we set up a vibration model of the gear fault mechanism, derive the vibration mechanism of the local gear fault, and analyze the similarities and differences between the vibration signals of healthy gears and gears with local faults. In the MATLAB environment, the wavelet transform algorithm is used to denoise the fault signal, and the Hilbert transform is used to demodulate the fault vibration signal. The results show that the method can denoise a mechanical vibration signal with strong noise and extract the local fault feature information from the fault vibration signal.
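
    A hedged sketch of the processing chain described above (wavelet denoising followed by Hilbert-envelope demodulation) is given below on a synthetic faulty-gear signal. PyWavelets and SciPy are assumed to be available, and the wavelet choice, threshold rule and signal parameters are illustrative, not the paper's exact settings.

```python
import numpy as np
import pywt
from scipy.signal import hilbert

# Sketch of the chain described above: wavelet denoising, then Hilbert-
# envelope demodulation. The synthetic signal, wavelet and threshold rule
# are assumptions, not the paper's settings (which used MATLAB).
fs = 12_000
t = np.arange(0, 1.0, 1.0 / fs)
mesh = np.sin(2 * np.pi * 800 * t)                         # gear-mesh tone
fault = 1.0 + 0.8 * np.sign(np.sin(2 * np.pi * 20 * t))    # 20 Hz local-fault modulation
signal = mesh * fault + 0.5 * np.random.randn(t.size)      # noisy measurement

# Wavelet denoising: soft-threshold the detail coefficients.
coeffs = pywt.wavedec(signal, "db8", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745             # noise estimate
thr = sigma * np.sqrt(2 * np.log(signal.size))             # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
denoised = pywt.waverec(coeffs, "db8")[: signal.size]

# Hilbert-envelope demodulation: the fault frequency appears in the
# spectrum of the envelope.
envelope = np.abs(hilbert(denoised))
env_spectrum = np.abs(np.fft.rfft(envelope - envelope.mean()))
freqs = np.fft.rfftfreq(envelope.size, 1.0 / fs)
print("dominant envelope frequency [Hz]:", freqs[np.argmax(env_spectrum)])
```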

  17. Designing and Evaluating Bamboo Harvesting Methods for Local Needs: Integrating Local Ecological Knowledge and Science.

    PubMed

    Darabant, András; Rai, Prem Bahadur; Staudhammer, Christina Lynn; Dorji, Tshewang

    2016-08-01

    Dendrocalamus hamiltonii, a large, clump-forming bamboo, has great potential to contribute towards poverty alleviation efforts across its distributional range. Harvesting methods that maximize yield while they fulfill local objectives and ensure sustainability are a research priority. Documenting local ecological knowledge on the species and identifying local users' goals for its production, we defined three harvesting treatments (selective cut, horseshoe cut, clear cut) and experimentally compared them with a no-intervention control treatment in an action research framework. We implemented harvesting over three seasons and monitored annually and two years post-treatment. Even though the total number of culms positively influenced the number of shoots regenerated, a much stronger relationship was detected between the number of culms harvested and the number of shoots regenerated, indicating compensatory growth mechanisms to guide shoot regeneration. Shoot recruitment declined over time in all treatments as well as the control; however, there was no difference among harvest treatments. Culm recruitment declined with an increase in harvesting intensity. When univariately assessing the number of harvested culms and shoots, there were no differences among treatments. However, multivariate analyses simultaneously considering both variables showed that harvested output of shoots and culms was higher with clear cut and horseshoe cut as compared to selective cut. Given the ease of implementation and issues of work safety, users preferred the horseshoe cut, but the lack of sustainability of shoot production calls for investigating longer cutting cycles.

  18. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images will decrease the performances of SRC and most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms which process each sub-image of a face image independently, the proposed algorithm regards the local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.

  19. Comparison of alternative MS/MS and bioinformatics approaches for confident phosphorylation site localization.

    PubMed

    Wiese, Heike; Kuhlmann, Katja; Wiese, Sebastian; Stoepel, Nadine S; Pawlas, Magdalena; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin; Drepper, Friedel; Warscheid, Bettina

    2014-02-07

    Over the past years, phosphoproteomics has advanced to a prime tool in signaling research. Since then, an enormous amount of information about in vivo protein phosphorylation events has been collected, providing a treasure trove for gaining a better understanding of the molecular processes involved in cell signaling. Yet, we still face the problem of how to achieve correct modification site localization. Here we use alternative fragmentation and different bioinformatics approaches for the identification and confident localization of phosphorylation sites. Phosphopeptide-enriched fractions were analyzed by multistage activation, collision-induced dissociation and electron transfer dissociation (ETD), yielding complementary phosphopeptide identifications. We further found that MASCOT, OMSSA and Andromeda each identified a distinct set of phosphopeptides allowing the number of site assignments to be increased. The postsearch engine SLoMo provided confident phosphorylation site localization, whereas different versions of PTM-Score integrated in MaxQuant differed in performance. Based on high-resolution ETD and higher collisional dissociation (HCD) data sets from a large synthetic peptide and phosphopeptide reference library reported by Marx et al. [Nat. Biotechnol. 2013, 31 (6), 557-564], we show that an Andromeda/PTM-Score probability of 1 is required to provide a false localization rate (FLR) of 1% for HCD data, while 0.55 is sufficient for high-resolution ETD spectra. Additional analyses of HCD data demonstrated that for phosphotyrosine peptides and phosphopeptides containing two potential phosphorylation sites, PTM-Score probability cutoff values of <1 can be applied to ensure an FLR of 1%. Proper adjustment of localization probability cutoffs allowed us to significantly increase the number of confident sites with an FLR of <1%. Our findings underscore the need for the systematic assessment of FLRs for different score values to report confident modification site

  20. Exploiting the spatial locality of electron correlation within the parametric two-electron reduced-density-matrix method

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene; Mazziotti, David A.

    2010-01-01

    The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.

  1. Planning Target Margin Calculations for Prostate Radiotherapy Based on Intrafraction and Interfraction Motion Using Four Localization Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beltran, Chris; Herman, Michael G.; Davis, Brian J.

    2008-01-01

    Purpose: To determine planning target volume (PTV) margins for prostate radiotherapy based on the internal margin (IM) (intrafractional motion) and the setup margin (SM) (interfractional motion) for four daily localization methods: skin marks (tattoo), pelvic bony anatomy (bone), intraprostatic gold seeds using a 5-mm action threshold, and using no threshold. Methods and Materials: Forty prostate cancer patients were treated with external radiotherapy according to an online localization protocol using four intraprostatic gold seeds and electronic portal images (EPIs). Daily localization and treatment EPIs were obtained. These data allowed inter- and intrafractional analysis of prostate motion. The SM for the four daily localization methods and the IM were determined. Results: A total of 1532 fractions were analyzed. Tattoo localization requires a SM of 6.8 mm left-right (LR), 7.2 mm inferior-superior (IS), and 9.8 mm anterior-posterior (AP). Bone localization requires 3.1, 8.9, and 10.7 mm, respectively. The 5-mm threshold localization requires 4.0, 3.9, and 3.7 mm. No threshold localization requires 3.4, 3.2, and 3.2 mm. The intrafractional prostate motion requires an IM of 2.4 mm LR, 3.4 mm IS and AP. The PTV margin using the 5-mm threshold, including interobserver uncertainty, IM, and SM, is 4.8 mm LR, 5.4 mm IS, and 5.2 mm AP. Conclusions: Localization based on EPI with implanted gold seeds allows a large PTV margin reduction when compared with tattoo localization. Except for the LR direction, bony anatomy localization does not decrease the margins compared with tattoo localization. Intrafractional prostate motion is a limiting factor on margin reduction.
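
    The abstract reports combined margins but does not state the recipe used to combine systematic and random components; the snippet below shows a widely cited recipe (the van Herk margin formula, M = 2.5Σ + 0.7σ) purely as an illustrative calculation with made-up numbers, not as a reproduction of this study's method.

```python
# Illustrative only: the van Herk margin recipe M = 2.5*Sigma + 0.7*sigma,
# a widely cited way to combine systematic (Sigma) and random (sigma) error
# components into a PTV margin. The numbers below are made up and this
# study may have used a different recipe.
def ptv_margin(systematic_mm, random_mm):
    return 2.5 * systematic_mm + 0.7 * random_mm

# Hypothetical per-axis standard deviations (mm): (systematic, random).
per_axis = {"LR": (1.2, 2.0), "SI": (1.5, 2.5), "AP": (1.4, 2.4)}
for axis, (Sigma, sigma) in per_axis.items():
    print(axis, round(ptv_margin(Sigma, sigma), 1), "mm")
```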

  2. Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.

    PubMed

    Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E

    2018-06-01

    An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two, three, six, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.

  3. Simultaneous Local Binary Feature Learning and Encoding for Homogeneous and Heterogeneous Face Recognition.

    PubMed

    Lu, Jiwen; Erin Liong, Venice; Zhou, Jie

    2017-08-09

    In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.

  4. A hybrid neural learning algorithm using evolutionary learning and derivative free local search method.

    PubMed

    Ghosh, Ranadhir; Yearwood, John; Ghosh, Moumita; Bagirov, Adil

    2006-06-01

    In this paper we investigate a hybrid model based on the Discrete Gradient method and an evolutionary strategy for determining the weights in a feed-forward artificial neural network, and we discuss different variants of such hybrid models. The Discrete Gradient method has the advantage of being able to jump over many local minima and find very deep local minima. However, earlier research has shown that a good starting point for the Discrete Gradient method can improve the quality of the solution point. Evolutionary algorithms are best suited for global optimisation problems. Nevertheless, they suffer from longer training times and are often unsuitable for real-world applications. For optimisation problems such as weight optimisation for ANNs in real-world applications, the dimensions are large and time complexity is critical. Hence a hybrid model can be a suitable option. In this paper we propose different fusion strategies for hybrid models combining the evolutionary strategy with the Discrete Gradient method to obtain an optimal solution much more quickly. Three fusion strategies are discussed: a linear hybrid model, an iterative hybrid model, and a restricted local search hybrid model. Comparative results on a range of standard datasets are provided for the different hybrid models.
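
    The linear fusion strategy (run the evolutionary search first, then refine locally from its best point) lends itself to a compact sketch. This is a hedged illustration only: SciPy's Nelder-Mead simplex is used as a stand-in for the Discrete Gradient method, which is not available in standard libraries, and the tiny one-hidden-layer network, population size, and step sizes are arbitrary choices for the example.

```python
import numpy as np
from scipy.optimize import minimize

def loss(w, X, y):
    """Mean-squared error of a tiny one-hidden-layer network whose weights are packed in w."""
    n_in, n_hid = X.shape[1], 5
    W1 = w[:n_in * n_hid].reshape(n_in, n_hid)
    w2 = w[n_in * n_hid:n_in * n_hid + n_hid]
    b = w[-1]
    pred = np.tanh(X @ W1) @ w2 + b
    return np.mean((pred - y) ** 2)

def hybrid_train(X, y, pop=30, gens=50, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = X.shape[1] * 5 + 5 + 1
    best = rng.normal(size=dim)
    best_f = loss(best, X, y)
    for _ in range(gens):                              # simple (1 + pop) evolutionary strategy
        children = best + sigma * rng.normal(size=(pop, dim))
        f = np.array([loss(c, X, y) for c in children])
        if f.min() < best_f:
            best, best_f = children[f.argmin()], f.min()
    # derivative-free local refinement from the evolved starting point
    res = minimize(loss, best, args=(X, y), method="Nelder-Mead",
                   options={"maxiter": 2000, "xatol": 1e-6})
    return res.x, res.fun

X = np.random.default_rng(1).normal(size=(100, 3))
y = np.sin(X[:, 0]) + 0.1 * X[:, 1]
w, f = hybrid_train(X, y)
print("final training MSE:", f)
```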

  5. A Historical Perspective on Local Environmental Movements in Japan: Lessons for the Transdisciplinary Approach on Water Resource Governance

    NASA Astrophysics Data System (ADS)

    Oh, T.

    2014-12-01

    Typical studies on natural resources from a social science perspective tend to choose one type of resource—water, for example—and ask what factors contribute to the sustainable use or wasteful exploitation of that resource. However, climate change and economic development, which are putting increased pressure on local resources and presenting communities with increased levels of tradeoffs and potential conflicts, force us to consider the trade-offs between options for using a particular resource. Therefore, a transdisciplinary approach that accurately captures the advantages and disadvantages of various possible resource uses is particularly important in complex social-ecological systems, where concerns about inequality with respect to resource use and access have become unavoidable. Needless to say, resource management and policy require a sound scientific understanding of the complex interconnections between nature and society. However, in contrast to typical international discussions, I discuss Japan not as an "advanced" case where various dilemmas have been successfully addressed by the government through the optimal use of technology, but rather as a nation seeing an emerging trend based on an awareness of the connections between local resources and the environment. Furthermore, from a historical viewpoint, the nexus of local resources is not a brand-new idea in the experience of environmental governance in Japan. Local environmental movements have existed that emphasized the interconnection of local resources and succeeded in prompting government action and policymaking. For this reason, local movements and local knowledge of resource governance warrant attention. This study focuses on historical cases relevant to water resource management, including groundwater, and considers the contexts and conditions needed to holistically address local resource problems, paying particular attention to interactions between science and society.

  6. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between the measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and with a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements taken on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
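
    A minimal sketch of the broadband matched-field idea follows: correlate the measured field with a replica field at each candidate source position and frequency, then incoherently average the single-frequency correlations. The free-field monopole replica used here is a simplifying assumption; the paper's replica fields would reflect the actual tunnel geometry and array layout.

```python
import numpy as np

def replica_field(src, receivers, k):
    """Free-field monopole replica pressure at the receivers for wavenumber k (assumed)."""
    r = np.linalg.norm(receivers - src, axis=1)
    return np.exp(1j * k * r) / r

def broadband_bartlett(measured, receivers, candidates, wavenumbers):
    """measured: (n_freq, n_receivers) complex pressures; candidates: (n_cand, 3) positions."""
    scores = np.zeros(len(candidates))
    for i, src in enumerate(candidates):
        corr = 0.0
        for f, k in enumerate(wavenumbers):
            w = replica_field(src, receivers, k)
            w /= np.linalg.norm(w)
            d = measured[f] / np.linalg.norm(measured[f])
            corr += np.abs(np.vdot(w, d)) ** 2        # single-frequency Bartlett power
        scores[i] = corr / len(wavenumbers)           # incoherent average over frequencies
    return candidates[np.argmax(scores)], scores
```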

  7. Ethics, Collaboration, and Presentation Methods for Local and Traditional Knowledge for Understanding Arctic Change

    NASA Astrophysics Data System (ADS)

    Parsons, M. A.; Gearheard, S.; McNeave, C.

    2009-12-01

    Local and traditional knowledge (LTK) provides rich information about the Arctic environment at spatial and temporal scales that scientific knowledge often does not have access to (e.g. localized observations of fine-scale ecological change potentially from many different communities, or local sea ice and conditions prior to 1950s ice charts and 1970s satellite records). Community-based observations and monitoring are an opportunity for Arctic residents to provide ‘frontline’ observations and measurements that are an early warning system for Arctic change. The Exchange for Local Observations and Knowledge of the Arctic (ELOKA) was established in response to the growing number of community-based and community-oriented research and observation projects in the Arctic. ELOKA provides data management and user support to facilitate the collection, preservation, exchange, and use of local observations and knowledge. Managing these data presents unique ethical challenges in terms of appropriate use of rare human knowledge and ensuring that knowledge is not lost from the local communities and not exploited in ways antithetical to community culture and desires. Local Arctic residents must be engaged as true collaborative partners while respecting their perspectives, which may vary substantially from a western science perspective. At the same time, we seek to derive scientific meaning from the local knowledge that can be used in conjunction with quantitative science data. This creates new challenges in terms of data presentation, knowledge representations, and basic issues of metadata. This presentation reviews these challenges, some initial approaches to addressing them, and overall lessons learned and future directions.

  8. A novel endoscopic fluorescent band ligation method for tumor localization.

    PubMed

    Hyun, Jong Hee; Kim, Seok-Ki; Kim, Kwang Gi; Kim, Hong Rae; Lee, Hyun Min; Park, Sunup; Kim, Sung Chun; Choi, Yongdoo; Sohn, Dae Kyung

    2016-10-01

    Accurate tumor localization is essential for minimally invasive surgery. This study describes the development of a novel endoscopic fluorescent band ligation method for the rapid and accurate identification of tumor sites during surgery. The method utilized a fluorescent rubber band, made of indocyanine green (ICG) and a liquid rubber solution mixture, as well as a near-infrared fluorescence laparoscopic system with a dual light source using a high-powered light-emitting diode (LED) and a 785-nm laser diode. The fluorescent rubber bands were endoscopically placed on the mucosae of porcine stomachs and colons. During subsequent conventional laparoscopic stomach and colon surgery, the fluorescent bands were assayed using the near-infrared fluorescence laparoscopy system. The locations of the fluorescent clips were clearly identified on the fluorescence images in real time. The system was able to distinguish the two or three bands marked on the mucosal surfaces of the stomach and colon. Resection margins around the fluorescent bands were sufficient in the resected specimens obtained during stomach and colon surgery. These novel endoscopic fluorescent bands could be rapidly and accurately localized during stomach and colon surgery. Use of these bands may make possible the excision of exact target sites during minimally invasive gastrointestinal surgery.

  9. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
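
    The role of sign disambiguation in building repeatable local reference frames can be illustrated with a short sketch: principal axes come from the neighborhood covariance, and each axis is flipped so it points toward the majority of neighbors. The specific disambiguation rule proposed in the paper may differ; this only shows the general idea.

```python
import numpy as np

def local_reference_frame(keypoint, neighbors):
    """Covariance-based local frame at `keypoint` with majority-vote sign disambiguation."""
    d = neighbors - keypoint
    cov = d.T @ d / len(d)
    evals, evecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    axes = evecs[:, ::-1].T                   # rows: principal axes, largest variance first
    for i in range(2):
        # flip an axis if the majority of neighbors lie on its negative side
        if np.sum(d @ axes[i] < 0) > len(d) / 2:
            axes[i] = -axes[i]
    axes[2] = np.cross(axes[0], axes[1])      # third axis completes a right-handed frame
    return axes

# toy usage: a noisy planar patch around the origin
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                       0.01 * rng.normal(size=200)])
print(local_reference_frame(np.zeros(3), pts))
```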

  10. Efficient and accurate local single reference correlation methods for high-spin open-shell molecules using pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Hansen, Andreas; Liakos, Dimitrios G.; Neese, Frank

    2011-12-01

    A production level implementation of the high-spin open-shell (spin unrestricted) single reference coupled pair, quadratic configuration interaction and coupled cluster methods with up to doubly excited determinants in the framework of the local pair natural orbital (LPNO) concept is reported. This work is an extension of the closed-shell LPNO methods developed earlier [F. Neese, F. Wennmohs, and A. Hansen, J. Chem. Phys. 130, 114108 (2009), 10.1063/1.3086717; F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The internal space is spanned by localized orbitals, while the external space for each electron pair is represented by a truncated PNO expansion. The laborious integral transformation associated with the large number of PNOs becomes feasible through the extensive use of density fitting (resolution of the identity (RI)) techniques. Technical complications arising for the open-shell case and the use of quasi-restricted orbitals for the construction of the reference determinant are discussed in detail. As in the closed-shell case, only three cutoff parameters control the average number of PNOs per electron pair, the size of the significant pair list, and the number of contributing auxiliary basis functions per PNO. The chosen threshold default values ensure robustness and the results of the parent canonical methods are reproduced to high accuracy. Comprehensive numerical tests on absolute and relative energies as well as timings consistently show that the outstanding performance of the LPNO methods carries over to the open-shell case with minor modifications. Finally, hyperfine couplings calculated with the variational LPNO-CEPA/1 method, for which a well-defined expectation value type density exists, indicate the great potential of the LPNO approach for the efficient calculation of molecular properties.

  11. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level-independent variables were defined, teaching approach (contextual and non-contextual…

  12. Post-partum depression in Kinshasa, Democratic Republic of Congo: validation of a concept using a mixed-methods cross-cultural approach.

    PubMed

    Bass, Judith K; Ryder, Robert W; Lammers, Marie-Christine; Mukaba, Thibaut N; Bolton, Paul A

    2008-12-01

    To determine if a post-partum depression syndrome exists among mothers in Kinshasa, Democratic Republic of Congo, by adapting and validating standard screening instruments. Using qualitative interviewing techniques, we interviewed a convenience sample of 80 women living in a large peri-urban community to better understand local conceptions of mental illness. We used this information to adapt two standard depression screeners, the Edinburgh Post-partum Depression Scale and the Hopkins Symptom Checklist. In a subsequent quantitative study, we identified another 133 women with and without the local depression syndrome and used this information to validate the adapted screening instruments. Based on the qualitative data, we found a local syndrome that closely approximates the Western model of major depressive disorder. The women we interviewed, representative of the local populace, considered this an important syndrome among new mothers because it negatively affects women and their young children. Women (n = 41) identified as suffering from this syndrome had statistically significantly higher depression severity scores on both adapted screeners than women identified as not having this syndrome (n = 20; P < 0.0001). When it is unclear or unknown if Western models of psychopathology are appropriate for use in the local context, these models must be validated to ensure cross-cultural applicability. Using a mixed-methods approach we found a local syndrome similar to depression and validated instruments to screen for this disorder. As the importance of compromised mental health in developing world populations becomes recognized, the methods described in this report will be useful more widely.

  13. Understanding Design Tradeoffs for Health Technologies: A Mixed-Methods Approach

    PubMed Central

    O’Leary, Katie; Eschler, Jordan; Kendall, Logan; Vizer, Lisa M.; Ralston, James D.; Pratt, Wanda

    2017-01-01

    We introduce a mixed-methods approach for determining how people weigh tradeoffs in values related to health and technologies for health self-management. Our approach combines interviews with Q-methodology, a method from psychology uniquely suited to quantifying opinions. We derive the framework for structured data collection and analysis for the Q-methodology from theories of self-management of chronic illness and technology adoption. To illustrate the power of this new approach, we used it in a field study of nine older adults with type 2 diabetes, and nine mothers of children with asthma. Our mixed-methods approach provides three key advantages for health design science in HCI: (1) it provides a structured health sciences theoretical framework to guide data collection and analysis; (2) it enhances the coding of unstructured data with statistical patterns of polarizing and consensus views; and (3) it empowers participants to actively weigh competing values that are most personally significant to them. PMID:28804794

  14. Iterative approach as alternative to S-matrix in modal methods

    NASA Astrophysics Data System (ADS)

    Semenikhin, Igor; Zanuccoli, Mauro

    2014-12-01

    The continuously increasing complexity of opto-electronic devices and the rising demands on simulation accuracy lead to the need to solve very large systems of linear equations, making iterative methods promising and attractive from a computational point of view compared with direct methods. In particular, an iterative approach can potentially reduce the computational time required to solve Maxwell's equations with eigenmode expansion algorithms. Regardless of the particular eigenmode-finding method used, the expansion coefficients are as a rule computed by the scattering-matrix (S-matrix) approach or similar techniques requiring on the order of M^3 operations. In this work we consider alternatives to the S-matrix technique based on purely iterative or mixed direct-iterative approaches. The possibility of diminishing the impact of the M^3-order calculations on the overall time, and in some cases even of reducing the number of arithmetic operations to M^2, by applying iterative techniques is discussed. Numerical results are presented to illustrate the validity and potential of the proposed approaches.
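
    The trade-off described here (replacing an O(M^3) direct factorization with iterations whose dominant cost is an O(M^2) matrix-vector product) can be illustrated with a generic Krylov solver. The random, well-conditioned stand-in matrix below is purely illustrative and is not an actual eigenmode-expansion interface matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

M = 500
rng = np.random.default_rng(1)
A = np.eye(M) + 0.01 * rng.normal(size=(M, M))     # well-conditioned stand-in system
b = rng.normal(size=M)

# only matrix-vector products (O(M^2) each) are needed by the iterative solver
op = LinearOperator((M, M), matvec=lambda v: A @ v)
x, info = gmres(op, b)
print("converged:", info == 0, " residual:", np.linalg.norm(A @ x - b))
```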

  15. Virtual local target method for avoiding local minimum in potential field based robot navigation.

    PubMed

    Zou, Xi-Yong; Zhu, Jing

    2003-01-01

    A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable yet most frequently encountered problems in potential-field-based robot navigation. By appropriately appointing virtual local targets along the journey, it can be solved effectively. The key concept employed in this algorithm is the set of rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed, according to the rules, to temporarily replace the global goal. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results showed that it is very effective in complex obstacle environments.
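
    A rough sketch of the idea: when the attractive and repulsive forces nearly cancel (a symptom of an approaching local minimum), a virtual local target placed to the side of the current force balance temporarily replaces the global goal. The placement rule and thresholds below are simple assumptions, not the rules proposed in the paper.

```python
import numpy as np

def forces(pos, goal, obstacles, k_att=1.0, k_rep=100.0, influence=2.0):
    """Standard attractive/repulsive potential-field forces in 2D."""
    f_att = k_att * (goal - pos)
    f_rep = np.zeros(2)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-9 < d < influence:
            f_rep += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (pos - obs) / d
    return f_att, f_rep

def current_target(pos, goal, obstacles, stall_threshold=1e-2, offset=2.0):
    """Return the goal normally, or a virtual local target when a local minimum threatens."""
    f_att, f_rep = forces(pos, goal, obstacles)
    if np.linalg.norm(f_att + f_rep) < stall_threshold:     # forces nearly cancel
        side = np.array([-f_att[1], f_att[0]])              # perpendicular to the goal pull
        return pos + offset * side / (np.linalg.norm(side) + 1e-12)
    return goal
```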

  16. Sonic-box method employing local Mach number for oscillating wings with thickness

    NASA Technical Reports Server (NTRS)

    Ruo, S. Y.

    1978-01-01

    A computer program was developed to account approximately for the effects of finite wing thickness in the transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic-box program for a planar wing, which was previously extended to include the effects of the swept trailing edge and the thickness of the wing. The nonuniform flow caused by finite thickness is accounted for by applying the local linearization concept. The thickness effect, expressed in terms of the local Mach number, is included in the basic solution to replace the coordinate transformation method used in the earlier work. Calculations were made for a delta wing and a rectangular wing performing plunge and pitch oscillations, and the results were compared with those obtained from other methods. An input guide and a complete listing of the computer code are presented.

  17. Local surface curvature analysis based on reflection estimation

    NASA Astrophysics Data System (ADS)

    Lu, Qinglin; Laligant, Olivier; Fauvet, Eric; Zakharova, Anastasia

    2015-07-01

    In this paper, we propose a novel reflection-based method to estimate the local orientation of a specular surface. For a calibrated scene with a fixed light band, the band is reflected by the surface to the image plane of a camera. Then the local geometry between the surface and the reflected band is estimated. First, to find the relationship relating the object position, the object surface orientation, and the band reflection, we study the fundamental geometry between a specular mirror surface and a band source. We then extend our approach to spherical surfaces with arbitrary curvature. Experiments are conducted with a mirror surface and a spherical surface. Results show that our method is able to obtain the local surface orientation merely by measuring the displacement and the form of the reflection.

  18. Numerical modeling of local scour around hydraulic structure in sandy beds by dynamic mesh method

    NASA Astrophysics Data System (ADS)

    Fan, Fei; Liang, Bingchen; Bai, Yuchuan; Zhu, Zhixia; Zhu, Yanjun

    2017-10-01

    Local scour, a non-negligible factor in hydraulic engineering, endangers the safety of hydraulic structures. In this work, a numerical model for simulating local scour was constructed, based on the open-source computational fluid dynamics code OpenFOAM. We consider both bedload and suspended load sediment transport in the scour model and adopt the dynamic mesh method to simulate the evolution of the bed elevation. We use the finite area method to project data between the three-dimensional flow model and the two-dimensional (2D) scour model. We also improved the 2D sand slide method and added it to the scour model to correct the bed bathymetry when the bed slope angle exceeds the angle of repose. Moreover, to validate the scour model, we conducted three experiments and compared their results with those of the developed model. The validation results show that our developed model can reliably simulate local scour.

  19. Local Laplacian Coding From Theoretical Analysis of Local Coding Schemes for Locally Linear Classification.

    PubMed

    Pang, Junbiao; Qin, Lei; Zhang, Chunjie; Zhang, Weigang; Huang, Qingming; Yin, Baocai

    2015-12-01

    Local coordinate coding (LCC) is a framework to approximate a Lipschitz smooth function by combining linear functions into a nonlinear one. For locally linear classification, LCC requires a coding scheme that heavily determines the nonlinear approximation ability, posing two main challenges: 1) locality, so that faraway anchors have smaller influence on the current data point, and 2) flexibility, balancing the reconstruction of the current data point against locality. In this paper, we address the problem through a theoretical analysis of the simplest local coding schemes, i.e., local Gaussian coding and local Student coding, and propose local Laplacian coding (LPC) to achieve both locality and flexibility. We apply LPC in locally linear classifiers to solve diverse classification tasks. Performance comparable to or exceeding that of state-of-the-art methods demonstrates the effectiveness of the proposed method.
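
    The locality property discussed above (faraway anchors receiving exponentially smaller influence) can be shown with a few lines comparing Gaussian- and Laplacian-style coding weights over a set of anchor points. Normalizing the weights into a convex combination is a simplification; the paper's coding schemes also involve a reconstruction term.

```python
import numpy as np

def local_coding_weights(x, anchors, sigma=1.0, kernel="laplacian"):
    """Locality-inducing weights over anchors: distant anchors get exponentially small weight."""
    dist = np.linalg.norm(anchors - x, axis=1)
    if kernel == "gaussian":
        w = np.exp(-dist**2 / (2.0 * sigma**2))
    else:                                   # Laplacian-style kernel decays with |distance|
        w = np.exp(-dist / sigma)
    return w / w.sum()                      # normalized into a convex combination

anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [3.0, 3.0]])
print(local_coding_weights(np.array([0.2, 0.1]), anchors))   # far anchor gets a tiny weight
```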

  20. Performance Analysis of Classification Methods for Indoor Localization in Vlc Networks

    NASA Astrophysics Data System (ADS)

    Sánchez-Rodríguez, D.; Alonso-González, I.; Sánchez-Medina, J.; Ley-Bosch, C.; Díaz-Vilariño, L.

    2017-09-01

    Indoor localization has gained considerable attention over the past decade because of the emergence of numerous location-aware services. Research has been carried out on solving this problem by using wireless networks. Nevertheless, there is still much room for improvement in the quality of the proposed classification models. In recent years, the emergence of Visible Light Communication (VLC) has brought a brand new approach to high-quality indoor positioning. Among its advantages, this new technology is immune to electromagnetic interference and has a smaller variance of received signal power compared to RF-based technologies. In this paper, a performance analysis of seventeen machine learning classifiers for indoor localization in VLC networks is carried out. The analysis is conducted in terms of accuracy, average distance error, computational cost, training size, precision, and recall. Results show that most of the classifiers achieve an accuracy above 90%. The best tested classifier yielded a 99.0% accuracy, with an average error distance of 0.3 centimetres.
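
    The kind of comparison reported here can be sketched with a handful of off-the-shelf classifiers evaluated on fingerprint-style features labeled by grid cell. The synthetic received-signal-strength data and the three classifiers below are illustrative stand-ins for the paper's seventeen classifiers and real VLC measurements.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# synthetic received-signal-strength fingerprints: one mean vector per grid cell
rng = np.random.default_rng(0)
n_cells, n_leds, per_cell = 16, 4, 50
X = np.vstack([rng.normal(loc=rng.uniform(0, 1, n_leds), scale=0.05, size=(per_cell, n_leds))
               for _ in range(n_cells)])
y = np.repeat(np.arange(n_cells), per_cell)

for name, clf in [("kNN", KNeighborsClassifier(5)),
                  ("RandomForest", RandomForestClassifier(200, random_state=0)),
                  ("SVM-RBF", SVC(gamma="scale"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())    # 5-fold accuracy per classifier
```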

  1. Utilising a collective case study system theory mixed methods approach: a rural health example

    PubMed Central

    2014-01-01

    Background Insight into local health service provision in rural communities is limited in the literature. The dominant workforce focus in the rural health literature, while revealing issues of shortage and maldistribution, does not describe service provision in rural towns. Similarly, aggregation of data tends to render local health service provision virtually invisible. This paper describes a methodology to explore specific aspects of rural health service provision, with an initial focus on understanding rurality as it pertains to rural physiotherapy service provision. Method A system theory-case study heuristic was combined with a sequential mixed-methods approach to provide a framework for both quantitative and qualitative exploration across sites. Stakeholder perspectives were obtained through surveys and in-depth interviews. The investigation site was a large area of one Australian state with a mix of rural, regional and remote communities. Results 39 surveys were received from 11 locations within the investigation site and 19 in-depth interviews were conducted. Stakeholder perspectives of rurality and workforce numbers informed the development of six case types relevant to the exploration of rural physiotherapy service provision. Participant perspectives of rurality often differed from the geographical classification of their location. The number of onsite colleagues and local access to health services contributed to participant perceptions of rurality. Conclusions The complexity of the concept of rurality was revealed by interview participants when providing their perspectives on rural physiotherapy service provision. Dual measures, such as rurality and workforce numbers, provide more relevant differentiation of sites for exploring specific services, such as rural physiotherapy service provision, than a single measure of rurality as defined by geographic classification. The system theory-case study heuristic supports both qualitative and quantitative

  2. Comprehensive approach to breast cancer detection using light: photon localization by ultrasound modulation and tissue characterization by spectral discrimination

    NASA Astrophysics Data System (ADS)

    Marks, Fay A.; Tomlinson, Harold W.; Brooksby, Glen W.

    1993-09-01

    A new technique called Ultrasound Tagging of Light (UTL) for imaging breast tissue is described. In this approach, photon localization in turbid tissue is achieved by cross-modulating a laser beam with focussed, pulsed ultrasound. Light which passes through the ultrasound focal spot is `tagged' with the frequency of the ultrasound pulse. The experimental system uses an Argon-Ion laser, a single PIN photodetector, and a 1 MHz fixed-focus pulsed ultrasound transducer. The utility of UTL as a photon localization technique in scattering media is examined using tissue phantoms consisting of gelatin and intralipid. In a separate study, in vivo optical reflectance spectrophotometry was performed on human breast tumors implanted intramuscularly and subcutaneously in nineteen nude mice. The validity of applying a quadruple wavelength breast cancer discrimination metric (developed using breast biopsy specimens) to the in vivo condition was tested. A scatter diagram for the in vivo model tumors based on this metric is presented using as the `normal' controls the hands and fingers of volunteers. Tumors at different growth stages were studied; these tumors ranged in size from a few millimeters to two centimeters. It is expected that when coupled with a suitable photon localization technique like UTL, spectral discrimination methods like this one will prove useful in the detection of breast cancer by non-ionizing means.

  3. Local structures around the substituted elements in mixed layered oxides

    PubMed Central

    Akama, Shota; Kobayashi, Wataru; Amaha, Kaoru; Niwa, Hideharu; Nitani, Hiroaki; Moritomo, Yutaka

    2017-01-01

    The chemical substitution of a transition metal (M) is an effective method to improve the functionality of a material, such as its electrochemical, magnetic, and dielectric properties. The substitution, however, causes local lattice distortion because the difference in the ionic radius (r) modifies the local interatomic distances. Here, we systematically investigated the local structures in the pure (x = 0.0) and mixed (x = 0.05 or 0.1) layered oxides, Na(M1−xM′x)O2 (M and M′ are the majority and minority transition metals, respectively), by means of extended X-ray absorption fine structure (EXAFS) analysis. We found that the local interatomic distance (dM-O) around the minority element approaches that around the majority element to reduce the local lattice distortion. We further found that the valence of the minority Mn changes so that its ionic radius approaches that of the majority M. PMID:28252008

  4. Nonconforming mortar element methods: Application to spectral discretizations

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Mavriplis, Cathy; Patera, Anthony

    1988-01-01

    Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.

  5. General Method for Constructing Local Hidden Variable Models for Entangled Quantum States

    NASA Astrophysics Data System (ADS)

    Cavalcanti, D.; Guerini, L.; Rabelo, R.; Skrzypczyk, P.

    2016-11-01

    Entanglement allows for the nonlocality of quantum theory, which is the resource behind device-independent quantum information protocols. However, not all entangled quantum states display nonlocality. A central question is to determine the precise relation between entanglement and nonlocality. Here we present the first general test to decide whether a quantum state is local, and show that the test can be implemented by semidefinite programming. This method can be applied to any given state and for the construction of new examples of states with local hidden variable models for both projective and general measurements. As applications, we provide a lower-bound estimate of the fraction of two-qubit local entangled states and present new explicit examples of such states, including those that arise from physical noise models, Bell-diagonal states, and noisy Greenberger-Horne-Zeilinger and W states.

  6. A locally conservative stabilized continuous Galerkin finite element method for two-phase flow in poroelastic subsurfaces

    NASA Astrophysics Data System (ADS)

    Deng, Q.; Ginting, V.; McCaskill, B.; Torsu, P.

    2017-10-01

    We study the application of a stabilized continuous Galerkin finite element method (CGFEM) in the simulation of multiphase flow in poroelastic subsurfaces. The system involves a nonlinear coupling between the fluid pressure, subsurface's deformation, and the fluid phase saturation, and as such, we represent this coupling through an iterative procedure. Spatial discretization of the poroelastic system employs the standard linear finite element in combination with a numerical diffusion term to maintain stability of the algebraic system. Furthermore, direct calculation of the normal velocities from pressure and deformation does not entail a locally conservative field. To alleviate this drawback, we propose an element based post-processing technique through which local conservation can be established. The performance of the method is validated through several examples illustrating the convergence of the method, the effectivity of the stabilization term, and the ability to achieve locally conservative normal velocities. Finally, the efficacy of the method is demonstrated through simulations of realistic multiphase flow in poroelastic subsurfaces.

  7. Local deformation for soft tissue simulation

    PubMed Central

    Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan

    2016-01-01

    ABSTRACT This paper presents a new methodology to localize the deformation range in order to improve the computational efficiency of soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method based on elastic theory is used to estimate the stress in soft tissues as a function of depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining modeling realism. PMID:27286482

  8. Non-axisymmetric local magnetostatic equilibrium

    DOE PAGES

    Candy, Jefferey M.; Belli, Emily A.

    2015-03-24

    In this study, we outline an approach to the problem of local equilibrium in non-axisymmetric configurations that adheres closely to Miller's original method for axisymmetric plasmas. Importantly, this method is novel in that it allows not only specification of 3D shape, but also explicit specification of the shear in the 3D shape. A spectrally-accurate method for solution of the resulting nonlinear partial differential equations is also developed. We verify the correctness of the spectral method, in the axisymmetric limit, through comparisons with an independent numerical solution. Some analytic results for the two-dimensional case are given, and the connection to Boozer coordinates is clarified.

  9. An Evaluation of Active Learning Causal Discovery Methods for Reverse-Engineering Local Causal Pathways of Gene Regulation

    PubMed Central

    Ma, Sisi; Kemmeren, Patrick; Aliferis, Constantin F.; Statnikov, Alexander

    2016-01-01

    Reverse-engineering of causal pathways that implicate diseases and vital cellular functions is a fundamental problem in biomedicine. Discovery of the local causal pathway of a target variable (that consists of its direct causes and direct effects) is essential for effective intervention and can facilitate accurate diagnosis and prognosis. Recent research has provided several active learning methods that can leverage passively observed high-throughput data to draft causal pathways and then refine the inferred relations with a limited number of experiments. The current study provides a comprehensive evaluation of the performance of active learning methods for local causal pathway discovery in real biological data. Specifically, 54 active learning methods/variants from 3 families of algorithms were applied for local causal pathways reconstruction of gene regulation for 5 transcription factors in S. cerevisiae. Four aspects of the methods’ performance were assessed, including adjacency discovery quality, edge orientation accuracy, complete pathway discovery quality, and experimental cost. The results of this study show that some methods provide significant performance benefits over others and therefore should be routinely used for local causal pathway discovery tasks. This study also demonstrates the feasibility of local causal pathway reconstruction in real biological systems with significant quality and low experimental cost. PMID:26939894

  10. Effects and repercussions of local/hospital-based health technology assessment (HTA): a systematic review

    PubMed Central

    2014-01-01

    Background Health technology assessment (HTA) is increasingly performed at the local or hospital level where the costs, impacts, and benefits of health technologies can be directly assessed. Although local/hospital-based HTA has been implemented for more than two decades in some jurisdictions, little is known about its effects and impact on hospital budget, clinical practices, and patient outcomes. We conducted a mixed-methods systematic review that aimed to synthesize current evidence regarding the effects and impact of local/hospital-based HTA. Methods We identified articles through PubMed and Embase and by citation tracking of included studies. We selected qualitative, quantitative, or mixed-methods studies with empirical data about the effects or impact of local/hospital-based HTA on decision-making, budget, or perceptions of stakeholders. We extracted the following information from included studies: country, methodological approach, and use of conceptual framework; local/hospital HTA approach and activities described; reported effects and impacts of local/hospital-based HTA; factors facilitating/hampering the use of hospital-based HTA recommendations; and perceptions of stakeholders concerning local/hospital HTA. Due to the great heterogeneity among studies, we conducted a narrative synthesis of their results. Results A total of 18 studies met the inclusion criteria. We reported the results according to the four approaches for performing HTA proposed by the Hospital Based HTA Interest Sub-Group: ambassador model, mini-HTA, internal committee, and HTA unit. Results showed that each of these approaches for performing HTA corresponds to specific needs and structures and has its strengths and limitations. Overall, studies showed positive impacts related to local/hospital-based HTA on hospital decisions and budgets, as well as positive perceptions from managers and clinicians. Conclusions Local/hospital-based HTA could influence decision-making on several aspects

  11. A procurement-based pathway for promoting public health: innovative purchasing approaches for state and local government agencies.

    PubMed

    Noonan, Kathleen; Miller, Dorothy; Sell, Katherine; Rubin, David

    2013-11-01

    Through their purchasing powers, government agencies can play a critical role in leveraging markets to create healthier foods. In the United States, state and local governments are implementing creative approaches to procuring healthier foods, moving beyond the traditional regulatory relationship between government and vendors. They are forging new partnerships between government, non-profits, and researchers to increase healthier purchasing. On the basis of case examples, this article proposes a pathway in which state and local government agencies can use the procurement cycle to improve healthy eating.

  12. Does remote sensing help translating local SGD investigation to large spatial scales?

    NASA Astrophysics Data System (ADS)

    Moosdorf, N.; Mallast, U.; Hennig, H.; Schubert, M.; Knoeller, K.; Neehaul, Y.

    2016-02-01

    Within the last 20 years, studies on submarine groundwater discharge (SGD) have revealed numerous processes, temporal behaviors and quantitative estimates as well as best-practice and localization methods. This plethora of information is valuable for understanding the magnitude and effects of SGD at the respective location. Yet, since local conditions vary, the translation of local understanding, magnitudes and effects to a regional or global scale is not trivial. In contrast, modeling approaches (e.g. the 228Ra budget) tackling SGD on a global scale do provide quantitative global estimates but have not been related to local investigations. This gap between the two approaches, local and global, and the combination and/or translation of either one to the other represents one of the major challenges the SGD community currently faces. But what if remote sensing could provide information that may be used as a translation between the two, similar to transfer functions in many other disciplines, allowing an extrapolation from in-situ investigated and quantified SGD (discrete information) to regional scales or beyond? Admittedly, the sketched future is ambitious and we will certainly not be able to present a solution to the raised question. Nonetheless, we will show a remote sensing based approach that is already able to identify potential SGD sites independent of location or hydrogeological conditions. Based on multi-temporal thermal information of the water surface as the core of the approach, SGD-influenced sites display a smaller thermal variation (thermal anomalies) than surrounding uninfluenced areas. Despite its apparent simplicity, the automated approach has helped to localize several sites that could be validated with proven in-situ methods. At the same time it carries the risk of identifying false positives, which can only be avoided if we can 'calibrate' the thermal anomalies so obtained against in-situ data. We will present all pros and cons of our

  13. An integrated bioanalytical method development and validation approach: case studies.

    PubMed

    Xue, Y-J; Melo, Brian; Vallejo, Martha; Zhao, Yuwen; Tang, Lina; Chen, Yuan-Shek; Keller, Karin M

    2012-10-01

    We proposed an integrated bioanalytical method development and validation approach: (1) method screening based on the analyte's physicochemical properties and metabolism information to determine the most appropriate extraction/analysis conditions; (2) preliminary stability evaluation using both quality control and incurred samples to establish sample collection, storage and processing conditions; (3) mock validation to examine method accuracy and precision and incurred sample reproducibility; and (4) method validation to confirm the results obtained during method development. This integrated approach was applied to the determination of compound I in rat plasma and compound II in rat and dog plasma. The effectiveness of the approach was demonstrated by the superior quality of three method validations: (1) a zero run failure rate; (2) >93% of quality control results within 10% of nominal values; and (3) 99% of incurred samples within 9.2% of the original values. In addition, rat and dog plasma methods for compound II were successfully applied to analyze more than 900 plasma samples obtained from Investigational New Drug (IND) toxicology studies in rats and dogs with near perfect results: (1) a zero run failure rate; (2) excellent accuracy and precision for standards and quality controls; and (3) 98% of incurred samples within 15% of the original values. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Automated delineation and characterization of drumlins using a localized contour tree approach

    NASA Astrophysics Data System (ADS)

    Wang, Shujie; Wu, Qiusheng; Ward, Dylan

    2017-10-01

    Drumlins are ubiquitous landforms in previously glaciated regions, formed through a series of complex subglacial processes operating underneath paleo-ice sheets. Accurate delineation and characterization of drumlins are essential for understanding the formation mechanism of drumlins as well as the flow behaviors and basal conditions of paleo-ice sheets. Automated mapping of drumlins is particularly important for examining the distribution patterns of drumlins across large spatial scales. This paper presents an automated vector-based approach to mapping drumlins from high-resolution light detection and ranging (LiDAR) data. The rationale is to extract sets of concentric contours by building localized contour trees and establishing topological relationships. This automated method can overcome the shortcomings of previous manual and automated methods for mapping drumlins, for instance, the azimuthal biases introduced during the generation of shaded relief images. A case study was carried out over a portion of the New York Drumlin Field. Overall, 1181 drumlins were identified from the LiDAR-derived DEM across the study region, a number that had been underestimated in the previous literature. The delineation results were visually and statistically compared to manual digitization results. The morphology of the drumlins was characterized by quantifying their length, width, elongation ratio, height, area, and volume. Statistical and spatial analyses were conducted to examine the distribution pattern and spatial variability of drumlin size and form. The drumlins and their morphologic characteristics exhibit significant spatial clustering rather than randomly distributed patterns. The form of the drumlins varies from ovoid to spindle shapes towards the downstream direction of paleo ice flows, along with a decrease in width, area, and volume. This observation is in line with previous studies and may be explained by variations in sediment thickness and/or the velocity increases of ice flows
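
    The concentric-contour rationale can be sketched on a synthetic DEM: extract closed contours at successive elevation levels and test whether higher-level contours nest inside lower ones, which is the seed test behind a localized contour tree. Tree construction, noise handling, and the paper's morphometric measurements are omitted; the single Gaussian bump below is a toy surface.

```python
import numpy as np
from skimage import measure
from matplotlib.path import Path

# toy DEM with a single drumlin-like bump
yy, xx = np.mgrid[0:200, 0:200]
dem = 5.0 * np.exp(-((xx - 100)**2 / 800.0 + (yy - 100)**2 / 200.0))

levels = np.arange(0.5, 5.0, 0.5)
closed = {lv: [c for c in measure.find_contours(dem, lv) if np.allclose(c[0], c[-1])]
          for lv in levels}

def nested(inner, outer):
    """True if every vertex of `inner` lies inside the closed contour `outer`."""
    poly = Path(outer[:, ::-1])                   # contours are (row, col); Path wants (x, y)
    return all(poly.contains_point(p[::-1]) for p in inner)

base = closed[levels[0]][0]                       # lowest closed contour as the tree root
print(sum(nested(c, base) for lv in levels[1:] for c in closed[lv]))
```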

  15. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    PubMed Central

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on

  16. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    PubMed

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies
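
    The synthetic control construction itself is simple enough to sketch: choose nonnegative donor weights that sum to one and best reproduce the treated unit's pre-intervention outcome path, then read the post-intervention gap as the estimated effect. The constrained least-squares fit below is a minimal version; the data shapes, names, and the omission of predictor covariates are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def fit_synthetic_control(y_treated_pre, Y_donors_pre):
    """Nonnegative donor weights summing to one that best match the pre-period path.
    y_treated_pre: (T0,) outcomes; Y_donors_pre: (T0, J) donor outcomes."""
    J = Y_donors_pre.shape[1]
    obj = lambda w: np.sum((y_treated_pre - Y_donors_pre @ w) ** 2)
    cons = {"type": "eq", "fun": lambda w: np.sum(w) - 1.0}
    res = minimize(obj, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                   constraints=cons, method="SLSQP")
    return res.x

def estimated_effect(y_treated_post, Y_donors_post, weights):
    """Post-period gap between the treated unit and its synthetic control."""
    return y_treated_post - Y_donors_post @ weights
```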

  17. Method for localizing and isolating an errant process step

    DOEpatents

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.

    2003-01-01

    A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images each image having image content similar to image content extracted from a query image depicting a defect, each image in the selection having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step as a highest probable source of the defect according to the derived conditional probability distribution is then identified. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.

  18. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.

  19. Surgical management of primary bone tumors of the spine: validation of an approach to enhance cure and reduce local recurrence.

    PubMed

    Fisher, Charles G; Saravanja, Davor D; Dvorak, Marcel F; Rampersaud, Y Raja; Clarkson, Paul W; Hurlbert, John; Fox, Richard; Zhang, Hongbin; Lewis, Stephen; Riaz, Salman; Ferguson, Peter C; Boyd, Michael C

    2011-05-01

    Multicenter ambispective cohort analysis. The purpose of this study is to determine whether applying Enneking's principles to the surgical management of primary bone tumors of the spine significantly decreases local recurrence and/or mortality. Oncologic management of primary tumors of the spine has historically been inconsistent, controversial, and open to individual interpretation. A multicenter ambispective cohort analysis from 4 tertiary care spine referral centers was done. Patients were analyzed in 2 cohorts, "Enneking Appropriate" (EA), surgical margin as recommended by Enneking, and "Enneking Inappropriate" (EI), surgical margin not recommended by Enneking. Benign tumors were not included in the mortality analysis. The two cohorts represented an analytic dataset of 147 patients, 86 male, average age 46 years (range: 10-83). Median follow-up was 4 (2-7) years in the EA group and 6 (5.5-15.5) years in the EI group. Seventy-one patients suffered at least 1 local recurrence during the study, 57 of 77 in the EI group and 14 of 70 in the EA group. The EI surgical approach caused a higher risk of first local recurrence (P < 0.0001). There were 48 deaths in total; 29 in the EI group and 19 in the EA group. There was a strong correlation between the first local recurrence and mortality, with an odds ratio of 4.69 (P < 0.0001). The EI surgical approach resulted in a higher risk of mortality, with a hazard ratio of 3.10 (P = 0.0485), compared to the EA approach. Surgery results in a significant reduction in local recurrence when primary bone tumors of the spine are resected with EA margins. Local recurrence has a high concordance with mortality in resection of these tumors. A significant decrease in mortality occurs when EA surgery is used.

  20. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits provided by the system favor a better performance of construction projects. PMID:24453925

  1. Prediction of protein subcellular localization by weighted gene ontology terms.

    PubMed

    Chi, Sang-Mun

    2010-08-27

    We develop a new weighting approach of gene ontology (GO) terms for predicting protein subcellular localization. The weights of individual GO terms, corresponding to their contribution to the prediction algorithm, are determined by the term-weighting methods used in text categorization. We evaluate several term-weighting methods, which are based on inverse document frequency, information gain, gain ratio, odds ratio, and chi-square and its variants. Additionally, we propose a new term-weighting method based on the logarithmic transformation of chi-square. The proposed term-weighting method performs better than other term-weighting methods, and also outperforms state-of-the-art subcellular prediction methods. Our proposed method achieves 98.1%, 99.3%, 98.1%, 98.1%, and 95.9% overall accuracies for the animal BaCelLo independent dataset (IDS), fungal BaCelLo IDS, animal Höglund IDS, fungal Höglund IDS, and PLOC dataset, respectively. Furthermore, the close correlation between high-weighted GO terms and subcellular localizations suggests that our proposed method appropriately weights GO terms according to their relevance to the localizations. Copyright 2010 Elsevier Inc. All rights reserved.
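
    Two of the term-weighting ideas mentioned (inverse document frequency and a log-transformed chi-square) are easy to sketch for binary GO-term annotations. The exact formulas and normalizations used in the paper are not reproduced; these are common textbook forms given for orientation.

```python
import numpy as np

def idf_weights(term_doc):
    """term_doc: binary matrix (n_proteins, n_go_terms); one IDF weight per GO term."""
    n = term_doc.shape[0]
    df = term_doc.sum(axis=0)
    return np.log(n / (1.0 + df))

def log_chi2_weight(term_present, in_class):
    """Log-transformed chi-square association between one GO term and one localization class."""
    a = np.sum(term_present & in_class)
    b = np.sum(term_present & ~in_class)
    c = np.sum(~term_present & in_class)
    d = np.sum(~term_present & ~in_class)
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d) + 1e-12)
    return np.log1p(chi2)
```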

  2. Raster-based outranking method: a new approach for municipal solid waste landfill (MSW) siting.

    PubMed

    Hamzeh, Mohamad; Abbaspour, Rahim Ali; Davalou, Romina

    2015-08-01

    MSW landfill siting is a complicated process because it requires the integration of several factors. In this paper, geographic information system (GIS) and multiple criteria decision analysis (MCDA) were combined to handle municipal solid waste (MSW) landfill siting. For this purpose, first, 16 input data layers were prepared in the GIS environment. Then, the exclusionary lands were eliminated and potentially suitable areas for MSW disposal were identified. These potentially suitable areas, in an innovative approach, were further examined by deploying the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) II and the analytic network process (ANP), two of the most recent MCDA methods, in order to determine land suitability for landfilling. PROMETHEE II was used to determine a complete ranking of the alternatives, while ANP was employed to quantify the subjective judgments of evaluators as criteria weights. The resulting land suitability was reported on a grading scale from 1 to 5, corresponding to the least to the most suitable area, respectively. Finally, three optimal sites were selected by taking into consideration the local conditions of 15 sites which were candidates for MSW landfilling. Research findings show that the raster-based method yields effective results.
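
    A minimal sketch of a PROMETHEE II complete ranking with externally supplied (e.g., ANP-derived) criteria weights and the simple "usual" preference function. The scores, weights, and preference function are illustrative assumptions rather than the study's actual evaluation.

```python
# Hedged sketch of PROMETHEE II complete ranking. Names and numbers are illustrative.
import numpy as np

def promethee_ii(scores, weights):
    """scores: (n_alternatives, n_criteria), larger is better on every criterion.
       weights: (n_criteria,), summing to 1 (e.g., derived from ANP).
       Returns net outranking flows; a higher flow means a more suitable alternative."""
    n = scores.shape[0]
    pi = np.zeros((n, n))                       # aggregated preference indices
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            d = scores[a] - scores[b]
            pref = (d > 0).astype(float)        # "usual criterion" preference function
            pi[a, b] = np.dot(weights, pref)
    phi_plus = pi.sum(axis=1) / (n - 1)         # positive (leaving) flow
    phi_minus = pi.sum(axis=0) / (n - 1)        # negative (entering) flow
    return phi_plus - phi_minus                 # net flow used for the complete ranking

# Toy usage: 4 candidate sites scored on 3 suitability criteria
scores = np.array([[0.7, 0.5, 0.9],
                   [0.6, 0.8, 0.4],
                   [0.9, 0.4, 0.6],
                   [0.3, 0.9, 0.8]])
weights = np.array([0.5, 0.3, 0.2])
print(np.argsort(-promethee_ii(scores, weights)))   # indices of sites, best first
```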

  3. Extension of local front reconstruction method with controlled coalescence model

    NASA Astrophysics Data System (ADS)

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, LFRM is better at predicting the droplet collisions, especially at high velocity, in comparison with other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better in predicting the collision dynamics than the traditional methods.

  4. Local atomic structure of Fe/Cr multilayers: Depth-resolved method

    NASA Astrophysics Data System (ADS)

    Babanov, Yu. A.; Ponomarev, D. A.; Devyaterikov, D. I.; Salamatov, Yu. A.; Romashev, L. N.; Ustinov, V. V.; Vasin, V. V.; Ageev, A. L.

    2017-10-01

    A depth-resolved method for the investigation of the local atomic structure by combining data of X-ray reflectivity and angle-resolved EXAFS is proposed. The solution of the problem can be divided into three stages: 1) determination of the element concentration profile as a function of depth z from X-ray reflectivity data; 2) determination of the absorption coefficient μ_i(z, E) of element i as a function of depth and photon energy E from the angle-resolved X-ray fluorescence EXAFS data I_i^f(E, θ_l); 3) determination of the partial correlation functions g_ij(z, r) as a function of depth from μ_i(z, E). All stages of the proposed method are demonstrated on a model example of a multilayer nanoheterostructure Cr/Fe/Cr/Al2O3. Three partial pair correlation functions are obtained. A modified Levenberg-Marquardt algorithm and a regularization method are applied.

  5. Semi-rigid single hook localization the best method for localizing ground glass opacities during video-assisted thoracoscopic surgery: re-aerated swine lung experimental and primary clinical results

    PubMed Central

    Zhao, Guang; Sun, Long; Geng, Guojun; Liu, Hongming; Li, Ning; Liu, Suhuan; Hao, Bing

    2017-01-01

    Background The aim of this study was to compare the effects of currently available preoperative localization methods, including semi-rigid single hook-wire, double-thorn hook-wire, and microcoil, in localizing pulmonary nodules, and thus to select the best technology to assist video-assisted thoracoscopic surgery (VATS) for small ground glass opacities (GGO). Methods Preoperative CT-guided localizing techniques including semi-rigid single hook-wire, double-thorn hook-wire and microcoil were used in re-aerated fresh swine lung for localization experiments. The advantages and drawbacks of the three positioning technologies were compared, and the most optimal technique was then used in patients with GGO. Technical success and post-operative complications were used as primary endpoints. Results All three localizing techniques were successfully performed in the re-aerated fresh swine lung. The median tractive forces of the semi-rigid single hook wire, double-thorn hook wire and microcoil were 6.5, 4.85 and 0.2 N, respectively, as measured by a spring dynamometer. The wound sizes in the superficial pleura caused by unplugging the needles were 2 mm for the double-thorn hook wire, 1 mm for the semi-rigid single hook and 1 mm for the microcoil, respectively. In patients with GGOs, the semi-rigid hook wire localizations were successfully performed, without any complication requiring intervention. Dislodgement was reported in one patient before VATS. No major complications related to the preoperative hook wire localization and VATS were observed. Conclusions We found from our localization experiments in the swine lung that, among the three commonly used localization methods, the semi-rigid hook wire showed better operability and practicability than the double-thorn hook wire and microcoil. Preoperative localization of small pulmonary nodules with the single semi-rigid hook wire system shows a high success rate, acceptable utility and especially low dislodgement in VATS. PMID:29312722

  6. Microfabrication Method using a Combination of Local Ion Implantation and Magnetorheological Finishing

    NASA Astrophysics Data System (ADS)

    Han, Jin; Kim, Jong-Wook; Lee, Hiwon; Min, Byung-Kwon; Lee, Sang Jo

    2009-02-01

    A new microfabrication method that combines localized ion implantation and magnetorheological finishing is proposed. The proposed technique involves two steps. First, selected regions of a silicon wafer are irradiated with gallium ions by using a focused ion beam system. The mechanical properties of the irradiated regions are altered as a result of the ion implantation. Second, the wafer is processed by using a magnetorheological finishing method. During the finishing process, the regions not implanted with ions are preferentially removed. The material removal rate difference is utilized for microfabrication. The mechanisms of the proposed method are discussed, and applications are presented.

  7. Local motion-compensated method for high-quality 3D coronary artery reconstruction

    PubMed Central

    Liu, Bo; Bai, Xiangzhi; Zhou, Fugen

    2016-01-01

    The 3D reconstruction of the coronary artery from X-ray angiograms rotationally acquired on a C-arm has great clinical value. While cardiac-gated reconstruction has shown promising results, it suffers from the problem of residual motion. This work proposed a new local motion-compensated reconstruction method to handle this issue. An initial image was first reconstructed using a regularized iterative reconstruction method. Then a 3D/2D registration method was proposed to estimate the residual vessel motion. Finally, the residual motion was compensated in the final reconstruction using the extended iterative reconstruction method. Through quantitative evaluation, it was found that high-quality 3D reconstruction could be obtained and the result was comparable to that of a state-of-the-art method. PMID:28018741

  8. Teaching Research Methods in Communication Disorders: "A Problem-Based Learning Approach"

    ERIC Educational Resources Information Center

    Greenwald, Margaret L.

    2006-01-01

    A critical professional issue in speech-language pathology and audiology is the current shortage of researchers. In this context, the most effective methods for training graduate students in research must be identified and implemented. This article describes a problem-based approach to teaching research methods. In this approach, the instructor…

  9. A composite experimental dynamic substructuring method based on partitioned algorithms and localized Lagrange multipliers

    NASA Astrophysics Data System (ADS)

    Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca

    2018-02-01

    Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in time (e.g. the impulse-based substructuring) and frequency domains (i.e. the Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with consequent uncertainty propagation issues, either associated with experimental errors or modelling assumptions. Nonetheless, a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of their exploitation in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties emerging from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study composed of a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.

  10. A Fully Automated Method for Quantifying and Localizing White Matter Hyperintensities on MR Images

    PubMed Central

    Wu, Minjie; Rosano, Caterina; Butters, Meryl; Whyte, Ellen; Nable, Megan; Crooks, Ryan; Meltzer, Carolyn C.; Reynolds, Charles F.; Aizenstein, Howard J.

    2006-01-01

    White matter hyperintensities (WMH), commonly found on T2-weighted FLAIR brain MR images in the elderly, are associated with a number of neuropsychiatric disorders, including vascular dementia, Alzheimer’s disease, and late-life depression. Previous MRI studies of WMHs have primarily relied on subjective and global (i.e., full-brain) ratings of WMH grade. In the current study we implement and validate an automated method for quantifying and localizing WMHs. We adapt a fuzzy connected algorithm to automate the segmentation of WMHs and use a demons-based image registration to automate the anatomic localization of the WMHs using the Johns Hopkins University White Matter Atlas. The method is validated using the brain MR images acquired from eleven elderly subjects with late-onset late-life depression (LLD) and eight elderly controls. This dataset was chosen because LLD subjects are known to have significant WMH burden. The volumes of WMH identified by our automated method are compared with the accepted gold standard (manual ratings). A significant correlation between the automated method and the manual ratings is found (P<0.0001), demonstrating that the two methods yield similar WMH quantifications. As has been shown in other studies (e.g., Taylor et al., 2003), we found there was a significantly greater WMH burden in the LLD subjects versus the controls for both the manual and automated methods. The effect size was greater for the automated method, suggesting that it is a more specific measure. Additionally, we describe the anatomic localization of the WMHs in LLD subjects as well as in the control subjects, and detect the regions of interest (ROIs) specific to the WMH burden of LLD patients. Given the emergence of large neuroimage databases, techniques such as that described here will allow for a better understanding of the relationship between WMHs and neuropsychiatric disorders. PMID:17097277

  11. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficient estimates are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
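
    A one-dimensional sketch of the two-stage idea described above: squared OLS residuals are smoothed by a local linear fit to estimate the variance function, and the coefficients are then re-estimated by weighted/generalized least squares. The simulated data, bandwidth, and kernel are illustrative assumptions.

```python
# Hedged sketch of local-polynomial variance estimation followed by a WLS/GLS refit.
# One-dimensional covariate for clarity; all settings are illustrative.
import numpy as np

def local_linear(x0, x, z, h):
    """Local linear estimate of E[z | x = x0] using a Gaussian kernel with bandwidth h."""
    sw = np.sqrt(np.exp(-0.5 * ((x - x0) / h) ** 2))       # sqrt of kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.lstsq(X * sw[:, None], z * sw, rcond=None)[0]
    return beta[0]

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
sigma = 0.2 + 0.8 * x                         # heteroscedastic noise level
y = 1.0 + 2.0 * x + sigma * rng.normal(size=x.size)

# Stage 1: ordinary least squares and squared residuals
X = np.column_stack([np.ones_like(x), x])
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
res2 = (y - X @ beta_ols) ** 2

# Stage 2: local polynomial estimate of the variance function, then weighted refit
var_hat = np.array([local_linear(xi, x, res2, h=0.1) for xi in x])
w = 1.0 / np.clip(var_hat, 1e-6, None)
sw = np.sqrt(w)
beta_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
print(beta_ols, beta_wls)
```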

  12. Linear-scaling explicitly correlated treatment of solids: periodic local MP2-F12 method.

    PubMed

    Usvyat, Denis

    2013-11-21

    Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.

  13. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.

    PubMed

    Huang, Jiyan; Zhang, Ying; Luo, Shan

    2017-12-15

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed in this paper to increase positioning accuracy for a multi-station dual-frequency radars system. The effects of signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and Cramer-Rao lower bound (CRLB) are derived. The simulation results verify the proposed method.
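
    A minimal sketch of range-based localization by iterative weighted least squares (Gauss-Newton). This is a generic multi-station range solver under an assumed station geometry and noise model, not the paper's specific two-step closed-form estimator.

```python
# Hedged sketch: iterative weighted least squares localization from noisy range
# measurements at several stations. Geometry and noise levels are illustrative.
import numpy as np

def wls_locate(stations, ranges, weights, x0, iters=20):
    x = np.array(x0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        d = np.linalg.norm(stations - x, axis=1)
        J = (x - stations) / d[:, None]          # Jacobian of predicted ranges
        r = ranges - d                           # range residuals
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x += dx
    return x

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
target = np.array([37.0, 62.0])
rng = np.random.default_rng(2)
noise_std = np.array([0.5, 0.5, 1.0, 1.0])
ranges = np.linalg.norm(stations - target, axis=1) + noise_std * rng.normal(size=4)
est = wls_locate(stations, ranges, 1.0 / noise_std**2, x0=[50.0, 50.0])
print(est)                                       # close to the true target position
```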

  14. Receptive Field Inference with Localized Priors

    PubMed Central

    Park, Mijung; Pillow, Jonathan W.

    2011-01-01

    The linear receptive field describes a mapping from sensory stimuli to a one-dimensional variable governing a neuron's spike response. However, traditional receptive field estimators such as the spike-triggered average converge slowly and often require large amounts of data. Bayesian methods seek to overcome this problem by biasing estimates towards solutions that are more likely a priori, typically those with small, smooth, or sparse coefficients. Here we introduce a novel Bayesian receptive field estimator designed to incorporate locality, a powerful form of prior information about receptive field structure. The key to our approach is a hierarchical receptive field model that flexibly adapts to localized structure in both spacetime and spatiotemporal frequency, using an inference method known as empirical Bayes. We refer to our method as automatic locality determination (ALD), and show that it can accurately recover various types of smooth, sparse, and localized receptive fields. We apply ALD to neural data from retinal ganglion cells and V1 simple cells, and find it achieves error rates several times lower than standard estimators. Thus, estimates of comparable accuracy can be achieved with substantially less data. Finally, we introduce a computationally efficient Markov Chain Monte Carlo (MCMC) algorithm for fully Bayesian inference under the ALD prior, yielding accurate Bayesian confidence intervals for small or noisy datasets. PMID:22046110

  15. No effect of the infiltration of local anaesthetic for total hip arthroplasty using an anterior approach: a randomised placebo controlled trial.

    PubMed

    den Hartog, Y M; Mathijssen, N M C; van Dasselaar, N T; Langendijk, P N J; Vehmeijer, S B W

    2015-06-01

    Only limited data are available regarding the infiltration of local anaesthetic for total hip arthroplasty (THA), and no studies were performed for THA using the anterior approach. In this prospective, randomised placebo-controlled study we investigated the effect of both standard and reverse infiltration of local anaesthetic in combination with the anterior approach for THA. The primary endpoint was the mean numeric rating score for pain four hours post-operatively. In addition, we recorded the length of hospital stay, the operating time, the destination of the patient at discharge, the use of pain medication, the occurrence of side effects and pain scores at various times post-operatively. Between November 2012 and January 2014, 75 patients were included in the study. They were randomised into three groups: standard infiltration of local anaesthetic, reversed infiltration of local anaesthetic, and placebo. There was no difference in mean numeric rating score for pain four hours post-operatively (p = 0.87). There were significantly more side effects at one and eight hours post-operatively in the placebo group (p = 0.02; p = 0.03), but this did not influence the mobilisation of the patients. There were no differences in all other outcomes between the groups. We found no clinically relevant effect when the infiltration of local anaesthetic with ropivacaine and epinephrine was used in a multimodal pain protocol for THA using the anterior approach. ©2015 The British Editorial Society of Bone & Joint Surgery.

  16. Feasibility of A-mode ultrasound attenuation as a monitoring method of local hyperthermia treatment.

    PubMed

    Manaf, Noraida Abd; Aziz, Maizatul Nadwa Che; Ridzuan, Dzulfadhli Saffuan; Mohamad Salim, Maheza Irna; Wahab, Asnida Abd; Lai, Khin Wee; Hum, Yan Chai

    2016-06-01

    Recently, there has been increasing interest in the use of local hyperthermia treatment for a variety of clinical applications. The desired therapeutic outcome in local hyperthermia treatment is achieved by raising the local temperature to surpass the tissue coagulation threshold, resulting in tissue necrosis. In oncology, local hyperthermia is used as an effective way to destroy cancerous tissues and is said to have the potential to replace conventional treatment regimes like surgery, chemotherapy or radiotherapy. However, the inability to closely monitor temperature elevations from hyperthermia treatment in real time with high accuracy continues to limit its clinical applicability. Local hyperthermia treatment requires a real-time monitoring system to observe the progression of the destroyed tissue during and after the treatment. Ultrasound is one of the modalities that have great potential for local hyperthermia monitoring, as it is non-ionizing, convenient and has relatively simple signal processing requirements compared to magnetic resonance imaging and computed tomography. In a two-dimensional ultrasound imaging system, changes in tissue microstructure during local hyperthermia treatment are observed in terms of pixel value analysis extracted from the ultrasound image itself. Although 2D ultrasound has been shown to be the most widely used system for monitoring hyperthermia in the ultrasound imaging family, 1D ultrasound, on the other hand, offers real-time monitoring and enables quantitative measurement to be conducted faster and with a simpler measurement instrument. Therefore, this paper proposes a new local hyperthermia monitoring method that is based on one-dimensional ultrasound. Specifically, the study investigates the effect of ultrasound attenuation in normal and pathological breast tissue when the temperature in tissue is varied between 37 and 65 °C during local hyperthermia treatment. Besides that, the total protein content measurement was also

  17. Mentorship in Practice: A Multi-Method Approach.

    ERIC Educational Resources Information Center

    Schreck, Timothy J.; And Others

    This study was conducted to evaluate a field-based mentorship program using a multi-method approach. It explored the use of mentorship as practiced in the Florida Compact, a business education partnership established in Florida in 1987. The study was designed to identify differences between mentors and mentorees, as well as differences within…

  18. Towards a Viscous Wall Model for Immersed Boundary Methods

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

    Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is their inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. The inefficiency of resolving the thin boundary layer is associated with the use of constant-aspect-ratio Cartesian grid cells. Conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow field away from the walls. Different wall modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually only utilize local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly more challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.

  19. Identification of inelastic parameters based on deep drawing forming operations using a global-local hybrid Particle Swarm approach

    NASA Astrophysics Data System (ADS)

    Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.

    2016-04-01

    Application of optimization techniques to the identification of inelastic material parameters has substantially increased in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient techniques for inverse problems able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, of the Nelder-Mead downhill simplex algorithm, of Particle Swarm Optimization (PSO), and of a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has been shown to be the best strategy by combining the good PSO performance in approaching the global minimum basin of attraction with the efficiency demonstrated by the Nelder-Mead algorithm in obtaining the minimum itself.
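
    A minimal sketch of the global-local hybrid strategy: a basic particle swarm explores the parameter space, and the best particle seeds a Nelder-Mead refinement (via scipy). The objective function is a stand-in for the actual inverse identification misfit, and all settings are illustrative assumptions.

```python
# Hedged sketch of a PSO + Nelder-Mead hybrid. The objective, bounds, and swarm
# settings are illustrative placeholders for the real parameter identification problem.
import numpy as np
from scipy.optimize import minimize

def objective(p):                     # placeholder fitness (e.g., misfit to measured data)
    return np.sum((p - np.array([1.5, -2.0, 0.5])) ** 2) + 0.1 * np.sin(10 * p).sum() ** 2

def pso(fun, lb, ub, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = rng.uniform(lb, ub, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fun(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)
        f = np.array([fun(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g

lb, ub = np.array([-5.0, -5.0, -5.0]), np.array([5.0, 5.0, 5.0])
x_global = pso(objective, lb, ub)                              # global exploration
result = minimize(objective, x_global, method="Nelder-Mead")   # local refinement
print(x_global, result.x)
```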

  20. A quantitative microscopic approach to predict local recurrence based on in vivo intraoperative imaging of sarcoma tumor margins

    PubMed Central

    Mueller, Jenna L.; Fu, Henry L.; Mito, Jeffrey K.; Whitley, Melodi J.; Chitalia, Rhea; Erkanli, Alaattin; Dodd, Leslie; Cardona, Diana M.; Geradts, Joseph; Willett, Rebecca M.; Kirsch, David G.; Ramanujam, Nimmi

    2015-01-01

    The goal of resection of soft tissue sarcomas located in the extremity is to preserve limb function while completely excising the tumor with a margin of normal tissue. With surgery alone, one-third of patients with soft tissue sarcoma of the extremity will have local recurrence due to microscopic residual disease in the tumor bed. Currently, a limited number of intraoperative pathology-based techniques are used to assess margin status; however, few have been widely adopted due to sampling error and time constraints. To aid in intraoperative diagnosis, we developed a quantitative optical microscopy toolbox, which includes acriflavine staining, fluorescence microscopy, and analytic techniques called sparse component analysis and circle transform to yield quantitative diagnosis of tumor margins. A series of variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was tested by imaging the in vivo tumor cavities from 34 mice after resection of a sarcoma, with local recurrence as a benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%, respectively. For comparison, if pathology was used to predict local recurrence in this data set, it would achieve a sensitivity of 29% and a specificity of 71%. These results indicate a robust approach for detecting microscopic residual disease, which is an effective predictor of local recurrence. PMID:25994353

  1. Local interaction simulation approach to modelling nonclassical, nonlinear elastic behavior in solids.

    PubMed

    Scalerandi, Marco; Agostini, Valentina; Delsanto, Pier Paolo; Van Den Abeele, Koen; Johnson, Paul A

    2003-06-01

    Recent studies show that a broad category of materials share "nonclassical" nonlinear elastic behavior much different from "classical" (Landau-type) nonlinearity. Manifestations of "nonclassical" nonlinearity include stress-strain hysteresis and discrete memory in quasistatic experiments, and specific dependencies of the harmonic amplitudes with respect to the drive amplitude in dynamic wave experiments, which are remarkably different from those predicted by the classical theory. These materials have in common soft "bond" elements, where the elastic nonlinearity originates, contained in hard matter (e.g., a rock sample). The bond system normally comprises a small fraction of the total material volume, and can be localized (e.g., a crack in a solid) or distributed, as in a rock. In this paper a model is presented in which the soft elements are treated as hysteretic or reversible elastic units connected in a one-dimensional lattice to elastic elements (grains), which make up the hard matrix. Calculations are performed in the framework of the local interaction simulation approach (LISA). Experimental observations are well predicted by the model, which is now ready both for basic investigations about the physical origins of nonlinear elasticity and for applications to material damage diagnostics.

  2. Bioinspired sensory systems for local flow characterization

    NASA Astrophysics Data System (ADS)

    Colvert, Brendan; Chen, Kevin; Kanso, Eva

    2016-11-01

    Empirical evidence suggests that many aquatic organisms sense differential hydrodynamic signals. This sensory information is decoded to extract relevant flow properties. This task is challenging because it relies on local and partial measurements, whereas classical flow characterization methods depend on an external observer to reconstruct global flow fields. Here, we introduce a mathematical model in which a bioinspired sensory array measuring differences in local flow velocities characterizes the flow type and intensity. We linearize the flow field around the sensory array and express the velocity gradient tensor in terms of frame-independent parameters. We develop decoding algorithms that allow the sensory system to characterize the local flow and discuss the conditions under which this is possible. We apply this framework to the canonical problem of a circular cylinder in uniform flow, finding excellent agreement between sensed and actual properties. Our results imply that combining suitable velocity sensors with physics-based methods for decoding sensory measurements leads to a powerful approach for understanding and developing underwater sensory systems.
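
    A minimal 2D sketch of the linearization step: the local velocity gradient tensor is recovered from an array of velocity measurements by least squares and split into frame-independent strain-rate and vorticity components. The array geometry and flow values are illustrative assumptions.

```python
# Hedged sketch: fit the local velocity gradient tensor from a small sensor array and
# decompose it into symmetric (strain) and antisymmetric (rotation) parts. Illustrative.
import numpy as np

rng = np.random.default_rng(3)
G_true = np.array([[0.3, 1.0],
                   [-0.2, -0.3]])                # true (traceless) velocity gradient

sensors = rng.uniform(-1, 1, size=(6, 2))        # sensor positions relative to array center
u = sensors @ G_true.T + 0.01 * rng.normal(size=(6, 2))   # linearized local velocities

# Least-squares fit of G from u_i ≈ G x_i
G_est = np.linalg.lstsq(sensors, u, rcond=None)[0].T

S = 0.5 * (G_est + G_est.T)                      # strain-rate tensor (symmetric part)
W = 0.5 * (G_est - G_est.T)                      # rotation tensor (antisymmetric part)
shear_intensity = np.sqrt(2 * np.sum(S * S))     # frame-independent strain magnitude
vorticity = 2 * W[1, 0]                          # scalar vorticity in 2D
print(G_est, shear_intensity, vorticity)
```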

  3. Teaching Local Lore in EFL Class: New Approaches

    ERIC Educational Resources Information Center

    Yarmakeev, Iskander E.; Pimenova, Tatiana S.; Zamaletdinova, Gulyusa R.

    2016-01-01

    This paper is dedicated to the up-to-date educational problem, that is, the role of local lore in teaching EFL to University students. Although many educators admit that local lore knowledge plays a great role in the development of a well-bred and well-educated personality and meets students' needs, the problem has not been thoroughly studied.…

  4. Local statistics adaptive entropy coding method for the improvement of H.26L VLC coding

    NASA Astrophysics Data System (ADS)

    Yoo, Kook-yeol; Kim, Jong D.; Choi, Byung-Sun; Lee, Yung Lyul

    2000-05-01

    In this paper, we propose an adaptive entropy coding method to improve the VLC coding efficiency of the H.26L TML-1 codec. First, we show that the VLC coding presented in TML-1 does not satisfy the sibling property of entropy coding. Then, we modify the coding method into a local-statistics-adaptive one that satisfies the property. The proposed method, based on the local symbol statistics, dynamically changes the mapping relationship between symbols and bit patterns in the VLC table according to the sibling property. Note that the codewords in the VLC table of the TML-1 codec are not changed. Since the changed mapping relationship is also derived on the decoder side by using the decoded symbols, the proposed VLC coding method does not require any overhead information. The simulation results show that the proposed method gives about 30% and 37% reductions in average bit rate for MB type and CBP information, respectively.
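
    A minimal sketch of the general idea of adapting the symbol-to-codeword mapping from running (local) symbol statistics while keeping the codeword table fixed, with encoder and decoder updating identical counts so no side information is sent. The toy codeword table and alphabet are illustrative, not the H.26L TML-1 tables.

```python
# Hedged sketch: fixed prefix-free codeword table, adaptive symbol-to-codeword mapping
# driven by running symbol counts. Encoder and decoder stay in sync without overhead.
codewords = ["1", "01", "001", "0001", "00001"]     # fixed table, shortest first

def make_mapping(counts):
    # Symbols ordered by decreasing count (ties broken by symbol index)
    order = sorted(range(len(counts)), key=lambda s: (-counts[s], s))
    sym_to_code = {s: codewords[i] for i, s in enumerate(order)}
    return sym_to_code, {c: s for s, c in sym_to_code.items()}

def encode(symbols, n_symbols=5):
    counts, bits = [0] * n_symbols, []
    for s in symbols:
        sym_to_code, _ = make_mapping(counts)
        bits.append(sym_to_code[s])
        counts[s] += 1                               # same update happens at the decoder
    return "".join(bits)

def decode(bitstream, n_symbols=5):
    counts, out, buf = [0] * n_symbols, [], ""
    for b in bitstream:
        buf += b
        _, code_to_sym = make_mapping(counts)        # counts only change between symbols
        if buf in code_to_sym:
            s = code_to_sym[buf]
            out.append(s)
            counts[s] += 1
            buf = ""
    return out

msg = [2, 2, 2, 0, 2, 1, 2, 2]
print(decode(encode(msg)) == msg)                    # True: mappings stay in sync
```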

  5. Impact localization in dispersive waveguides based on energy-attenuation of waves with the traveled distance

    NASA Astrophysics Data System (ADS)

    Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo

    2018-05-01

    An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances, and is mainly caused by the dispersion phenomenon among other reasons; therefore, using conventional localization/multilateration methods will produce localization error values that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, in addition to the limiting assumption required by these methods that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method is different from the other methods, and that it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data are used to motivate and justify the pursuit of the proposed approach for localization in a dispersive medium. Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of
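
    A minimal sketch of localization from windowed signal energies assumed to decay exponentially with traveled distance, E_i ≈ A·exp(-α·d_i): the source position, amplitude, and decay rate are fit jointly by nonlinear least squares in the log-energy domain. The sensor layout and parameters are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: energy-attenuation-based source localization via nonlinear least
# squares on log-energies. Sensor positions and decay parameters are illustrative.
import numpy as np
from scipy.optimize import least_squares

sensors = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0], [6.0, 6.0], [3.0, 1.0]])
source_true, A_true, alpha_true = np.array([2.0, 4.5]), 10.0, 0.6

rng = np.random.default_rng(4)
d = np.linalg.norm(sensors - source_true, axis=1)
energy = A_true * np.exp(-alpha_true * d) * np.exp(0.05 * rng.normal(size=d.size))

def residuals(theta):
    x, y, logA, alpha = theta
    dist = np.linalg.norm(sensors - np.array([x, y]), axis=1)
    return np.log(energy) - (logA - alpha * dist)    # log-energy model mismatch

fit = least_squares(residuals, x0=[3.0, 3.0, 0.0, 1.0])
print(fit.x[:2])                                     # estimated source location
```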

  6. The contactless detection of local normal transitions in superconducting coils by using Poynting’s vector method

    NASA Astrophysics Data System (ADS)

    Habu, K.; Kaminohara, S.; Kimoto, T.; Kawagoe, A.; Sumiyoshi, F.; Okamoto, H.

    2010-11-01

    We have developed a new monitoring system to detect an unusual event in superconducting coils without direct contact with the coils, using Poynting's vector method. In this system, potential leads and pickup coils are set around the superconducting coils to measure local electric and magnetic fields, respectively. By measuring the sets of magnetic and electric fields, the Poynting's vectors around the coil can be obtained. An unusual event in the coil can be detected as the result of a change in the Poynting's vector. This system has no risk of the voltage breakdown which may happen with the balance voltage method, because there is no need for direct contacts on the coil windings. In a previous paper, we demonstrated that our system can detect normal transitions in a Bi-2223 coil without direct contact with the coil windings by using a small test system. For our system to be applied to practical devices, early detection of an unusual event in the coils requires the ability to detect local normal transitions in the coils. The signal voltages of the small sensors used to measure local magnetic and electric fields are small. Although an increase in the signals of the pickup coils is easily attained by increasing the number of turns of the pickup coils, an increase in the signals of the potential leads is not easily attained. In this paper, a new method to amplify the signal of local electric fields around the coil is proposed. The validity of the method has been confirmed by measuring local electric fields around the Bi-2223 coil.

  7. Global, quantitative and dynamic mapping of protein subcellular localization.

    PubMed

    Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg Hh

    2016-06-09

    Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology.

  8. A three-dimensional finite-volume Eulerian-Lagrangian Localized Adjoint Method (ELLAM) for solute-transport modeling

    USGS Publications Warehouse

    Heberton, C.I.; Russell, T.F.; Konikow, Leonard F.; Hornberger, G.Z.

    2000-01-01

    This report documents the U.S. Geological Survey Eulerian-Lagrangian Localized Adjoint Method (ELLAM) algorithm that solves an integral form of the solute-transport equation, incorporating an implicit-in-time difference approximation for the dispersive and sink terms. Like the algorithm in the original version of the U.S. Geological Survey MOC3D transport model, ELLAM uses a method of characteristics approach to solve the transport equation on the basis of the velocity field. The ELLAM algorithm, however, is based on an integral formulation of conservation of mass and uses appropriate numerical techniques to obtain global conservation of mass. The implicit procedure eliminates several stability criteria required for an explicit formulation. Consequently, ELLAM allows large transport time increments to be used. ELLAM can produce qualitatively good results using a small number of transport time steps. A description of the ELLAM numerical method, the data-input requirements and output options, and the results of simulator testing and evaluation are presented. The ELLAM algorithm was evaluated for the same set of problems used to test and evaluate Version 1 and Version 2 of MOC3D. These test results indicate that ELLAM offers a viable alternative to the explicit and implicit solvers in MOC3D. Its use is desirable when mass balance is imperative or a fast, qualitative model result is needed. Although accurate solutions can be generated using ELLAM, its efficiency relative to the two previously documented solution algorithms is problem dependent.

  9. Iterative raw measurements restoration method with penalized weighted least squares approach for low-dose CT

    NASA Astrophysics Data System (ADS)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu

    2014-03-01

    Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes prominent and results in some non-positive signals in the raw measurements. The non-positive signals must be converted to positive values so that they can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. Thus, in this work, we propose a method to convert the non-positive signals to positive values mainly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive values according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique results in dramatically reduced shading artifacts and can also successfully cooperate with the post-log data filter to reduce streak artifacts.
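
    A minimal sketch of the second step only, replacing non-positive raw measurements with a local mean so that the data can be safely log-transformed. The penalized weighted least squares smoothing of the first step is not reproduced, and the window size, floor value, blank-scan intensity, and toy data are illustrative assumptions.

```python
# Hedged sketch: replace non-positive detector readings with a local mean before the
# log transform. Window size, floor value, and data are illustrative.
import numpy as np

def positivize_with_local_mean(raw, half_window=3, floor=1e-3):
    out = raw.astype(float).copy()
    for i in np.where(raw <= 0)[0]:
        lo, hi = max(0, i - half_window), min(raw.size, i + half_window + 1)
        neighbors = raw[lo:hi]
        local_mean = neighbors[neighbors > 0].mean() if np.any(neighbors > 0) else floor
        out[i] = max(local_mean, floor)              # keep the reading strictly positive
    return out

rng = np.random.default_rng(5)
raw = rng.normal(loc=5.0, scale=4.0, size=32)        # low-dose detector row; some bins <= 0
converted = positivize_with_local_mean(raw)
I0 = 100.0                                           # hypothetical blank-scan intensity
line_integrals = np.log(I0 / converted)              # safe to log-transform now
print(converted.min() > 0)
```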

  10. Local and nonlocal parallel heat transport in general magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Del-Castillo-Negrete, Diego B; Chacon, Luis

    2011-01-01

    A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.

  11. COS Views of Local Galaxies Approaching Primeval Conditions

    NASA Astrophysics Data System (ADS)

    Wofford, Aida

    2014-10-01

    We will use COS G160M+G185M to observe the cosmologically important lines C IV 1548+1551 A, He II 1640 A, O III] 1661+1666 A, and C III] 1907+1909 A in the three closest, most metal-poor blue compact dwarf galaxies known. These galaxies approach primeval interstellar and stellar conditions. One of the galaxies has no existing spectroscopic coverage in the UV. Available spectroscopy of the most metal-poor galaxies in the local universe is scarce, inhomogeneous, mostly of low spectral resolution, and is either noisy in the main UV lines or lacks their coverage. The proposed spectral resolution of about 20 km/s represents an order of magnitude improvement over existing HST data and allows us to disentangle stellar, nebular, and/or shock components of the lines. The high-quality constraints obtained in the framework of this proposal will make it possible to assess the relative likelihood of new spectral models of star-forming galaxies from different groups, in the best possible way achievable with current instrumentation. This will ensure that the best possible studies of early chemical enrichment of the universe can be achieved. The proposed observations are necessary to minimize large existing systematic uncertainties in the determination of high-redshift galaxy properties that JWST was in large part designed to measure.

  12. Cell-Averaged discretization for incompressible Navier-Stokes with embedded boundaries and locally refined Cartesian meshes: a high-order finite volume approach

    NASA Astrophysics Data System (ADS)

    Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team

    2017-11-01

    We present a consistent cell-averaged discretization for incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, gradient, etc., by solving a least-squares system locally. We also construct the inter-level data-transfer operators like prolongation and restriction for multigrid solvers using the same least-squares system approach. This allows us to retain high-order accuracy near the coarse-fine interface and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).

  13. [Systemic approach to ecologic safety at objects with radiation jeopardy, involved into localization of low and medium radioactive waste].

    PubMed

    Veselov, E I

    2011-01-01

    The article deals with specifying a systemic approach to the ecologic safety of objects with radiation jeopardy. The authors present the stages of work and an algorithm of decisions on preserving the reliability of storage for radiation-jeopardy waste. The findings are that ecologic safety can be provided through 3 approaches: complete exemption of radiation-jeopardy waste, removal of the more dangerous waste from present buildings, and increasing the reliability of prolonged localization of radiation-jeopardy waste at the initial place. The systemic approach presented could be realized at various radiation-jeopardy objects.

  14. Estimation of effective brain connectivity with dual Kalman filter and EEG source localization methods.

    PubMed

    Rajabioun, Mehdi; Nasrabadi, Ali Motie; Shamsollahi, Mohammad Bagher

    2017-09-01

    Effective connectivity is one of the most important considerations in brain functional mapping via EEG. It demonstrates the effects of a particular active brain region on others. In this paper, a new method is proposed which is based on the dual Kalman filter. In this method, an active brain region localization method (standardized low resolution brain electromagnetic tomography) is first applied to the EEG signal to extract the active regions, and an appropriate temporal model (a multivariate autoregressive model) is fitted to the extracted active sources to evaluate the activity and time dependence between sources. Then, a dual Kalman filter is used to estimate the model parameters, i.e., the effective connectivity between the active regions. The advantage of this method is the estimation of the activity of different brain parts simultaneously with the calculation of effective connectivity between active regions. By combining the dual Kalman filter with brain source localization methods, in addition to the connectivity estimation between parts, the source activity is updated over time. The performance of the proposed method was first evaluated by applying it to simulated EEG signals with simulated interacting connectivity between active parts. Noisy simulated signals with different signal-to-noise ratios were used to evaluate the method's sensitivity to noise and to compare its performance with other methods. Then the method was applied to real signals and the estimation error during a sweeping window was calculated. Comparing the results on simulated and real signals, the proposed method gives acceptable results with the least mean square error under noisy or real conditions.
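
    A minimal sketch of one ingredient of such a scheme: a Kalman filter tracking the (vectorized) coefficients of a first-order multivariate autoregressive model under random-walk dynamics. A full dual Kalman filter would additionally track the source signals themselves; the dimensions, noise covariances, and toy data here are illustrative assumptions.

```python
# Hedged sketch: Kalman-filter tracking of MVAR(1) coefficients (state = vec(A), row-major)
# with random-walk dynamics. Dimensions and noise settings are illustrative.
import numpy as np

rng = np.random.default_rng(6)
k, T = 2, 400                                   # number of sources, samples
A_true = np.array([[0.5, 0.3],                  # source 2 drives source 1
                   [0.0, 0.7]])
y = np.zeros((T, k))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + 0.1 * rng.normal(size=k)

n = k * k
a = np.zeros(n)                                  # state: vec(A), row-major
P = np.eye(n)
Q, R = 1e-5 * np.eye(n), 0.01 * np.eye(k)        # process / measurement noise covariances

for t in range(1, T):
    # Observation model: y[t] = H a + v, with H = kron(I_k, y[t-1]^T)
    H = np.kron(np.eye(k), y[t - 1][None, :])
    P = P + Q                                    # random-walk prediction step
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    a = a + K @ (y[t] - H @ a)                   # update with the new sample
    P = (np.eye(n) - K @ H) @ P

print(a.reshape(k, k))                           # estimated coefficient matrix ≈ A_true
```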

  15. Local connectome phenotypes predict social, health, and cognitive factors

    PubMed Central

    Powell, Michael A.; Garcia, Javier O.; Yeh, Fang-Cheng; Vettel, Jean M.

    2018-01-01

    The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions. PMID:29911679
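
    A minimal sketch of the prediction step described above: a cross-validated, regularized linear regression from high-dimensional fingerprint features to a subject attribute, with out-of-sample correlation as the accuracy measure. The random data stand in for actual local connectome fingerprints, and the feature counts and regularization grid are illustrative assumptions.

```python
# Hedged sketch: cross-validated ridge regression from high-dimensional features to a
# subject attribute. Synthetic data; sizes and hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_subjects, n_features = 200, 5000
fingerprints = rng.normal(size=(n_subjects, n_features))
true_map = np.zeros(n_features)
true_map[:50] = rng.normal(size=50)                      # sparse "phenotype map"
attribute = fingerprints @ true_map + rng.normal(scale=5.0, size=n_subjects)

model = make_pipeline(StandardScaler(), RidgeCV(alphas=np.logspace(-2, 4, 13)))
predicted = cross_val_predict(model, fingerprints, attribute, cv=10)
r = np.corrcoef(predicted, attribute)[0, 1]              # out-of-sample predictability
print(f"cross-validated correlation: {r:.2f}")
```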

  16. Local connectome phenotypes predict social, health, and cognitive factors.

    PubMed

    Powell, Michael A; Garcia, Javier O; Yeh, Fang-Cheng; Vettel, Jean M; Verstynen, Timothy

    2018-01-01

    The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample ( N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions.

  17. Local SAR in Parallel Transmission Pulse Design

    PubMed Central

    Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L.; Adalsteinsson, Elfar

    2011-01-01

    The management of local and global power deposition in human subjects (Specific Absorption Rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though the pTx and single channel have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx RF pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo MRI scan. Additionally, the algorithm yields a Protocol-specific Ultimate Peak in Local SAR (PUPiL SAR), which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7T eight-channel transmit array. The method reduced peak local 10g SAR by 14–66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. PMID:22083594

  18. Local SAR in parallel transmission pulse design.

    PubMed

    Lee, Joonsung; Gebhardt, Matthias; Wald, Lawrence L; Adalsteinsson, Elfar

    2012-06-01

    The management of local and global power deposition in human subjects (specific absorption rate, SAR) is a fundamental constraint to the application of parallel transmission (pTx) systems. Even though the pTx and single channel have to meet the same SAR requirements, the complex behavior of the spatial distribution of local SAR for transmission arrays poses problems that are not encountered in conventional single-channel systems and places additional requirements on pTx radio frequency pulse design. We propose a pTx pulse design method which builds on recent work to capture the spatial distribution of local SAR in numerical tissue models in a compressed parameterization in order to incorporate local SAR constraints within computation times that accommodate pTx pulse design during an in vivo magnetic resonance imaging scan. Additionally, the algorithm yields a protocol-specific ultimate peak in local SAR, which is shown to bound the achievable peak local SAR for a given excitation profile fidelity. The performance of the approach was demonstrated using a numerical human head model and a 7 Tesla eight-channel transmit array. The method reduced peak local 10 g SAR by 14-66% for slice-selective pTx excitations and 2D selective pTx excitations compared to a pTx pulse design constrained only by global SAR. The primary tradeoff incurred for reducing peak local SAR was an increase in global SAR, up to 34% for the evaluated examples, which is favorable in cases where local SAR constraints dominate the pulse applications. Copyright © 2011 Wiley Periodicals, Inc.

  19. An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars

    PubMed Central

    Zhang, Ying; Luo, Shan

    2017-01-01

    Localization of a moving target in a dual-frequency radars system has now gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. Compared with the LS method, a novel localization method based on a two-step weighted least squares estimator is proposed in this paper to increase positioning accuracy for a multi-station dual-frequency radars system. The effects of signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and Cramer–Rao lower bound (CRLB) are derived. The simulation results verify the proposed method. PMID:29244727

  20. An integration of minimum local feature representation methods to recognize large variation of foods

    NASA Astrophysics Data System (ADS)

    Razali, Mohd Norhisham bin; Manshor, Noridayu; Halin, Alfian Abdul; Mustapha, Norwati; Yaakob, Razali

    2017-10-01

    Local invariant features have been shown to be successful in describing object appearances for image classification tasks. Such features are robust towards occlusion and clutter and are also invariant against scale and orientation changes. This makes them suitable for classification tasks with little inter-class similarity and large intra-class difference. In this paper, we propose an integrated representation of the Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) descriptors, using a late fusion strategy. The proposed representation is used for food recognition from a dataset of food images with complex appearance variations. The Bag of Features (BOF) approach is employed to enhance the discriminative ability of the local features. First, the individual local features are extracted to construct two kinds of visual vocabularies, representing SURF and SIFT. The visual vocabularies are then concatenated and fed into a Linear Support Vector Machine (SVM) to classify the respective food categories. Experimental results demonstrate impressive overall recognition at 82.38% classification accuracy on the challenging UEC-Food100 dataset.
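
    A minimal sketch of the bag-of-features pipeline with late fusion of two local descriptors and a linear SVM. SIFT ships with recent opencv-python builds, while SURF requires an opencv-contrib "nonfree" build, so an ORB stand-in is used if it is unavailable. Vocabulary sizes, the dataset loader, and variable names are illustrative assumptions.

```python
# Hedged sketch: bag-of-features with two descriptor types, late fusion of the two
# histogram vocabularies, and a linear SVM. Dataset loading is left as an assumption.
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

def build_vocabulary(images, extractor, k=200):
    """Cluster all descriptors of one type into k visual words."""
    descs = []
    for img in images:
        _, d = extractor.detectAndCompute(img, None)
        if d is not None:
            descs.append(np.asarray(d, dtype=np.float32))
    return MiniBatchKMeans(n_clusters=k, random_state=0).fit(np.vstack(descs))

def bof_histogram(img, extractor, vocab):
    """Normalized visual-word histogram of one image for one descriptor type."""
    _, desc = extractor.detectAndCompute(img, None)
    hist = np.zeros(vocab.n_clusters, dtype=np.float32)
    if desc is not None:
        words = vocab.predict(np.asarray(desc, dtype=np.float32))
        np.add.at(hist, words, 1.0)
    return hist / max(hist.sum(), 1.0)

def late_fusion_features(images, extractors, vocabs):
    # Concatenate the normalized BoF histograms of each descriptor type (late fusion)
    return np.array([np.concatenate([bof_histogram(im, ex, vo)
                                     for ex, vo in zip(extractors, vocabs)])
                     for im in images])

sift = cv2.SIFT_create()
try:
    surf = cv2.xfeatures2d.SURF_create()       # requires an opencv-contrib nonfree build
except AttributeError:
    surf = cv2.ORB_create()                    # stand-in descriptor when SURF is absent

# train_images / train_labels / test_images are assumed grayscale food images and labels:
# vocabs = [build_vocabulary(train_images, ex) for ex in (sift, surf)]
# X_train = late_fusion_features(train_images, (sift, surf), vocabs)
# clf = LinearSVC().fit(X_train, train_labels)
# predictions = clf.predict(late_fusion_features(test_images, (sift, surf), vocabs))
```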