NASA Astrophysics Data System (ADS)
Ahmed, H. M.; Al-azawi, R. J.; Abdulhameed, A. A.
2018-05-01
Considerable effort has been devoted to developing diagnostic methods for skin cancer. In this paper, two different approaches to detecting skin cancer in dermoscopy images are addressed. The first is a global approach that classifies skin lesions using global features, whereas the second is a local approach that classifies them using local features. The aim of this paper is to select the better approach for skin lesion classification. The dataset used in this paper consists of 200 dermoscopy images from Pedro Hispano Hospital (PH2). The global approach achieved a sensitivity of about 96%, specificity of about 100%, precision of about 100%, and accuracy of about 97%, while the local approach achieved about 100% on all four measures. These results show that the local approach achieves acceptable accuracy and outperforms the global approach for skin cancer lesion classification.
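As an illustration of the global-versus-local distinction (not the paper's actual pipeline), the following sketch classifies synthetic "lesion" images using whole-image statistics versus concatenated patch statistics, with a simple k-NN classifier on fabricated data; all images, features, and thresholds here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def global_features(img):
    # Whole-image statistics: one short feature vector per image.
    return np.array([img.mean(), img.std()])

def local_features(img, patch=8):
    # Patch-level statistics, concatenated into one longer descriptor.
    h, w = img.shape
    feats = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            p = img[i:i+patch, j:j+patch]
            feats.extend([p.mean(), p.std()])
    return np.array(feats)

def make_image(malignant):
    img = rng.normal(0.5, 0.05, (32, 32))
    if malignant:                      # "malignant": add a bright blob
        img[8:20, 8:20] += 0.4
    return np.clip(img, 0.0, 1.0)

def knn_predict(train_X, train_y, x, k=3):
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(round(votes.mean()))

labels = np.array([i % 2 for i in range(40)])
images = [make_image(bool(y)) for y in labels]

accs = {}
for extractor in (global_features, local_features):
    X = np.array([extractor(im) for im in images])
    # Leave-one-out accuracy for each feature extractor.
    correct = sum(knn_predict(np.delete(X, i, 0), np.delete(labels, i), X[i]) == labels[i]
                  for i in range(len(images)))
    accs[extractor.__name__] = correct / len(images)
print(accs)
```

On this easy synthetic task both extractors separate the classes; the point is only the structural difference between one global descriptor and many aggregated local ones.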
NASA Astrophysics Data System (ADS)
Guo, Yang; Becker, Ute; Neese, Frank
2018-03-01
Local correlation theories have been developed in two main flavors: (1) "direct" local correlation methods apply local approximations to the canonical equations and (2) fragment-based methods reconstruct the correlation energy from a series of smaller calculations on subsystems. The present work serves two purposes. First, we investigate the relative efficiencies of the two approaches using the domain-based local pair natural orbital (DLPNO) approach as the "direct" method and the cluster-in-molecule (CIM) approach as the fragment-based method. Both approaches are applied in conjunction with second-order many-body perturbation theory (MP2) as well as coupled-cluster theory with single, double, and perturbative triple excitations [CCSD(T)]. Second, we have investigated the possible merits of combining the two approaches by performing CIM calculations with DLPNO methods serving as the method of choice for the subsystem calculations. Our cluster-in-molecule approach is closely related to, but deviates slightly from, approaches in the literature in that we have avoided real-space cutoffs. Moreover, the distant-pair correlations neglected in previous CIM approaches are treated approximately. Six very large molecules (503-2380 atoms) were studied. At both the MP2 and CCSD(T) levels of theory, the CIM and DLPNO methods show similar efficiency. However, DLPNO methods are more accurate for three-dimensional systems. While we found little incentive for combining CIM with DLPNO-MP2, the situation is different for CIM-DLPNO-CCSD(T). This combination is attractive because (1) CIM offers better parallelization opportunities; (2) the methodology is less memory-intensive than the genuine DLPNO-CCSD(T) method and hence allows large calculations on more modest hardware; and (3) the methodology is applicable and efficient in the frequently encountered cases where the largest subsystem calculation is too large for the canonical CCSD(T) method.
Formulation, analysis, and computation of an optimization-based local-to-nonlocal coupling method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Elia, Marta; Bochev, Pavel Blagoveston
2017-01-01
In this paper, we present an optimization-based coupling method for local and nonlocal continuum models. Our approach recasts the coupling of the models as a control problem in which the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the local and nonlocal problem domains, and the virtual controls are the nonlocal volume constraint and the local boundary condition. We present the method in the context of local-to-nonlocal diffusion coupling. Numerical examples illustrate the theoretical properties of the approach.
Global/local stress analysis of composite panels
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.; Knight, Norman F., Jr.
1989-01-01
A method for performing a global/local stress analysis is described, and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
Global/local stress analysis of composite structures. M.S. Thesis
NASA Technical Reports Server (NTRS)
Ransom, Jonathan B.
1989-01-01
A method for performing a global/local stress analysis is described and its capabilities are demonstrated. The method employs spline interpolation functions which satisfy the linear plate bending equation to determine displacements and rotations from a global model which are used as boundary conditions for the local model. Then, the local model is analyzed independently of the global model of the structure. This approach can be used to determine local, detailed stress states for specific structural regions using independent, refined local models which exploit information from less-refined global models. The method presented is not restricted to having a priori knowledge of the location of the regions requiring local detailed stress analysis. This approach also reduces the computational effort necessary to obtain the detailed stress state. Criteria for applying the method are developed. The effectiveness of the method is demonstrated using a classical stress concentration problem and a graphite-epoxy blade-stiffened panel with a discontinuous stiffener.
NASA Astrophysics Data System (ADS)
Zhang, Hongqin; Tian, Xiangjun
2018-04-01
Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational cost of directly decomposing the local correlation matrix C is still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. A 1-D spline interpolation step then transforms these decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved via the Kronecker product of the 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation in the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
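The cost-saving idea, decomposing a correlation matrix through small 1-D factors rather than directly, can be illustrated with a numpy sketch. The example assumes an exactly separable Gaussian correlation model (real localization matrices are only approximately separable, which is why the paper builds an approximate Kronecker decomposition); the eigen-pairs of the Kronecker product are products of the 1-D eigen-pairs.

```python
import numpy as np

def corr_1d(n, L):
    # Gaussian correlation matrix along one coordinate direction.
    x = np.arange(n)
    return np.exp(-(x[:, None] - x[None, :])**2 / (2 * L**2))

nx, ny, nz = 6, 5, 4
Cx, Cy, Cz = corr_1d(nx, 2.0), corr_1d(ny, 1.5), corr_1d(nz, 1.0)

# Full correlation matrix of a separable field: C = Cx ⊗ Cy ⊗ Cz.
C = np.kron(Cx, np.kron(Cy, Cz))

# Decompose each small 1-D matrix instead of the (nx*ny*nz)^2 matrix C.
wx, Vx = np.linalg.eigh(Cx)
wy, Vy = np.linalg.eigh(Cy)
wz, Vz = np.linalg.eigh(Cz)

# Eigen-pairs of C are (Kronecker) products of the 1-D eigen-pairs.
w = np.kron(wx, np.kron(wy, wz))
V = np.kron(Vx, np.kron(Vy, Vz))

err = np.abs(V @ np.diag(w) @ V.T - C).max()
print("max reconstruction error:", err)
```

Only three matrices of sizes 6, 5, and 4 are ever decomposed, yet the 120x120 matrix C is reconstructed to machine precision.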
Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.
Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona
2016-05-31
Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches exist, but their robustness is usually not addressed or investigated. The goal of this paper is to show how to robustify floor estimation when probabilistic approaches with a low number of parameters are employed. Such an approach allows building-independent estimation and lower computing power at the mobile side. Four robustified algorithms are presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We show that robustification can indeed increase the performance of RSS-based floor detection algorithms.
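A minimal sketch of RSS-based weighted centroid localization and one possible robustification. The log-distance path-loss model, the dBm-to-linear weighting, and the MAD-based outlier gate below are illustrative assumptions on synthetic data, not the paper's exact algorithms.

```python
import numpy as np

rng = np.random.default_rng(1)

aps = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.], [5., 0.]])
true = np.array([4., 6.])

# Log-distance path-loss model for the simulated RSS values (in dBm).
d = np.linalg.norm(aps - true, axis=1)
rss = -30.0 - 20.0 * np.log10(d) + rng.normal(0, 1, len(aps))
rss[4] += 40.0                      # one grossly inflated (outlier) reading

def weighted_centroid(aps, rss):
    w = 10.0 ** (rss / 10.0)        # dBm -> linear power weights
    return (w[:, None] * aps).sum(0) / w.sum()

def robust_weighted_centroid(aps, rss, k=5.0):
    # Gate out readings far from the median RSS (MAD-based robustification).
    med = np.median(rss)
    mad = np.median(np.abs(rss - med)) + 1e-9
    keep = np.abs(rss - med) / mad <= k
    return weighted_centroid(aps[keep], rss[keep])

e_plain = np.linalg.norm(weighted_centroid(aps, rss) - true)
e_robust = np.linalg.norm(robust_weighted_centroid(aps, rss) - true)
print(e_plain, e_robust)
```

The inflated reading dominates the plain weighted centroid and drags the estimate toward that AP, while the gated version recovers a position near the true one.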
Kernel PLS Estimation of Single-trial Event-related Potentials
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Trejo, Leonard J.
2004-01-01
Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative to the estimation of single-trial ERPs and improvement of ERP averages.
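The following sketch illustrates nonparametric smoothing of a simulated single-trial ERP. A Nadaraya-Watson kernel smoother is used here as a simple stand-in for KPLS (it is not the authors' method), and the "local" variant narrows the bandwidth near an assumed component latency; the ERP waveform, noise level, and bandwidths are all fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

t = np.linspace(0, 1, 200)
# Simulated ERP: two components at latencies 0.3 s and 0.5 s.
erp = 1.5 * np.exp(-((t - 0.3) / 0.05)**2) - 1.0 * np.exp(-((t - 0.5) / 0.08)**2)
trial = erp + rng.normal(0, 0.5, t.size)        # single trial = ERP + "EEG" noise

def kernel_smooth(t, y, bandwidth):
    # Nadaraya-Watson estimate: kernel-weighted local average at each point.
    K = np.exp(-((t[:, None] - t[None, :]) / bandwidth)**2 / 2)
    return (K * y[None, :]).sum(1) / K.sum(1)

def local_smooth(t, y, center, width, bw_in, bw_out):
    # Narrower bandwidth near a known component latency, wider elsewhere.
    bw = np.where(np.abs(t - center) < width, bw_in, bw_out)
    K = np.exp(-((t[:, None] - t[None, :]) / bw[:, None])**2 / 2)
    return (K * y[None, :]).sum(1) / K.sum(1)

est = kernel_smooth(t, trial, 0.02)
est_local = local_smooth(t, trial, center=0.3, width=0.1, bw_in=0.01, bw_out=0.03)
rmse = np.sqrt(np.mean((est - erp)**2))
rmse_local = np.sqrt(np.mean((est_local - erp)**2))
print(rmse, rmse_local)
```

Both smoothed estimates recover the underlying waveform far better than the raw trial (whose RMSE equals the 0.5 noise level); the local variant trades bias for variance near the assumed latency.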
Jiang, Ling; Yang, Christopher C
2017-09-01
The rapid growth of online health social websites has captured a vast amount of healthcare information and made it easy for health consumers to access. E-patients often use these social websites for informational and emotional support. However, health consumers can easily be overwhelmed by the overloaded information, and healthcare information searching can be very difficult for consumers, most of whom are not skilled information searchers. In this work, we investigate approaches for measuring user similarity in online health social websites. By recommending similar users to consumers, we can help them seek informational and emotional support more efficiently. We propose to represent healthcare social media data as a heterogeneous healthcare information network and introduce local and global structural approaches for measuring user similarity in such a network. We compare the proposed structural approaches with a content-based approach. Experiments were conducted on a dataset collected from a popular online health social website, and the results showed that the content-based approach performed better for inactive users, while the structural approaches performed better for active users. Moreover, the global structural approach outperformed the local structural approach for all user groups. In addition, we conducted experiments on the local and global structural approaches using different weighting schemes for the edges in the network; the "Leverage" scheme performed best for both. Finally, we integrated the different approaches and demonstrated that the hybrid method yielded better performance than any individual approach. The results indicate that content-based methods can effectively capture the similarity of inactive users, who usually have focused interests, while structural methods achieve better performance when rich structural information is available.
The local structural approach considers only direct connections between nodes in the network, while the global structural approach also takes indirect connections into account. The global similarity approach can therefore deal with sparse networks and capture implicit similarity between two users. Different approaches may capture different aspects of the similarity relationship between two users, so combining methods can achieve better performance than any individual method.
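The local-versus-global distinction can be illustrated on a toy adjacency matrix: common-neighbour counts (local, length-2 paths only) versus the Katz index (global, all path lengths, longer paths damped). This is a generic illustration, not the paper's heterogeneous-network formulation.

```python
import numpy as np

# Adjacency matrix of a small user-interaction graph (undirected, unweighted).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
], float)

def local_similarity(A):
    # Common-neighbour counts: only paths of length 2 (direct structure).
    return A @ A

def global_similarity(A, beta=0.1):
    # Katz index: sum over paths of all lengths, damped by beta per hop.
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - beta * A) - np.eye(n)

S_loc = local_similarity(A)
S_glob = global_similarity(A)

# Users 0 and 5 share no common neighbours but are connected through the graph,
# so only the global measure assigns them a nonzero similarity.
print(S_loc[0, 5], S_glob[0, 5])
```

This is exactly the sparsity argument above: the local measure returns zero for node pairs with no shared neighbours, while the global measure still ranks them.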
SubCellProt: predicting protein subcellular localization using machine learning approaches.
Garg, Prabha; Sharma, Virag; Chaudhari, Pradeep; Roy, Nilanjan
2009-01-01
High-throughput genome sequencing projects continue to churn out enormous amounts of raw sequence data. However, most of this raw sequence data is unannotated and, hence, not very useful. Among the various approaches to deciphering the function of a protein, one is to determine its localization. Experimental approaches to proteome annotation, including determination of a protein's subcellular localization, are very costly and labor-intensive. Besides the available experimental methods, in silico methods present alternative approaches to accomplish this task. Here, we present two machine learning approaches for predicting the subcellular localization of a protein from its primary sequence. Two machine learning algorithms, k-Nearest Neighbor (k-NN) and Probabilistic Neural Network (PNN), were used to classify an unknown protein into one of 11 subcellular localizations. The final prediction is made on the basis of a consensus of the predictions made by the two algorithms, and a probability is assigned to it. The results indicate that primary-sequence-derived features such as amino acid composition, sequence order, and physicochemical properties can be used to assign subcellular localization with a fair degree of accuracy. Moreover, with the enhanced accuracy of our approach and the definition of a prediction domain, this method can be used for proteome annotation in a high-throughput manner. SubCellProt is available at www.databases.niper.ac.in/SubCellProt.
Supplementary routes to local anaesthesia.
Meechan, J G
2002-11-01
The satisfactory provision of many dental treatments, particularly endodontics, relies on achieving excellent pain control. Unfortunately, the administration of a local anaesthetic solution does not always produce satisfactory anaesthesia of the dental pulp. This may be distressing for both patient and operator. Fortunately, failure of local anaesthetic injections can be overcome. This is often achieved by using alternative routes of approach for subsequent injections. Nerves such as the inferior alveolar nerve can be anaesthetized by a variety of block methods. However, techniques of anaesthesia other than the standard infiltration and regional block injections may be employed successfully when these former methods have failed to produce adequate pain control. This paper describes some supplementary local anaesthetic techniques that may be used to achieve pulpal anaesthesia for endodontic procedures when conventional approaches have failed. Although some of these techniques can be used as the primary form of anaesthesia, they are normally employed as 'back-up'. The methods described are intraligamentary (periodontal ligament) injections, intraosseous anaesthesia and the intrapulpal approach. The factors that influence the success of these methods and the advantages and disadvantages of each technique are discussed. The advent of new instrumentation, which permits the slow delivery of local anaesthetic solution, has led to the development of novel methods of anaesthesia in dentistry. These new approaches are discussed.
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felicio Bruzzi
2016-11-01
Using a locally-enriched strategy to enrich a small/local part of the problem with the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch, or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numerical enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioned approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and the computational processing. Numerical experiments show that using geometric enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.
Localized diabatization applied to excitons in molecular crystals
NASA Astrophysics Data System (ADS)
Jin, Zuxin; Subotnik, Joseph E.
2017-06-01
Traditional ab initio electronic structure calculations of periodic systems yield delocalized eigenstates that should be understood as adiabatic states. For example, excitons are bands of extended states which superimpose localized excitations on every lattice site. However, in general, in order to study the effects of nuclear motion on exciton transport, it is standard to work with a localized description of excitons, especially in a hopping regime; even in a band regime, a localized description can be helpful. To extract localized excitons from a band requires essentially a diabatization procedure. In this paper, three distinct methods are proposed for such localized diabatization: (i) a simple projection method, (ii) a more general Pipek-Mezey localization scheme, and (iii) a variant of Boys diabatization. Approaches (i) and (ii) require localized, single-particle Wannier orbitals, while approach (iii) has no such dependence. These methods should be very useful for studying energy transfer through solids with ab initio calculations.
Enhanced Methods for Local Ancestry Assignment in Sequenced Admixed Individuals
Brown, Robert; Pasaniuc, Bogdan
2014-01-01
Inferring the ancestry at each locus in the genome of recently admixed individuals (e.g., Latino Americans) plays a major role in medical and population genetic inferences, ranging from finding disease-risk loci, to inferring recombination rates, to mapping missing contigs in the human genome. Although many methods for local ancestry inference have been proposed, most are designed for use with genotyping arrays and fail to make use of the full spectrum of data available from sequencing. In addition, current haplotype-based approaches are computationally demanding, requiring long runtimes for moderately large sample sizes. Here we present new methods for local ancestry inference that leverage continent-specific variants (CSVs) to attain increased performance over existing approaches in sequenced admixed genomes. A key feature of our approach is that it incorporates the admixed genomes themselves jointly with public datasets, such as 1000 Genomes, to improve the accuracy of CSV calling. We use simulations to show that our approach attains accuracy similar to widely used, computationally intensive haplotype-based approaches with large decreases in runtime. Most importantly, we show that our method recovers local ancestries comparable to the 1000 Genomes consensus local ancestry calls in real admixed individuals from the 1000 Genomes Project. We extend our approach to account for low-coverage sequencing and show that accurate local ancestry inference can be attained at low sequencing coverage. Finally, we generalize CSVs to sub-continental population-specific variants (sCSVs) and show that in some cases it is possible to determine the sub-continental ancestry of short chromosomal segments on the basis of sCSVs. PMID:24743331
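A toy sketch of the core CSV idea: assign each genomic window to the ancestry whose continent-specific alleles it carries most often. The variant IDs, alleles, and ancestry panels below are entirely made up for illustration; the paper's method additionally models coverage and refines CSV calls jointly with public panels.

```python
# Each window of a genome lists alleles observed at continent-specific
# variant (CSV) sites; assign the window to the ancestry with the most hits.
CSV_ALLELES = {            # hypothetical CSV alleles per ancestry
    "EUR": {"rs1:A", "rs4:T", "rs7:G"},
    "AFR": {"rs2:C", "rs5:G", "rs8:A"},
    "NAT": {"rs3:T", "rs6:A", "rs9:C"},
}

def assign_window(observed):
    # Count overlaps with each ancestry's CSV set and take the argmax.
    counts = {anc: len(alleles & observed) for anc, alleles in CSV_ALLELES.items()}
    return max(counts, key=counts.get)

window = {"rs1:A", "rs5:G", "rs7:G"}   # 2 EUR hits, 1 AFR hit
print(assign_window(window))            # -> EUR
```

Because each window only needs set intersections rather than haplotype modelling, this style of assignment scales linearly in the number of observed sites, which is the source of the runtime advantage the abstract describes.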
WLAN-Based Indoor Localization Using Neural Networks
NASA Astrophysics Data System (ADS)
Saleem, Fasiha; Wyne, Shurjeel
2016-07-01
Wireless indoor localization has generated recent research interest due to its numerous applications. This work investigates Wi-Fi-based indoor localization using two variants of the fingerprinting approach. Specifically, we study the application of an artificial neural network (ANN) for implementing the fingerprinting approach and compare its localization performance with a probabilistic fingerprinting method based on maximum likelihood estimation (MLE) of the user location. We incorporate the spatial correlation of fading into our investigations, which is often neglected in simulation studies and leads to erroneous location estimates. Localization performance is quantified in terms of accuracy, precision, robustness, and complexity. Multiple methods for handling missing APs in the online stage are investigated. Our results indicate that ANN-based fingerprinting outperforms the probabilistic approach for all performance metrics considered in this work.
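A minimal sketch of the probabilistic fingerprinting baseline: an offline radio map of mean RSS per reference point, and an online Gaussian likelihood maximized over reference points. The fingerprints, noise model, and grid are synthetic assumptions, not the paper's measurement setup.

```python
import numpy as np

rng = np.random.default_rng(3)

# Offline stage: mean RSS fingerprint (dBm) per reference point, per AP.
ref_points = np.array([[0., 0.], [5., 0.], [0., 5.], [5., 5.]])
fingerprints = np.array([
    [-40., -60., -60., -70.],
    [-60., -40., -70., -60.],
    [-60., -70., -40., -60.],
    [-70., -60., -60., -40.],
])
sigma = 3.0                             # assumed RSS noise std in dB

def mle_locate(rss):
    # Gaussian log-likelihood of the online reading under each fingerprint;
    # the estimate is the reference point maximizing it.
    ll = -((fingerprints - rss)**2).sum(axis=1) / (2 * sigma**2)
    return ref_points[np.argmax(ll)]

# Online stage: user near reference point 2, reading = fingerprint + noise.
reading = fingerprints[2] + rng.normal(0, sigma, 4)
print(mle_locate(reading))
```

An ANN-based fingerprinting variant would replace `mle_locate` with a network regressing coordinates from the RSS vector; the offline/online split is the same.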
Near real-time skin deformation mapping
NASA Astrophysics Data System (ADS)
Kacenjar, Steve; Chen, Suzie; Jafri, Madiha; Wall, Brian; Pedersen, Richard; Bezozo, Richard
2013-02-01
A novel in vivo approach is described that provides large-area mapping of the mechanical properties of the skin in human patients. Such information is important in understanding skin health, cosmetic surgery [1], aging, and the impacts of sun exposure. Currently, several methods have been developed to estimate the local biomechanical properties of the skin, including the use of a physical biopsy of local areas of the skin (in vitro methods) [2, 3, 4] and the use of non-invasive methods (in vivo) [5, 6, 7]. All such methods examine localized areas of the skin. Our approach examines the local elastic properties via the generation of field displacement maps of the skin created using time-sequence imaging [9] with 2D digital image correlation (DIC) [10]. In this approach, large areas of the skin are reviewed rapidly, and skin displacement maps are generated showing the contour maps of skin deformation. These maps are then used to precisely register skin images for purposes of diagnostic comparison. This paper reports on our mapping and registration approach and demonstrates its ability to accurately measure skin deformation through a described nulling interpolation process. The results of local translational DIC alignment are compared using this interpolation process. The effectiveness of the approach is reported in terms of residual RMS, image entropy measures, and differential segmented regional errors.
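The DIC building block, matching a patch between two images by maximizing zero-normalized cross-correlation over trial shifts, can be sketched as follows. The "skin" image is synthetic random texture and only integer-pixel shifts are searched; real DIC adds subpixel interpolation and the paper's nulling/registration steps.

```python
import numpy as np

rng = np.random.default_rng(4)

# Reference "skin" image and a copy shifted by a known pixel displacement.
ref = rng.normal(0, 1, (40, 40))
shift = (3, 2)                                   # (rows, cols) ground truth
deformed = np.roll(np.roll(ref, shift[0], axis=0), shift[1], axis=1)

def dic_displacement(ref, cur, y, x, half=5, search=6):
    # Match the patch around (y, x) by maximizing zero-normalized
    # cross-correlation (ZNCC) over integer trial shifts.
    patch = ref[y-half:y+half+1, x-half:x+half+1]
    p = (patch - patch.mean()) / patch.std()
    best, best_dv = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = cur[y+dy-half:y+dy+half+1, x+dx-half:x+dx+half+1]
            c = (cand - cand.mean()) / cand.std()
            score = (p * c).mean()
            if score > best:
                best, best_dv = score, (dy, dx)
    return best_dv

print(dic_displacement(ref, deformed, 20, 20))   # -> (3, 2)
```

Repeating this at many patch centres yields the field displacement map described above; the `np.roll` wrap-around stays outside the searched region for this patch.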
Exact density functional and wave function embedding schemes based on orbital localization
NASA Astrophysics Data System (ADS)
Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály
2016-08-01
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
ERIC Educational Resources Information Center
Yan, Xun
2014-01-01
This paper reports on a mixed-methods approach to evaluate rater performance on a local oral English proficiency test. Three types of reliability estimates were reported to examine rater performance from different perspectives. Quantitative results were also triangulated with qualitative rater comments to arrive at a more representative picture of…
Algebraic Algorithm Design and Local Search
1996-12-01
A method for performing algorithm design that is more purely algebraic than that of KIDS was developed and then applied to local search. The approach was to follow KIDS in spirit, but to adopt a pure algebraic formalism, supported by Kestrel's SPECWARE environment. A general theory of local search was ...
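As a generic illustration of the local search paradigm the report formalizes (this is not its algebraic derivation or the SPECWARE machinery), a minimal hill-climbing routine over a neighborhood structure:

```python
def local_search(neighbors, cost, start, max_steps=1000):
    # Generic hill-climbing: repeatedly move to the best strictly-better
    # neighbor until a local optimum is reached.
    current = start
    for _ in range(max_steps):
        better = [n for n in neighbors(current) if cost(n) < cost(current)]
        if not better:
            return current          # local optimum: no improving neighbor
        current = min(better, key=cost)
    return current

# Toy instance: minimize f(x) = (x - 7)^2 over integers, neighbors x +/- 1.
result = local_search(lambda x: [x - 1, x + 1], lambda x: (x - 7)**2, start=0)
print(result)                       # -> 7
```

The algebraic treatment in the report abstracts exactly these ingredients, a state space, a neighborhood relation, and a cost, so that concrete searchers like this one can be derived rather than hand-written.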
Efficient anharmonic vibrational spectroscopy for large molecules using local-mode coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Xiaolu; Steele, Ryan P., E-mail: ryan.steele@utah.edu
This article presents a general computational approach for efficient simulations of anharmonic vibrational spectra in chemical systems. An automated local-mode vibrational approach is presented, which borrows techniques from localized molecular orbitals in electronic structure theory. This approach generates spatially localized vibrational modes, in contrast to the delocalization exhibited by canonical normal modes. The method is rigorously tested across a series of chemical systems, ranging from small molecules to large water clusters and a protonated dipeptide. It is interfaced with exact, grid-based approaches, as well as vibrational self-consistent field methods. Most significantly, this new set of reference coordinates exhibits a well-behaved spatial decay of mode couplings, which allows for a systematic, a priori truncation of mode couplings and increased computational efficiency. Convergence can typically be reached by including modes within only about 4 Å. The local nature of this truncation suggests particular promise for the ab initio simulation of anharmonic vibrational motion in large systems, where connection to experimental spectra is currently most challenging.
A new local-global approach for classification.
Peres, R T; Pedreira, C E
2010-09-01
In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand global methods as those that construct a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using an unsupervised vector quantization algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the resulting assemblage of much easier problems is solved locally with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machines (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method showed quite competitive performance when compared to these classical and widely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts.
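A minimal sketch of the two-stage scheme on synthetic XOR-style data. Plain k-means with a deterministic initialization stands in for LBG vector quantization, and per-cell class frequencies stand in for the Bayes-inspired local rule; none of this reproduces the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two-class data: class 0 around (0,0) and (4,4); class 1 around (0,4) and (4,0).
centers = [((0, 0), 0), ((4, 4), 0), ((0, 4), 1), ((4, 0), 1)]
X = np.vstack([c + rng.normal(0, 0.5, (50, 2)) for c, _ in centers])
y = np.repeat([lab for _, lab in centers], 50)

def kmeans(X, k, iters=20):
    # Stand-in for LBG: plain k-means with evenly-strided initial codebook.
    code = X[:: max(len(X) // k, 1)][:k].copy()
    for _ in range(iters):
        assign = np.argmin(((X[:, None] - code[None])**2).sum(-1), axis=1)
        code = np.array([X[assign == j].mean(0) if np.any(assign == j) else code[j]
                         for j in range(k)])
    return code

# Unsupervised stage: partition the space into local cells.
code = kmeans(X, k=8)

# Local stage: per-cell class frequencies act as the Bayes posterior estimate.
assign = np.argmin(((X[:, None] - code[None])**2).sum(-1), axis=1)
cell_label = np.array([np.bincount(y[assign == j], minlength=2).argmax()
                       for j in range(len(code))])

def predict(x):
    j = np.argmin(((code - x)**2).sum(-1))
    return cell_label[j]

acc = np.mean([predict(x) == t for x, t in zip(X, y)])
print("training accuracy:", acc)
```

A single global linear rule cannot separate this XOR-style layout, but each local cell is nearly pure, so the cell-wise rule classifies it easily; this is the local-global advantage the abstract describes.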
Distributed State Estimation Using a Modified Partitioned Moving Horizon Strategy for Power Systems.
Chen, Tengpeng; Foo, Yi Shyh Eddy; Ling, K V; Chen, Xuebing
2017-10-11
In this paper, a distributed state estimation method based on moving horizon estimation (MHE) is proposed for large-scale power system state estimation. The proposed method partitions the power system into several local areas with non-overlapping states. Unlike the centralized approach, where all measurements are sent to a processing center, the proposed method distributes the state estimation task to local processing centers where local measurements are collected. Inspired by the partitioned moving horizon estimation (PMHE) algorithm, each local area solves a smaller optimization problem to estimate its own local states using local measurements and estimated results from its neighboring areas. In contrast with PMHE, the error from the process model is ignored in our method. The proposed modified PMHE (mPMHE) approach can also take constraints on states into account during the optimization process, such that the influence of outliers can be further mitigated. Simulation results on the IEEE 14-bus and 118-bus systems verify that our method achieves comparable state estimation accuracy with a significant reduction in the overall computation load.
A Tomographic Method for the Reconstruction of Local Probability Density Functions
NASA Technical Reports Server (NTRS)
Sivathanu, Y. R.; Gore, J. P.
1993-01-01
A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
An efficient linear-scaling CCSD(T) method based on local natural orbitals.
Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály
2013-09-07
An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)] and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)] with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach, a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain, a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.
A machine learning approach for efficient uncertainty quantification using multiscale methods
NASA Astrophysics Data System (ADS)
Chan, Shing; Elsheikh, Ahmed H.
2018-02-01
Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
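The surrogate idea, replacing repeated local solves with a fitted predictor, can be sketched in miniature. The paper uses a neural network; this toy substitutes a closed-form least-squares line mapping a single permeability feature to a basis-function value (feature and data are invented for illustration):

```python
def fit_linear_surrogate(ks, ys):
    """Fit y ~ w0 + w1*k by ordinary least squares (closed form).

    Stand-in for the paper's neural-network predictor: it learns a map
    from a local permeability feature to a basis-function value, so new
    realizations can be evaluated without solving the local problem."""
    n = len(ks)
    sk = sum(ks); sy = sum(ys)
    skk = sum(k * k for k in ks)
    sky = sum(k * y for k, y in zip(ks, ys))
    w1 = (n * sky - sk * sy) / (n * skk - sk * sk)
    w0 = (sy - w1 * sk) / n
    return w0, w1

# Toy training data: basis value decays linearly with log-permeability.
ks = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 0.8, 0.6, 0.4]
w0, w1 = fit_linear_surrogate(ks, ys)
print(round(w0 + w1 * 1.5, 3))  # prediction for a new sample -> 0.7
```

In an uncertainty-quantification loop, the payoff is that each of the many permeability realizations costs one cheap prediction instead of one local boundary-value solve.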
Kawakami, Tsuyoshi
2011-12-01
Participatory approaches are increasingly applied to improve safety, health and working conditions in grassroots workplaces in Asia. The core concepts and methods of human ergology research, such as promoting real work life studies, relying on positive efforts of local people (daily life-technology), promoting active participation of local people to identify practical solutions, and learning from local human networks to reach grassroots workplaces, have provided useful viewpoints for devising such participatory training programmes. This study aimed to analyze how human ergology approaches were applied in the actual development and application of three typical participatory training programmes: WISH (Work Improvement for Safe Home) with home workers in Cambodia, WISCON (Work Improvement in Small Construction Sites) with construction workers in Thailand, and WARM (Work Adjustment for Recycling and Managing Waste) with waste collectors in Fiji. The results revealed that all three programmes, in the course of their development, applied direct observation of the work of target workers before devising the training programmes, learned from existing local good examples and efforts, and emphasized local human networks for cooperation. These methods and approaches were repeatedly applied in grassroots workplaces, taking advantage of their sustainability and impact. It was concluded that human ergology approaches contributed substantially to the development and expansion of participatory training programmes and could continue to support the self-help initiatives of local people for promoting human-centred work.
Localized diabatization applied to excitons in molecular crystals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Zuxin; Subotnik, Joseph E.
2017-06-28
Traditional ab initio electronic structure calculations of periodic systems yield delocalized eigenstates that should be understood as adiabatic states. For example, excitons are bands of extended states which superimpose localized excitations on every lattice site. However, in general, in order to study the effects of nuclear motion on exciton transport, it is standard to work with a localized description of excitons, especially in a hopping regime; even in a band regime, a localized description can be helpful. To extract localized excitons from a band requires essentially a diabatization procedure. In this paper, three distinct methods are proposed for such localized diabatization: (i) a simple projection method, (ii) a more general Pipek-Mezey localization scheme, and (iii) a variant of Boys diabatization. Approaches (i) and (ii) require localized, single-particle Wannier orbitals, while approach (iii) has no such dependence. Lastly, these methods should be very useful for studying energy transfer through solids with ab initio calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu
Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.
NASA Astrophysics Data System (ADS)
Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan
2016-09-01
In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches such as the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite element method (LMBS-FEM) does not suffer from these weaknesses. To validate our method, we first simulate wave propagation in various linear, nonlinear, lossless and lossy metal-insulator plasmonic structures using LMBS-FEM in MATLAB and compare against the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide-angle beam propagation method (BSFEFD-WABPM). The comparisons show that not only is our numerical approach computationally more accurate and efficient than conventional approaches but it also provides physical insight into the nonlinear nature of the propagation modes.
Hansen, William B; Derzon, James H; Reese, Eric L
2014-06-01
We propose a method for creating groups against which outcomes of local pretest-posttest evaluations of evidence-based programs can be judged. This involves assessing pretest markers for new and previously conducted evaluations to identify groups that have high pretest similarity. A database of 802 prior local evaluations provided six summary measures for analysis. The proximity between groups on these variables is calculated as a standardized proximity with values between 0 and 1. Five methods for creating standardized proximities are demonstrated. The approach allows proximity limits to be adjusted to find sufficient numbers of synthetic comparators. Several index cases are examined to assess the numbers of groups available to serve as comparators. Results show that most local evaluations would have sufficient numbers of comparators available for estimating program effects. This method holds promise as a tool for local evaluations to estimate relative effectiveness. © The Author(s) 2012.
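One common way to obtain a standardized proximity in [0, 1] is to z-score each pretest marker and map the Euclidean distance d between two groups to 1/(1 + d). This is only an illustrative choice, not necessarily one of the paper's five standardizations, and the marker values below are invented:

```python
import math

def standardized_proximity(a, b, means, sds):
    """Proximity in (0, 1] between two groups described by pretest
    markers: z-score each marker with the database means/SDs, take the
    Euclidean distance d, and return 1 / (1 + d). Identical groups
    yield 1.0; more distant groups yield smaller values."""
    d = math.sqrt(sum(((x - m) / s - (y - m) / s) ** 2
                      for x, y, m, s in zip(a, b, means, sds)))
    return 1.0 / (1.0 + d)

# Two groups described by two hypothetical pretest markers.
means, sds = [0.5, 14.0], [0.2, 1.5]
print(standardized_proximity([0.5, 14.0], [0.5, 14.0], means, sds))  # 1.0
print(standardized_proximity([0.5, 14.0], [0.9, 15.0], means, sds) < 1.0)
```

A proximity threshold near 1 then selects the most similar prior groups as synthetic comparators, exactly the adjustable-limit idea in the abstract.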
AN OPTIMAL ADAPTIVE LOCAL GRID REFINEMENT APPROACH TO MODELING CONTAMINANT TRANSPORT
A Lagrangian-Eulerian method with an optimal adaptive local grid refinement is used to model contaminant transport equations. Application of this approach to two benchmark problems indicates that it completely resolves difficulties of peak clipping, numerical diffusion, and spuri...
LOCALIZING THE RANGELAND HEALTH METHOD FOR SOUTHEASTERN ARIZONA
The interagency manual Interpreting Indicators of Rangeland Health, Version 4 (Technical Reference 1734-6) provides a method for making rangeland health assessments. The manual recommends that the rangeland health assessment approach be adapted to local conditions. This technica...
Fluctuating local field method probed for a description of small classical correlated lattices
NASA Astrophysics Data System (ADS)
Rubtsov, Alexey N.
2018-05-01
Thermally equilibrated finite classical lattices are considered as a minimal model of systems showing an interplay between low-energy collective fluctuations and single-site degrees of freedom. The standard local field approach, as well as the classical limit of the bosonic DMFT method, does not provide a satisfactory description of small Ising and Heisenberg lattices subjected to an external polarizing field. We show that a dramatic improvement can be achieved within a simple approach in which the local field appears to be a fluctuating quantity related to the low-energy degree(s) of freedom.
Hip joint center localisation: A biomechanical application to hip arthroplasty population
Bouffard, Vicky; Begon, Mickael; Champagne, Annick; Farhadnia, Payam; Vendittoli, Pascal-André; Lavigne, Martin; Prince, François
2012-01-01
AIM: To determine hip joint center (HJC) location in a hip arthroplasty population, comparing predictive and functional approaches with radiographic measurements. METHODS: The distance between the HJC and the mid-pelvis was calculated and compared between the three approaches. The localisation error of the predictive and functional approaches was compared using the radiographic measurements as the reference. The operated leg was compared to the non-operated leg. RESULTS: A significant difference was found in the distance between the HJC and the mid-pelvis when comparing the predictive and functional methods; the functional method leads to fewer errors. A statistical difference was also found in the localisation error between the predictive and functional methods; the functional method is twice as precise. CONCLUSION: Being more individualized, the functional method improves HJC localisation and should be used in three-dimensional gait analysis. PMID:22919569
Localization of phonons in mass-disordered alloys: A typical medium dynamical cluster approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jarrell, Mark; Moreno, Juana; Raja Mondal, Wasim; ...
2017-07-20
The effect of disorder on lattice vibrational modes has been a topic of interest for several decades. In this article, we employ a Green's function based approach, namely, the dynamical cluster approximation (DCA), to investigate phonons in mass-disordered systems. Detailed benchmarks with previous exact calculations are used to validate the method in a wide parameter space. An extension of the method, namely, the typical medium DCA (TMDCA), is used to study Anderson localization of phonons in three dimensions. We show that, for binary isotopic disorder, lighter impurities induce localized modes beyond the bandwidth of the host system, while heavier impurities lead to a partial localization of the low-frequency acoustic modes. For a uniform (box) distribution of masses, the physical spectrum is shown to develop long tails comprising mostly localized modes. The mobility edge separating extended and localized modes, obtained through the TMDCA, agrees well with results from the transfer matrix method. A reentrance behavior of the mobility edge with increasing disorder is found that is similar to, but somewhat more pronounced than, the behavior in disordered electronic systems. Our work establishes a computational approach, which recovers the thermodynamic limit, is versatile and computationally inexpensive, to investigate lattice vibrations in disordered lattice systems.
Optic disk localization by a robust fusion method
NASA Astrophysics Data System (ADS)
Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin
2013-02-01
The optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy and age-related macula degeneration. In this paper, we propose an intelligent fusion of methods for the localization of the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the greatest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The final method searches the vertical and horizontal regions-of-interest separately on the basis of blood vessel structure and neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE and ORIGAlight databases, each consisting of images with various pathologies. It achieves a preliminary accuracy of 81.5% on the STARE database and 99% on the ORIGAlight database, outperforming each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. These results demonstrate a high potential for this method to be used in retinal CAD systems.
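A minimal illustration of fusing several detectors' outputs (not the paper's trained fusion rule; a simple coordinate-wise median, which already discounts a single outlying detector) might look like this, with invented pixel coordinates:

```python
def fuse_locations(candidates):
    """Fuse (x, y) optic-disk candidates from several detectors by a
    coordinate-wise median, so one failing detector cannot dominate."""
    def median(vals):
        s = sorted(vals)
        n = len(s)
        mid = n // 2
        return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2.0
    xs = [c[0] for c in candidates]
    ys = [c[1] for c in candidates]
    return median(xs), median(ys)

# Hypothetical outputs of the vessel-crossing, thresholding, and
# entropy-profile detectors; the third has failed badly.
print(fuse_locations([(120, 88), (124, 90), (400, 300)]))  # (124, 90)
```

The paper's fusion is more sophisticated (it weighs the methods intelligently), but the robustness-through-combination principle is the same.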
Lu, Jiwen; Erin Liong, Venice; Zhou, Jie
2017-08-09
In this paper, we propose a simultaneous local binary feature learning and encoding (SLBFLE) approach for both homogeneous and heterogeneous face recognition. Unlike existing hand-crafted face descriptors such as local binary pattern (LBP) and Gabor features which usually require strong prior knowledge, our SLBFLE is an unsupervised feature learning approach which automatically learns face representation from raw pixels. Unlike existing binary face descriptors such as the LBP, discriminant face descriptor (DFD), and compact binary face descriptor (CBFD) which use a two-stage feature extraction procedure, our SLBFLE jointly learns binary codes and the codebook for local face patches so that discriminative information from raw pixels from face images of different identities can be obtained by using a one-stage feature learning and encoding procedure. Moreover, we propose a coupled simultaneous local binary feature learning and encoding (C-SLBFLE) method to make the proposed approach suitable for heterogeneous face matching. Unlike most existing coupled feature learning methods which learn a pair of transformation matrices for each modality, we exploit both the common and specific information from heterogeneous face samples to characterize their underlying correlations. Experimental results on six widely used face datasets are presented to demonstrate the effectiveness of the proposed method.
Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.
Carrascal, A; Manrique, D; Ríos, J; Rossi, C
2003-01-01
This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted on both artificially generated and real-world problems confirm the effectiveness of the proposed approach.
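The "local search inside the genetic loop" idea can be sketched as a memetic algorithm on bitstrings. This is a generic stand-in, not the paper's neuro-fuzzy encoding; the fitness function and parameters are illustrative:

```python
import random

def memetic_search(fitness, n_bits, pop_size=20, gens=30, seed=0):
    """Genetic algorithm whose chromosomes are improved each generation
    by a local bit-flip hill climb, mirroring the strategy of refining
    chromosomes with a local search procedure at every generation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def hill_climb(ind):
        best = ind[:]
        for i in range(n_bits):            # one sweep of single-bit flips
            trial = best[:]
            trial[i] ^= 1
            if fitness(trial) > fitness(best):
                best = trial
        return best

    for _ in range(gens):
        pop = [hill_climb(ind) for ind in pop]           # local search step
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if rng.random() < 0.1:                       # mutation
                child[rng.randrange(n_bits)] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# OneMax toy problem: the optimum is the all-ones string.
best = memetic_search(sum, n_bits=16)
print(sum(best))  # 16
```

In the paper the chromosome indirectly encodes a neuro-fuzzy architecture and fitness measures how well the resulting fuzzy knowledge base solves the problem; the hybrid evolve-then-refine loop is the shared structure.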
Face Alignment via Regressing Local Binary Features.
Ren, Shaoqing; Cao, Xudong; Wei, Yichen; Sun, Jian
2016-03-01
This paper presents a highly efficient and accurate regression approach for face alignment. Our approach has two novel components: 1) a set of local binary features and 2) a locality principle for learning those features. The locality principle guides us to learn a set of highly discriminative local binary features for each facial landmark independently. The obtained local binary features are used to jointly learn a linear regression for the final output. This approach achieves state-of-the-art results when tested on the most challenging benchmarks to date. Furthermore, because extracting and regressing local binary features is computationally very cheap, our system is much faster than previous methods. It achieves over 3000 frames per second (FPS) on a desktop or 300 FPS on a mobile phone for locating a few dozen landmarks. We also study a key issue that is important but has received little attention in previous research: the face detector used to initialize alignment. We investigate several face detectors and perform quantitative evaluation on how they affect alignment accuracy. We find that an alignment-friendly detector can further greatly boost the accuracy of our alignment method, reducing the error by up to 16% in relative terms. To facilitate practical usage of face detection/alignment methods, we also propose a convenient metric to measure how good a detector is for alignment initialization.
A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.
Huang, Shiping; Wu, Zhifeng; Misra, Anil
2017-12-11
Location localization technology is used in a number of industrial and civil applications. Real-time localization accuracy is highly dependent on the quality of the distance measurements and the efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently while simultaneously eliminating bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but it can also self-correct bad measurement data. The Direct Search Method is useful for coarse localization or a small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, the proposed Modified Newton's Method (MNM) addresses the challenges of avoiding local extrema, singularities, and initial value choice. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
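The Newton-type core of range-based localization can be sketched as a plain Gauss-Newton iteration in 2-D. This omits the paper's geometric-intersection model (which bounds the search and screens bad measurements) and its MNM safeguards; anchor positions and ranges below are invented:

```python
import math

def locate(anchors, dists, guess=(0.0, 0.0), iters=20):
    """Gauss-Newton solution of the range equations |p - a_i| = d_i.
    Each iteration linearizes the residuals r_i = |p - a_i| - d_i and
    solves the 2x2 normal equations by Cramer's rule."""
    x, y = guess
    for _ in range(iters):
        JtJ = [[0.0, 0.0], [0.0, 0.0]]
        Jtr = [0.0, 0.0]
        for (ax, ay), d in zip(anchors, dists):
            rho = math.hypot(x - ax, y - ay) or 1e-12
            jx, jy = (x - ax) / rho, (y - ay) / rho   # Jacobian row
            r = rho - d                               # range residual
            JtJ[0][0] += jx * jx; JtJ[0][1] += jx * jy
            JtJ[1][0] += jy * jx; JtJ[1][1] += jy * jy
            Jtr[0] += jx * r; Jtr[1] += jy * r
        det = JtJ[0][0] * JtJ[1][1] - JtJ[0][1] * JtJ[1][0]
        dx = (JtJ[1][1] * Jtr[0] - JtJ[0][1] * Jtr[1]) / det
        dy = (JtJ[0][0] * Jtr[1] - JtJ[1][0] * Jtr[0]) / det
        x, y = x - dx, y - dy
    return x, y

# Target at (3, 4); exact ranges from three anchors.
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, math.hypot(7, 4), math.hypot(3, 6)]
x, y = locate(anchors, dists, guess=(1.0, 1.0))
print(round(x, 3), round(y, 3))
```

With noisy or corrupted ranges, plain Gauss-Newton can be pulled off by a bad anchor, which is why the paper bounds the search domain geometrically before iterating.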
Towards a Viscous Wall Model for Immersed Boundary Methods
NASA Technical Reports Server (NTRS)
Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.
2016-01-01
Immersed boundary methods are frequently employed for simulating flows at low Reynolds numbers or for applications where viscous boundary layer effects can be neglected. The primary shortcoming of Cartesian mesh immersed boundary methods is the inability to efficiently resolve thin turbulent boundary layers in high-Reynolds-number flow applications. This inefficiency is associated with the use of constant-aspect-ratio Cartesian grid cells, whereas conventional CFD approaches can efficiently resolve the large wall-normal gradients by utilizing large-aspect-ratio cells near the wall. This paper presents different approaches for immersed boundary methods to account for the viscous boundary layer interaction with the flow field away from the walls. Wall modeling approaches proposed in previous research studies are addressed and compared to a new integral boundary layer based approach. In contrast to common wall-modeling approaches that usually utilize only local flow information, the integral boundary layer based approach keeps the streamwise history of the boundary layer. This allows the method to remain effective at much larger y+ values than local wall-modeling approaches. After a theoretical discussion of the different approaches, the method is applied to increasingly challenging flow fields including fully attached, separated, and shock-induced separated (laminar and turbulent) flows.
Nonconforming mortar element methods: Application to spectral discretizations
NASA Technical Reports Server (NTRS)
Maday, Yvon; Mavriplis, Cathy; Patera, Anthony
1988-01-01
Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.
NASA Astrophysics Data System (ADS)
Voorhoeve, Robbert; van der Maas, Annemiek; Oomen, Tom
2018-05-01
Frequency response function (FRF) identification is often used as a basis for control systems design and as a starting point for subsequent parametric system identification. The aim of this paper is to develop a multiple-input multiple-output (MIMO) local parametric modeling approach for FRF identification of lightly damped mechanical systems with improved speed and accuracy. The proposed method is based on local rational models, which can efficiently handle the lightly damped resonant dynamics. A key aspect herein is the freedom in the multivariable rational model parametrizations. Several choices for such multivariable rational model parametrizations are proposed and investigated. For systems with many inputs and outputs the required number of model parameters can rapidly increase, adversely affecting the performance of the local modeling approach. Therefore, low-order model structures are investigated. The structure of these low-order parametrizations leads to an undesired directionality in the identification problem. To address this, an iterative local rational modeling algorithm is proposed. As a special case, recently developed SISO algorithms are recovered. The proposed approach is successfully demonstrated in simulations and on an active vibration isolation system benchmark, confirming good performance of the method using significantly fewer parameters than alternative approaches.
SChloro: directing Viridiplantae proteins to six chloroplastic sub-compartments.
Savojardo, Castrense; Martelli, Pier Luigi; Fariselli, Piero; Casadio, Rita
2017-02-01
Chloroplasts are organelles found in plants and involved in several important cell processes. Similarly to other compartments in the cell, chloroplasts have an internal structure comprising several sub-compartments, where different proteins are targeted to perform their functions. Given the relation between protein function and localization, the availability of effective computational tools to predict protein sub-organelle localization is crucial for large-scale functional studies. In this paper we present SChloro, a novel machine-learning approach to predict protein sub-chloroplastic localization, based on targeting signal detection and membrane protein information. The proposed approach performs multi-label predictions discriminating six chloroplastic sub-compartments: inner membrane, outer membrane, stroma, thylakoid lumen, plastoglobule and thylakoid membrane. In comparative benchmarks, the proposed method outperforms current state-of-the-art methods in both single- and multi-compartment predictions, with an overall multi-label accuracy of 74%. The results demonstrate the relevance of the approach, which is a good candidate for integration into more general large-scale annotation pipelines of protein subcellular localization. The method is available as a web server at http://schloro.biocomp.unibo.it (contact: gigi@biocomp.unibo.it).
Cicmil, Nela; Bridge, Holly; Parker, Andrew J.; Woolrich, Mark W.; Krug, Kristine
2014-01-01
Magnetoencephalography (MEG) allows the physiological recording of human brain activity at high temporal resolution. However, spatial localization of the source of the MEG signal is an ill-posed problem as the signal alone cannot constrain a unique solution and additional prior assumptions must be enforced. An adequate source reconstruction method for investigating the human visual system should place the sources of early visual activity in known locations in the occipital cortex. We localized sources of retinotopic MEG signals from the human brain with contrasting reconstruction approaches (minimum norm, multiple sparse priors, and beamformer) and compared these to the visual retinotopic map obtained with fMRI in the same individuals. When reconstructing brain responses to visual stimuli that differed by angular position, we found reliable localization to the appropriate retinotopic visual field quadrant by a minimum norm approach and by beamforming. Retinotopic map eccentricity in accordance with the fMRI map could not consistently be localized using an annular stimulus with any reconstruction method, but confining eccentricity stimuli to one visual field quadrant resulted in significant improvement with the minimum norm. These results inform the application of source analysis approaches for future MEG studies of the visual system, and indicate some current limits on localization accuracy of MEG signals. PMID:24904268
Dynamic texture recognition using local binary patterns with an application to facial expressions.
Zhao, Guoying; Pietikäinen, Matti
2007-06-01
Dynamic texture (DT) is an extension of texture to the temporal domain. Description and recognition of DTs have attracted growing attention. In this paper, a novel approach for recognizing DTs is proposed and its simplifications and extensions to facial image analysis are also considered. First, the textures are modeled with volume local binary patterns (VLBP), which are an extension of the LBP operator widely used in ordinary texture analysis, combining motion and appearance. To make the approach computationally simple and easy to extend, only the co-occurrences of the local binary patterns on three orthogonal planes (LBP-TOP) are then considered. A block-based method is also proposed to deal with specific dynamic events such as facial expressions in which local information and its spatial locations should also be taken into account. In experiments with two DT databases, DynTex and Massachusetts Institute of Technology (MIT), both the VLBP and LBP-TOP clearly outperformed the earlier approaches. The proposed block-based method was evaluated with the Cohn-Kanade facial expression database with excellent results. The advantages of our approach include local processing, robustness to monotonic gray-scale changes, and simple computation.
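The spatial LBP operator that VLBP and LBP-TOP extend can be written in a few lines. This is the basic 8-neighbor version on a single 3x3 patch (bit ordering conventions vary across implementations; the paper's descriptors apply such codes on three orthogonal planes of the video volume):

```python
def lbp_code(patch):
    """Basic local binary pattern of the center pixel of a 3x3 patch:
    threshold the 8 neighbors at the center value and pack the results,
    read clockwise from the top-left, into an 8-bit code."""
    c = patch[1][1]
    neighbors = [patch[0][0], patch[0][1], patch[0][2],
                 patch[1][2], patch[2][2], patch[2][1],
                 patch[2][0], patch[1][0]]
    code = 0
    for bit, v in enumerate(neighbors):
        code |= (1 if v >= c else 0) << bit
    return code

patch = [[ 90, 100, 110],
         [ 80, 100, 120],
         [ 70,  60, 130]]
print(lbp_code(patch))  # neighbors >= 100 set bits 1-4 -> 0b00011110 = 30
```

Because the code depends only on sign comparisons against the center pixel, it is invariant to monotonic gray-scale changes, which is the robustness property the abstract highlights.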
Papademetriou, Iason T; Porter, Tyrone
2015-01-01
Brain drug delivery is a major challenge for therapy of central nervous system (CNS) diseases. Biochemical modifications of drugs or drug nanocarriers, methods of local delivery, and blood–brain barrier (BBB) disruption with focused ultrasound and microbubbles are promising approaches which enhance transport or bypass the BBB. These approaches are discussed in the context of brain cancer as an example in CNS drug development. Targeting to receptors enabling transport across the BBB offers noninvasive delivery of small molecule and biological cancer therapeutics. Local delivery methods enable high dose delivery while avoiding systemic exposure. BBB disruption with focused ultrasound and microbubbles offers local and noninvasive treatment. Clinical trials show the prospects of these technologies and point to challenges for the future. PMID:26488496
Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan
2012-01-01
Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is used to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the nonparametric nature of local polynomial estimation, the form of the heteroscedastic function need not be known, so the estimation precision can be improved even when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
The Analysis of Patients Operated for Frontal Sinus Osteomas
Turan, Şükrü; Kaya, Ercan; Pınarbaşlı, Mehmet Özgür; Çaklı, Hamdi
2015-01-01
Objective Paranasal sinus osteomas are benign, smooth-walled, slow-growing tumors arising from bone tissue. Although their most common localization is the frontal sinus, some osteomas are seen in the ethmoid, maxillary, and sphenoid sinuses. Frontal sinus osteomas are often asymptomatic; when they become symptomatic, headache is the most common complaint. In this study, we aimed to analyze the postoperative results of patients who were diagnosed with frontal sinus osteoma and operated on with appropriate surgical techniques. Methods We retrospectively evaluated 14 patients who were diagnosed with frontal sinus osteoma and operated on in our department between March 2009 and July 2014. The following parameters were analyzed: patients' age and gender, complaints at the time of admission to our clinic, pathological findings on physical examination, tumor features observed in preoperative paranasal sinus computed tomography (size and localization), surgical methods applied, intra- and postoperative complications, and recurrence rates. All patients provided informed consent preoperatively. Results Of the 14 patients, 7 were male and 7 were female, with a mean age of 40.57 years. A total of 11 (79%) osteomas were located within the frontal sinus and 3 (21%) within the frontal recess. An external surgical approach was used in 11 patients, an endoscopic approach in 2 patients, and a combined external and endoscopic approach in 1 patient. Conclusion Although the preferred surgical method for frontal sinus osteoma depends on the size and localization of the tumor, the experience of the surgeon is also important. The external surgical approach is appropriate for large and laterally localized osteomas, whereas the endoscopic approach is appropriate for small and inferomedially localized osteomas. In both surgical approaches the site of origin should be drilled. PMID:29391998
Semi-supervised protein subcellular localization.
Xu, Qian; Hu, Derek Hao; Xue, Hong; Yu, Weichuan; Yang, Qiang
2009-01-30
Protein subcellular localization is concerned with predicting the location of a protein within a cell using computational methods. The location information can indicate key functionalities of proteins. Accurate predictions of the subcellular localizations of proteins can aid the prediction of protein function and genome annotation, as well as the identification of drug targets. Computational methods based on machine learning, such as support vector machine approaches, have already been widely used in the prediction of protein subcellular localization. A major drawback of these machine learning-based approaches, however, is that a large amount of data must be labeled in order to let the prediction system learn a classifier of good generalization ability. In real-world cases, it is laborious, expensive and time-consuming to experimentally determine the subcellular localization of a protein and prepare instances of labeled data. In this paper, we present an approach based on a new learning framework, semi-supervised learning, which can use much fewer labeled instances to construct a high-quality prediction model. We first construct an initial classifier using a small set of labeled examples, and then use unlabeled instances to refine the classifier for future predictions. Experimental results show that our methods can effectively reduce the workload for labeling data by using the unlabeled data. Our method is shown to enhance the state-of-the-art prediction results of SVM classifiers by more than 10%.
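The self-training flavor of semi-supervised learning described above (fit on labeled data, label the most confident unlabeled points, add them to the training set, retrain) can be sketched as follows. A nearest-centroid classifier stands in for the SVM base learner used in the paper, purely to keep the example self-contained.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Per-class mean vectors of the labeled examples."""
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(model, X):
    classes, centroids = model
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]

def self_train(X_lab, y_lab, X_unlab, rounds=5, batch=10):
    """Self-training loop: repeatedly absorb the most confidently
    labeled unlabeled points (largest distance margin) into the training set."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        classes, centroids = nearest_centroid_fit(X, y)
        d = np.linalg.norm(pool[:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[d.argmin(axis=1)]
        sorted_d = np.sort(d, axis=1)
        margin = sorted_d[:, 1] - sorted_d[:, 0]   # large margin = confident
        take = np.argsort(margin)[-batch:]
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, pred[take]])
        pool = np.delete(pool, take, axis=0)
    return nearest_centroid_fit(X, y)
```

With only a handful of labeled examples per class, the unlabeled pool gradually sharpens the decision boundary, mirroring the workload reduction the abstract reports.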
The regional approach and regional studies method in the process of geography teaching
NASA Astrophysics Data System (ADS)
Dermendzhieva, Stela; Doikov, Martin
2017-03-01
We define the regional approach as a way of relating the global trends of development of the "Society-man-nature" system to the local differentiating level of knowledge. These interactions interlace under the influence of the character of geography as a science and of educational approaches, goals, and teaching methods. Global, national, and local development is differentiated in three concentric circles at the level of knowledge. The approach is conceived as a modern, complex, and effective mechanism for young people, through which knowledge develops in a regional historical and cultural perspective, and self-consciousness of socio-economic and cultural integration is formed as part of the historical-geographical image of the native land. In this way an attitude toward the native land is formed as a connecting construct between patriotism to the motherland and the same in a global aspect. The possibility is outlined for integration and cooperation of geographical educational content with all the local historical-geographical, regional, profession-orienting, artistic, municipal, and district institutions. Contemporary geographical education appears to be a powerful and indispensable mechanism for the organization of human sciences, while the regional approach and the application of the regional studies method stimulate and motivate the development and realization of optimal capacities for direct connection with the local structures and environments.
ERIC Educational Resources Information Center
Vir, Dharm
1971-01-01
A survey of teaching methods for farm guidance workers in India, outlining some approaches developed by and used in other nations. Discusses mass educational methods, group educational methods, and the local leadership method. (JB)
A new distributed systems scheduling algorithm: a swarm intelligence approach
NASA Astrophysics Data System (ADS)
Haghi Kashani, Mostafa; Sarvizadeh, Raheleh; Jameii, Mahdi
2011-12-01
The scheduling problem in distributed systems is known to be NP-complete, and methods based on heuristic or metaheuristic search have been proposed to obtain optimal and suboptimal solutions. Task scheduling is a key factor for distributed systems to gain better performance. In this paper, an efficient method based on a memetic algorithm is developed to solve the problem of distributed systems scheduling. To balance load efficiently, the Artificial Bee Colony (ABC) algorithm is applied as the local search in the proposed memetic algorithm. The proposed method is compared with an existing memetic-based approach in which the Learning Automata method is used as the local search. The results demonstrate that the proposed method outperforms the above-mentioned method in terms of communication cost.
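A minimal sketch of a memetic scheduler of this kind follows: a genetic outer loop (selection, crossover, mutation) with a local search applied to each offspring. A simple greedy reassignment move stands in for the ABC local search, and makespan stands in for the paper's cost model, so this illustrates the memetic structure rather than the authors' algorithm.

```python
import random

def makespan(assign, times, m):
    """Completion time of the busiest machine under a task-to-machine assignment."""
    load = [0.0] * m
    for task, machine in enumerate(assign):
        load[machine] += times[task]
    return max(load)

def local_search(assign, times, m, tries=20):
    """Greedy neighborhood search standing in for the ABC local phase:
    accept a random single-task reassignment only if it lowers the makespan."""
    best = assign[:]
    for _ in range(tries):
        cand = best[:]
        cand[random.randrange(len(cand))] = random.randrange(m)
        if makespan(cand, times, m) < makespan(best, times, m):
            best = cand
    return best

def memetic_schedule(times, m, pop=20, gens=40):
    """Memetic algorithm: GA operators plus a local-search refinement per child."""
    n = len(times)
    population = [[random.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: makespan(a, times, m))
        survivors = population[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)
            child = a[:cut] + b[cut:]                         # one-point crossover
            child[random.randrange(n)] = random.randrange(m)  # mutation
            children.append(local_search(child, times, m))    # memetic step
        population = survivors + children
    return min(population, key=lambda a: makespan(a, times, m))
```

The memetic step is what distinguishes this from a plain genetic algorithm: every child is pushed toward a local optimum before it competes.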
NASA Astrophysics Data System (ADS)
Alajlouni, Sa'ed; Albakri, Mohammad; Tarazaga, Pablo
2018-05-01
An algorithm is introduced to solve the general multilateration (source localization) problem in a dispersive waveguide. The algorithm is designed with the intention of localizing impact forces in a dispersive floor, and can potentially be used to localize and track occupants in a building using vibration sensors connected to the lower surface of the walking floor. The lower the wave frequencies generated by the impact force, the more accurate the localization is expected to be. An impact force acting on a floor generates a seismic wave that gets distorted as it travels away from the source. This distortion is noticeable even over relatively short traveled distances and is mainly caused by the dispersion phenomenon, among other reasons; therefore, using conventional localization/multilateration methods produces localization error values that are highly variable and occasionally large. The proposed localization approach is based on the fact that the wave's energy, calculated over some time window, decays exponentially as the wave travels away from the source. Although localization methods that assume exponential decay exist in the literature (in the field of wireless communications), these methods have only been considered for wave propagation in non-dispersive media, in addition to the limiting assumption required by these methods that the source must not coincide with a sensor location. As a result, these methods cannot be applied to the indoor localization problem in their current form. We show how our proposed method differs from the other methods and that it overcomes the source-sensor location coincidence limitation. Theoretical analysis and experimental data are used to motivate and justify the pursuit of the proposed approach for localization in a dispersive medium.
Additionally, hammer impacts on an instrumented floor section inside an operational building, as well as finite element model simulations, are used to evaluate the performance of the algorithm. It is shown that the algorithm produces promising results providing a foundation for further future development and optimization.
2014-01-01
Background Health technology assessment (HTA) is increasingly performed at the local or hospital level where the costs, impacts, and benefits of health technologies can be directly assessed. Although local/hospital-based HTA has been implemented for more than two decades in some jurisdictions, little is known about its effects and impact on hospital budget, clinical practices, and patient outcomes. We conducted a mixed-methods systematic review that aimed to synthesize current evidence regarding the effects and impact of local/hospital-based HTA. Methods We identified articles through PubMed and Embase and by citation tracking of included studies. We selected qualitative, quantitative, or mixed-methods studies with empirical data about the effects or impact of local/hospital-based HTA on decision-making, budget, or perceptions of stakeholders. We extracted the following information from included studies: country, methodological approach, and use of conceptual framework; local/hospital HTA approach and activities described; reported effects and impacts of local/hospital-based HTA; factors facilitating/hampering the use of hospital-based HTA recommendations; and perceptions of stakeholders concerning local/hospital HTA. Due to the great heterogeneity among studies, we conducted a narrative synthesis of their results. Results A total of 18 studies met the inclusion criteria. We reported the results according to the four approaches for performing HTA proposed by the Hospital Based HTA Interest Sub-Group: ambassador model, mini-HTA, internal committee, and HTA unit. Results showed that each of these approaches for performing HTA corresponds to specific needs and structures and has its strengths and limitations. Overall, studies showed positive impacts related to local/hospital-based HTA on hospital decisions and budgets, as well as positive perceptions from managers and clinicians. Conclusions Local/hospital-based HTA could influence decision-making in several respects.
It is difficult to evaluate the real impacts of local HTA at the different levels of health care given the relatively small number of evaluations with quantitative data and the lack of clear comparators. Further research is necessary to explore the conditions under which local/hospital-based HTA results and recommendations can impact hospital policies, clinical decisions, and quality of care and optimize the use of scarce resources. PMID:25352182
Rebranding city: A strategic urban planning approach in Indonesia
NASA Astrophysics Data System (ADS)
Firzal, Yohannes
2018-03-01
Indonesia's entry into the decentralization period has had a significant effect on its cities and is seen as opening a new era for local life. The decentralization period has also generated locally bounded sentiments, which can be identified in the discretion given to the local government in charge of rebranding the city. In this paper, the rebranding phenomenon is studied in Pekanbaru, a city that has changed its brand several times. Using a qualitative research approach and combining multiple methods to collect and process the data, this paper finds that city rebranding has become a strategic approach in urban planning today, used by the local government to inject more sense into the city and its local life. The research confirms that, over almost two decades of the decentralization period, the rebranding phenomenon has served not only to generate a local sense of place but also as a power marker of the local regime.
DOT National Transportation Integrated Search
2010-10-01
Many communities want to promote walking and cycling. However, few know how much nonmotorized travel already occurs in their communities. This research project developed the Pedestrian and Bicycling Survey (PABS), a method that local governments can ...
Gaussian process regression for sensor networks under localization uncertainty
Jadaliha, M.; Xu, Yunfei; Choi, Jongeun; Johnson, N.S.; Li, Weiming
2013-01-01
In this paper, we formulate Gaussian process regression with observations under localization uncertainty due to resource-constrained sensor networks. In our formulation, the effects of observations, measurement noise, localization uncertainty, and prior distributions are all correctly incorporated in the posterior predictive statistics. We propose to approximate the analytically intractable posterior predictive statistics by two techniques, viz., Monte Carlo sampling and Laplace's method. These approximation techniques have been carefully tailored to our problems, and their approximation error and complexity are analyzed. A simulation study demonstrates that the proposed approaches perform much better than approaches that do not properly consider the localization uncertainty. Finally, we have applied the proposed approaches to experimentally collected data from a dye concentration field over a section of a river and a temperature field of an outdoor swimming pool to provide proof-of-concept tests and evaluate the proposed schemes in real situations. In both simulation and experimental results, the proposed methods outperform the quick-and-dirty solutions often used in practice.
Probability and Cumulative Density Function Methods for the Stochastic Advection-Reaction Equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barajas-Solano, David A.; Tartakovsky, Alexandre M.
We present a cumulative density function (CDF) method for the probabilistic analysis of $d$-dimensional advection-dominated reactive transport in heterogeneous media. We employ a probabilistic approach in which epistemic uncertainty on the spatial heterogeneity of Darcy-scale transport coefficients is modeled in terms of random fields with given correlation structures. Our proposed CDF method employs a modified Large-Eddy-Diffusivity (LED) approach to close and localize the nonlocal equations governing the one-point PDF and CDF of the concentration field, resulting in a $(d + 1)$-dimensional PDE. Compared to the classical LED localization, the proposed modified LED localization explicitly accounts for the mean-field advective dynamics over the phase space of the PDF and CDF. To illustrate the accuracy of the proposed closure, we apply our CDF method to one-dimensional single-species reactive transport with uncertain, heterogeneous advection velocities and reaction rates modeled as random fields.
Plasmonics simulations including nonlocal effects using a boundary element method approach
NASA Astrophysics Data System (ADS)
Trügler, Andreas; Hohenester, Ulrich; García de Abajo, F. Javier
2017-09-01
Spatial nonlocality in the photonic response of metallic nanoparticles is known to produce near-field quenching and significant plasmon frequency shifts relative to local descriptions. As the control over the size and morphology of fabricated nanostructures reaches the nanometer scale, understanding and accounting for nonlocal phenomena is becoming increasingly important, and recent advances clearly point out the need to go beyond the local theory. We here present a general formalism for incorporating spatial dispersion effects through the hydrodynamic model and its generalizations for arbitrary surface morphologies. Our method relies on the boundary element method, which we supplement with a nonlocal interaction potential. We provide numerical examples in excellent agreement with the literature for individual and paired gold nanospheres and critically examine the accuracy of our approach. The present method involves marginal extra computational cost relative to local descriptions and facilitates the simulation of spatial dispersion effects in the photonic response of complex nanoplasmonic structures.
A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, Songhua; Tourassi, Georgia
2012-01-01
The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. We describe our new local learning-based algorithm, which uses linear logistic regression as its base learner. The algorithm performs a stochastic search, via a random walk, until the total allowed computing time is used up, identifying the most suitable population subdivision scheme and the corresponding individual base learners. The proposed local learning-based approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnosis (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods. Predictive performance for all classifiers was evaluated using 10-fold cross validation and Receiver Operating Characteristic (ROC) analysis. Figure 1 reports the performance of 54 machine learning methods implemented in the machine learning toolkit Weka (version 3.0). We introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of other well-established machine learning techniques. Our conclusion complements the existing understanding in the machine learning field that local learning may capture complicated, non-linear relationships exhibited by real-world datasets.
Uematsu, T; Kasami, M; Uchida, Y; Sanuki, J; Kimura, K; Tanaka, K; Takahashi, K
2007-06-01
Hookwire localization is the current standard technique for radiological marking of nonpalpable breast lesions. Stereotactic directional vacuum-assisted breast biopsy (SVAB) is of sufficient sensitivity and specificity to replace surgical biopsy. Wire localization is needed for metallic marker clips placed after SVAB. To describe a method for performing computed tomography (CT)-guided hookwire localization using a radial approach for metallic marker clips placed percutaneously after SVAB. Nineteen women scheduled for SVAB with marker-clip placement, CT-guided wire localization of the marker clips, and, eventually, surgical excision were prospectively entered into the study. CT-guided wire localization was performed with a radial approach, followed by surgical excision. The feasibility and reliability of the procedure and the incidence of complications were examined. CT-guided wire localization and surgical excision were successfully performed in all 19 women without any complications. The mean total procedure time was 15 min. The median distance on CT images from marker clip to hookwire was 2 mm (range 0-3 mm). CT-guided preoperative hookwire localization with a radial approach for marker clips after SVAB is technically feasible.
Healy, R.W.; Russell, T.F.
1998-01-01
We extend the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) for solution of the advection-dispersion equation to two dimensions. The method can conserve mass globally and is not limited by restrictions on the size of the grid Peclet or Courant number. Therefore, it is well suited for solution of advection-dominated ground-water solute transport problems. In test problem comparisons with standard finite differences, FVELLAM is able to attain accurate solutions on much coarser space and time grids. On fine grids, the accuracy of the two methods is comparable. A critical aspect of FVELLAM (and all other ELLAMs) is evaluation of the mass storage integral from the preceding time level. In FVELLAM this may be accomplished with either a forward or backtracking approach. The forward tracking approach conserves mass globally and is the preferred approach. The backtracking approach is less computationally intensive, but not globally mass conservative. Boundary terms are systematically represented as integrals in space and time which are evaluated by a common integration scheme in conjunction with forward tracking through time. Unlike the one-dimensional case, local mass conservation cannot be guaranteed, so slight oscillations in concentration can develop, particularly in the vicinity of inflow or outflow boundaries. Published by Elsevier Science Ltd.
Adaptive Local Realignment of Protein Sequences.
DeBlasio, Dan; Kececioglu, John
2018-06-11
While mutation rates can vary markedly over the residues of a protein, multiple sequence alignment tools typically use the same values for their scoring-function parameters across a protein's entire length. We present a new approach, called adaptive local realignment, that in contrast automatically adapts to the diversity of mutation rates along protein sequences. This builds upon a recent technique known as parameter advising, which finds global parameter settings for an aligner, to now adaptively find local settings. Our approach in essence identifies local regions with low estimated accuracy, constructs a set of candidate realignments using a carefully-chosen collection of parameter settings, and replaces the region if a realignment has higher estimated accuracy. This new method of local parameter advising, when combined with prior methods for global advising, boosts alignment accuracy as much as 26% over the best default setting on hard-to-align protein benchmarks, and by 6.4% over global advising alone. Adaptive local realignment has been implemented within the Opal aligner using the Facet accuracy estimator.
Riley, William; Briggs, Jill; McCullough, Mac
2011-01-01
This study presents a model for determining the total funding needed for individual local health departments. The aim is to determine the financial resources needed to provide services for statewide local public health departments in Minnesota based on a gaps analysis done to estimate the funding needs. We used a multimethod analysis consisting of 3 approaches to estimate gaps in local public health funding: (1) interviews of selected local public health leaders, (2) a Delphi panel, and (3) a Nominal Group Technique. On the basis of these 3 approaches, a consensus estimate of funding gaps was generated for statewide projections. The study includes an analysis of cost, performance, and outcomes from 2005 to 2007 for all 87 local governmental health departments in Minnesota. For each of the methods, we selected a panel to represent a profile of Minnesota health departments. The 2 main outcome measures were local-level gaps in financial resources and total resources needed to provide public health services at the local level. The total public health expenditure in Minnesota for local governmental public health departments was $302 million in 2007 ($58.92 per person). The consensus estimate of the financial gaps in local public health departments indicates that an additional $32.5 million (a 10.7% increase, or $6.32 per person) is needed to adequately serve public health needs in the local communities. It is possible to make informed estimates of funding gaps for public health activities on the basis of a combination of quantitative methods. There is wide variation in public health expenditure at the local level, and methods are needed to establish minimum baseline expenditure levels to adequately serve a population. The gaps analysis can be used by stakeholders to inform policy makers of the need for improved funding of the public health system.
Mapping the Similarities of Spectra: Global and Locally-biased Approaches to SDSS Galaxies
NASA Astrophysics Data System (ADS)
Lawlor, David; Budavári, Tamás; Mahoney, Michael W.
2016-12-01
We present a novel approach to studying the diversity of galaxies, based on a spectral graph technique: locally-biased semi-supervised eigenvectors. Our method introduces new coordinates that summarize an entire spectrum, similar to but going well beyond the widely used Principal Component Analysis (PCA). Unlike PCA, however, this technique does not assume that the Euclidean distance between galaxy spectra is a good global measure of similarity. Instead, we relax that condition to only the most similar spectra, and we show that doing so yields more reliable results for many astronomical questions of interest. The global variant of our approach can identify numerous astronomical phenomena of interest at a very fine level. The locally-biased variants of our basic approach enable us to explore subtle trends around a set of chosen objects. The power of the method is demonstrated on the Sloan Digital Sky Survey Main Galaxy Sample, by illustrating that the derived spectral coordinates carry an unprecedented amount of information.
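The global variant of such spectral coordinates can be sketched with a plain normalized-Laplacian embedding of a Gaussian similarity graph; the locally-biased semi-supervised eigenvectors of the paper add seed constraints on top of this, which the sketch below omits. The kernel width and number of coordinates are illustrative choices.

```python
import numpy as np

def spectral_coordinates(X, sigma=1.0, n_coords=2):
    """Global spectral embedding: trailing eigenvectors of the normalized
    graph Laplacian of a Gaussian similarity graph serve as low-dimensional
    coordinates summarizing each object (here, each 'spectrum')."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # similarity, not Euclidean distance
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    L = np.eye(len(X)) - W / np.sqrt(d[:, None] * d[None, :])  # normalized Laplacian
    w, v = np.linalg.eigh(L)
    return v[:, 1:1 + n_coords]                 # skip the trivial constant eigenvector
```

Because the Gaussian kernel decays quickly, only the most similar objects contribute appreciable edge weight, which is the sense in which the global Euclidean-distance assumption is relaxed.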
Localized-overlap approach to calculations of intermolecular interactions
NASA Astrophysics Data System (ADS)
Rob, Fazle
Symmetry-adapted perturbation theory (SAPT) based on the density functional theory (DFT) description of the monomers [SAPT(DFT)] is one of the most robust tools for computing intermolecular interaction energies. Currently, one can use the SAPT(DFT) method to calculate interaction energies of dimers consisting of about a hundred atoms. To remove the methodological and technical limits and extend the size of the systems that can be calculated with the method, a novel approach has been proposed that redefines the electron densities and polarizabilities in a localized way. In the new method, accurate but computationally expensive quantum-chemical calculations are only applied for the regions where it is necessary and for other regions, where overlap effects of the wave functions are negligible, inexpensive asymptotic techniques are used. Unlike other hybrid methods, this new approach is mathematically rigorous. The main benefit of this method is that with the increasing size of the system the calculation scales linearly and, therefore, this approach will be denoted as local-overlap SAPT(DFT) or LSAPT(DFT). As a byproduct of developing LSAPT(DFT), some important problems concerning distributed molecular response, in particular, the unphysical charge-flow terms were eliminated. Additionally, to illustrate the capabilities of SAPT(DFT), a potential energy function has been developed for an energetic molecular crystal of 1,1-diamino-2,2-dinitroethylene (FOX-7), where an excellent agreement with the experimental data has been found.
From politics to policy: a new payment approach in Medicare Advantage.
Berenson, Robert A
2008-01-01
While the Medicare Advantage program's future remains contentious politically, the Medicare Payment Advisory Commission's (MedPAC's) recommended policy of financial neutrality at the local level between private plans and traditional Medicare ignores local market dynamics in important ways. An analysis correlating plan bids against traditional Medicare's local spending levels likely would provide an alternative method of setting benchmarks, by producing a blend of local and national rates. A result would be that the rural and lower-cost urban "floor counties" would have benchmarks below currently inflated levels but above what financial neutrality at the local level--MedPAC's approach--would produce.
NASA Astrophysics Data System (ADS)
Shiri, Jalal
2018-06-01
Among different reference evapotranspiration (ETo) modeling approaches, mass transfer-based methods have been less studied. These approaches utilize temperature and wind speed records. On the other hand, the empirical equations proposed in this context generally produce weak simulations unless a local calibration is used to improve their performance, which can be a crucial drawback when local data for the calibration procedure are scarce. Application of heuristic methods can therefore be considered as a substitute for improving the performance accuracy of the mass transfer-based approaches. However, given that wind speed records usually have higher variation magnitudes than the other meteorological parameters, coupling a wavelet transform with the heuristic models is necessary. In the present paper, a coupled wavelet-random forest (WRF) methodology is proposed for the first time to improve the performance accuracy of the mass transfer-based ETo estimation approaches, using cross-validation data management scenarios at both local and cross-station scales. The obtained results revealed that the new coupled WRF model (with minimum scatter index values of 0.150 and 0.192 for local and external applications, respectively) improved the performance accuracy of the single RF models as well as the empirical equations to a great extent.
Estimation of reflectance from camera responses by the regularized local linear model.
Zhang, Wei-Feng; Tang, Gongguo; Dai, Dao-Qing; Nehorai, Arye
2011-10-01
Because of the limited approximation capability of fixed basis functions, the reflectance estimates obtained with traditional linear models are not optimal. We propose an approach based on a regularized local linear model. Our approach performs efficiently and requires no knowledge of the spectral power distribution of the illuminant or the spectral sensitivities of the camera. Experimental results show that the proposed method performs better than some well-known methods in terms of both reflectance error and colorimetric error. © 2011 Optical Society of America
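The local-linear idea can be sketched numerically: instead of one global linear map from camera responses to reflectances, fit a regularized (ridge) map on only the training samples nearest the query response. Everything below — the function name, the neighbourhood size k, and the ridge parameter — is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def estimate_reflectance_local_ridge(response, train_responses, train_reflectances,
                                     k=20, lam=1e-3):
    """Toy local-ridge estimator: fit a regularized linear map on the k
    training samples whose camera responses are closest to the query."""
    # distances in camera-response space
    d = np.linalg.norm(train_responses - response, axis=1)
    idx = np.argsort(d)[:k]
    X = train_responses[idx]        # (k, n_channels)
    Y = train_reflectances[idx]     # (k, n_wavelengths)
    # ridge solution W = (X^T X + lam I)^{-1} X^T Y
    W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
    return response @ W
```

The regularization term keeps the small local system well conditioned, which is the practical point of combining locality with ridge regression.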
Local discretization method for overdamped Brownian motion on a potential with multiple deep wells.
Nguyen, P T T; Challis, K J; Jack, M W
2016-11-01
We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells into a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that wells deeper than about five times the thermal energy can be associated with a discrete localized state, while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.
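The deep-well criterion can be caricatured with Arrhenius escape rates: wells much deeper than the thermal energy leak slowly and behave as discrete states of a master equation. The rate law and ring topology below are illustrative assumptions, not the paper's localized-basis-state expansion.

```python
import numpy as np

def escape_rates(well_energies, barrier, kT=1.0, prefactor=1.0):
    """Kramers/Arrhenius-style escape rate out of each well over a shared
    barrier: k_i = prefactor * exp(-(barrier - U_i) / kT).  Deeper wells
    (lower U_i) escape more slowly."""
    U = np.asarray(well_energies, float)
    return prefactor * np.exp(-(barrier - U) / kT)

def master_equation_step(p, rates, dt):
    """One explicit-Euler step of a nearest-neighbour master equation on a
    ring of wells: the outflow from each well splits equally between its
    two neighbours, so total probability is conserved."""
    p = np.asarray(p, float)
    out = rates * p
    inflow = 0.5 * (np.roll(out, 1) + np.roll(out, -1))
    return p + dt * (inflow - out)
```

A well at U = -5 kT with a barrier at 0 has an escape rate suppressed by e^-5, which is the sense in which deep wells decouple into discrete localized states.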
Geometrical optics in the near field: local plane-interface approach with evanescent waves.
Bose, Gaurav; Hyvärinen, Heikki J; Tervo, Jani; Turunen, Jari
2015-01-12
We show that geometrical models may provide useful information on light propagation in wavelength-scale structures even if evanescent fields are present. We apply so-called local plane-wave and local plane-interface methods to study a geometry that resembles a scanning near-field microscope. We show that fair agreement between the geometrical approach and rigorous electromagnetic theory can be achieved in the case where evanescent waves are required to predict any transmission through the structure.
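The local plane-interface idea rests on standard Fresnel coefficients evaluated interface by interface; allowing the transmission-side cosine to become complex extends the same formula past the critical angle, where the transmitted field is evanescent. A generic s-polarization sketch for lossless media (not the authors' code):

```python
import numpy as np

def fresnel_ts(n1, n2, theta_i):
    """s-polarized Fresnel amplitude transmission at a planar interface.
    Using a complex square root lets the one formula also cover total
    internal reflection, where cos(theta_t) is imaginary (evanescent wave)."""
    cos_i = np.cos(theta_i)
    # Snell's law: n1 sin(theta_i) = n2 sin(theta_t)
    sin_t = n1 * np.sin(theta_i) / n2
    cos_t = np.sqrt(np.complex128(1 - sin_t**2))   # complex beyond the critical angle
    return 2 * n1 * cos_i / (n1 * cos_i + n2 * cos_t)
```

For glass to air (n1 = 1.5, n2 = 1.0) the critical angle is about 0.73 rad; beyond it the coefficient becomes complex, encoding the evanescent field that a purely real-angled geometrical model would miss.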
A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework
NASA Astrophysics Data System (ADS)
Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco
2017-11-01
We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique, specifically designed to be effective for LES applications, is proposed, and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.
Full-Field Strain Methods for Investigating Failure Mechanisms in Triaxial Braided Composites
NASA Technical Reports Server (NTRS)
Littell, Justin D.; Binienda, Wieslaw K.; Goldberg, Robert K.; Roberts, Gary D.
2008-01-01
Recent advancements in braiding technology have led to commercially viable manufacturing approaches for making large structures with complex shapes out of triaxial braided composite materials. In some cases, the static load capability of structures made using these materials has been higher than expected based on material strength properties measured using standard coupon tests. A more detailed investigation of deformation and failure processes in large-unit-cell-size triaxial braid composites is needed to evaluate the applicability of standard test methods for these materials and to develop alternative testing approaches. This report presents some new techniques that have been developed to investigate local deformation and failure using digital image correlation techniques. The methods were used to measure both local and global strains during standard straight-sided coupon tensile tests on composite materials made with 12- and 24-k yarns and a 0°/+60°/−60° triaxial braid architecture. Local deformation and failure within fiber bundles were observed, and correlations were made between these local failures and global composite deformation and strength.
NASA Astrophysics Data System (ADS)
Titantah, John T.; Karttunen, Mikko
2016-05-01
Electronic and optical properties of silver clusters were calculated using two different ab initio approaches: (1) an all-electron full-potential linearized augmented plane-wave method and (2) a local basis function pseudopotential approach. The two methods agree for the small and intermediate-sized clusters to which the former is limited by its all-electron formulation. The latter, with its non-periodic boundary conditions, is the more natural approach for simulating small clusters. The effect of cluster size is then explored using the local basis function approach. We find that as the cluster size increases, the electronic structure undergoes a transition from molecular to nanoparticle behavior at a cluster size of 140 atoms (diameter ~1.7 nm). Above this cluster size, the step-like electronic structure, evident as several features in the imaginary part of the polarizability of all clusters smaller than Ag147, gives way to a dominant plasmon peak localized at wavelengths 350 nm ≤ λ ≤ 600 nm. It is thus at this length scale that the collective oscillations of the conduction electrons responsible for plasmonic resonances begin to dominate the opto-electronic properties of silver nanoclusters.
Hotspot detection using image pattern recognition based on higher-order local auto-correlation
NASA Astrophysics Data System (ADS)
Maeda, Shimon; Matsunawa, Tetsuaki; Ogawa, Ryuji; Ichikawa, Hirotaka; Takahata, Kazuhiro; Miyairi, Masahiro; Kotani, Toshiya; Nojima, Shigeki; Tanaka, Satoshi; Nakagawa, Kei; Saito, Tamaki; Mimotogi, Shoji; Inoue, Soichi; Nosato, Hirokazu; Sakanashi, Hidenori; Kobayashi, Takumi; Murakawa, Masahiro; Higuchi, Tetsuya; Takahashi, Eiichi; Otsu, Nobuyuki
2011-04-01
Below the 40 nm design node, systematic variation due to lithography must be taken into consideration during the early stages of design. So far, litho-aware design using lithography simulation models has been widely applied to assure that designs are printed on silicon without error. However, the lithography simulation approach is very time consuming, and under time-to-market pressure, repeated redesign by this approach may cause the market window to be missed. This paper proposes a fast hotspot-detection support method using flexible and intelligent image pattern recognition based on higher-order local autocorrelation (HLAC). Our method learns the geometrical properties of defect-free design data as normal patterns and automatically detects design patterns containing hotspots in the test data as abnormal patterns. The higher-order local autocorrelation method extracts features from the graphic image of a design pattern, and the computational cost of the extraction is constant regardless of the number of design pattern polygons. This approach reduces turnaround time (TAT) dramatically even on a single CPU compared with the conventional simulation-based approach, and with distributed processing it has proven to deliver linear scalability with each additional CPU.
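The constant-cost property comes from computing each autocorrelation feature as a whole-image product of shifted copies, independent of how many polygons the layout contains. A toy sketch of 0th- and 1st-order HLAC features on a binary layout image (periodic boundaries via np.roll for brevity; the standard HLAC set uses 25 mask patterns up to 2nd order, so this subset is only illustrative):

```python
import numpy as np

def hlac_features_order1(img):
    """Sketch of 0th- and 1st-order HLAC features on a binary image.
    Each feature is sum_r I(r) * I(r + a) for a displacement a inside a
    3x3 window; the cost is a fixed number of shifted element-wise
    products, regardless of pattern complexity."""
    img = np.asarray(img, float)
    feats = [img.sum()]                      # 0th order: no displacement
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if (dy, dx) == (0, 0):
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            feats.append((img * shifted).sum())  # 1st order at (dy, dx)
    return np.array(feats)
```

A classifier trained on such feature vectors from known-good layouts can then flag test layouts whose features fall outside the learned normal range.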
Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun
2015-01-01
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and that the performance of the proposed method is superior to that of the conventional approach. PMID:26151203
Wu, Sangwook
2009-03-01
We investigate dynamical self-arrest in a diblock copolymer melt using a replica approach within a self-consistent local method based on dynamical mean-field theory (DMFT). The local replica approach effectively predicts (χN)_A for dynamical self-arrest in a block copolymer melt for both symmetric and asymmetric cases. We discuss the competition between the cubic and quartic interactions in the Landau free energy for a block copolymer melt in stabilizing a glassy state depending on the chain length. Our local replica theory provides a universal value for the dynamical self-arrest in block copolymer melts, with (χN)_A ≈ 10.5 + 64 N^(-3/10) for the symmetric case.
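The quoted scaling is easy to evaluate directly; N = 100 below is an arbitrary example chain length. The finite-chain correction decays toward 10.5 as N grows.

```python
# (chi N)_A ≈ 10.5 + 64 * N^(-3/10) for the symmetric diblock melt
N = 100
chiN_A = 10.5 + 64 * N ** (-0.3)
print(round(chiN_A, 2))   # ≈ 26.58
```

So for moderate chain lengths the predicted self-arrest point sits well above the N → ∞ limit of 10.5.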
Nielsen, Rasmus
2017-01-01
Admixture, the mixing of genomes from divergent populations, is increasingly appreciated as a central process in evolution. To characterize and quantify patterns of admixture across the genome, a number of methods have been developed for local ancestry inference. However, existing approaches have several shortcomings. First, all local ancestry inference methods require some prior assumption about the expected ancestry tract lengths. Second, existing methods generally require genotypes, which are not feasible to obtain for many next-generation sequencing projects. Third, many methods assume samples are diploid; however, a wide variety of sequencing applications will fail to meet this assumption. To address these issues, we introduce a novel hidden Markov model for estimating local ancestry that models the read pileup data rather than genotypes, is generalized to arbitrary ploidy, and can estimate the time since admixture during local ancestry inference. We demonstrate that our method can simultaneously estimate the time since admixture and local ancestry with good accuracy, and that it performs well on samples of high ploidy, i.e. 100 or more chromosomes. As this method is very general, we expect it will be useful for local ancestry inference in a wider variety of populations than has previously been possible. We then applied our method to pooled sequencing data derived from populations of Drosophila melanogaster on an ancestry cline on the east coast of North America. We find that local recombination rates are negatively correlated with the proportion of African ancestry, suggesting that selection against foreign ancestry is least efficient in low-recombination regions. Finally, we show that clinal outlier loci are enriched for genes associated with gene regulatory functions, consistent with a role of regulatory evolution in the ecological adaptation of admixed D. melanogaster populations. 
Our results illustrate the potential of local ancestry inference for elucidating fundamental evolutionary processes. PMID:28045893
NASA Astrophysics Data System (ADS)
Laubie, Hadrien; Radjaï, Farhang; Pellenq, Roland; Ulm, Franz-Josef
2017-08-01
Fracture of heterogeneous materials has emerged as a critical issue in many engineering applications, ranging from subsurface energy to biomedical applications, and requires a rational framework that allows linking local fracture processes with global fracture descriptors such as the energy release rate, fracture energy, and fracture toughness. This is achieved here by means of a local and a global potential-of-mean-force (PMF) inspired Lattice Element Method (LEM) approach. In the local approach, fracture-strength criteria derived from the effective interaction potentials between mass points are shown to exhibit a scaling commensurate with the energy dissipation of fracture processes. In the global PMF approach, fracture is considered as a sequence of equilibrium states associated with minimum potential energy states, analogous to Griffith's approach. It is found that this global approach has much in common with a Grand Canonical Monte Carlo (GCMC) approach, in which mass points are randomly removed following a maximum dissipation criterion until the energy release rate reaches the fracture energy. The duality of the two approaches is illustrated through the application of the PMF-inspired LEM to fracture propagation in a homogeneous linear elastic solid using different means of evaluating the energy release rate. Finally, by application of the method to a textbook example of fracture propagation in a heterogeneous material, it is shown that the proposed PMF-inspired LEM approach captures some well-known toughening mechanisms related to fracture energy contrast, elasticity contrast, and crack deflection in the considered two-phase layered composite material.
Alternatives for Jet Engine Control
NASA Technical Reports Server (NTRS)
Leake, R. J.; Sain, M. K.
1976-01-01
Approaches are developed as alternatives to current design methods which rely heavily on linear quadratic and Riccati equation methods. The main alternatives are discussed in two broad categories, local multivariable frequency domain methods and global nonlinear optimal methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Glass, Samuel W.; Fifield, Leonard S.; Bowler, Nicola
This Pacific Northwest National Laboratory milestone report describes progress to date on the investigation of non-destructive test methods focusing on local cable insulation and jacket testing using an interdigital capacitance (IDC) approach. Earlier studies have assessed a number of non-destructive examination (NDE) methods for bulk, distributed, and local cable tests. A typical test strategy is to perform bulk assessments of the cable response using dielectric spectroscopy, tan δ, or partial discharge, followed by distributed tests such as time domain reflectometry or frequency domain reflectometry to identify the most likely defect location, followed by a local test that can include visual inspection, indenter modulus tests, Fourier Transform Infrared Spectroscopy (FTIR), or Fourier Transform Near-Infrared Spectroscopy (FTNIR). If a cable is covered with an overlaying jacket, the jacket's condition is likely to be more severely degraded than that of the underlying insulation. None of the above local test approaches can evaluate insulation beneath a cable jacket. Since the jacket's function is neither structural nor electrical, a degraded jacket may have no significance for the cable's performance or suitability for service. IDC measurements offer a promising alternative or complement to these local test approaches, including the possibility of testing insulation beneath an overlaying jacket.
NASA Astrophysics Data System (ADS)
Tam, Kai-Chung; Lau, Siu-Kit; Tang, Shiu-Keung
2016-07-01
A microphone array signal processing method for locating a stationary point source over a locally reactive ground and for estimating the ground impedance is examined in detail in the present study. A non-linear least squares approach using the Levenberg-Marquardt method is proposed to overcome the problem of unknown ground impedance. The multiple signal classification (MUSIC) method is used to give the initial estimate of the source location, while forward-backward spatial smoothing is adopted as a pre-processor for the source localization to minimize the effects of source coherence. The accuracy and robustness of the proposed signal processing method are examined. Results show that source localization in the horizontal direction by MUSIC is satisfactory. However, source coherence drastically reduces the accuracy of the source height estimate. Further application of the Levenberg-Marquardt method, with the results from MUSIC as initial inputs, significantly improves the accuracy of the height estimation. The proposed method provides effective and robust estimation of the ground surface impedance.
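The MUSIC step itself is standard: eigendecompose the sample covariance, keep the noise subspace, and scan steering vectors for directions nearly orthogonal to it. A generic sketch for a uniform linear array (the geometry, half-wavelength spacing, and names are assumptions; the paper's formulation additionally handles ground reflection and spatial smoothing):

```python
import numpy as np

def music_spectrum(X, n_sources, scan_angles, d=0.5):
    """MUSIC pseudospectrum for a uniform linear array with element
    spacing d (in wavelengths).  X is (n_sensors, n_snapshots) complex
    baseband data; peaks of the returned spectrum indicate arrival angles."""
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]        # sample covariance
    w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = V[:, : n_sensors - n_sources]     # noise subspace
    P = []
    for th in scan_angles:
        a = np.exp(-2j * np.pi * d * np.arange(n_sensors) * np.sin(th))
        # near-zero projection onto the noise subspace -> large peak
        P.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)
```

The peak location of this pseudospectrum is exactly the kind of coarse estimate that can seed a Levenberg-Marquardt refinement.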
Does remote sensing help translating local SGD investigation to large spatial scales?
NASA Astrophysics Data System (ADS)
Moosdorf, N.; Mallast, U.; Hennig, H.; Schubert, M.; Knoeller, K.; Neehaul, Y.
2016-02-01
Within the last 20 years, studies on submarine groundwater discharge (SGD) have revealed numerous processes, temporal behaviors, and quantitative estimates, as well as best-practice and localization methods. This wealth of information is valuable for understanding the magnitude and effects of SGD at the respective locations. Yet, since local conditions vary, translating local understanding, magnitudes, and effects to a regional or global scale is not trivial. In contrast, modeling approaches (e.g. the 228Ra budget) that tackle SGD on a global scale do provide quantitative global estimates but have not been related to local investigations. This gap between the two approaches, local and global, and the combination and/or translation of either one to the other, represents one of the major challenges the SGD community currently faces. But what if remote sensing could provide information that serves as a translation between the two, similar to transfer functions in many other disciplines, allowing an extrapolation from in-situ investigated and quantified SGD (discrete information) to regional scales or beyond? Admittedly, the sketched future is ambitious, and we will certainly not be able to present a complete solution to the raised question. Nonetheless, we will show a remote sensing-based approach that is already able to identify potential SGD sites independent of location or hydrogeological conditions. With multi-temporal thermal information of the water surface at the core of the approach, SGD-influenced sites display a smaller thermal variation (thermal anomalies) than surrounding uninfluenced areas. Despite its apparent simplicity, the automated approach has helped to localize several sites that could be validated with proven in-situ methods. At the same time, it carries the risk of identifying false positives, which can only be avoided if we can 'calibrate' the thermal anomalies thus obtained against in-situ data. 
We will present the pros and cons of our approach with the intention of contributing to the solution of translating SGD investigation to larger scales.
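The screening step described above reduces to a per-pixel statistic: groundwater-influenced water stays near-constant in temperature, so its temporal standard deviation over a stack of thermal scenes is anomalously small. A minimal sketch (the quantile threshold is an illustrative choice, not the authors' calibrated value):

```python
import numpy as np

def thermal_anomaly_mask(sst_stack, quantile=0.1):
    """Flag candidate SGD pixels: those whose temporal standard deviation
    of sea-surface temperature falls in the lowest `quantile` of the scene.
    sst_stack has shape (n_times, ny, nx); NaNs (clouds) are ignored."""
    std_map = np.nanstd(sst_stack, axis=0)       # (ny, nx) temporal variation
    thresh = np.nanquantile(std_map, quantile)
    return std_map <= thresh
```

This is where the calibration problem mentioned in the abstract enters: without in-situ data, the quantile (or absolute threshold) separating damped SGD pixels from quiet open water is arbitrary.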
ERIC Educational Resources Information Center
Watkins, F.; Bristow, K.; Robertson, S.; Norman, R.; Litva, A.; Stanistreet, D.
2013-01-01
Objective: To explore the experiences of young men aged 16-19, living in an area of high deprivation, when accessing local sexual health services. Design: A qualitative design drawing on ethnographic methods. Setting: A local college. Methods: A multi-method approach was adopted using: one-to-one semi-structured interviews with young men and…
NASA Astrophysics Data System (ADS)
Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.
2018-06-01
The observation-to-observation measurement association problem for dynamical systems can be addressed by determining whether the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization-based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at each local Mahalanobis distance minimum. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced-dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method are demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.
Approximate Dynamic Programming: Combining Regional and Local State Following Approximations.
Deptula, Patryk; Rosenfeld, Joel A; Kamalapurkar, Rushikesh; Dixon, Warren E
2018-06-01
An infinite-horizon optimal regulation problem for a control-affine deterministic system is solved online using a local state following (StaF) kernel and a regional model-based reinforcement learning (R-MBRL) method to approximate the value function. Unlike traditional methods such as R-MBRL that aim to approximate the value function over a large compact set, the StaF kernel approach aims to approximate the value function in a local neighborhood of the state that travels within a compact set. In this paper, the value function is approximated using a state-dependent convex combination of the StaF-based and the R-MBRL-based approximations. As the state enters a neighborhood containing the origin, the value function transitions from being approximated by the StaF approach to the R-MBRL approach. Semiglobal uniformly ultimately bounded (SGUUB) convergence of the system states to the origin is established using a Lyapunov-based analysis. Simulation results are provided for two-, three-, six-, and ten-state dynamical systems to demonstrate the scalability and performance of the developed method.
Factors affecting wound ooze in total knee replacement
Butt, U; Ahmad, R; Aspros, D; Bannister, GC
2010-01-01
INTRODUCTION Wound ooze is common following total knee arthroplasty (TKA), and persistent wound ooze is a risk factor for infection and for increased length and cost of hospitalisation. PATIENTS AND METHODS We undertook a prospective study to assess the effect of tourniquet time, peri-articular local anaesthesia and surgical approach on wound oozing after TKA. RESULTS The medial parapatellar approach was used in 59 patients (77%) and the subvastus approach in 18 patients (23%). Peri-articular local anaesthesia (0.25% Bupivacaine with 1:1,000,000 adrenalin) was used in 34 patients (44%). The mean tourniquet time was 83 min (range, 38–125 min). We found a significant association between cessation of oozing and peri-articular local anaesthesia (P = 0.003), length of tourniquet time (P = 0.03) and the subvastus approach (P = 0.01). CONCLUSIONS Peri-articular local anaesthesia, the subvastus approach and a shorter tourniquet time were all associated with less wound oozing after total knee arthroplasty. PMID:20836920
Meng, Xiaoli
2017-01-01
Precise and robust localization in a large-scale outdoor environment is essential for an autonomous vehicle. In order to improve the performance of the fusion of GNSS (Global Navigation Satellite System)/IMU (Inertial Measurement Unit)/DMI (Distance-Measuring Instruments), a multi-constraint fault detection approach is proposed to smooth the vehicle locations in spite of GNSS jumps. Furthermore, the lateral localization error is compensated by the point cloud-based lateral localization method proposed in this paper. Experiment results have verified the algorithms proposed in this paper, which shows that the algorithms proposed in this paper are capable of providing precise and robust vehicle localization. PMID:28926996
Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme
2013-07-01
The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of the sequences. However, the latter is an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. its running time is comparable to or better than that of local search methods), seedless (we remove the bias of the seed in local search heuristics) and successfully generates high-quality (i.e. thermodynamically stable) sequences for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, this 'glocal' methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
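The GC-content control can be illustrated with a much-simplified global sampler that ignores structure entirely; IncaRNAtion's actual weighted sampling also conditions on the target secondary structure, so this sketch only shows how weighting draws fixes the expected composition.

```python
import random

def sample_sequence(length, gc_target, seed=None):
    """Sample a nucleotide sequence whose expected GC-content equals
    gc_target, by first choosing the G/C vs A/U class with probability
    gc_target, then a uniform base within the class."""
    rng = random.Random(seed)
    seq = []
    for _ in range(length):
        if rng.random() < gc_target:
            seq.append(rng.choice("GC"))
        else:
            seq.append(rng.choice("AU"))
    return "".join(seq)
```

By the law of large numbers the realized GC fraction concentrates around gc_target for long sequences, which is the property a seedless global sampler exploits.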
An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
1999-01-01
An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.
Knowledge-Based Topic Model for Unsupervised Object Discovery and Localization.
Niu, Zhenxing; Hua, Gang; Wang, Le; Gao, Xinbo
Unsupervised object discovery and localization aims to discover dominant object classes and localize all object instances in a given image collection without any supervision. Previous work has attempted to tackle this problem with vanilla topic models, such as latent Dirichlet allocation (LDA). However, those methods exploit no prior knowledge about the given image collection to facilitate object discovery. Moreover, the topic models they use suffer from the topic coherence issue: some inferred topics have no clear meaning, which limits the final performance of object discovery. In this paper, prior knowledge in the form of so-called must-links is exploited from Web images on the Internet. Furthermore, a novel knowledge-based topic model, called LDA with a mixture of Dirichlet trees, is proposed to incorporate the must-links into topic modeling for object discovery. In particular, to better deal with the polysemy phenomenon of visual words, the must-link is re-defined so that one must-link constrains only one or some topic(s) instead of all topics, which leads to significantly improved topic coherence. Moreover, the must-links are built and grouped with respect to specific object classes; thus the must-links in our approach are semantic-specific, which allows more efficient exploitation of discriminative prior knowledge from Web images. Extensive experiments validated the efficiency of our proposed approach on several data sets. It is shown that our method significantly improves topic coherence and outperforms the unsupervised methods for object discovery and localization. 
In addition, compared with discriminative methods, the naturally existing object classes in the given image collection can be subtly discovered, which makes our approach well suited for realistic applications of unsupervised object discovery.
Computing frequency by using generalized zero-crossing applied to intrinsic mode functions
NASA Technical Reports Server (NTRS)
Huang, Norden E. (Inventor)
2006-01-01
This invention presents a method for computing Instantaneous Frequency by applying Empirical Mode Decomposition to a signal and using Generalized Zero-Crossing (GZC) and Extrema Sifting. The GZC approach is the most direct and local, and also the most accurate in the mean. Furthermore, this approach also gives a statistical measure of the scattering of the frequency value. For most practical applications, this mean frequency, localized down to a quarter of a wave period, is already a well-accepted result. As this method physically measures the period, or part of it, the values obtained can serve as the best local mean over the period to which it applies. Through Extrema Sifting, instead of cubic spline fitting, this invention constructs the upper and lower envelopes by connecting the local maxima and local minima of the signal with straight lines, respectively, when extracting a collection of Intrinsic Mode Functions (IMFs) from the signal under consideration.
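As a rough illustration of the zero-crossing idea, a local mean frequency and its scatter can be estimated from the gaps between successive zero crossings of a signal or IMF. The sketch below is a simplified stand-in, not the patented GZC procedure (which also folds in extrema and quarter-period timings); the sampling rate and test tone are invented for the example.

```python
import numpy as np

def zero_crossing_frequency(signal, fs):
    """Estimate a mean local frequency from zero crossings.

    Each pair of consecutive zero crossings spans half a period, so
    the local frequency over that segment is fs / (2 * gap). Returns
    the mean frequency and its scatter (standard deviation).
    """
    s = np.asarray(signal, dtype=float)
    # indices where the sign changes mark zero crossings
    crossings = np.nonzero(np.diff(np.signbit(s)))[0]
    if len(crossings) < 2:
        return None
    half_periods = np.diff(crossings) / fs     # seconds per half wave
    freqs = 1.0 / (2.0 * half_periods)         # Hz for each half-wave segment
    return freqs.mean(), freqs.std()           # mean value and scatter

# a 5 Hz test tone sampled at 1 kHz
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
mean_f, scatter = zero_crossing_frequency(np.sin(2 * np.pi * 5 * t), fs)
print(round(mean_f))   # 5
```

For a pure tone the estimate recovers the frequency almost exactly; for an IMF the per-segment values vary, and their spread is exactly the statistical scatter measure the abstract mentions.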
Lin, Tzu-Hsuan; Lu, Yung-Chi; Hung, Shih-Lin
2014-01-01
This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building.
Automatic image enhancement based on multi-scale image decomposition
NASA Astrophysics Data System (ADS)
Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong
2014-01-01
In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces or the sky in a landscape. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.
Local regression type methods applied to the study of geophysics and high frequency financial data
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to geophysical and high-frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is accurate to within a relative error of 0.01%. We also applied the same method to a high-frequency data set arising in the financial sector and obtained similarly satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and that the Lowess approach is much more desirable than the Loess method. Previous works studied time series analysis; in this paper our local regression models perform a spatial analysis of the geophysical data, providing different information. For the high-frequency data, our models estimate the curve of best fit where the data depend on time.
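The locally weighted regression underlying Lowess can be sketched in a few lines: for each point, fit a straight line to its nearest neighbours using tricube weights, then evaluate that line at the point. This is a minimal illustration, not the authors' implementation; the robustness iterations of full Lowess and the higher-order Loess variant are omitted.

```python
import numpy as np

def lowess(x, y, frac=0.3):
    """Minimal locally weighted linear regression (Lowess-style sketch).

    For each point, fit a weighted straight line through the nearest
    frac * n neighbours with tricube weights.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    k = max(int(frac * n), 2)
    fitted = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]              # k nearest neighbours
        h = d[idx].max() or 1.0              # local bandwidth
        w = (1 - (d[idx] / h) ** 3) ** 3     # tricube weights
        A = np.vstack([np.ones(k), x[idx]]).T
        W = np.diag(w)
        # weighted least squares for intercept and slope
        beta, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y[idx], rcond=None)
        fitted[i] = beta[0] + beta[1] * x[i]
    return fitted

x = np.linspace(0, 10, 50)
y = np.sin(x)
smooth = lowess(x, y, frac=0.2)   # tracks the sine closely
```

The `frac` parameter plays the usual Lowess role: smaller values follow local structure (as needed for spatially varying earthquake magnitudes), larger values give a smoother global trend.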
LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS
Einstein, Daniel R.; Dyedov, Vladimir
2010-01-01
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546
NASA Technical Reports Server (NTRS)
Barker, L. E., Jr.; Bowles, R. L.; Williams, L. H.
1973-01-01
High angular rates encountered in real-time flight simulation problems may require a more stable and accurate integration method than the classical methods normally used. A study was made to develop a general local linearization procedure for integrating dynamic system equations on a digital computer in real time. The procedure is specifically applied to the integration of the quaternion rate equations. For this application, results are compared to a classical second-order method. The local linearization approach is shown to have desirable stability characteristics and gives a significant improvement in accuracy over classical second-order integration methods.
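A common closed-form step of this kind treats the body rate as constant over the step and propagates the quaternion with the exponential map, which is exact for constant rate and so remains stable at high angular rates where low-order polynomial integrators drift. The sketch below illustrates that idea; it is an assumption-laden illustration, not the report's exact procedure.

```python
import numpy as np

def quat_mult(a, b):
    """Hamilton product of quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def step_quaternion(q, omega, dt):
    """Advance the attitude quaternion one step, treating the body
    rate omega (rad/s) as constant over the step."""
    th = np.linalg.norm(omega) * dt / 2.0
    if th < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        axis = omega / np.linalg.norm(omega)
        dq = np.concatenate([[np.cos(th)], np.sin(th) * axis])
    q_new = quat_mult(q, dq)
    return q_new / np.linalg.norm(q_new)   # renormalise against drift

# spin at 2 rad/s about z for 1 s in 100 steps
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = step_quaternion(q, np.array([0.0, 0.0, 2.0]), 0.01)
# rotation angle recovered from the scalar part
angle = 2 * np.arccos(np.clip(q[0], -1, 1))
print(round(angle, 6))   # 2.0 rad
```

Because each step composes an exact rotation, the accumulated angle is recovered with no truncation error, unlike a plain second-order update of the raw rate equations.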
A Core Journal Decision Model Based on Weighted Page Rank
ERIC Educational Resources Information Center
Wang, Hei-Chia; Chou, Ya-lin; Guo, Jiunn-Liang
2011-01-01
Purpose: The paper's aim is to propose a core journal decision method, called the local impact factor (LIF), which can evaluate the requirements of the local user community by combining both the access rate and the weighted impact factor, and by tracking citation information on the local users' articles. Design/methodology/approach: Many…
Robust 3D face landmark localization based on local coordinate coding.
Song, Mingli; Tao, Dacheng; Sun, Shengpeng; Chen, Chun; Maybank, Stephen J
2014-12-01
In the 3D facial animation and synthesis community, input faces are usually required to be labeled by a set of landmarks for parameterization. Because of the variations in pose, expression and resolution, automatic 3D face landmark localization remains a challenge. In this paper, a novel landmark localization approach is presented. The approach is based on local coordinate coding (LCC) and consists of two stages. In the first stage, we perform nose detection, relying on the fact that the nose shape is usually invariant under the variations in the pose, expression, and resolution. Then, we use the iterative closest points algorithm to find a 3D affine transformation that aligns the input face to a reference face. In the second stage, we perform resampling to build correspondences between the input 3D face and the training faces. Then, an LCC-based localization algorithm is proposed to obtain the positions of the landmarks in the input face. Experimental results show that the proposed method is comparable to state of the art methods in terms of its robustness, flexibility, and accuracy.
Restoration of STORM images from sparse subset of localizations (Conference Presentation)
NASA Astrophysics Data System (ADS)
Moiseev, Alexander A.; Gelikonov, Grigory V.; Gelikonov, Valentine M.
2016-02-01
To construct a Stochastic Optical Reconstruction Microscopy (STORM) image, one should collect a sufficient number of localized fluorophores to satisfy the Nyquist criterion. This requirement limits the time resolution of the method. In this work we propose a probabilistic approach to construct STORM images from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires. Using a set of STORM images constructed from a number of localizations sufficient for the Nyquist criterion, we derive a model that predicts, for every location, the probability of being occupied by a fluorophore at the end of a hypothetical full acquisition, taking as input the distribution of already localized fluorophores in the proximity of that location. We show that the probability map obtained from 3-4 times fewer fluorophores than the Nyquist criterion requires may itself be used as the super-resolution image. Thus we are able to construct a STORM image from a subset of localized fluorophores 3-4 times sparser than the Nyquist criterion requires, proportionally decreasing the STORM data acquisition time. This method may be used in combination with other approaches designed to increase STORM time resolution.
Local functional descriptors for surface comparison based binding prediction
2012-01-01
Background Molecular recognition in proteins occurs due to appropriate arrangements of physical, chemical, and geometric properties of an atomic surface. Similar surface regions should create similar binding interfaces. Effective methods for comparing surface regions can be used to identify similar regions and to predict interactions without regard to the underlying structural scaffold that creates the surface. Results We present a new descriptor for protein functional surfaces and algorithms for using these descriptors to compare protein surface regions and identify ligand binding interfaces. Our approach uses descriptors of local regions of the surface and assembles collections of matches to compare larger regions. It draws on a variety of physical, chemical, and geometric properties, adaptively weighting these properties as appropriate for different regions of the interface, and builds a classifier from a training corpus of examples of binding sites of the target ligand. The constructed classifiers can be applied to a query protein, providing for each position on the protein the probability that the position is part of a binding interface. We demonstrate the effectiveness of the approach on a number of benchmarks, achieving performance comparable to the state of the art with an approach that is more general than these prior methods. Conclusions Local functional descriptors offer a new method for protein surface comparison that is sufficiently flexible to serve in a variety of applications. PMID:23176080
Local and global evaluation for remote sensing image segmentation
NASA Astrophysics Data System (ADS)
Su, Tengfei; Zhang, Shengwei
2017-08-01
In object-based image analysis, producing accurate segmentation is usually a very important issue that needs to be solved before image classification or target recognition, and the study of segmentation evaluation methods is key to solving it. Almost all existing evaluation strategies focus only on global performance assessment. However, such methods are ineffective when two segmentation results with very similar overall performance have very different local error distributions. To overcome this problem, this paper presents an approach that quantifies segmentation incorrectness both locally and globally. Region-overlapping metrics are utilized to quantify each reference geo-object's over- and under-segmentation error. These quantified error values are used to produce segmentation error maps, which effectively delineate local segmentation error patterns. The error values for all reference geo-objects are aggregated using area-weighted summation, so that global indicators can be derived. An experiment using two scenes of very different high-resolution images showed that the global evaluation part of the proposed approach was almost as effective as two other global evaluation methods, and that the local part was a useful complement for comparing different segmentation results.
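The aggregation idea can be sketched as follows: score each reference object against its maximally overlapping segment, then combine the per-object errors with area weights into a single global index. The specific error formulas below are illustrative assumptions, not necessarily the region-overlapping metrics used in the paper.

```python
import numpy as np

def per_object_errors(ref, seg):
    """Over- and under-segmentation error per reference object.

    ref, seg: integer label maps of the same shape (0 = background).
    Each reference object is matched to its maximally overlapping
    segment, and the mismatch is measured both ways. Returns a
    per-object error map and an area-weighted global index.
    """
    results = {}
    total_area = 0
    global_err = 0.0
    for obj in np.unique(ref):
        if obj == 0:
            continue
        mask = ref == obj
        labels, counts = np.unique(seg[mask], return_counts=True)
        best = labels[np.argmax(counts)]          # dominant segment
        inter = counts.max()
        under = 1 - inter / mask.sum()            # part of object missed
        over = 1 - inter / (seg == best).sum()    # segment spills outside
        results[int(obj)] = (over, under)
        area = mask.sum()
        total_area += area
        global_err += area * (over + under) / 2   # area-weighted summation
    return results, global_err / total_area

ref = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 2]])
seg = np.array([[5, 5, 5], [5, 5, 5], [0, 0, 7]])
local, global_index = per_object_errors(ref, seg)
```

Here object 1 is over-segmented (its best segment spills into the background) while object 2 matches its segment exactly, so the local map distinguishes the two even though a single global number would blur them together.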
A multiscale approach to accelerate pore-scale simulation of porous electrodes
NASA Astrophysics Data System (ADS)
Zheng, Weibo; Kim, Seung Hyun
2017-04-01
A new method to accelerate pore-scale simulation of porous electrodes is presented. The method combines the macroscopic approach with pore-scale simulation by decomposing a physical quantity into macroscopic and local variations. The multiscale method is applied to the potential equation in pore-scale simulation of a Proton Exchange Membrane Fuel Cell (PEMFC) catalyst layer, and validated with the conventional approach for pore-scale simulation. Results show that the multiscale scheme substantially reduces the computational cost without sacrificing accuracy.
Extracting valley-ridge lines from point-cloud-based 3D fingerprint models.
Pang, Xufang; Song, Zhan; Xie, Wuyuan
2013-01-01
3D fingerprinting is an emerging technology with the distinct advantage of touchless operation. More importantly, 3D fingerprint models contain more biometric information than traditional 2D fingerprint images. However, current approaches to fingerprint feature detection usually must transform the 3D models to a 2D space through unwrapping or other methods, which might introduce distortions. A new approach directly extracts valley-ridge features from point-cloud-based 3D fingerprint models. It first applies the moving least-squares method to fit a local paraboloid surface and represent the local point cloud area. It then computes the local surface's curvatures and curvature tensors to facilitate detection of the potential valley and ridge points. The approach projects those points to the most likely valley-ridge lines, using statistical means such as covariance analysis and cross correlation. To finally extract the valley-ridge lines, it grows the polylines that approximate the projected feature points and removes the perturbations between the sampled points. Experiments with different 3D fingerprint models demonstrate this approach's feasibility and performance.
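The curvature computation at the heart of this pipeline can be illustrated with a plain least-squares paraboloid fit to a local point patch; the paper uses moving least squares with distance weighting, which this sketch omits, and the patch frame and synthetic data here are assumptions for the example.

```python
import numpy as np

def local_curvatures(neighbors):
    """Fit a paraboloid z = a x^2 + b xy + c y^2 + d x + e y + f to a
    local point patch (points given in a frame whose z axis is roughly
    the surface normal) and return the two principal curvatures."""
    x, y, z = neighbors[:, 0], neighbors[:, 1], neighbors[:, 2]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    (a, b, c, d, e, f), *_ = np.linalg.lstsq(A, z, rcond=None)
    # for small slopes the shape operator at the origin reduces to
    # the Hessian of the fitted height function
    H = np.array([[2*a, b], [b, 2*c]])
    k1, k2 = np.linalg.eigvalsh(H)
    return k1, k2   # principal curvatures (ridge/valley signature)

# synthetic patch on z = 0.5*(x^2 + y^2): both curvatures equal 1
g = np.linspace(-0.1, 0.1, 5)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel(),
                       0.5 * (X.ravel()**2 + Y.ravel()**2)])
k1, k2 = local_curvatures(pts)
```

Ridge points show one strongly negative principal curvature across the ridge and a near-zero one along it (valleys are the mirror case), which is what makes the curvature pair a usable detection signature.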
An Efficient Estimator for Moving Target Localization Using Multi-Station Dual-Frequency Radars.
Huang, Jiyan; Zhang, Ying; Luo, Shan
2017-12-15
Localization of a moving target in a dual-frequency radar system has gained considerable attention. The noncoherent localization approach based on a least squares (LS) estimator has been addressed in the literature. In this paper, a novel localization method based on a two-step weighted least squares estimator is proposed to increase positioning accuracy, relative to the LS method, for a multi-station dual-frequency radar system. The effects of the signal-to-noise ratio and the number of samples on the performance of range estimation are also analyzed. Furthermore, both the theoretical variance and the Cramér-Rao lower bound (CRLB) are derived. Simulation results verify the proposed method.
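The two-step weighted least squares idea can be sketched generically: solve the linearised system once with unit weights, then re-solve with weights informed by the first-step residuals. The geometry, measurements, and reweighting rule below are made-up illustrations, not the estimator derived in the paper.

```python
import numpy as np

def wls(A, b, W):
    """Weighted least squares: minimise (b - A x)^T W (b - A x)."""
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Toy 2-D position fix from four linearised station measurements.
# Rows of A are (hypothetical) measurement directions; b holds the
# measured values, two of them perturbed by noise.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])
b = np.array([2.0, 3.0, 5.1, -0.9])

x1 = wls(A, b, np.eye(4))                  # step 1: unit weights
r = b - A @ x1                             # first-step residuals
W2 = np.diag(1.0 / (r**2 + 1e-3))          # step 2: down-weight noisy rows
x2 = wls(A, b, W2)                         # refined position estimate
```

The refinement matters because the optimal weight matrix depends on the (unknown) true position; the first step supplies a consistent estimate from which usable weights can be formed, which is the essence of two-step WLS estimators.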
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
Anemone, Robert; Emerson, Charles; Conroy, Glenn
2011-01-01
Chance and serendipity have long played a role in the location of productive fossil localities by vertebrate paleontologists and paleoanthropologists. We offer an alternative approach, informed by methods borrowed from the geographic information sciences and using recent advances in computer science, to more efficiently predict where fossil localities might be found. Our model uses an artificial neural network (ANN) that is trained to recognize the spectral characteristics of known productive localities and other land cover classes, such as forest, wetlands, and scrubland, within a study area based on the analysis of remotely sensed (RS) imagery. Using these spectral signatures, the model then classifies other pixels throughout the study area. The results of the neural network classification can be examined and further manipulated within a geographic information systems (GIS) software package. While we have developed and tested this model on fossil mammal localities in deposits of Paleocene and Eocene age in the Great Divide Basin of southwestern Wyoming, a similar analytical approach can be easily applied to fossil-bearing sedimentary deposits of any age in any part of the world. We suggest that new analytical tools and methods of the geographic sciences, including remote sensing and geographic information systems, are poised to greatly enrich paleoanthropological investigations, and that these new methods should be embraced by field workers in the search for, and geospatial analysis of, fossil primates and hominins. Copyright © 2011 Wiley-Liss, Inc.
2010-01-01
Background There have been a number of interventions to date aimed at improving malaria diagnostic accuracy in sub-Saharan Africa. Yet, limited success is often reported for a number of reasons, especially in rural settings. This paper seeks to provide a framework for applied research aimed to improve malaria diagnosis using a combination of the established methods, participatory action research and social entrepreneurship. Methods This case study introduces the idea of using the social entrepreneurship approach (SEA) to create innovative and sustainable applied health research outcomes. The following key elements define the SEA: (1) identifying a locally relevant research topic and plan, (2) recognizing the importance of international multi-disciplinary teams and the incorporation of local knowledge, (3) engaging in a process of continuous innovation, adaptation and learning, (4) remaining motivated and determined to achieve sustainable long-term research outcomes and, (5) sharing and transferring ownership of the project with the international and local partner. Evaluation The SEA approach has a strong emphasis on innovation lead by local stakeholders. In this case, innovation resulted in a unique holistic research program aimed at understanding patient, laboratory and physician influences on accurate diagnosis of malaria. An evaluation of milestones for each SEA element revealed that the success of one element is intricately related to the success of other elements. Conclusions The SEA will provide an additional framework for researchers and local stakeholders that promotes innovation and adaptability. This approach will facilitate the development of new ideas, strategies and approaches to understand how health issues, such as malaria, affect vulnerable communities. PMID:20128922
van Koperen, Marije Tm; van der Kleij, Rianne Mjj; Renders, Carry Cm; Crone, Matty Mr; Hendriks, Anna-Marie Am; Jansen, Maria M; van de Gaar, Vivian Vm; Raat, Hein Jh; Ruiter, Emilie Elm; Molleman, Gerard Grm; Schuit, Jantine Aj; Seidell, Jacob Jc
2014-01-01
The aim of this paper is to describe the research aims, concepts and methods of the research Consortium Integrated Approach of Overweight (CIAO). CIAO is a concerted action of five Academic Collaborative Centres, local collaborations between academic institutions, regional public health services, local authorities and other relevant sectors in the Netherlands. Prior research revealed lacunas in knowledge of and skills related to five elements of the integrated approach of overweight prevention in children (based upon the French EPODE approach), namely political support, parental education, implementation, social marketing and evaluation. CIAO aims to gain theoretical and practical insight of these elements through five sub-studies and to develop, based on these data, a framework for monitoring and evaluation. For this research program, mixed methods are used in all the five sub-studies. First, problem specification through literature research and consultation of stakeholders, experts, health promotion specialists, parents and policy makers will be carried out. Based on this information, models, theoretical frameworks and practical instruments will be developed, tested and evaluated in the communities that implement the integrated approach to prevent overweight in children. Knowledge obtained from these studies and insights from experts and stakeholders will be combined to create an evaluation framework to evaluate the integrated approach at central, local and individual levels that will be applicable to daily practice. This innovative research program stimulates sub-studies to collaborate with local stakeholders and to share and integrate their knowledge, methodology and results. Therefore, the output of this program (both knowledge and practical tools) will be matched and form building blocks of a blueprint for a local evidence- and practice-based integrated approach towards prevention of overweight in children. 
The output will then support various communities to further optimize the implementation and subsequently the effects of this approach.
Guise, Amanda J.; Cristea, Ileana M.
2017-01-01
As a member of the class IIa family of histone deacetylases, the histone deacetylase 5 (HDAC5) is known to undergo nuclear–cytoplasmic shuttling and to be a critical transcriptional regulator. Its misregulation has been linked to prominent human diseases, including cardiac diseases and tumorigenesis. In this chapter, we describe several experimental methods that have proven effective for studying the functions and regulatory features of HDAC5. We present methods for assessing the subcellular localization, protein interactions, posttranslational modifications (PTMs), and activity of HDAC5 from the standpoint of investigating either the endogenous protein or tagged protein forms in human cells. Specifically, given that at the heart of HDAC5 regulation lie its dynamic localization, interactions, and PTMs, we present methods for assessing HDAC5 localization in fixed and live cells, for isolating HDAC5-containing protein complexes to identify its interactions and modifications, and for determining how these PTMs map to predicted HDAC5 structural motifs. Lastly, we provide examples of approaches for studying HDAC5 functions with a focus on its regulation during cell-cycle progression. These methods can readily be adapted for the study of other HDACs or non-HDAC-proteins of interest. Individually, these techniques capture temporal and spatial snapshots of HDAC5 functions; yet together, these approaches provide powerful tools for investigating both the regulation and regulatory roles of HDAC5 in different cell contexts relevant to health and disease. PMID:27246208
Boosting instance prototypes to detect local dermoscopic features.
Situ, Ning; Yuan, Xiaojing; Zouridakis, George
2010-01-01
Local dermoscopic features are useful in many dermoscopic criteria for skin cancer detection. We address the problem of detecting local dermoscopic features from epiluminescence (ELM) microscopy skin lesion images. We formulate the recognition of local dermoscopic features as a multi-instance learning (MIL) problem. We employ the method of diverse density (DD) and evidence confidence (EC) function to convert MIL to a single-instance learning (SIL) problem. We apply Adaboost to improve the classification performance with support vector machines (SVMs) as the base classifier. We also propose to boost the selection of instance prototypes through changing the data weights in the DD function. We validate the methods on detecting ten local dermoscopic features from a dataset with 360 images. We compare the performance of the MIL approach, its boosting version, and a baseline method without using MIL. Our results show that boosting can provide performance improvement compared to the other two methods.
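The boosting step described above can be illustrated with a miniature AdaBoost; decision stumps stand in for the SVM base classifiers used in the paper so that the sketch stays self-contained, and the toy data are invented. Labels are assumed to be in {-1, +1}.

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Tiny AdaBoost with threshold stumps as the base learner."""
    n = len(y)
    w = np.full(n, 1.0 / n)                 # instance weights
    ensemble = []
    for _ in range(rounds):
        best = None
        # exhaustive search for the lowest weighted-error stump
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] <= t, 1, -1)
        w *= np.exp(-alpha * y * pred)      # upweight the mistakes
        w /= w.sum()
        ensemble.append((alpha, j, t, s))

    def predict(Xq):
        score = sum(a * s * np.where(Xq[:, j] <= t, 1, -1)
                    for a, j, t, s in ensemble)
        return np.sign(score)
    return predict

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
clf = adaboost_stumps(X, y, rounds=5)
print(clf(X))   # [-1. -1.  1.  1.]
```

The same reweighting loop applies with any base classifier that can be trained on weighted data, which is how the paper uses SVMs, and the weight updates are also the lever the authors exploit to boost the selection of instance prototypes in the diverse density function.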
Global/local methods for probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Wu, Y.-T.
1993-01-01
A probabilistic global/local method is proposed to reduce the computational requirements of probabilistic structural analysis. A coarser global model is used for most of the computations, with a more refined local model used only at key probabilistic conditions. The global model is used to establish the cumulative distribution function (cdf) and the Most Probable Point (MPP). The local model then uses the predicted MPP to adjust the cdf value. The global/local method is used within the advanced mean value probabilistic algorithm. The local model can be more refined than the global model in terms of a finer mesh, smaller time step, tighter tolerances, etc., and can be used with linear or nonlinear models. The basis for this approach is described in terms of the correlation between the global and local models, which can be estimated from the global and local MPPs. A numerical example is presented using the NESSUS probabilistic structural analysis program, with the finite element method used for structural modeling. The results clearly indicate significant computer savings with minimal loss in accuracy.
Harris, Claire; Allen, Kelly; King, Richard; Ramsey, Wayne; Kelly, Cate; Thiagarajan, Malar
2017-05-05
This is the second in a series of papers reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. Rising healthcare costs, continuing advances in health technologies and recognition of ineffective practices and systematic waste are driving disinvestment of health technologies and clinical practices that offer little or no benefit in order to maximise outcomes from existing resources. However there is little information to guide regional health services or individual facilities in how they might approach disinvestment locally. This paper outlines the investigation of potential settings and methods for decision-making about disinvestment in the context of an Australian health service. Methods include a literature review on the concepts and terminology relating to disinvestment, a survey of national and international researchers, and interviews and workshops with local informants. A conceptual framework was drafted and refined with stakeholder feedback. There is a lack of common terminology regarding definitions and concepts related to disinvestment and no guidance for an organisation-wide systematic approach to disinvestment in a local healthcare service. A summary of issues from the literature and respondents highlight the lack of theoretical knowledge and practical experience and provide a guide to the information required to develop future models or methods for disinvestment in the local context. A conceptual framework was developed. Three mechanisms that provide opportunities to introduce disinvestment decisions into health service systems and processes were identified. 
Presented in order of complexity, time to achieve outcomes and resources required they include 1) Explicit consideration of potential disinvestment in routine decision-making, 2) Proactive decision-making about disinvestment driven by available evidence from published research and local data, and 3) Specific exercises in priority setting and system redesign. This framework identifies potential opportunities to initiate disinvestment activities in a systematic integrated approach that can be applied across a whole organisation using transparent, evidence-based methods. Incorporating considerations for disinvestment into existing decision-making systems and processes might be achieved quickly with minimal cost; however establishment of new systems requires research into appropriate methods and provision of appropriate skills and resources to deliver them.
NASA Astrophysics Data System (ADS)
Kim, Taehwan; Kim, Sungho
2017-02-01
This paper presents a novel method to detect remote pedestrians. After producing a brightness-enhanced image based on human temperature from the input temperature data, we generate regions of interest (ROIs) with a multiscale contrast-filtering approach that includes a biased hysteresis threshold and clustering, together with the remote pedestrian's height, pixel area and central position information. Afterwards, we conduct ROI refinement based on local vertical and horizontal projections, with a weak aspect-ratio limitation, to solve the problem of region expansion in the contrast-filtering stage. Finally, we detect remote pedestrians by validating the final ROIs using transfer learning with convolutional neural network (CNN) features, followed by non-maximal suppression (NMS) with a strong aspect-ratio limitation to improve detection performance. In the experimental results, we confirmed that the proposed contrast filtering and locally projected region based CNN (CFLP-CNN) outperforms the baseline method by 8% in terms of log-averaged miss rate. The proposed method also yields regions better adjusted to the shape and appearance of remote pedestrians, which allows it to detect pedestrians that the baseline approach misses and to split groups of people into individual detections.
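The NMS-with-aspect-ratio post-processing can be sketched as greedy IoU suppression applied after an aspect-ratio gate; the thresholds and boxes below are illustrative assumptions, not the paper's values.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5, min_ar=1.5, max_ar=4.0):
    """Greedy non-maximal suppression with a height/width aspect-ratio
    gate. boxes: (N, 4) array of [x1, y1, x2, y2]."""
    ar = (boxes[:, 3] - boxes[:, 1]) / (boxes[:, 2] - boxes[:, 0])
    keep_ar = (ar >= min_ar) & (ar <= max_ar)     # pedestrians are tall
    boxes, scores = boxes[keep_ar], scores[keep_ar]

    order = np.argsort(scores)[::-1]              # highest score first
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        # IoU of the top box with the remaining candidates
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
        iou = inter / (area(boxes[[i]])[0] + area(boxes[order[1:]]) - inter)
        order = order[1:][iou <= iou_thresh]      # drop heavy overlaps
    return boxes[kept], scores[kept]

boxes = np.array([[0, 0, 10, 25], [1, 1, 11, 26], [50, 0, 60, 25]], float)
scores = np.array([0.9, 0.8, 0.7])
kept_boxes, kept_scores = nms(boxes, scores)      # the two distinct people
```

In this toy example the two nearly coincident boxes collapse to the higher-scoring one while the distant box survives, mirroring how the paper's strong aspect-ratio NMS separates individual pedestrians from a group.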
Gagnon, Marie-Pierre; Desmartis, Marie; Poder, Thomas; Witteman, William
2014-10-28
Health technology assessment (HTA) is increasingly performed at the local or hospital level where the costs, impacts, and benefits of health technologies can be directly assessed. Although local/hospital-based HTA has been implemented for more than two decades in some jurisdictions, little is known about its effects and impact on hospital budget, clinical practices, and patient outcomes. We conducted a mixed-methods systematic review that aimed to synthesize current evidence regarding the effects and impact of local/hospital-based HTA. We identified articles through PubMed and Embase and by citation tracking of included studies. We selected qualitative, quantitative, or mixed-methods studies with empirical data about the effects or impact of local/hospital-based HTA on decision-making, budget, or perceptions of stakeholders. We extracted the following information from included studies: country, methodological approach, and use of conceptual framework; local/hospital HTA approach and activities described; reported effects and impacts of local/hospital-based HTA; factors facilitating/hampering the use of hospital-based HTA recommendations; and perceptions of stakeholders concerning local/hospital HTA. Due to the great heterogeneity among studies, we conducted a narrative synthesis of their results. A total of 18 studies met the inclusion criteria. We reported the results according to the four approaches for performing HTA proposed by the Hospital Based HTA Interest Sub-Group: ambassador model, mini-HTA, internal committee, and HTA unit. Results showed that each of these approaches for performing HTA corresponds to specific needs and structures and has its strengths and limitations. Overall, studies showed positive impacts related to local/hospital-based HTA on hospital decisions and budgets, as well as positive perceptions from managers and clinicians. Local/hospital-based HTA could influence decision-making on several aspects. 
It is difficult to evaluate the real impacts of local HTA at the different levels of health care given the relatively small number of evaluations with quantitative data and the lack of clear comparators. Further research is necessary to explore the conditions under which local/hospital-based HTA results and recommendations can impact hospital policies, clinical decisions, and quality of care and optimize the use of scarce resources.
Sound source localization method in an environment with flow based on Amiet-IMACS
NASA Astrophysics Data System (ADS)
Wei, Long; Li, Min; Qin, Sheng; Fu, Qiang; Yang, Debin
2017-05-01
A sound source localization method is proposed to localize and analyze the sound source in an environment with airflow. It combines the improved mapping of acoustic correlated sources (IMACS) method and Amiet's method, and is called Amiet-IMACS. It can localize uncorrelated and correlated sound sources with airflow. To implement this approach, Amiet's method is used to correct the sound propagation path in 3D, which improves the accuracy of the array manifold matrix and decreases the position error of the localized source. Then, the mapping of acoustic correlated sources (MACS) method, a high-resolution sound source localization algorithm, is improved by self-adjusting the constraint parameter at each iteration to increase convergence speed. A sound source localization experiment using a pair of loudspeakers in an anechoic wind tunnel under different flow speeds was conducted. The experiment exhibits the advantage of Amiet-IMACS in localizing the sound source position more accurately than implementing IMACS alone in an environment with flow. Moreover, the aerodynamic noise produced by a NASA EPPLER 862 STRUT airfoil model in airflow with a velocity of 80 m/s is localized using the proposed method, which further proves its effectiveness in a flow environment. Finally, the relationship between the source position of this airfoil model and its frequency, along with its generation mechanism, is determined and interpreted.
ERIC Educational Resources Information Center
Smith, Peter K.; Howard, Sharon; Thompson, Fran
2007-01-01
The Support Group Method (SGM), formerly the No Blame Approach, is widely used as an anti-bullying intervention in schools, but has aroused some controversy. There is little evidence from users regarding its effectiveness. We aimed to ascertain the use of and support for the SGM in Local Authorities (LAs) and schools; and obtain ratings of…
NASA Technical Reports Server (NTRS)
Rohde, J. E.
1982-01-01
Objectives and approaches to research in turbine heat transfer are discussed. Generally, improvements in the methods of determining the hot gas flow through the turbine passage are one area of concern, as are the cooling air flow inside the airfoil and the methods of predicting the heat transfer rates on the hot gas side and on the coolant side of the airfoil. More specific areas of research are: (1) local hot gas recovery temperatures along the airfoil surfaces; (2) local airfoil wall temperature; (3) local hot gas side heat transfer coefficients on the airfoil surfaces; (4) local coolant side heat transfer coefficients inside the airfoils; (5) local hot gas flow velocities and secondary flows at real engine conditions; and (6) local delta strain range of the airfoil walls.
NASA Astrophysics Data System (ADS)
Knapp, Julia L. A.; Cirpka, Olaf A.
2017-06-01
The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, which are hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
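The local-in-global idea can be sketched on a toy problem: a Metropolis random walk explores the nonlinear parameter, while an inner linear least-squares step fits the shape-free (linear-in-parameters) weights at every proposal. The forward model, basis functions, and noise level below are illustrative assumptions, not the authors' transport model:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy forward model: curve = design(theta) @ g, with g the shape-free weights
# (linear) and theta a single nonlinear parameter (here: a decay rate)
t = np.linspace(0, 10, 50)

def design(theta):
    # columns: exponentially decaying basis functions shifted in time
    return np.exp(-theta * np.maximum(t[:, None] - np.array([1.0, 3.0, 5.0]), 0))

theta_true, g_true = 0.8, np.array([2.0, 1.0, 0.5])
data = design(theta_true) @ g_true + 0.01 * rng.standard_normal(t.size)

def inner_fit(theta):
    # local (nested) step: linear least squares for the shape-free weights
    A = design(theta)
    g, *_ = np.linalg.lstsq(A, data, rcond=None)
    resid = data - A @ g
    return g, float(resid @ resid)

# outer (global) step: Metropolis random walk on the nonlinear parameter
theta, (_, sse) = 0.3, inner_fit(0.3)
sigma2 = 0.01 ** 2
for _ in range(2000):
    prop = theta + 0.05 * rng.standard_normal()
    if prop <= 0:
        continue
    _, sse_p = inner_fit(prop)
    # Gaussian log-likelihood ratio as the acceptance criterion
    if np.log(rng.random()) < (sse - sse_p) / (2 * sigma2):
        theta, sse = prop, sse_p
# theta should now sample near theta_true = 0.8
```

The nesting keeps the global search low-dimensional: the many shape-free weights never enter the random walk, which is the point of the local-in-global construction.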
An approach to constrained aerodynamic design with application to airfoils
NASA Technical Reports Server (NTRS)
Campbell, Richard L.
1992-01-01
An approach was developed for incorporating flow and geometric constraints into the Direct Iterative Surface Curvature (DISC) design method. In this approach, an initial target pressure distribution is developed using a set of control points. The chordwise locations and pressure levels of these points are initially estimated either from empirical relationships and observed characteristics of pressure distributions for a given class of airfoils or by fitting the points to an existing pressure distribution. These values are then automatically adjusted during the design process to satisfy the flow and geometric constraints. The flow constraints currently available are lift, wave drag, pitching moment, pressure gradient, and local pressure levels. The geometric constraint options include maximum thickness, local thickness, leading-edge radius, and a 'glove' constraint involving inner and outer bounding surfaces. This design method was also extended to include the successive constraint release (SCR) approach to constrained minimization.
Perthold, Jan Walther; Oostenbrink, Chris
2018-05-17
Enveloping distribution sampling (EDS) is an efficient approach to calculate multiple free-energy differences from a single molecular dynamics (MD) simulation. However, the construction of an appropriate reference-state Hamiltonian that samples all states efficiently is not straightforward. We propose a novel approach for the construction of the EDS reference-state Hamiltonian, related to a previously described procedure to smoothen energy landscapes. In contrast to previously suggested EDS approaches, our reference-state Hamiltonian preserves local energy minima of the combined end-states. Moreover, we propose an intuitive, robust and efficient parameter optimization scheme to tune EDS Hamiltonian parameters. We demonstrate the proposed method with established and novel test systems and conclude that our approach allows for the automated calculation of multiple free-energy differences from a single simulation. Accelerated EDS promises to be a robust and user-friendly method to compute free-energy differences based on solid statistical mechanics.
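For context, the conventional EDS reference-state Hamiltonian (Christ and van Gunsteren), which the accelerated variant above modifies to preserve local minima, combines the N end-state Hamiltonians as:

```latex
% conventional EDS reference state; s is the smoothness parameter and
% E_i^R are the energy offsets tuned so all end-states are sampled
H_R(\mathbf{r}) = -\frac{1}{\beta s}\,
  \ln \sum_{i=1}^{N} \exp\!\Big[-\beta s\,\big(H_i(\mathbf{r}) - E_i^{R}\big)\Big]

% free-energy difference between end-states B and A from one simulation of H_R
\Delta F_{BA} = -\frac{1}{\beta}\,
  \ln \frac{\langle e^{-\beta (H_B - H_R)} \rangle_R}
           {\langle e^{-\beta (H_A - H_R)} \rangle_R}
```

Because a single trajectory of the reference state visits all end-states, every pairwise difference follows by reweighting, which is what makes the single-simulation multi-state estimate possible.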
Harris, Claire; Allen, Kelly; Ramsey, Wayne; King, Richard; Green, Sally
2018-05-30
This is the final paper in a thematic series reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The SHARE Program was established to explore a systematic, integrated, evidence-based organisation-wide approach to disinvestment in a large Australian health service network. This paper summarises the findings, discusses the contribution of the SHARE Program to the body of knowledge and understanding of disinvestment in the local healthcare setting, and considers implications for policy, practice and research. The SHARE program was conducted in three phases. Phase One was undertaken to understand concepts and practices related to disinvestment and the implications for a local health service and, based on this information, to identify potential settings and methods for decision-making about disinvestment. The aim of Phase Two was to implement and evaluate the proposed methods to determine which were sustainable, effective and appropriate in a local health service. A review of the current literature incorporating the SHARE findings was conducted in Phase Three to contribute to the understanding of systematic approaches to disinvestment in the local healthcare context. SHARE differed from many other published examples of disinvestment in several ways: by seeking to identify and implement disinvestment opportunities within organisational infrastructure rather than as standalone projects; considering disinvestment in the context of all resource allocation decisions rather than in isolation; including allocation of non-monetary resources as well as financial decisions; and focusing on effective use of limited resources to optimise healthcare outcomes. The SHARE findings provide a rich source of new information about local health service decision-making, in a level of detail not previously reported, to inform others in similar situations. 
Multiple innovations related to disinvestment were found to be acceptable and feasible in the local setting. Factors influencing decision-making, implementation processes and final outcomes were identified; and methods for further exploration, or avoidance, in attempting disinvestment in this context are proposed based on these findings. The settings, frameworks, models, methods and tools arising from the SHARE findings have potential to enhance health care and patient outcomes.
The P1-RKDG method for two-dimensional Euler equations of gas dynamics
NASA Technical Reports Server (NTRS)
Cockburn, Bernardo; Shu, Chi-Wang
1991-01-01
A class of nonlinearly stable Runge-Kutta local projection discontinuous Galerkin (RKDG) finite element methods for conservation laws is investigated. Two-dimensional Euler equations for gas dynamics are solved using P1 elements. The generalization of the local projections, which for scalar nonlinear conservation laws was designed to satisfy a local maximum principle, to systems of conservation laws such as the Euler equations of gas dynamics using local characteristic decompositions is discussed. Numerical examples include the standard regular shock reflection problem, the forward-facing step problem, and the double Mach reflection problem. These preliminary numerical examples are chosen to show the capacity of the approach to obtain nonlinearly stable results comparable with modern nonoscillatory finite difference methods.
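For the scalar P1 case, the local projection limiter is commonly realized with a minmod function: in each cell the reconstructed slope is replaced by the minmod of the cell slope and the forward/backward differences of the cell means. The sketch below shows that scalar form (the paper's systems version applies the same operation in local characteristic variables):

```python
import numpy as np

def minmod(a, b, c):
    # returns the argument of smallest magnitude if all share a sign, else 0
    if a > 0 and b > 0 and c > 0:
        return min(a, b, c)
    if a < 0 and b < 0 and c < 0:
        return max(a, b, c)
    return 0.0

def limit_slopes(means, slopes):
    # P1 local projection: limit each interior cell's slope against the
    # differences of neighbouring cell means to enforce a maximum principle
    out = slopes.copy()
    for j in range(1, len(means) - 1):
        out[j] = minmod(slopes[j], means[j + 1] - means[j], means[j] - means[j - 1])
    return out
```

Near a local extremum of the means the limited slope drops to zero, which is what suppresses spurious oscillations at shocks.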
Point cloud registration from local feature correspondences-Evaluation on challenging datasets.
Petricek, Tomas; Svoboda, Tomas
2017-01-01
Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.
NASA Astrophysics Data System (ADS)
Jiang, Junjun; Hu, Ruimin; Han, Zhen; Wang, Zhongyuan; Chen, Jun
2013-10-01
Face superresolution (SR), or face hallucination, refers to the technique of generating a high-resolution (HR) face image from a low-resolution (LR) one with the help of a set of training examples. It aims at transcending the limitations of electronic imaging systems. Applications of face SR include video surveillance, in which the individual of interest is often far from cameras. A two-step method is proposed to infer a high-quality and HR face image from a low-quality and LR observation. First, we establish the nonlinear relationship between LR face images and HR ones, according to radial basis function and partial least squares (RBF-PLS) regression, to transform the LR face into the global face space. Then, a locality-induced sparse representation (LiSR) approach is presented to enhance the local facial details once all the global faces for each LR training face are constructed. A comparison with some state-of-the-art SR methods shows the superiority of the proposed two-step approach: RBF-PLS global face regression followed by LiSR-based local patch reconstruction. Experiments also demonstrate its effectiveness under both simulated and real conditions.
Simulating the Generalized Gibbs Ensemble (GGE): A Hilbert space Monte Carlo approach
NASA Astrophysics Data System (ADS)
Alba, Vincenzo
By combining classical Monte Carlo and Bethe ansatz techniques we devise a numerical method to construct the Truncated Generalized Gibbs Ensemble (TGGE) for the spin-1/2 isotropic Heisenberg (XXX) chain. The key idea is to sample the Hilbert space of the model with the appropriate GGE probability measure. The method can be extended to other integrable systems, such as the Lieb-Liniger model. We benchmark the approach focusing on GGE expectation values of several local observables. As finite-size effects decay exponentially with system size, moderately large chains are sufficient to extract thermodynamic quantities. The Monte Carlo results are in agreement with both the Thermodynamic Bethe Ansatz (TBA) and the Quantum Transfer Matrix (QTM) approach. Remarkably, it is possible to extract in a simple way the steady-state Bethe-Gaudin-Takahashi (BGT) root distributions, which encode complete information about the GGE expectation values in the thermodynamic limit. Finally, it is straightforward to simulate extensions of the GGE in which, besides the local integrals of motion (local charges), one includes arbitrary functions of the BGT roots. As an example, we include in the GGE the first non-trivial quasi-local integral of motion.
High-resolution method for evolving complex interface networks
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
Finding the Genomic Basis of Local Adaptation: Pitfalls, Practical Solutions, and Future Directions.
Hoban, Sean; Kelley, Joanna L; Lotterhos, Katie E; Antolin, Michael F; Bradburd, Gideon; Lowry, David B; Poss, Mary L; Reed, Laura K; Storfer, Andrew; Whitlock, Michael C
2016-10-01
Uncovering the genetic and evolutionary basis of local adaptation is a major focus of evolutionary biology. The recent development of cost-effective methods for obtaining high-quality genome-scale data makes it possible to identify some of the loci responsible for adaptive differences among populations. Two basic approaches for identifying putatively locally adaptive loci have been developed and are broadly used: one that identifies loci with unusually high genetic differentiation among populations (differentiation outlier methods) and one that searches for correlations between local population allele frequencies and local environments (genetic-environment association methods). Here, we review the promises and challenges of these genome scan methods, including correcting for the confounding influence of a species' demographic history, biases caused by missing aspects of the genome, matching scales of environmental data with population structure, and other statistical considerations. In each case, we make suggestions for best practices for maximizing the accuracy and efficiency of genome scans to detect the underlying genetic basis of local adaptation. With attention to their current limitations, genome scan methods can be an important tool in finding the genetic basis of adaptive evolutionary change.
Factorization-based texture segmentation
Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.
2015-06-17
This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices—one consisting of the representative features and the other containing the weights of representative features at each pixel used for linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
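A simplified NumPy sketch of the two-stage idea on synthetic data: the singular value spectrum of the feature matrix reveals the number of representative features (segments), and solving for combination weights assigns each pixel to the representative that dominates it. The synthetic features and the exemplar-picking step are illustrative assumptions; the paper refines the factors with nonnegative matrix factorization:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic M x N feature matrix: N = 200 pixels from two "textures", each
# pixel feature a noisy copy of one of two representative feature vectors
Z1, Z2 = np.array([5.0, 1.0, 0.2]), np.array([0.5, 4.0, 3.0])
labels_true = np.array([0] * 100 + [1] * 100)
Y = np.stack([(Z1 if l == 0 else Z2) + 0.05 * rng.standard_normal(3)
              for l in labels_true], axis=1)            # shape (M=3, N=200)

# step 1: the number of dominant singular values estimates the segment count
s = np.linalg.svd(Y, compute_uv=False)
rank = int(np.sum(s > 0.1 * s[0]))

# step 2: pick representative features and solve Y ~ Z @ W for the weights
Z = np.stack([Y[:, 0], Y[:, -1]], axis=1)               # one exemplar per region
W, *_ = np.linalg.lstsq(Z, Y, rcond=None)               # (2, N) weight matrix
labels = np.argmax(W, axis=0)                           # segment = dominant weight
```

Because every pixel is (nearly) a combination of the representatives, the weight matrix both classifies interior pixels and, on real images, interpolates smoothly across region boundaries.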
NASA Astrophysics Data System (ADS)
Wang, H.; Jing, X. J.
2017-07-01
This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.
Graph-Based Cooperative Localization Using Symmetric Measurement Equations.
Gulati, Dhiraj; Zhang, Feihu; Clarke, Daniel; Knoll, Alois
2017-06-17
Precise localization is a key requirement for the success of highly assisted or autonomous vehicles. The diminishing cost of hardware has resulted in a proliferation of the number of sensors in the environment. Cooperative localization (CL) presents itself as a feasible and effective solution for localizing the ego-vehicle and its neighboring vehicles. However, one of the major challenges to fully realize the effective use of infrastructure sensors for jointly estimating the state of a vehicle in cooperative vehicle-infrastructure localization is an effective data association. In this paper, we propose a method which implements symmetric measurement equations within factor graphs in order to overcome the data association challenge with a reduced bandwidth overhead. Simulated results demonstrate the benefits of the proposed approach in comparison with our previously proposed approach of topology factors.
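The core trick of symmetric measurement equations (in the spirit of Kamen's SME filter, sketched here for two scalar measurements as an illustrative assumption) is to feed the estimator permutation-invariant functions of the measurements, so no explicit association decision is needed:

```python
def symmetric_measurements(z1, z2):
    """Elementary symmetric functions of two scalar measurements.
    The pair (sum, product) is identical under either target-to-measurement
    assignment, so a filter built on these quantities sidesteps the data
    association problem entirely."""
    return z1 + z2, z1 * z2
```

Swapping the two measurements leaves the transformed observation unchanged, which is exactly the invariance the factor-graph formulation exploits.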
A Wireless Sensor Network with Soft Computing Localization Techniques for Track Cycling Applications
Gharghan, Sadik Kamel; Nordin, Rosdiadee; Ismail, Mahamod
2016-08-06
In this paper, we propose two soft computing localization techniques for wireless sensor networks (WSNs). The two techniques, the Adaptive Neuro-Fuzzy Inference System (ANFIS) and the Artificial Neural Network (ANN), focus on a range-based localization method which relies on the measurement of the received signal strength indicator (RSSI) from three ZigBee anchor nodes distributed throughout the track cycling field. The soft computing techniques aim to estimate the distance between bicycles moving on the cycle track for outdoor and indoor velodromes. In the first approach the ANFIS was considered, whereas in the second approach the ANN was hybridized individually with three optimization algorithms, namely Particle Swarm Optimization (PSO), the Gravitational Search Algorithm (GSA), and the Backtracking Search Algorithm (BSA). The results revealed that the hybrid GSA-ANN outperforms the other methods adopted in this paper in terms of localization and distance estimation accuracy, achieving a mean absolute distance estimation error of 0.02 m and 0.2 m for outdoor and indoor velodromes, respectively.
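Range-based RSSI localization of this kind rests on inverting a propagation model from signal strength to distance. A minimal sketch using the standard log-distance path-loss model follows; the reference RSSI and path-loss exponent are illustrative assumptions, not the paper's calibrated values:

```python
def rssi_to_distance(rssi_dbm, rssi_d0=-40.0, d0=1.0, n=2.0):
    """Log-distance path-loss model inverted to an estimated range.
    rssi_d0 is the RSSI (dBm) at reference distance d0 (m); n is the
    path-loss exponent (~2 in free space, larger indoors)."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10 * n))
```

With these constants, a reading of −60 dBm maps to about 10 m; soft computing models such as ANFIS or a GSA-tuned ANN effectively learn a more accurate, environment-specific version of this mapping from training data.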
Tang, Jian; Jiang, Xiaoliang
2017-01-01
Image segmentation has always been a considerable challenge in image analysis and understanding due to intensity inhomogeneity, which is also commonly known as bias field. In this paper, we present a novel region-based approach based on local entropy for segmenting images and estimating the bias field simultaneously. First, a local Gaussian distribution fitting (LGDF) energy function is defined as a weighted energy integral, where the weight is the local entropy derived from the grey-level distribution of the local image. The means in this objective function carry a multiplicative factor that models the bias field in the transformed domain. The bias field prior is then fully exploited, so our model can estimate the bias field more accurately. Finally, by minimizing this energy function with a level set regularization term, image segmentation and bias field estimation are achieved simultaneously. Experiments on images of various modalities demonstrated the superior performance of the proposed method when compared with other state-of-the-art approaches.
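One common way to compute such a local-entropy weight is the Shannon entropy of the grey-level histogram inside a sliding window; homogeneous neighbourhoods score zero and textured or inhomogeneous ones score high. The window radius and bin count below are illustrative assumptions:

```python
import numpy as np

def local_entropy(img, radius=2, bins=16):
    """Per-pixel Shannon entropy of the grey-level histogram in a
    (2*radius+1)^2 window; img is assumed to be scaled to [0, 1]."""
    img = np.asarray(img, dtype=float)
    H = np.zeros_like(img)
    pad = np.pad(img, radius, mode='reflect')
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            p, _ = np.histogram(win, bins=bins, range=(0.0, 1.0))
            p = p / p.sum()
            p = p[p > 0]                       # 0 * log(0) := 0
            H[i, j] = -np.sum(p * np.log(p))
    return H
```

Using this quantity as the integration weight emphasizes exactly the regions where a plain Gaussian fitting term is least reliable.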
Localization of synchronous cortical neural sources.
Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc
2013-03-01
Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, or memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target the synchronous brain regions. In this paper, we propose a novel algorithm aimed at localizing specifically synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.
Mechanics of cantilever beam: Implementation and comparison of FEM and MLPG approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trobec, Roman
2016-06-08
Two weak-form solution approaches for partial differential equations, the well-known mesh-based finite element method (FEM) and the newer meshless local Petrov-Galerkin (MLPG) method, are described and compared on a standard test case, the mechanics of a cantilever beam. The implementation, solution accuracy and calculation complexity are addressed for both approaches. We found that FEM is superior by most standard criteria, but MLPG has some advantages because of the flexibility that results from its general formulation.
GPS-Denied Geo-Localisation Using Visual Odometry
NASA Astrophysics Data System (ADS)
Gupta, Ashish; Chang, Huan; Yilmaz, Alper
2016-06-01
The primary method for geo-localization is based on GPS, which has issues of localization accuracy, power consumption, and unavailability. This paper proposes a novel approach to geo-localization in a GPS-denied environment for a mobile platform. Our approach has two principal components: public-domain transport network data available in GIS databases or OpenStreetMap; and a trajectory of the mobile platform. This trajectory is estimated using visual odometry and 3D view geometry. The transport map information is abstracted as a graph data structure, where various types of roads are modelled as graph edges and intersections are typically modelled as graph nodes. A real-time search for the trajectory in the graph yields the geo-location of the mobile platform. Our approach uses a simple visual sensor and has a low memory and computational footprint. In this paper, we demonstrate our method for trajectory estimation and provide examples of geo-localization using public-domain map data. With the rapid proliferation of visual sensors as part of automated driving technology and continuous growth in public-domain map data, our approach has the potential to augment, or even supplant, GPS-based navigation, since it functions in all environments.
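The trajectory-in-graph search can be sketched as matching an odometry-derived heading sequence against edge headings in a small road graph. The graph layout, heading encoding, and tolerance below are toy assumptions (real data would come from OpenStreetMap):

```python
# toy road graph: node -> {neighbor: edge heading in degrees}
graph = {
    'A': {'B': 0},             # A -> B heads east
    'B': {'C': 90, 'D': 0},    # at B: turn north to C, or continue east to D
    'C': {'E': 0},
    'D': {'E': 90},
    'E': {},
}

def match_trajectory(graph, headings, tol=15.0):
    """Return all node paths whose successive edge headings match the
    visual-odometry heading sequence within tol degrees."""
    def extend(path, remaining):
        if not remaining:
            yield path
            return
        for nxt, hdg in graph[path[-1]].items():
            if abs(hdg - remaining[0]) <= tol:
                yield from extend(path + [nxt], remaining[1:])
    return [p for start in graph for p in extend([start], headings)]
```

As the observed trajectory grows, the set of consistent paths shrinks; once it is a singleton, the platform is geo-located without GPS.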
NASA Astrophysics Data System (ADS)
Hsu, Ting-Yu; Shiao, Shen-Yuan; Liao, Wen-I.
2018-01-01
Wind turbines are a cost-effective alternative energy source; however, their blades are susceptible to damage. Therefore, damage detection of wind turbine blades is of great importance for condition monitoring of wind turbines. Many vibration-based structural damage detection techniques have been proposed in the last two decades. The local flexibility method, which can determine local stiffness variations of beam-like structures by using measured modal parameters, is one of the most promising vibration-based approaches. The local flexibility method does not require a finite element model of the structure. A few structural modal parameters identified from the ambient vibration signals both before and after damage are required for this method. In this study, we propose a damage detection approach for rotating wind turbine blades using the local flexibility method based on the dynamic macro-strain signals measured by long-gauge fiber Bragg grating (FBG)-based sensors. A small wind turbine structure was constructed and excited using a shaking table to generate vibration signals. The structure was designed to have natural frequencies as close as possible to those of a typical 1.5 MW wind turbine in real scale. The optical fiber signal of the rotating blades was transmitted to the data acquisition system through a rotary joint fixed inside the hollow shaft of the wind turbine. Reversible damage was simulated by aluminum plates attached to some sections of the wind turbine blades. The damaged locations of the rotating blades were successfully detected using the proposed approach, with the extent of damage somewhat over-estimated. 
Nevertheless, although the blade specimen cannot fully represent a real one, the results demonstrate that FBG-based macro-strain measurement has the potential to yield the modal parameters of rotating wind turbines, from which the locations of blade segments with a change of rigidity can be estimated effectively.
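The local flexibility idea behind this abstract can be illustrated with a toy model: assemble the flexibility matrix from identified modal parameters before and after damage, and locate the stiffness change from the flexibility difference. This sketch uses a 3-DOF spring chain with unit masses, not the wind-turbine blade model; a 30% stiffness loss in the third spring stands in for the attached aluminum plates.

```python
import numpy as np

def chain_stiffness(springs):
    """Stiffness matrix of a 3-DOF spring chain (unit masses assumed)."""
    k1, k2, k3 = springs
    return np.array([[k1 + k2, -k2, 0.0],
                     [-k2, k2 + k3, -k3],
                     [0.0, -k3, k3]])

def modal_flexibility(K, n_modes=3):
    """Flexibility assembled from the lowest n_modes mass-normalized modes,
    F = sum_i phi_i phi_i^T / omega_i^2 (exact inverse when all modes used)."""
    w2, phi = np.linalg.eigh(K)
    F = np.zeros_like(K)
    for i in range(n_modes):
        F += np.outer(phi[:, i], phi[:, i]) / w2[i]
    return F

F_intact = modal_flexibility(chain_stiffness([1e3, 1e3, 1e3]))
F_damaged = modal_flexibility(chain_stiffness([1e3, 1e3, 0.7e3]))  # 30% loss, spring 3
damage_index = np.diag(F_damaged - F_intact)
print(damage_index.argmax())  # 2: flexibility gain concentrates at DOF 3
```

In practice only a few identified modes are available, so the flexibility (and hence the damage index) is approximate, which is why the paper reports some over-estimation of damage extent.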
Healy, R.W.; Russell, T.F.
1993-01-01
A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
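The forward-tracking ingredient of FVELLAM, moving mass from each cell to the cells it overlaps at the next time level, can be illustrated for pure advection on a uniform 1D grid. This is a schematic sketch of the tracking-and-deposition step only, not the full FVELLAM discretization (no dispersion; mass leaving the domain is simply dropped).

```python
def advect_forward(mass, v, dt, dx):
    """Move each cell's mass forward by v*dt and deposit it into the
    destination cells, splitting by overlap fraction. Because every unit
    of mass is deposited somewhere (or leaves the domain), total interior
    mass is conserved exactly."""
    n = len(mass)
    new = [0.0] * n
    shift = v * dt / dx              # displacement in cell widths
    for i, m in enumerate(mass):
        x = i + shift                # tracked-forward position in cell units
        j = int(x) if x >= 0 else int(x) - 1
        frac = x - j
        if 0 <= j < n:
            new[j] += m * (1.0 - frac)
        if 0 <= j + 1 < n:
            new[j + 1] += m * frac
    return new

# A unit pulse advected half a cell: mass splits between two cells, total kept.
print(advect_forward([0.0, 1.0, 0.0, 0.0], v=1.0, dt=0.5, dx=1.0))
```

Tracking forward rather than backward is what makes inflow boundaries straightforward: new mass simply enters at the inflow cell instead of characteristics having to be traced back across the boundary.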
Beamspace fast fully adaptive brain source localization for limited data sequences
NASA Astrophysics Data System (ADS)
Ravan, Maryam
2017-05-01
In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization methods that rely on estimating second-order statistics often fail when the observations are taken over a short time interval, especially when the number of electrodes is large. To address this issue, in a previous study we developed a multistage adaptive processing scheme called the fast fully adaptive (FFA) approach, which can significantly reduce the required sample support while still exploiting all available degrees of freedom (DOFs). This approach processes the observed data in stages through a decimation procedure. In this study, we introduce a new form of the FFA approach called beamspace FFA. We first divide the brain into smaller regions and transform the measured data from the source space to the beamspace of each region. The FFA approach is then applied to the beamspace data of each region. The goal of this modification is to reduce the correlation sensitivity between sources in different brain regions. To demonstrate the performance of the beamspace FFA approach in the limited-data scenario, simulation results with multiple deep and cortical sources, as well as experimental results, are compared with the regular FFA and the widely used FINE approaches. Both simulation and experimental results demonstrate that the beamspace FFA method localizes different types of multiple correlated brain sources more accurately at low signal-to-noise ratios with limited data.
A non-contact approach for PWV detection: application in a clinical setting.
Campo, Adriaan; Heuten, Hilde; Goovaerts, Inge; Ennekens, Guy; Vrints, Christiaan; Dirckx, Joris
2016-07-01
A need for screening methods for arteriosclerosis has led to the development of several approaches to measure pulse wave velocity (PWV), which is indicative of arterial stiffness. Carotid-femoral PWV (cfPWV) can be measured between the common carotid artery (CCA) and the femoral artery (FA), reflecting the physiologically important stiffness of the conduit arteries. However, this measurement approach has several disadvantages, and a local PWV measurement of CCA stiffness has been proposed as an alternative. In this pilot study, laser Doppler vibrometry (LDV) is used to measure PWV locally in the CCA (PWVLDV) in 48 patients aged between 48 and 70 with known atherosclerotic arterial disease: stabilized coronary artery disease (CAD), cerebro-vascular disease (CVD) or peripheral artery disease (PAD). Additionally, cfPWV, CCA distensibility coefficient (DC), CCA intima-media thickness (IMT), blood pressure (BP) and age were evaluated. LDV is a valid method for local PWV measurement. The method is potentially easy to use and causes no discomfort to the patient. PWVLDV correlates with age (R = 0.432; p = 0.002), as reported in related studies using other techniques, and measured values lay between 2.5 and 5.8 m s⁻¹, well in line with literature values for local PWV in the CCA. In conclusion, PWVLDV is potentially a marker for arterial health, but more research in a larger and more homogeneous patient population is needed. In future studies, blood velocity measurements should be incorporated, as well as a reference method such as pulse wave imaging (PWI) or magnetic resonance imaging (MRI).
Funk, Russell J; Owen-Smith, Jason; Landon, Bruce E; Birkmeyer, John D; Hollingsworth, John M
2017-02-01
To develop and compare methods for identifying natural alignments between ambulatory surgery centers (ASCs) and hospitals that anchor local health systems. Using all-payer data from Florida's State Ambulatory Surgery and Inpatient Databases (2005-2009), we developed 3 methods for identifying alignments between ASCs and hospitals. The first, a geographic proximity approach, used spatial data to assign an ASC to its nearest hospital neighbor. The second, a predominant affiliation approach, assigned an ASC to the hospital with which it shared a plurality of surgeons. The third, a network community approach, linked an ASC with a larger group of hospitals held together by naturally occurring physician networks. We compared each method in terms of its ability to capture meaningful and stable affiliations and its administrative simplicity. Although the proximity approach was simplest to implement and produced the most durable alignments, ASC surgeons' loyalty to the assigned hospital was low with this method. The predominant affiliation and network community approaches performed better and nearly equivalently on these metrics, capturing more meaningful affiliations between ASCs and hospitals. However, the latter's alignments were the least durable, and it was complex to administer. We describe 3 methods for identifying natural alignments between ASCs and hospitals, each with strengths and weaknesses. These methods will help health system managers identify ASCs with which to partner. Moreover, health services researchers and policy analysts can use them to study broader communities of surgical care.
A Bayesian network approach for modeling local failure in lung cancer
NASA Astrophysics Data System (ADS)
Oh, Jung Hun; Craft, Jeffrey; Lozi, Rawan Al; Vaidya, Manushka; Meng, Yifan; Deasy, Joseph O.; Bradley, Jeffrey D.; El Naqa, Issam
2011-03-01
Locally advanced non-small cell lung cancer (NSCLC) patients suffer from a high local failure rate following radiotherapy. Despite many efforts to develop new dose-volume models for early prediction of tumor local failure, no significant improvement has been reported from their prospective application. Based on recent studies of the role of biomarker proteins in hypoxia and inflammation in predicting tumor response to radiotherapy, we hypothesize that combining physical and biological factors within a suitable framework could improve the overall prediction. To test this hypothesis, we propose a graphical Bayesian network framework for predicting local failure in lung cancer. The proposed approach was tested using two different datasets of locally advanced NSCLC patients treated with radiotherapy. The first dataset was collected retrospectively and comprises clinical and dosimetric variables only. The second dataset was collected prospectively; in addition to clinical and dosimetric information, blood was drawn from the patients at various time points to extract candidate biomarkers as well. Our preliminary results show that the proposed method can serve as an efficient way to develop predictive models of local failure in these patients and to interpret relationships among the different variables in the models. We also demonstrate the potential use of heterogeneous physical and biological variables to improve model prediction. With the first dataset, we achieved better performance compared with competing Bayesian-based classifiers. With the second dataset, the combined model had slightly higher performance than the individual physical and biological models, with the biological variables making the largest contribution. Our preliminary results highlight the potential of the proposed integrated approach for predicting post-radiotherapy local failure in NSCLC patients.
An integrated approach to model strain localization bands in magnesium alloys
NASA Astrophysics Data System (ADS)
Baxevanakis, K. P.; Mo, C.; Cabal, M.; Kontsos, A.
2018-02-01
Strain localization bands (SLBs) that appear at early stages of deformation of magnesium alloys have been recently associated with heterogeneous activation of deformation twinning. Experimental evidence has demonstrated that such "Lüders-type" band formations dominate the overall mechanical behavior of these alloys resulting in sigmoidal type stress-strain curves with a distinct plateau followed by pronounced anisotropic hardening. To evaluate the role of SLB formation on the local and global mechanical behavior of magnesium alloys, an integrated experimental/computational approach is presented. The computational part is developed based on custom subroutines implemented in a finite element method that combine a plasticity model with a stiffness degradation approach. Specific inputs from the characterization and testing measurements to the computational approach are discussed while the numerical results are validated against such available experimental information, confirming the existence of load drops and the intensification of strain accumulation at the time of SLB initiation.
A novel approach for SEMG signal classification with adaptive local binary patterns.
Ertuğrul, Ömer Faruk; Kaya, Yılmaz; Tekin, Ramazan
2016-07-01
Feature extraction plays a major role in the pattern recognition process, and this paper presents a novel feature extraction approach, the adaptive local binary pattern (aLBP). aLBP builds on the local binary pattern (LBP), an image processing method, and the one-dimensional local binary pattern (1D-LBP). In LBP, each pixel is compared with its neighbors; similarly, in 1D-LBP, each data point in the raw signal is compared with its neighbors. 1D-LBP extracts features based on local changes in the signal, so it has a high potential for medical applications: each action or abnormality recorded in SEMG signals has its own pattern, and via 1D-LBP these (hidden) patterns may be detected. However, the positions of the neighbors in 1D-LBP are fixed by the position of the data point in the raw signal, and both LBP and 1D-LBP are very sensitive to noise, so their capacity to detect hidden patterns is limited. To overcome these drawbacks, aLBP was proposed. In aLBP, the positions of the neighbors and their values can be assigned adaptively via down-sampling and smoothing coefficients, which markedly increases the potential to detect (hidden) patterns that may express an illness or an action. To validate the proposed feature extraction approach, two different datasets were employed. The accuracies achieved by the proposed approach were higher than those obtained with popular feature extraction approaches and than the results reported in the literature. These results show that the proposed method can be employed to investigate SEMG signals. In summary, this work develops an adaptive feature extraction scheme for extracting features from local changes in different categories of time-varying signals.
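The 1D-LBP operator that aLBP builds on is simple to state: compare each sample with its neighbours and pack the comparison bits into a code; the histogram of codes is then used as the feature vector. A minimal sketch of plain 1D-LBP with fixed neighbour positions (not the adaptive aLBP variant, whose down-sampling and smoothing coefficients are the paper's contribution):

```python
def one_d_lbp(signal, radius=4):
    """1D local binary pattern: for each sample, compare it with `radius`
    neighbours on each side and set a bit where the neighbour >= centre."""
    codes = []
    for i in range(radius, len(signal) - radius):
        neighbours = signal[i - radius:i] + signal[i + 1:i + radius + 1]
        code = 0
        for bit, value in enumerate(neighbours):
            if value >= signal[i]:
                code |= 1 << bit
        codes.append(code)
    return codes

# Local minima produce all-ones codes, local maxima all-zeros codes:
print(one_d_lbp([0, 1, 0, 2, 0, 1, 0], radius=1))  # [0, 3, 0, 3, 0]
```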
Local structures around the substituted elements in mixed layered oxides
Akama, Shota; Kobayashi, Wataru; Amaha, Kaoru; Niwa, Hideharu; Nitani, Hiroaki; Moritomo, Yutaka
2017-01-01
The chemical substitution of a transition metal (M) is an effective method to improve the functionality of a material, such as its electrochemical, magnetic, and dielectric properties. The substitution, however, causes local lattice distortion because the difference in ionic radius (r) modifies the local interatomic distances. Here, we systematically investigated the local structures in the pure (x = 0.0) and mixed (x = 0.05 or 0.1) layered oxides, Na(M1−xM′x)O2 (M and M′ are the majority and minority transition metals, respectively), by means of extended X-ray absorption fine structure (EXAFS) analysis. We found that the local interatomic distance (dM-O) around the minority element approaches that around the majority element, reducing the local lattice distortion. We further found that the valence of the minority Mn changes so that its ionic radius approaches that of the majority M. PMID:28252008
Biocultural approaches to well-being and sustainability indicators across scales.
Sterling, Eleanor J; Filardi, Christopher; Toomey, Anne; Sigouin, Amanda; Betley, Erin; Gazit, Nadav; Newell, Jennifer; Albert, Simon; Alvira, Diana; Bergamini, Nadia; Blair, Mary; Boseto, David; Burrows, Kate; Bynum, Nora; Caillon, Sophie; Caselle, Jennifer E; Claudet, Joachim; Cullman, Georgina; Dacks, Rachel; Eyzaguirre, Pablo B; Gray, Steven; Herrera, James; Kenilorea, Peter; Kinney, Kealohanuiopuna; Kurashima, Natalie; Macey, Suzanne; Malone, Cynthia; Mauli, Senoveva; McCarter, Joe; McMillen, Heather; Pascua, Pua'ala; Pikacha, Patrick; Porzecanski, Ana L; de Robert, Pascale; Salpeteur, Matthieu; Sirikolo, Myknee; Stege, Mark H; Stege, Kristina; Ticktin, Tamara; Vave, Ron; Wali, Alaka; West, Paige; Winter, Kawika B; Jupiter, Stacy D
2017-12-01
Monitoring and evaluation are central to ensuring that innovative, multi-scale, and interdisciplinary approaches to sustainability are effective. The development of relevant indicators for local sustainable management outcomes, and the ability to link these to broader national and international policy targets, are key challenges for resource managers, policymakers, and scientists. Sets of indicators that capture both ecological and social-cultural factors, and the feedbacks between them, can underpin cross-scale linkages that help bridge local and global scale initiatives to increase resilience of both humans and ecosystems. Here we argue that biocultural approaches, in combination with methods for synthesizing across evidence from multiple sources, are critical to developing metrics that facilitate linkages across scales and dimensions. Biocultural approaches explicitly start with and build on local cultural perspectives - encompassing values, knowledges, and needs - and recognize feedbacks between ecosystems and human well-being. Adoption of these approaches can encourage exchange between local and global actors, and facilitate identification of crucial problems and solutions that are missing from many regional and international framings of sustainability. Resource managers, scientists, and policymakers need to be thoughtful about not only what kinds of indicators are measured, but also how indicators are designed, implemented, measured, and ultimately combined to evaluate resource use and well-being. We conclude by providing suggestions for translating between local and global indicator efforts.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low-dimensional subspace/manifold. This has recently been exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real-world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., of dimension lower than or equal to the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough that the problem is no longer ill-posed. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi-real data.
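The "locally low rank" premise is easy to demonstrate numerically: a cube whose spatial regions each use a single spectral signature has global rank equal to the number of signatures, while each patch has rank 1. A synthetic sketch (the band count, patch split, and random signatures are illustrative, not from the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)
bands, side = 6, 8

# Two spatial regions, each spanned by one spectral signature (rank-1 blocks).
s1, s2 = rng.random(bands), rng.random(bands)
left = np.outer(s1, rng.random(side * side // 2))    # bands x pixels (region 1)
right = np.outer(s2, rng.random(side * side // 2))   # bands x pixels (region 2)
cube = np.hstack([left, right])                      # full image, bands x pixels

def rank(X):
    return int(np.linalg.matrix_rank(X))

# Globally the data span 2 dimensions, but each local patch spans only 1,
# which is what makes the per-patch fusion problem well-posed.
print(rank(cube), rank(left), rank(right))  # 2 1 1
```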
Machine-learning approach for local classification of crystalline structures in multiphase systems
NASA Astrophysics Data System (ADS)
Dietz, C.; Kretz, T.; Thoma, M. H.
2017-07-01
Machine learning is one of the most popular fields in computer science and has a vast number of applications. In this work we propose a method that uses a neural network to locally identify crystal structures in a mixed-phase Yukawa system consisting of fcc, hcp, and bcc clusters and disordered particles, similar to plasma crystals. We compare our approach to previously used methods and show that the quality of identification increases significantly. The technique works very well for highly disturbed lattices and offers a flexible and robust way to classify crystalline structures from particle positions alone. This leads to insights into highly disturbed crystalline structures.
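A much-simplified stand-in for the paper's neural-network classifier is nearest-template matching on sorted neighbour-distance features. The fcc/bcc shell-distance ratios below are the textbook values, but the feature choice and classifier are illustrative, not the authors' method (and hcp, which shares the fcc first shell, would need angular features to separate).

```python
import numpy as np

# Ideal neighbour distances divided by the nearest-neighbour distance,
# truncated to the 14 closest neighbours:
#   fcc: 12 first-shell + 2 second-shell (ratio sqrt(2))
#   bcc:  8 first-shell + 6 second-shell (ratio 2/sqrt(3))
TEMPLATES = {
    "fcc": np.array([1.0] * 12 + [2 ** 0.5] * 2),
    "bcc": np.array([1.0] * 8 + [2 / 3 ** 0.5] * 6),
}

def classify(distances):
    """Classify a local environment from its neighbour distances by
    nearest Euclidean distance to an ideal template."""
    d = np.sort(np.asarray(distances, dtype=float))[:14]
    d = d / d[0]                       # scale-invariant feature
    return min(TEMPLATES, key=lambda k: np.linalg.norm(d - TEMPLATES[k]))

noisy_fcc = [1.0 + 0.005 * i for i in range(12)] + [1.41, 1.42]
print(classify(noisy_fcc))  # fcc
```

For strongly disturbed lattices this template distance degrades quickly, which is the regime where the paper's learned classifier pays off.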
Laine, Elodie; Carbone, Alessandra
2015-01-01
Protein-protein interactions (PPIs) are essential to all biological processes and represent increasingly important therapeutic targets. Here, we present a new method for accurately predicting protein-protein interfaces and for understanding their properties, origins and binding to multiple partners. In contrast to machine learning approaches, our method combines, in a rational and very straightforward way, three sequence- and structure-based descriptors of protein residues: evolutionary conservation, physico-chemical properties and local geometry. The implemented strategy yields very precise predictions for a wide range of protein-protein interfaces and discriminates them from small-molecule binding sites. Beyond its predictive power, the approach makes it possible to dissect interaction surfaces and unravel their complexity. We show how analysis of the predicted patches can foster new strategies for PPI modulation and interaction surface redesign. The approach is implemented in JET2, an automated tool based on the Joint Evolutionary Trees (JET) method for sequence-based protein interface prediction. JET2 is freely available at www.lcqb.upmc.fr/JET2. PMID:26690684
Conditional parametric models for storm sewer runoff
NASA Astrophysics Data System (ADS)
Jonsdottir, H.; Nielsen, H. Aa; Madsen, H.; Eliasson, J.; Palsson, O. P.; Nielsen, M. K.
2007-05-01
The method of conditional parametric modeling is introduced for flow prediction in a sewage system. It is well known that in hydrological modeling the response (runoff) to input (precipitation) varies depending on soil moisture and several other factors; consequently, nonlinear input-output models are needed. The model formulation described in this paper is similar to traditional linear models like finite impulse response (FIR) and autoregressive exogenous (ARX) models, except that the parameters vary as a function of some external variables. The parameter variation is modeled by local lines, using kernels for local linear regression; as such, the method might be referred to as a nearest-neighbor method. The results achieved in this study were compared to results from the conventional linear methods, FIR and ARX. The increase in the coefficient of determination is substantial, and the new approach conserves the mass balance better. Hence this new approach looks promising for various hydrological models and analyses.
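The conditional parametric idea, linear-model coefficients that vary with an external variable via kernel-weighted local regression, can be sketched in a few lines. The synthetic data below (a rain-to-runoff gain that doubles with a wetness indicator u) is invented for the illustration.

```python
import numpy as np

def local_linear_fit(X, y, u, u0, bandwidth):
    """Weighted least squares in which the FIR-type coefficients are
    conditioned on an external variable u (e.g. a soil-moisture proxy):
    samples with u near u0 receive high Gaussian kernel weight."""
    w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Synthetic data: runoff gain is 1 when u = -1 ("dry"), 2 when u = +1 ("wet").
rng = np.random.default_rng(1)
x = rng.random(200)                                   # precipitation input
u = np.where(np.arange(200) < 100, -1.0, 1.0)          # external condition
y = np.where(u < 0, 1.0, 2.0) * x                      # runoff output
X = x[:, None]

b_dry = local_linear_fit(X, y, u, u0=-1.0, bandwidth=0.3)
b_wet = local_linear_fit(X, y, u, u0=1.0, bandwidth=0.3)
print(b_dry[0], b_wet[0])  # recovers gains ~1.0 and ~2.0
```

A single global FIR fit would average the two regimes; conditioning the coefficients on u recovers each regime's gain, which is the mechanism behind the improved coefficient of determination reported above.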
Meshfree truncated hierarchical refinement for isogeometric analysis
NASA Astrophysics Data System (ADS)
Atri, H. R.; Shojaee, S.
2018-05-01
In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, the isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides an authentic meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and promising performance in the adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.
Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems
NASA Astrophysics Data System (ADS)
Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric
We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.
PLPD: reliable protein localization prediction from imbalanced and overlapped datasets
Lee, KiYoung; Kim, Dae-Won; Na, DoKyun; Lee, Kwang H.; Lee, Doheon
2006-01-01
Subcellular localization is one of the key functional characteristics of proteins. An automatic and efficient prediction method for protein subcellular localization is highly desirable owing to the need for large-scale genome analysis. From a machine learning point of view, a protein localization dataset has several characteristics: it has many classes (there are more than 10 localizations in a cell), it is multi-label (a protein may occur in several different subcellular locations), and it is highly imbalanced (the number of proteins in each localization differs markedly). Even though much previous work has addressed the prediction of protein subcellular localization, none of it tackles all of these characteristics effectively at the same time. Thus, a new computational method for protein localization is needed for more reliable outcomes. To address the issue, we present a protein localization predictor based on D-SVDD (PLPD), which can find the likelihood of a specific localization of a protein more easily and more correctly. Moreover, we introduce three measurements for the more precise evaluation of a protein localization predictor. On various datasets constructed from the experiments of Huh et al. (2003), the proposed PLPD method represents a different approach that might play a complementary role to existing methods, such as the Nearest Neighbor method and the discriminant covariant method. Finally, after finding a good boundary for each localization using the 5184 classified proteins as training data, we predicted 138 proteins whose subcellular localizations could not be clearly observed in the experiments of Huh et al. (2003). PMID:16966337
Binding ligand prediction for proteins using partial matching of local surface patches.
Sael, Lee; Kihara, Daisuke
2010-01-01
Functional elucidation of uncharacterized protein structures is an important task in bioinformatics. We report our new approach for structure-based function prediction which captures local surface features of ligand binding pockets. Function of proteins, specifically, binding ligands of proteins, can be predicted by finding similar local surface regions of known proteins. To enable partial comparison of binding sites in proteins, a weighted bipartite matching algorithm is used to match pairs of surface patches. The surface patches are encoded with the 3D Zernike descriptors. Unlike the existing methods which compare global characteristics of the protein fold or the global pocket shape, the local surface patch method can find functional similarity between non-homologous proteins and binding pockets for flexible ligand molecules. The proposed method improves prediction results over global pocket shape-based method which was previously developed by our group.
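The weighted bipartite matching step can be sketched directly: given a cost matrix of distances between the 3D Zernike descriptors of patch pairs from two pockets, find the minimum-cost one-to-one assignment. The brute-force solver and toy costs below are illustrative only; a real implementation would use the Hungarian algorithm for larger patch sets.

```python
from itertools import permutations

def best_matching(cost):
    """Minimum-cost assignment of each row patch to a distinct column patch
    (brute force over permutations; fine for a handful of patches)."""
    n_rows, n_cols = len(cost), len(cost[0])
    best, best_perm = float("inf"), None
    for perm in permutations(range(n_cols), n_rows):
        total = sum(cost[i][j] for i, j in enumerate(perm))
        if total < best:
            best, best_perm = total, perm
    return best, best_perm

# Toy descriptor-distance matrix between patches of two binding pockets.
cost = [[0.1, 0.9, 0.8],
        [0.7, 0.2, 0.9],
        [0.8, 0.6, 0.3]]
total, pairing = best_matching(cost)
print(pairing)  # (0, 1, 2): each patch pairs with its cheapest partner here
```

A low total matching cost indicates that the two pockets share similar local surface regions even if the global pocket shapes differ, which is the partial-comparison property the abstract emphasizes.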
NASA Astrophysics Data System (ADS)
Sahoo, Madhumita; Sahoo, Satiprasad; Dhar, Anirban; Pradhan, Biswajeet
2016-10-01
Groundwater vulnerability assessment is an accepted practice for identifying zones with relatively high potential for groundwater contamination. DRASTIC is the most popular secondary-information-based vulnerability assessment approach. The original DRASTIC approach weighs the relative importance of features/sub-features with subjective weighting/rating values, so variability of features at a smaller scale is not reflected in the assessment. In contrast, objective weighting-based methods provide flexibility in weight assignment depending on the variation of the local system, but they do not directly incorporate experts' opinion. Thus, the effectiveness of both subjective and objective weighting-based approaches needs to be evaluated. In the present study, three methods - the entropy information method (E-DRASTIC), the fuzzy pattern recognition method (F-DRASTIC) and single-parameter sensitivity analysis (SA-DRASTIC) - were used to modify the weights of the original DRASTIC features to include local variability. Moreover, a grey incidence analysis was used to evaluate the relative performance of the subjective (DRASTIC and SA-DRASTIC) and objective (E-DRASTIC and F-DRASTIC) weighting-based methods. The performance of the developed methodology was tested in an urban area of Kanpur City, India. The relative performance of the subjective and objective methods varies with the choice of water quality parameters. The methodology can be applied with or without suitable modification, and these evaluations establish its potential applicability for general vulnerability assessment in urban contexts.
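The entropy information weighting used in E-DRASTIC can be sketched concretely: features whose ratings vary more across map cells carry more information and receive larger weights. The tiny decision matrix below is invented for the illustration.

```python
import numpy as np

def entropy_weights(decision):
    """Entropy weight method: for each feature (column), compute the Shannon
    entropy of its normalized ratings across cells (rows); low entropy
    (high variability of proportions) yields a high weight."""
    P = decision / decision.sum(axis=0)          # column-wise proportions
    n = decision.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)   # guard against zero ratings
    E = -(P * logs).sum(axis=0) / np.log(n)      # entropy in [0, 1] per feature
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()

# Rows: map cells; columns: two normalized feature ratings (illustrative).
D = np.array([[0.2, 0.9],
              [0.2, 0.1],
              [0.2, 0.5]])
w = entropy_weights(D)
print(w)  # the constant first feature gets ~0 weight, the varying one ~1
```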
Multiview Locally Linear Embedding for Effective Medical Image Retrieval
Shen, Hualei; Tao, Dacheng; Ma, Dianfu
2013-01-01
Content-based medical image retrieval continues to gain attention for its potential to assist radiological image interpretation and decision making. Many approaches have been proposed to improve the performance of medical image retrieval systems, among which visual features such as SIFT, LBP, and intensity histograms play a critical role. Typically, these features are concatenated into a long vector to represent medical images, and traditional dimension reduction techniques such as locally linear embedding (LLE), principal component analysis (PCA), or Laplacian eigenmaps (LE) can then be employed to reduce the "curse of dimensionality". Though these approaches show promising performance for medical image retrieval, the feature-concatenating method ignores the fact that different features have distinct physical meanings. In this paper, we propose a new method called multiview locally linear embedding (MLLE) for medical image retrieval. Following the patch alignment framework, MLLE preserves the geometric structure of the local patch in each feature space according to the LLE criterion. To explore complementary properties among a range of features, MLLE assigns different weights to local patches from different feature spaces. Finally, MLLE employs global coordinate alignment and alternating optimization techniques to learn a smooth low-dimensional embedding from the different features. To justify the effectiveness of MLLE for medical image retrieval, we compare it with conventional spectral embedding methods. We conduct experiments on a subset of the IRMA medical image data set. Evaluation results show that MLLE outperforms state-of-the-art dimension reduction methods. PMID:24349277
DOT National Transportation Integrated Search
2010-12-01
To tackle the problems of greenhouse gas emissions, traffic congestion, resident quality of life, and public health concerns, communities are using initiatives to spur more walking and cycling. As local governments face hard choices about which progr...
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method that combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach at hand is relatively straightforward to implement and, unlike previous adjoint-estimation methods, does not require an initial guess of the adjoints. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees that multiple local solutions are obtained without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
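The core homotopy idea — deform an easy problem whose solution is known into the target problem while tracking the root — can be illustrated on a scalar equation (a toy stand-in for the trajectory optimization, not the paper's formulation):

```python
def solve_by_homotopy(f, df, x0, steps=100, newton_iters=5):
    """Track the root of H(x, t) = t*f(x) + (1 - t)*(x - x0) as t goes 0 -> 1.
    At t = 0 the root is x0 by construction; at t = 1 it solves f(x) = 0."""
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):          # Newton correction at each t
            h = t * f(x) + (1 - t) * (x - x0)
            dh = t * df(x) + (1 - t)
            x -= h / dh
    return x

# Toy target: the real cube root of 2, started from the trivial root x0 = 1.
root = solve_by_homotopy(lambda x: x**3 - 2, lambda x: 3 * x**2, x0=1.0)
```

In the paper's setting, each Lambert starting solution seeds one such continuation path, which is what yields multiple local optima to compare.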
A new experimental method for determining local airloads on rotor blades in forward flight
NASA Astrophysics Data System (ADS)
Berton, E.; Maresca, C.; Favier, D.
This paper presents a new approach for determining local airloads on helicopter rotor blade sections in forward flight. The method is based on the momentum equation in which all the terms are expressed by means of the velocity field measured by a laser Doppler velocimeter. The relative magnitude of the different terms involved in the momentum and Bernoulli equations is estimated and the results are encouraging.
The Green's functions for peridynamic non-local diffusion.
Wang, L J; Xu, J F; Wang, J X
2016-09-01
In this work, we develop the Green's function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green's functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green's functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems.
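For reference, the classical local limit that the peridynamic solutions converge to is the standard diffusion equation $u_t = D\,\nabla^2 u$, whose free-space Green's function (stated here for the unsteady three-dimensional infinite-domain case, a textbook result) is the heat kernel:

```latex
G(\mathbf{x}, t) = \frac{1}{(4 \pi D t)^{3/2}}
                   \exp\!\left(-\frac{|\mathbf{x}|^{2}}{4 D t}\right),
\qquad
u(\mathbf{x}, t) = \int G(\mathbf{x} - \mathbf{x}', t)\, u_0(\mathbf{x}')\, \mathrm{d}\mathbf{x}' .
```

The peridynamic Green's functions obtained by Fourier transform reduce to this kernel as the non-local horizon length approaches zero.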
Surface sampling techniques for 3D object inspection
NASA Astrophysics Data System (ADS)
Shih, Chihhsiong S.; Gerhardt, Lester A.
1995-03-01
While the uniform sampling method is quite popular for pointwise measurement of manufactured parts, this paper proposes three novel sampling strategies that emphasize 3D non-uniform inspection capability: (a) adaptive sampling, (b) local adjustment sampling, and (c) finite element centroid sampling. The adaptive sampling strategy is based on a recursive surface subdivision process. Two different approaches are described for this strategy: one uses triangular patches while the other uses rectangular patches. Several real-world objects were tested using these two algorithms. Preliminary results show that sample points are distributed more closely around edges, corners, and vertices, as desired for many classes of objects. Adaptive sampling using triangular patches is shown to generally perform better than both uniform sampling and adaptive sampling using rectangular patches. The local adjustment sampling strategy uses a set of predefined starting points and then finds the local optimum position of each nodal point. This method approximates the object by moving the points toward object edges and corners. In a hybrid approach, uniform and non-uniform point sets, first preprocessed by the adaptive sampling algorithm on a real-world object, were then tested using the local adjustment sampling method. The results show that initial point sets preprocessed by adaptive sampling using triangular patches are moved the least distance by the subsequently applied local adjustment method, again showing the superiority of this method. The finite element sampling technique samples the centroids of the surface triangle meshes produced by the finite element method. The performance of this algorithm was compared to that of adaptive sampling using triangular patches, and the latter was once again shown to be better on different classes of objects.
Hernández, Noelia; Ocaña, Manuel; Alonso, Jose M; Kim, Euntai
2017-01-13
Although much research has taken place in WiFi indoor localization systems, their accuracy can still be improved. When designing this kind of system, fingerprint-based methods are a common choice. The problem with fingerprint-based methods comes with the need of site surveying the environment, which is effort consuming. In this work, we propose an approach, based on support vector regression, to estimate the received signal strength at non-site-surveyed positions of the environment. Experiments, performed in a real environment, show that the proposed method could be used to improve the resolution of fingerprint-based indoor WiFi localization systems without increasing the site survey effort.
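The regression step can be sketched with scikit-learn's `SVR` on a synthetic log-distance path-loss model (the access-point location, propagation constants, and hyperparameters below are illustrative assumptions, not the paper's data):

```python
import numpy as np
from sklearn.svm import SVR

ap = np.array([5.0, 5.0])                      # hypothetical access-point location

def rssi(points):
    """Synthetic log-distance path-loss model (illustrative only)."""
    d = np.linalg.norm(points - ap, axis=1) + 0.1
    return -40.0 - 20.0 * np.log10(d)

# Site-surveyed fingerprints on a coarse grid ...
grid = np.array([[x, y] for x in range(11) for y in range(11)], dtype=float)
model = SVR(kernel="rbf", C=100.0, gamma=0.3).fit(grid, rssi(grid))

# ... used to estimate received signal strength at a non-surveyed position.
query = np.array([[3.3, 7.7]])
est, true = model.predict(query)[0], rssi(query)[0]
```

Predicting at intermediate positions this way is what lets the fingerprint database gain resolution without additional site-survey effort.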
NASA Astrophysics Data System (ADS)
Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz
2015-10-01
In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data for that zone. This variance is compared to a threshold value, and the adequate linear/linear-quadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The spectral and spatial information thus extracted from the hyperspectral and multispectral images, respectively, are then recombined in the considered zone according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and of linear/linear-quadratic approaches from the literature applied to the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed-pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the literature methods.
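The linear branch of such NMF-based unmixing can be sketched with scikit-learn's `NMF` on synthetic mixtures (endmember count, spectra, and abundances below are fabricated for illustration; the paper's linear-quadratic variant is more involved):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
S = rng.uniform(0.1, 1.0, size=(3, 50))        # 3 endmember spectra, 50 bands
A = rng.dirichlet(np.ones(3), size=200)        # 200 pixels, abundances sum to 1
X = A @ S                                      # linear mixing model

model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
A_est = model.fit_transform(X)                 # abundances (up to scale/permutation)
S_est = model.components_                      # endmember spectra
rel_err = np.linalg.norm(X - A_est @ S_est) / np.linalg.norm(X)
```

In the multi-sharpening context, the spectra come from the hyperspectral image and the abundances from the multispectral image before recombination.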
Prediction of protein subcellular localization by weighted gene ontology terms.
Chi, Sang-Mun
2010-08-27
We develop a new weighting approach of gene ontology (GO) terms for predicting protein subcellular localization. The weights of individual GO terms, corresponding to their contribution to the prediction algorithm, are determined by the term-weighting methods used in text categorization. We evaluate several term-weighting methods, which are based on inverse document frequency, information gain, gain ratio, odds ratio, and chi-square and its variants. Additionally, we propose a new term-weighting method based on the logarithmic transformation of chi-square. The proposed term-weighting method performs better than other term-weighting methods, and also outperforms state-of-the-art subcellular prediction methods. Our proposed method achieves 98.1%, 99.3%, 98.1%, 98.1%, and 95.9% overall accuracies for the animal BaCelLo independent dataset (IDS), fungal BaCelLo IDS, animal Höglund IDS, fungal Höglund IDS, and PLOC dataset, respectively. Furthermore, the close correlation between high-weighted GO terms and subcellular localizations suggests that our proposed method appropriately weights GO terms according to their relevance to the localizations. Copyright 2010 Elsevier Inc. All rights reserved.
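A chi-square term weight for a GO term can be computed from its 2x2 term/localization contingency table; the logarithmic transform below (`log(1 + chi2)`) is one plausible form, since the abstract does not spell out the paper's exact transformation:

```python
import math

def chi2_weight(tp, fp, fn, tn):
    """Chi-square statistic of a term/localization 2x2 contingency table,
    followed by a logarithmic transformation log(1 + chi2). Illustrative;
    the paper's exact transform may differ."""
    n = tp + fp + fn + tn
    num = n * (tp * tn - fp * fn) ** 2
    den = (tp + fp) * (fn + tn) * (tp + fn) * (fp + tn)
    chi2 = num / den if den else 0.0
    return math.log1p(chi2)

# A GO term present in 40 of 60 proteins of one localization (tp=40, fn=20)
# and in 10 of the 40 remaining proteins (fp=10, tn=30): chi2 = 50/3.
w = chi2_weight(tp=40, fp=10, fn=20, tn=30)
```

Terms strongly associated with a localization get large chi-square values and hence large weights in the prediction algorithm.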
2011-01-01
Background Inequalities in health have proved resistant to 'top down' approaches. It is increasingly recognised that health promotion initiatives are unlikely to succeed without strong local involvement at all stages of the process and many programmes now use grass roots approaches. A healthy living approach to community development (HLA) was developed as an innovative response to local concerns about a lack of appropriate services in two deprived communities in Pembrokeshire, West Wales. We sought to assess feasibility, costs, benefits and working relationships of this HLA. Methods The HLA intervention operated through existing community forums and focused on the whole community and its relationship with statutory and voluntary sectors. Local people were trained as community researchers and gathered views about local needs through resident interviews. Forums used interview results to write action plans, disseminated to commissioning organisations. The process was supported throughout the project. The evaluation used a multi-method before and after study design including process and outcome formative and summative evaluation; data gathered through documentary evidence, diaries and reflective accounts, semi-structured interviews, focus groups and costing proformas. Main outcome measures were processes and timelines of implementation of HLA; self reported impact on communities and participants; community-agency processes of liaison; costs. Results Communities were able to produce and disseminate action plans based on locally-identified needs. The process was slower than anticipated: few community changes had occurred but expectations were high. Community participants gained skills and confidence. Cross-sector partnership working developed. The process had credibility within service provider organisations but mechanisms for refocusing commissioning were patchy. Intervention costs averaged £58,304 per community per annum.
Conclusions The intervention was feasible and inexpensive, with indications of potential impact at individual, community and policy planning levels. However, it is a long term process which requires sustained investment and must be embedded in planning and service delivery processes. PMID:21223586
Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests
Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender
2015-01-01
Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. 
More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
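A minimal sketch of a knock-out loop with a Random Forest: fit, record the most relevant feature, remove ("knock out") it, and refit to expose further signal. Data, loop depth, and hyperparameters are fabricated; the paper's actual method adds statistical criteria beyond this greedy loop:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic group data: only features 0 and 1 carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

selected, remaining = [], list(range(10))
for _ in range(3):                         # knock out the top feature 3 times
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[:, remaining], y)
    top = remaining[int(np.argmax(rf.feature_importances_))]
    selected.append(top)
    remaining.remove(top)                  # knock out and look for further signal
```

The point of knocking out the winner is that a correlated but genuinely relevant feature, masked in the first fit, can surface in a later round.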
Capsule endoscope localization based on computer vision technique.
Liu, Li; Hu, Chao; Cai, Wentao; Meng, Max Q H
2009-01-01
To build a new type of wireless capsule endoscope with interactive gastrointestinal tract examination, a localization and orientation system is needed for tracking the 3D location and 3D orientation of the capsule movement. The magnetic localization and orientation method produces only 5 DOF and misses the rotation angle about the capsule's main axis. In this paper, we present a complementary orientation approach for the capsule endoscope, in which the 3D rotation is determined by applying computer vision techniques to the captured endoscopic images. The experimental results show that the complementary orientation method has good accuracy and high feasibility.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neilson, James R.; McQueen, Tyrel M.
With the increased availability of high-intensity time-of-flight neutron and synchrotron X-ray scattering sources that can access wide ranges of momentum transfer, the pair distribution function method has become a standard analysis technique for studying disorder of local coordination spheres and at intermediate atomic separations. In some cases, rational modeling of the total scattering data (Bragg and diffuse) becomes intractable with least-squares approaches, necessitating reverse Monte Carlo simulations using large atomistic ensembles. However, the extraction of meaningful information from the resulting atomistic ensembles is challenging, especially at intermediate length scales. Representational analysis is used here to describe the displacements of atoms in reverse Monte Carlo ensembles from an ideal crystallographic structure in an approach analogous to tight-binding methods. Rewriting the displacements in terms of a local basis that is descriptive of the ideal crystallographic symmetry provides a robust approach to characterizing medium-range order (and disorder) and symmetry breaking in complex and disordered crystalline materials. Lastly, this method enables the extraction of statistically relevant displacement modes (orientation, amplitude and distribution) of the crystalline disorder and provides directly meaningful information in a locally symmetry-adapted basis set that is most descriptive of the crystal chemistry and physics.
A finite element method for solving the shallow water equations on the sphere
NASA Astrophysics Data System (ADS)
Comblen, Richard; Legrand, Sébastien; Deleersnijder, Eric; Legat, Vincent
Within the framework of ocean general circulation modeling, the present paper describes an efficient way to discretize partial differential equations on curved surfaces by means of the finite element method on triangular meshes. Our approach benefits from the inherent flexibility of the finite element method. The key idea consists in a dialog between a local coordinate system defined for each element in which integration takes place, and a nodal coordinate system in which all local contributions related to a vectorial degree of freedom are assembled. Since each element of the mesh and each degree of freedom are treated in the same way, the so-called pole singularity issue is fully circumvented. Applied to the shallow water equations expressed in primitive variables, this new approach has been validated against the standard test set defined by [Williamson, D.L., Drake, J.B., Hack, J.J., Jakob, R., Swarztrauber, P.N., 1992. A standard test set for numerical approximations to the shallow water equations in spherical geometry. Journal of Computational Physics 102, 211-224]. Optimal rates of convergence for the P1NC-P1 finite element pair are obtained, for both global and local quantities of interest. Finally, the approach can be extended to three-dimensional thin-layer flows in a straightforward manner.
NASA Astrophysics Data System (ADS)
Khalqihi, K. I.; Rahayu, M.; Rendra, M.
2017-12-01
PT Perkebunan Nusantara VIII Ciater is a company that produces roughly 4 tons of orthodox black tea every day. In the production section, PT Perkebunan Nusantara VIII will install local exhaust ventilation, especially at the sortation area on the sieve machines. To maintain the quality of the orthodox black tea, all machines are scheduled for maintenance once a month, which takes 2 hours of working time; adding local exhaust ventilation increases the maintenance time, and if maintenance takes more than 2 hours the production process is delayed. To support the maintenance process at PT Perkebunan Nusantara VIII Ciater, the local exhaust ventilation was designed using a design for assembly (DFA) approach with the Boothroyd and Dewhurst method; DFA was chosen to simplify the maintenance process, which requires assembly work. Two LEV designs were produced for this research. Design 1 has 94 components, an assembly time of 647.88 seconds, and an assembly efficiency of 23.62%; Design 2 has 82 components, an assembly time of 567.84 seconds, and an assembly efficiency of 24.83%. Design 2 is chosen based on the DFA goals: minimum total part count, optimized assembly time, and assembly efficiency.
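The Boothroyd-Dewhurst assembly efficiency credits each theoretically necessary part an ideal 3-second assembly time: EM = 3·Nmin / tma. The abstract does not state Nmin, but values of about 51 and 47 are implied by the reported times and efficiencies (inferred here, not given in the source):

```python
def dfa_efficiency(n_min, assembly_time_s):
    """Boothroyd-Dewhurst design-for-assembly efficiency in percent:
    each theoretically necessary part is credited an ideal 3 s assembly time."""
    return 100.0 * 3.0 * n_min / assembly_time_s

# Back-checking the reported figures (N_min values inferred, not stated):
d1 = dfa_efficiency(51, 647.88)   # Design 1: ~23.62%
d2 = dfa_efficiency(47, 567.84)   # Design 2: ~24.83%
```

This also shows why Design 2 wins: fewer parts and a shorter assembly time raise the efficiency index even though both designs remain far from the ideal.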
Medial Versus Traditional Approach to US-guided TAP Blocks for Open Inguinal Hernia Repair
2012-04-30
Abdominal Muscles/Ultrasonography; Adult; Ambulatory Surgical Procedures; Anesthetics, Local/Administration & Dosage; Ropivacaine/Administration & Dosage; Ropivacaine/Analogs & Derivatives; Hernia, Inguinal/Surgery; Humans; Nerve Block/Methods; Pain Measurement/Methods; Pain, Postoperative/Prevention & Control; Ultrasonography, Interventional
Efficient Implementation of an Optimal Interpolator for Large Spatial Data Sets
NASA Technical Reports Server (NTRS)
Memarsadeghi, Nargess; Mount, David M.
2007-01-01
Scattered data interpolation is a problem of interest in numerous areas such as electronic imaging, smooth surface modeling, and computational geometry. Our motivation arises from applications in geology and mining, which often involve large scattered data sets and a demand for high accuracy. The method of choice is ordinary kriging, because it is a best unbiased estimator. Unfortunately, this interpolant is computationally very expensive to compute exactly. For n scattered data points, computing the value of a single interpolant involves solving a dense linear system of size roughly n x n, which is infeasible for large n. In practice, kriging is solved approximately by local approaches that consider only a relatively small number of points lying close to the query point. There are many problems with this local approach, however. The first is that determining the proper neighborhood size is tricky; it is usually settled by ad hoc methods such as selecting a fixed number of nearest neighbors or all the points within a fixed radius. Such fixed neighborhood sizes may not work well for all query points, depending on the local density of the point distribution. Local methods also suffer from the problem that the resulting interpolant is not continuous. Meyer showed that while kriging produces smooth continuous surfaces, it has zero-order continuity along its borders. Thus, at interface boundaries where the neighborhood changes, the interpolant behaves discontinuously. Therefore, it is important to consider and solve the global system for each interpolant. However, solving such large dense systems for each query point is impractical. Recently a more principled approach to approximating kriging has been proposed, based on a technique called covariance tapering. The problems arise from the fact that the covariance functions used in kriging have global support.
Our implementations combine, utilize, and enhance a number of different approaches that have been introduced in the literature for solving the large linear systems arising in interpolation of scattered data points. For very large systems, exact methods such as Gaussian elimination are impractical since they require O(n³) time and O(n²) storage. As Billings et al. suggested, we use an iterative approach. In particular, we use the SYMMLQ method for solving the large but sparse ordinary kriging systems that result from tapering. The main technical issue that needs to be overcome in our algorithmic solution is that the points' covariance matrix for kriging should be symmetric positive definite. The goal of tapering is to obtain a sparse approximate representation of the covariance matrix while maintaining its positive definiteness. Furrer et al. used tapering to obtain a sparse linear system of the form Ax = b, where A is the tapered symmetric positive definite covariance matrix; thus, Cholesky factorization could be used to solve their linear systems, and they implemented an efficient sparse Cholesky decomposition method. They also showed that if these tapers are used for a limited class of covariance models, the solution of the system converges to the solution of the original system. Matrix A in the ordinary kriging system, while symmetric, is not positive definite, so their approach is not applicable to the ordinary kriging system. Therefore, we use tapering only to obtain a sparse linear system, and then use SYMMLQ to solve the ordinary kriging system. We show that solving large kriging systems becomes practical via tapering and iterative methods, resulting in lower estimation errors compared to traditional local approaches and significant memory savings compared to the original global system. We also developed a more efficient variant of the sparse SYMMLQ method for large ordinary kriging systems.
This approach adaptively finds the correct local neighborhood for each query point in the interpolation process.
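The tapered ordinary kriging system can be sketched in a few lines: a Schur (elementwise) product of an exponential covariance with a compactly supported Wendland-type taper, bordered by the unbiasedness constraint, gives a symmetric indefinite system. SciPy's MINRES is used below as a stand-in for SYMMLQ (both handle symmetric indefinite matrices); the sites, covariance parameters, and nugget are synthetic assumptions:

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
pts = np.sort(rng.uniform(0, 10, size=40))         # 1-D scattered data sites

def cov(d, length=2.0):
    return np.exp(-d / length)                     # exponential covariance

def taper(d, theta=4.0):                           # Wendland-type taper
    r = np.minimum(d / theta, 1.0)
    return (1 - r) ** 4 * (4 * r + 1)

D = np.abs(pts[:, None] - pts[None, :])
C = cov(D) * taper(D) + 1e-4 * np.eye(len(pts))    # tapered cov + tiny nugget

# Ordinary kriging system: symmetric but indefinite (Lagrange-multiplier row).
n = len(pts)
A = np.zeros((n + 1, n + 1))
A[:n, :n], A[:n, n], A[n, :n] = C, 1.0, 1.0

q = 5.0                                            # query location
dq = np.abs(pts - q)
b = np.append(cov(dq) * taper(dq), 1.0)
sol, info = minres(A, b)                           # SYMMLQ stand-in
weights = sol[:n]                                  # kriging weights, sum to one
```

The taper zeroes all covariances beyond the range theta, which is what makes the system sparse and the iterative solve cheap at scale.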
Local quantum thermal susceptibility
De Pasquale, Antonella; Rossini, Davide; Fazio, Rosario; Giovannetti, Vittorio
2016-01-01
Thermodynamics relies on the possibility to describe systems composed of a large number of constituents in terms of a few macroscopic variables. Its foundations are rooted in the paradigm of statistical mechanics, where thermal properties originate from averaging procedures which smooth out local details. While undoubtedly successful, elegant and formally correct, this approach carries an operational problem, namely determining the precision at which such variables are inferred when technical/practical limitations restrict our capabilities to local probing. Here we introduce the local quantum thermal susceptibility, a quantifier for the best achievable accuracy for temperature estimation via local measurements. Our method relies on basic concepts of quantum estimation theory, providing an operative strategy to address the local thermal response of arbitrary quantum systems at equilibrium. At low temperatures, it highlights the local distinguishability of the ground state from the excited sub-manifolds, thus providing a method to locate quantum phase transitions. PMID:27681458
Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.
1988-01-01
An efficient numerical program incorporating comprehensive high-temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite-rate approach, eleven species equations coupled with the fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction product species are calculated by methods of statistical mechanics and fitted to a polynomial form for Cp. In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences between the chemical compositions in the shock layer predicted by the finite-rate and local-equilibrium approaches have been observed.
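The polynomial Cp fits referred to are typically of the NASA 7-coefficient form, Cp/R = a1 + a2·T + a3·T² + a4·T³ + a5·T⁴. A small evaluation sketch, using commonly tabulated CHEMKIN-format coefficients for N2 in the roughly 300-1000 K range (treat the numbers as illustrative, not as the paper's fits):

```python
# Commonly tabulated low-temperature Cp coefficients for N2 (illustrative).
N2_LOW_T = (3.298677, 1.4082404e-3, -3.963222e-6, 5.641515e-9, -2.444854e-12)

def cp_over_R(T, a):
    """NASA 7-coefficient polynomial: Cp/R = a1 + a2*T + a3*T^2 + a4*T^3 + a5*T^4."""
    return a[0] + a[1]*T + a[2]*T**2 + a[3]*T**3 + a[4]*T**4

# Sanity check: near room temperature a diatomic ideal gas should give Cp/R ~ 7/2.
cp300 = cp_over_R(300.0, N2_LOW_T)
```

Such fits let the flow solver evaluate thermodynamic properties cheaply at every cell instead of recomputing partition functions.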
Work-Based Learning: Effectiveness in Information Systems Training and Development
ERIC Educational Resources Information Center
Walters, David
2006-01-01
The ability to use methodologies is an essential ingredient in the teaching of Information System techniques and approaches. One method to achieve this is to use a practical approach where students undertake "live" projects with local client organisations. They can then reflect on the approach adopted with the aim of producing a "reflective"…
ERIC Educational Resources Information Center
Arce, Alberto
2003-01-01
Both community development and sustainable livelihood approaches ignore value contestations that underlie people's interests and experiences. A case from Bolivia demonstrates that local values, social relations, actions, and language strategies must underlie policy and method in development. (Contains 28 references.) (SK)
Local algebraic analysis of differential systems
NASA Astrophysics Data System (ADS)
Kaptsov, O. V.
2015-06-01
We propose a new approach for studying the compatibility of partial differential equations. This approach is a synthesis of the Riquier method, Gröbner basis theory, and elements of algebraic geometry. As applications, we consider systems including the wave equation and the sine-Gordon equation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, Erin A.; Robinson, Sean M.; Anderson, Kevin K.
2015-01-19
Here we present a novel technique for the localization of radiological sources in urban or rural environments from an aerial platform. The technique is based on a Bayesian approach to localization, in which measured count rates in a time series are compared with predicted count rates from a series of pre-calculated test sources to define likelihood. Furthermore, this technique is expanded by using a localized treatment with a limited field of view (FOV), coupled with a likelihood ratio reevaluation, allowing for real-time computation on commodity hardware for arbitrarily complex detector models and terrain. In particular, detectors with inherent asymmetry of response (such as those employing internal collimation or self-shielding for enhanced directional awareness) are leveraged by this approach to provide improved localization. Our results from the localization technique are shown for simulated flight data using monolithic as well as directionally-aware detector models, and the capability of the methodology to locate radioisotopes is estimated for several test cases. This localization technique is shown to facilitate urban search by allowing quick and adaptive estimates of source location, in many cases from a single flyover near a source. In particular, this method represents a significant advancement from earlier methods like full-field Bayesian likelihood, which is not generally fast enough to allow for broad-field search in real time, and highest-net-counts estimation, which has a localization error that depends strongly on flight path and cannot generally operate without exhaustive search.
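The likelihood comparison at the heart of the approach can be sketched as follows: predicted Poisson count rates for a grid of candidate test sources are scored against the measured time series. The flight path, 1/d² detector response, and source strength below are hypothetical stand-ins for the paper's pre-calculated responses:

```python
import numpy as np

def expected_counts(source, path, strength=500.0, background=5.0):
    """Hypothetical count-rate model: 1/d^2 falloff over constant background."""
    d2 = np.sum((path - source) ** 2, axis=1) + 1.0    # +1 softens the singularity
    return strength / d2 + background

path = np.column_stack([np.linspace(0, 20, 40), np.full(40, 3.0)])  # one flyover
true_source = np.array([12.0, 0.0])
measured = expected_counts(true_source, path)           # noise-free for clarity

# Poisson log-likelihood of the measured series for each candidate grid source.
xs, ys = np.meshgrid(np.arange(0, 21), np.arange(-5, 6))
candidates = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
loglik = [np.sum(measured * np.log(lam) - lam)
          for lam in (expected_counts(c, path) for c in candidates)]
best = candidates[int(np.argmax(loglik))]
```

Note that an isotropic detector on a single straight flyover cannot distinguish a source from its mirror image across the flight line, which is one reason the directionally-aware detector models discussed above improve localization.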
Local Feature Selection for Data Classification.
Armanfard, Narges; Reilly, James P; Komeili, Majid
2016-06-01
Typical feature selection methods choose an optimal global feature subset that is applied over all regions of the sample space. In contrast, in this paper we propose a novel localized feature selection (LFS) approach whereby each region of the sample space is associated with its own distinct optimized feature set, which may vary both in membership and size across the sample space. This allows the feature set to optimally adapt to local variations in the sample space. An associated method for measuring the similarities of a query datum to each of the respective classes is also proposed. The proposed method makes no assumptions about the underlying structure of the samples; hence the method is insensitive to the distribution of the data over the sample space. The method is efficiently formulated as a linear programming optimization problem. Furthermore, we demonstrate the method is robust against the over-fitting problem. Experimental results on eleven synthetic and real-world data sets demonstrate the viability of the formulation and the effectiveness of the proposed algorithm. In addition we show several examples where localized feature selection produces better results than a global feature selection method.
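A minimal sketch of the localized idea, assuming a simple Fisher-style score and a k-nearest-neighbour notion of "region" (the paper's actual formulation is a linear program, which this toy does not reproduce):

```python
def fisher_score(samples, labels, j):
    # Fisher-like criterion: squared class-mean gap over summed class variances
    a = [s[j] for s, y in zip(samples, labels) if y == 0]
    b = [s[j] for s, y in zip(samples, labels) if y == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / len(a)
    vb = sum((x - mb) ** 2 for x in b) / len(b)
    return (ma - mb) ** 2 / (va + vb + 1e-9)

def local_feature_set(samples, labels, query, k, n_keep):
    # score features only on the k samples nearest the query, so each
    # region of the sample space gets its own feature subset
    order = sorted(range(len(samples)),
                   key=lambda i: sum((s - q) ** 2
                                     for s, q in zip(samples[i], query)))[:k]
    loc_s = [samples[i] for i in order]
    loc_y = [labels[i] for i in order]
    scored = sorted(((fisher_score(loc_s, loc_y, j), j)
                     for j in range(len(samples[0]))), reverse=True)
    return sorted(j for _, j in scored[:n_keep])

# toy data: feature 0 separates the classes near (0.5, 5), feature 1 near (5, 0.5)
S = [(0.0, 5.0), (0.2, 5.0), (0.8, 5.0), (1.0, 5.0),
     (5.0, 0.0), (5.0, 0.2), (5.0, 0.8), (5.0, 1.0)]
Y = [0, 0, 1, 1, 0, 0, 1, 1]
near_a = local_feature_set(S, Y, (0.5, 5.0), k=4, n_keep=1)
near_b = local_feature_set(S, Y, (5.0, 0.5), k=4, n_keep=1)
```

The same dataset yields different optimal feature sets depending on where the query sits, which is the behaviour the LFS approach formalizes.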
Evaluation of the site effect with Heuristic Methods
NASA Astrophysics Data System (ADS)
Torres, N. N.; Ortiz-Aleman, C.
2017-12-01
The seismic site response in an area depends mainly on the local geological and topographical conditions. Estimating variations in ground motion can contribute significantly to seismic hazard assessment and help reduce human and economic losses. Site response estimation can be posed as a parameterized inversion problem that allows separating source and path effects. Generalized inversion (Field and Jacob, 1995) is one of the alternative methods for estimating local seismic response, and involves solving a strongly non-linear multiparametric problem. In this work, local seismic response was estimated using global optimization methods (genetic algorithms and simulated annealing), which allowed us to explore a wider range of solutions in a nonlinear search than conventional linear methods. Using VEOX Network velocity records collected from August 2007 to March 2009, we estimate the source, path, and site parameters corresponding to the S-wave amplitude spectra of the velocity records. The parameters resulting from this simultaneous inversion show excellent agreement, both in terms of the fit between observed and calculated spectra and in comparison with previous work by several authors.
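A generic simulated-annealing loop of the kind used for such nonlinear searches can be sketched as follows; the quadratic toy cost stands in for the real observed-versus-modelled spectral misfit, and every schedule parameter here is an assumption:

```python
import math
import random

def simulated_annealing(cost, x0, step, t0=1.0, cooling=0.995,
                        iters=2000, seed=1):
    rng = random.Random(seed)
    x, c = list(x0), cost(x0)
    best_x, best_c = x, c
    t = t0
    for _ in range(iters):
        # propose a random perturbation of the current parameter vector
        cand = [xi + rng.uniform(-step, step) for xi in x]
        cc = cost(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if cc < c or rng.random() < math.exp((c - cc) / t):
            x, c = cand, cc
            if cc < best_c:
                best_x, best_c = cand, cc
        t *= cooling
    return best_x, best_c

# toy "spectral misfit": a quadratic bowl with a known minimum at (2, -1)
target = [2.0, -1.0]
misfit = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
best_x, best_c = simulated_annealing(misfit, [0.0, 0.0], step=0.5)
```

The acceptance of occasional uphill moves at high temperature is what lets the search escape local minima that trap conventional linearized inversions.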
Mehta, J N; Heinen, J T
2001-08-01
Like many developing countries, Nepal has adopted a community-based conservation (CBC) approach in recent years to manage its protected areas, mainly in response to poor park-people relations. Among other things, under this approach the government has created new "people-oriented" conservation areas, formed and devolved legal authority to grassroots-level institutions to manage local resources, fostered infrastructure development, promoted tourism, and provided income-generating training to local people. Of interest to policy-makers and resource managers in Nepal and worldwide is whether this approach to conservation leads to improved attitudes on the part of local people. It is also important to know whether personal costs and benefits associated with various intervention programs, as well as socioeconomic and demographic characteristics, influence these attitudes. We explore these questions by looking at the experiences in the Annapurna and Makalu-Barun Conservation Areas, Nepal, which have largely adopted a CBC approach in policy formulation, planning, and management. The research was conducted during 1996 and 1997; the data collection methods included random household questionnaire surveys, informal interviews, and review of official records and published literature. The results indicated that the majority of local people held favorable attitudes toward these conservation areas. Logistic regression results revealed that participation in training, benefit from tourism, wildlife depredation, ethnicity, gender, and education level were the significant predictors of local attitudes in one or the other conservation area. We conclude that the CBC approach has potential to shape favorable local attitudes and that these attitudes will be mediated by some personal attributes.
A real-space stochastic density matrix approach for density functional electronic structure.
Beck, Thomas L
2015-12-21
The recent development of real-space grid methods has led to more efficient, accurate, and adaptable approaches for large-scale electrostatics and density functional electronic structure modeling. With the incorporation of multiscale techniques, linear-scaling real-space solvers are possible for density functional problems if localized orbitals are used to represent the Kohn-Sham energy functional. These methods still suffer from high computational and storage overheads, however, due to extensive matrix operations related to the underlying wave function grid representation. In this paper, an alternative stochastic method is outlined that aims to solve directly for the one-electron density matrix in real space. In order to illustrate aspects of the method, model calculations are performed for simple one-dimensional problems that display some features of the more general problem, such as spatial nodes in the density matrix. This orbital-free approach may prove helpful considering a future involving increasingly parallel computing architectures. Its primary advantage is the near-locality of the random walks, allowing for simultaneous updates of the density matrix in different regions of space partitioned across the processors. In addition, it allows for testing and enforcement of the particle number and idempotency constraints through stabilization of a Feynman-Kac functional integral as opposed to the extensive matrix operations in traditional approaches.
DLocalMotif: a discriminative approach for discovering local motifs in protein sequences.
Mehdi, Ahmed M; Sehgal, Muhammad Shoaib B; Kobe, Bostjan; Bailey, Timothy L; Bodén, Mikael
2013-01-01
Local motifs are patterns of DNA or protein sequences that occur within a sequence interval relative to a biologically defined anchor or landmark. Current protein motif discovery methods do not adequately consider such constraints to identify biologically significant motifs that are only weakly over-represented but spatially confined. Using negatives, i.e. sequences known to not contain a local motif, can further increase the specificity of their discovery. This article introduces the method DLocalMotif that makes use of positional information and negative data for local motif discovery in protein sequences. DLocalMotif combines three scoring functions, measuring degrees of motif over-representation, entropy and spatial confinement, specifically designed to discriminatively exploit the availability of negative data. The method is shown to outperform current methods that use only a subset of these motif characteristics. We apply the method to several biological datasets. The analysis of peroxisomal targeting signals uncovers several novel motifs that occur immediately upstream of the dominant peroxisomal targeting signal-1 signal. The analysis of proline-tyrosine nuclear localization signals uncovers multiple novel motifs that overlap with C2H2 zinc finger domains. We also evaluate the method on classical nuclear localization signals and endoplasmic reticulum retention signals and find that DLocalMotif successfully recovers biologically relevant sequence properties. http://bioinf.scmb.uq.edu.au/dlocalmotif/
NASA Astrophysics Data System (ADS)
Nugroho, P.
2018-02-01
The existence of creative industries is inseparable from the underlying social constructs that provide sources of creativity and innovation. The working of social capital in a society facilitates information exchange, knowledge transfer, and technology acquisition within an industry through social networks. As a result, a socio-spatial divide emerges in directing the growth of the creative industries. This paper examines how such a socio-spatial divide contributes to local creative industry development in the Semarang and Kudus batik clusters. An explanatory sequential mixed-methods approach, a quantitative phase followed by a qualitative phase, is chosen to better understand the interplay between tangible and intangible variables in the local batik clusters. In the former phase, surveys of secondary data taken from government statistics and reports, previous studies, and media coverage are used to identify the clustering pattern of the local batik industry and the local embeddedness factors that have shaped the existing business environment. In the latter phase, in-depth interviews, content analysis, and field observations are used to explore reciprocal relationships between the elements of social capital and the development of the local batik clusters. The results demonstrate that particular social ties have determined the forms of spatial proximity manifested in forward and backward business linkages. Trust, shared norms, and inherited traditions are the key social capital attributes that lead to such a socio-spatial divide. Therefore, the intermediating roles of bridging actors are necessary to encourage cooperation among the participating stakeholders for better cluster development.
New estimation architecture for multisensor data fusion
NASA Astrophysics Data System (ADS)
Covino, Joseph M.; Griffiths, Barry E.
1991-07-01
This paper describes a novel method of hierarchical asynchronous distributed filtering called the Net Information Approach (NIA). The NIA is a Kalman-filter-based estimation scheme for spatially distributed sensors which must retain their local optimality yet require a nearly optimal global estimate. The key idea of the NIA is that each local sensor-dedicated filter tells the global filter 'what I've learned since the last local-to-global transmission,' whereas in other estimation architectures the local-to-global transmission consists of 'what I think now.' An algorithm based on this idea has been demonstrated on a small-scale target-tracking problem with many encouraging results. Feasibility of this approach was demonstrated by comparing NIA performance to an optimal centralized Kalman filter (lower bound) via Monte Carlo simulations.
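The "what I've learned since the last transmission" idea can be sketched for a scalar state in information form (a hypothetical one-dimensional toy; the actual NIA is a Kalman-filter scheme for dynamic, multivariate states):

```python
def fuse_deltas(global_est, reports_now, reports_last):
    # global filter in information form: Y is inverse variance, z = Y * mean.
    # each local filter transmits only the information gained since its last
    # report, so the global update is a sum of information deltas.
    Y, z = global_est
    for (y_now, z_now), (y_old, z_old) in zip(reports_now, reports_last):
        Y += y_now - y_old
        z += z_now - z_old
    return Y, z

# two local filters reporting for the first time (previous reports are zero):
# sensor 1 estimates the state as 2.0 with variance 1.0 -> (1.0, 2.0)
# sensor 2 estimates the state as 4.0 with variance 0.5 -> (2.0, 8.0)
Y, z = fuse_deltas((0.0, 0.0),
                   [(1.0, 2.0), (2.0, 8.0)],
                   [(0.0, 0.0), (0.0, 0.0)])
estimate = z / Y
```

Because only deltas are transmitted, information already incorporated into the global estimate is never double-counted, which is the point of the NIA architecture.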
Adapt-Mix: learning local genetic correlation structure improves summary statistics-based analyses
Park, Danny S.; Brown, Brielin; Eng, Celeste; Huntsman, Scott; Hu, Donglei; Torgerson, Dara G.; Burchard, Esteban G.; Zaitlen, Noah
2015-01-01
Motivation: Approaches to identifying new risk loci, training risk prediction models, imputing untyped variants and fine-mapping causal variants from summary statistics of genome-wide association studies are playing an increasingly important role in the human genetics community. Current summary statistics-based methods rely on global ‘best guess’ reference panels to model the genetic correlation structure of the dataset being studied. This approach, especially in admixed populations, has the potential to produce misleading results, ignores variation in local structure and is not feasible when appropriate reference panels are missing or small. Here, we develop a method, Adapt-Mix, that combines information across all available reference panels to produce estimates of local genetic correlation structure for summary statistics-based methods in arbitrary populations. Results: We applied Adapt-Mix to estimate the genetic correlation structure of both admixed and non-admixed individuals using simulated and real data. We evaluated our method by measuring the performance of two summary statistics-based methods: imputation and joint-testing. When using our method as opposed to the current standard of ‘best guess’ reference panels, we observed a 28% decrease in mean-squared error for imputation and a 73.7% decrease in mean-squared error for joint-testing. Availability and implementation: Our method is publicly available in a software package called ADAPT-Mix available at https://github.com/dpark27/adapt_mix. Contact: noah.zaitlen@ucsf.edu PMID:26072481
Żurek-Biesiada, Dominika; Szczurek, Aleksander T; Prakash, Kirti; Mohana, Giriram K; Lee, Hyun-Keun; Roignant, Jean-Yves; Birk, Udo J; Dobrucki, Jurek W; Cremer, Christoph
2016-05-01
Higher order chromatin structure is not only required to compact and spatially arrange long chromatids within a nucleus, but also has important functional roles, including control of gene expression and DNA processing. However, studies of chromatin nanostructures cannot be performed using conventional widefield and confocal microscopy because of the limited optical resolution. Various methods of superresolution microscopy have been described to overcome this difficulty, such as structured illumination and single molecule localization microscopy. We report here that the standard DNA dye Vybrant(®) DyeCycle™ Violet can be used to provide single molecule localization microscopy (SMLM) images of DNA in nuclei of fixed mammalian cells. This SMLM method enabled optical isolation and localization of large numbers of DNA-bound molecules, usually in excess of 10^6 signals in one cell nucleus. The technique yielded high-quality images of nuclear DNA density, revealing subdiffraction chromatin structures on the order of 100 nm in size; the interchromatin compartment was visualized at unprecedented optical resolution. The approach offers several advantages over previously described high resolution DNA imaging methods, including high specificity, the ability to record images using single-wavelength excitation, and a higher density of single molecule signals than reported in previous SMLM studies. The method is compatible with DNA/multicolor SMLM imaging, which employs simple staining methods suited also for conventional optical microscopy. Copyright © 2016. Published by Elsevier Inc.
Global localization of 3D point clouds in building outline maps of urban outdoor environments.
Landsiedel, Christian; Wollherr, Dirk
2017-01-01
This paper presents a method to localize a robot in a global coordinate frame based on a sparse 2D map containing outlines of building and road network information and no location prior information. Its input is a single 3D laser scan of the surroundings of the robot. The approach extends the generic chamfer matching template matching technique from image processing by including visibility analysis in the cost function. Thus, the observed building planes are matched to the expected view of the corresponding map section instead of to the entire map, which makes a more accurate matching possible. Since this formulation operates on generic edge maps from visual sensors, the matching formulation can be expected to generalize to other input data, e.g., from monocular or stereo cameras. The method is evaluated on two large datasets collected in different real-world urban settings and compared to a baseline method from literature and to the standard chamfer matching approach, where it shows considerable performance benefits, as well as the feasibility of global localization based on sparse building outline data.
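Generic chamfer matching, the template-matching technique the method extends, can be sketched on a toy occupancy grid (translation-only, with an L1 distance transform; the paper adds visibility analysis and full pose search, which this sketch omits):

```python
from collections import deque

def distance_transform(edges, w, h):
    # multi-source BFS: L1 distance from every cell to the nearest edge cell
    dist = {(x, y): None for x in range(w) for y in range(h)}
    q = deque()
    for e in edges:
        dist[e] = 0
        q.append(e)
    while q:
        x, y = q.popleft()
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in dist and dist[nxt] is None:
                dist[nxt] = dist[(x, y)] + 1
                q.append(nxt)
    return dist

def chamfer_match(template, edges, w, h):
    # slide the template over the map; score = mean distance of template
    # points to the nearest observed edge; return the best offset
    dt = distance_transform(edges, w, h)
    best = None
    for ox in range(w):
        for oy in range(h):
            pts = [(x + ox, y + oy) for x, y in template]
            if not all(p in dt for p in pts):
                continue
            score = sum(dt[p] for p in pts) / len(pts)
            if best is None or score < best[0]:
                best = (score, (ox, oy))
    return best[1]

# an L-shaped building outline observed at offset (3, 2) in an 8x8 map
template = [(0, 0), (1, 0), (2, 0), (0, 1), (0, 2)]
edges = [(3, 2), (4, 2), (5, 2), (3, 3), (3, 4)]
offset = chamfer_match(template, edges, 8, 8)
```

Precomputing the distance transform once makes each candidate pose cheap to score, which is why chamfer matching scales to large building-outline maps.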
Jiang, Joe-Air; Chuang, Cheng-Long; Lin, Tzu-Shiang; Chen, Chia-Pang; Hung, Chih-Hung; Wang, Jiing-Yi; Liu, Chang-Wang; Lai, Tzu-Yun
2010-01-01
In recent years, various received signal strength (RSS)-based localization estimation approaches for wireless sensor networks (WSNs) have been proposed. RSS-based localization is regarded as a low-cost solution for many location-aware applications in WSNs. In previous studies, the radiation patterns of all sensor nodes are assumed to be spherical, which is an oversimplification of the radio propagation model in practical applications. In this study, we present an RSS-based cooperative localization method that estimates unknown coordinates of sensor nodes in a network. Arrangement of two external low-cost omnidirectional dipole antennas is developed by using the distance-power gradient model. A modified robust regression is also proposed to determine the relative azimuth and distance between a sensor node and a fixed reference node. In addition, a cooperative localization scheme that incorporates estimations from multiple fixed reference nodes is presented to improve the accuracy of the localization. The proposed method is tested via computer-based analysis and field test. Experimental results demonstrate that the proposed low-cost method is a useful solution for localizing sensor nodes in unknown or changing environments.
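A minimal RSS-based ranging-and-localization sketch, assuming an idealized log-distance path-loss model with hypothetical calibration constants (the paper's method additionally models antenna radiation patterns and uses robust regression):

```python
import math

def rss_to_distance(rss, p0=-40.0, n=2.0, d0=1.0):
    # log-distance path-loss model: rss = p0 - 10*n*log10(d/d0);
    # p0, n, d0 are assumed calibration constants, not values from the paper
    return d0 * 10 ** ((p0 - rss) / (10.0 * n))

def localize(anchors, rss_values, span=10.0, step=0.1):
    # brute-force grid search minimising squared range residuals
    ranges = [rss_to_distance(r) for r in rss_values]
    best = None
    steps = int(round(span / step)) + 1
    for i in range(steps):
        for j in range(steps):
            x, y = i * step, j * step
            err = sum((math.hypot(x - ax, y - ay) - d) ** 2
                      for (ax, ay), d in zip(anchors, ranges))
            if best is None or err < best[0]:
                best = (err, (x, y))
    return best[1]

# synthetic node at (3, 4) heard by three fixed reference anchors
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
rss = [-40.0 - 20.0 * math.log10(math.hypot(3.0 - ax, 4.0 - ay))
       for ax, ay in anchors]
x_hat, y_hat = localize(anchors, rss)
```

With noise-free RSS values, the grid minimum lands on the true node position; in practice, combining estimates from multiple reference nodes, as the paper does, is what keeps the error bounded.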
A time-domain finite element boundary integral approach for elastic wave scattering
NASA Astrophysics Data System (ADS)
Shi, F.; Lowe, M. J. S.; Skelton, E. A.; Craster, R. V.
2018-04-01
The response of complex scatterers, such as rough or branched cracks, to incident elastic waves is required in many areas of industrial importance such as those in non-destructive evaluation and related fields; we develop an approach to generate accurate and rapid simulations. To achieve this we develop, in the time domain, an implementation to efficiently couple the finite element (FE) method within a small local region, and the boundary integral (BI) globally. The FE explicit scheme is run in a local box to compute the surface displacement of the scatterer, by giving forcing signals to excitation nodes, which can lie on the scatterer itself. The required input forces on the excitation nodes are obtained with a reformulated FE equation, according to the incident displacement field. The surface displacements computed by the local FE are then projected, through time-domain BI formulae, to calculate the scattering signals with different modes. This new method yields huge improvements in the efficiency of FE simulations for scattering from complex scatterers. We present results using different shapes and boundary conditions, all simulated using this approach in both 2D and 3D, and then compare with full FE models and theoretical solutions to demonstrate the efficiency and accuracy of this numerical approach.
ERIC Educational Resources Information Center
Bogotch, Ira; Maslin-Ostrowski, Patricia
2010-01-01
Purpose: This study describes how an educational leadership department transformed its regional identity and localized practices over a ten-year period (1997-2007) to become internationalized in terms of research, teaching, and service. Research Methods/Approach (e.g., Setting, Participants, Research Design, Data Collection and Analysis): A basic…
Vibration band gaps for elastic metamaterial rods using wave finite element method
NASA Astrophysics Data System (ADS)
Nobrega, E. D.; Gautier, F.; Pelat, A.; Dos Santos, J. M. C.
2016-10-01
Band gaps in elastic metamaterial rods with spatially periodic distribution and periodically attached local resonators are investigated. Recent techniques for analyzing metamaterial systems combine analytical or numerical methods with wave propagation analysis. One of them, called here the wave spectral element method (WSEM), combines the spectral element method (SEM) with the Floquet-Bloch theorem. A more recent methodology, the wave finite element method (WFEM), developed to calculate the dynamic behavior of periodic acoustic and structural systems, uses a similar approach in which SEM is replaced by the conventional finite element method (FEM). In this paper, WFEM is used to calculate band gaps in elastic metamaterial rods with spatially periodic distribution and periodically attached local resonators with multiple degrees of freedom (M-DOF). Simulated examples with band gaps generated by Bragg scattering and local resonators are calculated by WFEM and verified with WSEM, which is used as a reference method. Results are presented in the form of attenuation constants, vibration transmittance, and frequency response functions (FRFs). For all cases, WFEM and WSEM results agree, provided that the number of elements used in WFEM is sufficient for convergence. An experimental test was conducted on a real elastic metamaterial rod, manufactured from plastic in a 3D printer, without local resonance-type effects. The experimental results for the metamaterial rod with band gaps generated by Bragg scattering are compared with the simulated ones. Both numerical methods (WSEM and WFEM) localize the band gap position and width very close to the experimental results. A hybrid approach combining WFEM with the commercial finite element software ANSYS is proposed to model complex metamaterial systems. Two examples, modeling an elastic metamaterial rod unit cell with a simple 1D rod element and with 3D solid elements, illustrate its efficiency and accuracy; the results show good agreement with the experimental data.
Fritscher, Karl; Grunerbl, Agnes; Hanni, Markus; Suhm, Norbert; Hengg, Clemens; Schubert, Rainer
2009-10-01
Currently, conventional X-ray and CT images, as well as invasive methods performed during the surgical intervention, are used to judge the local quality of a fractured proximal femur. However, these approaches either depend on the surgeon's experience or cannot assist diagnostic and planning tasks preoperatively. Therefore, in this work a method for individual analysis of local bone quality in the proximal femur, based on model-based analysis of CT and X-ray images of femur specimens, is proposed. A combined representation of the shape and spatial intensity distribution of an object, together with different statistical approaches for dimensionality reduction, is used to create a statistical appearance model for assessing local bone quality in CT and X-ray images. The developed algorithms are tested and evaluated on 28 femur specimens. It is shown that the tools and algorithms presented herein can automatically and objectively predict bone mineral density values as well as a biomechanical parameter of the bone that can be measured intraoperatively.
Mehl, S.; Hill, M.C.
2002-01-01
Models with local grid refinement, as often required in groundwater models, pose special problems for model calibration. This work investigates the calculation of sensitivities and the performance of regression methods using two existing and one new method of grid refinement. The existing local grid refinement methods considered are: (a) a variably spaced grid in which the grid spacing becomes smaller near the area of interest and larger where such detail is not needed, and (b) telescopic mesh refinement (TMR), which uses the hydraulic heads or fluxes of a regional model to provide the boundary conditions for a locally refined model. The new method has a feedback between the regional and local grids using shared nodes, and thereby, unlike the TMR methods, balances heads and fluxes at the interfacing boundary. Results for sensitivities are compared for the three methods and the effect of the accuracy of sensitivity calculations are evaluated by comparing inverse modelling results. For the cases tested, results indicate that the inaccuracies of the sensitivities calculated using the TMR approach can cause the inverse model to converge to an incorrect solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V., E-mail: truhlar@umn.edu, E-mail: candler@aem.umn.edu
2014-02-07
Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
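The one-dimensional moving least squares idea that motivates L-IMLS can be sketched with a locally weighted linear fit (a toy with a Gaussian weight truncated at an assumed cutoff radius, not the paper's six-dimensional method):

```python
import math

def imls_eval(xq, xs, ys, radius=1.0):
    # local weighted linear least squares: minimise sum w_i (a + b*x_i - y_i)^2
    # with a Gaussian weight centred at the query and truncated at `radius`
    pts = [(x, y) for x, y in zip(xs, ys) if abs(x - xq) <= radius]
    w = [math.exp(-((x - xq) / radius) ** 2) for x, _ in pts]
    sw = sum(w)
    sx = sum(wi * x for wi, (x, _) in zip(w, pts))
    sy = sum(wi * y for wi, (_, y) in zip(w, pts))
    sxx = sum(wi * x * x for wi, (x, _) in zip(w, pts))
    sxy = sum(wi * x * y for wi, (x, y) in zip(w, pts))
    det = sw * sxx - sx * sx
    if abs(det) < 1e-12:
        return sy / sw  # degenerate window: fall back to the weighted mean
    a = (sxx * sy - sx * sxy) / det
    b = (sw * sxy - sx * sy) / det
    return a + b * xq

# data sampled from an exactly linear surface y = 2x + 1
xs = [i * 0.1 for i in range(21)]
ys = [2.0 * x + 1.0 for x in xs]
val = imls_eval(0.55, xs, ys)
```

On exactly linear data the local fit reproduces the surface regardless of the weights; the interesting behaviour, and the reason for correlating cutoff radius with data density, appears only for curved surfaces and uneven sampling.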
Standardised Library Instruction Assessment: An Institution-Specific Approach
ERIC Educational Resources Information Center
Staley, Shannon M.; Branch, Nicole A.; Hewitt, Tom L.
2010-01-01
Introduction: We explore the use of a psychometric model for locally-relevant, information literacy assessment, using an online tool for standardised assessment of student learning during discipline-based library instruction sessions. Method: A quantitative approach to data collection and analysis was used, employing standardised multiple-choice…
Localization of diffusion sources in complex networks with sparse observations
NASA Astrophysics Data System (ADS)
Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng
2018-04-01
Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Based on a backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with a limited number of observers. Results on model networks and empirical networks demonstrate that, for a given fraction of observers, the localization accuracy of our method improves as network size increases. Moreover, compared with the previous maximum-minimum method, our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust in noisy environments and against different strategies for choosing observers.
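A simple observer-based source-scoring sketch in the same spirit (hop counts stand in for diffusion times, and the variance-of-residuals score is an illustrative choice, not the paper's integer-programming formulation):

```python
from collections import deque

def bfs_dist(adj, src):
    # hop distances from src, standing in for diffusion arrival times
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def locate_source(adj, observers, times):
    # score every candidate by the variance of (observed time - hop distance);
    # at the true source the residuals collapse to a constant offset
    best = None
    for cand in adj:
        d = bfs_dist(adj, cand)
        resid = [times[o] - d[o] for o in observers]
        mean = sum(resid) / len(resid)
        var = sum((r - mean) ** 2 for r in resid)
        if best is None or var < best[0]:
            best = (var, cand)
    return best[1]

# path graph 0-1-2-3-4-5, diffusion started at node 2,
# observed at nodes 0 and 5 with unit propagation delay per hop
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
src = locate_source(adj, [0, 5], {0: 2, 5: 3})
```

Because the score needs only the observers' arrival times, the sketch illustrates why a small, well-placed observer set can suffice for localization.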
Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines
NASA Astrophysics Data System (ADS)
Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore
2017-04-01
A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data correspond to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers instrumented on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
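The matching pursuit building block evaluated in the paper can be sketched on a toy orthonormal dictionary (each atom standing in for a candidate-location waveform; real dictionaries built from helical guided-wave arrivals are neither orthogonal nor this small):

```python
def matching_pursuit(signal, atoms, n_events):
    # greedy sparse recovery: pick the unit-norm atom best correlated with
    # the residual, subtract its contribution, repeat
    residual = list(signal)
    picked = []
    for _ in range(n_events):
        corrs = [sum(r * a for r, a in zip(residual, atom)) for atom in atoms]
        k = max(range(len(atoms)), key=lambda i: abs(corrs[i]))
        picked.append(k)
        residual = [r - corrs[k] * a for r, a in zip(residual, atoms[k])]
    return sorted(set(picked))

# toy dictionary: one orthonormal "source signature" per candidate location;
# two simultaneous events at locations 1 and 3
atoms = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
         [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
signal = [0.0, 2.0, 0.0, 3.0]
events = matching_pursuit(signal, atoms, 2)
```

Recovering both active locations from a single superposed measurement is exactly the regime where triangulation on arrival times breaks down but sparse reconstruction succeeds.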
Local Minima Free Parameterized Appearance Models
Nguyen, Minh Hoai; De la Torre, Fernando
2010-01-01
Parameterized Appearance Models (PAMs) (e.g. Eigentracking, Active Appearance Models, Morphable Models) are commonly used to model the appearance and shape variation of objects in images. While PAMs have numerous advantages relative to alternate approaches, they have at least two drawbacks. First, they are especially prone to local minima in the fitting process. Second, often few if any of the local minima of the cost function correspond to acceptable solutions. To solve these problems, this paper proposes a method to learn a cost function by explicitly optimizing that the local minima occur at and only at the places corresponding to the correct fitting parameters. To the best of our knowledge, this is the first paper to address the problem of learning a cost function to explicitly model local properties of the error surface to fit PAMs. Synthetic and real examples show improvement in alignment performance in comparison with traditional approaches. PMID:21804750
Localized Dictionaries Based Orientation Field Estimation for Latent Fingerprints.
Xiao Yang; Jianjiang Feng; Jie Zhou
2014-05-01
Dictionary-based orientation field estimation has shown promising performance for latent fingerprints. In this paper, we seek to exploit stronger prior knowledge of fingerprints in order to further improve performance. Recognizing that ridge orientations at different locations of a fingerprint have different characteristics, we propose a localized dictionaries-based orientation field estimation algorithm, in which a noisy orientation patch at a given location, output by a local estimation approach, is replaced by a real orientation patch from the local dictionary at the same location. The precondition for applying localized dictionaries is that the pose of the latent fingerprint must be estimated. We propose a Hough transform-based fingerprint pose estimation algorithm, in which the predictions about fingerprint pose made by all orientation patches in the latent fingerprint are accumulated. Experimental results on challenging latent fingerprint datasets show the proposed method markedly outperforms previous ones.
NASA Technical Reports Server (NTRS)
Antle, John M.; Valdivia, Roberto O.; Boote, Kenneth J.; Janssen, Sander; Jones, James W.; Porter, Cheryl H.; Rosenzweig, Cynthia; Ruane, Alexander C.; Thorburn, Peter J.
2015-01-01
This chapter describes methods developed by the Agricultural Model Intercomparison and Improvement Project (AgMIP) to implement a transdisciplinary, systems-based approach for regional-scale (local to national) integrated assessment of agricultural systems under future climate, biophysical, and socio-economic conditions. These methods were used by the AgMIP regional research teams in Sub-Saharan Africa and South Asia to implement the analyses reported in their respective chapters of this book. Additional technical details are provided in Appendix 1. The principal goal that motivates AgMIP's regional integrated assessment (RIA) methodology is to provide scientifically rigorous information needed to support improved decision-making by various stakeholders, ranging from local to national and international non-governmental and governmental organizations.
Diabetes Self-Management Education; Experience of People with Diabetes.
Mardanian Dehkordi, Leila; Abdoli, Samereh
2017-06-01
Introduction: Diabetes self-management education (DSME) is a major factor that can affect the quality of life of people with diabetes (PWD). Understanding the experience of PWD participating in DSME programs is an undeniable necessity in providing effective DSME to this population. The aim of the study was to explore the experiences of PWD from a local DSME program in Iran. Methods: This study applied a descriptive phenomenological approach. The participants were PWD attending a well-established local DSME program in an endocrinology and diabetes center in Isfahan, Iran. Fifteen participants willing to share their experience of DSME were selected through purposive sampling from September 2011 to June 2012. Data were collected via unstructured interviews and analyzed using Colaizzi's approach. Results: The experiences of participants were categorized under three main themes: content of diabetes education (useful versus repetitive, intensive and volatile), teaching methods (traditional, technology ignorant) and learning environment (friendly atmosphere, cramped and dark). Conclusion: It seems the current approach to DSME cannot meet the needs and expectations of PWD attending the program. Needs assessment, interactive teaching methods, a multidisciplinary approach, technology as well as appropriate physical space need to be considered to improve DSME.
Applications of hybrid genetic algorithms in seismic tomography
NASA Astrophysics Data System (ADS)
Soupios, Pantelis; Akca, Irfan; Mpogiatzis, Petros; Basokur, Ahmet T.; Papazachos, Constantinos
2011-11-01
Almost all earth sciences inverse problems are nonlinear and involve a large number of unknown parameters, making the application of analytical inversion methods quite restrictive. In practice, most analytical methods are local in nature and rely on a linearized form of the problem equations, adopting an iterative procedure which typically employs partial derivatives in order to optimize the starting (initial) model by minimizing a misfit (penalty) function. Unfortunately, especially for highly non-linear cases, the final model strongly depends on the initial model; hence the procedure is prone to entrapment in local minima of the misfit function, while the derivative calculation is often computationally inefficient and creates instabilities when numerical approximations are used. An alternative is to employ global techniques which do not rely on partial derivatives, are independent of the misfit form and are computationally robust. Such methods employ pseudo-randomly generated models (sampling an appropriately selected section of the model space) which are assessed in terms of their data fit. A typical example is the class of methods known as genetic algorithms (GA), which achieve this approximation through model representation and manipulation, and have attracted the attention of the earth sciences community during the last decade, with numerous applications already presented for a variety of geophysical problems. In this paper, we examine the efficiency of combining typical regularized least-squares and genetic methods for a seismic tomography problem. The proposed approach combines a local (LOM) and a global (GOM) optimization method, in an attempt to overcome the limitations of each individual approach, such as entrapment in local minima and slow convergence, respectively.
The potential of both optimization methods is tested and compared, both independently and jointly, using several test models and synthetic refraction travel-time data sets that employ the same experimental geometry, wavelength and geometrical characteristics of the model anomalies. Moreover, real data from a crosswell tomographic project for the subsurface mapping of an ancient wall foundation are used to test the efficiency of the proposed algorithm. The results show that the combined use of both methods can exploit the benefits of each approach, leading to improved final models and producing realistic velocity models, without significantly increasing the required computation time.
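The hybrid scheme described above can be illustrated with a minimal sketch: a genetic algorithm first explores the model space globally, and a finite-difference gradient descent then refines the best candidate locally. All function names, operator choices (averaging crossover, Gaussian mutation) and parameter values are illustrative assumptions, not the authors' implementation.

```python
import random

def hybrid_ga(misfit, bounds, pop_size=30, generations=40, local_steps=50, lr=0.1):
    """Toy hybrid inversion: a GA (global stage, "GOM") followed by
    finite-difference gradient descent (local stage, "LOM")."""
    dim = len(bounds)
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=misfit)
        parents = pop[:pop_size // 2]                     # selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover: midpoint
            i = random.randrange(dim)                      # mutation: perturb one gene
            lo, hi = bounds[i]
            child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    best = min(pop, key=misfit)
    # local refinement by finite-difference gradient descent
    h = 1e-6
    for _ in range(local_steps):
        f0 = misfit(best)
        grad = []
        for i in range(dim):
            pert = best[:]
            pert[i] += h
            grad.append((misfit(pert) - f0) / h)
        best = [x - lr * g for x, g in zip(best, grad)]
    return best
```

For a simple quadratic misfit the GA lands near the global basin and the local stage converges quickly; for multimodal misfits the GA stage is what prevents entrapment in a local minimum.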
NASA Astrophysics Data System (ADS)
Fillion, Anthony; Bocquet, Marc; Gratton, Serge
2018-04-01
The analysis in nonlinear variational data assimilation is the solution of a non-quadratic minimization. Thus, the analysis efficiency relies on its ability to locate a global minimum of the cost function. If this minimization uses a Gauss-Newton (GN) method, it is critical for the starting point to be in the attraction basin of a global minimum. Otherwise the method may converge to a local extremum, which degrades the analysis. With chaotic models, the number of local extrema often increases with the temporal extent of the data assimilation window, making the former condition harder to satisfy. This is unfortunate because the assimilation performance also increases with this temporal extent. However, a quasi-static (QS) minimization may overcome these local extrema. It accomplishes this by gradually injecting the observations in the cost function. This method was introduced by Pires et al. (1996) in a 4D-Var context. We generalize this approach to four-dimensional strong-constraint nonlinear ensemble variational (EnVar) methods, which are based on both a nonlinear variational analysis and the propagation of dynamical error statistics via an ensemble. This forces one to consider the cost function minimizations in the broader context of cycled data assimilation algorithms. We adapt this QS approach to the iterative ensemble Kalman smoother (IEnKS), an exemplar of nonlinear deterministic four-dimensional EnVar methods. Using low-order models, we quantify the positive impact of the QS approach on the IEnKS, especially for long data assimilation windows. We also examine the computational cost of QS implementations and suggest cheaper algorithms.
Zhu, Qile; Li, Xiaolin; Conesa, Ana; Pereira, Cécile
2018-05-01
Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models. We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages local contexts based on n-gram character and word embeddings via a Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around the word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can in principle be applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. These results place GRAM-CNN among the leading biomedical NER methods. To the best of our knowledge, we are the first to apply CNN-based structures to BioNER problems. The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN. Contact: andyli@ece.ufl.edu or aconesa@ufl.edu. Supplementary data are available at Bioinformatics online.
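The character-level half of such a model, sliding convolution filters over n-gram windows of character embeddings and max-pooling to a fixed-size word feature, can be sketched as follows. The embedding dimension, filter count and random initialization are illustrative assumptions; the full GRAM-CNN architecture additionally combines these features with word embeddings and further network layers.

```python
import numpy as np

def char_ngram_features(word, emb, filters, n=3):
    """Sketch of character n-gram convolution: embed each character,
    apply length-n filters to every n-gram window, ReLU, then max-pool
    over positions. `emb` maps characters to d-vectors; `filters` has
    shape (F, n, d). Assumes len(word) >= n."""
    chars = np.stack([emb[c] for c in word])          # (L, d) character embeddings
    L, _ = chars.shape
    windows = np.stack([chars[i:i + n] for i in range(L - n + 1)])  # (L-n+1, n, d)
    conv = np.einsum('wnd,fnd->wf', windows, filters)  # dot each window with each filter
    conv = np.maximum(conv, 0.0)                       # ReLU
    return conv.max(axis=0)                            # max-pool over positions -> (F,)

# toy usage with random embeddings and filters (illustrative only)
rng = np.random.default_rng(0)
emb = {c: rng.normal(size=4) for c in 'abcdefghijklmnopqrstuvwxyz'}
filters = rng.normal(size=(8, 3, 4))                   # 8 trigram filters
feat = char_ngram_features('protein', emb, filters)
```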
Modeling the uncertainty of estimating forest carbon stocks in China
NASA Astrophysics Data System (ADS)
Yue, T. X.; Wang, Y. F.; Du, Z. P.; Zhao, M. W.; Zhang, L. L.; Zhao, N.; Lu, M.; Larocque, G. R.; Wilson, J. P.
2015-12-01
Earth surface systems are controlled by a combination of global and local factors and cannot be understood without accounting for both components; the system dynamics cannot be recovered from the global or local controls alone. Ground forest inventory can accurately estimate forest carbon stocks at sample plots, but these sample plots are too sparse to support the spatial simulation of carbon stocks with the required accuracy. Satellite observation is an important source of global information for the simulation of carbon stocks: satellite remote sensing can supply spatially continuous information on forest carbon stocks, which is impossible to obtain from ground-based investigations, but this information carries considerable uncertainty. In this paper, we validated the Lund-Potsdam-Jena dynamic global vegetation model (LPJ), the Kriging method for spatial interpolation of ground sample plots, and a satellite-observation-based approach, as well as an approach for fusing the ground sample plots with satellite observations and an assimilation method for incorporating the ground sample plots into LPJ. The validation results indicated that both the data fusion and data assimilation approaches reduced the uncertainty of estimating carbon stocks. The data fusion approach, which uses an existing high-accuracy surface modeling method to fuse the ground sample plots with satellite observations (HASM-SOA), had the lowest uncertainty. The estimates produced with HASM-SOA were 26.1 and 28.4% more accurate than the satellite-based approach and the spatial interpolation of the sample plots, respectively. Using the preferred HASM-SOA method, forest carbon stocks of 7.08 Pg were estimated for China during the period from 2004 to 2008, an increase of 2.24 Pg since 1984.
Molecular counting of membrane receptor subunits with single-molecule localization microscopy
NASA Astrophysics Data System (ADS)
Krüger, Carmen; Fricke, Franziska; Karathanasis, Christos; Dietz, Marina S.; Malkusch, Sebastian; Hummer, Gerhard; Heilemann, Mike
2017-02-01
We report on quantitative single-molecule localization microscopy, a method that, in addition to super-resolved images of cellular structures, provides information on protein copy numbers in protein clusters. This approach is based on the analysis of blinking cycles of single fluorophores and on a model-free description of the distribution of the number of blinking events. We describe the experimental and analytical procedures, present cellular data for plasma membrane proteins and discuss the applicability of this method.
An approach to the language discrimination in different scripts using adjacent local binary pattern
NASA Astrophysics Data System (ADS)
Brodić, D.; Amelio, A.; Milivojević, Z. N.
2017-09-01
The paper proposes a method for discriminating the language of documents. First, each letter is encoded with a certain script type according to its status in the baseline area. The resulting cipher text is subjected to a feature extraction process, in which the local binary pattern as well as its expanded version, called the adjacent local binary pattern, are extracted. Because of differences in language characteristics, this analysis shows significant diversity, and this diversity is the key aspect in differentiating the languages. The proposed method is tested on a set of example documents, and the experiments give encouraging results.
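The general local binary pattern idea over a coded text sequence can be sketched in one dimension: each position is summarized by a bit pattern recording how its neighbors compare to it, and the histogram of patterns is the feature vector. This is an illustrative sketch under assumed conventions; the paper's adjacent LBP variant differs in how neighboring patterns are combined.

```python
def lbp_1d(codes, radius=2):
    """1D local binary pattern over a sequence of script-type codes.
    Each bit records whether a neighbor within `radius` positions is
    >= the center code; the normalized histogram of the resulting
    patterns serves as the texture feature."""
    n_bits = 2 * radius
    hist = [0] * (1 << n_bits)
    for i in range(radius, len(codes) - radius):
        pattern = 0
        for k, j in enumerate(range(i - radius, i + radius + 1)):
            if j == i:
                continue                       # skip the center element
            bit_index = k if k < radius else k - 1
            if codes[j] >= codes[i]:
                pattern |= 1 << bit_index
        hist[pattern] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]           # normalized pattern histogram
```

Two documents in different languages yield different code sequences and hence different pattern histograms, which a classifier can then separate.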
Local and nonlocal parallel heat transport in general magnetic fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Del-Castillo-Negrete, Diego B; Chacon, Luis
2011-01-01
A novel approach for the study of parallel transport in magnetized plasmas is presented. The method avoids numerical pollution issues of grid-based formulations and applies to integrable and chaotic magnetic fields with local or nonlocal parallel closures. In weakly chaotic fields, the method gives the fractal structure of the devil's staircase radial temperature profile. In fully chaotic fields, the temperature exhibits self-similar spatiotemporal evolution with a stretched-exponential scaling function for local closures and an algebraically decaying one for nonlocal closures. It is shown that, for both closures, the effective radial heat transport is incompatible with the quasilinear diffusion model.
Scanning tunneling microscopy current from localized basis orbital density functional theory
NASA Astrophysics Data System (ADS)
Gustafsson, Alexander; Paulsson, Magnus
2016-03-01
We present a method capable of calculating elastic scanning tunneling microscopy (STM) currents from localized atomic orbital density functional theory (DFT). To overcome the poor accuracy of the localized orbital description of the wave functions far away from the atoms, we propagate the wave functions using the total DFT potential. From the propagated wave functions, Bardeen's perturbative approach provides the tunneling current. To illustrate the method we investigate carbon monoxide adsorbed on a Cu(111) surface and recover the depression/protrusion observed experimentally with normal/CO-functionalized STM tips. The theory furthermore allows us to discuss the significance of s- and p-wave tips.
The Green’s functions for peridynamic non-local diffusion
Wang, L. J.; Xu, J. F.
2016-01-01
In this work, we develop the Green’s function method for the solution of the peridynamic non-local diffusion model in which the spatial gradient of the generalized potential in the classical theory is replaced by an integral of a generalized response function in a horizon. We first show that the general solutions of the peridynamic non-local diffusion model can be expressed as functionals of the corresponding Green’s functions for point sources, along with volume constraints for non-local diffusion. Then, we obtain the Green’s functions by the Fourier transform method for unsteady and steady diffusions in infinite domains. We also demonstrate that the peridynamic non-local solutions converge to the classical differential solutions when the non-local length approaches zero. Finally, the peridynamic analytical solutions are applied to an infinite plate heated by a Gauss source, and the predicted variations of temperature are compared with the classical local solutions. The peridynamic non-local diffusion model predicts a lower rate of variation of the field quantities than that of the classical theory, which is consistent with experimental observations. The developed method is applicable to general diffusion-type problems. PMID:27713658
NASA Astrophysics Data System (ADS)
Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric
2007-02-01
We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.
Confocal laser induced fluorescence with comparable spatial localization to the conventional method
NASA Astrophysics Data System (ADS)
Thompson, Derek S.; Henriquez, Miguel F.; Scime, Earl E.; Good, Timothy N.
2017-10-01
We present measurements of ion velocity distributions obtained by laser induced fluorescence (LIF) using a single viewport in an argon plasma. A patent-pending design, which we refer to as the confocal fluorescence telescope, combines large objective lenses with a large central obscuration and a spatial filter to achieve high spatial localization along the laser injection direction. Models of the injection and collection optics of the two assemblies are used to provide a theoretical estimate of the spatial localization of the confocal arrangement, which is taken to be the full width at half maximum of the spatial optical response. The new design achieves approximately 1.4 mm localization at a focal length of 148.7 mm, improving on previously published designs by an order of magnitude and approaching the localization achieved by the conventional method. The confocal method, however, does so without requiring a pair of separated, perpendicular optical paths. The confocal technique therefore eases the two-window access requirement of the conventional method, extending the application of LIF to experiments where conventional LIF measurements have been impossible or difficult, or where multiple viewports are scarce.
Non-axisymmetric local magnetostatic equilibrium
Candy, Jefferey M.; Belli, Emily A.
2015-03-24
In this study, we outline an approach to the problem of local equilibrium in non-axisymmetric configurations that adheres closely to Miller's original method for axisymmetric plasmas. Importantly, this method is novel in that it allows not only specification of the 3D shape, but also explicit specification of the shear in the 3D shape. A spectrally-accurate method for solution of the resulting nonlinear partial differential equations is also developed. We verify the correctness of the spectral method, in the axisymmetric limit, through comparisons with an independent numerical solution. Some analytic results for the two-dimensional case are given, and the connection to Boozer coordinates is clarified.
An improved local radial point interpolation method for transient heat conduction analysis
NASA Astrophysics Data System (ADS)
Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang
2013-06-01
The smoothing thin plate spline (STPS) interpolation, using the penalty function method from optimization theory, is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied, so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM) because the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three numerical examples are presented in this paper to demonstrate the validity and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.
Bounded Linear Stability Analysis - A Time Delay Margin Estimation Approach for Adaptive Control
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Ishihara, Abraham K.; Krishnakumar, Kalmanje Srinivas; Bakhtiari-Nejad, Maryam
2009-01-01
This paper presents a method for estimating time delay margin for model-reference adaptive control of systems with almost linear structured uncertainty. The bounded linear stability analysis method seeks to represent the conventional model-reference adaptive law by a locally bounded linear approximation within a small time window using the comparison lemma. The locally bounded linear approximation of the combined adaptive system is cast in a form of an input-time-delay differential equation over a small time window. The time delay margin of this system represents a local stability measure and is computed analytically by a matrix measure method, which provides a simple analytical technique for estimating an upper bound of time delay margin. Based on simulation results for a scalar model-reference adaptive control system, both the bounded linear stability method and the matrix measure method are seen to provide a reasonably accurate and yet not too conservative time delay margin estimation.
Comparison of local- to regional-scale estimates of ground-water recharge in Minnesota, USA
Delin, G.N.; Healy, R.W.; Lorenz, D.L.; Nimmo, J.R.
2007-01-01
Regional ground-water recharge estimates for Minnesota were compared to estimates made on the basis of four local- and basin-scale methods. Three local-scale methods (unsaturated-zone water balance (UZWB), water-table fluctuations (WTF) using three approaches, and age dating of ground water) yielded point estimates of recharge that represent spatial scales from about 1 to about 1,000 m². A fourth method (RORA, a basin-scale analysis of streamflow records using a recession-curve-displacement technique) yielded recharge estimates at a scale of 10s to 1,000s of km². The RORA basin-scale recharge estimates were regionalized to estimate recharge for the entire State of Minnesota on the basis of a regional regression recharge (RRR) model that also incorporated soil and climate data. Recharge rates estimated by the RRR model compared favorably to the local- and basin-scale recharge estimates. RRR estimates at study locations were about 41% less on average than the UZWB estimates, ranged from 44% greater to 12% less than estimates based on the three WTF approaches, were about 4% less than the age-dating estimates, and were about 5% greater than the RORA estimates. Of the methods used in this study, the WTF method is the simplest and easiest to apply. Recharge estimates made on the basis of the UZWB method were inconsistent with the results from the other methods. Recharge estimates using the RRR model could be a good source of input for regional ground-water flow models; RRR model results currently are being applied for this purpose in USGS studies elsewhere.
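The water-table fluctuation method mentioned above is, in its simplest form, recharge = specific yield × water-table rise. A minimal sketch, assuming rises are taken directly between successive readings (in practice each rise is measured against an extrapolated recession curve, and several such variants exist):

```python
def wtf_recharge(water_levels, specific_yield):
    """Simplest water-table fluctuation (WTF) recharge estimate:
    specific yield times the sum of water-table rises over the record.
    `water_levels` is a time series of head measurements (e.g. in m);
    the result is in the same length units."""
    rises = 0.0
    for prev, cur in zip(water_levels, water_levels[1:]):
        if cur > prev:                 # accumulate only rising limbs
            rises += cur - prev
    return specific_yield * rises
```

For example, a record rising 0.3 m and later 0.4 m in an aquifer with specific yield 0.2 gives 0.2 × 0.7 = 0.14 m of recharge.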
Local phase method for designing and optimizing metasurface devices.
Hsu, Liyi; Dupré, Matthieu; Ndao, Abdoulaye; Yellowhair, Julius; Kanté, Boubacar
2017-10-16
Metasurfaces have attracted significant attention due to their novel designs for flat optics. However, the approach usually used to engineer metasurface devices assumes that neighboring elements are identical, by extracting the phase information from simulations with periodic boundaries, or that near-field coupling between particles is negligible, by extracting the phase from single particle simulations. This is not the case most of the time and the approach thus prevents the optimization of devices that operate away from their optimum. Here, we propose a versatile numerical method to obtain the phase of each element within the metasurface (meta-atoms) while accounting for near-field coupling. Quantifying the phase error of each element of the metasurfaces with the proposed local phase method paves the way to the design of highly efficient metasurface devices including, but not limited to, deflectors, high numerical aperture metasurface concentrators, lenses, cloaks, and modulators.
Face Recognition Using Local Quantized Patterns and Gabor Filters
NASA Astrophysics Data System (ADS)
Khryashchev, V.; Priorov, A.; Stepanova, O.; Nikitin, A.
2015-05-01
The problem of face recognition in a natural or artificial environment has received a great deal of researchers' attention over the last few years. Many methods for accurate face recognition have been proposed. Nevertheless, these methods often fail to accurately recognize the person in difficult scenarios, e.g. low resolution, low contrast, or pose variations. We therefore propose an approach for accurate and robust face recognition using local quantized patterns and Gabor filters. The estimation of the eye centers is used as a preprocessing stage. The evaluation of our algorithm on different samples from the standardized FERET database shows that our method is invariant to general variations of lighting, expression, occlusion and aging. The proposed approach yields about a 20% increase in recognition accuracy compared with the known face recognition algorithms from the OpenCV library. The additional use of Gabor filters can significantly improve the robustness to changes in lighting conditions.
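A Gabor filter of the kind used here is a Gaussian envelope modulated by a sinusoidal carrier; convolving the face image with a bank of such filters at several orientations and scales extracts the texture features. A minimal sketch of the real part of a 2D Gabor kernel (kernel size and parameter values are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=6.0, gamma=0.5):
    """Real part of a 2D Gabor filter: a Gaussian envelope times a
    cosine carrier oriented at angle `theta`. `lam` is the carrier
    wavelength and `gamma` the spatial aspect ratio."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / lam)

# a small bank of 4 orientations, as typically used for face features
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```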
Localization of multiple defects using the compact phased array (CPA) method
NASA Astrophysics Data System (ADS)
Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.
2018-01-01
Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
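The total focusing method underlying this approach is a delay-and-sum over all transmit-receive pairs: every image pixel accumulates each pair's signal sampled at the time of flight from transmitter to pixel to receiver. A minimal sketch assuming a constant wave speed and full-matrix-capture data (the paper's updated TFM additionally incorporates wave propagation information, which is not modeled here):

```python
import numpy as np

def tfm_image(signals, tx_pos, rx_pos, grid, c, fs):
    """Basic TFM delay-and-sum. `signals[i][j]` is the A-scan for
    transmitter i and receiver j sampled at rate `fs`; positions and
    grid points are (x, y) tuples; `c` is the wave speed."""
    image = np.zeros(len(grid))
    for p, (px, py) in enumerate(grid):
        acc = 0.0
        for i, (txx, txy) in enumerate(tx_pos):
            d_tx = np.hypot(px - txx, py - txy)
            for j, (rxx, rxy) in enumerate(rx_pos):
                d_rx = np.hypot(px - rxx, py - rxy)
                k = int(round((d_tx + d_rx) / c * fs))   # time-of-flight sample
                if k < len(signals[i][j]):
                    acc += signals[i][j][k]
        image[p] = abs(acc)
    return image
```

A defect shows up as a pixel where the times of flight of all pairs line up with echoes in the A-scans, so the sum is large there and small elsewhere.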
Healy, R.W.; Russell, T.F.
1992-01-01
A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.
Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods
NASA Astrophysics Data System (ADS)
Ullrich, P. A.; Guerra, J. E.
2014-12-01
The Tempest Framework composes several compact numerical methods to easily facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes the implementations of Spectral Elements, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed such as: improved pressure gradient calculation, numerical stability by vertical/horizontal splitting, arbitrary order of accuracy, etc. The local numerical discretization allows for high performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.
NASA Astrophysics Data System (ADS)
Reppert, Michael; Tokmakoff, Andrei
The structural characterization of intrinsically disordered peptides (IDPs) presents a challenging biophysical problem. Extreme heterogeneity and rapid conformational interconversion make traditional methods difficult to interpret. Due to its ultrafast (ps) shutter speed, Amide I vibrational spectroscopy has received considerable interest as a novel technique to probe IDP structure and dynamics. Historically, Amide I spectroscopy has been limited to delivering global secondary structural information. More recently, however, the method has been adapted to study structure at the local level through incorporation of isotope labels into the protein backbone at specific amide bonds. Thanks to the acute sensitivity of Amide I frequencies to local electrostatic interactions, particularly hydrogen bonds, spectroscopic data on isotope-labeled residues directly report on local peptide conformation. Quantitative information can be extracted using electrostatic frequency maps which translate molecular dynamics trajectories into Amide I spectra for comparison with experiment. Here we present our recent efforts in the development of a rigorous approach to incorporating Amide I spectroscopic restraints into refined molecular dynamics structural ensembles using maximum entropy and related approaches. By combining force field predictions with experimental spectroscopic data, we construct refined structural ensembles for a family of short, strongly disordered, elastin-like peptides in aqueous solution.
NASA Astrophysics Data System (ADS)
Czarnecki, Łukasz; Grech, Dariusz; Pamuła, Grzegorz
2008-12-01
We confront global and local methods for analyzing crash-like events on the Polish financial market from the critical-phenomena point of view. These methods are based on the analysis of log-periodicity and of the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), describing the largest developing financial market in Europe, is analyzed in a daily time horizon. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the corresponding power-law-divergent price model. Predictions coming from the log-periodicity scenario are verified for all main crashes in WIG history. It is argued that crash predictions within the log-periodicity model depend strongly on the amount of data used in the fit and are therefore likely to contain large inaccuracies. Turning to the local fractal description, we calculate the so-called local (time-dependent) Hurst exponent H for the WIG time series and find a dependence between the local fractal properties of the series and the appearance of crashes on the financial market. The latter method seems to work better than the global approach, both for developing and for developed markets. The market situation around the Fed intervention of September 2007, and immediately after it, is also analyzed from the fractional-Brownian-motion point of view.
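The local (time-dependent) Hurst exponent can be estimated by running a rescaled-range (R/S) analysis inside a sliding window. The sketch below is illustrative; the subwindow sizes, window length, and step are arbitrary choices, not the values used in the paper.

```python
import numpy as np

def rs_hurst(x):
    """Hurst exponent of one window via rescaled-range (R/S) analysis:
    slope of log(R/S) versus log(subwindow size)."""
    n = len(x)
    sizes = [s for s in (8, 16, 32, 64, 128) if s <= n // 2]
    log_s, log_rs = [], []
    for s in sizes:
        rs_vals = []
        for start in range(0, n - s + 1, s):
            seg = x[start:start + s]
            z = np.cumsum(seg - seg.mean())   # cumulative deviation profile
            r = z.max() - z.min()             # range of the profile
            sd = seg.std()
            if sd > 0:
                rs_vals.append(r / sd)
        log_s.append(np.log(s))
        log_rs.append(np.log(np.mean(rs_vals)))
    return np.polyfit(log_s, log_rs, 1)[0]

def local_hurst(returns, window=256, step=16):
    """Time-dependent ('local') Hurst exponent over a sliding window."""
    return np.array([rs_hurst(returns[i:i + window])
                     for i in range(0, len(returns) - window + 1, step)])

rng = np.random.default_rng(1)
h = local_hurst(rng.normal(size=2000))   # uncorrelated returns: H near 0.5
```

For real index data the interesting signal is the drift of h below 0.5 (anti-persistence) ahead of a crash; the short-window R/S estimator is biased slightly above 0.5 for white noise, so in practice a bias correction or detrended fluctuation analysis is often used instead.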
NASA Astrophysics Data System (ADS)
Megat Jamual Fawaeed, P. S.; Daim, M. S.
2018-02-01
Local stakeholder involvement in Marine Protected Area (MPA) management can contribute to a successful MPA. Generally, participatory research in MPA management explores the relationship between the management approach adopted by the managing agencies and the level of participation of the local stakeholders who reside within the MPA. However, the current level of local community participation in MPA management in Malaysia is discouraging and does not align with the international Aichi Biodiversity Target 2020. To help meet this target, this paper explores a participatory research methodology applied to the local stakeholders of Pulau Perhentian Marine Park (PPMP), Terengganu, Malaysia. Q-methodology is used to investigate the perspectives of local stakeholders representing different stances on the issues, by having participants rank and sort a series of statements, combining quantitative and qualitative methods of data collection. A structured questionnaire was administered through face-to-face interviews. In total, 210 respondents from Kampung Pasir Hantu were randomly selected. In addition, a workshop with the managing agency (Department of Marine Park) was held to discuss the issues faced by the management of the PPMP. Using the Q-method, the researchers identified distinct viewpoints reflecting the stakeholders' perceptions of and opinions about community participation, and highlighting its current level in the MPA. This paper describes the phases, methodology and analysis involved in reaching these conclusions.
Bass, Judith K; Ryder, Robert W; Lammers, Marie-Christine; Mukaba, Thibaut N; Bolton, Paul A
2008-12-01
To determine if a post-partum depression syndrome exists among mothers in Kinshasa, Democratic Republic of Congo, by adapting and validating standard screening instruments. Using qualitative interviewing techniques, we interviewed a convenience sample of 80 women living in a large peri-urban community to better understand local conceptions of mental illness. We used this information to adapt two standard depression screeners, the Edinburgh Post-partum Depression Scale and the Hopkins Symptom Checklist. In a subsequent quantitative study, we identified another 133 women with and without the local depression syndrome and used this information to validate the adapted screening instruments. Based on the qualitative data, we found a local syndrome that closely approximates the Western model of major depressive disorder. The women we interviewed, representative of the local populace, considered this an important syndrome among new mothers because it negatively affects women and their young children. Women (n = 41) identified as suffering from this syndrome had statistically significantly higher depression severity scores on both adapted screeners than women identified as not having this syndrome (n = 20; P < 0.0001). When it is unclear or unknown if Western models of psychopathology are appropriate for use in the local context, these models must be validated to ensure cross-cultural applicability. Using a mixed-methods approach we found a local syndrome similar to depression and validated instruments to screen for this disorder. As the importance of compromised mental health in developing world populations becomes recognized, the methods described in this report will be useful more widely.
A distributed approach to the OPF problem
NASA Astrophysics Data System (ADS)
Erseghe, Tomaso
2015-12-01
This paper presents a distributed approach to optimal power flow (OPF) in an electrical network, suitable for application in a future smart grid scenario where access to resources and control is decentralized. The non-convex OPF problem is solved by an augmented Lagrangian method, similar to the widely known ADMM algorithm, with the key distinction that the penalty parameters are constantly increased. A (weak) assumption on local solver reliability is required to ensure convergence in all cases. A certificate of convergence to a local optimum is available in the case of bounded penalty parameters. For moderately sized networks (up to 300 nodes, and even in the presence of a severe partition of the network), the approach guarantees performance very close to the optimum, with an appreciably fast convergence speed. The generality of the approach makes it applicable to any (convex or non-convex) distributed optimization problem in networked form. In comparison with the literature, which is mostly focused on convex SDP approximations, the chosen approach guarantees adherence to the reference problem, and it also requires a smaller local computational effort.
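The flavor of the algorithm, an augmented-Lagrangian consensus scheme with a steadily increasing penalty, can be illustrated on a toy convex problem; the paper's actual target is the non-convex OPF, and the quadratic costs and growth factor below are hypothetical.

```python
import numpy as np

def consensus_admm(a, rho0=1.0, growth=1.05, iters=100):
    """Consensus ADMM in which the penalty parameter rho keeps increasing,
    echoing the paper's augmented-Lagrangian strategy. Toy problem: agent i
    holds the cost (x - a_i)^2 and all agents must agree on x; the optimum
    of the summed cost is mean(a)."""
    a = np.asarray(a, dtype=float)
    x = np.zeros_like(a)            # local copies of the shared variable
    u = np.zeros_like(a)            # scaled dual variables
    z, rho = 0.0, rho0
    for _ in range(iters):
        # local solves: argmin_x (x - a_i)^2 + (rho/2)(x - z + u_i)^2
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        z = float(np.mean(x + u))   # consensus (gather/average) step
        u = u + x - z               # dual ascent step
        rho *= growth               # key distinction: penalty keeps growing
    return z

z_star = consensus_admm([1.0, 2.0, 6.0, 7.0])   # converges toward mean = 4.0
```

In the OPF setting each "agent" is a network region solving its own (non-convex) subproblem, and the consensus step only couples the boundary variables shared between adjacent regions.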
ERIC Educational Resources Information Center
Tsoubaris, Dimitris; Georgopoulos, Aleksandros
2013-01-01
The objective of this qualitative research work is to detect the needs, aspirations and feelings of pupils experiencing local environmental problems and elaborate them through the prism of a socially critical educational approach. Semi-structured focus group interviews are used as a research method applied to four primary schools located near…
Stochastic seismic inversion based on an improved local gradual deformation method
NASA Astrophysics Data System (ADS)
Yang, Xiuwei; Zhu, Peimin
2017-12-01
A new stochastic seismic inversion method based on the local gradual deformation method is proposed, which can incorporate seismic data, well data, geology and their spatial correlations into the inversion process. Geological information, such as sedimentary facies and structures, can provide significant a priori information to constrain an inversion and arrive at reasonable solutions. The local a priori conditional cumulative distributions at each node of the model to be inverted are first established by indicator cokriging, which integrates well data as hard data and geological information as soft data. Probability field simulation is used to simulate different realizations consistent with the spatial correlations and local conditional cumulative distributions. The corresponding probability field is generated by the fast Fourier transform moving average method. Then, optimization is performed to match the seismic data via an improved local gradual deformation method, with two strategies adapted to seismic inversion. The first is to select and update local areas of poor fit between synthetic and real seismic data. The second is to divide each seismic trace into several parts and obtain the optimal parameters for each part individually. Applications to a synthetic example and a real case study demonstrate that our approach can effectively find fine-scale acoustic impedance models and provide uncertainty estimations.
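The core of the gradual deformation method is that a one-parameter combination of two independent Gaussian realizations is again a realization with the same mean and covariance, so the parameter t can be tuned against the seismic misfit without leaving the prior. A minimal sketch (the local variant applies a separate t per region or trace segment, as in the two strategies above):

```python
import numpy as np

def gradual_deform(y1, y2, t):
    """Gradual deformation: for any t, cos^2 + sin^2 = 1 guarantees that the
    combination of two independent realizations of the same Gaussian field
    keeps its mean and covariance, so a 1-D search over t explores the prior
    continuously while optimizing a data misfit."""
    return y1 * np.cos(np.pi * t) + y2 * np.sin(np.pi * t)

rng = np.random.default_rng(2)
y1 = rng.standard_normal(100_000)   # current realization
y2 = rng.standard_normal(100_000)   # independent complementary realization
y = gradual_deform(y1, y2, t=0.3)   # statistics of y match those of y1, y2
```

In the inversion loop, y would be transformed through the local conditional distributions into an impedance model, t chosen to reduce the seismic residual, and the result used as y1 for the next iteration with a fresh y2.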
Infrared target recognition based on improved joint local ternary pattern
NASA Astrophysics Data System (ADS)
Sun, Junding; Wu, Xiaosheng
2016-05-01
This paper presents a simple, efficient, yet robust approach, named joint orthogonal combination of local ternary patterns, for automatic forward-looking infrared target recognition. By fusing a variety of scales, it describes both macroscopic and microscopic textures better than traditional LBP-based methods. In addition, it can effectively reduce the feature dimensionality. Further, the rotation-invariant and uniform scheme, the robust LTP, and soft concave-convex partition are introduced to enhance its discriminative power. Experimental results demonstrate that the proposed method achieves competitive results compared with state-of-the-art methods.
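A basic (non-joint) local ternary pattern can be sketched as follows; the threshold, neighborhood, and example image are illustrative, and the paper's joint orthogonal combination and soft concave-convex partition are built on top of codes like these:

```python
import numpy as np

def ltp_codes(img, tau=5):
    """Local ternary pattern of the 8-neighborhood: each neighbor scores
    +1 if >= center + tau, -1 if <= center - tau, else 0. As is standard,
    the ternary code is split into an 'upper' and a 'lower' binary code."""
    h, w = img.shape
    # 8 neighbor offsets, clockwise from the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    upper = np.zeros((h - 2, w - 2), dtype=np.uint8)
    lower = np.zeros((h - 2, w - 2), dtype=np.uint8)
    c = img[1:-1, 1:-1].astype(int)          # interior (center) pixels
    for k, (dy, dx) in enumerate(offs):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(int)
        upper |= ((nb >= c + tau).astype(np.uint8) << k)
        lower |= ((nb <= c - tau).astype(np.uint8) << k)
    return upper, lower

img = np.array([[10, 10, 10, 10],
                [10, 50, 50, 10],
                [10, 50, 50, 10],
                [10, 10, 10, 10]], dtype=np.uint8)
up, lo = ltp_codes(img, tau=5)
```

Histograms of such codes over image blocks form the texture descriptor; the tolerance tau is what makes LTP less sensitive to noise than plain LBP.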
Statistical lamb wave localization based on extreme value theory
NASA Astrophysics Data System (ADS)
Harley, Joel B.
2018-04-01
Guided wave localization methods based on delay-and-sum imaging, matched field processing, and other techniques have been designed and researched to create images that locate and describe structural damage. The maximum value of such an image typically represents an estimated damage location. Yet, it is often unclear whether this maximum value, or any other value in the image, is a statistically significant indicator of damage. Furthermore, there are currently few, if any, approaches to assess the statistical significance of guided wave localization images. As a result, we present statistical delay-and-sum and statistical matched field processing localization methods to create statistically significant images of damage. Our framework uses constant false alarm rate statistics and extreme value theory to detect damage with little prior information. We demonstrate our methods with in situ guided wave data from an aluminum plate to detect two 0.75 cm diameter holes. Our results show the expected improvement in statistical significance as the number of sensors increases. With seventeen sensors, both methods successfully detect damage with statistical significance.
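A bare delay-and-sum imaging step, without the statistical thresholding that is the paper's contribution, can be sketched as follows; the sensor layout, wave speed, and the idealized one-sample scattered arrivals are all hypothetical.

```python
import numpy as np

def das_image(signals, sensors, grid, c, fs):
    """Delay-and-sum imaging: for each candidate pixel, sum each
    transmit-receive pair's residual signal at the sample matching the
    transmitter -> pixel -> receiver travel time; a scatterer at the pixel
    makes the contributions add coherently."""
    img = np.zeros(len(grid))
    for p, (gx, gy) in enumerate(grid):
        acc = 0.0
        for (tx, rx), sig in signals.items():
            d = (np.linalg.norm(sensors[tx] - [gx, gy]) +
                 np.linalg.norm(sensors[rx] - [gx, gy]))
            idx = int(round(d / c * fs))
            if idx < len(sig):
                acc += sig[idx]
        img[p] = acc
    return img

# four corner sensors on a 0.5 m plate, one hypothetical scatterer
fs, c = 1e6, 3000.0                     # sample rate (Hz), group velocity (m/s)
sensors = {0: np.array([0.0, 0.0]), 1: np.array([0.5, 0.0]),
           2: np.array([0.0, 0.5]), 3: np.array([0.5, 0.5])}
scat = np.array([0.2, 0.3])
signals = {}
for tx in sensors:
    for rx in sensors:
        if tx < rx:
            d = (np.linalg.norm(sensors[tx] - scat) +
                 np.linalg.norm(sensors[rx] - scat))
            sig = np.zeros(1000)
            sig[int(round(d / c * fs))] = 1.0   # idealized scattered arrival
            signals[(tx, rx)] = sig

grid = [(x, y) for x in np.linspace(0, 0.5, 26) for y in np.linspace(0, 0.5, 26)]
img = das_image(signals, sensors, grid, c, fs)
best = grid[int(np.argmax(img))]        # image peak: estimated damage location
```

The paper's question is precisely whether such a peak is significant; its framework models the distribution of pixel values under "no damage" and applies extreme value theory to threshold the image.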
Jiang, Xiaoying; Wei, Rong; Zhao, Yanjun; Zhang, Tongliang
2008-05-01
Knowledge of subnuclear localization in eukaryotic cells is essential for understanding the function of the nucleus. Developing prediction methods and tools for protein subnuclear localization has become an important research field in protein science, owing to the special characteristics of the cell nucleus. In this study, a novel approach is proposed to predict protein subnuclear localization. Each protein sample is represented by a Pseudo Amino Acid (PseAA) composition based on the approximate entropy (ApEn) concept, which reflects the complexity of a time series. A novel ensemble classifier is designed incorporating three AdaBoost classifiers, whose base classifiers are decision stumps, a fuzzy K-nearest-neighbors classifier, and radial-basis-function support vector machines, respectively. Different PseAA compositions are used as input to the different AdaBoost classifiers in the ensemble. A genetic algorithm is used to optimize the dimension and weight factors of the PseAA composition. Two datasets often used in published work validate the performance of the proposed approach. The jackknife cross-validation results are higher and better balanced than those of other methods on the same datasets. These promising results indicate that the proposed approach is effective and practical, and it might become a useful tool for protein subnuclear localization. The software in Matlab and supplementary materials are available freely by contacting the corresponding author.
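Approximate entropy itself is straightforward to compute. The sketch below follows the standard Pincus formulation (self-matches included); the parameters m and r and the test signals are illustrative, not those used in the paper.

```python
import numpy as np

def apen(x, m=2, r_frac=0.2):
    """Approximate entropy (ApEn): phi(m) - phi(m+1), where phi(m) is the
    average log-frequency with which length-m patterns repeat within
    tolerance r (Chebyshev distance, self-matches included). Regular
    sequences score low, irregular sequences high."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()

    def phi(mm):
        n = len(x) - mm + 1
        pats = np.array([x[i:i + mm] for i in range(n)])
        # Chebyshev distance between all pairs of length-mm patterns
        d = np.max(np.abs(pats[:, None, :] - pats[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)   # match frequency for each template
        return float(np.mean(np.log(c)))

    return phi(m) - phi(m + 1)

rng = np.random.default_rng(3)
regular = np.sin(np.linspace(0.0, 20.0 * np.pi, 400))   # predictable signal
noisy = rng.standard_normal(400)                        # irregular signal
```

In the paper's setting the "time series" is a numerical encoding of the amino acid sequence, and ApEn-style complexity values augment the PseAA composition vector.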
NASA Astrophysics Data System (ADS)
Hübener, H.; Pérez-Osorio, M. A.; Ordejón, P.; Giustino, F.
2012-09-01
We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of [F. Giustino et al., Phys. Rev. B 81, 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices of silicon and diamond calculated using our method fall within 1% of reference plane-wave calculations, demonstrating that this method is promising. We find that polarization orbitals are critical for achieving good agreement with plane-wave calculations, and that only a few additional ζ's are required for obtaining converged results, provided the split norm is properly optimized. Our present work establishes the validity of local orbital basis sets and the self-consistent Sternheimer approach for the calculation of dielectric matrices in extended systems, and prepares the ground for future studies of electronic excitations using these methods.
Discontinuous Galerkin Approaches for Stokes Flow and Flow in Porous Media
NASA Astrophysics Data System (ADS)
Lehmann, Ragnar; Kaus, Boris; Lukacova, Maria
2014-05-01
Firstly, we present results of a study comparing two different numerical approaches for solving the Stokes equations with strongly varying viscosity: the continuous Galerkin (i.e., FEM) and the discontinuous Galerkin (DG) method. Secondly, we show how the latter method can be extended and applied to flow in porous media governed by Darcy's law. Nonlinearities in the viscosity or other material parameters can lead to discontinuities in the velocity-pressure solution that may not be approximated well with continuous elements. The DG method allows for discontinuities across interior edges of the underlying mesh. Furthermore, depending on the chosen basis functions, it naturally enforces local mass conservation, i.e., in every mesh cell. Computationally, it provides the capability to locally adapt the polynomial degree and needs communication only between directly adjacent mesh cells making it highly flexible and easy to parallelize. The methods are compared for several geophysically relevant benchmarking setups and discussed with respect to speed, accuracy, computational efficiency.
Localization of Mobile Robots Using Odometry and an External Vision Sensor
Pizarro, Daniel; Mazo, Manuel; Santiso, Enrique; Marron, Marta; Jimenez, David; Cobreces, Santiago; Losada, Cristina
2010-01-01
This paper presents a sensor system for robot localization based on the information obtained from a single camera attached in a fixed place external to the robot. Our approach firstly obtains the 3D geometrical model of the robot based on the projection of its natural appearance in the camera while the robot performs an initialization trajectory. This paper proposes a structure-from-motion solution that uses the odometry sensors inside the robot as a metric reference. Secondly, an online localization method based on a sequential Bayesian inference is proposed, which uses the geometrical model of the robot as a link between image measurements and pose estimation. The online approach is resistant to hard occlusions and the experimental setup proposed in this paper shows its effectiveness in real situations. The proposed approach has many applications in both the industrial and service robot fields. PMID:22319318
VLPs of HCV local isolates for HCV immunoassay diagnostic approach in Indonesia
NASA Astrophysics Data System (ADS)
Prasetyo, Afiono Agung
2017-01-01
Hepatitis C Virus (HCV) infection is a major global disease which often leads to morbidity and mortality. Low survival is related to the lack of adequate diagnostics: HCV infection is frequently asymptomatic, and the rapid transformation of the virus means there are no specific diagnostic tests. Here, we investigated VLPs (virus-like particles) of an HCV local isolate as an immunoassay diagnostic approach to detect HCV infection, especially in Indonesia. The core, E1, and E2 genes of the HCV local isolate were cloned and molecularly analyzed, either singly or in recombinant-VLP form, to determine the molecular and chemical characteristics of each VLP in relation to its potential use in an immunoassay detection method for HCV infection. The results indicated that the molecular and chemical characteristics of the VLPs are comparable. Conclusion: VLPs of HCV have potential as an immunoassay diagnostic approach to detect HCV infection.
Improving mobile robot localization: grid-based approach
NASA Astrophysics Data System (ADS)
Yan, Junchi
2012-02-01
Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors mounted under the bottom of the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation of inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
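The drift-correction idea can be sketched in a few lines: when a floor sensor reports a grid-line crossing, the pose coordinate perpendicular to that line must lie on a multiple of the known tile pitch, so it can be snapped there. The pitch and numbers below are hypothetical, and the paper's method additionally uses the triangular sensor layout to infer the line's direction and the heading.

```python
import numpy as np

def snap_to_grid(est, pitch=0.3, axis='vertical'):
    """Grid-crossing correction: at the moment a line crossing is detected,
    the coordinate orthogonal to that line is known to sit on the tile
    lattice, so the odometry estimate is snapped to the nearest line,
    cancelling accumulated drift along that axis."""
    x, y = est
    if axis == 'vertical':              # crossed a line of constant x
        x = round(x / pitch) * pitch
    else:                               # crossed a line of constant y
        y = round(y / pitch) * pitch
    return np.array([x, y])

# odometry drifted to x = 0.93 m while actually crossing the line x = 0.9 m
corrected = snap_to_grid(np.array([0.93, 1.20]), pitch=0.3, axis='vertical')
```

The snap is only unambiguous while the accumulated drift stays below half the pitch, which is why crossings must be fused frequently relative to the inertial drift rate.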
Robust Point Set Matching for Partial Face Recognition.
Weng, Renliang; Lu, Jiwen; Tan, Yap-Peng
2016-03-01
Over the past three decades, a number of face recognition methods have been proposed in computer vision, and most of them use holistic face images for person identification. In many real-world scenarios, especially some unconstrained environments, human faces might be occluded by other objects, and it is difficult to obtain fully holistic face images for recognition. To address this, we propose a new partial face recognition approach to recognize persons of interest from their partial faces. Given a gallery image and a probe face patch, we first detect keypoints and extract their local textural features. Then, we propose a robust point set matching method to discriminatively match the two extracted local feature sets, where both the textural and geometrical information of the local features are explicitly used for matching simultaneously. Finally, the similarity of two faces is converted into the distance between the two aligned feature sets. Experimental results on four public face data sets show the effectiveness of the proposed approach.
NASA Astrophysics Data System (ADS)
Birk, Udo; Szczurek, Aleksander; Cremer, Christoph
2017-12-01
Current approaches to overcoming the conventional resolution limit of light microscopy (about 200 nm for visible light) often suffer from non-linear effects, which make it difficult to quantify the image intensities in the reconstructions and also affect the quantification of the biological structure under investigation. As an attempt to face these difficulties, we discuss a particular method of localization microscopy which is based on photostable fluorescent dyes. The proposed method can potentially be implemented as a fast alternative for quantitative localization microscopy, circumventing the need for the acquisition of thousands of image frames and for complex, highly dye-specific imaging buffers. Although calibration is still needed to extract quantitative data (such as the number of emitters), multispectral approaches are largely facilitated by the much less stringent requirements on imaging buffers. Furthermore, multispectral acquisitions can readily be obtained using commercial instrumentation such as a conventional confocal laser scanning microscope.
3-D Localization Method for a Magnetically Actuated Soft Capsule Endoscope and Its Applications
Yim, Sehyuk; Sitti, Metin
2014-01-01
In this paper, we present a 3-D localization method for a magnetically actuated soft capsule endoscope (MASCE). The proposed localization scheme consists of three steps. First, MASCE is oriented to be coaxially aligned with an external permanent magnet (EPM). Second, MASCE is axially contracted by the enhanced magnetic attraction of the approaching EPM. Third, MASCE recovers its initial shape by the retracting EPM as the magnetic attraction weakens. The combination of the estimated direction in the coaxial alignment step and the estimated distance in the shape deformation (recovery) step provides the position of MASCE in 3-D. It is experimentally shown that the proposed localization method could provide 2.0–3.7 mm of distance error in 3-D. This study also introduces two new applications of the proposed localization method. First, based on the trace of contact points between the MASCE and the surface of the stomach, the 3-D geometrical model of a synthetic stomach was reconstructed. Next, the relative tissue compliance at each local contact point in the stomach was characterized by measuring the local tissue deformation at each point due to the preloading force. Finally, the characterized relative tissue compliance parameter was mapped onto the geometrical model of the stomach toward future use in disease diagnosis. PMID:25383064
Harnessing graphical structure in Markov chain Monte Carlo learning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stolorz, P.E.; Chew, P.C.
1996-12-31
The Monte Carlo method is recognized as a useful tool in learning and probabilistic inference methods common to many datamining problems. Generalized Hidden Markov Models and Bayes nets are especially popular applications. However, the presence of multiple modes in many relevant integrands and summands often renders the method slow and cumbersome. Recent mean field alternatives designed to speed things up have been inspired by experience gleaned from physics. The current work adopts an approach very similar to this in spirit, but focusses instead upon dynamic programming notions as a basis for producing systematic Monte Carlo improvements. The idea is to approximate a given model by a dynamic programming-style decomposition, which then forms a scaffold upon which to build successively more accurate Monte Carlo approximations. Dynamic programming ideas alone fail to account for non-local structure, while standard Monte Carlo methods essentially ignore all structure. However, suitably-crafted hybrids can successfully exploit the strengths of each method, resulting in algorithms that combine speed with accuracy. The approach relies on the presence of significant "local" information in the problem at hand. This turns out to be a plausible assumption for many important applications. Example calculations are presented, and the overall strengths and weaknesses of the approach are discussed.
Local synchronization of a complex network model.
Yu, Wenwu; Cao, Jinde; Chen, Guanrong; Lü, Jinhu; Han, Jian; Wei, Wei
2009-02-01
This paper introduces a novel complex network model to evaluate the reputation of virtual organizations. Using the Lyapunov function and linear matrix inequality approaches, the local synchronization of the proposed model is investigated. Here, local synchronization is defined as inner synchronization within a group, not synchronization between different groups. Moreover, several sufficient conditions are derived to ensure the local synchronization of the proposed network model. Finally, several representative examples are given to show the effectiveness of the proposed methods and theories.
Explosion localization via infrasound.
Szuberla, Curt A L; Olson, John V; Arnoult, Kenneth M
2009-11-01
Two acoustic source localization techniques were applied to infrasonic data and their relative performance was assessed. The standard approach for low-frequency localization uses an ensemble of small arrays to separately estimate far-field source bearings, resulting in a source location triangulated from the various back azimuths. This method was compared to one developed by the authors that treats the smaller subarrays as a single meta-array. In numerical simulation and a field experiment, the latter technique was found to provide improved localization precision everywhere in the vicinity of a 3-km-aperture meta-array, often by an order of magnitude.
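The "standard approach" of intersecting back azimuths from separate small arrays can be sketched as a least-squares cross-fix; the coordinates and bearings below are hypothetical and noise-free.

```python
import numpy as np

def cross_fix(origins, bearings_deg):
    """Least-squares intersection of back-azimuth lines from several small
    arrays: each array contributes the line through its origin along its
    estimated bearing (azimuth measured clockwise from north), and the fix
    minimizes the summed squared perpendicular distance to all lines."""
    A, b = [], []
    for (ox, oy), brg in zip(origins, bearings_deg):
        th = np.radians(brg)
        normal = np.array([np.cos(th), -np.sin(th)])  # perpendicular to bearing
        A.append(normal)
        b.append(normal @ np.array([ox, oy]))
    fix, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return fix

src = np.array([2.0, 5.0])                  # hypothetical source (km east, north)
origins = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
bearings = [np.degrees(np.arctan2(src[0] - ox, src[1] - oy))
            for ox, oy in origins]
fix = cross_fix(origins, bearings)          # recovers src in the noise-free case
```

With real bearing errors, the perpendicular residual of each line grows with range, which is one reason the single meta-array treatment can outperform the cross-fix near the network.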
NASA Astrophysics Data System (ADS)
Grech, Dariusz
We define and confront global and local methods to analyze the financial crash-like events on the financial markets from the critical phenomena point of view. These methods are based respectively on the analysis of log-periodicity and on the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The log-periodicity analysis is made in a daily time horizon, for the whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), connected with the largest developing financial market in Europe. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the power-law-divergent price model usually discussed in log-periodic scenarios for developed markets. Predictions coming from the log-periodicity scenario are verified for all main crashes that took place in WIG history. It is argued that crash predictions within the log-periodicity model strongly depend on the amount of data taken to make a fit and therefore are likely to contain huge inaccuracies. Next, this global analysis is confronted with the local fractal description. To do so, we provide calculation of the so-called local (time-dependent) Hurst exponent H loc for the WIG time series and for main US stock market indices like DJIA and S&P 500. We point out the dependence between the behavior of the local fractal properties of financial time series and the appearance of crashes on the financial markets. We conclude that the local fractal method seems to work better than the global approach, both for developing and developed markets. The very recent situation on the market, particularly related to the Fed intervention in September 2007 and the period immediately afterwards, is also analyzed within the fractal approach. It is shown in this context how the financial market evolves through different phases of fractional Brownian motion.
Finally, the current situation on American market is analyzed in fractal language. This is to show how far we still are from the end of recession and from the beginning of a new boom on US financial market or on other world leading stocks.
Teacher and learner: Supervised and unsupervised learning in communities.
Shafto, Michael G; Seifert, Colleen M
2015-01-01
How far can teaching methods go to enhance learning? Optimal methods of teaching have been considered in research on supervised and unsupervised learning. Locally optimal methods are usually hybrids of teaching and self-directed approaches. The costs and benefits of specific methods have been shown to depend on the structure of the learning task, the learners, the teachers, and the environment.
District nursing workforce planning: a review of the methods.
Reid, Bernie; Kane, Kay; Curran, Carol
2008-11-01
District nursing services in Northern Ireland face increasing demands and challenges which may be responded to by effective and efficient workforce planning and development. The aim of this paper is to critically analyse district nursing workforce planning and development methods, in an attempt to find a suitable method for Northern Ireland. A systematic analysis of the literature reveals four methods: professional judgement; population-based health needs; caseload analysis and dependency-acuity. Each method has strengths and weaknesses. Professional judgement offers a 'belt and braces' approach but lacks sensitivity to fluctuating patient numbers. Population-based health needs methods develop staffing algorithms that reflect deprivation and geographical spread, but are poorly understood by district nurses. Caseload analysis promotes equitable workloads but poorly performing district nursing localities may continue if benchmarking processes only consider local data. Dependency-acuity methods provide a means of equalizing and prioritizing workload but are prone to district nurses overstating factors in patient dependency or understating carers' capability. In summary a mixed method approach is advocated to evaluate and adjust the size and mix of district nursing teams using empirically determined patient dependency and activity-based variables based on the population's health needs.
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire
2017-12-01
Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated presently. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including Localized Region-based Active Contour Models (LRACM). Many LRACM are popular, but each has strengths and weaknesses. In this paper, the automatic selection of an LRACM based on image content, and its application to brain tumor segmentation, is presented. A framework is proposed to select one of three LRACM: Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V), and the Localized Active Contour Model with Background Intensity Compensation (LACM-BIC). Twelve visual features are extracted to properly select the method that should process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than the three LRACM used separately.
Eye center localization and gaze gesture recognition for human-computer interaction.
Zhang, Wenhao; Smith, Melvyn L; Smith, Lyndon N; Farooq, Abdul
2016-03-01
This paper introduces an unsupervised modular approach for accurate and real-time eye center localization in images and videos, allowing a coarse-to-fine, global-to-regional scheme. The trajectories of eye centers in consecutive frames, i.e., gaze gestures, are further analyzed, recognized, and employed to boost the human-computer interaction (HCI) experience. This modular approach makes use of isophote and gradient features to estimate the eye center locations. A selective oriented gradient filter has been specifically designed to remove strong gradients from eyebrows, eye corners, and shadows, which sabotage most eye center localization methods. A real-world implementation utilizing these algorithms has been designed in the form of an interactive advertising billboard to demonstrate the effectiveness of our method for HCI. The eye center localization algorithm has been compared with 10 other algorithms on the BioID database and six other algorithms on the GI4E database, and it outperforms all of the compared algorithms in terms of localization accuracy. Further tests on the Extended Yale Face Database B and self-collected data have proved this algorithm to be robust against moderate head poses and poor illumination conditions. The interactive advertising billboard has demonstrated outstanding usability and effectiveness in our tests and shows great potential for benefiting a wide range of real-world HCI applications.
Ko, Nak Yong; Kuc, Tae-Yong
2015-01-01
This paper proposes a method for mobile robot localization in a partially unknown indoor environment. The method fuses two types of range measurements: the range from the robot to the beacons measured by ultrasonic sensors and the range from the robot to the walls surrounding the robot measured by a laser range finder (LRF). For the fusion, the unscented Kalman filter (UKF) is utilized. Because finding the Jacobian matrix is not feasible for range measurement using an LRF, UKF has an advantage in this situation over the extended KF. The locations of the beacons and range data from the beacons are available, whereas the correspondence of the range data to the beacon is not given. Therefore, the proposed method also deals with the problem of data association to determine which beacon corresponds to the given range data. The proposed approach is evaluated using different sets of design parameter values and is compared with the method that uses only an LRF or ultrasonic beacons. Comparative analysis shows that even though ultrasonic beacons are sparsely populated, have a large error and have a slow update rate, they improve the localization performance when fused with the LRF measurement. In addition, proper adjustment of the UKF design parameters is crucial for full utilization of the UKF approach for sensor fusion. This study contributes to the derivation of a UKF-based design methodology to fuse two exteroceptive measurements that are complementary to each other in localization. PMID:25970259
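The Jacobian-free step at the heart of the UKF, propagating a state distribution through a nonlinear range measurement via sigma points, can be sketched as follows. The pose, covariance, and beacon position are hypothetical values, and this is a sketch of the unscented transform only, not the authors' full filter:

```python
import numpy as np

def sigma_points(mean, cov, alpha=1e-3, beta=2.0, kappa=0.0):
    """Merwe scaled sigma points and weights for the unscented transform."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)        # matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])  # 2n+1 points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    return pts, wm, wc

def beacon_range(state, beacon):
    """Nonlinear range measurement from robot pose (x, y, theta) to a beacon."""
    return np.hypot(state[0] - beacon[0], state[1] - beacon[1])

# Propagate pose uncertainty through the measurement with no Jacobian:
mean = np.array([1.0, 2.0, 0.1])        # hypothetical robot pose
cov = np.diag([0.04, 0.04, 0.01])       # hypothetical pose covariance
beacon = np.array([5.0, 6.0])

pts, wm, wc = sigma_points(mean, cov)
z = np.array([beacon_range(p, beacon) for p in pts])
z_mean = wm @ z                          # predicted range
z_var = wc @ (z - z_mean) ** 2           # predicted range variance
```

The same transform applied to the LRF wall-range model is what lets the filter fuse both sensor types without an analytical Jacobian.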
NASA Astrophysics Data System (ADS)
Gao, Y.
2017-12-01
Regional precipitation recycling (i.e., the contribution of local evaporation to local precipitation) is an important component of the water cycle over the Tibetan Plateau (TP). Two methods were used to investigate regional precipitation recycling: 1) tracking of tagged atmospheric water parcels originating from evaporation in a source region (i.e., E-tagging), and 2) a back-trajectory approach to track the evaporative sources contributing to precipitation in a specific region. These two methods were applied to Weather Research and Forecasting (WRF) regional climate simulations to quantify the precipitation recycling ratio in the TP for three selected years: a climatologically normal, a dry, and a wet year. The simulation region is characterized by high average elevation above 4000 m and complex terrain. The back-trajectory approach is also applied to three sub-regions of the TP, namely the western, northeastern and southeastern TP, while the E-tagging approach provides recycling-ratio distributions over the whole TP. Three aspects are investigated to characterize the precipitation recycling: annual mean, seasonal variations and spatial distributions. Averaged over the TP, the precipitation recycling ratio estimated by the E-tagging approach is higher than that from the back-trajectory method. The back-trajectory approach uses a precipitation threshold, defined as the total precipitation in five days divided by a chosen number, and this number was set to 500 as a trade-off between equilibrium and computational efficiency. The lower recycling ratio derived from the back-trajectory approach is related to the precipitation threshold used. The E-tagging, however, tracks every air parcel of evaporation regardless of the precipitation amount. Neither method shows an obvious seasonal variation in the recycling ratio. The E-tagging approach shows high recycling ratios in the central TP, indicating stronger land-atmosphere interactions there than elsewhere.
Spatial sparsity based indoor localization in wireless sensor network for assistive healthcare.
Pourhomayoun, Mohammad; Jin, Zhanpeng; Fowler, Mark
2012-01-01
Indoor localization is one of the key topics in the area of wireless networks, with increasing applications in assistive healthcare, where tracking the position and actions of the patient or elderly is required for medical observation or accident prevention. Most common indoor localization methods are based on estimating one or more location-dependent signal parameters such as TOA, AOA or RSS. However, difficulties and challenges posed by the complex scenarios within a closed space, such as the well-known multipath effect, significantly limit the applicability of these existing approaches in an indoor assistive environment. In this paper, we develop a new one-stage localization method based on spatial sparsity in the x-y plane. In this method, we directly estimate the location of the emitter without going through the intermediate stage of TOA or signal-strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation. The results show that the proposed method is (i) very accurate even with a small number of sensors and (ii) very effective in addressing multipath issues.
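A minimal sketch of the one-stage sparse idea, under an assumed narrowband phase observation model: the emitter position is a sparse vector over a grid of candidate locations, and for a single emitter sparse recovery reduces to one matching-pursuit step. The sensor layout, wavelength, and grid are illustrative, not taken from the paper:

```python
import numpy as np

# Sensor positions and a 2-D grid of candidate emitter locations; the
# emitter position is modelled as a sparse vector over this grid.
sensors = np.array([[0, 0], [10, 0], [0, 10], [10, 10], [5, 0]], float)
xs, ys = np.meshgrid(np.linspace(0, 10, 21), np.linspace(0, 10, 21))
grid = np.column_stack([xs.ravel(), ys.ravel()])      # 441 candidate cells

def steering(point):
    """Hypothetical observation model: unit-norm vector of narrowband
    phase shifts induced by the per-sensor propagation distances."""
    d = np.linalg.norm(sensors - point, axis=1)
    v = np.exp(-2j * np.pi * d / 3.0)                 # assumed 3 m wavelength
    return v / np.linalg.norm(v)

# Dictionary with one column per grid cell; for a single emitter, sparse
# recovery reduces to picking the column most correlated with the data,
# with no intermediate TOA or signal-strength estimation.
A = np.column_stack([steering(p) for p in grid])
y = steering(np.array([6.5, 3.5]))                    # noiseless measurement
estimate = grid[np.argmax(np.abs(A.conj().T @ y))]
```

With measurement noise or multiple emitters, the argmax step would be replaced by an iterative sparse solver over the same dictionary.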
Terrain clutter simulation using physics-based scattering model and digital terrain profile data
NASA Astrophysics Data System (ADS)
Park, James; Johnson, Joel T.; Ding, Kung-Hau; Kim, Kristopher; Tenbarge, Joseph
2015-05-01
Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are two potential approaches to electromagnetic-wave-based localization: a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and b) recently developed microwave-imaging-based localization that requires no a priori information about the person's dielectric channels. In this paper, we study the second approach in terms of a variety of frequencies and signal-to-noise ratios for localization accuracy. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can serve as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. At high frequencies, however, the imaging algorithm is able to image only the outer boundaries of the tissues due to low penetration depth, as higher frequency means higher attenuation. Furthermore, the recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by the frequency or the noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
ERIC Educational Resources Information Center
Cattaneo, Matias D.; Titiunik, Rocío; Vazquez-Bare, Gonzalo
2017-01-01
The regression discontinuity (RD) design is a popular quasi-experimental design for causal inference and policy evaluation. The most common inference approaches in RD designs employ "flexible" parametric and nonparametric local polynomial methods, which rely on extrapolation and large-sample approximations of conditional expectations…
ERIC Educational Resources Information Center
Davies, Jeanne; Lester, Carolyn; O'Neill, Martin; Williams, Gareth
2008-01-01
Objective: This article describes the Triangle Project's work with a post industrial community, where healthy living activities were developed in response to community members' expressed needs. Method: An action research partnership approach was taken to reduce health inequalities, with local people developing their own activities to address…
Classroom Approaches and Japanese College Students' Intercultural Competence
ERIC Educational Resources Information Center
Gilbert, Joan
2017-01-01
Preparing college students to be contributing members of local and global societies requires educators to analyze the capabilities and needs of their students and to adjust instructional content and practice. The purpose of this mixed-methods study was twofold: (a) to explore how classroom approaches designed to facilitate students' questioning of…
Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD
NASA Astrophysics Data System (ADS)
Agostinelli, Giulia; Baglietto, Emilio
2017-11-01
The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on the aspect of deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, thereby negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.
Mudflow Hazards in the Georgian Caucasus - Using Participatory Methods to Investigate Disaster Risk
NASA Astrophysics Data System (ADS)
Spanu, Valentina; McCall, Michael; Gaprindashvili, George
2014-05-01
The Caucasus form an extremely complex mountainous area of Georgia in terms of geology and the scale and frequency of natural disaster processes. These processes, especially mudflows, frequently result in considerable damage to settlements, farmland and infrastructure. The intervals between mudflow occurrences are becoming significantly shorter; therefore, the most populated areas and infrastructure need to be included in risk zones. This presentation reviews the mudflow problem in Mleta village in the region of Dusheti, where the mudflow risk is critical. The villages of Zemo Mleta (Higher Mleta) and Kvemo Mleta (Lower Mleta) are entirely surrounded by unstable slopes where mudslides, landslides and floods are often generated. These hazards occur at least twice per year and sometimes result in severe events: in 2006 and 2010, very severe mudflow events in Mleta caused heavy damage. This paper focuses on the importance of cooperating with the local communities affected by these disasters, in order to obtain useful information and local knowledge to apply to disaster prevention and management. In October 2010, the EU-financed MATRA Project (Institutional Capacity Building in Natural Disaster Risk Reduction) in Georgia included fieldwork in several locations. Particular attention was given to Mleta village in the Caucasus Mountains, where the activities focused on institutional capacity-building in disaster risk reduction, including modern spatial planning approaches and technologies and the development of risk communication strategies. Participatory methods of acquiring local knowledge from local communities reveal many advantages compared to traditional survey approaches for collecting data.
In a participatory survey and planning approach, local authorities, experts and local communities work together to provide useful information and eventually produce a plan for Disaster Risk Reduction/Management (DRR and DRM). Participatory surveys (and participatory monitoring) elicit local people's knowledge about the specifics of the hazard concerning frequency, timing, warning signals, rates of flow, spatial extent, etc. Significantly, only this local knowledge from informants can reveal essential information about the different vulnerabilities of people and places, and about any coping or adjustment mechanisms that local people have. The participatory methods employed in Mleta included historical discussions with key informants, village social transects, participatory mapping with children, semi-structured interviews with inhabitants, and VCA (Vulnerability & Capacity Analysis). The geomorphological map, based on the local geology, was produced with ArcGIS; this allowed the areas at risk to be assessed and the corresponding maps to be drawn. We adapted and tested the software programme CyberTracker, a digital method of field data collection, as a survey tool. Google Earth, OpenStreetMap, Virtual Earth and Ilwis were used for data processing.
Local surface curvature analysis based on reflection estimation
NASA Astrophysics Data System (ADS)
Lu, Qinglin; Laligant, Olivier; Fauvet, Eric; Zakharova, Anastasia
2015-07-01
In this paper, we propose a novel reflection-based method to estimate the local orientation of a specular surface. For a calibrated scene with a fixed light band, the band is reflected by the surface onto the image plane of a camera. Then the local geometry between the surface and the reflected band is estimated. First, in order to find the relationship relating the object position, the object surface orientation and the band reflection, we study the fundamental geometry between a specular mirror surface and a band source. We then extend our approach to spherical surfaces with arbitrary curvature. Experiments are conducted with a mirror surface and a spherical surface. Results show that our method is able to obtain the local surface orientation merely by measuring the displacement and the form of the reflection.
NASA Astrophysics Data System (ADS)
Wagner, Jenny; Liesenborgs, Jori; Tessore, Nicolas
2018-04-01
Context. Local gravitational lensing properties, such as convergence and shear, determined at the positions of multiply imaged background objects, yield valuable information on the smaller-scale lensing matter distribution in the central part of galaxy clusters. Highly distorted multiple images with resolved brightness features like the ones observed in CL0024 allow us to study these local lensing properties and to tighten the constraints on the properties of dark matter on sub-cluster scale. Aim. We investigate to what precision local magnification ratios, J, ratios of convergences, f, and reduced shears, g = (g1, g2), can be determined independently of a lens model for the five resolved multiple images of the source at zs = 1.675 in CL0024. We also determine if a comparison to the respective results obtained by the parametric modelling tool Lenstool and by the non-parametric modelling tool Grale can detect biases in the models. For these lens models, we analyse the influence of the number and location of the constraints from multiple images on the lens properties at the positions of the five multiple images of the source at zs = 1.675. Methods: Our model-independent approach uses a linear mapping between the five resolved multiple images to determine the magnification ratios, ratios of convergences, and reduced shears at their positions. With constraints from up to six multiple image systems, we generate Lenstool and Grale models using the same image positions, cosmological parameters, and number of generated convergence and shear maps to determine the local values of J, f, and g at the same positions across all methods. Results: All approaches show strong agreement on the local values of J, f, and g. 
We find that Lenstool obtains the tightest confidence bounds even for convergences around one using constraints from six multiple-image systems, while the best Grale model is generated only using constraints from all multiple images with resolved brightness features and adding limited small-scale mass corrections. Yet, confidence bounds as large as the values themselves can occur for convergences close to one in all approaches. Conclusions: Our results agree with previous findings, support the light-traces-mass assumption, and the merger hypothesis for CL0024. Comparing the different approaches can detect model biases. The model-independent approach determines the local lens properties to a comparable precision in less than one second.
NASA Astrophysics Data System (ADS)
Nguyen, Thinh; Potter, Thomas; Grossman, Robert; Zhang, Yingchun
2018-06-01
Objective. Neuroimaging has been employed as a promising approach to advance our understanding of brain networks in both basic and clinical neuroscience. Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) represent two neuroimaging modalities with complementary features; EEG has high temporal resolution and low spatial resolution while fMRI has high spatial resolution and low temporal resolution. Multimodal EEG inverse methods have attempted to capitalize on these properties but have been subjected to localization error. The dynamic brain transition network (DBTN) approach, a spatiotemporal fMRI constrained EEG source imaging method, has recently been developed to address these issues by solving the EEG inverse problem in a Bayesian framework, utilizing fMRI priors in a spatial and temporal variant manner. This paper presents a computer simulation study to provide a detailed characterization of the spatial and temporal accuracy of the DBTN method. Approach. Synthetic EEG data were generated in a series of computer simulations, designed to represent realistic and complex brain activity at superficial and deep sources with highly dynamical activity time-courses. The source reconstruction performance of the DBTN method was tested against the fMRI-constrained minimum norm estimates algorithm (fMRIMNE). The performances of the two inverse methods were evaluated both in terms of spatial and temporal accuracy. Main results. In comparison with the commonly used fMRIMNE method, results showed that the DBTN method produces results with increased spatial and temporal accuracy. The DBTN method also demonstrated the capability to reduce crosstalk in the reconstructed cortical time-course(s) induced by neighboring regions, mitigate depth bias and improve overall localization accuracy. Significance. The improved spatiotemporal accuracy of the reconstruction allows for an improved characterization of complex neural activity. 
This improvement can be extended to any subsequent brain connectivity analyses used to construct the associated dynamic brain networks.
Fast Localization in Large-Scale Environments Using Supervised Indexing of Binary Features.
Youji Feng; Lixin Fan; Yihong Wu
2016-01-01
The essence of image-based localization lies in matching 2D key points in the query image and 3D points in the database. State-of-the-art methods mostly employ sophisticated key point detectors and feature descriptors, e.g., Difference of Gaussian (DoG) and Scale Invariant Feature Transform (SIFT), to ensure robust matching. While a high registration rate is attained, the registration speed is impeded by the expensive key point detection and descriptor extraction. In this paper, we propose to use efficient key point detectors along with binary feature descriptors, since the extraction of such binary features is extremely fast. The naive usage of binary features, however, does not lend itself to significant speedup of localization, since existing indexing approaches, such as hierarchical clustering trees and locality sensitive hashing, are not efficient enough in indexing binary features, and matching binary features turns out to be much slower than matching SIFT features. To overcome this, we propose a much more efficient indexing approach for approximate nearest neighbor search of binary features. This approach resorts to randomized trees that are constructed in a supervised training process by exploiting the label information derived from the fact that multiple features correspond to a common 3D point. In the tree construction process, node tests are selected in a way such that trees have uniform leaf sizes and low error rates, which are two desired properties for efficient approximate nearest neighbor search. To further improve the search efficiency, a probabilistic priority search strategy is adopted. Apart from the label information, this strategy also uses the non-binary pixel intensity differences available from descriptor extraction. By using the proposed indexing approach, matching binary features is no longer much slower but slightly faster than matching SIFT features.
Consequently, the overall localization speed is significantly improved due to the much faster key point detection and descriptor extraction. It is empirically demonstrated that the localization speed is improved by an order of magnitude as compared with state-of-the-art methods, while comparable registration rate and localization accuracy are still maintained.
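The brute-force binary matching whose cost the proposed indexing avoids can be sketched with XOR-plus-popcount Hamming distances; the descriptors below are random stand-ins for ORB/BRIEF-style features, not data from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Packed binary descriptors (256-bit, stored as 32 uint8 bytes each),
# standing in for the database features associated with 3D points.
database = rng.integers(0, 256, size=(10000, 32), dtype=np.uint8)
query = database[1234].copy()
query[0] ^= 0b00000101        # flip two bits: a near-duplicate query feature

# Brute-force Hamming matching via XOR + popcount; tree- or hash-based
# indexing schemes aim to approximate this nearest neighbor at a
# fraction of the cost.
popcount = np.unpackbits(database ^ query, axis=1).sum(axis=1)
best = int(np.argmin(popcount))
```

The linear scan touches every descriptor per query; a randomized-tree index instead visits a few leaves, which is where the claimed order-of-magnitude speedup comes from.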
NASA Astrophysics Data System (ADS)
Itter, M.; Finley, A. O.; Hooten, M.; Higuera, P. E.; Marlon, J. R.; McLachlan, J. S.; Kelly, R.
2016-12-01
Sediment charcoal records are used in paleoecological analyses to identify individual local fire events and to estimate fire frequency and regional biomass burned at centennial to millennial time scales. Methods to identify local fire events based on sediment charcoal records have been well developed over the past 30 years; however, an integrated statistical framework for fire identification is still lacking. We build upon existing paleoecological methods to develop a hierarchical Bayesian point process model for local fire identification and estimation of fire return intervals. The model is unique in that it combines sediment charcoal records from multiple lakes across a region in a spatially explicit fashion, leading to estimation of a joint, regional fire return interval in addition to lake-specific local fire frequencies. Further, the model estimates a joint regional charcoal deposition rate free from the effects of local fires that can be used as a measure of regional biomass burned over time. Finally, the hierarchical Bayesian approach allows for tractable error propagation such that estimates of fire return intervals reflect the full range of uncertainty in sediment charcoal records. Specific sources of uncertainty addressed include sediment age models, the separation of local versus regional charcoal sources, and the generation of a composite charcoal record. The model is applied to sediment charcoal records from a dense network of lakes in the Yukon Flats region of Alaska. The multivariate joint modeling approach results in improved estimates of regional charcoal deposition, with reduced uncertainty in the identification of individual fire events and local fire return intervals compared to individual-lake approaches. Modeled individual-lake fire return intervals range from 100 to 500 years, with a regional interval of roughly 200 years. Regional charcoal deposition to the network of lakes is correlated up to 50 kilometers.
Finally, the joint regional charcoal deposition rate exhibits changes over time coincident with major climatic and vegetation shifts over the past 10,000 years. Ongoing work will use the regional charcoal deposition rate to estimate changes in biomass burned as a function of climate variability and regional vegetation pattern.
Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia
2016-01-01
A spatially invariant feature matching method is proposed. Deformation effects, such as affine and homography transformations, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method is proposed based on dissimilarity values, which measure the dissimilarity of features along a path using eigenvector properties. Evidence shows that existing matching techniques using similarity metrics, such as normalized cross-correlation, the squared sum of intensity differences and the correlation coefficient, are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996
NASA Astrophysics Data System (ADS)
Unke, Oliver T.; Meuwly, Markus
2018-06-01
Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
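The key structural property, a total energy assembled from local atomic contributions so that cost scales linearly with system size, can be sketched as follows; the descriptor and weights are toy stand-ins, not the paper's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_descriptor(positions, i, cutoff=4.0):
    """Toy local-environment descriptor for atom i: a fixed-size radial
    histogram of neighbour distances inside the cutoff (a stand-in for
    symmetry functions or learned descriptors)."""
    d = np.linalg.norm(positions - positions[i], axis=1)
    d = d[(d > 0) & (d < cutoff)]
    hist, _ = np.histogram(d, bins=8, range=(0.0, cutoff))
    return hist.astype(float)

# A tiny randomly initialised "network" (one weight vector); in a real
# model these weights are trained on ab initio reference energies.
w = rng.standard_normal(8)

def total_energy(positions):
    # The defining property of the approach: energy is a sum of local
    # atomic contributions, so evaluation scales linearly with the number
    # of atoms and each term can be computed in parallel.
    return sum(np.tanh(local_descriptor(positions, i)) @ w
               for i in range(len(positions)))

positions = rng.uniform(0.0, 10.0, size=(20, 3))
E = total_energy(positions)
```

Because each term depends only on the local environment, the model is translation-invariant, and small reference molecules can inform predictions for much larger structures.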
Demographic stability metrics for conservation prioritization of isolated populations.
Finn, Debra S; Bogan, Michael T; Lytle, David A
2009-10-01
Systems of geographically isolated habitat patches house species that occur naturally as small, disjunct populations. Many of these species are of conservation concern, particularly under the interacting influences of isolation and rapid global change. One potential conservation strategy is to prioritize the populations most likely to persist through change and act as sources for future recolonization of less stable localities. We propose an approach to classify long-term population stability (and, presumably, future persistence potential) with composite demographic metrics derived from standard population-genetic data. Stability metrics can be related to simple habitat measures for a straightforward method of classifying localities to inform conservation management. We tested these ideas in a system of isolated desert headwater streams with mitochondrial sequence data from 16 populations of a flightless aquatic insect. Populations exhibited a wide range of stability scores, which were significantly predicted by dry-season aquatic habitat size. This preliminary test suggests strong potential for our proposed method of classifying isolated populations according to persistence potential. The approach is complementary to existing methods for prioritizing local habitats according to diversity patterns and should be tested further in other systems and with additional loci to inform composite demographic stability scores.
MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.
Tuta, Jure; Juric, Matjaz B
2018-03-24
This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple-frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We have performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to a 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.
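A minimal sketch of model-based localization over two frequencies, assuming a log-distance path-loss model; the exponent, wall loss, access-point layout, and transmit power are illustrative assumptions, not values from the paper:

```python
import numpy as np

def predicted_rss(p, ap, tx_dbm, f_mhz, n=2.2, walls=0, wall_db=3.5):
    """Hypothetical log-distance path-loss model with per-wall attenuation;
    the exponent n and wall loss are assumptions, not measured values."""
    d = max(np.linalg.norm(p - ap), 0.1)
    fspl_1m = 20 * np.log10(f_mhz) - 27.55        # free-space loss at 1 m
    return tx_dbm - fspl_1m - 10 * n * np.log10(d) - walls * wall_db

# Two access points on different bands (2.4 GHz Wi-Fi and 868 MHz),
# handled by one physical model rather than one fingerprint map each.
aps = [(np.array([0.0, 0.0]), 2400.0), (np.array([8.0, 5.0]), 868.0)]
true_pos = np.array([3.0, 2.0])
measured = [predicted_rss(true_pos, ap, 0.0, f) for ap, f in aps]

# Grid search: the location minimising the model/measurement residual
# jointly over all frequencies.
xs, ys = np.meshgrid(np.linspace(0, 8, 81), np.linspace(0, 5, 51))
cells = np.column_stack([xs.ravel(), ys.ravel()])
residual = [sum((predicted_rss(c, ap, 0.0, f) - m) ** 2
                for (ap, f), m in zip(aps, measured)) for c in cells]
estimate = cells[int(np.argmin(residual))]
```

Combining bands in one residual is what lets a multi-frequency model reach a given accuracy with fewer access points per signal type.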
ERIC Educational Resources Information Center
Griffith, Jason S.
2009-01-01
Although some school improvement literature has suggested that schools will improve when unions are removed from the school system, unions have rarely been isolated in the research. This study used a mixed-method case study approach to explore whether support of the local teacher union affected perceptions of school climate, as measured by the…
A discrete-time localization method for capsule endoscopy based on on-board magnetic sensing
NASA Astrophysics Data System (ADS)
Salerno, Marco; Ciuti, Gastone; Lucarini, Gioia; Rizzo, Rocco; Valdastri, Pietro; Menciassi, Arianna; Landi, Alberto; Dario, Paolo
2012-01-01
Recent achievements in active capsule endoscopy have allowed controlled inspection of the bowel by magnetic guidance. Capsule localization represents an important enabling technology for such platforms. In this paper, the authors present a localization method, applied as a first step in discrete-time capsule position detection, that is useful for establishing a magnetic link at the beginning of an endoscopic procedure or for re-linking the capsule in the case of loss due to locomotion. The novelty of this approach consists of using magnetic sensors on board the capsule whose output is combined with pre-calculated analytical magnetic field model solutions. A magnetic field triangulation algorithm is used to obtain the position of the capsule inside the gastrointestinal tract. Experimental validation has demonstrated that the proposed procedure is stable, accurate and has a wide localization range in a volume of about 18 × 10³ cm³. Position errors of 14 mm along the X direction, 11 mm along the Y direction and 19 mm along the Z direction were obtained in less than 27 s of elaboration time. The proposed approach, being compatible with the magnetic fields used for locomotion, can be easily extended to other platforms for active capsule endoscopy.
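The triangulation step can be sketched in a much simplified form: assuming the field-model processing has already reduced each of several known source poses to a distance estimate, a linearized least-squares trilateration recovers the position (the anchor positions and distances below are hypothetical, not the authors' setup):

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares trilateration: linearize the sphere equations
    |p - a_i|^2 = d_i^2 against the first anchor and solve for p."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p
```

With four or more non-coplanar anchors the linear system determines the 3D position uniquely; least squares also absorbs small distance errors.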
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Y M; Bush, K; Han, B
Purpose: Accurate and fast dose calculation is a prerequisite of precision radiation therapy in modern photon and particle therapy. While Monte Carlo (MC) dose calculation provides high dosimetric accuracy, the drastically increased computational time hinders its routine use. Deterministic dose calculation methods are fast, but problematic in the presence of tissue density inhomogeneity. We leverage the useful features of deterministic methods and MC to develop a hybrid dose calculation platform with autonomous utilization of MC and deterministic calculation depending on the local geometry, for optimal accuracy and speed. Methods: Our platform utilizes a Geant4-based “localized Monte Carlo” (LMC) method that isolates MC dose calculations to only those volumes that have potential for dosimetric inaccuracy. In our approach, additional structures are created encompassing heterogeneous volumes. Deterministic methods calculate dose and energy fluence up to the volume surfaces, where the energy fluence distribution is sampled into discrete histories and transported using MC. Histories exiting the volume are converted back into energy fluence and transported deterministically. By matching boundary conditions at both interfaces, the deterministic dose calculations account for dose perturbations “downstream” of localized heterogeneities. Hybrid dose calculation was performed for water and anthropomorphic phantoms. Results: We achieved <1% agreement between deterministic and MC calculations in the water benchmark for photon and proton beams, and dose differences of 2%–15% could be observed in heterogeneous phantoms. The savings in computational time (a factor of ∼4–7 compared to a full Monte Carlo dose calculation) were found to be approximately proportional to the volume of the heterogeneous region.
Conclusion: Our hybrid dose calculation approach takes advantage of the computational efficiency of deterministic methods and the accuracy of MC, providing a practical tool for high-performance dose calculation in modern RT. The approach is generalizable to all modalities where heterogeneities play a large role, notably particle therapy.
NASA Technical Reports Server (NTRS)
Brooke, D.; Vondrasek, D. V.
1978-01-01
The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that it gives improved predictions of the local pressures and loadings over either linear theory or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high-speed configurations.
Local SIMPLE multi-atlas-based segmentation applied to lung lobe detection on chest CT
NASA Astrophysics Data System (ADS)
Agarwal, M.; Hendriks, E. A.; Stoel, B. C.; Bakker, M. E.; Reiber, J. H. C.; Staring, M.
2012-02-01
For multi-atlas-based segmentation approaches, a segmentation fusion scheme which considers local performance measures may be more accurate than a method which uses a global performance measure. We improve upon an existing segmentation fusion method called SIMPLE and extend it to be localized and suitable for multi-labeled segmentations. We demonstrate the algorithm's performance on 23 CT scans of COPD patients using a leave-one-out experiment. Our algorithm performs significantly better (p < 0.01) than majority voting, STAPLE, and SIMPLE, with a median overlap of the fissure of 0.45, 0.48, 0.55 and 0.6 for majority voting, STAPLE, SIMPLE, and the proposed algorithm, respectively.
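For context, the majority-voting baseline that SIMPLE-style fusion improves upon can be sketched in a few lines (a generic illustration, not the paper's algorithm):

```python
import numpy as np

def majority_vote(label_maps):
    """Fuse candidate segmentations by a per-voxel majority vote."""
    stack = np.stack(label_maps)                    # (n_atlases, ...)
    labels = np.unique(stack)
    votes = np.stack([(stack == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]         # ties go to the smaller label
```

Performance-weighted schemes such as SIMPLE replace the uniform vote with atlas weights, and the paper's localized variant lets those weights vary spatially.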
A Novel Real-Time Reference Key Frame Scan Matching Method.
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-05-07
Unmanned aerial vehicles represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping using either local or global approaches. Both suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results and very short computational times, indicating the potential for use in real-time systems.
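The point-to-point core repeated inside ICP-style scan matching is the closed-form rigid alignment of matched point sets; a minimal 2D sketch (the generic Kabsch/SVD solution, not the RKF algorithm itself):

```python
import numpy as np

def best_rigid_2d(src, dst):
    """Closed-form (Kabsch/SVD) rigid alignment of matched 2D point sets:
    returns R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)       # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

ICP alternates this solve with nearest-neighbor correspondence search, which is where the iteration cost and outlier sensitivity mentioned above come from.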
Kinetically reduced local Navier-Stokes equations for simulation of incompressible viscous flows.
Borok, S; Ansumali, S; Karlin, I V
2007-12-01
Recently, another approach to studying incompressible fluid flow was suggested [S. Ansumali, I. Karlin, and H. Ottinger, Phys. Rev. Lett. 94, 080602 (2005)]: the kinetically reduced local Navier-Stokes (KRLNS) equations. We consider a simplified two-dimensional KRLNS system and compare it with Chorin's artificial compressibility method. A comparison of the two methods for steady-state computation of the flow in a lid-driven cavity at various Reynolds numbers shows that the results from both methods are in good agreement with each other. However, for transient flow, it is demonstrated that the KRLNS equations correctly describe the time evolution of the velocity and pressure, unlike the artificial compressibility method.
NASA Astrophysics Data System (ADS)
Joslin, R. D.
1991-04-01
The use of passive devices to obtain drag and noise reduction or transition delays in boundary layers is highly desirable. One such device that shows promise for hydrodynamic applications is the compliant coating. The present study extends the mechanical model to allow for three-dimensional waves and also examines the effect of compliant walls on three-dimensional secondary instabilities. For the primary and secondary instability analysis, spectral and shooting approximations are used to obtain solutions of the governing equations and boundary conditions. The spectral approximation consists of local and global methods of solution, while the shooting approach is local. The global method is used to determine the discrete spectrum of eigenvalues without any initial guess; the local method requires a sufficiently accurate initial guess to converge to an eigenvalue. Eigenvectors may be obtained with either local approach. For the initial stage of this analysis, two- and three-dimensional primary instabilities propagating over compliant coatings are considered. Results over the compliant walls are compared with the rigid-wall case. Three-dimensional instabilities are found to dominate transition over the compliant walls considered. However, transition delays are still obtained and compared with transition delay predictions for rigid walls. The angles of wave propagation are plotted against Reynolds number and frequency. Low-frequency waves are found to be highly three-dimensional.
Image enhancement and color constancy for a vehicle-mounted change detection system
NASA Astrophysics Data System (ADS)
Tektonidis, Marco; Monnin, David
2016-10-01
Vehicle-mounted change detection systems help improve situational awareness on outdoor itineraries of interest. Since the visibility of acquired images is often affected by illumination effects (e.g., shadows), it is important to enhance local contrast. For the analysis and comparison of color images depicting the same scene at different time points, color and lightness inconsistencies caused by the different illumination conditions must be compensated. We have developed an approach for image enhancement and color constancy based on the center/surround Retinex model and the Gray World hypothesis. The combination of the two methods using a color processing function improves color rendition compared to either method alone. The use of stacked integral images (SII) allows local image processing to be performed efficiently. Our combined Retinex/Gray World approach has been successfully applied to image sequences acquired on outdoor itineraries at different time points, and a comparison with previous Retinex-based approaches has been carried out.
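The Gray World half of the combined approach is simple to sketch: each channel is rescaled so its mean matches the global mean intensity (a generic illustration, not the authors' full Retinex/Gray World pipeline):

```python
import numpy as np

def gray_world(img):
    """Gray World color constancy: scale each color channel so its mean
    equals the global mean intensity, removing a uniform color cast."""
    means = img.reshape(-1, 3).mean(axis=0)       # per-channel means
    return img * (means.mean() / means)
```

Under the hypothesis that the average scene reflectance is achromatic, any deviation of the channel means from gray is attributed to the illuminant and divided out.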
Using economic analyses for local priority setting : the population cost-impact approach.
Heller, Richard F; Gemmell, Islay; Wilson, Edward C F; Fordham, Richard; Smith, Richard D
2006-01-01
Standard methods of economic analysis may not be suitable for local decision making that is specific to a particular population. We describe a new three-step methodology, termed 'population cost-impact analysis', which provides a population perspective to the costs and benefits of alternative interventions. The first two steps involve calculating the population impact and the costs of the proposed interventions relevant to local conditions. This involves the calculation of population impact measures (which have been previously described but are not currently used extensively) - measures of absolute risk and risk reduction, applied to a population denominator. In step three, preferences of policy-makers are obtained. This is in contrast to the QALY approach in which quality weights are obtained as a part of the measurement of benefit. We applied the population cost-impact analysis method to a comparison of two interventions - increasing the use of beta-adrenoceptor antagonists (beta-blockers) and smoking cessation - after myocardial infarction in a scaled-back notional local population of 100,000 people in England. Twenty-two public health professionals were asked via a questionnaire to rank the order in which they would implement four interventions. They were given information on both population cost impact and QALYs for each intervention. In a population of 100,000 people, moving from current to best practice for beta-adrenoceptor antagonists and smoking cessation will prevent 11 and 4 deaths (or gain of 127 or 42 life-years), respectively. The cost per event prevented in the next year, or life-year gained, is less for beta-adrenoceptor antagonists than for smoking cessation. Public health professionals were found to be more inclined to rank alternative interventions according to the population cost impact than the QALY approach. 
The use of the population cost-impact approach allows information on the benefits of moving from current to best practice to be presented in terms of the benefits and costs to a particular population. The process for deciding between alternative interventions in a prioritisation exercise may differ according to the local context. We suggest that the valuation of the benefit is performed after the benefits have been quantified and that it takes into account local issues relevant to prioritisation. It would be an appropriate next step to experiment with, and formalise, this part of the population cost-impact analysis to provide a standardised approach for determining willingness to pay and provide a ranking of priorities. Our method adds a new dimension to economic analysis, the ability to identify costs and benefits of potential interventions to a defined population, which may be of considerable use for policy makers working at the local level.
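The first two steps (population impact and cost under local conditions) amount to simple arithmetic; a hedged sketch with purely illustrative numbers, not the paper's data:

```python
def population_cost_impact(n_eligible, baseline_risk, relative_risk_reduction,
                           current_uptake, best_uptake, cost_per_treated):
    """Events prevented, and cost per event prevented, when moving a defined
    population from current to best practice (all inputs illustrative)."""
    newly_treated = n_eligible * (best_uptake - current_uptake)
    events_prevented = newly_treated * baseline_risk * relative_risk_reduction
    total_cost = newly_treated * cost_per_treated
    return events_prevented, total_cost / events_prevented
```

Step three, eliciting policy-makers' preferences over the resulting population-level figures, happens after this quantification, which is the key contrast with the QALY approach.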
An Approach to Teaching Applied GIS: Implementation for Local Organizations.
ERIC Educational Resources Information Center
Benhart, John, Jr.
2000-01-01
Describes the instructional method, Client-Life Cycle GIS Project Learning, used in a course at Indiana University of Pennsylvania that enables students to learn with and about geographic information system (GIS). Discusses the course technical issues in GIS and an example project using this method. (CMK)
Global, quantitative and dynamic mapping of protein subcellular localization.
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg Hh
2016-06-09
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology.
Merrill, Thomas L; Mitchell, Jennifer E; Merrill, Denise R
2016-08-01
Recent revascularization success for ischemic stroke patients using stentrievers has created a new opportunity for therapeutic hypothermia. By using short-term localized tissue cooling, interventional catheters can be used to reduce reperfusion injury and improve neurological outcomes. Using experimental testing and a well-established heat exchanger design approach, the ɛ-NTU method, this paper examines the cooling performance of commercially available catheters as a function of four practical parameters: (1) infusion flow rate, (2) catheter location in the body, (3) catheter configuration and design, and (4) cooling approach. While saline batch cooling outperformed closed-loop autologous blood cooling at all equivalent flow rates in terms of lower delivered temperatures and cooling capacity, hemodilution, both systemic and local, remains a concern. For clinicians and engineers this paper provides insights for the selection, design, and operation of commercially available catheters used for localized tissue cooling.
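The ε-NTU relation underlying the design approach is textbook material; as a sketch, the counter-flow effectiveness formula (illustrative inputs, not the catheter data):

```python
import math

def effectiveness_counterflow(UA, C_min, C_max):
    """Textbook epsilon-NTU effectiveness of a counter-flow heat exchanger,
    with NTU = UA/C_min and capacity ratio Cr = C_min/C_max."""
    NTU = UA / C_min
    Cr = C_min / C_max
    if math.isclose(Cr, 1.0):               # balanced-flow special case
        return NTU / (1.0 + NTU)
    e = math.exp(-NTU * (1.0 - Cr))
    return (1.0 - e) / (1.0 - Cr * e)
```

The delivered cooling capacity then follows as q = ε · C_min · (T_hot,in − T_cold,in), which is how such a model predicts delivered temperature versus infusion flow rate.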
NASA Astrophysics Data System (ADS)
Pan, Andrew; Burnett, Benjamin A.; Chui, Chi On; Williams, Benjamin S.
2017-08-01
We derive a density matrix (DM) theory for quantum cascade lasers (QCLs) that describes the influence of scattering on coherences through a generalized scattering superoperator. The theory enables quantitative modeling of QCLs, including localization and tunneling effects, using the well-defined energy eigenstates rather than the ad hoc localized basis states required by most previous DM models. Our microscopic approach to scattering also eliminates the need for phenomenological transition or dephasing rates. We discuss the physical interpretation and numerical implementation of the theory, presenting sets of both energy-resolved and thermally averaged equations, which can be used for detailed or compact device modeling. We illustrate the theory's applications by simulating a high performance resonant-phonon terahertz (THz) QCL design, which cannot be easily or accurately modeled using conventional DM methods. We show that the theory's inclusion of coherences is crucial for describing localization and tunneling effects consistent with experiment.
Edwards, David; Bastani, Yaser; Cao, Ye; ...
2016-01-19
The role of local strains is fundamental to the large effective piezoelectric and ferroelectric response of thin films. Therefore a method to investigate local strain-induced phenomena is imperative. Here, pressure-induced domain reorganization is reported in lead zirconate titanate films with composition near the morphotropic phase boundary. An approach is thus demonstrated to simultaneously study the role of applied mechanical pressure on multiple local properties of the film. In particular, the modification of hysteresis loops collected at different tip pressures is consistent with first mostly ferroelastic and then ferroelectric-dominated reorientation of domains under increasing applied pressure. The pressure-induced domain writing is also investigated through phase field simulations, where the applied pressure is generally found to increase the in-plane polarization of the domains with respect to the out-of-plane component, corroborating the experimental observations. The approach developed here has the potential to explore other hysteretic phenomena and phase transitions in a spatially resolved manner with varying local pressure.
Mobile device geo-localization and object visualization in sensor networks
NASA Astrophysics Data System (ADS)
Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael
2014-10-01
In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multi-functional application design. The application applies different localization and visualization methods including the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management or military applications.
Identifying influential nodes in complex networks: A node information dimension approach
NASA Astrophysics Data System (ADS)
Bian, Tian; Deng, Yong
2018-04-01
In the field of complex networks, how to identify influential nodes is a significant issue in analyzing the structure of a network. In the existing method proposed to identify influential nodes based on the local dimension, the global structural information of complex networks is not taken into consideration. In this paper, a node information dimension is proposed by synthesizing the local dimensions at different topological distance scales. A case study of the Netscience network is used to illustrate the efficiency and practicability of the proposed method.
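A minimal sketch of the kind of local-dimension estimate being synthesized: count the nodes N(r) within topological distance r of a node via BFS, then fit the slope of ln N(r) versus ln r (a simplified illustration, not the authors' exact estimator):

```python
from collections import deque
import math

def shell_counts(adj, source, r_max):
    """N(r): number of nodes within topological distance r of `source` (BFS)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return [sum(1 for d in dist.values() if 0 < d <= r)
            for r in range(1, r_max + 1)]

def local_dimension(adj, source, r_max):
    """Least-squares slope of ln N(r) versus ln r at node `source`."""
    counts = shell_counts(adj, source, r_max)
    xs = [math.log(r) for r in range(1, r_max + 1)]
    ys = [math.log(n) for n in counts]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

On a ring graph N(r) = 2r, so the local dimension comes out as 1, matching the intuition of a one-dimensional structure.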
Assaraf, Roland
2014-12-01
We show that the recently proposed correlated sampling without reweighting procedure extends the locality (asymptotic independence of the system size) of a physical property to the statistical fluctuations of its estimator. This makes the approach potentially vastly more efficient for computing space-localized properties in large systems compared with standard correlated methods. A proof is given for a large collection of noninteracting fragments. Calculations on hydrogen chains suggest that this behavior holds not only for systems displaying short-range correlations, but also for systems with long-range correlations.
NASA Astrophysics Data System (ADS)
Huebner, Claudia S.
2016-10-01
As a consequence of fluctuations in the index of refraction of the air, atmospheric turbulence causes scintillation, spatial and temporal blurring as well as global and local image motion creating geometric distortions. To mitigate these effects many different methods have been proposed. Global as well as local motion compensation in some form or other constitutes an integral part of many software-based approaches. For the estimation of motion vectors between consecutive frames simple methods like block matching are preferable to more complex algorithms like optical flow, at least when challenged with near real-time requirements. However, the processing power of commercially available computers continues to increase rapidly and the more powerful optical flow methods have the potential to outperform standard block matching methods. Therefore, in this paper three standard optical flow algorithms, namely Horn-Schunck (HS), Lucas-Kanade (LK) and Farnebäck (FB), are tested for their suitability to be employed for local motion compensation as part of a turbulence mitigation system. Their qualitative performance is evaluated and compared with that of three standard block matching methods, namely Exhaustive Search (ES), Adaptive Rood Pattern Search (ARPS) and Correlation based Search (CS).
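Exhaustive Search block matching, the simplest of the compared methods, can be sketched directly (generic SAD minimization with hypothetical block and search-range parameters):

```python
import numpy as np

def block_match_es(prev, curr, top, left, bsize, srange):
    """Exhaustive-search block matching: displacement (dy, dx) minimizing the
    sum of absolute differences (SAD) between a block of `curr` and
    candidate blocks of `prev`."""
    block = curr[top:top + bsize, left:left + bsize]
    H, W = prev.shape
    best, best_sad = (0, 0), np.inf
    for dy in range(-srange, srange + 1):
        for dx in range(-srange, srange + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bsize > H or x + bsize > W:
                continue                      # candidate leaves the frame
            sad = np.abs(prev[y:y + bsize, x:x + bsize] - block).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best
```

Faster variants such as ARPS prune this search pattern; dense optical-flow methods instead estimate a displacement per pixel, which is why their cost is higher.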
Automatic bladder segmentation from CT images using deep CNN and 3D fully connected CRF-RNN.
Xu, Xuanang; Zhou, Fugen; Liu, Bo
2018-03-19
An automatic approach for bladder segmentation from computed tomography (CT) images is highly desirable in clinical practice. It is a challenging task since the bladder usually suffers large variations of appearance and low soft-tissue contrast in CT images. In this study, we present a deep learning-based approach which involves a convolutional neural network (CNN) and a 3D fully connected conditional random field recurrent neural network (CRF-RNN) to perform accurate bladder segmentation. We also propose a novel preprocessing method, called dual-channel preprocessing, to further advance the segmentation performance of our approach. The presented approach works as follows: first, we apply our proposed preprocessing method to the input CT image and obtain a dual-channel image which consists of the CT image and an enhanced bladder density map. Second, we exploit a CNN to predict a coarse voxel-wise bladder score map on this dual-channel image. Finally, a 3D fully connected CRF-RNN refines the coarse bladder score map and produces the final fine-localized segmentation result. We compare our approach to the state-of-the-art V-net on a clinical dataset. Results show that our approach achieves superior segmentation accuracy, outperforming the V-net by a significant margin. The Dice Similarity Coefficient of our approach (92.24%) is 8.12% higher than that of the V-net. Moreover, the bladder probability maps produced by our approach present sharper boundaries and more accurate localizations compared with those of the V-net. Our approach achieves higher segmentation accuracy than the state-of-the-art method on clinical data. Both the dual-channel preprocessing and the 3D fully connected CRF-RNN contribute to this improvement. The united deep network composed of the CNN and 3D CRF-RNN also outperforms a system where the CRF model acts as a post-processing method disconnected from the CNN.
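The dual-channel idea can be sketched generically: stack the CT image with a soft intensity-window "density map" that highlights an assumed soft-tissue range; the window bounds below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def dual_channel(ct, lo=-50.0, hi=100.0):
    """Stack a CT slice with a soft intensity-window map: values below `lo`
    map to 0, above `hi` to 1, with a linear ramp in between.
    Window bounds are illustrative, not the paper's."""
    enhanced = np.clip((ct - lo) / (hi - lo), 0.0, 1.0)
    return np.stack([ct, enhanced], axis=0)   # shape (2, H, W)
```

Feeding both channels lets a network see the raw intensities and an explicit soft-tissue emphasis at the same time.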
Hammerschmidt, Lukas; Maschio, Lorenzo; Müller, Carsten; Paulus, Beate
2015-01-13
We have applied the Method of Increments and the periodic Local-MP2 approach to the study of the (110) surface of magnesium fluoride, a system of significant interest in heterogeneous catalysis. After careful assessment of the approximations inherent in both methods, the two schemes, though conceptually different, are shown to yield nearly identical results. This remains true even when analyzed in fine detail through partition of the individual contribution to the total energy. This kind of partitioning also provides thorough insight into the electron correlation effects underlying the surface formation process, which are discussed in detail.
An evidential link prediction method and link predictability based on Shannon entropy
NASA Astrophysics Data System (ADS)
Yin, Likang; Zheng, Haoyang; Bian, Tian; Deng, Yong
2017-09-01
Predicting missing links is of both theoretical value and practical interest in network science. In this paper, we empirically investigate a new link prediction method based on similarity and compare nine well-known local similarity measures on nine real networks. Most previous studies focus on accuracy; however, it is crucial to consider link predictability as an inherent property of the network itself. Hence, this paper proposes a new link prediction approach called the evidential measure (EM) based on Dempster-Shafer theory. Moreover, this paper proposes a new method to measure link predictability via local information and Shannon entropy.
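Of the local similarity measures typically compared, the common-neighbors index is the simplest; a generic sketch of similarity-based ranking of candidate links (not the proposed evidential measure):

```python
from itertools import combinations

def rank_missing_links(adj):
    """Score every non-adjacent pair by the common-neighbors index
    |N(u) & N(v)| and return pairs sorted from most to least likely."""
    scores = {}
    for u, v in combinations(adj, 2):
        if v not in adj[u]:
            scores[(u, v)] = len(set(adj[u]) & set(adj[v]))
    return sorted(scores, key=scores.get, reverse=True)
```

Other local indices (Jaccard, Adamic-Adar, resource allocation) differ only in how the shared neighborhood is weighted.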
Heideklang, René; Shokouhi, Parisa
2016-01-01
This article focuses on the fusion of flaw indications from multi-sensor nondestructive materials testing. Because each testing method makes use of a different physical principle, a multi-method approach has the potential of effectively differentiating actual defect indications from the many false alarms, thus enhancing detection reliability. In this study, we propose a new technique for aggregating scattered two- or three-dimensional sensory data. Using a density-based approach, the proposed method explicitly addresses localization uncertainties such as registration errors. This feature marks one of the major advantages of this approach over pixel-based image fusion techniques. We provide guidelines on how to set all the key parameters and demonstrate the technique's robustness. Finally, we apply our fusion approach to experimental data and demonstrate its capability to locate small defects by substantially reducing false alarms under conditions where no single-sensor method is adequate. PMID:26784200
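A density-based aggregation of scattered indications can be sketched with a sum of Gaussian kernels, so that co-located multi-sensor hits reinforce each other while isolated false alarms stay weak (a simplified illustration, not the authors' method; the kernel bandwidth stands in for the localization uncertainty):

```python
import numpy as np

def detection_density(points, grid_x, grid_y, sigma):
    """Sum isotropic Gaussian kernels centered on scattered (x, y) flaw
    indications; `sigma` models the localization/registration uncertainty."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing='ij')
    dens = np.zeros_like(gx, dtype=float)
    for px, py in points:
        dens += np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2.0 * sigma ** 2))
    return dens
```

Thresholding the resulting density field then keeps only locations supported by several sensors.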
Inter-departmental dosimetry audits – development of methods and lessons learned
Eaton, David J.; Bolton, Steve; Thomas, Russell A. S.; Clark, Catharine H.
2015-01-01
External dosimetry audits give confidence in the safe and accurate delivery of radiotherapy. In the United Kingdom, such audits have been performed for almost 30 years. From the start, they included clinically relevant conditions, as well as reference machine output. Recently, national audits have tested new or complex techniques, but these methods are then used in regional audits by a peer-to-peer approach. This local approach builds up the radiotherapy community, facilitates communication, and brings synergy to medical physics. PMID:26865753
NASA Astrophysics Data System (ADS)
He, Y.; Puckett, E. G.; Billen, M. I.; Kellogg, L. H.
2016-12-01
For a convection-dominated system, such as convection in the Earth's mantle, accurately modeling the temperature field in terms of the interaction between convective and diffusive processes is one of the most common numerical challenges. In the geodynamics community, using the Finite Element Method (FEM) with artificial entropy viscosity is a popular approach to resolve this difficulty, but it introduces numerical diffusion. The extra artificial viscosity added to the temperature system not only oversmooths the temperature field where the convective process dominates, but also changes the physical properties by increasing the local material conductivity, which ultimately alters the local conservation of energy. Accurate modeling of temperature is especially important in the mantle, where material properties are strongly dependent on temperature. In subduction zones, for example, the rheology of the cold sinking slab depends nonlinearly on the temperature, and physical processes such as slab detachment, rollback, and melting are all sensitively dependent on temperature and rheology. Therefore, methods that overly smooth the temperature may inaccurately represent the physical processes governing subduction, lithospheric instabilities, plume generation and other aspects of mantle convection. Here we present a method for modeling the temperature field in mantle dynamics simulations using a new solver implemented in the ASPECT software. The new solver for the temperature equation uses a Discontinuous Galerkin (DG) approach, which combines features of both finite element and finite volume methods and is particularly suitable for problems satisfying a conservation law whose solution has large local variations. Furthermore, we have applied a post-processing technique to ensure that the solution satisfies a local discrete maximum principle, eliminating local overshoots and undershoots in the temperature.
To demonstrate the capabilities of this new method we present benchmark results (e.g., a falling sphere) and a simple subduction model with a kinematic surface boundary condition. To evaluate the trade-offs in computational speed and solution accuracy, we present results for the same benchmarks using the finite element entropy viscosity method available in ASPECT.
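The local discrete maximum principle post-processing can be illustrated in one dimension: clip each updated cell to the min/max of the previous solution over its neighborhood, which removes overshoots and undershoots without smearing the field (a hedged sketch of the general idea, not ASPECT's implementation):

```python
import numpy as np

def clamp_local_bounds(u_new, u_old):
    """Bound-preserving limiter sketch (1D, periodic): clip each updated cell
    value to the min/max of its own and its neighbors' previous values."""
    lo = np.minimum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)])
    hi = np.maximum.reduce([np.roll(u_old, 1), u_old, np.roll(u_old, -1)])
    return np.clip(u_new, lo, hi)
```

Unlike adding artificial viscosity, this clipping acts only where new extrema appear, so it does not raise the effective conductivity everywhere.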
Hippocampus Segmentation Based on Local Linear Mapping
Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin
2017-01-01
We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF) to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enabled us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively. PMID:28368016
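The local linear representation step can be sketched generically: solve for least-squares coefficients over the k nearest MR dictionary atoms and reuse the same coefficients on the paired DF atoms (a simplified illustration on synthetic data, not the paper's full pipeline):

```python
import numpy as np

def predict_df(mr_dict, df_dict, mr_test, k=3):
    """Local linear mapping sketch: write the test MR sample as a least-squares
    combination of its k nearest MR atoms, then apply the same coefficients
    to the paired distance-field atoms."""
    d2 = np.sum((mr_dict - mr_test) ** 2, axis=1)   # squared distances
    idx = np.argsort(d2)[:k]                        # k nearest atoms
    w, *_ = np.linalg.lstsq(mr_dict[idx].T, mr_test, rcond=None)
    return df_dict[idx].T @ w
```

If the MR-to-DF mapping really is locally linear, the coefficients transfer exactly, which is what the test below checks on a synthetic linear mapping.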
NASA Astrophysics Data System (ADS)
Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu
2014-03-01
Statistical iterative reconstruction and post-log data restoration algorithms for CT noise reduction have been widely studied, and these techniques have enabled us to reduce irradiation doses while maintaining image quality. In low-dose scanning, electronic noise becomes significant and results in some non-positive signals in the raw measurements. A non-positive signal must be converted to a positive one so that it can be log-transformed. Since conventional conversion methods do not consider the local variance on the sinogram, they have difficulty controlling the strength of the filtering. In this work, we therefore propose a method to convert non-positive signals to positive ones chiefly by controlling the local variance. The method is implemented in two separate steps. First, an iterative restoration algorithm based on penalized weighted least squares is used to mitigate the effect of electronic noise. The algorithm preserves the local mean and reduces the local variance induced by the electronic noise. Second, the raw measurements smoothed by the iterative algorithm are converted to positive signals according to a function that replaces each non-positive signal with its local mean. In phantom studies, we confirm that the proposed method properly preserves the local mean and reduces the variance induced by the electronic noise. Our technique dramatically reduces shading artifacts and also cooperates successfully with the post-log data filter to reduce streak artifacts.
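A minimal sketch of the second step (the conversion function) might look like the following, assuming a 1-D window along the detector row and a hypothetical small positive floor so the output can always be log-transformed; the authors' actual function and window size are not specified at this level of detail.

```python
import numpy as np

def positivize(sino, half=2):
    """Replace non-positive raw measurements with the local mean of a
    (2*half+1)-sample window along the detector row, so the data can be
    log-transformed. Positive samples are left untouched."""
    out = sino.astype(float).copy()
    n = out.shape[-1]
    for i in range(n):
        if out[..., i].min() > 0:          # column already positive: skip
            continue
        lo, hi = max(i - half, 0), min(i + half + 1, n)
        local_mean = sino[..., lo:hi].mean(axis=-1)
        mask = sino[..., i] <= 0
        # hypothetical 1e-6 floor keeps the log transform defined
        out[..., i] = np.where(mask, np.maximum(local_mean, 1e-6), out[..., i])
    return out
```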
Beichel, Reinhard R.; Van Tol, Markus; Ulrich, Ethan J.; Bauer, Christian; Chang, Tangel; Plichta, Kristin A.; Smith, Brian J.; Sunderland, John J.; Graham, Michael M.; Sonka, Milan; Buatti, John M.
2016-01-01
Purpose: The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. Methods: A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the “just-enough-interaction” principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Results: Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. 
Conclusions: Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, and treatment outcome prediction. PMID:27277044
Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle †
Ito, Seigo; Hiratsuka, Shigeyoshi; Ohta, Mitsuhiko; Matsubara, Hiroyuki; Ogawa, Masaru
2018-01-01
We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications. We also introduce a fusion-based localization method, named SPAD DCNN, which uses the SPAD LIDAR and employs a Deep Convolutional Neural Network (DCNN). SPAD DCNN can fuse the outputs of the SPAD LIDAR: range image data, monocular image data and peak intensity image data. The SPAD DCNN has two outputs: the regression result of the position of the SPAD LIDAR and the classification result of the existence of a target to be approached. Our third prototype sensor and the localization method are evaluated in an indoor environment by assuming various AGV trajectories. The results show that the sensor and localization method improve the localization accuracy. PMID:29320434
Roller Coasters without Differential Equations--A Newtonian Approach to Constrained Motion
ERIC Educational Resources Information Center
Muller, Rainer
2010-01-01
Within the context of Newton's equation, we present a simple approach to the constrained motion of a body forced to move along a specified trajectory. Because the formalism uses a local frame of reference, it is simpler than other methods, making more complicated geometries accessible. No Lagrangian multipliers are necessary to determine the…
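The local-frame idea can be made concrete for a vertical circular loop: projecting Newton's equation onto the inward normal of the track gives the normal force directly, with the speed supplied by energy conservation rather than by integrating a differential equation. The sketch below is an illustrative example in this spirit, not code from the article.

```python
import math

def loop_normal_force(m, R, h_start, theta, g=9.81):
    """Normal force on a cart of mass m in a vertical loop of radius R,
    entered from rest at height h_start. theta is measured from the
    bottom of the loop (theta = pi is the top)."""
    y = R * (1.0 - math.cos(theta))            # height on the loop
    v2 = 2.0 * g * (h_start - y)               # energy conservation, from rest
    if v2 < 0:
        raise ValueError("cart never reaches this point of the loop")
    # Newton along the inward normal: N - m*g*(-cos(theta)) = m*v^2/R
    return m * (v2 / R + g * math.cos(theta))
```

Setting h_start = 2.5 R makes the normal force vanish exactly at the top of the loop, recovering the classic minimum-height result without any differential equation.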
Reach for the Stars: A Constellational Approach to Ethnographies of Elite Schools
ERIC Educational Resources Information Center
Prosser, Howard
2014-01-01
This paper offers a method for examining elite schools in a global setting by appropriating Theodor Adorno's constellational approach. I contend that arranging ideas and themes in a non-deterministic fashion can illuminate the social reality of elite schools. Drawing on my own fieldwork at an elite school in Argentina, I suggest that local and…
Tensor scale-based fuzzy connectedness image segmentation
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Udupa, Jayaram K.
2003-05-01
Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of that approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows an effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
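To make the role of tensor scale concrete, here is a toy affinity between two adjacent voxels in which a local scale tensor modulates the homogeneity term directionally: intensity differences along directions of large local extent are penalized less. The functional forms, parameter names, and default values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def affinity(f_u, f_v, step, tensor_u, sigma_h=10.0, mean_obj=100.0, sigma_o=20.0):
    """Toy fuzzy-connectedness affinity between adjacent voxels u and v.
    f_u, f_v: intensities; step: unit offset vector from u to v;
    tensor_u: local scale tensor at u -- a larger extent along `step`
    means intensity differences in that direction are trusted more."""
    scale = step @ tensor_u @ step                      # directional local scale
    # homogeneity component: tolerance widened along large-scale directions
    homog = np.exp(-((f_u - f_v) ** 2) / (2.0 * (sigma_h * scale) ** 2))
    # object-feature component: closeness of the pair to an expected intensity
    obj = np.exp(-((0.5 * (f_u + f_v) - mean_obj) ** 2) / (2.0 * sigma_o ** 2))
    return min(homog, obj)
```

With an anisotropic tensor, the same intensity step yields a higher affinity along the elongated direction of a structure than across it, which is exactly the behavior the spherical-scale model could not express.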
Localized thermo-cisplatin therapy: a pilot study in spontaneous canine and feline tumours.
Théon, A P; Madewell, B R; Moore, A S; Stephens, C; Krag, D N
1991-01-01
Local hyperthermia combined with intralesional cisplatin chemotherapy is a logical and potentially effective therapeutic approach for localized cancers. A trial using outbred animals with spontaneously occurring tumours was initiated to evaluate the toxicity and efficacy of this approach. Treatment consisted of injection of a colloidal suspension of cisplatin into the tumour prior to hyperthermia once a week for 4 weeks. Immediately after intratumoral injection of a mixture of cisplatin and collagen, thermotherapy was given. The goal temperature was 42 ± 1 °C for 30 min. Ten animals (nine dogs and one cat) with soft tissue neoplasms were treated with one to four hyperthermia and cisplatin sessions for a total of 30 treatment sessions. Complete responses occurred in 4/10 cases (one carcinoma, two sarcomas, one melanoma). One dog with haemangiopericytoma had partial response. The lack of systemic toxicity and the minimal local normal tissue reactions indicate that the treatments were well tolerated. These data provide preliminary evidence that a combination of local hyperthermia and intratumoral cisplatin chemotherapy is a safe and effective method for the treatment of selected localized neoplasms.
Local connectome phenotypes predict social, health, and cognitive factors
Powell, Michael A.; Garcia, Javier O.; Yeh, Fang-Cheng; Vettel, Jean M.; Verstynen, Timothy
2018-01-01
The unique architecture of the human connectome is defined initially by genetics and subsequently sculpted over time with experience. Thus, similarities in predisposition and experience that lead to similarities in social, biological, and cognitive attributes should also be reflected in the local architecture of white matter fascicles. Here we employ a method known as local connectome fingerprinting that uses diffusion MRI to measure the fiber-wise characteristics of macroscopic white matter pathways throughout the brain. This fingerprinting approach was applied to a large sample (N = 841) of subjects from the Human Connectome Project, revealing a reliable degree of between-subject correlation in the local connectome fingerprints, with a relatively complex, low-dimensional substructure. Using a cross-validated, high-dimensional regression analysis approach, we derived local connectome phenotype (LCP) maps that could reliably predict a subset of subject attributes measured, including demographic, health, and cognitive measures. These LCP maps were highly specific to the attribute being predicted but also sensitive to correlations between attributes. Collectively, these results indicate that the local architecture of white matter fascicles reflects a meaningful portion of the variability shared between subjects along several dimensions. PMID:29911679
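The cross-validated, high-dimensional regression step can be sketched with plain ridge regression standing in for the authors' exact estimator (an assumption for illustration): fit on held-in subjects' fingerprint features, then predict the attribute for the held-out subjects of each fold.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

def cv_predict(X, y, k=5, lam=1.0):
    """k-fold cross-validated predictions of one attribute (y) from
    per-subject fingerprint features (rows of X)."""
    pred = np.empty(len(y))
    folds = np.array_split(np.arange(len(y)), k)
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
        w = ridge_fit(X[train_idx], y[train_idx], lam)
        pred[test_idx] = X[test_idx] @ w      # out-of-fold prediction
    return pred
```

Correlating the out-of-fold predictions with the measured attribute gives an unbiased estimate of how much of that attribute the fingerprints explain.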
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com
Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales in the models. To handle this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least-square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains and a local least-square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least-square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To treat heterogeneity and multiscale features in the models effectively, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques.
Highlights: • Multi-element least-square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least-square HDMR can significantly reduce computational complexity.
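The multi-element idea — decompose the random domain, fit a local least-squares surrogate in each subdomain, then paste the pieces together — can be sketched in one stochastic dimension. Plain polynomials stand in for the orthogonal basis functions, and fixed subdomain edges replace the adaptive decomposition; both are simplifying assumptions for the example.

```python
import numpy as np

def fit_local_surrogates(f, edges, n_train=50, order=3, seed=0):
    """Split [edges[0], edges[-1]] into subdomains; in each, fit a
    least-squares polynomial surrogate of f from random samples
    (a 1-D stand-in for a local HDMR expansion)."""
    rng = np.random.default_rng(seed)
    models = []
    for a, b in zip(edges[:-1], edges[1:]):
        x = rng.uniform(a, b, n_train)
        coeffs = np.polyfit(x, f(x), order)   # local least-squares fit
        models.append((a, b, coeffs))
    return models

def evaluate(models, x):
    """Piecewise evaluation: use the surrogate of the subdomain containing x."""
    for a, b, c in models:
        if a <= x <= b:
            return np.polyval(c, x)
    raise ValueError("x outside the random domain")
```

Each local fit is cheap and low-order, yet the pasted surrogate tracks a function that a single global low-order fit would represent poorly.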
Local spatiotemporal time-frequency peak filtering method for seismic random noise reduction
NASA Astrophysics Data System (ADS)
Liu, Yanping; Dang, Bo; Li, Yue; Lin, Hongbo
2014-12-01
To achieve a higher level of seismic random noise suppression, the Radon transform was adopted to implement spatiotemporal time-frequency peak filtering (TFPF) in our previous studies. Those studies performed TFPF in the full-aperture Radon domain, using both the linear and the parabolic Radon transform. Although the superiority of this method over conventional TFPF has been demonstrated on synthetic seismic models and field seismic data, the method still has limitations. Both full-aperture linear and parabolic Radon are applicable and effective in relatively simple situations (e.g., curved reflection events with regular geometry) but inapplicable in complicated situations such as reflection events with irregular shapes, or interlaced events with quite different slope or curvature parameters. A localized application of the Radon transform is therefore needed: the filter is better served by adapting the transform to the local character of the data variations. In this article, we propose to adopt a local Radon transform, referred to as piecewise full-aperture Radon, to realize spatiotemporal TFPF, called local spatiotemporal TFPF. Through experiments on synthetic seismic models and field seismic data, this study demonstrates the advantage of our method in seismic random noise reduction and reflection event recovery for relatively complicated seismic data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazic, Predrag; Stefancic, Hrvoje; Abraham, Hrvoje
2006-03-20
We introduce a novel numerical method, named the Robin Hood method, for solving electrostatic problems. The approach is closest to the boundary element methods, although significant conceptual differences exist with respect to this class of methods. The method achieves equipotentiality of conducting surfaces by iterative non-local charge transfer. For each conducting surface, non-local charge transfers are performed between the surface elements that differ the most from the targeted equipotentiality of the surface. The method is tested against analytical solutions and its wide range of application is demonstrated. The method has appealing technical characteristics. For a problem with N surface elements, the computational complexity essentially scales with N^α, where α < 2, the required computer memory scales with N, and the error of the potential decreases exponentially with the number of iterations over many orders of magnitude, without critical slowing down. The Robin Hood method could prove useful in other classical or even quantum problems. Some ideas for future development toward applications outside electrostatics are discussed.
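The iterative non-local charge transfer at the heart of the method is easy to sketch: repeatedly move charge from the surface element at the highest potential to the one at the lowest, by exactly the amount that equalizes those two potentials, holding the total charge fixed. The ring geometry, the constant self-interaction term, and the iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def robin_hood(A, q, iters=20000):
    """Equalize element potentials phi = A @ q by repeated non-local charge
    transfers between the highest- and lowest-potential elements."""
    q = q.astype(float).copy()
    for _ in range(iters):
        phi = A @ q
        hi, lo = int(np.argmax(phi)), int(np.argmin(phi))
        if hi == lo:
            break
        # amount that makes phi[hi] and phi[lo] equal, others held fixed
        dq = (phi[hi] - phi[lo]) / (A[hi, hi] + A[lo, lo] - 2.0 * A[hi, lo])
        q[hi] -= dq
        q[lo] += dq
    return q

# Demo: a conducting ring discretized into N elements. Off-diagonal
# interactions are 1/distance; the constant diagonal self-term is an
# illustrative stand-in for the element self-energy.
N = 16
ang = 2.0 * np.pi * np.arange(N) / N
pts = np.c_[np.cos(ang), np.sin(ang)]
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
A = 1.0 / (dist + np.eye(N))
np.fill_diagonal(A, 4.0)
q0 = np.zeros(N)
q0[0] = 1.0          # all charge initially on one element
q = robin_hood(A, q0)
```

Every transfer conserves the total charge, and the spread of the element potentials shrinks toward the equipotential solution.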
Texture-based approach to palmprint retrieval for personal identification
NASA Astrophysics Data System (ADS)
Li, Wenxin; Zhang, David; Xu, Z.; You, J.
2000-12-01
This paper presents a new approach to palmprint retrieval for personal identification. Three key issues in image retrieval are considered: feature selection, similarity measures, and dynamic search for the best match of the sample in the image database. We propose a texture-based method for palmprint feature representation. The concept of texture energy is introduced to define a palmprint's global and local features, which are characterized by high convergence of inner-palm similarities and good dispersion of inter-palm discrimination. The search is carried out in a layered fashion: global features are first used to guide the fast selection of a small set of similar candidates from the database, and local features are then used to decide the final output within the candidate set. The experimental results demonstrate the effectiveness and accuracy of the proposed method.
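The layered search can be sketched as a two-stage pipeline: a cheap global texture-energy comparison prunes the database, then a local pixelwise comparison decides among the survivors. The single Laws-style mask and the distance measures here are illustrative stand-ins for the paper's actual feature set.

```python
import numpy as np

def texture_energy(img, mask):
    """Mean absolute response to a small texture mask (a Laws-style
    energy measure), computed as a 'valid' 2-D correlation."""
    h, w = mask.shape
    H, W = img.shape
    out = np.zeros((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = abs((img[i:i + h, j:j + w] * mask).sum())
    return out.mean()

def layered_search(db, query, mask, top=2):
    """Stage 1: rank by global texture energy and keep `top` candidates;
    stage 2: pick the candidate with the smallest pixelwise distance."""
    qe = texture_energy(query, mask)
    ranked = sorted(db, key=lambda k: abs(texture_energy(db[k], mask) - qe))
    cand = ranked[:top]
    return min(cand, key=lambda k: np.linalg.norm(db[k] - query))
```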
RFMix: A Discriminative Modeling Approach for Rapid and Robust Local-Ancestry Inference
Maples, Brian K.; Gravel, Simon; Kenny, Eimear E.; Bustamante, Carlos D.
2013-01-01
Local-ancestry inference is an important step in the genetic analysis of fully sequenced human genomes. Current methods can only detect continental-level ancestry (i.e., European versus African versus Asian) accurately even when using millions of markers. Here, we present RFMix, a powerful discriminative modeling approach that is faster (∼30×) and more accurate than existing methods. We accomplish this by using a conditional random field parameterized by random forests trained on reference panels. RFMix is capable of learning from the admixed samples themselves to boost performance and autocorrect phasing errors. RFMix shows high sensitivity and specificity in simulated Hispanics/Latinos and African Americans and admixed Europeans, Africans, and Asians. Finally, we demonstrate that African Americans in HapMap contain modest (but nonzero) levels of Native American ancestry (∼0.4%). PMID:23910464
Lattice quantum chromodynamical approach to nuclear physics
NASA Astrophysics Data System (ADS)
Aoki, Sinya; Doi, Takumi; Hatsuda, Tetsuo; Ikeda, Yoichi; Inoue, Takashi; Ishii, Noriyoshi; Murano, Keiko; Nemura, Hidekatsu; Sasaki, Kenji; HAL QCD Collaboration
2012-09-01
We review recent progress in the HAL QCD method, which was recently proposed to investigate hadron interactions in lattice quantum chromodynamics (QCD). The strategy to extract the energy-independent non-local potential in lattice QCD is explained in detail. The method is applied to study nucleon-nucleon, nucleon-hyperon, hyperon-hyperon, and meson-baryon interactions. Several extensions of the method are also discussed.
Liu, Dong; Wang, Shengsheng; Huang, Dezhi; Deng, Gang; Zeng, Fantao; Chen, Huiling
2016-05-01
Medical image recognition is an important task in both computer vision and computational biology. In the field of medical image classification, representing an image with a local binary patterns (LBP) descriptor has become popular. However, most existing LBP-based methods encode the binary patterns in a fixed neighborhood radius and ignore the spatial relationships among local patterns. Ignoring these spatial relationships leads to poor performance in capturing discriminative features for complex samples, such as medical images obtained by microscopy. To address this problem, we propose a novel method that improves local binary patterns by assigning an adaptive neighborhood radius to each pixel. Based on these adaptive local binary patterns, we further propose a spatial adjacent histogram strategy that encodes the micro-structures for image representation. An extensive set of evaluations performed on four medical datasets shows that the proposed method significantly improves standard LBP and compares favorably with several other prevailing approaches.
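A toy version of an adaptive-radius LBP is sketched below: each interior pixel gets a neighborhood radius chosen from its local variance before the usual 8-neighbor thresholding. The two-radius rule and the variance threshold are assumptions for illustration; the paper's adaptive rule and its spatial adjacent histogram step are not reproduced here.

```python
import numpy as np

def lbp_code(img, y, x, r):
    """Standard 8-neighbour LBP code at integer radius r (axis and
    diagonal samples), thresholded against the center pixel."""
    offs = [(-r, -r), (-r, 0), (-r, r), (0, r),
            (r, r), (r, 0), (r, -r), (0, -r)]
    c = img[y, x]
    code = 0
    for k, (dy, dx) in enumerate(offs):
        if img[y + dy, x + dx] >= c:
            code |= 1 << k
    return code

def adaptive_lbp(img, var_thresh=25.0):
    """Assign each interior pixel radius 1 in busy regions (high local
    variance) and radius 2 in flat ones -- a toy adaptive radius."""
    H, W = img.shape
    codes = np.zeros((H, W), dtype=np.uint8)
    for y in range(2, H - 2):
        for x in range(2, W - 2):
            local = img[y - 1:y + 2, x - 1:x + 2]
            r = 1 if local.var() > var_thresh else 2
            codes[y, x] = lbp_code(img, y, x, r)
    return codes
```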
Large margin nearest neighbor classifiers.
Domeniconi, Carlotta; Gunopulos, Dimitrios; Peng, Jing
2005-07-01
The nearest neighbor technique is a simple and appealing approach to addressing classification problems. It relies on the assumption of locally constant class conditional probabilities. This assumption becomes invalid in high dimensions with a finite number of examples due to the curse of dimensionality. Severe bias can be introduced under these conditions when using the nearest neighbor rule. The employment of a locally adaptive metric becomes crucial in order to keep class conditional probabilities close to uniform, thereby minimizing the bias of estimates. We propose a technique that computes a locally flexible metric by means of support vector machines (SVMs). The decision function constructed by SVMs is used to determine the most discriminant direction in a neighborhood around the query. Such a direction provides a local feature weighting scheme. We formally show that our method increases the margin in the weighted space where classification takes place. Moreover, our method has the important advantage of online computational efficiency over competing locally adaptive techniques for nearest neighbor classification. We demonstrate the efficacy of our method using both real and simulated data.
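The idea of deriving a local feature weighting from a decision function can be sketched as follows, with a least-squares linear discriminant standing in for the SVM (an assumption for brevity): the magnitude of each weight marks how discriminant that feature direction is, and distances for the nearest-neighbor vote are computed in the correspondingly rescaled space.

```python
import numpy as np

def fit_direction(X, y):
    """Least-squares linear discriminant on +/-1 labels -- a stand-in
    for the SVM decision function used in the paper."""
    Xb = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w[:-1]                       # drop the bias term

def weighted_knn(X, y, query, w, k=3):
    """k-NN vote in the feature-weighted space sqrt(|w|) * x, so
    discriminant directions dominate the distance computation."""
    scale = np.sqrt(np.abs(w))
    d = np.linalg.norm((X - query) * scale, axis=1)
    idx = np.argsort(d)[:k]
    return 1.0 if y[idx].sum() > 0 else -1.0
```

On data where one feature carries the class signal and another is large-amplitude noise, the weighting suppresses the noise axis and keeps the neighborhood along the informative direction.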
Near-isotropic 3D optical nanoscopy with photon-limited chromophores
Tang, Jianyong; Akerboom, Jasper; Vaziri, Alipasha; Looger, Loren L.; Shank, Charles V.
2010-01-01
Imaging approaches based on single molecule localization break the diffraction barrier of conventional fluorescence microscopy, allowing for bioimaging with nanometer resolution. It remains a challenge, however, to precisely localize photon-limited single molecules in 3D. We have developed a new localization-based imaging technique achieving almost isotropic subdiffraction resolution in 3D. A tilted mirror is used to generate a side view in addition to the front view of activated single emitters, allowing their 3D localization to be precisely determined for superresolution imaging. Because both front and side views are in focus, this method is able to efficiently collect emitted photons. The technique is simple to implement on a commercial fluorescence microscope, and especially suitable for biological samples with photon-limited chromophores such as endogenously expressed photoactivatable fluorescent proteins. Moreover, this method is relatively resistant to optical aberration, as it requires only centroid determination for localization analysis. Here we demonstrate the application of this method to 3D imaging of bacterial protein distribution and neuron dendritic morphology with subdiffraction resolution. PMID:20472826
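Since the method needs only centroid determination, the localization analysis reduces to a few lines: the front view supplies (x, y), the mirrored side view supplies (z, y), and the two y estimates can be averaged. The array layouts and names below are illustrative assumptions.

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (row, col) of a single-emitter spot."""
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

def localize_3d(front, side):
    """Front view gives (y, x); the mirrored side view gives (y, z).
    Returns (x, y, z) with the two independent y estimates averaged."""
    y1, x = centroid(front)
    y2, z = centroid(side)
    return x, 0.5 * (y1 + y2), z
```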
Pratapa, Phanisri P.; Suryanarayana, Phanish; Pask, John E.
2015-12-02
We present the Clenshaw–Curtis Spectral Quadrature (SQ) method for real-space O(N) Density Functional Theory (DFT) calculations. In this approach, all quantities of interest are expressed as bilinear forms or sums over bilinear forms, which are then approximated by spatially localized Clenshaw–Curtis quadrature rules. This technique is identically applicable to both insulating and metallic systems, and in conjunction with local reformulation of the electrostatics, enables the O(N) evaluation of the electronic density, energy, and atomic forces. The SQ approach also permits infinite-cell calculations without recourse to Brillouin zone integration or large supercells. We employ a finite difference representation in order to exploit the locality of electronic interactions in real space, enable systematic convergence, and facilitate large-scale parallel implementation. In particular, we derive expressions for the electronic density, total energy, and atomic forces that can be evaluated in O(N) operations. We demonstrate the systematic convergence of energies and forces with respect to quadrature order as well as truncation radius to the exact diagonalization result. In addition, we show convergence with respect to mesh size to established O(N³) planewave results. In conclusion, we establish the efficiency of the proposed approach for high temperature calculations and discuss its particular suitability for large-scale parallel computation.
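The Clenshaw–Curtis quadrature at the heart of the SQ method can be illustrated in one dimension. The sketch below (NumPy only; not the authors' code) realizes the rule by integrating the Chebyshev interpolant of the integrand at the Chebyshev extreme points, which converges spectrally for smooth functions.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def clenshaw_curtis(f, n):
    """Integrate f over [-1, 1] with an (n+1)-point Clenshaw-Curtis rule,
    realized by exactly integrating the Chebyshev interpolant of f."""
    k = np.arange(n + 1)
    x = np.cos(np.pi * k / n)      # Chebyshev extreme points (quadrature nodes)
    c = C.chebfit(x, f(x), n)      # interpolating Chebyshev coefficients
    ci = C.chebint(c)              # coefficients of the antiderivative
    return C.chebval(1.0, ci) - C.chebval(-1.0, ci)
```

With 17 nodes the rule already integrates `exp(x)` over [-1, 1] to near machine precision, which is the spectral convergence with quadrature order the abstract refers to.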
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm was tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on these preliminary results, the performance of the proposed LW-PLS-DA approach proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks). Copyright © 2014 Elsevier B.V. All rights reserved.
A global/local analysis method for treating details in structural design
NASA Technical Reports Server (NTRS)
Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.
1993-01-01
A method for analyzing global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced since tedious and complex transitioning need not be performed. Second, errors due to mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as to adaptive refinement.
Energy landscapes and properties of biomolecules.
Wales, David J
2005-11-09
Thermodynamic and dynamic properties of biomolecules can be calculated using a coarse-grained approach based upon sampling stationary points of the underlying potential energy surface. The superposition approximation provides an overall partition function as a sum of contributions from the local minima, and hence functions such as internal energy, entropy, free energy and the heat capacity. To obtain rates we must also sample transition states that link the local minima, and the discrete path sampling method provides a systematic means to achieve this goal. A coarse-grained picture is also helpful in locating the global minimum using the basin-hopping approach. Here we can exploit a fictitious dynamics between the basins of attraction of local minima, since the objective is to find the lowest minimum, rather than to reproduce the thermodynamics or dynamics.
Alagl, Adel S; Madi, Marwa
2018-05-01
Alveolar ridge deficiency is considered a major limitation for successful implant placement, as well as for the long-term success rate, especially in the anterior maxillary region. Various approaches have been developed to increase bone volume. Among those approaches, inlay and onlay grafts, alveolar ridge distraction, and guided bone regeneration have been suggested. The use of titanium mesh is a reliable method for ridge augmentation. We describe a patient who presented with a localized, combined, horizontal and vertical ridge defect in the anterior maxilla. The patient was treated using titanium mesh and alloplast material mixed with a nano-bone graft to treat the localized ridge deformity for future implant installation. The clinical and radiographic presentation, as well as relevant literature, are presented.
Stucki, S; Orozco-terWengel, P; Forester, B R; Duruz, S; Colli, L; Masembe, C; Negrini, R; Landguth, E; Jones, M R; Bruford, M W; Taberlet, P; Joost, S
2017-09-01
With the increasing availability of both molecular and topo-climatic data, the main challenges facing landscape genomics - that is the combination of landscape ecology with population genomics - include processing large numbers of models and distinguishing between selection and demographic processes (e.g. population structure). Several methods address the latter, either by estimating a null model of population history or by simultaneously inferring environmental and demographic effects. Here we present samβada, an approach designed to study signatures of local adaptation, with special emphasis on high performance computing of large-scale genetic and environmental data sets. samβada identifies candidate loci using genotype-environment associations while also incorporating multivariate analyses to assess the effect of many environmental predictor variables. This enables the inclusion of explanatory variables representing population structure into the models to lower the occurrences of spurious genotype-environment associations. In addition, samβada calculates local indicators of spatial association for candidate loci to provide information on whether similar genotypes tend to cluster in space, which constitutes a useful indication of the possible kinship between individuals. To test the usefulness of this approach, we carried out a simulation study and analysed a data set from Ugandan cattle to detect signatures of local adaptation with samβada, bayenv, lfmm and an F_ST outlier method (FDIST approach in arlequin) and compare their results. samβada - an open source software for Windows, Linux and Mac OS X available at http://lasig.epfl.ch/sambada - outperforms other approaches and better suits whole-genome sequence data processing. © 2016 The Authors. Molecular Ecology Resources Published by John Wiley & Sons Ltd.
Chowdhury, Rasheda Arman; Lina, Jean Marc; Kobayashi, Eliane; Grova, Christophe
2013-01-01
Localizing the generators of epileptic activity in the brain using Electro-EncephaloGraphy (EEG) or Magneto-EncephaloGraphy (MEG) signals is of particular interest during the pre-surgical investigation of epilepsy. Epileptic discharges can be detectable from background brain activity, provided they are associated with spatially extended generators. Using realistic simulations of epileptic activity, this study evaluates the ability of distributed source localization methods to accurately estimate the location of the generators and their sensitivity to the spatial extent of such generators when using MEG data. Source localization methods based on two types of realistic models have been investigated: (i) brain activity may be modeled using cortical parcels and (ii) brain activity is assumed to be locally smooth within each parcel. A Data Driven Parcellization (DDP) method was used to segment the cortical surface into non-overlapping parcels and diffusion-based spatial priors were used to model local spatial smoothness within parcels. These models were implemented within the Maximum Entropy on the Mean (MEM) and the Hierarchical Bayesian (HB) source localization frameworks. We proposed new methods in this context and compared them with other standard ones using Monte Carlo simulations of realistic MEG data involving sources of several spatial extents and depths. Detection accuracy of each method was quantified using Receiver Operating Characteristic (ROC) analysis and localization error metrics. Our results showed that methods implemented within the MEM framework were sensitive to all spatial extents of the sources ranging from 3 cm² to 30 cm², regardless of the number and size of the parcels defining the model. To reach a similar level of accuracy within the HB framework, a model using parcels larger than the size of the sources should be considered. PMID:23418485
Ferenczy, György G
2013-04-05
The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable to derive local basis nonorthogonal orbitals that minimize the energy of the system and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate for use in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent with those of a related approach [G. G. Ferenczy previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small size quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, the deformation of the wave function near the boundary is observed without bond charges and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.
ERIC Educational Resources Information Center
Harsch, Claudia; Martin, Guido
2012-01-01
We explore how a local rating scale can be based on the Common European Framework CEF-proficiency scales. As part of the scale validation (Alderson, 1991; Lumley, 2002), we examine which adaptations are needed to turn CEF-proficiency descriptors into a rating scale for a local context, and to establish a practicable method to revise the initial…
Meshless Local Petrov-Galerkin Method for Bending Problems
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Raju, Ivatury S.
2002-01-01
Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable to the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moments and shear forces without the need for elaborate post-processing techniques.
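The moving least squares construction underlying the trial functions can be sketched in one dimension. This is a standard MLS fit with a Gaussian weight, not the report's GMLS implementation: at each evaluation point a weighted polynomial least-squares problem is solved, and the fitted value at that point is taken as the approximation.

```python
import numpy as np

def mls_value(x_nodes, f_nodes, xe, h=0.3, degree=2):
    """Moving least squares: solve a weighted polynomial least-squares
    problem centred at the evaluation point xe; with the shifted basis
    (x - xe), the constant coefficient is the MLS value at xe."""
    w = np.exp(-((x_nodes - xe) / h) ** 2)          # Gaussian weight function
    P = np.vander(x_nodes - xe, degree + 1, increasing=True)
    A = P * np.sqrt(w)[:, None]                     # weighted design matrix
    coeffs, *_ = np.linalg.lstsq(A, np.sqrt(w) * f_nodes, rcond=None)
    return coeffs[0]
```

A degree-2 MLS fit reproduces quadratic data exactly, regardless of the weight function, which is the polynomial-reproduction property such interpolants rely on.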
NASA Astrophysics Data System (ADS)
Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohammed A.
2014-09-01
In this paper, we propose an efficient spectral collocation algorithm to numerically solve wave-type equations subject to initial, boundary and non-local conservation conditions. The shifted Jacobi pseudospectral approximation is investigated for the discretization of the spatial variable of such equations. It possesses spectral accuracy in the spatial variable. The shifted Jacobi-Gauss-Lobatto (SJ-GL) quadrature rule is established for treating the non-local conservation conditions, and then the problem with its initial and non-local boundary conditions is reduced to a system of second-order ordinary differential equations in the temporal variable. This system is solved by a two-stage, fourth-order, A-stable implicit Runge-Kutta scheme. Five numerical examples with comparisons are given. The computational results demonstrate that the proposed algorithm is more accurate than the finite difference method, the method of lines and the spline collocation approach.
NASA Astrophysics Data System (ADS)
Efrain Humpire-Mamani, Gabriel; Arindra Adiyoso Setio, Arnaud; van Ginneken, Bram; Jacobs, Colin
2018-04-01
Automatic localization of organs and other structures in medical images is an important preprocessing step that can improve and speed up other algorithms such as organ segmentation, lesion detection, and registration. This work presents an efficient method for simultaneous localization of multiple structures in 3D thorax-abdomen CT scans. Our approach predicts the location of multiple structures using a single multi-label convolutional neural network for each orthogonal view. Each network takes extra slices around the current slice as input to provide extra context. A sigmoid layer is used to perform multi-label classification. The output of the three networks is subsequently combined to compute a 3D bounding box for each structure. We used our approach to locate 11 structures of interest. The neural network was trained and evaluated on a large set of 1884 thorax-abdomen CT scans from patients undergoing oncological workup. Reference bounding boxes were annotated by human observers. The performance of our method was evaluated by computing the wall distance to the reference bounding boxes. The bounding boxes annotated by the first human observer were used as the reference standard for the test set. Using the best configuration, we obtained an average wall distance of 3.20 ± 7.33 mm in the test set. The second human observer achieved 1.23 ± 3.39 mm. For all structures, the results were better than those reported in previously published studies. In conclusion, we proposed an efficient method for the accurate localization of multiple organs. Our method uses multiple slices as input to provide more context around the slice under analysis, and we have shown that this improves performance. This method can easily be adapted to handle more organs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Żurek-Biesiada, Dominika; Szczurek, Aleksander T.; Prakash, Kirti
Higher order chromatin structure is not only required to compact and spatially arrange long chromatids within a nucleus, but also has important functional roles, including control of gene expression and DNA processing. However, studies of chromatin nanostructures cannot be performed using conventional widefield and confocal microscopy because of the limited optical resolution. Various methods of superresolution microscopy have been described to overcome this difficulty, like structured illumination and single molecule localization microscopy. We report here that the standard DNA dye Vybrant® DyeCycle™ Violet can be used to provide single molecule localization microscopy (SMLM) images of DNA in nuclei of fixed mammalian cells. This SMLM method enabled optical isolation and localization of large numbers of DNA-bound molecules, usually in excess of 10⁶ signals in one cell nucleus. The technique yielded high-quality images of nuclear DNA density, revealing subdiffraction chromatin structures of the size in the order of 100 nm; the interchromatin compartment was visualized at unprecedented optical resolution. The approach offers several advantages over previously described high resolution DNA imaging methods, including high specificity, an ability to record images using a single wavelength excitation, and a higher density of single molecule signals than reported in previous SMLM studies. The method is compatible with DNA/multicolor SMLM imaging which employs simple staining methods suited also for conventional optical microscopy. Highlights: • Super-resolution imaging of nuclear DNA with Vybrant Violet and blue excitation. • 90 nm resolution images of DNA structures in optically thick eukaryotic nuclei. • Enhanced resolution confirms the existence of DNA-free regions inside the nucleus. • Optimized imaging conditions enable multicolor super-resolution imaging.
Streamflow Prediction based on Chaos Theory
NASA Astrophysics Data System (ADS)
Li, X.; Wang, X.; Babovic, V. M.
2015-12-01
Chaos theory is a popular method in hydrologic time series prediction. The local model (LM) based on this theory utilizes time-delay embedding to reconstruct the phase-space diagram. The efficacy of this method depends on the embedding parameters, i.e. embedding dimension, time lag, and nearest neighbor number. The optimal estimation of these parameters is thus critical to the application of the local model. However, these embedding parameters are conventionally estimated using Average Mutual Information (AMI) and False Nearest Neighbors (FNN) separately. This may lead to local optima and thus limit prediction accuracy. Considering these limitations, this paper applies a local model combined with simulated annealing (SA) to find the global optimum of the embedding parameters. It is also compared with another global optimization approach, the Genetic Algorithm (GA). These proposed hybrid methods are applied to daily and monthly streamflow time series for examination. The results show that global optimization can contribute to the local model providing more accurate prediction results than local optimization. The LM combined with SA shows more advantages in terms of computational efficiency. The proposed scheme can also be applied to other fields such as prediction of hydro-climatic time series, error correction, etc.
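The core of the local model can be sketched in a few lines: reconstruct the phase space by time-delay embedding, then predict one step ahead from the successors of the nearest phase-space neighbours. The sketch below uses fixed, hand-picked embedding parameters; the SA/GA optimization of those parameters discussed in the abstract is omitted.

```python
import numpy as np

def delay_embed(series, dim, lag):
    """Reconstruct the phase space: row i is (s[i], s[i+lag], ..., s[i+(dim-1)*lag])."""
    n = len(series) - (dim - 1) * lag
    return np.column_stack([series[i * lag : i * lag + n] for i in range(dim)])

def local_model_predict(series, dim=3, lag=2, k=4):
    """One-step-ahead local model: average the successors of the k
    nearest phase-space neighbours of the last reconstructed state."""
    X = delay_embed(series, dim, lag)
    target = X[-1]
    d = np.linalg.norm(X[:-1] - target, axis=1)     # exclude the state itself
    idx = np.argsort(d)[:k]
    # Value one step after each neighbouring state.
    successors = series[idx + (dim - 1) * lag + 1]
    return successors.mean()
```

On a noise-free periodic series the nearest neighbours are close recurrences of the current state, so their successors give an accurate forecast.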
NASA Astrophysics Data System (ADS)
Lorenzi, Juan M.; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-01
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
Lorenzi, Juan M; Stecher, Thomas; Reuter, Karsten; Matera, Sebastian
2017-10-28
Many problems in computational materials science and chemistry require the evaluation of expensive functions with locally rapid changes, such as the turn-over frequency of first principles kinetic Monte Carlo models for heterogeneous catalysis. Because of the high computational cost, it is often desirable to replace the original with a surrogate model, e.g., for use in coupled multiscale simulations. The construction of surrogates becomes particularly challenging in high dimensions. Here, we present a novel version of the modified Shepard interpolation method which can overcome the curse of dimensionality for such functions to give faithful reconstructions even from very modest numbers of function evaluations. The introduction of local metrics allows us to take advantage of the fact that, on a local scale, rapid variation often occurs only across a small number of directions. Furthermore, we use local error estimates to weigh different local approximations, which helps avoid artificial oscillations. Finally, we test our approach on a number of challenging analytic functions as well as a realistic kinetic Monte Carlo model. Our method not only outperforms existing isotropic metric Shepard methods but also state-of-the-art Gaussian process regression.
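The baseline that this work modifies can be sketched in one dimension: classical modified Shepard interpolation blends local linear models, one built around each node, with inverse-distance weights. The anisotropic local metrics and error-estimate weighting introduced by the paper are omitted from this sketch.

```python
import numpy as np

def modified_shepard(x_nodes, f_nodes, x_eval, m=5, p=4):
    """Modified Shepard sketch: blend local linear nodal functions
    Q_i(x) = f_i + slope_i * (x - x_i) with inverse-distance weights."""
    x_nodes = np.asarray(x_nodes, float)
    f_nodes = np.asarray(f_nodes, float)
    # Local slope around each node from its m nearest neighbours.
    slopes = np.empty_like(f_nodes)
    for i, xi in enumerate(x_nodes):
        idx = np.argsort(np.abs(x_nodes - xi))[:m]
        A = np.column_stack([np.ones(len(idx)), x_nodes[idx] - xi])
        c, *_ = np.linalg.lstsq(A, f_nodes[idx], rcond=None)
        slopes[i] = c[1]
    out = []
    for xe in np.atleast_1d(x_eval):
        d = np.abs(x_nodes - xe)
        if d.min() < 1e-12:                     # exactly on a node: interpolate
            out.append(f_nodes[np.argmin(d)])
            continue
        w = 1.0 / d ** p                        # inverse-distance Shepard weights
        Q = f_nodes + slopes * (xe - x_nodes)   # each node's local prediction
        out.append((w * Q).sum() / w.sum())
    return np.array(out)
```

Because every local model of a linear function is exact, the blend reproduces linear data exactly, and the interpolant passes through all node values.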
Heading Estimation for Pedestrian Dead Reckoning Based on Robust Adaptive Kalman Filtering.
Wu, Dongjin; Xia, Linyuan; Geng, Jijun
2018-06-19
Pedestrian dead reckoning (PDR) using smart phone-embedded micro-electro-mechanical system (MEMS) sensors plays a key role in ubiquitous localization indoors and outdoors. However, as a relative localization method, it suffers from error accumulation, which prevents long-term independent running. Heading estimation error is one of the main location error sources; therefore, in order to improve the location tracking performance of the PDR method in complex environments, an approach based on robust adaptive Kalman filtering (RAKF) for estimating accurate headings is proposed. In our approach, outputs from gyroscope, accelerometer, and magnetometer sensors are fused using a Kalman filtering (KF) solution in which heading measurements derived from accelerations and magnetic field data correct the states integrated from angular rates. In order to identify and control measurement outliers, a maximum likelihood-type estimator (M-estimator)-based model is used. Moreover, an adaptive factor is applied to resist the negative effects of state model disturbances. Extensive experiments under static and dynamic conditions were conducted in indoor environments. The experimental results demonstrate that the proposed approach provides more accurate heading estimates and supports more robust and dynamically adaptive location tracking, compared with methods based on conventional KF.
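The conventional KF baseline can be sketched as a one-state filter: the prediction step integrates the gyro angular rate, and the update step corrects the heading with the magnetometer-derived measurement. The robust M-estimator and adaptive factor of the paper's RAKF are omitted; noise variances below are illustrative values, not the paper's.

```python
import numpy as np

def fuse_heading(gyro_rates, mag_headings, dt=0.02, q=0.01, r=4.0):
    """Minimal 1-state Kalman filter for heading (degrees): predict by
    integrating the gyro rate, update with magnetometer headings."""
    theta = mag_headings[0]          # initial heading from magnetometer
    P = r                            # initial state variance
    est = []
    for omega, z in zip(gyro_rates, mag_headings):
        theta += omega * dt          # predict: integrate angular rate
        P += q                       # process noise inflates uncertainty
        K = P / (P + r)              # Kalman gain
        theta += K * (z - theta)     # update with heading measurement
        P *= (1.0 - K)
        est.append(theta)
    return np.array(est)
```

With a biased gyro alone the heading would drift without bound (0.5°/s for 10 s drifts 5°); the magnetometer updates bound the error to a small steady-state offset.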
Federico, Alejandro; Kaufmann, Guillermo H
2006-03-20
We propose a novel approach to retrieving the phase map coded by a single closed-fringe pattern in digital speckle pattern interferometry, which is based on the estimation of the local sign of the quadrature component. We obtain the estimate by calculating the local orientation of the fringes that have previously been denoised by a weighted smoothing spline method. We carry out the procedure of sign estimation by determining the local abrupt jumps of size pi in the orientation field of the fringes and by segmenting the regions defined by these jumps. The segmentation method is based on the application of two-dimensional active contours (snakes), with which one can also estimate absent jumps, i.e., those that cannot be detected from the local orientation of the fringes. The performance of the proposed phase-retrieval technique is evaluated for synthetic and experimental fringes and compared with the results obtained with the spiral-phase- and Fourier-transform methods.
NASA Astrophysics Data System (ADS)
Liu, Miaofeng
2017-07-01
In recent years, deep convolutional neural networks have come into use for image inpainting and super-resolution in many fields. Unlike most former methods, which require prior knowledge of the local information for corrupted pixels, we propose a 20-depth fully convolutional network that learns an end-to-end mapping from a dataset of damaged/ground-truth subimage pairs, realizing non-local blind inpainting and super-resolution. As existing approaches often perform poorly on images with huge corruptions or when inpainting a low-resolution image, we also share parameters in local areas of layers to achieve spatial recursion and enlarge the receptive field. To avoid the difficulty of training this deep neural network, skip-connections between symmetric convolutional layers are designed. Experimental results show that the proposed method outperforms state-of-the-art methods for diverse corruption and low-resolution conditions, and it works excellently when realizing super-resolution and image inpainting simultaneously.
Locally Weighted Ensemble Clustering.
Huang, Dong; Wang, Chang-Dong; Lai, Jian-Huang
2018-05-01
Due to its ability to combine multiple base clusterings into a probably better and more robust clustering, the ensemble clustering technique has been attracting increasing attention in recent years. Despite the significant success, one limitation to most of the existing ensemble clustering methods is that they generally treat all base clusterings equally regardless of their reliability, which makes them vulnerable to low-quality base clusterings. Although some efforts have been made to (globally) evaluate and weight the base clusterings, yet these methods tend to view each base clustering as an individual and neglect the local diversity of clusters inside the same base clustering. It remains an open problem how to evaluate the reliability of clusters and exploit the local diversity in the ensemble to enhance the consensus performance, especially, in the case when there is no access to data features or specific assumptions on data distribution. To address this, in this paper, we propose a novel ensemble clustering approach based on ensemble-driven cluster uncertainty estimation and local weighting strategy. In particular, the uncertainty of each cluster is estimated by considering the cluster labels in the entire ensemble via an entropic criterion. A novel ensemble-driven cluster validity measure is introduced, and a locally weighted co-association matrix is presented to serve as a summary for the ensemble of diverse clusters. With the local diversity in ensembles exploited, two novel consensus functions are further proposed. Extensive experiments on a variety of real-world datasets demonstrate the superiority of the proposed approach over the state-of-the-art.
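A simplified sketch of the locally weighted co-association matrix can clarify the idea. In this toy version (my reading of the scheme, not the authors' code), each cluster's uncertainty is the normalized entropy of how the other base clusterings split its members, and its weight is one minus that uncertainty.

```python
import numpy as np

def lw_coassociation(base_labels):
    """Locally weighted co-association sketch: each cluster contributes
    to its members' co-association entries with weight 1 - normalized
    entropy of how the OTHER base clusterings partition it."""
    base_labels = np.asarray(base_labels)   # shape (n_clusterings, n_samples)
    M, n = base_labels.shape
    A = np.zeros((n, n))
    for m in range(M):
        for c in np.unique(base_labels[m]):
            members = np.where(base_labels[m] == c)[0]
            H = 0.0                         # mean normalized entropy of this cluster
            for m2 in range(M):
                if m2 == m:
                    continue
                _, counts = np.unique(base_labels[m2, members], return_counts=True)
                p = counts / counts.sum()
                H += -(p * np.log(p + 1e-12)).sum() / np.log(max(len(members), 2))
            w = 1.0 - H / max(M - 1, 1)     # consistent clusters weigh more
            A[np.ix_(members, members)] += w
    return A / M
```

When the base clusterings agree on a cluster, its entropy is zero and it contributes with full weight; clusters that the rest of the ensemble splits apart are down-weighted, which is the local (per-cluster, rather than per-clustering) weighting the abstract argues for.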
NASA Astrophysics Data System (ADS)
Di Valentin, Cristiana
2007-10-01
In this work we present a simplified procedure to use hybrid functionals and localized atomic basis sets to simulate scanning tunneling microscopy (STM) images of stoichiometric, reduced and hydroxylated rutile (110) TiO2 surface. For the two defective systems it is necessary to introduce some exact Hartree-Fock exchange in the exchange functional in order to correctly describe the details of the electronic structure. Results are compared to the standard density functional theory and planewave basis set approach. Both methods have advantages and drawbacks that are analyzed in detail. In particular, for the localized basis set approach, it is necessary to introduce a number of Gaussian functions in the vacuum region above the surface in order to correctly describe the exponential decay of the integrated local density of states from the surface. In the planewave periodic approach, a thick vacuum region is required to achieve correct results. Simulated STM images are obtained for both the reduced and hydroxylated surface which nicely compare with experimental findings. A direct comparison of the two defects as displayed in the simulated STM images indicates that the OH groups should appear brighter than oxygen vacancies in perfect agreement with the experimental STM data.
A high-order staggered meshless method for elliptic problems
Trask, Nathaniel; Perego, Mauro; Bochev, Pavel Blagoveston
2017-03-21
Here, we present a new meshless method for scalar diffusion equations, which is motivated by their compatible discretizations on primal-dual grids. Unlike the latter though, our approach is truly meshless because it only requires the graph of nearby neighbor connectivity of the discretization points. This graph defines a local primal-dual grid complex with a virtual dual grid, in the sense that specification of the dual metric attributes is implicit in the method's construction. Our method combines a topological gradient operator on the local primal grid with a generalized moving least squares approximation of the divergence on the local dual grid. We show that the resulting approximation of the div-grad operator maintains polynomial reproduction to arbitrary orders and yields a meshless method, which attains $O(h^{m})$ convergence in both $L^2$- and $H^1$-norms, similar to mixed finite element methods. We demonstrate this convergence on curvilinear domains using manufactured solutions in two and three dimensions. Application of the new method to problems with discontinuous coefficients reveals solutions that are qualitatively similar to those of compatible mesh-based discretizations.
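The generalized moving least squares (GMLS) approximation at the heart of the divergence operator can be illustrated in one dimension: fit a weighted local polynomial over nearby points, then differentiate the fit. This is a hedged 1-D sketch under assumed Gaussian weights and a regular point cloud, not the paper's staggered primal-dual construction.

```python
import numpy as np

def gmls_derivative(x_pts, f_pts, x0, h, order=3):
    # Generalized moving least squares: fit a local polynomial by weighted
    # least squares around x0, then read off the derivative of the fit
    sw = np.sqrt(np.exp(-((x_pts - x0) / h) ** 2))   # sqrt of Gaussian weights
    V = np.vander(x_pts - x0, order + 1, increasing=True)  # 1, dx, dx^2, ...
    coef = np.linalg.lstsq(V * sw[:, None], f_pts * sw, rcond=None)[0]
    return coef[1]                                   # d/dx of the fit at x0

x_pts = np.linspace(0.0, 1.0, 21)  # point cloud (a regular one, for brevity)
d = gmls_derivative(x_pts, np.sin(x_pts), 0.5, h=0.2)
```

Raising `order` raises the polynomial reproduction order, which is the mechanism behind the $O(h^m)$ convergence claimed above.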
NASA Astrophysics Data System (ADS)
Kumar, Keshav; Shukla, Sumitra; Singh, Sachin Kumar
2018-04-01
Periodic impulses arise due to localised defects in rolling element bearings. At the early stage of a defect, the weak impulses are immersed in strong machinery vibration. This paper proposes a combined approach, based upon the Hilbert envelope and a zero frequency resonator, for the detection of these weak periodic impulses. In the first step, the strength of the impulses is increased by taking the normalised Hilbert envelope of the signal, which also helps to localise these impulses better on the time axis. In the second step, the Hilbert envelope of the signal is passed through the zero frequency resonator for the exact localization of the periodic impulses. The spectrum of the resonator output gives a peak at the fault frequency. A simulated noisy signal with periodic impulses is used to explain the working of the algorithm, and the proposed technique is also verified with experimental data. A comparison of the proposed method with a Hilbert-Huang transform (HHT) based method is presented to establish its effectiveness.
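The envelope-analysis idea behind the first step can be sketched with a numpy-only analytic signal: the spectrum of the normalised envelope peaks at the fault frequency. The zero frequency resonator stage is omitted here, and the signal model (20 Hz impulses exciting a 400 Hz resonance in noise) is an assumed toy example.

```python
import numpy as np

def hilbert_envelope(x):
    # Analytic-signal envelope via the FFT (numpy-only stand-in for
    # scipy.signal.hilbert)
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(np.fft.fft(x) * h))

# Simulated bearing signal: impulses at a 20 Hz fault frequency, each
# exciting a decaying 400 Hz structural resonance, buried in noise
fs, fault_freq = 2000, 20.0
t = np.arange(0, 2.0, 1 / fs)
sig = np.zeros(len(t))
sig[::int(fs / fault_freq)] = 1.0
ring = np.exp(-0.2 * np.arange(40)) * np.sin(2 * np.pi * 400 * np.arange(40) / fs)
sig = np.convolve(sig, ring)[:len(t)]
sig += 0.1 * np.random.default_rng(0).normal(size=len(t))

env = hilbert_envelope(sig)
env /= env.max()                        # normalised envelope
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
peak = freqs[np.argmax(spec)]           # should sit at the fault frequency
```

The raw spectrum of `sig` is dominated by the 400 Hz resonance; the envelope demodulates it, exposing the 20 Hz repetition rate.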
Array magnetics modal analysis for the DIII-D tokamak based on localized time-series modelling
Olofsson, K. Erik J.; Hanson, Jeremy M.; Shiraki, Daisuke; ...
2014-07-14
Here, time-series analysis of magnetics data in tokamaks is typically done using block-based fast Fourier transform methods. This work presents the development and deployment of a new set of algorithms for magnetic probe array analysis. The method is based on an estimation technique known as stochastic subspace identification (SSI). Compared with the standard coherence approach or the direct singular value decomposition approach, the new technique exhibits several beneficial properties. For example, the SSI method does not require that frequencies are orthogonal with respect to the timeframe used in the analysis. Frequencies are obtained directly as parameters of localized time-series models. The parameters are extracted by solving small-scale eigenvalue problems. Applications include maximum-likelihood regularized eigenmode pattern estimation; detection of neoclassical tearing modes, including locked mode precursors; automatic clustering of modes; and magnetics-pattern characterization of sawtooth pre- and postcursors, edge harmonic oscillations and fishbones.
Allen, Lisa K; Hetherington, Erin; Manyama, Mange; Hatfield, Jennifer M; van Marle, Guido
2010-02-03
There have been a number of interventions to date aimed at improving malaria diagnostic accuracy in sub-Saharan Africa. Yet limited success is often reported, for a number of reasons, especially in rural settings. This paper seeks to provide a framework for applied research aimed at improving malaria diagnosis using a combination of established methods: participatory action research and social entrepreneurship. This case study introduces the idea of using the social entrepreneurship approach (SEA) to create innovative and sustainable applied health research outcomes. The following key elements define the SEA: (1) identifying a locally relevant research topic and plan, (2) recognizing the importance of international multi-disciplinary teams and the incorporation of local knowledge, (3) engaging in a process of continuous innovation, adaptation and learning, (4) remaining motivated and determined to achieve sustainable long-term research outcomes, and (5) sharing and transferring ownership of the project with the international and local partners. The SEA has a strong emphasis on innovation led by local stakeholders. In this case, innovation resulted in a unique holistic research program aimed at understanding patient, laboratory and physician influences on accurate diagnosis of malaria. An evaluation of milestones for each SEA element revealed that the success of one element is intricately related to the success of other elements. The SEA will provide an additional framework for researchers and local stakeholders that promotes innovation and adaptability. This approach will facilitate the development of new ideas, strategies and approaches to understand how health issues, such as malaria, affect vulnerable communities.
Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy
Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro
2016-01-01
The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication, was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates, was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799
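The Jacobian-based iteration can be illustrated generically: given a forward field model and sensor readings, a Gauss-Newton loop refines the pose estimate. The inverse-square "field" below is a hypothetical toy model standing in for the paper's least-squares-interpolated finite-element field, and a numerical Jacobian replaces the paper's closed-form one.

```python
import numpy as np

def numerical_jacobian(f, p, eps=1e-6):
    # Forward-difference Jacobian of f at p
    f0 = f(p)
    J = np.zeros((len(f0), len(p)))
    for i in range(len(p)):
        dp = np.zeros_like(p)
        dp[i] = eps
        J[:, i] = (f(p + dp) - f0) / eps
    return J

def iterative_localize(field_model, measured, p0, iters=50, tol=1e-12):
    # Gauss-Newton refinement: adjust the pose p until the modelled field
    # matches the sensor readings
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = measured - field_model(p)
        J = numerical_jacobian(field_model, p)
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy stand-in for the interpolated field: four sensors reading an
# inverse-square falloff from a source at position p (hypothetical model)
sensors = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
def field(p):
    return 1.0 / np.sum((sensors - p) ** 2, axis=1)

true_p = np.array([0.3, 0.2, 0.5])
est = iterative_localize(field, field(true_p), np.array([0.2, 0.1, 0.4]))
```

The slower absolute localization technique mentioned above would supply `p0`; the iteration then tracks the pose at the fast refresh rate.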
Lowrie, Emma; Tyrrell-Smith, Rachel
2017-01-01
This paper reports on the use of a Community-Engaged Research (CEnR) approach to develop a new research tool to involve members of the community in thinking about priorities for early child health and development in a deprived area of the UK. The CEnR approach involves researchers, professionals and members of the public working together during all stages of research and development. Researchers used a phased approach to the development of a Photo Grid tool, including reviewing tools that could be used for community engagement and testing the new tool based on feedback from workshops with local early years professionals and parents of young children. The Photo Grid tool is a flat square grid on which photo cards can be placed. Participants were asked to place at the top of the grid the photos they considered most important for early child health and development, working down to the less important ones at the bottom. The findings showed that the resulting Photo Grid tool was a useful and successful method of engaging with the local community; the evidence for this is the high number of participants who completed a pilot study and provided feedback on the method. By involving community members throughout the research process, it was possible to develop a method that would be acceptable to the local population, thus decreasing the likelihood of a lack of engagement. The success of the tool is therefore particularly encouraging as it engages "seldom heard voices," such as those with low literacy. The aim of this research was to consult with professionals and parents to develop a new research toolkit (Photo Grid) to understand community assets and priorities in relation to early child health and development in Blackpool, a socio-economically disadvantaged community. A Community-Engaged Research (CEnR) approach was used to consult with community members. This paper describes the process of using a CEnR approach in developing a Photo Grid toolkit.
A phased CEnR approach was used to design, test and pilot a Photo Grid tool. Members of the Blackpool community (parents with children aged 0-4 years, health professionals, members of the early years workforce, and community development workers) were involved in the development of the research tool at various stages. They were recruited opportunistically via a venue-based time-space sampling method. In total, 213 parents and 18 professionals engaged in the research process. Using a CEnR approach allowed effective engagement with the local community and professionals, evidenced by high levels of engagement throughout the development process. This approach improved the acceptability and usability of the resulting Photo Grid toolkit. Community members found the method accessible, engaging, useful, and thought provoking, in an area of high social deprivation, complex problems, and low literacy. The Photo Grid is an adaptable tool which can be used in other areas of socio-economic disadvantage to engage with the community to understand a wide variety of complex topics.
Improving Schools through Inservice Test Construction: The Rossville Model.
ERIC Educational Resources Information Center
Gilman, David Alan
A method for improving curriculum and schools through the local development of competency tests in basic skills--the Competency-Rossville Model (CRM)--is outlined. The method was originated in the school system of Rossville (Illinois) and has been tested in five other midwestern school systems. The approach leads the faculty of the school, with…
Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr
2010-03-24
Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.
Face recognition via sparse representation of SIFT feature on hexagonal-sampling image
NASA Astrophysics Data System (ADS)
Zhang, Daming; Zhang, Xueyong; Li, Lu; Liu, Huayong
2018-04-01
This paper investigates a face recognition approach based on Scale Invariant Feature Transform (SIFT) features and sparse representation. The approach takes advantage of SIFT, a local feature rather than the holistic features used in the classical Sparse Representation based Classification (SRC) algorithm, and possesses strong robustness to expression, pose and illumination variations. Since hexagonal images have inherent merits over square images that make the recognition process more efficient, we extract SIFT keypoints from hexagonal-sampling images. Instead of matching SIFT features directly, the sparse representation of each SIFT keypoint is first computed over a constructed dictionary; these sparse vectors are then quantized according to the dictionary; finally, each face image is represented by a histogram, and these so-called Bag-of-Words vectors are classified by an SVM. Owing to the use of local features, the proposed method achieves good results even when the number of training samples is small. In the experiments, the proposed method gave higher face recognition rates than other methods on the ORL and Yale B face databases; the effectiveness of the hexagonal sampling in the proposed method is also verified.
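The quantise-and-histogram step of the pipeline can be sketched as follows. Hard nearest-atom assignment is used here as a simplification of the paper's sparse-coding step, and the random dictionary and descriptors are assumed toy data.

```python
import numpy as np

def bow_histogram(descriptors, dictionary):
    # Assign each local descriptor to its nearest dictionary atom and build
    # an L1-normalised Bag-of-Words histogram (hard assignment used here as
    # a simplification of sparse-coding quantisation)
    d2 = ((descriptors[:, None, :] - dictionary[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(dictionary)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
dictionary = rng.normal(size=(8, 128))   # 8 visual words, SIFT-sized (128-D)
# three keypoint descriptors: two near atom 0, one near atom 3
descriptors = dictionary[[0, 0, 3]] + 0.01 * rng.normal(size=(3, 128))
hist = bow_histogram(descriptors, dictionary)
```

The L1 normalisation makes histograms comparable across images with different keypoint counts, which is what the SVM then classifies.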
Inherent Structure versus Geometric Metric for State Space Discretization
Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong
2016-01-01
Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on the potential/effective energy surface. The conformations that are minimized into the same energy basin belong to one cluster. We investigate how the application of these two methods of trajectory decomposition influences our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the micro cluster level, the IS approach and the root-mean-square deviation (RMSD) based clustering method give totally different results. Depending on the local features of the energy landscape, conformations with close RMSDs can be minimized into different minima, while conformations with large RMSDs can be minimized into the same basin. However, the relaxation timescales calculated from the transition matrices built on the micro clusters are similar. The discrepancy at the micro cluster level leads to different macro clusters. Although the dynamic models established through both clustering methods are validated as approximately Markovian, the IS approach seems to give a meaningful state space discretization at the macro cluster level. PMID:26915811
Model-free estimation of the psychometric function
Żychaluk, Kamila; Foster, David H.
2009-01-01
A subject's response to the strength of a stimulus is described by the psychometric function, from which summary measures, such as a threshold or slope, may be derived. Traditionally, this function is estimated by fitting a parametric model to the experimental data, usually the proportion of successful trials at each stimulus level. Common models include the Gaussian and Weibull cumulative distribution functions. This approach works well if the model is correct, but it can mislead if not. In practice, the correct model is rarely known. Here, a nonparametric approach based on local linear fitting is advocated. No assumption is made about the true model underlying the data, except that the function is smooth. The critical role of the bandwidth is identified, and its optimum value estimated by a cross-validation procedure. As a demonstration, seven vision and hearing data sets were fitted by the local linear method and by several parametric models. The local linear method frequently performed better and never worse than the parametric ones. Supplemental materials for this article can be downloaded from app.psychonomic-journals.org/content/supplemental. PMID:19633355
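A minimal sketch of the advocated procedure, local linear fitting with a leave-one-out cross-validated bandwidth, is shown below. The noise-free logistic data and the candidate bandwidth grid are assumptions for illustration; real data would be binomial proportions per stimulus level.

```python
import numpy as np

def local_linear(x_train, y_train, x_eval, h):
    # Local linear smoother with a Gaussian kernel of bandwidth h
    out = np.empty(len(x_eval))
    for j, x0 in enumerate(x_eval):
        sw = np.sqrt(np.exp(-0.5 * ((x_train - x0) / h) ** 2))
        X = np.column_stack([np.ones_like(x_train), x_train - x0])
        beta = np.linalg.lstsq(X * sw[:, None], y_train * sw, rcond=None)[0]
        out[j] = beta[0]          # intercept = fitted value at x0
    return out

def cv_bandwidth(x, y, candidates):
    # Leave-one-out cross-validation over candidate bandwidths
    best_h, best_err = candidates[0], np.inf
    for h in candidates:
        idx = np.arange(len(x))
        err = sum((y[i] - local_linear(x[idx != i], y[idx != i],
                                       x[i:i + 1], h)[0]) ** 2
                  for i in range(len(x)))
        if err < best_err:
            best_h, best_err = h, err
    return best_h

# "Psychometric" data: proportion correct versus stimulus level
x = np.linspace(-3, 3, 13)
y = 1.0 / (1.0 + np.exp(-2.0 * x))
h = cv_bandwidth(x, y, [0.3, 0.6, 1.0])
fit = local_linear(x, y, np.array([-2.0, 0.0, 2.0]), h)
```

No parametric model (Gaussian, Weibull, …) is assumed anywhere; only smoothness of the underlying function is required, with the bandwidth playing the critical role the abstract identifies.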
Template match using local feature with view invariance
NASA Astrophysics Data System (ADS)
Lu, Cen; Zhou, Gang
2013-10-01
Matching a template image in a target image is a fundamental task in the field of computer vision. Aiming at the deficiencies of traditional image matching methods and their inaccurate matching in scene images with rotation, illumination and view changes, a novel matching algorithm using local features is proposed in this paper. The local histograms of the edge pixels (LHoE) are extracted as an invariant feature to resist view and brightness changes. The merit of the LHoE is that edge points are little affected by view changes, and the LHoE can resist not only illumination variance but also the pollution of noise. Because matching is executed only on the edge points, the computational burden is greatly reduced. Additionally, our approach is conceptually simple, easy to implement, and does not need a training phase. A view change can be considered as the combination of rotation, illumination and shear transformations. Experimental results on simulated and real data demonstrate that the proposed approach is superior to NCC (normalized cross-correlation) and histogram-based methods under view changes.
Individual breathing reactions measured in hemoglobin by hydrogen exchange methods.
Englander, S W; Calhoun, D B; Englander, J J; Kallenbach, N R; Liem, R K; Malin, E L; Mandal, C; Rogero, J R
1980-01-01
Protein hydrogen exchange is generally believed to register some aspects of internal protein dynamics, but the kind of motion at work is not clear. Experiments are being done to identify the determinants of protein hydrogen exchange and to distinguish between local unfolding and accessibility-penetration mechanisms. Results with small molecules, polynucleotides, and proteins demonstrate that solvent accessibility is by no means sufficient for fast exchange. H-exchange slowing is quite generally connected with intramolecular H-bonding, and the exchange process depends pivotally on transient H-bond cleavage. At least in alpha-helical structures, the cooperative aspect of H-bond cleavage must be expressed in local unfolding reactions. Results obtained by use of a difference hydrogen exchange method appear to provide a direct measurement of transient, cooperative, local unfolding reactions in hemoglobin. The reality of these supposed coherent breathing units is being tested by using the difference H-exchange approach to tritium label the units one at a time and then attempting to locate the tritium by fragmenting the protein, separating the fragments, and testing them for label. Early results demonstrate the feasibility of this approach. PMID:7248462
Collaborative voxel-based surgical virtual environments.
Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan
2008-01-01
Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.
Al-Khatib, Ra'ed M; Rashid, Nur'Aini Abdul; Abdullah, Rosni
2011-08-01
The secondary structure of RNA pseudoknots has been extensively inferred and scrutinized by computational approaches. Experimental methods for determining RNA structure are time consuming and tedious; therefore, predictive computational approaches are required. Predicting the most accurate and energy-stable pseudoknot RNA secondary structure has been proven to be an NP-hard problem. In this paper, a new RNA folding approach, termed MSeeker, is presented; it includes KnotSeeker (a heuristic method) and Mfold (a thermodynamic algorithm). The global optimization of this thermodynamic heuristic approach was further enhanced by using a case-based reasoning technique as a local optimization method. MSeeker is a proposed algorithm for predicting RNA pseudoknot structure from individual sequences, especially long ones. This research demonstrates that MSeeker improves the sensitivity and specificity of existing RNA pseudoknot structure predictions. The performance and structural results from this proposed method were evaluated against seven other state-of-the-art pseudoknot prediction methods. The MSeeker method had better sensitivity than the DotKnot, FlexStem, HotKnots, pknotsRG, ILM, NUPACK and pknotsRE methods, with 79% of the predicted pseudoknot base-pairs being correct.
NASA Technical Reports Server (NTRS)
Jiang, Yi-Tsann
1993-01-01
A general solution adaptive scheme based on a remeshing technique is developed for solving the two-dimensional and quasi-three-dimensional Euler and Favre-averaged Navier-Stokes equations. The numerical scheme is formulated on an unstructured triangular mesh utilizing an edge-based pointer system which defines the edge connectivity of the mesh structure. Jameson's four-stage hybrid Runge-Kutta scheme is used to march the solution in time. The convergence rate is enhanced through the use of local time stepping and implicit residual averaging. As the solution evolves, the mesh is regenerated adaptively using flow field information. Mesh adaptation parameters are evaluated such that an estimated local numerical error is equally distributed over the whole domain. For inviscid flows, the present approach generates a complete unstructured triangular mesh using the advancing front method. For turbulent flows, the approach combines a local highly stretched structured triangular mesh in the boundary layer region with an unstructured mesh in the remaining regions to efficiently resolve the important flow features. One-equation and two-equation turbulence models are incorporated into the present unstructured approach. Results are presented for a wide range of flow problems including two-dimensional multi-element airfoils, two-dimensional cascades, and quasi-three-dimensional cascades. This approach is shown to gain flow resolution in the refined regions while achieving a great reduction in the computational effort and storage requirements since solution points are not wasted in regions where they are not required.
Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach
NASA Technical Reports Server (NTRS)
Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.
2003-01-01
A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The noncompactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach is based on a combination of a gray-world-assumption-based illuminant color estimation method and a method using color gamuts. The former method, which we had previously proposed, improved on the original method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improves on it by choosing several colors on the basis of an opponent-color property, namely that the average of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach we propose in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray world assumption. High-chroma gamuts are used for adding appropriate colors to the original image, and low-chroma gamuts are used for narrowing down the illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than that of the conventional method.
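The gray-world baseline that the combined approach builds on can be sketched in a few lines; the synthetic grey scene and reddish illuminant below are assumed toy data.

```python
import numpy as np

def gray_world_illuminant(img):
    # Gray-world estimate: assume the scene average is achromatic, so the
    # per-channel mean (normalised to unit length) is the illuminant colour
    avg = img.reshape(-1, 3).mean(axis=0)
    return avg / np.linalg.norm(avg)

# Achromatic (grey) surfaces lit by a reddish illuminant
rng = np.random.default_rng(2)
reflectance = rng.uniform(0.2, 0.8, size=(64, 64, 1))
illum = np.array([1.0, 0.8, 0.6])
img = reflectance * illum
est = gray_world_illuminant(img)
corrected = img / est          # von Kries style correction, for illustration
```

When one hue dominates the scene, the per-channel mean is biased toward that hue; this is the failure mode that the opponent-color selection and the chroma gamuts above are designed to fix.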
Harris, Claire; Green, Sally; Ramsey, Wayne; Allen, Kelly; King, Richard
2017-09-08
This is the ninth in a series of papers reporting a program of Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The disinvestment literature has broadened considerably over the past decade; however, there is a significant gap regarding systematic, integrated, organisation-wide approaches. This debate paper presents a discussion of the conceptual aspects of disinvestment from the local perspective. Four themes are discussed: Terminology and concepts, Motivation and purpose, Relationships with other healthcare improvement paradigms, and Challenges to disinvestment. There are multiple definitions of disinvestment, multiple concepts underpinning the definitions, and multiple alternative terms conveying these concepts; some definitions overlap and some are mutually exclusive; and there are systematic discrepancies in use between the research and practice settings. Many authors suggest that the term 'disinvestment' should be avoided due to perceived negative connotations and propose that the concept be considered alongside investment in the context of all resource allocation decisions and approached from the perspective of optimising health care. This may provide motivation for change, reduce disincentives and avoid some of the ethical dilemmas inherent in other disinvestment approaches. The impetus and rationale for disinvestment activities are likely to affect all aspects of the process, from identification and prioritisation through to implementation and evaluation, but have not been widely discussed. A need for mechanisms, frameworks, methods and tools for disinvestment is reported. However, there are several health improvement paradigms with mature frameworks and validated methods and tools that are widely used and well accepted in local health services that already undertake disinvestment-type activities and could be expanded and built upon.
The nature of disinvestment brings particular challenges for policy-makers, managers, health professionals and researchers. There is little evidence of successful implementation of 'disinvestment' projects in the local setting; however, initiatives to remove or replace technologies and practices have been successfully achieved through evidence-based practice, quality and safety activities, and health service improvement programs. These findings suggest that the construct of 'disinvestment' may be problematic at the local level. A new definition and two potential approaches to disinvestment are proposed to stimulate further research and discussion.
NASA Technical Reports Server (NTRS)
Mengshoel, Ole J.; Wilkins, David C.; Roth, Dan
2010-01-01
For hard computational problems, stochastic local search has proven to be a competitive approach to finding optimal or approximately optimal problem solutions. Two key research questions for stochastic local search algorithms are: Which algorithms are effective for initialization? When should the search process be restarted? In the present work we investigate these research questions in the context of approximate computation of most probable explanations (MPEs) in Bayesian networks (BNs). We introduce a novel approach, based on the Viterbi algorithm, to explanation initialization in BNs. While the Viterbi algorithm works on sequences and trees, our approach works on BNs with arbitrary topologies. We also give a novel formalization of stochastic local search, with focus on initialization and restart, using probability theory and mixture models. Experimentally, we apply our methods to the problem of MPE computation, using a stochastic local search algorithm known as Stochastic Greedy Search. By carefully optimizing both initialization and restart, we reduce the MPE search time for application BNs by several orders of magnitude compared to using uniform at random initialization without restart. On several BNs from applications, the performance of Stochastic Greedy Search is competitive with clique tree clustering, a state-of-the-art exact algorithm used for MPE computation in BNs.
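The interplay of initialization and restart in stochastic local search can be sketched on a toy binary optimisation problem. This generic hill climber with noise and restarts is an illustration of the paradigm, not the actual Stochastic Greedy Search operating on Bayesian networks, and the objective below is an assumed toy function.

```python
import numpy as np

def flip(x, j):
    y = x.copy()
    y[j] ^= 1
    return y

def stochastic_greedy_search(score, n_vars, rng, noise=0.2,
                             flips=200, restarts=10):
    # Stochastic local search over binary assignments: mostly greedy
    # single-bit flips, occasionally random ones, restarting from fresh
    # random initialisations and remembering the best state seen
    best_x, best_s = None, -np.inf
    for _ in range(restarts):
        x = rng.integers(0, 2, n_vars)          # (re)initialisation
        s = score(x)
        if s > best_s:
            best_x, best_s = x.copy(), s
        for _ in range(flips):
            if rng.random() < noise:
                i = int(rng.integers(n_vars))   # noisy random move
            else:                               # greedy: best single flip
                i = int(np.argmax([score(flip(x, j)) for j in range(n_vars)]))
            y = flip(x, i)
            if score(y) >= s or rng.random() < noise:
                x, s = y, score(y)
                if s > best_s:
                    best_x, best_s = x.copy(), s
    return best_x, best_s

# Toy objective with its optimum (score 0) at the all-ones assignment
target = np.ones(12, dtype=int)
score = lambda z: -int(np.sum((z - target) ** 2))
best_x, best_s = stochastic_greedy_search(score, 12, np.random.default_rng(3))
```

In the paper's setting the assignment would be an explanation in a Bayesian network and the score its (log-)probability; better-than-uniform initialization (e.g., Viterbi-style) and well-timed restarts are exactly the two levers the abstract tunes.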
NASA Astrophysics Data System (ADS)
Byeon, Hye-Hyeon; Lee, Woo Chul; Kim, Wonbin; Kim, Seong Keun; Kim, Woong; Yi, Hyunjung
2017-01-01
Single-walled carbon nanotubes (SWNTs) are among the most promising electronic components for nanoscale electronic devices such as field-effect transistors (FETs), owing to their excellent device characteristics such as high conductivity, high carrier mobility, and mechanical flexibility. A localized gating geometry enables individual addressing of active FET channels and allows for better electrostatics via a thinner dielectric layer of high k value. For localized gating of SWNTs, it becomes critical to define SWNTs of controlled nanostructure and functionality at desired locations with high precision. Here, we demonstrate that a biologically templated approach, in combination with microfabrication processes, can successfully produce nanostructured SWNT channels for localized active devices such as local bottom-gated FETs. A large-scale nanostructured network (nanomesh) of SWNTs was assembled in solution using an M13 phage with strong binding affinity toward SWNTs, and micrometer-scale nanomesh channels were defined using negative photolithography and plasma-etching processes. The bio-fabrication approach produced local bottom-gated FETs with remarkably controllable nanostructures and successfully elicited semiconducting behavior from unsorted SWNTs. In addition, the localized gating scheme enhanced device performance metrics such as operation voltage and Ion/Ioff ratio. We believe that our approach provides a useful and integrative method for fabricating electronic devices out of nanoscale electronic materials for applications in which tunable electrical properties, mechanical flexibility, ambient stability, and chemical stability are of crucial importance.
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Morita, Syoichi; Zhou, Xinxin; Chen, Huayue; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Hoshi, Hiroaki; Fujita, Hiroshi
2015-03-01
This paper describes an automatic approach to anatomy partitioning on three-dimensional (3D) computed tomography (CT) images that divides the human torso into several volume-of-interest (VOI) images based on anatomical definitions. The proposed approach combines several individual organ-location detections with a groupwise organ-location calibration and correction to achieve automatic and robust multiple-organ localization. The essence of the proposed method is to jointly detect the 3D minimum bounding box for each type of organ shown on CT images, based on intra-organ image textures and the inter-organ spatial relationships of the anatomy. Machine-learning-based template matching and generalized-Hough-transform-based point-distribution estimation are used in the detection and calibration processes. We apply this approach to the automatic partitioning of the torso region on CT images into 35 VOIs representing the major organ regions and tissues required for routine diagnosis in clinical medicine. A database containing 4,300 patient cases of high-resolution 3D torso CT images is used for training and performance evaluation. We confirmed that the proposed method successfully localized the target organs in more than 95% of CT cases. Only two organs (gallbladder and pancreas) showed lower success rates: 71% and 78%, respectively. In addition, we applied this approach to another database of 287 patient cases of whole-body CT images scanned for positron emission tomography (PET) studies, used for additional performance evaluation. The experimental results showed no significant difference between the anatomy partitioning results from the two databases except for the spleen. All experimental results showed that the proposed approach was efficient and useful in accomplishing localization tasks for major organs and tissues on CT images scanned using different protocols.
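The template matching step can be illustrated in miniature. This sketch uses exhaustive sum-of-squared-differences matching on a random 2D array; the image, template, and location are invented for the example, and the paper's detector is learned rather than a brute-force SSD scan:

```python
import numpy as np

def match_template_ssd(image, template):
    """Locate a template in an image by exhaustive sum-of-squared-differences
    matching (a minimal stand-in for the machine-learning-based template
    matching used in the paper): returns the (row, col) of the best match."""
    H, W = image.shape
    h, w = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            ssd = np.sum((image[r:r + h, c:c + w] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

rng = np.random.default_rng(0)
image = rng.random((32, 32))
template = image[10:16, 20:26].copy()       # ground-truth location (10, 20)
print(match_template_ssd(image, template))  # -> (10, 20)
```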
NASA Astrophysics Data System (ADS)
Chandra, Rohit; Balasingham, Ilangko
2015-05-01
Localization of a wireless capsule endoscope finds many clinical applications, from diagnostics to therapy. There are two potential approaches to electromagnetic-wave-based localization: (a) signal-propagation-model-based localization using a priori information about the person's dielectric channels, and (b) recently developed microwave-imaging-based localization that requires no such a priori information. In this paper, we study the second approach in terms of localization accuracy over a range of frequencies and signal-to-noise ratios. To this end, we select a 2-D anatomically realistic numerical phantom for microwave imaging at different frequencies. The selected frequencies are 13.56 MHz, 431.5 MHz, 920 MHz, and 2380 MHz, which are typically considered for medical applications. Microwave imaging of a phantom provides an electromagnetic model with the electrical properties (relative permittivity and conductivity) of the internal parts of the body and can serve as a foundation for localization of an in-body RF source. Low-frequency imaging at 13.56 MHz provides a low-resolution image with high contrast in the dielectric properties. At high frequencies, however, the imaging algorithm is able to image only the outer boundaries of the tissues, because higher frequency means higher attenuation and thus lower penetration depth. Furthermore, a recently developed localization method based on microwave imaging is used for estimating the localization accuracy at different frequencies and signal-to-noise ratios. Statistical evaluation of the localization error is performed using the cumulative distribution function (CDF). Based on our results, we conclude that the localization accuracy is minimally affected by frequency or noise. However, the choice of frequency becomes critical if the purpose of the method is to image the internal parts of the body for tumor and/or cancer detection.
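The statistical evaluation via the cumulative distribution function amounts to computing an empirical CDF over the localization errors. A minimal sketch, with hypothetical error values standing in for real measurements:

```python
import numpy as np

def empirical_cdf(errors):
    """Empirical CDF of localization errors, as used for the statistical
    evaluation in the abstract: returns sorted errors x and P(error <= x)."""
    x = np.sort(np.asarray(errors, dtype=float))
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p

errors_mm = [2.1, 0.5, 3.7, 1.2, 0.9, 2.8]   # hypothetical errors in mm
x, p = empirical_cdf(errors_mm)
# the median error is the smallest x at which p reaches 0.5
```

Plotting `p` against `x` for each frequency/SNR combination gives the CDF curves compared in the study.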
Service, Christina N.; Adams, Megan S.; Artelle, Kyle A.; Paquet, Paul; Grant, Laura V.; Darimont, Chris T.
2014-01-01
Range shifts among wildlife can occur rapidly and impose cascading ecological, economic, and cultural consequences. However, occurrence data used to define distributional limits derived from scientific approaches are often outdated for wide ranging and elusive species, especially in remote environments. Accordingly, our aim was to amalgamate indigenous and western scientific evidence of grizzly bear (Ursus arctos horribilis) records and detail a potential range shift on the central coast of British Columbia, Canada. In addition, we test the hypothesis that data from each method yield similar results, as well as illustrate the complementary nature of this coupled approach. Combining information from traditional and local ecological knowledge (TEK/LEK) interviews with remote camera, genetic, and hunting data revealed that grizzly bears are now present on 10 islands outside their current management boundary. LEK interview data suggested this expansion has accelerated over the last 10 years. Both approaches provided complementary details and primarily affirmed one another: all islands with scientific evidence for occupation had consistent TEK/LEK evidence. Moreover, our complementary methods approach enabled a more spatially and temporally detailed account than either method would have afforded alone. In many cases, knowledge already held by local indigenous people could provide timely and inexpensive data about changing ecological processes. However, verifying the accuracy of scientific and experiential knowledge by pairing sources at the same spatial scale allows for increased confidence and detail. A similarly coupled approach may be useful across taxa in many regions. PMID:25054635
Rotation invariant eigenvessels and auto-context for retinal vessel detection
NASA Astrophysics Data System (ADS)
Montuoro, Alessio; Simader, Christian; Langs, Georg; Schmidt-Erfurth, Ursula
2015-03-01
Retinal vessels are one of the few anatomical landmarks that are clearly visible in various imaging modalities of the eye. As they are also relatively invariant to disease progression, retinal vessel segmentation allows cross-modal and temporal registration, enabling exact diagnosis of various eye diseases such as diabetic retinopathy, hypertensive retinopathy, or age-related macular degeneration (AMD). Due to the clinical significance of retinal vessels, many different approaches for segmentation have been published in the literature. In contrast to other segmentation approaches, our method is not specifically tailored to the task of retinal vessel segmentation. Instead we utilize a more general image classification approach and show that this can achieve comparable results. In the proposed method we utilize the concepts of eigenfaces and auto-context. Eigenfaces have been described quite extensively in the literature and their performance is well known. They are, however, quite sensitive to translation and rotation. The former was addressed by computing the eigenvessels in local image windows of different scales, the latter by estimating and correcting the local orientation. Auto-context aims to incorporate automatically generated context information into the training phase of classification approaches. It has been shown to improve the performance of spinal cord segmentation and 3D brain image segmentation. The proposed method achieves an area under the receiver operating characteristic (ROC) curve of Az = 0.941 on the DRIVE data set, comparable to current state-of-the-art approaches.
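The eigenvessels construction follows the standard eigenfaces recipe: PCA over flattened local image windows. A minimal sketch, assuming 8x8 windows, with random data standing in for real vessel patches:

```python
import numpy as np

def eigen_patches(patches, k):
    """Compute the top-k 'eigen-patches' of a set of image patches via PCA
    (the same construction as eigenfaces/eigenvessels). `patches` is an
    (n_samples, n_pixels) array of flattened local windows."""
    mean = patches.mean(axis=0)
    centered = patches - mean
    # SVD of the centered data; rows of Vt are the principal components
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean, Vt[:k]

def project(patch, mean, components):
    """Feature descriptor: coefficients of a patch in the eigen-patch basis."""
    return components @ (patch - mean)

rng = np.random.default_rng(1)
patches = rng.random((100, 64))          # 100 flattened 8x8 windows
mean, comps = eigen_patches(patches, k=8)
desc = project(patches[0], mean, comps)  # 8-dimensional feature vector
```

In the paper, such descriptors (computed per window, per scale, with orientation corrected) feed a pixel classifier, and auto-context then augments the features with the classifier's own output maps.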
A two-step FEM-SEM approach for wave propagation analysis in cable structures
NASA Astrophysics Data System (ADS)
Zhang, Songhan; Shen, Ruili; Wang, Tao; De Roeck, Guido; Lombaert, Geert
2018-02-01
Vibration-based methods are among the most widely studied in structural health monitoring (SHM). It is well known, however, that the low-order modes, characterizing the global dynamic behaviour of structures, are relatively insensitive to local damage. Such local damage may be easier to detect by methods based on wave propagation which involve local high frequency behaviour. The present work considers the numerical analysis of wave propagation in cables. A two-step approach is proposed which allows taking into account the cable sag and the distribution of the axial forces in the wave propagation analysis. In the first step, the static deformation and internal forces are obtained by the finite element method (FEM), taking into account geometric nonlinear effects. In the second step, the results from the static analysis are used to define the initial state of the dynamic analysis which is performed by means of the spectral element method (SEM). The use of the SEM in the second step of the analysis allows for a significant reduction in computational costs as compared to a FE analysis. This methodology is first verified by means of a full FE analysis for a single stretched cable. Next, simulations are made to study the effects of damage in a single stretched cable and a cable-supported truss. The results of the simulations show how damage significantly affects the high frequency response, confirming the potential of wave propagation based methods for SHM.
Tweaked residual convolutional network for face alignment
NASA Astrophysics Data System (ADS)
Du, Wenchao; Li, Ke; Zhao, Qijun; Zhang, Yi; Chen, Hu
2017-08-01
We propose a novel Tweaked Residual Convolutional Network approach for face alignment with a two-level convolutional network architecture. Specifically, the first-level Tweaked Convolutional Network (TCN) module quickly produces a sufficiently accurate preliminary landmark prediction by taking a low-resolution version of the detected face holistically as input. The subsequent Residual Convolutional Network (RCN) module progressively refines each landmark by taking as input the local patch extracted around the predicted landmark, which allows the Convolutional Neural Network (CNN) to extract local shape-indexed features to fine-tune the landmark position. Extensive evaluations show that the proposed Tweaked Residual Convolutional Network approach outperforms existing methods.
A new phase-correlation-based iris matching for degraded images.
Krichen, Emine; Garcia-Salicetti, Sonia; Dorizzi, Bernadette
2009-08-01
In this paper, we present a new phase-correlation-based iris matching approach to deal with degradations in iris images due to unconstrained acquisition procedures. Our matching system is a fusion of global and local Gabor phase-correlation schemes. The main originality of our local approach is that we consider not only the correlation peak amplitudes but also their locations in different regions of the images. Results on several degraded databases, namely the CASIA-BIOSECURE and Iris Challenge Evaluation 2005 databases, show the improvement of our method compared to two available reference systems, Masek and Open Source for IRIS (OSIRIS), in verification mode.
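The global scheme builds on plain FFT phase correlation, whose peak location encodes the translation between two images. A minimal sketch on random data, without the Gabor filtering or the region-wise fusion of the paper:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular shift between two equally sized images by phase
    correlation: the inverse FFT of the normalized cross-power spectrum peaks
    at the translation. Both the peak amplitude and its location are returned,
    the two quantities the local scheme in the abstract exploits."""
    F, G = np.fft.fft2(a), np.fft.fft2(b)
    R = F * np.conj(G)
    R /= np.abs(R) + 1e-12                 # keep phase information only
    corr = np.fft.ifft2(R).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(v) for v in peak), float(corr.max())

rng = np.random.default_rng(2)
a = rng.random((64, 64))
b = np.roll(a, shift=(5, 12), axis=(0, 1))   # b is a shifted by (5, 12)
shift, amp = phase_correlation(b, a)
print(shift)   # -> (5, 12)
```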
Organic Chemistry and the Native Plants of the Sonoran Desert: Conversion of Jojoba Oil to Biodiesel
ERIC Educational Resources Information Center
Daconta, Lisa V.; Minger, Timothy; Nedelkova, Valentina; Zikopoulos, John N.
2015-01-01
A new, general approach to the organic chemistry laboratory is introduced that is based on learning about organic chemistry techniques and research methods by exploring the natural products found in local native plants. As an example of this approach for the Sonoran desert region, the extraction of jojoba oil and its transesterification to…
NASA Astrophysics Data System (ADS)
Wang, Yongbo; Sheng, Yehua; Lu, Guonian; Tian, Peng; Zhang, Kai
2008-04-01
Surface reconstruction is an important task in the fields of 3D GIS, computer-aided design and computer graphics (CAD & CG), virtual simulation, and so on. Building on available incremental surface reconstruction methods, a feature-constrained surface reconstruction approach for point clouds is presented. First, features are extracted from the point cloud using curvature extremes and a minimum spanning tree. By projecting local sample points onto fitted tangent planes and using the extracted features to guide and constrain the processes of local triangulation and surface propagation, the topological relationships among sample points are established. For the constructed models, a process of consistent normal adjustment and regularization is then applied to adjust the normal of each face so that a correct surface model is obtained. Experiments show that the presented approach inherits the convenient implementation and high efficiency of traditional incremental surface reconstruction while avoiding improper propagation of normals across sharp edges, which greatly improves the applicability of incremental surface reconstruction. Moreover, an appropriate k-neighborhood helps recognize insufficiently sampled areas and boundary parts, so the presented approach can be used to reconstruct both open and closed surfaces without additional intervention.
Accurate traveltime computation in complex anisotropic media with discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Le Bouteiller, P.; Benjemaa, M.; Métivier, L.; Virieux, J.
2017-12-01
Traveltime computation is of major interest for a large range of geophysical applications, among which are source localization and characterization, phase identification, data windowing and tomography, from the decametric scale up to the global Earth scale. Ray-tracing tools, being essentially 1D Lagrangian integration along a path, have been used for their efficiency but present some drawbacks, such as a rather difficult control of the medium sampling. Moreover, they do not provide answers in shadow zones. Eikonal solvers, based on an Eulerian approach, attracted attention in seismology with the pioneering work of Vidale (1988), although such an approach had been proposed earlier by Riznichenko (1946). They are now used for first-arrival traveltime tomography at various scales (Podvin & Lecomte, 1991). The framework for solving this non-linear partial differential equation is now well understood, and various finite-difference approaches have been proposed, essentially for smooth media. We propose a novel finite-element approach which builds a precise solution for strongly heterogeneous anisotropic media (still within the limits of Eikonal validity). The discontinuous Galerkin method we have developed allows local refinement of the mesh and locally high orders of interpolation inside elements. High precision of the traveltimes and their spatial derivatives is obtained through this formulation. This finite-element method also honors boundary conditions, such as complex topographies and absorbing boundaries mimicking an infinite medium. Applications are expected in traveltime tomography and slope tomography, but also in migration and take-off angle estimation, thanks to the accuracy obtained when computing first-arrival times.
References: Podvin, P. and Lecomte, I., 1991. Finite difference computation of traveltimes in very contrasted velocity models: a massively parallel approach and its associated tools, Geophys. J. Int., 105, 271-284. Riznichenko, Y., 1946. Geometrical seismics of layered media, Trudy Inst. Theor. Geophysics, Vol. II, Moscow (in Russian). Vidale, J., 1988. Finite-difference calculation of travel times, Bull. Seism. Soc. Am., 78, 2062-2076.
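For intuition about Eulerian first-arrival computation, a traveltime field can be obtained crudely with a Dijkstra-style shortest-path sweep over a grid. This is a sketch only, on a 4-connected graph with unit spacing; it is far less accurate than the finite-difference or discontinuous Galerkin eikonal solvers discussed above:

```python
import heapq

def first_arrival(slowness, src):
    """First-arrival traveltimes on a 2D grid by a Dijkstra-style sweep.
    `slowness` is a nested list of 1/velocity values; unit grid spacing and
    4-connectivity are assumed, so rays are restricted to axis directions."""
    ny, nx = len(slowness), len(slowness[0])
    t = [[float("inf")] * nx for _ in range(ny)]
    t[src[0]][src[1]] = 0.0
    pq = [(0.0, src)]
    while pq:
        tt, (i, j) = heapq.heappop(pq)
        if tt > t[i][j]:
            continue                      # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx:
                # edge traveltime: average slowness of the two cells
                cand = tt + 0.5 * (slowness[i][j] + slowness[ni][nj])
                if cand < t[ni][nj]:
                    t[ni][nj] = cand
                    heapq.heappush(pq, (cand, (ni, nj)))
    return t

# homogeneous medium with velocity 2 (slowness 0.5), source at a corner
grid = [[0.5] * 5 for _ in range(5)]
times = first_arrival(grid, (0, 0))
print(times[0][4])   # -> 2.0  (4 unit steps at slowness 0.5)
```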
Hybridized Multiscale Discontinuous Galerkin Methods for Multiphysics
2015-09-14
Our approach combines the hybridizable discontinuous Galerkin methodology with local approximation spaces enriched by precomputed phases, which are solutions of the eikonal equation, to approximate solutions of the Helmholtz equation over a very wide range of wave frequencies.
NASA Astrophysics Data System (ADS)
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. 
Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500 times faster. The method performs best in conjunction with large and flexible basis sets. These results open the way for large-scale chemical applications.
Beichel, Reinhard R; Van Tol, Markus; Ulrich, Ethan J; Bauer, Christian; Chang, Tangel; Plichta, Kristin A; Smith, Brian J; Sunderland, John J; Graham, Michael M; Sonka, Milan; Buatti, John M
2016-06-01
The purpose of this work was to develop, validate, and compare a highly computer-aided method for the segmentation of hot lesions in head and neck 18F-FDG PET scans. A semiautomated segmentation method was developed, which transforms the segmentation problem into a graph-based optimization problem. For this purpose, a graph structure around a user-provided approximate lesion centerpoint is constructed and a suitable cost function is derived based on local image statistics. To handle frequently occurring situations that are ambiguous (e.g., lesions adjacent to each other versus lesion with inhomogeneous uptake), several segmentation modes are introduced that adapt the behavior of the base algorithm accordingly. In addition, the authors present approaches for the efficient interactive local and global refinement of initial segmentations that are based on the "just-enough-interaction" principle. For method validation, 60 PET/CT scans from 59 different subjects with 230 head and neck lesions were utilized. All patients had squamous cell carcinoma of the head and neck. A detailed comparison with the current clinically relevant standard manual segmentation approach was performed based on 2760 segmentations produced by three experts. Segmentation accuracy measured by the Dice coefficient of the proposed semiautomated and standard manual segmentation approach was 0.766 and 0.764, respectively. This difference was not statistically significant (p = 0.2145). However, the intra- and interoperator standard deviations were significantly lower for the semiautomated method. In addition, the proposed method was found to be significantly faster and resulted in significantly higher intra- and interoperator segmentation agreement when compared to the manual segmentation approach. Lack of consistency in tumor definition is a critical barrier for radiation treatment targeting as well as for response assessment in clinical trials and in clinical oncology decision-making. 
The properties of the authors' approach make it well suited for applications in image-guided radiation oncology, response assessment, and treatment outcome prediction.
Medical Image Compression Based on Vector Quantization with Variable Block Sizes in Wavelet Domain
Jiang, Huiyan; Ma, Zhiyuan; Hu, Yang; Yang, Benqiang; Zhang, Libo
2012-01-01
An optimized medical image compression algorithm based on wavelet transform and improved vector quantization is introduced. The goal of the proposed method is to maintain the diagnostically relevant information of the medical image at a high compression ratio. Wavelet transformation was first applied to the image. For the lowest-frequency subband of wavelet coefficients, a lossless compression method was exploited; for each of the high-frequency subbands, an optimized vector quantization with variable block size was implemented. In the novel vector quantization method, the local fractal dimension (LFD) was used to analyze the local complexity of each wavelet-coefficient subband. Then an optimal quadtree method was employed to partition each wavelet-coefficient subband into sub-blocks of several sizes. After that, a modified K-means approach based on an energy function was used in the codebook training phase. Finally, vector quantization coding was implemented on the different types of sub-blocks. In order to verify the effectiveness of the proposed algorithm, JPEG, JPEG2000, and a fractal coding approach were chosen as contrast algorithms. Experimental results show that the proposed method can improve compression performance and achieve a balance between the compression ratio and the visual quality of the image. PMID:23049544
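The codebook training step can be illustrated with plain K-means. This is a baseline sketch only: the paper's modified K-means uses an energy-based function, and the block sizes are chosen per subband via the local fractal dimension, neither of which is reproduced here:

```python
import numpy as np

def train_codebook(vectors, k, iters=20, seed=0):
    """Plain K-means codebook training for vector quantization: each training
    vector (a flattened coefficient block) is assigned to its nearest
    codeword, and each codeword moves to the centroid of its cluster."""
    rng = np.random.default_rng(seed)
    codebook = vectors[rng.choice(len(vectors), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)                 # nearest-codeword assignment
        for j in range(k):
            members = vectors[labels == j]
            if len(members):
                codebook[j] = members.mean(axis=0)  # centroid update
    # final assignment against the trained codebook
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    return codebook, d.argmin(axis=1)

rng = np.random.default_rng(3)
blocks = rng.random((200, 16))            # 200 flattened 4x4 coefficient blocks
codebook, labels = train_codebook(blocks, k=8)
# each block is now coded by a 3-bit index instead of 16 floats
```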
Damage identification using inverse methods.
Friswell, Michael I
2007-02-15
This paper gives an overview of the use of inverse methods in damage detection and location, using measured vibration data. Inverse problems require the use of a model and the identification of uncertain parameters of this model. Damage is often local in nature and although the effect of the loss of stiffness may require only a small number of parameters, the lack of knowledge of the location means that a large number of candidate parameters must be included. This paper discusses a number of problems that exist with this approach to health monitoring, including modelling error, environmental effects, damage localization and regularization.
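A standard remedy for the ill-posedness mentioned above is Tikhonov regularization of the parameter-identification least-squares problem. A minimal sketch, with an invented sensitivity matrix whose near-collinear columns mimic two candidate damage parameters with almost identical effect:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares, min ||Ax - b||^2 + lam*||x||^2,
    solved via the normal equations: x = (A^T A + lam*I)^(-1) A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

rng = np.random.default_rng(4)
A = rng.random((20, 5))                      # hypothetical sensitivity matrix
A[:, 4] = A[:, 3] + 1e-8 * rng.random(20)    # near-collinear columns: ill-posed
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0])
b = A @ x_true + 1e-3 * rng.random(20)       # measurements with noise
x = tikhonov(A, b, lam=1e-2)                 # stable despite the collinearity
```

Plain least squares would split an arbitrarily large cancelling pair between the two collinear parameters; the penalty term keeps the estimate bounded.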
Analytic prediction of unconfined boundary layer flashback limits in premixed hydrogen-air flames
NASA Astrophysics Data System (ADS)
Hoferichter, Vera; Hirsch, Christoph; Sattelmayer, Thomas
2017-05-01
Flame flashback is a major challenge in premixed combustion. Hence, the prediction of the minimum flow velocity to prevent boundary layer flashback is of high technical interest. This paper presents an analytic approach to predicting boundary layer flashback limits for channel and tube burners. The model reflects the experimentally observed flashback mechanism and consists of a local and global analysis. Based on the local analysis, the flow velocity at flashback initiation is obtained depending on flame angle and local turbulent burning velocity. The local turbulent burning velocity is calculated in accordance with a predictive model for boundary layer flashback limits of duct-confined flames presented by the authors in an earlier publication. This ensures consistency of both models. The flame angle of the stable flame near flashback conditions can be obtained by various methods. In this study, an approach based on global mass conservation is applied and is validated using Mie-scattering images from a channel burner test rig at ambient conditions. The predicted flashback limits are compared to experimental results and to literature data from preheated tube burner experiments. Finally, a method for including the effect of burner exit temperature is demonstrated and used to explain the discrepancies in flashback limits obtained from different burner configurations reported in the literature.
NASA Astrophysics Data System (ADS)
Fitrinitia, I. S.; Junadi, P.; Sutanto, E.; Nugroho, D. A.; Zubair, A.; Suyanti, E.
2018-03-01
Located in a tropical area, cities in Indonesia are vulnerable to hydrometeorological risks such as floods and landslides and are thus prone to the effects of climate change. Moreover, peri-urban cities carry a double burden: the consequences of the main city's spillover and a lack of urban facilities for overcoming disasters. From another perspective, a city has many alternative resources with which to recover, creating urban resiliency. Depok city is the case study of this research, given its development following the growth of Jakarta. This research aims to capture how local city dwellers anticipate and adapt to floods and landslides with their own versions of mitigation. Through mixed methods and spatial analysis using GIS techniques, two approaches are compared: the normative approach and the alternative approach practiced by city dwellers. Spatial analysis is used to obtain a big picture of Depok and its environmental change. The city dwellers are represented by four local community groups, chosen according to settlement characteristics and level of risk. The results show that the type and characteristics of a settlement influence local adaptive capacity, from the establishment of infrastructure to the fulfillment of health needs and social livelihoods, through different kinds of methods.
Rupture Predictions of Notched Ti-6Al-4V Using Local Approaches
Peron, Mirco; Berto, Filippo
2018-01-01
Ti-6Al-4V has been extensively used in structural applications in various engineering fields, from naval to automotive and from aerospace to biomedical. Structural applications are characterized by geometrical discontinuities such as notches, which are widely known to harmfully affect tensile strength. In recent years, many attempts have been made to define solid criteria with which to reliably predict the tensile strength of materials. Among these criteria, two local approaches are worth mentioning due to the accuracy of their predictions, i.e., the strain energy density (SED) approach and the theory of critical distances (TCD) method. In this manuscript, the robustness of these two methods in predicting the tensile behavior of notched Ti-6Al-4V specimens has been compared. To this aim, two very dissimilar notch geometries have been tested, i.e., a semi-circular notch and a blunt V-notch with a notch root radius equal to 1 mm, and the experimental results have been compared with those predicted by the two models. The experimental values were estimated with low discrepancies by both the SED approach and the TCD method, but the former yields better predictions: the deviations for the SED are lower than 1.3%, while the TCD provides predictions with errors of up to almost 8.5%. Finally, the weaknesses and strengths of the two models are reported. PMID:29693565
A Novel Real-Time Reference Key Frame Scan Matching Method
Mohamed, Haytham; Moussa, Adel; Elhabiby, Mohamed; El-Sheimy, Naser; Sesay, Abu
2017-01-01
Unmanned aerial vehicles (UAVs) represent an effective technology for indoor search and rescue operations. Typically, most indoor mission environments are unknown, unstructured, and/or dynamic. Navigation of UAVs in such environments is addressed by simultaneous localization and mapping, using either local or global scan matching approaches. Both approaches suffer from accumulated errors and high processing time due to the iterative nature of the scan matching method. Moreover, point-to-point scan matching is prone to outlier associations. This paper proposes a low-cost novel method for 2D real-time scan matching based on a reference key frame (RKF). RKF is a hybrid scan matching technique comprising feature-to-feature and point-to-point approaches. The algorithm aims at mitigating error accumulation using the key frame technique, which is inspired by the video streaming broadcast process. It falls back on the iterative closest point (ICP) algorithm when linear features are lacking, as is typical of unstructured environments, and switches back to the RKF once linear features are detected. To validate and evaluate the algorithm, its mapping performance and time consumption are compared with those of various algorithms in static and dynamic environments. The algorithm exhibits promising navigation and mapping results with very short computational times, indicating its potential for use in real-time systems. PMID:28481285
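The point-to-point alignment at the heart of ICP-style scan matching can be written in closed form once correspondences are fixed. A pure-Python 2D sketch of that inner step (not the paper's RKF implementation); real ICP iterates it, re-establishing nearest-neighbour correspondences each round:

```python
import math

def align_2d(src, dst):
    """Find the rigid rotation + translation mapping src onto dst,
    assuming point correspondences are already known (the inner
    step of ICP). Returns (theta, tx, ty)."""
    n = len(src)
    cx_s = sum(p[0] for p in src) / n; cy_s = sum(p[1] for p in src) / n
    cx_d = sum(p[0] for p in dst) / n; cy_d = sum(p[1] for p in dst) / n
    # Optimal rotation angle from the centered cross terms
    s_sin = s_cos = 0.0
    for (xs, ys), (xd, yd) in zip(src, dst):
        xs -= cx_s; ys -= cy_s; xd -= cx_d; yd -= cy_d
        s_cos += xs * xd + ys * yd
        s_sin += xs * yd - ys * xd
    theta = math.atan2(s_sin, s_cos)
    c, s = math.cos(theta), math.sin(theta)
    tx = cx_d - (c * cx_s - s * cy_s)
    ty = cy_d - (s * cx_s + c * cy_s)
    return theta, tx, ty

# A scan rotated by 30 degrees and shifted should be recovered exactly
scan = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 1.0)]
a = math.radians(30.0)
moved = [(math.cos(a) * x - math.sin(a) * y + 0.5,
          math.sin(a) * x + math.cos(a) * y - 1.0) for x, y in scan]
theta, tx, ty = align_2d(scan, moved)
print(round(math.degrees(theta), 6), round(tx, 6), round(ty, 6))  # 30.0 0.5 -1.0
```

The outlier-association problem the abstract mentions arises when the nearest-neighbour pairing feeding this step is wrong; the hybrid feature-to-feature stage is one way to make the pairing more robust.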
NASA Astrophysics Data System (ADS)
Habtu, Solomon; Ludi, Eva; Jamin, Jean Yves; Oates, Naomi; Fissahaye Yohannes, Degol
2014-05-01
Practicing various innovations pertinent to irrigated farming at the local field scale is instrumental in increasing productivity and yields for smallholder farmers in Africa. However, translating innovations from the local scale to the scale of a jointly operated irrigation scheme is far from trivial. It requires insight into the drivers for adoption of local innovations within the wider farming communities. Participatory methods are expected not only to improve the acceptance of locally developed innovations within those communities, but also to allow an estimate of the extent to which changes will occur across the entire irrigation scheme. On this basis, more realistic scenarios of future water productivity within a smallholder-operated irrigation scheme can be estimated. An initial participatory problem and innovation appraisal was conducted in the Gumselassa small-scale irrigation scheme, Ethiopia, from 27 February to 3 March 2012 as part of the EC-funded EAU4FOOD project. The objective was to identify and appraise problems that hinder sustainable water management, so as to enhance production and productivity, and to identify future research strategies. Workshops were conducted at both the local (Community of Practice) and regional (Learning Practice Alliance) levels. At the local level, intensive collaboration with farmers using participatory methods produced problem trees, and a "photo safari" documented a range of problems that negatively impact productive irrigated farming. A range of participatory methods was also used to identify local innovations. At the regional level, a learning platform was established that includes a wide range of stakeholders (technical experts from various government ministries, policy makers, farmers, extension agents, and researchers). This stakeholder group carried out similar exercises to identify the major problems of irrigated smallholder farming and the innovations already identified.
Both groups identified similar constraints on productive smallholder irrigation: soil nutrient depletion, salinization, disease and pests resulting from inefficient irrigation practices, and infrastructure problems leading to a reduction in the size of the command area and a decrease in reservoir volume. The major causes were poor irrigation infrastructure, poor on-farm soil and water management, the prevalence of various crop pests and diseases, lack of inputs, and reservoir siltation. On-farm participatory research focusing on soil, crop, and water management, including technical, institutional, and managerial aspects, was recommended to identify the best-performing innovations while taking care of the environment. Currently, a range of interlinked activities is implemented at multiple scales, combining participatory and scientific approaches towards innovation development and the up-scaling of promising technologies and of institutional and managerial approaches from local to regional scales. Keywords: irrigation scheme, productivity, innovation, participatory method, Gumselassa, Ethiopia
High Dynamic Range Imaging Using Multiple Exposures
NASA Astrophysics Data System (ADS)
Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei
2017-06-01
It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. Finally, a novel local tone mapping (TM) operator is proposed for displaying the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of the histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method, which outperforms comparable methods in terms of imaging quality.
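The weighted summation of radiance maps can be sketched per pixel in the classic multi-exposure style. A hedged sketch assuming a linear camera (so the inverse response g reduces to a logarithm) and a simple hat weighting; the paper's polynomial CRF and its weighting function are not specified in the abstract:

```python
import math

def hat_weight(z, z_min=0, z_max=255):
    """Simple hat weighting: trust mid-range pixel values most,
    distrust values near under- and over-exposure."""
    mid = 0.5 * (z_min + z_max)
    return (z - z_min) if z <= mid else (z_max - z)

def fuse_radiance(pixels, exposures, g=math.log):
    """Weighted summation of per-exposure radiance estimates for one
    pixel: ln E = sum w(z) * (g(z) - ln t) / sum w(z).
    `g` is the (assumed already recovered) inverse camera response;
    here the camera is taken as linear, so g = ln."""
    num = den = 0.0
    for z, t in zip(pixels, exposures):
        w = hat_weight(z)
        num += w * (g(z) - math.log(t))
        den += w
    return math.exp(num / den)

# One pixel observed under three exposure times (linear camera):
# every exposure implies the same radiance, so the fusion returns it.
E = fuse_radiance([40, 80, 160], [0.25, 0.5, 1.0])
print(round(E, 2))  # 160.0
```

Running this over every pixel yields the HDR radiance map that the tone-mapping operator then compresses for display.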
Liang, Yunyun; Liu, Sanyang; Zhang, Shengli
2016-12-01
Apoptosis, or programmed cell death, plays a central role in the development and homeostasis of an organism. Information on the subcellular location of apoptosis proteins is very helpful for understanding the apoptosis mechanism. Predicting the subcellular localization of an apoptosis protein remains a challenging task, and existing methods are mainly based on protein primary sequences. In this paper, we introduce a new position-specific scoring matrix (PSSM)-based method that uses the detrended cross-correlation (DCCA) coefficient computed over non-overlapping windows. A 190-dimensional (190D) feature vector is constructed on two widely used datasets, CL317 and ZD98, and a support vector machine is adopted as the classifier. To evaluate the proposed method, objective and rigorous jackknife cross-validation tests are performed on the two datasets. The results show that our approach offers a novel and reliable PSSM-based tool for predicting apoptosis protein subcellular localization. Copyright © 2016 Elsevier Inc. All rights reserved.
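The windowed DCCA coefficient can be sketched as follows, assuming the common definition (profile integration, linear detrending per window); how the paper applies it to PSSM columns is not specified in the abstract:

```python
def _profile(x):
    """Integrated profile: cumulative sum of deviations from the mean."""
    mean = sum(x) / len(x)
    out, acc = [], 0.0
    for v in x:
        acc += v - mean
        out.append(acc)
    return out

def _linfit_residuals(y):
    """Residuals of an ordinary least-squares line through y[0..n-1]."""
    n = len(y)
    xs = list(range(n))
    mx = sum(xs) / n; my = sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, y))
    sxx = sum((xi - mx) ** 2 for xi in xs)
    b = sxy / sxx
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(xs, y)]

def dcca_coefficient(x, y, s):
    """Detrended cross-correlation coefficient over non-overlapping
    windows of size s: rho = F2_xy / sqrt(F2_xx * F2_yy)."""
    px, py = _profile(x), _profile(y)
    f_xy = f_xx = f_yy = 0.0
    for start in range(0, len(x) - s + 1, s):
        rx = _linfit_residuals(px[start:start + s])
        ry = _linfit_residuals(py[start:start + s])
        f_xy += sum(a * b for a, b in zip(rx, ry))
        f_xx += sum(a * a for a in rx)
        f_yy += sum(b * b for b in ry)
    return f_xy / (f_xx * f_yy) ** 0.5

x = [0.1, 0.5, -0.3, 0.8, -0.2, 0.4, 0.0, -0.6]
print(round(dcca_coefficient(x, x, 4), 6))                # 1.0 (identical series)
print(round(dcca_coefficient(x, [-v for v in x], 4), 6))  # -1.0 (anti-correlated)
```

Values in [-1, 1] computed between pairs of PSSM-derived series would then populate the feature vector fed to the SVM.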
NASA Astrophysics Data System (ADS)
Miyama, Masamichi J.; Hukushima, Koji
2018-04-01
A sparse modeling approach is proposed for analyzing scanning tunneling microscopy topography data, which contain numerous peaks originating from the electron density of surface atoms and/or impurities. The method, based on the relevance vector machine with L1 regularization and k-means clustering, enables separation of the peaks and peak center positioning with accuracy beyond the resolution of the measurement grid. The validity and efficiency of the proposed method are demonstrated using synthetic data in comparison with the conventional least-squares method. An application of the proposed method to experimental data of a metallic oxide thin-film clearly indicates the existence of defects and corresponding local lattice distortions.
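The clustering step that separates candidate peaks can be pictured with plain k-means (Lloyd's algorithm). A 1D pure-Python sketch for intuition only; the paper works on 2D topography data and combines k-means with an L1-regularized relevance vector machine to refine peak centers beyond the grid resolution:

```python
def kmeans_1d(points, centers, iters=50):
    """Plain Lloyd iterations: assign each candidate peak position to
    its nearest center, then move each center to its cluster mean."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

peaks = [0.9, 1.1, 1.0, 5.2, 4.8, 5.0]       # noisy candidate peak positions
centers = kmeans_1d(peaks, [0.0, 6.0])       # deliberately poor initial guesses
print(centers)                               # converges near [1.0, 5.0]
```

In the paper's setting, each recovered cluster center is then a sub-grid estimate of one atom or impurity position.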
Chen, Wen Hao; Yang, Sam Y. S.; Xiao, Ti Qiao; Mayo, Sherry C.; Wang, Yu Dan; Wang, Hai Peng
2014-01-01
Quantifying the three-dimensional spatial distributions of pores and material compositions in samples is a key materials-characterization challenge, particularly in samples where compositions are distributed across a range of length scales and have similar X-ray absorption properties, as in coal. Consequently, conventional approaches may not provide the desired resolution and level of detail within sub-regions of a multi-length-scale sample. Herein, an approach for quantitative, high-definition determination of material compositions from X-ray local computed tomography combined with data-constrained modelling is proposed. The approach can dramatically improve the spatial resolution, revealing finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is used to demonstrate the approach, with the optimal experimental parameters analyzed beforehand. The quantitative results demonstrate that the approach reveals significantly finer details of the compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to the three-dimensional compositional characterization of other materials. PMID:24763649
2008-01-01
Objective To use a common ethnographic study protocol across five countries to provide data to confirm social and risk settings and risk behaviors, develop the assessment instruments, tailor the intervention, design a process evaluation of the intervention, and design an understandable informed consent process. Design Methods determined best for capturing the core data elements were selected. Standards for data collection methods were established to enable comparable implementation of the ethnographic study across the five countries. Methods The methods selected were participant observation, focus groups, open-ended interviews, and social mapping. Standards included adhering to core data elements, number of participants, mode of data collection, type of data collection instrument, number of data collectors at each type of activity, duration of each type of activity, and type of informed consent administered. Sites had discretion in selecting which methods to use to obtain specific data. Results The ethnographic studies provided input to the Trial’s methods for data collection, described social groups in the target communities, depicted sexual practices, and determined core opinion leader characteristics; thus providing information that drove the adaptation of the intervention and facilitated the selection of venues, behavioral outcomes, and community popular opinion leaders (C-POLs). Conclusion The described rapid ethnographic approach worked well across the five countries, where findings allowed local adaptation of the intervention. When introducing the C-POL intervention in new areas, local non-governmental and governmental community and health workers can use this rapid ethnographic approach to identify the communities, social groups, messages, and C-POLs best suited for local implementation. PMID:17413263
Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte
2015-01-01
Mutagenesis is the only process by which unpredicted biological gene function can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental conditions could not be assembled at the time. Today, progress in macroalgal genomics and a judicious choice of suitable genetic models make the identification of mutated genes possible. This article presents a comparative study of two methods for identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were assembled, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how combining the two methods yields a narrower localization. The advantages and drawbacks of the two approaches, as well as their potential transfer to other macroalgae, are discussed. PMID:25745426
Żurek-Biesiada, Dominika; Szczurek, Aleksander T; Prakash, Kirti; Best, Gerrit; Mohana, Giriram K; Lee, Hyun-Keun; Roignant, Jean-Yves; Dobrucki, Jurek W; Cremer, Christoph; Birk, Udo
2016-06-01
Single-molecule localization microscopy (SMLM) is a recently emerged optical imaging method shown to achieve a resolution on the order of tens of nanometers in intact cells. Novel high-resolution imaging methods may be crucial for understanding how chromatin, a complex of DNA and proteins, is arranged in the eukaryotic cell nucleus. Such an approach, utilizing the switching of the fluorescent DNA-binding dye Vybrant® DyeCycle™ Violet, has previously been demonstrated by us (Żurek-Biesiada et al., 2015) [1]. Here we provide quantitative information on the influence of the chemical environment on the behavior of the dye, discuss the variability in the DNA-associated signal density, and demonstrate direct proof of enhanced structural resolution. Furthermore, we compare different visualization approaches. Finally, we describe various opportunities for multicolor DNA/SMLM imaging in eukaryotic cell nuclei. PMID:27054149
Continuum Model of Gas Uptake for Inhomogeneous Fluids
Ihm, Yungok; Cooper, Valentino R.; Vlcek, Lukas; ...
2017-07-20
We describe a continuum model of gas uptake for inhomogeneous fluids (CMGIF) and use it to predict fluid adsorption in porous materials directly from gas-substrate interaction energies determined by first-principles calculations or accurate effective force fields. The method uses a perturbation approach to correct bulk fluid interactions for local inhomogeneities caused by gas-substrate interactions, and predicts the local pressure and density of the adsorbed gas. The accuracy and limitations of the model are tested by comparison with the results of Grand Canonical Monte Carlo simulations of hydrogen uptake in metal-organic frameworks (MOFs). We show that the approach provides accurate predictions at room temperature, and at low temperatures for less strongly interacting materials. As a result, the speed of the CMGIF method makes it a promising candidate for high-throughput materials discovery in connection with existing databases of nano-porous materials.
Automated feature extraction in color retinal images by a model based approach.
Li, Huiqi; Chutatape, Opas
2004-02-01
Color retinal photography is an important tool for detecting evidence of various eye diseases. Novel methods to extract the main features in color retinal images have been developed in this paper. Principal component analysis is employed to locate the optic disk; a modified active shape model is proposed for detecting the shape of the optic disk; a fundus coordinate system is established to provide a better description of the features in retinal images; and an approach to detect exudates by combined region growing and edge detection is proposed. The success rates of disk localization, disk boundary detection, and fovea localization are 99%, 94%, and 100%, respectively. The sensitivity and specificity of exudate detection are 100% and 71%, respectively. The success of the proposed algorithms can be attributed to the use of model-based methods. The detection and analysis could be applied to automatic mass screening and diagnosis of retinal diseases.
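The region-growing half of the exudate detector can be sketched as a breadth-first flood fill from a seed pixel. A hedged sketch using a simple intensity-difference homogeneity test; the paper's actual growing criterion and its combination with edge detection are not detailed in the abstract:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed` (row, col), accepting 4-neighbours
    whose intensity differs from the seed value by at most `tol`."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    base = image[sy][sx]
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen
                    and abs(image[ny][nx] - base) <= tol):
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

img = [[10, 11, 50],
       [12, 13, 52],
       [49, 51, 53]]
region = region_grow(img, (0, 0), tol=5)
print(sorted(region))  # the four connected low-intensity pixels
```

In the combined scheme, edges detected separately act as barriers that stop the growth at lesion boundaries.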
Social Organization, Population, and Land Use*
Axinn, William G.; Ghimire, Dirgha J.
2011-01-01
We present a new approach to the investigation of human influences on environmental change that explicitly adds consideration of social organization. This approach identifies social organization as an influence on the environment that is independent of population size, affluence, and technology. The framework we present also identifies population events, such as births, that are likely to influence environmental outcomes beyond the consequences of population size. The theoretical framework we construct explains that explicit attention to social organization is necessary for micro-level investigation of the population-environment relationship because social organization influences both. We use newly available longitudinal, multilevel, mixed-method measures of local land use changes, local population dynamics, and social organization from the Nepalese Himalayas to provide empirical tests of this new framework. These tests reveal that measures of change in social organization are strongly associated with measures of change in land use, and that the association is independent of common measures of population size, affluence, and technology. Also, local birth events shape local land use changes and key proximate determinants of land use change. Together the empirical results demonstrate key new scientific opportunities arising from the approach we present. PMID:21876607
Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K
2016-07-12
We report the development and implementation of an energy decomposition analysis (EDA) scheme in the ONETEP linear-scaling electronic structure package. Our approach is hybrid as it combines the localized molecular orbital EDA (Su, P.; Li, H. J. Chem. Phys., 2009, 131, 014102) and the absolutely localized molecular orbital EDA (Khaliullin, R. Z.; et al. J. Phys. Chem. A, 2007, 111, 8753-8765) to partition the intermolecular interaction energy into chemically distinct components (electrostatic, exchange, correlation, Pauli repulsion, polarization, and charge transfer). Limitations shared in EDA approaches such as the issue of basis set dependence in polarization and charge transfer are discussed, and a remedy to this problem is proposed that exploits the strictly localized property of the ONETEP orbitals. Our method is validated on a range of complexes with interactions relevant to drug design. We demonstrate the capabilities for large-scale calculations with our approach on complexes of thrombin with an inhibitor comprised of up to 4975 atoms. Given the capability of ONETEP for large-scale calculations, such as on entire proteins, we expect that our EDA scheme can be applied in a large range of biomolecular problems, especially in the context of drug design.
Dynamics of local grid manipulations for internal flow problems
NASA Technical Reports Server (NTRS)
Eiseman, Peter R.; Snyder, Aaron; Choo, Yung K.
1991-01-01
The control point method of algebraic grid generation is briefly reviewed. The review proceeds from the general statement of the method in 2-D unencumbered by detailed mathematical formulation. The method is supported by an introspective discussion which provides the basis for confidence in the approach. The more complex 3-D formulation is then presented as a natural generalization. Application of the method is carried out through 2-D examples which demonstrate the technique.
Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction
NASA Astrophysics Data System (ADS)
Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun
2018-07-01
People are increasingly accustomed to taking photos of everyday life in modern cities and uploading them to major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. Geo-localization of crowd-sourced pictures enriches the information contained therein and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces substantial technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selection of reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographic information of the crowd-sourced pictures; compared with the previous method, the proposed method improves the median error from 256.7 m to 69.0 m and raises the percentage of query pictures geo-localized within a 50 m error from 17.2% to 43.2%. Regarding the causes of reconstruction error, we also find that shorter distances from the cameras to the main objects in query pictures tend to produce lower errors, and that the error component parallel to the road contributes more to the total error. The proposed method is not limited to small areas and could be extended to cities and larger regions owing to its flexible parameters.
Unsupervised Segmentation of Head Tissues from Multi-modal MR Images for EEG Source Localization.
Mahmood, Qaiser; Chodorowski, Artur; Mehnert, Andrew; Gellermann, Johanna; Persson, Mikael
2015-08-01
In this paper, we present and evaluate an automatic unsupervised segmentation method, the hierarchical segmentation approach (HSA) with Bayesian-based adaptive mean shift (BAMS), for use in the construction of patient-specific head conductivity models for electroencephalography (EEG) source localization. The method segments the tissues from multi-modal magnetic resonance (MR) head images. The proposed method was evaluated both directly, in terms of segmentation accuracy, and indirectly, in terms of source localization accuracy. The direct evaluation was performed relative to a commonly used reference method, the brain extraction tool (BET) followed by FMRIB's automated segmentation tool (FAST), and to four variants of the HSA, using both synthetic data and real data from ten subjects. The synthetic data include multiple realizations of four different noise levels and several realizations of typical noise with a 20% bias-field level. The Dice index and Hausdorff distance were used to measure segmentation accuracy. The indirect evaluation was performed relative to the reference method BET-FAST using synthetic two-dimensional (2D) multimodal MR data with 3% noise and synthetic EEG generated for a prescribed source. Source localization accuracy was determined in terms of localization error and relative error of potential. The experimental results demonstrate the efficacy of HSA-BAMS and its robustness to noise and the bias field, and show that it provides better segmentation accuracy than the reference method and the HSA variants. They also show that it leads to higher source localization accuracy than the commonly used reference method, suggesting its potential as a surrogate for expert manual segmentation in the EEG source localization problem.
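The Dice index used above to score segmentation accuracy has a one-line definition. A sketch over binary masks represented as sets of voxel coordinates:

```python
def dice(a, b):
    """Dice index between two binary masks given as coordinate sets:
    2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
    if not a and not b:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * len(a & b) / (len(a) + len(b))

seg = {(0, 0), (0, 1), (1, 0), (1, 1)}  # automatic segmentation
ref = {(0, 1), (1, 0), (1, 1), (2, 1)}  # reference mask
print(dice(seg, ref))  # 0.75: 3 shared voxels out of 4 + 4
```

The Hausdorff distance complements it by measuring the worst-case boundary disagreement rather than volume overlap.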
A high-resolution computational localization method for transcranial magnetic stimulation mapping.
Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro
2018-05-15
Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain prevents determination of the exact stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing the computed non-uniform current distributions in the brain over multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of the patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct electric stimulation (DES) and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of the so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by the proposed method and by DES were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was thus validated by the well-known position of the "hand-knob" for the non-affected hemisphere, and by the hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.
A global approach for using kinematic redundancy to minimize base reactions of manipulators
NASA Technical Reports Server (NTRS)
Chung, C. L.; Desa, S.
1989-01-01
An important consideration in the use of manipulators in microgravity environments is the minimization of the base reactions, i.e. the magnitude of the force and the moment exerted by the manipulator on its base as it performs its tasks. One approach which was proposed and implemented is to use the redundant degree of freedom in a kinematically redundant manipulator to plan manipulator trajectories to minimize base reactions. A global approach was developed for minimizing the magnitude of the base reactions for kinematically redundant manipulators which integrates the Partitioned Jacobian method of redundancy resolution, a 4-3-4 joint-trajectory representation and the minimization of a cost function which is the time-integral of the magnitude of the base reactions. The global approach was also compared with a local approach developed earlier for the case of point-to-point motion of a three degree-of-freedom planar manipulator with one redundant degree-of-freedom. The results show that the global approach is more effective in reducing and smoothing the base force while the local approach is superior in reducing the base moment.
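The cost function minimized by the global approach, the time-integral of the base-reaction magnitude, can be approximated numerically from sampled reactions. A sketch for the force part only (the full cost also involves the base moment), using the trapezoid rule:

```python
def base_reaction_cost(forces, dt):
    """Approximate the time-integral of the base-reaction force
    magnitude with the trapezoid rule. `forces` is a sequence of
    (Fx, Fy) base-force samples taken every `dt` seconds."""
    mags = [(fx * fx + fy * fy) ** 0.5 for fx, fy in forces]
    # Trapezoid rule: full weight on interior samples, half on the ends
    return dt * (sum(mags) - 0.5 * (mags[0] + mags[-1]))

# A constant 5 N force (3-4-5 triangle) over 1 s integrates to 5 N*s
samples = [(3.0, 4.0)] * 11  # 11 samples at dt = 0.1 s
print(base_reaction_cost(samples, 0.1))  # 5.0
```

In the paper's formulation, this scalar cost is evaluated over candidate 4-3-4 joint trajectories and minimized with respect to the free parameters supplied by the redundant degree of freedom.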
D'Elia, Marta; Perego, Mauro; Bochev, Pavel B.; ...
2015-12-21
We develop and analyze an optimization-based method for the coupling of nonlocal and local diffusion problems with mixed volume constraints and boundary conditions. The approach formulates the coupling as a control problem where the states are the solutions of the nonlocal and local equations, the objective is to minimize their mismatch on the overlap of the nonlocal and local domains, and the controls are virtual volume constraints and boundary conditions. Under certain assumptions on the kernel functions, we prove that the resulting optimization problem is well-posed, and we discuss its implementation using Sandia's agile software components toolkit. The latter provides the groundwork for the development of engineering analysis tools, while numerical results for nonlocal diffusion in three dimensions illustrate key properties of the optimization-based coupling method.
Spatial reconstruction of single-cell gene expression data.
Satija, Rahul; Farrell, Jeffrey A; Gennert, David; Schier, Alexander F; Regev, Aviv
2015-05-01
Spatial localization is a key determinant of cellular fate and behavior, but methods for spatially resolved, transcriptome-wide gene expression profiling across complex tissues are lacking. RNA staining methods assay only a small number of transcripts, whereas single-cell RNA-seq, which measures global gene expression, separates cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos and generated a transcriptome-wide map of spatial patterning. We confirmed Seurat's accuracy using several experimental approaches, then used the strategy to identify a set of archetypal expression patterns and spatial markers. Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems.
Distributed traffic signal control using fuzzy logic
NASA Technical Reports Server (NTRS)
Chiu, Stephen
1992-01-01
We present a distributed approach to traffic signal control, where the signal timing parameters at a given intersection are adjusted as functions of the local traffic condition and of the signal timing parameters at adjacent intersections. Thus, the signal timing parameters evolve dynamically using only local information to improve traffic flow. This distributed approach provides for a fault-tolerant, highly responsive traffic management system. The signal timing at an intersection is defined by three parameters: cycle time, phase split, and offset. We use fuzzy decision rules to adjust these three parameters based only on local information. The amount of change in the timing parameters during each cycle is limited to a small fraction of the current parameters to ensure smooth transition. We show the effectiveness of this method through simulation of the traffic flow in a network of controlled intersections.
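The bounded per-cycle adjustment can be sketched with a toy two-rule fuzzy controller for one of the three parameters. All memberships and rule outputs here are illustrative assumptions, not the paper's rule base; the property retained from the text is that each update is clamped to a small fraction of the current value:

```python
def fuzzy_cycle_update(cycle, occupancy, max_frac=0.1):
    """Bounded per-cycle adjustment of the cycle time.
    `occupancy` in [0, 1] summarizes local detector data (assumed
    input). Membership in 'heavy' traffic pushes the cycle up,
    membership in 'light' pushes it down; the change is limited to
    max_frac of the current value for a smooth transition."""
    heavy = max(0.0, min(1.0, (occupancy - 0.3) / 0.4))  # ramp over 0.3..0.7
    light = 1.0 - heavy
    # Two rules defuzzified as a weighted mix of +max_frac and -max_frac
    delta = cycle * max_frac * (heavy - light)
    return cycle + delta

c = 60.0
print(fuzzy_cycle_update(c, 0.9))  # saturated 'heavy': +10% -> 66.0
print(fuzzy_cycle_update(c, 0.1))  # saturated 'light': -10% -> 54.0
```

A real controller of this kind would apply analogous bounded rules to phase split and offset, additionally conditioning on the timing parameters of adjacent intersections.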
Computing 3-D steady supersonic flow via a new Lagrangian approach
NASA Technical Reports Server (NTRS)
Loh, C. Y.; Liou, M.-S.
1993-01-01
The new Lagrangian method introduced by Loh and Hui (1990) is extended to 3-D steady supersonic flow computation. Details of the conservation form, the implementation of the local Riemann solver, and the Godunov and high-resolution TVD schemes are presented. The new approach is robust yet accurate, capable of handling complicated geometries and interactions between discontinuous waves. It retains all the advantages claimed for the 2-D method of Loh and Hui, e.g., crisp resolution of a slip surface (contact discontinuity) and automatic grid generation along the stream.
NASA Astrophysics Data System (ADS)
Squizzato, Stefania; Masiol, Mauro
2015-10-01
Air quality is influenced by meteorology at the meso- and synoptic scales. While local weather and mixing-layer dynamics mainly drive the dispersion of sources at small scales, long-range transport governs the movement of air masses over regional, transboundary, and even continental scales. Long-range transport may advect polluted air masses from hot-spots, increasing pollution levels at nearby or remote locations, or may further raise pollution levels where external air masses originate from other hot-spots. Knowledge of ground-wind circulation and potential long-range transport is therefore fundamental not only for evaluating how local or external sources may affect the air quality at a receptor site, but also for quantifying their contributions. This review focuses on establishing the relationships among PM2.5 sources, meteorological conditions, and air-mass origin in the Po Valley, one of the most polluted areas in Europe. We have chosen the results from a recent study carried out in Venice (eastern Po Valley) and analysed them using different statistical approaches to understand the external and local contributions of PM2.5 sources. External contributions were evaluated by applying trajectory statistical methods (TSMs) based on back-trajectory analysis, including (i) cluster analysis of back-trajectories, (ii) the potential source contribution function (PSCF), and (iii) concentration-weighted trajectories (CWT). Furthermore, the relationships between source contributions and ground-wind circulation patterns were investigated using (iv) cluster analysis of wind data and (v) the conditional probability function (CPF). Finally, local source contributions were estimated by applying Lenschow's approach. In summary, this integrated combination of techniques successfully identified both local and external sources of particulate matter pollution in a European hot-spot affected by severely degraded air quality.
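The conditional probability function (CPF) pairs each source-contribution sample with the concurrent wind direction. A sketch assuming the usual definition, the fraction of samples within a wind sector whose contribution exceeds a chosen threshold; sector widths and thresholds are analysis choices, not fixed by the method:

```python
def cpf(wind_dirs, contributions, sector, threshold):
    """Conditional probability function: P(contribution > threshold |
    wind from `sector`), with `sector` a (lo, hi) pair in degrees."""
    lo, hi = sector
    in_sector = [c for d, c in zip(wind_dirs, contributions)
                 if lo <= d < hi]
    if not in_sector:
        return 0.0  # no samples from this sector
    return sum(c > threshold for c in in_sector) / len(in_sector)

# Hypothetical hourly samples: wind direction (deg) and a source
# contribution (ug/m3) resolved by receptor modelling
dirs = [10, 20, 30, 200, 210, 220, 15, 205]
conc = [9.0, 8.5, 2.0, 1.0, 1.2, 0.8, 7.5, 0.9]
print(cpf(dirs, conc, (0, 90), threshold=5.0))     # 0.75: NE winds bring high values
print(cpf(dirs, conc, (180, 270), threshold=5.0))  # 0.0: SW winds do not
```

Sectors with high CPF values point toward the directions of likely local source areas, complementing the back-trajectory methods that resolve more distant origins.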
Training NOAA Staff on Effective Communication Methods with Local Climate Users
NASA Astrophysics Data System (ADS)
Timofeyeva, M. M.; Mayes, B.
2011-12-01
Since 2002, the NOAA National Weather Service (NWS) Climate Services Division (CSD) has offered training opportunities to NWS staff. As a result of the eight-year development of the training program, NWS offers three training courses and about 25 online distance-learning modules covering various climate topics: climate data and observations, climate variability and change, and NWS national and local climate products, their tools, skill, and interpretation. Leveraging climate information and expertise available at all NOAA line offices and partners allows delivery of the most advanced knowledge and is a critical aspect of the training program. One of the challenges NWS faces in providing local climate services is communicating highly technical scientific information effectively to local users. Addressing this challenge requires a well-trained, climate-literate workforce at the local level, capable of communicating NOAA climate products and services as well as providing climate-sensitive decision support. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically sound messages and approachable communication techniques, such as storytelling, are important in developing an engaged dialog between climate service providers and users. Several pilot projects NWS CSD conducted in the past year applied the NWS climate services training program to training events for NOAA technical user groups. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training professional user groups required tailoring the instruction to the potential applications of each group of users. 
Training technical users identified the following critical issues: (1) knowledge of the target audience's expectations, initial knowledge, and potential uses of climate information; (2) leveraging partnerships with climate services providers; and (3) applying the 3H training approach, where the first H stands for Head (trusted science), the second for Heart (make it easy), and the third for Hand (support with applications).
Nonlocal and Mixed-Locality Multiscale Finite Element Methods
Costa, Timothy B.; Bond, Stephen D.; Littlewood, David J.
2018-03-27
In many applications the resolution of small-scale heterogeneities remains a significant hurdle to robust and reliable predictive simulations. In particular, while material variability at the mesoscale plays a fundamental role in processes such as material failure, the resolution required to capture mechanisms at this scale is often computationally intractable. Multiscale methods aim to overcome this difficulty through judicious choice of a subscale problem and a robust manner of passing information between scales. One promising approach is the multiscale finite element method, which increases the fidelity of macroscale simulations by solving lower-scale problems that produce enriched multiscale basis functions. In this study, we present the first work toward application of the multiscale finite element method to the nonlocal peridynamic theory of solid mechanics. This is achieved within the context of a discontinuous Galerkin framework that facilitates the description of material discontinuities and does not assume the existence of spatial derivatives. Analysis of the resulting nonlocal multiscale finite element method is achieved using the ambulant Galerkin method, developed here with sufficient generality to allow for application to multiscale finite element methods for both local and nonlocal models that satisfy minimal assumptions. Finally, we conclude with preliminary results on a mixed-locality multiscale finite element method in which a nonlocal model is applied at the fine scale and a local model at the coarse scale.
Mass preserving registration for lung CT
NASA Astrophysics Data System (ADS)
Gorbunova, Vladlena; Lo, Pechin; Loeve, Martine; Tiddens, Harm A.; Sporring, Jon; Nielsen, Mads; de Bruijne, Marleen
2009-02-01
In this paper, we evaluate a novel image registration method on a set of expiratory-inspiratory pairs of computed tomography (CT) lung scans. A free-form multiresolution image registration technique is used to match two scans of the same subject. To account for the differences in lung intensities due to differences in inspiration level, we propose to adjust the intensity of lung tissue according to the local expansion or compression. An image registration method without intensity adjustment is compared to the proposed method. Both approaches are evaluated on a set of 10 pairs of expiration and inspiration CT scans of children with cystic fibrosis lung disease. The proposed method with mass-preserving adjustment results in significantly better alignment of the vessel trees. Analysis of local volume change for regions with trapped air compared to normally ventilated regions revealed larger differences between these regions in the case of mass-preserving image registration, indicating that mass-preserving registration is better at capturing localized differences in lung deformation.
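The mass-preserving idea can be sketched in a few lines: if a region of lung expands locally by a factor J (the Jacobian determinant of the deformation), its density, and hence its CT intensity above that of air, should drop by the same factor. This is a simplified scalar version under an assumed HU convention, not the paper's exact adjustment:

```python
import numpy as np

def mass_preserving_adjust(hu, jacobian_det, air_hu=-1000.0):
    """Scale CT intensities by the local volume change of the deformation
    so tissue mass (density x volume) is preserved. HU are shifted so that
    pure air (-1000 HU) corresponds to zero density; jacobian_det > 1
    means local expansion, which lowers the density."""
    density = hu - air_hu              # proportional to tissue density
    return density / jacobian_det + air_hu

# a voxel at -800 HU whose volume doubles moves halfway towards air density
out = mass_preserving_adjust(np.array([-800.0]), np.array([2.0]))  # -> -900 HU
```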
An Observationally-Centred Method to Quantify the Changing Shape of Local Temperature Distributions
NASA Astrophysics Data System (ADS)
Chapman, S. C.; Stainforth, D. A.; Watkins, N. W.
2014-12-01
For climate-sensitive decisions and adaptation planning, guidance on how local climate is changing is needed at the specific thresholds relevant to particular impacts or policy endeavours. This requires quantifying how the distributions of variables such as daily temperature are changing at specific quantiles. These temperature distributions are non-normal and vary both geographically and in time. We present a method[1,2] for analysing local climatic time series data to assess which quantiles of the local climatic distribution show the greatest and most robust changes. We have demonstrated this approach using the E-OBS gridded dataset[3], which consists of time series of local daily temperature across Europe over the last 60 years. Our method extracts the changing cumulative distribution function over time and uses a simple mathematical deconstruction of how the difference between two observations from two different time periods can be attributed to natural statistical variability, secular climate change, or a combination of the two. The change in temperature can be tracked at a temperature threshold, at a likelihood, or at a given return time, independently for each geographical location. Geographical correlations are thus an output of our method and reflect both climatic properties (local and synoptic) and spatial correlations inherent in the observation methodology. The output includes many regionally consistent patterns of response of potential value in adaptation planning. For instance, in a band from northern France to Denmark, the hottest days in the summer temperature distribution have seen changes of at least 2°C over a 43-year period, over four times the global mean change over the same period. We discuss methods to quantify the robustness of these observed sensitivities and their statistical likelihood. 
This approach also quantifies the level of detail at which one might wish to see agreement between climate models and observations if such models are to be used directly as tools to assess climate change impacts at local scales. [1] S C Chapman, D A Stainforth, N W Watkins, 2013, Phil. Trans. R. Soc. A, 371 20120287. [2] D A Stainforth, S C Chapman, N W Watkins, 2013, Environ. Res. Lett. 8, 034031 [3] Haylock, M.R. et al., 2008, J. Geophys. Res (Atmospheres), 113, D20119
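The core of the quantile-tracking idea described above can be sketched as follows: compare empirical quantiles of daily temperature between two periods, so that changes in the hot tail are measured separately from changes at the median. The toy data below (an affine "warm-and-widen" shift) are illustrative only:

```python
import numpy as np

def quantile_change(temps_period1, temps_period2, probs):
    """Change in the empirical temperature distribution between two
    periods, evaluated at fixed quantiles rather than at the mean."""
    return np.quantile(temps_period2, probs) - np.quantile(temps_period1, probs)

rng = np.random.default_rng(1)
early = rng.normal(20.0, 3.0, size=5000)   # daily summer maxima, period 1
late = 1.1 * early - 1.5                   # affine warm-and-widen shift
probs = np.array([0.05, 0.50, 0.95])
dq = quantile_change(early, late, probs)   # hottest days change the most
```

Because the shift is affine with positive slope, the change grows monotonically with the quantile here: the 95th percentile warms more than the median, mirroring the kind of tail-specific signal the method is designed to detect.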
Decentralized control experiments on NASA's flexible grid
NASA Technical Reports Server (NTRS)
Ozguner, U.; Yurkowich, S.; Martin, J., III; Al-Abbass, F.
1986-01-01
Methods arising from the area of decentralized control are emerging for analysis and control synthesis for large flexible structures. In this paper the control strategy involves a decentralized model-reference adaptive approach using variable structure control. Local models are formulated based on desired damping and response time in a model-following scheme for various modal configurations. Variable structure controllers are then designed employing co-located angular rate and position feedback. In this scheme, local control forces the system onto a local sliding mode in a local error space. An important feature of this approach is that the local subsystem is made insensitive to dynamical interactions with other subsystems once the sliding surface is reached. Experiments based on the above have been performed on NASA's flexible grid experimental apparatus. The grid is designed to admit appreciable low-frequency structural dynamics, and allows for implementation of distributed computing components, inertial sensors, and actuation devices. A finite-element analysis of the grid provides the model for control system design and simulation; results of several simulations are reported here, and a discussion of application experiments on the apparatus is presented.
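The local sliding-mode behaviour described above can be illustrated on the simplest possible subsystem, a double integrator. This is generic variable-structure control, not the grid experiment's actual controller; the gains and surface slope are arbitrary:

```python
import numpy as np

def simulate_local_sliding_mode(x0, v0, lam=2.0, k=5.0, dt=1e-3, steps=5000):
    """Variable-structure control of a double integrator x'' = u.
    The sliding surface is s = v + lam * x; the switching law
    u = -k * sign(s) drives the local error onto s = 0, after which it
    decays as x' = -lam * x regardless of bounded disturbances."""
    x, v = x0, v0
    for _ in range(steps):
        s = v + lam * x
        v += -k * np.sign(s) * dt   # bang-bang control update
        x += v * dt
    return x, v

x_end, v_end = simulate_local_sliding_mode(1.0, 0.0)
```

Once s = 0 is reached, the closed-loop error obeys x' = -lam*x, which is why the local subsystem becomes insensitive to bounded interactions with its neighbours.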
Working towards the SDGs: measuring resilience from a practitioner's perspective
NASA Astrophysics Data System (ADS)
van Manen, S. M.; Both, M.
2015-12-01
The broad universal nature of the SDGs requires integrated approaches across development sectors and action at a variety of scales, from global to local. In humanitarian and development contexts, particularly at the local level, working towards these goals is increasingly approached through the concept of resilience. Resilience is broadly defined as the ability to minimise the impact of, cope with and recover from the consequences of shocks and stresses, both natural and manmade, without compromising long-term prospects. Key in this are the physical resources required and the ability to organise these prior to and during a crisis. However, despite the active debate on the theoretical foundations of resilience, measurement approaches remain comparatively underdeveloped. The conceptual diversity of the few existing approaches further illustrates the complexity of operationalising the concept. Here we present a practical method to measure community resilience using a questionnaire composed of a generic set of household-level indicators. Rooted in the sustainable livelihoods approach, it considers six domains: human, social, natural, economic, physical and political, and evaluates both resources and socio-cognitive factors. It is intended to be combined with more specific intervention-based questionnaires to systematically assess, monitor and evaluate the resilience of a community and the contribution of specific activities to resilience. Its use is illustrated with a Haiti-based case study. The method presented supports knowledge-based decision making and impact monitoring. Furthermore, the evidence-based way of working contributes to accountability to a range of stakeholders and can be used for resource mobilisation. However, it should be noted that, owing to the inherent complexity and comprehensiveness of resilience, no method or combination of methods and data types can fully capture it across all of its facets, scales and domains.
Trajectory-based visual localization in underwater surveying missions.
Burguera, Antoni; Bonin-Font, Francisco; Oliver, Gabriel
2015-01-14
We present a new vision-based localization system applied to an autonomous underwater vehicle (AUV) with limited sensing and computation capabilities. Traditional EKF-SLAM approaches are usually expensive in terms of execution time; the approach presented in this paper strengthens this method by adopting a trajectory-based scheme that reduces the computational requirements. The pose of the vehicle is estimated using an extended Kalman filter (EKF), which predicts the vehicle motion by means of a visual odometer and corrects these predictions using the data associations (loop closures) between the current frame and the previous ones. One of the most important steps in this procedure is the image registration method, as it reinforces the data association and, thus, makes it possible to close loops reliably. Since the use of standard EKFs entails linearization errors that can distort the vehicle pose estimates, the approach has also been tested using an iterated extended Kalman filter (IEKF). Experiments have been conducted using a real underwater vehicle in controlled scenarios and in shallow sea waters, showing excellent performance with very small errors, both in the vehicle pose and in the overall trajectory estimates.
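A minimal version of such an estimator, a 2-D pose EKF with visual-odometry prediction and an absolute loop-closure correction, might look like the sketch below. The state, noise levels, and the direct pose observation are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def ekf_predict(mu, P, odom, Q):
    """Propagate pose (x, y, theta) with a visual-odometry increment
    (dx, dy, dtheta) given in the vehicle frame."""
    x, y, th = mu
    dx, dy, dth = odom
    c, s = np.cos(th), np.sin(th)
    mu_new = np.array([x + c * dx - s * dy, y + s * dx + c * dy, th + dth])
    F = np.array([[1.0, 0.0, -s * dx - c * dy],   # Jacobian of the motion model
                  [0.0, 1.0,  c * dx - s * dy],
                  [0.0, 0.0,  1.0]])
    return mu_new, F @ P @ F.T + Q

def ekf_update(mu, P, z, R):
    """Correct the pose with a loop-closure measurement of the full pose."""
    S = P + R                       # H = I for a direct pose observation
    K = P @ np.linalg.inv(S)
    return mu + K @ (z - mu), (np.eye(3) - K) @ P

mu, P = np.zeros(3), np.eye(3) * 1e-3
Q, R = np.eye(3) * 1e-4, np.eye(3) * 1e-2
for _ in range(10):                 # ten noiseless 10 cm forward steps
    mu, P = ekf_predict(mu, P, (0.1, 0.0, 0.0), Q)
mu, P = ekf_update(mu, P, np.array([1.0, 0.0, 0.0]), R)
```

In the trajectory-based scheme of the paper, the state would hold a history of poses rather than only the current one, which is what keeps loop closures cheap.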
Automatic detection and recognition of signs from natural scenes.
Chen, Xilin; Yang, Jie; Zhang, Jing; Waibel, Alex
2004-01-01
In this paper, we present an approach to automatic detection and recognition of signs from natural scenes, and its application to a sign translation task. The proposed approach embeds multiresolution and multiscale edge detection, adaptive searching, color analysis, and affine rectification in a hierarchical framework for sign detection, with different emphases at each phase to handle the text in different sizes, orientations, color distributions and backgrounds. We use affine rectification to recover deformation of the text regions caused by an inappropriate camera view angle. The procedure can significantly improve text detection rate and optical character recognition (OCR) accuracy. Instead of using binary information for OCR, we extract features from an intensity image directly. We propose a local intensity normalization method to effectively handle lighting variations, followed by a Gabor transform to obtain local features, and finally a linear discriminant analysis (LDA) method for feature selection. We have applied the approach in developing a Chinese sign translation system, which can automatically detect and recognize Chinese signs as input from a camera, and translate the recognized text into English.
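The local intensity normalization step can be sketched directly: each pixel is standardized by the mean and standard deviation of its surrounding window, which removes additive lighting changes before Gabor features are computed. The window size and padding mode are assumptions, not the paper's exact settings:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def local_intensity_normalize(img, win=15, eps=1e-6):
    """Standardize each pixel by the mean and standard deviation of its
    local window, removing additive (and damping multiplicative)
    lighting variation before feature extraction."""
    pad = win // 2
    padded = np.pad(np.asarray(img, dtype=float), pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))
    mean = windows.mean(axis=(-2, -1))
    std = windows.std(axis=(-2, -1))
    return (img - mean) / (std + eps)

rng = np.random.default_rng(3)
patch = rng.uniform(0.0, 255.0, size=(64, 64))
same = local_intensity_normalize(patch + 40.0)   # brighter copy of patch
```

A uniformly brightened copy of the patch normalizes to (numerically) the same result, which is the invariance the OCR features rely on.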
Attractive electron-electron interactions within robust local fitting approximations.
Merlot, Patrick; Kjærgaard, Thomas; Helgaker, Trygve; Lindh, Roland; Aquilante, Francesco; Reine, Simen; Pedersen, Thomas Bondo
2013-06-30
An analysis of Dunlap's robust fitting approach reveals that the resulting two-electron integral matrix is not manifestly positive semidefinite when local fitting domains or non-Coulomb fitting metrics are used. We present a highly local approximate method for evaluating four-center two-electron integrals based on the resolution-of-the-identity (RI) approximation and apply it to the construction of the Coulomb and exchange contributions to the Fock matrix. In this pair-atomic resolution-of-the-identity (PARI) approach, atomic-orbital (AO) products are expanded in auxiliary functions centered on the two atoms associated with each product. Numerical tests indicate that in 1% or less of all Hartree-Fock and Kohn-Sham calculations, the indefinite integral matrix causes nonconvergence in the self-consistent-field iterations. In these cases, the two-electron contribution to the total energy becomes negative, meaning that the electronic interaction is effectively attractive, and the total energy is dramatically lower than that obtained with exact integrals. In the vast majority of our test cases, however, the indefiniteness does not interfere with convergence. The total energy accuracy is comparable to that of the standard Coulomb-metric RI method. The speed-up compared with conventional algorithms is similar to the RI method for Coulomb contributions; exchange contributions are accelerated by a factor of up to eight with a triple-zeta quality basis set. A positive semidefinite integral matrix is recovered within PARI by introducing local auxiliary basis functions spanning the full AO product space, as may be achieved by using Cholesky-decomposition techniques. Local completion, however, slows down the algorithm to a level comparable with or below conventional calculations. Copyright © 2013 Wiley Periodicals, Inc.
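The structure of the problem can be illustrated with plain linear algebra: writing the exact integral matrix as a Gram matrix G = XᵀX and the fitted quantities as X̃, Dunlap's robust formula reproduces G up to a term quadratic in the fitting error, and an eigenvalue check serves as the indefiniteness diagnostic. This is a schematic analogy, not the PARI algorithm itself:

```python
import numpy as np

def robust_gram(X, Xf):
    """Dunlap's robust-fit approximation to G = X.T @ X built from fitted
    vectors Xf; equal to the exact G minus a term quadratic in the error."""
    return X.T @ Xf + Xf.T @ X - Xf.T @ Xf

def is_psd(M, tol=1e-10):
    """Diagnostic: a negative eigenvalue means the approximate integral
    matrix is indefinite and can drive the SCF energy unphysically low."""
    return float(np.linalg.eigvalsh(M).min()) >= -tol

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 4))                # exact "AO product" vectors
Xf = X + 0.01 * rng.normal(size=(6, 4))    # slightly perturbed "fit"
G, Gr = X.T @ X, robust_gram(X, Xf)
E = X - Xf                                 # fitting error
```

The identity Gr = G - EᵀE shows both why robust fitting is accurate (the error is second order in E) and where indefiniteness can come from: subtracting a positive-semidefinite correction can push eigenvalues below zero when the fit is constrained to local domains or a non-Coulomb metric.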
Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre
2013-01-01
The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N20-P20 somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when scalp electrodes are reciprocally energized with unit current; scalp potentials are then obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N20-P20 SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N20-P20 SEP. Position error for single-dipole inverse solutions is inherently robust to inaccuracies in forward transfer matrices but dependent on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. Reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N20-P20 SEP source localization.
Bayesian-based localization of wireless capsule endoscope using received signal strength.
Nadimi, Esmaeil S; Blanes-Vidal, Victoria; Tarokh, Vahid; Johansen, Per Michael
2014-01-01
In wireless body area sensor networking (WBASN) applications such as gastrointestinal (GI) tract monitoring using wireless video capsule endoscopy (WCE), the performance of the out-of-body wireless link propagating through different body media (i.e. blood, fat, muscle and bone) is still under investigation. Most localization algorithms are vulnerable to variations of the path-loss coefficient, resulting in unreliable location estimation. In this paper, we propose a novel robust probabilistic Bayesian-based approach using received-signal-strength (RSS) measurements that accounts for Rayleigh fading, a variable path-loss exponent and uncertainty in the location information received from neighboring nodes and anchors. The results of this study showed that the localization root mean square error of our Bayesian-based method was 1.6 mm, very close to the optimum Cramer-Rao lower bound (CRLB) and significantly smaller than that of other existing localization approaches (classical MDS: 64.2 mm; dwMDS: 32.2 mm; MLE: 36.3 mm; POCS: 2.3 mm).
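The RSS-based idea can be sketched with a log-distance path-loss model and a grid posterior over position; the parameters below (reference power, path-loss exponent, shadowing width) are illustrative, not those of the study:

```python
import numpy as np

def rss_grid_posterior(anchors, rss, grid_x, grid_y,
                       p0=-40.0, n_pl=3.0, sigma=4.0):
    """Posterior over the capsule position on a 2-D grid, assuming a
    log-distance path-loss model RSS(d) = p0 - 10*n_pl*log10(d) with
    Gaussian shadowing of standard deviation sigma (dB)."""
    X, Y = np.meshgrid(grid_x, grid_y, indexing="ij")
    log_post = np.zeros_like(X)
    for (ax, ay), r in zip(anchors, rss):
        d = np.hypot(X - ax, Y - ay) + 1e-9   # avoid log10(0) at anchors
        mu = p0 - 10.0 * n_pl * np.log10(d)
        log_post -= 0.5 * ((r - mu) / sigma) ** 2
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

anchors = [(0.0, 0.0), (0.3, 0.0), (0.0, 0.3)]   # body-surface nodes, metres
true_xy = np.array([0.12, 0.08])
rss = [-40.0 - 30.0 * np.log10(np.hypot(*(true_xy - np.array(a))))
       for a in anchors]                          # noiseless toy measurements
gx = gy = np.linspace(0.0, 0.3, 61)
post = rss_grid_posterior(anchors, rss, gx, gy)
i, j = np.unravel_index(np.argmax(post), post.shape)
estimate = np.array([gx[i], gy[j]])
```

The paper's method additionally models Rayleigh fading and a variable path-loss exponent; this sketch keeps only the Gaussian-shadowing core to show the shape of the inference.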
Global, quantitative and dynamic mapping of protein subcellular localization
Itzhak, Daniel N; Tyanova, Stefka; Cox, Jürgen; Borner, Georg HH
2016-01-01
Subcellular localization critically influences protein function, and cells control protein localization to regulate biological processes. We have developed and applied Dynamic Organellar Maps, a proteomic method that allows global mapping of protein translocation events. We initially used maps statically to generate a database with localization and absolute copy number information for over 8700 proteins from HeLa cells, approaching comprehensive coverage. All major organelles were resolved, with exceptional prediction accuracy (estimated at >92%). Combining spatial and abundance information yielded an unprecedented quantitative view of HeLa cell anatomy and organellar composition, at the protein level. We subsequently demonstrated the dynamic capabilities of the approach by capturing translocation events following EGF stimulation, which we integrated into a quantitative model. Dynamic Organellar Maps enable the proteome-wide analysis of physiological protein movements, without requiring any reagents specific to the investigated process, and will thus be widely applicable in cell biology. DOI: http://dx.doi.org/10.7554/eLife.16950.001 PMID:27278775
NASA Astrophysics Data System (ADS)
Kotchi, Serge Olivier; Brazeau, Stephanie; Ludwig, Antoinette; Aube, Guy; Berthiaume, Philippe
2016-08-01
Environmental determinants (EVDs) have been identified as key determinants of health (DoH) for the emergence and re-emergence of several vector-borne diseases. Maintaining ongoing acquisition of data related to EVDs at local scale and over large regions constitutes a significant challenge. Earth observation (EO) satellites offer a framework to overcome this challenge. However, the EO image analysis methods commonly used to estimate EVDs are time- and resource-consuming. Moreover, variations of microclimatic conditions combined with high landscape heterogeneity limit the effectiveness of climatic variables derived from EO. In this study, we describe DoH and EVDs, the impacts of EVDs on vector-borne diseases in the context of global environmental change, and the need to characterize EVDs of vector-borne diseases at local scale together with its challenges; finally, we propose an approach based on EO images to estimate, at local scale, indicators pertaining to EVDs of vector-borne diseases.
Blair-Stevens, Terry; Cork, Sarah
2008-01-01
A public health project is described which used social marketing philosophy and techniques to find out how to help facilitate breast-feeding in public places and for mothers returning to work. As part of a strategy to increase local breast-feeding rates, Brighton and Hove Healthy City Partnership, representing the local Primary Care Trust, City Council and the business, academic and voluntary sectors, worked with a social marketing consultancy. The consultancy carried out a literature review and qualitative research that used creative engagement methods to consult with local people. The consultations were with key stakeholders, mothers, and groups traditionally less interested in the subject of breast-feeding, such as employers, elderly people, teenage boys, and fathers. The qualitative research generated in-depth insight and soundly-based, practical recommendations for facilitating breast-feeding. The social marketing approach helped to establish that any ensuing policies and practices would be acceptable to a wide range of the local population.
Using Imaging Methods to Interrogate Radiation-Induced Cell Signaling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shankaran, Harish; Weber, Thomas J.; Freiin von Neubeck, Claere H.
2012-04-01
There is increasing emphasis on the use of systems biology approaches to define radiation induced responses in cells and tissues. Such approaches frequently rely on global screening using various high throughput 'omics' platforms. Although these methods are ideal for obtaining an unbiased overview of cellular responses, they often cannot reflect the inherent heterogeneity of the system or provide detailed spatial information. Additionally, performing such studies with multiple sampling time points can be prohibitively expensive. Imaging provides a complementary method with high spatial and temporal resolution capable of following the dynamics of signaling processes. In this review, we utilize specific examples to illustrate how imaging approaches have furthered our understanding of radiation induced cellular signaling. Particular emphasis is placed on protein co-localization, and oscillatory and transient signaling dynamics.
Ju, Chunhua; Xu, Chonghuan
2013-01-01
Although there are many good collaborative recommendation methods, it is still a challenge to increase the accuracy and diversity of these methods to fulfill users' preferences. In this paper, we propose a novel collaborative filtering recommendation approach based on K-means clustering algorithm. In the process of clustering, we use artificial bee colony (ABC) algorithm to overcome the local optimal problem caused by K-means. After that we adopt the modified cosine similarity to compute the similarity between users in the same clusters. Finally, we generate recommendation results for the corresponding target users. Detailed numerical analysis on a benchmark dataset MovieLens and a real-world dataset indicates that our new collaborative filtering approach based on users clustering algorithm outperforms many other recommendation methods.
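The similarity step can be sketched as follows; this is one common reading of "modified cosine" (item-mean-centred ratings over co-rated items), and the paper's exact variant may differ:

```python
import numpy as np

def modified_cosine(ratings, u, v):
    """Cosine similarity between users u and v over co-rated items, with
    each rating centred on its item mean (one common 'modified cosine').
    ratings: users x items array, 0 = unrated."""
    both = (ratings[u] > 0) & (ratings[v] > 0)
    if not both.any():
        return 0.0
    counts = (ratings > 0).sum(axis=0)
    means = ratings.sum(axis=0) / np.maximum(counts, 1)
    a = ratings[u, both] - means[both]
    b = ratings[v, both] - means[both]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

R = np.array([[5.0, 3.0, 0.0],
              [4.0, 2.0, 1.0],
              [1.0, 5.0, 4.0]])
sim_like = modified_cosine(R, 0, 1)    # users 0 and 1 rate similarly
sim_unlike = modified_cosine(R, 0, 2)  # users 0 and 2 disagree
```

In the full pipeline this similarity would only be computed between users placed in the same K-means cluster, with ABC used to escape poor cluster initializations.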
Crowley, D Max; Greenberg, Mark T; Feinberg, Mark E; Spoth, Richard L; Redmond, Cleve R
2012-02-01
A substantial challenge in improving public health is how to facilitate the local adoption of evidence-based interventions (EBIs). To do so, an important step is to build local stakeholders' knowledge and decision-making skills regarding the adoption and implementation of EBIs. One EBI delivery system, called PROSPER (PROmoting School-community-university Partnerships to Enhance Resilience), has effectively mobilized community prevention efforts, implemented prevention programming with quality, and consequently decreased youth substance abuse. While these results are encouraging, another objective is to increase local stakeholder knowledge of best practices for adoption, implementation and evaluation of EBIs. Using a mixed methods approach, we assessed local stakeholder knowledge of these best practices over 5 years, in 28 intervention and control communities. Results indicated that the PROSPER partnership model led to significant increases in expert knowledge regarding the selection, implementation, and evaluation of evidence-based interventions. Findings illustrate the limited programming knowledge possessed by members of local prevention efforts, the difficulty of complete knowledge transfer, and highlight one method for cultivating that knowledge.
Tetui, Moses; Coe, Anna-Britt; Hurtig, Anna-Karin; Ekirapa-Kiracho, Elizabeth; Kiwanuka, Suzanne N.
2017-01-01
ABSTRACT Background: To achieve a sustained improvement in health outcomes, the way health interventions are designed and implemented is critical. A participatory action research (PAR) approach is applauded for building local capacity, such as health management capacity, thereby increasing the chances of sustaining health interventions. Objective: This study explored stakeholder experiences of using PAR to implement an intervention meant to strengthen local district capacity. Methods: This was a qualitative study featuring 18 informant interviews and a focus group discussion. Respondents included politicians, administrators, health managers and external researchers in three rural districts of eastern Uganda where PAR was used. Qualitative content analysis was used to explore stakeholders' experiences. Results: 'Being awakened' emerged as an overarching category capturing stakeholder experiences of using PAR. It was described in four interrelated and sequential categories: stakeholder involvement, being invigorated, the risk of wide stakeholder engagement, and balancing that risk. In terms of involvement, the stakeholders felt engaged, felt a sense of ownership, and felt valued and responsible during the implementation of the project. Being invigorated meant being awakened, inspired and supported. On the other hand, risks such as conflict, stress and uncertainty were reported; these risks were balanced through tolerance, risk-awareness and collaboration. Conclusions: The PAR approach was desirable because it created opportunities for building local capacity and enhancing the continuity of interventions. Stakeholders were awakened by the approach, as it made them more responsive to systems challenges and possible local solutions. Nonetheless, the use of PAR should be considered in full knowledge of the undesirable and complex experiences it can entail, such as uncertainty, conflict and stress. 
This will enable adequate preparation and management of stakeholder expectations to maximize the benefits of the approach. PMID:28856974
Fuchs, Julian E; Waldner, Birgit J; Huber, Roland G; von Grafenstein, Susanne; Kramer, Christian; Liedl, Klaus R
2015-03-10
Conformational dynamics are central for understanding biomolecular structure and function, since biological macromolecules are inherently flexible at room temperature and in solution. Computational methods are nowadays capable of providing valuable information on the conformational ensembles of biomolecules. However, analysis tools and intuitive metrics that capture dynamic information from in silico generated structural ensembles are limited. In standard work-flows, flexibility in a conformational ensemble is represented through residue-wise root-mean-square fluctuations or B-factors following a global alignment. Consequently, these approaches relying on global alignments discard valuable information on local dynamics. Results inherently depend on global flexibility, residue size, and connectivity. In this study we present a novel approach for capturing positional fluctuations based on multiple local alignments instead of one single global alignment. The method captures local dynamics within a structural ensemble independent of residue type by splitting individual local and global degrees of freedom of protein backbone and side-chains. Dependence on residue type and size in the side-chains is removed via normalization with the B-factors of the isolated residue. As a test case, we demonstrate its application to a molecular dynamics simulation of bovine pancreatic trypsin inhibitor (BPTI) on the millisecond time scale. This allows for illustrating different time scales of backbone and side-chain flexibility. Additionally, we demonstrate the effects of ligand binding on side-chain flexibility of three serine proteases. We expect our new methodology for quantifying local flexibility to be helpful in unraveling local changes in biomolecular dynamics.
NASA Astrophysics Data System (ADS)
Wu, Bin; Yu, Bailang; Wu, Qiusheng; Huang, Yan; Chen, Zuoqi; Wu, Jianping
2016-10-01
Individual tree crown delineation is of great importance for forest inventory and management. The increasing availability of high-resolution airborne light detection and ranging (LiDAR) data makes it possible to delineate the crown structure of individual trees and deduce their geometric properties with high accuracy. In this study, we developed an automated segmentation method that is able to fully utilize high-resolution LiDAR data for detecting, extracting, and characterizing individual tree crowns with a multitude of geometric and topological properties. The proposed approach captures the topological structure of the forest and quantifies the topological relationships of tree crowns by using a graph theory-based localized contour tree method, and finally segments individual tree crowns by analogy with recognizing hills on a topographic map. This approach consists of five key technical components: (1) derivation of a canopy height model from airborne LiDAR data; (2) generation of contours based on the canopy height model; (3) extraction of the hierarchical structure of tree crowns using the localized contour tree method; (4) delineation of individual tree crowns by segmenting the hierarchical crown structure; and (5) calculation of geometric and topological properties of individual trees. We applied our new method to the Medicine Bow National Forest in the southwest of Laramie, Wyoming, and the HJ Andrews Experimental Forest in the central portion of the Cascade Range of Oregon, U.S. The results reveal that the overall accuracy of individual tree crown delineation for the two study areas achieved 94.21% and 75.07%, respectively. Our method holds great potential for segmenting individual tree crowns under various forest conditions. Furthermore, the geometric and topological attributes derived from our method provide comprehensive and essential information for forest management.
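The contour-based hierarchy in steps (2)-(4) can be illustrated with a toy sketch (not the authors' implementation): cutting a synthetic canopy height model at rising height levels makes merged crowns split apart, just as nested hills emerge on a topographic map. The two Gaussian "trees" below are purely illustrative.

```python
import numpy as np
from scipy import ndimage

def crown_hierarchy(chm, levels):
    """Count connected crown regions at successive height contour levels.

    Mimics the localized contour-tree idea: crowns appear as nested
    "hills" whose regions split as the cutting level rises.
    """
    counts = []
    for level in levels:
        labeled, n = ndimage.label(chm > level)
        counts.append(n)
    return counts

# Synthetic canopy height model: two Gaussian "trees" on flat ground.
y, x = np.mgrid[0:60, 0:60]
chm = 20 * np.exp(-((x - 20)**2 + (y - 30)**2) / 40) \
    + 15 * np.exp(-((x - 40)**2 + (y - 30)**2) / 40)

# At a low level the crowns form one region; higher up they separate.
print(crown_hierarchy(chm, [2.0, 10.0]))  # → [1, 2]
```

A full contour-tree implementation would additionally record which high-level regions are nested inside which low-level ones, giving the crown hierarchy.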
MacLaren, David; Asugeni, James; Redman-MacLaren, Michelle
2015-12-01
To provide an example of one model of research capacity building for mental health from a remote setting in Solomon Islands. The Atoifi Health Research Group is building health research capacity with a health service on the remote east coast of Malaita, Solomon Islands. The group uses a 'learn-by-doing' approach embedded in health service and community-level health projects. The group is eclectic in nature and deliberately engages a variety of partners to discover culturally informed methods of collecting, analysing and disseminating research findings. Key successes of the Atoifi Health Research Group are: that it was initiated by Solomon Islanders with a self-expressed desire to learn about research; the learn-by-doing model; the inclusion of community people to inform questions and socio-cultural appropriateness; and the commitment to ongoing support by international researchers. Given the different social, cultural, economic, geographic, spiritual and service contexts across the Pacific, locally appropriate approaches need to be considered. Such approaches challenge the orthodox model of centralized investment that replicates the specialist-driven programmes of funder nations. Increasing expertise at all levels through participatory capacity-building models that define and address local problems may be more sustainable and responsive to local mental health contexts. © The Royal Australian and New Zealand College of Psychiatrists 2015.
The local properties of ocean surface waves by the phase-time method
NASA Technical Reports Server (NTRS)
Huang, Norden E.; Long, Steven R.; Tung, Chi-Chao; Donelan, Mark A.; Yuan, Yeli; Lai, Ronald J.
1992-01-01
A new approach using phase information to view and study the properties of frequency modulation, wave group structures, and wave breaking is presented. The method is applied to ocean wave time series data and a new type of wave group (containing the large 'rogue' waves) is identified. The method also has the capability of broad applications in the analysis of time series data in general.
NASA Astrophysics Data System (ADS)
Min, Junhong; Carlini, Lina; Unser, Michael; Manley, Suliana; Ye, Jong Chul
2015-09-01
Localization microscopy such as STORM/PALM can achieve nanometer-scale spatial resolution by iteratively localizing fluorescent molecules. It has been shown that imaging densely activated molecules can improve the temporal resolution, which has been considered a major limitation of localization microscopy. However, such high-density imaging requires advanced localization algorithms to deal with overlapping point spread functions (PSFs). To address this technical challenge, we previously developed a localization algorithm called FALCON, which uses a quasi-continuous localization model with a sparsity prior in image space and was demonstrated in both 2D and 3D live-cell imaging. However, it leaves room for improvement in several respects. Here, we propose a new localization algorithm using the annihilating filter-based low-rank Hankel structured matrix approach (ALOHA). According to the ALOHA principle, sparsity in the image domain implies the existence of a rank-deficient Hankel structured matrix in Fourier space. Thanks to this fundamental duality, the new algorithm can perform data-adaptive PSF estimation and deconvolution of the Fourier spectrum, followed by truly grid-free localization using a spectral estimation technique. Furthermore, all of these optimizations are conducted in Fourier space only. We validated the performance of the new method with numerical experiments and a live-cell imaging experiment. The results confirm that it achieves higher localization performance in both experiments in terms of accuracy and detection rate.
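The Fourier-domain duality that ALOHA exploits — spatial sparsity implies a rank-deficient Hankel matrix of Fourier samples — can be checked numerically with a minimal 1D sketch (two idealized point emitters; illustrative only, not the authors' code):

```python
import numpy as np
from scipy.linalg import hankel

# A sparse signal (two point sources) has Fourier samples that form a
# rank-deficient Hankel matrix -- the duality exploited by ALOHA.
N = 32
x = np.zeros(N)
x[3], x[10] = 1.0, 0.7           # two "molecules"
f = np.fft.fft(x)

H = hankel(f[:16], f[15:])       # 16 x 17 Hankel matrix of Fourier samples
print(np.linalg.matrix_rank(H))  # → 2, the number of sources
```

The annihilating-filter step corresponds to finding the (near-)null space of this matrix, from which off-grid source locations can be recovered by spectral estimation.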
Theoretical precision analysis of RFM localization for satellite remote sensing imagery
NASA Astrophysics Data System (ADS)
Zhang, Jianqing; Xv, Biao
2009-11-01
The traditional method of assessing the precision of the Rational Function Model (RFM) is to use a large number of check points and compute the mean square error by comparing computed coordinates with known coordinates. This method comes from probability theory: the mean square error is statistically estimated from a large number of samples, and the estimate can be regarded as approaching the true value when the sample is large enough. This paper instead approaches the problem from the perspective of survey adjustment, taking the law of propagation of error as its theoretical basis, and calculates the theoretical precision of RFM localization. Taking SPOT5 three-line array imagery as experimental data, the results of the traditional method and of the method described in this paper are compared; this confirms the feasibility of the traditional method and answers the question of its theoretical precision from the survey-adjustment point of view.
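The law of propagation of error that the paper takes as its basis can be sketched for a linearized localization model: the ground-coordinate covariance follows from the image-measurement covariance through the Jacobian, Sigma_ground = J Sigma_image J^T. The Jacobian entries below are purely illustrative, not SPOT5 values.

```python
import numpy as np

# Hedged sketch of the law of propagation of error for a linearized
# (hypothetical) RFM: Sigma_ground = J @ Sigma_image @ J.T
J = np.array([[0.5, 0.1],      # d(lat)/d(row), d(lat)/d(col)  -- illustrative
              [0.05, 0.6]])    # d(lon)/d(row), d(lon)/d(col)
sigma_image = 0.5              # image measurement noise (pixels)
Sigma_image = sigma_image**2 * np.eye(2)

Sigma_ground = J @ Sigma_image @ J.T
print(np.sqrt(np.diag(Sigma_ground)))  # theoretical 1-sigma ground accuracy
```

This yields a theoretical precision directly, without the large set of check points the empirical approach requires.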
Shen, Hui-min; Lee, Kok-Meng; Hu, Liang; Foong, Shaohui; Fu, Xin
2016-01-01
Localization of an active neural source (ANS) from measurements on the head surface is vital in magnetoencephalography. As neuron-generated magnetic fields are extremely weak, significant uncertainties caused by stochastic measurement interference complicate its localization. This paper presents a novel computational method based on the magnetic field reconstructed from sparse noisy measurements for enhanced ANS localization by suppressing the effects of unrelated noise. In this approach, the magnetic flux density (MFD) in the nearby current-free space outside the head is reconstructed from measurements by formulating the infinite series solution of Laplace's equation, where boundary condition (BC) integrals over the entire measurements provide a "smooth" reconstructed MFD with reduced unrelated noise. Using a gradient-based method, reconstructed MFDs with good fidelity are selected for enhanced ANS localization. The reconstruction model, spatial interpolation of the BC, the parametric equivalent current dipole-based inverse estimation algorithm using the reconstruction, and the gradient-based selection are detailed and validated. The influences of various source depths and measurement signal-to-noise ratio levels on the estimated ANS location are analyzed numerically and compared with a traditional method (where measurements are used directly), and it is demonstrated that gradient-selected high-fidelity reconstructed data can effectively improve the accuracy of ANS localization.
NASA Astrophysics Data System (ADS)
Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei
2017-08-01
How to analyze nonstationary response signals and extract vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more suitable time-frequency decomposition method, termed local mean decomposition (LMD), to replace the widely used empirical mode decomposition (EMD). By employing LMD, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD on synthetic data and on experimental data recorded from a simply supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is proposed.
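The core of one LMD sifting step — estimating a local mean from successive extrema and subtracting it to leave a more stationary oscillatory component — can be sketched as follows (a simplified illustration, not the full LMD algorithm, which also computes a local envelope and iterates):

```python
import numpy as np

def lmd_local_mean(x):
    """One LMD-style step: smoothed local mean from successive extrema."""
    # indices of local extrema (maxima and minima)
    d = np.diff(x)
    ext = np.where(np.diff(np.sign(d)) != 0)[0] + 1
    ext = np.r_[0, ext, len(x) - 1]
    # mean of each pair of successive extrema, piecewise constant, then smoothed
    m = np.zeros_like(x)
    for i in range(len(ext) - 1):
        m[ext[i]:ext[i + 1] + 1] = 0.5 * (x[ext[i]] + x[ext[i + 1]])
    kernel = np.ones(5) / 5.0
    return np.convolve(m, kernel, mode="same")

t = np.linspace(0, 1, 500)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * t   # oscillation + slow trend
h = x - lmd_local_mean(x)                 # more stationary component
```

Subtracting the local mean removes the slow trend, which is what makes each LMD product function easier to analyze for damage-sensitive features.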
The FLAME-slab method for electromagnetic wave scattering in aperiodic slabs
NASA Astrophysics Data System (ADS)
Mansha, Shampy; Tsukerman, Igor; Chong, Y. D.
2017-12-01
The proposed numerical method, "FLAME-slab," solves electromagnetic wave scattering problems for aperiodic slab structures by exploiting short-range regularities in these structures. The computational procedure involves special difference schemes with high accuracy even on coarse grids. These schemes are based on Trefftz approximations, utilizing functions that locally satisfy the governing differential equations, as is done in the Flexible Local Approximation Method (FLAME). Radiation boundary conditions are implemented via Fourier expansions in the air surrounding the slab. When applied to ensembles of slab structures with identical short-range features, such as amorphous or quasicrystalline lattices, the method is significantly more efficient, both in runtime and in memory consumption, than traditional approaches. This efficiency is due to the fact that the Trefftz functions need to be computed only once for the whole ensemble.
Mitochondria-specific photoactivation to monitor local sphingosine metabolism and function
Feng, Suihan; Harayama, Takeshi; Montessuit, Sylvie; David, Fabrice PA; Winssinger, Nicolas; Martinou, Jean-Claude
2018-01-01
Photoactivation ('uncaging') is a powerful approach for releasing bioactive small molecules in living cells. Current uncaging methods are limited by the random distribution of caged molecules within cells. We have developed a mitochondria-specific photoactivation method, which permitted us to release free sphingosine inside mitochondria and thereafter monitor local sphingosine metabolism by lipidomics. Our results indicate that sphingosine was quickly phosphorylated into sphingosine 1-phosphate (S1P), driven by sphingosine kinases. In time-course studies, the mitochondria-specific uncaged sphingosine demonstrated distinct metabolic patterns compared to globally released sphingosine, and did not induce calcium spikes. Our data provide direct evidence that sphingolipid metabolism and signaling are highly dependent on subcellular location, and open up new possibilities to study the effects of lipid localization on signaling and metabolic fate. PMID:29376826
Requirements for Predictive Density Functional Theory Methods for Heavy Materials Equation of State
NASA Astrophysics Data System (ADS)
Mattsson, Ann E.; Wills, John M.
2012-02-01
The difficulties in experimentally determining the Equation of State of actinide and lanthanide materials have driven the development of many computational approaches with varying degrees of empiricism and predictive power. While Density Functional Theory (DFT) based on the Schrödinger equation (possibly with relativistic corrections, including the scalar relativistic approach) combined with local and semi-local functionals has proven to be a successful and predictive approach for many materials, it does not provide sufficient accuracy for the actinides, and in some cases fails completely. To remedy this failure, both an improved fundamental description based on the Dirac equation (DE) and improved functionals are needed. Based on results obtained using the appropriate fundamental approach of DFT based on the DE, we discuss the performance of available semi-local functionals, the requirements for improved functionals for actinide/lanthanide materials, and the similarities in how functionals behave in transition metal oxides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zúñiga, Juan Pablo Álvarez; Lemarié, Gabriel; Laflorencie, Nicolas
A spin-wave (SW) approach for hard-core bosons is presented to treat the problem of two dimensional boson localization in a random potential. After a short review of the method to compute 1/S-corrected observables, the case of random on-site energy is discussed. Whereas the mean-field solution does not display a Bose glass (BG) phase, 1/S corrections do capture BG physics. In particular, the localization of SW excitations is discussed through the inverse participation ratio.
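The inverse participation ratio (IPR) used here to diagnose localization of excitations can be illustrated with a generic toy model (a 1D tight-binding Hamiltonian with on-site disorder; an illustrative Anderson-type stand-in, not the hard-core boson SW calculation of the paper):

```python
import numpy as np

def ipr(psi):
    """Inverse participation ratio of a normalized state: sum |psi_i|^4.
    ~1/N for extended states, O(1) for localized ones."""
    p = np.abs(psi)**2
    return np.sum(p**2)

rng = np.random.default_rng(0)
N = 200

def mean_ipr(W):
    # 1D tight-binding chain with on-site energies uniform in [-W, W]
    H = np.diag(rng.uniform(-W, W, N))
    H += np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    _, vecs = np.linalg.eigh(H)
    return np.mean([ipr(vecs[:, i]) for i in range(N)])

print(mean_ipr(0.0), mean_ipr(5.0))  # disorder localizes: mean IPR grows
```

In the paper's setting the same diagnostic is applied to the 1/S spin-wave excitations on top of the mean-field solution to detect Bose-glass physics.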
Classification of event location using matched filters via on-floor accelerometers
NASA Astrophysics Data System (ADS)
Woolard, Americo G.; Malladi, V. V. N. Sriram; Alajlouni, Sa'ed; Tarazaga, Pablo A.
2017-04-01
Recent years have shown prolific advancements in smart infrastructure, allowing buildings of the modern world to interact with their occupants. One of the sought-after attributes of smart buildings is the ability to provide unobtrusive indoor localization of occupants. The ability to locate occupants indoors can provide a broad range of benefits in areas such as security, emergency response, and resource management. Recent research has shown promising results in occupant localization within buildings, although there is still significant room for improvement. This study presents a passive, small-scale localization system using accelerometers placed around the edges of a small area in an active building environment. The area is discretized into a grid of small squares, and vibration measurements are processed using a pattern matching approach that estimates the location of the source. Vibration measurements are produced with ball-drops, hammer-strikes, and footsteps as the sources of floor excitation. The developed approach uses matched filters based on a reference data set, and the location is classified using a nearest-neighbor search. This approach detects the appropriate location of impact-like sources, i.e., the ball-drops and hammer-strikes, with 100% accuracy. However, this accuracy reduces to 56% for footsteps, with the average localization results being within 0.6 m (α = 0.05) of the true source location. While requiring a reference data set can make this method difficult to implement on a large scale, it may be used to provide accurate localization abilities in areas where training data are readily obtainable. This exploratory work seeks to examine the feasibility of the matched filter and nearest-neighbor search approach for footstep and event localization in a small, instrumented area within a multi-story building.
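The matched-filter-plus-classification idea can be sketched in a few lines: correlate the measured vibration against one reference template per grid cell and pick the best-matching cell. The decaying-oscillation templates below are hypothetical stand-ins for the paper's reference data set.

```python
import numpy as np

def classify_location(signal, templates):
    """Matched-filter classification: return the index of the grid cell
    whose reference template correlates best with the measurement."""
    scores = [np.max(np.correlate(signal, tpl, mode="full"))
              for tpl in templates]
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
# hypothetical reference set: one decaying oscillation per grid cell
t = np.linspace(0, 1, 200)
templates = [np.exp(-5 * t) * np.sin(2 * np.pi * f * t) for f in (10, 20, 35)]
measured = templates[1] + 0.1 * rng.standard_normal(t.size)  # event at cell 1
print(classify_location(measured, templates))
```

A real deployment would use measured ball-drop or footstep signatures as templates and a nearest-neighbor rule over several sensors, but the scoring step is the same.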
Adjoint-tomography for a Local Surface Structure: Methodology and a Blind Test
NASA Astrophysics Data System (ADS)
Kubina, Filip; Michlik, Filip; Moczo, Peter; Kristek, Jozef; Stripajova, Svetlana
2017-04-01
We have developed a multiscale full-waveform adjoint-tomography method for local surface sedimentary structures with complicated interference wavefields. The local surface sedimentary basins and valleys are often responsible for anomalous earthquake ground motions and corresponding damage in earthquakes. In many cases only relatively small number of records of a few local earthquakes is available for a site of interest. Consequently, prediction of earthquake ground motion at the site has to include numerical modeling for a realistic model of the local structure. Though limited, the information about the local structure encoded in the records is important and irreplaceable. It is therefore reasonable to have a method capable of using the limited information in records for improving a model of the local structure. A local surface structure and its interference wavefield require a specific multiscale approach. In order to verify our inversion method, we performed a blind test. We obtained synthetic seismograms at 8 receivers for 2 local sources, complete description of the sources, positions of the receivers and material parameters of the bedrock. We considered the simplest possible starting model - a homogeneous halfspace made of the bedrock. Using our inversion method we obtained an inverted model. Given the starting model, synthetic seismograms simulated for the inverted model are surprisingly close to the synthetic seismograms simulated for the true structure in the target frequency range up to 4.5 Hz. We quantify the level of agreement between the true and inverted seismograms using the L2 and time-frequency misfits, and, more importantly for earthquake-engineering applications, also using the goodness-of-fit criteria based on the earthquake-engineering characteristics of earthquake ground motion. We also verified the inverted model for other source-receiver configurations not used in the inversion.
Patient-specific model-based segmentation of brain tumors in 3D intraoperative ultrasound images.
Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Lindner, Dirk; Arlt, Felix; Ituna-Yudonago, Jean Fulbert; Chalopin, Claire
2018-03-01
Intraoperative ultrasound (iUS) imaging is commonly used to support brain tumor operations. Tumor segmentation in iUS images is a difficult task and still being improved because of the low signal-to-noise ratio. The success of automatic methods is also limited due to their high noise sensitivity. Therefore, an alternative brain tumor segmentation method in 3D-iUS data, using a tumor model obtained from magnetic resonance (MR) data for local MR-iUS registration, is presented in this paper. The aim is to enhance the visualization of the brain tumor contours in iUS. A multistep approach is proposed. First, a region of interest (ROI) based on the patient-specific tumor model is defined. Second, hyperechogenic structures, mainly tumor tissues, are extracted from the ROI of both modalities by using automatic thresholding techniques. Third, the registration is performed over the extracted binary sub-volumes using a similarity measure based on gradient values, and rigid and affine transformations. Finally, the tumor model is aligned with the 3D-iUS data, and its contours are represented. Experiments were successfully conducted on a dataset of 33 patients. The method was evaluated by comparing the tumor segmentation with expert manual delineations using two binary metrics: contour mean distance and Dice index. The proposed segmentation method using local and binary registration was compared with two grayscale-based approaches. The outcomes showed that our approach reached better results in terms of computational time and accuracy than the comparative methods. The proposed approach requires limited interaction and reduced computation time, making it relevant for intraoperative use. Experimental results and evaluations were performed offline. The developed tool could be useful for brain tumor resection, supporting neurosurgeons to improve tumor border visualization in iUS volumes.
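The second step — extracting bright, hyperechogenic structures with an automatic threshold — can be sketched with an Otsu-style threshold (one common automatic thresholding technique; the paper does not specify this exact variant, and the 2D image below stands in for a 3D volume):

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Automatic (Otsu) threshold maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=nbins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                 # class probabilities
    mu = np.cumsum(p * centers)       # cumulative means
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu)**2 / (w0 * (1.0 - w0))
    return centers[np.nanargmax(sigma_b)]

rng = np.random.default_rng(2)
img = rng.normal(50, 5, (64, 64))                  # dark background
img[20:40, 20:40] = rng.normal(150, 5, (20, 20))   # bright "tumor" tissue
thr = otsu_threshold(img)
mask = img > thr    # binary sub-volume passed on to the registration step
```

The resulting binary masks from both modalities are what the gradient-based similarity measure aligns in the registration step.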
Localization of optic disc and fovea in retinal images using intensity based line scanning analysis.
Kamble, Ravi; Kokare, Manesh; Deshmukh, Girish; Hussin, Fawnizu Azmadi; Mériaudeau, Fabrice
2017-08-01
Accurate detection of diabetic retinopathy (DR) mainly depends on the identification of retinal landmarks such as the optic disc and fovea. Present methods suffer from limited accuracy and high computational complexity. To address this issue, this paper presents a novel approach for fast and accurate localization of the optic disc (OD) and fovea using one-dimensional scanned intensity profile analysis. The proposed method effectively utilizes both time and frequency domain information for localization of the OD. The final OD center is located using signal peak-valley detection in the time domain and discontinuity detection in the frequency domain. Then, using the detected OD location, the fovea center is located by signal valley analysis. Experiments were conducted on the MESSIDOR dataset, where the OD was successfully located in 1197 out of 1200 images (99.75%) and the fovea in 1196 out of 1200 images (99.66%), with an average computation time of 0.52 s. A large-scale evaluation was also carried out on nine publicly available databases. The proposed method quickly and accurately localizes the OD and fovea together, and is highly efficient compared with other state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
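The basic line-scanning idea — the optic disc is the brightest retinal structure, so 1D intensity profiles peak at its row and column — can be sketched on a synthetic image (a toy time-domain peak search only; the paper's full method adds frequency-domain discontinuity analysis and valley detection for the fovea):

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.uniform(0.0, 0.05, (200, 300))                 # background noise
yy, xx = np.mgrid[0:200, 0:300]
img += np.exp(-((xx - 220)**2 + (yy - 90)**2) / 50.0)    # bright OD at (90, 220)

col_profile = img.sum(axis=0)    # scan intensity across columns
row_profile = img.sum(axis=1)    # scan intensity across rows
od = (int(np.argmax(row_profile)), int(np.argmax(col_profile)))
print(od)   # near (90, 220)
```

Working on 1D profiles instead of the full 2D image is what keeps the computation time low (0.52 s per image in the paper).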
Reljin, Branimir; Milosević, Zorica; Stojić, Tomislav; Reljin, Irini
2009-01-01
Two methods for segmentation and visualization of microcalcifications in digital or digitized mammograms are described. The first method is based on modern mathematical morphology, while the second uses a multifractal approach. In the first method, an appropriate combination of morphological operations yields high local contrast enhancement, followed by significant suppression of background tissue, irrespective of its radiological density. Through an iterative procedure, this method strongly emphasizes only small bright details, i.e., possible microcalcifications. In the multifractal approach, corresponding multifractal "images" are created from the initial mammogram, from which the radiologist is free to change the level of segmentation. A user-friendly computer-aided visualization (CAV) system embedding the two methods has been realized. The interactive approach enables the physician to control the level and quality of segmentation. The proposed methods were tested on mammograms from the MIAS database, as a gold standard, and from clinical practice, using digitized films and digital images from a full-field digital mammography unit.
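One classic morphological combination with exactly the described effect — enhancing small bright details while suppressing background tissue regardless of its density — is the white top-hat transform (image minus its morphological opening). A minimal sketch on synthetic data (illustrative; the paper's iterative pipeline combines several such operations):

```python
import numpy as np
from scipy import ndimage

# White top-hat: small bright details (candidate microcalcifications)
# survive, while the smoothly varying background tissue is removed.
rng = np.random.default_rng(4)
yy, xx = np.mgrid[0:128, 0:128]
mammo = 100 + 0.5 * xx + 2.0 * rng.standard_normal((128, 128))  # tissue ramp
mammo[60:63, 60:63] += 40.0                                     # tiny bright spot

tophat = ndimage.white_tophat(mammo, size=9)   # 9x9 structuring element
```

Any bright feature smaller than the structuring element passes through; the slowly varying background, whatever its absolute density, is flattened to near zero.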
Superpixel-based graph cuts for accurate stereo matching
NASA Astrophysics Data System (ADS)
Feng, Liting; Qin, Kaihuai
2017-06-01
Estimating the surface normal vector and disparity of a pixel simultaneously, also known as the three-dimensional label method, has been widely used in recent continuous stereo matching problems to achieve sub-pixel accuracy. However, due to the infinite label space, it is extremely difficult to assign each pixel an appropriate label. In this paper, we present an accurate and efficient algorithm, integrating PatchMatch with graph cuts, to address this critical computational problem. In addition, to obtain robust and precise matching costs, we use a convolutional neural network to learn a similarity measure on small image patches. Compared with other MRF-related methods, our method has several advantages: its submodular property ensures a sub-problem optimality that is easy to exploit in parallel; graph cuts can simultaneously update multiple pixels, avoiding local minima caused by sequential optimizers like belief propagation; it uses segmentation results for better local expansion moves; and local propagation and randomization can easily generate the initial solution without external methods. Middlebury experiments show that our method achieves higher accuracy than other MRF-based algorithms.
Local reaction kinetics by imaging☆
Suchorski, Yuri; Rupprechter, Günther
2016-01-01
We present an overview of our recent studies using the “kinetics by imaging” approach for CO oxidation on heterogeneous model systems. The method is based on the correlation of PEEM image intensity with catalytic activity: scaled down to μm-sized surface regions, this correlation allows simultaneous local kinetic measurements on differently oriented individual domains of a polycrystalline metal foil, including the construction of local kinetic phase diagrams. This enables spatially- and component-resolved kinetic studies and, e.g., a direct comparison of the inherent catalytic properties of Pt(hkl) and Pd(hkl) domains or supported μm-sized Pd-powder agglomerates, as well as studies of local catalytic ignition and of the role of defects and grain boundaries in local reaction kinetics. PMID:26865736
Schmelzle, Molly C; Kinziger, Andrew P
2016-07-01
Environmental DNA (eDNA) monitoring approaches promise to greatly improve detection of rare, endangered and invasive species in comparison with traditional field approaches. Herein, eDNA approaches and traditional seining methods were applied at 29 research locations to compare method-specific estimates of detection and occupancy probabilities for endangered tidewater goby (Eucyclogobius newberryi). At each location, multiple paired seine hauls and water samples for eDNA analysis were taken, ranging from two to 23 samples per site, depending upon habitat size. Analysis using a multimethod occupancy modelling framework indicated that the probability of detection using eDNA was nearly double (0.74) the rate of detection for seining (0.39). The higher detection rates afforded by eDNA allowed determination of tidewater goby occupancy at two locations where they have not been previously detected and at one location considered to be locally extirpated. Additionally, eDNA concentration was positively related to tidewater goby catch per unit effort, suggesting eDNA could potentially be used as a proxy for local tidewater goby abundance. Compared to traditional field sampling, eDNA provided improved occupancy parameter estimates and can be applied to increase management efficiency across a broad spatial range and within a diversity of habitats. © 2015 John Wiley & Sons Ltd.
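The practical meaning of the per-sample detection probabilities (0.74 for eDNA vs. 0.39 for seining) is easy to quantify: assuming independent samples at an occupied site, the chance of at least one detection in k samples is 1-(1-p)^k. A small sketch (the independence assumption is ours; the paper's multimethod occupancy model is more elaborate):

```python
# Cumulative detection probability over repeated samples at an occupied site,
# using the paper's per-sample estimates.
p_edna, p_seine = 0.74, 0.39

def p_detect(p, k):
    """Probability of >= 1 detection in k independent samples."""
    return 1.0 - (1.0 - p)**k

for k in (1, 2, 4):
    print(k, round(p_detect(p_edna, k), 3), round(p_detect(p_seine, k), 3))
```

Two eDNA samples already exceed 93% cumulative detection, which is why fewer field visits suffice compared with seining.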
On event-based optical flow detection
Brosch, Tobias; Tschechne, Stephan; Neumann, Heiko
2015-01-01
Event-based sensing, i.e., the asynchronous detection of luminance changes, promises low-energy, high dynamic range, and sparse sensing. This stands in contrast to whole image frame-wise acquisition by standard cameras. Here, we systematically investigate the implications of event-based sensing in the context of visual motion, or flow, estimation. Starting from a common theoretical foundation, we discuss different principal approaches for optical flow detection ranging from gradient-based methods over plane-fitting to filter based methods and identify strengths and weaknesses of each class. Gradient-based methods for local motion integration are shown to suffer from the sparse encoding in address-event representations (AER). Approaches exploiting the local plane like structure of the event cloud, on the other hand, are shown to be well suited. Within this class, filter based approaches are shown to define a proper detection scheme which can also deal with the problem of representing multiple motions at a single location (motion transparency). A novel biologically inspired efficient motion detector is proposed, analyzed and experimentally validated. Furthermore, a stage of surround normalization is incorporated. Together with the filtering this defines a canonical circuit for motion feature detection. The theoretical analysis shows that such an integrated circuit reduces motion ambiguity in addition to decorrelating the representation of motion related activations. PMID:25941470
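The plane-fitting class of methods the review favors can be sketched directly: events fired by a moving edge form a plane t = a*x + b*y + c in (x, y, t) space, and the normal flow follows from the plane's slopes. A minimal least-squares sketch on synthetic events (illustrative; real AER data would need local windowing and outlier rejection):

```python
import numpy as np

def flow_from_events(x, y, t):
    """Fit the plane t = a*x + b*y + c to an event cloud; recover the
    local image velocity from the plane's slopes."""
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
    n2 = a * a + b * b
    return a / n2, b / n2        # velocity (vx, vy), pixels per time unit

# events fired by a vertical edge moving at vx = 4 px/s
rng = np.random.default_rng(5)
y = rng.uniform(0, 32, 400)
t = rng.uniform(0, 1, 400)
x = 4.0 * t                      # edge position when each event fires
print(flow_from_events(x, y, t))   # ≈ (4.0, 0.0)
```

Because the fit uses only the sparse events themselves, it avoids the dense spatial gradients that make gradient-based methods struggle with AER data.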
Some Aspects of Essentially Nonoscillatory (ENO) Formulations for the Euler Equations, Part 3
NASA Technical Reports Server (NTRS)
Chakravarthy, Sukumar R.
1990-01-01
An essentially nonoscillatory (ENO) formulation is described for hyperbolic systems of conservation laws. ENO approaches are based on smart interpolation to avoid spurious numerical oscillations. ENO schemes are a superset of Total Variation Diminishing (TVD) schemes. In the recent past, TVD formulations were used to construct shock capturing finite difference methods. At extremum points of the solution, TVD schemes automatically reduce to being first-order accurate discretizations locally, while away from extrema they can be constructed to be of higher order accuracy. The new framework helps construct essentially non-oscillatory finite difference methods without recourse to local reductions of accuracy to first order. Thus arbitrarily high orders of accuracy can be obtained. The basic general ideas of the new approach can be specialized in several ways and one specific implementation is described based on: (1) the integral form of the conservation laws; (2) reconstruction based on the primitive functions; (3) extension to multiple dimensions in a tensor product fashion; and (4) Runge-Kutta time integration. The resulting method is fourth-order accurate in time and space and is applicable to uniform Cartesian grids. The construction of such schemes for scalar equations and systems in one and two space dimensions is described along with several examples which illustrate interesting aspects of the new approach.
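The "smart interpolation" at the heart of ENO can be sketched at second order: for each cell, choose between the two candidate reconstruction stencils by comparing divided differences, keeping the smoother one and thereby avoiding oscillations without dropping to first order. A minimal 1D sketch (illustrative; the paper's scheme is fourth order and built on primitive-function reconstruction):

```python
import numpy as np

def eno2_interface(u):
    """Second-order ENO reconstruction of interface states u_{i+1/2}:
    at each cell pick the smoother of the two candidate stencils."""
    um, u0, up = u[:-2], u[1:-1], u[2:]
    left  = u0 + 0.5 * (u0 - um)     # stencil {i-1, i}
    right = u0 + 0.5 * (up - u0)     # stencil {i, i+1}
    use_left = np.abs(u0 - um) < np.abs(up - u0)
    return np.where(use_left, left, right)

x = np.linspace(0, 2 * np.pi, 101)
u = np.sin(x)
uh = eno2_interface(u)                      # values at x[i] + dx/2, i = 1..99
err = np.max(np.abs(uh - np.sin(x[1:-1] + 0.5 * (x[1] - x[0]))))
```

On smooth data both stencils are admissible and second-order accuracy is retained; near a discontinuity the stencil that crosses it has a large divided difference and is automatically rejected.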
Near-Field Source Localization by Using Focusing Technique
NASA Astrophysics Data System (ADS)
He, Hongyang; Wang, Yide; Saillard, Joseph
2008-12-01
We discuss two fast algorithms for localizing multiple sources in the near field. The symmetry-based method proposed by Zhi and Chia (2007) is first improved by implementing a search-free procedure to reduce the computation cost. We then present a focusing-based method which does not require a symmetric array configuration. By using the focusing technique, the near-field signal model is transformed into a model possessing the same structure as in the far-field situation, which allows bearing estimation with well-studied far-field methods. With the estimated bearing, the range of each source is then obtained using the 1D MUSIC method without parameter pairing. The performance of the improved symmetry-based method and the proposed focusing-based method is compared by Monte Carlo simulations and against the Cramér-Rao bound. Unlike other near-field algorithms, these two approaches require neither high computation cost nor high-order statistics.
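The 1D MUSIC step used for the final range search can be sketched generically: project candidate steering vectors onto the noise subspace of the array covariance and peak-pick the pseudospectrum. The example below does a far-field bearing search on an idealized 8-element array (illustrative only; the paper applies the same machinery to the range parameter after focusing):

```python
import numpy as np

def music_spectrum(R, n_sources, grid, steering):
    """1D MUSIC pseudospectrum from a covariance matrix (sketch)."""
    w, v = np.linalg.eigh(R)                  # ascending eigenvalues
    En = v[:, :R.shape[0] - n_sources]        # noise subspace
    P = []
    for g in grid:
        a = steering(g)
        P.append(1.0 / np.abs(a.conj() @ En @ En.conj().T @ a))
    return np.array(P)

# idealized example: 8-element half-wavelength ULA, one source at 20 deg
M, theta = 8, np.deg2rad(20.0)
a_true = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
R = np.outer(a_true, a_true.conj()) + 0.01 * np.eye(M)   # signal + noise floor

grid = np.deg2rad(np.linspace(-90, 90, 361))
steer = lambda th: np.exp(1j * np.pi * np.arange(M) * np.sin(th))
est = np.rad2deg(grid[np.argmax(music_spectrum(R, 1, grid, steer))])
print(est)   # ≈ 20
```

Because bearing and range are searched in separate 1D sweeps, no parameter pairing between the two is needed.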
Ground-based cloud classification by learning stable local binary patterns
NASA Astrophysics Data System (ADS)
Wang, Yu; Shi, Cunzhao; Wang, Chunheng; Xiao, Baihua
2018-07-01
Feature selection and extraction is the first step in implementing pattern classification, and the same is true for ground-based cloud classification. Histogram features based on local binary patterns (LBPs) are widely used to classify texture images. However, the conventional uniform LBP approach cannot capture all the dominant patterns in cloud texture images, which results in low classification performance. In this study, a robust feature extraction method that learns stable LBPs is proposed, based on the averaged ranks of the occurrence frequencies of all rotation-invariant patterns defined in the LBPs of cloud images. The proposed method is validated on a ground-based cloud classification database comprising five cloud types. Experimental results demonstrate that the proposed method achieves significantly higher classification accuracy than the uniform LBP, local texture patterns (LTP), dominant LBP (DLBP), completed LBP (CLBP) and salient LBP (SaLBP) methods on this cloud image database and under different noise conditions. The performance of the proposed method is comparable with that of the popular deep convolutional neural network (DCNN) method, but with lower computational complexity. Furthermore, the proposed method also achieves superior performance on an independent test data set.
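The LBP codes underlying all of these variants are cheap to compute: threshold the 8 neighbors of each pixel against the center and pack the results into a byte; the "uniform" subset keeps only codes whose circular bit string has at most two 0/1 transitions. A minimal sketch (standard LBP, not the paper's stable-pattern selection):

```python
import numpy as np

def lbp_codes(img):
    """8-neighbour LBP code for each interior pixel."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

def is_uniform(code):
    """'Uniform' pattern: circular bit string with <= 2 transitions."""
    bits = [(code >> i) & 1 for i in range(8)]
    return sum(b1 != b2 for b1, b2 in zip(bits, bits[1:] + bits[:1])) <= 2

img = np.ones((16, 16)) * 7.0   # flat patch -> all neighbours >= centre
codes = lbp_codes(img)
print(np.unique(codes), is_uniform(255))   # [255] True
```

The proposed method replaces the fixed uniform subset with the patterns whose occurrence-frequency ranks are most stable across cloud images, which is what makes the histogram features robust to noise.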
NASA Astrophysics Data System (ADS)
Le Duy, Nguyen; Heidbüchel, Ingo; Meyer, Hanno; Merz, Bruno; Apel, Heiko
2018-02-01
This study analyzes the influence of local and regional climatic factors on the stable isotopic composition of rainfall in the Vietnamese Mekong Delta (VMD) as part of the Asian monsoon region. It is based on 1.5 years of weekly rainfall samples. In the first step, the isotopic composition of the samples is analyzed by local meteoric water lines (LMWLs) and single-factor linear correlations. Additionally, the contribution of several regional and local factors is quantified by multiple linear regression (MLR) of all possible factor combinations and by relative importance analysis. This approach is novel for the interpretation of isotopic records and enables an objective quantification of the explained variance in isotopic records for individual factors. In this study, the local factors are extracted from local climate records, while the regional factors are derived from atmospheric backward trajectories of water particles. The regional factors, i.e., precipitation, temperature, relative humidity and the length of backward trajectories, are combined with equivalent local climatic parameters to explain the response variables δ18O, δ2H, and d-excess of precipitation at the station of measurement. 
The results indicate that (i) MLR can better explain the isotopic variation in precipitation (R² = 0.8) compared to single-factor linear regression (R² = 0.3); (ii) the isotopic variation in precipitation is controlled dominantly by regional moisture regimes (~70 %) compared to local climatic conditions (~30 %); (iii) the most important climatic parameter during the rainy season is the precipitation amount along the trajectories of air mass movement; (iv) the influence of local precipitation amount and temperature is not significant during the rainy season, unlike the regional precipitation amount effect; (v) secondary fractionation processes (e.g., sub-cloud evaporation) can be identified through the d-excess and take place mainly in the dry season, either locally for δ18O and δ2H, or along the air mass trajectories for d-excess. The analysis shows that regional and local factors vary in importance over the seasons and that the source regions and transport pathways, and particularly the climatic conditions along the pathways, have a large influence on the isotopic composition of rainfall. Although the general results have been reported qualitatively in previous studies (proving the validity of the approach), the proposed method provides quantitative estimates of the controlling factors, both for the whole data set and for distinct seasons. Therefore, it is argued that the approach constitutes an advancement in the statistical analysis of isotopic records in rainfall that can supplement or precede more complex studies utilizing atmospheric models. Due to its relative simplicity, the method can be easily transferred to other regions, or extended with other factors. 
The results illustrate that the interpretation of the isotopic composition of precipitation as a recorder of local climatic conditions, as for example performed for paleorecords of water isotopes, may not be adequate in the southern part of the Indochinese Peninsula, and likely neither in other regions affected by monsoon processes. However, the presented approach could open a pathway towards better and seasonally differentiated reconstruction of paleoclimates based on isotopic records.
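The core statistical comparison above, multiple linear regression versus single-factor regression judged by explained variance R², can be sketched with synthetic stand-in data. The real predictors would be the regional and local climate factors described in the abstract; the variable names and coefficients below are arbitrary assumptions.

```python
import numpy as np

def r_squared(X, y):
    """Coefficient of determination for ordinary least squares y ~ X."""
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

# Synthetic stand-in: the isotope signal driven mostly by a "regional" factor
rng = np.random.default_rng(1)
n = 80
regional = rng.standard_normal(n)   # e.g. rainfall along the trajectory
local = rng.standard_normal(n)      # e.g. local temperature
d18o = 2.0 * regional + 0.5 * local + 0.3 * rng.standard_normal(n)

r2_single = r_squared(local[:, None], d18o)                    # one factor
r2_mlr = r_squared(np.column_stack([regional, local]), d18o)   # all factors
print(r2_mlr > r2_single)   # True: MLR explains more variance
```

Relative importance analysis, as used in the paper, would then partition the MLR R² among the individual predictors.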
Perez, Bradford A; Koontz, Bridget F
2015-05-01
Men with localized high-risk prostate cancer carry a significant risk of prostate cancer-specific mortality. The best treatment approach to minimize this risk is unclear. In this review, we evaluate the role of radiation before and after radical prostatectomy. A critical review of the literature was performed regarding the application of external radiation therapy (RT) in combination with prostatectomy for high-risk localized prostate cancer. Up to 70% of men with high-risk localized disease may require adjuvant therapy because of adverse pathologic features or biochemical recurrence in the absence of systemic disease. The utility of adjuvant RT among men with adverse pathologic features is well established, at least with regard to minimizing biochemical recurrence risk. The optimal timing of salvage radiation is the subject of ongoing studies. Neoadjuvant RT requires further study but is a potentially attractive method because of decreased radiation field sizes and the potential radiobiologic benefits of delivering RT before surgery. Salvage prostatectomy is effective at treating local recurrence after radiation but is associated with significant surgical morbidity. Combining local therapies including radical prostatectomy and RT can be a reasonable approach. Care should be taken at the initial presentation of high-risk localized prostate cancer to consider and plan for the likelihood of multimodality care. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Malekan, Mohammad; Barros, Felício B.
2017-12-01
The generalized or extended finite element method (G/XFEM) models a crack by enriching partition-of-unity functions with discontinuous functions that represent well the physical behavior of the problem. However, such enrichment functions are not available for all problem types. One can instead use numerically built (global-local) enrichment functions to obtain a better approximation. This paper investigates the effects of micro-defects/inhomogeneities on the behavior of a main crack by modeling the micro-defects/inhomogeneities in the local problem using a two-scale G/XFEM. The global-local enrichment functions are influenced by the micro-defects/inhomogeneities from the local problem and thus change the approximate solution of the global problem containing the main crack. This approach is presented in detail by solving three different linear elastic fracture mechanics problems: two plane-stress problems and a Reissner-Mindlin plate problem. The numerical results obtained with the two-scale G/XFEM are compared with reference solutions obtained analytically, numerically with the standard G/XFEM method and with ABAQUS, and from the literature.
Wang, Yilin; Kanchanawong, Pakorn
2016-12-01
Fluorescence microscopy enables direct visualization of specific biomolecules within cells. However, for conventional fluorescence microscopy, the spatial resolution is restricted by diffraction to ~ 200 nm within the image plane and > 500 nm along the optical axis. As a result, fluorescence microscopy has long been severely limited in the observation of ultrastructural features within cells. The recent development of super resolution microscopy methods has overcome this limitation. In particular, the advent of photoswitchable fluorophores enables localization-based super resolution microscopy, which provides resolving power approaching the molecular-length scale. Here, we describe the application of a three-dimensional super resolution microscopy method based on single-molecule localization microscopy and multiphase interferometry, called interferometric PhotoActivated Localization Microscopy (iPALM). This method provides nearly isotropic resolution on the order of 20 nm in all three dimensions. Protocols for visualizing the filamentous actin cytoskeleton, including specimen preparation and operation of the iPALM instrument, are described here. These protocols are also readily adaptable and instructive for the study of other ultrastructural features in cells.
Curchod, Basile F E; Penfold, Thomas J; Rothlisberger, Ursula; Tavernelli, Ivano
2013-01-01
The implementation of local control theory using nonadiabatic molecular dynamics within the framework of linear-response time-dependent density functional theory is discussed. The method is applied to study the photoexcitation of lithium fluoride, for which we demonstrate that this approach can efficiently generate, on the fly, a pulse able to control the population transfer between two selected electronic states. Analysis of the computed control pulse yields insights into the photophysics of the process, identifying the relevant frequencies associated with the curvature of the initial and final state potential energy curves and their energy differences. The limitations inherent to the use of the trajectory surface hopping approach are also discussed.
Nonlinear mechanics of composite materials with periodic microstructure
NASA Technical Reports Server (NTRS)
Jordan, E. H.; Walker, K. P.
1991-01-01
This report summarizes the results of research done under NASA NAG3-882, Nonlinear Mechanics of Composites with Periodic Microstructure. The effort involved the development of non-finite-element methods to calculate local stresses around fibers in composite materials. The theory was developed and some promising numerical results were obtained. It is expected that when this approach is fully developed, it will provide an important tool for calculating local stresses and averaged constitutive behavior in composites. NASA currently has a major contractual effort (NAS3-24691) to bring the approach developed under this grant to application readiness. The report has three sections: the general theory, which appeared as a NASA TM; a second section that gives greater detail about the theory connecting Green's function and Fourier series approaches; and a final section that shows numerical results.
An approach to 3D model fusion in GIS systems and its application in a future ECDIS
NASA Astrophysics Data System (ADS)
Liu, Tao; Zhao, Depeng; Pan, Mingyang
2016-04-01
Three-dimensional (3D) computer graphics technology is widely used in various areas and is causing profound changes. As an information carrier, 3D models are becoming increasingly important. The use of 3D models greatly helps to improve cartographic expression and design. 3D models are more visually efficient, quicker and easier to understand, and they can express more detailed geographical information. However, it is hard to efficiently and precisely fuse 3D models in local systems. The purpose of this study is to propose an automatic and precise approach to fuse 3D models in geographic information systems (GIS). This is the basic premise for subsequent uses of 3D models in local systems, such as attribute searching, spatial analysis, and so on. The basic steps of our research are: (1) pose adjustment by principal component analysis (PCA); (2) silhouette extraction by simple mesh silhouette extraction and silhouette merger; (3) size adjustment; (4) position matching. Finally, we implement the above methods in our system, the Automotive Intelligent Chart (AIC) 3D Electronic Chart Display and Information System (ECDIS). The fusion approach we propose is a general method, and each calculation step is carefully designed. This approach solves the problem of cross-platform model fusion. 3D models can come from any source: they may be stored in the local cache or retrieved from the Internet, and may be manually created by different tools or automatically generated by different programs. The system can be any kind of 3D GIS system.
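Step (1), pose adjustment by PCA, can be sketched for a point-cloud model as follows. This is a minimal illustration, not the AIC system's implementation; the toy "model" and its tilt are assumptions.

```python
import numpy as np

def pca_pose_align(points):
    """Rotate a 3D point cloud so its principal axes match x, y, z.

    Centre the model, then project onto the eigenvectors of the covariance
    matrix, ordered so the largest variance lies along x.
    """
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    vals, vecs = np.linalg.eigh(cov)             # eigenvalues ascending
    order = np.argsort(vals)[::-1]               # largest first
    return centered @ vecs[:, order]

# Toy model: an elongated box tilted 30 degrees about z
rng = np.random.default_rng(2)
raw = rng.uniform(-1, 1, (500, 3)) * np.array([5.0, 1.0, 0.2])
angle = np.deg2rad(30)
rot = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                [np.sin(angle),  np.cos(angle), 0.0],
                [0.0, 0.0, 1.0]])
tilted = raw @ rot.T
aligned = pca_pose_align(tilted)
print(np.argmax(aligned.var(axis=0)))   # 0: longest extent now along x
```

With the pose normalized this way, the subsequent silhouette extraction and size/position matching steps can compare models in a common frame.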
Scene-based nonuniformity correction using local constant statistics.
Zhang, Chao; Zhao, Wenyi
2008-06-01
In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed-pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since existing spatial variations are treated as noise. We introduce a new statistical method to reduce these ghosting artifacts. Our method assumes local-constant statistics: the temporal signal distribution need not be constant across the whole image, only approximately constant in a local region around each pixel, while allowing uneven distribution at larger scales. Under the assumption that the fixed-pattern noise concentrates in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and an LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.
Enhanced HTS hit selection via a local hit rate analysis.
Posner, Bruce A; Xi, Hualin; Mills, James E J
2009-10-01
The postprocessing of high-throughput screening (HTS) results is complicated by the occurrence of false positives (inactive compounds misidentified as active by the primary screen) and false negatives (active compounds misidentified as inactive by the primary screen). An activity cutoff is frequently used to select "active" compounds from HTS data; however, this approach is insensitive to both false positives and false negatives. An alternative method that can minimize the occurrence of these artifacts will increase the efficiency of hit selection and therefore lead discovery. In this work, rather than merely using the activity of a given compound, we look at the presence and absence of activity among all compounds in its "chemical space neighborhood" to give a degree of confidence in its activity. We demonstrate that this local hit rate (LHR) analysis method outperforms hit selection based on ranking by primary screen activity values across ten diverse high-throughput screens, spanning both cell-based and biochemical assay formats of varying biology and robustness. On average, the local hit rate analysis method was approximately 2.3-fold and approximately 1.3-fold more effective in identifying active compounds and active chemical series, respectively, than selection based on primary activity alone. Moreover, when applied to finding false negatives, this method was 2.3-fold better than ranking by primary activity alone. In most cases, novel hit series were identified that would have otherwise been missed. Additional uses of and observations regarding this HTS analysis approach are also discussed.
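A minimal sketch of the local-hit-rate idea: for each compound, look at the fraction of actives among its nearest neighbours in descriptor space. The Euclidean 2-D descriptor space, neighbourhood size, and toy labels used here are illustrative assumptions; the paper works with chemical-similarity neighbourhoods over real screening data.

```python
import numpy as np

def local_hit_rate(fps, active, k=5):
    """Fraction of active compounds among each compound's k nearest
    neighbours in descriptor space (self excluded)."""
    d = np.linalg.norm(fps[:, None, :] - fps[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-distance
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest neighbours
    return active[nn].mean(axis=1)

# Toy screen: actives cluster in one region of a 2-D descriptor space
rng = np.random.default_rng(3)
actives = rng.normal(0.0, 0.3, (20, 2))
inactives = rng.normal(3.0, 0.3, (20, 2))
fps = np.vstack([actives, inactives])
labels = np.array([1] * 20 + [0] * 20)
lhr = local_hit_rate(fps, labels)
print(lhr[:20].mean() > lhr[20:].mean())   # True: actives sit in hit-rich neighbourhoods
```

Ranking compounds by this neighbourhood score, rather than by raw primary activity, is what lets the approach flag likely false positives (low LHR despite high activity) and false negatives (high LHR despite low activity).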
Wan, Tao; Bloch, B Nicolas; Danish, Shabbar; Madabhushi, Anant
2014-11-20
In this work, we present a novel learning-based fiducial-driven registration (LeFiR) scheme which utilizes a point-matching technique to identify the optimal configuration of landmarks to better recover the deformation between a target and a moving image. Moreover, we employ the LeFiR scheme to model the localized nature of deformation introduced by a new treatment modality, laser-induced interstitial thermal therapy (LITT), for treating neurological disorders. Magnetic resonance (MR) guided LITT has recently emerged as a minimally invasive alternative to craniotomy for local treatment of brain diseases (such as glioblastoma multiforme (GBM) and epilepsy). However, LITT is currently only practised worldwide as an investigational procedure, due to a lack of data on longer-term patient outcomes following LITT. There is thus a need to quantitatively evaluate treatment-related changes between post- and pre-LITT in terms of MR imaging markers. In order to validate LeFiR, we tested the scheme on a synthetic brain dataset (SBD) and in two real clinical scenarios for treating GBM and epilepsy with LITT. Four experiments under different deformation profiles simulating localized ablation effects of LITT on MRI were conducted on 286 pairs of SBD images. The training landmark configurations were obtained through 2000 iterations of registration, where the points with consistently best registration performance were selected. The estimated landmarks greatly improved the quality metrics compared to a uniform grid (UniG) placement scheme, a speeded-up robust features (SURF) based method, a scale-invariant feature transform (SIFT) based method, and a generic free-form deformation (FFD) approach. The LeFiR method achieved an average 90% improvement in recovering the local deformation, compared to 82% for the uniform grid placement, 62% for the SURF-based approach, and 16% for the generic FFD approach. 
On the real GBM and epilepsy data, the quantitative results showed that LeFiR outperformed UniG by an average improvement of 28%.
Liu, Rui; Milkie, Daniel E; Kerlin, Aaron; MacLennan, Bryan; Ji, Na
2014-01-27
In traditional zonal wavefront sensing for adaptive optics, after local wavefront gradients are obtained, the entire wavefront can be calculated by assuming that the wavefront is a continuous surface. Such an approach leads to sub-optimal performance in reconstructing wavefronts which are either discontinuous or undersampled by the zonal wavefront sensor. Here, we report a new method to reconstruct the wavefront by directly measuring local wavefront phases in parallel using a multidither coherent optical adaptive technique. This method determines the relative phase of each pupil segment independently, and thus produces an accurate wavefront even for discontinuous wavefronts. We implemented this method in an adaptive optical two-photon fluorescence microscope and demonstrated its superior performance in correcting large or discontinuous aberrations.
Target-depth estimation in active sonar: Cramer-Rao bounds for a bilinear sound-speed profile.
Mours, Alexis; Ioana, Cornel; Mars, Jérôme I; Josso, Nicolas F; Doisy, Yves
2016-09-01
This paper develops a localization method to estimate the depth of a target in the context of active sonar at long ranges. The target depth is tactical information for both strategy and classification purposes. The Cramér-Rao lower bounds for the target position in range and depth are derived for a bilinear sound-speed profile. The influence of sonar parameters on the standard deviations of the target range and depth is studied. A localization method based on ray back-propagation with a probabilistic approach is then investigated. Monte Carlo simulations applied to a summer Mediterranean sound-speed profile are performed to evaluate the efficiency of the estimator. The method is finally validated on data from an experimental tank.
Investigation of methods and approaches for collecting and recording highway inventory data.
DOT National Transportation Integrated Search
2013-06-01
Many techniques for collecting highway inventory data have been used by state and local agencies in the U.S. These techniques include field inventory, photo/video log, integrated GPS/GIS mapping systems, aerial photography, satellite imagery, vir...
ERIC Educational Resources Information Center
Stamatakis, E.A.; Tyler, L.K.
2005-01-01
The study of neuropsychological disorders has been greatly facilitated by the localization of brain lesions on MRI scans. Current popular approaches for the assessment of MRI brain scans mostly depend on the successful segmentation of the brain into grey and white matter. These methods cannot be used effectively with large lesions because lesions…
Real-time global illumination on mobile device
NASA Astrophysics Data System (ADS)
Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.
2014-02-01
We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. and add the indirect illumination to the local illumination on the GPU. With the limited computing resources of mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy that collaboratively combines the CPUs and GPUs available in a mobile SoC. Experimental results demonstrate the global illumination performance of the proposed method.
Inherent structure versus geometric metric for state space discretization.
Liu, Hanzhong; Li, Minghai; Fan, Jue; Huo, Shuanghong
2016-05-30
Inherent structure (IS) and geometry-based clustering methods are commonly used for analyzing molecular dynamics trajectories. ISs are obtained by minimizing the sampled conformations into local minima on the potential/effective energy surface. The conformations that are minimized into the same energy basin belong to one cluster. We investigate the influence of applying these two methods of trajectory decomposition on our understanding of the thermodynamics and kinetics of alanine tetrapeptide. We find that at the microcluster level, the IS approach and the root-mean-square deviation (RMSD)-based clustering method give totally different results. Depending on the local features of the energy landscape, conformations with close RMSDs can be minimized into different minima, while conformations with large RMSDs can be minimized into the same basin. However, the relaxation timescales calculated from the transition matrices built on the microclusters are similar. The discrepancy at the microcluster level leads to different macroclusters. Although the dynamic models established through both clustering methods are validated as approximately Markovian, the IS approach seems to give a meaningful state space discretization at the macrocluster level in terms of conformational features and kinetics. © 2016 Wiley Periodicals, Inc.
Chen, C L; Kaber, D B; Dempsey, P G
2000-06-01
A new and improved method for feedforward neural network (FNN) development for application to data classification problems, such as the prediction of levels of low-back disorder (LBD) risk associated with industrial jobs, is presented. Background on FNN development for data classification is provided, along with discussions of previous research and neighborhood (local) solution search methods for hard combinatorial problems. An analytical study is presented which compared the prediction accuracy of an FNN based on an error-back-propagation (EBP) algorithm with the accuracy of an FNN developed by considering the results of a local solution search (simulated annealing) for classifying industrial jobs as posing low or high risk for LBDs. The comparison demonstrated superior performance of the FNN generated using the new method. The architecture of this FNN included fewer input (predictor) variables and hidden neurons than the FNN developed based on the EBP algorithm. Independent variable selection methods and the phenomenon of 'overfitting' in FNN (and statistical model) generation for data classification are discussed. The results support the use of the new approach to FNN development for applications to musculoskeletal disorders and risk forecasting in other domains.
Shekhar, S.; Cambi, A.; Figdor, C.G.; Subramaniam, V.; Kanger, J.S.
2012-01-01
Because both the chemical and mechanical properties of living cells play crucial functional roles, there is a strong need for biophysical methods to address these properties simultaneously. Here we present a novel (to our knowledge) approach to measure local intracellular micromechanical and chemical properties using a hybrid magnetic chemical biosensor. We coupled a fluorescent dye, which serves as a chemical sensor, to a magnetic particle that is used for measurement of the viscoelastic environment by studying the response of the particle to magnetic force pulses. As a demonstration of the potential of this approach, we applied the method to study the process of phagocytosis, wherein cytoskeletal reorganization occurs in parallel with acidification of the phagosome. During this process, we measured the shear modulus and viscosity of the phagosomal environment concurrently with the phagosomal pH. We found that it is possible to manipulate phagocytosis by stalling the centripetal movement of the phagosome using magnetic force. Our results suggest that preventing centripetal phagosomal transport delays the onset of acidification. To our knowledge, this is the first report of manipulation of intracellular phagosomal transport without interfering with the underlying motor proteins or cytoskeletal network through biochemical methods. PMID:22947855
Numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains.
Li, Hongwei; Guo, Yue
2017-12-01
The numerical solution of the general coupled nonlinear Schrödinger equations on unbounded domains is considered by applying the artificial boundary method. In order to design local absorbing boundary conditions for the coupled nonlinear Schrödinger equations, we generalize the unified approach previously proposed [J. Zhang et al., Phys. Rev. E 78, 026709 (2008)]. Based on the methodology underlying the unified approach, the original problem is split into two parts, linear and nonlinear terms; we then derive a one-way operator that approximates the linear term and makes the wave outgoing, and finally we combine the one-way operator with the nonlinear term to derive the local absorbing boundary conditions. We then reduce the original problem to an initial boundary value problem on a bounded domain, which can be solved by the finite difference method. The stability of the reduced problem is also analyzed by introducing some auxiliary variables. Ample numerical examples are presented to verify the accuracy and effectiveness of the proposed method.
The NonConforming Virtual Element Method for the Stokes Equations
Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco
2016-01-01
In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.
A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint
NASA Astrophysics Data System (ADS)
Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru
Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach performs better than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
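The block-DCT feature extraction can be sketched as follows (assuming SciPy for the DCT-II). The block size and number of retained zigzag coefficients are the tunable choices described above; the random image stands in for a preprocessed palm region.

```python
import numpy as np
from scipy.fft import dctn

def zigzag_indices(n):
    """Row/col order of the JPEG-style zigzag scan of an n-by-n block."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def block_dct_features(img, block=8, n_coeffs=10):
    """Concatenate the first n_coeffs zigzag-ordered DCT-II coefficients
    of each non-overlapping block (low-to-medium frequencies)."""
    zz = zigzag_indices(block)[:n_coeffs]
    feats = []
    for r in range(0, img.shape[0] - block + 1, block):
        for c in range(0, img.shape[1] - block + 1, block):
            d = dctn(img[r:r + block, c:c + block], type=2, norm='ortho')
            feats.extend(d[i, j] for i, j in zz)
    return np.array(feats)

img = np.random.default_rng(4).uniform(0, 255, (16, 16))  # stand-in ROI
f = block_dct_features(img)
print(f.shape)   # (40,): 4 blocks x 10 coefficients
```

Verification would then compare such feature vectors between an enrolled palmprint and a probe, e.g. by a distance threshold.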
General Approach to Quantum Channel Impossibility by Local Operations and Classical Communication.
Cohen, Scott M
2017-01-13
We describe a general approach to proving the impossibility of implementing a quantum channel by local operations and classical communication (LOCC), even with an infinite number of rounds, and find that this can often be demonstrated by solving a set of linear equations. The method also allows one to design a LOCC protocol to implement the channel whenever such a protocol exists in any finite number of rounds. Perhaps surprisingly, the computational expense for analyzing LOCC channels is not much greater than that for LOCC measurements. We apply the method to several examples, two of which provide numerical evidence that the set of quantum channels that are not LOCC is not closed and that there exist channels that can be implemented by LOCC either in one round or in three rounds that are on the boundary of the set of all LOCC channels. Although every LOCC protocol must implement a separable quantum channel, it is a very difficult task to determine whether or not a given channel is separable. Fortunately, prior knowledge that the channel is separable is not required for application of our method.
Estimating 3D positions and velocities of projectiles from monocular views.
Ribnick, Evan; Atev, Stefan; Papanikolopoulos, Nikolaos P
2009-05-01
In this paper, we consider the problem of localizing a projectile in 3D based on its apparent motion in a stationary monocular view. A thorough theoretical analysis is developed, from which we establish the minimum conditions for the existence of a unique solution. The theoretical results obtained have important implications for applications involving projectile motion. A robust, nonlinear optimization-based formulation is proposed, and the use of a local optimization method is justified by detailed examination of the local convexity structure of the cost function. The potential of this approach is validated by experimental results.
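A sketch of the nonlinear optimization formulation: fit the projectile's initial position and velocity so that the projected ballistic trajectory matches the observed image points. The pinhole camera model, focal length, and trajectory values below are illustrative assumptions, not the authors' exact cost function.

```python
import numpy as np
from scipy.optimize import least_squares

G = np.array([0.0, -9.81, 0.0])   # gravity along -y (assumed frame)
F = 800.0                          # focal length in pixels (assumed camera)

def project(state, t):
    """Pinhole projection of the ballistic path p0 + v0 t + g t^2 / 2."""
    p0, v0 = state[:3], state[3:]
    pts = p0 + np.outer(t, v0) + 0.5 * np.outer(t**2, G)
    return F * pts[:, :2] / pts[:, 2:3]          # (u, v) per frame

def localize(obs, t, init):
    """Nonlinear least-squares fit of initial position and velocity."""
    res = least_squares(lambda s: (project(s, t) - obs).ravel(), init)
    return res.x

t = np.linspace(0, 1.0, 30)                          # frame timestamps
true = np.array([0.5, 2.0, 10.0, 1.0, 4.0, 2.0])     # p0 (m), v0 (m/s)
obs = project(true, t)                               # noise-free observations
est = localize(obs, t, true + 0.3)                   # perturbed initial guess
print(np.allclose(est, true, atol=1e-2))             # True on this clean data
```

As the abstract notes, a local optimizer suffices only because the cost function is locally convex around the solution; gravity supplies the metric scale that makes the monocular problem well posed.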
Local deformation for soft tissue simulation
Omar, Nadzeri; Zhong, Yongmin; Smith, Julian; Gu, Chengfan
2016-01-01
This paper presents a new methodology to localize the deformation range to improve the computational efficiency for soft tissue simulation. This methodology identifies the local deformation range from the stress distribution in soft tissues due to an external force. A stress estimation method is used based on elastic theory to estimate the stress in soft tissues according to a depth from the contact surface. The proposed methodology can be used with both mass-spring and finite element modeling approaches for soft tissue deformation. Experimental results show that the proposed methodology can improve the computational efficiency while maintaining the modeling realism. PMID:27286482
Tensor voting for image correction by global and local intensity alignment.
Jia, Jiaya; Tang, Chi-Keung
2005-01-01
This paper presents a voting method to perform image correction by global and local intensity alignment. The key to our modeless approach is the estimation of global and local replacement functions by reducing the complex estimation problem to the robust 2D tensor voting in the corresponding voting spaces. No complicated model for replacement function (curve) is assumed. Subject to the monotonic constraint only, we vote for an optimal replacement function by propagating the curve smoothness constraint using a dense tensor field. Our method effectively infers missing curve segments and rejects image outliers. Applications using our tensor voting approach are proposed and described. The first application consists of image mosaicking of static scenes, where the voted replacement functions are used in our iterative registration algorithm for computing the best warping matrix. In the presence of occlusion, our replacement function can be employed to construct a visually acceptable mosaic by detecting occlusion which has large and piecewise constant color. Furthermore, by the simultaneous consideration of color matches and spatial constraints in the voting space, we perform image intensity compensation and high contrast image correction using our voting framework, when only two defective input images are given.
Schalk, Stefan G; Demi, Libertario; Bouhouch, Nabil; Kuenen, Maarten P J; Postema, Arnoud W; de la Rosette, Jean J M C H; Wijkstra, Hessel; Tjalkens, Tjalling J; Mischi, Massimo
2017-03-01
The role of angiogenesis in cancer growth has stimulated research aimed at noninvasive cancer detection by blood perfusion imaging. Recently, contrast ultrasound dispersion imaging was proposed as an alternative method for angiogenesis imaging. After the intravenous injection of an ultrasound-contrast-agent bolus, dispersion can be indirectly estimated from the local similarity between neighboring time-intensity curves (TICs) measured by ultrasound imaging. Up until now, only linear similarity measures have been investigated. Motivated by the promising results of this approach in prostate cancer (PCa), we developed a novel dispersion estimation method based on mutual information, thus including nonlinear similarity, to further improve its ability to localize PCa. First, a simulation study was performed to establish the theoretical link between dispersion and mutual information. Next, the method's ability to localize PCa was validated in vivo in 23 patients (58 datasets) referred for radical prostatectomy by comparison with histology. A monotonic relationship between dispersion and mutual information was demonstrated. The in vivo study resulted in a receiver operating characteristic (ROC) curve area equal to 0.77, which was superior (p = 0.21-0.24) to that obtained by linear similarity measures (0.74-0.75) and (p < 0.05) to that by conventional perfusion parameters (≤0.70). Mutual information between neighboring time-intensity curves can be used to indirectly estimate contrast dispersion and can lead to more accurate PCa localization. An improved PCa localization method can possibly lead to better grading and staging of tumors, and support focal-treatment guidance. Moreover, future employment of the method in other types of angiogenic cancer can be considered.
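A generic plug-in histogram estimate of mutual information between two neighboring TICs might look as follows. The binning, the synthetic bolus curves, and the preprocessing are illustrative choices, not the paper's specification.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in histogram estimate of mutual information (in nats) between two
    time-intensity curves; bin count and preprocessing are generic choices."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0  # restrict to occupied bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

# neighboring TICs modeled as slightly shifted/dispersed bolus curves
t = np.linspace(0.0, 60.0, 600)
tic_a = np.exp(-(t - 20.0) ** 2 / 50.0)
tic_b = np.exp(-(t - 22.0) ** 2 / 60.0)
tic_noise = np.random.default_rng(0).normal(size=t.size)
mi_ab = mutual_information(tic_a, tic_b)      # strongly dependent pair
mi_an = mutual_information(tic_a, tic_noise)  # approximately independent pair
```

High mutual information between neighboring curves indicates similar bolus dynamics, which the paper links (monotonically) to contrast dispersion.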
The evolution of the surgical treatment of chronic pancreatitis.
Andersen, Dana K; Frey, Charles F
2010-01-01
To establish the current status of surgical therapy for chronic pancreatitis, recent published reports are examined in the context of the historical advances in the field. The basis for decompression (drainage), denervation, and resection strategies for the treatment of pain caused by chronic pancreatitis is reviewed. These divergent approaches have finally coalesced as the head of the pancreas has become recognized as the nidus of chronic inflammation. The recent developments in surgical methods to treat the complications of chronic pancreatitis and the results of recent prospective randomized trials of operative approaches were reviewed to establish the current best practices. Local resection of the pancreatic head, with or without duct drainage, and duodenum-preserving pancreatic head resection offer outcomes as effective as pancreaticoduodenectomy, with lowered morbidity and mortality. Local resection or excavation of the pancreatic head offers the advantage of lowest cost and morbidity and early prevention of postoperative diabetes. The late incidences of recurrent pain, diabetes, and exocrine insufficiency are equivalent for all 3 surgical approaches. Local resection of the pancreatic head appears to offer best outcomes and lowest risk for the management of the pain of chronic pancreatitis.
Vrionis, F D; Robertson, J H; Foley, K T; Gardner, G
1997-01-01
Approaches through the middle cranial fossa directed at reaching the internal auditory canal (IAC) invariably employ exposure of the geniculate ganglion, the superior semicircular canal (SSC) or the epitympanum. This involves risk to the facial nerve and hearing apparatus. To minimize this risk, we conducted a laboratory study on 9 cadaver temporal bones by using an image-interactive guidance system (StealthStation) to provide topographic orientation in the middle fossa approach. Surface anatomic landmarks such as the umbo of the tympanic membrane, Henle's spine, the root of the zygoma, and various sutures were used as fiducials for registration of CT images of the temporal bone. Accurate localization of the IAC was achieved in every specimen. Mean target localization error varied from 1.20 to 1.38 mm for critical structures in the temporal bone such as the apex of the cochlea, crus commune, ampulla of the SSC and facial hiatus. Our results suggest that frameless stereotaxy may be used as an alternative to current methods in localizing the IAC in patients with small vestibular schwannomas or intractable vertigo undergoing middle fossa surgery.
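The kind of computation behind fiducial-based registration and the reported millimeter-scale target localization errors can be sketched with the standard Kabsch (SVD) rigid alignment. This is a generic textbook method, not the StealthStation's algorithm, and the synthetic pose and fiducials below are made up.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) so that dst ≈ R @ src + t."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cd - R @ cs

def target_error(R, t, target_src, target_true):
    """Distance between the mapped target and its true position (TRE)."""
    return float(np.linalg.norm(R @ np.asarray(target_src, float) + t - target_true))

# synthetic check: recover a known pose from five noiseless fiducials
theta = np.pi / 6.0
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -2.0, 1.0])
fids = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10], [5, 5, 5]], float)
R_est, t_est = rigid_register(fids, fids @ R_true.T + t_true)
```

With real, noisy fiducial picks the residual target error is nonzero, which is what the 1.20-1.38 mm figures quantify.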
Current reversals and metastable states in the infinite Bose-Hubbard chain with local particle loss
NASA Astrophysics Data System (ADS)
Kiefer-Emmanouilidis, M.; Sirker, J.
2017-12-01
We present an algorithm which combines the quantum trajectory approach to open quantum systems with a density-matrix renormalization-group scheme for infinite one-dimensional lattice systems. We apply this method to investigate the long-time dynamics in the Bose-Hubbard model with local particle loss starting from a Mott-insulating initial state with one boson per site. While the short-time dynamics can be described even quantitatively by an equation of motion (EOM) approach at the mean-field level, many-body interactions lead to unexpected effects at intermediate and long times: local particle currents far away from the dissipative site start to reverse direction ultimately leading to a metastable state with a total particle current pointing away from the lossy site. An alternative EOM approach based on an effective fermion model shows that the reversal of currents can be understood qualitatively by the creation of holon-doublon pairs at the edge of the region of reduced particle density. The doublons are then able to escape while the holes move towards the dissipative site, a process reminiscent—in a loose sense—of Hawking radiation.
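The jump/no-jump unraveling underlying the quantum trajectory approach can be illustrated on the smallest possible toy: a single lossy mode starting in |1>, whose trajectory-averaged occupation should reproduce the exact decay exp(-gamma*t). The Hubbard interactions and infinite-DMRG machinery of the paper are deliberately omitted; this is a minimal sketch of the unraveling only.

```python
import numpy as np

def trajectory_n(gamma, tmax, dt, ntraj, rng):
    """Trajectory-averaged occupation <n>(t) for one lossy bosonic mode that
    starts in |1>. For this two-state toy the renormalized no-jump evolution
    leaves the state in |1>, so each step reduces to: jump to |0> with
    first-order probability gamma*dt*<n>, otherwise stay."""
    steps = int(round(tmax / dt))
    navg = np.zeros(steps + 1)
    for _ in range(ntraj):
        n = 1
        navg[0] += n
        for s in range(1, steps + 1):
            if n == 1 and rng.random() < gamma * dt:
                n = 0  # quantum jump: a|1> -> |0>
            navg[s] += n
    return navg / ntraj

n_of_t = trajectory_n(gamma=1.0, tmax=2.0, dt=0.01, ntraj=2000,
                      rng=np.random.default_rng(1))
# the trajectory average should track the exact decay exp(-gamma * t)
```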
Improving Robot Locomotion Through Learning Methods for Expensive Black-Box Systems
2013-11-01
…development of a class of "gradient-free" optimization techniques; these include local approaches, such as a Nelder-Mead simplex search (cf. [73]), and global approaches, such as the Non-dominated Sorting Genetic Algorithm… Note that this simple method differs from the Nelder-Mead constrained nonlinear optimization method [73].
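A minimal gradient-free local search with the Nelder-Mead simplex, applied here to the analytic Rosenbrock function rather than an expensive black-box locomotion objective. SciPy is assumed to be available; the tolerances are illustrative.

```python
import numpy as np
from scipy.optimize import minimize, rosen  # assumes SciPy is installed

# Nelder-Mead uses only function values (no gradients), which is why such
# methods suit noisy, expensive black-box objectives like robot locomotion.
result = minimize(rosen, x0=np.array([-1.2, 1.0]), method="Nelder-Mead",
                  options={"xatol": 1e-8, "fatol": 1e-8, "maxiter": 5000})
```

From the classic starting point (-1.2, 1.0) the simplex converges to the Rosenbrock minimum at (1, 1).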
Bayesian random local clocks, or one rate to rule them all
2010-01-01
Background Relaxed molecular clock models allow divergence time dating and "relaxed phylogenetic" inference, in which a time tree is estimated in the face of unequal rates across lineages. We present a new method for relaxing the assumption of a strict molecular clock using Markov chain Monte Carlo to implement Bayesian model averaging over random local molecular clocks. The new method approaches the problem of rate variation among lineages by proposing a series of local molecular clocks, each extending over a subregion of the full phylogeny. Each branch in a phylogeny (subtending a clade) is a possible location for a change of rate from one local clock to a new one. Thus, including both the global molecular clock and the unconstrained model results, there are a total of 2^(2n-2) possible rate models available for averaging with 1, 2, ..., 2n - 2 different rate categories. Results We propose an efficient method to sample this model space while simultaneously estimating the phylogeny. The new method conveniently allows a direct test of the strict molecular clock, in which one rate rules them all, against a large array of alternative local molecular clock models. We illustrate the method's utility on three example data sets involving mammal, primate and influenza evolution. Finally, we explore methods to visualize the complex posterior distribution that results from inference under such models. Conclusions The examples suggest that large sequence datasets may only require a small number of local molecular clocks to reconcile their branch lengths with a time scale. All of the analyses described here are implemented in the open access software package BEAST 1.5.4 (http://beast-mcmc.googlecode.com/). PMID:20807414
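The 2^(2n-2) count follows because a rooted binary tree with n taxa has 2n-2 branches, each of which may or may not start a new local clock. A tiny enumeration for n = 3 (4 branches) makes this concrete; the tree and indicator encoding are illustrative only.

```python
from itertools import product

def n_rate_models(n_taxa):
    """Number of random-local-clock configurations for a rooted binary tree
    with n taxa: each of the 2n-2 branches either starts a new local clock
    (indicator 1) or inherits its parent's rate (indicator 0)."""
    return 2 ** (2 * n_taxa - 2)

# all indicator assignments for a 3-taxon tree; an assignment with k
# indicators set implies k rate changes on top of the background clock
configs = list(product([0, 1], repeat=4))
n_changes = [sum(c) for c in configs]
```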
The Social Determinants of Health Core: Taking a Place-Based Approach.
Scribner, Richard A; Simonsen, Neal R; Leonardi, Claudia
2017-01-01
There is growing recognition that health disparities research needs to incorporate social determinants in the local environment into explanatory models. In the transdisciplinary setting of the Mid-South Transdisciplinary Collaborative Center (TCC), the Social Determinants of Health (SDH) Core developed an approach to incorporating SDH across a variety of studies. This place-based approach, which is geographically based, transdisciplinary, and inherently multilevel, is discussed. From 2014 through 2016, the SDH Core consulted on a variety of Mid-South TCC research studies with the goal of incorporating social determinants into their research designs. The approach used geospatial methods (e.g., geocoding) to link individual data files with measures of the physical and social environment in the SDH Core database. Once linked, the method permitted various types of analysis (e.g., multilevel analysis) to determine if racial disparities could be explained in terms of social determinants in the local environment. The SDH Core consulted on five Mid-South TCC research projects. In resulting analyses for all the studies, a significant portion of the variance in one or more outcomes was partially explained by a social determinant from the SDH Core database. The SDH Core approach to addressing health disparities by linking neighborhood social and physical environment measures to an individual-level data file proved to be a successful approach across Mid-South TCC research projects. Copyright © 2016 American Journal of Preventive Medicine. Published by Elsevier Inc. All rights reserved.
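The geospatial linkage step can be sketched as a join of geocoded individual records to tract-level social-determinant measures, after which both levels are available for multilevel analysis. All column names, tract IDs, and values below are hypothetical; pandas is an assumed tool, not necessarily the one the SDH Core used.

```python
import pandas as pd

# individual records already geocoded to a census-tract ID (values made up)
people = pd.DataFrame({
    "person_id": [1, 2, 3],
    "tract_id": ["22071001", "22071002", "22071001"],
    "outcome": [0, 1, 1],
})

# tract-level social/physical environment measures (values made up)
tracts = pd.DataFrame({
    "tract_id": ["22071001", "22071002"],
    "median_income": [28000, 61000],
    "pct_vacant_housing": [0.18, 0.05],
})

# link each person to their neighborhood's measures for multilevel modeling
linked = people.merge(tracts, on="tract_id", how="left")
```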
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pinski, Peter; Riplinger, Christoph; Valeev, Edward F., E-mail: evaleev@vt.edu; Neese, Frank, E-mail: frank.neese@cec.mpg.de
2015-07-21
In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals.
While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
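The sparse-map concept can be caricatured as a relation between two index sets stored as {index -> set of connected indices}, together with the elementary operations the paper composes (inversion, chaining, intersection). The actual library's data layout and API are not reproduced here; the atom/shell/auxiliary example is made up.

```python
from collections import defaultdict

def invert(m):
    """Reverse a sparse map: from i -> {j} build j -> {i}."""
    out = defaultdict(set)
    for i, js in m.items():
        for j in js:
            out[j].add(i)
    return dict(out)

def chain(m1, m2):
    """Compose i -> j (m1) with j -> k (m2) into i -> k."""
    out = {}
    for i, js in m1.items():
        ks = set()
        for j in js:
            ks |= m2.get(j, set())
        if ks:
            out[i] = ks
    return out

def intersect(m1, m2):
    """Keep only connections present in both maps."""
    return {i: m1[i] & m2[i] for i in m1.keys() & m2.keys() if m1[i] & m2[i]}

# hypothetical example: atoms -> basis shells, shells -> auxiliary functions
atom_to_shell = {0: {0, 1}, 1: {2}}
shell_to_aux = {0: {10}, 1: {10, 11}, 2: {12}}
atom_to_aux = chain(atom_to_shell, shell_to_aux)  # which aux functions per atom
```

Because only nonzero connections are stored and combined, algorithms built from these operations inherit the sparsity of the underlying integrals.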
Large-eddy simulation of wind turbine wake interactions on locally refined Cartesian grids
NASA Astrophysics Data System (ADS)
Angelidis, Dionysios; Sotiropoulos, Fotis
2014-11-01
Performing high-fidelity numerical simulations of turbulent flow in wind farms remains a challenging issue mainly because of the large computational resources required to accurately simulate the turbine wakes and turbine/turbine interactions. The discretization of the governing equations on structured grids for mesoscale calculations may not be the most efficient approach for resolving the large disparity of spatial scales. A 3D Cartesian grid refinement method is presented that enables the efficient coupling of the Actuator Line Model (ALM) with locally refined unstructured Cartesian grids adapted to accurately resolve tip vortices and multi-turbine interactions. Second order schemes are employed for the discretization of the incompressible Navier-Stokes equations in a hybrid staggered/non-staggered formulation coupled with a fractional step method that ensures the satisfaction of local mass conservation to machine zero. The current approach enables multi-resolution LES of turbulent flow in multi-turbine wind farms. The numerical simulations are in good agreement with experimental measurements and are able to resolve the rich dynamics of turbine wakes on grids containing only a small fraction of the grid nodes that would be required in simulations without local mesh refinement. This material is based upon work supported by the Department of Energy under Award Number DE-EE0005482 and the National Science Foundation under Award number NSF PFI:BIC 1318201.
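The local refinement criterion can be sketched geometrically: flag Cartesian cells whose centers lie within a given radius of an actuator line, then subdivide only those. The grid, segment, and radius below are made up, and the paper's solver, grid data structures, and ALM forcing are not modeled.

```python
import numpy as np

def refine_flags(cell_centers, seg_a, seg_b, radius):
    """Flag cells whose centers lie within `radius` of the actuator line
    segment a-b; flagged cells would be subdivided in a refinement pass."""
    a, b = np.asarray(seg_a, float), np.asarray(seg_b, float)
    ab = b - a
    # parameter of the closest point on the segment, clamped to [0, 1]
    t = np.clip((cell_centers - a) @ ab / (ab @ ab), 0.0, 1.0)
    closest = a + t[:, None] * ab
    return np.linalg.norm(cell_centers - closest, axis=1) < radius

# uniform 2D background grid; only cells near the "blade line" get refined
xs, ys = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
centers = np.column_stack([xs.ravel(), ys.ravel()])
flags = refine_flags(centers, (0.2, 0.5), (0.8, 0.5), radius=0.08)
```

Refining only the flagged cells is what lets such simulations resolve tip vortices with a small fraction of the nodes a uniformly fine grid would need.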