Science.gov

Sample records for local approach methods

  1. A local chemical potential approach within the variable charge method formalism

    NASA Astrophysics Data System (ADS)

    Elsener, A.; Politano, O.; Derlet, P. M.; Van Swygenhoven, H.

    2008-03-01

    A new and computationally efficient implementation of the variable charge method of Streitz and Mintmire (1994 Phys. Rev. B 50 11996) is presented. In particular, a local chemical potential approach is developed that optimizes the charge on only those atoms expected to be ionic. By doing so, the charge fluctuation problem experienced in regions far from any oxygen is solved, leading to a linear minimization problem for the electrostatic energy. In the dilute oxygen limit, such an approach can yield at least an order-of-magnitude saving in computation.

  2. Localized 2D COSY sequences: Method and experimental evaluation for a whole metabolite quantification approach

    NASA Astrophysics Data System (ADS)

    Martel, Dimitri; Tse Ve Koon, K.; Le Fur, Yann; Ratiney, Hélène

    2015-11-01

    Two-dimensional spectroscopy offers the possibility to unambiguously distinguish metabolites by spreading out the multiplet structure of J-coupled spin systems into a second dimension. Quantification methods that perform parametric fitting of the 2D MRS signal have recently been proposed for J-resolved PRESS (JPRESS) but not explicitly for Localized Correlation Spectroscopy (LCOSY). Here, through a whole metabolite quantification approach, the quantification performance of correlation spectroscopy is studied. The ability to quantify metabolite relaxation time constants is studied for three localized 2D MRS sequences (LCOSY, LCTCOSY, and JPRESS) in vitro on preclinical MR systems. The issues encountered during implementation and the quantification strategies are discussed with the help of the Fisher matrix formalism. The described parameterized models enable the computation of the lower bound on error variance, generally known as the Cramér-Rao bounds (CRBs), a standard of precision, for the parameters estimated from these 2D MRS signal fittings. LCOSY has a theoretical net signal loss of two per unit of acquisition time compared to JPRESS. A quick analysis might suggest that the relative CRBs of LCOSY compared to JPRESS (expressed as a percentage of the concentration values) should therefore be doubled, but we show that this is not necessarily true. Finally, the LCOSY quantification procedure has been applied to data acquired in vivo on a mouse brain.

  3. A nonparametric approach to calculate critical micelle concentrations: the local polynomial regression method.

    PubMed

    López Fontán, J L; Costa, J; Ruso, J M; Prieto, G; Sarmiento, F

    2004-02-01

    The application of a statistical method, the local polynomial regression method (LPRM), based on nonparametric estimation of the regression function, to determine the critical micelle concentration (cmc) is presented. The method is extremely flexible because it does not impose any parametric model on the underlying structure of the data but rather allows the data to speak for themselves. Good agreement of cmc values with those obtained by other methods was found for systems in which the variation of a measured physical property with concentration showed an abrupt change. When this variation was slow, discrepancies between the values obtained by LPRM and by other methods were found.
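
The slope-change idea behind the LPRM can be sketched as follows. This is illustrative only, not the authors' implementation: the Gaussian kernel, the bandwidth, and the synthetic surface-tension-like data are all assumptions.

```python
import numpy as np

def local_linear_fit(x, y, x0, h):
    """Weighted least-squares line around x0 (Gaussian kernel, bandwidth h).
    Returns the fitted value and slope at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta[0], beta[1]

def estimate_cmc(conc, prop, h):
    """Locate the concentration where the locally estimated slope changes fastest."""
    grid = np.linspace(conc.min(), conc.max(), 200)
    slopes = np.array([local_linear_fit(conc, prop, g, h)[1] for g in grid])
    return float(grid[np.argmax(np.abs(np.gradient(slopes, grid)))])

# Synthetic surface-tension-like data with an abrupt slope change at 8 mM
rng = np.random.default_rng(0)
c = np.linspace(1.0, 15.0, 60)
y = np.where(c < 8.0, 70.0 - 3.0 * (c - 1.0), 49.0) + rng.normal(0, 0.2, c.size)
cmc = estimate_cmc(c, y, h=1.0)
print(round(cmc, 1))
```

No parametric form for the property-vs-concentration curve is assumed; the break point is simply where the nonparametric slope estimate changes most rapidly.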

  4. Enzyme-labeled Antigen Method: Development and Application of the Novel Approach for Identifying Plasma Cells Locally Producing Disease-specific Antibodies in Inflammatory Lesions

    PubMed Central

    Mizutani, Yasuyoshi; Shiogama, Kazuya; Onouchi, Takanori; Sakurai, Kouhei; Inada, Ken-ichi; Tsutsumi, Yutaka

    2016-01-01

    In chronic inflammatory lesions of autoimmune and infectious diseases, plasma cells are frequently observed, but the antigens recognized by the antibodies they produce mostly remain unclear. A new technique for identifying these corresponding antigens may provide a breakthrough for understanding such diseases from a pathophysiological viewpoint, simply because the immunocytes are seen within the lesion. We have developed an enzyme-labeled antigen method for microscopic identification of the antigen recognized by specific antibodies locally produced by plasma cells in inflammatory lesions. First, biotinylated target antigens were constructed with a wheat germ cell-free protein synthesis system or through chemical biotinylation. Next, proteins reactive to antibodies in tissue extracts were screened and antibody titers were evaluated by the AlphaScreen method. Finally, with the enzyme-labeled antigen method using the biotinylated antigens as probes, plasma cells producing specific antibodies were microscopically localized in fixed frozen sections. Our novel approach visualized tissue plasma cells that produced 1) autoantibodies in rheumatoid arthritis, 2) antibodies against major antigens of Porphyromonas gingivalis in periodontitis or radicular cyst, and 3) antibodies against a carbohydrate antigen, Strep A, of Streptococcus pyogenes in recurrent tonsillitis. Evaluation of local specific antibody responses is expected to contribute to clarifying previously unknown processes in inflammatory disorders. PMID:27006517

  5. Speeding up local correlation methods

    SciTech Connect

    Kats, Daniel

    2014-12-28

    We present two techniques that can substantially speed up local correlation methods. The first allows one to avoid the expensive transformation of the electron-repulsion integrals from atomic orbitals to the virtual space. The second introduces an algorithm for the residual equations in the local perturbative treatment that, in contrast to the standard scheme, does not require holding the amplitudes or residuals in memory. It is shown that even an interpreter-based implementation of the proposed algorithm in the context of the local MP2 method is faster and requires less memory than highly optimized variants of the conventional algorithms.

  6. Local method for detecting communities

    NASA Astrophysics Data System (ADS)

    Bagrow, James P.; Bollt, Erik M.

    2005-10-01

    We propose a method of community detection that is computationally inexpensive and possesses physical significance to a member of a social network. This method is unlike many divisive and agglomerative techniques and is local in the sense that a community can be detected within a network without requiring knowledge of the entire network. A global application of this method is also introduced. Several artificial and real-world networks, including the famous Zachary karate club, are analyzed.
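
The local-expansion idea can be sketched as follows. This is a simplified greedy variant in the spirit of the abstract, not the authors' exact agglomeration algorithm, and the toy graph is an assumption.

```python
def local_community(adj, seed):
    """Greedy local community detection: grow from a seed node, repeatedly
    adding the boundary node that most improves the ratio of internal edges
    to all edges touching the community. Only the community's neighbourhood
    is ever inspected, so the whole network need not be known."""
    def quality(comm):
        internal = sum(1 for u in comm for v in adj[u] if v in comm) / 2
        external = sum(1 for u in comm for v in adj[u] if v not in comm)
        return internal / (internal + external) if internal + external else 0.0

    community = {seed}
    while True:
        frontier = {v for u in community for v in adj[u]} - community
        best = max(frontier, key=lambda v: quality(community | {v}), default=None)
        if best is None or quality(community | {best}) <= quality(community):
            return community
        community.add(best)

# Two triangles joined by a single bridge edge: 0-1-2 and 3-4-5, bridge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
community = local_community(adj, 0)
print(community)  # expected: the left triangle {0, 1, 2}
```

The stopping rule is local as well: expansion halts as soon as no boundary node improves the internal/external edge ratio, without ever touching the rest of the graph.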

  7. [Methods of recording local ERG].

    PubMed

    Shamshinova, A M; Govardovskiĭ, V I; Golubtsov, K V

    1989-01-01

    Two methods for recording the local retinal biopotential, with and without gaze fixation control, are described; they can be used to assess the status of any retinal or macular site. A ring-shaped suction electrode is employed in both methods: in one it is fitted with a transparent anterior window and an infrared-illuminated mirror, while in the other it incorporates a light-emitting diode and an optical system that creates a 6-20 degree stimulus. Theoretical analysis and experimental findings show that the minimal retinal area from which a local response fit for clinical purposes may be obtained is 10-15 degrees, with a stimulus 5 times brighter than the semisaturating brightness. The local response share in this case is 70% and the number of possible summations is 50. The results explaining the choice of conditions for recording the local retinal biopotential are presented. The suggested methods make it possible to assess the function of the macular area, distinguish between the functions of the cone and rod systems, record the total and Ganzfeld electroretinograms without electrode substitution, and record the macular biopotential in babies and in patients with nystagmus or with cataracts of various origins. PMID:2617751

  8. [Classification of local anesthesia methods].

    PubMed

    Petrikas, A Zh; Medvedev, D V; Ol'khovskaya, E B

    2016-01-01

    The traditional classification of dental local anesthesia methods must be modified. In this paper we show that the vascular mechanism is the leading component of spongy injection. It is necessary to take into account the high effectiveness and relative safety of spongy anesthesia, as well as its versatility, ease of implementation, and growing worldwide prevalence. The essence of the proposed modification is to divide the methods into diffusive (including surface, infiltration, and conduction anesthesia) and vascular-diffusive (including intraosseous, intraligamentary, intraseptal, and intrapulpal anesthesia). For the last four methods the common term «spongy (intraosseous) anesthesia» may be used. PMID:27636752

  9. Computational methods for global/local analysis

    NASA Technical Reports Server (NTRS)

    Ransom, Jonathan B.; Mccleary, Susan L.; Aminpour, Mohammad A.; Knight, Norman F., Jr.

    1992-01-01

    Computational methods for global/local analysis of structures, including both uncoupled and coupled methods, are described. In addition, a global/local analysis methodology for automatic refinement of incompatible global and local finite element models is developed. Representative structural analysis problems are presented to demonstrate the global/local analysis methods.

  10. EEG source localization: a neural network approach.

    PubMed

    Sclabassi, R J; Sonmez, M; Sun, M

    2001-07-01

    Functional activity in the brain is associated with the generation of currents and resultant voltages that may be observed on the scalp as the electroencephalogram. The current sources may be modeled as dipoles, whose properties may be studied by solving either the forward or the inverse problem. The forward problem uses a volume conductor model of the head, in which the potentials on the conductor surface are computed for an assumed current dipole at an arbitrary location, orientation, and strength. In the inverse problem, conversely, a current dipole, or a group of dipoles, is identified from the observed EEG. Both problems are typically solved by numerical procedures, such as a boundary element method and an optimization algorithm. These approaches are highly time-consuming and unsuitable for rapid evaluation of brain function. In this paper we present a different approach based on machine learning: we solve both problems using artificial neural networks that are trained off-line with back-propagation to learn the complex source-potential relationships of head volume conduction. Once trained, these networks are able to generalize their knowledge to localize functional activity within the brain in a computationally efficient manner.
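
A minimal sketch of the learning-based inverse solver, under strong simplifying assumptions: a fixed random linear "lead field" stands in for the BEM forward model, and a tiny hand-rolled network replaces the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
n_elec = 16

# Stand-in forward model: a fixed random "lead field" maps a dipole position
# to potentials at 16 electrodes (a real system would use a BEM solution).
L = rng.normal(size=(n_elec, 3))

# Training data: random dipole positions and their (noisy) scalp potentials
pos = rng.uniform(-1, 1, size=(2000, 3))
pot = pos @ L.T + rng.normal(0, 0.01, size=(2000, n_elec))

# One hidden layer, trained off-line with plain back-propagation
W1 = rng.normal(0, 0.3, size=(n_elec, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, size=(32, 3));      b2 = np.zeros(3)
lr, n = 0.05, len(pos)
for _ in range(2000):
    h = np.tanh(pot @ W1 + b1)
    out = h @ W2 + b2
    err = out - pos                    # mean-squared-error residual
    gh = (err @ W2.T) * (1 - h ** 2)   # back-propagate through tanh
    W2 -= lr * h.T @ err / n;  b2 -= lr * err.mean(0)
    W1 -= lr * pot.T @ gh / n; b1 -= lr * gh.mean(0)

# Once trained, "inverting" new data is a single cheap forward pass
test_pos = rng.uniform(-1, 1, size=(200, 3))
pred = np.tanh((test_pos @ L.T) @ W1 + b1) @ W2 + b2
rmse = float(np.sqrt(np.mean((pred - test_pos) ** 2)))
print(f"localization RMSE: {rmse:.3f}")
```

The expensive part (generating training pairs with the forward model and fitting the network) happens once, off-line; localization at run time avoids any iterative boundary-element or optimization solve.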

  11. Local electric dipole moments: A generalized approach.

    PubMed

    Groß, Lynn; Herrmann, Carmen

    2016-09-30

    We present an approach for calculating local electric dipole moments for fragments of molecular or supramolecular systems. This is important for understanding chemical gating and solvent effects in nanoelectronics, atomic force microscopy, and intensities in infrared spectroscopy. Owing to the nonzero partial charge of most fragments, "naively" defined local dipole moments are origin-dependent. Inspired by previous work based on Bader's atoms-in-molecules (AIM) partitioning, we derive a definition of fragment dipole moments which achieves origin-independence by relying on internal reference points. Instead of bond critical points (BCPs) as in existing approaches, we use as few reference points as possible, which are located between the fragment and the remainder(s) of the system and may be chosen based on chemical intuition. This allows our approach to be used with AIM implementations that circumvent the calculation of critical points for reasons of computational efficiency, for cases where no BCPs are found due to large interfragment distances, and with local partitioning schemes other than AIM which do not provide BCPs. It is applicable to both covalently and noncovalently bound systems. © 2016 Wiley Periodicals, Inc. PMID:27520590

  12. Method for localizing heating in tumor tissue

    DOEpatents

    Doss, James D.; McCabe, Charles W.

    1977-04-12

    A method for localized tissue heating of tumors is disclosed. Localized radio-frequency current fields are produced with specific electrode configurations. Several electrode configurations are disclosed, enabling variations in the electrical and thermal properties of tissues to be exploited.

  13. LOCALIZING THE RANGELAND HEALTH METHOD FOR SOUTHEASTERN ARIZONA

    EPA Science Inventory

    The interagency manual Interpreting Indicators of Rangeland Health, Version 4 (Technical Reference 1734-6) provides a method for making rangeland health assessments. The manual recommends that the rangeland health assessment approach be adapted to local conditions. This technica...

  14. Strain localization across main continental strike-slip shear zones : a multi-methods approach for the case of the Karakorum shear zone

    NASA Astrophysics Data System (ADS)

    Boutonnet, E.; Leloup, P. H.; Rozel, A.; Arnaud, N.; Paquette, J. L.

    2012-04-01

    Whether deformation within the deep continental crust is fundamentally concentrated in narrow shear zones or distributed across wide zones remains a major controversy in the Earth sciences. This is in part because direct measurements of ductile shear or strain rate are difficult, especially when deformation is intense, as is the case in ductile shear zones. The Pangong range (India) is an 8 km-wide shear zone corresponding to the exhumed root of the central Karakorum fault zone (KFZ), one of the great continental strike-slip faults of the India-Asia collision zone. Ductile deformation is most intense in the Tangtse and Muglib mylonitic strands, which bracket the shear zone to the SW and NE, respectively. The relationships between dyke emplacement ages (U/Pb dating) and deformation indicate that deformation was not synchronous across the shear zone. Ar/Ar dating documents that cooling was diachronous across strike and that ductile deformation (~300°C) stopped earlier in the SW than in the NE. Deformation thus appears to have migrated from the whole shear zone to, and localized in, the Muglib strand, the only locus showing evidence for brittle deformation and active faulting. We compared the strain rates measured at different spatial scales: (1) a global scale, investigated through the geological fault-rate estimation, and (2) a local scale, investigated with the QSR (quartz strain rate metry) method. The total offset (200-240 km) and the KFZ life span (18 to 25 Ma) yield an average fault rate of 1.1 ± 0.2 cm/yr. This corresponds to a global shear rate of 4.4 × 10^-14 s^-1, assuming homogeneous deformation in space and time within an 8 km-wide shear zone. Five quartz samples provided deformation temperatures between 348 and 428°C and corresponding paleo-stresses between 24 and 65 MPa. The local strain rates measured within the two mylonitic strands of the fault zone (> 1 × 10^-13 s^-1) are higher than those measured outside of these strands (≤ 1 × 10^-14 s^-1), where deformation is weaker

  15. Methods and strategies of object localization

    NASA Technical Reports Server (NTRS)

    Shao, Lejun; Volz, Richard A.

    1989-01-01

    An important capability of an intelligent robot is determining the location of an object in 3-D space. A general object localization system structure is proposed, some important issues in localization are discussed, and an overview is given of currently available object localization algorithms and systems. The algorithms reviewed are characterized by their feature extraction and matching strategies, their range-finding methods, the types of objects they can locate, and their mathematical formulations.

  16. Local fractal dimension based approaches for colonic polyp classification.

    PubMed

    Häfner, Michael; Tamaki, Toru; Tanaka, Shinji; Uhl, Andreas; Wimmer, Georg; Yoshida, Shigeto

    2015-12-01

    This work introduces texture analysis methods that are based on computing the local fractal dimension (LFD; also called the local density function) and applies them to colonic polyp classification. The methods are tested on 8 HD-endoscopic image databases, each acquired using a different imaging modality (Pentax's i-Scan technology combined with or without staining the mucosa), and on a zoom-endoscopic image database using narrow band imaging. In this paper, we present three novel extensions to an LFD based approach. These extensions additionally extract shape and/or gradient information from the image to enhance the discriminativity of the original approach. To compare the results of the LFD based approaches with those of other approaches, five state-of-the-art approaches for colonic polyp classification are applied to the employed databases. Experiments show that LFD based approaches are well suited for colonic polyp classification, especially the three proposed extensions, which are the best performing methods, or at least among the best performing methods, for each of the employed databases. The methods are additionally tested on a public texture image database, the UIUCtex database, with which the viewpoint invariance of the methods is assessed, an important feature for the employed endoscopic image databases. Results imply that most of the LFD based methods are more viewpoint invariant than the other methods. However, the shape, size and orientation adapted LFD approaches (which are especially designed to enhance viewpoint invariance) are in general not more viewpoint invariant than the other LFD based approaches.
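
A minimal sketch of a local-density-function estimate at a single pixel (illustrative only; the disc radii and the log-log fit are assumptions, not the authors' exact estimator):

```python
import numpy as np

def local_fractal_dimension(img, cy, cx, radii=(2, 4, 8, 16)):
    """Local density function at one pixel: count foreground mass inside
    discs of growing radius r and fit the slope of log(mass) vs log(r)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    dist = np.hypot(ys - cy, xs - cx)
    mass = np.array([img[dist <= r].sum() for r in radii], dtype=float)
    slope, _ = np.polyfit(np.log(radii), np.log(mass), 1)
    return float(slope)

# Sanity check: a solid 2-D region should give a dimension close to 2
filled = np.ones((64, 64), dtype=np.uint8)
dim = local_fractal_dimension(filled, 32, 32)
print(round(dim, 2))
```

Computing this slope at every pixel yields an LFD map; texture descriptors for classification are then built from the distribution of these per-pixel values.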

  17. Imaging Localized Prostate Cancer: Current Approaches and New Developments

    PubMed Central

    Turkbey, Baris; Albert, Paul S.; Kurdziel, Karen; Choyke, Peter L.

    2012-01-01

    OBJECTIVE Prostate cancer is the most common noncutaneous malignancy among men in the Western world. Imaging has recently become more important in the diagnosis, local staging, and treatment follow-up of prostate cancer. In this article, we review conventional and functional imaging methods as well as targeted imaging approaches with novel tracers used in the diagnosis and staging of prostate cancer. CONCLUSION Although prostate cancer is the second leading cause of cancer death in men, imaging of localized prostate cancer remains limited. Recent developments in imaging technologies, particularly MRI and PET, may lead to significant improvements in lesion detection and staging. PMID:19457807

  18. Approaches to local climate action in Colorado

    NASA Astrophysics Data System (ADS)

    Huang, Y. D.

    2011-12-01

    Though climate change is a global problem, its impacts are felt at the local scale; it follows that solutions must come at the local level. Fortunately, many cities and municipalities are implementing climate mitigation (or climate action) policies and programs. However, they face many procedural and institutional barriers, such as lack of expertise or data, limited human and financial resources, and lack of community engagement (Krause 2011). To address the first obstacle, thirteen in-depth case studies were conducted of successful model practices ("best practices") in climate action programs carried out by various cities, counties, and organizations in Colorado, and one outside Colorado, and developed into "how-to guides" for other municipalities to use. Research was conducted by reading documents (e.g. annual reports, community guides, city websites), through email correspondence with program managers and city officials, and via phone interviews. The information gathered was then compiled into a series of reports containing a narrative description of each initiative; an overview of the plan elements (target audience and goals); implementation strategies and any indicators of success to date (e.g. GHG emissions reductions, cost savings); and the adoption or approval process, as well as community engagement efforts and marketing or messaging strategies. The types of programs covered were energy action plans, energy efficiency programs, renewable energy programs, and transportation and land use programs. Across the thirteen case studies, there was a range of approaches to implementing local climate action programs, examined along two dimensions: focus on climate change (whether it was direct/explicit or indirect/implicit) and extent of government authority. This benchmarking exercise affirmed the conventional wisdom propounded by Pitt (2010), that peer pressure (that is, the presence of neighboring jurisdictions with climate initiatives), the level of

  19. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target location. However, DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization, without DCE, is investigated, and the influence of BRS estimation error on localization accuracy is analysed. First, using the information of each transmitter and receiver (T/R) pair and of the target in the SAR image, the model functions of the T/R pairs are constructed; each model function's maximum lies on the circumference of the ellipse that is the iso-range contour for its T/R pair. Second, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Third, the target function is optimized by gradient descent to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and to improve the computational efficiency. The proposed method uses only the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by a simulation experiment. PMID:26566031
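
The construction can be sketched as a toy 2-D example: Gaussian model functions peak on each iso-range ellipse, and a numerical-gradient ascent stands in for the paper's scheme. The geometry, the width parameter, and the step sizes below are assumptions.

```python
import numpy as np

def brs(p, tx, rx):
    """Bistatic range sum: transmitter -> point p -> receiver."""
    return np.linalg.norm(p - tx) + np.linalg.norm(p - rx)

def target_function(p, pairs, meas, sigma=20.0):
    """Sum of per-pair model functions, each peaking on its iso-range ellipse."""
    return sum(np.exp(-(brs(p, tx, rx) - m) ** 2 / (2 * sigma ** 2))
               for (tx, rx), m in zip(pairs, meas))

def localize(pairs, meas, p0, steps=2000, lr=5.0, eps=1e-4):
    """Gradient ascent on the summed target function (a numerical gradient is
    used here for brevity; the paper uses an analytic scheme with PCA)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(steps):
        g = np.array([(target_function(p + d, pairs, meas) -
                       target_function(p - d, pairs, meas)) / (2 * eps)
                      for d in (np.array([eps, 0.0]), np.array([0.0, eps]))])
        p += lr * g
    return p

# Hypothetical geometry: three T/R pairs observing a target at (120, 80)
truth = np.array([120.0, 80.0])
pairs = [(np.array([0.0, 0.0]),     np.array([300.0, 0.0])),
         (np.array([0.0, 200.0]),   np.array([300.0, 200.0])),
         (np.array([150.0, -50.0]), np.array([-50.0, 100.0]))]
meas = [brs(truth, tx, rx) for tx, rx in pairs]  # noise-free BRS measurements
est = localize(pairs, meas, p0=[100.0, 100.0])
print(np.round(est, 1))
```

Each ellipse alone only constrains the target to a curve; the summed function turns the intersection of the ellipses into a single maximum that the ascent can climb toward.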

  1. Brain source localization based on fast fully adaptive approach.

    PubMed

    Ravan, Maryam; Reilly, James P

    2012-01-01

    In the electroencephalogram (EEG) or magnetoencephalogram (MEG) context, brain source localization (beamforming) methods often fail when the number of observations is small. This is particularly true when measuring evoked potentials, especially when the number of electrodes is large. Because of the nonstationarity of the EEG/MEG, an adaptive capability is desirable. Previous work has addressed these issues by reducing the adaptive degrees of freedom (DoFs). This paper develops and tests a new multistage adaptive processing scheme for brain source localization that has previously been used in radar statistical signal processing with a uniform linear antenna array. This processing, referred to as the fast fully adaptive (FFA) approach, can significantly reduce the required sample support and computational complexity while still processing all available DoFs. The performance improvement offered by the FFA approach over fully adaptive minimum variance beamforming (MVB) with limited data is demonstrated by bootstrapping simulated data to evaluate the variability of the source location.
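
For context, the fully adaptive minimum variance (Capon) beamformer that FFA is compared against can be sketched for a uniform linear array. This is a generic radar-style toy, not the authors' EEG/MEG setup; the array size, source angle, and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
M, d = 8, 0.5                      # 8 elements, half-wavelength spacing

def steering(theta):
    """Array response of a uniform linear array for direction theta."""
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

# Simulated snapshots: one unit-power source at 20 degrees plus weak noise
theta_true = np.deg2rad(20.0)
snaps = 200
s = (rng.normal(size=snaps) + 1j * rng.normal(size=snaps)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=(M, snaps)) + 1j * rng.normal(size=(M, snaps)))
X = np.outer(steering(theta_true), s) + noise

# Fully adaptive MVB: Capon spectrum P(theta) = 1 / (a^H R^-1 a)
R = X @ X.conj().T / snaps
Rinv = np.linalg.inv(R + 1e-6 * np.eye(M))   # light diagonal loading
angles = np.deg2rad(np.linspace(-90, 90, 361))
P = np.array([1.0 / np.real(steering(a).conj() @ Rinv @ steering(a))
              for a in angles])
est_deg = float(np.rad2deg(angles[np.argmax(P)]))
print(f"estimated direction: {est_deg:.1f} deg")
```

The weakness the FFA approach targets is visible here: estimating and inverting the full M x M covariance R requires sample support that grows with the number of channels, which is exactly what is scarce in evoked-potential recordings.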

  2. Emergency local searching approach for job shop scheduling

    NASA Astrophysics Data System (ADS)

    Zhao, Ning; Chen, Siyu; Du, Yanhua

    2013-09-01

    Existing methods of local search mostly focus on how to reach the optimal solution. However, in some emergency situations, search time is a hard constraint for the job shop scheduling problem while an optimal solution is not necessary. In this situation, existing local search methods are not fast enough. This paper presents an emergency local search (ELS) approach that can reach a feasible and nearly optimal solution within a limited search time. The ELS approach, desirable for the aforementioned emergency situations where search time is limited and a nearly optimal solution is sufficient, consists of three phases. First, in order to reach a feasible and nearly optimal solution, infeasible solutions are repaired, and a repair technique named group repair is proposed. Second, in order to save time, the number of local search moves needs to be reduced; this is achieved by a fast search method named critical path search (CPS). Finally, because CPS sometimes stops at a solution far from the optimal one, a jump technique based on the critical part is used to let CPS escape such search dilemmas. Furthermore, a scheduling system based on ELS has been developed, and experiments with this system were completed on an Intel Pentium(R) 2.93 GHz computer. The experimental results show that optimal solutions of small-scale instances are reached in 2 s, and nearly optimal solutions of large-scale instances are reached in 4 s. The proposed ELS approach stably reaches nearly optimal solutions within a manageable search time and can be applied in emergency situations.

  3. Optic disk localization by a robust fusion method

    NASA Astrophysics Data System (ADS)

    Zhang, Jielin; Yin, Fengshou; Wong, Damon W. K.; Liu, Jiang; Baskaran, Mani; Cheng, Ching-Yu; Wong, Tien Yin

    2013-02-01

    Optic disk localization plays an important role in developing computer-aided diagnosis (CAD) systems for ocular diseases such as glaucoma, diabetic retinopathy, and age-related macular degeneration. In this paper, we propose an intelligent fusion of methods for localizing the optic disk in retinal fundus images. Three different approaches are developed to detect the location of the optic disk separately. The first is the maximum vessel crossing method, which finds the region with the greatest number of blood vessel crossing points. The second is the multichannel thresholding method, targeting the area with the highest intensity. The third searches the vertical and horizontal regions of interest separately on the basis of blood vessel structure and the neighborhood entropy profile. Finally, these three methods are combined using an intelligent fusion method to improve the overall accuracy. The proposed algorithm was tested on the STARE database and the ORIGAlight database, each consisting of images with various pathologies. The preliminary result on the STARE database reaches 81.5%, while a higher result of 99% is obtained on the ORIGAlight database. The proposed method outperforms each individual approach as well as a state-of-the-art method that utilizes an intensity-based approach. The results demonstrate a high potential for this method to be used in retinal CAD systems.

  4. System and method for object localization

    NASA Technical Reports Server (NTRS)

    Kelly, Alonzo J. (Inventor); Zhong, Yu (Inventor)

    2005-01-01

    A computer-assisted method for localizing a rack, including sensing an image of the rack, detecting line segments in the sensed image, recognizing a candidate arrangement of line segments in the sensed image indicative of a predetermined feature of the rack, generating a matrix of correspondence between the candidate arrangement of line segments and an expected position and orientation of the predetermined feature of the rack, and estimating a position and orientation of the rack based on the matrix of correspondence.

  5. Source Localization using Stochastic Approximation and Least Squares Methods

    SciTech Connect

    Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis

    2009-03-05

    This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate the source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
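
A hedged sketch of the least-squares half of the comparison. The inverse-square plume model and the derivative-free pattern search (used in place of a gradient solver) are assumptions, not the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(7)
sensors = rng.uniform(0.0, 100.0, size=(25, 2))
source = np.array([62.0, 38.0])

def model(src):
    """Hypothetical plume model: concentration decays with squared distance."""
    return 1e3 / (np.sum((sensors - src) ** 2, axis=1) + 10.0)

readings = model(source) + rng.normal(0, 0.05, size=25)   # noisy measurements

def residual(src):
    """Least-squares objective: misfit between modeled and measured levels."""
    return float(np.sum((model(src) - readings) ** 2))

# Nonlinear least squares via coarse grid search plus local refinement
est = min((np.array([x, y], dtype=float) for x in range(0, 101, 5)
           for y in range(0, 101, 5)), key=residual)
step = 2.5
while step > 1e-3:
    moves = [est + step * np.array(d) for d in ((1, 0), (-1, 0), (0, 1), (0, -1))]
    best = min(moves, key=residual)
    if residual(best) < residual(est):
        est = best
    else:
        step /= 2
print(np.round(est, 2))
```

An SA method would instead update the estimate directly from noisy measurements with a decaying step size, avoiding repeated evaluation of the full residual; that robustness to noise is the advantage the abstract reports.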

  6. Invariant current approach to wave propagation in locally symmetric structures

    NASA Astrophysics Data System (ADS)

    Zampetakis, V. E.; Diakonou, M. K.; Morfonios, C. V.; Kalozoumis, P. A.; Diakonos, F. K.; Schmelcher, P.

    2016-05-01

    A theory for wave mechanical systems with local inversion and translation symmetries is developed employing the two-dimensional solution space of the stationary Schrödinger equation. The local symmetries of the potential are encoded into corresponding local basis vectors in terms of symmetry-induced two-point invariant currents which map the basis amplitudes between symmetry-related points. A universal wavefunction structure in locally symmetric potentials is revealed, independently of the physical boundary conditions, by using special local bases which are adapted to the existing local symmetries. The local symmetry bases enable efficient computation of spatially resolved wave amplitudes in systems with arbitrary combinations of local inversion and translation symmetries. The approach opens the perspective of a flexible analysis and control of wave localization in structurally complex systems.

  7. Meshless Local Petrov-Galerkin Method for Bending Problems

    NASA Technical Reports Server (NTRS)

    Phillips, Dawn R.; Raju, Ivatury S.

    2002-01-01

    Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams (C1 problems). A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available in order to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and for continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable to the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moments and shear forces without the need for elaborate post-processing techniques.

  8. Improving mobile robot localization: grid-based approach

    NASA Astrophysics Data System (ADS)

    Yan, Junchi

    2012-02-01

    Autonomous mobile robots have been widely studied not only as advanced facilities for industrial and daily-life automation, but also as testbeds in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is expected to navigate on a floor with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors mounted under the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to rectify the cumulative localization deviation of inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
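The drift-correction idea can be shown in miniature: when a floor sensor fires on a grid line, the coordinate perpendicular to that line is known to lie on a multiple of the tile pitch, so the dead-reckoned estimate can be snapped there. A toy sketch (the pitch value and function below are hypothetical, not from the paper):

```python
def snap_to_grid(dead_reckoned_coord, grid_pitch=0.5):
    """On a grid-crossing event, the true coordinate perpendicular to the
    detected line is a multiple of the tile pitch; snapping the inertial
    estimate to the nearest multiple cancels accumulated drift."""
    return round(dead_reckoned_coord / grid_pitch) * grid_pitch

# Inertial estimate has drifted to 1.47 m when a line crossing is detected:
print(snap_to_grid(1.47))  # -> 1.5
```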

  9. Using the Storypath Approach to Make Local Government Understandable

    ERIC Educational Resources Information Center

    McGuire, Margit E.; Cole, Bronwyn

    2008-01-01

    Learning about local government seems boring and irrelevant to most young people, particularly to students from high-poverty backgrounds. The authors explore a promising approach for solving this problem, Storypath, which engages students in authentic learning and active citizenship. The Storypath approach is based on a narrative in which students…

  10. Locally Compact Quantum Groups. A von Neumann Algebra Approach

    NASA Astrophysics Data System (ADS)

    Van Daele, Alfons

    2014-08-01

    In this paper, we give an alternative approach to the theory of locally compact quantum groups, as developed by Kustermans and Vaes. We start with a von Neumann algebra and a comultiplication on this von Neumann algebra. We assume that there exist faithful left and right Haar weights. Then we develop the theory within this von Neumann algebra setting. In [Math. Scand. 92 (2003), 68-92] locally compact quantum groups are also studied in the von Neumann algebraic context. This approach is independent of the original C^*-algebraic approach in the sense that the earlier results are not used. However, this paper is not really independent because for many proofs, the reader is referred to the original paper where the C^*-version is developed. In this paper, we give a completely self-contained approach. Moreover, at various points, we do things differently. We have a different treatment of the antipode. It is similar to the original treatment in [Ann. Sci. École Norm. Sup. (4) 33 (2000), 837-934]. But together with the fact that we work in the von Neumann algebra framework, it allows us to use an idea from [Rev. Roumaine Math. Pures Appl. 21 (1976), 1411-1449] to obtain the uniqueness of the Haar weights in an early stage. We take advantage of this fact when deriving the other main results in the theory. We also give a slightly different approach to duality. Finally, we collect, in a systematic way, several important formulas. In an appendix, we indicate very briefly how the C^*-approach and the von Neumann algebra approach eventually yield the same objects. The passage from the von Neumann algebra setting to the C^*-algebra setting is more or less standard. For the other direction, we use a new method. It is based on the observation that the Haar weights on the C^*-algebra extend to weights on the double dual with central support and that all these supports are the same. Of course, we get the von Neumann algebra by cutting down the double dual with this unique

  11. A Tomographic Method for the Reconstruction of Local Probability Density Functions

    NASA Technical Reports Server (NTRS)

    Sivathanu, Y. R.; Gore, J. P.

    1993-01-01

    A method of obtaining the probability density function (PDF) of local properties from path integrated measurements is described. The approach uses a discrete probability function (DPF) method to infer the PDF of the local extinction coefficient from measurements of the PDFs of the path integrated transmittance. The local PDFs obtained using the method are compared with those obtained from direct intrusive measurements in propylene/air and ethylene/air diffusion flames. The results of this comparison are good.
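The deterministic relation underlying the inversion is the Beer-Lambert law: the path-integrated transmittance over the first n segments is T_n = exp(-sum_{i<=n} k_i Δs), so successive log-ratios recover the local extinction coefficients. A sketch of this deterministic core (the DPF machinery that propagates probability distributions of T is not reproduced here):

```python
import numpy as np

def local_extinction(transmittances, ds):
    """Recover per-segment extinction coefficients k_i from nested
    path-integrated transmittances T_n = exp(-sum_{i<=n} k_i * ds)."""
    # Prepend T_0 = 1 (zero path length), then difference the logs.
    logT = np.log(np.concatenate(([1.0], transmittances)))
    return -np.diff(logT) / ds

k_true = np.array([0.2, 0.5, 0.1])           # local extinction per segment
ds = 1.0
T = np.exp(-np.cumsum(k_true * ds))          # simulated path-integrated data
print(local_extinction(T, ds))               # -> [0.2 0.5 0.1]
```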

  12. Performance of FFT methods in local gravity field modelling

    NASA Technical Reports Server (NTRS)

    Forsberg, Rene; Solheim, Dag

    1989-01-01

    Fast Fourier transform (FFT) methods provide a fast and efficient means of processing large amounts of gravity or geoid data in local gravity field modelling. The FFT methods, however, have a number of theoretical and practical limitations, especially the use of the flat-earth approximation and the requirement for gridded data. In spite of this, the methods often yield excellent results in practice when compared to other, more rigorous (and computationally expensive) methods, such as least-squares collocation. The good performance of the FFT methods illustrates that the theoretical approximations are offset by the capability of taking into account more data in larger areas, which is especially important for geoid predictions. For best results good data gridding algorithms are essential. In practice truncated collocation approaches may be used. For large areas at high latitudes the gridding must be done using suitable map projections such as UTM, to avoid trivial errors caused by the meridian convergence. The FFT methods are compared to ground truth data in New Mexico (xi, eta from delta g), Scandinavia (N from delta g, the geoid fits to 15 cm over 2000 km), and areas of the Atlantic (delta g from satellite altimetry using Wiener filtering). In all cases the FFT methods yield results comparable or superior to other methods.
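The computational advantage comes from evaluating gridded convolution sums spectrally: by the convolution theorem, an O(n^2) direct sum becomes an O(n log n) FFT product. A 1D toy comparison (the decaying kernel is invented for illustration, not a gravimetric kernel):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
dg = rng.standard_normal(n)            # gridded gravity anomalies (synthetic)
kernel = 1.0 / (1.0 + np.arange(n))    # decaying kernel (illustrative only)

# Direct O(n^2) circular convolution:
direct = np.array([sum(kernel[(i - j) % n] * dg[j] for j in range(n))
                   for i in range(n)])

# O(n log n) evaluation of the same sum via the convolution theorem:
via_fft = np.real(np.fft.ifft(np.fft.fft(kernel) * np.fft.fft(dg)))

print(np.max(np.abs(direct - via_fft)))  # agreement to machine precision
```

In practice the periodicity implied by the FFT is handled by zero-padding the data grid, which is one of the practical details the flat-earth methods must get right.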

  13. The Local Variational Multiscale Method for Turbulence Simulation.

    SciTech Connect

    Collis, Samuel Scott; Ramakrishnan, Srinivas

    2005-05-01

    Accurate and efficient turbulence simulation in complex geometries is a formidable challenge. Traditional methods are often limited by low accuracy and/or restrictions to simple geometries. We explore the merger of Discontinuous Galerkin (DG) spatial discretizations with Variational Multi-Scale (VMS) modeling, termed Local VMS (LVMS), to overcome these limitations. DG spatial discretizations support arbitrarily high-order accuracy on unstructured grids amenable to complex geometries. Furthermore, the high-order, hierarchical representation within DG provides a natural framework for a priori scale separation crucial for VMS implementation. We show that the combined benefits of DG and VMS within the LVMS method lead to a promising new approach to LES for use in complex geometries. The efficacy of LVMS for turbulence simulation is assessed by application to fully-developed turbulent channel flow. First, a detailed spatial resolution study is undertaken to record the effects of the DG discretization on turbulence statistics. Here, the local hp-refinement capabilities of DG are exploited to obtain reliable low-order statistics efficiently. Likewise, resolution guidelines for simulating wall-bounded turbulence using DG are established. We also explore the influence of enforcing Dirichlet boundary conditions indirectly through numerical fluxes in DG, which allows the solution to jump (slip) at the channel walls. These jumps are effective in simulating the influence of the wall commensurate with the local resolution, and this feature of DG is effective in mitigating near-wall resolution requirements. In particular, we show that by locally modifying the numerical viscous flux used at the wall, we are able to regulate the near-wall slip through a penalty that leads to improved shear-stress predictions. This work demonstrates the potential of the numerical viscous flux to act as a numerically consistent wall-model, and this success warrants future research. As in any high-order numerical method some

  14. Mixed Methods Approaches in Family Science Research

    ERIC Educational Resources Information Center

    Plano Clark, Vicki L.; Huddleston-Casas, Catherine A.; Churchill, Susan L.; Green, Denise O'Neil; Garrett, Amanda L.

    2008-01-01

    The complex phenomena of interest to family scientists require the use of quantitative and qualitative approaches. Researchers across the social sciences are now turning to mixed methods designs that combine these two approaches. Mixed methods research has great promise for addressing family science topics, but only if researchers understand the…

  15. Identity, Intersectionality, and Mixed-Methods Approaches

    ERIC Educational Resources Information Center

    Harper, Casandra E.

    2011-01-01

    In this article, the author argues that current strategies to study and understand students' identities fall short of fully capturing their complexity. A multi-dimensional perspective and a mixed-methods approach can reveal nuance that is missed with current approaches. The author offers an illustration of how mixed-methods research can promote a…

  16. Developmental differences in auditory detection and localization of approaching vehicles.

    PubMed

    Barton, Benjamin K; Lew, Roger; Kovesdi, Casey; Cottrell, Nicholas D; Ulrich, Thomas

    2013-04-01

    Pedestrian safety is a significant problem in the United States, with thousands being injured each year. Multiple risk factors exist, but one poorly understood factor is pedestrians' ability to attend to vehicles using auditory cues. Auditory information in the pedestrian setting is increasing in importance with the growing number of quieter hybrid and all-electric vehicles on America's roadways that do not emit sound cues pedestrians expect from an approaching vehicle. Our study explored developmental differences in pedestrians' detection and localization of approaching vehicles. Fifty children ages 6-9 years, and 35 adults participated. Participants' performance varied significantly by age, and with increasing speed and direction of the vehicle's approach. Results underscore the importance of understanding children's and adults' use of auditory cues for pedestrian safety and highlight the need for further research.

  18. Dual mode stereotactic localization method and application

    DOEpatents

    Keppel, Cynthia E.; Barbosa, Fernando Jorge; Majewski, Stanislaw

    2002-01-01

    The invention described herein combines the structural digital X-ray image provided by conventional stereotactic core biopsy instruments with the additional functional metabolic gamma imaging obtained with a dedicated compact gamma imaging mini-camera. Before the procedure, the patient is injected with an appropriate radiopharmaceutical. The radiopharmaceutical uptake distribution within the breast under compression in a conventional examination table expressed by the intensity of gamma emissions is obtained for comparison (co-registration) with the digital mammography (X-ray) image. This dual modality mode of operation greatly increases the functionality of existing stereotactic biopsy devices by yielding a much smaller number of false positives than would be produced using X-ray images alone. The ability to obtain both the X-ray mammographic image and the nuclear-based medicine gamma image using a single device is made possible largely through the use of a novel, small and movable gamma imaging camera that permits its incorporation into the same table or system as that currently utilized to obtain X-ray based mammographic images for localization of lesions.

  19. New Methods for Crafting Locally Decision-Relevant Scenarios

    NASA Astrophysics Data System (ADS)

    Lempert, R. J.

    2015-12-01

    Scenarios can play an important role in helping decision makers to imagine future worlds, both good and bad, different from the one with which we are familiar, and to take concrete steps now to address the risks generated by climate change. At their best, scenarios can effectively represent deep uncertainty; integrate over multiple domains; and enable parties with different expectations and values to expand the range of futures they consider, to see the world from different points of view, and to grapple seriously with the potential implications of surprising or inconvenient futures. These attributes of scenario processes can prove crucial in helping craft effective responses to climate change. But traditional scenario methods can also fail to overcome difficulties related to choosing, communicating, and using scenarios to identify, evaluate, and reach consensus on appropriate policies. Such challenges can limit scenarios' impact in broad public discourse. This talk will demonstrate how new decision support approaches can employ new quantitative tools that allow scenarios to emerge from a process of deliberation with analysis among stakeholders, rather than serve as inputs to it, thereby increasing the impacts of scenarios on decision making. This talk will demonstrate these methods in the design of a decision support tool to help residents of low-lying coastal cities grapple with the long-term risks of sea level rise. In particular, this talk will show how information from the IPCC SSPs can be combined with local information to provide a rich set of locally decision-relevant information.

  20. Localized Multi-Reference Approach for Mixed-Valence Systems

    NASA Astrophysics Data System (ADS)

    Helal, W.; Evangelisti, S.; Leininger, T.; Maynau, D.

    2008-09-01

    The electronic structure and some important intra-molecular charge transfer parameters were investigated at the CAS-SCF, MRCI, CAS+S, and multi-reference localization levels of theory for purely organic mixed-valence molecules. In particular, a spiro cation has been taken as a model system. The potential energy surfaces of the ground and the lower three excited electronic states have been computed within a two-state model, at the CAS-SCF level using a TZP basis for the spiro cation, and an adiabatic double-well potential has been obtained for the ground electronic state. Our analysis of the geometry along the reaction coordinate indicates that the spiro cation is a valence-trapped bistable system. The effect of non-dynamical correlation, using a localized orbital approach, was found to be crucial for a quantitative description of the electronic structure and some important electron transfer parameters of these organic mixed-valence systems.

  1. Global/local methods research using the CSM testbed

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. Hayden, Jr.; Thompson, Danniella M.

    1990-01-01

    Research activities in global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  2. Topological rearrangements and local search method for tandem duplication trees.

    PubMed

    Bertrand, Denis; Gascuel, Olivier

    2005-01-01

    The problem of reconstructing the duplication history of a set of tandemly repeated sequences was first introduced by Fitch. Many recent studies deal with this problem, showing the validity of the unequal recombination model proposed by Fitch, describing numerous inference algorithms, and exploring the combinatorial properties of these new mathematical objects, which are duplication trees. In this paper, we deal with the topological rearrangement of these trees. Classical rearrangements used in phylogeny (NNI, SPR, TBR, ...) cannot be applied directly to duplication trees. We show that restricting the neighborhood defined by the SPR (Subtree Pruning and Regrafting) rearrangement to valid duplication trees allows exploring the whole duplication tree space. We use these restricted rearrangements in a local search method which improves an initial tree via successive rearrangements. This method is applied to the optimization of parsimony and minimum evolution criteria. We show through simulations that this method improves all existing programs for both reconstructing the topology of the true tree and recovering its duplication events. We apply this approach to tandemly repeated human Zinc finger genes and observe that a much better duplication tree is obtained by our method than using any other program.
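The search strategy is the classic greedy hill-climbing loop with the candidate neighborhood filtered by a validity test. A schematic sketch (the tree encoding, neighbor moves, and score below are toy stand-ins, not the paper's SPR moves or parsimony criterion):

```python
def local_search(start, neighbors, is_valid, score):
    """Greedy local search over a restricted neighborhood: only
    candidates passing `is_valid` (in the paper, valid duplication
    trees) are considered when trying to lower the score."""
    current, best = start, score(start)
    improved = True
    while improved:
        improved = False
        for cand in neighbors(current):
            if is_valid(cand) and score(cand) < best:
                current, best = cand, score(cand)
                improved = True
                break           # restart from the improved solution
    return current, best

# Toy stand-in: "trees" are integers, neighbors are +/-2 steps,
# validity keeps the search on even numbers, score is distance to 10.
result = local_search(4,
                      neighbors=lambda t: [t - 2, t + 2],
                      is_valid=lambda t: t % 2 == 0,
                      score=lambda t: abs(t - 10))
print(result)  # -> (10, 0)
```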

  3. A Locally-Exact Homogenization Approach for Periodic Heterogeneous Materials

    SciTech Connect

    Drago, Anthony S.; Pindera, Marek-Jerzy

    2008-02-15

    Elements of the homogenization theory are utilized to develop a new micromechanics approach for unit cells of periodic heterogeneous materials based on locally-exact elasticity solutions. Closed-form expressions for the homogenized moduli of unidirectionally-reinforced heterogeneous materials are obtained in terms of Hill's strain concentration matrices valid under arbitrary combined loading, which yield the homogenized Hooke's law. Results for simple unit cells with off-set fibers, which require the use of periodic boundary conditions, are compared with corresponding finite-element results demonstrating excellent correlation.

  4. An Update on Modern Approaches to Localized Esophageal Cancer

    PubMed Central

    Welsh, James; Amini, Arya; Likhacheva, Anna; Erasmus, Jeremy; Gomez, Daniel; Davila, Marta; Mehran, Reza J; Komaki, Ritsuko; Liao, Zhongxing; Hofstetter, Wayne L; Bhutani, Manoop; Ajani, Jaffer A

    2014-01-01

    Esophageal cancer treatment continues to be a topic of wide debate. Based on improvements in chemotherapy drugs, surgical techniques, and radiotherapy advances, esophageal cancer treatment approaches are becoming more specific to the stage of the tumor and the overall performance status of the patient. While surgery continues to be the standard treatment option for localized disease, the current direction favors multimodality treatment including both radiation and chemotherapy with surgery. In the next few years, we will continue to see improvements in radiation techniques and proton treatment, with more minimally invasive surgical approaches minimizing postoperative side effects, and the discovery of molecular biomarkers to help deliver more specifically targeted medication to treat esophageal cancers. PMID:21365188

  5. Reactive Gas transport in soil: Kinetics versus Local Equilibrium Approach

    NASA Astrophysics Data System (ADS)

    Geistlinger, Helmut; Jia, Ruijan

    2010-05-01

    Gas transport through the unsaturated soil zone was studied using an analytical solution of the gas transport model that is mathematically equivalent to the Two-Region model. The gas transport model includes diffusive and convective gas fluxes, interphase mass transfer between the gas and water phase, and biodegradation. The influence of non-equilibrium phenomena, spatially variable initial conditions, and transient boundary conditions are studied. The objective of this paper is to compare the kinetic approach for interphase mass transfer with the standard local equilibrium approach and to find conditions and time-scales under which the local equilibrium approach is justified. The time-scale of investigation was limited to the day-scale, because this is the relevant scale for understanding gas emission from the soil zone with transient water saturation. For the first time a generalized mass transfer coefficient is proposed that justifies the often used steady-state Thin-Film mass transfer coefficient for small and medium water-saturated aggregates of about 10 mm. The main conclusion from this study is that non-equilibrium mass transfer depends strongly on the temporal and small-scale spatial distribution of water within the unsaturated soil zone. For regions with low water saturation and small water-saturated aggregates (radius about 1 mm) the local equilibrium approach can be used as a first approximation for diffusive gas transport. For higher water saturation and medium radii of water-saturated aggregates (radius about 10 mm) and for convective gas transport, the non-equilibrium effect becomes more and more important if the hydraulic residence time and the Damköhler number decrease. Relative errors can range up to 100% and more. While for medium radii the local equilibrium approach describes the main features both of the spatial concentration profile and the time-dependence of the emission rate, it fails completely for larger aggregates (radius about 100 mm

  6. Comparison of local grid refinement methods for MODFLOW

    USGS Publications Warehouse

    Mehl, S.; Hill, M.C.; Leake, S.A.

    2006-01-01

    Many ground water modeling efforts use a finite-difference method to solve the ground water flow equation, and many of these models require a relatively fine-grid discretization to accurately represent the selected process in limited areas of interest. Use of a fine grid over the entire domain can be computationally prohibitive; using a variably spaced grid can lead to cells with a large aspect ratio and refinement in areas where detail is not needed. One solution is to use local-grid refinement (LGR) whereby the grid is only refined in the area of interest. This work reviews some LGR methods and identifies advantages and drawbacks in test cases using MODFLOW-2000. The first test case is two dimensional and heterogeneous; the second is three dimensional and includes interaction with a meandering river. Results include simulations using a uniform fine grid, a variably spaced grid, a traditional method of LGR without feedback, and a new shared node method with feedback. Discrepancies from the solution obtained with the uniform fine grid are investigated. For the models tested, the traditional one-way coupled approaches produced discrepancies in head up to 6.8% and discrepancies in cell-to-cell fluxes up to 7.1%, while the new method has head and cell-to-cell flux discrepancies of 0.089% and 0.14%, respectively. Additional results highlight the accuracy, flexibility, and CPU time trade-off of these methods and demonstrate how the new method can be successfully implemented to model surface water-ground water interactions. Copyright © 2006 The Author(s).

  7. Local knowledge in community-based approaches to medicinal plant conservation: lessons from India

    PubMed Central

    Shukla, Shailesh; Gardner, James

    2006-01-01

    Background Community-based approaches to conservation of natural resources, in particular medicinal plants, have attracted the attention of governments, non-governmental organizations, and international funding agencies. This paper highlights the community-based approaches used by an Indian NGO, the Rural Communes Medicinal Plant Conservation Centre (RCMPCC). The RCMPCC recognized and legitimized the role of local medicinal knowledge along with other knowledge systems to a wider audience, i.e. higher levels of government. Methods Besides a review of relevant literature, the research used a variety of qualitative techniques, such as semi-structured, in-depth interviews and participant observations in one of the project sites of RCMPCC. Results The review of local medicinal plant knowledge systems reveals that even though medicinal plants and associated knowledge systems (particularly local knowledge) are gaining wider recognition at the global level, the efforts to recognize and promote the un-codified folk systems of medicinal knowledge are still inadequate. In a country like India, such neglect is evident in the lack of legal recognition and supporting policies. On the other hand, community-based approaches like local healers' workshops or village biologist programs implemented by RCMPCC are useful in combining both local (folk and codified) and formal systems of medicine. Conclusion Despite the high reliance on local medicinal knowledge systems for health needs in India, the formal policies and national support structures are inadequate for traditional systems of medicine and almost absent for folk medicine. On the other hand, NGOs like the RCMPCC have demonstrated that community-based and local approaches such as local healers' workshops and village biologist programs can synergistically forge linkages between local knowledge and the formal sciences (in this case botany and ecology) and generate positive impacts at various levels. PMID:16603082

  8. Method of calculating local dispersion in arbitrary photonic crystal waveguides.

    PubMed

    Dastmalchi, Babak; Mohtashami, Abbas; Hingerl, Kurt; Zarbakhsh, Javad

    2007-10-15

    We introduce a novel method to calculate the local dispersion relation in photonic crystal waveguides, based on finite-difference time-domain simulation and the filter diagonalization method (FDM). Compared with the spatial Fourier transform (SFT) method, the FDM-based calculations yield considerably more localized and pronounced dispersion results. For the first time to our knowledge, the presented numerical technique allows comparing the dispersion in straight and bent waveguides.

  9. Dense Stereo Matching Method Based on Local Affine Model.

    PubMed

    Li, Jie; Shi, Wenxuan; Deng, Dexiang; Jia, Wenyan; Sun, Mingui

    2013-07-01

    A new method for constructing an accurate disparity space image and performing efficient cost aggregation in stereo matching based on a local affine model is proposed in this paper. The key algorithm includes a new self-adapting dissimilarity measure used for calculating the matching cost and a local affine model used in the cost aggregation stage. Unlike traditional region-based methods, which change the matching window size or compute adaptive weights for aggregation, the proposed method focuses on obtaining an efficient and accurate local affine model to aggregate the cost volume while preserving disparity discontinuities. Moreover, the local affine model can be extended to color space. Experimental results demonstrate that the proposed method provides subpixel-precision disparity maps that compare favorably with some state-of-the-art stereo matching methods. PMID:24163727
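For context, the baseline that such methods improve on is a SAD cost volume with flat window aggregation and winner-take-all selection. A minimal sketch of that baseline (not the paper's affine aggregation):

```python
import numpy as np

def wta_disparity(left, right, max_disp, win=3):
    """Winner-take-all disparity from a SAD cost volume aggregated over a
    flat box window; the paper replaces this flat aggregation with a
    local affine model."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    k = np.ones(win) / win
    for d in range(max_disp + 1):
        # left pixel x matches right pixel x - d
        diff = np.abs(left[:, d:] - right[:, :w - d])
        agg = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, diff)
        cost[d, :, d:] = agg
    return np.argmin(cost, axis=0)

rng = np.random.default_rng(1)
L = rng.random((5, 40))
true_d = 3
R = np.empty_like(L)
R[:, :-true_d] = L[:, true_d:]       # right view shifted by 3 px
R[:, -true_d:] = L[:, -true_d:]      # arbitrary fill at the border
disp = wta_disparity(L, R, max_disp=5)
print(np.median(disp))               # -> 3.0 (the true shift)
```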

  10. A Selective Vision and Landmark based Approach to Improve the Efficiency of Position Probability Grid Localization

    NASA Astrophysics Data System (ADS)

    Loukianov, Andrey A.; Sugisaka, Masanori

    This paper presents a vision and landmark based approach to improve the efficiency of probability grid Markov localization for mobile robots. The proposed approach uses visual landmarks that can be detected by a rotating video camera on the robot. We assume that visual landmark positions in the map are known and that each landmark can be assigned to a certain landmark class. The method uses classes of observed landmarks and their relative arrangement to select regions in the robot posture space where the location probability density function is to be updated. Subsequent computations are performed only in these selected update regions thus the computational workload is significantly reduced. Probabilistic landmark-based localization method, details of the map and robot perception are discussed. A technique to compute the update regions and their parameters for selective computation is introduced. Simulation results are presented to show the effectiveness of the approach.
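The payoff of restricting the update region can be seen on a tiny position probability grid: the measurement likelihood is evaluated, and the belief multiplied, only inside the region consistent with the observed landmark classes, while cells outside are treated as uninformative. The grid size, region, and likelihood value below are invented purely for illustration:

```python
import numpy as np

grid = np.full((50, 50), 1.0 / 2500)       # uniform prior belief over poses
likelihood = np.ones((50, 50))             # uninformative outside the region
region = (slice(10, 20), slice(30, 40))    # cells consistent with the landmarks
likelihood[region] = 100.0                 # observation strongly favors them

posterior = grid * likelihood              # only region cells actually change
posterior /= posterior.sum()               # renormalize the belief
print(round(posterior[region].sum(), 2))   # -> 0.81
```

Only the 100 cells inside the region needed a likelihood evaluation, versus 2500 for a full-grid update; that ratio is the source of the claimed savings.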

  11. Adaptive windowed range-constrained Otsu method using local information

    NASA Astrophysics Data System (ADS)

    Zheng, Jia; Zhang, Dinghua; Huang, Kuidong; Sun, Yuanxi; Tang, Shaojie

    2016-01-01

    An adaptive windowed range-constrained Otsu method using local information is proposed for improving the performance of image segmentation. First, the reason why traditional thresholding methods do not perform well in the segmentation of complicated images is analyzed, and the influences of global and local thresholding on image segmentation are compared. Second, we propose two methods that adaptively change the size of the local window according to local information, and analyze their characteristics; the number of edge pixels in the local window of the binarized variance image is used to adaptively change the local window size. Finally, the superiority of the proposed method over methods such as the range-constrained Otsu, the active contour model, the double Otsu, Bradley's method, and distance-regularized level set evolution is demonstrated. Experiments validate that the proposed method preserves more detail and achieves a much more satisfactory area overlap measure than the other conventional methods.
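As a reference point, the plain global Otsu criterion that the range-constrained, windowed variants build on picks the threshold maximizing the between-class variance of the gray-level histogram. A compact sketch of that baseline:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Global Otsu: return the gray level maximizing the between-class
    variance (the paper constrains the range and windows this criterion)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(bins)
    omega = np.cumsum(p)                    # class-0 probability
    mu = np.cumsum(p * levels)              # cumulative first moment
    mu_t = mu[-1]                           # global mean
    # Between-class variance; guard the empty-class divisions.
    with np.errstate(divide='ignore', invalid='ignore'):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return int(np.argmax(sigma_b))

# Bimodal test image: dark pixels at level 40, bright ones at level 200.
img = np.concatenate([np.full(500, 40), np.full(500, 200)])
t = otsu_threshold(img)
print(t)  # -> 40, the first level of the optimal plateau between the modes
```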

  12. Automated construction of maximally localized Wannier functions: Optimized projection functions method

    NASA Astrophysics Data System (ADS)

    Mustafa, Jamal I.; Coh, Sinisa; Cohen, Marvin L.; Louie, Steven G.

    2015-10-01

    Maximally localized Wannier functions are widely used in electronic structure theory for analyses of bonding, electric polarization, and orbital magnetization, and for interpolation. The state-of-the-art method for their construction is that of Marzari and Vanderbilt. One practical difficulty of this method is guessing functions (initial projections) that approximate the final Wannier functions. Here we present an approach based on optimized projection functions that can construct maximally localized Wannier functions without a guess. We describe and demonstrate this approach on several realistic examples.

  13. Inverse source localization for EEG using system identification approach

    NASA Astrophysics Data System (ADS)

    Xanthopoulos, Petros; Yatsenko, Vitaliy; Kammerdiner, Alla; Pardalos, Panos M.

    2007-11-01

    The reconstruction of brain current sources from scalp electric recordings (the electroencephalogram), also known as the inverse source localization problem, is a highly underdetermined problem in computational neuroscience, and it remains open. In this chapter we propose an alternative formulation of the inverse electroencephalography (EEG) problem based on optimization theory. For simulation purposes, a three-shell realistic head model is constructed from an averaged magnetic resonance imaging (MRI) segmentation using the boundary element method (BEM). System identification methodology is employed to determine the parameters of the system. In the last stage the inverse problem is solved using the computed forward model.
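The optimization view of the inverse problem can be sketched with a Tikhonov-regularized least-squares solve; the lead-field matrix below is random rather than the BEM-computed forward model of the chapter, and the electrode/source counts are invented:

```python
import numpy as np

# Hypothetical lead-field matrix mapping 10 sources to 32 electrodes.
rng = np.random.default_rng(2)
L = rng.standard_normal((32, 10))
x_true = np.zeros(10)
x_true[3] = 1.0                     # one active source
y = L @ x_true                      # noiseless scalp measurements

# Solve min_x ||L x - y||^2 + lam ||x||^2 via the normal equations.
lam = 1e-6
x_hat = np.linalg.solve(L.T @ L + lam * np.eye(10), L.T @ y)
print(int(np.argmax(np.abs(x_hat))))  # -> 3, the active source index
```

Regularization matters because the real problem is underdetermined (far more sources than electrodes); this overdetermined toy only illustrates the algebraic form of the solve.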

  14. Influence of skull modeling approaches on EEG source localization.

    PubMed

    Montes-Restrepo, Victoria; van Mierlo, Pieter; Strobbe, Gregor; Staelens, Steven; Vandenberghe, Stefaan; Hallez, Hans

    2014-01-01

    Electroencephalographic source localization (ESL) relies on an accurate model of the human head for the computation of the forward solution. In this head model, the skull is of utmost importance due to its complex geometry and low conductivity compared to the other tissues inside the head. We investigated the influence of different skull modeling approaches on ESL. These approaches, consisting of simplifications of skull conductivity and geometry modeling, make use of X-ray computed tomography (CT) and magnetic resonance (MR) images to generate seven different head models. A head model with an accurately segmented skull from CT images, including spongy and compact bone compartments as well as some air-filled cavities, was used as the reference model. EEG simulations were performed for configurations of 32 and 128 electrodes, and for both noiseless and noisy data. The results show that skull geometry simplifications have a larger effect on ESL than conductivity simplifications. This suggests that accurate skull modeling is important in order to achieve reliable ESL results that are useful in a clinical environment. We recommend the following guidelines for skull modeling in the generation of subject-specific head models: (i) if CT images are available, i.e., if the geometry of the skull and its different tissue types can be accurately segmented, the conductivity should be modeled as isotropic heterogeneous, and the spongy bone may be segmented as an erosion of the compact bone; (ii) when only MR images are available, the skull base should be represented as accurately as possible and the conductivity can be modeled as isotropic heterogeneous, segmenting the spongy bone directly from the MR image; (iii) a large number of EEG electrodes should be used to obtain high spatial sampling, which reduces localization errors at realistic noise levels.

  15. Localized Surface Plasmon Resonance Biosensing: Current Challenges and Approaches

    PubMed Central

    Unser, Sarah; Bruzas, Ian; He, Jie; Sagle, Laura

    2015-01-01

    Localized surface plasmon resonance (LSPR) has emerged as a leader among label-free biosensing techniques in that it offers sensitive, robust, and facile detection. Traditional LSPR-based biosensing utilizes the sensitivity of the plasmon frequency to changes in local index of refraction at the nanoparticle surface. Although surface plasmon resonance technologies are now widely used to measure biomolecular interactions, several challenges remain. In this article, we have categorized these challenges into four categories: improving sensitivity and limit of detection, selectivity in complex biological solutions, sensitive detection of membrane-associated species, and the adaptation of sensing elements for point-of-care diagnostic devices. The first section of this article will involve a conceptual discussion of surface plasmon resonance and the factors affecting changes in optical signal detected. The following sections will discuss applications of LSPR biosensing with an emphasis on recent advances and approaches to overcome the four limitations mentioned above. First, improvements in limit of detection through various amplification strategies will be highlighted. The second section will involve advances to improve selectivity in complex media through self-assembled monolayers, “plasmon ruler” devices involving plasmonic coupling, and shape complementarity on the nanoparticle surface. The following section will describe various LSPR platforms designed for the sensitive detection of membrane-associated species. Finally, recent advances towards multiplexed and microfluidic LSPR-based devices for inexpensive, rapid, point-of-care diagnostics will be discussed. PMID:26147727

  16. Localized Surface Plasmon Resonance Biosensing: Current Challenges and Approaches.

    PubMed

    Unser, Sarah; Bruzas, Ian; He, Jie; Sagle, Laura

    2015-07-02

    Localized surface plasmon resonance (LSPR) has emerged as a leader among label-free biosensing techniques in that it offers sensitive, robust, and facile detection. Traditional LSPR-based biosensing utilizes the sensitivity of the plasmon frequency to changes in local index of refraction at the nanoparticle surface. Although surface plasmon resonance technologies are now widely used to measure biomolecular interactions, several challenges remain. In this article, we have categorized these challenges into four categories: improving sensitivity and limit of detection, selectivity in complex biological solutions, sensitive detection of membrane-associated species, and the adaptation of sensing elements for point-of-care diagnostic devices. The first section of this article will involve a conceptual discussion of surface plasmon resonance and the factors affecting changes in optical signal detected. The following sections will discuss applications of LSPR biosensing with an emphasis on recent advances and approaches to overcome the four limitations mentioned above. First, improvements in limit of detection through various amplification strategies will be highlighted. The second section will involve advances to improve selectivity in complex media through self-assembled monolayers, "plasmon ruler" devices involving plasmonic coupling, and shape complementarity on the nanoparticle surface. The following section will describe various LSPR platforms designed for the sensitive detection of membrane-associated species. Finally, recent advances towards multiplexed and microfluidic LSPR-based devices for inexpensive, rapid, point-of-care diagnostics will be discussed.

  17. Multilevel local refinement and multigrid methods for 3-D turbulent flow

    SciTech Connect

    Liao, C.; Liu, C.; Sung, C.H.; Huang, T.T.

    1996-12-31

    A numerical approach based on multigrid, multilevel local refinement, and preconditioning methods for solving the incompressible Reynolds-averaged Navier-Stokes equations is presented. 3-D turbulent flow around an underwater vehicle is computed using 3 multigrid levels and 2 local refinement grid levels. The global grid is 24 x 8 x 12; the first patch is 40 x 16 x 20 and the second patch is 72 x 32 x 36. Fourth-order artificial dissipation is used for numerical stability, and the conservative artificial compressibility method is used to further improve convergence. To improve accuracy at the coarse/fine grid interface of the local refinement, a flux interpolation method is applied at refined grid boundaries. The numerical results are in good agreement with experimental data. Local refinement improves the prediction accuracy significantly, and the flux interpolation method maintains conservation on the composite grid, further improving accuracy.
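    The multigrid machinery referred to above can be illustrated in its simplest form: a two-grid cycle for the 1D Poisson equation with weighted-Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation, and an exact coarse solve. This is a textbook sketch, far from the 3-D RANS solver described, and all parameters are illustrative.

    ```python
    import numpy as np

    def jacobi(u, f, h, sweeps=3, w=2.0 / 3.0):
        # Weighted-Jacobi smoother for (2u_i - u_{i-1} - u_{i+1}) / h^2 = f_i.
        for _ in range(sweeps):
            u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        return u

    def residual(u, f, h):
        r = np.zeros_like(u)
        r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
        return r

    def two_grid_cycle(u, f, h):
        """One two-grid cycle for -u'' = f, homogeneous Dirichlet BCs:
        pre-smooth, restrict residual, solve coarse correction exactly,
        prolong, post-smooth."""
        u = jacobi(u.copy(), f, h)
        r = residual(u, f, h)
        nc = (len(u) - 1) // 2 + 1                 # coarse point i <-> fine point 2i
        rc = np.zeros(nc)                          # full-weighting restriction
        rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
        H, m = 2 * h, nc - 2
        Ac = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (H * H)
        ec = np.zeros(nc)
        ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])   # exact coarse-grid solve
        e = np.zeros_like(u)                       # linear-interpolation prolongation
        e[::2] = ec
        e[1::2] = 0.5 * (ec[:-1] + ec[1:])
        return jacobi(u + e, f, h)
    ```

    In a full multigrid solver the coarse problem is itself solved recursively, and local refinement patches add finer levels only where needed, with flux interpolation keeping the composite grid conservative.
    
    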

  18. Analysis of localization methods for unintended emitting sources

    NASA Astrophysics Data System (ADS)

    Guzey, Nurbanu; Ghasr, Mohammad T.; Jagannathan, S.

    2016-10-01

    This paper presents an analysis of localization and tracking of unintended emissions from electronic devices using computer simulations. The available localization and tracking methods assume that the device is in the far-field region of the array. However, the received power of unintended emissions is very low and, therefore, requires near-field techniques. In the near-field, the performance of far-field schemes degrades since they ignore the effect of range on phase characteristics. Computer simulation results for localization and tracking methods developed by the authors are summarized to analyze the effects of near and far-field regions.

  19. A Non-Local Low-Rank Approach to Enforce Integrability.

    PubMed

    Badri, Hicham; Yahia, Hussein

    2016-08-01

    We propose a new approach to enforce integrability using recent advances in non-local methods. Our formulation consists in a sparse gradient data-fitting term to handle outliers together with a gradient-domain non-local low-rank prior. This regularization has two main advantages: 1) the low-rank prior ensures similarity between non-local gradient patches, which helps recovering high-quality clean patches from severe outliers corruption and 2) the low-rank prior efficiently reduces dense noise as it has been shown in recent image restoration works. We propose an efficient solver for the resulting optimization formulation using alternate minimization. Experiments show that the new method leads to an important improvement compared with previous optimization methods and is able to efficiently handle both outliers and dense noise mixed together. PMID:27214898
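    For context, the classical least-squares way to enforce integrability of a gradient field (p, q) is a spectral solve of the associated Poisson equation (Frankot-Chellappa style, assuming periodic boundaries). The paper's sparse data-fitting term and non-local low-rank prior are robust refinements of this baseline; the sketch below is only the baseline.

    ```python
    import numpy as np

    def integrate_gradient(p, q):
        """Least-squares integration of a gradient field (p = dz/dx along
        columns, q = dz/dy along rows) via the FFT. Periodic boundaries."""
        h, w = p.shape
        wx = 2 * np.pi * np.fft.fftfreq(w)
        wy = 2 * np.pi * np.fft.fftfreq(h)
        WX, WY = np.meshgrid(wx, wy)
        P, Q = np.fft.fft2(p), np.fft.fft2(q)
        denom = WX ** 2 + WY ** 2
        denom[0, 0] = 1.0                      # avoid 0/0 at the DC term
        Z = (-1j * WX * P - 1j * WY * Q) / denom
        Z[0, 0] = 0.0                          # mean height is unrecoverable
        return np.real(np.fft.ifft2(Z))
    ```

    Outliers in (p, q) corrupt this global least-squares solve everywhere, which is the failure mode the sparse plus non-local low-rank formulation is designed to fix.
    
    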

  1. Global and Local Sensitivity Analysis Methods for a Physical System

    ERIC Educational Resources Information Center

    Morio, Jerome

    2011-01-01

    Sensitivity analysis is the study of how the different input variations of a mathematical model influence the variability of its output. In this paper, we review the principle of global and local sensitivity analyses of a complex black-box system. A simulated case of application is given at the end of this paper to compare both approaches.…
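    The local/global distinction reviewed above can be made concrete: local sensitivity differentiates the model at one nominal point, while global sensitivity averages output variability over the whole input distribution. A minimal sketch, with assumed uniform(0, 1) inputs and a pick-freeze Monte Carlo estimate of first-order Sobol indices.

    ```python
    import numpy as np

    def local_sensitivity(f, x0, h=1e-6):
        """Local sensitivity: central finite-difference partials at x0."""
        x0 = np.asarray(x0, float)
        g = np.zeros_like(x0)
        for i in range(x0.size):
            e = np.zeros_like(x0)
            e[i] = h
            g[i] = (f(x0 + e) - f(x0 - e)) / (2.0 * h)
        return g

    def global_sensitivity(f, dim, n=20000, seed=0):
        """Global sensitivity: first-order Sobol indices by a pick-freeze
        Monte Carlo estimator with independent uniform(0, 1) inputs."""
        rng = np.random.default_rng(seed)
        A = rng.uniform(size=(n, dim))
        B = rng.uniform(size=(n, dim))
        yA = np.apply_along_axis(f, 1, A)
        yB = np.apply_along_axis(f, 1, B)
        var = yA.var()
        S = np.empty(dim)
        for i in range(dim):
            ABi = B.copy()
            ABi[:, i] = A[:, i]            # shares only input i with A
            yABi = np.apply_along_axis(f, 1, ABi)
            S[i] = np.mean(yA * (yABi - yB)) / var
        return S
    ```

    For f(x) = x1 + 2*x2 the local gradient is (1, 2) everywhere, while the first-order Sobol indices are 0.2 and 0.8 (the squared coefficients normalized by the total variance).
    
    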

  2. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-01-01

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms. PMID:27258279
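    Of the four robustified algorithms, the weighted centroid is the simplest to sketch. Below is the plain (non-robust) weighted-centroid idea it builds on: access-point heights averaged with weights derived from RSS, then mapped to a floor index. The power-law weighting exponent and floor height are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    def weighted_centroid_height(ap_heights, rss_dbm, alpha=2.0):
        """RSS-weighted centroid of access-point heights: stronger (less
        negative) RSS readings pull the estimate toward that AP's height."""
        w = (10.0 ** (np.asarray(rss_dbm, float) / 10.0)) ** (alpha / 2.0)
        return float(np.sum(w * np.asarray(ap_heights, float)) / np.sum(w))

    def detect_floor(z_est, floor_height=3.0):
        """Map the estimated vertical position to a floor index."""
        return int(round(z_est / floor_height))
    ```

    The robust version replaces the plain weighted mean with an estimator that downweights outlier RSS readings, which is what makes the floor decision building-independent in practice.
    
    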

  3. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization

    PubMed Central

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-01-01

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms. PMID:27258279

  4. Robust Statistical Approaches for RSS-Based Floor Detection in Indoor Localization.

    PubMed

    Razavi, Alireza; Valkama, Mikko; Lohan, Elena Simona

    2016-05-31

    Floor detection for indoor 3D localization of mobile devices is currently an important challenge in the wireless world. Many approaches currently exist, but usually the robustness of such approaches is not addressed or investigated. The goal of this paper is to show how to robustify the floor estimation when probabilistic approaches with a low number of parameters are employed. Indeed, such an approach would allow a building-independent estimation and a lower computing power at the mobile side. Four robustified algorithms are to be presented: a robust weighted centroid localization method, a robust linear trilateration method, a robust nonlinear trilateration method, and a robust deconvolution method. The proposed approaches use the received signal strengths (RSS) measured by the Mobile Station (MS) from various heard WiFi access points (APs) and provide an estimate of the vertical position of the MS, which can be used for floor detection. We will show that robustification can indeed increase the performance of the RSS-based floor detection algorithms.

  5. Community-Based Outdoor Education Using a Local Approach to Conservation

    ERIC Educational Resources Information Center

    Maeda, Kazushi

    2005-01-01

    Local people of a community interact with nature in a way that is mediated by their local cultures and shape their own environment. We need a local approach to conservation for the local environment adding to the political or technological approaches for global environmental problems such as the destruction of the ozone layer or global warming.…

  6. Local and Non-local Regularization Techniques in Emission (PET/SPECT) Tomographic Image Reconstruction Methods.

    PubMed

    Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem

    2016-06-01

    Emission tomographic image reconstruction is an ill-posed problem: the data are limited and noisy and subject to various image-degrading effects, which leads to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered the better way to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce this noise, but they produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques avoid the drawbacks of local regularization methods but have their own limitations, such as the choice of the regularization function, the neighbourhood size, and the calibration of several empirical parameters. This work compares local and non-local regularization techniques used in emission tomographic imaging in general and emission computed tomography in particular, for improved quality of the resultant images.

  7. Optimization of receiver arrangements for passive emitter localization methods.

    PubMed

    Flückiger, M; Neild, A; Nelson, B J

    2012-03-01

    Passive localization of an object from its emission can be based on time difference of arrival or phase shift measurements for different receiver groups in sensor arrays. The accuracy of the localization primarily depends on accurate time and/or phase measurements. The frequency of the emission and the number and arrangement of the receivers mainly affect the resolution of the emitter localization. In this paper optimal receiver positions for passive localization methods are proposed, resulting in maximal resolution for the emitter location estimate. The optimization is done by analyzing the uncertainty of the emitted signal, including its frequency. The technique has been developed specifically for ultrasound signals obtained from omnidirectional transducers, although the results apply to other applications using passive localization techniques.
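    The localization problem whose resolution the receiver arrangement is optimized for can be sketched as follows: given TDOA measurements relative to a reference receiver, find the emitter position minimizing the squared residuals. A brute-force grid search keeps the sketch simple; real implementations use closed-form or iterative solvers, and all parameters here are illustrative.

    ```python
    import numpy as np

    def tdoa_localize(rx, tdoa, c=343.0, lo=0.0, hi=5.0, step=0.05):
        """Grid search for a 2D emitter position minimizing the squared
        mismatch between measured and predicted TDOAs (delays relative to
        receiver rx[0]; c is the propagation speed in m/s)."""
        xs = np.arange(lo, hi + step / 2, step)
        best, best_err = None, np.inf
        for x in xs:
            for y in xs:
                p = np.array([x, y])
                d = np.linalg.norm(rx - p, axis=1)
                err = np.sum(((d[1:] - d[0]) / c - tdoa) ** 2)
                if err < best_err:
                    best_err, best = err, p
        return best
    ```

    How sharply this residual surface is curved around the true position depends on the receiver arrangement, which is precisely the quantity the paper's optimization maximizes.
    
    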

  8. Ensemble lymph node detection from CT volumes combining local intensity structure analysis approach and appearance learning approach

    NASA Astrophysics Data System (ADS)

    Nakamura, Yoshihiko; Nimura, Yukitaka; Oda, Masahiro; Kitasaka, Takayuki; Furukawa, Kazuhiro; Goto, Hidemi; Fujiwara, Michitaka; Misawa, Kazunari; Mori, Kensaku

    2016-03-01

    This paper presents an ensemble lymph node detection method combining two automated lymph node detection methods for CT volumes. Detecting enlarged abdominal lymph nodes in CT volumes is an important task in pre-operative diagnosis and planning for cancer surgery. Although several research works have aimed at automated abdominal lymph node detection, such methods still lack sufficient accuracy for detecting lymph nodes of 5 mm or larger. This paper proposes an ensemble lymph node detection method that integrates two different lymph node detection schemes: (1) a local intensity structure analysis approach and (2) an appearance learning approach. This ensemble approach is introduced with the aim of achieving high sensitivity and specificity. Each component detection method is independently designed to detect candidate regions of enlarged abdominal lymph nodes whose diameters are over 5 mm. We applied the proposed ensemble method to 22 cases using abdominal CT volumes. Experimental results showed that it detects about 90.4% (47/52) of the abdominal lymph nodes of 5 mm or more in diameter, with about 15.2 false positives per case.

  9. Hierarchy-Direction Selective Approach for Locally Adaptive Sparse Grids

    SciTech Connect

    Stoyanov, Miroslav K

    2013-09-01

    We consider the problem of multidimensional adaptive hierarchical interpolation. We use sparse grids points and functions that are induced from a one dimensional hierarchical rule via tensor products. The classical locally adaptive sparse grid algorithm uses an isotropic refinement from the coarser to the denser levels of the hierarchy. However, the multidimensional hierarchy provides a more complex structure that allows for various anisotropic and hierarchy selective refinement techniques. We consider the more advanced refinement techniques and apply them to a number of simple test functions chosen to demonstrate the various advantages and disadvantages of each method. While there is no refinement scheme that is optimal for all functions, the fully adaptive family-direction-selective technique is usually more stable and requires fewer samples.

  10. A novel local learning based approach with application to breast cancer diagnosis

    NASA Astrophysics Data System (ADS)

    Xu, Songhua; Tourassi, Georgia

    2012-03-01

    In this paper, we introduce a new local learning based approach and apply it to the well-studied problem of breast cancer diagnosis using BIRADS-based mammographic features. To learn from our clinical dataset the latent relationship between these features and the breast biopsy result, our method first dynamically partitions the whole sample population into multiple sub-population groups by stochastically searching the sample population clustering space. Each clustering scheme encountered during this online search is then used to create a sample population partition plan. For every resultant sub-population group identified according to a partition plan, our method trains a dedicated local learner to capture the underlying data relationship. In our study, we adopt the linear logistic regression model as our local learning method's base learner. This choice is made both because of the well-understood linear nature of the problem, compellingly revealed by a rich body of prior studies, and because of the computational efficiency of linear logistic regression; the latter allows our local learning method to search the sample population clustering space more effectively. Using a database of 850 biopsy-proven cases, we compared the performance of our method with a large collection of publicly available state-of-the-art machine learning methods and successfully demonstrated its performance advantage with statistical significance.
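    A simplified stand-in for the pipeline described above: partition the samples into groups (here a deterministic two-group Lloyd clustering rather than the paper's stochastic clustering search), then fit a dedicated linear logistic regression per group, centering each group's features at its centroid. All hyperparameters are illustrative.

    ```python
    import numpy as np

    def fit_logreg(X, y, lr=0.5, steps=800):
        """Logistic regression by batch gradient descent (bias included)."""
        Xb = np.hstack([X, np.ones((len(X), 1))])
        w = np.zeros(Xb.shape[1])
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-Xb @ w))
            w -= lr * Xb.T @ (p - y) / len(y)
        return w

    def local_logreg(X, y, Xq):
        """Two-group local learning: cluster, then one logistic model per
        group; each query is scored by its group's model."""
        # Deterministic far-apart seeds, then Lloyd iterations.
        C = X[[int(np.argmin(X[:, 0])), int(np.argmax(X[:, 0]))]].copy()
        for _ in range(20):
            lbl = np.argmin(((X[:, None, :] - C) ** 2).sum(-1), axis=1)
            for g in range(2):
                if np.any(lbl == g):
                    C[g] = X[lbl == g].mean(axis=0)
        W = [fit_logreg(X[lbl == g] - C[g], y[lbl == g]) for g in range(2)]
        gq = np.argmin(((Xq[:, None, :] - C) ** 2).sum(-1), axis=1)
        out = np.empty(len(Xq))
        for i, g in enumerate(gq):
            xb = np.append(Xq[i] - C[g], 1.0)
            out[i] = 1.0 / (1.0 + np.exp(-xb @ W[g]))
        return out
    ```

    The paper's method differs in searching over many candidate partitions stochastically and keeping the one whose local learners fit best; the per-group linear base learner is the same idea.
    
    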

  11. Intelligent Resource Management for Local Area Networks: Approach and Evolution

    NASA Technical Reports Server (NTRS)

    Meike, Roger

    1988-01-01

    The Data Management System network is a complex and important part of manned space platforms. Its efficient operation is vital to crew, subsystems and experiments. AI is being considered to aid in the initial design of the network and to augment the management of its operation. The Intelligent Resource Management for Local Area Networks (IRMA-LAN) project is concerned with the application of AI techniques to network configuration and management. A network simulation was constructed employing real time process scheduling for realistic loads, and utilizing the IEEE 802.4 token passing scheme. This simulation is an integral part of the construction of the IRMA-LAN system. From it, a causal model is being constructed for use in prediction and deep reasoning about the system configuration. An AI network design advisor is being added to help in the design of an efficient network. The AI portion of the system is planned to evolve into a dynamic network management aid. The approach, the integrated simulation, project evolution, and some initial results are described.

  12. Multiblock approach for the passive scalar thermal lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Huang, Rongzong; Wu, Huiying

    2014-04-01

    A multiblock approach for the passive scalar thermal lattice Boltzmann method (TLBM) with multiple-relaxation-time collision scheme is proposed based on the Chapman-Enskog analysis. The interaction between blocks is executed in the moment space directly and an external force term is considered. Theoretical analysis shows that all the nonequilibrium parts of the nonconserved moments should be rescaled, while the nonequilibrium parts of the conserved moments can be calculated directly. Moreover, a local scheme based on the pseudoparticles for computing heat flux is proposed with no need to calculate temperature gradient based on the finite-difference scheme. In order to validate the multiblock approach and local scheme for computing heat flux, thermal Couette flow with wall injection is simulated and good results are obtained, which show that the adoption of the multiblock approach does not deteriorate the convergence rate of TLBM and the local scheme for computing heat flux has second-order convergence rate. Further application of the present approach is the simulation of natural convection in a square cavity with the Rayleigh number up to 10⁹.

  13. Local weak form meshless techniques based on the radial point interpolation (RPI) method and local boundary integral equation (LBIE) method to evaluate European and American options

    NASA Astrophysics Data System (ADS)

    Rad, Jamal Amani; Parand, Kourosh; Abbasbandy, Saeid

    2015-05-01

    For the first time in the mathematical finance field, we propose local weak form meshless methods for option pricing; in particular, we select and analyze two such schemes: the local boundary integral equation method (LBIE) based on moving least squares approximation (MLS) and the local radial point interpolation method (LRPI) based on Wu's compactly supported radial basis functions (WCS-RBFs). LBIE and LRPI are truly meshless methods, because a traditional non-overlapping, continuous mesh is required neither for the construction of the shape functions nor for the integration over the local sub-domains. In this work, the American option, which is a free boundary problem, is reduced to a problem with a fixed boundary using a Richardson extrapolation technique. The θ-weighted scheme is then employed for the time derivative. Stability of the methods is analyzed by the matrix method; based on this analysis, the methods are unconditionally stable for the implicit Euler (θ = 0) and Crank-Nicolson (θ = 0.5) schemes. Since the LBIE and LRPI schemes lead to banded, sparse system matrices, we use a powerful iterative algorithm, the bi-conjugate gradient stabilized method (BiCGSTAB), to solve the resulting systems. Numerical experiments show that the LBIE and LRPI approaches are extremely accurate and fast.
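    The θ-weighted time stepping mentioned above can be illustrated on the simpler European call under Black-Scholes, using an ordinary finite-difference spatial discretization instead of the paper's meshless LBIE/LRPI shape functions (θ = 0.5 gives Crank-Nicolson; grid sizes and market parameters are illustrative assumptions).

    ```python
    import numpy as np
    from math import erf, exp, log, sqrt

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, r, sig, T):
        """Closed-form Black-Scholes European call, for reference."""
        d1 = (log(S / K) + (r + 0.5 * sig ** 2) * T) / (sig * sqrt(T))
        d2 = d1 - sig * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    def theta_scheme_call(K=100.0, r=0.05, sig=0.2, T=1.0,
                          s_max=300.0, m=150, n=100, theta=0.5):
        """European call by a theta-weighted finite-difference scheme for
        the Black-Scholes PDE, marching in time-to-maturity tau."""
        ds, dt = s_max / m, T / n
        S = np.linspace(0.0, s_max, m + 1)
        V = np.maximum(S - K, 0.0)                 # payoff at maturity
        i = np.arange(1, m)
        a = 0.5 * (sig ** 2 * i ** 2 - r * i) * dt   # sub-diagonal
        b = -(sig ** 2 * i ** 2 + r) * dt            # diagonal
        c = 0.5 * (sig ** 2 * i ** 2 + r * i) * dt   # super-diagonal
        Lm = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
        I = np.eye(m - 1)
        A = I - theta * Lm                          # implicit part
        B = I + (1 - theta) * Lm                    # explicit part
        for step in range(n):
            tau = (step + 1) * dt
            rhs = B @ V[1:-1]
            # boundary contribution at S = s_max (V = s_max - K e^{-r tau})
            rhs[-1] += c[-1] * ((1 - theta) * (s_max - K * np.exp(-r * (tau - dt)))
                                + theta * (s_max - K * np.exp(-r * tau)))
            V[1:-1] = np.linalg.solve(A, rhs)
            V[0], V[-1] = 0.0, s_max - K * np.exp(-r * tau)
        return S, V
    ```

    The meshless schemes of the paper replace the tridiagonal finite-difference operator with banded matrices assembled from MLS or RBF shape functions, but the θ-weighted time discretization and its stability properties carry over.
    
    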

  14. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography.

    PubMed

    Hashemiyan, Z; Packo, P; Staszewski, W J; Uhl, T

    2016-01-01

    Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808

  15. Rescaled Local Interaction Simulation Approach for Shear Wave Propagation Modelling in Magnetic Resonance Elastography

    PubMed Central

    Hashemiyan, Z.; Packo, P.; Staszewski, W. J.; Uhl, T.

    2016-01-01

    Properties of soft biological tissues are increasingly used in medical diagnosis to detect various abnormalities, for example, in liver fibrosis or breast tumors. It is well known that mechanical stiffness of human organs can be obtained from organ responses to shear stress waves through Magnetic Resonance Elastography. The Local Interaction Simulation Approach is proposed for effective modelling of shear wave propagation in soft tissues. The results are validated using experimental data from Magnetic Resonance Elastography. These results show the potential of the method for shear wave propagation modelling in soft tissues. The major advantage of the proposed approach is a significant reduction of computational effort. PMID:26884808

  16. A method of periodic pattern localization on document images

    NASA Astrophysics Data System (ADS)

    Chernov, Timofey S.; Nikolaev, Dmitry P.; Kliatskine, Vitali M.

    2015-12-01

    Periodic patterns often appear on document images as holograms, watermarks, or guilloche elements, which are mostly used for fraud protection. Localizing such patterns lets an embedded OCR system vary its settings depending on pattern presence in particular image regions, and improves the precision of pattern removal so that as much useful data as possible is preserved. Many noise detection and removal methods for document images deal with unstructured noise or clutter on documents with simple backgrounds. In this paper we propose a method for periodic pattern localization on document images that uses the discrete Fourier transform and works well on documents with complex backgrounds.
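    A minimal spectral test in the spirit of the method: a block is flagged as containing a periodic pattern when a single non-DC Fourier coefficient concentrates a large share of the AC energy. This is an illustrative reduction, not the authors' algorithm; the threshold is an assumption.

    ```python
    import numpy as np

    def has_periodic_pattern(block, thresh=0.2):
        """True if one non-DC Fourier coefficient carries more than
        `thresh` of the block's AC energy (a periodicity indicator)."""
        F = np.fft.fft2(block - block.mean())
        mag2 = np.abs(F) ** 2
        mag2[0, 0] = 0.0                # discard the DC term
        total = mag2.sum()
        if total == 0:
            return False
        return mag2.max() / total > thresh
    ```

    Sliding this test over the page gives a coarse localization map of periodic regions, which the OCR pipeline can then treat differently from text.
    
    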

  17. A new approach to the photon localization problem

    NASA Technical Reports Server (NTRS)

    Han, D.; Kim, Y. S.; Noz, Marilyn E.

    1994-01-01

    Since wavelets form a representation of the Poincare group, it is possible to construct a localized superposition of light waves with different frequencies in a Lorentz-covariant manner. This localized wavelet satisfies a Lorentz-invariant uncertainty relation, and also the Lorentz-invariant Parseval's relation. A quantitative analysis is given for the difference between photons and localized waves. It is then shown that this localized entity corresponds to a relativistic photon with a sharply defined momentum in the non-localization limit. Waves are not particles. It is confirmed that the wave-particle duality is subject to the uncertainty principle.

  18. Perturbation approach applied to modal diffraction methods.

    PubMed

    Bischoff, Joerg; Hehl, Karl

    2011-05-01

    Eigenvalue computation is an important part of many modal diffraction methods, including the rigorous coupled wave approach (RCWA) and the Chandezon method. This procedure is known to be computationally intensive, accounting for a large proportion of the overall run time. However, in many cases, eigenvalue information is already available from previous calculations. Examples include adjacent slices in the RCWA, spectral- or angle-resolved scans in optical scatterometry, and parameter derivatives in optimization. In this paper, we present a new technique that provides accurate and highly reliable solutions with significant improvements in computational time. The proposed method takes advantage of known eigensolution information and is based on perturbation theory. PMID:21532698
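    The idea of reusing known eigensolutions can be sketched with first-order perturbation theory for a symmetric matrix: the eigenvalues of A + dA are approximated from the eigenpairs of A alone, avoiding a fresh eigendecomposition. This is a generic textbook sketch, not the paper's diffraction-specific formulation.

    ```python
    import numpy as np

    def perturbed_eigvals(A, dA):
        """First-order eigenvalue update for symmetric A:
        lam_i(A + dA) ~ lam_i(A) + v_i^T dA v_i, reusing eigvecs of A."""
        lam, V = np.linalg.eigh(A)
        return lam + np.einsum('ij,jk,ki->i', V.T, dA, V)
    ```

    One matrix-vector contraction per eigenvalue replaces an O(n^3) eigendecomposition, which is the source of the run-time savings when dA is small, as for adjacent slices or fine parameter scans.
    
    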

  19. The local projection in the density functional theory plus U approach: A critical assessment

    NASA Astrophysics Data System (ADS)

    Wang, Yue-Chao; Chen, Ze-Hua; Jiang, Hong

    2016-04-01

    Density-functional theory plus the Hubbard U correction (DFT + U) method is widely used in first-principles studies of strongly correlated systems, as it can give qualitatively (and sometimes, semi-quantitatively) correct description of energetic and structural properties of many strongly correlated systems with similar computational cost as local density approximation or generalized gradient approximation. On the other hand, the DFT + U approach is limited both theoretically and practically in several important aspects. In particular, the results of DFT + U often depend on the choice of local orbitals (the local projection) defining the subspace in which the Hubbard U correction is applied. In this work we have systematically investigated the issue of the local projection by considering typical transition metal oxides, β-MnO2 and MnO, and comparing the results obtained from different implementations of DFT + U. We found that the choice of the local projection has significant effects on the DFT + U results, which are more significant for systems with stronger covalent bonding (e.g., MnO2) than those with more ionic bonding (e.g., MnO). These findings can help to clarify some confusion arising from the practical use of DFT + U and may also provide insights for the development of new first-principles approaches beyond DFT + U.

  20. The local projection in the density functional theory plus U approach: A critical assessment.

    PubMed

    Wang, Yue-Chao; Chen, Ze-Hua; Jiang, Hong

    2016-04-14

    Density-functional theory plus the Hubbard U correction (DFT + U) is widely used in first-principles studies of strongly correlated systems, as it can give a qualitatively (and sometimes semi-quantitatively) correct description of the energetic and structural properties of many such systems at a computational cost similar to that of the local density approximation or generalized gradient approximation. On the other hand, the DFT + U approach is limited both theoretically and practically in several important aspects. In particular, the results of DFT + U often depend on the choice of local orbitals (the local projection) defining the subspace in which the Hubbard U correction is applied. In this work we have systematically investigated the issue of the local projection by considering typical transition metal oxides, β-MnO2 and MnO, and comparing the results obtained from different implementations of DFT + U. We found that the choice of the local projection has significant effects on the DFT + U results, which are more pronounced for systems with stronger covalent bonding (e.g., MnO2) than for those with more ionic bonding (e.g., MnO). These findings can help to clarify some confusion arising from the practical use of DFT + U and may also provide insights for the development of new first-principles approaches beyond DFT + U. PMID:27083707

  1. A space–angle DGFEM approach for the Boltzmann radiation transport equation with local angular refinement

    SciTech Connect

    Kópházi, József; Lathouwers, Danny

    2015-09-15

    In this paper a new method for the discretization of the radiation transport equation is presented, based on a discontinuous Galerkin method in space and angle that allows for local refinement in angle where any spatial element can support its own angular discretization. To cope with the discontinuous spatial nature of the solution, a generalized Riemann procedure is required to distinguish between incoming and outgoing contributions of the numerical fluxes. A new consistent framework is introduced that is based on the solution of a generalized eigenvalue problem. The resulting numerical fluxes for the various possible cases where neighboring elements have an equal, higher or lower level of refinement in angle are derived based on tensor algebra and the resulting expressions have a very clear physical interpretation. The choice of discontinuous trial functions not only has the advantage of easing local refinement, it also facilitates the use of efficient sweep-based solvers due to decoupling of unknowns on a large scale thereby approaching the efficiency of discrete ordinates methods with local angular resolution. The approach is illustrated by a series of numerical experiments. Results show high orders of convergence for the scalar flux on angular refinement. The generalized Riemann upwinding procedure leads to stable and consistent solutions. Further the sweep-based solver performs well when used as a preconditioner for a Krylov method.

  2. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria for assessing Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed -the pseudo-Thellier protocol- which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  3. Local time-correlation approach for calculations of x-ray spectra

    NASA Astrophysics Data System (ADS)

    Lee, A. J.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We present a local time-correlation function method for real-time calculations of core level x-ray spectra (RTXS). The approach is implemented in a local orbital basis using a Crank-Nicolson time-evolution algorithm applied to an extension of the siesta code, together with projector augmented wave (PAW) atomic transition matrix elements. Our RTXS is formally equivalent to ΔSCF (Δ self consistent field) Fermi's golden rule calculations with a screened core-hole and an effective independent particle approximation. Illustrative calculations are presented for several molecular and condensed matter systems and found to be in good agreement with experiment. The method can also be advantageous compared to conventional frequency-space methods.
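    As an illustration of the time-evolution scheme named above, here is a minimal sketch of Crank-Nicolson propagation, not the siesta/RTXS implementation: for the invented scalar stand-in dx/dt = iωx, the Crank-Nicolson update is unitary, so the norm of the propagated state is preserved exactly, a key property for real-time propagation.

```python
def crank_nicolson_phase(w, dt, steps):
    """Propagate dx/dt = i*w*x with the Crank-Nicolson (trapezoidal) rule.

    Each step multiplies by (1 + i*w*dt/2)/(1 - i*w*dt/2), a unimodular
    factor, so |x| is conserved to machine precision.
    """
    x = 1.0 + 0.0j
    factor = (1 + 0.5j * w * dt) / (1 - 0.5j * w * dt)
    for _ in range(steps):
        x *= factor
    return x
```

The accumulated phase approaches the exact value w*t as dt shrinks, while the norm is exact at any step size; this is why Crank-Nicolson is a common choice for real-time electronic-structure propagation.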

  4. An online substructure identification method for local structural health monitoring

    NASA Astrophysics Data System (ADS)

    Hou, Jilin; Jankowski, Łukasz; Ou, Jinping

    2013-09-01

    This paper proposes a substructure isolation method, which uses time series of measured local response for online monitoring of substructures. The proposed monitoring process consists of two key steps: construction of the isolated substructure, and its identification. The isolated substructure is an independent virtual structure, which is numerically isolated from the global structure by placing virtual supports on the interface. First, the isolated substructure is constructed by a specific linear combination of time series of its measured local responses. Then, the isolated substructure is identified using its local natural frequencies extracted from the combined responses. The substructure is assumed to be linear; the outside part of the global structure can have any characteristics. The method has no requirements on the initial state of the structure, and so the process can be carried out repetitively for online monitoring. Online isolation and monitoring is illustrated in a numerical example with a frame model, and then verified in a cantilever beam experiment.

  5. Fault Location Methods for Ungrounded Distribution Systems Using Local Measurements

    NASA Astrophysics Data System (ADS)

    Xiu, Wanjing; Liao, Yuan

    2013-08-01

    This article presents novel fault location algorithms for ungrounded distribution systems. The proposed methods locate faults using voltage and current measurements obtained at the local substation. Two types of fault location algorithms, using line-to-neutral and line-to-line measurements, are presented. The network structure and parameters are assumed to be known; the structure is updated based on information obtained from the utility telemetry system. With the help of the bus impedance matrix, local voltage changes due to the fault can be expressed as a function of the fault currents. Since the bus impedance matrix contains information about the fault location, superimposed voltages at the local substation can be expressed as a function of fault location, from which the fault location can be solved. Simulation studies have been carried out on a sample distribution power system. The evaluation shows that both types of methods yield very accurate fault location estimates.
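    The scan implied by the abstract can be sketched as follows. This is a toy illustration, not the paper's algorithm: the transfer impedances and fault current are invented values, standing in for the columns of a real bus impedance matrix.

```python
def locate_fault(dv_measured, z_transfer, i_fault):
    """Pick the candidate fault position whose predicted superimposed
    (fault-minus-prefault) substation voltage best matches the measurement.

    z_transfer: candidate per-unit position m -> transfer impedance (complex)
                from the fault point to the substation bus (toy values).
    """
    best_m, best_err = None, float("inf")
    for m, z in z_transfer.items():
        dv_predicted = -z * i_fault  # superimposed voltage driven by fault current
        err = abs(dv_measured - dv_predicted)
        if err < best_err:
            best_m, best_err = m, err
    return best_m
```

In practice the transfer impedance is a continuous function of the fault position along the faulted line, so the position can be solved for directly rather than scanned, as the abstract indicates.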

  6. A Novel Local Learning based Approach With Application to Breast Cancer Diagnosis

    SciTech Connect

    Xu, Songhua; Tourassi, Georgia

    2012-01-01

    The purpose of this study is to develop and evaluate a novel local learning-based approach for computer-assisted diagnosis of breast cancer. Our local learning-based algorithm, which uses linear logistic regression as its base learner, is described. The algorithm runs a stochastic random-walk search, until the allotted computing time is exhausted, to identify the most suitable population subdivision scheme and the corresponding individual base learners. The proposed approach was applied to the prediction of breast cancer given 11 mammographic and clinical findings reported by physicians using the BI-RADS lexicon. Our database consisted of 850 patients with biopsy-confirmed diagnoses (290 malignant and 560 benign). We also compared the performance of our method with a collection of publicly available state-of-the-art machine learning methods. Predictive performance for all classifiers was evaluated using 10-fold cross-validation and receiver operating characteristic (ROC) analysis. Figure 1 reports the performance of 54 machine learning methods implemented in the machine learning toolkit Weka (version 3.0). We introduced a novel local learning-based classifier and compared it with an extensive list of other classifiers for the problem of breast cancer diagnosis. Our experiments show that the algorithm achieves superior prediction performance, outperforming a wide range of well-established machine learning techniques. Our conclusion complements the existing understanding in the machine learning field that local learning may capture the complicated, non-linear relationships exhibited by real-world datasets.

  7. A novel method for medical implant in-body localization.

    PubMed

    Pourhomayoun, Mohammad; Fowler, Mark; Jin, Zhanpeng

    2012-01-01

    Wireless medical implants are gaining an important role in healthcare systems by monitoring patients and transmitting their vital information. Recently, Wireless Capsule Endoscopy (WCE) has become a popular method to visualize and diagnose the human gastrointestinal (GI) tract. Estimating the exact location of the capsule when each image is taken is a critical issue in capsule endoscopy. Most common capsule localization methods are based on estimating one or more location-dependent signal parameters, such as time of arrival (TOA) or received signal strength (RSS). However, in-body localization poses unique challenges due to the complex propagation environment within the human body. In this paper, we propose a novel one-stage localization method based on spatial sparsity in 3D space. In this method, we directly estimate the location of the capsule (as the emitter) without going through the intermediate stage of TOA or signal strength estimation. We evaluate the performance of the proposed method using Monte Carlo simulation with an RF signal that respects the allowable power and bandwidth ranges according to the standards. The results show that the proposed method is very effective and accurate even under severe multipath and shadowing conditions. PMID:23367237
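    The spatial-sparsity idea can be caricatured as follows: discretize the body into grid points, give each point a signature of expected sensor readings (a dictionary atom), and pick the atom that best correlates with the received vector, one matching-pursuit step. The atoms below are invented placeholders, not a propagation model, and real dictionaries encode actual channel physics.

```python
def localize_sparse(received, atoms):
    """Return the index of the grid point whose signature best matches
    the received sensor vector (normalized correlation)."""
    def corr(atom):
        dot = sum(a * r for a, r in zip(atom, received))
        norm = sum(a * a for a in atom) ** 0.5
        return abs(dot) / norm if norm else 0.0
    scores = [corr(atom) for atom in atoms]
    return scores.index(max(scores))
```

Because the emitter occupies essentially one grid cell, the solution vector over the grid is 1-sparse, which is what lets a single correlation (or a few sparse-recovery iterations) replace the intermediate TOA/RSS estimation stage.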

  8. Teaching Local Lore in EFL Class: New Approaches

    ERIC Educational Resources Information Center

    Yarmakeev, Iskander E.; Pimenova, Tatiana S.; Zamaletdinova, Gulyusa R.

    2016-01-01

    This paper is dedicated to the up-to-date educational problem, that is, the role of local lore in teaching EFL to University students. Although many educators admit that local lore knowledge plays a great role in the development of a well-bred and well-educated personality and meets students' needs, the problem has not been thoroughly studied.…

  9. Think Locally: A Prudent Approach to Electronic Resource Management Systems

    ERIC Educational Resources Information Center

    Gustafson-Sundell, Nat

    2011-01-01

    A few articles have drawn some amount of attention specifically to the local causes of the success or failure of electronic resource management system (ERMS) implementations. In fact, it seems clear that local conditions will largely determine whether any given ERMS implementation will succeed or fail. This statement might seem obvious, but the…

  10. Efficient integration method for fictitious domain approaches

    NASA Astrophysics Data System (ADS)

    Duczek, Sascha; Gabbert, Ulrich

    2015-10-01

    In the current article, we present an efficient and accurate numerical method for the integration of the system matrices in fictitious domain approaches such as the finite cell method (FCM). In the framework of the FCM, the physical domain is embedded in a geometrically larger domain of simple shape which is discretized using a regular Cartesian grid of cells. Therefore, a spacetree-based adaptive quadrature technique is normally deployed to resolve the geometry of the structure. Depending on the complexity of the structure under investigation this method accounts for most of the computational effort. To reduce the computational costs for computing the system matrices an efficient quadrature scheme based on the divergence theorem (Gauß-Ostrogradsky theorem) is proposed. Using this theorem the dimension of the integral is reduced by one, i.e. instead of solving the integral for the whole domain only its contour needs to be considered. In the current paper, we present the general principles of the integration method and its implementation. The results to several two-dimensional benchmark problems highlight its properties. The efficiency of the proposed method is compared to conventional spacetree-based integration techniques.
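    The dimension-reduction trick is the same one behind the classic shoelace formula: by the divergence (Green's) theorem, the area of a polygonal cell can be evaluated from its contour alone, without sampling the interior. A minimal sketch (a 2D polygon, not the FCM system-matrix quadrature itself):

```python
def area_via_divergence(vertices):
    """Area of a simple polygon from its boundary only:
    area = 1/2 * |closed contour integral of (x dy - y dx)|,
    evaluated exactly edge by edge for straight segments."""
    a = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return 0.5 * abs(a)
```

For the system matrices the integrands are polynomial rather than constant, but the principle carries over: an antiderivative in one coordinate turns a domain integral into a contour integral, which is far cheaper than spacetree subdivision of the interior.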

  11. Implementation of the locally renormalized CCSD(T) approaches for arbitrary reference function.

    PubMed

    Kowalski, Karol

    2005-07-01

    Several new variants of the locally-renormalized coupled-cluster (CC) approaches that account for the effect of triples (LR-CCSD(T)) have been formulated and implemented for arbitrary reference states using the TENSOR CONTRACTION ENGINE functionality, enabling the automatic generation of an efficient parallel code. Deeply rooted in the recently derived numerator-denominator-connected (NDC) expansion for the ground-state energy [K. Kowalski and P. Piecuch, J. Chem. Phys. 122, 074107 (2005)], LR-CCSD(T) approximations use, in analogy to the completely renormalized CCSD(T) (CR-CCSD(T)) approach, the three-body moments in constructing the noniterative corrections to the energies obtained in CC calculations with singles and doubles (CCSD). In contrast to the CR-CCSD(T) method, the LR-CCSD(T) approaches discussed in this paper employ local denominators, which assure the additive separability of the energies in the noninteracting system limit when the localized occupied spin-orbitals are employed in the CCSD and LR-CCSD(T) calculations. As clearly demonstrated on several challenging examples, including breaking the bonds of the F2, N2, and CN molecules, the LR-CCSD(T) approaches are capable of providing a highly accurate description of the entire potential-energy surface (PES), while maintaining the characteristic N(7) scaling of the ubiquitous CCSD(T) approach. Moreover, as illustrated numerically for the ozone molecule, the LR-CCSD(T) approaches yield highly competitive values for a number of equilibrium properties including bond lengths, angles, and harmonic frequencies. PMID:16035828

  12. A localized meshless method for diffusion on folded surfaces

    NASA Astrophysics Data System (ADS)

    Cheung, Ka Chun; Ling, Leevan; Ruuth, Steven J.

    2015-09-01

    Partial differential equations (PDEs) on surfaces arise in a variety of application areas including biological systems, medical imaging, fluid dynamics, mathematical physics, image processing and computer graphics. In this paper, we propose a radial basis function (RBF) discretization of the closest point method. The corresponding localized meshless method may be used to approximate diffusion on smooth or folded surfaces. Our method has the benefit of having an a priori error bound in terms of percentage of the norm of the solution. A stable solver is used to avoid the ill-conditioning that arises when the radial basis functions (RBFs) become flat.

  13. Speeding up local correlation methods: System-inherent domains

    NASA Astrophysics Data System (ADS)

    Kats, Daniel

    2016-07-01

    A new approach to determine local virtual space in correlated calculations is presented. It restricts the virtual space in a pair-specific manner on the basis of a preceding approximate calculation adapting automatically to the locality of the studied problem. The resulting pair system-inherent domains are considerably smaller than the starting domains, without significant loss in the accuracy. Utilization of such domains speeds up integral transformations and evaluations of the residual and reduces memory requirements. The system-inherent domains are especially suitable in cases which require high accuracy, e.g., in generation of pair-natural orbitals, or for which standard domains are problematic, e.g., excited-state calculations.

  14. Speeding up local correlation methods: System-inherent domains.

    PubMed

    Kats, Daniel

    2016-07-01

    A new approach to determine local virtual space in correlated calculations is presented. It restricts the virtual space in a pair-specific manner on the basis of a preceding approximate calculation adapting automatically to the locality of the studied problem. The resulting pair system-inherent domains are considerably smaller than the starting domains, without significant loss in the accuracy. Utilization of such domains speeds up integral transformations and evaluations of the residual and reduces memory requirements. The system-inherent domains are especially suitable in cases which require high accuracy, e.g., in generation of pair-natural orbitals, or for which standard domains are problematic, e.g., excited-state calculations. PMID:27394095

  15. Multiple Shooting-Local Linearization method for the identification of dynamical systems

    NASA Astrophysics Data System (ADS)

    Carbonell, F.; Iturria-Medina, Y.; Jimenez, J. C.

    2016-08-01

    The combination of the multiple shooting strategy with the generalized Gauss-Newton algorithm turns out in a recognized method for estimating parameters in ordinary differential equations (ODEs) from noisy discrete observations. A key issue for an efficient implementation of this method is the accurate integration of the ODE and the evaluation of the derivatives involved in the optimization algorithm. In this paper, we study the feasibility of the Local Linearization (LL) approach for the simultaneous numerical integration of the ODE and the evaluation of such derivatives. This integration approach results in a stable method for the accurate approximation of the derivatives with no more computational cost than that involved in the integration of the ODE. The numerical simulations show that the proposed Multiple Shooting-Local Linearization method recovers the true parameters value under different scenarios of noisy data.
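    A toy sketch of the multiple-shooting idea, with plain Euler integration standing in for the Local Linearization scheme and a grid search standing in for Gauss-Newton: the time axis is split into segments, each segment is integrated from its own (here, observed) starting value, and the best parameter minimizes the accumulated mismatch. The model dx/dt = -k*x and all numbers are invented for illustration.

```python
import math

def shooting_cost(k, times, data, n_seg):
    """Sum of squared prediction errors for dx/dt = -k*x, integrated by
    Euler's method independently on each shooting segment."""
    seg_len = len(times) // n_seg
    cost = 0.0
    for s in range(n_seg):
        i0 = s * seg_len
        x = data[i0]  # segment restarts at the observed value
        for i in range(i0, min(i0 + seg_len, len(times)) - 1):
            x += (times[i + 1] - times[i]) * (-k * x)  # Euler step
            cost += (x - data[i + 1]) ** 2             # data mismatch
    return cost

def fit_k(times, data, n_seg=4, grid=None):
    """Crude parameter estimate: pick k from a grid minimizing the cost."""
    grid = grid or [i * 0.05 for i in range(1, 80)]
    return min(grid, key=lambda k: shooting_cost(k, times, data, n_seg))
```

Restarting each segment near the data is what keeps the integration stable for poor parameter guesses; in the actual method the segment initial values are free unknowns tied together by continuity constraints and updated by generalized Gauss-Newton.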

  16. Method of Deployment of a Space Tethered System Aligned to the Local Vertical

    NASA Astrophysics Data System (ADS)

    Zakrzhevskii, A. E.

    2016-09-01

    The object of this research is a space tether of two bodies connected by a flexible massless string. The research objective is the development and theoretical justification of a novel approach to the solution of the problem of deployment of the space tether in a circular orbit with its alignment to the local vertical. The approach is based on use of the theorem on the angular momentum change. It allows developing the open-loop control of the tether length that provides desired change of the angular momentum of the tether under the effect of the gravitational torque to the value, which corresponds to the angular momentum of the deployed tether aligned to the local vertical. The given example of application of the approach to a case of deployment of a tether demonstrates the simplicity of use of the method in practice, and also the method of validation of the mathematical model.

  17. An adaptive locally optimal method detecting weak deterministic signals

    NASA Astrophysics Data System (ADS)

    Wang, C. H.

    1983-10-01

    A new method for detecting weak signals in interference and clutter in radar systems is presented. The detector which uses this method is adaptive to an environment varying with time, locally optimal for detecting targets, and maintains a constant false-alarm rate (CFAR) for interference and clutter statistics that vary with time. The CFAR loss is small, and the detector is also simple in structure. The statistical equivalent transfer characteristic of a rank quantizer which can be used as part of an adaptive locally most powerful (ALMP) detector is obtained. It is shown that the distribution-free Doppler processor of Dillard (1974) is not only a nonparametric detector, but also an ALMP detector under certain conditions.

  18. A Novel Seismic Method for Glacial Calving Localization

    NASA Astrophysics Data System (ADS)

    Mei, M. Y. J.; Holland, D. M.; Zheng, T.

    2015-12-01

    Glacial calving is a significant contributor to sea level rise, but the dynamics of how and why calving happens are not yet understood. A novel method of determining calving location using seismic wave arrival times from two local seismic stations at Helheim Glacier is presented. The difference in wave arrival times is used to define a locus (hyperbola) of possible origins, which intersects uniquely with the calving front. Our method is motivated by difficulties with traditional seismic location methods, which fail due to both the emergent nature of calving, which obscures the P- and S-wave onsets, and the proximity of the seismometers, which merges body and surface waves into one arrival. This method is calibrated via known calving events at Helheim Glacier in August 2014, then applied to other calving events in both 2013-2014 and 2014-2015. Extending this method with an additional station allows for triangulation of the calving location, which removes the need for up-to-date imagery of the calving front. Additionally, this method can be extended to allow for three-dimensional localization. By locating calving events more precisely, we may improve our understanding of why and how glaciers calve.
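    The arrival-time-difference construction can be sketched directly: the time difference fixes a range difference, i.e. a hyperbola with the two stations as foci, which is then intersected with a discretized calving front. Station positions, wave speed, and front geometry below are invented for illustration, not Helheim data.

```python
import math

def locate_on_front(stations, dt, wave_speed, front_points):
    """Return the calving-front point whose range difference to the two
    stations best matches the observed arrival-time difference.

    dt: t1 - t2, arrival-time difference between the two stations (s).
    """
    (x1, y1), (x2, y2) = stations
    target = dt * wave_speed  # d1 - d2 along the hyperbola
    best, best_err = None, float("inf")
    for (x, y) in front_points:
        d1 = math.hypot(x - x1, y - y1)
        d2 = math.hypot(x - x2, y - y2)
        err = abs((d1 - d2) - target)
        if err < best_err:
            best, best_err = (x, y), err
    return best
```

Only the time *difference* is needed, which is why the method survives emergent onsets: cross-correlating the two waveforms gives dt even when neither absolute onset can be picked.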

  19. PSI: A Comprehensive and Integrative Approach for Accurate Plant Subcellular Localization Prediction

    PubMed Central

    Chen, Ming

    2013-01-01

    Predicting the subcellular localization of proteins conquers the major drawbacks of high-throughput localization experiments that are costly and time-consuming. However, current subcellular localization predictors are limited in scope and accuracy. In particular, most predictors perform well on certain locations or with certain data sets while poorly on others. Here, we present PSI, a novel high accuracy web server for plant subcellular localization prediction. PSI derives the wisdom of multiple specialized predictors via a joint-approach of group decision making strategy and machine learning methods to give an integrated best result. The overall accuracy obtained (up to 93.4%) was higher than best individual (CELLO) by ∼10.7%. The precision of each predicable subcellular location (more than 80%) far exceeds that of the individual predictors. It can also deal with multi-localization proteins. PSI is expected to be a powerful tool in protein location engineering as well as in plant sciences, while the strategy employed could be applied to other integrative problems. A user-friendly web server, PSI, has been developed for free access at http://bis.zju.edu.cn/psi/. PMID:24194827
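    The group-decision strategy can be caricatured as weighted voting across specialized predictors; the predictor names and weights below are invented, not PSI's actual components or learned combination.

```python
def combine_predictions(votes, weights):
    """Integrate several predictors by weighted voting.

    votes:   list of (predictor_name, predicted_location) pairs.
    weights: predictor_name -> reliability weight, e.g. reflecting each
             predictor's accuracy on the location class it tends to predict.
    """
    tally = {}
    for name, loc in votes:
        tally[loc] = tally.get(loc, 0.0) + weights.get(name, 1.0)
    return max(tally, key=tally.get)
```

Location-dependent weights are what let an ensemble beat its best member: a predictor that is strong only on, say, chloroplast proteins contributes heavily there and is discounted elsewhere.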

  20. An efficient linear-scaling CCSD(T) method based on local natural orbitals

    NASA Astrophysics Data System (ADS)

    Rolik, Zoltán; Szegedy, Lóránt; Ladjánszki, István; Ladóczki, Bence; Kállay, Mihály

    2013-09-01

    An improved version of our general-order local coupled-cluster (CC) approach [Z. Rolik and M. Kállay, J. Chem. Phys. 135, 104111 (2011)], 10.1063/1.3632085 and its efficient implementation at the CC singles and doubles with perturbative triples [CCSD(T)] level is presented. The method combines the cluster-in-molecule approach of Li and co-workers [J. Chem. Phys. 131, 114109 (2009)], 10.1063/1.3218842 with frozen natural orbital (NO) techniques. To break down the unfavorable fifth-power scaling of our original approach a two-level domain construction algorithm has been developed. First, an extended domain of localized molecular orbitals (LMOs) is assembled based on the spatial distance of the orbitals. The necessary integrals are evaluated and transformed in these domains invoking the density fitting approximation. In the second step, for each occupied LMO of the extended domain a local subspace of occupied and virtual orbitals is constructed including approximate second-order Møller-Plesset NOs. The CC equations are solved and the perturbative corrections are calculated in the local subspace for each occupied LMO using a highly-efficient CCSD(T) code, which was optimized for the typical sizes of the local subspaces. The total correlation energy is evaluated as the sum of the individual contributions. The computation time of our approach scales linearly with the system size, while its memory and disk space requirements are independent thereof. Test calculations demonstrate that currently our method is one of the most efficient local CCSD(T) approaches and can be routinely applied to molecules of up to 100 atoms with reasonable basis sets.

  1. [The epidemiological approach to health inequalities at the local level].

    PubMed

    Alazraqui, Marcio; Mota, Eduardo; Spinelli, Hugo

    2007-02-01

    What are the advantages and limitations of epidemiology for decreasing health inequalities at the local level? To answer this question, the current article discusses the role of epidemiology. The hypothesis is that epidemiology produces useful knowledge for local management of interventions aimed at reducing health inequalities, expressed in spaces built by human communities through social and historical processes. Local production of epidemiological knowledge should support action by social actors in specific situations and contexts, thus renewing the appreciation for ecological designs and georeference studies. Such knowledge output and application are also an organizational phenomenon. Organizations can be seen as "conversational networks". In conclusion, strategic and communicative actions by health workers should provide the central thrust for defining new health care and management models committed to decreasing health inequalities, with epidemiology playing a key role. PMID:17221081

  2. System and method for bullet tracking and shooter localization

    DOEpatents

    Roberts, Randy S.; Breitfeller, Eric F.

    2011-06-21

    A system and method of processing infrared imagery to determine projectile trajectories and the locations of shooters with a high degree of accuracy. The method includes image processing infrared image data to reduce noise and identify streak-shaped image features, using a Kalman filter to estimate optimal projectile trajectories, updating the Kalman filter with new image data, determining projectile source locations by solving a combinatorial least-squares solution for all optimal projectile trajectories, and displaying all of the projectile source locations. Such a shooter-localization system is of great interest for military and law enforcement applications to determine sniper locations, especially in urban combat scenarios.
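    A minimal sketch of the Kalman-filtering step, not the patented image-processing pipeline: a constant-velocity filter over noisy 1D position detections. With streak detections as measurements, the filtered state (position, velocity) can be extrapolated back toward the source; all noise parameters here are invented.

```python
def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter with position-only measurements.
    Returns the final [position, velocity] estimate."""
    x = [measurements[0], 0.0]        # state: [position, velocity]
    p = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
    for z in measurements[1:]:
        # predict: x <- F x, P <- F P F^T + Q, with F = [[1, dt], [0, 1]]
        x = [x[0] + dt * x[1], x[1]]
        p = [[p[0][0] + dt * (p[1][0] + p[0][1]) + dt * dt * p[1][1] + q,
              p[0][1] + dt * p[1][1]],
             [p[1][0] + dt * p[1][1],
              p[1][1] + q]]
        # update with position measurement z (H = [1, 0])
        s = p[0][0] + r
        k0, k1 = p[0][0] / s, p[1][0] / s
        y = z - x[0]
        x = [x[0] + k0 * y, x[1] + k1 * y]
        p = [[(1 - k0) * p[0][0], (1 - k0) * p[0][1]],
             [p[1][0] - k1 * p[0][0], p[1][1] - k1 * p[0][1]]]
    return x
```

Running the same recursion over 2D streak endpoints gives a smoothed trajectory; projecting it backward in time to the ground plane is one way to recover a shooter location, consistent with the least-squares source solve the abstract describes.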

  3. A novel method to compare protein structures using local descriptors

    PubMed Central

    2011-01-01

    Background Protein structure comparison is one of the most widely performed tasks in bioinformatics. However, currently used methods have problems with the so-called "difficult similarities", including considerable shifts and distortions of structure, sequential swaps and circular permutations. There is a demand for efficient and automated systems capable of overcoming these difficulties, which may lead to the discovery of previously unknown structural relationships. Results We present a novel method for protein structure comparison based on the formalism of local descriptors of protein structure - DEscriptor Defined Alignment (DEDAL). Local similarities identified by pairs of similar descriptors are extended into global structural alignments. We demonstrate the method's capability by aligning structures in difficult benchmark sets: curated alignments in the SISYPHUS database, as well as SISY and RIPC sets, including non-sequential and non-rigid-body alignments. On the most difficult RIPC set of sequence alignment pairs the method achieves an accuracy of 77% (the second best method tested achieves 60% accuracy). Conclusions DEDAL is fast enough to be used in whole proteome applications, and by lowering the threshold of detectable structure similarity it may shed additional light on molecular evolution processes. It is well suited to improving automatic classification of structure domains, helping analyze protein fold space, or to improving protein classification schemes. DEDAL is available online at http://bioexploratorium.pl/EP/DEDAL. PMID:21849047

  4. Russian risk assessment methods and approaches

    SciTech Connect

    Dvorack, M.A.; Carlson, D.D.; Smith, R.E.

    1996-07-01

    One of the benefits resulting from the collapse of the Soviet Union is the increased dialogue currently taking place between American and Russian nuclear weapons scientists in various technical arenas. One of the arenas currently being investigated involves collaborative studies which illustrate how risk assessment is perceived and utilized in the Former Soviet Union (FSU). The collaborative studies indicate that, while similarities exist with respect to some methodologies, the assumptions and approaches in performing risk assessments were, and still are, somewhat different in the FSU from those in the US. The purpose of this paper is to highlight the present knowledge of risk assessment methodologies and philosophies within the two largest nuclear weapons laboratories of the Former Soviet Union, Arzamas-16 and Chelyabinsk-70. Furthermore, this paper will address the relative progress of new risk assessment methodologies, such as Fuzzy Logic, within the framework of current risk assessment methods at these two institutes.

  5. Exploring Local Approaches to Communicating Global Climate Change Information

    NASA Astrophysics Data System (ADS)

    Stevermer, A. J.

    2002-12-01

    Expected future climate changes are often presented as a global problem, requiring a global solution. Although this statement is accurate, communicating climate change science and prospective solutions must begin at local levels, each with its own subset of complexities to be addressed. Scientific evaluation of local changes can be complicated by large variability occurring over small spatial scales; this variability hinders efforts both to analyze past local changes and to project future ones. The situation is further encumbered by challenges associated with scientific literacy in the U.S., as well as by pressing economic difficulties. For people facing real-life financial and other uncertainties, a projected "1.4 to 5.8 degrees Celsius" rise in global temperature is likely to remain only an abstract concept. Despite this lack of concreteness, recent surveys have found that most U.S. residents believe current global warming science, and an even greater number view the prospect of increased warming as at least a "somewhat serious" problem. People will often be able to speak of long-term climate changes in their area, whether observed changes in the amount of snow cover in winter, or in the duration of extreme heat periods in summer. This work will explore the benefits and difficulties of communicating climate change from a local, rather than global, perspective, and seek out possible strategies for making less abstract, more concrete, and most importantly, more understandable information available to the public.

  6. Evolutionary Local Search of Fuzzy Rules through a novel Neuro-Fuzzy encoding method.

    PubMed

    Carrascal, A; Manrique, D; Ríos, J; Rossi, C

    2003-01-01

    This paper proposes a new approach for constructing fuzzy knowledge bases using evolutionary methods. We have designed a genetic algorithm that automatically builds neuro-fuzzy architectures based on a new indirect encoding method. The neuro-fuzzy architecture represents the fuzzy knowledge base that solves a given problem; the search for this architecture takes advantage of a local search procedure that improves the chromosomes at each generation. Experiments conducted both on artificially generated and real world problems confirm the effectiveness of the proposed approach.
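The evolutionary loop described above, in which a local search pass improves each chromosome at every generation, can be sketched as a memetic genetic algorithm. This is a minimal illustration, not the paper's method: the bit-string encoding, the OneMax-style fitness, and the bit-flip hill climber are hypothetical stand-ins for the indirect neuro-fuzzy encoding and fuzzy knowledge-base evaluation.

```python
import random

def hill_climb(genome, fitness, tries=10):
    """Local search: flip single bits, keep only improvements."""
    best = genome[:]
    for _ in range(tries):
        i = random.randrange(len(best))
        cand = best[:]
        cand[i] ^= 1
        if fitness(cand) > fitness(best):
            best = cand
    return best

def memetic_ga(fitness, n_bits=16, pop_size=20, generations=50):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [hill_climb(g, fitness) for g in pop]   # improve each chromosome
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                 # low-rate mutation
                j = random.randrange(n_bits)
                child[j] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness: maximize the number of ones in the bit string
best = memetic_ga(sum)
```

The key point mirrored from the abstract is the `hill_climb` call inside the generational loop, which refines every chromosome before selection and crossover take place.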

  7. Evaluating the sensitization potential of surfactants: integrating data from the local lymph node assay, guinea pig maximization test, and in vitro methods in a weight-of-evidence approach.

    PubMed

    Ball, Nicholas; Cagen, Stuart; Carrillo, Juan-Carlos; Certa, Hans; Eigler, Dorothea; Emter, Roger; Faulhammer, Frank; Garcia, Christine; Graham, Cynthia; Haux, Carl; Kolle, Susanne N; Kreiling, Reinhard; Natsch, Andreas; Mehling, Annette

    2011-08-01

    An integral part of hazard and safety assessments is the estimation of a chemical's potential to cause skin sensitization. Currently, only animal tests (OECD 406 and 429) are accepted in a regulatory context. Nonanimal test methods are being developed and formally validated. In order to gain more insight into the responses induced by eight exemplary surfactants, a battery of in vivo and in vitro tests was conducted using the same batch of chemicals. In general, the surfactants were negative in the GPMT, KeratinoSens and hCLAT assays, and none formed covalent adducts with test peptides. In contrast, all but one were positive in the LLNA. Most were rated as irritants by the EpiSkin assay with the additional endpoint, IL1-alpha. The weight of evidence based on this comprehensive testing indicates that, with one exception, they are non-sensitizing skin irritants, confirming that the LLNA tends to overestimate the sensitization potential of surfactants. As results obtained from LLNAs are considered the gold standard for the development of new nonanimal alternative test methods, results such as these highlight the necessity of carefully evaluating the applicability domains of test methods in order to develop reliable nonanimal alternative testing strategies for sensitization testing. PMID:21645576

  8. A modified Monte Carlo 'local importance function transform' method

    SciTech Connect

    Keady, K. P.; Larsen, E. W.

    2013-07-01

    The Local Importance Function Transform (LIFT) method uses an approximation of the contribution transport problem to bias a forward Monte-Carlo (MC) source-detector simulation [1-3]. Local (cell-based) biasing parameters are calculated from an inexpensive deterministic adjoint solution and used to modify the physics of the forward transport simulation. In this research, we have developed a new expression for the LIFT biasing parameter, which depends on a cell-average adjoint current to scalar flux (J{sup *}/{phi}{sup *}) ratio. This biasing parameter differs significantly from the original expression, which uses adjoint cell-edge scalar fluxes to construct a finite difference estimate of the flux derivative; the resulting biasing parameters exhibit spikes in magnitude at material discontinuities, causing the original LIFT method to lose efficiency in problems with high spatial heterogeneity. The new J{sup *}/{phi}{sup *} expression, while more expensive to obtain, generates biasing parameters that vary smoothly across the spatial domain. The result is an improvement in simulation efficiency. A representative test problem has been developed and analyzed to demonstrate the advantage of the updated biasing parameter expression with regards to solution figure of merit (FOM). For reference, the two variants of the LIFT method are compared to a similar variance reduction method developed by Depinay [4, 5], as well as MC with deterministic adjoint weight windows (WW). (authors)

  9. Local Authority Approaches to the School Admissions Process. LG Group Research Report

    ERIC Educational Resources Information Center

    Rudd, Peter; Gardiner, Clare; Marson-Smith, Helen

    2010-01-01

    What are the challenges, barriers and facilitating factors connected to the various school admissions approaches used by local authorities? This report gathers the views of local authority admissions officers on the strengths and weaknesses of different approaches, as well as the issues and challenges they face in this important area. It covers:…

  10. New Approach for 3D Local Structure Refinement Using Non-Muffin-Tin XANES Analysis

    SciTech Connect

    Smolentsev, Grigory; Soldatov, Alexander V.; Feiters, Martin C.

    2007-02-02

    A new technique of 3D local structure refinement using full-potential X-ray absorption near edge structure (XANES) analysis is proposed and demonstrated in application to metalloorganic complexes of Ni. It can be applied to determine local structure in those cases where the muffin-tin approximation used in most full multiple scattering schemes fails. The method is based on the fitting of experimental XANES data using multidimensional interpolation of spectra as a function of structural parameters, recently proposed by us, and ab-initio full-potential calculations of XANES using the finite difference method. The small number of required ab-initio calculations is the main advantage of the approach, which allows one to use the computationally expensive non-muffin-tin finite-difference method. The possibility of extracting information on bond angles, in addition to the bond lengths accessible to standard EXAFS, is demonstrated; this opens new perspectives for quantitative XANES analysis as a 3D local structure probe.
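The fitting strategy, interpolating precomputed ab-initio spectra as a function of structural parameters and matching them to experiment, can be sketched as follows. This is a schematic with synthetic one-parameter "spectra"; real XANES fits interpolate full finite-difference calculations over several structural coordinates (bond lengths and angles):

```python
import numpy as np

# Precomputed ab-initio spectra on a coarse grid of one structural
# parameter (bond length R); a synthetic Gaussian peak whose position
# shifts with R stands in for real XANES calculations.
energies = np.linspace(0.0, 10.0, 200)
R_grid = np.array([1.8, 2.0, 2.2, 2.4])
spectra = np.array([np.exp(-(energies - 3.0 - R) ** 2) for R in R_grid])

def interpolated_spectrum(R):
    """Linear interpolation between the two bracketing grid spectra."""
    i = np.clip(np.searchsorted(R_grid, R) - 1, 0, len(R_grid) - 2)
    t = (R - R_grid[i]) / (R_grid[i + 1] - R_grid[i])
    return (1 - t) * spectra[i] + t * spectra[i + 1]

def fit_R(experimental, candidates):
    """Grid search for the parameter minimizing the least-squares misfit."""
    misfits = [np.sum((interpolated_spectrum(R) - experimental) ** 2)
               for R in candidates]
    return candidates[int(np.argmin(misfits))]

# "Experimental" spectrum generated at R = 2.1, recovered by the fit
target = np.exp(-(energies - 3.0 - 2.1) ** 2)
best_R = fit_R(target, np.linspace(1.8, 2.4, 61))
```

The expensive ab-initio step happens only when building `spectra`; every candidate structure during the fit costs just an interpolation, which is the advantage emphasized in the abstract.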

  11. An image analysis method to quantify CFTR subcellular localization.

    PubMed

    Pizzo, Lucilla; Fariello, María Inés; Lepanto, Paola; Aguilar, Pablo S; Kierbel, Arlinet

    2014-08-01

    Aberrant protein subcellular localization caused by mutation is a prominent feature of many human diseases. In Cystic Fibrosis (CF), a recessive lethal disorder that results from dysfunction of the Cystic Fibrosis Transmembrane Conductance Regulator (CFTR), the most common mutation is a deletion of phenylalanine-508 (pF508del). This mutation produces a misfolded protein that fails to reach the cell surface. To date, over 1900 mutations have been identified in the CFTR gene, but only a minority has been analyzed at the protein level. To establish whether a particular CFTR variant alters its subcellular distribution, it is necessary to quantitatively determine protein localization in the appropriate cellular context. To date, most quantitative studies of CFTR localization have been based on immunoprecipitation and western blot. In this work, we developed and validated a confocal microscopy-image analysis method to quantitatively examine CFTR at the apical membrane of epithelial cells. Polarized MDCK cells transiently transfected with EGFP-CFTR constructs and stained for an apical marker were used. EGFP-CFTR fluorescence intensity in a region defined by the apical marker was normalized to EGFP-CFTR whole-cell fluorescence intensity, rendering an "apical CFTR ratio". We obtained an apical CFTR ratio of 0.67 ± 0.05 for wtCFTR and 0.11 ± 0.02 for pF508del. In addition, this image analysis method was able to discriminate intermediate phenotypes: partial rescue of pF508del by incubation at 27 °C rendered an apical CFTR ratio of 0.23 ± 0.01. We conclude that the method has good sensitivity and accurately detects milder phenotypes. Improving axial resolution through deconvolution further increased the sensitivity of the system, rendering an apical CFTR ratio of 0.76 ± 0.03 for wild type and 0.05 ± 0.02 for pF508del. The presented procedure is faster and simpler than other available methods and is therefore suitable as a screening method to identify
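The normalization described, apical fluorescence over whole-cell fluorescence, reduces to a masked-sum ratio. The sketch below assumes hypothetical binary masks already segmented from the apical-marker and EGFP channels:

```python
import numpy as np

def apical_cftr_ratio(cftr_img, apical_mask, cell_mask):
    """Ratio of EGFP-CFTR fluorescence inside the apical region
    (defined by the apical-marker channel) to whole-cell fluorescence."""
    apical = cftr_img[apical_mask & cell_mask].sum()
    total = cftr_img[cell_mask].sum()
    return float(apical / total)

# Synthetic single-cell example: a 10x10 "cell" whose top two rows are apical
img = np.ones((10, 10))
img[:2, :] = 4.0                       # CFTR enriched apically
cell = np.ones((10, 10), dtype=bool)
apical = np.zeros((10, 10), dtype=bool)
apical[:2, :] = True

ratio = apical_cftr_ratio(img, apical, cell)   # 80 / 160 = 0.5
```

A uniformly distributed protein would give a ratio equal to the apical fraction of the cell volume, while apical enrichment pushes the ratio toward 1, matching the wtCFTR versus pF508del contrast reported above.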

  12. A global/local analysis method for treating details in structural design

    NASA Technical Reports Server (NTRS)

    Aminpour, Mohammad A.; Mccleary, Susan L.; Ransom, Jonathan B.

    1993-01-01

    A method for analyzing the global/local behavior of plate and shell structures is described. In this approach, a detailed finite element model of the local region is incorporated within a coarser global finite element model. The local model need not be nodally compatible (i.e., need not have a one-to-one nodal correspondence) with the global model at their common boundary; therefore, the two models may be constructed independently. The nodal incompatibility of the models is accounted for by introducing appropriate constraint conditions into the potential energy in a hybrid variational formulation. The primary advantage of this method is that the need for transition modeling between global and local models is eliminated. Eliminating transition modeling has two benefits. First, modeling efforts are reduced, since tedious and complex transitioning need not be performed. Second, errors due to mesh distortion, often unavoidable in mesh transitioning, are minimized by avoiding distorted elements beyond what is needed to represent the geometry of the component. The method is applied to a plate loaded in tension and transverse bending. The plate has a central hole, and various hole sizes and shapes are studied. The method is also applied to a composite laminated fuselage panel with a crack emanating from a window in the panel. While this method is applied herein to global/local problems, it is also applicable to the coupled analysis of independently modeled components as well as adaptive refinement.

  13. Subjective comparison of brightness preservation methods for local backlight dimming displays

    NASA Astrophysics Data System (ADS)

    Korhonen, J.; Mantel, C.; Forchhammer, S.

    2015-01-01

    Local backlight dimming is a popular technology in high-quality Liquid Crystal Displays (LCDs). In those displays, the backlight is composed of contributions from several individually adjustable backlight segments, set at different backlight luminance levels in different parts of the screen according to the luma of the target image displayed on the LCD. Typically, the transmittance of the liquid crystal cells (pixels) located in regions with dimmed backlight is increased in order to preserve their relative brightness with respect to the pixels located in regions with bright backlight. There are different methods of brightness preservation for local backlight dimming displays, producing images with different visual characteristics. In this study, we have implemented, analyzed and evaluated several different approaches for brightness preservation, and conducted a subjective study based on rank ordering to compare the relevant methods on a real-life LCD with local backlight dimming capability. In general, our results show that locally adapted brightness preservation methods produce a more preferred visual outcome than global methods, but a dependency on the content is also observed. Based on the results, guidelines for selecting the perceptually preferred brightness preservation method for local backlight dimming displays are outlined.
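The basic compensation step common to these methods can be sketched as follows: when a backlight segment is dimmed to a fraction b of full luminance, pixel transmittance is scaled by 1/b and clipped, so dark pixels are displayed exactly while the brightest pixels clip (the loss that brightness-preservation strategies trade off). The scaling rule and example values are illustrative, not any specific method from the study:

```python
import numpy as np

def compensate(pixels, backlight):
    """Scale pixel transmittance to preserve perceived brightness:
    displayed luminance ~ transmittance * backlight, so divide by the
    dimmed backlight level and clip to the valid transmittance range."""
    return np.clip(pixels / backlight, 0.0, 1.0)

# One segment dimmed to 40% backlight; dark pixels are preserved exactly,
# bright pixels clip (the loss brightness-preservation methods manage)
pixels = np.array([0.1, 0.3, 0.5, 0.9])
out = compensate(pixels, 0.4)
displayed = out * 0.4   # luminance actually emitted per pixel
```

Here the 0.1 and 0.3 pixels are reproduced exactly, while 0.5 and 0.9 both emit 0.4: the clipping artifact that locally adapted methods attempt to hide.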

  14. Global and Local Approaches Describing Critical Phenomena on the Developing and Developed Financial Markets

    NASA Astrophysics Data System (ADS)

    Grech, Dariusz

    We define and confront global and local methods to analyze financial crash-like events on the financial markets from the critical phenomena point of view. These methods are based respectively on the analysis of log-periodicity and on the local fractal properties of financial time series in the vicinity of phase transitions (crashes). The log-periodicity analysis is made in a daily time horizon, for the whole history (1991-2008) of the Warsaw Stock Exchange Index (WIG), connected with the largest developing financial market in Europe. We find that crash-like events on the Polish financial market are described better by the log-divergent price model decorated with log-periodic behavior than by the power-law-divergent price model usually discussed in log-periodic scenarios for developed markets. Predictions coming from the log-periodicity scenario are verified for all main crashes that took place in WIG history. It is argued that crash predictions within the log-periodicity model strongly depend on the amount of data taken to make a fit and are therefore likely to contain huge inaccuracies. Next, this global analysis is confronted with the local fractal description. To do so, we provide a calculation of the so-called local (time-dependent) Hurst exponent H_loc for the WIG time series and for main US stock market indices like the DJIA and S&P 500. We point out a dependence between the behavior of the local fractal properties of financial time series and the appearance of crashes on the financial markets. We conclude that the local fractal method seems to work better than the global approach - both for developing and developed markets. The very recent situation on the market, particularly related to the Fed intervention in September 2007 and the situation immediately afterwards, is also analyzed within the fractal approach. It is shown in this context how the financial market evolves through different phases of fractional Brownian motion. Finally, the current situation on American market is
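The local (time-dependent) Hurst exponent H_loc can be estimated by applying classical rescaled-range (R/S) analysis in a sliding window, as sketched below. This is a generic illustration on synthetic uncorrelated increments, not the paper's exact estimator or window settings:

```python
import numpy as np

def rs_hurst(x):
    """Classical rescaled-range (R/S) Hurst estimate for one window."""
    sizes = [8, 16, 32, len(x)]
    log_n, log_rs = [], []
    for n in sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        rs = []
        for c in chunks:
            d = c - c.mean()
            z = np.cumsum(d)            # cumulative deviation profile
            r = z.max() - z.min()       # range of the profile
            s = c.std()                 # standard deviation of the chunk
            if s > 0:
                rs.append(r / s)
        if rs:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs)))
    return np.polyfit(log_n, log_rs, 1)[0]   # slope = Hurst exponent

def local_hurst(series, window=128):
    """Time-dependent H_loc: the R/S estimate in a sliding window."""
    return np.array([rs_hurst(series[t - window:t])
                     for t in range(window, len(series))])

rng = np.random.default_rng(0)
increments = rng.standard_normal(1000)
H = local_hurst(increments)   # near 0.5 for uncorrelated increments
```

For uncorrelated increments H_loc fluctuates around 0.5 (with a small positive bias at short windows); persistent drops of H_loc below 0.5 are the kind of local-fractal signature the abstract relates to crash appearance.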

  15. A graph-based approach for local and global panorama imaging in cystoscopy

    NASA Astrophysics Data System (ADS)

    Bergen, Tobias; Wittenberg, Thomas; Münzenmayer, Christian; Chen, Chi Chiung Grace; Hager, Gregory D.

    2013-03-01

    Inspection of the urinary bladder with an endoscope (cystoscope) is the usual procedure for early detection of bladder cancer. The very limited field of view provided by the endoscope makes it challenging to ensure that the interior bladder wall has been examined completely. Panorama imaging techniques can be used to assist the surgeon and provide a wider field of view. Different approaches have been proposed, but generating a panorama image of the entire bladder from real patient data is still a challenging research topic. We propose a graph-based and hierarchical approach to address this problem, first generating several local panorama images, followed by a global textured three-dimensional reconstruction of the organ. In this contribution, we address details of the first level of the approach, including a graph-based algorithm to deal with the challenging conditions of in-vivo data. This graph strategy gives rise to a robust relocalization strategy in case of tracking failure, an effective keyframe selection process, and the concept of building locally optimized sub-maps, which lay the ground for a global optimization process. Our results show the successful application of the method to four in-vivo data sets.

  16. Designing and Evaluating Bamboo Harvesting Methods for Local Needs: Integrating Local Ecological Knowledge and Science

    NASA Astrophysics Data System (ADS)

    Darabant, András; Rai, Prem Bahadur; Staudhammer, Christina Lynn; Dorji, Tshewang

    2016-08-01

    Dendrocalamus hamiltonii, a large, clump-forming bamboo, has great potential to contribute towards poverty alleviation efforts across its distributional range. Harvesting methods that maximize yield while they fulfill local objectives and ensure sustainability are a research priority. Documenting local ecological knowledge on the species and identifying local users' goals for its production, we defined three harvesting treatments (selective cut, horseshoe cut, clear cut) and experimentally compared them with a no-intervention control treatment in an action research framework. We implemented harvesting over three seasons and monitored annually and two years post-treatment. Even though the total number of culms positively influenced the number of shoots regenerated, a much stronger relationship was detected between the number of culms harvested and the number of shoots regenerated, indicating compensatory growth mechanisms to guide shoot regeneration. Shoot recruitment declined over time in all treatments as well as the control; however, there was no difference among harvest treatments. Culm recruitment declined with an increase in harvesting intensity. When univariately assessing the number of harvested culms and shoots, there were no differences among treatments. However, multivariate analyses simultaneously considering both variables showed that harvested output of shoots and culms was higher with clear cut and horseshoe cut as compared to selective cut. Given the ease of implementation and issues of work safety, users preferred the horseshoe cut, but the lack of sustainability of shoot production calls for investigating longer cutting cycles.

  17. An Adaptive Unstructured Grid Method by Grid Subdivision, Local Remeshing, and Grid Movement

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    1999-01-01

    An unstructured grid adaptation technique has been developed and successfully applied to several three dimensional inviscid flow test cases. The approach is based on a combination of grid subdivision, local remeshing, and grid movement. For solution adaptive grids, the surface triangulation is locally refined by grid subdivision, and the tetrahedral grid in the field is partially remeshed at locations of dominant flow features. A grid redistribution strategy is employed for geometric adaptation of volume grids to moving or deforming surfaces. The method is automatic and fast and is designed for modular coupling with different solvers. Several steady state test cases with different inviscid flow features were tested for grid/solution adaptation. In all cases, the dominant flow features, such as shocks and vortices, were accurately and efficiently predicted with the present approach. A new and robust method of moving tetrahedral "viscous" grids is also presented and demonstrated on a three-dimensional example.

  18. An Approach to Teaching Applied GIS: Implementation for Local Organizations.

    ERIC Educational Resources Information Center

    Benhart, John, Jr.

    2000-01-01

    Describes the instructional method, Client-Life Cycle GIS Project Learning, used in a course at Indiana University of Pennsylvania that enables students to learn with and about geographic information system (GIS). Discusses the course technical issues in GIS and an example project using this method. (CMK)

  19. Fast approach to evaluate map reconstruction for lesion detection and localization

    SciTech Connect

    Qi, Jinyi; Huesman, Ronald H.

    2004-02-01

    Lesion detection is an important task in emission tomography. Localization ROC (LROC) studies are often used to analyze lesion detection and localization performance. Most researchers rely on Monte Carlo reconstruction samples to obtain LROC curves, which can be very time-consuming for iterative algorithms. In this paper we develop a fast approach to obtain LROC curves that does not require Monte Carlo reconstructions. We use a channelized Hotelling observer model to search for lesions, and the results can be easily extended to other numerical observers. We theoretically analyzed the mean and covariance of the observer output. Assuming the observer outputs are multivariate Gaussian random variables, an LROC curve can be directly generated by integrating the conditional probability density functions. The high-dimensional integrals are calculated using a Monte Carlo method. The proposed approach is very fast because no iterative reconstruction is involved. Computer simulations show that the results of the proposed method match well with those obtained using the traditional LROC analysis.
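The channelized Hotelling observer at the core of this approach applies a linear template w = S^-1 (m_s - m_n) to channelized image features. Below is a minimal sketch on synthetic channel outputs, with a crude threshold-based detection summary standing in for the full LROC integration described in the abstract:

```python
import numpy as np

def cho_statistics(signal_feats, noise_feats):
    """Channelized Hotelling observer: template w = S^-1 (m_s - m_n)
    applied to channelized image features."""
    m_s = signal_feats.mean(axis=0)
    m_n = noise_feats.mean(axis=0)
    # Pooled intra-class covariance of the channel outputs
    S = 0.5 * (np.cov(signal_feats.T) + np.cov(noise_feats.T))
    w = np.linalg.solve(S, m_s - m_n)
    return signal_feats @ w, noise_feats @ w

# Synthetic 4-channel features: lesion-present cases shifted along channel 0
rng = np.random.default_rng(1)
noise = rng.standard_normal((500, 4))
signal = rng.standard_normal((500, 4)) + np.array([1.5, 0, 0, 0])
t_s, t_n = cho_statistics(signal, noise)

# Fraction of signal statistics above the median noise statistic:
# a crude stand-in for the ROC/LROC analysis of observer outputs
hit_rate = (t_s > np.median(t_n)).mean()
```

The paper's speedup comes from modeling the mean and covariance of these observer outputs analytically instead of drawing `t_s` and `t_n` from Monte Carlo reconstructions.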

  20. Method for localizing and isolating an errant process step

    DOEpatents

    Tobin, Jr., Kenneth W.; Karnowski, Thomas P.; Ferrell, Regina K.

    2003-01-01

    A method for localizing and isolating an errant process includes the steps of retrieving from a defect image database a selection of images, each having image content similar to the image content extracted from a query image depicting a defect, and each having corresponding defect characterization data. A conditional probability distribution of the defect having occurred in a particular process step is derived from the defect characterization data. A process step is then identified as the most probable source of the defect according to the derived conditional probability distribution. A method for process step defect identification includes the steps of characterizing anomalies in a product, the anomalies being detected by an imaging system. A query image of a product defect is then acquired. A particular characterized anomaly is then correlated with the query image. An errant process step is then associated with the correlated image.
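The derivation of the conditional probability distribution over process steps from retrieved similar images can be sketched as simple normalized counting. The step labels and the retrieval result below are hypothetical:

```python
from collections import Counter

def most_probable_step(similar_image_steps):
    """Derive a conditional probability distribution over process steps
    from the characterization data of retrieved similar defect images,
    and return the step with the highest probability."""
    counts = Counter(similar_image_steps)
    total = sum(counts.values())
    dist = {step: n / total for step, n in counts.items()}
    return max(dist, key=dist.get), dist

# Hypothetical retrieval result: process steps recorded for the ten
# database images most similar to the query defect image
steps = ["etch", "litho", "etch", "etch", "deposition",
         "etch", "litho", "etch", "etch", "etch"]
best_step, dist = most_probable_step(steps)   # "etch" with p = 0.7
```

The quality of this distribution rests entirely on the image-retrieval stage: only if the retrieved images genuinely share the query defect's appearance does the count over their characterization data approximate P(step | defect).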

  1. New orbit correction method uniting global and local orbit corrections

    NASA Astrophysics Data System (ADS)

    Nakamura, N.; Takaki, H.; Sakai, H.; Satoh, M.; Harada, K.; Kamiya, Y.

    2006-01-01

    A new orbit correction method, called the eigenvector method with constraints (EVC), is proposed and formulated to unite global and local orbit corrections for ring accelerators, especially synchrotron radiation (SR) sources. The EVC can exactly correct the beam positions at arbitrarily selected ring positions, such as light source points, while simultaneously reducing closed orbit distortion (COD) around the whole ring. Computer simulations clearly demonstrate these features of the EVC for both the Super-SOR light source and the Advanced Light Source (ALS), which have typical structures of high-brilliance SR sources. In addition, the effects of errors in beam position monitor (BPM) reading and steering magnet setting on the orbit correction are analytically expressed and compared with the computer simulations. Simulation results show that the EVC is very effective and useful for orbit correction and beam position stabilization in SR sources.
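The idea of exactly zeroing the orbit at selected monitors while reducing the overall closed orbit distortion can be written as an equality-constrained least-squares problem. The sketch below solves it with Lagrange multipliers (a KKT system) on a random, hypothetical response matrix; the actual EVC formulation works in the eigenvector basis of the response matrix rather than solving the KKT system directly:

```python
import numpy as np

def corrected_orbit_strengths(R, d, constrained):
    """Corrector strengths x minimizing ||R x + d||^2 (global COD
    reduction) subject to exact cancellation of the orbit at the
    constrained monitor indices: (R x + d)[constrained] = 0.
    Solved via Lagrange multipliers (KKT system)."""
    C = R[constrained, :]
    e = -d[constrained]
    n, m = R.shape[1], len(constrained)
    # KKT system: [2 R^T R, C^T; C, 0] [x; lam] = [-2 R^T d; e]
    kkt = np.block([[2 * R.T @ R, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([-2 * R.T @ d, e])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]

# Hypothetical 8-monitor, 4-corrector response matrix and initial orbit
rng = np.random.default_rng(2)
R = rng.standard_normal((8, 4))
d = rng.standard_normal(8)
x = corrected_orbit_strengths(R, d, constrained=[1, 5])
residual = R @ x + d   # exactly zero at monitors 1 and 5
```

The constrained monitors play the role of the light source points in the abstract: their positions are corrected exactly, while the remaining monitors see the best least-squares COD reduction compatible with those constraints.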

  2. Optimizing Local Memory Allocation and Assignment through a Decoupled Approach

    NASA Astrophysics Data System (ADS)

    Diouf, Boubacar; Ozturk, Ozcan; Cohen, Albert

    Software-controlled local memories (LMs) are widely used to provide fast, scalable, power-efficient and predictable access to critical data. While many studies have addressed LM management, keeping hot data in the LM continues to be a major headache. This paper revisits LM management of arrays in light of recent progress in register allocation, supporting multiple live-range splitting schemes through a generic integer linear program. These schemes differ in the grain of decision points. The model can also be extended to address fragmentation, assigning live ranges to precise offsets. We show that the links between LM management and register allocation have been underexploited, leaving many fundamental questions open and effective applications to be explored.

  3. Well-conditioning global-local analysis using stable generalized/extended finite element method for linear elastic fracture mechanics

    NASA Astrophysics Data System (ADS)

    Malekan, Mohammad; Barros, Felicio Bruzzi

    2016-07-01

    Using the locally-enriched strategy to enrich a small/local part of the problem by the generalized/extended finite element method (G/XFEM) leads to a non-optimal convergence rate and an ill-conditioned system of equations due to the presence of blending elements. The local enrichment can be chosen from polynomial, singular, branch or numerical types. The so-called stable version of the G/XFEM method provides a well-conditioned approach when only singular functions are used in the blending elements. This paper combines numeric enrichment functions obtained from the global-local G/XFEM method with polynomial enrichment, along with a well-conditioning approach, stable G/XFEM, in order to show the robustness and effectiveness of the approach. In global-local G/XFEM, the enrichment functions are constructed numerically from the solution of a local problem. Furthermore, several enrichment strategies are adopted along with the global-local enrichment. The results obtained with these enrichment strategies are discussed in detail, considering the convergence rate in strain energy, the growth rate of the condition number, and computational processing. Numerical experiments show that using geometrical enrichment along with stable G/XFEM for the global-local strategy improves the convergence rate and the conditioning of the problem. In addition, the results show that using polynomial enrichment for the global problem simultaneously with global-local enrichments leads to ill-conditioned system matrices and a poor convergence rate.

  5. Food allergen detection methods: a coordinated approach.

    PubMed

    Goodwin, Philip R

    2004-01-01

    The levels (1-2%) and increasing severity of allergic responses to food in the adult population are well documented, as is the phenomenon of even higher (3-8%) and apparently increasing incidence in children, albeit that susceptibility decreases with age. Problematic foods include peanut, milk, eggs, tree nuts, and sesame, but the list is growing as awareness continues to rise. The amount of such foods needed to cause allergic reactions is difficult to gauge; however, the general consensus is that ingestion of low parts per million is sufficient to cause severe reactions in badly affected individuals. Symptoms can progress rapidly, within minutes, from minor discomfort to severe, even life-threatening anaphylactic shock in those worst affected. Given the combination of high incidence of atopy, potential severity of response, and apparently widespread instances of "hidden" allergens in the food supply, it is not surprising that this issue is increasingly subject to legislative and regulatory scrutiny. In order to assist in controlling allergen levels in foods to acceptable limits, analysts require a combination of test methods, each designed to produce accurate, timely, and cost-effective analytical information. Such information contributes significantly to Hazard Analysis Critical Control Point programs to determine food manufacturers' risk and improves the accuracy of monitoring and surveillance by food industry, commercial, and enforcement laboratories. Analysis thereby facilitates improvements in compliance with labeling laws, with concomitant reductions in risks to atopic consumers. This article describes a combination of analytical approaches to fulfill the various needs of these three analytical communities.

  6. An automatic locally-adaptive method to estimate heavily-tailed breakthrough curves from particle distributions

    NASA Astrophysics Data System (ADS)

    Pedretti, Daniele; Fernàndez-Garcia, Daniel

    2013-09-01

    Particle tracking methods to simulate solute transport deal with the issue of having to reconstruct smooth concentrations from a limited number of particles. This is an error-prone process that typically leads to large fluctuations in the determined late-time behavior of breakthrough curves (BTCs). Kernel density estimators (KDE) can be used to automatically reconstruct smooth BTCs from a small number of particles. The kernel approach incorporates the uncertainty associated with subsampling a large population by equipping each particle with a probability density function. Two broad classes of KDE methods can be distinguished depending on the parametrization of this function: global and adaptive methods. This paper shows that each method is likely to estimate well a specific portion of the BTCs. Although global methods offer a valid approach to estimating the early-time behavior and peak of BTCs, they exhibit important fluctuations at the tails, where fewer particles exist. In contrast, locally adaptive methods improve tail estimation while oversmoothing both early-time and peak concentrations. Therefore a new method is proposed combining the strengths of both KDE approaches. The proposed approach is universal and only needs one parameter (α), which depends slightly on the shape of the BTCs. Results show that, for the tested cases, heavily-tailed BTCs are properly reconstructed with α ≈ 0.5.
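A minimal sketch of the two KDE classes discussed: a global Silverman bandwidth versus an Abramson-style adaptive bandwidth with sensitivity parameter α, applied to synthetic heavy-tailed arrival times. The pilot-density construction here is a generic textbook scheme, not necessarily the authors' exact formulation:

```python
import numpy as np

def kde(times, grid, bandwidths):
    """Gaussian kernel density estimate with per-particle bandwidths."""
    u = (grid[:, None] - times[None, :]) / bandwidths[None, :]
    k = np.exp(-0.5 * u ** 2) / (np.sqrt(2 * np.pi) * bandwidths[None, :])
    return k.mean(axis=1)

def global_bandwidth(times):
    """Silverman's rule: one bandwidth shared by every particle."""
    h = 1.06 * times.std() * len(times) ** (-1 / 5)
    return np.full(len(times), h)

def adaptive_bandwidth(times, alpha=0.5):
    """Widen kernels where the pilot density is low (the BTC tail)."""
    pilot = kde(times, times, global_bandwidth(times))
    g = np.exp(np.mean(np.log(pilot)))          # geometric mean of pilot
    return global_bandwidth(times) * (pilot / g) ** (-alpha)

# Heavy-tailed arrival times (log-normal), with few late particles
rng = np.random.default_rng(3)
times = rng.lognormal(mean=0.0, sigma=1.0, size=500)
grid = np.linspace(0.01, 30, 300)
btc_global = kde(times, grid, global_bandwidth(times))
btc_adaptive = kde(times, grid, adaptive_bandwidth(times))
```

With α = 0, the adaptive estimate collapses to the global one; α near 0.5 stretches tail kernels where particles are sparse, which is the behavior the abstract exploits for heavily-tailed BTCs.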

  7. Planning and visualization methods for effective bronchoscopic target localization

    NASA Astrophysics Data System (ADS)

    Gibbs, Jason D.; Taeprasarsit, Pinyo; Higgins, William E.

    2012-02-01

    Bronchoscopic biopsy of lymph nodes is an important step in staging lung cancer. Lymph nodes, however, lie behind the airway walls and are near large vascular structures; all of these structures are hidden from the bronchoscope's field of view. Previously, we presented a computer-based virtual bronchoscopic navigation system that provides reliable guidance for bronchoscopic sampling. While this system offers a major improvement over standard practice, bronchoscopists told us that target localization (lining up the bronchoscope before deploying a needle into the target) can still be challenging. We therefore address target localization in two distinct ways: (1) automatic computation of an optimal diagnostic sampling pose for safe, effective biopsies, and (2) a novel visualization of the target and surrounding major vasculature. The planning determines the final pose for the bronchoscope such that the needle, when extended from the tip, maximizes the tissue extracted. This automatically calculated local pose orientation is conveyed in endoluminal renderings by a 3D arrow. Additional visual cues convey obstacle locations and target depths-of-sample from arbitrary instantaneous viewing orientations. With the system, a physician can freely navigate in the virtual bronchoscopic world, perceiving the depth-of-sample and possible obstacle locations at any endoluminal pose, not just one pre-determined optimal pose. We validated the system using mediastinal lymph nodes in eleven patients. The system successfully planned for 20 separate targets in human MDCT scans. In particular, given the patient and bronchoscope constraints, our method found that safe, effective biopsies were feasible in 16 of the 20 targets; the four remaining targets required more aggressive safety margins than a "typical" target. In all cases, planning computation took only a few seconds, while the visualizations updated in real time during bronchoscopic navigation.

  8. Cryo-Balloon Catheter Localization Based on a Support-Vector-Machine Approach.

    PubMed

    Kurzendorfer, Tanja; Mewes, Philip W; Maier, Andreas; Strobel, Norbert; Brost, Alexander

    2016-08-01

    Cryo-balloon catheters have attracted an increasing amount of interest in the medical community as they can reduce patient risk during left atrial pulmonary vein ablation procedures. As cryo-balloon catheters are not equipped with electrodes, they cannot be localized automatically by electro-anatomical mapping systems. As a consequence, X-ray fluoroscopy has remained an important means for guidance during the procedure. Most recently, image guidance methods for fluoroscopy-based procedures have been proposed, but they provide only limited support for cryo-balloon catheters and require significant user interaction. To improve this situation, we propose a novel method for automatic cryo-balloon catheter detection in fluoroscopic images by detecting the cryo-balloon catheter's built-in X-ray marker. Our approach is based on a blob detection algorithm to find possible X-ray marker candidates. Several of these candidates are then excluded using prior knowledge. For the remaining candidates, several catheter specific features are introduced. They are processed using a machine learning approach to arrive at the final X-ray marker position. Our method was evaluated on 75 biplane fluoroscopy images from 40 patients, from two sites, acquired with a biplane angiography system. The method yielded a success rate of 99.0% in plane A and 90.6% in plane B, respectively. The detection achieved an accuracy of 1.00 mm±0.82 mm in plane A and 1.13 mm±0.24 mm in plane B. The localization in 3-D was associated with an average error of 0.36 mm±0.86 mm.
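The detection pipeline described above (blob detection, pruning of candidates, feature-based machine-learning decision) can be illustrated on synthetic data. Everything below is a hypothetical stand-in: a toy Laplacian-of-Gaussian detector, two made-up features (response strength and pixel intensity), and a linear SVM. It is not the published method or its feature set.

```python
# Sketch of a marker-detection pipeline: LoG blob detection -> candidate
# pruning by minimum separation -> SVM scoring of candidates.
import numpy as np
from scipy.ndimage import gaussian_laplace
from sklearn.svm import SVC

def make_frame(marker_xy, shape=(64, 64), noise=0.05, seed=0):
    """Synthetic fluoroscopy-like frame: one bright blob on a noisy background."""
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    blob = np.exp(-((x - marker_xy[0]) ** 2 + (y - marker_xy[1]) ** 2) / (2 * 2.0 ** 2))
    return blob + noise * rng.standard_normal(shape)

def blob_candidates(frame, sigma=2.0, n_best=5, min_sep=5):
    """Strongest LoG responses, kept at least min_sep pixels apart."""
    resp = -gaussian_laplace(frame, sigma)      # bright blobs -> positive response
    order = np.argsort(resp.ravel())[::-1]
    cands = []
    for idx in order:
        y, x = divmod(int(idx), frame.shape[1])
        if all((x - cx) ** 2 + (y - cy) ** 2 >= min_sep ** 2 for cx, cy, _ in cands):
            cands.append((x, y, float(resp[y, x])))
        if len(cands) == n_best:
            break
    return cands

def detect_marker(frame, clf):
    cands = blob_candidates(frame)
    feats = [[r, frame[y, x]] for x, y, r in cands]  # toy features
    scores = clf.decision_function(feats)
    x, y, _ = cands[int(np.argmax(scores))]
    return x, y

# Train a toy SVM: positives are candidates at the true marker, negatives are noise peaks.
X_train, y_train = [], []
for seed in range(8):
    frame = make_frame((40, 20), seed=seed)
    for x, y, r in blob_candidates(frame):
        is_marker = abs(x - 40) <= 2 and abs(y - 20) <= 2
        X_train.append([r, frame[y, x]])
        y_train.append(int(is_marker))
clf = SVC(kernel="linear").fit(X_train, y_train)

x, y = detect_marker(make_frame((31, 45), seed=99), clf)
```

In the real setting the candidate features would encode catheter-specific prior knowledge rather than raw intensities.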

  9. Polaron Localization in Conjugated Polymers by Hybrid DFT Methods

    NASA Astrophysics Data System (ADS)

    Shao, Nan; Wu, Qin; Theory & Computation Group Team

    2013-03-01

Reliable application of density functional theory (DFT) to study the electronic properties of polarons remains controversial. A proper description should exhibit both the formation of a charge-localized electronic state and saturation of the polaron size with increasing oligomer length. The aim of this work is to find a proper hybrid DFT method to study the chain-length-dependent electronic properties of charged conjugated polymer systems. Using oligopyrrole cations as a test case, global hybrid functionals such as BHandHLYP can show charge localization, but a well-defined polaron size does not emerge when the length of the oligomer is increased; the saturation effect was not predicted correctly. By applying the 100% long-range-corrected hybrid functional LRC-PBE, saturation of the charge distribution is achieved, implying that LRC-PBE describes the spatial extent of the electronic state of polypyrrole better than conventional hybrid functionals. The tuning of the range parameter and the study of other polymer polaron systems will be discussed. Supported by Brookhaven Science Associates, LLC under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.

  10. Evaluation of geospatial methods to generate subnational HIV prevalence estimates for local level planning

    PubMed Central

    2016-01-01

    Objective: There is evidence of substantial subnational variation in the HIV epidemic. However, robust spatial HIV data are often only available at high levels of geographic aggregation and not at the finer resolution needed for decision making. Therefore, spatial analysis methods that leverage available data to provide local estimates of HIV prevalence may be useful. Such methods exist but have not been formally compared when applied to HIV. Design/methods: Six candidate methods – including those used by the Joint United Nations Programme on HIV/AIDS to generate maps and a Bayesian geostatistical approach applied to other diseases – were used to generate maps and subnational estimates of HIV prevalence across three countries using cluster level data from household surveys. Two approaches were used to assess the accuracy of predictions: internal validation, whereby a proportion of input data is held back (test dataset) to challenge predictions; and comparison with location-specific data from household surveys in earlier years. Results: Each of the methods can generate usefully accurate predictions of prevalence at unsampled locations, with the magnitude of the error in predictions similar across approaches. However, the Bayesian geostatistical approach consistently gave marginally the strongest statistical performance across countries and validation procedures. Conclusions: Available methods may be able to furnish estimates of HIV prevalence at finer spatial scales than the data currently allow. The subnational variation revealed can be integrated into planning to ensure responsiveness to the spatial features of the epidemic. The Bayesian geostatistical approach is a promising strategy for integrating HIV data to generate robust local estimates. PMID:26919737
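The internal-validation idea above - hold back a share of survey clusters and challenge each method's predictions at the held-out locations - can be sketched with two deliberately simple interpolators. The smooth prevalence surface, inverse-distance weighting, and nearest-neighbour predictors below are illustrative stand-ins, not the UNAIDS mapping tools or the Bayesian geostatistical model.

```python
# Hold-out validation of two toy spatial predictors on a synthetic surface.
import numpy as np

rng = np.random.default_rng(42)

def true_prevalence(xy):
    """Hypothetical smooth prevalence surface on the unit square."""
    return 0.05 + 0.10 * np.exp(-((xy[:, 0] - 0.3) ** 2 + (xy[:, 1] - 0.7) ** 2) / 0.1)

xy = rng.random((200, 2))                       # cluster locations
prev = true_prevalence(xy) + 0.005 * rng.standard_normal(200)
train, test = np.arange(150), np.arange(150, 200)  # held-back test dataset

def idw(xy_train, v_train, xy_query, power=2.0):
    """Inverse-distance-weighted prediction."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_train[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w * v_train).sum(axis=1) / w.sum(axis=1)

def nearest(xy_train, v_train, xy_query):
    """Nearest-cluster prediction."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_train[None, :, :], axis=2)
    return v_train[np.argmin(d, axis=1)]

truth = prev[test]
mae_idw = np.abs(idw(xy[train], prev[train], xy[test]) - truth).mean()
mae_nn = np.abs(nearest(xy[train], prev[train], xy[test]) - truth).mean()
```

Comparing such mean absolute errors across methods is the core of the paper's first validation approach.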

  11. Local and global approaches to the problem of Poincaré recurrences. Applications in nonlinear dynamics

    NASA Astrophysics Data System (ADS)

    Anishchenko, V. S.; Boev, Ya. I.; Semenova, N. I.; Strelkova, G. I.

    2015-07-01

We review rigorous and numerical results on the statistics of Poincaré recurrences which are related to the modern development of the Poincaré recurrence problem. We analyze and describe the rigorous results which are achieved both in the classical (local) approach and in the recently developed global approach. These results are illustrated by numerical simulation data for simple chaotic and ergodic systems. It is shown that the basic theoretical laws can be applied to noisy systems if the probability measure is ergodic and stationary. Poincaré recurrences are studied numerically in nonautonomous systems. Statistical characteristics of recurrences are analyzed in the framework of the global approach for the cases of positive and zero topological entropy. We show that for positive entropy, there is a relationship between the Afraimovich-Pesin dimension, Lyapunov exponents and the Kolmogorov-Sinai entropy, both without and in the presence of external noise. The case of zero topological entropy is exemplified by numerical results for the Poincaré recurrence statistics in the circle map. We show and prove that the dependence of minimal recurrence times on the return region size demonstrates universal properties for the golden and silver ratios. The behavior of Poincaré recurrences is analyzed at the critical point of Feigenbaum attractor birth. We explore Poincaré recurrences for an ergodic set which is generated in the stroboscopic section of a nonautonomous oscillator and is similar to a circle shift. Based on the obtained results, we show how the Poincaré recurrence statistics can be applied to solving a number of nonlinear dynamics issues. We propose and illustrate alternative methods for diagnosing effects of external and mutual synchronization of chaotic systems in the context of the local and global approaches. The properties of the recurrence time probability density can be used to detect the stochastic resonance phenomenon. We also discuss how
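The zero-entropy case mentioned above can be reproduced in a few lines: for the circle shift x -> x + rho (mod 1) with the golden rotation number, the minimal return times into shrinking neighbourhoods of the starting point are Fibonacci numbers, which is the universal behaviour the abstract refers to.

```python
# Minimal recurrence times for the circle shift with golden rotation number.
import math

def min_recurrence_time(rho, eps, n_max=10_000):
    """Smallest n >= 1 with dist(n*rho mod 1, 0) < eps, i.e. first return of
    the orbit of 0 into an eps-neighbourhood of its starting point."""
    x = 0.0
    for n in range(1, n_max + 1):
        x = (x + rho) % 1.0
        if min(x, 1.0 - x) < eps:
            return n
    raise RuntimeError("no recurrence within n_max steps")

golden = (math.sqrt(5) - 1) / 2            # rotation number ~ 0.618...
times = [min_recurrence_time(golden, eps) for eps in (0.2, 0.1, 0.05, 0.02)]
```

As the return region shrinks, the minimal times step through the Fibonacci sequence (3, 5, 13, 34, ...), reflecting the continued-fraction structure of the golden ratio.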

  12. A Challenging Surgical Approach to Locally Advanced Primary Urethral Carcinoma

    PubMed Central

    Lucarelli, Giuseppe; Spilotros, Marco; Vavallo, Antonio; Palazzo, Silvano; Miacola, Carlos; Forte, Saverio; Matera, Matteo; Campagna, Marcello; Colamonico, Ottavio; Schiralli, Francesco; Sebastiani, Francesco; Di Cosmo, Federica; Bettocchi, Carlo; Di Lorenzo, Giuseppe; Buonerba, Carlo; Vincenti, Leonardo; Ludovico, Giuseppe; Ditonno, Pasquale; Battaglia, Michele

    2016-01-01

    Abstract Primary urethral carcinoma (PUC) is a rare and aggressive cancer, often underdetected and consequently unsatisfactorily treated. We report a case of advanced PUC, surgically treated with combined approaches. A 47-year-old man underwent transurethral resection of a urethral lesion with histological evidence of a poorly differentiated squamous cancer of the bulbomembranous urethra. Computed tomography (CT) and bone scans excluded metastatic spread of the disease but showed involvement of both corpora cavernosa (cT3N0M0). A radical surgical approach was advised, but the patient refused this and opted for chemotherapy. After 17 months the patient was referred to our department due to the evidence of a fistula in the scrotal area. CT scan showed bilateral metastatic disease in the inguinal, external iliac, and obturator lymph nodes as well as the involvement of both corpora cavernosa. Additionally, a fistula originating from the right corpus cavernosum extended to the scrotal skin. At this stage, the patient accepted the surgical treatment, consisting of different phases. Phase I: Radical extraperitoneal cystoprostatectomy with iliac-obturator lymph nodes dissection. Phase II: Creation of a urinary diversion through a Bricker ileal conduit. Phase III: Repositioning of the patient in lithotomic position for an overturned Y skin incision, total penectomy, fistula excision, and “en bloc” removal of surgical specimens including the bladder, through the perineal breach. Phase IV: Right inguinal lymphadenectomy. The procedure lasted 9-and-a-half hours, was complication-free, and intraoperative blood loss was 600 mL. The patient was discharged 8 days after surgery. Pathological examination documented a T4N2M0 tumor. The clinical situation was stable during the first 3 months postoperatively but then metastatic spread occurred, not responsive to adjuvant chemotherapy, which led to the patient's death 6 months after surgery. Patients with advanced stage tumors of

  13. Tempest - Efficient Computation of Atmospheric Flows Using High-Order Local Discretization Methods

    NASA Astrophysics Data System (ADS)

    Ullrich, P. A.; Guerra, J. E.

    2014-12-01

The Tempest Framework combines several compact numerical methods to facilitate intercomparison of atmospheric flow calculations on the sphere and in rectangular domains. This framework includes implementations of Spectral Element, Discontinuous Galerkin, Flux Reconstruction, and Hybrid Finite Element methods with the goal of achieving optimal accuracy in the solution of atmospheric problems. Several advantages of this approach are discussed, such as improved pressure gradient calculation, numerical stability through vertical/horizontal splitting, and arbitrary order of accuracy. The local numerical discretization allows for high-performance parallel computation and efficient inclusion of parameterizations. These techniques are used in conjunction with a non-conformal, locally refined, cubed-sphere grid for global simulations and standard Cartesian grids for simulations at the mesoscale. A complete implementation of the methods described is demonstrated in a non-hydrostatic setting.

  14. Application of local mesh refinement in the DSMC method

    NASA Astrophysics Data System (ADS)

    Wu, J.-S.; Tseng, K.-C.; Kuo, C.-H.

    2001-08-01

The implementation of an adaptive mesh embedding (h-refinement) scheme using unstructured grids in the two-dimensional Direct Simulation Monte Carlo (DSMC) method is reported. In this technique, local isotropic refinement is used to introduce new meshes where the local cell Knudsen number is less than some preset value. This simple scheme, however, has several severe consequences affecting the performance of the DSMC method. Thus, we have applied a technique to remove the hanging nodes by introducing anisotropic refinement in the interfacial cells. This is accomplished by simply connecting the hanging node(s) with the other non-hanging node(s) in the non-refined, interfacial cells. This remedy adds a negligible amount of work, yet it removes all the difficulties presented in the first scheme with hanging nodes. We have tested the proposed scheme for argon gas using different types of mesh (triangular, quadrilateral, or mixed), applied to a high-speed driven cavity flow. The results show an improved flow resolution as compared with that of the unadapted mesh. Finally, we have used triangular adaptive meshes to compute two near-continuum gas flows, a supersonic flow over a cylinder and a supersonic flow over a 35° compression ramp. The results show fairly good agreement with previous studies. In summary, the computational penalties of the proposed adaptive schemes are found to be small as compared with the DSMC computation itself. We conclude that the proposed scheme is superior to the original unadaptive scheme considering the accuracy of the solution.
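The refinement criterion described above - split a cell whenever the local cell Knudsen number (mean free path over cell size) drops below a preset value - can be sketched with a toy quadtree. The hard-sphere gas data, density field, and thresholds below are illustrative; the paper's hanging-node treatment is not reproduced.

```python
# Toy h-refinement on square cells driven by a local Knudsen number criterion.
import math

def mean_free_path(n_density, d_mol=4.17e-10):
    """Hard-sphere mean free path: lambda = 1 / (sqrt(2) * pi * d^2 * n)."""
    return 1.0 / (math.sqrt(2) * math.pi * d_mol ** 2 * n_density)

def refine(cells, density_at, kn_min=1.0, max_level=6):
    """Recursively quarter cells (x, y, size, level) where Kn = lambda/size < kn_min."""
    out = []
    for (x, y, size, level) in cells:
        kn = mean_free_path(density_at(x + size / 2, y + size / 2)) / size
        if kn < kn_min and level < max_level:
            h = size / 2
            kids = [(x, y, h, level + 1), (x + h, y, h, level + 1),
                    (x, y + h, h, level + 1), (x + h, y + h, h, level + 1)]
            out.extend(refine(kids, density_at, kn_min, max_level))
        else:
            out.append((x, y, size, level))
    return out

def density_at(x, y):
    """Hypothetical density field with a dense spot near the origin (1/m^3)."""
    return 1e20 + 1e21 * math.exp(-(x * x + y * y) / 1e-4)

cells = refine([(0.0, 0.0, 0.1, 0)], density_at)
levels = [c[3] for c in cells]
```

The dense spot drives refinement down to the level cap, while the background stops as soon as the cell Knudsen number exceeds the threshold.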

  15. Characterization of peak flow events with local singularity method

    NASA Astrophysics Data System (ADS)

    Cheng, Q.; Li, L.; Wang, L.

    2009-07-01

Three methods - return period, power-law frequency plot (concentration-area) and local singularity index - are introduced in this paper for characterizing peak flow events from river flow data for the period 1900 to 2000, recorded at 25 selected gauging stations on rivers in the Oak Ridges Moraine (ORM) area, Canada. First, the traditional return period method was applied to the maximum annual river flow data. Whereas the Pearson III distribution generally fits the values, a power-law frequency plot (C-A) based on the self-similarity principle provides an effective means of distinguishing "extremely" large flow events from regular flow events. While the latter show a power-law distribution, about 10 large flow events depart from the power-law distribution, and these flow events can be classified into a separate group, most of which are related to flood events. It is shown that the relation between the average water release over a time period after the flow peak and the time duration may follow a power-law distribution. The exponent of the power law, or singularity index, estimated from this power-law relation may be used to characterize the non-linearity of peak flow recessions. Viewing large peak flow events or floods as singular processes motivates the application of power-law models not only for characterizing the frequency distribution of peak flow events (for example, a power-law relation between the number and size of floods) but also for describing the local singularity of processes, such as a power-law relation between the amount of water released and the release time. With the introduction and validation of the singularity of peak flow events, alternative power-law models can be used to depict the recession properties as well as other types of non-linear properties.
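The singularity-index estimate described above reduces to a log-log fit: if the average release Q(t) over a window of length t after the peak follows Q(t) ~ c * t**(-a), the index a is minus the slope of log Q versus log t. The recession data below are synthetic, purely to show that the fit recovers a known exponent.

```python
# Estimating a power-law recession exponent (singularity index) by log-log fit.
import numpy as np

def singularity_index(durations, avg_release):
    """Least-squares fit of log Q vs log t; returns (exponent a, prefactor c)."""
    slope, log_c = np.polyfit(np.log(durations), np.log(avg_release), 1)
    return -slope, np.exp(log_c)

t = np.arange(1.0, 31.0)       # days after the flow peak (hypothetical)
q = 120.0 * t ** -0.6          # synthetic recession with a = 0.6, c = 120
a, c = singularity_index(t, q)
```

On real gauging-station records the fit would be applied to the averaged post-peak releases, and departures of a from event to event characterize the non-linearity of the recession.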

  16. Noninvasive localization of electromagnetic epileptic activity. I. Method descriptions and simulations.

    PubMed

    Grave de Peralta Menendez, R; Gonzalez Andino, S; Lantz, G; Michel, C M; Landis, T

    2001-01-01

    This paper considers the solution of the bioelectromagnetic inverse problem with particular emphasis on focal compact sources that are likely to arise in epileptic data. Two linear inverse methods are proposed and evaluated in simulations. The first method belongs to the class of distributed inverse solutions, capable of dealing with multiple simultaneously active sources. This solution is based on a Local Auto Regressive Average (LAURA) model. Since no assumption is made about the number of activated sources, this approach can be applied to data with multiple sources. The second method, EPIFOCUS, assumes that there is only a single focal source. However, in contrast to the single dipole model, it allows the source to have a spatial extent beyond a single point and avoids the non-linear optimization process required by dipole fitting. The performance of both methods is evaluated with synthetic data in noisy and noise free conditions. The simulation results demonstrate that LAURA and EPIFOCUS increase the number of sources retrieved with zero dipole localization error and produce lower maximum error and lower average error compared to Minimum Norm, Weighted Minimum Norm and Minimum Laplacian (LORETA). The results show that EPIFOCUS is a robust and powerful tool to localize focal sources. Alternatives to localize data generated by multiple sources are discussed. A companion paper (Lantz et al. 2001, this issue) illustrates the application of LAURA and EPIFOCUS to the analysis of interictal data in epileptic patients.
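The distributed inverse solutions compared in the paper are variants of the same linear problem: given measurements m = L j, with L the lead-field matrix, the minimum-norm estimate is j_hat = L^T (L L^T)^{-1} m, and weighted variants rescale L first. The sketch below shows only this generic minimum-norm step on a made-up lead field; LAURA's local autoregressive weighting is not reproduced.

```python
# Minimum-norm solution of an underdetermined bioelectromagnetic forward model.
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_sources = 8, 20
L = rng.standard_normal((n_sensors, n_sources))   # toy lead field

def minimum_norm(L, m):
    """Smallest-norm j with L @ j = m (Moore-Penrose solution)."""
    return L.T @ np.linalg.solve(L @ L.T, m)

j_true = np.zeros(n_sources)
j_true[5] = 1.0                  # a single focal source
m = L @ j_true                   # noise-free measurements

j_hat = minimum_norm(L, m)
```

The estimate reproduces the data exactly and has norm no larger than the true focal source, which is exactly why unweighted minimum norm tends to smear focal activity and motivates weighted schemes such as LAURA.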

  17. Qualitative Approaches to Mixed Methods Practice

    ERIC Educational Resources Information Center

    Hesse-Biber, Sharlene

    2010-01-01

    This article discusses how methodological practices can shape and limit how mixed methods is practiced and makes visible the current methodological assumptions embedded in mixed methods practice that can shut down a range of social inquiry. The article argues that there is a "methodological orthodoxy" in how mixed methods is practiced that…

  18. Periodic local MP2 method employing orbital specific virtuals

    SciTech Connect

    Usvyat, Denis Schütz, Martin; Maschio, Lorenzo

    2015-09-14

We introduce orbital specific virtuals (OSVs) to represent the truncated pair-specific virtual space in periodic local Møller-Plesset perturbation theory of second order (LMP2). The OSVs are constructed by diagonalization of the LMP2 amplitude matrices which correspond to diagonal Wannier-function (WF) pairs. Only a subset of these OSVs is adopted for the subsequent OSV-LMP2 calculation, namely, those with the largest contribution to the diagonal pair correlation energy, with the accumulated value of these contributions reaching a certain accuracy. The virtual space for a general (non-diagonal) pair is spanned by the union of the two OSV sets related to the individual WFs of the pair. In the periodic LMP2 method, the diagonal LMP2 amplitude matrices needed for the construction of the OSVs are calculated in the basis of projected atomic orbitals (PAOs), employing very large PAO domains. It turns out that the OSVs are excellent for describing short-range correlation, yet less appropriate for long-range van der Waals correlation. In order to compensate for this bias towards short-range correlation, we augment the virtual space spanned by the OSVs with the most diffuse PAOs of the corresponding minimal PAO domain. The Fock and overlap matrices in the OSV basis are constructed in reciprocal space. The 4-index electron repulsion integrals are calculated by local density fitting and, for distant pairs, via multipole approximation. New procedures for determining the fit-domains and the distant-pair lists, leading to higher efficiency in the 4-index integral evaluation, have been implemented. Generally, and in contrast to our previous PAO-based periodic LMP2 method, the OSV-LMP2 method no longer requires great care in the specification of the individual domains (to get a balanced description when calculating energy differences) and is in that sense a black-box procedure.
Discontinuities in potential energy surfaces, which may occur for PAO-based calculations if one is not
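The OSV truncation step can be sketched generically: diagonalize a diagonal-pair amplitude matrix and keep eigenvectors, ordered by contribution, until a preset fraction of the total is recovered. The random symmetric matrix below stands in for a real LMP2 amplitude matrix, and using the absolute eigenvalue as the "contribution" is a simplification of the actual pair-energy weighting.

```python
# Generic eigenvector truncation in the spirit of the OSV construction.
import numpy as np

def build_osvs(T, fraction=0.99):
    """Columns are the kept eigenvectors of the symmetric matrix T, chosen
    largest-|eigenvalue| first until `fraction` of the total is accumulated."""
    vals, vecs = np.linalg.eigh(T)
    order = np.argsort(np.abs(vals))[::-1]       # largest contributions first
    weight = np.abs(vals[order])
    cum = np.cumsum(weight) / weight.sum()
    n_keep = int(np.searchsorted(cum, fraction) + 1)
    return vecs[:, order[:n_keep]]

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 30))
T = (A + A.T) / 2                                # stand-in amplitude matrix
osv = build_osvs(T, fraction=0.99)
```

The retained columns form an orthonormal pair-specific virtual basis, typically much smaller than the full space.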

  1. Local response dispersion method. II. Generalized multicenter interactions

    NASA Astrophysics Data System (ADS)

    Sato, Takeshi; Nakai, Hiromi

    2010-11-01

The recently introduced local response dispersion method [T. Sato and H. Nakai, J. Chem. Phys. 131, 224104 (2009)], a first-principles alternative to empirical dispersion corrections in density functional theory, is implemented with generalized multicenter interactions involving both atomic and atomic-pair polarizabilities. The generalization improves the asymptote of intermolecular interactions, reducing the mean absolute percentage error from about 30% to 6% in the molecular C6 coefficients of more than 1000 dimers, compared to experimental values. The method is also applied to calculations of potential energy curves of molecules in the S22 database [P. Jurečka et al., Phys. Chem. Chem. Phys. 8, 1985 (2006)]. The calculated potential energy curves are in good agreement with reliable benchmarks recently published by Molnar et al. [J. Chem. Phys. 131, 065102 (2009)]. These improvements are achieved at the price of increased complexity in the implementation, but without losing the computational efficiency of the previous two-center (atom-atom) formulation. A set of different truncations of two-center and three- or four-center interactions is shown to be optimal in the cost-performance balance.
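The asymptotic C6 coefficients discussed above come from the Casimir-Polder integral, C6 = (3/pi) * Integral_0^inf alpha1(i*w) * alpha2(i*w) dw. As a self-contained check, a single-pole (Drude/London) polarizability alpha(i*w) = a0 / (1 + (w/w0)^2) gives the London formula C6 = (3/4) * a0^2 * w0 for two identical species. The parameter values are arbitrary.

```python
# Casimir-Polder integral for C6, verified against the London single-pole formula.
import numpy as np
from scipy.integrate import quad

def c6(alpha1, alpha2):
    """C6 = (3/pi) * integral over imaginary frequency of alpha1 * alpha2."""
    val, _ = quad(lambda w: alpha1(w) * alpha2(w), 0.0, np.inf)
    return 3.0 / np.pi * val

a0, w0 = 11.0, 0.43                       # static polarizability, pole frequency
alpha = lambda w: a0 / (1.0 + (w / w0) ** 2)

c6_num = c6(alpha, alpha)
c6_london = 0.75 * a0 ** 2 * w0           # analytic London result
```

The paper's generalization replaces the single-pole atomic polarizabilities with position-dependent atomic and atomic-pair quantities derived from the density, but the same integral structure underlies the C6 values it benchmarks.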

  2. Architecture-Centric Methods and Agile Approaches

    NASA Astrophysics Data System (ADS)

    Babar, Muhammad Ali; Abrahamsson, Pekka

Agile software development approaches have had a significant impact on industrial software development practices. Despite becoming widely popular, there is increasing perplexity about the role and importance of a system's software architecture in agile approaches [1, 2]. Advocates of the vital role of architecture in achieving the quality goals of large-scale software-intensive systems are skeptical of the scalability of any development approach that does not pay sufficient attention to architectural issues. However, the proponents of agile approaches usually perceive the upfront design and evaluation of architecture as being of little value to the customers of a system. According to them, for example, re-factoring can help fix most of the problems. Many experiences show that large-scale re-factoring often results in significant defects, which are very costly to address later in the development cycle. It is considered that re-factoring is worthwhile as long as the high-level design is good enough to limit the need for large-scale re-factoring [1, 3, 4].

  3. Comparing passive source localization and tracking approaches with a towed horizontal receiver array in an ocean waveguide.

    PubMed

    Gong, Zheng; Tran, Duong D; Ratilal, Purnima

    2013-11-01

    Approaches for instantaneous passive source localization using a towed horizontal receiver array in a random range-dependent ocean waveguide are examined. They include: (1) Moving array triangulation, (2) array invariant, (3) bearings-only target motion analysis in modified polar coordinates via the extended Kalman filter, and (4) bearings-migration minimum mean-square error. These methods are applied to localize and track a vertical source array deployed in the far-field of a towed horizontal receiver array during the Gulf of Maine 2006 Experiment. The source transmitted intermittent broadband pulses in the 300 to 1200 Hz frequency range. A nonlinear matched-filter kernel designed to replicate the acoustic signal measured by the receiver array is applied to enhance the signal-to-noise ratio. The source localization accuracy is found to be highly dependent on source-receiver geometry and the localization approach. For a relatively stationary source drifting at speeds much slower than the receiver array tow-speed, the mean source position can be estimated by moving array triangulation with less than 3% error near broadside direction. For a moving source, the Kalman filter method gives the best performance with 5.5% error. The array invariant is the best approach for localizing sources within the endfire beam of the receiver array with 7% error.
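The moving-array-triangulation idea above reduces to plane geometry: two bearings measured from two positions of the towed array define two rays whose intersection is the source estimate. The sketch below uses noise-free bearings and made-up positions; real bearings would carry measurement error and the geometry-dependent accuracy the paper reports.

```python
# Triangulating a source from two bearing measurements at two array positions.
import numpy as np

def triangulate(p1, bearing1, p2, bearing2):
    """Intersect the rays p_i + t_i * u(bearing_i); bearings in radians,
    measured counter-clockwise from the +x axis."""
    u1 = np.array([np.cos(bearing1), np.sin(bearing1)])
    u2 = np.array([np.cos(bearing2), np.sin(bearing2)])
    # Solve p1 + t1*u1 = p2 + t2*u2 for (t1, t2).
    A = np.column_stack([u1, -u2])
    t1, _ = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t1 * u1

source = np.array([3.0, 4.0])
p1, p2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
b1 = np.arctan2(source[1] - p1[1], source[0] - p1[0])
b2 = np.arctan2(source[1] - p2[1], source[0] - p2[0])
est = triangulate(p1, b1, p2, b2)
```

Near endfire the two rays become nearly parallel and the solve is ill-conditioned, which is why the paper finds the array-invariant approach preferable in that sector.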

  4. Optogenetics in the cerebellum: Purkinje cell-specific approaches for understanding local cerebellar functions.

    PubMed

    Tsubota, Tadashi; Ohashi, Yohei; Tamura, Keita

    2013-10-15

    The cerebellum consists of the cerebellar cortex and the cerebellar nuclei. Although the basic neuronal circuitry of the cerebellar cortex is uniform everywhere, anatomical data demonstrate that the input and output relationships of the cortex are spatially segregated between different cortical areas, which suggests that there are functional distinctions between these different areas. Perturbation of cerebellar cortical functions in a spatially restricted fashion is thus essential for investigating the distinctions among different cortical areas. In the cerebellar cortex, Purkinje cells are the sole output neurons that send information to downstream cerebellar and vestibular nuclei. Therefore, selective manipulation of Purkinje cell activities, without disturbing other neuronal types and passing fibers within the cortex, is a direct approach to spatially restrict the effects of perturbations. Although this type of approach has for many years been technically difficult, recent advances in optogenetics now enable selective activation or inhibition of Purkinje cell activities, with high temporal resolution. Here we discuss the effectiveness of using Purkinje cell-specific optogenetic approaches to elucidate the functions of local cerebellar cortex regions. We also discuss what improvements to current methods are necessary for future investigations of cerebellar functions to provide further advances.

  5. In situ localization of epidermal stem cells using a novel multi epitope ligand cartography approach.

    PubMed

    Ruetze, Martin; Gallinat, Stefan; Wenck, Horst; Deppert, Wolfgang; Knott, Anja

    2010-06-01

    Precise knowledge of the frequency and localization of epidermal stem cells within skin tissue would further our understanding of their role in maintaining skin homeostasis. As a novel approach we used the recently developed method of multi epitope ligand cartography, applying a set of described putative epidermal stem cell markers. Bioinformatic evaluation of the data led to the identification of several discrete basal keratinocyte populations, but none of them displayed the complete stem cell marker set. The distribution of the keratinocyte populations within the tissue was remarkably heterogeneous, but determination of distance relationships revealed a population of quiescent cells highly expressing p63 and the integrins alpha(6)/beta(1) that represent origins of a gradual differentiation lineage. This population comprises about 6% of all basal cells, shows a scattered distribution pattern and could also be found in keratinocyte holoclone colonies. The data suggest that this population identifies interfollicular epidermal stem cells.

  6. Strategy for the Development of a DNB Local Predictive Approach Based on Neptune CFD Software

    SciTech Connect

    Haynes, Pierre-Antoine; Peturaud, Pierre; Montout, Michael; Hervieu, Eric

    2006-07-01

The NEPTUNE project constitutes the thermal-hydraulics part of a long-term joint development program for the next generation of nuclear reactor simulation tools. The project is carried out by EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique), with the co-sponsorship of IRSN (Institut de Radioprotection et de Surete Nucleaire) and AREVA NP. NEPTUNE is a multi-phase flow software platform that includes advanced physical models and numerical methods for each simulation scale (CFD, component, system). NEPTUNE also provides new multi-scale and multi-disciplinary coupling functionalities. This new generation of two-phase flow simulation tools aims at meeting major industrial needs. DNB (Departure from Nucleate Boiling) prediction in PWRs is one of the high-priority needs, and this paper focuses on its anticipated improvement by means of a so-called 'Local Predictive Approach' using the NEPTUNE CFD code. We first present the ambitious 'Local Predictive Approach' anticipated for a better prediction of DNB, i.e., an approach intended to result in CHF correlations based on relevant local parameters as provided by the CFD modeling. The associated requirements for the two-phase flow modeling are underlined, as well as those for the good level of performance of the NEPTUNE CFD code; hence, the code validation strategy, based on different types of experimental data (including separated-effect and integral-type test data), is depicted. Secondly, we present comparisons between low-pressure adiabatic bubbly flow experimental data obtained in the DEDALE experiment and the associated numerical simulation results. This study again demonstrates the high potential of the NEPTUNE CFD code, even if, with respect to the aforementioned DNB-related aim, there is still a need for some modeling improvements involving new validation data obtained in thermal-hydraulic conditions representative of PWRs.
Finally, we deal with one of these new experimental data needs

  7. [Local straight line screening method for the detection of Chinese proprietary medicines containing undeclared prescription drugs].

    PubMed

    Li, Shu; Cao, Yan; Le, Jian; Chen, Gui-Liang; Chai, Yi-Feng; Lu, Feng

    2009-02-01

    The present paper constructs a new approach, named local straight-line screening (LSLS), to detect Chinese proprietary medicines (CPM) containing undeclared prescription drugs (UPD). Unlike traditional methods used in the analysis of multi-component spectra, LSLS is based on the characteristics of the original infrared spectra of the UPD and the suspected CPM, without any pattern recognition or concentration-model establishment. Spectrum subtraction leads to a variance in the local straight line, which serves as the key to discriminating whether a suspected CPM is adulterated or not. Sibutramine hydrochloride, fenfluramine hydrochloride, sildenafil citrate and lovastatin were used as reference substances of UPD to analyze 16 suspected CPM samples. The results show that LSLS can provide accurate qualitative and quantitative analysis of suspected CPM. The method could potentially be used in the preliminary screening of CPM containing possible UPD.

  8. Virtual local target method for avoiding local minimum in potential field based robot navigation.

    PubMed

    Zou, Xi-Yong; Zhu, Jing

    2003-01-01

    A novel robot navigation algorithm with global path generation capability is presented. The local minimum is one of the most intractable, yet frequently encountered, problems in potential field based robot navigation. By appropriately appointing virtual local targets along the journey, this problem can be solved effectively. The key concepts employed in this algorithm are the rules that govern when and how to appoint these virtual local targets. When the robot finds itself in danger of a local minimum, a virtual local target is appointed according to the rules to replace the global goal temporarily. After the virtual target is reached, the robot continues on its journey by heading towards the global goal. The algorithm prevents the robot from becoming trapped in local minima. Simulation results show that it is very effective in complex obstacle environments. PMID:12765277
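
    The escape mechanism described above can be illustrated with a toy 2-D simulation. The stall test, perpendicular offset distance, and force constants below are illustrative assumptions, not the rules proposed in the paper: when net displacement stalls far from the goal, a virtual target is appointed perpendicular to the goal direction, steered to, and then the global goal is resumed.

```python
import math

def unit(vx, vy):
    d = math.hypot(vx, vy)
    return (vx / d, vy / d) if d > 1e-12 else (0.0, 0.0)

def force(p, target, obstacles, r0=3.0, krep=4.0):
    # Unit attraction toward the current target plus repulsion from every
    # obstacle closer than the influence radius r0.
    fx, fy = unit(target[0] - p[0], target[1] - p[1])
    for ox, oy in obstacles:
        dx, dy = p[0] - ox, p[1] - oy
        d = math.hypot(dx, dy)
        if 1e-9 < d < r0:
            k = krep * (1.0 / d - 1.0 / r0) / d
            fx += k * dx / d
            fy += k * dy / d
    return fx, fy

def navigate(start, goal, obstacles, step=0.1, max_iter=5000):
    p, target, virtual = list(start), goal, None
    path = [tuple(p)]
    for _ in range(max_iter):
        if math.hypot(goal[0] - p[0], goal[1] - p[1]) < 0.3:
            return path, True                     # global goal reached
        if virtual is not None and math.hypot(virtual[0] - p[0],
                                              virtual[1] - p[1]) < 0.3:
            virtual, target = None, goal          # virtual target reached
        # Stall test: little net displacement over the last 20 steps while
        # still heading for the global goal -> treat as a local minimum and
        # appoint a virtual target perpendicular to the goal direction.
        if virtual is None and len(path) >= 20:
            qx, qy = path[-20]
            if math.hypot(p[0] - qx, p[1] - qy) < 0.5:
                gx, gy = unit(goal[0] - p[0], goal[1] - p[1])
                virtual = (p[0] - gy * 4.0, p[1] + gx * 4.0)
                target = virtual
        ux, uy = unit(*force(p, target, obstacles))
        p[0] += step * ux
        p[1] += step * uy
        path.append(tuple(p))
    return path, False
```

    With an obstacle placed directly between start and goal, plain descent on the potential oscillates at the force equilibrium, while the virtual-target detour lets the robot pass around it.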

  10. Training NOAA Staff on Effective Communication Methods with Local Climate Users

    NASA Astrophysics Data System (ADS)

    Timofeyeva, M. M.; Mayes, B.

    2011-12-01

    Since 2002, the NOAA National Weather Service (NWS) Climate Services Division (CSD) has offered training opportunities to NWS staff. As a result of the eight-year development of the training program, NWS now offers three training courses and about 25 online distance-learning modules covering various climate topics: climate data and observations, climate variability and change, and NWS national and local climate products, along with their tools, skill, and interpretation. Leveraging the climate information and expertise available at all NOAA line offices and partners allows delivery of the most advanced knowledge and is a critical aspect of the training program. One of the NWS challenges in providing local climate services is finding effective techniques for communicating highly technical scientific information to local users. Addressing this challenge requires a well-trained, climate-literate workforce at the local level, capable of communicating NOAA climate products and services as well as providing climate-sensitive decision support. Trained NWS climate service personnel use proactive and reactive approaches and professional education methods in communicating climate variability and change information to local users. Both scientifically sound messages and approachable communication techniques, such as storytelling, are important in developing an engaged dialog between climate service providers and users. Several pilot projects that NWS CSD conducted in the past year applied the NWS climate services training program to training events for NOAA technical user groups. The technical user groups included natural resources managers, engineers, hydrologists, and planners for transportation infrastructure. Training professional user groups required tailoring the instruction to the potential applications of each group. Training technical users identified the following critical issues: (1) knowledge of the target audience's expectations, initial knowledge status, and potential use of climate

  11. Modeling local extinction in turbulent combustion using an embedding method

    NASA Astrophysics Data System (ADS)

    Knaus, Robert; Pantano, Carlos

    2012-11-01

    Local regions of extinction in diffusion flames, called ``flame holes,'' can reduce the efficiency of combustion and increase the production of certain pollutants. At sufficiently high speeds, a flame may also be lifted from the rim of the burner to a downstream location that may be stable. These two phenomena share a common underlying mechanism of propagation related to edge-flame dynamics, where chemistry and fluid mechanics are equally important. We present a formulation that describes the formation, propagation, and growth of flame holes on the stoichiometric surface using edge-flame dynamics. The boundary separating the flame from the quenched region is modeled using a progress variable defined on the moving stoichiometric surface that is embedded in the three-dimensional space using an extension algorithm. This Cartesian problem is solved using a high-order finite-volume WENO method extended to this nonconservative problem. This algorithm can track the dynamics of flame holes in a turbulent reacting shear layer and model flame liftoff without requiring full chemistry calculations.

  12. Moire and grid methods: a signal-processing approach

    NASA Astrophysics Data System (ADS)

    Surrel, Yves

    1994-11-01

    This presentation is a formulation of moire and grid methods in the vocabulary of signal processing. It basically addresses the case of in-plane geometrical moire, but, as is well known, any geometrical moire setup can be related to in-plane moire. We show that the moire phenomenon is not a measurement method by itself, but only a step in a process of information transmission by spatial frequency modulation. The distortion of a grid bonded onto the surface of a loaded specimen or structure will locally cause a modulation ΔF of the spatial frequency vector F of the grid. The modulation ΔF is linearly related to the strain and rotation tensors. An equivalent point of view is to consider the same phenomenon as a phase modulation, caused by the inverse displacements. In this approach, moire is presented merely as an analog means of frequency subtraction. The interpretation of the classical fringe processing techniques -- temporal and spatial phase shifting, Fourier transform method -- is made, and some consequences of the zoom-in effect induced by the moire phenomenon are given.

  13. OCT-based approach to local relaxations discrimination from translational relaxation motions

    NASA Astrophysics Data System (ADS)

    Matveev, Lev A.; Matveyev, Alexandr L.; Gubarkova, Ekaterina V.; Gelikonov, Grigory V.; Sirotkina, Marina A.; Kiseleva, Elena B.; Gelikonov, Valentin M.; Gladkova, Natalia D.; Vitkin, Alex; Zaitsev, Vladimir Y.

    2016-04-01

    Multimodal optical coherence tomography (OCT) is an emerging tool for tissue state characterization. Optical coherence elastography (OCE) is an approach to mapping mechanical properties of tissue based on OCT. One of the challenging problems in OCE is the elimination of the influence of residual local tissue relaxation, which complicates obtaining information on the elastic properties of the tissue. Alternatively, parameters of the local relaxation itself can be used as an additional informative characteristic for distinguishing tissue in normal and pathological states over the OCT image area. Here we briefly present an OCT-based approach to the evaluation of local relaxation processes in the tissue bulk after sudden unloading of its initial pre-compression. For extracting the local relaxation rate we evaluate the temporal dependence of local strains that are mapped using our recently developed hybrid phase-resolved/displacement-tracking (HPRDT) approach. This approach allows one to subtract the contribution of global displacements of scatterers in OCT scans and separate the temporal evolution of local strains. Using a sample excised from a coronary artery, we demonstrate that the observed relaxation of local strains can be reasonably fitted by an exponential law, which opens the possibility of characterizing the tissue by a single relaxation time. The estimated local relaxation times are assumed to be related to local biologically relevant processes inside the tissue, such as diffusion, leaking/draining of fluids, local folding/unfolding of fibers, etc. In general, studies of the evolution of such features can provide new metrics for biologically relevant changes in tissue, e.g., in the problems of treatment monitoring.

  14. A new heuristic method for approximating the number of local minima in partial RNA energy landscapes.

    PubMed

    Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen

    2016-02-01

    The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces, for the best approximations, an average deviation below 3.0% from the true number of local minima.
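
    The flavor of such sampling-based counting can be sketched with a much simpler stand-in: assume each descent from a random start lands in a uniformly random basin, count the distinct minima observed, and invert the expected-coverage relation d = N(1 - (1 - 1/N)^m). This uniform-basin, capture-recapture sketch is a loose illustration only; it is not the Granier-Kallel estimator or the heuristic studied in the paper.

```python
import random

def estimate_minima(observed_basins, n_samples):
    # Number of distinct local minima seen among the sampled descents.
    d = len(set(observed_basins))
    m = n_samples
    if d == m:
        return float('inf')       # every sample distinct: no finite estimate
    # Solve d = N * (1 - (1 - 1/N)**m) for N by bisection (monotone in N).
    lo, hi = float(d), 1e9
    for _ in range(200):
        mid = (lo + hi) / 2
        if mid * (1.0 - (1.0 - 1.0 / mid) ** m) < d:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Demo on a synthetic landscape with 100 equally likely basins.
random.seed(1)
samples = [random.randrange(100) for _ in range(300)]
n_hat = estimate_minima(samples, 300)
```

    Real landscapes have highly non-uniform basin sizes, which is precisely why more careful estimators and heuristics such as those in the paper are needed.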

  15. A new approach for beam hardening correction based on the local spectrum distributions

    NASA Astrophysics Data System (ADS)

    Rasoulpour, Naser; Kamali-Asl, Alireza; Hemmati, Hamidreza

    2015-09-01

    The energy dependence of material absorption and the polychromatic nature of x-ray beams in Computed Tomography (CT) cause a phenomenon called "beam hardening". The purpose of this study is to provide a novel approach for Beam Hardening (BH) correction. This approach is based on the linear attenuation coefficients of Local Spectrum Distributions (LSDs) at various depths of a phantom. The proposed method includes two steps. Firstly, the hardened spectra at various depths of the phantom (the LSDs) are estimated based on the Expectation Maximization (EM) algorithm for an arbitrary thickness interval of known materials in the phantom. The performance of the LSD estimation technique is evaluated by applying random Gaussian noise to the transmission data. Then, the linear attenuation coefficients with regard to the mean energy of the LSDs are obtained. Secondly, a correction function based on the calculated attenuation coefficients is derived in order to correct the polychromatic raw data. Since a correction function is used to convert the polychromatic data to monochromatic data, the effect of BH in the proposed reconstruction is reduced in comparison with the polychromatic reconstruction. The proposed approach has been assessed on phantoms involving no more than two materials, but the correction function has been extended for use with phantoms constructed of more than two materials. The relative mean energy difference in the LSD estimates based on noise-free transmission data was less than 1.5%, and it remains acceptable when random Gaussian noise is applied to the transmission data. The cupping artifact is effectively reduced by the proposed reconstruction method, and the proposed reconstruction profile is more uniform than the polychromatic reconstruction profile.
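
    The second step, converting polychromatic projections to effectively monochromatic ones via a correction function, can be illustrated with a generic calibration-based linearization. The two-line spectrum and attenuation values below are invented for the example, and the table-inversion approach is a textbook stand-in for the paper's LSD-based derivation.

```python
import math

def poly_projection(t, spectrum):
    # -log of transmitted intensity after a path length t through one
    # material, for a discrete spectrum given as [(weight, mu), ...].
    return -math.log(sum(w * math.exp(-mu * t) for w, mu in spectrum))

def make_bh_correction(spectrum, mu_ref, t_max=10.0, steps=200):
    # Tabulate the polychromatic projection p(t), then invert it by linear
    # interpolation so corrected data become linear in t with slope mu_ref.
    ts = [t_max * i / steps for i in range(steps + 1)]
    ps = [poly_projection(t, spectrum) for t in ts]
    def correct(p):
        for k in range(len(ps) - 1):          # ps increases with t
            if ps[k] <= p <= ps[k + 1]:
                frac = (p - ps[k]) / (ps[k + 1] - ps[k])
                return mu_ref * (ts[k] + frac * (ts[k + 1] - ts[k]))
        raise ValueError("projection outside calibration range")
    return correct
```

    For a two-line spectrum the raw projections visibly deviate from linearity in thickness (the source of the cupping artifact), while the corrected values recover it.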

  16. DNA methods: critical review of innovative approaches.

    PubMed

    Kok, Esther J; Aarts, Henk J M; Van Hoef, A M Angeline; Kuiper, Harry A

    2002-01-01

    The presence of ingredients derived from genetically modified organisms (GMOs) in food products in the market place is subject to a number of European regulations that stipulate which product consisting of or containing GMO-derived ingredients should be labeled as such. In order to maintain these labeling requirements, a variety of different GMO detection methods have been developed to screen for either the presence of DNA or protein derived from (approved) GM varieties. Recent incidents where unapproved GM varieties entered the European market show that more powerful GMO detection and identification methods will be needed to maintain European labeling requirements in an adequate, efficient, and cost-effective way. This report discusses the current state-of-the-art as well as future developments in GMO detection.

  17. The active titration method for measuring local hydroxyl radical concentration

    NASA Technical Reports Server (NTRS)

    Sprengnether, Michele; Prinn, Ronald G.

    1994-01-01

    We are developing a method for measuring ambient OH by monitoring its rate of reaction with a chemical species. Our technique involves the local, instantaneous release of a mixture of saturated cyclic hydrocarbons (titrants) and perfluorocarbons (dispersants). These species must not normally be present in ambient air above the part-per-trillion concentration. We then track the mixture downwind using a real-time portable ECD tracer instrument. We collect air samples in canisters every few minutes for roughly one hour. We then return to the laboratory and analyze our air samples to determine the ratios of the titrant to dispersant concentrations. The trends in these ratios give us the ambient OH concentration from the relation dlnR/dt = -k[OH]. A successful measurement of OH requires that the trends in these ratios be measurable. We must not perturb ambient OH concentrations. The titrant to dispersant ratio must be spatially invariant. Finally, heterogeneous reactions of our titrant and dispersant species must be negligible relative to the titrant reaction with OH. We have conducted laboratory studies of our ability to measure the titrant to dispersant ratios as a function of concentration down to the few part-per-trillion level. We have subsequently used these results in a Gaussian puff model to estimate our expected uncertainty in a field measurement of OH. Our results indicate that under a range of atmospheric conditions we expect to be able to measure OH with a sensitivity of 3×10^5 cm^-3. In our most optimistic scenarios, we obtain a sensitivity of 1×10^5 cm^-3. These sensitivity values reflect our anticipated ability to measure the ratio trends. However, because we are also using a rate constant to obtain our [OH] from this ratio trend, our accuracy cannot be better than that of the rate constant, which we expect to be about 20 percent.
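
    The inversion from ratio trend to [OH] is a straight least-squares slope estimate on ln R. The rate constant and concentration below are made-up illustrative values, not the paper's measurements:

```python
import math

def estimate_oh(times, ratios, k):
    # [OH] from the slope of ln(R) vs t, using dlnR/dt = -k[OH].
    n = len(times)
    logr = [math.log(r) for r in ratios]
    tbar = sum(times) / n
    ybar = sum(logr) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, logr))
             / sum((t - tbar) ** 2 for t in times))
    return -slope / k

# Synthetic check with invented numbers: titrant+OH rate constant
# k ~ 1e-11 cm^3 s^-1, true [OH] = 3e6 cm^-3, samples every minute.
k, oh_true = 1.0e-11, 3.0e6
times = [60.0 * i for i in range(10)]
ratios = [math.exp(-k * oh_true * t) for t in times]
oh_est = estimate_oh(times, ratios, k)
```

    On noise-free synthetic data the fit recovers the assumed concentration exactly; the field uncertainty quoted above comes from ratio-measurement noise and the rate-constant uncertainty.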

  18. Modeling of fatigue crack induced nonlinear ultrasonics using a highly parallelized explicit local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Shen, Yanfeng; Cesnik, Carlos E. S.

    2016-04-01

    This paper presents a parallelized modeling technique for the efficient simulation of nonlinear ultrasonics introduced by the wave interaction with fatigue cracks. The elastodynamic wave equations with contact effects are formulated using an explicit Local Interaction Simulation Approach (LISA). The LISA formulation is extended to capture the contact-impact phenomena during the wave-damage interaction based on the penalty method. A Coulomb friction model is integrated into the computation procedure to capture the stick-slip contact shear motion. The LISA procedure is coded using the Compute Unified Device Architecture (CUDA), which enables highly parallelized supercomputing on powerful graphics cards. Both the explicit contact formulation and the parallel implementation facilitate LISA's superb computational efficiency over the conventional finite element method (FEM). The theoretical formulation based on the penalty method is introduced and a guideline for the proper choice of the contact stiffness is given. The convergence behavior of the solution under various contact stiffness values is examined. A numerical benchmark problem is used to investigate the new LISA formulation and results are compared with a conventional contact finite element solution. Various nonlinear ultrasonic phenomena are successfully captured using this contact LISA formulation, including the generation of nonlinear higher harmonic responses. Nonlinear mode conversion of guided waves at fatigue cracks is also studied.

  19. A Local DCT-II Feature Extraction Approach for Personal Identification Based on Palmprint

    NASA Astrophysics Data System (ADS)

    Choge, H. Kipsang; Oyama, Tadahiro; Karungaru, Stephen; Tsuge, Satoru; Fukumi, Minoru

    Biometric applications based on the palmprint have recently attracted increased attention from various researchers. In this paper, a method is presented that differs from the commonly used global statistical and structural techniques by extracting and using local features instead. The middle palm area is extracted after preprocessing for rotation, position and illumination normalization. The segmented region of interest is then divided into blocks of either 8×8 or 16×16 pixels in size. The type-II Discrete Cosine Transform (DCT) is applied to transform the blocks into DCT space. A subset of coefficients that encode the low to medium frequency components is selected using the JPEG-style zigzag scanning method. Features from each block are subsequently concatenated into a compact feature vector and used in palmprint verification experiments with palmprints from the PolyU Palmprint Database. Results indicate that this approach achieves better results than many conventional transform-based methods, with an excellent recognition accuracy above 99% and an Equal Error Rate (EER) of less than 1.2% in palmprint verification.
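
    The block pipeline (type-II DCT followed by JPEG-style zigzag selection) can be sketched from scratch; a real implementation would use an optimized transform library, and the block size and coefficient count are the tunable parameters mentioned above.

```python
import math

def dct2(block):
    # Orthonormal 2-D type-II DCT of an n x n block (list of lists).
    n = len(block)
    def alpha(k):
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

def zigzag_indices(n):
    # JPEG-style zigzag order: walk anti-diagonals, alternating direction.
    key = lambda xy: (xy[0] + xy[1],
                      xy[0] if (xy[0] + xy[1]) % 2 else xy[1])
    return sorted(((x, y) for x in range(n) for y in range(n)), key=key)

def block_features(block, ncoef):
    # Keep the first ncoef low-to-medium frequency coefficients.
    c = dct2(block)
    return [c[x][y] for x, y in zigzag_indices(len(block))[:ncoef]]
```

    Concatenating `block_features` over all 8×8 or 16×16 blocks of the segmented palm region yields the compact feature vector used for matching.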

  20. Groundwater abstraction management in Sana'a Basin, Yemen: a local community approach

    NASA Astrophysics Data System (ADS)

    Taher, Taha M.

    2016-07-01

    Overexploitation of groundwater resources in Sana'a Basin, Yemen, is causing severe water shortages and associated water quality degradation. Groundwater abstraction is five times higher than natural recharge and the water-level decline is about 4-8 m/year. About 90 % of the groundwater resource is used for agricultural activities. The situation is further aggravated by the absence of a proper water-management approach for the Basin. Water scarcity in the Wadi As-Ssirr catchment, the study area, is the most severe, and this area has the highest well density (average 6.8 wells/km2) compared with other wadi catchments. A local scheme of groundwater abstraction redistribution is proposed, involving the retirement of a substantial number of wells. The scheme encourages participation of the local community via collective actions to reduce the groundwater overexploitation, and ultimately leads to a locally acceptable, manageable groundwater abstraction pattern. The proposed method suggests using 587 wells rather than 1,359, thus reducing the well density to 2.9 wells/km2. Three scenarios are suggested, involving different reductions to the well yields and/or the number of pumping hours for both dry and wet seasons. The third scenario is selected as a first trial for the communities to put into action; the resulting predicted reduction of 2,371,999 m3 is about 6 % of the estimated annual demand. Initially, the groundwater abstraction volume should not be changed significantly until there are protective measures in place, such as improved irrigation efficiency, with the aim of increasing the income of farmers and reducing water use.

  2. Microscopic approach to the generator coordinate method

    SciTech Connect

    Haider, Q.; Gogny, D.; Weiss, M.S.

    1989-08-22

    In this paper, we solve different theoretical problems associated with the calculation of the kernel occurring in the Hill-Wheeler integral equations within the framework of the generator coordinate method. In particular, we extend Wick's theorem to nonorthogonal Bogoliubov states. Expressions for the overlap between Bogoliubov states and for the generalized density matrix are also derived. These expressions are valid even when using an incomplete basis, as in the case of actual calculations. Finally, the Hill-Wheeler formalism is developed for a finite range interaction and the Skyrme force, and evaluated for the latter. 20 refs., 1 fig., 4 tabs.

  3. [Spiritual themes in mental pathology. Methodical approach].

    PubMed

    Marchais, P; Randrup, A

    1994-10-01

    The meaning of themes with spiritual connotations poses complex problems for psychiatry, because these themes induce the observer to project his own convictions and frames of reference onto his investigations. A double detachment (objectivation) concerning both the object of study and the observer is implied. This makes it possible to study these phenomena with a more rigorous method, to investigate the conditions of their formation and to demonstrate objectifiable correlates (experienced space and time, the various levels of psychic experience, factors in the environment...). In consequence, the appropriate medical behaviour can be more precisely delineated. PMID:7818230

  4. Slant-hole collimator, dual mode stereotactic localization method

    DOEpatents

    Weisenberger, Andrew G.

    2002-01-01

    The use of a slant-hole collimator in the gamma camera of dual mode stereotactic localization apparatus allows the acquisition of a stereo pair of scintimammographic images without repositioning of the gamma camera between image acquisitions.

  5. An ESPRIT-Based Approach for 2-D Localization of Incoherently Distributed Sources in Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Hu, Anzhong; Lv, Tiejun; Gao, Hui; Zhang, Zhang; Yang, Shaoshi

    2014-10-01

    In this paper, an approach based on the estimation of signal parameters via rotational invariance techniques (ESPRIT) is proposed for two-dimensional (2-D) localization of incoherently distributed (ID) sources in large-scale/massive multiple-input multiple-output (MIMO) systems. The traditional ESPRIT-based methods are valid only for one-dimensional (1-D) localization of ID sources. By contrast, in the proposed approach the signal subspace is constructed for estimating the nominal azimuth and elevation direction-of-arrivals and the angular spreads. The proposed estimator enjoys closed-form expressions and hence bypasses searching over the entire feasible field. Therefore, it imposes significantly lower computational complexity than the conventional 2-D estimation approaches. Our analysis shows that the estimation performance of the proposed approach improves when large-scale/massive MIMO systems are employed. The approximate Cramér-Rao bound of the proposed estimator for the 2-D localization is also derived. Numerical results demonstrate that although the proposed estimation method is comparable with the traditional 2-D estimators in terms of performance, it benefits from a remarkably lower computational complexity.

  6. Genetics of psychiatric disorders: Methods: Molecular approaches

    PubMed Central

    Avramopoulos, Dimitrios

    2010-01-01

    Summary The launch of the Human Genome Project in 1990 triggered unprecedented technological advances in DNA analysis technologies, followed by tremendous advances in our understanding of the human genome since the completion of the first draft in 2001. During the same time the interest shifted from the genetic causes of the Mendelian disorders, most of which were uncovered through linkage analyses and positional cloning, to the genetic causes of complex (including psychiatric) disorders that proved more of a challenge for linkage methods. The new technologies, together with our new knowledge of the properties of the genome, and significant efforts towards generating large patient and control sample collections, allowed for the success of genome-wide association studies. The result has been that reports currently appear in the literature every week identifying new genes for complex disorders. We are still far from completely explaining the heritable component of complex disorders, but we are certainly closer to being able to use the new information towards prevention and treatment of illness. Next-generation sequencing methods, combined with the results of association and perhaps linkage studies, will help us uncover the missing heritability and achieve a better understanding of the genetic aspects of psychiatric disease, as well as the best strategies for incorporating genetics in the service of patients. PMID:20159337

  7. Feature weight estimation for gene selection: a local hyperlinear learning approach

    PubMed Central

    2014-01-01

    Background Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust in terms of degradation of noisy features, even those with vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests have been carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms. PMID:24625071
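
    For reference, the classical RELIEF weighting that LHR builds on can be stated in a few lines; LHR itself replaces the single nearest hit/miss with local hyperplane approximations, which this sketch does not attempt. The toy data below are invented:

```python
def sqdist(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def relief(X, y):
    # Classical RELIEF: reward features that separate each sample from its
    # nearest miss (other class) and agree with its nearest hit (same class).
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for i in range(n):
        hit = min((j for j in range(n) if j != i and y[j] == y[i]),
                  key=lambda j: sqdist(X[i], X[j]))
        miss = min((j for j in range(n) if y[j] != y[i]),
                   key=lambda j: sqdist(X[i], X[j]))
        for f in range(d):
            w[f] += abs(X[i][f] - X[miss][f]) - abs(X[i][f] - X[hit][f])
    return [v / n for v in w]

# Feature 0 separates the two classes; feature 1 is noise.
X = [[0.0, 0.0], [0.1, 1.0], [0.05, 0.5],
     [1.0, 0.0], [0.9, 1.0], [0.95, 0.5]]
y = [0, 0, 0, 1, 1, 1]
weights = relief(X, y)
```

    The informative feature receives a large positive weight while the noise feature is driven negative, which is the ranking signal that gene-selection variants exploit.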

  8. Improved radiological/nuclear source localization in variable NORM background: An MLEM approach with segmentation data

    NASA Astrophysics Data System (ADS)

    Penny, Robert D.; Crowley, Tanya M.; Gardner, Barbara M.; Mandell, Myron J.; Guo, Yanlin; Haas, Eric B.; Knize, Duane J.; Kuharski, Robert A.; Ranta, Dale; Shyffer, Ryan; Labov, Simon; Nelson, Karl; Seilhan, Brandon; Valentine, John D.

    2015-06-01

    A novel approach and algorithm have been developed to rapidly detect and localize both moving and static radiological/nuclear (R/N) sources from an airborne platform. Current aerial systems with radiological sensors are limited in their ability to compensate for variable naturally occurring radioactive material (NORM) background. The proposed approach suppresses the effects of NORM background by incorporating additional information to segment the survey area into regions over which the background is likely to be uniform. The method produces pixelated Source Activity Maps (SAMs) of both target and background radionuclide activity over the survey area. The task of producing the SAMs requires (1) the development of a forward model which describes the transformation of radionuclide activity to detector measurements and (2) the solution of the associated inverse problem. The inverse problem is ill-posed as there are typically fewer measurements than unknowns. In addition the measurements are subject to Poisson statistical noise. The Maximum-Likelihood Expectation-Maximization (MLEM) algorithm is used to solve the inverse problem as it is well suited for under-determined problems corrupted by Poisson noise. A priori terrain information is incorporated to segment the reconstruction space into regions within which we constrain NORM background activity to be uniform. Descriptions of the algorithm and examples of performance with and without segmentation on simulated data are presented.
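
    The MLEM update at the heart of such reconstructions is compact. The toy system below is a generic under-determined, noise-free example; it stands in for, and is much simpler than, the paper's airborne forward model and segmentation constraints.

```python
def mlem(A, y, n_iter=5000):
    # Multiplicative MLEM update for y ~ Poisson(Ax):
    #   x_j <- x_j * [sum_i A_ij * y_i / (Ax)_i] / [sum_i A_ij]
    m, n = len(A), len(A[0])
    x = [1.0] * n                                   # strictly positive start
    colsum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        x = [x[j] * sum(A[i][j] * y[i] / Ax[i] for i in range(m)) / colsum[j]
             for j in range(n)]
    return x

# Under-determined toy problem: 3 measurements, 4 unknown pixel activities.
A = [[1.0, 1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0, 0.0],
     [0.0, 0.0, 1.0, 1.0]]
y = [3.0, 5.0, 7.0]                                 # consistent, noise-free data
x_hat = mlem(A, y)
```

    The multiplicative form keeps the activity estimates non-negative automatically, one reason MLEM suits under-determined Poisson problems like this one.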

  10. LOCAL ORTHOGONAL CUTTING METHOD FOR COMPUTING MEDIAL CURVES AND ITS BIOMEDICAL APPLICATIONS.

    PubMed

    Jiao, Xiangmin; Einstein, Daniel R; Dyedov, Vladimir

    2010-03-01

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method called local orthogonal cutting (LOC) for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques and result in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods. PMID:20628546

  11. Green technology approach towards herbal extraction method

    NASA Astrophysics Data System (ADS)

    Mutalib, Tengku Nur Atiqah Tengku Ab; Hamzah, Zainab; Hashim, Othman; Mat, Hishamudin Che

    2015-05-01

    The aim of the present study was to compare the maceration method for selected herbs using green and non-green solvents. Water and d-limonene are green solvents, while the non-green solvents were chloroform and ethanol. The selected herbs were Clinacanthus nutans leaf and stem, Orthosiphon stamineus leaf and stem, Sesbania grandiflora leaf, Pluchea indica leaf, Morinda citrifolia leaf and Citrus hystrix leaf. The extracts were compared by determining their total phenolic content. Total phenols were analyzed using a spectrophotometric technique based on the Folin-Ciocalteu reagent. Gallic acid was used as the standard compound, and total phenols were expressed as mg/g gallic acid equivalent (GAE). The most suitable and effective solvent was water, which produced the highest total phenol content compared to the other solvents. Among the selected herbs, Orthosiphon stamineus leaves contained the highest total phenols at 9.087 mg/g.
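
    As a minimal numeric illustration of how such GAE values are obtained, the following fits a hypothetical gallic acid calibration line and converts a sample absorbance to mg GAE per g. Every number below (standards, absorbance, volume, mass) is invented, not the study's data:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for a calibration line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical gallic acid standards: concentration (mg/mL) vs absorbance.
conc = [0.00, 0.05, 0.10, 0.20, 0.40]
absorbance = [0.00, 0.11, 0.22, 0.44, 0.88]    # idealized, perfectly linear
slope, intercept = linear_fit(conc, absorbance)

def gae_mg_per_g(sample_abs, extract_volume_ml, sample_mass_g):
    """Total phenols as mg gallic acid equivalent per g of sample."""
    c = (sample_abs - intercept) / slope       # mg/mL in the extract
    return c * extract_volume_ml / sample_mass_g

value = gae_mg_per_g(sample_abs=0.50, extract_volume_ml=10.0,
                     sample_mass_g=0.25)
```

    The conversion simply inverts the calibration line and scales by extract volume over sample mass, which is why the result carries mg GAE per g of dry material.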

  12. Local force variations caused by isoelectronic impurities: Method of determination from first principles

    NASA Astrophysics Data System (ADS)

    Kunc, K.

    1983-02-01

    It is shown how the variation of lattice dynamical force constants caused by substitutional isoelectronic impurities can be evaluated ab initio. The approach, illustrated on the example of Al in GaAs, is based on local density functional theory and uses the ionic pseudopotentials of Al, Ga and As as the only input; the Hellmann-Feynman theorem is applied in order to extract, from self-consistent electronic charge densities, the forces acting on atoms in periodic patterns in which entire planes of impurities are displaced. The defect-induced variations of interplanar force constants are converted into interatomic ones, which can be compared with those determined by phenomenological models from the measured local mode frequencies. A method is presented which allows one to account for the effect of relaxation without requiring an explicit determination of the latter. Particular problems resulting from dealing with entire planes of defects are discussed and an estimate for relaxation is given.

  13. Establishment of local searching methods for orbitrap-based high throughput metabolomics analysis.

    PubMed

    Tang, Haiping; Wang, Xueying; Xu, Lina; Ran, Xiaorong; Li, Xiangjun; Chen, Ligong; Zhao, Xinbin; Deng, Haiteng; Liu, Xiaohui

    2016-08-15

    Our method aims to establish local endogenous metabolite databases economically, without purchasing chemical standards, providing a solid basis for subsequent orbitrap-based high throughput untargeted metabolomics analysis. A new approach is introduced here to construct metabolite databases on the basis of biological sample analysis and mathematical extrapolation. Building local metabolite databases traditionally requires expensive chemical standards, which is barely affordable for most research labs. As a result, most labs working on metabolomics analysis have to refer to public libraries, which is time consuming and limits high throughput analysis. Using this strategy, a high throughput orbitrap-based metabolomics platform can be established at almost no cost within a couple of months. It facilitates the application of high throughput metabolomics analysis to identify disease-related biomarkers or investigate biological functions using orbitrap. PMID:27260449
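
    A central operation over such a locally built database is matching measured accurate masses against stored theoretical m/z values within a ppm tolerance. The sketch below uses invented metabolite entries and a nominal 5 ppm window, purely for illustration:

```python
# Hypothetical local database: metabolite -> theoretical [M+H]+ m/z.
LOCAL_DB = {
    "glucose": 181.0707,
    "glutamine": 147.0764,
    "lactate": 91.0390,
}

def match_mz(measured_mz, tolerance_ppm=5.0):
    """Return database entries whose theoretical m/z lies within
    `tolerance_ppm` parts per million of the measured value."""
    hits = []
    for name, theo in LOCAL_DB.items():
        ppm = abs(measured_mz - theo) / theo * 1e6
        if ppm <= tolerance_ppm:
            hits.append((name, round(ppm, 2)))
    return hits

hits = match_mz(181.0704)    # about 1.7 ppm from the glucose entry
```

    Tight ppm windows are what make a locally curated list usable at orbitrap mass accuracy: most candidate annotations are excluded before any manual review.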

  14. Finding fossils in new ways: an artificial neural network approach to predicting the location of productive fossil localities.

    PubMed

    Anemone, Robert; Emerson, Charles; Conroy, Glenn

    2011-01-01

    Chance and serendipity have long played a role in the location of productive fossil localities by vertebrate paleontologists and paleoanthropologists. We offer an alternative approach, informed by methods borrowed from the geographic information sciences and using recent advances in computer science, to more efficiently predict where fossil localities might be found. Our model uses an artificial neural network (ANN) that is trained to recognize the spectral characteristics of known productive localities and other land cover classes, such as forest, wetlands, and scrubland, within a study area based on the analysis of remotely sensed (RS) imagery. Using these spectral signatures, the model then classifies other pixels throughout the study area. The results of the neural network classification can be examined and further manipulated within a geographic information systems (GIS) software package. While we have developed and tested this model on fossil mammal localities in deposits of Paleocene and Eocene age in the Great Divide Basin of southwestern Wyoming, a similar analytical approach can be easily applied to fossil-bearing sedimentary deposits of any age in any part of the world. We suggest that new analytical tools and methods of the geographic sciences, including remote sensing and geographic information systems, are poised to greatly enrich paleoanthropological investigations, and that these new methods should be embraced by field workers in the search for, and geospatial analysis of, fossil primates and hominins.

  15. Approaches to Mixed Methods Dissemination and Implementation Research: Methods, Strengths, Caveats, and Opportunities

    PubMed Central

    Green, Carla A.; Duan, Naihua; Gibbons, Robert D.; Hoagwood, Kimberly E.; Palinkas, Lawrence A.; Wisdom, Jennifer P.

    2015-01-01

    Limited translation of research into practice has prompted study of diffusion and implementation, and development of effective methods of encouraging adoption, dissemination and implementation. Mixed methods techniques offer approaches for assessing and addressing processes affecting implementation of evidence-based interventions. We describe common mixed methods approaches used in dissemination and implementation research, discuss strengths and limitations of mixed methods approaches to data collection, and suggest promising methods not yet widely used in implementation research. We review qualitative, quantitative, and hybrid approaches to mixed methods dissemination and implementation studies, and describe methods for integrating multiple methods to increase depth of understanding while improving reliability and validity of findings. PMID:24722814

  17. Correlation between variability of hand-outlined segmentation drawn by experts and local features of underlying image: a neuronal approach

    NASA Astrophysics Data System (ADS)

    Brahmi, Djamel; Cassoux, Nathalie; Serruys, Camille; Giron, Alain; Lehoang, Phuc; Fertil, Bernard

    1999-03-01

    Detection of contours in biomedical images is quite often an a priori step to quantification. Using computer facilities, it is now straightforward for a medical expert to draw boundaries around regions of interest. However, the accuracy of drawing is an issue which is rarely addressed, although it may be a crucial point when, for example, one looks for the local evolution of boundaries in a series of images. The aim of our study is to correlate the local accuracy of experts' outlines with local features of the underlying image to allow meaningful comparisons of boundaries. Local variability of experts' outlines has been characterized by deriving a set of distances between outlines repeatedly drawn on the same image. Local features of the underlying images were extracted from 64 by 64 pixel windows. We have used a two-stage neural network approach in order to deal with the complexity of data within windows and to correlate their features with the local variability of outlines. Our method has been applied to the quantification of the progression of Cytomegalovirus infection as observed in a series of retinal angiograms of patients with AIDS. Reconstruction of new windows from the set of primitives obtained from the GHA network shows that the method preserves the desired features. The accuracy of the border of infection is properly predicted, which allows a confidence envelope to be generated around every hand-outlined boundary.

  18. A local pseudo arc-length method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Wang, Xing; Ma, Tian-Bao; Ren, Hui-Lan; Ning, Jian-Guo

    2014-12-01

    A local pseudo arc-length method (LPALM) for solving hyperbolic conservation laws is presented in this paper. The key idea of this method comes from the original arc-length method, through which the critical points are bypassed by transforming the computational space. The method is based on local changes of physical variables to choose the discontinuous stencil and introduce the pseudo arc-length parameter, and then transforms the governing equations from physical space to arc-length space. In order to solve these equations in the arc-length coordinate, it is necessary to combine the velocity of mesh points in the moving mesh method, and then convert the physical variables in arc-length space back to physical space. Numerical examples have proved the effectiveness and generality of the new approach for linear equations, nonlinear equations and systems of equations with discontinuous initial values. Non-oscillatory solutions can be obtained by adjusting the parameter and the mesh refinement number for problems containing both shock and rarefaction waves.
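
    The payoff of working in arc-length space can be illustrated by the equidistribution idea it relies on: nodes placed at equal arc-length increments automatically crowd around steep fronts. This 1D sketch (invented profile and parameters, not the LPALM algorithm itself) shows the effect:

```python
import math

def equidistribute_arclength(f, a, b, n_cells=40, n_sample=2000):
    """Place mesh nodes so each cell carries an equal share of the
    arc length of the curve (x, f(x)).

    Arc length ds = sqrt(dx^2 + df^2) grows fastest where f is steep,
    so nodes automatically cluster around sharp fronts.
    """
    xs = [a + (b - a) * i / n_sample for i in range(n_sample + 1)]
    s = [0.0]
    for i in range(1, len(xs)):
        s.append(s[-1] + math.hypot(xs[i] - xs[i - 1],
                                    f(xs[i]) - f(xs[i - 1])))
    mesh, k = [], 0
    for j in range(n_cells + 1):
        target = s[-1] * j / n_cells
        while k + 1 < len(s) and s[k + 1] < target:
            k += 1
        mesh.append(xs[k])
    return mesh

def front(x):                    # smoothed shock centered at x = 0
    return math.tanh(50 * x)

mesh = equidistribute_arclength(front, -1.0, 1.0)
near = [x for x in mesh if abs(x) < 0.1]   # nodes close to the front
far = [x for x in mesh if abs(x) > 0.5]    # nodes in the flat regions
```

    Far more nodes land within |x| < 0.1 than in the flat regions, which is exactly the resolution bias one wants near shocks; the full method couples this with mesh velocities and a transformed set of conservation laws.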

  19. Communication: Improved pair approximations in local coupled-cluster methods

    SciTech Connect

    Schwilk, Max; Werner, Hans-Joachim; Usvyat, Denis

    2015-03-28

    In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.

  20. Towards Multi-Method Research Approach in Empirical Software Engineering

    NASA Astrophysics Data System (ADS)

    Mandić, Vladimir; Markkula, Jouni; Oivo, Markku

    This paper presents the results of a literature analysis of Empirical Research Approaches in Software Engineering (SE). The analysis explores reasons why traditional methods, such as statistical hypothesis testing and experiment replication, are weakly utilized in the field of SE. It appears that the basic assumptions and preconditions of the traditional methods contradict the actual situation in SE. Furthermore, we have identified the main issues that should be considered by the researcher when selecting a research approach. In light of the reasons for the weak utilization of traditional methods, we propose stronger use of a Multi-Method approach with Pragmatism as the philosophical standpoint.

  1. Use of the Support Group Method to Tackle Bullying, and Evaluation from Schools and Local Authorities in England

    ERIC Educational Resources Information Center

    Smith, Peter K.; Howard, Sharon; Thompson, Fran

    2007-01-01

    The Support Group Method (SGM), formerly the No Blame Approach, is widely used as an anti-bullying intervention in schools, but has aroused some controversy. There is little evidence from users regarding its effectiveness. We aimed to ascertain the use of and support for the SGM in Local Authorities (LAs) and schools; and obtain ratings of…

  2. Tests of Local Hadron Calibration Approaches in ATLAS Combined Beam Tests

    NASA Astrophysics Data System (ADS)

    Grahn, Karl-Johan; Kiryunin, Andrey; Pospelov, Guennadi; ATLAS Calorimeter Group

    2011-04-01

    Three ATLAS calorimeters in the region of the forward crack at |η| = 3.2 in the nominal ATLAS setup, and a typical section of the two barrel calorimeters at |η| = 0.45 of ATLAS, have been exposed to combined beam tests with single electrons and pions. Detailed shower shape studies of electrons and pions, with comparisons to various Geant4-based simulations utilizing different physics lists, are presented for the endcap beam test. The local hadron calibration approach as used in the full ATLAS setup has been applied to the endcap beam test data. An extension of it using layer correlations has been tested with the barrel test beam data. Both methods utilize modular correction steps based on shower shape variables to correct for invisible energy inside the reconstructed clusters in the calorimeters (compensation) and for lost energy deposits outside of the reconstructed clusters (dead material and out-of-cluster deposits). Results for both methods and comparisons to Monte Carlo simulations are presented.

  3. Locating Damage Using Integrated Global-Local Approach with Wireless Sensing System and Single-Chip Impedance Measurement Device

    PubMed Central

    Hung, Shih-Lin

    2014-01-01

    This study developed an integrated global-local approach for locating damage on building structures. A damage detection approach with a novel embedded frequency response function damage index (NEFDI) was proposed and embedded in the Imote2.NET-based wireless structural health monitoring (SHM) system to locate global damage. Local damage is then identified using an electromechanical impedance- (EMI-) based damage detection method. The electromechanical impedance was measured using a single-chip impedance measurement device which has the advantages of small size, low cost, and portability. The feasibility of the proposed damage detection scheme was studied with reference to a numerical example of a six-storey shear plane frame structure and a small-scale experimental steel frame. Numerical and experimental analysis using the integrated global-local SHM approach reveals that, after NEFDI indicates the approximate location of a damaged area, the EMI-based damage detection approach can then identify the detailed damage location in the structure of the building. PMID:24672359

  4. Factors Influencing Local Communities' Satisfaction Levels with Different Forest Management Approaches of Kakamega Forest, Kenya

    NASA Astrophysics Data System (ADS)

    Guthiga, Paul M.; Mburu, John; Holm-Mueller, Karin

    2008-05-01

    Satisfaction of communities living close to forests with forest management authorities is essential for ensuring continued support for conservation efforts. However, more often than not, community satisfaction is not systematically elicited, analyzed, and incorporated into conservation decisions. This study attempts to elicit levels of community satisfaction with three management approaches of Kakamega forest in Kenya and to analyze the factors influencing them. Three distinct management approaches are applied by three different authorities: an incentive-based approach of the Forest Department (FD), a protectionist approach of the Kenya Wildlife Service (KWS), and a quasi-private incentive-based approach of the Quakers Church Mission (QCM). Data were obtained from a random sample of about 360 households living within a 10-km radius of the forest margin. The protectionist approach was ranked highest overall for its performance in forest management. Results indicate that households are influenced by different factors in their ranking of management approaches. Educated households and those located far from market centers are likely to be dissatisfied with all three management approaches. The distance of households from the forest margin negatively influences satisfaction with the protectionist approach, whereas land size, a proxy for durable assets, has a similar effect on the private incentive-based approach of the QCM. In conclusion, this article identifies a number of policy implications that can enable the different authorities and their management approaches to gain the approval of the local communities.

  5. A novel local-phase method of automatic atlas construction in fetal ultrasound

    NASA Astrophysics Data System (ADS)

    Fathima, Sana; Rueda, Sylvia; Papageorghiou, Aris; Noble, J. Alison

    2011-03-01

    In recent years, fetal diagnostics have relied heavily on clinical assessment and biometric analysis of manually acquired ultrasound images. There is a profound need for automated and standardized evaluation tools to characterize fetal growth and development. This work addresses this need through the novel use of feature-based techniques to develop evaluators of fetal brain gestation. The methodology comprises an automated, database-driven 2D/3D image atlas construction method, which includes several iterative processes. A unique database was designed to store fetal image data acquired as part of the Intergrowth-21st study. This database drives the proposed automated atlas construction methodology, which uses local phase information to perform affine registration with normalized mutual information as the similarity parameter, followed by wavelet-based image fusion and averaging. The unique feature-based application of local phase and wavelet fusion in creating the atlas reduces intensity dependence and the difficulties of registering ultrasound images. The method is evaluated on fetal transthalamic head ultrasound images at 20 weeks' gestation. The results show that the proposed method is more robust to intensity variations than standard intensity-based methods. Results also suggest that the feature-based approach improves the registration accuracy needed to create a clinically valid ultrasound image atlas.

  7. SuBSENSE: a universal change detection method with local adaptive sensitivity.

    PubMed

    St-Charles, Pierre-Luc; Bilodeau, Guillaume-Alexandre; Bergevin, Robert

    2015-01-01

    Foreground/background segmentation via change detection in video sequences is often used as a stepping stone in high-level analytics and applications. Despite the wide variety of methods that have been proposed for this problem, none has been able to fully address the complex nature of dynamic scenes in real surveillance tasks. In this paper, we present a universal pixel-level segmentation method that relies on spatiotemporal binary features as well as color information to detect changes. This allows camouflaged foreground objects to be detected more easily while most illumination variations are ignored. Besides, instead of using manually set, frame-wide constants to dictate model sensitivity and adaptation speed, we use pixel-level feedback loops to dynamically adjust our method's internal parameters without user intervention. These adjustments are based on the continuous monitoring of model fidelity and local segmentation noise levels. This new approach enables us to outperform all 32 previously tested state-of-the-art methods on the 2012 and 2014 versions of the ChangeDetection.net dataset in terms of overall F-Measure. The use of local binary image descriptors for pixel-level modeling also facilitates high-speed parallel implementations: our own version, which used no low-level or architecture-specific instruction, reached real-time processing speed on a midlevel desktop CPU. A complete C++ implementation based on OpenCV is available online. PMID:25494507
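
    The pixel-level feedback idea can be caricatured for a single pixel: monitor how noisy recent segmentation decisions look and let that monitor drive the decision threshold. Everything below (model form, constants, frames) is an invented simplification for illustration, not the SuBSENSE algorithm:

```python
def segment_with_feedback(frames, init_threshold=20.0, lr=0.05):
    """One-pixel change detection with a feedback-controlled threshold.

    A running background estimate B and a dynamic decision threshold R
    are kept for a single pixel: R grows when recent decisions look
    noisy ("blinking") and shrinks when the pixel is stable, loosely
    mimicking SuBSENSE's pixel-level feedback loops.
    """
    B = float(frames[0])
    R = init_threshold
    noise = 0.0                   # running estimate of segmentation noise
    labels = []
    for v in frames[1:]:
        fg = abs(v - B) > R
        labels.append(fg)
        if not fg:                # update the model only on background
            B += lr * (v - B)
        blink = 1.0 if fg and abs(v - B) < 2.0 * R else 0.0
        noise += 0.1 * (blink - noise)
        R += 0.5 if noise > 0.3 else -0.5   # feedback on the threshold
        R = max(5.0, R)
    return labels

# Invented sequence: stable background near 100, then a bright object.
frames = [100, 101, 99, 100, 102, 100, 180, 182, 181]
labels = segment_with_feedback(frames)
```

    The stable prefix tightens the threshold while the bright object is flagged as foreground; the real method applies this per pixel with spatiotemporal binary features and color, not a single scalar intensity.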

  8. Damage localization in a residential-sized wind turbine blade by use of the SDDLV method

    NASA Astrophysics Data System (ADS)

    Johansen, R. J.; Hansen, L. M.; Ulriksen, M. D.; Tcherniak, D.; Damkilde, L.

    2015-07-01

    The stochastic dynamic damage location vector (SDDLV) method has previously proved to facilitate effective damage localization in truss- and plate-like structures. The method is based on interrogating damage-induced changes in transfer function matrices in cases where these matrices cannot be derived explicitly due to unknown input. Instead, vectors from the kernel of the transfer function matrix change are utilized; vectors which are derived on the basis of the system and state-to-output mapping matrices from output-only state-space realizations. The idea is then to convert the kernel vectors associated with the lowest singular values into static pseudo-loads and apply these alternately to an undamaged reference model with known stiffness matrix. By doing so, the stresses in the potentially damaged elements will, theoretically, approach zero. The present paper demonstrates an application of the SDDLV method for localization of structural damages in a cantilevered residential-sized wind turbine blade. The blade was excited by an unmeasured multi-impulse load and the resulting dynamic response was captured through accelerometers mounted along the blade. The static pseudo-loads were applied to a finite element (FE) blade model, which was tuned against the modal parameters of the actual blade. In the experiments, an undamaged blade configuration was analysed along with different damage scenarios, hereby testing the applicability of the SDDLV method.
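
    The full SDDLV method extracts kernel vectors from transfer-function matrix changes identified from output-only state-space models. Its underlying damage-locating-vector logic can be shown statically on an invented three-spring chain: a load in the kernel of the flexibility change produces (near-)zero force in the damaged element when applied to the healthy model. This sketch illustrates that principle only, with all numbers made up:

```python
def mat_inv(M):
    """Gauss-Jordan inverse of a small dense matrix (lists of lists)."""
    n = len(M)
    A = [list(row) + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))   # partial pivot
        A[c], A[p] = A[p], A[c]
        piv = A[c][c]
        A[c] = [v / piv for v in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [v - f * w for v, w in zip(A[r], A[c])]
    return [row[n:] for row in A]

def chain_stiffness(k):
    """Stiffness matrix of a fixed-free chain of springs k[0..n-1]."""
    n = len(k)
    K = [[0.0] * n for _ in range(n)]
    for m in range(n):
        K[m][m] += k[m]
        if m + 1 < n:
            K[m][m] += k[m + 1]
            K[m][m + 1] -= k[m + 1]
            K[m + 1][m] -= k[m + 1]
    return K

k_healthy = [10.0, 10.0, 10.0]
k_damaged = [10.0, 5.0, 10.0]              # spring 2 has lost stiffness

F_h = mat_inv(chain_stiffness(k_healthy))  # healthy flexibility
F_d = mat_inv(chain_stiffness(k_damaged))  # "measured" damaged flexibility
dF = [[d - h for d, h in zip(rd, rh)] for rd, rh in zip(F_d, F_h)]

# For this chain a kernel vector of dF can be written down directly:
# equal and opposite loads on nodes 2 and 3 are unaffected by the
# change in spring 2's flexibility.
L = [0.0, 1.0, -1.0]
kernel_residual = max(abs(sum(row[j] * L[j] for j in range(3)))
                      for row in dF)

# Apply the pseudo-load to the healthy model; elements with (near-)zero
# internal force form the candidate damage set.
u = [sum(F_h[i][j] * L[j] for j in range(3)) for i in range(3)]
stretch = [u[0]] + [u[i] - u[i - 1] for i in range(1, 3)]
forces = [k * s for k, s in zip(k_healthy, stretch)]
```

    Springs 1 and 2 carry zero force under the pseudo-load, so the zero-stress candidate set contains the actual damage (spring 2); additional kernel loads, as in the paper, narrow that set further.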

  9. Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Liu, Xin; Samil Yetik, Imam

    2012-04-01

    Use of multispectral magnetic resonance imaging has received great interest for prostate cancer localization in research and clinical studies. Manual extraction of prostate tumors from multispectral magnetic resonance imaging is inefficient and subjective, while automated segmentation is objective and reproducible. For supervised, automated segmentation approaches, learning is essential to obtain information from the training dataset. However, in this procedure, all patients are assumed to have similar properties for the tumor and normal tissues, and segmentation performance suffers because variations across patients are ignored. To overcome this difficulty, we propose a new iterative normalization method, based on the relative intensity values of tumor and normal tissues, to normalize multispectral magnetic resonance images and improve segmentation performance. The idea of relative intensity mimics the manual segmentation performed by human readers, who compare the contrast between regions without knowing the actual intensity values. We compare the segmentation performance of the proposed method with that of z-score normalization followed by support vector machine, local active contours, and fuzzy Markov random field. Our experimental results demonstrate that our method outperforms the three other state-of-the-art algorithms, with specificity of 0.73, sensitivity of 0.69, and accuracy of 0.79, significantly better than the alternative methods.

  10. A Discourse Based Approach to the Language Documentation of Local Ecological Knowledge

    ERIC Educational Resources Information Center

    Odango, Emerson Lopez

    2016-01-01

    This paper proposes a discourse-based approach to the language documentation of local ecological knowledge (LEK). The knowledge, skills, beliefs, cultural worldviews, and ideologies that shape the way a community interacts with its environment can be examined through the discourse in which LEK emerges. 'Discourse-based' refers to two components:…

  11. Effectiveness of a Local Inter-Loan System for Five Academic Libraries: An Operational Research Approach.

    ERIC Educational Resources Information Center

    MacDougall, A. F.; And Others

    1990-01-01

    Discussion of operational effectiveness in libraries focuses on a modeling approach that was used to compare the effectiveness of a local interlibrary loan system with using a national system, the British Library Document Supply Centre (BLDSC). Cost figures and surveys of five academic libraries are described. (six references) (LRW)

  12. International Students' Motivation and Learning Approach: A Comparison with Local Students

    ERIC Educational Resources Information Center

    Chue, Kah Loong; Nie, Youyan

    2016-01-01

    Psychological factors contribute to motivation and learning for international students as much as teaching strategies. 254 international students and 144 local students enrolled in a private education institute were surveyed regarding their perception of psychological needs support, their motivation and learning approach. The results from this…

  13. New hidden beauty molecules predicted by the local hidden gauge approach and heavy quark spin symmetry

    NASA Astrophysics Data System (ADS)

    Xiao, C. W.; Ozpineci, A.; Oset, E.

    2015-10-01

    Using a coupled channel unitary approach, combining heavy quark spin symmetry and the dynamics of the local hidden gauge, we investigate the meson-meson interaction with hidden beauty. We obtain several new states of isospin I = 0: six bound states, and six more weakly bound possible states which depend on the influence of coupled channel effects.

  14. [DEVELOPMENT OF THE LOCAL CLINICAL PATHWAY "PREVENTION OF CARDIOVASCULAR DISEASE" (SCIENTIFIC STATEMENT AND PRACTICAL APPROACH)].

    PubMed

    Dyachuk, D D; Lysenko, I Y; Moroz, G Z; Gidzinska, I M

    2015-01-01

    An overview of scientific data on current approaches to the prevention of cardiovascular diseases is presented. The results of work on the development of the local clinical pathway "Prevention of cardiovascular disease" at the State Institution of Science "Research and Practical Center of Preventive and Clinical Medicine" of the State Administrative Department are summarized. PMID:27491170

  15. Grid-Search Location Methods for Ground-Truth Collection From Local and Regional Seismic Networks

    SciTech Connect

    William Rodi; Craig A. Schultz; Gardar Johannesson; Stephen C. Myers

    2005-05-13

    This project investigated new techniques for improving seismic event locations derived from regional and local networks. The techniques include a new approach to empirical travel-time calibration that simultaneously fits data from multiple stations and events, using a generalization of the kriging method, and predicts travel-time corrections for arbitrary event-station paths. We combined this calibration approach with grid-search event location to produce a prototype new multiple-event location method that allows the use of spatially well-distributed events and takes into account correlations between the travel-time corrections from proximate event-station paths. Preliminary tests with a high-quality data set from Nevada Test Site explosions indicated that our new calibration/location method offers improvement over the conventional multiple-event location methods now in common use, and is applicable to more general event-station geometries than the conventional methods. The tests were limited, however, and further research is needed to fully evaluate, and improve, the approach. Our project also demonstrated the importance of using a realistic model for observational errors in an event location procedure. We took the initial steps in developing a new error model based on mixture-of-Gaussians probability distributions, which possess the properties necessary to characterize the complex arrival-time error processes that can occur when picking low signal-to-noise arrivals. We investigated various inference methods for fitting these distributions to observed travel-time residuals, including a Markov Chain Monte Carlo technique for computing Bayesian estimates of the distribution parameters.
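
    Stripped of the kriging calibration and the mixture-of-Gaussians error model, grid-search location reduces to scanning trial points and scoring demeaned travel-time residuals. The following uses an invented constant-velocity 2D geometry purely to show the mechanics:

```python
import math

def grid_search_locate(stations, arrivals, velocity=6.0, step=5.0):
    """Scan a 2D grid of trial epicenters; the unknown origin time is
    removed by demeaning the residuals at each trial point, and the
    point with the smallest squared misfit wins."""
    grid = [i * step for i in range(41)]           # 0..200 km
    best, best_fit = None, float("inf")
    for x in grid:
        for y in grid:
            tt = [math.hypot(x - sx, y - sy) / velocity
                  for sx, sy in stations]
            res = [a - t for a, t in zip(arrivals, tt)]
            mean = sum(res) / len(res)             # absorbed origin time
            fit = sum((r - mean) ** 2 for r in res)
            if fit < best_fit:
                best, best_fit = (x, y), fit
    return best

# Invented geometry: 4 stations, constant 6 km/s medium, event on grid.
stations = [(0.0, 0.0), (150.0, 10.0), (40.0, 180.0), (200.0, 200.0)]
true_xy, t0 = (95.0, 70.0), 12.0
arrivals = [t0 + math.hypot(true_xy[0] - sx, true_xy[1] - sy) / 6.0
            for sx, sy in stations]
loc = grid_search_locate(stations, arrivals)
```

    With noise-free data the grid search recovers the true epicenter exactly; the project's contribution lies in replacing the constant-velocity travel times with kriged empirical corrections and the Gaussian misfit with a realistic arrival-time error model.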

  16. Simplified approaches to some nonoverlapping domain decomposition methods

    SciTech Connect

    Xu, Jinchao

    1996-12-31

    An attempt will be made in this talk to present various domain decomposition methods in a way that is intuitively clear and technically coherent and concise. The basic framework used for analysis is the "parallel subspace correction" or "additive Schwarz" method, and other simple technical tools include "local-global" and "global-local" techniques; the former is for constructing a subspace preconditioner based on a preconditioner on the whole space, whereas the latter is for constructing a preconditioner on the whole space based on a subspace preconditioner. The domain decomposition methods discussed in this talk fall into two major categories: one, based on local Dirichlet problems, is related to the "substructuring method", and the other, based on local Neumann problems, is related to the "Neumann-Neumann method" and the "balancing method". All these methods will be presented in a systematic and coherent manner, and the analysis for both the two- and three-dimensional cases is carried out simultaneously. In particular, some intimate relationships between these algorithms are observed and some new variants of the algorithms are obtained.
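    As a concrete illustration of the "parallel subspace correction" framework, the sketch below assembles an additive Schwarz preconditioner for the 1D Poisson matrix from two overlapping local Dirichlet solves and uses it inside preconditioned conjugate gradients. The problem size, overlap, and two-subdomain split are arbitrary choices for illustration, not from the talk.

```python
import numpy as np

# 1D Poisson matrix and two overlapping subdomains
n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
doms = [np.arange(0, 20), np.arange(12, n)]

def apply_M_inv(r):
    """Additive Schwarz: z = sum_i R_i^T A_i^{-1} R_i r (local Dirichlet solves)."""
    z = np.zeros_like(r)
    for idx in doms:
        Ai = A[np.ix_(idx, idx)]          # restriction of A to the subdomain
        z[idx] += np.linalg.solve(Ai, r[idx])
    return z

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Standard preconditioned conjugate gradients."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    it = 0
    while np.linalg.norm(r) > tol and it < maxit:
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
        it += 1
    return x, it

b = np.ones(n)
x, iters = pcg(A, b, apply_M_inv)
```

    The preconditioned iteration converges in far fewer steps than the matrix dimension, which is the practical point of the subspace-correction construction.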

  17. Method for rapid localization of seafloor petroleum contamination using concurrent mass spectrometry and acoustic positioning.

    PubMed

    Camilli, R; Bingham, B; Reddy, C M; Nelson, R K; Duryea, A N

    2009-10-01

    Locating areas of seafloor contamination caused by heavy oil spills is challenging, in large part because of observational limitations in aquatic subsurface environments. Accepted methods for surveying and locating sunken oil are generally slow, labor intensive and spatially imprecise. This paper describes a method to locate seafloor contamination caused by heavy oil fractions using in situ mass spectrometry and concurrent acoustic navigation. We present results of laboratory sensitivity tests and proof-of-concept evaluations conducted at the US Coast Guard OHMSETT national oil spill response test facility. Preliminary results from a robotic seafloor contamination survey conducted in deep water using the mass spectrometer and a geo-referenced acoustic navigation system are also described. Results indicate that this technological approach can accurately localize seafloor oil contamination in real-time at spatial resolutions better than a decimeter.

  18. A method to compute treatment suggestions from local order entry data.

    PubMed

    Klann, Jeffrey; Schadow, Gunther; Downs, Stephen M

    2010-11-13

    Although clinical decision support systems can reduce costs and improve care, the challenges associated with manually maintaining content have led to low utilization. Here we pilot an alternative, more automatic approach to decision support content generation. We use local order entry data and Bayesian networks to automatically find multivariate associations and suggest treatments. We evaluated this on 5044 hospitalizations of pregnant women, choosing 70 frequent order and treatment variables comprising 20 treatable conditions. The method produced treatment suggestion lists for 15 of these conditions. The lists captured accurate and non-trivial clinical knowledge, and all contained the key treatment for the condition, often as the first suggestion (71% overall, 90% non-labor-related). Additionally, when run on a test set of patient data, it very accurately predicted treatments (average AUC .873) and predicted pregnancy-specific treatments with even higher accuracy (AUC above .9). This method is a starting point for harnessing the wisdom-of-the-crowd for decision support.

  19. A Novel Microaneurysms Detection Method Based on Local Applying of Markov Random Field.

    PubMed

    Ganjee, Razieh; Azmi, Reza; Moghadam, Mohsen Ebrahimi

    2016-03-01

    Diabetic Retinopathy (DR) is one of the most common complications of long-term diabetes. It is a progressive disease that damages the retina and finally results in blindness. Since microaneurysms (MAs) appear as the first sign of DR in the retina, early detection of this lesion is an essential step in automatic detection of DR. In this paper, a new MAs detection method is presented. The proposed approach consists of two main steps. In the first step, MA candidates are detected based on local application of a Markov random field model (MRF). In the second step, these candidate regions are categorized to identify the correct MAs using 23 features based on shape, intensity and Gaussian distribution of MAs intensity. The proposed method is evaluated on DIARETDB1, a standard and publicly available database in this field. Evaluation of the proposed method on this database resulted in an average sensitivity of 0.82 at a ground-truth confidence level of 75. The results show that our method is able to detect low-contrast MAs against the background while its performance remains comparable to other state-of-the-art approaches.

  20. Local second-order boundary methods for lattice Boltzmann models

    SciTech Connect

    Ginzbourg, I.; d'Humieres, D.

    1996-09-01

    A new way to implement solid obstacles in lattice Boltzmann models is presented. The unknown populations at the boundary nodes are derived from the locally known populations with the help of a second-order Chapman-Enskog expansion and Dirichlet boundary conditions with a given momentum. Steady flows near a flat wall, arbitrarily inclined with respect to the lattice links, are then obtained with a third-order error. In particular, Couette and Poiseuille flows are exactly recovered without the Knudsen layers produced for inclined walls by the bounce back condition.
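    For reference, the lattice-aligned Couette case mentioned above can be reproduced with the classical half-way bounce-back rule (the baseline the paper improves on for inclined walls), which is already exact for a flat wall lying on a lattice link. The sketch below is a single-column D2Q9 simulation with a stationary bottom wall and a moving top wall; lattice size, relaxation time, and wall speed are arbitrary, and this is not the paper's Chapman-Enskog boundary closure.

```python
import numpy as np

# D2Q9 lattice: velocities, weights, and opposite-direction indices
cx = np.array([0, 1, 0, -1, 0, 1, -1, -1, 1])
cy = np.array([0, 0, 1, 0, -1, 1, 1, -1, -1])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

ny, tau, u_w = 10, 0.8, 0.1          # nodes between walls, BGK time, lid speed
f = np.tile(w[:, None], (1, ny))     # fluid at rest, rho = 1

def feq(rho, ux, uy):
    """Second-order BGK equilibrium distribution."""
    cu = cx[:, None] * ux + cy[:, None] * uy
    return w[:, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*(ux**2 + uy**2))

for _ in range(5000):
    rho = f.sum(axis=0)
    ux = (f * cx[:, None]).sum(axis=0) / rho
    uy = (f * cy[:, None]).sum(axis=0) / rho
    f += (feq(rho, ux, uy) - f) / tau          # BGK collision
    fs = f.copy()                              # post-collision populations
    for i in range(9):
        if cy[i]:                              # stream along y (flow uniform in x)
            f[i] = np.roll(fs[i], cy[i])
    for i in np.where(cy == -1)[0]:            # bottom wall at y = -1/2
        f[opp[i], 0] = fs[i, 0]                # plain half-way bounce-back
    for i in np.where(cy == 1)[0]:             # top wall at y = ny - 1/2
        f[opp[i], -1] = fs[i, -1] - 6*w[i]*cx[i]*u_w  # moving-wall term

rho = f.sum(axis=0)
ux = (f * cx[:, None]).sum(axis=0) / rho       # steady velocity profile
```

    At steady state the profile is linear through the half-way wall positions, illustrating the "exact Couette" property for walls aligned with the lattice.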

  1. A localized re-initialization equation for the conservative level set method

    NASA Astrophysics Data System (ADS)

    McCaslin, Jeremy O.; Desjardins, Olivier

    2014-04-01

    The conservative level set methodology for interface transport is modified to allow for localized level set re-initialization. This approach is suitable to applications in which there is a significant amount of spatial variability in level set transport. The steady-state solution of the modified re-initialization equation matches that of the original conservative level set provided an additional Eikonal equation is solved, which can be done efficiently through a fast marching method (FMM). Implemented within the context of the accurate conservative level set method (ACLS) (Desjardins et al., 2008, [6]), the FMM solution of this Eikonal equation comes at no additional cost. A metric for the appropriate amount of local re-initialization is proposed based on estimates of local flow deformation and numerical diffusion. The method is compared to standard global re-initialization for two test cases, yielding the expected results that minor differences are observed for Zalesak's disk, and improvements in both mass conservation and interface topology are seen for a drop deforming in a vortex. Finally, the method is applied to simulation of a viscously damped standing wave and a three-dimensional drop impacting on a shallow pool. Negligible differences are observed for the standing wave, as expected. For the last case, results suggest that spatially varying re-initialization provides a reduction in spurious interfacial corrugations, improvements in the prediction of radial growth of the splashing lamella, and a reduction in conservation errors, as well as a reduction in overall computational cost that comes from improved conditioning of the pressure Poisson equation due to the removal of spurious corrugations.
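    A minimal 1D version of the (global) conservative re-initialization equation illustrates the balance the method relies on: a compression flux ψ(1-ψ)n sharpening the profile against a diffusion flux ε∂ψ/∂x, whose steady state is a hyperbolic tangent profile of thickness set by ε. The grid, ε, and pseudo-time step below are illustrative; the paper's localized variant and FMM-solved Eikonal equation are not reproduced here.

```python
import numpy as np

# 1D conservative level set re-initialization (global form), explicit in
# pseudo-time: d(psi)/dtau = -d/dx[ psi(1-psi) n ] + eps * d2(psi)/dx2,
# with the interface normal n frozen (n = +1 for this single interface).
def reinitialize(psi, dx, eps, dtau, steps):
    n = 1.0
    for _ in range(steps):
        psi_f = 0.5 * (psi[:-1] + psi[1:])                    # face values
        flux = psi_f * (1.0 - psi_f) * n - eps * np.diff(psi) / dx
        psi[1:-1] -= dtau / dx * np.diff(flux)                # conservative update
    return psi

x = np.linspace(-1, 1, 201)
dx = x[1] - x[0]
eps = 2 * dx
# Start from an over-diffused profile; the steady state of the equation is
# the sharper profile psi = 0.5 * (1 + tanh(x / (2 * eps)))
psi0 = 0.5 * (1 + np.tanh(x / (10 * eps)))
psi = reinitialize(psi0.copy(), dx, eps, dtau=0.1 * dx, steps=4000)
target = 0.5 * (1 + np.tanh(x / (2 * eps)))
```

    Because the update is in flux form, the integral of ψ (the conserved "mass") is preserved while the profile re-sharpens, which is the property that distinguishes conservative re-initialization from the standard signed-distance variant.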

  2. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration

    PubMed Central

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20–200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636
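    The local strategy can be sketched as follows: for each unknown spectrum, select the k most similar calibration samples and fit a regression model on that subset only. The sketch below substitutes a plain correlation measure for the synthetic degree of grey relation coefficient and ordinary least squares for PLS, purely for brevity; the data are synthetic.

```python
import numpy as np

# Local calibration sketch: fit only on the k calibration samples most
# similar to the unknown spectrum (correlation stands in for the grey
# relational grade; least squares stands in for PLS).
rng = np.random.default_rng(0)
n_cal, n_wave = 80, 10
B = rng.normal(size=n_wave)               # "true" regression vector
X_cal = rng.normal(size=(n_cal, n_wave))  # calibration spectra
y_cal = X_cal @ B                         # noiseless reference values

def local_predict(x_new, X_cal, y_cal, k=25):
    # similarity between the unknown and each calibration spectrum
    sims = np.array([np.corrcoef(x_new, xi)[0, 1] for xi in X_cal])
    idx = np.argsort(sims)[-k:]           # indices of the k most similar
    coef, *_ = np.linalg.lstsq(X_cal[idx], y_cal[idx], rcond=None)
    return x_new @ coef                   # local model applied to the unknown

x_new = rng.normal(size=n_wave)
pred = local_predict(x_new, X_cal, y_cal)
```

    On noiseless synthetic data the local fit recovers the true regression vector exactly; the practical benefit appears when the calibration set is heterogeneous and only a similar subset is informative.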

  3. Real Space DFT by Locally Optimal Block Preconditioned Conjugate Gradient Method

    NASA Astrophysics Data System (ADS)

    Michaud, Vincent; Guo, Hong

    2012-02-01

    Real space approaches solve the Kohn-Sham (KS) DFT problem as a system of partial differential equations (PDE) on real space numerical grids. In such techniques, the Hamiltonian matrix is typically much larger but sparser than the matrix arising in state-of-the-art DFT codes which are often based on directly minimizing the total energy functional. Evidence of good performance of real space methods - by Chebyshev filtered subspace iteration (CFSI) - was reported by Zhou, Saad, Tiago and Chelikowsky [1]. We found that the performance of the locally optimal block preconditioned conjugate gradient method (LOBPCG) introduced by Knyazev [2], when used in conjunction with CFSI, generally exceeds that of CFSI for solving the KS equations. We will present our implementation of the LOBPCG based real space electronic structure calculator. [4pt] [1] Y. Zhou, Y. Saad, M. L. Tiago, and J. R. Chelikowsky, ``Self-consistent-field calculations using Chebyshev-filtered subspace iteration,'' J. Comput. Phys., vol. 219, pp. 172-184, November 2006. [0pt] [2] A. V. Knyazev, ``Toward the optimal preconditioned eigensolver: Locally optimal block preconditioned conjugate gradient method,'' SIAM J. Sci. Comput., vol. 23, pp. 517-541, 2001.
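    A small-scale illustration of LOBPCG, here via SciPy's implementation of Knyazev's algorithm, applied to a sparse 1D Laplacian standing in for the large sparse real-space Kohn-Sham Hamiltonian (a real solver would add a preconditioner and the self-consistency loop):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lobpcg

# Sparse 1D Laplacian as a stand-in Hamiltonian
n = 50
H = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 3))                  # random initial block of 3 vectors
# LOBPCG iterates the block toward the lowest eigenpairs
vals, vecs = lobpcg(H, X, largest=False, tol=1e-7, maxiter=1000)

exact = np.linalg.eigvalsh(H.toarray())[:3]  # dense reference for checking
```

    The block structure is what makes the method attractive for KS problems, where many low-lying eigenpairs are needed simultaneously and only matrix-vector products with the sparse Hamiltonian are affordable.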

  4. Local Strategy Combined with a Wavelength Selection Method for Multivariate Calibration.

    PubMed

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    One of the essential factors influencing the prediction accuracy of multivariate calibration models is the quality of the calibration data. A local regression strategy, together with a wavelength selection approach, is proposed to build the multivariate calibration models based on partial least squares regression. The local algorithm is applied to create a calibration set of spectra similar to the spectrum of an unknown sample; the synthetic degree of grey relation coefficient is used to evaluate the similarity. A wavelength selection method based on simple-to-use interactive self-modeling mixture analysis minimizes the influence of noisy variables, and the most informative variables of the most similar samples are selected to build the multivariate calibration model based on partial least squares regression. To validate the performance of the proposed method, ultraviolet-visible absorbance spectra of mixed solutions of food coloring analytes in a concentration range of 20-200 µg/mL are measured. Experimental results show that the proposed method can not only enhance the prediction accuracy of the calibration model, but also greatly reduce its complexity. PMID:27271636

  5. Practical approaches for assessing local land use change and conservation priorities in the tropics

    NASA Astrophysics Data System (ADS)

    Rivas, Cassandra J.

    Tropical areas typically support high biological diversity; however, many are experiencing rapid land-use change. The resulting loss, fragmentation, and degradation of habitats place biodiversity at risk. For these reasons, the tropics are frequently identified as global conservation hotspots. Safeguarding tropical biodiversity necessitates successful and efficient conservation planning and implementation at local scales, where land use decisions are made and enforced. Yet, despite considerable agreement on the need for improved practices, planning may be difficult due to limited resources, such as funding, data, and expertise, especially for small conservation organizations in tropical developing countries. My thesis aims to assist small, non-governmental organizations (NGOs), operating in tropical developing countries, in overcoming resource limitations by providing recommendations for improved conservation planning. Following a brief introduction in Chapter 1, I present a literature review of systematic conservation planning (SCP) projects in the developing tropics. Although SCP is considered an efficient, effective approach, it requires substantial data and expertise to conduct the analysis and may present challenges for implementation. I reviewed and synthesized the methods and results of 14 case studies to identify practical ways to implement and overcome limitations for employing SCP. I found that SCP studies in the peer-reviewed literature were primarily implemented by researchers in large organizations or institutions, as opposed to on-the-ground conservation planners. A variety of data types were used in the SCP analyses, many of which data are freely available. Few case studies involved stakeholders and intended to implement the assessment; instead, the case studies were carried out in the context of research and development, limiting local involvement and implementation. Nonetheless, the studies provided valuable strategies for employing each step of

  6. Probabilistic prediction of real-world time series: A local regression approach

    NASA Astrophysics Data System (ADS)

    Laio, Francesco; Ridolfi, Luca; Tamea, Stefania

    2007-02-01

    We propose a probabilistic prediction method, based on local polynomial regressions, which complements the point forecasts with robust estimates of the corresponding forecast uncertainty. The reliability, practicability and generality of the method are demonstrated by applying it to astronomical, physiological, economic, and geophysical time series.
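    The idea can be sketched as a nearest-neighbour local linear predictor that returns both a point forecast and the spread of the local residuals as an uncertainty estimate. The embedding dimension, neighbourhood size, and residual-standard-deviation uncertainty measure below are illustrative simplifications of the paper's local polynomial approach.

```python
import numpy as np

# Local regression forecast: embed the series, find the k past states
# nearest to the current one, fit a local linear model, and report the
# point forecast together with the spread of the local residuals.
def local_forecast(series, m=3, k=20):
    X = np.array([series[i:i + m] for i in range(len(series) - m)])
    y = series[m:]                           # value following each state
    query = series[-m:]                      # current state
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]                  # k nearest neighbours
    A = np.hstack([X[idx], np.ones((k, 1))]) # local linear model with bias
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    resid = A @ coef - y[idx]
    point = np.append(query, 1.0) @ coef
    return point, resid.std()                # forecast and its uncertainty

# Noisy sine series: the one-step forecast should be close to the truth
t = np.arange(500)
series = np.sin(0.2 * t) + 0.01 * np.random.default_rng(1).normal(size=500)
pred, sigma = local_forecast(series)
```

    The residual spread adapts to the local predictability of the dynamics, which is the probabilistic complement to the point forecast that the paper argues for.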

  7. How Nectar-Feeding Bats Localize their Food: Echolocation Behavior of Leptonycteris yerbabuenae Approaching Cactus Flowers

    PubMed Central

    Koblitz, Jens C.; Fleming, Theodore H.; Medellín, Rodrigo A.; Kalko, Elisabeth K. V.; Schnitzler, Hans-Ulrich; Tschapka, Marco

    2016-01-01

    Nectar-feeding bats show morphological, physiological, and behavioral adaptations for feeding on nectar. How they find and localize flowers is still poorly understood. While scent cues alone allow no precise localization of a floral target, the spatial properties of flower echoes are very precise and could play a major role, particularly at close range. The aim of this study is to understand the role of echolocation for classification and localization of flowers. We compared the approach behavior of Leptonycteris yerbabuenae to flowers of a columnar cactus, Pachycereus pringlei, with its approach to an acrylic hollow hemisphere that is acoustically conspicuous to bats, but has different acoustic properties and, contrary to the cactus flower, presents no scent. For recording the flight and echolocation behaviour we used two infrared video cameras under stroboscopic illumination synchronized with ultrasound recordings. During search flights all individuals identified both targets as a possible food source and initiated an approach flight; however, they visited only the cactus flower. In experiments with the acrylic hemisphere, bats aborted the approach at ca. 40–50 cm. In the last instant before the flower visit the bats emitted a long terminal group of 10–20 calls. This is the first report of this behaviour for a nectar-feeding bat. Our findings suggest that L. yerbabuenae use echolocation for classification and localization of cactus flowers and that the echo-acoustic characteristics of the flower guide the bats directly to the flower opening. PMID:27684373

  8. An integrated lean-methods approach to hospital facilities redesign.

    PubMed

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  9. An integrated lean-methods approach to hospital facilities redesign.

    PubMed

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach. PMID:22671435

  10. Method of preliminary localization of the iris in biometric access control systems

    NASA Astrophysics Data System (ADS)

    Minacova, N.; Petrov, I.

    2015-10-01

    This paper presents a method for preliminary localization of the iris based on stable brightness features of the iris in images of the eye. In tests on eye images from publicly available databases, the method showed good accuracy and speed compared to existing preliminary localization methods.

  11. Problems d'elaboration d'une methode locale: la methode "Paris-Khartoum" (Problems in Implementing a Local Method: the Paris-Khartoum Method)

    ERIC Educational Resources Information Center

    Penhoat, Loick; Sakow, Kostia

    1978-01-01

    A description of the development and implementation of a method introduced in the Sudan that attempts to relate to Sudanese culture and to motivate students. The relationship between language teaching methods and the total educational system is discussed. (AMH)

  12. The Local Plasma Frequency Approach in Description of the Impact-Parameter Dependence of Energy Loss

    NASA Astrophysics Data System (ADS)

    Khodyrev, V. A.

    The LPF approach of Lindhard and Scharff is generalized to describe on the same basis the impact-parameter dependence of energy loss in ion-atom collisions. To make this feasible, the energy loss is represented as an integral of the local energy deposition over the atomic shell volume. The local energy loss is determined by the induced electron current and the intensity of the projectile field at a given point. The LPF approach consists of an approximate description of the induced current using the corresponding expression for a uniform electron gas. With an appropriate description of the electron gas response, the atomic shell polarization and the state of electron motion are considered. The developed approach makes it possible to test the accuracy of the customary approximation where the energy loss is expressed through the electron density on the ion trajectory, the local density approximation. A comparison with the available experimental results demonstrates the adequacy of the developed approach if, additionally, the higher-order corrections in the projectile charge are taken into account.

  13. Artificial neural networks optimization method for radioactive source localization

    SciTech Connect

    Wacholder, E.; Elias, E.; Merlis, Y.

    1995-05-01

    An artificial neural network optimization model is developed for solving the ill-posed inverse transport problem of localizing radioactive sources in a medium with known properties and dimensions. The model is based on the recurrent (or feedback) Hopfield network with fixed weights. The source distribution is determined based on the response of a limited number of external detectors of known spatial deployment in conjunction with a radiation transport model. The algorithm is tested and evaluated for a large number of simulated two-dimensional cases. Computations are carried out at different noise levels to account for statistical errors encountered in engineering applications. The sensitivity to noise is found to depend on the number of detectors and on their spatial deployment. A pretest empirical procedure is, therefore, suggested for determining an effective arrangement of detectors for a given problem.

  14. Method for optical coherence tomography image classification using local features and earth mover's distance

    NASA Astrophysics Data System (ADS)

    Sun, Yankui; Lei, Ming

    2009-09-01

    Optical coherence tomography (OCT) is a recent imaging method that allows high-resolution, cross-sectional imaging through tissues and materials. Over the past 18 years, OCT has been successfully used in disease diagnosis, biomedical research, material evaluation, and many other domains. Because OCT is a relatively recent imaging method, surgeons so far have limited experience using it. In addition, the number of images obtained from the imaging device is too large to review manually, so an automated analysis method is needed. We propose a novel method for automated classification of OCT images based on local features and earth mover's distance (EMD). We evaluated our algorithm using an OCT image set which contains two kinds of skin images, normal skin and nevus flammeus. Experimental results demonstrate the effectiveness of our method, which achieved classification accuracy of 0.97 for an EMD+KNN scheme and 0.99 for an EMD+SVM (support vector machine) scheme, much higher than the previous method. Our approach is especially suitable for nonhomogeneous images and could be applied to a wide range of OCT images.
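    A toy version of the EMD+KNN scheme: summarize each image by a 1D signature of local feature values (in one dimension the earth mover's distance reduces to the Wasserstein distance, available in SciPy) and classify by the nearest labelled signature. The Gaussian signatures and class names below are synthetic stand-ins for the OCT feature descriptors.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Labelled training "signatures": 1D samples of local feature values
rng = np.random.default_rng(0)
train = [(rng.normal(0, 1, 100), "normal"), (rng.normal(3, 1, 100), "nevus")]

def classify(signature, train):
    """1-nearest-neighbour classification under the earth mover's distance."""
    dists = [wasserstein_distance(signature, s) for s, _ in train]
    return train[int(np.argmin(dists))][1]

# An unknown signature drawn near the second class should match it
label = classify(rng.normal(3.1, 1, 100), train)
```

    The attraction of EMD over bin-by-bin histogram distances is that it accounts for how far mass must move, so signatures with slightly shifted feature distributions still match their class.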

  15. Grid-Search Location Methods for Ground-Truth Collection from Local and Regional Seismic Networks

    SciTech Connect

    Schultz, C A; Rodi, W; Myers, S C

    2003-07-24

    The objective of this project is to develop improved seismic event location techniques that can be used to generate more and better quality reference events using data from local and regional seismic networks. Their approach is to extend existing methods of multiple-event location with more general models of the errors affecting seismic arrival time data, including picking errors and errors in model-based travel-times (path corrections). Toward this end, they are integrating a grid-search based algorithm for multiple-event location (GMEL) with a new parameterization of travel-time corrections and new kriging method for estimating the correction parameters from observed travel-time residuals. Like several other multiple-event location algorithms, GMEL currently assumes event-independent path corrections and is thus restricted to small event clusters. The new parameterization assumes that travel-time corrections are a function of both the event and station location, and builds in source-receiver reciprocity and correlation between the corrections from proximate paths as constraints. The new kriging method simultaneously interpolates travel-time residuals from multiple stations and events to estimate the correction parameters as functions of position. They are currently developing the algorithmic extensions to GMEL needed to combine the new parameterization and kriging method with the simultaneous location of events. The result will be a multiple-event location method which is applicable to non-clustered, spatially well-distributed events. They are applying the existing components of the new multiple-event location method to a data set of regional and local arrival times from Nevada Test Site (NTS) explosions with known origin parameters. Preliminary results show the feasibility and potential benefits of combining the location and kriging techniques. They also show some preliminary work on generalizing of the error model used in GMEL with the use of mixture

  16. Localized surface plasmon resonance mercury detection system and methods

    DOEpatents

    James, Jay; Lucas, Donald; Crosby, Jeffrey Scott; Koshland, Catherine P.

    2016-03-22

    A mercury detection system that includes a flow cell having a mercury sensor, a light source and a light detector is provided. The mercury sensor includes a transparent substrate and a submonolayer of mercury absorbing nanoparticles, e.g., gold nanoparticles, on a surface of the substrate. Methods of determining whether mercury is present in a sample using the mercury sensors are also provided. The subject mercury detection systems and methods find use in a variety of different applications, including mercury detecting applications.

  17. A method for spatially resolved local intracellular mechanochemical sensing and organelle manipulation.

    PubMed

    Shekhar, S; Cambi, A; Figdor, C G; Subramaniam, V; Kanger, J S

    2012-08-01

    Because both the chemical and mechanical properties of living cells play crucial functional roles, there is a strong need for biophysical methods to address these properties simultaneously. Here we present a novel (to our knowledge) approach to measure local intracellular micromechanical and chemical properties using a hybrid magnetic chemical biosensor. We coupled a fluorescent dye, which serves as a chemical sensor, to a magnetic particle that is used for measurement of the viscoelastic environment by studying the response of the particle to magnetic force pulses. As a demonstration of the potential of this approach, we applied the method to study the process of phagocytosis, wherein cytoskeletal reorganization occurs in parallel with acidification of the phagosome. During this process, we measured the shear modulus and viscosity of the phagosomal environment concurrently with the phagosomal pH. We found that it is possible to manipulate phagocytosis by stalling the centripetal movement of the phagosome using magnetic force. Our results suggest that preventing centripetal phagosomal transport delays the onset of acidification. To our knowledge, this is the first report of manipulation of intracellular phagosomal transport without interfering with the underlying motor proteins or cytoskeletal network through biochemical methods. PMID:22947855

  18. Local Orthogonal Cutting Method for Computing Medial Curves and Its Biomedical Applications

    SciTech Connect

    Jiao, Xiangmin; Einstein, Daniel R.; Dyedov, Volodymyr

    2010-03-24

    Medial curves have a wide range of applications in geometric modeling and analysis (such as shape matching) and biomedical engineering (such as morphometry and computer assisted surgery). The computation of medial curves poses significant challenges, both in terms of theoretical analysis and practical efficiency and reliability. In this paper, we propose a definition and analysis of medial curves and also describe an efficient and robust method for computing medial curves. Our approach is based on three key concepts: a local orthogonal decomposition of objects into substructures, a differential geometry concept called the interior center of curvature (ICC), and integrated stability and consistency tests. These concepts lend themselves to robust numerical techniques including eigenvalue analysis, weighted least squares approximations, and numerical minimization, resulting in an algorithm that is efficient and noise resistant. We illustrate the effectiveness and robustness of our approach with some highly complex, large-scale, noisy biomedical geometries derived from medical images, including lung airways and blood vessels. We also present comparisons of our method with some existing methods.

  19. An efficient implementation of the localized operator partitioning method for electronic energy transfer

    SciTech Connect

    Nagesh, Jayashree; Brumer, Paul; Izmaylov, Artur F.

    2015-02-28

    The localized operator partitioning method [Y. Khan and P. Brumer, J. Chem. Phys. 137, 194112 (2012)] rigorously defines the electronic energy on any subsystem within a molecule and gives a precise meaning to the subsystem ground and excited electronic energies, which is crucial for investigating electronic energy transfer from first principles. However, an efficient implementation of this approach has been hindered by complicated one- and two-electron integrals arising in its formulation. Using a resolution of the identity in the definition of partitioning, we reformulate the method in a computationally efficient manner that involves standard one- and two-electron integrals. We apply the developed algorithm to the 9-((1-naphthyl)-methyl)-anthracene (A1N) molecule by partitioning A1N into anthracenyl and CH2-naphthyl groups as subsystems and examine their electronic energies and populations for several excited states using the configuration interaction singles method. The implemented approach shows a wide variety of different behaviors amongst the excited electronic states.

  20. Total System Performance Assessment - License Application Methods and Approach

    SciTech Connect

    J. McNeish

    2003-12-08

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issues (KTIs) identified in agreements with the U.S. Nuclear Regulatory Commission, the ''Yucca Mountain Review Plan'' (YMRP), ''Final Report'' (NRC 2003 [163274]), and the NRC final rule 10 CFR Part 63 (NRC 2002 [156605]). This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are used in this document.

  1. Total System Performance Assessment-License Application Methods and Approach

    SciTech Connect

    J. McNeish

    2002-09-13

    ''Total System Performance Assessment-License Application (TSPA-LA) Methods and Approach'' provides the top-level method and approach for conducting the TSPA-LA model development and analyses. The method and approach is responsive to the criteria set forth in Total System Performance Assessment Integration (TSPAI) Key Technical Issue (KTI) agreements, the ''Yucca Mountain Review Plan'' (CNWRA 2002 [158449]), and 10 CFR Part 63. This introductory section provides an overview of the TSPA-LA, the projected TSPA-LA documentation structure, and the goals of the document. It also provides a brief discussion of the regulatory framework, the approach to risk management of the development and analysis of the model, and the overall organization of the document. The section closes with some important conventions that are utilized in this document.

  2. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM.
Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is
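
    The baseline step of such a pipeline, classic Otsu thresholding, can be sketched as follows. This is the traditional method the paper improves on, not the authors' improved TSM, and the toy "sonar image" is invented for illustration.

```python
import numpy as np

def otsu_threshold(image):
    """Classic Otsu: pick the grey level that maximizes the
    between-class variance of the image histogram."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()                  # grey-level probabilities
    omega = np.cumsum(p)                   # class-0 probability up to t
    mu = np.cumsum(p * np.arange(256))     # cumulative first moment
    mu_t = mu[-1]                          # global mean grey level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)       # 0/0 at the histogram ends
    return int(np.argmax(sigma_b))

# Bimodal toy "sonar image": dark background with one bright blob
img = np.full((64, 64), 40, dtype=np.uint8)
img[20:40, 20:40] = 200
t = otsu_threshold(img)
mask = img > t                             # foreground segmentation
```

    The centroid of each connected foreground region in `mask` would then serve as a point landmark for the AEKF-SLAM back end.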

  3. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation

    PubMed Central

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-01-01

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM.
Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is

  4. An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation.

    PubMed

    Yuan, Xin; Martínez, José-Fernán; Eckert, Martina; López-Santidrián, Lourdes

    2016-07-22

    The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, which are detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM.
Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is

  5. Percutaneous Irreversible Electroporation of Locally Advanced Pancreatic Carcinoma Using the Dorsal Approach: A Case Report

    SciTech Connect

    Scheffer, Hester J.; Melenhorst, Marleen C. A. M.; Vogel, Jantien A.; Tilborg, Aukje A. J. M. van; Nielsen, Karin; Kazemier, Geert; Meijerink, Martijn R.

    2015-06-15

    Irreversible electroporation (IRE) is a novel image-guided ablation technique that is increasingly used to treat locally advanced pancreatic carcinoma (LAPC). We describe a 67-year-old male patient with a 5 cm stage III pancreatic tumor who was referred for IRE. Because the ventral approach for electrode placement was considered dangerous due to the vicinity of the tumor to collateral vessels and the duodenum, the dorsal approach was chosen. Under CT guidance, six electrodes were advanced into the tumor, approaching paravertebrally alongside the aorta and inferior vena cava. Ablation was performed without complications. This case demonstrates that when ventral electrode placement for pancreatic IRE is impaired, the dorsal approach may be considered as an alternative.

  6. The local maxima method for enhancement of time-frequency map and its application to local damage detection in rotating machines

    NASA Astrophysics Data System (ADS)

    Obuchowski, Jakub; Wyłomańska, Agnieszka; Zimroz, Radosław

    2014-06-01

    In this paper a new method of fault detection in rotating machinery is presented. It is based on vibration time series analysis in the time-frequency domain. A raw vibration signal is decomposed via the short-time Fourier transform (STFT). The time-frequency map is considered as an M×N matrix of N sub-signals of length M. Each sub-signal is treated as a time series and might be interpreted as the energy variation in a narrow frequency bin. Each sub-signal is processed using a novel approach called the local maxima method. Basically, we search for local maxima because they should appear in the signal if local damage in bearings or a gearbox exists. Finally, information from all sub-signals is combined in order to validate the impulsive behavior of the energy. Due to the random character of the obtained time series, each maximum occurrence has to be checked for significance. If there are time points for which the average number of local maxima over all sub-signals is significantly higher than for the other time instances, the location of these maxima is “weighted” as more important (at such a time instance the local maxima create, over a set of Δf, a pattern on the time-frequency map). This information, called the vector of weights, is used for enhancement of the spectrogram: when the vector of weights is applied to the spectrogram, non-informative energy is suppressed while informative features are enhanced. If the distribution of local maxima on the spectrogram creates a pattern of wide-band cyclic energy growth, the machine is suspected of being damaged. In a healthy condition, the vector of the average number of maxima for each time point should have no outliers; the aggregation of information from all sub-signals is rather random and does not create any pattern. The method is illustrated by the analysis of very noisy real and simulated signals.
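
    The weighting scheme described above can be sketched as follows. It is a simplified illustration on synthetic data: a plain quantile threshold stands in for the paper's statistical significance check.

```python
import numpy as np

def local_maxima_weights(S, q=0.9):
    """S: time-frequency magnitude map, shape (M frequencies, N times).
    For each frequency sub-signal, mark interior local maxima above a
    quantile threshold, then count the maxima per time instant."""
    M, N = S.shape
    counts = np.zeros(N)
    for row in S:
        thr = np.quantile(row, q)
        # interior points larger than both neighbours and the threshold
        is_max = (row[1:-1] > row[:-2]) & (row[1:-1] > row[2:]) & (row[1:-1] > thr)
        counts[1:-1] += is_max
    return counts / M               # fraction of sub-signals voting per time

# Simulated map: noise plus a wide-band impulse at time index 50
rng = np.random.default_rng(0)
S = rng.random((128, 100))
S[:, 50] += 5.0                     # impact energy across all frequency bins
w = local_maxima_weights(S)
enhanced = S * w                    # weights suppress non-informative energy
```

    Time instants where many sub-signals agree on a maximum receive weight near one, so the wide-band impulse stands out in the enhanced spectrogram.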

  7. A general approach for the prediction of localized instability generation in boundary layer flows

    NASA Technical Reports Server (NTRS)

    Choudhari, Meelan; Ng, Lian; Streett, Craig L.

    1991-01-01

    The present approach to the prediction of instability generation that is due to the interaction of freestream disturbances with regions of subscale variations in surface boundary conditions can account for the finite Reynolds number effects, while furnishing a framework for the study of receptivity in compressible flow and in 3D boundary layers. The approach is illustrated for the case of Tollmien-Schlichting wave generation in a Blasius boundary layer, due to the interaction of a freestream acoustic wave with a localized wall inhomogeneity. Results are presented for the generation of viscous and inviscid instabilities in adverse pressure-gradient boundary layers, supersonic boundary layer instabilities, and cross-flow vortex instabilities.

  8. Gaussian Process Regression Plus Method for Localization Reliability Improvement.

    PubMed

    Liu, Kehan; Meng, Zhaopeng; Own, Chung-Ming

    2016-01-01

    Location data are among the most widely used context data in context-aware and ubiquitous computing applications. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The most useful methods are based on the received signal strength (RSS) provided by a set of signal-transmitting access points. However, manually compiling a measured RSS fingerprint database involves high costs and is thus impractical in an online prediction environment. The system used in this study relied on the Gaussian process method, a nonparametric model that can be characterized completely by its mean function and covariance matrix. In addition, the Naive Bayes method was used to verify and simplify the computation of precise predictions. The authors conducted several experiments in simulated and real environments at Tianjin University, examining distinct data sizes, different kernels, and accuracy. The results showed that the proposed method not only retains positioning accuracy but also saves computation time in location predictions.
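
    The Gaussian process posterior at the core of such a system can be sketched as follows. This is a generic textbook GP regression with an RBF kernel and fixed hyperparameters, not the paper's specific method; the two-access-point fingerprint data are invented.

```python
import numpy as np

def gp_predict(X_train, y_train, X_test, length=1.0, sigma_f=1.0, sigma_n=0.1):
    """GP regression: posterior mean and variance at the test inputs."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / length**2)
    K = rbf(X_train, X_train) + sigma_n**2 * np.eye(len(X_train))
    Ks = rbf(X_test, X_train)
    mean = Ks @ np.linalg.solve(K, y_train)
    var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var

# Toy fingerprint: normalised RSS features -> position along a corridor
X = np.array([[0.0, 1.5], [1.0, 0.5], [2.0, -0.5], [3.0, -1.5]])
y = np.array([0.0, 1.0, 2.0, 3.0])     # known positions (metres)
mean, var = gp_predict(X, y, X)        # posterior at the training points
```

    At the fingerprinted locations the posterior mean reproduces the known positions closely and the predictive variance is small; away from them the variance grows, which is what makes GP models attractive for sparse fingerprint databases.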

  9. Gaussian Process Regression Plus Method for Localization Reliability Improvement.

    PubMed

    Liu, Kehan; Meng, Zhaopeng; Own, Chung-Ming

    2016-01-01

    Location data are among the most widely used context data in context-aware and ubiquitous computing applications. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The most useful methods are based on the received signal strength (RSS) provided by a set of signal-transmitting access points. However, manually compiling a measured RSS fingerprint database involves high costs and is thus impractical in an online prediction environment. The system used in this study relied on the Gaussian process method, a nonparametric model that can be characterized completely by its mean function and covariance matrix. In addition, the Naive Bayes method was used to verify and simplify the computation of precise predictions. The authors conducted several experiments in simulated and real environments at Tianjin University, examining distinct data sizes, different kernels, and accuracy. The results showed that the proposed method not only retains positioning accuracy but also saves computation time in location predictions. PMID:27483276

  10. Development of acoustic sniper localization methods and models

    NASA Astrophysics Data System (ADS)

    Grasing, David; Ellwood, Benjamin

    2010-04-01

    A novel method capable of providing situational awareness of sniper fire from small arms is presented. Situational Awareness (SA) information is extracted by exploiting two distinct sounds created by a small arms discharge: the muzzle blast (created when the bullet leaves the barrel of the gun) and the shockwave (the sound created by a supersonic bullet). The direction of arrival associated with the muzzle blast always points in the direction of the shooter. Range can be estimated from the muzzle blast alone; however, at greater distances geometric dilution of precision makes accurate range estimates difficult to obtain. To address this issue, additional information obtained from the shockwave is utilized in order to estimate the range to the shooter. The focus of the paper is the development of a shockwave propagation model, the development of ballistics models (based on empirical measurements), and their subsequent application to methods of determining the shooter position. Knowledge of the round's ballistics is required to estimate the range to the shooter. Many existing methods rely on extracting information from the shockwave in an attempt to identify the round type and thus the ballistic model to use ([1]). It has been our experience that this information becomes unreliable at greater distances or in high-noise environments. Our method differs from existing solutions in that classification of the round type is not required, making the proposed solution more robust. Additionally, we demonstrate that sufficient accuracy can be achieved without the need to classify the round.
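
    The use of the shockwave/muzzle-blast arrival gap can be sketched with an idealized geometry: assume the bullet flies straight toward the sensor at a constant supersonic speed and ignore miss distance and deceleration, which are exactly the effects the paper's ballistics models refine. Everything in this sketch, including the numbers, is our simplification, not the paper's model.

```python
C = 343.0  # nominal speed of sound in air, m/s

def range_from_time_gap(dt, v_bullet):
    """dt: muzzle-blast arrival time minus shockwave arrival time (s).
    Under the idealization above, t_mb = R/c and t_sw = R/v, so
    dt = R * (1/c - 1/v) and R follows by division."""
    return dt / (1.0 / C - 1.0 / v_bullet)

# Round-trip check: an 800 m/s round fired from 400 m
dt = 400.0 / C - 400.0 / 800.0       # simulated arrival-time gap
R = range_from_time_gap(dt, 800.0)   # recovers 400 m
```

    The dependence on `v_bullet` is why real systems need a ballistic model; the paper's contribution is making that estimate robust without classifying the round type.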

  11. Gaussian Process Regression Plus Method for Localization Reliability Improvement

    PubMed Central

    Liu, Kehan; Meng, Zhaopeng; Own, Chung-Ming

    2016-01-01

    Location data are among the most widely used context data in context-aware and ubiquitous computing applications. Many systems with distinct deployment costs and positioning accuracies have been developed over the past decade for indoor positioning. The most useful methods are based on the received signal strength (RSS) provided by a set of signal-transmitting access points. However, manually compiling a measured RSS fingerprint database involves high costs and is thus impractical in an online prediction environment. The system used in this study relied on the Gaussian process method, a nonparametric model that can be characterized completely by its mean function and covariance matrix. In addition, the Naive Bayes method was used to verify and simplify the computation of precise predictions. The authors conducted several experiments in simulated and real environments at Tianjin University, examining distinct data sizes, different kernels, and accuracy. The results showed that the proposed method not only retains positioning accuracy but also saves computation time in location predictions. PMID:27483276

  12. A Multi-Modal Face Recognition Method Using Complete Local Derivative Patterns and Depth Maps

    PubMed Central

    Yin, Shouyi; Dai, Xu; Ouyang, Peng; Liu, Leibo; Wei, Shaojun

    2014-01-01

    In this paper, we propose a multi-modal 2D + 3D face recognition method for a smart city application based on a Wireless Sensor Network (WSN) and various kinds of sensors. Depth maps are exploited for the 3D face representation. For feature extraction, we propose a new feature called the Complete Local Derivative Pattern (CLDP). It adopts the idea of layering and has four layers. In the whole system, we apply CLDP separately on Gabor features extracted from a 2D image and on the depth map. We thus obtain two features: CLDP-Gabor and CLDP-Depth. The two features, weighted by the corresponding coefficients, are combined at the decision level to compute the total classification distance. Finally, the probe face is assigned the identity with the smallest classification distance. Extensive experiments are conducted on three different databases. The results demonstrate the robustness and superiority of the new approach. The experimental results also prove that the proposed multi-modal 2D + 3D method is superior to other multi-modal ones and that CLDP performs better than other Local Binary Pattern (LBP) based features. PMID:25333290
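
    The decision-level fusion step can be sketched as follows. The weights and distance values are illustrative placeholders, not those learned or reported in the paper.

```python
import numpy as np

def fused_identity(d_gabor, d_depth, w_gabor=0.6, w_depth=0.4):
    """Combine per-subject classification distances from the two
    modalities with fixed weights; the probe is assigned the gallery
    identity with the smallest total distance."""
    total = w_gabor * np.asarray(d_gabor) + w_depth * np.asarray(d_depth)
    return int(np.argmin(total)), total

# Distances of one probe face to three gallery subjects
d_cldp_gabor = [0.82, 0.31, 0.77]   # 2D (CLDP-Gabor) modality
d_cldp_depth = [0.64, 0.40, 0.91]   # 3D (CLDP-Depth) modality
who, total = fused_identity(d_cldp_gabor, d_cldp_depth)  # subject 1 wins
```

    Because the fusion happens on distances rather than raw features, either modality can be dropped or re-weighted without retraining the feature extractors.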

  13. Combining bayesian source imaging with equivalent dipole approach to solve the intracranial EEG source localization problem.

    PubMed

    Le Cam, Steven; Caune, Vairis; Ranta, Radu; Korats, Gundars; Louis-Dorr, Valerie

    2015-08-01

    The brain source localization problem has been extensively studied in the past years, yielding a large panel of methodologies, each bringing its own strengths and weaknesses. Combining several of these approaches might help in enhancing their respective performance. Our study is carried out in the particular context of intracranial recordings, with the objective of explaining the measurements based on a reduced number of dipolar activities. We take advantage of the sparse nature of the Bayesian approaches to separate the noise from the source space, and to distinguish between several source contributions on the electrodes. This first step provides accurate estimates of the dipole projections, which can be used as input to an equivalent current dipole fitting procedure. We demonstrate on simulations that the localization results are significantly enhanced by this post-processing step when up to five dipoles are activated simultaneously.

  14. Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods

    PubMed Central

    Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.

    2013-01-01

    Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822

  15. The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment.

    PubMed

    Margaria, Davide; Falletti, Emanuela

    2016-01-01

    A novel cooperative integrity monitoring concept, called "local integrity", suitable for automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called "Local Protection Levels", taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting the effectiveness and performance of the proposed approach. PMID:26821028

  16. Flow equation approach to one-body and many-body localization

    NASA Astrophysics Data System (ADS)

    Quito, Victor; Bhattacharjee, Paraj; Pekker, David; Refael, Gil

    2014-03-01

    We study one-body and many-body localization using the flow equation technique applied to spin-1/2 Hamiltonians. This technique, first introduced by Wegner, allows us to exactly diagonalize interacting systems by solving a set of first-order differential equations for the coupling constants. In addition, from the flow of individual operators we compute physical properties, such as correlation and localization lengths, by looking at the flow of the probability distributions of couplings in Hilbert space. As a first example, we analyze the one-body localization problem written in terms of spins, the disordered XY model with a random transverse field. We compare the results obtained in the flow equation approach with diagonalization in the fermionic language. For the many-body problem, we investigate the physical properties of the disordered XXZ Hamiltonian with a random transverse field in the z-direction.
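
    For reference, the Wegner flow equation underlying the technique takes the standard form (this is the generic textbook formulation, not notation specific to this paper). Writing H(l) = H_0(l) + V(l) for the diagonal and off-diagonal parts,

```latex
\frac{\mathrm{d}H(l)}{\mathrm{d}l} = [\eta(l), H(l)],
\qquad
\eta(l) = [H_0(l), V(l)],
```

    so that the off-diagonal couplings V(l) decay as the flow parameter l grows and H is driven toward diagonal form, which is the sense in which the method "exactly diagonalizes" the system.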

  17. The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment

    PubMed Central

    Margaria, Davide; Falletti, Emanuela

    2016-01-01

    A novel cooperative integrity monitoring concept, called “local integrity”, suitable for automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called “Local Protection Levels”, taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting the effectiveness and performance of the proposed approach. PMID:26821028

  18. Connectometry: A statistical approach harnessing the analytical potential of the local connectome.

    PubMed

    Yeh, Fang-Cheng; Badre, David; Verstynen, Timothy

    2016-01-15

    Here we introduce the concept of the local connectome: the degree of connectivity between adjacent voxels within a white matter fascicle, defined by the density of the diffusing spins. While most human structural connectomic analyses can be summarized as finding global connectivity patterns at either end of anatomical pathways, the analysis of local connectomes, termed connectometry, tracks the local connectivity patterns along the fiber pathways themselves in order to identify the subcomponents of the pathways that express significant associations with a study variable. This bottom-up analytical approach is made possible by reconstructing diffusion MRI data into a common stereotaxic space that allows for associating local connectomes across subjects. The substantial associations can then be tracked along the white matter pathways, and statistical inference is obtained using permutation tests on the length of coherent associations, corrected for multiple comparisons. Using two separate samples with different acquisition parameters, we show that connectometry captures variability within core white matter pathways in a statistically efficient manner, extracts meaningful variability from white matter pathways, complements graph-theoretic connectomic measures, and is more sensitive than region-of-interest approaches.

  19. Local search methods based on variable focusing for random K-satisfiability.

    PubMed

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered in which variables are selected uniformly at random or with a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed. PMID:25679737
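
    The variable-focused search can be sketched as follows. This uses a simplified, WalkSAT-flavored acceptance rule rather than the paper's exact Metropolis criterion, on a toy instance of our own.

```python
import random

def v_fms(clauses, n_vars, noise=0.3, max_flips=10000, seed=1):
    """Variable-focused local search sketch: pick a random unsatisfied
    clause, then a random variable in it; keep the flip unless it
    increases the number of unsatisfied clauses, in which case keep it
    only with probability `noise`."""
    rng = random.Random(seed)
    assign = [rng.choice([True, False]) for _ in range(n_vars)]

    def n_unsat():
        return sum(not any(assign[abs(l) - 1] == (l > 0) for l in cl)
                   for cl in clauses)

    for _ in range(max_flips):
        unsat = [cl for cl in clauses
                 if not any(assign[abs(l) - 1] == (l > 0) for l in cl)]
        if not unsat:
            return assign
        v = abs(rng.choice(rng.choice(unsat))) - 1   # focus on a variable
        before = n_unsat()
        assign[v] = not assign[v]
        if n_unsat() > before and rng.random() > noise:
            assign[v] = not assign[v]                # undo uphill move
    return None

# Tiny satisfiable 3-SAT instance (signed DIMACS-style literals)
cnf = [(1, 2, 3), (-1, 2, 3), (1, -2, 3), (1, 2, -3), (-1, -2, 3)]
sol = v_fms(cnf, 3)
```

    The only change relative to clause-focused FMS is the line marked "focus on a variable": the random choice lands on a variable drawn through an unsatisfied clause, which is what biases frequently violated variables.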

  20. Local search methods based on variable focusing for random K -satisfiability

    NASA Astrophysics Data System (ADS)

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered in which variables are selected uniformly at random or with a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed.

  1. Multidimensional Programming Methods for Energy Facility Siting: Alternative Approaches

    NASA Technical Reports Server (NTRS)

    Solomon, B. D.; Haynes, K. E.

    1982-01-01

    The use of multidimensional optimization methods in solving power plant siting problems, which are characterized by several conflicting, noncommensurable objectives is addressed. After a discussion of data requirements and exclusionary site screening methods for bounding the decision space, classes of multiobjective and goal programming models are discussed in the context of finite site selection. Advantages and limitations of these approaches are highlighted and the linkage of multidimensional methods with the subjective, behavioral components of the power plant siting process is emphasized.

  2. Potential energy surface fitting by a statistically localized, permutationally invariant, local interpolating moving least squares method for the many-body potential: Method and application to N4

    SciTech Connect

    Bender, Jason D.; Doraiswamy, Sriram; Candler, Graham V.; Truhlar, Donald G.

    2014-02-07

    Fitting potential energy surfaces to analytic forms is an important first step for efficient molecular dynamics simulations. Here, we present an improved version of the local interpolating moving least squares method (L-IMLS) for such fitting. Our method has three key improvements. First, pairwise interactions are modeled separately from many-body interactions. Second, permutational invariance is incorporated in the basis functions, using permutationally invariant polynomials in Morse variables, and in the weight functions. Third, computational cost is reduced by statistical localization, in which we statistically correlate the cutoff radius with data point density. We motivate our discussion in this paper with a review of global and local least-squares-based fitting methods in one dimension. Then, we develop our method in six dimensions, and we note that it allows the analytic evaluation of gradients, a feature that is important for molecular dynamics. The approach, which we call statistically localized, permutationally invariant, local interpolating moving least squares fitting of the many-body potential (SL-PI-L-IMLS-MP, or, more simply, L-IMLS-G2), is used to fit a potential energy surface to an electronic structure dataset for N4. We discuss its performance on the dataset and give directions for further research, including applications to trajectory calculations.
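
    The moving least squares idea the method builds on can be sketched in one dimension, echoing the one-dimensional review mentioned in the abstract. This is a generic MLS fit with a finite cutoff, not the paper's interpolating, permutationally invariant scheme, and the Morse-like data are invented.

```python
import numpy as np

def mls_eval(xq_arr, xi, yi, cutoff=1.5, degree=2):
    """At each query point, fit a local polynomial to (xi, yi) with
    weights that decay with distance and vanish beyond `cutoff`."""
    out = np.empty(len(xq_arr))
    for k, xq in enumerate(xq_arr):
        d = np.abs(xi - xq)
        w = np.where(d < cutoff, (1.0 - d / cutoff) ** 2, 0.0)
        keep = w > 0                               # cutoff limits the fit set
        A = np.vander(xi[keep], degree + 1)        # local polynomial basis
        W = np.diag(w[keep])
        coef, *_ = np.linalg.lstsq(W @ A, W @ yi[keep], rcond=None)
        out[k] = np.polyval(coef, xq)
    return out

# Scattered samples of a Morse-like pair potential
xi = np.linspace(0.8, 4.0, 25)
yi = (1.0 - np.exp(-(xi - 1.5))) ** 2 - 1.0
y_hat = mls_eval(np.array([2.0]), xi, yi)
```

    The cutoff is what the paper's "statistical localization" tunes: a smaller radius means fewer points per local fit and lower cost, at some risk to accuracy in sparse regions.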

  3. Global/local methods research using a common structural analysis framework

    NASA Technical Reports Server (NTRS)

    Knight, Norman F., Jr.; Ransom, Jonathan B.; Griffin, O. H., Jr.; Thompson, Danniella M.

    1991-01-01

    Methodologies for global/local stress analysis are described including both two- and three-dimensional analysis methods. These methods are being developed within a common structural analysis framework. Representative structural analysis problems are presented to demonstrate the global/local methodologies being developed.

  4. Assessment of Social Vulnerability Identification at Local Level around Merapi Volcano - A Self Organizing Map Approach

    NASA Astrophysics Data System (ADS)

    Lee, S.; Maharani, Y. N.; Ki, S. J.

    2015-12-01

    The application of the Self-Organizing Map (SOM) to analyze social vulnerability and recognize resilience within sites is a challenging task. The aim of this study is to propose a computational method to identify sites according to their similarity and to determine the most relevant variables characterizing social vulnerability in each cluster. For this purpose, the SOM is considered an effective platform for the analysis of high-dimensional data. By considering the cluster structure, the characteristics of the social vulnerability of the identified sites can be fully understood. In this study, the social vulnerability variable set is constructed from 17 variables, i.e., 12 independent variables representing socio-economic concepts and 5 dependent variables representing the damage and losses due to the Merapi eruption in 2010. These variables collectively represent the local situation of the study area, based on fieldwork conducted in September 2013. By using both independent and dependent variables, we can identify whether the social vulnerability is reflected in the actual situation, in this case, the Merapi eruption of 2010. However, social vulnerability analysis in local communities involves a number of variables representing their socio-economic condition, and some of the variables employed in this study might be more or less redundant. Therefore, the SOM is used to reduce redundant variables by selecting representative variables using the component planes and the correlation coefficients between variables in order to find an effective sample size. The selected dataset was then effectively clustered according to similarity. Finally, this approach can produce reliable estimates of clustering, recognize the most significant variables, and could be useful for social vulnerability assessment, especially for stakeholders as decision makers. This research was supported by a grant 'Development of Advanced Volcanic Disaster Response System considering
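
    A minimal SOM of the kind used here can be sketched as follows; the grid size, training schedules, and the two synthetic "site" clusters are our own illustrative choices, not the study's configuration.

```python
import numpy as np

def train_som(data, grid=(4, 4), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal Self-Organizing Map: prototype vectors on a grid are
    pulled toward random samples, with a neighbourhood radius and
    learning rate that shrink over time."""
    rng = np.random.default_rng(seed)
    h, w = grid
    W = rng.random((h * w, data.shape[1]))
    coords = np.array([(i, j) for i in range(h) for j in range(w)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = int(np.argmin(((W - x) ** 2).sum(1)))   # best-matching unit
        frac = t / iters
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 0.5
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        theta = np.exp(-d2 / (2.0 * sigma**2))        # neighbourhood kernel
        W += lr * theta[:, None] * (x - W)
    return W

def bmu_of(W, x):
    return int(np.argmin(((W - x) ** 2).sum(1)))

# Two synthetic "site" clusters in a 5-variable socio-economic space
rng = np.random.default_rng(1)
low = rng.normal(0.2, 0.05, (30, 5))    # low-vulnerability sites
high = rng.normal(0.8, 0.05, (30, 5))   # high-vulnerability sites
W = train_som(np.vstack([low, high]))
```

    After training, dissimilar sites map to different units of the grid, and inspecting each prototype's components (the "component planes") is what supports the variable-selection step described above.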

  5. A New Local Modelling Approach Based on Predicted Errors for Near-Infrared Spectral Analysis.

    PubMed

    Chang, Haitao; Zhu, Lianqing; Lou, Xiaoping; Meng, Xiaochen; Guo, Yangkuan; Wang, Zhongyu

    2016-01-01

    Over the last decade, near-infrared spectroscopy, together with chemometric models, has been widely employed as an analytical tool in several industries. However, most chemical processes and analytes are multivariate and nonlinear in nature. To solve this problem, a local errors regression method is presented in this paper to build an accurate calibration model, where a calibration subset is selected by a new similarity criterion that takes into account the full information of the spectra, the chemical property, and the predicted errors. After selection of the calibration subset, partial least squares regression is applied to build the calibration model. The performance of the proposed method is demonstrated on a near-infrared spectroscopy dataset of pharmaceutical tablets. Compared with other local strategies using different similarity criteria, the proposed local errors regression yields a significant improvement in both prediction ability and calculation speed. PMID:27446631
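
The local-calibration idea (select the most similar calibration samples, then fit a regression on that subset) can be sketched as follows. Plain Euclidean distance stands in for the paper's combined similarity criterion, and ordinary least squares stands in for PLS, so this illustrates only the local-modelling scheme:

```python
import numpy as np

def local_predict(X_cal, y_cal, x_new, k=15):
    """Predict y for x_new from a locally selected calibration subset."""
    d = np.linalg.norm(X_cal - x_new, axis=1)   # similarity = spectral distance
    idx = np.argsort(d)[:k]                     # k most similar calibration spectra
    Xs = np.column_stack([np.ones(k), X_cal[idx]])
    coef, *_ = np.linalg.lstsq(Xs, y_cal[idx], rcond=None)
    return np.r_[1.0, x_new] @ coef

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # mock "spectra"
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.01 * rng.normal(size=200)
pred = local_predict(X[:150], y[:150], X[175])  # predict a held-out sample
```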
  7. A Coproduction Community Based Approach to Reducing Smoking Prevalence in a Local Community Setting

    PubMed Central

    McGeechan, G. J.; Woodall, D.; Anderson, L.; Wilson, L.; O'Neill, G.; Newbury-Birch, D.

    2016-01-01

    Research highlights that asset-based community development where local residents become equal partners in service development may help promote health and well-being. This paper outlines baseline results of a coproduction evaluation of an asset-based approach to improving health and well-being within a small community through promoting tobacco control. Local residents were recruited and trained as community researchers to deliver a smoking prevalence survey within their local community and became local health champions, promoting health and well-being. The results of the survey will be used to inform health promotion activities within the community. The local smoking prevalence was higher than the regional and national averages. Half of the households surveyed had at least one smoker, and 63.1% of children lived in a smoking household. Nonsmokers reported higher well-being than smokers; however, the differences were not significant. Whilst the community has a high smoking prevalence, more than half of the smokers surveyed would consider quitting. Providing smoking cessation advice in GP surgeries may help reduce smoking prevalence in this community. Work in the area could be done to reduce children's exposure to smoking in the home. PMID:27446219

  8. Lagrangian Particle Method for Local Scale Dispersion Modeling

    NASA Astrophysics Data System (ADS)

    Sunarko; Zaki Su'ud

    2016-08-01

    A deterministic model is developed for radioactive dispersion analysis based on a random-walk Lagrangian Particle Dispersion Method (LPDM). A diagnostic 3-dimensional mass-consistent wind field with the capability to handle complex topography is used to provide input for particle advection. The turbulent diffusion of particles is determined from empirical lateral and linear vertical relationships. Surface-level concentration is calculated for a constant unit release from an elevated point source. A series of 60-second segmented groups of particles is released over a total duration of 3600 seconds. The averaged surface-level concentration within a 5-meter surface layer is obtained and compared with the available analytical solution. Results from the LPDM show good agreement with the analytical result for vertically constant and varying wind fields with the same atmospheric stability.
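
A minimal random-walk dispersion step of this kind can be sketched as follows. The wind field is reduced to a constant along-wind speed, and the source height and turbulence parameters are illustrative, not the paper's empirical relationships:

```python
import numpy as np

def disperse(n=2000, steps=600, dt=1.0, u=2.0, sigma=(0.6, 0.6, 1.0), seed=0):
    """Random-walk Lagrangian particle dispersion sketch: constant wind u
    along x, Gaussian turbulent displacements per step (illustrative)."""
    rng = np.random.default_rng(seed)
    pos = np.zeros((n, 3))
    pos[:, 2] = 20.0                        # elevated point source at 20 m
    for _ in range(steps):
        pos[:, 0] += u * dt                 # mean advection
        pos += rng.normal(0.0, sigma, size=(n, 3)) * np.sqrt(dt)
        pos[:, 2] = np.abs(pos[:, 2])       # reflect particles at the ground
    return pos

pos = disperse()
surface = pos[pos[:, 2] < 5.0]              # particles in a 5 m surface layer
```

Binning `surface` on a horizontal grid would give the surface-level concentration field compared against the analytical plume solution.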

  9. A Local Adaptive Approach for Dense Stereo Matching in Architectural Scene Reconstruction

    NASA Astrophysics Data System (ADS)

    Stentoumis, C.; Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2013-02-01

    In recent years, a demand for 3D models of various scales and precisions has been growing for a wide range of applications; among them, cultural heritage recording is a particularly important and challenging field. We outline an automatic 3D reconstruction pipeline, mainly focusing on dense stereo-matching which relies on a hierarchical, local optimization scheme. Our matching framework consists of a combination of robust cost measures, extracted via an intuitive cost aggregation support area and set within a coarse-to-fine strategy. The cost function is formulated by combining three individual costs: a cost computed on an extended census transformation of the images; the absolute difference cost, taking into account information from colour channels; and a cost based on the principal image derivatives. An efficient adaptive method of aggregating matching cost for each pixel is then applied, relying on linearly expanded cross skeleton support regions. Aggregated cost is smoothed via a 3D Gaussian function. Finally, a simple "winner-takes-all" approach extracts the disparity value with minimum cost. This keeps algorithmic complexity and system computational requirements acceptably low for high resolution images (or real-time applications), when compared to complex matching functions of global formulations. The stereo algorithm adopts a hierarchical scheme to accommodate high-resolution images and complex scenes. In a last step, a robust post-processing work-flow is applied to enhance the disparity map and, consequently, the geometric quality of the reconstructed scene. Successful results from our implementation, which combines pre-existing algorithms and novel considerations, are presented and evaluated on the Middlebury platform.
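
The census-transform component of such a cost function can be illustrated in isolation. This sketch builds census bit-strings, scores candidate disparities by Hamming distance, and applies the winner-takes-all rule; the synthetic image pair, window size, and disparity range are assumptions:

```python
import numpy as np

def census(img, r=1):
    """3x3 census transform: bit-string of (neighbor < center) comparisons."""
    out = np.zeros(img.shape, dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, 0), dx, 1)
            out = (out << 1) | (shifted < img).astype(np.uint32)
    return out

def hamming(a, b):
    """Per-pixel Hamming distance between census codes = matching cost."""
    x = a ^ b
    cnt = np.zeros(x.shape, dtype=np.uint32)
    while x.any():
        cnt += x & 1
        x >>= 1
    return cnt

left = np.random.default_rng(0).integers(0, 255, (20, 30)).astype(np.int32)
right = np.roll(left, -3, axis=1)      # ground-truth disparity of 3 pixels
costs = np.stack([hamming(census(left), np.roll(census(right), d, axis=1))
                  for d in range(8)])
disparity = costs.argmin(axis=0)       # winner-takes-all per pixel
```

A real pipeline would aggregate `costs` over adaptive support regions before the argmin, as the abstract describes.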

  10. Extradural middle fossa approach. Proposal of a learning method: the "rule of two fans." Technical note.

    PubMed

    Mastronardi, Luciano; Sameshima, Tetsuro; Ducati, Alessandro; De Waele, Luc F; Ferrante, Luigi; Fukushima, Takanori

    2006-08-01

    The extradural middle fossa approach is used to access lesions of the petroclival and cavernous sinus regions. It may be included in combined petrosal and anterolateral transcavernous approaches. Technically, it is a demanding exposure that provides a wide extradural corridor between the 5th, 7th, and 8th cranial nerves. Its major advantages are that it offers extradural dissection, limits temporal lobe retraction, and avoids the transposition of nerves or vessels. Its disadvantages are primarily related to the complicated anatomy of the petrous apex from the middle fossa trajectory, which can be unfamiliar to neurosurgeons. To facilitate the first attempts with this relatively uncommon approach during dissections of human cadaveric injected heads and isolated temporal bones, we developed a simple learning method useful for localizing all anatomical structures. Using this "rule of two fans," vascular, nervous, fibrous, and osseous structures are localized within two bordering fans with a 90-degree relationship to each other.

  11. Millimeter-Wave Localizers for Aircraft-to-Aircraft Approach Navigation

    NASA Technical Reports Server (NTRS)

    Tang, Adrian J.

    2013-01-01

    Aerial refueling technology for both manned and unmanned aircraft is critical for operations where extended aircraft flight time is required. Existing refueling assets are typically manned aircraft, which couple to a second aircraft through the use of a refueling boom. Alignment and mating of the two aircraft continues to rely on human control with use of high-resolution cameras. With the recent advances in unmanned aircraft, it would be highly advantageous to remove/reduce human control from the refueling process, simplifying the amount of remote mission management and enabling new operational scenarios. Existing aerial refueling uses a camera, making it non-autonomous and prone to human error. Existing commercial localizer technology has proven robust and reliable, but is not suited for aircraft-to-aircraft approaches such as aerial refueling scenarios, since the resolution is too coarse (approximately one meter). A localizer approach system for aircraft-to-aircraft docking can be constructed using the same modulation with a millimeter-wave carrier to provide high resolution. One technology used to remotely align commercial aircraft on approach to a runway is the instrument landing system (ILS). ILS has been in service within the U.S. for almost 50 years. In a commercial ILS, two partially overlapping beams of UHF (109 to 126 MHz) are broadcast from an antenna array so that their overlapping region defines the centerline of the runway. This is called a localizer system and is responsible for horizontal alignment of the approach. One beam is modulated with a 150-Hz tone, the other with a 90-Hz tone. Through comparison of the modulation depths of both tones, an autopilot system aligns the approaching aircraft with the runway centerline. A similar system, called a glide slope (GS), exists in the 320-to-330-MHz band for vertical alignment of the approach. While this technology has been proven reliable for millions of commercial flights annually, its UHF nature limits
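
The localizer principle (compare the depths of the 90 Hz and 150 Hz modulations to obtain a steering signal) can be sketched numerically. The sample rate, tone depths, and idealized demodulated-signal model are illustrative assumptions:

```python
import numpy as np

def modulation_depths(signal, fs):
    """Recover the 90 Hz and 150 Hz tone amplitudes from a demodulated
    localizer signal by correlating with quadrature reference tones."""
    t = np.arange(len(signal)) / fs
    def depth(f):
        c = (signal * np.cos(2 * np.pi * f * t)).mean()
        s = (signal * np.sin(2 * np.pi * f * t)).mean()
        return 2 * np.hypot(c, s)          # amplitude of that tone
    return depth(90.0), depth(150.0)

fs = 9000.0
t = np.arange(int(fs)) / fs                # 1 s of demodulated signal
# Aircraft left of centerline: the 90 Hz beam dominates (depths 0.25 vs 0.15).
sig = 1.0 + 0.25 * np.sin(2 * np.pi * 90 * t) + 0.15 * np.sin(2 * np.pi * 150 * t)
d90, d150 = modulation_depths(sig, fs)
ddm = d90 - d150                           # sign steers the autopilot to center
```

On the centerline the two depths are equal and `ddm` is zero; the same comparison applies unchanged when the carrier is moved to a millimeter-wave band.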

  12. Various contact approaches for the finite cell method

    NASA Astrophysics Data System (ADS)

    Konyukhov, Alexander; Lorenz, Christian; Schweizerhof, Karl

    2015-08-01

    The finite cell method (FCM) provides a method for the computation of structures, combining high-order FEM with a special integration technique. It is a novel computational method that has been developed extensively over the last decade. One of the major problems of the FCM is the description of boundary conditions inside cells as well as in sub-cells, and a completely open problem is the description of contact. The motivation of the current work is therefore to develop a set of computational contact mechanics approaches that are effective for the finite cell method. Thus, focusing on the Hertz problem, we develop and test the following algorithms for the FCM: direct integration in the cell method, which allows the fastest implementation but suffers from numerical artifacts such as the "stamp effect"; the cell-surface-to-analytical-surface contact element, the most efficient scheme with regard to approximation properties, designed for contact with rigid bodies and leading to cell-wise contact elements; and finally the discrete-cell-to-cell contact approach based on the finite discrete method. All developed methods are carefully verified against the analytical Hertz solution. The cell subdivisions, the order of the shape functions, and the selection of the classes of shape functions are investigated for all developed contact approaches. This analysis allows the most robust approach to be chosen depending on the needs of the user, such as correct representation of the stresses or mere satisfaction of geometric non-penetration conditions.

  13. Methods to analyze subcellular localization and intracellular trafficking of Claudin-16.

    PubMed

    Kausalya, P Jaya; Hunziker, Walter

    2011-01-01

    The integral tight junction protein Claudin-16 (Cldn16) is predominantly expressed in renal epithelial cells of the thick ascending limb of Henle's loop where, together with claudin-19, it forms a cation-selective pore that allows influx of Na+ from the interstitial fluid into the lumen of the kidney tubule. This leads to an electrochemical gradient that drives the reabsorption of Mg2+ and Ca2+ ions from the renal filtrate. Mutations in the Cldn16 gene have been identified in patients suffering from familial hypomagnesemia with hypercalciuria and nephrocalcinosis, with excessive renal wastage of Mg2+ and Ca2+ being a hallmark of this condition. Studies into the mechanism by which mutations impair Cldn16 function have shown that although several mutations affect paracellular ion transport, many interfere with intracellular trafficking of Cldn16, ultimately compromising its localization to TJs. Here, we describe the experimental approaches that can be used to monitor intracellular localization and trafficking of Cldn16. These methods can easily be adapted to study other claudins, provided suitable antibodies are available.

  14. Multiscale Energy and Eigenspace Approach to Detection and Localization of Myocardial Infarction.

    PubMed

    Sharma, L N; Tripathy, R K; Dandapat, S

    2015-07-01

    In this paper, a novel technique based on a multiscale energy and eigenspace (MEES) approach is proposed for the detection and localization of myocardial infarction (MI) from multilead electrocardiogram (ECG). Wavelet decomposition of multilead ECG signals grossly segments the clinical components into different subbands. In MI, pathological characteristics such as hyperacute T-waves, inversion of the T-wave, changes in ST elevation, or pathological Q-waves are seen in ECG signals. This pathological information alters the covariance structures of multiscale multivariate matrices at different scales and the corresponding eigenvalues. The clinically relevant components can be captured by the eigenvalues. In this study, multiscale wavelet energies and eigenvalues of multiscale covariance matrices are used as diagnostic features. Support vector machines (SVMs) with both linear and radial basis function (RBF) kernels and K-nearest neighbors are used as classifiers. Datasets, which include healthy controls and various types of MI, such as anterior, anterolateral, anteroseptal, inferior, inferolateral, and inferoposterolateral, from the PTB diagnostic ECG database are used for evaluation. The results show that the proposed technique can successfully detect the MI pathologies. The MEES approach also helps localize different types of MIs. For MI detection, the accuracy, sensitivity, and specificity values are 96%, 93%, and 99%, respectively. The localization accuracy is 99.58%, using a multiclass SVM classifier with RBF kernel. PMID:26087076
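
The idea of multiscale wavelet energies as features can be illustrated with a hand-rolled Haar decomposition. The synthetic "beat" signals and the use of the Haar wavelet (rather than the paper's wavelet choice) are assumptions:

```python
import numpy as np

def haar_energies(x, levels=4):
    """Relative subband energies from a Haar wavelet decomposition,
    a simple stand-in for the paper's multiscale energy features."""
    a = np.asarray(x, float)
    energies = []
    for _ in range(levels):
        if len(a) % 2:
            a = a[:-1]
        approx = (a[0::2] + a[1::2]) / np.sqrt(2)
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail ** 2))   # detail energy, fine to coarse
        a = approx
    energies.append(np.sum(a ** 2))            # final approximation energy
    e = np.array(energies)
    return e / e.sum()

# A smooth "healthy" beat vs. one with an added sharp ST-like deflection.
t = np.linspace(0, 1, 512)
smooth = np.sin(2 * np.pi * 1.3 * t)
spiky = smooth + 2.0 * (np.abs(t - 0.45) < 0.02)
f_smooth, f_spiky = haar_energies(smooth), haar_energies(spiky)
# The sharp deflection shifts energy into the fine-scale detail subbands.
```

Feature vectors like these, one per lead, are what the SVM and K-nearest-neighbor classifiers would consume.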

  15. New approaches for automatic three-dimensional source localization of acoustic emissions - applications to concrete specimens.

    PubMed

    Kurz, Jochen H

    2015-12-01

    The task of locating a source in space by measuring travel time differences of elastic or electromagnetic waves from the source to several sensors arises in varied fields. The new concepts of automatic acoustic emission localization presented in this article are based on developments from geodesy and seismology. A detailed description of source location determination in space is given, with a focus on acoustic emission data from concrete specimens. Direct and iterative solvers are compared. A concept based on direct solvers from geodesy, extended by a statistical approach, is described, which allows a stable source location determination even for partly erroneous onset times. The developed approach is validated with acoustic emission data from a large specimen, leading to travel paths of up to 1 m and therefore to noisy data with errors in the determined onsets. The adaptation of the algorithms from geodesy to the localization of sources of elastic waves offers new possibilities concerning stability, automation and performance of localization results. Fracture processes can be assessed more accurately. PMID:26233938
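
The underlying travel-time localization problem can be sketched with an iterative least-squares (Gauss-Newton) solver. The sensor layout, wave speed, and noiseless onsets are illustrative, and a generic iterative solver stands in for the paper's direct geodetic one:

```python
import numpy as np

def locate(sensors, onsets, v, iters=30):
    """Gauss-Newton solver for source position and origin time from
    onset times t_i = t0 + |x - s_i| / v (sketch, noiseless data)."""
    x = sensors.mean(axis=0).astype(float)   # start at the sensor centroid
    t0 = float(onsets.min())
    for _ in range(iters):
        diff = x - sensors
        d = np.linalg.norm(diff, axis=1)
        pred = t0 + d / v                    # predicted onset times
        # Jacobian of predictions w.r.t. (x, y, z, t0)
        J = np.hstack([diff / (v * d[:, None]), np.ones((len(d), 1))])
        upd, *_ = np.linalg.lstsq(J, onsets - pred, rcond=None)
        x += upd[:3]
        t0 += upd[3]
    return x, t0

# Synthetic test: 8 sensors on a 1 m concrete cube, wave speed 4000 m/s.
sensors = np.array([[x_, y_, z_] for x_ in (0, 1.0)
                    for y_ in (0, 1.0) for z_ in (0, 1.0)])
true_src = np.array([0.3, 0.7, 0.4])
v = 4000.0
onsets = 0.002 + np.linalg.norm(sensors - true_src, axis=1) / v
est, t0 = locate(sensors, onsets, v)
```

With erroneous onsets, as in the article, the residuals after such a fit are exactly what a statistical outlier screen would operate on.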

  16. Multiple scattering, radiative transfer, and weak localization in discrete random media: Unified microphysical approach

    NASA Astrophysics Data System (ADS)

    Mishchenko, Michael I.

    2008-06-01

    The radiative transfer theory has been extensively used in geophysics, remote sensing, and astrophysics for more than a century, but its physical basis had remained uncertain until quite recently. This ambiguous situation has finally changed, and the theory of radiative transfer in random particulate media has become a legitimate branch of Maxwell's electromagnetics. This tutorial review is intended to provide an accessible outline of recent basic developments. It discusses elastic electromagnetic scattering by random many-particle groups and summarizes the unified microphysical approach to radiative transfer and the effect of weak localization of electromagnetic waves (otherwise known as coherent backscattering). It explains the exact meaning of such fundamental concepts as single and multiple scattering, demonstrates how the theories of radiative transfer and weak localization originate in the Maxwell equations, and exposes and corrects certain misconceptions of the traditional phenomenological approach to radiative transfer. It also discusses the challenges facing the theories of multiple scattering, radiative transfer, and weak localization in the context of geophysical applications.

  18. Comparison of alternative MS/MS and bioinformatics approaches for confident phosphorylation site localization.

    PubMed

    Wiese, Heike; Kuhlmann, Katja; Wiese, Sebastian; Stoepel, Nadine S; Pawlas, Magdalena; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin; Drepper, Friedel; Warscheid, Bettina

    2014-02-01

    Over the past years, phosphoproteomics has advanced to a prime tool in signaling research. Since then, an enormous amount of information about in vivo protein phosphorylation events has been collected, providing a treasure trove for gaining a better understanding of the molecular processes involved in cell signaling. Yet, we still face the problem of how to achieve correct modification site localization. Here we use alternative fragmentation and different bioinformatics approaches for the identification and confident localization of phosphorylation sites. Phosphopeptide-enriched fractions were analyzed by multistage activation, collision-induced dissociation and electron transfer dissociation (ETD), yielding complementary phosphopeptide identifications. We further found that MASCOT, OMSSA and Andromeda each identified a distinct set of phosphopeptides, allowing the number of site assignments to be increased. The postsearch engine SLoMo provided confident phosphorylation site localization, whereas different versions of PTM-Score integrated in MaxQuant differed in performance. Based on high-resolution ETD and higher-energy collisional dissociation (HCD) data sets from a large synthetic peptide and phosphopeptide reference library reported by Marx et al. [Nat. Biotechnol. 2013, 31 (6), 557-564], we show that an Andromeda/PTM-Score probability of 1 is required to provide a false localization rate (FLR) of 1% for HCD data, while 0.55 is sufficient for high-resolution ETD spectra. Additional analyses of HCD data demonstrated that for phosphotyrosine peptides and phosphopeptides containing two potential phosphorylation sites, PTM-Score probability cutoff values of <1 can be applied to ensure an FLR of 1%. Proper adjustment of localization probability cutoffs allowed us to significantly increase the number of confident sites with an FLR of <1%. Our findings underscore the need for the systematic assessment of FLRs for different score values to report confident modification site

  19. Mapping Patterns of Local Recurrence After Pancreaticoduodenectomy for Pancreatic Adenocarcinoma: A New Approach to Adjuvant Radiation Field Design

    SciTech Connect

    Dholakia, Avani S.; Kumar, Rachit; Raman, Siva P.; Moore, Joseph A.; Ellsworth, Susannah; McNutt, Todd; Laheru, Daniel A.; Jaffee, Elizabeth; Cameron, John L.; Tran, Phuoc T.; Hobbs, Robert F.; Wolfgang, Christopher L.; and others

    2013-12-01

    Purpose: To generate a map of local recurrences after pancreaticoduodenectomy (PD) for patients with resectable pancreatic ductal adenocarcinoma (PDA) and to model an adjuvant radiation therapy planning target volume (PTV) that encompasses a majority of local recurrences. Methods and Materials: Consecutive patients with resectable PDA undergoing PD and 1 or more computed tomography (CT) scans more than 60 days after PD at our institution were reviewed. Patients were divided into 3 groups: no adjuvant treatment (NA), chemotherapy alone (CTA), or chemoradiation (CRT). Cross-sectional scans were centrally reviewed, and local recurrences were plotted to scale with respect to the celiac axis (CA), superior mesenteric artery (SMA), and renal veins on 1 CT scan of a template post-PD patient. An adjuvant clinical treatment volume comprising 90% of local failures based on standard expansions of the CA and SMA was created and simulated on 3 post-PD CT scans to assess the feasibility of this planning approach. Results: Of the 202 patients in the study, 40 (20%), 34 (17%), and 128 (63%) received NA, CTA, and CRT adjuvant therapy, respectively. The rate of margin-positive resections was greater in CRT patients than in CTA patients (28% vs 9%, P=.023). Local recurrence occurred in 90 of the 202 patients overall (45%) and in 19 (48%), 22 (65%), and 49 (38%) in the NA, CTA, and CRT groups, respectively. Ninety percent of recurrences were within a 3.0-cm right-lateral, 2.0-cm left-lateral, 1.5-cm anterior, 1.0-cm posterior, 1.0-cm superior, and 2.0-cm inferior expansion of the combined CA and SMA contours. Three simulated radiation treatment plans using these expansions with adjustments to avoid nearby structures were created to demonstrate the use of this treatment volume. Conclusions: Modified PTVs targeting high-risk areas may improve local control while minimizing toxicities, allowing dose escalation with intensity-modulated or stereotactic body radiation therapy.

  20. A hybrid approach for efficient anomaly detection using metaheuristic methods

    PubMed Central

    Ghanem, Tamer F.; Elkilani, Wail S.; Abdul-kader, Hatem M.

    2014-01-01

    Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation, yet the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets, using detectors generated with a multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative-selection-based detector generation. The evaluation of this approach is performed on the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors with an accuracy of 96.1%, compared to competing machine learning algorithms. PMID:26199752
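
The negative-selection flavour of detector generation can be sketched as follows. Uniform random sampling stands in for the paper's multi-start metaheuristic and genetic-algorithm candidate generation, and the 2-D "traffic feature" data are synthetic:

```python
import numpy as np

def generate_detectors(normal, n_detectors=200, radius=0.5, seed=0):
    """Negative-selection sketch: keep random candidate detectors that do
    NOT match (lie within `radius` of) any normal 'self' sample."""
    rng = np.random.default_rng(seed)
    detectors = []
    while len(detectors) < n_detectors:
        cand = rng.uniform(-3, 3, size=normal.shape[1])
        if np.min(np.linalg.norm(normal - cand, axis=1)) > radius:
            detectors.append(cand)            # survives censoring -> keep
    return np.array(detectors)

def is_anomalous(x, detectors, radius=0.5):
    """A sample matched by any detector is flagged as anomalous."""
    return bool(np.min(np.linalg.norm(detectors - x, axis=1)) <= radius)

rng = np.random.default_rng(1)
normal = rng.normal(0, 0.4, size=(300, 2))    # 'self' traffic features
det = generate_detectors(normal)
```

A metaheuristic would replace the uniform sampling to cover the non-self space with fewer, better-placed detectors.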

  2. Service Areas of Local Urban Green Spaces: An Explorative Approach in Arroios, Lisbon

    NASA Astrophysics Data System (ADS)

    Figueiredo, R.; Gonçalves, A. B.; Ramos, I. L.

    2016-09-01

    The identification of service areas of urban green spaces, and of areas lacking them, is increasingly necessary within city planning and management, as it translates into important indicators for the assessment of quality of life. In this setting, it is important to evaluate attractiveness and accessibility dynamics through a set of attributes, taking into account the local reality of the territory under study. This work presents an operational methodology associated with these dynamics in local urban green spaces, assisting in the planning and management of this type of facility. The methodology is supported firstly by questionnaire surveys and then by network analysis, processing spatial data in a Geographic Information Systems (GIS) environment. In the case study, two local green spaces in Lisbon were selected, following an explorative, local-perspective approach. Through field data, it was possible to identify service areas for both spaces and to compare the results with references in the literature. It was also possible to recognise areas lacking such spaces. The difficulty of evaluating the choices that real individuals make among urban green spaces, and the routes they take, is a major challenge to the application of the methodology. In this sense, it becomes imperative to develop different instruments and adapt them to other types of urban green spaces.
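
The network-analysis step can be sketched with a cutoff-limited Dijkstra search on a toy street graph; the graph, edge lengths, and 300 m walking cutoff are illustrative assumptions:

```python
import heapq

def service_area(graph, source, cutoff):
    """Dijkstra-based service area: all nodes reachable from a green-space
    entrance within a walking-distance cutoff (toy street network)."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd <= cutoff and nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist                            # node -> walking distance

# Toy street network: edges are (neighbor, length in metres).
streets = {
    'park': [('a', 120), ('b', 200)],
    'a': [('park', 120), ('c', 150)],
    'b': [('park', 200), ('c', 90)],
    'c': [('a', 150), ('b', 90), ('d', 400)],
}
area = service_area(streets, 'park', cutoff=300)   # 'd' falls outside
```

In a GIS environment the same computation runs on the real street network, with the cutoff calibrated from the questionnaire surveys.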

  3. The Feldenkrais Method: A Dynamic Approach to Changing Motor Behavior.

    ERIC Educational Resources Information Center

    Buchanan, Patricia A.; Ulrich, Beverly D.

    2001-01-01

    Describes the Feldenkrais Method of somatic education, noting parallels with a dynamic systems theory (DST) approach to motor behavior. Feldenkrais uses movement and perception to foster individualized improvement in function. DST explains that a human-environment system continually adapts to changing conditions and assembles behaviors…

  4. Method ruggedness studies incorporating a risk based approach: a tutorial.

    PubMed

    Borman, Phil J; Chatfield, Marion J; Damjanov, Ivana; Jackson, Patrick

    2011-10-10

    This tutorial explains how well-thought-out application of design and analysis methodology, combined with risk assessment, leads to improved assessment of method ruggedness. The authors define analytical method ruggedness as an experimental evaluation of noise factors such as analyst, instrument or stationary-phase batch. Ruggedness testing is usually performed upon transfer of a method to another laboratory; however, it can also be employed during method development when an assessment of the method's inherent variability is required. The use of a ruggedness study provides a more rigorous method for assessing method precision than a simple comparative intermediate-precision study, which is typically performed as part of method validation. Prior to designing a ruggedness study, factors that are likely to have a significant effect on the performance of the method should be identified (via a risk assessment) and controlled where appropriate. Noise factors that are not controlled are considered for inclusion in the study. The purpose of the study should be to challenge the method and identify whether any noise factors significantly affect the method's precision. The results from the study are first used to identify any special-cause variability due to specific attributable circumstances. Second, common-cause variability is apportioned to determine which factors are responsible for most of the variability. The total common-cause variability can then be used to assess whether the method's precision requirements are achievable. The approach used to design and analyse method ruggedness studies is covered in this tutorial using a real example.
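
The apportioning of common-cause variability can be illustrated with a balanced one-way random-effects analysis; the analyst-to-analyst and repeatability standard deviations in the simulated data are illustrative, not the tutorial's real example:

```python
import numpy as np

def variance_components(groups):
    """One-way random-effects ANOVA: split total variation into a
    between-group component (e.g. analyst-to-analyst) and repeatability."""
    k = len(groups)                      # number of analysts
    n = len(groups[0])                   # replicates per analyst (balanced)
    means = np.array([np.mean(g) for g in groups])
    ms_between = n * np.sum((means - means.mean()) ** 2) / (k - 1)
    ms_within = np.mean([np.var(g, ddof=1) for g in groups])
    var_within = ms_within                           # repeatability variance
    var_between = max(0.0, (ms_between - ms_within) / n)
    return var_between, var_within

rng = np.random.default_rng(0)
# Simulated assay: analyst offsets sd=0.5, repeatability sd=0.2, 6 replicates.
data = [100 + rng.normal(0, 0.5) + rng.normal(0, 0.2, 6) for _ in range(8)]
vb, vw = variance_components(data)
# vb + vw estimates the total common-cause variance of the method.
```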

  5. Definitive localization of intracellular proteins: Novel approach using CRISPR-Cas9 genome editing, with glucose 6-phosphate dehydrogenase as a model.

    PubMed

    Spencer, Netanya Y; Yan, Ziying; Cong, Le; Zhang, Yulong; Engelhardt, John F; Stanton, Robert C

    2016-02-01

    Studies to determine subcellular localization and translocation of proteins are important because subcellular localization of proteins affects every aspect of cellular function. Such studies frequently utilize mutagenesis to alter amino acid sequences hypothesized to constitute subcellular localization signals. These studies often utilize fluorescent protein tags to facilitate live cell imaging. These methods are excellent for studies of monomeric proteins, but for multimeric proteins, they are unable to rule out artifacts from native protein subunits already present in the cells. That is, native monomers might direct the localization of fluorescent proteins with their localization signals obliterated. We have developed a method for ruling out such artifacts, and we use glucose 6-phosphate dehydrogenase (G6PD) as a model to demonstrate the method's utility. Because G6PD is capable of homodimerization, we employed a novel approach to remove interference from native G6PD. We produced a G6PD knockout somatic (hepatic) cell line using CRISPR-Cas9 mediated genome engineering. Transfection of G6PD knockout cells with G6PD fluorescent mutant proteins demonstrated that the major subcellular localization sequences of G6PD are within the N-terminal portion of the protein. This approach sets a new gold standard for similar studies of subcellular localization signals in all homodimerization-capable proteins.

  6. Vibration localization in mono- and bi-coupled bladed disks - A transfer matrix approach

    NASA Technical Reports Server (NTRS)

    Ottarsson, Gisli; Pierre, Christophe

    1993-01-01

    A transfer matrix approach to the analysis of the dynamics of mistuned bladed disks is presented. The study focuses on mono-coupled systems, in which each blade is coupled to its two neighboring blades, and bi-coupled systems, in which each blade is coupled to its four nearest neighbors. Transfer matrices yield the free dynamics, both the characteristic free waves and the normal modes, in closed form for tuned assemblies. Mistuned assemblies are represented by random transfer matrices, and an examination of the effect of mistuning on harmonic wave propagation yields the localization factor, the average rate of spatial wave amplitude decay per blade, in the mono-coupled assembly. Based on a comparison of the wave propagation characteristics of the mono- and bi-coupled assemblies, important conclusions are drawn about the effect of the additional coupling coordinate on the sensitivity to mistuning and the strength of mode localization predicted by a mono-coupled analysis.
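
    The localization factor can be estimated numerically as the Lyapunov exponent of a product of random transfer matrices. The sketch below uses a generic mono-coupled spring-mass chain as a stand-in model (the matrix form, frequency and disorder strength are illustrative assumptions, not the paper's blade model):

```python
import numpy as np

# Localization factor of a mono-coupled chain via products of random 2x2
# transfer matrices. Each bay satisfies -u[n+1] + (2 - w2*m[n])*u[n] - u[n-1] = 0
# with unit masses m[n] = 1 + delta[n] randomly mistuned.

def localization_factor(n_bays, disorder, w2=2.0, seed=0):
    """Average spatial decay rate per bay: gamma = (1/N) log ||T_N ... T_1 x||."""
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_bays):
        delta = rng.uniform(-disorder, disorder)   # bay-to-bay mistuning
        T = np.array([[2.0 - w2 * (1.0 + delta), -1.0],
                      [1.0, 0.0]])
        x = T @ x
        norm = np.linalg.norm(x)
        log_growth += np.log(norm)   # renormalize to avoid overflow
        x /= norm
    return log_growth / n_bays

gamma_tuned = localization_factor(20000, disorder=0.0)
gamma_mistuned = localization_factor(20000, disorder=1.0)
print(f"tuned: {gamma_tuned:.4f}, mistuned: {gamma_mistuned:.4f}")
```

    In the tuned passband the wave propagates without decay (gamma near zero), while mistuning produces a positive localization factor.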

  7. Non-local total variation method for despeckling of ultrasound images

    NASA Astrophysics Data System (ADS)

    Feng, Jianbin; Ding, Mingyue; Zhang, Xuming

    2014-03-01

    Despeckling of ultrasound images, a very active research topic in medical image processing, plays an important or even indispensable role in subsequent ultrasound image processing. The non-local total variation (NLTV) method has been widely applied to denoising images corrupted by Gaussian noise, but it cannot provide satisfactory restoration results for ultrasound images corrupted by speckle noise. To address this problem, a novel non-local total variation despeckling method is proposed for speckle reduction. In the proposed method, the non-local gradient is computed on the images restored by the optimized Bayesian non-local means (OBNLM) method and is introduced into the total variation method to suppress speckle in ultrasound images. The restoration performance of the proposed method is compared with that of such state-of-the-art despeckling methods as the squeeze box filter (SBF), the non-local means (NLM) method and the OBNLM method. Quantitative comparisons on synthetic speckled images show that the proposed method provides a higher peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) than the compared despeckling methods. Subjective visual comparisons on synthetic and real ultrasound images demonstrate that the proposed method outperforms the compared algorithms in noise reduction, artifact avoidance, and edge and texture preservation.
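
    As a point of reference for the total variation part of the method, the sketch below implements plain (local-gradient, Gaussian-noise) TV denoising by gradient descent on a smoothed TV energy; the paper's non-local gradient and OBNLM pre-restoration are not reproduced, and all parameter values are illustrative:

```python
import numpy as np

def tv_denoise(f, lam=0.3, tau=0.1, eps=0.5, iters=200):
    """Minimize TV_eps(u) + lam/2 * ||u - f||^2 by explicit gradient descent."""
    u = f.copy()
    for _ in range(iters):
        # forward differences (periodic boundaries for simplicity)
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # backward-difference divergence of the normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + tau * (div - lam * (u - f))
    return u

rng = np.random.default_rng(1)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0                      # piecewise-constant test image
noisy = clean + rng.normal(0, 0.3, clean.shape)
restored = tv_denoise(noisy)

mse = lambda a, b: float(((a - b) ** 2).mean())
print(f"noisy MSE: {mse(noisy, clean):.4f}, restored MSE: {mse(restored, clean):.4f}")
```

    Replacing the local gradient in `tv_denoise` with a non-local one computed on a pre-restored image is, in essence, the modification the paper proposes.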

  8. Optical coherence tomography as approach for the minimal invasive localization of the germinal disc in ovo before chicken sexing

    NASA Astrophysics Data System (ADS)

    Burkhardt, Anke; Geissler, Stefan; Koch, Edmund

    2010-04-01

    In most industrialized countries, large numbers of newly hatched male layer chickens are killed immediately after hatching by maceration or gassing. The reason for killing most male chickens of egg-producing breeds is their slow growth rate compared to breeds specialized for meat production. A newly laid egg already contains a small disc of cells on the surface of the yolk known as the blastoderm. This region is about 4-5 mm in diameter and carries the information that determines whether the chick becomes male or female, and hence allows sexing of the chicks by spectroscopy and other methods in the unincubated state. Different imaging methods such as sonography, 3D X-ray micro computed tomography and magnetic resonance imaging have been used to localize the blastoderm, but all were found to be impractical for different reasons. Optical coherence tomography (OCT) enables micrometer-scale, subsurface imaging of biological tissue and could therefore be a suitable technique for accurate localization. The intention of this study is to determine whether OCT can be an appropriate approach for precise in ovo localization.

  9. Fourth order exponential time differencing method with local discontinuous Galerkin approximation for coupled nonlinear Schrodinger equations

    SciTech Connect

    Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong

    2015-01-23

    In this paper, we study a local discontinuous Galerkin method combined with fourth-order exponential time differencing Runge-Kutta time discretization and a fourth-order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and prove error estimates for the semi-discrete methods applied to the linear Schrödinger equation. The numerical methods are shown to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
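
    The conservation property at stake can be illustrated with a much simpler scheme: a Strang split-step Fourier solver for the focusing cubic Schrödinger equation (not the paper's LDG/exponential-integrator discretization; grid and step sizes are illustrative). Both sub-steps preserve the discrete mass exactly, so the drift is at round-off level:

```python
import numpy as np

# Mass conservation for the focusing cubic Schroedinger equation
#   i u_t + u_xx + 2|u|^2 u = 0,
# solved with a Strang split-step Fourier scheme. The exact one-soliton
# solution is u = sech(x) e^{it}, so |u| should stay close to sech(x).

N, L = 256, 40.0
x = (np.arange(N) - N // 2) * (L / N)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
dt, steps = 0.01, 200

u = 1.0 / np.cosh(x) + 0j
mass0 = np.sum(np.abs(u) ** 2) * (L / N)

for _ in range(steps):
    u *= np.exp(1j * dt * np.abs(u) ** 2)                        # half nonlinear step
    u = np.fft.ifft(np.exp(-1j * k ** 2 * dt) * np.fft.fft(u))   # full linear step
    u *= np.exp(1j * dt * np.abs(u) ** 2)                        # half nonlinear step

mass = np.sum(np.abs(u) ** 2) * (L / N)
shape_err = np.max(np.abs(np.abs(u) - 1.0 / np.cosh(x)))
print(f"relative mass drift: {abs(mass - mass0) / mass0:.2e}")
print(f"soliton shape error: {shape_err:.2e}")
```

    Each sub-step multiplies by a unit-modulus factor (pointwise or in Fourier space), which is why the discrete mass is conserved exactly here.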

  10. Stellar mass functions: methods, systematics and results for the local Universe

    NASA Astrophysics Data System (ADS)

    Weigel, Anna K.; Schawinski, Kevin; Bruderer, Claudio

    2016-06-01

    We present a comprehensive method for determining stellar mass functions, and apply it to samples in the local Universe. We combine the classical 1/Vmax approach with STY, a parametric maximum likelihood method, and step-wise maximum likelihood, a non-parametric maximum likelihood technique. In the parametric approach, we assume that the stellar mass function can be modelled by either a single or a double Schechter function, and we use a likelihood ratio test to determine which model provides a better fit to the data. We discuss how the stellar mass completeness as a function of z biases the three estimators and how it can affect, in particular, the low-mass end of the stellar mass function. We apply our method to Sloan Digital Sky Survey DR7 data in the redshift range from 0.02 to 0.06. We find that the entire galaxy sample is best described by a double Schechter function with the following parameters: log (M*/M⊙) = 10.79 ± 0.01, log (Φ*_1/(h^3 Mpc^{-3})) = -3.31 ± 0.20, α1 = -1.69 ± 0.10, log (Φ*_2/(h^3 Mpc^{-3})) = -2.01 ± 0.28 and α2 = -0.79 ± 0.04. We also use morphological classifications from Galaxy Zoo and halo mass, overdensity, central/satellite, colour and specific star formation rate measurements to split the galaxy sample into over 130 subsamples. We determine and present the stellar mass functions and the best-fitting Schechter function parameters for each of these subsamples.
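
    A minimal sketch of evaluating the double Schechter fit with the quoted best-fitting parameters (the helper name is ours; units follow the abstract):

```python
import numpy as np

# Double Schechter stellar mass function per dex of log-mass:
#   Phi(logM) = ln(10) e^{-mu} [Phi1* mu^(a1+1) + Phi2* mu^(a2+1)],  mu = M/M*,
# with the best-fitting parameters quoted for the full sample.

LOG_MSTAR = 10.79
PHI1, ALPHA1 = 10 ** -3.31, -1.69
PHI2, ALPHA2 = 10 ** -2.01, -0.79

def double_schechter(log_m):
    """Number density per dex, in the paper's h^3 Mpc^-3 units."""
    mu = 10 ** (np.asarray(log_m, dtype=float) - LOG_MSTAR)
    return np.log(10) * np.exp(-mu) * (PHI1 * mu ** (ALPHA1 + 1)
                                       + PHI2 * mu ** (ALPHA2 + 1))

log_m = np.array([9.0, 10.5, 11.5])
print(double_schechter(log_m))
```

    The steep α1 term dominates the low-mass end and the exponential cutoff suppresses the high-mass end, reproducing the familiar double-Schechter shape.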

  11. A Poincaré's approach for plasmonics: the plasmon localization.

    PubMed

    Barchiesi, D; Kremer, E; Mai, V P; Grosges, T

    2008-03-01

    A Poincaré approach is employed to characterize the excitation of a plasmon, which in essence corresponds to a zero of a complex S-matrix. Throughout this work we study the plasmonic behaviour of gold, as this metal is not only frequently used in experimental arrays but also requires an accurate dispersion model to properly excite plasmons. We investigate the plasmonic behaviour of gold nanogratings by means of the Born approximation and the finite element method. A method based on the Poincaré approach is also proposed to optimize this kind of structure.

  12. Adhesive restorations, centric relation, and the Dahl principle: minimally invasive approaches to localized anterior tooth erosion.

    PubMed

    Magne, Pascal; Magne, Michel; Belser, Urs C

    2007-01-01

    The purpose of this article is to review biomechanical and occlusal principles that could help optimize the conservative treatment of severely eroded and worn anterior dentition using adhesive restorations. It appears that enamel and dentin bonding, through the combined use of resin composites (on the palatal surface) and indirect porcelain veneers (on the facial/incisal surfaces) can lead to an optimal result from both esthetic and functional/biomechanical aspects. Cases of deep bite combined with palatal erosion and wear can be particularly challenging. A simplified approach is proposed through the use of an occlusal therapy combining centric relation and the Dahl principle to create anterior interocclusal space to reduce the need for more invasive palatal reduction. This approach allows the ultraconservative treatment of localized anterior tooth erosion and wear.

  13. [Communicative approach of Situational Strategic Planning at the local level: health and equity in Venezuela].

    PubMed

    Heredia-Martínez, Henny Luz; Artmann, Elizabeth; Porto, Silvia Marta

    2010-06-01

    The article discusses the results of operationalizing Situational Strategic Planning adapted to the local level in health, considering the communicative approach and equity in a parish in Venezuela. Two innovative criteria were used: estimated health needs and analysis of the actors' potential for participation. The problems identified were compared to the corresponding article on rights in the Venezuelan Constitution. The study measured inequalities using health indicators associated with the selected problems; equity criteria were incorporated into the action proposals and communicative elements. Priority was assigned to the problem of "low case-resolving capacity in the health services network", and five critical points were selected for the action plan, which finally consisted of 6 operations and 21 actions. The article concludes that the combination of epidemiology and planning expands the situational explanation. Incorporating the communicative approach and the equity dimension into Situational Strategic Planning strengthens health management and helps reduce inequality gaps.

  14. Frequency-domain localization of alpha rhythm in humans via a maximum entropy approach

    NASA Astrophysics Data System (ADS)

    Patel, Pankaj; Khosla, Deepak; Al-Dayeh, Louai; Singh, Manbir

    1997-05-01

    Generators of spontaneous human brain activity such as the alpha rhythm may be easier and more accurate to localize in the frequency domain than in the time domain, since these generators are characterized by a specific frequency range. We carried out a frequency-domain analysis of synchronous alpha sources by generating equivalent potential maps using the Fourier transform of each channel of electroencephalographic (EEG) recordings. Since the alpha rhythm recorded by EEG scalp measurements is probably produced by several independent generators, a distributed source imaging approach was considered more appropriate than a model based on a single equivalent current dipole. We used an imaging approach based on a Bayesian maximum entropy technique. Reconstructed sources were superposed on the corresponding anatomy from magnetic resonance imaging. Results from human studies suggest that the reconstructed sources responsible for the alpha rhythm are mainly located in the occipital and parieto-occipital lobes.
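
    The first step, the per-channel Fourier transform, can be sketched on synthetic data: a noisy 10 Hz oscillation whose spectral peak is located inside the 8-13 Hz alpha band (sampling rate, duration and noise level are illustrative assumptions):

```python
import numpy as np

# Frequency-domain view of a single EEG channel: synthesize a noisy 10 Hz
# "alpha" oscillation, take the Fourier transform, and locate the spectral
# peak inside the 8-13 Hz alpha band.

fs, dur = 250.0, 4.0                      # sampling rate (Hz), duration (s)
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(7)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal(t.size)

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)

band = (freqs >= 8.0) & (freqs <= 13.0)   # alpha band
peak_freq = freqs[band][np.argmax(spectrum[band])]
print(f"alpha-band peak at {peak_freq:.2f} Hz")
```

    In the study, the complex Fourier coefficients at the alpha peak across all channels form the equivalent potential map fed to the maximum-entropy reconstruction.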

  15. Advanced Numerical Methods and Software Approaches for Semiconductor Device Simulation

    DOE PAGES

    Carey, Graham F.; Pardhanani, A. L.; Bova, S. W.

    2000-01-01

    In this article we concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of “upwind” and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), “entropy” variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. We have included numerical examples from our recent research tests with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and we emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, we briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratory framework SIERRA.

  16. Advanced numerical methods and software approaches for semiconductor device simulation

    SciTech Connect

    CAREY,GRAHAM F.; PARDHANANI,A.L.; BOVA,STEVEN W.

    2000-03-23

    In this article the authors concisely present several modern strategies that are applicable to drift-dominated carrier transport in higher-order deterministic models such as the drift-diffusion, hydrodynamic, and quantum hydrodynamic systems. The approaches include extensions of upwind and artificial dissipation schemes, generalization of the traditional Scharfetter-Gummel approach, Petrov-Galerkin and streamline-upwind Petrov-Galerkin (SUPG), entropy variables, transformations, least-squares mixed methods and other stabilized Galerkin schemes such as Galerkin least squares and discontinuous Galerkin schemes. The treatment is representative rather than an exhaustive review and several schemes are mentioned only briefly with appropriate reference to the literature. Some of the methods have been applied to the semiconductor device problem while others are still in the early stages of development for this class of applications. The authors include numerical examples from their recent research with some of the methods. A second aspect of the work deals with algorithms that employ unstructured grids in conjunction with adaptive refinement strategies. The full benefits of such approaches have not yet been developed in this application area and the authors emphasize the need for further work on analysis, data structures and software to support adaptivity. Finally, they briefly consider some aspects of software frameworks. These include dial-an-operator approaches such as that used in the industrial simulator PROPHET, and object-oriented software support such as that in the Sandia National Laboratory framework SIERRA.
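
    One concrete ingredient of the Scharfetter-Gummel approach mentioned above is the Bernoulli function B(x) = x/(e^x - 1), which weights the nodal densities in the discrete flux and must be evaluated carefully near x = 0. A hedged numerical sketch (the series cutoff is an illustrative choice):

```python
import numpy as np

# Numerically stable evaluation of the Bernoulli function B(x) = x/(e^x - 1)
# used in the Scharfetter-Gummel flux. B satisfies B(0) = 1 and B(-x) - B(x) = x.

def bernoulli(x):
    x = np.asarray(x, dtype=float)
    small = np.abs(x) < 1e-5
    # Taylor series near 0 avoids the 0/0; expm1 keeps accuracy elsewhere.
    series = 1.0 - x / 2.0 + x ** 2 / 12.0
    safe_x = np.where(small, 1.0, x)            # dummy value on the small branch
    exact = safe_x / np.expm1(safe_x)
    return np.where(small, series, exact)

# Sketch of an SG flux between nodes i and i+1 for a normalized potential drop
# dv = (v_{i+1} - v_i)/Vt (sign conventions vary between texts):
#   J ~ (D/h) * (B(dv) * n_{i+1} - B(-dv) * n_i)
x = np.array([-5.0, -1e-9, 0.0, 1e-9, 5.0])
print(bernoulli(x))
```

    The identity B(-x) - B(x) = x is a convenient correctness check for any implementation.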

  17. The Health Role of Local Area Coordinators in Scotland: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Brown, Michael; Karatzias, Thanos; O'Leary, Lisa

    2013-01-01

    The study set out to explore whether local area coordinators (LACs) and their managers view the health role of LACs as an essential component of their work and identify the health-related activities undertaken by LACs in Scotland. A mixed methods cross-sectional phenomenological study involving local authority service managers (n = 25) and LACs (n…

  18. Networks of informal caring: a mixed-methods approach.

    PubMed

    Rutherford, Alasdair; Bowes, Alison

    2014-12-01

    Care for older people is a complex phenomenon, and is an area of pressing policy concern. Bringing together literature on care from social gerontology and economics, we report the findings of a mixed-methods project exploring networks of informal caring. Using quantitative data from the British Household Panel Survey (official survey of British households), together with qualitative interviews with older people and informal carers, we describe differences in informal care networks, and the factors and decision-making processes that have contributed to the formation of the networks. A network approach to care permits both quantitative and qualitative study, and the approach can be used to explore many important questions.

  19. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.
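
    A minimal sketch of interpolation with the non-compactly supported cubic RBF (here plain 1D interpolation with a linear polynomial tail, standard for conditionally positive definite kernels, not the MLPG trial-function construction; node counts and the test function are illustrative):

```python
import numpy as np

# 1D interpolation with the cubic radial basis function phi(r) = r^3,
# augmented with a linear polynomial to handle conditional positive
# definiteness of the kernel.

def cubic_rbf_fit(xc, fc):
    n = xc.size
    r = np.abs(xc[:, None] - xc[None, :])
    A = np.zeros((n + 2, n + 2))
    A[:n, :n] = r ** 3
    P = np.column_stack([np.ones(n), xc])   # linear polynomial block
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([fc, np.zeros(2)])
    return np.linalg.solve(A, rhs)          # RBF weights + polynomial coeffs

def cubic_rbf_eval(coef, xc, x):
    n = xc.size
    r = np.abs(x[:, None] - xc[None, :])
    return r ** 3 @ coef[:n] + coef[n] + coef[n + 1] * x

xc = np.linspace(0.0, 2 * np.pi, 15)
coef = cubic_rbf_fit(xc, np.sin(xc))
xe = np.linspace(0.0, 2 * np.pi, 200)
err = np.max(np.abs(cubic_rbf_eval(coef, xc, xe) - np.sin(xe)))
print(f"max interpolation error: {err:.2e}")
```

    One dense solve per patch, with no derivative of a moving-least-squares weight to assemble, is the computational simplification the abstract points to.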

  20. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level-independent variables were defined, teaching approach (contextual and non-contextual…

  1. Localization of causal locus in the genome of the brown macroalga Ectocarpus: NGS-based mapping and positional cloning approaches.

    PubMed

    Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte

    2015-01-01

    Mutagenesis is the only process by which unpredicted biological gene function can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental tools were not available at the time. Today, progress in macroalgal genomics and the judicious choice of suitable genetic models make the identification of mutated genes possible. This article presents a comparative study of two methods for identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were assembled, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how a narrower localization results from the combination of the two methods. Advantages and drawbacks of these two approaches, as well as their potential transfer to other macroalgae, are discussed.

  2. Localization of causal locus in the genome of the brown macroalga Ectocarpus: NGS-based mapping and positional cloning approaches

    PubMed Central

    Billoud, Bernard; Jouanno, Émilie; Nehr, Zofia; Carton, Baptiste; Rolland, Élodie; Chenivesse, Sabine; Charrier, Bénédicte

    2015-01-01

    Mutagenesis is the only process by which unpredicted biological gene function can be identified. Although several macroalgal developmental mutants have been generated, their causal mutations were never identified, because the necessary experimental tools were not available at the time. Today, progress in macroalgal genomics and the judicious choice of suitable genetic models make the identification of mutated genes possible. This article presents a comparative study of two methods for identifying a genetic locus in the brown alga Ectocarpus siliculosus: positional cloning and Next-Generation Sequencing (NGS)-based mapping. Once the necessary preliminary experimental tools were assembled, we tested both analyses on an Ectocarpus morphogenetic mutant. We show how a narrower localization results from the combination of the two methods. Advantages and drawbacks of these two approaches, as well as their potential transfer to other macroalgae, are discussed. PMID:25745426

  3. Assessing and evaluating multidisciplinary translational teams: a mixed methods approach.

    PubMed

    Wooten, Kevin C; Rose, Robert M; Ostir, Glenn V; Calhoun, William J; Ameredes, Bill T; Brasier, Allan R

    2014-03-01

    A case report illustrates how multidisciplinary translational teams can be assessed using outcome, process, and developmental types of evaluation using a mixed-methods approach. Types of evaluation appropriate for teams are considered in relation to relevant research questions and assessment methods. Logic models are applied to scientific projects and team development to inform choices between methods within a mixed-methods design. Use of an expert panel is reviewed, culminating in consensus ratings of 11 multidisciplinary teams and a final evaluation within a team-type taxonomy. Based on team maturation and scientific progress, teams were designated as (a) early in development, (b) traditional, (c) process focused, or (d) exemplary. Lessons learned from data reduction, use of mixed methods, and use of expert panels are explored.

  4. A new approach to constructing efficient stiffly accurate EPIRK methods

    NASA Astrophysics Data System (ADS)

    Rainwater, G.; Tokman, M.

    2016-10-01

    The structural flexibility of the exponential propagation iterative methods of Runge-Kutta type (EPIRK) enables construction of particularly efficient exponential time integrators. While the EPIRK methods have been shown to perform well on stiff problems, all of the schemes proposed up to now have been derived using classical order conditions. In this paper we extend the stiff order conditions and the convergence theory developed for the exponential Rosenbrock methods to the EPIRK integrators. We derive stiff order conditions for the EPIRK methods and develop algorithms to solve them to obtain specific schemes. Moreover, we propose a new approach to constructing particularly efficient EPIRK integrators that are optimized to work with an adaptive Krylov algorithm. We use a set of numerical examples to illustrate the computational advantages that the newly constructed EPIRK methods offer compared to previously proposed exponential integrators.
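
    The advantage of exponential integrators that motivates this construction can be seen on a toy stiff linear system, where the simplest exponential scheme (exponential Euler) is exact for any step size while explicit Euler is unstable (the matrix and step size are illustrative; EPIRK schemes target the general nonlinear case):

```python
import numpy as np

# For the linear system u' = A u with A = diag(-1, -1000), the exponential
# Euler step u_{n+1} = exp(h A) u_n is exact for any step size h, while
# explicit Euler is unstable once |h * lambda| > 2.

lam = np.array([-1.0, -1000.0])
u0 = np.array([1.0, 1.0])
h, steps = 0.01, 50                      # h*lambda = -10 for the stiff mode

u_exp = u0.copy()
u_euler = u0.copy()
for _ in range(steps):
    u_exp = np.exp(h * lam) * u_exp      # exp(hA) is diagonal here
    u_euler = u_euler + h * lam * u_euler

exact = np.exp(h * steps * lam) * u0
print("exponential Euler error:", np.max(np.abs(u_exp - exact)))
print("explicit Euler stiff component:", u_euler[1])
```

    For large nonlinear systems the matrix exponential is never formed explicitly; Krylov approximations of its action on vectors are used instead, which is what the adaptive Krylov algorithm mentioned above optimizes.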

  5. Finite Element approach for Density Functional Theory calculations on locally refined meshes

    SciTech Connect

    Fattebert, J; Hornung, R D; Wissink, A M

    2007-02-23

    We present a quadratic Finite Element approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to calculations on small nanoclusters are presented.

  6. Finite Elements approach for Density Functional Theory calculations on locally refined meshes

    SciTech Connect

    Fattebert, J; Hornung, R D; Wissink, A M

    2006-03-27

    We present a quadratic Finite Elements approach to discretize the Kohn-Sham equations on structured non-uniform meshes. A multigrid FAC preconditioner is proposed to iteratively solve the equations by an accelerated steepest descent scheme. The method was implemented using SAMRAI, a parallel software infrastructure for general AMR applications. Examples of applications to calculations on small nanoclusters are presented.

  7. [CRITICAL VIEWPOINT ON THE CURRENT APPROACH OF LOCALIZED PANCREATIC DUCTAL ADENOCARCINOMA].

    PubMed

    Van Daele, D; Martinive, P; Loly, C; Polus, M; Collignon, J; Gast, P; Louis, E; Van Laethem, J-L

    2015-11-01

    Surgical resection followed by chemotherapy is the current standard of care for localized pancreatic ductal adenocarcinoma deemed resectable. Despite better selection of surgical candidates and the current performance of expert teams, the proportion of patients with prolonged survival has not improved over the last three decades. The morphological determinants of resectability have limitations. In the future, only a better understanding of the biological process, earlier diagnosis of truly localized disease and more efficient systemic therapies may lead to a better prognosis. Meanwhile, how to take into account the prognostic factors associated with a lower chance of cure is currently a matter of debate. The optimal therapeutic sequence, surgery-first or neoadjuvant, is controversial. The theoretical advantages of preoperative chemotherapy, possibly combined with chemoradiation, have been demonstrated in other tumours and are applicable to pancreatic cancer without excess operative mortality or early progression and, on the contrary, with positive survival data. The completion rates of multimodal therapy favour the preoperative approach, which also offers the opportunity to select the best candidates for surgical resection.

  8. Using archaeogenomic and computational approaches to unravel the history of local adaptation in crops

    PubMed Central

    Allaby, Robin G.; Gutaker, Rafal; Clarke, Andrew C.; Pearson, Neil; Ware, Roselyn; Palmer, Sarah A.; Kitchen, James L.; Smith, Oliver

    2015-01-01

    Our understanding of the evolution of domestication has changed radically in the past 10 years, from a relatively simplistic rapid-origin scenario to a protracted, complex process in which plants adapted to the human environment. The adaptation of plants continued as the human environment changed with the expansion of agriculture from its centres of origin. Using archaeogenomics and computational models, we can observe genome evolution directly and understand how plants adapted to the human environment and to the regional conditions to which agriculture expanded. We have applied various archaeogenomic approaches as exemplars to study the local adaptation of barley for drought resistance at Qasr Ibrim, Egypt. We show the utility of DNA capture, ancient RNA, methylation patterns and DNA from charred remains of archaeobotanical samples from low latitudes, where preservation conditions restrict ancient DNA research to a Holocene timescale. The genome-level analyses that are now possible, and the complexity of local adaptation as an evolutionary process, mean that plant studies are set to move to the genome level and to account for the interaction of genes under selection in systems-level approaches. In this way we can understand how plants adapted rapidly as agriculture expanded across many latitudes. PMID:25487329

  9. Efficacy assessment of local doxycycline treatment in periodontal patients using multivariate chemometric approach.

    PubMed

    Bogdanovska, Liljana; Poceva Panovska, Ana; Nakov, Natalija; Zafirova, Marija; Popovska, Mirjana; Dimitrovska, Aneta; Petkovska, Rumenka

    2016-08-25

    The aim of our study was the application of chemometric algorithms for multivariate data analysis to the efficacy assessment of local periodontal treatment with doxycycline (DOX). Treatment efficacy was evaluated by monitoring inflammatory biomarkers in gingival crevicular fluid (GCF) samples and clinical indices before and after the local treatment, as well as by determination of the DOX concentration in GCF after the local treatment. The experimental values from these determinations were submitted to several chemometric algorithms: principal component analysis (PCA), partial least squares discriminant analysis (PLS-DA) and orthogonal projection to latent structures-discriminant analysis (OPLS-DA). The data structure and the mutual relations of the selected variables were thoroughly investigated by PCA. The PLS-DA model identified the variables responsible for discrimination between the classes of data before and after DOX treatment. The OPLS-DA model compared the efficacy of two medications commonly used in periodontal treatment, chlorhexidine (CHX) and DOX, at the same time providing insight into their mechanisms of action. The obtained results indicate that the application of multivariate chemometric algorithms is a valuable approach for the assessment of treatment efficacy. PMID:27283484
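
    The first of the listed algorithms, PCA, can be sketched via the SVD of the mean-centred data matrix; the synthetic data below stand in for the biomarker and clinical-index measurements (dimensions and noise level are illustrative assumptions):

```python
import numpy as np

# Principal component analysis by SVD of the mean-centred data matrix.
# Synthetic correlated data with one dominant latent factor stand in for
# the study's GCF biomarker measurements.

rng = np.random.default_rng(3)
latent = rng.normal(size=(40, 1))                    # one dominant factor
X = latent @ rng.normal(size=(1, 5)) + 0.1 * rng.normal(size=(40, 5))

Xc = X - X.mean(axis=0)                              # mean-centre columns
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

scores = U * s                                       # sample coordinates (PCs)
explained = s ** 2 / np.sum(s ** 2)                  # variance ratio per PC
print("explained variance ratios:", np.round(explained, 3))
```

    Plotting the first two columns of `scores` is the usual way to inspect the data structure before fitting supervised models such as PLS-DA.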

  10. Partial fault dictionary: A new approach for computer-aided fault localization

    SciTech Connect

    Hunger, A.; Papathanasiou, A.

    1995-12-31

    The approach described in this paper has been developed to address the computation time and problem size of fault localization methodologies in VLSI circuits, in order to reduce the overall time required for fault localization. The reduction of the problem to be solved is combined with the idea of the fault dictionary. In a pre-processing phase, a possibly faulty area is derived using the netlist and the actual test results as input data. The result is a set of cones, one originating from each faulty primary output. In the next step, the best cone is extracted for the fault dictionary methodology according to a heuristic formula. The circuit nodes included in the intersection of the cones are combined into a fault list. This fault list, together with the best cone, can be used by the fault simulator to generate a small and manageable fault dictionary related to one faulty output. In combination with additional algorithms for reducing the stimuli and the netlist, a partial fault dictionary can be set up. This dictionary is valid only for the given faulty device together with the given, reduced stimuli, but offers important benefits: practical results show a reduction of simulation time and fault dictionary size by factors of around 100 or more, depending on the actual circuit and the assumed fault. The list of fault candidates is significantly reduced, and the required number of steps during the localization process is reduced as well.
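
    The cone-intersection step of the pre-processing phase can be sketched as a backward traversal of the netlist from each faulty primary output, followed by a set intersection to obtain the fault candidates (the toy netlist and node names are hypothetical):

```python
# Derive the fan-in cone of each faulty primary output by backward traversal
# of the netlist, then intersect the cones to obtain the candidate fault list.

def fanin_cone(netlist, output):
    """All nodes that can drive `output` (including the output itself)."""
    cone, stack = set(), [output]
    while stack:
        node = stack.pop()
        if node in cone:
            continue
        cone.add(node)
        stack.extend(netlist.get(node, []))   # primary inputs have no drivers
    return cone

# gate -> list of its input nodes (a hypothetical toy netlist)
netlist = {
    "out1": ["g1", "g2"],
    "out2": ["g2", "g3"],
    "g1": ["a", "b"],
    "g2": ["b", "c"],
    "g3": ["c", "d"],
}

faulty_outputs = ["out1", "out2"]
cones = [fanin_cone(netlist, o) for o in faulty_outputs]
candidates = set.intersection(*cones)
print(sorted(candidates))
```

    Only the nodes in `candidates` need to be fault-simulated, which is the source of the reduction in dictionary size the abstract reports.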

  11. A multilevel approach for minimum weight structural design including local and system buckling constraints

    NASA Technical Reports Server (NTRS)

    Schmit, L. A., Jr.; Ramanathan, R. K.

    1977-01-01

    A rational multilevel approach for minimum weight structural design of truss and wing structures including local and system buckling constraints is presented. Overall proportioning of the structure is achieved at the system level subject to strength, displacement and system buckling constraints, while the detailed component designs are carried out separately at the component level satisfying local buckling constraints. Total structural weight is taken to be the objective function at the system level while employing the change in the equivalent system stiffness of the component as the component level objective function. Finite element analysis is used to predict static response while system buckling behavior is handled by incorporating a geometric stiffness matrix capability. Buckling load factors and the corresponding mode shapes are obtained by solving the eigenvalue problem associated with the assembled elastic stiffness and geometric stiffness matrices for the structural system. At the component level various local buckling failure modes are guarded against using semi-empirical formulas. Mathematical programming techniques are employed at both the system and component level.

  12. The contour method: a new approach in experimental mechanics

    SciTech Connect

    Prime, Michael B

    2009-01-01

    The recently developed contour method can measure complex residual-stress maps in situations where other measurement methods cannot. This talk first describes the principle of the contour method. A part is cut in two using a precise and low-stress cutting technique such as electric discharge machining. The contour of the resulting new surface, which will not be flat if residual stresses are relaxed by the cutting, is then measured. Finally, a conceptually simple finite element analysis determines the original residual stresses from the measured contour. Next, this talk gives several examples of applications. The method is validated by comparison with neutron diffraction measurements in an indented steel disk and in a friction stir weld between dissimilar aluminum alloys. Several applications demonstrate the power of the contour method: large aluminum forgings, railroad rails, and welds. Finally, this talk discusses why the contour method is a significant departure from conventional experimental mechanics. Other relaxation methods, for example hole-drilling, can only measure a 1-D profile of residual stresses, and yet they require a complicated inverse calculation to determine the stresses from the strain data. The contour method gives a 2-D stress map over a full cross-section, yet a direct calculation is all that is needed to reduce the data. The reason for these advantages lies in a subtle but fundamental departure from conventional experimental mechanics. Applying new technology to old methods will not by itself give similar advances; the new approach, however, also introduces new errors.

  13. Meshless local integral equation method for two-dimensional nonlocal elastodynamic problems

    NASA Astrophysics Data System (ADS)

    Huang, X. J.; Wen, P. H.

    2016-08-01

    This paper presents the meshless local integral equation method (LIEM) for nonlocal analyses of two-dimensional dynamic problems based on Eringen's model. A unit test function is used in the local weak-form of the governing equation and, by applying the divergence theorem to the weak-form, local boundary-domain integral equations are derived. Radial Basis Function (RBF) approximations are used to represent the displacements. The Newmark method is employed to carry out the time marching approximation. Two numerical examples are presented to demonstrate the application of the time-domain technique to nonlocal elastodynamic problems.
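
    The Newmark time marching mentioned above can be sketched for a single undamped degree of freedom with the average-acceleration parameters (beta = 1/4, gamma = 1/2); the system values below are illustrative, not from the paper:

```python
import math

def newmark_sdof(m, k, x0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Newmark-beta integration of m*x'' + k*x = 0 (undamped, unforced)."""
    x, v = x0, v0
    a = -k * x / m                      # initial acceleration from equilibrium
    keff = k + m / (beta * dt**2)       # effective stiffness (constant dt)
    for _ in range(nsteps):
        rhs = m * (x / (beta * dt**2) + v / (beta * dt)
                   + (1.0 / (2.0 * beta) - 1.0) * a)
        x_new = rhs / keff
        a_new = (x_new - x - dt * v - dt**2 * (0.5 - beta) * a) / (beta * dt**2)
        v = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        x, a = x_new, a_new
    return x, v

# Oscillator with m = 1, k = 4 (omega = 2 rad/s): exact solution x(t) = cos(2t).
x1, v1 = newmark_sdof(1.0, 4.0, 1.0, 0.0, dt=0.01, nsteps=100)
```

With these parameters the scheme is unconditionally stable and second-order accurate, so `x1` tracks cos(2.0) closely after one second of simulated time.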

  14. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    NASA Technical Reports Server (NTRS)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction)--which we proposed recently. These methods are tested on two examples of a bidimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  15. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits provided by the system favor a better performance of construction projects. PMID:24453925

  16. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The described benefits provided by the system favor a better performance of construction projects.

  17. Methods for Sight Word Recognition in Kindergarten: Traditional Flashcard Method vs. Multisensory Approach

    ERIC Educational Resources Information Center

    Phillips, William E.; Feng, Jay

    2012-01-01

    A quasi-experimental action research with a pretest-posttest same subject design was implemented to determine if there is a different effect of the flash card method and the multisensory approach on kindergarteners' achievement in sight word recognition, and which method is more effective if there is any difference. Instrumentation for pretest and…

  18. The Rise and Attenuation of the Basic Education Programme (BEP) in Botswana: A Global-Local Dialectic Approach

    ERIC Educational Resources Information Center

    Tabulawa, Richard

    2011-01-01

    Using a global-local dialectic approach, this paper traces the rise of the basic education programme in the 1980s and 1990s in Botswana and its subsequent attenuation in the 2000s. Amongst the local forces that led to the rise of BEP were Botswana's political project of nation-building; the country's dire human resources situation in the decades…

  19. A finite-volume Eulerian-Lagrangian localized adjoint method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods for solute transport problems that are dominated by advection. FVELLAM systematically conserves mass globally with all types of boundary conditions. Integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking of characteristic lines intersecting inflow boundaries. FVELLAM extends previous results by obtaining mass conservation locally on Lagrangian space-time elements. -from Authors
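
    The forward-tracking step at the heart of FVELLAM (quadrature points carrying mass are advected to the next time level and re-binned into cells) can be illustrated in one dimension. This sketch shows only the mass-conservative tracking idea, with an assumed uniform velocity and periodic domain, not the full FVELLAM discretization:

```python
import numpy as np

# 1-D periodic domain [0, 1) with uniform cells and constant velocity.
ncell, u, dt = 50, 0.25, 0.1
dx = 1.0 / ncell
centers = (np.arange(ncell) + 0.5) * dx
conc = np.exp(-(centers - 0.3)**2 / 0.005)   # initial concentration profile
mass = conc * dx                             # mass carried by each cell

# Quadrature points at cell centres are tracked forward along characteristics,
# and each point's mass is deposited into the cell where it lands.
pts_fwd = (centers + u * dt) % 1.0
dest = (pts_fwd / dx).astype(int)
mass_new = np.zeros(ncell)
np.add.at(mass_new, dest, mass)              # re-binning conserves global mass
conc_new = mass_new / dx
```

Because mass is moved rather than interpolated, the global total is conserved to machine precision, which is the conservation property the abstract emphasizes.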

  20. The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems

    NASA Technical Reports Server (NTRS)

    Cockburn, Bernardo; Shu, Chi-Wang

    1997-01-01

    In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods are an extension of the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries, for convection dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L(sup 2)-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.

  1. A multistate local coupled cluster CC2 response method based on the Laplace transform.

    PubMed

    Kats, Danylo; Schütz, Martin

    2009-09-28

    A new Laplace transform based multistate local CC2 response method for calculating excitation energies of extended molecular systems is presented. By virtue of the Laplace transform trick, the eigenvalue problem involving the local CC2 Jacobian is partitioned along the doubles-doubles block (which is diagonal in the parental canonical method) without losing the sparsity in the integral, amplitude, and amplitude response supermatrices. Hence, only an effective eigenvalue problem involving singles vectors has to be solved, while the doubles part can be computed on-the-fly. Within this framework, a multistate treatment of excited states with state-specific and adaptive local approximations imposed on the doubles part is straightforwardly possible. Furthermore, in the context of the density fitting approximation of the two-electron integrals, a procedure is proposed to specify the local approximation, i.e., the restricted pair lists and domains, on the basis of an analysis of the object to be approximated itself. Performance and accuracy of the new Laplace transformed density fitted local CC2 (LT-DF-LCC2) response method are tested for a set of different test molecules and states. It turns out that LT-DF-LCC2 response is much more robust than the local CC2 response method proposed earlier, which failed to find some excited states in difficult cases.

  2. A multistate local coupled cluster CC2 response method based on the Laplace transform

    NASA Astrophysics Data System (ADS)

    Kats, Danylo; Schütz, Martin

    2009-09-01

    A new Laplace transform based multistate local CC2 response method for calculating excitation energies of extended molecular systems is presented. By virtue of the Laplace transform trick, the eigenvalue problem involving the local CC2 Jacobian is partitioned along the doubles-doubles block (which is diagonal in the parental canonical method) without losing the sparsity in the integral, amplitude, and amplitude response supermatrices. Hence, only an effective eigenvalue problem involving singles vectors has to be solved, while the doubles part can be computed on-the-fly. Within this framework, a multistate treatment of excited states with state-specific and adaptive local approximations imposed on the doubles part is straightforwardly possible. Furthermore, in the context of the density fitting approximation of the two-electron integrals, a procedure is proposed to specify the local approximation, i.e., the restricted pair lists and domains, on the basis of an analysis of the object to be approximated itself. Performance and accuracy of the new Laplace transformed density fitted local CC2 (LT-DF-LCC2) response method are tested for a set of different test molecules and states. It turns out that LT-DF-LCC2 response is much more robust than the local CC2 response method proposed earlier, which failed to find some excited states in difficult cases.

  3. Method for local temperature measurement in a nanoreactor for in situ high-resolution electron microscopy.

    PubMed

    Vendelbo, S B; Kooyman, P J; Creemer, J F; Morana, B; Mele, L; Dona, P; Nelissen, B J; Helveg, S

    2013-10-01

    In situ high-resolution transmission electron microscopy (TEM) of solids under reactive gas conditions can be facilitated by microelectromechanical system devices called nanoreactors. These nanoreactors are windowed cells containing nanoliter volumes of gas at ambient pressures and elevated temperatures. However, due to the high spatial confinement of the reaction environment, traditional methods for measuring process parameters, such as the local temperature, are difficult to apply. To address this issue, we devise an electron energy loss spectroscopy (EELS) method that probes the local temperature of the reaction volume under inspection by the electron beam. The local gas density, as measured using quantitative EELS, is combined with the inherent relation between gas density and temperature, as described by the ideal gas law, to obtain the local temperature. Using this method we determined the temperature gradient in a nanoreactor in situ, while the average, global temperature was monitored by a traditional measurement of the electrical resistivity of the heater. The local gas temperatures had a maximum of 56 °C deviation from the global heater values under the applied conditions. The local temperatures, obtained with the proposed method, are in good agreement with predictions from an analytical model. PMID:23831940
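
    The density-to-temperature conversion described above is just the ideal gas law at (locally) uniform pressure: n*T is constant, so a region whose measured gas density is lower than a reference region is proportionally hotter. A minimal sketch with illustrative numbers (not the paper's data):

```python
def local_temperature(T_ref, n_ref, n_local):
    """Ideal-gas estimate at constant pressure: T_local = T_ref * n_ref / n_local.

    T_ref   -- reference temperature in kelvin (e.g. from the heater reading)
    n_ref   -- gas density measured by EELS at the reference location
    n_local -- gas density measured by EELS at the probed location
    """
    return T_ref * n_ref / n_local

# A spot whose EELS density is 20% below the reference reads 25% hotter.
T_spot = local_temperature(773.15, 1.0, 0.8)   # 500 degC heater reference
```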

  4. Weak-localization approach to a 2D electron gas with a spectral node

    NASA Astrophysics Data System (ADS)

    Ziegler, K.; Sinner, A.

    2015-07-01

    We study a weakly disordered 2D electron gas with two bands and a spectral node within the weak-localization approach and compare its results with those of Gaussian fluctuations around the self-consistent Born approximation. The appearance of diffusive modes depends on the type of disorder. In particular, we find for a random gap a diffusive mode only from ladder contributions, whereas for a random scalar potential the diffusive mode is created by ladder and by maximally crossed contributions. The ladder (maximally crossed) contributions correspond to fermionic (bosonic) Gaussian fluctuations. We calculate the conductivity corrections from the density-density Kubo formula and find a good agreement with the experimentally observed V-shape conductivity of graphene.

  5. Localization of incipient tip vortex cavitation using ray based matched field inversion method

    NASA Astrophysics Data System (ADS)

    Kim, Dongho; Seong, Woojae; Choo, Youngmin; Lee, Jeunghoon

    2015-10-01

    Cavitation of a marine propeller is one of the main contributing factors to broadband radiated ship noise. In this research, an algorithm for the source localization of incipient vortex cavitation is suggested. Incipient cavitation is modeled as a monopole-type source, and a matched-field inversion method is applied to find the source position by comparing the spatial correlation between the measured and replicated pressure fields at the receiver array. The accuracy of source localization is improved by a broadband matched-field inversion technique that enhances correlation by incoherently averaging the correlations of individual frequencies. The suggested localization algorithm is verified with a known virtual source and a model test conducted in the Samsung ship model basin cavitation tunnel. It is found that the suggested algorithm enables efficient localization of incipient tip vortex cavitation using a few pressure measurements on the outer hull above the propeller and is practically applicable to the model-scale experiments typically performed in a cavitation tunnel at the early design stage.
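
    The matched-field step can be sketched with a Bartlett processor: for every candidate source position, the normalized correlation between the measured pressure vector and a modeled replica is computed per frequency and averaged incoherently over the band; the candidate with the highest average is the estimated source. The free-field monopole replicas, array geometry, and frequencies below are illustrative assumptions, not the paper's cavitation-tunnel model:

```python
import numpy as np

c = 1500.0                                     # sound speed (m/s), assumed
recv = np.array([[0.0, 0.0], [0.5, 0.0],
                 [1.0, 0.0], [1.5, 0.0]])      # hull-mounted receiver array
grid = [np.array([x, z]) for x in (0.2, 0.6, 1.0) for z in (1.0, 2.0)]
freqs = [800.0, 1000.0, 1200.0]                # incoherent broadband set (Hz)

def replica(src, f):
    """Assumed free-field monopole pressures at the receivers."""
    r = np.linalg.norm(recv - src, axis=1)
    k = 2.0 * np.pi * f / c
    return np.exp(1j * k * r) / r

true_src = grid[3]                             # synthetic "measured" source
data = {f: replica(true_src, f) for f in freqs}

def bartlett(src):
    """Broadband Bartlett power: mean normalized correlation over frequency."""
    vals = []
    for f in freqs:
        w, d = replica(src, f), data[f]
        vals.append(abs(np.vdot(w, d))**2 /
                    (np.vdot(w, w).real * np.vdot(d, d).real))
    return float(np.mean(vals))

best = max(range(len(grid)), key=lambda i: bartlett(grid[i]))
```

Incoherent averaging over frequencies sharpens the peak and suppresses sidelobes, which is the broadband enhancement the abstract refers to.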

  6. Local adaptive approach toward segmentation of microscopic images of activated sludge flocs

    NASA Astrophysics Data System (ADS)

    Khan, Muhammad Burhan; Nisar, Humaira; Ng, Choon Aun; Lo, Po Kim; Yap, Vooi Voon

    2015-11-01

    The activated sludge process is a widely used method to treat domestic and industrial effluents. The conditions of an activated sludge wastewater treatment plant (AS-WWTP) are related to the morphological properties of flocs (microbial aggregates) and filaments, and are required to be monitored for normal operation of the plant. Image processing and analysis is a potentially time-efficient monitoring tool for AS-WWTPs. Local adaptive segmentation algorithms are proposed for bright-field microscopic images of activated sludge flocs. Two basic modules are suggested for Otsu thresholding-based local adaptive algorithms with irregular illumination compensation. The performance of the algorithms has been compared with the state-of-the-art local adaptive algorithms of Sauvola, Bradley, Feng, and c-mean. The comparisons are done using a number of region- and nonregion-based metrics at different microscopic magnifications and quantifications of flocs. The performance metrics show that the proposed algorithms performed better and, in some cases, were comparable to the state-of-the-art algorithms. The performance metrics were also assessed subjectively for their suitability for segmentation of activated sludge images. Region-based metrics such as false negative ratio, sensitivity, and negative predictive value gave inconsistent results as compared to the other segmentation assessment metrics.
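
    The Otsu-based local adaptive idea can be sketched in plain NumPy: compute an Otsu threshold per image block instead of a single global one. This is a generic block-wise sketch, not the authors' exact modules (which also compensate for irregular illumination):

```python
import numpy as np

def otsu_threshold(img):
    """Return the Otsu threshold maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = img.size
    sum_all = float(np.dot(np.arange(256), hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]                 # pixels at or below candidate threshold
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def local_otsu_segment(img, block=32):
    """Binarize each block with its own Otsu threshold (local adaptivity)."""
    out = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            tile = img[i:i + block, j:j + block]
            out[i:i + block, j:j + block] = tile > otsu_threshold(tile)
    return out
```

Per-block thresholds let flocs be separated from an unevenly lit background where a single global threshold would fail.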

  7. An Approach to Estimate the Localized Effects of an Aircraft Crash on a Facility

    SciTech Connect

    Kimura, C; Sanzo, D; Sharirli, M

    2004-04-19

    Aircraft crashes are an element of external events required to be analyzed and documented in facility Safety Analysis Reports (SARs) and Nuclear Explosive Safety Studies (NESSs). This paper discusses the localized effects of an aircraft crash impact into the Device Assembly Facility (DAF) located at the Nevada Test Site (NTS), given that the aircraft hits the facility. This was done to gain insight into the robustness of the DAF and to account for the special features of the DAF that enhance its ability to absorb the effects of an aircraft crash. For the purpose of this paper, localized effects are considered to be only perforation or scabbing of the facility. This paper presents an extension to the aircraft crash risk methodology of Department of Energy (DOE) Standard 3014. This extension applies to facilities that may find it necessary or desirable to estimate the localized effects of an aircraft crash hit on a facility of nonuniform construction or one that is shielded in certain directions by surrounding terrain or buildings. This extension is not proposed as a replacement to the aircraft crash risk methodology of DOE Standard 3014 but rather as an alternate method to cover situations that were not considered.

  8. Remotely actuated localized pressure and heat apparatus and method of use

    NASA Technical Reports Server (NTRS)

    Merret, John B. (Inventor); Taylor, DeVor R. (Inventor); Wheeler, Mark M. (Inventor); Gale, Dan R. (Inventor)

    2004-01-01

    Apparatus and method for the use of a remotely actuated localized pressure and heat apparatus for the consolidation and curing of fiber elements in structures. The apparatus includes members for clamping the desired portion of the fiber elements to be joined, pressure members, and/or heat members. The method is directed to the application and use of the apparatus.

  9. A Local Discontinuous Galerkin Method for the Complex Modified KdV Equation

    SciTech Connect

    Li Wenting; Jiang Kun

    2010-09-30

    In this paper, we develop a local discontinuous Galerkin(LDG) method for solving complex modified KdV(CMKdV) equation. The LDG method has the flexibility for arbitrary h and p adaptivity. We prove the L{sup 2} stability for general solutions.

  10. A local fuzzy method based on “p-strong” community for detecting communities in networks

    NASA Astrophysics Data System (ADS)

    Yi, Shen; Gang, Ren; Yang, Liu; Jia-Li, Xu

    2016-06-01

    In this paper, we propose a local fuzzy method based on the idea of “p-strong” community to detect the disjoint and overlapping communities in networks. In the method, a refined agglomeration rule is designed for agglomerating nodes into local communities, and the overlapping nodes are detected based on the idea of making each community strong. We propose a contribution coefficient to measure the contribution of an overlapping node to each of its belonging communities, and the fuzzy coefficients of the overlapping node can be obtained by normalizing the contribution coefficients over all its belonging communities. The running time of our method is analyzed and varies linearly with network size. We investigate our method on computer-generated networks and real networks. The testing results indicate that the accuracy of our method in detecting disjoint communities is higher than those of the existing local methods and that our method is efficient at detecting overlapping nodes with fuzzy coefficients. Furthermore, the local optimizing scheme used in our method allows us to partly solve the resolution problem of the global modularity. Project supported by the National Natural Science Foundation of China (Grant Nos. 51278101 and 51578149), the Science and Technology Program of Ministry of Transport of China (Grant No. 2015318J33080), the Jiangsu Provincial Post-doctoral Science Foundation, China (Grant No. 1501046B), and the Fundamental Research Funds for the Central Universities, China (Grant No. Y0201500219).

  11. A new inverse approach for the localization and characterization of defects based on compressive experiments

    NASA Astrophysics Data System (ADS)

    Barbarella, E.; Allix, O.; Daghia, F.; Lamon, J.; Jollivet, T.

    2016-06-01

    Compressive tests involving buckling are known to be defect sensitive; nevertheless, to our knowledge, no inverse approach has yet been proposed that uses this property for the localization and characterization of material defects. This is due to geometric imperfections, which greatly influence and even dominate the response of defective parts under compression. In comparison with a system lacking geometric imperfections, the modified system does not present any bifurcation, showing that the nonlinear progressive response is mainly governed by such imperfections. Before implementing any inverse procedure it is necessary to know whether meaningful material-defect information can be extracted from compressive tests on specimens which also have geometric imperfections. To tackle this issue, an equivalent eigenvalue problem, corrected for geometric imperfections, is extracted from the nonlinear response. A dedicated inverse formulation based on the modified constitutive relation error is then constructed which involves only well-posed linear problems. Examples illustrate the potential of the methodology to localize and identify single and multiple defects.

  12. Local interaction simulation approach to modelling nonclassical, nonlinear elastic behavior in solids.

    PubMed

    Scalerandi, Marco; Agostini, Valentina; Delsanto, Pier Paolo; Van Den Abeele, Koen; Johnson, Paul A

    2003-06-01

    Recent studies show that a broad category of materials share "nonclassical" nonlinear elastic behavior much different from "classical" (Landau-type) nonlinearity. Manifestations of "nonclassical" nonlinearity include stress-strain hysteresis and discrete memory in quasistatic experiments, and specific dependencies of the harmonic amplitudes with respect to the drive amplitude in dynamic wave experiments, which are remarkably different from those predicted by the classical theory. These materials have in common soft "bond" elements, where the elastic nonlinearity originates, contained in hard matter (e.g., a rock sample). The bond system normally comprises a small fraction of the total material volume, and can be localized (e.g., a crack in a solid) or distributed, as in a rock. In this paper a model is presented in which the soft elements are treated as hysteretic or reversible elastic units connected in a one-dimensional lattice to elastic elements (grains), which make up the hard matrix. Calculations are performed in the framework of the local interaction simulation approach (LISA). Experimental observations are well predicted by the model, which is now ready both for basic investigations about the physical origins of nonlinear elasticity and for applications to material damage diagnostics.

  13. Identification of inelastic parameters based on deep drawing forming operations using a global-local hybrid Particle Swarm approach

    NASA Astrophysics Data System (ADS)

    Vaz, Miguel; Luersen, Marco A.; Muñoz-Rojas, Pablo A.; Trentin, Robson G.

    2016-04-01

    Application of optimization techniques to the identification of inelastic material parameters has increased substantially in recent years. The complex stress-strain paths and high nonlinearity, typical of this class of problems, require the development of robust and efficient inverse-problem techniques able to account for an irregular topography of the fitness surface. Within this framework, this work investigates the application of the gradient-based Sequential Quadratic Programming method, of the Nelder-Mead downhill simplex algorithm, of Particle Swarm Optimization (PSO), and of a global-local PSO-Nelder-Mead hybrid scheme to the identification of inelastic parameters based on a deep drawing operation. The hybrid technique has been shown to be the best strategy, combining the good PSO performance in approaching the basin of attraction of the global minimum with the efficiency demonstrated by the Nelder-Mead algorithm in obtaining the minimum itself.
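
    The global-local hybrid can be sketched as a coarse PSO stage whose best particle seeds a Nelder-Mead refinement (here via scipy, applied to an illustrative sphere function rather than the authors' deep-drawing objective):

```python
import numpy as np
from scipy.optimize import minimize

def sphere(x):
    """Illustrative stand-in for the material-identification misfit."""
    return float(np.sum(np.asarray(x)**2))

def pso(f, dim, n=20, iters=60, lo=-5.0, hi=5.0, seed=0):
    """Plain global-best PSO; returns the best position found."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g

x0 = pso(sphere, dim=4)                            # global exploration
res = minimize(sphere, x0, method='Nelder-Mead')   # local refinement
```

The PSO stage locates the basin of attraction; Nelder-Mead then converges to the minimum itself, mirroring the division of labor described in the abstract.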

  14. Particle Swarm Optimization Method Based on Chaotic Local Search and Roulette Wheel Mechanism

    NASA Astrophysics Data System (ADS)

    Xia, Xiaohua

    Combining the particle swarm optimization (PSO) technique with chaotic local search (CLS) and a roulette wheel mechanism (RWM), an efficient method for solving constrained nonlinear optimization problems is presented in this paper. PSO can be viewed as the global optimizer, while the CLS and RWM are employed for the local search. Thus, the possibility of exploring a global minimum in problems with many local optima is increased. The search continues until a termination criterion is satisfied. Benefiting from the fast globally converging characteristics of PSO and the effective local search ability of the CLS and RWM, the proposed method can obtain globally optimal results quickly, as tested on six benchmark optimization problems. The improved performance compared with the standard PSO and a genetic algorithm (GA) confirms its validity.
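
    The chaotic local search component can be sketched with a logistic map driving perturbations around the current best solution; this is a generic CLS illustration, assuming the common logistic form z <- 4z(1-z), and it omits the roulette wheel mechanism:

```python
import numpy as np

def sphere(x):
    """Illustrative objective with its minimum at the origin."""
    return float(np.sum(np.asarray(x)**2))

def chaotic_local_search(f, x_start, radius=0.5, iters=200):
    """Refine x_start by sampling logistic-map offsets in [-radius, radius]."""
    x_best = np.asarray(x_start, dtype=float)
    f_best = f(x_best)
    z = np.linspace(0.11, 0.87, x_best.size)   # seeds avoiding fixed points
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)                # logistic map, chaotic at r = 4
        cand = x_best + radius * (2.0 * z - 1.0)   # map [0, 1] to [-r, r]
        fc = f(cand)
        if fc < f_best:                        # greedy acceptance
            x_best, f_best = cand, fc
    return x_best, f_best

x_ref, f_ref = chaotic_local_search(sphere, [0.4, -0.3])
```

The chaotic sequence visits the neighborhood densely without a random number generator, which is the usual argument for CLS over plain random restarts.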

  15. Local Analysis via the Real Space Green's Function Method

    NASA Astrophysics Data System (ADS)

    Wu, Shi-Yu; Jayanthi, Chakram S.

    A complete account of the development of the method of the real space Green's function is given in this review. The emphasis is placed on the calculation of the local Green's function in a real space representation. The discussion is centered on a list of issues particularly relevant to the study of properties of complex systems with reduced symmetry. They include: (i) the convergence procedure for calculating the local Green's function of infinite systems without any boundary effects associated with an arbitrary truncation of the system; (ii) a general recursive relation which streamlines the calculation of the local Green's function; (iii) the calculation of the eigenvectors of selected eigenvalues directly from the Green's function. An example of the application of the method to a local analysis of the dynamics of the Au(511) surface is also presented.

  16. The effect of walking speed on local dynamic stability is sensitive to calculation methods.

    PubMed

    Stenum, Jan; Bruijn, Sjoerd M; Jensen, Bente R

    2014-11-28

    Local dynamic stability has been assessed by the short-term local divergence exponent (λS), which quantifies the average rate of logarithmic divergence of infinitesimally close trajectories in state space. Both increased and decreased local dynamic stability at faster walking speeds have been reported. This might pertain to methodological differences in calculating λS. Therefore, the aim was to test if different calculation methods would induce different effects of walking speed on local dynamic stability. Ten young healthy participants walked on a treadmill at five speeds (60%, 80%, 100%, 120% and 140% of preferred walking speed) for 3 min each, while upper body accelerations in three directions were sampled. From these time-series, λS was calculated by three different methods using: (a) a fixed time interval and expressed as logarithmic divergence per stride-time (λS-a), (b) a fixed number of strides and expressed as logarithmic divergence per time (λS-b) and (c) a fixed number of strides and expressed as logarithmic divergence per stride-time (λS-c). Mean preferred walking speed was 1.16 ± 0.09 m/s. There was only a minor effect of walking speed on λS-a. λS-b increased with increasing walking speed indicating decreased local dynamic stability at faster walking speeds, whereas λS-c decreased with increasing walking speed indicating increased local dynamic stability at faster walking speeds. Thus, the effect of walking speed on calculated local dynamic stability was significantly different between methods used to calculate local dynamic stability. Therefore, inferences and comparisons of studies employing λS should be made with careful consideration of the calculation method.

  17. Solution of the advection-dispersion equation by a finite-volume eulerian-lagrangian local adjoint method

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1992-01-01

    A finite-volume Eulerian-Lagrangian local adjoint method for solution of the advection-dispersion equation is developed and discussed. The method is mass conservative and can solve advection-dominated ground-water solute-transport problems accurately and efficiently. An integrated finite-difference approach is used in the method. A key component of the method is that the integral representing the mass-storage term is evaluated numerically at the current time level. Integration points, and the mass associated with these points, are then forward tracked up to the next time level. The number of integration points required to reach a specified level of accuracy is problem dependent and increases as the sharpness of the simulated solute front increases. Integration points are generally equally spaced within each grid cell. For problems involving variable coefficients it has been found to be advantageous to include additional integration points at strategic locations in each cell. These locations are determined by backtracking. Forward tracking of boundary fluxes by the method alleviates problems that are encountered in the backtracking approaches of most characteristic methods. A test problem is used to illustrate that the new method offers substantial advantages over other numerical methods for a wide range of problems.

  18. A rapid inversion and resolution analysis of magnetic microscope data by the subtractive optimally localized averages method

    NASA Astrophysics Data System (ADS)

    Usui, Y.; Uehara, M.; Okuno, K.

    2012-01-01

    Modern scanning magnetic microscopes have the potential for fine-scale magnetic investigations of rocks. Observations at high spatial resolution produce large volumes of data, and the interpretation of these data is a nontrivial task. We have developed software using an efficient magnetic inversion technique that explicitly constructs the spatially localized Backus-Gilbert averaging kernel. Our approach, using the subtractive optimally localized averages (SOLA) method (Pijpers, R.P., Thompson, M.J., 1992. Faster formulations of the optimally localized averages method for helioseismic inversions. Astronomy and Astrophysics 262, L33-L36), yields a unidirectional magnetization. The averaging kernel expresses the spatial resolution of the inversion and is valuable for paleomagnetic applications of the scanning magnetic microscope. Inversion examples for numerical magnetization patterns are provided to exhibit the performance of the method. Examples of actual magnetic field data collected from thin sections of natural rocks measured with a magnetoimpedance (MI) magnetic microscope are also provided. Numerical tests suggest that the data-independent averaging kernel is desirable for a point-to-point comparison among multiple data sets. Contamination by vector magnetization components can be estimated from the averaging kernel. We conclude that the SOLA method is a useful technique for paleomagnetic and rock magnetic investigations using scanning magnetic microscopy.
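
    The SOLA linear algebra can be sketched in a tiny 1-D setting: choose coefficients c so that the averaging kernel A = Σ c_i K_i matches a prescribed target kernel in least squares, with a damping term that limits noise magnification. This is a generic illustration of the SOLA construction with made-up kernels, not the authors' magnetization inversion:

```python
import numpy as np

# Forward problem: measurement i senses the model through kernel row K[i].
x = np.linspace(0.0, 1.0, 100)
dx = x[1] - x[0]
centers = np.linspace(0.1, 0.9, 12)
K = np.exp(-(x[None, :] - centers[:, None])**2 / 0.01)   # 12 broad kernels

# Target averaging kernel: a narrower Gaussian centred at x = 0.5.
T = np.exp(-(x - 0.5)**2 / 0.004)

# SOLA coefficients: minimize ||K^T c - T||^2 + mu ||c||^2 (mu damps noise).
mu = 1e-3
c = np.linalg.solve(K @ K.T * dx + mu * np.eye(len(centers)), K @ T * dx)
A = K.T @ c            # achieved averaging kernel; c @ data gives the estimate
```

Inspecting A directly, as the abstract suggests, shows the spatial resolution actually delivered by a given measurement set.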

  19. Laser-optical method of visualization the local net of tissue blood vessels and its biomedical applications

    NASA Astrophysics Data System (ADS)

    Asimov, M. M.; Asimov, R. M.; Rubinov, A. N.

    2007-06-01

A new approach to laser-optical diagnostics of cell metabolism, based on visualization of the local network of tissue blood vessels, is proposed. An optical model of laser-tissue interaction and an algorithm for the mathematical calculation of optical signals are developed. A novel technique for eliminating local tissue hypoxia, based on laser-induced photodissociation of oxyhemoglobin in cutaneous blood vessels, is developed. A method for determining the oxygen diffusion coefficient in tissue from the kinetics of tissue oxygenation (TcPO2) under laser irradiation is proposed. The results of mathematical modeling of the kinetics of oxygen distribution into tissue from arterial blood are presented. The possibility of calculating and determining the level of TcPO2 in zones with disturbed blood microcirculation is demonstrated. An increase of the oxygen release rate by more than four times under irradiation with laser light is obtained. It is shown that the efficiency of laser-induced oxygenation, achieved by increasing the oxygen concentration in blood plasma, is comparable with that of hyperbaric oxygenation (HBO), while gaining the advantage of local action. Different biomedical applications of the developed method are discussed.

  20. Strain localization in shear zones during exhumation: a graphical approach to facies interpretation

    NASA Astrophysics Data System (ADS)

    Cardello, Giovanni Luca; Augier, Romain; Laurent, Valentin; Roche, Vincent; Jolivet, Laurent

    2015-04-01

Strain localization is a fundamental process in plate tectonics. It is expressed in the ductile field by shear zones where strain concentrates. Despite their worldwide distribution in most metamorphic units, their detailed characterization and the comprehension of the underlying processes are far from being fully addressed. In this work, a graphical approach to tectono-metamorphic facies identification is applied to the Delfini Shear Zone in Syros (Cyclades, Greece), which is mostly characterized by metabasites displaying different degrees of retrogression from fresh eclogite to prasinite. Several exhumation mechanisms brought them from the depths of the subduction zone to the surface, from syn-orogenic exhumation to post-orogenic backarc extension. Boudinage, grain-size reduction and metamorphic reactions determine strain localization across well-deformed volumes of rock organized in a hierarchic frame of smaller individual shear zones (10-25 meters thick). The most representative of them can be subdivided into 5 tectono-metamorphic (Tm) facies, TmA to TmE. TmA records HP relics and older folding stages preserved within large boudins as large as 1-2 m across. TmB is characterized by much smaller and progressively more asymmetric boudins and sigmoids. TmC is defined by well-transposed sub- to plane-parallel blueschist textures crossed by chlorite shear bands bounding the newly formed boudins. As strain increases (facies TmD-E), the texture is progressively retrograded to LP-HT greenschist-facies conditions. These observations allowed us to establish a sequence of stages of strain localization. The first stage (1) is characterized by quite symmetric folding and boudinage. In a second stage (2), grain-size reduction is associated with the formation of dense shear bands along previously formed glaucophane- and quartz-rich veins. With progressively more localized strain, mode-I veins may arrange as tension gashes that gradually evolve into blueschist shear bands. This process determines the

  1. Classical convergence versus Zipf rank approach: Evidence from China's local-level data

    NASA Astrophysics Data System (ADS)

    Tang, Pan; Zhang, Ying; Baaquie, Belal E.; Podobnik, Boris

    2016-02-01

This paper applies the Zipf rank approach to measure how long it will take for individual economies to reach the final state of equilibrium, using local-level data for China's urban areas. Two indicators, the gross domestic product (GDP) per capita and the market capitalization (MCAP) per capita of 150 major cities in China, are used to analyze their convergence. In addition, the power-law relationship is examined for GDP and MCAP. Our findings show that, compared to the classical approaches of β-convergence and σ-convergence, the Zipf ranking predicts that, in approximately 16 years, all the major cities in China will reach comparable values of GDP per capita. The MCAP per capita, however, tends to follow the periodic fluctuation of the economic cycle, and the mean log deviation (MLD) confirms the results of our study. Moreover, GDP per capita and MCAP per capita follow a power law with an average exponent of α = 0.41, which is higher than the α = 0.38 obtained from a large number of countries around the world.

  2. Local Discontinuous Galerkin Methods for Partial Differential Equations with Higher Order Derivatives

    NASA Technical Reports Server (NTRS)

    Yan, Jue; Shu, Chi-Wang; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

In this paper we review existing and develop new local discontinuous Galerkin methods for solving time-dependent partial differential equations with higher-order derivatives in one and multiple space dimensions. We review local discontinuous Galerkin methods for convection-diffusion equations involving second derivatives and for KdV-type equations involving third derivatives. We then develop new local discontinuous Galerkin methods for time-dependent biharmonic-type equations involving fourth derivatives, and for partial differential equations involving fifth derivatives. For these new methods we present correct interface numerical fluxes and prove L^2 stability for general nonlinear problems. Preliminary numerical examples are shown to illustrate these methods. Finally, we present new results on a post-processing technique, originally designed for methods with good negative-order error estimates, applied to the local discontinuous Galerkin methods for equations with higher derivatives. Numerical experiments show that this technique works as well for the new higher-derivative cases, effectively doubling the rate of convergence with negligible additional computational cost, for linear as well as some nonlinear problems, on a locally uniform mesh.

  3. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    SciTech Connect

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-07-15

Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
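The core Bayesian step described above, combining a prior on the unresolved depth (built from setup projections) with the likelihood of the current 2D measurement and taking the posterior maximum, can be sketched in one dimension. Gaussian prior and likelihood are illustrative assumptions, not the authors' exact model, and all numbers are hypothetical.

```python
def map_depth(z_grid, prior_mu, prior_sigma, meas_mu, meas_sigma):
    """MAP estimate of the unresolved depth coordinate.
    Posterior is proportional to prior(z) * likelihood(z); both are
    modeled as Gaussians here (an illustrative assumption)."""
    def log_gauss(z, mu, sigma):
        return -0.5 * ((z - mu) / sigma) ** 2
    # maximize the log-posterior over the candidate depth grid
    return max(z_grid, key=lambda z: log_gauss(z, prior_mu, prior_sigma)
                                     + log_gauss(z, meas_mu, meas_sigma))

z_grid = [i * 0.1 for i in range(-100, 101)]   # candidate depths (cm)
z_map = map_depth(z_grid, prior_mu=0.0, prior_sigma=1.0,
                  meas_mu=2.0, meas_sigma=1.0)
```

With equal prior and measurement uncertainties, the posterior maximum falls midway between the prior mean and the measurement-consistent depth.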

  4. Evaluation of exploration and monitoring methods for verification of natural attenuation using the virtual aquifer approach.

    PubMed

    Schäferl, Dirk; Schlenz, Bastian; Dahmke, Andreas

    2004-12-01

Though natural attenuation (NA) is increasingly considered as a remediation technology, the methods for proper identification and quantification of NA are still under discussion. Here the "Virtual Aquifer" approach is used to demonstrate problems which may arise during measurement of concentrations in observation wells and during interpolation of locally measured concentrations in contaminated heterogeneous aquifers. The misinterpretation of measured concentrations complicates the identification and quantification of natural attenuation processes. The "Virtual Aquifer" approach accepts the plume simulated with a numerical model for a heterogeneous aquifer as "virtual reality". This virtual plume is investigated in the model with conventional methods like observation wells. The results of the investigation can be compared to the virtual "reality", thereby evaluating the monitoring method. Locally determined concentrations are interpolated using various interpolation methods and different monitoring set-ups. The interpolation results are compared to the simulated plume to evaluate the quality of interpolation. This evaluation is not possible in nature, since concentrations in a heterogeneous aquifer are never known in detail.

  5. A divide and conquer approach to anomaly detection, localization and diagnosis

    NASA Astrophysics Data System (ADS)

    Liu, Jianbo; Djurdjanovic, Dragan; Marko, Kenneth A.; Ni, Jun

    2009-11-01

With the growing complexity of dynamic control systems, the effective diagnosis of all possible failures has become increasingly difficult and time-consuming. The virtually infinite variety of behavior patterns of such systems due to control inputs and environmental influences further complicates system characterization and fault diagnosis. To circumvent these difficulties, we propose a new diagnostic method consisting of three elements: the first, based on anomaly detection, identifies any performance deviation from normal operation; the second, based on anomaly/fault localization, localizes the problem, as best as possible, to the specific component or subsystem that does not operate properly; and the third, fault diagnosis, discriminates between known and unknown faults and identifies the type of fault if it is previously known. Our prescriptive method for diagnostic design relies on the use of self-organizing maps (SOMs) for regionalization of the system operating conditions, followed by a performance assessment module based on time-frequency distributions (TFDs) and principal component analysis (PCA) for anomaly detection and fault diagnosis. The complete procedure is described in detail and demonstrated with an example of an automotive engine control system.
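A toy version of the "localize by operating region, then test against a normal-behavior model" idea can be sketched as below. A 1-D nearest-prototype assignment stands in for the SOM regionalization, and a per-region mean/std model stands in for the TFD/PCA performance assessment; all values are hypothetical.

```python
def nearest(prototypes, c):
    """Index of the operating-condition prototype closest to condition c."""
    return min(range(len(prototypes)), key=lambda k: abs(prototypes[k] - c))

def fit(samples, prototypes):
    """Per-region normal-behavior model: samples are (condition, feature)
    pairs; each is assigned to its nearest prototype, and the mean/std of
    the feature are stored per region."""
    regions = {k: [] for k in range(len(prototypes))}
    for c, f in samples:
        regions[nearest(prototypes, c)].append(f)
    model = {}
    for k, fs in regions.items():
        mu = sum(fs) / len(fs)
        sd = (sum((f - mu) ** 2 for f in fs) / len(fs)) ** 0.5
        model[k] = (mu, sd if sd > 0 else 1.0)
    return model

def is_anomaly(model, prototypes, c, f, z=3.0):
    """Flag a feature value deviating more than z sigma from its region."""
    mu, sd = model[nearest(prototypes, c)]
    return abs(f - mu) > z * sd

protos = [0.0, 10.0]                    # hypothetical condition prototypes
normal = [(0.1, 0.9), (0.0, 1.0), (0.3, 1.1),
          (9.8, 4.8), (10.0, 5.0), (10.2, 5.2)]
model = fit(normal, protos)
```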

  6. A formative multi-method approach to evaluating training.

    PubMed

    Hayes, Holly; Scott, Victoria; Abraczinskas, Michelle; Scaccia, Jonathan; Stout, Soma; Wandersman, Abraham

    2016-10-01

This article describes how we used a formative multi-method evaluation approach to gather real-time information about the processes of a complex, multi-day training with 24 community coalitions in the United States. The evaluation team used seven distinct evaluation strategies to obtain evaluation data from the first Community Health Improvement Leadership Academy (CHILA) within a three-pronged framework (inquiry, observation, and reflection). These methods included: a comprehensive survey, a rapid feedback form, a learning wall, an observational form, team debriefs, social network analysis and critical moments reflection. The seven distinct methods allowed for both real-time quality improvement during the CHILA and long-term planning for the next CHILA. The methods also gave a comprehensive picture of the CHILA, which when synthesized allowed the evaluation team to assess the effectiveness of a training designed to tap into natural community strengths and accelerate health improvement. We hope that these formative evaluation methods can continue to be refined and used by others to evaluate training. PMID:27454882

  7. A hybrid approach for the modal analysis of continuous systems with localized nonlinear constraints.

    SciTech Connect

    Brake, Matthew Robert

    2010-10-01

The analysis of continuous systems with nonlinearities in their domain has previously been limited to either numerical approaches, or analytical methods that are constrained in the parameter space, boundary conditions, or order of the system. The present analysis develops a robust method for studying continuous systems with arbitrary boundary conditions and nonlinearities using the assumption that the nonlinear constraint can be modeled with a piecewise-linear force-deflection constitutive relationship. Under this assumption, a superposition method is used to generate homogeneous boundary conditions, and modal analysis is used to find the displacement of the system in each state of the piecewise-linear nonlinearity. In order to map across each nonlinearity in the piecewise-linear force-deflection profile, a variational calculus approach is taken that minimizes the L2 energy norm between the previous and current states. To illustrate this method, a leaf spring coupled with a connector pin immersed in a viscous fluid is modeled as a beam with a piecewise-linear constraint. From the results of the convergence and parameter studies, a high correlation between the finite-time Lyapunov exponents and the contact time per period of the excitation is observed. The parameter studies also indicate that when the system's parameters are changed in order to reduce the magnitude of the velocity impact between the leaf spring and connector pin, the extent of the regions over which a chaotic response is observed increases.

  8. Optimizing neural networks for river flow forecasting - Evolutionary Computation methods versus the Levenberg-Marquardt approach

    NASA Astrophysics Data System (ADS)

    Piotrowski, Adam P.; Napiorkowski, Jarosław J.

    2011-09-01

Although neural networks have been widely applied to various hydrological problems, including river flow forecasting, for at least 15 years, they have usually been trained by means of gradient-based algorithms. Recently, nature-inspired Evolutionary Computation algorithms have rapidly developed as optimization methods able to cope not only with non-differentiable functions but also with a great number of local minima. Some of the proposed Evolutionary Computation algorithms have been tested for neural network training, but publications which compare their performance with gradient-based training methods are rare and present contradictory conclusions. The main goal of the present study is to verify the applicability of a number of recently developed Evolutionary Computation optimization methods, mostly from the Differential Evolution family, to multi-layer perceptron neural network training for daily rainfall-runoff forecasting. In the present paper eight Evolutionary Computation methods, namely the first version of Differential Evolution (DE), Distributed DE with Explorative-Exploitative Population Families, Self-Adaptive DE, DE with Global and Local Neighbors, Grouping DE, JADE, Comprehensive Learning Particle Swarm Optimization and Efficient Population Utilization Strategy Particle Swarm Optimization, are tested against the Levenberg-Marquardt algorithm - probably the most efficient in terms of speed and success rate among gradient-based methods. The Annapolis River catchment was selected as the area of this study due to its specific climatic conditions, characterized by significant seasonal changes in runoff, rapid floods, dry summers, severe winters with snowfall, snow melting, frequent freeze and thaw, and presence of river ice - conditions which make flow forecasting more troublesome.
The overall performance of the Levenberg-Marquardt algorithm and the DE with Global and Local Neighbors method for neural networks training turns out to be superior to other
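As a concrete reference point for the family of optimizers compared above, the original DE/rand/1/bin scheme can be sketched as follows. Here it minimizes a toy sphere function; in the study the objective would instead be the multi-layer perceptron's forecasting error over its flattened weight vector (that coupling is not reproduced here).

```python
import random

def differential_evolution(fobj, dim, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=200, seed=1):
    """Classic DE/rand/1/bin, the first Differential Evolution variant
    named in the abstract, shown on a toy objective."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    cost = [fobj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # three distinct random individuals, none equal to i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jr = rng.randrange(dim)          # index with guaranteed crossover
            trial = [pop[a][j] + F * (pop[b][j] - pop[c][j])
                     if (rng.random() < CR or j == jr) else pop[i][j]
                     for j in range(dim)]
            c_trial = fobj(trial)
            if c_trial <= cost[i]:           # greedy selection
                pop[i], cost[i] = trial, c_trial
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

sphere = lambda x: sum(v * v for v in x)     # minimum 0 at the origin
x_best, f_best = differential_evolution(sphere, dim=3, bounds=(-5.0, 5.0))
```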

  9. On local convergence analysis of inexact Newton method for singular systems of equations under majorant condition.

    PubMed

    Zhou, Fangqin

    2014-01-01

We present a local convergence analysis of the inexact Newton method for solving singular systems of equations. Under the hypothesis that the derivative of the function associated with the singular systems satisfies a majorant condition, we obtain that the method is well defined and converges. Our analysis provides a clear relationship between the majorant function and the function associated with the singular systems. It also allows us to obtain an estimate of the convergence ball for the inexact Newton method and some important special cases.
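A minimal sketch of the inexact Newton iteration analyzed above: each step solves the linearized system only approximately, subject to a forcing condition ||F(x) + J(x) s|| <= eta * ||F(x)||. The demo system, the diagonal solve, and the injected error are illustrative assumptions; the majorant-condition machinery of the paper is not reproduced.

```python
def inexact_newton(F, J_solve, x0, tol=1e-10, max_iter=50, eta=0.1):
    """Inexact Newton: the linear step need only satisfy the standard
    forcing condition with parameter eta, so J_solve may be approximate."""
    x = list(x0)
    for _ in range(max_iter):
        f = F(x)
        if max(abs(v) for v in f) < tol:
            break
        s = J_solve(x, f, eta)       # approximate solve of J(x) s = -F(x)
        x = [xi + si for xi, si in zip(x, s)]
    return x

# demo system: F(x) = (x0^2 - 2, x1^2 - 3), with a diagonal Jacobian
F = lambda x: [x[0] ** 2 - 2.0, x[1] ** 2 - 3.0]

def J_solve(x, f, eta):
    # exact diagonal solve, then a deliberate relative error within eta
    return [-(fi / (2.0 * xi)) * (1.0 - 0.5 * eta) for fi, xi in zip(f, x)]

root = inexact_newton(F, J_solve, [1.0, 1.0])
```

Despite the perturbed steps, the iteration still converges to (sqrt(2), sqrt(3)); the forcing term only degrades the local rate.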

  10. 63,65Cu NMR Method in a Local Field for Investigation of Copper Ore Concentrates

    NASA Astrophysics Data System (ADS)

    Gavrilenko, A. N.; Starykh, R. V.; Khabibullin, I. Kh.; Matukhin, V. L.

    2015-01-01

To choose the most efficient method and ore beneficiation flow diagram, it is important to know the physical and chemical properties of ore concentrates. The feasibility of applying the 63,65Cu nuclear magnetic resonance (NMR) method in a local field to study the properties of copper ore concentrates in the copper-iron-sulfur system is demonstrated. The 63,65Cu NMR spectrum is measured in a local field for a copper concentrate sample, and the relaxation parameters (times T1 and T2) are obtained. The spectrum obtained was used to identify a mineral (chalcopyrite) contained in the concentrate. Based on the experimental data, comparative characteristics of natural chalcopyrite and beneficiated copper concentrate are given. The feasibility of applying the NMR method in a local field to explore mineral deposits is analyzed.

  11. A Method for Non-Rigid Face Alignment via Combining Local and Holistic Matching

    PubMed Central

    Yang, Yang; Chen, Zhuo

    2016-01-01

We propose a method for non-rigid face alignment which needs only a single template, such as using a person's smile face to match his surprise face. First, in order to be robust to outliers caused by complex geometric deformations, a new local feature matching method called K Patch Pairs (K-PP) is proposed. Specifically, inspired by the state-of-the-art similarity measures used in template matching, K-PP finds the mutual K nearest neighbors between two images. A weight matrix is then presented to balance the similarity and the number of local matches. Second, we propose a modified Lucas-Kanade algorithm combined with a local matching constraint to solve the non-rigid face alignment, so that a holistic face representation and local features can be jointly modeled in the objective function. Both the flexibility of local matching and the robustness of holistic fitting are included in our method. Furthermore, we show that the optimization problem can be efficiently solved by the inverse compositional algorithm. Comparison results with conventional methods demonstrate our superiority in terms of both accuracy and robustness. PMID:27494319
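The mutual-K-nearest-neighbor matching at the heart of K-PP can be sketched on plain coordinate descriptors: a pair (i, j) is kept only when j is among the K nearest neighbors of i in one direction and i is among the K nearest of j in the other. The toy descriptors below are hypothetical; the paper's patch features and weight matrix are not reproduced.

```python
def mutual_knn(A, B, k):
    """Mutual k-nearest-neighbor matches between descriptor lists A and B,
    a generic sketch of the mutual-neighbor idea behind K-PP."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    def knn(src, dst):
        # for each descriptor in src, indices of its k nearest in dst
        return [sorted(range(len(dst)), key=lambda j: dist(s, dst[j]))[:k]
                for s in src]
    a2b, b2a = knn(A, B), knn(B, A)
    # keep only pairs that are nearest neighbors in both directions
    return [(i, j) for i, nbrs in enumerate(a2b) for j in nbrs
            if i in b2a[j]]

A = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]   # hypothetical descriptors
B = [(0.1, 0.0), (1.1, 1.0), (9.0, 9.0)]
pairs = mutual_knn(A, B, k=1)
```

The outlier pair is rejected because the nearest-neighbor relation holds in only one direction.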

  12. A high-resolution, fluorescence-based method for localization of endogenous alkaline phosphatase activity.

    PubMed

    Cox, W G; Singer, V L

    1999-11-01

    We describe a high-resolution, fluorescence-based method for localizing endogenous alkaline phosphatase in tissues and cultured cells. This method utilizes ELF (Enzyme-Labeled Fluorescence)-97 phosphate, which yields an intensely fluorescent yellow-green precipitate at the site of enzymatic activity. We compared zebrafish intestine, ovary, and kidney cryosections stained for endogenous alkaline phosphatase using four histochemical techniques: ELF-97 phosphate, Gomori method, BCIP/NBT, and naphthol AS-MX phosphate coupled with Fast Blue BB (colored) and Fast Red TR (fluorescent) diazonium salts. Each method localized endogenous alkaline phosphatase to the same specific sample regions. However, we found that sections labeled using ELF-97 phosphate exhibited significantly better resolution than the other samples. The enzymatic product remained highly localized to the site of enzymatic activity, whereas signals generated using the other methods diffused. We found that the ELF-97 precipitate was more photostable than the Fast Red TR azo dye adduct. Using ELF-97 phosphate in cultured cells, we detected an intracellular activity that was only weakly labeled with the other methods, but co-localized with an antibody against alkaline phosphatase, suggesting that the ELF-97 phosphate provided greater sensitivity. Finally, we found that detecting endogenous alkaline phosphatase with ELF-97 phosphate was compatible with the use of antibodies and lectins. (J Histochem Cytochem 47:1443-1455, 1999)

  13. Partially Strong Transparency Conditions and a Singular Localization Method In Geometric Optics

    NASA Astrophysics Data System (ADS)

    Lu, Yong; Zhang, Zhifei

    2016-10-01

This paper focuses on the stability analysis of WKB approximate solutions in geometric optics in the absence of strong transparency conditions, in the terminology of Joly, Métivier and Rauch. We introduce a compatible condition and a singular localization method which allow us to prove the stability of WKB solutions over long time intervals. This compatible condition is weaker than the strong transparency condition. The singular localization method allows us to do delicate analysis near resonances. As an application, we show the long-time approximation of Klein-Gordon equations by Schrödinger equations in the non-relativistic limit regime.

  14. B-spline modal method: a polynomial approach compared to the Fourier modal method.

    PubMed

    Walz, Michael; Zebrowski, Thomas; Küchenmeister, Jens; Busch, Kurt

    2013-06-17

A detailed analysis of the B-spline Modal Method (BMM) for one- and two-dimensional diffraction gratings and a comparison to the Fourier Modal Method (FMM) is presented. Owing to its intrinsic capability to accurately resolve discontinuities, BMM avoids the notorious problems of FMM that are associated with the Gibbs phenomenon. As a result, BMM facilitates significantly more efficient eigenmode computations. With regard to BMM-based transmission and reflection computations, it is demonstrated that a novel Galerkin approach (in conjunction with a scattering-matrix algorithm) allows for an improved field matching between different layers. This approach is superior to the traditional point-wise field matching. Moreover, only this novel Galerkin approach allows for a competitive extension of BMM to the case of two-dimensional diffraction gratings. These improvements will be very useful for high-accuracy grating computations in general and for the analysis of associated electromagnetic field profiles in particular.

  15. Travel time calculation in regular 3D grid in local and regional scale using fast marching method

    NASA Astrophysics Data System (ADS)

    Polkowski, M.

    2015-12-01

Local and regional 3D seismic velocity models of the crust and sediments are very important for numerous techniques such as mantle and core tomography, localization of local and regional events, and others. Most of these techniques require calculation of wave travel times through the 3D model. This can be achieved using multiple approaches, from simple ray tracing to advanced full-waveform calculation. In this study a simple and efficient implementation of the fast marching method is presented. This method provides more information than ray tracing and is much less complicated than full-waveform methods, making it a good compromise. The presented code is written in C++, well commented, and easy to modify for different types of studies. Additionally, performance is discussed in detail, including possibilities for multithreading and massive parallelism (e.g., on GPUs). The source code will be published in 2016 as part of the PhD thesis. The National Science Centre, Poland, provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
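A minimal first-order fast marching solver on a regular 2D grid conveys the idea (the abstract's code is 3D and in C++; this Python sketch is an independent illustration, not the author's implementation): cells are accepted in order of increasing travel time from a heap, and each neighbor is updated with the upwind eikonal stencil.

```python
import heapq

def fast_marching_2d(slowness, src, h=1.0):
    """First-order fast marching travel times on a regular 2D grid.
    slowness[i][j] = 1/velocity; src = (i, j) index of the source."""
    ny, nx = len(slowness), len(slowness[0])
    INF = float("inf")
    T = [[INF] * nx for _ in range(ny)]
    T[src[0]][src[1]] = 0.0
    heap = [(0.0, src[0], src[1])]
    done = set()
    while heap:
        t, i, j = heapq.heappop(heap)
        if (i, j) in done:
            continue                     # stale heap entry
        done.add((i, j))
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < ny and 0 <= nj < nx and (ni, nj) not in done:
                # upwind neighbor values in the x and y directions
                tx = min(T[ni][nj - 1] if nj > 0 else INF,
                         T[ni][nj + 1] if nj < nx - 1 else INF)
                ty = min(T[ni - 1][nj] if ni > 0 else INF,
                         T[ni + 1][nj] if ni < ny - 1 else INF)
                f = h * slowness[ni][nj]
                a, b = sorted((tx, ty))
                if b - a >= f:           # one-sided (Dijkstra-like) update
                    t_new = a + f
                else:                    # solve the 2D eikonal quadratic
                    t_new = 0.5 * (a + b + (2 * f * f - (b - a) ** 2) ** 0.5)
                if t_new < T[ni][nj]:
                    T[ni][nj] = t_new
                    heapq.heappush(heap, (t_new, ni, nj))
    return T

grid = [[1.0] * 5 for _ in range(5)]     # uniform unit slowness
T = fast_marching_2d(grid, (0, 0))
```

On this uniform grid the travel times along a grid axis are exact (T[0][4] = 4), while the diagonal shows the usual first-order overestimate of the true distance 4·sqrt(2).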

  16. Hybrid Genetic Algorithm - Local Search Method for Ground-Water Management

    NASA Astrophysics Data System (ADS)

    Chiu, Y.; Nishikawa, T.; Martin, P.

    2008-12-01

Ground-water management problems commonly are formulated as a mixed-integer, non-linear programming problem (MINLP). Relying only on conventional gradient-search methods to solve the management problem is computationally fast; however, the methods may become trapped in a local optimum. Global-optimization schemes can identify the global optimum, but the convergence is very slow when the optimal solution approaches the global optimum. In this study, we developed a hybrid optimization scheme, which includes a genetic algorithm and a gradient-search method, to solve the MINLP. The genetic algorithm identifies a near-optimal solution, and the gradient search uses the near-optimum to identify the global optimum. Our methodology is applied to a conjunctive-use project in the Warren ground-water basin, California. Hi-Desert Water District (HDWD), the primary water-manager in the basin, plans to construct a wastewater treatment plant to reduce future septic-tank effluent from reaching the ground-water system. The treated wastewater instead will recharge the ground-water basin via percolation ponds as part of a larger conjunctive-use strategy, subject to State regulations (e.g. minimum distances and travel times). HDWD wishes to identify the least-cost conjunctive-use strategies that control ground-water levels, meet regulations, and identify new production-well locations. As formulated, the MINLP objective is to minimize water-delivery costs subject to constraints including pump capacities, available recharge water, water-supply demand, water-level constraints, and potential new-well locations. The methodology was demonstrated by an enumerative search of the entire feasible solution space and by comparing the optimum solution with results from the branch-and-bound algorithm. The results also indicate that the hybrid method identifies the global optimum within an affordable computation time. Sensitivity analyses, which include testing different recharge-rate scenarios, pond
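The two-phase scheme described above, a global search to a near-optimum followed by gradient refinement, can be sketched on a toy multimodal objective. The GA operators and all parameters here are illustrative assumptions; the actual MINLP with pumping costs and regulatory constraints is not modeled.

```python
import random

def hybrid_optimize(f, grad, bounds, seed=0):
    """Hybrid scheme sketched from the abstract: a small genetic algorithm
    locates a near-optimal solution, then gradient descent refines it."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(30)]
    for _ in range(60):                              # GA phase
        pop.sort(key=f)
        parents = pop[:10]                           # truncation selection
        children = []
        for _ in range(20):
            child = 0.5 * (rng.choice(parents) + rng.choice(parents))
            child += rng.gauss(0.0, 0.1)             # mutation
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    x = min(pop, key=f)                              # GA near-optimum
    for _ in range(300):                             # gradient-search phase
        x = min(hi, max(lo, x - 0.005 * grad(x)))
    return x

f = lambda x: (x * x - 4.0) ** 2                     # minima at x = +/-2
grad = lambda x: 4.0 * x * (x * x - 4.0)
x_opt = hybrid_optimize(f, grad, (-3.0, 3.0))
```

The GA reliably lands in one of the two basins, and the gradient polish then drives the iterate to the exact minimizer, mirroring the division of labor in the abstract.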

  17. Adaptive non-local means method for speckle reduction in ultrasound images

    NASA Astrophysics Data System (ADS)

    Ai, Ling; Ding, Mingyue; Zhang, Xuming

    2016-03-01

    Noise removal is a crucial step to enhance the quality of ultrasound images. However, some existing despeckling methods cannot ensure satisfactory restoration performance. In this paper, an adaptive non-local means (ANLM) filter is proposed for speckle noise reduction in ultrasound images. The distinctive property of the proposed method lies in that the decay parameter will not take the fixed value for the whole image but adapt itself to the variation of the local features in the ultrasound images. In the proposed method, the pre-filtered image will be obtained using the traditional NLM method. Based on the pre-filtered result, the local gradient will be computed and it will be utilized to determine the decay parameter adaptively for each image pixel. The final restored image will be produced by the ANLM method using the obtained decay parameters. Simulations on the synthetic image show that the proposed method can deliver sufficient speckle reduction while preserving image details very well and it outperforms the state-of-the-art despeckling filters in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Experiments on the clinical ultrasound image further demonstrate the practicality and advantage of the proposed method over the compared filtering methods.
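The adaptive decay idea can be sketched in 1-D: pre-filter with a fixed decay parameter, estimate the local gradient of the pre-filtered signal, then shrink the decay parameter where the gradient is large so edges are smoothed less. The 1-D signal, the h0/(1 + alpha*|g|) mapping and its constants are illustrative assumptions, not the paper's exact rule.

```python
import math

def nlm_1d(x, h, patch=1, search=5):
    """1-D non-local means; h is a per-sample list of decay parameters."""
    n = len(x)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            # squared patch distance with edge clamping
            d = sum((x[min(n - 1, max(0, i + k))]
                     - x[min(n - 1, max(0, j + k))]) ** 2
                    for k in range(-patch, patch + 1))
            w = math.exp(-d / (h[i] * h[i]))
            num += w * x[j]
            den += w
        out.append(num / den)
    return out

def adaptive_nlm(x, h0=0.5, alpha=4.0):
    """Sketch of the adaptive variant: the decay parameter shrinks where
    the local gradient of the pre-filtered signal is large."""
    n = len(x)
    pre = nlm_1d(x, [h0] * n)                    # pre-filtered signal
    grad = [abs(pre[min(n - 1, i + 1)] - pre[max(0, i - 1)]) / 2.0
            for i in range(n)]
    h = [h0 / (1.0 + alpha * g) for g in grad]   # per-sample decay
    return nlm_1d(x, h)

noisy_step = [0.0, 0.05, -0.05, 0.0, 0.05, -0.05, 0.0, 0.05,
              1.0, 0.95, 1.05, 1.0, 0.95, 1.05, 1.0, 0.95]
den = adaptive_nlm(noisy_step)
```

The flat regions are averaged toward their means while the step between samples 7 and 8 is preserved, which is the behavior the abstract claims for the 2-D ultrasound filter.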

  18. Palaeoclimate estimates for the Middle Miocene Schrotzburg flora (S Germany): a multi-method approach

    NASA Astrophysics Data System (ADS)

    Uhl, Dieter; Bruch, Angela A.; Traiser, Christopher; Klotz, Stefan

    2006-11-01

We present a detailed palaeoclimate analysis of the Middle Miocene (uppermost Badenian/lowermost Sarmatian) Schrotzburg locality in S Germany, based on the fossil macro- and micro-flora, using four different methods for the estimation of palaeoclimate parameters: the coexistence approach (CA), leaf margin analysis (LMA), the Climate-Leaf Analysis Multivariate Program (CLAMP), as well as a recently developed multivariate leaf physiognomic approach based on a European calibration dataset (ELPA). Considering the results of all methods used, the following palaeoclimate estimates seem most likely: mean annual temperature (MAT) ~15-16 °C, coldest month mean temperature (CMMT) ~7 °C, warmest month mean temperature between 25 and 26 °C, and mean annual precipitation ~1,300 mm, although CMMT values may have been colder, as indicated by the disappearance of the crocodile Diplocynodon and the temperature thresholds derived from modern alligators. For most palaeoclimatic parameters, estimates derived by CLAMP differ significantly from those derived by most other methods. With respect to the consistency of the results obtained by CA, LMA and ELPA, it is suggested that for the Schrotzburg locality CLAMP is probably less reliable than the other methods. A possible explanation may lie in the correlation between leaf physiognomy and climate as represented by the CLAMP calibration dataset, which is largely based on extant floras from N America and E Asia and may not be suitable for application to the European Neogene. All physiognomic methods used here were affected by taphonomic biases. The number of taxa in particular had a great influence on the reliability of the palaeoclimate estimates. Both multivariate leaf physiognomic approaches are less influenced by such biases than the univariate LMA. In combination with previously published results from the European and Asian Neogene, our data suggest that during the Neogene in Eurasia CLAMP may produce temperature

  19. New quantitative approaches for classifying and predicting local-scale habitats in estuaries

    NASA Astrophysics Data System (ADS)

    Valesini, Fiona J.; Hourston, Mathew; Wildsmith, Michelle D.; Coen, Natasha J.; Potter, Ian C.

    2010-03-01

This study has developed quantitative approaches for firstly classifying local-scale nearshore habitats in an estuary and then predicting the habitat of any nearshore site in that system. Both approaches employ measurements for a suite of enduring environmental criteria that are biologically relevant and can be easily derived from readily available maps. While the approaches were developed for south-western Australian estuaries, with a focus here on the Swan and Peel-Harvey, they can easily be tailored to any system. Classification of the habitats in each of the above estuaries was achieved by subjecting a Manhattan distance matrix, constructed from measurements of a suite of enduring criteria recorded at numerous environmentally diverse sites, to hierarchical agglomerative clustering (CLUSTER) and a Similarity Profiles test (SIMPROF). Groups of sites within the resultant dendrogram that were shown by SIMPROF not to contain any significant internal differences, but to differ significantly from all other groups in their enduring characteristics, were considered to represent habitat types. The enduring features of the 18 and 17 habitats identified among the 101 and 102 sites in the Swan and Peel-Harvey estuaries, respectively, are presented. The average measurements of the enduring characteristics at each habitat were then used in a novel application of the Linkage Tree (LINKTREE) and SIMPROF routines to produce a "decision tree" for predicting, on the basis of measurements for particular enduring variables, the habitat to which any further site in an estuary is best assigned. In both estuaries, the pattern of relative differences among habitats, as defined by their enduring characteristics, was significantly correlated with that defined by their non-enduring water physico-chemical characteristics recorded seasonally in the field. However, those correlations were substantially higher for the Swan, particularly when salinity was the only water physico-chemical variable
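The classification step, hierarchical agglomerative clustering on a Manhattan distance matrix, can be sketched in a few lines (average linkage; the SIMPROF significance test that decides where the dendrogram is cut is omitted, and the five sites with two enduring variables are hypothetical):

```python
def manhattan(p, q):
    """Manhattan (city-block) distance between two sites' variables."""
    return sum(abs(a - b) for a, b in zip(p, q))

def agglomerative(points, n_clusters):
    """Hierarchical agglomerative clustering with average linkage on a
    Manhattan distance matrix, mirroring the CLUSTER step of the abstract."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between the two clusters
                d = sum(manhattan(points[i], points[j])
                        for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]   # merge the closest pair
        del clusters[b]
    return clusters

# five sites described by two enduring variables each (hypothetical values)
sites = [(0.0, 0.1), (0.2, 0.0), (5.0, 5.2), (5.1, 5.0), (9.9, 0.2)]
habitats = agglomerative(sites, 3)
```

The two tight site pairs merge first and the isolated site remains its own group, the analogue of a habitat type defined by distinct enduring characteristics.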

  20. A chemical method for flow visualization and determination of local mass transfer

    NASA Astrophysics Data System (ADS)

    Kottke, V.

    A method for measuring local mass transfer is presented, and the physical and chemical concepts behind the measuring technique, which is based on absorption, chemical reactions, and coupled color reactions of reaction gases such as ammonia or methylamine, are discussed. Flow visualization at surfaces of arbitrary shape is provided by the color intensity distribution, which corresponds to the locally transferred mass rate. The technique is characterized by its simple handling, good local accuracy, and high local resolution. As an example, the effects of turbulence intensity on the formation of longitudinal vortices in stagnation flows and on the length of separation bubbles for a flat plate with a semi-circular nose section are discussed. Finally, the influence of the concentration and temperature boundary layers at separation on the maximum of mass or heat transfer is described.

  1. Determining localized garment insulation values from manikin studies: computational method and results.

    PubMed

    Nelson, D A; Curlee, J S; Curran, A R; Ziriax, J M; Mason, P A

    2005-12-01

    The localized thermal insulation value expresses a garment's thermal resistance over the region which is covered by the garment, rather than over the entire surface of a subject or manikin. The determination of localized garment insulation values is critical to the development of high-resolution models of sensible heat exchange. A method is presented for determining and validating localized garment insulation values, based on whole-body insulation values (clo units) and using computer-aided design and thermal analysis software. Localized insulation values are presented for a catalog consisting of 106 garments and verified using computer-generated models. The values presented are suitable for use on volume element-based or surface element-based models of heat transfer involving clothed subjects.

  2. Modified patch-based locally optimal Wiener method for interferometric SAR phase filtering

    NASA Astrophysics Data System (ADS)

    Wang, Yang; Huang, Haifeng; Dong, Zhen; Wu, Manqing

    2016-04-01

    This paper presents a modified patch-based locally optimal Wiener (PLOW) method for interferometric synthetic aperture radar (InSAR) phase filtering. PLOW is a linear minimum mean squared error (LMMSE) estimator based on a Gaussian additive noise model. It jointly estimates moments, including mean and covariance, using a non-local technique. By using similarities between image patches, this method can effectively filter noise while preserving details. When applying PLOW to InSAR phase filtering, three modifications are proposed to account for spatially variant noise. First, pixels are adaptively clustered according to their coherence magnitudes. Second, rather than a global estimator, a locally adaptive estimator is used to estimate the noise covariance. Third, the mean of each cluster is estimated as a weighted mean, with the coherence magnitudes as weights, to further reduce noise. The performance of the proposed method is experimentally verified using simulated and real data. The results of our study demonstrate that the proposed method performs on par with or better than the non-local interferometric SAR (NL-InSAR) method.
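
    The LMMSE core of a PLOW-style filter is the standard Wiener estimate x_hat = mu + C (C + sigma^2 I)^-1 (y - mu). The sketch below applies that formula to a single patch; the mean, covariance, and noise level are illustrative stand-ins for the cluster-wise estimates described above:

```python
import numpy as np

def lmmse_patch_estimate(y, mu, C, sigma2):
    """Wiener/LMMSE estimate of a clean patch from a noisy patch y, given
    the cluster mean mu, clean-signal covariance C, and additive Gaussian
    noise variance sigma2: x_hat = mu + C (C + sigma2*I)^-1 (y - mu)."""
    n = len(mu)
    gain = C @ np.linalg.inv(C + sigma2 * np.eye(n))
    return mu + gain @ (y - mu)

rng = np.random.default_rng(0)
mu = np.zeros(4)
C = np.diag([4.0, 4.0, 0.1, 0.1])   # strong "signal" in the first two coords
y = rng.normal(size=4)
x_hat = lmmse_patch_estimate(y, mu, C, 1.0)
# high-variance coordinates are lightly shrunk (factor 4/5); low-variance
# coordinates are pulled hard toward the cluster mean (factor 0.1/1.1)
```

    With a diagonal covariance the gain reduces to per-coordinate shrinkage c/(c + sigma^2), which makes the detail-preserving behavior easy to see.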

  3. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: local histograms that preserve spatial correspondence, invariant color histograms that add deformation and viewpoint invariance, and fuzzy linking methods that reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, Quadratic, and Histogram intersection, to choose an appropriate method for future research.
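
    A minimal sketch of the two ingredients discussed above, assuming an RGB image and a simple block grid: per-block color histograms that retain coarse spatial layout, compared with the histogram intersection similarity. The grid and bin counts are arbitrary choices, not values from any surveyed paper:

```python
import numpy as np

def local_color_histograms(img, grid=(2, 2), bins=4):
    """Split an RGB image into grid blocks and concatenate one coarse
    3D color histogram per block, so spatial layout is retained (unlike
    a single global histogram). Returns an L1-normalized vector."""
    h, w, _ = img.shape
    gy, gx = grid
    feats = []
    for i in range(gy):
        for j in range(gx):
            block = img[i*h//gy:(i+1)*h//gy, j*w//gx:(j+1)*w//gx]
            hist, _ = np.histogramdd(block.reshape(-1, 3).astype(float),
                                     bins=(bins,) * 3, range=((0, 256),) * 3)
            feats.append(hist.ravel())
    v = np.concatenate(feats)
    return v / v.sum()

def histogram_intersection(h1, h2):
    """Similarity of two L1-normalized histograms: 1 = identical, 0 = disjoint."""
    return np.minimum(h1, h2).sum()
```

    Identical images score 1.0 under intersection, while images occupying disjoint color bins score 0.0.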

  4. FALCON: A method for flexible adaptation of local coordinates of nuclei.

    PubMed

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H; Christiansen, Ove

    2016-02-21

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin.

  5. FALCON: A method for flexible adaptation of local coordinates of nuclei

    NASA Astrophysics Data System (ADS)

    König, Carolin; Hansen, Mads Bøttger; Godtliebsen, Ian H.; Christiansen, Ove

    2016-02-01

    We present a flexible scheme for calculating vibrational rectilinear coordinates with well-defined strict locality on a certain set of atoms. Introducing a method for Flexible Adaption of Local COordinates of Nuclei (FALCON) we show how vibrational subspaces can be "grown" in an adaptive manner. Subspace Hessian matrices are set up and used to calculate and analyze vibrational modes and frequencies. FALCON coordinates can more generally be used to construct vibrational coordinates for describing local and (semi-local) interacting modes with desired features. For instance, spatially local vibrations can be approximately described as internal motion within only a group of atoms and delocalized modes can be approximately expressed as relative motions of rigid groups of atoms. The FALCON method can support efficiency in the calculation and analysis of vibrational coordinates and energies in the context of harmonic and anharmonic calculations. The features of this method are demonstrated on a few small molecules, i.e., formylglycine, coumarin, and dimethylether as well as for the amide-I band and low-frequency modes of alanine oligomers and alpha conotoxin.

  7. An off-lattice, self-learning kinetic Monte Carlo method using local environments

    NASA Astrophysics Data System (ADS)

    Konwar, Dhrubajit; Bhute, Vijesh J.; Chatterjee, Abhijit

    2011-11-01

    We present a method called the local environment kinetic Monte Carlo (LE-KMC) method for efficiently performing off-lattice, self-learning kinetic Monte Carlo (KMC) simulations of activated processes in material systems. Like other off-lattice KMC schemes, new atomic processes can be found on-the-fly in LE-KMC. However, a unique feature of LE-KMC is that, as long as the assumption that all processes and rates depend only on the local environment is satisfied, LE-KMC provides a general algorithm for (i) unambiguously describing a process in terms of its local atomic environments, (ii) storing new processes and environments in a catalog for later use with standard KMC, and (iii) updating the system based on the local information once a process has been selected for a KMC move. The search, classification, storage, and retrieval steps needed while employing local environments and processes in the LE-KMC method are discussed, along with the advantages and computational cost of the approach. We assess the performance of the LE-KMC algorithm by considering test systems involving diffusion in submonolayer Ag and Ag-Cu alloy films on the Ag(001) surface.
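
    The move-selection step that consumes such a process catalog is the standard n-fold-way KMC algorithm, sketched below. The process labels and rates are hypothetical, and nothing here is specific to LE-KMC's environment bookkeeping:

```python
import math
import random

def kmc_step(processes, rng=random):
    """One kinetic Monte Carlo move: choose a process with probability
    proportional to its rate (e.g. rates retrieved from a catalog keyed
    on local atomic environments) and draw the exponentially distributed
    waiting time for the move."""
    total = sum(rate for _, rate in processes)
    r = rng.random() * total
    acc = 0.0
    for label, rate in processes:
        acc += rate
        if r <= acc:
            dt = -math.log(1.0 - rng.random()) / total
            return label, dt

# hypothetical catalog entries for one adatom configuration
catalog = [("hop_left", 1.0), ("hop_right", 3.0)]
```

    Over many steps, "hop_right" is selected about three times as often as "hop_left", and the average waiting time is 1/(sum of rates).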

  8. A Micro-delivery Approach for Studying Microvascular Responses to Localized Oxygen Delivery

    PubMed Central

    Ghonaim, Nour W.; Lau, Leo W. M.; Goldman, Daniel; Ellis, Christopher G.; Yang, Jun

    2011-01-01

    In vivo video microscopy has been used to study blood flow regulation as a function of varying oxygen concentration in microcirculatory networks. However, previous studies have measured the collective response of stimulating large areas of the microvascular network at the tissue surface. Objective: We aim to limit the area being stimulated by controlling oxygen availability to highly localized regions of the microvascular bed within intact muscle. Design and Method: Gas of varying O2 levels was delivered to specific locations on the surface of the Extensor Digitorum Longus muscle of rat through a set of micro-outlets (100 μm diameter) patterned in ultrathin glass using state-of-the-art microfabrication techniques. O2 levels were oscillated and digitized video sequences were processed for changes in capillary hemodynamics and erythrocyte O2 saturation. Results and Conclusions: Oxygen saturations in capillaries positioned directly above the micro-outlets were closely associated with the controlled local O2 oscillations. Radial diffusion from the micro-outlet is limited to ~75 μm from the center, as predicted by computational modelling and as measured in vivo. These results delineate a key step in the design of a novel micro-delivery device for controlled oxygen delivery to the microvasculature to understand fundamental mechanisms of microvascular regulation of O2 supply. PMID:21914035

  9. A Public Policy Approach to Local Models of HIV/AIDS Control in Brazil

    PubMed Central

    de Assis, Andreia; Costa-Couto, Maria-Helena; Thoenig, Jean-Claude; Fleury, Sonia; de Camargo, Kenneth; Larouzé, Bernard

    2009-01-01

    Objectives. We investigated involvement and cooperation patterns of local Brazilian AIDS program actors and the consequences of these patterns for program implementation and sustainability. Methods. We performed a public policy analysis (documentary analysis, direct observation, semistructured interviews of health service and nongovernmental organization [NGO] actors) in 5 towns in 2 states, São Paulo and Pará. Results. Patterns suggested 3 models. In model 1, local government, NGOs, and primary health care services were involved in AIDS programs with satisfactory response to new epidemiological trends but a risk that HIV/AIDS would become low priority. In model 2, mainly because of NGO activism, HIV/AIDS remained an exceptional issue, with limited responses to new epidemiological trends and program sustainability undermined by political instability. In model 3, involvement of public agencies and NGOs was limited, with inadequate response to epidemiological trends and poor mobilization threatening program sustainability. Conclusions. Within a common national AIDS policy framework, the degree of involvement and cooperation between public and NGO actors deeply impacts population coverage and program sustainability. Specific processes are required to maintain actor mobilization without isolating AIDS programs. PMID:19372523

  10. Novel approaches to improve iris recognition system performance based on local quality evaluation and feature fusion.

    PubMed

    Chen, Ying; Liu, Yuanning; Zhu, Xiaodong; Chen, Huiling; He, Fei; Pang, Yutong

    2014-01-01

    To build a new iris template, this paper proposes a strategy that fuses different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. In addition, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the weights of the different tracks. Experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) A partial iris image cannot completely replace the entire iris image in an iris recognition system in several respects. (2) The proposed quality evaluation algorithm is self-adaptive and automatically optimizes its parameters according to the characteristics of the iris image samples. (3) The feature information fusion strategy can effectively improve the performance of an iris recognition system.
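
    The final fusion step can be sketched as a weighted average of per-track matching scores. The weights here are fixed and hypothetical, standing in for the PSO-learned track coefficients described above:

```python
def fuse_track_scores(track_scores, track_weights):
    """Fuse per-track iris matching scores into a single score as a
    weighted average, so higher-quality tracks contribute more."""
    total = sum(track_weights)
    return sum(s * w for s, w in zip(track_scores, track_weights)) / total

# hypothetical Hamming-distance scores for four normalized-iris tracks
scores = [0.28, 0.31, 0.45, 0.50]   # inner tracks match better
weights = [0.4, 0.3, 0.2, 0.1]      # inner tracks judged higher quality
fused = fuse_track_scores(scores, weights)
```

    Weighting pulls the fused score toward the better-quality inner tracks, here giving roughly 0.345 instead of the unweighted mean 0.385.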

  12. Novel Approaches to Improve Iris Recognition System Performance Based on Local Quality Evaluation and Feature Fusion

    PubMed Central

    2014-01-01

    To build a new iris template, this paper proposes a strategy that fuses different portions of the iris, using a machine learning method to evaluate the local quality of the iris. There are three novelties compared to previous work. Firstly, the normalized segmented iris is divided into multiple tracks and each track is evaluated individually to analyze its recognition accuracy rate (RAR). Secondly, six local quality evaluation parameters are adopted to analyze the texture information of each track. In addition, particle swarm optimization (PSO) is employed to obtain the weights of these evaluation parameters and the corresponding weighted coefficients of the different tracks. Finally, the information from all tracks is fused according to the weights of the different tracks. Experimental results based on subsets of three public and one private iris image databases demonstrate three contributions of this paper. (1) A partial iris image cannot completely replace the entire iris image in an iris recognition system in several respects. (2) The proposed quality evaluation algorithm is self-adaptive and automatically optimizes its parameters according to the characteristics of the iris image samples. (3) The feature information fusion strategy can effectively improve the performance of an iris recognition system. PMID:24693243

  13. An observation-based approach to identify local natural dust events from routine aerosol ground monitoring

    NASA Astrophysics Data System (ADS)

    Tong, D. Q.; Dan, M.; Wang, T.; Lee, P.

    2012-02-01

    Dust is a major component of atmospheric aerosols in many parts of the world. Although there exist many routine aerosol monitoring networks, it is often difficult to obtain dust records from these networks, because these monitors are either deployed far away from dust active regions (most likely collocated with dense population) or contaminated by anthropogenic sources and other natural sources, such as wildfires and vegetation detritus. Here we propose a new approach to identify local dust events relying solely on aerosol mass and composition from general-purpose aerosol measurements. Through analyzing the chemical and physical characteristics of aerosol observations during satellite-detected dust episodes, we select five indicators to be used to identify local dust records: (1) high PM10 concentrations; (2) low PM2.5/PM10 ratio; (3) higher concentrations and percentage of crustal elements; (4) lower percentage of anthropogenic pollutants; and (5) low enrichment factors of anthropogenic elements. After establishing these identification criteria, we conduct hierarchical cluster analysis for all validated aerosol measurement data over 68 IMPROVE sites in the Western United States. A total of 182 local dust events were identified over 30 of the 68 locations from 2000 to 2007. These locations are either close to the four US Deserts, namely the Great Basin Desert, the Mojave Desert, the Sonoran Desert, and the Chihuahuan Desert, or in the high wind power region (Colorado). During the eight-year study period, the total number of dust events displays an interesting four-year activity cycle (one in 2000-2003 and the other in 2004-2007). The years of 2003, 2002 and 2007 are the three most active dust periods, with 46, 31 and 24 recorded dust events, respectively, while the years of 2000, 2004 and 2005 are the calmest periods, all with single-digit dust records. Among these deserts, the Chihuahuan Desert (59 cases) and the Sonoran Desert (62 cases) are by far the most active.
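
    The first three indicators lend themselves to a simple rule-based flag, sketched below. All threshold values are illustrative placeholders, not the criteria calibrated in the study, and the enrichment-factor indicators are omitted:

```python
def is_local_dust_event(pm10, pm25, crustal, anthropogenic,
                        pm10_thresh=50.0, ratio_thresh=0.35,
                        crustal_frac_thresh=0.5):
    """Flag an aerosol sample as a local dust event in the spirit of the
    indicators above: high PM10, a low PM2.5/PM10 ratio, and mass
    dominated by crustal rather than anthropogenic elements.
    Concentrations are in consistent units (e.g. ug/m^3)."""
    ratio = pm25 / pm10
    crustal_frac = crustal / (crustal + anthropogenic)
    return (pm10 > pm10_thresh and ratio < ratio_thresh
            and crustal_frac > crustal_frac_thresh)
```

    A coarse-mode, crust-dominated sample such as `is_local_dust_event(120, 20, 30, 5)` is flagged, while a low-PM10 sample is not.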

  14. A hybrid non-reflective boundary technique for efficient simulation of guided waves using local interaction simulation approach

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Cesnik, Carlos E. S.

    2016-04-01

    Local interaction simulation approach (LISA) is a highly parallelizable numerical scheme for guided wave simulation in structural health monitoring (SHM). This paper addresses the issue of simulating wave propagation in an unbounded domain through the implementation of non-reflective boundaries (NRB) in LISA. In this study, two different categories of NRB, i.e., the non-reflective boundary condition (NRBC) and the absorbing boundary layer (ABL), have been investigated in the parallelized LISA scheme. For the implementation of NRBC, a set of general LISA equations considering the effect of boundary stress is obtained first. As a simple example, the Lysmer and Kuhlemeyer (L-K) model is applied here to demonstrate the ease of NRBC implementation in LISA. As a representative of ABL implementation, a LISA scheme incorporating absorbing layers with increasing damping (ALID) is also proposed, based on elasto-dynamic equations that include a damping term. Finally, an effective hybrid model combining the L-K and ALID methods in LISA is developed, and guidelines for implementing the hybrid model are presented. A case study on a three-dimensional plate model compares the performance of the hybrid method to that of the L-K and ALID methods acting independently. The simulation results demonstrate that the best absorbing efficiency is achieved with the hybrid method.

  15. The Local Maximum Clustering Method and Its Application in Microarray Gene Expression Data Analysis

    NASA Astrophysics Data System (ADS)

    Wu, Xiongwu; Chen, Yidong; Brooks, Bernard R.; Su, Yan A.

    2004-12-01

    An unsupervised data clustering method, called the local maximum clustering (LMC) method, is proposed for identifying clusters in experiment data sets based on research interest. A magnitude property is defined according to research purposes, and data sets are clustered around each local maximum of the magnitude property. By properly defining a magnitude property, this method can overcome many difficulties in microarray data clustering such as reduced projection of similarities, noise, and arbitrary gene distributions. To critically evaluate the performance of this clustering method in comparison with other methods, we designed three model data sets with known cluster distributions and applied the LMC method as well as the hierarchical clustering method, the k-means clustering method, and the self-organized map method to these model data sets. The results show that the LMC method produces the most accurate clustering results. As an example of application, we applied the method to cluster the leukemia samples reported in the microarray study of Golub et al. (1999).
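
    One plausible reading of "clustering around local maxima" is a hill-climbing assignment: each point hops to the highest-magnitude point within a neighborhood until it reaches a local maximum, and points sharing a maximum form one cluster. This is a hedged sketch of that idea, not the paper's exact algorithm; the data and radius are hypothetical:

```python
import numpy as np

def local_maximum_clusters(points, magnitude, radius):
    """Cluster data around local maxima of a user-defined magnitude
    property: each point repeatedly hops to the highest-magnitude
    point within `radius` of its current position, and points that
    reach the same local maximum form one cluster. Returns the index
    of each point's local maximum."""
    points = np.asarray(points, dtype=float)
    magnitude = np.asarray(magnitude, dtype=float)
    labels = []
    for i in range(len(points)):
        cur = i
        while True:
            diff = points - points[cur]
            d = np.abs(diff).sum(axis=1) if points.ndim > 1 else np.abs(diff)
            nbrs = np.where(d <= radius)[0]
            best = int(nbrs[np.argmax(magnitude[nbrs])])
            if best == cur:
                break
            cur = best
        labels.append(cur)
    return labels
```

    Two well-separated groups with one magnitude peak each yield two clusters labeled by their peak indices.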

  16. Source localization approach for functional DOT using MUSIC and FDR control.

    PubMed

    Jung, Jin Wook; Lee, Ok Kyun; Ye, Jong Chul

    2012-03-12

    In this paper, we formulate diffuse optical tomography (DOT) problems as a source localization problem and propose a MUltiple SIgnal Classification (MUSIC) algorithm for functional brain imaging application. By providing MUSIC spectra for major chromophores such as oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR), we are able to investigate the spatial distribution of brain activities. Moreover, the false discovery rate (FDR) algorithm can be applied to control the family-wise error in the MUSIC spectra. The minimum distance between the center of mass in DOT and the Montreal Neurological Institute (MNI) coordinates of target regions in experiments was between approximately 6 and 18 mm, and the displacement of the center of mass in DOT and fMRI ranged between 12 and 28 mm, which demonstrate the legitimacy of the DOT-based imaging. The proposed brain mapping method revealed its potential as an alternative algorithm to monitor the brain activation. PMID:22418510
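
    The generic MUSIC machinery underlying such a spectrum can be sketched as follows; the steering matrix here is a hypothetical detector-sensitivity model, not the DOT forward model used by the authors:

```python
import numpy as np

def music_spectrum(measurements, steering, n_sources):
    """Generic MUSIC pseudospectrum: eigendecompose the sample
    covariance of the measurements, keep the noise-subspace
    eigenvectors, and score each candidate location by the reciprocal
    squared projection of its (normalized) steering vector onto that
    subspace. True source locations give large values."""
    R = measurements @ measurements.conj().T / measurements.shape[1]
    eigvals, V = np.linalg.eigh(R)        # eigenvalues in ascending order
    En = V[:, :-n_sources]                # noise-subspace eigenvectors
    a = steering / np.linalg.norm(steering, axis=0)
    return 1.0 / np.linalg.norm(En.conj().T @ a, axis=0) ** 2
```

    With a single simulated source, the spectrum peaks at the candidate location whose steering vector generated the data.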

  17. An approach to the damping of local modes of oscillations resulting from large hydraulic transients

    SciTech Connect

    Dobrijevic, D.M.; Jankovic, M.V.

    1999-09-01

    A new method of damping of local modes of oscillations under large disturbance is presented in this paper. A digital governor controller is used. The controller operates in real time to improve the generating unit transients through the guide vane position and the runner blade position. The developed digital governor controller, whose control signals are adjusted using on-line measurements, offers better damping of the generator oscillations under large disturbances than the conventional controller. Digital simulations of a hydroelectric power plant equipped with a low-head Kaplan turbine are performed, and comparisons between the digital governor control and the conventional governor control are presented. Simulation results show that the new controller offers better performance than the conventional controller when the system is subjected to large disturbances.

  18. Local moment approach as a quantum impurity solver for the Hubbard model

    NASA Astrophysics Data System (ADS)

    Barman, Himadri

    2016-07-01

    The local moment approach (LMA) has presented itself as a powerful semianalytical quantum impurity solver (QIS) in the context of the dynamical mean-field theory (DMFT) for the periodic Anderson model and it correctly captures the low-energy Kondo scale for the single impurity model, having excellent agreement with the Bethe ansatz and numerical renormalization group (NRG) results. However, the most common correlated lattice model, the Hubbard model, has not been explored well within the LMA+DMFT framework beyond the insulating phase. Here, within this framework, we complete the filling-interaction phase diagram of the single-band Hubbard model at zero temperature. Our formalism is generic to any particle filling and can be extended to finite temperature. We contrast our results with another QIS, namely iterated perturbation theory (IPT), and show that the second spectral moment sum rule is satisfied increasingly well in the LMA as the Hubbard interaction strength grows, whereas it severely breaks down after the Mott transition in IPT. For the metallic case, the Fermi liquid (FL) scaling agreement with the NRG spectral density supports the fact that the FL scale emerges from the inherent Kondo physics of the impurity model. We also show that, in the metallic phase, the FL scaling of the spectral density leads to universality which extends over an infinite frequency range at infinite correlation strength (strong coupling). At large interaction strength, the off half-filling spectral density forms a pseudogap near the Fermi level and a filling-controlled Mott transition occurs as one approaches half-filling. As a response property, we finally study the zero temperature optical conductivity and find universal features such as an absorption peak position governed by the FL scale and a doping-independent crossing point, often dubbed the isosbestic point in experiments.

  19. A Multiscale Constraints Method for Localization of 3D Facial Feature Points

    PubMed Central

    Li, Hong-an; Zhang, Yongxin; Li, Zhanli; Li, Huilin

    2015-01-01

    It is an important task to locate facial feature points due to the widespread application of 3D human face models in medical fields. In this paper, we propose a 3D facial feature point localization method that combines the relative angle histograms with multiscale constraints. Firstly, the relative angle histogram of each vertex in a 3D point distribution model is calculated; then the cluster set of the facial feature points is determined using the cluster algorithm. Finally, the feature points are located precisely according to multiscale integral features. The experimental results show that the feature point localization accuracy of this algorithm is better than that of the localization method using the relative angle histograms. PMID:26539244

  20. Locally Embedding Autoencoders: A Semi-Supervised Manifold Learning Approach of Document Representation

    PubMed Central

    Wei, Chao; Luo, Senlin; Ma, Xincheng; Ren, Hao; Zhang, Ji; Pan, Limin

    2016-01-01

    Topic models and neural networks can discover meaningful low-dimensional latent representations of text corpora; as such, they have become a key technology of document representation. However, such models presume all documents are non-discriminatory, resulting in latent representation dependent upon all other documents and an inability to provide discriminative document representation. To address this problem, we propose a semi-supervised manifold-inspired autoencoder to extract meaningful latent representations of documents, taking the local perspective that the latent representation of nearby documents should be correlative. We first determine the discriminative neighbors set with Euclidean distance in observation spaces. Then, the autoencoder is trained by joint minimization of the Bernoulli cross-entropy error between input and output and the sum of the square error between neighbors of input and output. The results of two widely used corpora show that our method yields at least a 15% improvement in document clustering and a nearly 7% improvement in classification tasks compared to comparative methods. The evidence demonstrates that our method can readily capture more discriminative latent representation of new documents. Moreover, some meaningful combinations of words can be efficiently discovered by activating features that promote the comprehensibility of latent representation. PMID:26784692
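
    The stated joint objective admits a simple reading: reconstruct the document under Bernoulli cross-entropy while penalizing the squared reconstruction error on its discriminative neighbors. The sketch below is only a loss function under that reading (no network or training loop), and the weighting `alpha` is an assumption not taken from the paper:

```python
import numpy as np

def joint_loss(x, x_hat, nbr_x, nbr_x_hat, alpha=0.5):
    """Joint objective (sketch): Bernoulli cross-entropy between a
    document's binary bag-of-words input x and its reconstruction
    x_hat, plus a squared-error term tying a discriminative neighbor's
    reconstruction to its input, pushing nearby documents toward
    correlated latent representations."""
    eps = 1e-12
    bce = -np.sum(x * np.log(x_hat + eps) + (1 - x) * np.log(1 - x_hat + eps))
    neighbor = np.sum((nbr_x - nbr_x_hat) ** 2)
    return bce + alpha * neighbor
```

    When the neighbor is reconstructed perfectly, the loss reduces to the plain cross-entropy term.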

  1. A novel approach for Milne's phase-amplitude method

    NASA Astrophysics Data System (ADS)

    Simbotin, I.; Shu, D.; Côté, R.

    2016-05-01

    We have uncovered a linear equation for the envelope function, fully equivalent to Milne's original non-linear equation, and have implemented a highly accurate and efficient numerical method for computing the envelope and the associated phase. Consequently, we obtain a high precision parametrization of the wavefunction within a very economical approach. The key ingredients are: (i) straightforward optimization for smoothness, and (ii) Chebyshev polynomials as the workhorse for solving integro/differential equations. The latter also provide built-in interpolation, and allow for developing numerical tools that are robust, accurate, and convenient. Partial support from the US Army Research Office (Grant No. W911NF-13-1-0213), and from NSF (Grant No. PHY-1415560).
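
    For context, the textbook Milne phase-amplitude parametrization that this linear reformulation is equivalent to (not the authors' new equation itself) reads:

```latex
\psi(x) = \alpha(x)\,\sin\bigl(\phi(x) + \delta\bigr),
\qquad \phi'(x) = \frac{1}{\alpha^{2}(x)},
\qquad \alpha''(x) + k^{2}(x)\,\alpha(x) = \frac{1}{\alpha^{3}(x)},
```

    where \alpha is the envelope, \phi the accumulated phase, k(x) the local wavenumber, and the last relation is Milne's non-linear equation for the envelope.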

  2. Arbitrary Lagrangian-Eulerian Method with Local Structured Adaptive Mesh Refinement for Modeling Shock Hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliott, N S

    2001-10-22

    A new method that combines staggered grid Arbitrary Lagrangian-Eulerian (ALE) techniques with structured local adaptive mesh refinement (AMR) has been developed for solution of the Euler equations. This method facilitates the solution of problems currently at and beyond the reach of traditional ALE methods by focusing computational resources where they are required through dynamic adaptation. Many of the core issues involved in the development of the combined ALE-AMR method hinge upon the integration of AMR with a staggered grid Lagrangian integration method. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. Numerical examples are presented which demonstrate the accuracy and efficiency of the method.

  3. Local Mesh Refinement in the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Chang, Sin-Chung; Wu, Yuhui; Wang, Xiao-Yen; Yang, Vigor

    2000-01-01

    A local mesh refinement procedure for the CE/SE method that does not require iteration in the treatment of grid-to-grid communication is described. It is shown that a refinement ratio higher than ten can be applied successfully across a single coarse-grid/fine-grid interface.

  4. A global learning with local preservation method for microarray data imputation.

    PubMed

    Chen, Ye; Wang, Aiguo; Ding, Huitong; Que, Xia; Li, Yabo; An, Ning; Jiang, Lili

    2016-10-01

    Microarray data suffer from missing values for various reasons, including insufficient resolution, image noise, and experimental errors. Because missing values can hinder downstream analysis steps that require complete data as input, it is crucial to be able to estimate the missing values. In this study, we propose a Global Learning with Local Preservation method (GL2P) for imputation of missing values in microarray data. GL2P consists of two components: a local similarity measurement module and a global weighted imputation module. The former uses a local structure preservation scheme to exploit as much information as possible from the observable data, and the latter is responsible for estimating the missing values of a target gene by considering all of its neighbors rather than a subset of them. Furthermore, GL2P imputes the missing values in ascending order according to the rate of missing data for each target gene, to fully utilize previously estimated values. To validate the proposed method, we conducted extensive experiments on six benchmark microarray datasets. We compared GL2P with eight state-of-the-art imputation methods in terms of four performance metrics. The experimental results indicate that GL2P outperforms its competitors in imputation accuracy and better preserves the structure of differentially expressed genes. In addition, GL2P is less sensitive to the number of neighbors than other local learning-based imputation methods.
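
    The two ideas in the abstract (weighting all donor genes rather than a fixed k-subset, and filling targets in ascending order of missing rate) can be sketched as below. The inverse-distance weights and toy data are illustrative assumptions, not the published GL2P formulation.

```python
import numpy as np

# Toy weighted imputation: every gene with an observed value contributes,
# weighted by its similarity to the target gene on co-observed samples.
def impute_weighted(X):
    X = X.copy()
    missing = np.isnan(X)
    # Fill target genes (rows) in ascending order of missing rate, so later
    # estimates can reuse earlier ones, as the abstract describes.
    order = np.argsort(missing.sum(axis=1))
    for i in order:
        for j in np.where(missing[i])[0]:
            donors = ~np.isnan(X[:, j])
            donors[i] = False
            w = np.zeros(X.shape[0])
            for g in np.where(donors)[0]:
                both = ~np.isnan(X[i]) & ~np.isnan(X[g])
                if both.sum() == 0:
                    continue
                d2 = np.mean((X[i, both] - X[g, both]) ** 2)
                w[g] = 1.0 / (d2 + 1e-8)       # closer genes weigh more
            if w.sum() > 0:
                X[i, j] = np.dot(w, np.nan_to_num(X[:, j])) / w.sum()
    return X

# Rows are genes, columns are samples; gene 0 is missing one entry.
X = np.array([[1.0, 2.0, np.nan, 4.0],
              [1.1, 2.1, 3.1, 4.1],
              [0.9, 1.9, 2.9, 3.9]])
X_hat = impute_weighted(X)
```

    Here both donor genes are equally similar to the target, so the filled value lands midway between their observations in the missing column.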

  5. Individual tree crown delineation using localized contour tree method and airborne LiDAR data in coniferous forests

    NASA Astrophysics Data System (ADS)

    Wu, Bin; Yu, Bailang; Wu, Qiusheng; Huang, Yan; Chen, Zuoqi; Wu, Jianping

    2016-10-01

    Individual tree crown delineation is of great importance for forest inventory and management. The increasing availability of high-resolution airborne light detection and ranging (LiDAR) data makes it possible to delineate the crown structure of individual trees and deduce their geometric properties with high accuracy. In this study, we developed an automated segmentation method that is able to fully utilize high-resolution LiDAR data for detecting, extracting, and characterizing individual tree crowns with a multitude of geometric and topological properties. The proposed approach captures the topological structure of the forest and quantifies the topological relationships of tree crowns using a graph-theory-based localized contour tree method, and finally segments individual tree crowns by analogy with recognizing hills on a topographic map. The approach consists of five key technical components: (1) derivation of a canopy height model from airborne LiDAR data; (2) generation of contours based on the canopy height model; (3) extraction of the hierarchical structure of tree crowns using the localized contour tree method; (4) delineation of individual tree crowns by segmenting the hierarchical crown structure; and (5) calculation of geometric and topological properties of individual trees. We applied our new method to the Medicine Bow National Forest southwest of Laramie, Wyoming, and the HJ Andrews Experimental Forest in the central portion of the Cascade Range of Oregon, U.S. The results show that the overall accuracy of individual tree crown delineation for the two study areas reached 94.21% and 75.07%, respectively. Our method holds great potential for segmenting individual tree crowns under various forest conditions. Furthermore, the geometric and topological attributes derived from our method provide comprehensive and essential information for forest management.
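
    The "hills on a topographic map" intuition behind step (3) can be illustrated on a toy canopy height model: each crown apex is a local maximum of the surface. A moving-window maximum filter stands in for the full contour-tree analysis here, and the two Gaussian "crowns" are synthetic.

```python
import numpy as np
from scipy import ndimage

# Toy canopy height model (CHM): two Gaussian "crowns" on a flat floor.
yy, xx = np.mgrid[0:50, 0:50]
chm = (8.0 * np.exp(-(((yy - 15)**2 + (xx - 15)**2) / 30.0))
       + 6.0 * np.exp(-(((yy - 35)**2 + (xx - 34)**2) / 20.0)))

# A crown apex is a pixel equal to the maximum of its 7x7 neighborhood;
# the height threshold discards the near-zero floor. (The published method
# builds a contour tree instead; this is only the basic intuition.)
is_peak = (chm == ndimage.maximum_filter(chm, size=7)) & (chm > 2.0)
tops = np.argwhere(is_peak)
```

    In the real pipeline the contour tree then groups nested contours under each apex, which is what separates touching crowns.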

  6. Baryon states with open beauty in the extended local hidden gauge approach

    NASA Astrophysics Data System (ADS)

    Liang, W. H.; Xiao, C. W.; Oset, E.

    2014-03-01

    In this paper, we examine the interaction of bar{B}N, bar{B}Δ, bar{B}^*N, and bar{B}^*Δ states, together with their coupled channels, by using a mapping from the light meson sector. The assumption that the heavy quarks act as spectators at the quark level automatically leads us to the results of heavy quark spin symmetry for pion exchange and reproduces the results of the Weinberg-Tomozawa term, coming from light vector exchanges in the extended local hidden gauge approach. With this dynamics we look for states dynamically generated from the interaction and find two states with nearly zero width, which we associate with the Λb(5912) and Λb(5920) states. The states couple mostly to bar{B}^*N, which are degenerate under the Weinberg-Tomozawa interaction. The difference of masses between these two states, with J = 1/2 and 3/2, respectively, is due to pion exchange connecting them to intermediate bar{B}N states. In addition to these two Λb states, we find three more states with I = 0, one of them nearly degenerate in two states of J = 1/2, 3/2. Furthermore, we also find eight more states with I = 1, two of them degenerate in J = 1/2, 3/2, and another two degenerate in J = 1/2, 3/2, 5/2.

  7. Baryon states with hidden charm in the extended local hidden gauge approach

    NASA Astrophysics Data System (ADS)

    Uchino, T.; Liang, Wei-Hong; Oset, E.

    2016-03-01

    The s-wave interaction of bar{D}Λ_c, bar{D}Σ_c, bar{D}^*Λ_c, bar{D}^*Σ_c, bar{D}Σ_c^*, and bar{D}^*Σ_c^* is studied within a unitary coupled-channels scheme in the extended local hidden gauge approach. In addition to the Weinberg-Tomozawa term, several additional diagrams involving pion exchange are taken into account as box potentials. Furthermore, in order to implement the full coupled-channels calculation, some of the box potentials which mix the vector-baryon and pseudoscalar-baryon sectors are extended to construct effective transition potentials. As a result, we observe six possible states in several angular momenta. Four of them correspond to two pairs of admixture states: two of bar{D}Σ_c - bar{D}^*Σ_c with J = 1/2, and two of bar{D}Σ_c^* - bar{D}^*Σ_c^* with J = 3/2. Moreover, we find a bar{D}^*Σ_c resonance which couples to the bar{D}Λ_c channel and one spin-degenerate bound state of bar{D}^*Σ_c^* with J = 1/2, 5/2.

  9. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    SciTech Connect

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.; Catalyurek, Umit V.; Kurc, Tahsin; Sadayappan, Ponnuswamy; Saltz, Joel H.

    2009-08-01

    Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task and data parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective execution model. In this paper, we present an algorithm to compute the appropriate mix of task and data parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications as well as synthetic graphs shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task-parallel and pure data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than those of other scheduling approaches.
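
    One classic makespan lower bound of the kind the abstract compares against is the critical path of the task graph, i.e. the longest chain of task runtimes along dependences. A minimal sketch, with an illustrative DAG and runtimes:

```python
import functools

# Toy task DAG: runtime of each task (seconds) and its predecessors.
runtime = {"a": 3.0, "b": 2.0, "c": 4.0, "d": 1.0}
preds = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]}

@functools.lru_cache(maxsize=None)
def finish(task):
    # Earliest finish time with unlimited processors: own runtime plus the
    # latest-finishing predecessor.
    return runtime[task] + max((finish(p) for p in preds[task]), default=0.0)

# The critical path bounds the makespan of any schedule from below.
critical_path = max(finish(t) for t in runtime)
```

    Here the chain a → c → d (3 + 4 + 1 s) dominates, so no schedule can finish in under 8 s regardless of processor count.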

  10. Acoustic flight tests of rotorcraft noise-abatement approaches using local differential GPS guidance

    NASA Technical Reports Server (NTRS)

    Chen, Robert T. N.; Hindson, William S.; Mueller, Arnold W.

    1995-01-01

    This paper presents the test design, instrumentation set-up, data acquisition, and the results of an acoustic flight experiment to study how noise due to blade-vortex interaction (BVI) may be alleviated. The flight experiment was conducted using the NASA/Army Rotorcraft Aircrew Systems Concepts Airborne Laboratory (RASCAL) research helicopter. A Local Differential Global Positioning System (LDGPS) was used for precision navigation and cockpit display guidance. A laser-based rotor state measurement system on board the aircraft was used to measure the main rotor tip-path-plane angle-of-attack. Tests were performed at Crows Landing Airfield in northern California with an array of microphones similar to that used in the standard ICAO/FAA noise certification test. The methodology used in the design of a RASCAL-specific, multi-segment, decelerating approach profile for BVI noise abatement is described, and the flight data pertaining to the flight technical errors and the acoustic data for assessing the noise reduction effectiveness are reported.

  11. An experimental verification of the local circulation method for a horizontal axis wind turbine

    SciTech Connect

    Nasu, K.I.; Azuma, A.

    1983-08-01

    The Local Circulation Method (LCM) was developed by the authors as a useful method for predicting rotary-wing unsteady aerodynamics. To examine the validity of the LCM empirically, an experimental test of a horizontal-axis wind turbine was conducted in a low-speed wind tunnel, and the results were compared with computational predictions from the LCM. The experiment covers everything from static performance to time-varying airloading of the wing in yawed wind.

  12. Analysing clinical reasoning characteristics using a combined methods approach

    PubMed Central

    2013-01-01

    Background Despite a major research focus on clinical reasoning over the last several decades, a method of evaluating the clinical reasoning process that is both objective and comprehensive is yet to be developed. The aim of this study was to test whether a dual approach, using two measures of clinical reasoning, the Clinical Reasoning Problem (CRP) and the Script Concordance Test (SCT), provides a valid, reliable and targeted analysis of clinical reasoning characteristics to facilitate the development of diagnostic thinking in medical students. Methods Three groups of participants (general practitioners, and third- and fourth-(final-)year medical students) completed 20 online clinical scenarios: 10 in CRP and 10 in SCT format. Scores for each format were analysed for reliability, correlation between the two formats, and differences between subject groups. Results Cronbach's alpha coefficient ranged from 0.36 for SCT 1 to 0.61 for CRP 2. Statistically significant correlations were found between the mean f-score of CRP 2 and the total SCT 2 score (0.69), and between the mean f-score for all CRPs and all mean SCT scores (0.57 and 0.47, respectively). The pass/fail rates of the SCT and the CRP f-score are in keeping with the findings from the correlation analysis (31% of students (11/35) passed both, 26% failed both, and 43% (15/35) passed one but not the other), and suggest that the two formats measure overlapping but not identical characteristics. One-way ANOVA showed consistent differences in scores between levels of expertise, with these differences being significant or approaching significance for the CRPs. Conclusion SCTs and CRPs are overlapping and complementary measures of clinical reasoning. Whilst SCTs are more efficient to administer, the use of both measures provides a more comprehensive appraisal of clinical skills than either single measure alone, and as such could potentially facilitate the customised teaching of clinical reasoning for
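
    Cronbach's alpha, the reliability statistic quoted above, is straightforward to compute from item-level scores: alpha = k/(k-1) · (1 − Σ item variances / variance of total score). The respondent data below are invented for illustration.

```python
import numpy as np

def cronbach_alpha(items):
    # items: (respondents, items) score matrix.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

# Toy data: 4 respondents answering 3 items on a 1-5 scale.
scores = [[2, 3, 3],
          [4, 4, 5],
          [1, 2, 2],
          [3, 3, 4]]
alpha = cronbach_alpha(scores)
```

    Values in the 0.36-0.61 range reported in the abstract would indicate much weaker internal consistency than this deliberately well-correlated toy set.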

  13. Biodiversity Monitoring at the Tonle Sap Lake of Cambodia: A Comparative Assessment of Local Methods

    NASA Astrophysics Data System (ADS)

    Seak, Sophat; Schmidt-Vogt, Dietrich; Thapa, Gopal B.

    2012-10-01

    This paper assesses local biodiversity monitoring methods practiced at the Tonle Sap Lake of Cambodia. For the assessment we used the following criteria: methodological rigor, perceived cost, ease of use (user friendliness), compatibility with existing activities, and effectiveness of intervention. Constraints and opportunities for execution of the methods were also considered. Information was collected through (1) key informant interviews, (2) focus group discussions, and (3) researcher observation. The monitoring methods for fish, birds, reptiles, mammals and vegetation practiced in the research area each have their own characteristics for generating data on biodiversity and biological resources. Most of the methods, however, serve the purpose of monitoring biological resources rather than biodiversity. The information gained through local monitoring methods has the potential to provide input for long-term management and strategic planning. In order to realize this potential, the local monitoring methods should be better integrated with each other, adjusted to existing norms and regulations, and institutionalized within community-based organization structures.

  14. Hierarchical Leak Detection and Localization Method in Natural Gas Pipeline Monitoring Sensor Networks

    PubMed Central

    Wan, Jiangwen; Yu, Yang; Wu, Yinfeng; Feng, Renjian; Yu, Ning

    2012-01-01

    In light of the problems of low recognition efficiency, high false rates and poor localization accuracy in traditional pipeline security detection technology, this paper proposes a type of hierarchical leak detection and localization method for use in natural gas pipeline monitoring sensor networks. In the signal preprocessing phase, original monitoring signals are dealt with by wavelet transform technology to extract the single mode signals as well as characteristic parameters. In the initial recognition phase, a multi-classifier model based on SVM is constructed and characteristic parameters are sent as input vectors to the multi-classifier for initial recognition. In the final decision phase, an improved evidence combination rule is designed to integrate initial recognition results for final decisions. Furthermore, a weighted average localization algorithm based on time difference of arrival is introduced for determining the leak point’s position. Experimental results illustrate that this hierarchical pipeline leak detection and localization method could effectively improve the accuracy of the leak point localization and reduce the undetected rate as well as false alarm rate. PMID:22368464
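
    The time-difference-of-arrival step in the final phase can be sketched for a 1-D pipeline: for a sensor pair at positions x_a < x_b that bracket the leak, arrivals satisfy t_a − t_b = (2x − x_a − x_b)/v, so x = (x_a + x_b + v·(t_a − t_b))/2. The equal-weight averaging and synthetic numbers below are simplifications of the paper's weighted-average algorithm.

```python
import numpy as np

def locate_leak(sensors, arrivals, v):
    # sensors: positions along the pipe (sorted, meters); arrivals: seconds.
    sensors = np.asarray(sensors, dtype=float)
    arrivals = np.asarray(arrivals, dtype=float)
    estimates = []
    n = len(sensors)
    for a in range(n):
        for b in range(a + 1, n):
            dt = arrivals[a] - arrivals[b]
            # A pair that does not bracket the leak gives |v*dt| equal to the
            # sensor spacing; discard it (small tolerance for float arrivals).
            if abs(v * dt) < (sensors[b] - sensors[a]) - 1e-6:
                estimates.append(0.5 * (sensors[a] + sensors[b] + v * dt))
    return float(np.mean(estimates))

# Synthetic check: leak at 730 m, acoustic speed 340 m/s.
v, x_true = 340.0, 730.0
sensors = [0.0, 500.0, 1000.0]
arrivals = [abs(x_true - s) / v for s in sensors]
x_hat = locate_leak(sensors, arrivals, v)
```

    With noisy arrivals one would weight each pair estimate (e.g. by signal quality) instead of averaging equally, which is where the paper's weighting scheme comes in.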

  15. An Integrated Approach to Supplying the Local Table: Perceptions of Consumers, Producers, and Restaurateurs

    ERIC Educational Resources Information Center

    Wise, Dena; Sneed, Christopher; Velandia, Margarita; Berry, Ann; Rhea, Alice; Fairhurst, Ann

    2013-01-01

    The Local Table project compared results from parallel surveys of consumers and restaurateurs regarding local food purchasing and use. Results were also compared with producers' perception of, capacity for and participation in direct marketing through local venues, on-farm outlets, and restaurants. The surveys found consumers' and…

  16. Waves on Thin Plates: A New (Energy Based) Method on Localization

    NASA Astrophysics Data System (ADS)

    Turkaya, Semih; Toussaint, Renaud; Kvalheim Eriksen, Fredrik; Lengliné, Olivier; Daniel, Guillaume; Grude Flekkøy, Eirik; Jørgen Måløy, Knut

    2016-04-01

    Noisy acoustic signal localization is a difficult problem with a wide range of applications. We propose a new localization method for thin plates based on energy amplitude attenuation and inverse source-amplitude comparison. The inversion is tested on synthetic data using a direct model of Lamb wave propagation and on an experimental dataset (recorded with 4 Brüel & Kjær Type 4374 miniature piezoelectric shock accelerometers, 1-26 kHz frequency range). We compare the performance of this technique with classical source localization algorithms: arrival-time localization, time-reversal localization, and localization based on energy amplitude. The experimental setup consists of a glass or plexiglass plate of 80 cm x 40 cm x 1 cm equipped with four accelerometers and an acquisition card. Signals are generated by a steel, glass or polyamide ball (of various sizes) hitting the plate quasi-perpendicularly from a height of 2-3 cm, and are captured by sensors placed at different locations on the plate. We measure and compare the accuracy of these techniques as a function of sampling rate, dynamic range, array geometry, signal-to-noise ratio and computational time. We show that this new, very versatile technique works better than conventional techniques over a range of sampling rates from 8 kHz to 1 MHz. It is possible to achieve decent resolution (3 cm mean error) using very inexpensive equipment. The numerical simulations allow us to track the contributions of different error sources in the different methods. The effect of reflections is also included in our simulation by using image sources outside the plate boundaries. The proposed method can easily be extended to three-dimensional environments, to monitor industrial activities (e.g., borehole drilling/production activities) or natural brittle systems (e.g., earthquakes, volcanoes, avalanches).
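
    The inverse source-amplitude comparison can be sketched as a grid search: for each candidate position, invert a source amplitude from every sensor under an assumed attenuation law and keep the position where the inverted amplitudes agree best. The 1/d attenuation model, sensor geometry, and amplitudes below are illustrative assumptions, not the paper's Lamb-wave model.

```python
import numpy as np

# Four sensors at the corners of an 0.8 m x 0.4 m plate (meters).
sensors = np.array([[0.0, 0.0], [0.8, 0.0], [0.8, 0.4], [0.0, 0.4]])
src_true = np.array([0.55, 0.25])
S_true = 1.0

# Simple geometric attenuation A = S / d generates the observed amplitudes.
amps = S_true / np.linalg.norm(sensors - src_true, axis=1)

# Grid search: invert S_i = A_i * d_i at each sensor and minimize the
# relative disagreement between the four inverted source amplitudes.
xs = np.linspace(0.01, 0.79, 79)
ys = np.linspace(0.01, 0.39, 39)
best, best_cost = None, np.inf
for x in xs:
    for y in ys:
        d = np.linalg.norm(sensors - np.array([x, y]), axis=1)
        S = amps * d                      # inverted source amplitudes
        cost = np.std(S) / np.mean(S)     # zero when the model fits exactly
        if cost < best_cost:
            best_cost, best = cost, (x, y)
```

    The attraction of this formulation is that it needs no arrival-time picking, which is exactly what fails on noisy signals.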

  17. A Computationally Efficient Meshless Local Petrov-Galerkin Method for Axisymmetric Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Chen, T.

    2003-01-01

    The Meshless Local Petrov-Galerkin (MLPG) method is one of the recently developed element-free methods. The method is convenient and can produce accurate results with continuous secondary variables, but is more computationally expensive than the finite element method. To overcome this disadvantage, a simple Heaviside test function is chosen. The computational effort is significantly reduced by eliminating the domain integral for axisymmetric potential problems and by simplifying the domain integral for axisymmetric elasticity problems. The method is evaluated through several patch tests for axisymmetric problems and through example problems for which exact solutions are available. The present method yielded very accurate solutions. The sensitivity of the method to several of its parameters is also studied.

  18. A novel tip-induced local electrical decomposition method for thin GeO films nanostructuring.

    PubMed

    Sheglov, D V; Gorokhov, E B; Volodin, V A; Astankova, K N; Latyshev, A V

    2008-06-18

    Decomposition of germanium monoxide (GeO) films under the impact of an atomic force microscope (AFM) tip was observed for the first time. GeO is known to be metastable in the solid phase and to decompose into Ge and GeO(2) under thermal annealing or irradiation. AFM tip treatments allow this decomposition to be carried out locally. A novel tip-induced local electrical decomposition (TILED) method for metastable GeO films has been developed. Using TILED on a 10 nm-thick GeO film, Ge nanowires were obtained on silicon substrates.

  19. Refinement of overlapping local/global iteration method based on Monte Carlo/p-CMFD calculations

    SciTech Connect

    Jo, Y.; Yun, S.; Cho, N. Z.

    2013-07-01

    In this paper, the overlapping local/global (OLG) iteration method based on Monte Carlo/p-CMFD calculations is refined in two aspects. One is the consistent use of estimators to generate homogenized scattering cross sections. Another is that the incident or exiting angular interval is divided into multi-angular bins to modulate albedo boundary conditions for local problems. Numerical tests show that, compared to the one angle bin case in a previous study, the four angle bin case shows significantly improved results. (authors)

  20. Automatic localization of endoscope in intraoperative CT image: A simple approach to augmented reality guidance in laparoscopic surgery.

    PubMed

    Bernhardt, Sylvain; Nicolau, Stéphane A; Agnus, Vincent; Soler, Luc; Doignon, Christophe; Marescaux, Jacques

    2016-05-01

    The use of augmented reality in minimally invasive surgery has been the subject of much research for more than a decade. The endoscopic view of the surgical scene is typically augmented with a 3D model extracted from a preoperative acquisition. However, the organs of interest often present major changes in shape and location because of the pneumoperitoneum and patient displacement. There have been numerous attempts to compensate for this distortion between the pre- and intraoperative states. Some have attempted to recover the visible surface of the organ through image analysis and register it to the preoperative data, but this has proven insufficiently robust and may be problematic with large organs. A second approach is to introduce an intraoperative 3D imaging system as a transition. Hybrid operating rooms are becoming more and more popular, so this seems to be a viable solution, but current techniques require yet another external and constraining piece of apparatus such as an optical tracking system to determine the relationship between the intraoperative images and the endoscopic view. In this article, we propose a new approach to automatically register the reconstruction from an intraoperative CT acquisition with the static endoscopic view, by locating the endoscope tip in the volume data. We first describe our method to localize the endoscope orientation in the intraoperative image using standard image processing algorithms. Secondly, we highlight that the axis of the endoscope needs a specific calibration process to ensure proper registration accuracy. In the last section, we present quantitative and qualitative results proving the feasibility and the clinical potential of our approach.

  2. The diffuse-scattering method for investigating locally ordered binary solid solutions

    SciTech Connect

    Epperson, J.E.; Anderson, J.P.; Chen, H. (Materials Science and Engineering Dept.)

    1994-01-01

    Diffuse-scattering investigations comprise a series of maturing methods for detailed characterization of the local-order structure and atomic displacements of binary alloy systems. The distribution of coherent diffuse scattering is determined by the local atomic ordering, and analytical techniques are available for extracting the relevant structural information. An extension of such structural investigations, for locally ordered alloys at equilibrium, allows one to obtain pairwise interaction energies. Having experimental pairwise interaction energies for the various coordination shells offers one the potential for more realistic kinetic Ising modeling of alloy systems as they relax toward equilibrium. Although the modeling of atomic displacements in conjunction with more conventional studies of chemical ordering is in its infancy, the method appears to offer considerable promise for revealing additional information about the strain fields in locally ordered and clustered alloys. The diffuse-scattering methods for structural characterization and for the recovery of interaction energies are reviewed, and some preliminary results are used to demonstrate the potential of the kinetic Ising modeling technique to follow the evolution of ordering or phase separation in an alloy system.

  3. A method for finding the optimal predictor indices for local wave climate conditions

    NASA Astrophysics Data System (ADS)

    Camus, Paula; Méndez, Fernando J.; Losada, Inigo J.; Menéndez, Melisa; Espejo, Antonio; Pérez, Jorge; Rueda, Ana; Guanche, Yanira

    2014-07-01

    In this study, a method to obtain local wave predictor indices that take into account the wave generation process is described and applied to several locations. The method is based on a statistical model that relates significant wave height to an atmospheric predictor defined from sea level pressure fields. The predictor is composed of a local part and a regional part, representing the sea and swell wave components, respectively. The spatial domain of the predictor is determined using the Evaluation of Source and Travel-time of wave Energy reaching a Local Area (ESTELA) method. The regional component of the predictor includes the recent historical atmospheric conditions responsible for the swell component at the target point, and therefore has a historical temporal coverage (n days) different from the daily coverage of the local component. Principal component analysis is applied to the daily predictor to detect the dominant variability patterns and their temporal coefficients. A multivariate regression model, fitted at the daily scale for different values of n, determines the optimum historical coverage of the regional predictor. The monthly wave predictor indices are then selected by applying a regression model to the monthly values of the principal components of the daily predictor, using the optimum temporal coverage for the regional predictor. The daily predictor can be used in wave climate projections, while the monthly predictor can help explain wave climate variability or long-term coastal morphodynamic anomalies.
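
    The PCA-plus-regression chain can be sketched on synthetic data: an EOF/PCA decomposition of a toy "sea level pressure" field, followed by a linear regression of significant wave height on the leading principal components. All data below are invented, and the ESTELA-based regional predictor is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_grid = 300, 40
patterns = rng.standard_normal((2, n_grid))        # two "weather patterns"
pcs_true = rng.standard_normal((n_days, 2))
slp = pcs_true @ patterns + 0.05 * rng.standard_normal((n_days, n_grid))
hs = 2.0 + 1.5 * pcs_true[:, 0] - 0.7 * pcs_true[:, 1]   # synthetic Hs

# PCA via SVD of the daily anomaly matrix.
anom = slp - slp.mean(axis=0)
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
pcs = U[:, :2] * s[:2]                             # leading temporal PCs

# Least-squares regression: hs ~ 1 + PC1 + PC2.
X = np.column_stack([np.ones(n_days), pcs])
beta, *_ = np.linalg.lstsq(X, hs, rcond=None)
r2 = 1 - np.sum((hs - X @ beta)**2) / np.sum((hs - hs.mean())**2)
```

    Because the synthetic wave height is linear in the underlying patterns, the fitted model recovers nearly all of its variance; real sea-state data would of course explain far less.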

  4. Discussions on equivalent solutions and localized structures via the mapping method based on Riccati equation

    NASA Astrophysics Data System (ADS)

    Xu, Ling; Cheng, Xuan; Dai, Chao-Qing

    2015-12-01

    Although the mapping method based on the Riccati equation was proposed to obtain variable separation solutions many years ago, two important problems have not been studied: (i) the equivalence of variable separation solutions obtained by the mapping method based on the Riccati equation with the radical-sign-combined ansatz; and (ii) the lack of physical meaning of some localized structures constructed from variable separation solutions. In this paper, we re-study the (2+1)-dimensional Boiti-Leon-Pempinelli equation via the mapping method based on the Riccati equation and prove that nine types of variable separation solutions are actually equivalent to each other. Moreover, we also re-examine the localized structures constructed from these solutions. The results indicate that some localized structures reported in the literature lack physical relevance because the initial field becomes divergent and unphysical. We must therefore choose the initial field carefully, to avoid unphysical or even divergent structures in the potential field when constructing localized structures.
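
    The building block of the mapping method is the Riccati equation φ' = a + bφ + cφ². A quick numerical check of two of its standard solutions for a = 1, b = 0, c = −1 (i.e. φ' = 1 − φ²): the bounded kink φ = tanh(ξ) and the singular branch φ = coth(ξ) on ξ > 0, which is the kind of solution that produces the divergent structures the abstract warns about.

```python
import numpy as np

# Bounded solution: phi = tanh(xi) satisfies phi' = 1 - phi^2 everywhere.
xi = np.linspace(-3.0, 3.0, 601)
phi = np.tanh(xi)
residual_tanh = np.max(np.abs(
    np.gradient(phi, xi, edge_order=2) - (1.0 - phi**2)))

# Singular solution: phi = coth(xi) also satisfies phi' = 1 - phi^2,
# but diverges as xi -> 0 (checked here safely away from the pole).
xi_p = np.linspace(1.0, 3.0, 201)
phi2 = 1.0 / np.tanh(xi_p)
residual_coth = np.max(np.abs(
    np.gradient(phi2, xi_p, edge_order=2) - (1.0 - phi2**2)))
```

    Both residuals are limited only by the finite-difference derivative; the coth branch is analytically valid yet blows up at the origin, which is why expansions built on it can yield unphysical initial fields.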

  5. A Historical Perspective on Local Environmental Movements in Japan: Lessons for the Transdisciplinary Approach on Water Resource Governance

    NASA Astrophysics Data System (ADS)

    Oh, T.

    2014-12-01

    Typical studies on natural resources from a social science perspective tend to choose one type of resource (water, for example) and ask what factors contribute to the sustainable use or wasteful exploitation of that resource. However, climate change and economic development, which are causing increased pressure on local resources and presenting communities with increased levels of trade-offs and potential conflicts, force us to consider the trade-offs between options for using a particular resource. Therefore, a transdisciplinary approach that accurately captures the advantages and disadvantages of various possible resource uses is particularly important in complex social-ecological systems, where concerns about inequality with respect to resource use and access have become unavoidable. Needless to say, resource management and policy require a sound scientific understanding of the complex interconnections between nature and society. However, in contrast to typical international discussions, I discuss Japan not as an "advanced" case where various dilemmas have been successfully addressed by the government through the optimal use of technology, but rather as a nation seeing an emerging trend based on an awareness of the connections between local resources and the environment. Furthermore, from a historical viewpoint, the nexus of local resources is not a brand-new idea in the Japanese experience of environmental governance. There exist local environmental movements that emphasized the interconnection of local resources and succeeded in prompting governmental action and policymaking. For this reason, local movements and local knowledge for resource governance warrant attention. This study focuses on historical cases relevant to water resource management, including groundwater, and considers the contexts and conditions needed to address local resource problems holistically, paying particular attention to interactions between science and society. I

  6. Electromagnetic analysis of diffractive lens with C method and local linear grating model

    NASA Astrophysics Data System (ADS)

    Xiao, Kai; Liu, Ying; Fu, Shaojun

    2005-02-01

    Electromagnetic theory must be applied to determine the diffraction efficiency of structures whose minimum line width is comparable to the wavelength or whose grooves are deep, where scalar theory is no longer valid. The coordinate transformation method (the C method) is a very efficient method for computing the efficiency of continuous surface-relief gratings for both TE and TM polarizations. The local linear grating model (LLGM) represents a 2-D circular diffractive lens as a combination of local linear gratings. We synthesize and analyze a circular diffractive lens with a continuous profile, unlike previous authors, who used multi-level structures. The result is compared with that of scalar theory and with an LLGM analysis based on rigorous coupled-wave theory. This optimization can serve as a complement to the scalar design of diffractive lenses.
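
    The scalar theory that serves as the baseline here can be sketched in a few lines: for a thin phase grating, the order-m efficiency is the squared modulus of the m-th Fourier coefficient of the transmission function exp(i·phi(x)). The snippet below is an illustrative scalar-theory baseline only (it is not the C method or LLGM); the profile and sampling are assumptions for the example.

```python
import numpy as np

def scalar_order_efficiencies(phase, orders):
    """Scalar-theory efficiencies |c_m|^2 of a thin phase grating, where c_m is
    the m-th Fourier coefficient of the transmission exp(i*phase(x))."""
    n = len(phase)
    x = np.arange(n) / n                       # one period, uniform samples
    trans = np.exp(1j * phase)
    return {m: abs(np.mean(trans * np.exp(-2j * np.pi * m * x))) ** 2
            for m in orders}

# Ideal blazed profile: a linear phase ramp of depth 2*pi over one period.
x = np.linspace(0.0, 1.0, 1024, endpoint=False)
eff = scalar_order_efficiencies(2.0 * np.pi * x, orders=[0, 1, 2])
```

    Under scalar theory, an ideal blaze of depth 2π puts essentially all energy into order +1; rigorous methods such as the C method predict lower efficiencies once the period approaches the wavelength, which is the regime the abstract addresses.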

  7. Density functional method including weak interactions: Dispersion coefficients based on the local response approximation

    NASA Astrophysics Data System (ADS)

    Sato, Takeshi; Nakai, Hiromi

    2009-12-01

    A new method to calculate the atom-atom dispersion coefficients in a molecule is proposed for use in density functional theory with dispersion (DFT-D) corrections. The method is based on the local response approximation of Dobson and Dinte [Phys. Rev. Lett. 76, 1780 (1996)], with the modified dielectric model recently proposed by Vydrov and van Voorhis [J. Chem. Phys. 130, 104105 (2009)]. The local response model is used to calculate the distributed multipole polarizabilities of atoms in a molecule, from which the dispersion coefficients are obtained by an explicit frequency integral of the Casimir-Polder type. The atomic polarizabilities thus obtained are also used in the damping function for the short-range singularity. Unlike empirical DFT-D methods, the local response dispersion (LRD) method calculates the dispersion energy from the ground-state electron density alone. It is applicable to any geometry, free from empirical constants such as van der Waals radii or atomic polarizabilities, and computationally very efficient. The LRD method combined with the long-range-corrected DFT functional LC-BOP is applied to the S22 set of weakly bound complexes [Phys. Chem. Chem. Phys. 8, 1985 (2006)]. Binding energies obtained by LC-BOP + LRD agree remarkably well with ab initio references.
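
    The Casimir-Polder integral mentioned above has the form C6 = (3/π) ∫₀^∞ α_A(iω) α_B(iω) dω. The sketch below evaluates it numerically for a one-oscillator (London-type) model polarizability, for which a closed form exists as a cross-check; the model and its parameters are illustrative assumptions, not the distributed polarizabilities of the LRD method.

```python
import numpy as np

def alpha_iw(w, alpha0, w0):
    # One-oscillator (London-type) polarizability at imaginary frequency i*w.
    return alpha0 / (1.0 + (w / w0) ** 2)

def c6_casimir_polder(a0_a, w_a, a0_b, w_b, w_max=200.0, n=200_001):
    # C6 = (3/pi) * integral_0^inf alpha_A(iw) * alpha_B(iw) dw, trapezoid rule.
    w = np.linspace(0.0, w_max, n)
    f = alpha_iw(w, a0_a, w_a) * alpha_iw(w, a0_b, w_b)
    dw = w[1] - w[0]
    return (3.0 / np.pi) * (f.sum() - 0.5 * (f[0] + f[-1])) * dw

def c6_london(a0_a, w_a, a0_b, w_b):
    # Closed form for the same one-oscillator model (London's formula).
    return 1.5 * a0_a * a0_b * w_a * w_b / (w_a + w_b)

# Toy parameters in atomic units (illustrative, not fitted to any element).
c6_num = c6_casimir_polder(1.38, 0.93, 1.38, 0.93)
c6_ref = c6_london(1.38, 0.93, 1.38, 0.93)
```

    The quadrature result agrees with London's closed form to well below 0.1% for these parameters, which is the kind of consistency check one would run before using tabulated polarizabilities in a DFT-D damping scheme.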

  8. Critical evaluation of the role of scientific analysis in UK local authority AQMA decision-making: method development and preliminary results.

    PubMed

    Woodfield, N K; Longhurst, J W S; Beattie, C I; Laxen, D P H

    2003-07-20

    Over the past 4 years, local government in the UK has undertaken a process of scientific review and assessment of air quality, which has culminated in a suite of designated air quality management areas (AQMAs) in over 120 of the 403 local authorities in England (including London), Scotland and Wales. Methods to identify specific pollution hot-spots have involved the use of advanced and complex air-quality dispersion modelling and monitoring techniques, and the UK government has provided guidance on both the general and technical methods for undertaking local air quality review and assessments. Approaches to implementing UK air quality policy through the local air quality management (LAQM) process (Air Quality Strategy 2000) have not been uniform across the UK, an inevitable consequence of non-prescriptive guidelines. This has led to a variety of outcomes with respect to how different tools and techniques have been applied, how scientific uncertainty has been interpreted, and how caution has been exercised. A technique to appraise the scientific approaches undertaken by local government and a survey of the local government officers involved in the LAQM process have been devised, and a conceptual model is proposed to identify the main influences in the process of determining AQMAs. Modelling tools, and the consideration of modelling uncertainty, error and model inputs, have played a significant role in AQMA decision-making in the majority of local authorities declaring AQMAs in the UK.

  9. Localization of extended brain sources from EEG/MEG: the ExSo-MUSIC approach.

    PubMed

    Birot, Gwénaël; Albera, Laurent; Wendling, Fabrice; Merlet, Isabelle

    2011-05-01

    We propose a new MUSIC-like method, called 2q-ExSo-MUSIC (q ≥ 1). This method extends the 2q-MUSIC (q ≥ 1) approach to the EEG/MEG inverse problem when spatially extended neocortical sources ("ExSo") are considered. It introduces a novel ExSo-MUSIC principle. The novelty is two-fold: (i) a parameterization of the spatial source distribution that leads to an appropriate metric in the context of distributed brain sources and (ii) an original, efficient and low-cost way of optimizing this metric. In 2q-ExSo-MUSIC, the possible use of higher-order statistics (q ≥ 2) offers better robustness to Gaussian noise of unknown spatial coherence and to modeling errors. As a result, we reduce the penalizing effects both of the background cerebral activity, which can be seen as Gaussian, spatially correlated noise, and of the modeling errors induced by the inexact resolution of the forward problem. Computer results on simulated EEG signals obtained with physiologically relevant models of both the sources and the volume conductor show a highly increased performance of our 2q-ExSo-MUSIC method as compared to the classical 2q-MUSIC algorithms.
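
    The classical second-order MUSIC principle that 2q-ExSo-MUSIC generalizes can be illustrated on a toy problem: build the data covariance, split its eigenvectors into signal and noise subspaces, and scan candidate source topographies for the one most orthogonal to the noise subspace. The leadfield, noise level, and single-dipole setup below are assumptions for the sketch; the paper's contributions (extended-source parameterization and higher-order statistics) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: m sensors, p candidate source locations with unit-norm topographies.
m, p, t = 16, 40, 2000
L = rng.standard_normal((m, p))
L /= np.linalg.norm(L, axis=0)

true_idx = 7
s = rng.standard_normal(t)                      # source time course
x = np.outer(L[:, true_idx], s) + 0.05 * rng.standard_normal((m, t))

# Second-order MUSIC: a candidate topography nearly orthogonal to the noise
# subspace of the data covariance marks the estimated source location.
C = x @ x.T / t
eigvals, V = np.linalg.eigh(C)                  # eigenvalues ascending
noise_sub = V[:, :-1]                           # all but largest (one source assumed)
cost = np.linalg.norm(noise_sub.T @ L, axis=0)  # small at the true topography
est_idx = int(np.argmin(cost))
```

    Spatially correlated Gaussian background activity breaks the white-noise assumption behind this covariance split, which is the motivation for moving to higher-order (q ≥ 2) statistics in the paper.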

  10. A "local" exponential transform method for global variance reduction in Monte Carlo transport problems

    SciTech Connect

    Baker, R.S.; Larsen, E.W. (Dept. of Nuclear Engineering)

    1992-01-01

    Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend to be less effective in Monte Carlo calculations that require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has previously been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters that minimize the maximum variance in a Monte Carlo transport calculation.
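
    The conventional exponential transform that the LET method localizes can be demonstrated on the simplest rare-event problem: estimating the transmission exp(-Σd) through a slab. Flight distances are sampled from a biased (stretched) exponential, and each surviving history carries the ratio of analog to biased survival probabilities as a weight, so the estimator stays unbiased while its variance drops. The cross-section, depth, and stretching factor below are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

sigma, depth, n = 1.0, 5.0, 20_000     # total cross-section, slab depth, histories
exact = np.exp(-sigma * depth)         # analytic transmission probability

def transmission(biased_sigma):
    """Score slab transmission with flight distances sampled at rate `biased_sigma`.

    Histories that survive past `depth` carry the exponential-transform weight
    (analog survival probability) / (biased survival probability)."""
    d = rng.exponential(1.0 / biased_sigma, n)
    weight = np.exp(-(sigma - biased_sigma) * depth)
    return (d > depth).mean() * weight

analog = transmission(sigma)           # unbiased, but transmission is a rare event
stretched = transmission(0.2 * sigma)  # path-length stretching toward the far side
```

    With the stretched sampling, roughly a third of the histories reach the far side instead of fewer than one percent, which is exactly the trade the LET method tunes cell by cell using the adjoint (or, in this paper, forward) solution.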

  11. A "local" exponential transform method for global variance reduction in Monte Carlo transport problems

    SciTech Connect

    Baker, R.S.; Larsen, E.W.

    1992-08-01

    Numerous variance reduction techniques, such as splitting/Russian roulette, weight windows, and the exponential transform, exist for improving the efficiency of Monte Carlo transport calculations. Typically, however, these methods, while reducing the variance in the problem area of interest, tend to increase the variance in other, presumably less important, regions. As such, these methods tend to be less effective in Monte Carlo calculations that require the minimization of the variance everywhere. Recently, "Local" Exponential Transform (LET) methods have been developed as a means of approximating the zero-variance solution. A numerical solution to the adjoint diffusion equation is used, along with an exponential representation of the adjoint flux in each cell, to determine "local" biasing parameters. These parameters are then used to bias the forward Monte Carlo transport calculation in a manner similar to the conventional exponential transform, but such that the transform parameters are now local in space and energy, not global. Results have shown that the Local Exponential Transform often offers a significant improvement over conventional geometry splitting/Russian roulette with weight windows. Since the biasing parameters for the Local Exponential Transform were determined from a low-order solution to the adjoint transport problem, the LET has previously been applied in problems where it was desirable to minimize the variance in a detector region. The purpose of this paper is to show that by basing the LET method upon a low-order solution to the forward transport problem, one can instead obtain biasing parameters that minimize the maximum variance in a Monte Carlo transport calculation.

  12. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation

    PubMed Central

    Sills, Erin O.; Herrera, Diego; Kirkpatrick, A. Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts’ selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal “blacklist” that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on
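
    The core SCM computation is a constrained least-squares fit: choose nonnegative donor weights summing to one so that the weighted donor outcomes track the treated unit's pre-intervention outcomes. The sketch below solves this with a simple exponentiated-gradient loop on synthetic data; the data, dimensions, and solver are assumptions for illustration, not the authors' implementation (standard SCM also matches on predictor covariates).

```python
import numpy as np

rng = np.random.default_rng(2)

def synthetic_control_weights(y_treated, y_donors, steps=20_000, lr=0.05):
    """Fit simplex weights w (w >= 0, sum(w) = 1) minimizing the pre-period
    gap ||y_treated - y_donors @ w||^2 by exponentiated gradient descent."""
    w = np.full(y_donors.shape[1], 1.0 / y_donors.shape[1])
    for _ in range(steps):
        grad = 2.0 * y_donors.T @ (y_donors @ w - y_treated)
        w = w * np.exp(-lr * grad)
        w /= w.sum()                    # stay on the probability simplex
    return w

# Toy pre-intervention outcome series: the treated unit is an exact mixture
# of three of six donor units, so a perfect synthetic control exists.
t_pre, n_donors = 12, 6
y_donors = rng.standard_normal((t_pre, n_donors))
true_w = np.array([0.5, 0.3, 0.2, 0.0, 0.0, 0.0])
y_treated = y_donors @ true_w
w_hat = synthetic_control_weights(y_treated, y_donors)
gap = np.linalg.norm(y_treated - y_donors @ w_hat)
```

    The simplex constraint is what makes the comparison transparent: the synthetic control is an explicit, interpretable weighted average of donor jurisdictions, and the post-intervention gap between the treated unit and this average is the estimated effect.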

  13. Estimating the Impacts of Local Policy Innovation: The Synthetic Control Method Applied to Tropical Deforestation.

    PubMed

    Sills, Erin O; Herrera, Diego; Kirkpatrick, A Justin; Brandão, Amintas; Dickson, Rebecca; Hall, Simon; Pattanayak, Subhrendu; Shoch, David; Vedoveto, Mariana; Young, Luisa; Pfaff, Alexander

    2015-01-01

    Quasi-experimental methods increasingly are used to evaluate the impacts of conservation interventions by generating credible estimates of counterfactual baselines. These methods generally require large samples for statistical comparisons, presenting a challenge for evaluating innovative policies implemented within a few pioneering jurisdictions. Single jurisdictions often are studied using comparative methods, which rely on analysts' selection of best case comparisons. The synthetic control method (SCM) offers one systematic and transparent way to select cases for comparison, from a sizeable pool, by focusing upon similarity in outcomes before the intervention. We explain SCM, then apply it to one local initiative to limit deforestation in the Brazilian Amazon. The municipality of Paragominas launched a multi-pronged local initiative in 2008 to maintain low deforestation while restoring economic production. This was a response to having been placed, due to high deforestation, on a federal "blacklist" that increased enforcement of forest regulations and restricted access to credit and output markets. The local initiative included mapping and monitoring of rural land plus promotion of economic alternatives compatible with low deforestation. The key motivation for the program may have been to reduce the costs of blacklisting. However its stated purpose was to limit deforestation, and thus we apply SCM to estimate what deforestation would have been in a (counterfactual) scenario of no local initiative. We obtain a plausible estimate, in that deforestation patterns before the intervention were similar in Paragominas and the synthetic control, which suggests that after several years, the initiative did lower deforestation (significantly below the synthetic control in 2012). 
This demonstrates that SCM can yield helpful land-use counterfactuals for single units, with opportunities to integrate local and expert knowledge and to test innovations and permutations on policies

  15. Extended methods in fluorescence photoactivation localization microscopy and estimating activation yields of photoactivatable proteins

    NASA Astrophysics Data System (ADS)

    Gunewardene, Mudalige Siyath

    Applicability to living specimens and genetically encodable tagging have made fluorescence microscopy both powerful and versatile in the life sciences, yet the image resolution of far-field fluorescence microscopy has been limited by diffraction. Overcoming this limitation, recently developed super-resolution microscopy techniques can now resolve structures down to a few nanometers, at least tenfold better than conventional methods. The localization-based super-resolution method fluorescence photoactivation localization microscopy (FPALM) utilizes genetically encodable photoactivatable fluorescent proteins (PAFPs) to label specimens, which makes it possible to maintain spatially sparse subsets of fluorescent molecules during imaging. Even though the image of a single fluorescent molecule is itself diffraction-limited, the molecule can be localized with a precision better than the diffraction-based resolution; this precision is primarily limited by the number of photons collected. A sequence of images sampling all or most of the specimen's tagged PAFPs is subsequently localized and rendered as a high-density map revealing structures of interest at tens-of-nanometer resolution. Rapidly evolving, localization-based super-resolution methods are now capable of imaging live specimens, multiple species, and single-molecule anisotropies, and have been extended to three dimensions. This thesis primarily discusses methodology developed to couple the super-resolution capability of FPALM with measurements of single-molecule anisotropy and three-dimensional orientation (Chapter 2) and with simultaneous imaging of multiple (three) PAFPs (Chapter 3), used to advance the understanding of the organization and interaction of the cell plasma membrane with cytoskeletal and viral proteins. A method based on single-molecule counting statistics and a first-order linear kinetic model is presented to estimate the photophysical property that quantifies the nature of photoactivation
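
    The photon-limited localization precision underlying FPALM can be checked with a quick simulation: the centroid of N photons drawn from a Gaussian PSF of width s scatters with a standard deviation close to s/√N. The PSF width and photon counts below are illustrative assumptions (background, pixelation, and readout noise, which worsen the precision, are ignored).

```python
import numpy as np

rng = np.random.default_rng(3)

# Centroid localization of single molecules: with n_photons drawn from a
# Gaussian PSF of width psf_sigma, the fitted position scatters ~ sigma/sqrt(N).
psf_sigma, n_photons, n_molecules = 120.0, 400, 3000   # nm, photons, repeats
photons = rng.normal(0.0, psf_sigma, size=(n_molecules, n_photons))
estimates = photons.mean(axis=1)          # localized position of each molecule
precision = estimates.std()               # empirical localization precision
predicted = psf_sigma / np.sqrt(n_photons)
```

    With these assumed numbers, 400 photons turn a ~120 nm PSF into a ~6 nm localization, which is why photon budget, not optics, sets the resolution of localization microscopy.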

  16. Approaching complexity by stochastic methods: From biological systems to turbulence

    NASA Astrophysics Data System (ADS)

    Friedrich, Rudolf; Peinke, Joachim; Sahimi, Muhammad; Reza Rahimi Tabar, M.

    2011-09-01

    This review addresses a central question in the field of complex systems: given a fluctuating (in time or space), sequentially measured set of experimental data, how should one analyze the data, assess their underlying trends, and discover the characteristics of the fluctuations that generate the experimental traces? In recent years, significant progress has been made in addressing this question for a class of stochastic processes that can be modeled by Langevin equations, including additive as well as multiplicative fluctuations or noise. Important results have emerged from the analysis of temporal data for such diverse fields as neuroscience, cardiology, finance, economy, surface science, turbulence, seismic time series and epileptic brain dynamics, to name but a few. Furthermore, it has been recognized that a similar approach can be applied to the data that depend on a length scale, such as velocity increments in fully developed turbulent flow, or height increments that characterize rough surfaces. A basic ingredient of the approach to the analysis of fluctuating data is the presence of a Markovian property, which can be detected in real systems above a certain time or length scale. This scale is referred to as the Markov-Einstein (ME) scale, and has turned out to be a useful characteristic of complex systems. We provide a review of the operational methods that have been developed for analyzing stochastic data in time and scale. We address in detail the following issues: (i) reconstruction of stochastic evolution equations from data in terms of the Langevin equations or the corresponding Fokker-Planck equations and (ii) intermittency, cascades, and multiscale correlation functions.
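
    Item (i) above, reconstructing a Langevin equation from measured data, reduces in its simplest form to estimating the drift from conditional increments, D1(x) = ⟨Δx | x⟩/Δt. The sketch below simulates an Ornstein-Uhlenbeck process and recovers its drift coefficient by a direct regression of Δx on x; the process and its parameters are assumptions for the example, and diffusion estimation and Markov-scale testing are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate an Ornstein-Uhlenbeck process dx = -theta*x*dt + sig*dW (Euler) ...
theta, sig, dt, n = 1.0, 0.5, 0.01, 200_000
x = np.empty(n)
x[0] = 0.0
xi = rng.standard_normal(n - 1)
for i in range(n - 1):
    x[i + 1] = x[i] - theta * x[i] * dt + sig * np.sqrt(dt) * xi[i]

# ... then reconstruct the drift D1(x) = <dx | x>/dt from the series alone.
dx = np.diff(x)
slope = (x[:-1] * dx).sum() / (x[:-1] ** 2).sum() / dt
theta_hat = -slope        # estimated drift coefficient (linear drift assumed)
```

    In practice the conditional moments are evaluated in bins of x and extrapolated to Δt → 0, and the same construction applied to increments across scales yields the Fokker-Planck description of turbulent cascades discussed in the review.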

  17. A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System

    PubMed Central

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-01-01

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
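
    The depth-initialization issue described above is, at its core, bearing-only triangulation: a single view constrains a landmark to a ray, and depth becomes observable only once the camera translates. The sketch below recovers a 3-D point from two noise-free bearing measurements by least squares over ray-orthogonal projectors; the poses and landmark are assumed values, and the paper's actual two-step filtering initialization is not reproduced.

```python
import numpy as np

def triangulate(origins, directions):
    """Least-squares intersection of bearing rays: a camera at origin o_i sees
    the landmark along unit direction d_i (angle only, no range)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += proj
        b += proj @ o
    return np.linalg.solve(A, b)

# Ground-truth landmark and two camera positions with translation (parallax).
p_true = np.array([2.0, 1.0, 8.0])
origins = [np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
directions = [p_true - o for o in origins]  # noise-free bearing measurements
p_hat = triangulate(origins, directions)
```

    With zero translation the normal matrix A becomes singular, which is the geometric statement that depth is unobservable from a single viewpoint; filter-based monocular SLAM must delay or reparameterize (e.g., inverse depth) until parallax accumulates.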

  18. A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.

    PubMed

    Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni

    2013-07-03

    Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. The SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which is freely moving through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step. In this case, special techniques for feature system-initialization are needed in order to enable the use of angular sensors (as cameras) in SLAM systems. The main contribution of this work is to present a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique, which is intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.

  20. Meshless Local Petrov-Galerkin Method for Shallow Shells with Functionally Graded and Orthotropic Material Properties

    NASA Astrophysics Data System (ADS)

    Sladek, J.; Sladek, V.; Zhang, Ch.

    2008-02-01

    A meshless local Petrov-Galerkin (MLPG) formulation is presented for the analysis of shear-deformable shallow shells with orthotropic material properties that vary continuously through the shell thickness. Shear deformation of the shells is described by the Reissner theory. Analyses of shells under both static and dynamic loads are given. For the transient elastodynamic case, the Laplace transform is used to eliminate the time dependence of the field variables. A weak formulation with a unit test function transforms the set of governing equations into local integral equations on local subdomains in the plane domain of the shell. The meshless approximation based on the Moving Least-Squares (MLS) method is employed for the implementation.
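
    The Moving Least-Squares approximation used in the meshless implementation can be sketched in one dimension: at each evaluation point, fit a low-order polynomial to nearby nodal values under a distance-decaying weight. A defining property, useful as a sanity check, is that MLS with a linear basis reproduces linear fields exactly. The weight function, support radius, and node layout below are assumptions for the sketch.

```python
import numpy as np

def mls_value(x_eval, nodes, values, radius=0.5):
    """Moving least-squares value at x_eval: weighted fit of a linear basis
    p(x) = [1, x] to nodal values, with a Gaussian weight centered at x_eval."""
    w = np.exp(-((nodes - x_eval) / radius) ** 2)
    P = np.column_stack([np.ones_like(nodes), nodes])
    A = P.T @ (w[:, None] * P)          # moment matrix
    rhs = P.T @ (w * values)
    a = np.linalg.solve(A, rhs)
    return a[0] + a[1] * x_eval

nodes = np.linspace(0.0, 1.0, 11)
values = 3.0 * nodes + 2.0              # a linear nodal field
u = mls_value(0.37, nodes, values)      # MLS reproduces it exactly
```

    Because the fit is redone at every evaluation point, MLS shape functions need no mesh connectivity, which is what allows the MLPG weak form to be integrated over overlapping local subdomains.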

  1. A novel full-field experimental method to measure the local compressibility of gas diffusion media

    NASA Astrophysics Data System (ADS)

    Lai, Yeh-Hung; Li, Yongqiang; Rock, Jeffrey A.

    The gas diffusion medium (GDM) in a proton exchange membrane (PEM) fuel cell needs to simultaneously satisfy the requirements of transporting reactant gases, removing product water, conducting electrons and heat, and providing mechanical support to the membrane electrode assembly (MEA). To address localized over-compression, which can force carbon fibers and other conductive debris into the membrane and cause fuel cell failure by electronic shorting through the membrane, we have developed a novel full-field experimental method to measure the local thickness and compressibility of GDM. By applying a uniform air pressure to a thin polyimide film bonded on the top surface of the GDM, supported from below by a flat metal substrate, and measuring the thickness change using the 3-D digital image correlation technique with an out-of-plane displacement resolution below 0.5 μm, we determine the local thickness and compressive stress/strain behavior of the GDM. Using the local thickness and compressibility data over an area of 11.2 mm × 11.2 mm, we numerically construct the nominal compressive response of a commercial Toray™ TGP-H-060 based GDM subjected to compression by flat platens. Good agreement between the nominal stress/strain curves from the numerical construction and from direct flat-platen measurement confirms the validity of the proposed methodology. The results show that a nominal pressure of 1.4 MPa applied between two flat platens can introduce localized compressive stress concentrations of more than 3 MPa in up to 1% of the total area, at locations ranging from several hundred micrometers to 1 mm in diameter. We believe that this full-field experimental method can be useful in GDM material and process development to reduce local hard spots and help mitigate membrane shorting failure in PEM fuel cells.
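
    The numerical construction described above (building the nominal flat-platen response from local thickness and compressibility data) can be illustrated schematically: each surface location is compressed to the common platen gap according to its own thickness, and the nominal pressure is the area average of the local stresses. The thickness statistics and the quadratic local stress-strain law below are invented placeholders, not the measured Toray TGP-H-060 data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Invented local data: a thickness map t (um) and a stiffening local law
# sigma(eps) = k1*eps + k2*eps**2 (MPa); placeholders, not measured GDM data.
n_sites = 10_000
t = 190.0 + rng.normal(0.0, 8.0, n_sites)   # local thickness map, um
k1, k2 = 2.0, 40.0

# Flat platens at gap g impose a location-dependent compressive strain:
# thicker-than-average sites are compressed harder than the nominal strain.
g = 160.0
eps = np.clip((t - g) / t, 0.0, None)
stress = k1 * eps + k2 * eps ** 2           # local compressive stress, MPa
nominal = stress.mean()                     # total load / total area
peak = stress.max()                         # local stress concentration
```

    Even with modest thickness scatter, the stiffening response concentrates stress at the thickest spots, which is the mechanism behind the localized hard spots the paper associates with membrane shorting.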

  2. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches.

    PubMed

    Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel

    2016-01-01

    Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for
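
    The clustering step described above (grouping cropping events by their weather patterns) can be sketched with plain k-means on toy weather-feature vectors; the features, units, and two-cluster setup are assumptions for the example, and the study's actual pipeline (Conditional Inference Forests plus clustering) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def kmeans(X, k, iters=50):
    """Plain k-means with a deterministic spread-out initialization."""
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy weather features per cropping event: [mean night temperature (C),
# accumulated solar radiation (arbitrary units)] for two contrasting regimes.
hot_nights = rng.normal([24.0, 17.0], 0.8, size=(60, 2))
cool_nights = rng.normal([19.0, 21.0], 0.8, size=(60, 2))
X = np.vstack([hot_nights, cool_nights])
labels, _ = kmeans(X, k=2)
```

    Once events are grouped into weather patterns, yields can be compared within each group, which is what makes cultivar recommendations specific to a site's typical weather rather than to a seasonal average.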


  4. Assessing Weather-Yield Relationships in Rice at Local Scale Using Data Mining Approaches

    PubMed Central

    Delerce, Sylvain; Dorado, Hugo; Grillon, Alexandre; Rebolledo, Maria Camila; Prager, Steven D.; Patiño, Victor Hugo; Garcés Varón, Gabriel; Jiménez, Daniel

    2016-01-01

    Seasonal and inter-annual climate variability have become important issues for farmers, and climate change has been shown to increase them. Simultaneously farmers and agricultural organizations are increasingly collecting observational data about in situ crop performance. Agriculture thus needs new tools to cope with changing environmental conditions and to take advantage of these data. Data mining techniques make it possible to extract embedded knowledge associated with farmer experiences from these large observational datasets in order to identify best practices for adapting to climate variability. We introduce new approaches through a case study on irrigated and rainfed rice in Colombia. Preexisting observational datasets of commercial harvest records were combined with in situ daily weather series. Using Conditional Inference Forest and clustering techniques, we assessed the relationships between climatic factors and crop yield variability at the local scale for specific cultivars and growth stages. The analysis showed clear relationships in the various location-cultivar combinations, with climatic factors explaining 6 to 46% of spatiotemporal variability in yield, and with crop responses to weather being non-linear and cultivar-specific. Climatic factors affected cultivars differently during each stage of development. For instance, one cultivar was affected by high nighttime temperatures in the reproductive stage but responded positively to accumulated solar radiation during the ripening stage. Another was affected by high nighttime temperatures during both the vegetative and reproductive stages. Clustering of the weather patterns corresponding to individual cropping events revealed different groups of weather patterns for irrigated and rainfed systems with contrasting yield levels. Best-suited cultivars were identified for some weather patterns, making weather-site-specific recommendations possible. This study illustrates the potential of data mining for
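The variance-explained analysis in the abstract above can be sketched on synthetic data. The study fitted Conditional Inference Forests; the sketch below substitutes an ordinary least-squares fit as a simplified linear stand-in, and all covariates, coefficients, and units are invented for illustration, not taken from the paper.

```python
import numpy as np

# Simplified analogue of "climatic factors explaining X% of yield variability":
# regress synthetic per-harvest yields on climatic covariates and report R^2.
rng = np.random.default_rng(0)
n = 200  # hypothetical number of commercial harvest records

# Synthetic climatic covariates (illustrative units).
night_temp = rng.normal(23.0, 1.5, n)   # mean nighttime temperature, deg C
solar_rad = rng.normal(18.0, 3.0, n)    # accumulated solar radiation, MJ/m^2
rainfall = rng.normal(150.0, 40.0, n)   # cumulative rainfall, mm

# Yield responds negatively to nighttime temperature and positively to
# radiation, mimicking the cultivar responses described in the abstract.
yield_t = (6.0 - 0.3 * (night_temp - 23.0) + 0.1 * (solar_rad - 18.0)
           + rng.normal(0.0, 0.8, n))

X = np.column_stack([np.ones(n), night_temp, solar_rad, rainfall])
beta, *_ = np.linalg.lstsq(X, yield_t, rcond=None)
resid = yield_t - X @ beta
r2 = 1.0 - resid.var() / yield_t.var()
print(f"share of yield variance explained by climate: {r2:.2f}")
```

A nonparametric forest would additionally capture the non-linear, cultivar-specific responses the paper reports; the linear fit only bounds the linearly explainable share.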

  5. Full-Field Strain Measurement On Titanium Welds And Local Elasto-Plastic Identification With The Virtual Fields Method

    SciTech Connect

    Tattoli, F.; Casavola, C.; Pierron, F.; Rotinat, R.; Pappalettere, C.

    2011-01-17

One of the main problems in welding is the microstructural transformation within the area affected by the thermal history. The resulting heterogeneous microstructure within the weld nugget and the heat-affected zones is often associated with changes in local material properties. The present work deals with the identification of the material parameters governing the elasto-plastic behaviour of the fused and heat-affected zones, as well as of the base material, for titanium hybrid welded joints (Ti6Al4V alloy). The material parameters are identified from heterogeneous strain fields with the Virtual Fields Method. This method is based on a relevant use of the principle of virtual work, and it has been shown to be useful and much less time-consuming than classical finite element model updating approaches applied to similar problems. The paper presents results and discusses the problem of selecting the weld zones for the identification.

  6. Local tolerance testing under REACH: Accepted non-animal methods are not on equal footing with animal tests.

    PubMed

    Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert

    2016-07-01

    In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required.

  7. Local tolerance testing under REACH: Accepted non-animal methods are not on equal footing with animal tests.

    PubMed

    Sauer, Ursula G; Hill, Erin H; Curren, Rodger D; Raabe, Hans A; Kolle, Susanne N; Teubner, Wera; Mehling, Annette; Landsiedel, Robert

    2016-07-01

    In general, no single non-animal method can cover the complexity of any given animal test. Therefore, fixed sets of in vitro (and in chemico) methods have been combined into testing strategies for skin and eye irritation and skin sensitisation testing, with pre-defined prediction models for substance classification. Many of these methods have been adopted as OECD test guidelines. Various testing strategies have been successfully validated in extensive in-house and inter-laboratory studies, but they have not yet received formal acceptance for substance classification. Therefore, under the European REACH Regulation, data from testing strategies can, in general, only be used in so-called weight-of-evidence approaches. While animal testing data generated under the specific REACH information requirements are per se sufficient, the sufficiency of weight-of-evidence approaches can be questioned under the REACH system, and further animal testing can be required. This constitutes an imbalance between the regulatory acceptance of data from approved non-animal methods and animal tests that is not justified on scientific grounds. To ensure that testing strategies for local tolerance testing truly serve to replace animal testing for the REACH registration 2018 deadline (when the majority of existing chemicals have to be registered), clarity on their regulatory acceptance as complete replacements is urgently required. PMID:27494627

  8. An h-adaptive local discontinuous Galerkin method for the Navier-Stokes-Korteweg equations

    NASA Astrophysics Data System (ADS)

    Tian, Lulu; Xu, Yan; Kuerten, J. G. M.; van der Vegt, J. J. W.

    2016-08-01

    In this article, we develop a mesh adaptation algorithm for a local discontinuous Galerkin (LDG) discretization of the (non)-isothermal Navier-Stokes-Korteweg (NSK) equations modeling liquid-vapor flows with phase change. This work is a continuation of our previous research, where we proposed LDG discretizations for the (non)-isothermal NSK equations with a time-implicit Runge-Kutta method. To save computing time and to capture the thin interfaces more accurately, we extend the LDG discretization with a mesh adaptation method. Given the current adapted mesh, a criterion for selecting candidate elements for refinement and coarsening is adopted based on the locally largest value of the density gradient. A strategy to refine and coarsen the candidate elements is then provided. We emphasize that the adaptive LDG discretization is relatively simple and does not require additional stabilization. The use of a locally refined mesh in combination with an implicit Runge-Kutta time method is, however, non-trivial, but results in an efficient time integration method for the NSK equations. Computations, including cases with solid wall boundaries, are provided to demonstrate the accuracy, efficiency and capabilities of the adaptive LDG discretizations.
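The element-marking step described above, flagging elements with the locally largest density gradient for refinement and the smallest for coarsening, can be sketched in isolation. The thresholds, the 1D mesh, and the tanh interface profile below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def mark_elements(x_centers, rho, refine_frac=0.5, coarsen_frac=0.05):
    """Flag elements for refinement/coarsening by local density gradient."""
    grad = np.abs(np.gradient(rho, x_centers))   # cell-wise density gradient
    gmax = grad.max()
    refine = grad >= refine_frac * gmax          # near the liquid-vapor interface
    coarsen = grad <= coarsen_frac * gmax        # smooth bulk regions
    return refine, coarsen

# Uniform 1D mesh with a diffuse liquid-vapor interface at x = 0.5.
x = np.linspace(0.0, 1.0, 101)
rho = 0.5 * (1.0 + np.tanh((x - 0.5) / 0.02))   # vapor (~0) to liquid (~1)
refine, coarsen = mark_elements(x, rho)
print(refine.sum(), "elements marked for refinement")
```

Only the few cells straddling the thin interface are flagged, which is what makes adaptation pay off for phase-change interfaces.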

  9. Experimental validation of normalized uniform load surface curvature method for damage localization.

    PubMed

    Jung, Ho-Yeon; Sung, Seung-Hoon; Jung, Hyung-Jo

    2015-10-16

In this study, we experimentally validated the normalized uniform load surface (NULS) curvature method, which has been developed recently to assess damage localization in beam-type structures. The normalization technique allows for the accurate assessment of damage localization with greater sensitivity irrespective of the damage location. Damage to a simply supported beam was numerically and experimentally investigated on the basis of the changes in the NULS curvatures, which were estimated from the modal flexibility matrices obtained from the acceleration responses under ambient excitation. Two damage scenarios were considered: a single damage case and a multiple damage case, each created by reducing the bending stiffness (EI) of the affected element(s). Numerical simulations were performed using MATLAB as a preliminary step. During the validation experiments, a series of tests was performed. It was found that the damage locations could be identified successfully, without any false-positive or false-negative detections, using the proposed method. For comparison, the damage detection performance was compared with that of two other well-known methods based on the modal flexibility matrix, namely the uniform load surface (ULS) method and the ULS curvature method. It was confirmed that the proposed method is more effective for locating damage in simply supported beams than the two conventional methods in terms of sensitivity to damage under measurement noise.
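The flexibility-based pipeline behind the ULS family of indices can be sketched on an analytic simply supported beam: assemble the modal flexibility matrix from a few mode shapes, form the uniform load surface as F applied to a unit load vector, normalize, and differentiate twice. The normalization shown is one plausible reading of the method, and all beam parameters are illustrative.

```python
import numpy as np

npts, nmodes, L = 101, 5, 1.0
x = np.linspace(0.0, L, npts)

# Analytic mode shapes and (scaled) squared frequencies of a simply
# supported Euler-Bernoulli beam with EI = 1 (illustrative units).
phi = np.array([np.sin((i + 1) * np.pi * x / L) for i in range(nmodes)])
omega2 = np.array([((i + 1) * np.pi / L) ** 4 for i in range(nmodes)])

# Modal flexibility matrix: F = sum_i phi_i phi_i^T / omega_i^2.
F = (phi.T / omega2) @ phi
uls = F @ np.ones(npts)               # deflection under a uniform unit load
nuls = uls / np.linalg.norm(uls)      # normalization step

# Curvature by second-order central differences (interior points only);
# localized damage shows up as a local discontinuity in this curvature.
h = x[1] - x[0]
curv = (nuls[2:] - 2.0 * nuls[1:-1] + nuls[:-2]) / h ** 2
print("peak |curvature| at x =", x[1 + np.argmax(np.abs(curv))])
```

For the undamaged beam the curvature is smooth and peaks at midspan; a stiffness reduction in one element would superimpose a sharp local peak at the damaged location.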

  10. Autonomous pallet localization and picking for industrial forklifts: a robust range and look method

    NASA Astrophysics Data System (ADS)

    Baglivo, L.; Biasi, N.; Biral, F.; Bellomo, N.; Bertolazzi, E.; Da Lio, M.; De Cecco, M.

    2011-08-01

A combined double-sensor architecture, laser and camera, and a new algorithm named RLPF are presented as a solution to the problem of identifying and localizing a pallet whose position and angle are known a priori only with large uncertainty. Solving this task for autonomous robot forklifts is of great value to the logistics industry. The state of the art is reviewed to show how our approach overcomes the limitations of using either laser ranging or vision alone. An extensive experimental campaign and uncertainty analysis are presented. For the docking task, a new dynamic nonlinear path-planning method that takes vehicle dynamics into account is proposed.

  11. Evaluating a physician leadership development program - a mixed methods approach.

    PubMed

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking in learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study. PMID:27119393

  12. Evaluating a physician leadership development program - a mixed methods approach.

    PubMed

    Throgmorton, Cheryl; Mitchell, Trey; Morley, Tom; Snyder, Marijo

    2016-05-16

Purpose - With the extent of change in healthcare today, organizations need strong physician leaders. To compensate for the lack of physician leadership education, many organizations are sending physicians to external leadership programs or developing in-house leadership programs targeted specifically to physicians. The purpose of this paper is to outline the evaluation strategy and outcomes of the inaugural year of a Physician Leadership Academy (PLA) developed and implemented at a Michigan-based regional healthcare system. Design/methodology/approach - The authors applied the theoretical framework of Kirkpatrick's four levels of evaluation and used surveys, observations, activity tracking, and interviews to evaluate the program outcomes. The authors applied grounded theory techniques to the interview data. Findings - The program met targeted outcomes across all four levels of evaluation. Interview themes focused on the significance of increasing self-awareness, building relationships, applying new skills, and building confidence. Research limitations/implications - While only one example, this study illustrates the importance of developing the evaluation strategy as part of the program design. Qualitative research methods, often lacking in learning evaluation design, uncover rich themes of impact. The study supports how a PLA program can enhance physician learning, engagement, and relationship building throughout and after the program. Physician leaders' partnership with organization development and learning professionals yields results with impact to individuals, groups, and the organization. Originality/value - Few studies provide an in-depth review of evaluation methods and outcomes of physician leadership development programs. Healthcare organizations seeking to develop similar in-house programs may benefit from applying the evaluation strategy outlined in this study.

  13. Obtaining Highly Excited Eigenstates of Many-Body Localized Hamiltonians by the Density Matrix Renormalization Group Approach.

    PubMed

    Khemani, Vedika; Pollmann, Frank; Sondhi, S L

    2016-06-17

    The eigenstates of many-body localized (MBL) Hamiltonians exhibit low entanglement. We adapt the highly successful density-matrix renormalization group method, which is usually used to find modestly entangled ground states of local Hamiltonians, to find individual highly excited eigenstates of MBL Hamiltonians. The adaptation builds on the distinctive spatial structure of such eigenstates. We benchmark our method against the well-studied random field Heisenberg model in one dimension. At moderate to large disorder, the method successfully obtains excited eigenstates with high accuracy, thereby enabling a study of MBL systems at much larger system sizes than those accessible to exact-diagonalization methods. PMID:27367405
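The benchmark model named in the abstract, the random-field Heisenberg chain, can be set up explicitly. The sketch below is the plain exact-diagonalization baseline against which the DMRG adaptation is compared, not the DMRG method itself; the chain length and disorder strength are illustrative.

```python
import numpy as np

def heisenberg_rf(L, W, rng):
    """Dense H for H = sum_i S_i . S_{i+1} + sum_i h_i S^z_i, h_i ~ U(-W, W)."""
    sx = np.array([[0, 1], [1, 0]]) / 2.0
    sy = np.array([[0, -1j], [1j, 0]]) / 2.0
    sz = np.array([[1, 0], [0, -1]]) / 2.0
    eye = np.eye(2)

    def site_op(op, i):
        # Embed a single-site operator at site i via Kronecker products.
        mats = [eye] * L
        mats[i] = op
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    h = rng.uniform(-W, W, L)
    for i in range(L - 1):
        for op in (sx, sy, sz):
            H += site_op(op, i) @ site_op(op, i + 1)
    for i in range(L):
        H += h[i] * site_op(sz, i)
    return H

rng = np.random.default_rng(1)
H = heisenberg_rf(L=8, W=8.0, rng=rng)   # strong disorder: deep in the MBL regime
evals, evecs = np.linalg.eigh(H)

# Target a highly excited eigenstate near the middle of the spectrum,
# as DMRG-X-type methods do at much larger L.
target = 0.5 * (evals[0] + evals[-1])
k = np.argmin(np.abs(evals - target))
print("mid-spectrum eigenenergy:", evals[k].real)
```

Dense ED scales as 2^L and caps out around L ≈ 16-20; the point of the DMRG adaptation is to exploit the low entanglement of MBL eigenstates to reach far larger chains.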

  14. A Grounded Approach to Citizenship Education: Local Interplays between Government Institutions, Adult Schools, and Community Events in Sacramento, California

    ERIC Educational Resources Information Center

    Loring, Ariel

    2015-01-01

Following a grounded, bottom-up approach to language policy (Blommaert, 2009; Canagarajah, 2005; McCarty, 2011; Ramanathan, 2005), this paper investigates the resources and discourses of citizenship available in Sacramento, California, to those situated within the citizenship infrastructure. It analyzes how the discursive framing of local and national…

  15. Improvement of magnetic hardness at finite temperatures: Ab initio disordered local-moment approach for YCo5

    NASA Astrophysics Data System (ADS)

    Matsumoto, Munehisa; Banerjee, Rudra; Staunton, Julie B.

    2014-08-01

    Temperature dependence of the magnetocrystalline anisotropy energy and magnetization of the prototypical rare-earth magnet YCo5 is calculated from first principles, utilizing the relativistic disordered local-moment approach. We discuss a strategy to enhance the finite-temperature anisotropy field by hole doping, paving the way for an improvement of the coercivity near room temperature or higher.

  16. Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems

    SciTech Connect

    Cai, K.; Wonham, W. M.

    2009-03-05

A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First, a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicles (AGV) example is presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.

  17. Supervisor Localization: A Top-Down Approach to Distributed Control of Discrete-Event Systems

    NASA Astrophysics Data System (ADS)

    Cai, K.; Wonham, W. M.

    2009-03-01

A purely distributed control paradigm is proposed for discrete-event systems (DES). In contrast to control by one or more external supervisors, distributed control aims to design built-in strategies for individual agents. First, a distributed optimal nonblocking control problem is formulated. To solve it, a top-down localization procedure is developed which systematically decomposes an external supervisor into local controllers while preserving optimality and nonblockingness. An efficient localization algorithm is provided to carry out the computation, and an automated guided vehicles (AGV) example is presented for illustration. Finally, the 'easiest' and 'hardest' boundary cases of localization are discussed.

  18. Cumulative Risk Assessment Toolbox: Methods and Approaches for the Practitioner

    PubMed Central

    MacDonell, Margaret M.; Haroun, Lynne A.; Teuschler, Linda K.; Rice, Glenn E.; Hertzberg, Richard C.; Butler, James P.; Chang, Young-Soo; Clark, Shanna L.; Johns, Alan P.; Perry, Camarie S.; Garcia, Shannon S.; Jacobi, John H.; Scofield, Marcienne A.

    2013-01-01

    The historical approach to assessing health risks of environmental chemicals has been to evaluate them one at a time. In fact, we are exposed every day to a wide variety of chemicals and are increasingly aware of potential health implications. Although considerable progress has been made in the science underlying risk assessments for real-world exposures, implementation has lagged because many practitioners are unaware of methods and tools available to support these analyses. To address this issue, the US Environmental Protection Agency developed a toolbox of cumulative risk resources for contaminated sites, as part of a resource document that was published in 2007. This paper highlights information for nearly 80 resources from the toolbox and provides selected updates, with practical notes for cumulative risk applications. Resources are organized according to the main elements of the assessment process: (1) planning, scoping, and problem formulation; (2) environmental fate and transport; (3) exposure analysis extending to human factors; (4) toxicity analysis; and (5) risk and uncertainty characterization, including presentation of results. In addition to providing online access, plans for the toolbox include addressing nonchemical stressors and applications beyond contaminated sites and further strengthening resource accessibility to support evolving analyses for cumulative risk and sustainable communities. PMID:23762048

  19. High Explosive Verification and Validation: Systematic and Methodical Approach

    NASA Astrophysics Data System (ADS)

    Scovel, Christina; Menikoff, Ralph

    2011-06-01

Verification and validation of high explosive (HE) models does not fit the standard mold for several reasons. First, there are no non-trivial test problems with analytic solutions. Second, an HE model depends on a burn rate and the equations of state (EOS) of both the reactants and the products. Third, there is a wide range of detonation phenomena, from initiation under various stimuli to propagation of curved detonation fronts with non-rigid confining materials. Fourth, in contrast to a shock wave in a non-reactive material, the reaction-zone width is physically significant and affects the behavior of a detonation wave. Because of these issues, a systematic and methodical approach to HE V&V is needed. Our plan is to build a test suite from the ground up. We have started with the cylinder test and have run simulations with several EOS models and burn models. We have compared with data and cross-compared the different runs to check the sensitivity to model parameters. A related issue for V&V is what experimental data are available for calibrating and testing models. For this purpose we have started a web-based high explosive database (HED). The current status of HED will be discussed.

  20. A systems approach to hemostasis: 3. Thrombus consolidation regulates intrathrombus solute transport and local thrombin activity.

    PubMed

    Stalker, Timothy J; Welsh, John D; Tomaiuolo, Maurizio; Wu, Jie; Colace, Thomas V; Diamond, Scott L; Brass, Lawrence F

    2014-09-11

    Hemostatic thrombi formed after a penetrating injury have a distinctive structure in which a core of highly activated, closely packed platelets is covered by a shell of less-activated, loosely packed platelets. We have shown that differences in intrathrombus molecular transport emerge in parallel with regional differences in platelet packing density and predicted that these differences affect thrombus growth and stability. Here we test that prediction in a mouse vascular injury model. The studies use a novel method for measuring thrombus contraction in vivo and a previously characterized mouse line with a defect in integrin αIIbβ3 outside-in signaling that affects clot retraction ex vivo. The results show that the mutant mice have a defect in thrombus consolidation following vascular injury, resulting in an increase in intrathrombus transport rates and, as predicted by computational modeling, a decrease in thrombin activity and platelet activation in the thrombus core. Collectively, these data (1) demonstrate that in addition to the activation state of individual platelets, the physical properties of the accumulated mass of adherent platelets is critical in determining intrathrombus agonist distribution and platelet activation and (2) define a novel role for integrin signaling in the regulation of intrathrombus transport rates and localization of thrombin activity. PMID:24951426

  1. Clustering Methods with Qualitative Data: a Mixed-Methods Approach for Prevention Research with Small Samples.

    PubMed

    Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G

    2015-10-01

    Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.

  2. Clustering Methods with Qualitative Data: A Mixed Methods Approach for Prevention Research with Small Samples

    PubMed Central

    Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.

    2016-01-01

    Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
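The binary-data clustering simulated in the first study above can be sketched end to end: generate a participant-by-code matrix with two latent profiles, cluster it, and measure recovery of the true groups. K-means is implemented directly in NumPy so the example is self-contained; real analyses would use a library routine, and the profiles, noise rate, and sample size here are synthetic.

```python
import numpy as np

def kmeans_binary(X, iters=50):
    """Two-cluster k-means with farthest-point initialization."""
    c0 = X[0].astype(float)
    c1 = X[((X - c0) ** 2).sum(axis=1).argmax()].astype(float)
    centers = np.stack([c0, c1])
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# Two latent participant profiles over 10 binary codes; n = 50 matches the
# smallest sample size examined in the simulations.
profile_a = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
truth = rng.integers(0, 2, 50)
X = np.where(truth[:, None] == 0, profile_a, 1 - profile_a)
X = np.abs(X - (rng.random(X.shape) < 0.1))   # 10% coding noise

labels = kmeans_binary(X)
# Cluster labels are arbitrary, so score the better of the two alignments.
agree = max(np.mean(labels == truth), np.mean(labels != truth))
print(f"cluster/label agreement: {agree:.2f}")
```

With well-separated code profiles, assignment accuracy stays high even at n = 50, consistent with the simulation findings reported above.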

  3. [Systemic approach to ecologic safety at objects with radiation jeopardy, involved into localization of low and medium radioactive waste].

    PubMed

    Veselov, E I

    2011-01-01

The article deals with specifying a systemic approach to the ecologic safety of radiation-hazardous facilities involved in the localization of low- and medium-level radioactive waste. The authors present the stages of work and an algorithm of decisions for preserving the reliability of storage of radiation-hazardous waste. The findings are that ecologic safety can be provided through three approaches: complete removal of the radiation-hazardous waste, removal of the more dangerous waste from the present buildings, or increasing the reliability of prolonged localization of the radiation-hazardous waste at its initial place. The systemic approach presented could be implemented at various radiation-hazardous facilities. PMID:21774123

  4. Volume averaging: Local and nonlocal closures using a Green’s function approach

    NASA Astrophysics Data System (ADS)

    Wood, Brian D.; Valdés-Parada, Francisco J.

    2013-01-01

    Modeling transport phenomena in discretely hierarchical systems can be carried out using any number of upscaling techniques. In this paper, we revisit the method of volume averaging as a technique to pass from a microscopic level of description to a macroscopic one. Our focus is primarily on developing a more consistent and rigorous foundation for the relation between the microscale and averaged levels of description. We have put a particular focus on (1) carefully establishing statistical representations of the length scales used in volume averaging, (2) developing a time-space nonlocal closure scheme with as few assumptions and constraints as are possible, and (3) carefully identifying a sequence of simplifications (in terms of scaling postulates) that explain the conditions for which various upscaled models are valid. Although the approach is general for linear differential equations, we upscale the problem of linear convective diffusion as an example to help keep the discussion from becoming overly abstract. In our efforts, we have also revisited the concept of a closure variable, and explain how closure variables can be based on an integral formulation in terms of Green’s functions. In such a framework, a closure variable then represents the integration (in time and space) of the associated Green’s functions that describe the influence of the average sources over the spatial deviations. The approach using Green’s functions has utility not only in formalizing the method of volume averaging, but by clearly identifying how the method can be extended to transient and time or space nonlocal formulations. In addition to formalizing the upscaling process using Green’s functions, we also discuss the upscaling process itself in some detail to help foster improved understanding of how the process works. Discussion about the role of scaling postulates in the upscaling process is provided, and poised, whenever possible, in terms of measurable properties of (1) the
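The Green's-function reading of a closure variable described above can be written schematically (the notation here is illustrative, not the paper's): the spatial deviation field is the space-time convolution of a Green's function with the source generated by the averaged field,

```latex
\tilde{c}(\mathbf{x},t)
  = \int_{0}^{t}\!\int_{V} G(\mathbf{x},\mathbf{y},t-\tau)\,
    s\bigl(\langle c\rangle\bigr)(\mathbf{y},\tau)\,
    \mathrm{d}\mathbf{y}\,\mathrm{d}\tau ,
```

and when the averaged fields vary slowly in space and time, the kernel collapses onto a local closure of the familiar form $\tilde{c} \approx \mathbf{b}(\mathbf{x})\cdot\nabla\langle c\rangle$, where the closure variable $\mathbf{b}$ carries the integrated Green's-function information.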

  5. Local unitary transformation method toward practical electron correlation calculations with scalar relativistic effect in large-scale molecules

    NASA Astrophysics Data System (ADS)

    Seino, Junji; Nakai, Hiromi

    2013-07-01

    In order to perform practical electron correlation calculations, the local unitary transformation (LUT) scheme at the spin-free infinite-order Douglas-Kroll-Hess (IODKH) level [J. Seino and H. Nakai, J. Chem. Phys. 136, 244102 (2012), 10.1063/1.4729463; J. Seino and H. Nakai, J. Chem. Phys. 137, 144101 (2012)], 10.1063/1.4757263, which is based on the locality of relativistic effects, has been combined with the linear-scaling divide-and-conquer (DC)-based Hartree-Fock (HF) and electron correlation methods, such as the second-order Møller-Plesset (MP2) and the coupled cluster theories with single and double excitations (CCSD). Numerical applications in hydrogen halide molecules, (HX)n (X = F, Cl, Br, and I), coinage metal chain systems, Mn (M = Cu and Ag), and platinum-terminated polyynediyl chain, trans,trans-{(p-CH3C6H4)3P}2(C6H5)Pt(C≡C)4Pt(C6H5){(p-CH3C6H4)3P}2, clarified that the present methods, namely DC-HF, MP2, and CCSD with the LUT-IODKH Hamiltonian, reproduce the results obtained using conventional methods with small computational costs. The combination of both LUT and DC techniques could be the first approach that achieves overall quasi-linear-scaling with a small prefactor for relativistic electron correlation calculations.

  6. Sonic-box method employing local Mach number for oscillating wings with thickness

    NASA Technical Reports Server (NTRS)

    Ruo, S. Y.

    1978-01-01

    A computer program was developed to account approximately for the effects of finite wing thickness in the transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic-box program for a planar wing, which was previously extended to include the effects of a swept trailing edge and wing thickness. The nonuniform flow caused by finite thickness is accounted for by applying the local linearization concept. The thickness effect, expressed in terms of the local Mach number, is included in the basic solution to replace the coordinate transformation method used in the earlier work. Calculations were made for a delta wing and a rectangular wing performing plunge and pitch oscillations, and the results were compared with those obtained from other methods. An input guide and a complete listing of the computer code are presented.

  7. A reliable acoustic path: Physical properties and a source localization method

    NASA Astrophysics Data System (ADS)

    Duan, Rui; Yang, Kun-De; Ma, Yuan-Liang; Lei, Bo

    2012-12-01

    The physical properties of a reliable acoustic path (RAP) are analysed and subsequently a weighted-subspace-fitting matched field (WSF-MF) method for passive localization is presented by exploiting the properties of the RAP environment. The RAP is an important acoustic duct in the deep ocean, which occurs when the receiver is placed near the bottom, where the sound velocity exceeds the maximum sound velocity in the vicinity of the surface. It is found that in the RAP environment the transmission loss is rather low and no blind zone of surveillance exists in a medium range. Ray theory is used to explain these phenomena. Furthermore, the analysis of the arrival structures shows that source localization based on arrival angle is feasible in this environment. However, the conventional methods suffer from the complicated and inaccurate estimation of the arrival angle. In this paper, a straightforward WSF-MF method is derived to exploit the information about the arrival angles indirectly. The method is to minimize the distance between the signal subspace and the space spanned by the array manifold in a finite range-depth space rather than the arrival-angle space. Simulations are performed to demonstrate the features of the method, and the results are explained by the arrival structures in the RAP environment.
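
    The subspace-fitting idea can be illustrated with a hedged toy sketch. The plane-wave "replica" model, scalar position grid, and noise levels below are invented for the sketch; the paper's replicas come from an acoustic propagation model over a range-depth grid, and its WSF criterion includes an eigenvalue-based weighting omitted here. The source is located by finding the replica whose span is closest to the estimated signal subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_snap = 16, 200

def replica(pos):
    # hypothetical steering vector parameterized by a scalar "position"
    k = np.arange(n_sensors)
    return np.exp(1j * k * pos) / np.sqrt(n_sensors)

true_pos = 0.7
sig = rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snap))
               + 1j * rng.standard_normal((n_sensors, n_snap)))
snapshots = replica(true_pos)[:, None] * sig + noise

# estimated signal subspace = dominant eigenvector of the sample covariance
R = snapshots @ snapshots.conj().T / n_snap
_, v = np.linalg.eigh(R)
Es = v[:, -1:]

def fit_distance(pos):
    # distance between the signal subspace and the span of the replica
    a = replica(pos)[:, None]
    return np.linalg.norm(Es - a @ (a.conj().T @ Es))

grid = np.linspace(0.0, 1.5, 301)
est = grid[np.argmin([fit_distance(p) for p in grid])]
print(est)   # close to true_pos = 0.7
```

    Minimizing the subspace-to-manifold distance over the search grid replaces explicit arrival-angle estimation, which is the point the abstract makes.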

  8. Method to repair localized amplitude defects in an EUV lithography mask blank

    DOEpatents

    Stearns, Daniel G.; Sweeney, Donald W.; Mirkarimi, Paul B.; Chapman, Henry N.

    2005-11-22

    A method and apparatus are provided for the repair of an amplitude defect in a multilayer coating. A significant number of layers underneath the amplitude defect are undamaged. The repair technique restores the local reflectivity of the coating by physically removing the defect and leaving a wide, shallow crater that exposes the underlying intact layers. The particle, pit, or scratch is first removed; the remaining damaged region is then etched away without disturbing the intact underlying layers.

  9. The massive Schwinger model on the lattice studied via a local Hamiltonian Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Schiller, A.; Ranft, J.

    1983-10-01

    A local Hamiltonian Monte Carlo method is used to study the massive Schwinger model. A non-vanishing quark condensate is found, and the dependence of the condensate and the string tension on the background field is calculated. These results reproduce the expected continuum results well. We also study the first-order phase transition which separates the weak and strong coupling regimes and find evidence for the behaviour conjectured by Coleman.

  10. Linear-scaling explicitly correlated treatment of solids: Periodic local MP2-F12 method

    SciTech Connect

    Usvyat, Denis

    2013-11-21

    Theory and implementation of the periodic local MP2-F12 method in the 3*A fixed-amplitude ansatz is presented. The method is formulated in the direct space, employing local representation for the occupied, virtual, and auxiliary orbitals in the form of Wannier functions (WFs), projected atomic orbitals (PAOs), and atom-centered Gaussian-type orbitals, respectively. Local approximations are introduced, restricting the list of the explicitly correlated pairs, as well as occupied, virtual, and auxiliary spaces in the strong orthogonality projector to the pair-specific domains on the basis of spatial proximity of respective orbitals. The 4-index two-electron integrals appearing in the formalism are approximated via the direct-space density fitting technique. In this procedure, the fitting orbital spaces are also restricted to local fit-domains surrounding the fitted densities. The formulation of the method and its implementation exploits the translational symmetry and the site-group symmetries of the WFs. Test calculations are performed on LiH crystal. The results show that the periodic LMP2-F12 method substantially accelerates basis set convergence of the total correlation energy, and even more so the correlation energy differences. The resulting energies are quite insensitive to the resolution-of-the-identity domain sizes and the quality of the auxiliary basis sets. The convergence with the orbital domain size is somewhat slower, but still acceptable. Moreover, inclusion of slightly more diffuse functions, than those usually used in the periodic calculations, improves the convergence of the LMP2-F12 correlation energy with respect to both the size of the PAO-domains and the quality of the orbital basis set. At the same time, the essentially diffuse atomic orbitals from standard molecular basis sets, commonly utilized in molecular MP2-F12 calculations, but problematic in the periodic context, are not necessary for LMP2-F12 treatment of crystals.

  11. Feasibility of A-mode ultrasound attenuation as a monitoring method of local hyperthermia treatment.

    PubMed

    Manaf, Noraida Abd; Aziz, Maizatul Nadwa Che; Ridzuan, Dzulfadhli Saffuan; Mohamad Salim, Maheza Irna; Wahab, Asnida Abd; Lai, Khin Wee; Hum, Yan Chai

    2016-06-01

    Recently, there has been increasing interest in the use of local hyperthermia treatment for a variety of clinical applications. The desired therapeutic outcome in local hyperthermia treatment is achieved by raising the local temperature to surpass the tissue coagulation threshold, resulting in tissue necrosis. In oncology, local hyperthermia is used as an effective way to destroy cancerous tissues and is said to have the potential to replace conventional treatment regimes like surgery, chemotherapy or radiotherapy. However, the inability to closely monitor temperature elevations from hyperthermia treatment in real time with high accuracy continues to limit its clinical applicability. Local hyperthermia treatment requires a real-time monitoring system to observe the progression of the destroyed tissue during and after the treatment. Ultrasound is one of the modalities that have great potential for local hyperthermia monitoring, as it is non-ionizing, convenient and has relatively simple signal processing requirements compared to magnetic resonance imaging and computed tomography. In a two-dimensional ultrasound imaging system, changes in tissue microstructure during local hyperthermia treatment are observed in terms of pixel value analysis extracted from the ultrasound image itself. Although 2D ultrasound has been shown to be the most widely used system for monitoring hyperthermia in the ultrasound imaging family, 1D ultrasound could offer real-time monitoring and enables quantitative measurement to be conducted faster and with a simpler measurement instrument. Therefore, this paper proposes a new local hyperthermia monitoring method that is based on one-dimensional ultrasound. Specifically, the study investigates the effect of ultrasound attenuation in normal and pathological breast tissue when the temperature in tissue is varied between 37 and 65 °C during local hyperthermia treatment. Besides that, the total protein content measurement was also

  13. State-Based Curriculum-Making: Approaches to Local Curriculum Work in Norway and Finland

    ERIC Educational Resources Information Center

    Mølstad, Christina Elde

    2015-01-01

    This article investigates how state authorities in Norway and Finland design national curriculum to provide different policy conditions for local curriculum work in municipalities and schools. The topic is explored by comparing how national authorities in Norway and Finland create a scope for local curriculum. The data consist of interviews with…

  14. Source localization of turboshaft engine broadband noise using a three-sensor coherence method

    NASA Astrophysics Data System (ADS)

    Blacodon, Daniel; Lewy, Serge

    2015-03-01

    Turboshaft engines can become the main source of helicopter noise at takeoff. Inlet radiation mainly comes from the compressor tones, but aft radiation is more intricate: turbine tones usually are above the audible frequency range and do not contribute to the weighted sound levels; the jet is secondary and radiates low noise levels. A broadband component is the most annoying, but its sources are not well known (it is called internal or core noise). The present study was made in the framework of the European project TEENI (Turboshaft Engine Exhaust Noise Identification). Its main objective was to localize the broadband sources in order to better reduce them. Several diagnostic techniques were implemented by the various TEENI partners. As regards ONERA, a first attempt at separating sources was made in the past with Turbomeca using a three-signal coherence method (TSM) to reject background non-acoustic noise. The main difficulty when using TSM is the assessment of the frequency range where the results are valid. This drawback has been circumvented in the TSM implemented in TEENI. Measurements were made on a highly instrumented Ardiden turboshaft engine in the Turbomeca open-air test bench. Two engine powers (approach and takeoff) were selected to apply TSM. Two internal pressure probes were located in various cross-sections, either behind the combustion chamber (CC), the high-pressure turbine (HPT), the free-turbine first stage (TL), or in four nozzle sections. The third transducer was a far-field microphone located around the maximum of radiation, at 120° from the intake centerline. The key result is that coherence increases from CC to HPT and TL, then decreases in the nozzle up to the exit. Pressure fluctuations from HPT and TL are very coherent with the far-field acoustic spectra up to 700 Hz. They are thus the main acoustic source and can be attributed to indirect combustion noise (accuracy decreases above 700 Hz because coherence is lower, but far-field sound spectra
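
    The core idea of a three-signal coherence estimate can be sketched on synthetic data (the signal models, noise levels, and sensor roles below are invented for illustration and are far simpler than the engine measurements): a common source reaches all three sensors, each contaminated by independent noise, and the noise terms drop out of the cross-spectra.

```python
import numpy as np
from scipy.signal import csd, welch

rng = np.random.default_rng(1)
fs, n = 1024, 2 ** 16
s = rng.standard_normal(n)                 # common broadband source
x1 = s + 1.0 * rng.standard_normal(n)      # internal probe 1 (noisy)
x2 = s + 1.0 * rng.standard_normal(n)      # internal probe 2 (noisy)
x3 = s + 0.5 * rng.standard_normal(n)      # far-field microphone

kw = dict(fs=fs, nperseg=1024)
_, g12 = csd(x1, x2, **kw)
_, g13 = csd(x1, x3, **kw)
_, g23 = csd(x2, x3, **kw)
_, p11 = welch(x1, **kw)

# three-signal estimate of the coherent (source-related) autospectrum at
# probe 1: independent noise averages out of the three cross-spectra
s11_hat = np.abs(g12 * g13 / g23)
print(np.mean(s11_hat) / np.mean(p11))     # well below 1: probe noise rejected
```

    The ratio printed at the end compares the coherent estimate against the raw autospectrum of the noisy probe; here the source accounts for roughly half of the probe's power, and the estimate recovers that share.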

  15. A numerical method of tracing a vortical axis along local topological axis line

    NASA Astrophysics Data System (ADS)

    Nakayama, Katsuyuki; Hasegawa, Hideki

    2016-06-01

    A new numerical method is presented to trace or identify a vortical axis in flow, based on Galilean-invariant flow topology. We focus on the local flow topology specified by the eigenvalues and eigenvectors of the velocity gradient tensor, and extract the axis component from its flow trajectory. The eigen-vortical-axis line is defined from the eigenvector of the real eigenvalue of the velocity gradient tensor where the tensor has conjugate complex eigenvalues. This numerical method integrates the eigen-vortical-axis line and traces a vortical axis in terms of the invariant flow topology, which enables investigation of the features of the topology-based vortical axis.
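
    The local criterion can be sketched directly (the example tensor below is invented for illustration): where the velocity gradient tensor has a conjugate pair of complex eigenvalues, the flow swirls locally, and the eigenvector of the single real eigenvalue gives the local axis direction.

```python
import numpy as np

def eigen_vortical_axis(grad_v):
    """Return the unit eigenvector of the real eigenvalue of grad_v when
    the other two eigenvalues form a complex-conjugate pair; otherwise
    return None (no local swirl)."""
    w, v = np.linalg.eig(grad_v)
    complex_mask = np.abs(np.imag(w)) > 1e-12
    if complex_mask.sum() != 2:
        return None
    axis = np.real(v[:, ~complex_mask][:, 0])
    return axis / np.linalg.norm(axis)

# solid-body rotation about z plus axial strain: eigenvalues s, +/- i*omega
omega, s_rate = 2.0, 0.3
A = np.array([[0.0, -omega, 0.0],
              [omega, 0.0, 0.0],
              [0.0, 0.0, s_rate]])
print(eigen_vortical_axis(A))   # aligned with the z axis (up to sign)
```

    Tracing the eigen-vortical-axis line then amounts to integrating along this eigenvector field from point to point, which is the step the paper formalizes.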

  16. Local Search Methods for Tree Chromosome Structure in a GA to Identify Functions

    NASA Astrophysics Data System (ADS)

    Matayoshi, Mitsukuni; Nakamura, Morikazu; Miyagi, Hayao

    In this paper, local search methods for "Tree Chromosome Structure in a Genetic Algorithm to Identify Functions", which succeeds in function identification, are proposed. The proposed method aims to improve the identification success rate and to shorten identification time. The target functions of identification are composed of algebraic functions, primary transcendental functions, time-series functions including a chaos function, and user-defined one-variable functions. In testing, Kepler's third law is added to Matayoshi's test functions (7)-(9). For several identified functions, improvement of the identification rate and shortening of identification time are demonstrated. However, we also report some ineffectual results and give considerations.

  17. Physics-based approach to chemical source localization using mobile robotic swarms

    NASA Astrophysics Data System (ADS)

    Zarzhitsky, Dimitri

    2008-07-01

    Recently, distributed computation has assumed a dominant role in the fields of artificial intelligence and robotics. To improve system performance, engineers are combining multiple cooperating robots into cohesive collectives called swarms. This thesis illustrates the application of basic principles of physicomimetics, or physics-based design, to swarm robotic systems. Such principles include decentralized control, short-range sensing and low power consumption. We show how the application of these principles to robotic swarms results in highly scalable, robust, and adaptive multi-robot systems. The emergence of these valuable properties can be predicted with the help of well-developed theoretical methods. In this research effort, we have designed and constructed a distributed physicomimetics system for locating sources of airborne chemical plumes. This task, called chemical plume tracing (CPT), is receiving a great deal of attention due to persistent homeland security threats. For this thesis, we have created a novel CPT algorithm called fluxotaxis that is based on theoretical principles of fluid dynamics. Analytically, we show that fluxotaxis combines the essence, as well as the strengths, of the two most popular biologically-inspired CPT methods: chemotaxis and anemotaxis. The chemotaxis strategy consists of navigating in the direction of the chemical density gradient within the plume, while the anemotaxis approach is based on an upwind traversal of the chemical cloud. Rigorous and extensive experimental evaluations have been performed in simulated chemical plume environments. Using a suite of performance metrics that capture the salient aspects of swarm-specific behavior, we have been able to evaluate and compare the three CPT algorithms. We demonstrate the improved performance of our fluxotaxis approach over both chemotaxis and anemotaxis in these realistic simulation environments, which include obstacles. To test our understanding of CPT on actual hardware
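
    The chemotaxis strategy described above can be sketched minimally (a single gradient-following agent in an invented Gaussian plume; the thesis's fluxotaxis instead uses fluid-dynamic flux quantities and a cooperating swarm, and real plumes are turbulent rather than smooth):

```python
import numpy as np

# hypothetical smooth plume with its source at (7, 3)
src = np.array([7.0, 3.0])

def density(p):
    return np.exp(-np.sum((p - src) ** 2) / 10.0)

def grad(p, h=1e-4):
    # finite-difference estimate of the gradient a robot would sense
    g = np.zeros(2)
    for k in range(2):
        e = np.zeros(2)
        e[k] = h
        g[k] = (density(p + e) - density(p - e)) / (2 * h)
    return g

p = np.array([0.0, 0.0])
for _ in range(200):            # fixed-step gradient following
    g = grad(p)
    norm = np.linalg.norm(g)
    if norm < 1e-15:
        break
    p = p + 0.1 * g / norm
print(p)                        # ends near the source at (7, 3)
```

    Anemotaxis would replace the gradient direction with the upwind direction; fluxotaxis, per the thesis, combines density and flow information.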

  18. AN ANALYTICAL APPROACH TO RESEARCH ON INSTRUCTIONAL METHODS.

    ERIC Educational Resources Information Center

    GAGE, N.L.

    THE APPROACH USED AT STANFORD UNIVERSITY TO RESEARCH ON TEACHING WAS DISCUSSED, AND THE AUTHOR EXPLAINED THE CONCEPTS OF "TECHNICAL SKILLS,""MICROTEACHING," AND "MICROCRITERIA" THAT WERE THE BASIS OF THE DEVELOPMENT OF THIS APPROACH TO RESEARCH AND TO STANFORD'S SECONDARY-TEACHER EDUCATION PROGRAM. THE AUTHOR PRESENTED A BASIC DISTINCTION BETWEEN…

  19. A method based on local approximate solutions (LAS) for inverting transient flow in heterogeneous aquifers

    NASA Astrophysics Data System (ADS)

    Jiao, Jianying; Zhang, Ye

    2014-06-01

    An inverse method based on local approximate solutions (LAS inverse method) is proposed to invert transient flows in heterogeneous aquifers. Unlike the objective-function-based inversion techniques, the method does not require forward simulations to assess measurement-to-model misfits; thus the knowledge of aquifer initial conditions (IC) and boundary conditions (BC) is not required. Instead, the method employs a set of local approximate solutions of flow to impose continuity of hydraulic head and Darcy fluxes throughout space and time. Given sufficient (but limited) measurements, it yields well-posed systems of nonlinear equations that can be solved efficiently with optimization. Solution of the inversion includes parameters (hydraulic conductivities, specific storage coefficients) and flow field including the unknown IC and BC. Given error-free measurements, the estimated conductivities and specific storages are accurate within 10% of the true values. When increasing measurement errors are imposed, the estimated parameters become less accurate, but the inverse solution is still stable, i.e., parameter, IC, and BC estimation remains bounded. For a problem where parameter variation is unknown, highly parameterized inversion can reveal the underlying parameter structure, whereas equivalent conductivity and average storage coefficient can also be estimated. Because of the physically-based constraints placed in inversion, the number of measurements does not need to exceed the number of parameters for the inverse method to succeed.

  20. A locally stabilized immersed boundary method for the compressible Navier-Stokes equations

    NASA Astrophysics Data System (ADS)

    Brehm, C.; Hader, C.; Fasel, H. F.

    2015-08-01

    A higher-order immersed boundary method for solving the compressible Navier-Stokes equations is presented. The distinguishing feature of this new immersed boundary method is that the coefficients of the irregular finite-difference stencils in the vicinity of the immersed boundary are optimized to obtain improved numerical stability. This basic idea was introduced in a previous publication by the authors for the advection step in the projection method used to solve the incompressible Navier-Stokes equations. This paper extends the original approach to the compressible Navier-Stokes equations considering flux vector splitting schemes and viscous wall boundary conditions at the immersed geometry. In addition to the stencil optimization procedure for the convective terms, this paper discusses other key aspects of the method, such as imposing flux boundary conditions at the immersed boundary and the discretization of the viscous flux in the vicinity of the boundary. Extensive linear stability investigations of the immersed scheme confirm that a linearly stable method is obtained. The method of manufactured solutions is used to validate the expected higher-order accuracy and to study the error convergence properties of this new method. Steady and unsteady, 2D and 3D canonical test cases are used for validation of the immersed boundary approach. Finally, the method is employed to simulate the laminar to turbulent transition process of a hypersonic Mach 6 boundary layer flow over a porous wall and subsonic boundary layer flow over a three-dimensional spherical roughness element.

  1. Scale-adaptive tensor algebra for local many-body methods of electronic structure theory

    SciTech Connect

    Liakh, Dmitry I

    2014-01-01

    While the formalism of multiresolution analysis (MRA), based on wavelets and adaptive integral representations of operators, is actively progressing in electronic structure theory (mostly on the independent-particle level and, recently, second-order perturbation theory), the concepts of multiresolution and adaptivity can also be utilized within the traditional formulation of correlated (many-particle) theory which is based on second quantization and the corresponding (generally nonorthogonal) tensor algebra. In this paper, we present a formalism called scale-adaptive tensor algebra (SATA) which exploits an adaptive representation of tensors of many-body operators via the local adjustment of the basis set quality. Given a series of locally supported fragment bases of a progressively lower quality, we formulate the explicit rules for tensor algebra operations dealing with adaptively resolved tensor operands. The formalism suggested is expected to enhance the applicability and reliability of local correlated many-body methods of electronic structure theory, especially those directly based on atomic orbitals (or any other localized basis functions).

  2. Method for simultaneous localization and parameter estimation in particle tracking experiments

    NASA Astrophysics Data System (ADS)

    Ashley, Trevor T.; Andersson, Sean B.

    2015-11-01

    We present a numerical method for the simultaneous localization and parameter estimation of a fluorescent particle undergoing a discrete-time continuous-state Markov process. In particular, implementation of the method proposed in this work yields an approximation to the posterior density of the particle positions over time, in addition to maximum likelihood estimates of fixed, unknown parameters. The method employs sequential Monte Carlo methods and can take into account complex, potentially nonlinear noise models, including shot noise and camera-specific readout noise, as well as a wide variety of motion models and observation models, including those representing recent engineered point spread functions. We demonstrate the technique by applying it to four scenarios, including a particle undergoing free, confined, and tethered diffusion.
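
    A hedged sketch of the sequential Monte Carlo machinery (not the authors' implementation: the 1D free-diffusion motion model, Gaussian readout noise, grid search, and all numbers below are invented for illustration): a bootstrap particle filter jointly approximates the position posterior and scores candidate values of a fixed parameter, here a diffusion coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)
T, n_particles = 50, 5000
D_true, obs_sigma = 0.5, 0.3

x = np.cumsum(np.sqrt(D_true) * rng.standard_normal(T))  # true trajectory
y = x + obs_sigma * rng.standard_normal(T)               # noisy readouts

def smc_loglik(D):
    """Bootstrap particle filter: approximates the posterior over positions
    and returns the data log-likelihood (additive constants dropped) for a
    candidate diffusion coefficient D."""
    r = np.random.default_rng(123)   # common random numbers across D values
    p = np.zeros(n_particles)
    ll = 0.0
    for t in range(T):
        p = p + np.sqrt(D) * r.standard_normal(n_particles)   # propagate
        logw = -0.5 * ((y[t] - p) / obs_sigma) ** 2
        m = logw.max()
        ll += m + np.log(np.mean(np.exp(logw - m)))
        w = np.exp(logw - m)
        p = p[r.choice(n_particles, n_particles, p=w / w.sum())]  # resample
    return ll

# maximum-likelihood estimate of D over a coarse grid
grid = np.linspace(0.1, 1.5, 15)
D_hat = grid[np.argmax([smc_loglik(D) for D in grid])]
print(D_hat)                      # near D_true = 0.5
```

    The resampled particles at each step give the position posterior approximation; maximizing the accumulated log-likelihood over candidate parameter values gives the ML estimate the abstract refers to.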

  3. A batch sliding window method for local singularity mapping and its application for geochemical anomaly identification

    NASA Astrophysics Data System (ADS)

    Xiao, Fan; Chen, Zhijun; Chen, Jianguo; Zhou, Yongzhang

    2016-05-01

    In this study, a novel batch sliding window (BSW) based singularity mapping approach is proposed. Compared to the traditional sliding window (SW) technique, which suffers from an empirically predetermined fixed maximum window size and the outlier sensitivity of least-squares (LS) linear regression, the BSW approach automatically determines the optimal size of the largest window for each estimated position and uses robust linear regression (RLR), which is insensitive to outliers. In the case study, tin geochemical data from Gejiu, Yunnan, were processed by the BSW singularity mapping approach. The results show that the BSW approach improves the accuracy of the calculated singularity exponent values, owing to the determination of the optimal maximum window size. The use of RLR in the BSW approach smooths the distribution of singularity index values, suppressing the noise-like fluctuations that usually make a singularity map rough and discontinuous. Furthermore, the Student's t-statistic diagram indicates a strong spatial correlation between high geochemical anomalies and known tin polymetallic deposits. Areas of high tin geochemical anomaly probably have much higher potential for the exploration of new tin polymetallic deposits than other areas, particularly those showing strong tin geochemical anomalies in which no tin polymetallic deposits have yet been found.
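
    The scaling relation behind singularity mapping can be sketched on a synthetic field (invented for illustration): for a power-law singularity c(r) ~ r**(alpha - 2) in 2D, means over windows of growing size scale as size**(alpha - 2), so the exponent follows from a log-log regression. Ordinary least squares stands in here for the robust linear regression (RLR) and per-position batch window selection of the paper.

```python
import numpy as np

alpha_true = 1.4
n = 257
ii, jj = np.mgrid[0:n, 0:n]
r = np.hypot(ii - n // 2, jj - n // 2) + 0.5   # +0.5 regularizes the centre
c = r ** (alpha_true - 2.0)

# window means over growing windows centred on the singular point
sizes = np.array([5, 9, 17, 33, 65, 129])
means = []
for s in sizes:
    h = s // 2
    win = c[n // 2 - h: n // 2 + h + 1, n // 2 - h: n // 2 + h + 1]
    means.append(win.mean())

# log-log fit: slope = alpha - 2 (discretization biases it slightly)
slope, _ = np.polyfit(np.log(sizes), np.log(means), 1)
alpha_hat = slope + 2.0
print(alpha_hat)
```

    Repeating this fit at every map position, with the largest window chosen per position and a robust fit in place of `np.polyfit`, gives a singularity map in the spirit of the BSW approach.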

  4. Promising ethical arguments for product differentiation in the organic food sector. A mixed methods research approach.

    PubMed

    Zander, Katrin; Stolz, Hanna; Hamm, Ulrich

    2013-03-01

    Ethical consumerism is a growing trend worldwide. Ethical consumers' expectations are increasing and neither the Fairtrade nor the organic farming concept covers all the ethical concerns of consumers. Against this background, the aim of this research is to elicit consumers' preferences regarding organic food with additional ethical attributes and their relevance in the marketplace. A mixed methods research approach was applied by combining an Information Display Matrix, Focus Group Discussions and Choice Experiments in five European countries. According to the results of the Information Display Matrix, 'higher animal welfare', 'local production' and 'fair producer prices' were preferred in all countries. These three attributes were discussed with Focus Groups in depth, using rather emotive ways of labelling. While the ranking of the attributes was the same, the emotive way of communicating these attributes was, for the most part, disliked by participants. The same attributes were then used in Choice Experiments, but with completely revised communication arguments. According to the results of the Focus Groups, the arguments were presented in a factual manner, using short and concise statements. In this research step, consumers in all countries except Austria gave priority to 'local production'. 'Higher animal welfare' and 'fair producer prices' turned out to be relevant for buying decisions only in Germany and Switzerland. According to our results, there is substantial potential for product differentiation in the organic sector through making use of production standards that exceed existing minimum regulations. The combination of different research methods in a mixed methods approach proved to be very helpful. The results of earlier research steps provided the basis from which to learn; findings could be applied in subsequent steps, and used to adjust and deepen the research design.

  6. Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity

    PubMed Central

    Oh, Taekjun; Lee, Donghwa; Kim, Hyungjin; Myung, Hyun

    2015-01-01

    Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. PMID:26151203

  9. Robust method to detect and locate local earthquakes by means of amplitude measurements.

    NASA Astrophysics Data System (ADS)

    del Puy Papí Isaba, María; Brückl, Ewald

    2016-04-01

    In this study we present a robust new method to detect and locate medium and low magnitude local earthquakes. This method is based on an empirical model of the ground motion obtained from amplitude data of earthquakes in the area of interest, which were located using traditional methods. The first step of our method is the computation of maximum resultant ground velocities in sliding time windows covering the whole period of interest. In the second step, these maximum resultant ground velocities are back-projected to every point of a grid covering the whole area of interest while applying the empirical amplitude - distance relations. We refer to these back-projected ground velocities as pseudo-magnitudes. The number of operating seismic stations in the local network equals the number of pseudo-magnitudes at each grid-point. Our method introduces the new idea of selecting the minimum pseudo-magnitude at each grid-point for further analysis instead of searching for a minimum of the L2 or L1 norm. In case no detectable earthquake occurred, the spatial distribution of the minimum pseudo-magnitudes constrains the magnitude of weak earthquakes hidden in the ambient noise. In the case of a detectable local earthquake, the spatial distribution of the minimum pseudo-magnitudes shows a significant maximum at the grid-point nearest to the actual epicenter. The application of our method is restricted to the area confined by the convex hull of the seismic station network. Additionally, one must ensure that there are no dead traces involved in the processing. Compared to methods based on L2 and even L1 norms, our new method is almost wholly insensitive to outliers (data from locally disturbed seismic stations). A further advantage is the fast determination of the epicenter and magnitude of a seismic event located within a seismic network. This is possible due to the method of obtaining and storing a back-projected matrix, independent of the registered amplitude, for each seismic
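
    The back-projection and minimum-selection steps can be sketched as follows. The amplitude-distance relation M = log10(v) + k·log10(r) + c and its constants k, c are placeholders; the paper fits an empirical model from previously located earthquakes.

```python
import numpy as np

def min_pseudo_magnitude_map(sta_xy, v_max, grid_x, grid_y, k=1.66, c=0.0):
    """Back-project each station's maximum resultant ground velocity to
    every grid point with an assumed relation M = log10(v) + k*log10(r) + c,
    keeping the station-wise minimum pseudo-magnitude at each point."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    field = np.full(gx.shape, np.inf)
    for (sx, sy), v in zip(sta_xy, v_max):
        r = np.hypot(gx - sx, gy - sy) + 1e-6    # avoid log10(0) at a station
        pm = np.log10(v) + k * np.log10(r) + c   # pseudo-magnitude
        field = np.minimum(field, pm)            # robust minimum over stations
    return field

# The epicenter estimate is the grid point with the largest minimum
# pseudo-magnitude; taking the minimum (not an L2/L1 fit) is what makes
# the method insensitive to a single disturbed station.
```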

  10. Multi-step approach for comparing the local air pollution contributions of conventional and innovative MSW thermo-chemical treatments.

    PubMed

    Ragazzi, M; Rada, E C

    2012-10-01

    In the sector of municipal solid waste management, the debate on the performance of conventional and novel thermo-chemical technologies is still relevant. When a plant must be constructed, decision makers often select a technology prior to analyzing the local environmental impact of the available options, as this type of study is generally developed once the design of the plant has been carried out. Additionally, the literature lacks comparative analyses of the contributions to local air pollution from different technologies. The present study offers a multi-step approach, based on pollutant emission factors and atmospheric dilution coefficients, for a local comparative analysis. With this approach it is possible to check whether some assumptions about the advantages of the novel thermo-chemical technologies, in terms of local direct impact on air quality, apply to municipal solid waste treatment. The selected processes concern combustion, gasification and pyrolysis, alone or in combination. The pollutants considered are both carcinogenic and non-carcinogenic. A case study is presented concerning the location of a plant in an alpine region and its contribution to the local air pollution. Results show that differences among technologies are smaller than expected. The performance of each technology is discussed in detail. PMID:22795304

  11. Localization microscopy of DNA in situ using Vybrant(®) DyeCycle™ Violet fluorescent probe: A new approach to study nuclear nanostructure at single molecule resolution.

    PubMed

    Żurek-Biesiada, Dominika; Szczurek, Aleksander T; Prakash, Kirti; Mohana, Giriram K; Lee, Hyun-Keun; Roignant, Jean-Yves; Birk, Udo J; Dobrucki, Jurek W; Cremer, Christoph

    2016-05-01

    Higher order chromatin structure is not only required to compact and spatially arrange long chromatids within a nucleus, but also has important functional roles, including control of gene expression and DNA processing. However, studies of chromatin nanostructures cannot be performed using conventional widefield and confocal microscopy because of the limited optical resolution. Various methods of superresolution microscopy have been described to overcome this difficulty, such as structured illumination and single molecule localization microscopy. We report here that the standard DNA dye Vybrant(®) DyeCycle™ Violet can be used to provide single molecule localization microscopy (SMLM) images of DNA in nuclei of fixed mammalian cells. This SMLM method enabled optical isolation and localization of large numbers of DNA-bound molecules, usually in excess of 10(6) signals in one cell nucleus. The technique yielded high-quality images of nuclear DNA density, revealing subdiffraction chromatin structures with sizes on the order of 100 nm; the interchromatin compartment was visualized at unprecedented optical resolution. The approach offers several advantages over previously described high resolution DNA imaging methods, including high specificity, the ability to record images using a single excitation wavelength, and a higher density of single molecule signals than reported in previous SMLM studies. The method is compatible with DNA/multicolor SMLM imaging, which employs simple staining methods suited also for conventional optical microscopy.

  12. An investigation of acoustic beam patterns for the sonar localization problem using a beam based method.

    PubMed

    Guarato, Francesco; Windmill, James; Gachagan, Anthony; Harvey, Gerald

    2013-06-01

    Target localization can be accomplished through an ultrasonic sonar system equipped with an emitter and two receivers. The time of flight of the sonar echoes allows calculation of the distance to the target. The orientation can be estimated from knowledge of the beam pattern of the receivers and the ratio, in the frequency domain, between the emitted and received signals after compensation for distance effects and air absorption. The localization method is described and, as its performance strongly depends on the beam pattern, the search for the most appropriate sonar receiver to ensure the highest accuracy of target orientation estimation is developed in this paper. The structure designs considered are inspired by the ear shapes of some bat species. Parameters like flare rate, truncation angle, and tragus are considered in the design of the receiver structures. Simulations of the localization method allow us to state which combination of those parameters could provide the best real-world implementation. Simulation results show that the errors in estimated target orientations are, in the worst case, 2° with SNR = 50 dB using the receiver structure chosen for a potential practical implementation of a sonar system.
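
    A minimal sketch of the distance and spectral-compensation steps described above, assuming spherical spreading (1/r) and a precomputed frequency-dependent absorption coefficient in dB/m; the function names are illustrative, and the subsequent matching of the residual ratio against the known beam pattern is not shown.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C (assumed constant)

def target_distance(t_emit, t_echo):
    """Round-trip time of flight to one-way target distance (m)."""
    return 0.5 * SPEED_OF_SOUND * (t_echo - t_emit)

def compensated_ratio(emitted_spec, received_spec, distance, alpha_db_per_m):
    """Ratio of received to emitted spectra after removing spherical
    spreading (1/r) and frequency-dependent air absorption over the
    round trip; the residual is attributed to the receiver's
    direction-dependent beam pattern."""
    r = 2.0 * distance                                  # round-trip path length
    absorption = 10.0 ** (-alpha_db_per_m * r / 20.0)   # dB loss -> amplitude factor
    spreading = 1.0 / r
    return received_spec / (emitted_spec * spreading * absorption)
```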

  13. An Amplitude-Based Estimation Method for International Space Station (ISS) Leak Detection and Localization Using Acoustic Sensor Networks

    NASA Technical Reports Server (NTRS)

    Tian, Jialin; Madaras, Eric I.

    2009-01-01

    The development of a robust and efficient leak detection and localization system within a space station environment presents a unique challenge. A plausible approach includes the implementation of an acoustic sensor network system that can successfully detect the presence of a leak and determine the location of the leak source. Traditional acoustic detection and localization schemes rely on the phase and amplitude information collected by the sensor array system. Furthermore, the acoustic source signals are assumed to be airborne and far-field. Likewise, there are similar applications in sonar. In solids, there are specialized methods for locating events that are used in geology and in acoustic emission testing that involve sensor arrays and depend on a discernable phase front to the received signal. These methods are ineffective if applied to a sensor detection system within the space station environment. In the case of acoustic signal location, there are significant baffling and structural impediments to the sound path and the source could be in the near-field of a sensor in this particular setting.

  14. A procurement-based pathway for promoting public health: innovative purchasing approaches for state and local government agencies.

    PubMed

    Noonan, Kathleen; Miller, Dorothy; Sell, Katherine; Rubin, David

    2013-11-01

    Through their purchasing powers, government agencies can play a critical role in leveraging markets to create healthier foods. In the United States, state and local governments are implementing creative approaches to procuring healthier foods, moving beyond the traditional regulatory relationship between government and vendors. They are forging new partnerships between government, non-profits, and researchers to increase healthier purchasing. On the basis of case examples, this article proposes a pathway in which state and local government agencies can use the procurement cycle to improve healthy eating.

  15. Efficient and accurate local single reference correlation methods for high-spin open-shell molecules using pair natural orbitals

    NASA Astrophysics Data System (ADS)

    Hansen, Andreas; Liakos, Dimitrios G.; Neese, Frank

    2011-12-01

    A production level implementation of the high-spin open-shell (spin unrestricted) single reference coupled pair, quadratic configuration interaction and coupled cluster methods with up to doubly excited determinants in the framework of the local pair natural orbital (LPNO) concept is reported. This work is an extension of the closed-shell LPNO methods developed earlier [F. Neese, F. Wennmohs, and A. Hansen, J. Chem. Phys. 130, 114108 (2009), 10.1063/1.3086717; F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The internal space is spanned by localized orbitals, while the external space for each electron pair is represented by a truncated PNO expansion. The laborious integral transformation associated with the large number of PNOs becomes feasible through the extensive use of density fitting (resolution of the identity (RI)) techniques. Technical complications arising for the open-shell case and the use of quasi-restricted orbitals for the construction of the reference determinant are discussed in detail. As in the closed-shell case, only three cutoff parameters control the average number of PNOs per electron pair, the size of the significant pair list, and the number of contributing auxiliary basis functions per PNO. The chosen threshold default values ensure robustness and the results of the parent canonical methods are reproduced to high accuracy. Comprehensive numerical tests on absolute and relative energies as well as timings consistently show that the outstanding performance of the LPNO methods carries over to the open-shell case with minor modifications. Finally, hyperfine couplings calculated with the variational LPNO-CEPA/1 method, for which a well-defined expectation value type density exists, indicate the great potential of the LPNO approach for the efficient calculation of molecular properties.

  16. Towards Reduced Parameter Uncertainty in Groundwater Model Calibration: Comparison of Local Gradient and Global Evolutionary Search Methods

    NASA Astrophysics Data System (ADS)

    Zyvoloski, G. A.; Vrugt, J. A.; Wolfsberg, A.; Stauffer, P.; Doherty, J.

    2006-12-01

    The calibration of very large and complex groundwater models is becoming common as a means to help address issues of reliability and uncertainty. Models with many parameters might require thousands of model runs to achieve an acceptable calibration. In addition, larger basin-scale models often take hours to run. Obviously, the efficiency of the calibration method can be crucial to practical calibration of these large models. Model-independent estimation packages such as PEST (Doherty, 2005) that are based on the Gauss-Newton-Levenberg-Marquardt (GNLM) method provide inverse modeling capabilities with considerable flexibility in choosing parameters and observations. However, when dealing with highly nonlinear problems, they may converge to a local, rather than global, optimum. Recently, Vrugt and Robinson (2006) presented a new concept of genetically adaptive multi-method search that has been shown to significantly improve the efficiency of global search, approaching a factor of ten improvement for the more complex, higher dimensional problems. This new optimization method is called AMALGAM. In this study, we compare the GNLM and AMALGAM methods on several different synthetic groundwater models ranging from a layered basin model to a complex unconfined model. Algorithms are compared on the basis of computational efficiency and robustness of the solution.
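
    The contrast between a local gradient search and a global evolutionary search can be illustrated on a toy multimodal calibration problem. This is only a stand-in for the groundwater models, using SciPy's Levenberg-Marquardt-style least_squares and differential_evolution rather than PEST or AMALGAM themselves.

```python
import numpy as np
from scipy.optimize import least_squares, differential_evolution

# Toy calibration problem with local minima: recover p_true from
# synthetic observations of a nonlinear model response.
p_true = np.array([2.0, -1.0])
x = np.linspace(0.0, 3.0, 40)

def model(p, x):
    return np.sin(p[0] * x) + p[1] * x

obs = model(p_true, x)

def residuals(p):
    return model(p, x) - obs

# Local gradient search: from a poor starting point it may converge
# to a local optimum of the multimodal misfit surface.
local = least_squares(residuals, x0=[8.0, 5.0])

# Global evolutionary search explores the whole parameter box.
glob = differential_evolution(lambda p: np.sum(residuals(p) ** 2),
                              bounds=[(-10, 10), (-10, 10)], seed=1)

# glob.fun should be near zero; local.cost depends on the starting point.
```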

  17. Exploiting the spatial locality of electron correlation within the parametric two-electron reduced-density-matrix method

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene; Mazziotti, David A.

    2010-01-01

    The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.

  18. A hierarchy of local coupled cluster singles and doubles response methods for ionization potentials.

    PubMed

    Wälz, Gero; Usvyat, Denis; Korona, Tatiana; Schütz, Martin

    2016-02-28

    We present a hierarchy of local coupled cluster (CC) linear response (LR) methods to calculate ionization potentials (IPs), i.e., excited states with one electron annihilated relative to a ground state reference. The time-dependent perturbation operator V(t), as well as the operators related to the first-order (with respect to V(t)) amplitudes and multipliers, thus are not number conserving and have half-integer particle rank m. Apart from calculating IPs of neutral molecules, the method also offers the possibility to study ground and excited states of neutral radicals as ionized states of closed-shell anions. It turns out that for comparable accuracy IPs require a higher-order treatment than excitation energies; an IP-CC LR method corresponding to CC2 LR or the algebraic diagrammatic construction scheme through second order performs rather poorly. We therefore systematically extended the order with respect to the fluctuation potential of the IP-CC2 LR Jacobian up to IP-CCSD LR, keeping the excitation space of the first-order (with respect to V(t)) cluster operator restricted to the m = 1/2 ⊕ 3/2 subspace and the accuracy of the zero-order (ground-state) amplitudes at the level of CC2 or MP2. For the more expensive diagrams beyond the IP-CC2 LR Jacobian, we employ local approximations. The implemented methods are capable of treating large molecular systems with a hundred atoms or more.

  19. Extended Kantorovich method for local stresses in composite laminates upon polynomial stress functions

    NASA Astrophysics Data System (ADS)

    Huang, Bin; Wang, Ji; Du, Jianke; Guo, Yan; Ma, Tingfeng; Yi, Lijun

    2016-06-01

    The extended Kantorovich method is employed to study the local stress concentrations in the vicinity of free edges in symmetrically layered composite laminates subjected to uniaxial tensile load, using polynomial stress functions. The stress fields are initially assumed by means of the Lekhnitskii stress functions under the plane strain state. Applying the principle of complementary virtual work, coupled ordinary differential equations are obtained whose solutions follow from a generalized eigenvalue problem. An iterative procedure is then established to achieve convergent stress distributions. It should be noted that the stress-function-based extended Kantorovich method satisfies both the traction-free and free edge stress boundary conditions during the iterative process. The stress components near the free edges and in the interior regions are calculated and compared with results obtained by the finite element method (FEM). The convergent stresses show good agreement with three-dimensional (3D) FEM results. For generality, various layup configurations are considered in the numerical analysis. The results show that the proposed polynomial stress function based extended Kantorovich method is accurate and efficient in predicting the local stresses in composite laminates, and computationally much more efficient than 3D FEM.

  20. Valuation of IT Courses--A Contingent Valuation Method Approach

    ERIC Educational Resources Information Center

    Liao, Chao-ning; Chiang, LiChun

    2008-01-01

    To help the civil servants in both central and local governments in Taiwan operating administrative works smoothly under a new digitalized system launched in 1994, a series of courses related to information technology were offered free to them annually by the central governments. However, due to the budget deficit in recent years, the government…

  1. Sustainable Development Index in Hong Kong: Approach, Method and Findings

    ERIC Educational Resources Information Center

    Tso, Geoffrey K. F.; Yau, Kelvin K. W.; Yang, C. Y.

    2011-01-01

    Sustainable development is a priority area of research in many countries and regions nowadays. This paper illustrates how a multi-stakeholders engagement process can be applied to identify and prioritize the local community's concerns and issues regarding sustainable development in Hong Kong. Ten priority areas covering a wide range of community's…

  2. Formation of Silicon-Gold Eutectic Bond Using Localized Heating Method

    NASA Astrophysics Data System (ADS)

    Lin, Liwei; Cheng, Yu-Ting; Najafi, Khalil

    1998-11-01

    A new bonding technique is proposed that uses localized heating to supply the bonding energy. Heating is achieved by applying a dc current through micromachined heaters made of gold, which serves as both the heating and bonding material. At the interface of silicon and gold, the eutectic bond forms in about 5 minutes. Assembly of two substrates in microfabrication processes can be achieved by using this method. In this paper the following important results are obtained: 1) gold diffuses into silicon to form a strong eutectic bond by means of localized heating; 2) the bonding strength reaches the fracture toughness of the bulk silicon; 3) this bonding technique greatly simplifies device fabrication and assembly processes.

  3. Calculating interaction energies in transition metal complexes with local electron correlation methods

    NASA Astrophysics Data System (ADS)

    Hill, J. Grant; Platts, James A.

    2008-10-01

    The results of density fitting and local approximations applied to the calculation of transition metal-ligand binding energies using second order Møller-Plesset perturbation theory are reported. This procedure accurately reproduces counterpoise-corrected binding energies from the canonical method for a range of test complexes. While counterpoise corrections for basis set superposition error are generally small, the procedure can be time consuming, and in some cases gives rise to unphysical dissociation of complexes. In circumventing this correction, a local treatment of electron correlation offers major efficiency savings with little loss of accuracy. The use of density fitting for the underlying Hartree-Fock calculations is also tested for sample Ru complexes, leading to further efficiency gains but essentially no loss in accuracy.
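
    The counterpoise correction itself is simple arithmetic once the component energies are available. A sketch, with illustrative function and argument names (fragment energies in the monomer basis versus in the full dimer basis with ghost atoms):

```python
def binding_energies(e_ab, e_a_mono, e_b_mono, e_a_ghost, e_b_ghost):
    """Uncorrected and counterpoise-corrected binding energies (hartree).

    e_ab       : energy of the complex AB in the dimer basis
    e_*_mono   : each fragment in its own basis
    e_*_ghost  : each fragment in the full dimer basis (partner as ghosts)
    """
    de_raw = e_ab - (e_a_mono + e_b_mono)    # contains BSSE (overbinds)
    de_cp = e_ab - (e_a_ghost + e_b_ghost)   # counterpoise corrected
    bsse = de_raw - de_cp                    # negative: spurious stabilization
    return de_raw, de_cp, bsse
```

    The local correlation treatment discussed in the abstract avoids having to run the two extra ghost-atom calculations per complex, which is where the efficiency saving comes from.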

  4. Research on the localization method of protecting traditional village landscape: a case study on Tangyin

    NASA Astrophysics Data System (ADS)

    Li, W.

    2015-08-01

    China has over 271 million villages, fewer than the 363 million counted ten years ago. New rural construction has indeed benefited ordinary villages, but it has also destroyed hundreds of thousands of traditional villages that hold great cultural, scientific and artistic value. In addition, traditional villages cannot meet the increasing need for more convenient and comfortable living conditions, and growing populations have put their construction out of control. Against this background, we must invest in traditional village protection. This article puts forward a protection approach that makes use of landscape localization to pursue sustainable development and the protection of vernacular landscape. Tangyin Town was a famous trade center in history and has left much cultural heritage, especially historical buildings. We take Tangyin as a case study to apply the localization method, which could guide other similar villages to achieve the same goals.

  5. Two methods for the study of vortex patch evolution on locally refined grids

    SciTech Connect

    Minion, M.L.

    1994-05-01

    Two numerical methods for the solution of the two-dimensional Euler equations for incompressible flow on locally refined grids are presented. The first is a second order projection method adapted from the method of Bell, Colella, and Glaz. The second method is based on the vorticity-stream function form of the Euler equations and is designed to be free-stream preserving and conservative. Second order accuracy of both methods in time and space is established, and they are shown to agree on problems with a localized vorticity distribution. The filamentation of a perturbed patch of circular vorticity and the merger of two smooth vortex patches are studied. It is speculated that for nearly stable patches of vorticity, an arbitrarily small amount of viscosity is sufficient to effectively eliminate vortex filaments from the evolving patch and that the filamentation process affects the evolution of such patches very little. Solutions of the vortex merger problem show that filamentation is responsible for the creation of large gradients in the vorticity which, in the presence of an arbitrarily small viscosity, will lead to vortex merger. It is speculated that a small viscosity in this problem does not substantially affect the transition of the flow to a statistical equilibrium solution. The main contributions of this thesis concern the formulation and implementation of a projection for refined grids. A careful analysis of the adjointness relation between gradient and divergence operators for a refined grid MAC projection is presented, and a uniformly accurate, approximately stable projection is developed. An efficient multigrid method which exactly solves the projection is developed, and a method for casting certain approximate projections as MAC projections on refined grids is presented.

  6. Assessing the options for local government to use legal approaches to combat obesity in the UK: putting theory into practice.

    PubMed

    Mitchell, C; Cowburn, G; Foster, C

    2011-08-01

    The law is recognized as a powerful tool to address some of the structural determinants of chronic disease, including 'obesogenic' environments, which are a major factor in the increasing prevalence of obesity worldwide. However, it is often local - as opposed to national - government that has responsibility for an environment, including the built environment, and its role in reducing obesity using law remains relatively unexplored. With the English government shifting emphasis for improvement of public health from central to local government, this paper reviews the potential for regulatory action by local government to reduce obesity. We took a novel approach to assess the evidence and to identify legal options for implementation by local government: conducting reviews of literature, media reports and case law. Our results provide a clear rationale for regulatory intervention that encourages a real choice of behaviour. They highlight strategic legal areas for the reduction of obesity through restriction of traffic and promotion of active travel, promotion of access to healthy food, and construction of a sustainable and active environment. Importantly, we identify current legal mechanisms for adoption by UK local government, including the use of planning, licensing and transport legislation to develop local obesity prevention policy.

  7. COST-ES0601: Advances in homogenisation methods of climate series: an integrated approach (HOME)

    NASA Astrophysics Data System (ADS)

    Mestre, O.; Auer, I.; Venema, V.; Stepanek, P.; Szentimrey, T.; Grimvall, A.; Aguilar, E.

    2009-04-01

    The COST Action ES0601: Advances in homogenisation methods of climate series: an integrated approach is nearing the end of its second year. The action is intended to provide the best possible tools for the homogenization of time series to the climate research community. The involved scientists have made remarkable progress since COST Action ES0601 was launched (see www.homogenisation.org). HOME started with a literature review and a survey of the research community to identify the climatic elements and homogenisation techniques to be considered during the action. This allowed the preparation of the benchmark monthly dataset to be used during the remaining time of the action. This monthly benchmark contains real temperature and precipitation data (with real inhomogeneities), as well as synthetic and surrogate networks, including artificially produced missing values, outliers, local trends and break inhomogeneities which are inserted at the usual rate, size and distribution found in actual networks. The location of the outliers and change points is undisclosed to the HOME scientists, who are, at present, applying different homogenisation approaches and uploading the results, to analyse the performance of their techniques. Everyone who works on the homogenization of climate data is cordially invited to join this exercise. HOME is also working on the production of a daily benchmark dataset, to reproduce the experiment described above at a higher temporal resolution, and on the preparation of freely available homogenization tools, including the best performing approaches.
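
    One of the simpler break-detection approaches benchmarked in exercises like this is the standard normal homogeneity test (SNHT). A minimal sketch over candidate break points is shown below; this is an illustration of the statistic, not the HOME software.

```python
import numpy as np

def snht_break(series):
    """Return the most likely break index and the maximum SNHT statistic
    T(k) = k*z1^2 + (n-k)*z2^2 over candidate break points k, where z1
    and z2 are the mean standardized anomalies before and after k."""
    series = np.asarray(series, dtype=float)
    z = (series - series.mean()) / series.std()   # standardized anomalies
    n = len(z)
    best_t, best_k = -np.inf, None
    for k in range(1, n - 1):
        z1, z2 = z[:k].mean(), z[k:].mean()       # means before/after the break
        t = k * z1 ** 2 + (n - k) * z2 ** 2
        if t > best_t:
            best_t, best_k = t, k
    return best_k, best_t
```

    In practice the statistic is compared against a significance threshold and applied to difference series between a candidate station and its neighbours, so that climate signal cancels and only station-specific inhomogeneities remain.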

  8. A method for local rectification of 2MASS positions with UCAC4

    NASA Astrophysics Data System (ADS)

    Bustos Fierro, I. H.; Calderón, J. H.

    2016-04-01

    We propose to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The rectification method starts by computing the weighted mean differences 2MASS-UCAC4 on a regular grid on the sky. The corrections that are then applied to 2MASS positions are obtained by spline interpolation of the mean values calculated on the grid. The method is tested in four 3° × 3° fields in the ecliptic zone; after rectification, the systematic differences in all of them are reduced to well below the random differences. The 2MASS catalog rectified with the proposed method can be regarded as an extension of UCAC4 for astrometry, with an accuracy of around 90 mas in the positions and negligible systematic errors, for instance for the astrometric reduction of small-field CCD images.
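
    The correction step described above (a grid of mean position differences, then spline interpolation at each star's position) can be sketched as follows; the function name and the use of SciPy's bicubic RectBivariateSpline are illustrative assumptions, not the authors' code.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def rectify(ra_2mass, dec_2mass, grid_ra, grid_dec, dra_grid, ddec_grid):
    """Apply grid-based corrections (weighted mean UCAC4-2MASS position
    differences precomputed on a regular grid) to 2MASS positions by
    bicubic spline interpolation. Positions and correction grids share
    the same units (e.g. degrees)."""
    s_ra = RectBivariateSpline(grid_ra, grid_dec, dra_grid)
    s_dec = RectBivariateSpline(grid_ra, grid_dec, ddec_grid)
    ra_corr = ra_2mass + s_ra.ev(ra_2mass, dec_2mass)    # evaluate at each star
    dec_corr = dec_2mass + s_dec.ev(ra_2mass, dec_2mass)
    return ra_corr, dec_corr
```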

  9. A quantitative autoradiographic method for the measurement of local rates of brain protein synthesis

    SciTech Connect

    Dwyer, B.E.; Donatoni, P.; Wasterlain, C.G.

    1982-05-01

    We have developed a new method for measuring local rates of brain protein synthesis in vivo. It combines the intraperitoneal injection of a large dose of low specific activity amino acid with quantitative autoradiography. This method has several advantages: 1) It is ideally suited for young or small animals, or where immobilizing an animal is undesirable. 2) The amino acid injection "floods" amino acid pools, so that errors in estimating precursor specific activity, which is especially important in pathological conditions, are minimized. 3) The method provides for the use of a radioautographic internal standard in which valine incorporation is measured directly. Internal standards from experimental animals correct for tissue protein content and self-absorption of radiation in tissue sections, which could vary under experimental conditions.

  10. Implementation and Testing of an Improved Mathematical Framework for the Bayesian Infrasonic Source Localization Method

    NASA Astrophysics Data System (ADS)

    Blom, P. S.; Arrowsmith, S.; Marcillo, O. E.

    2014-12-01

    The Bayesian Infrasonic Source Localization (BISL) framework for estimating the location and time of an infrasonic event using distant observations was proposed in 2010 and expanded in 2013 to allow the inclusion of propagation-based priors. Recently, modifications to the mathematical framework have been made to remove redundancies in the parameter space and generalize the framework. These modifications are aimed at improving the performance and efficiency of the method. The new mathematical formulation has been implemented in the Python scripting language and is planned for inclusion in the InfraPy software package alongside the existing detection and association methods. The details of the new mathematical framework and its implementation will be presented along with results of performance tests. IMS data have been used to evaluate the method at global distances, and infrasound from smaller-scale explosions provides an opportunity to study regional performance.
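A minimal, hypothetical sketch of the Bayesian localization principle (not the BISL implementation): given arrival times at several stations and a single assumed celerity with Gaussian travel-time error, evaluate an unnormalized posterior over candidate source locations, profiling over origin time. The station geometry, celerity, and error scale are all illustrative:

```python
import numpy as np

stations = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])  # km
celerity = 0.30          # km/s, assumed mean celerity
sigma_t = 10.0           # s, assumed travel-time error

# noise-free synthetic observations from a known source, for the demo
true_src = np.array([40.0, 60.0])
t0_true = 50.0
dist = np.linalg.norm(stations - true_src, axis=1)
arrivals = t0_true + dist / celerity

xs = np.linspace(0.0, 100.0, 101)
ys = np.linspace(0.0, 100.0, 101)
t0s = np.linspace(0.0, 100.0, 101)
X, Y = np.meshgrid(xs, ys, indexing="ij")

# travel time from every grid node to each station
tt = np.stack([np.hypot(X - sx, Y - sy) / celerity for sx, sy in stations])

log_post = np.full(X.shape, -np.inf)
for t0 in t0s:
    resid = arrivals[:, None, None] - (t0 + tt)
    ll = -0.5 * np.sum((resid / sigma_t) ** 2, axis=0)
    log_post = np.maximum(log_post, ll)   # profile over origin time

ix, iy = np.unravel_index(np.argmax(log_post), log_post.shape)
est = np.array([xs[ix], ys[iy]])          # maximum a posteriori location
```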

  11. A non-parametric method for measuring the local dark matter density

    NASA Astrophysics Data System (ADS)

    Silverwood, H.; Sivertsson, S.; Steger, P.; Read, J. I.; Bertone, G.

    2016-07-01

    We present a new method for determining the local dark matter density using kinematic data for a population of tracer stars. The Jeans equation in the z-direction is integrated to yield an equation that gives the velocity dispersion as a function of the total mass density, tracer density, and the `tilt' term that describes the coupling of vertical and radial motions. We then fit a dark matter mass profile to tracer density and velocity dispersion data to derive credible regions on the vertical dark matter density profile. Our method avoids numerical differentiation, leading to lower numerical noise, and is able to deal with the tilt term while remaining one dimensional. In this study we present the method and perform initial tests on idealized mock data. We also demonstrate the importance of dealing with the tilt term for tracers that sample ≳1 kpc above the disc plane. If ignored, this results in a systematic underestimation of the dark matter density.
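As a hedged sketch of the equations involved (the notation here is reconstructed from the abstract, not quoted from the paper), the vertical Jeans equation with the tilt term, and the integrated form that avoids numerical differentiation of the data, read:

```latex
% Vertical Jeans equation for a tracer population with number density \nu(z),
% vertical dispersion \sigma_z(z), and potential \Phi; the tilt term
% \mathcal{T} couples vertical and radial motions:
\frac{\partial\!\left(\nu\,\sigma_z^2\right)}{\partial z}
  + \nu\,\frac{\partial \Phi}{\partial z}
  + \underbrace{\frac{1}{R}\,
      \frac{\partial\!\left(R\,\nu\,\overline{v_R v_z}\right)}{\partial R}}_{\mathcal{T}(z)}
  = 0 .
% Integrating from z to infinity, with \nu\,\sigma_z^2 \to 0 at large z,
% yields the dispersion as an integral over the mass model:
\nu(z)\,\sigma_z^2(z) = \int_z^{\infty}
  \left[\nu(z')\,\frac{\partial \Phi}{\partial z'} + \mathcal{T}(z')\right] dz' .
```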

  12. Local structure study of disordered crystalline materials with the atomic pair distribution function method

    NASA Astrophysics Data System (ADS)

    Qiu, Xiangyun

    The experimental method employed in this Ph.D. dissertation research is the atomic pair distribution function (PDF) technique, which specializes in high real-space resolution local structure determination. The PDF is obtained via Fourier transform from powder total scattering data, which include the important local structural information in the diffuse scattering intensities underneath, and in between, the Bragg peaks. Having long been used to study liquids and amorphous materials, the PDF technique has recently been successfully applied to highly crystalline materials owing to advances in modern X-ray and neutron sources and computing power. An integral part of this thesis work has been to make the PDF technique accessible to a wider scientific community. We have recently developed the rapid acquisition PDF (RA-PDF) method featuring high energy X-rays coupled with an image plate area detector, allowing a three to four orders of magnitude decrease in data collection time. Correspondingly, in software development, I have written a complete X-ray data correction program, PDFgetX2 (user friendly with GUI, 32,000+ lines). These developments sweep away many barriers to the wide-spread application of the PDF technique in complex materials. The RA-PDF development also opens up new fields of research such as time-resolved studies and pump-probe measurements, where PDF analysis can provide unique insights. Two examples of RA-PDF applications are described: the distorted Ti2 square nets in the new binary antimonide Ti2Sb, and the in-situ chemical reduction of CuO to Cu. The most intellectually enriching has been the local structure study of the colossal magneto-resistive (CMR) manganites with intrinsic inhomogeneities. The strong coupling between electron, spin, orbital, and lattice degrees of freedom results in extremely rich and interesting phase diagrams. We have carried out careful PDF analysis of neutron powder diffraction data to study the local MnO6 octahedral
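The Fourier transform underlying the PDF technique can be sketched numerically. The reduced pair distribution function G(r) is the sine Fourier transform of the reduced structure function F(Q) = Q[S(Q) - 1]; the single-peak S(Q) below is a made-up toy function, not measured data:

```python
import numpy as np

# G(r) = (2/pi) * integral_0^Qmax Q [S(Q) - 1] sin(Q r) dQ
Qmax = 25.0                                  # 1/Angstrom, a typical RA-PDF range
Q = np.linspace(1e-3, Qmax, 4000)
dQ = Q[1] - Q[0]
S = 1.0 + np.exp(-0.5 * ((Q - 3.1) / 0.2) ** 2)   # toy peak at Q0 = 3.1 1/A

r = np.linspace(0.5, 10.0, 500)              # Angstrom
F = Q * (S - 1.0)
# simple Riemann-sum quadrature of the sine transform
G = (2.0 / np.pi) * (F[None, :] * np.sin(Q[None, :] * r[:, None])).sum(axis=1) * dQ

# a peak in S(Q) at Q0 produces oscillations in G(r) of period 2*pi/Q0,
# so the first maximum sits near r = pi / (2 Q0)
r_first_max = r[np.argmax(G)]
```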

  13. Three Adult Education Projects: Local History Sparks ABE Class; Teleteacher; Project TARA: An Approach to AE.

    ERIC Educational Resources Information Center

    Ringley, Ray; And Others

    1979-01-01

    Describes three instructional approaches in adult basic education: a class in which retired coal miners recorded their experiences in early coal mining camps; a telephone-based instructional system using specially designed and built "Teleteacher" machines; and an approach to ABE in New York emphasizing adult functional literacy, Project TARA…

  14. Development of the local magnification method for quantitative evaluation of endoscope geometric distortion

    NASA Astrophysics Data System (ADS)

    Wang, Quanzeng; Cheng, Wei-Chung; Suresh, Nitin; Hua, Hong

    2016-05-01

    With improved diagnostic capabilities and complex optical designs, endoscopic technologies are advancing. As one of several important optical performance characteristics, geometric distortion can negatively affect size estimation and feature-identification-related diagnosis. Therefore, a quantitative and simple distortion evaluation method is imperative for both the endoscopic industry and medical device regulatory agencies. However, no such method is available yet. While image correction techniques are rather mature, they depend heavily on computational power to process multidimensional image data based on complex mathematical models, which are difficult to understand. Some commonly used distortion evaluation methods, such as the picture height distortion (DPH) or radial distortion (DRAD), are either too simple to accurately describe the distortion or subject to the error of deriving a reference image. We developed the basic local magnification (ML) method to evaluate endoscope distortion. Based on this method, we also developed ways to calculate DPH and DRAD. The method overcomes the aforementioned limitations, has clear physical meaning over the whole field of view, and can facilitate lesion size estimation during diagnosis. Most importantly, the method can help bring endoscopic technology to market and could potentially be adopted in an international endoscope standard.
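The local magnification idea can be illustrated on a radially symmetric model: the local magnification along the radius is the derivative of the imaged radial position with respect to the object radial position, from which a radial-distortion figure follows. The distortion model below (r_img = m0 · r · (1 + k·r²), k < 0 for barrel distortion) is a hypothetical stand-in for measured grid-target data, not the paper's calibration:

```python
import numpy as np

m0 = 1.0      # paraxial (center) magnification, assumed
k = -0.05     # barrel distortion coefficient, assumed

r_obj = np.linspace(0.0, 2.0, 201)
r_img = m0 * r_obj * (1.0 + k * r_obj**2)

# local magnification via finite differences of the measured mapping;
# analytically this equals m0 * (1 + 3 k r^2)
ml = np.gradient(r_img, r_obj)

# radial distortion relative to the center magnification, in percent
with np.errstate(invalid="ignore", divide="ignore"):
    d_rad = 100.0 * (r_img - m0 * r_obj) / (m0 * r_obj)
d_rad[0] = 0.0   # limit at the center of the field of view
```

At the field edge (r = 2) this toy model gives ML ≈ 0.4 and a radial distortion of -20%, i.e. objects appear compressed toward the periphery, which is the behavior the ML map is meant to expose.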

  15. Staggered grid Lagrangian method with local structured adaptive mesh refinement for modeling shock hydrodynamics

    SciTech Connect

    Anderson, R W; Pember, R B; Elliot, N S

    2000-09-26

    A new method for the solution of the unsteady Euler equations has been developed. The method combines staggered grid Lagrangian techniques with structured local adaptive mesh refinement (AMR). This method is a precursor to a more general adaptive arbitrary Lagrangian Eulerian (ALE-AMR) algorithm under development, which will facilitate the solution of problems currently at and beyond the boundary of what is soluble by traditional ALE methods by focusing computational resources where they are required. Many of the core issues involved in the development of the ALE-AMR method hinge upon the integration of AMR with a Lagrange step, which is the focus of the work described here. The novel components of the method are mainly driven by the need to reconcile traditional AMR techniques, which are typically employed on stationary meshes with cell-centered quantities, with the staggered grids and grid motion employed by Lagrangian methods. These new algorithmic components are first developed in one dimension and are then generalized to two dimensions. Solutions of several model problems involving shock hydrodynamics are presented and discussed.

  16. A Locally Modal B-Spline Based Full-Vector Finite-Element Method with PML for Nonlinear and Lossy Plasmonic Waveguide

    NASA Astrophysics Data System (ADS)

    Karimi, Hossein; Nikmehr, Saeid; Khodapanah, Ehsan

    2016-09-01

    In this paper, we develop a B-spline finite-element method (FEM) based on locally modal wave propagation with anisotropic perfectly matched layers (PMLs), for the first time, to simulate nonlinear and lossy plasmonic waveguides. Conventional approaches such as the beam propagation method inherently omit the wave spectrum and do not provide physical insight into nonlinear modes, especially in plasmonic applications, where nonlinear modes are constructed from linear modes with very close propagation constants. Our locally modal B-spline finite element method (LMBS-FEM) does not suffer from this weakness of the conventional approaches. To validate our method, wave propagation in various linear, nonlinear, lossless, and lossy metal-insulator plasmonic structures is first simulated using LMBS-FEM in MATLAB, and comparisons are made with the FEM-BPM module of the COMSOL Multiphysics simulator and the B-spline finite-element finite-difference wide angle beam propagation method (BSFEFD-WABPM). The comparisons show that not only is our numerical approach computationally more accurate and efficient than conventional approaches, but it also provides physical insight into the nonlinear nature of the propagation modes.
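As background on the basis functions underlying any B-spline FEM (this is a generic sketch, not the paper's solver), the B-spline basis can be evaluated with the Cox-de Boor recursion; the knot vector and degree below are illustrative:

```python
import numpy as np

def bspline_basis(i, p, t, x):
    """Value of the i-th B-spline basis of degree p on knot vector t at x."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    right = 0.0
    if t[i + p + 1] != t[i + 1]:
        right = ((t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1])
                 * bspline_basis(i + 1, p - 1, t, x))
    return left + right

# open uniform knot vector on [0, 1], quadratic basis (p = 2)
p = 2
t = np.array([0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1], dtype=float)
n_basis = len(t) - p - 1   # 6 basis functions

x = 0.4
vals = [bspline_basis(i, p, t, x) for i in range(n_basis)]
# inside the domain the basis is a partition of unity, and exactly
# p + 1 basis functions are nonzero at any point
```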

  17. A new EXAFS method for the local structure analysis of low-Z elements.

    PubMed

    Isomura, Noritake; Kamada, Masao; Nonaka, Takamasa; Nakamura, Eiken; Takano, Takumi; Sugiyama, Harue; Kimoto, Yasuji

    2016-01-01

    A unique analytical method is proposed for local structure analysis via extended X-ray absorption fine structure (EXAFS) spectroscopy. The measurement of electron energy distribution curves at various excitation photon energies using an electron energy analyzer is applied to determine a specific elemental Auger spectrum. To demonstrate the method, the N K-edge EXAFS spectra for a silicon nitride film were obtained via simultaneous measurement of the N KLL Auger and background spectra using dual energy windows. The background spectrum was then used to remove the photoelectrons and secondary electrons mixed into the energy distribution curves. The spectrum obtained following this subtraction procedure represents the `true' N K-edge EXAFS spectrum without the other absorptions that are observed in total electron yield N K-edge EXAFS spectra. The first nearest-neighbor distance (N-Si) derived from the extracted N K-edge EXAFS oscillation was in good agreement with the value derived from Si K-edge analysis. This result confirms that the present method, referred to as differential electron yield (DEY)-EXAFS, is valid for deriving local surface structure information for low-Z elements. PMID:26698075

  18. Comparison of Three Methods for Localizing Interictal Epileptiform Discharges with Magnetoencephalography

    PubMed Central

    Shiraishi, Hideaki; Ahlfors, Seppo P.; Stufflebeam, Steven M.; Knake, Susanne; Larsson, Pål G.; Hämäläinen, Matti S.; Takano, Kyoko; Okajima, Maki; Hatanaka, Keisaku; Saitoh, Shinji; Dale, Anders M.; Halgren, Eric

    2011-01-01

    Purpose To compare three methods of localizing the source of epileptiform activity recorded with magnetoencephalography (MEG): equivalent current dipole (ECD), minimum current estimate (MCE), and dynamic statistical parametric mapping (dSPM), and to evaluate the solutions by comparison with clinical symptoms and other electrophysiological and neuroradiological findings. Methods Fourteen children aged 3 to 15 years were studied. MEG was collected with a whole-head 204-channel helmet-shaped sensor array. We calculated ECDs and made MCE and dSPM movies to estimate the cortical distribution of interictal epileptic discharges (IED) in these patients. Results The results for 4 patients with localization-related epilepsy (LRE) and 1 patient with Landau-Kleffner Syndrome were consistent among all 3 analysis methods. In the rest of the patients, MCE and dSPM suggested multifocal or widespread activity; in these patients the ECD results were so scattered that interpretation was not possible. For 9 patients with LRE and generalized epilepsy, the epileptiform discharges were widespread or consisted only of slow waves, but dSPM suggested a possible propagation path of the IED. Conclusion MCE and dSPM could identify the propagation of epileptiform activity with high temporal resolution. The results of dSPM were more stable because the solutions were less sensitive to background brain activity. PMID:21946369

  19. Test particle propagation in magnetostatic turbulence. 2: The local approximation method

    NASA Technical Reports Server (NTRS)

    Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.

    1976-01-01

    An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through an approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of a small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The nth level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time-independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0, the most local, first-order partial differential equation, which governs the Markovian limit, is regained.
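A hedged sketch of the construction described above (the symbols are reconstructed from the abstract; in the paper the kernel and unknown may be operator-valued):

```latex
% Generic integrodifferential starting equation and its Laplace transform:
\frac{\partial f}{\partial t} = \int_0^{t} K(t-\tau)\, f(\tau)\, d\tau ,
\qquad
s\,\tilde{f}(s) - f(0) = \tilde{K}(s)\,\tilde{f}(s) .
% The local approximation at level n replaces \tilde{K}(s) by its Taylor
% polynomial about s = 0 (the small-Laplace-variable assumption):
\tilde{K}(s) \approx \sum_{m=0}^{n} \frac{\tilde{K}^{(m)}(0)}{m!}\, s^{m} .
% Each power s^m inverts to an m-th time derivative, so the level-n equation
% is a partial differential equation whose order in time grows with n;
% n = 0 recovers the first-order (Markovian) equation.
```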

  20. Coded moderator approach for fast neutron source detection and localization at standoff

    NASA Astrophysics Data System (ADS)

    Littell, Jennifer; Lukosi, Eric; Hayward, Jason; Milburn, Robert; Rowan, Allen

    2015-06-01

    Considering the need for directional sensing at standoff for some security applications, and scenarios where a neutron source may be shielded by high-Z material that nearly eliminates the source gamma flux, this work focuses on investigating the feasibility of using thermal-neutron-sensitive boron straw detectors for fast neutron source detection and localization. We used MCNPX simulations to demonstrate that surrounding the boron straw detectors with an HDPE coded moderator yields a source-detector orientation-specific response, enabling potential 1D source localization in a high neutron detection efficiency design. An initial test algorithm has been developed to confirm the viability of this detector system's localization capabilities; it identified a 1 MeV neutron source with a strength equivalent to 8 kg WGPu at 50 m standoff within ±11°.

  1. A finite element approach for shells of revolution with a local deviation

    NASA Technical Reports Server (NTRS)

    Han, K. J.; Gould, P. L.

    1982-01-01

    A finite element model that is suitable for the static analysis of shells of revolution with arbitrary local deviations is presented. The model employs three types of elements: rotational, general, and transitional shell elements. The rotational shell elements are used in the region where the shell is axisymmetric. The general shell elements are used in the local region of the deviation. The transitional shell elements connect these two distinctly different types of elements and make it possible to combine them in a single analysis. The form of the global stiffness matrix resulting when different forms of nodal degrees of freedom are combined is illustrated. The coupling of harmonic degrees of freedom due to the locally nonaxisymmetric geometry was studied. The use of a substructuring technique and separate partial harmonic analysis is recommended.

  2. Applying clustering approach in predictive uncertainty estimation: a case study with the UNEEC method

    NASA Astrophysics Data System (ADS)

    Dogulu, Nilay; Solomatine, Dimitri; Lal Shrestha, Durga

    2014-05-01

    Within the context of flood forecasting, assessment of predictive uncertainty has become a necessity for most modelling studies in operational hydrology. There are several uncertainty analysis and/or prediction methods available in the literature; however, most of them rely on normality and homoscedasticity assumptions for the model residuals that occur in reproducing the observed data. This study focuses on a statistical method that analyzes model residuals without such assumptions, based on a clustering approach: Uncertainty Estimation based on local Errors and Clustering (UNEEC). The aim of this work is to provide a comprehensive evaluation of the UNEEC method's performance in view of the clustering approach employed within its methodology. This is done by analyzing the normality of model residuals and comparing uncertainty analysis results (for 50% and 90% confidence levels) with those obtained from uniform interval and quantile regression methods. An important part of the basis by which the methods are compared is the analysis of data clusters representing different hydrometeorological conditions. The validation measures used are PICP, MPI, ARIL and NUE where necessary. A new validation measure linking the prediction interval to the (hydrological) model quality - the weighted mean prediction interval (WMPI) - is also proposed for comparing the methods more effectively. The case study is the Brue catchment, located in the South West of England. A different parametrization of the method than in its previous application in Shrestha and Solomatine (2008) is used, i.e. past error values in addition to discharge and effective rainfall are considered. The results show that UNEEC's notable characteristic, i.e. applying clustering to data of predictors in which catchment behaviour information is encapsulated, contributes to increased accuracy of the method's results for varying flow conditions. Besides, classifying data so that extreme flow events are individually
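The core UNEEC idea can be sketched as follows: cluster the predictor space, then characterize predictive uncertainty by the empirical quantiles of the model residuals within each cluster. A tiny 1D k-means stands in for the clustering used in the method, and the heteroscedastic data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
flow = rng.uniform(0.0, 10.0, 2000)             # predictor (e.g. discharge)
residuals = rng.normal(0.0, 0.1 + 0.1 * flow)   # error grows with flow

def kmeans_1d(x, k, iters=50):
    """Plain Lloyd's k-means on a 1D array; centers seeded at quantiles."""
    centers = np.quantile(x, (np.arange(k) + 0.5) / k)
    labels = np.zeros(x.size, dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return centers, labels

centers, labels = kmeans_1d(flow, k=4)

# 90% prediction interval per cluster: 5th and 95th residual quantiles
bounds = {j: (np.quantile(residuals[labels == j], 0.05),
              np.quantile(residuals[labels == j], 0.95))
          for j in range(4)}

# interval widths ordered from the lowest-flow to the highest-flow cluster;
# they widen with flow, reflecting the larger uncertainty of high flows
widths = [bounds[j][1] - bounds[j][0] for j in np.argsort(centers)]
```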

  3. Importance Sampling Approach for the Nonstationary Approximation Error Method

    NASA Astrophysics Data System (ADS)

    Huttunen, J. M. J.; Lehikoinen, A.; Hämäläinen, J.; Kaipio, J. P.

    2010-09-01

    The approximation error approach has been proposed earlier to handle modelling, numerical and computational errors in inverse problems. The idea of the approach is to include the errors in the forward model and compute the approximate statistics of the errors using Monte Carlo sampling. This can be a computationally tedious task, but the key property of the approach is that the approximate statistics can be calculated off-line before the measurement process takes place. In nonstationary problems, however, information is accumulated over time, and the initial uncertainties may turn out to have been exaggerated. In this paper, we propose an importance weighting algorithm with which the approximation error statistics can be updated during the accumulation of measurement information. As a computational example, we study an estimation problem related to a convection-diffusion problem in which the velocity field is not accurately specified.
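A hedged sketch of the updating idea: approximation-error samples are drawn under an initial broad prior; once measurements narrow the uncertainty, the error statistics are re-estimated with importance weights equal to the ratio of the new density to the original sampling density, instead of re-running the forward models. The densities and the quadratic error model below are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# parameter samples from the original (broad) prior N(0, 2^2)
theta = rng.normal(0.0, 2.0, 50_000)
# corresponding approximation errors (stand-in for model discrepancies)
errors = 0.5 * theta**2 + rng.normal(0.0, 0.1, theta.size)

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# updated, narrower distribution after accumulating measurements: N(1, 0.5^2)
w = normal_pdf(theta, 1.0, 0.5) / normal_pdf(theta, 0.0, 2.0)
w /= w.sum()

mean_new = np.sum(w * errors)                    # weighted error mean
var_new = np.sum(w * (errors - mean_new) ** 2)   # weighted error variance
# analytically, E[0.5 theta^2] under N(1, 0.25) is 0.5 (1 + 0.25) = 0.625
```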

  4. Reconstructing paleo- and initial landscapes using a multi-method approach in hummocky NE Germany

    NASA Astrophysics Data System (ADS)

    van der Meij, Marijn; Temme, Arnaud; Sommer, Michael

    2016-04-01

    The unknown state of the landscape at the onset of soil and landscape formation is one of the main sources of uncertainty in landscape evolution modelling. Reconstruction of these initial conditions is not straightforward due to the problems of polygenesis and equifinality: different initial landscapes can change through different sets of processes to an identical end state. Many attempts have been made to reconstruct this initial landscape. These include remote sensing, reverse modelling and the use of soil properties. However, each of these methods is only applicable on a certain spatial scale and comes with its own uncertainties. Here we present a new framework and preliminary results for reconstructing paleo-landscapes in an eroding setting, in which we combine reverse modelling, remote sensing, geochronology, historical data and present-day soil data. With the combination of these different approaches, different spatial scales can be covered and the uncertainty in the reconstructed landscape can be reduced. The study area is located in north-east Germany, where the landscape consists of a collection of small local depressions acting as closed catchments. This postglacial hummocky landscape is suitable for testing our new multi-method approach for several reasons: i) the closed catchments enable a full mass balance of erosion and deposition, due to the collection of colluvium in these depressions; ii) significant topography changes only started recently, with medieval deforestation and the recent intensification of agriculture; and iii) due to extensive previous research, a large dataset is readily available.

  5. On dynamical systems approaches and methods in f(R) cosmology

    NASA Astrophysics Data System (ADS)

    Alho, Artur; Carloni, Sante; Uggla, Claes

    2016-08-01

    We discuss dynamical systems approaches and methods applied to flat Robertson-Walker models in f(R)-gravity. We argue that a complete description of the solution space of a model requires a global state space analysis that motivates globally covering state space adapted variables. This is shown explicitly by an illustrative example, f(R) = R + α R2, α > 0, for which we introduce new regular dynamical systems on global compactly extended state spaces for the Jordan and Einstein frames. This example also allows us to illustrate several local and global dynamical systems techniques involving, e.g., blow ups of nilpotent fixed points, center manifold analysis, averaging, and use of monotone functions. As a result of applying dynamical systems methods to globally state space adapted dynamical systems formulations, we obtain pictures of the entire solution spaces in both the Jordan and the Einstein frames. This shows, e.g., that due to the domain of the conformal transformation between the Jordan and Einstein frames, not all the solutions in the Jordan frame are completely contained in the Einstein frame. We also make comparisons with previous dynamical systems approaches to f(R) cosmology and discuss their advantages and disadvantages.

  6. Educational Accountability: A Qualitatively Driven Mixed-Methods Approach

    ERIC Educational Resources Information Center

    Hall, Jori N.; Ryan, Katherine E.

    2011-01-01

    This article discusses the importance of mixed-methods research, in particular the value of qualitatively driven mixed-methods research for quantitatively driven domains like educational accountability. The article demonstrates the merits of qualitative thinking by describing a mixed-methods study that focuses on a middle school's system of…

  7. Method for Determining Local Current Density in 2G HTS Tapes

    NASA Astrophysics Data System (ADS)

    Bludova, A. I.

    A practically important problem is to determine the density and direction of the currents induced in 2G HTS tape at each point in order to examine its local deviations. This problem is resolved indirectly by spatial measurement of the generated magnetic field with a scanning Hall sensor at a given height above the tape surface. The current density is subsequently determined by inversion of the Biot-Savart law in the Fourier domain. Tikhonov regularization is used in order to increase precision. The method is verified by reconstructing a model current density. Optimal calculation parameters and the resulting precision are described.
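The inversion principle can be sketched as follows (a hedged illustration, not the author's code): for a sheet current J = ∇×(g ẑ) in the z = 0 plane, the normal field at scan height h is, in the Fourier domain, B̃z(k) = C·|k|·exp(-|k|h)·g̃(k), where C collects μ0/2 and is set to 1 here. The stream function g is then recovered with a Tikhonov-regularized inverse filter; grid size, height, and regularization parameter are illustrative:

```python
import numpy as np

N, L, h, lam = 128, 1.0, 0.02, 1e-6
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

# model stream function: a Gaussian current loop near the tape center
g = np.exp(-((X - 0.5) ** 2 + (Y - 0.5) ** 2) / (2 * 0.1**2))

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K = np.hypot(KX, KY)
H = K * np.exp(-K * h)          # forward transfer function (C = 1)

# simulated Hall-scan map of Bz at height h
bz = np.fft.ifft2(H * np.fft.fft2(g)).real

# Tikhonov-regularized inversion: g_hat = H * Bz / (H^2 + lambda)
g_rec = np.fft.ifft2(H * np.fft.fft2(bz) / (H**2 + lam)).real

# the k = 0 mode of g is unobservable, so compare mean-removed fields
err = np.linalg.norm((g_rec - g_rec.mean()) - (g - g.mean()))
err /= np.linalg.norm(g - g.mean())
```

In practice measurement noise makes the choice of λ essential, since exp(+|k|h) amplifies high-frequency noise; in this noise-free sketch the regularization mainly stabilizes the unobservable k = 0 mode.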

  8. Weakly compressible turbulence in local interstellar medium. Three-dimensional modeling using Large Eddy Simulation method

    SciTech Connect

    Chernyshov, Alexander A.; Karelsky, Kirill V.; Petrosyan, Arakel S.

    2010-06-16

    Using the advantages of the large eddy simulation method, we study a nontrivial regime of compressible magnetohydrodynamic turbulence in space plasma in which initially supersonic fluctuations become weakly compressible. The establishment of a weakly compressible limit with a Kolmogorov-like density fluctuation spectrum is shown in the present work. We use our computational results to study the dynamics of the turbulent plasma beta and the anisotropic properties of the magnetoplasma fluctuations in the local interstellar medium. An outstanding, as yet unexplained, observation is that density fluctuations in the local interstellar medium exhibit a Kolmogorov-like spectrum over an extraordinary range of scales, with a spectral index close to -5/3. In spite of the compressibility and the presence of a magnetic field in the local interstellar medium, density fluctuations nevertheless admit a Kolmogorov-like power law. Flows in the interstellar medium are supersonic, with high large-scale Mach numbers; nevertheless, there are subsonic fluctuations of the weakly compressible components of the interstellar medium. These weakly compressible subsonic fluctuations are responsible for the emergence of a Kolmogorov-type spectrum in interstellar turbulence, which is observed in experimental data. It is shown that in weakly compressible magnetohydrodynamic turbulence the density fluctuations behave as a passive scalar in the velocity field and demonstrate a Kolmogorov-like spectrum.

  9. Application of Learning Methods to Local Electric Field Distributions in Defected Dielectric Materials

    NASA Astrophysics Data System (ADS)

    Ferris, Kim; Jones, Dumont

    2014-03-01

    Local electric fields reflect the structural and dielectric fluctuations in a semiconductor and affect material performance for both electron transport and carrier lifetime properties. In this paper, we use the LOCALF methodology with periodic boundary conditions to examine the local electric field distributions and their perturbations for II-VI (CdTe, Cd(1-x)Zn(x)Te) semiconductors containing Te inclusions and small fluctuations in the local dielectric susceptibility. With inclusion of the induced-field term, the electric field distribution shows enhancements and diminishments compared to the macroscopic applied field, reflecting the microstructure characteristics of the dielectric. Learning methods are applied to these distributions to assess the spatial extent of the perturbation and to determine an electric-field-defined defect size, as compared to the defect's physical dimension. Critical concentrations of defects are assessed in terms of defect formation energies. This work was supported by the US Department of Homeland Security, Domestic Nuclear Detection Office, under competitively awarded contract/IAA HSHQDC-08-X-00872-e. This support does not constitute an express or implied endorsement on the part of the Gov't.

  10. Evaluating methods for estimating local effective population size with and without migration.

    PubMed

    Gilbert, Kimberly J; Whitlock, Michael C

    2015-08-01

    Effective population size is a fundamental parameter in population genetics, evolutionary biology, and conservation biology, yet its estimation can be fraught with difficulties. Several methods to estimate Ne from genetic data have been developed that take advantage of various approaches for inferring Ne. The ability of these methods to accurately estimate Ne, however, has not been comprehensively examined. In this study, we employ seven of the most cited methods for estimating Ne from genetic data (Colony2, CoNe, Estim, MLNe, ONeSAMP, TMVP, and NeEstimator including LDNe) across simulated datasets with populations experiencing migration or no migration. The simulated population demographies are an isolated population with no immigration, an island model metapopulation with a sink population receiving immigrants, and an isolation by distance stepping stone model of populations. We find considerable variance in the performance of these methods, both within and across demographic scenarios, with some methods performing very poorly. The most accurate estimates of Ne can be obtained by using LDNe, MLNe, or TMVP; however, each of these approaches is outperformed by another in a differing demographic scenario. Knowledge of the approximate demography of the population as well as the availability of temporal data largely improves Ne estimates.

  12. A synchrotron-based local computed tomography combined with data-constrained modelling approach for quantitative analysis of anthracite coal microstructure.

    PubMed

    Chen, Wen Hao; Yang, Sam Y S; Xiao, Ti Qiao; Mayo, Sherry C; Wang, Yu Dan; Wang, Hai Peng

    2014-05-01

    Quantifying the three-dimensional spatial distributions of pores and material compositions in samples is a key materials characterization challenge, particularly in samples where compositions are distributed across a range of length scales, and where such compositions have similar X-ray absorption properties, as in coal. Consequently, obtaining detailed information within sub-regions of a multi-length-scale sample by conventional approaches may not provide the resolution and level of detail one might desire. Herein, an approach for quantitative high-definition determination of material compositions from X-ray local computed tomography combined with a data-constrained modelling method is proposed. The approach can dramatically improve the spatial resolution and reveal far finer details within a region of interest of a sample larger than the field of view than conventional techniques can. A coal sample containing distributions of porosity and several mineral compositions is employed to demonstrate the approach. The optimal experimental parameters are pre-analyzed. The quantitative results demonstrate that the approach can reveal significantly finer details of compositional distributions in the sample region of interest. The elevated spatial resolution is crucial for coal-bed methane reservoir evaluation and for understanding the transformation of the minerals during coal processing. The method is generic and can be applied to the three-dimensional compositional characterization of other materials.

  13. Past-time Radar Rainfall Estimates using Radar AWS Rainrate system with Local Gauge Correction method

    NASA Astrophysics Data System (ADS)

    Choi, D.; Lee, M. H.; Suk, M. K.; Nam, K. Y.; Hwang, J.; Ko, J. S.

    2015-12-01

    The Weather Radar Center at the Korea Meteorological Administration (KMA) operates a radar network for issuing warnings of heavy rainfall and severe storms. Since 2006 we have operated the real-time adjusted Radar-Automatic Weather Station (AWS) Rainrate (RAR) system, developed by KMA, to provide radar-based quantitative precipitation estimation (QPE) to meteorologists. The system carries several sources of uncertainty in estimating precipitation through the radar reflectivity (Z)-rainfall intensity (R) relationship. To overcome this uncertainty and improve the accuracy of the QPE, in 2012 we applied to the RAR system the Local Gauge Correction (LGC) method, which uses a geo-statistical effective radius of the QPE errors. According to a previous study (Lee et al., 2014), the LGC method improved the accuracy of the RAR system by about 7.69% over the summer season of 2012 (June to August). It also improved the accuracy of hydrographs when flood simulations were run with a hydrologic model driven by data from the RAR system with the LGC method. These results confirm the effectiveness of the LGC method. Long-term, high-quality data are required for use in the hydrology field. To provide more precise QPE data and to fill in past periods, we reprocess the summer seasons from 2006 to 2009 with the RAR system and the LGC method, and investigate whether the accuracy of these past-time radar rainfall estimates is enhanced. Keywords: Radar-AWS Rainrate system, local gauge correction, past-time radar rainfall estimation. Acknowledgements: This research is supported by the "Development and application of cross-governmental dual-pol radar harmonization (WRC-2013-A-1)" project of the Weather Radar Center, Korea Meteorological Administration, in 2015.
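The KMA implementation is not described in detail here, but the general shape of a local gauge correction can be sketched as follows: each radar pixel is rescaled by a distance-weighted average of gauge/radar ratios, with weights decaying over an assumed effective radius. All function names, the Gaussian weighting, and the parameter values below are illustrative, not the operational scheme.

```python
import numpy as np

def local_gauge_correction(radar, grid_xy, gauge_xy, gauge_rain,
                           gauge_radar, radius=20.0, eps=1e-6):
    """Illustrative local gauge correction: scale each radar pixel by a
    distance-weighted mean of gauge/radar ratios (Gaussian weights with
    an effective radius); far from all gauges the factor decays to 1."""
    ratios = gauge_rain / np.maximum(gauge_radar, eps)
    # Squared distances from every grid pixel to every gauge.
    d2 = ((grid_xy[:, None, :] - gauge_xy[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * radius ** 2))
    factor = (w @ ratios + eps) / (w.sum(axis=1) + eps)
    return radar * factor
```

On a synthetic field where the radar uniformly underestimates rainfall by 20%, gauges reading the true values pull the corrected field back to the truth within the effective radius.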

  14. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for the minimization of flow-matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods, which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with its global counterpart, while drastically reducing the memory cost compared to the global-in-time adjoint formulation.
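The storage saving can be illustrated with a checkpointing-style toy (a sketch in the spirit of, not identical to, the authors' local-in-time scheme): the adjoint of a scalar forward-Euler model with tracking functional J = sum_n (u_n - r)^2 is swept backwards one subinterval at a time, so full states are stored only for the current subinterval plus one checkpoint per subinterval boundary.

```python
def f(u):          # simple nonlinear right-hand side, u' = f(u)
    return -u * u

def fp(u):         # its derivative df/du
    return -2.0 * u

def forward(u0, dt, n0, n1, record=False):
    """Integrate u' = f(u) with forward Euler from step n0 to n1."""
    u, traj = u0, [u0]
    for _ in range(n0, n1):
        u = u + dt * f(u)
        traj.append(u)
    return traj if record else u

def gradient_checkpointed(u0, dt, nsteps, nsub, r):
    """Adjoint gradient of J = sum_n (u_n - r)^2 w.r.t. u0, storing full
    states for only one subinterval of length nsub at a time."""
    # Pass 1: store a checkpoint at each subinterval boundary only.
    bounds = list(range(0, nsteps, nsub)) + [nsteps]
    checkpoints, u = {0: u0}, u0
    for b0, b1 in zip(bounds[:-1], bounds[1:]):
        u = forward(u, dt, b0, b1)
        checkpoints[b1] = u
    # Pass 2: sweep subintervals backwards; recompute the local states,
    # then advance the adjoint variable over that subinterval.
    lam = 2.0 * (checkpoints[nsteps] - r)          # seed at final time
    for b0, b1 in zip(reversed(bounds[:-1]), reversed(bounds[1:])):
        traj = forward(checkpoints[b0], dt, b0, b1, record=True)
        for k in range(b1 - b0 - 1, -1, -1):       # local backward sweep
            lam = lam * (1.0 + dt * fp(traj[k]))   # chain rule through Euler step
            if b0 + k >= 1:                        # cost term at interior steps
                lam = lam + 2.0 * (traj[k] - r)
    return lam                                     # dJ/du0
```

The backward pass never holds more than one subinterval of states, which is the memory advantage the abstract describes; the gradient agrees with a central finite difference of J.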

  15. EHD Approach to Tornadic Thunderstorms and Methods of Their Destruction

    NASA Astrophysics Data System (ADS)

    Kikuchi, H.

    2005-05-01

    In many cases, tornadoes are accompanied or involved by lightning discharges and are thought to be composed of uncharged and charged components that differ from each other in velocity, vorticity, helicity, and appearance (shape and luminosity). The visible dark portion may correspond to uncharged tornadoes, while the luminous or bright part may involve charged tornadoes with return strokes. Uncharged tornadoes have usually been considered ascending hot streams of thermohydrodynamic origin. This is the conventional theory of tornadoes, based on hydrodynamics (HD) or thermohydrodynamics (THD), but it does not consider electrical effects, which are really significant in tornadic thunderstorms. It has been shown, however, that the new electrohydrodynamics (EHD) established and developed over more than the last decade is applicable to tornadic thunderstorms with lightning. This paper summarizes such an EHD approach and proposes methods of tornado destruction based on EHD. Space-charge and electric-field configurations in tornadic thunderstorms are considered quadrupole-like, taking into account the cloud-charge images in the ground. Accordingly, the dynamics of particles and EHD flows in an electric quadrupole forming an electric cusp and mirror apply directly to those circumstances. When the gas pressure is below the breakdown threshold, helical motion of particles, both charged and uncharged, and/or vortex generation occurs. For gases whose pressure is beyond the breakdown threshold, the following basic processes succeed one after another. When a grain is uncharged, a discharge channel is formed towards each pole as a result of X-type reconnection. For a negatively or positively charged grain, I-type reconnection occurs between the grain and the positive or negative pole, respectively. For two uncharged grains, O-type reconnection between the grains could be involved in addition to X-type between each pole

  16. IEDC Method: A New Approach to Promote Students' Communicative Competence?

    ERIC Educational Resources Information Center

    Cai, Cui-yun

    2007-01-01

    IEDC is an acronym for the interaction of English dormitory and class teaching. It is a new, student-centered teaching approach intended for non-English majors in China. This article states that communicative competence can be developed through various means of cooperative learning.

  17. Approaches and Methods in Language Teaching: A Description and Analysis.

    ERIC Educational Resources Information Center

    Richards, Jack C.; Rodgers, Theodore S.

    Each major trend in 20th-century second language teaching is explained, and similarities and differences are highlighted. An introductory chapter offers a brief history of second language teaching. The second chapter outlines a model for examining and comparing the different approaches. This model is used in subsequent chapters to describe methods…

  18. TEACHING "MOBY DICK," A METHOD AND AN APPROACH.

    ERIC Educational Resources Information Center

    JOSEPHS, LOIS

    "MOBY DICK" IS SINGULARLY APPROPRIATE FOR HIGH SCHOOL STUDENTS IN ITS PHILOSOPHICAL, PSYCHOLOGICAL, AND SOCIAL EMPHASIS. HOWEVER, TO GUIDE THE STUDENTS INTO THE THEMATIC INTRICACIES OF THE WORK, THE TEACHER MUST USE A CAREFULLY PLANNED, INDUCTIVE APPROACH THAT DEMANDS CLOSE TEXTUAL STUDY IN CLASS. ALTHOUGH EACH TEACHER SHOULD CONCENTRATE ON THE…

  19. Self-Consistent MUSIC: An approach to the localization of true brain interactions from EEG/MEG data.

    PubMed

    Shahbazi, Forooz; Ewald, Arne; Nolte, Guido

    2015-05-15

    MUltiple SIgnal Classification (MUSIC) is a standard localization method based on the idea of dividing the vector space of the data into two subspaces: the signal subspace and the noise subspace. The brain, divided into several grid points, is scanned entirely, and the grid point with the maximum consistency with the signal subspace is taken as the source location. In one of the MUSIC variants, called Recursively Applied and Projected MUSIC (RAP-MUSIC), multiple iterations are proposed in order to decrease the location estimation uncertainties introduced by subspace estimation errors. In this paper, we suggest a new method called Self-Consistent MUSIC (SC-MUSIC), which extends RAP-MUSIC to a self-consistent algorithm. SC-MUSIC is based on the idea that the presence of several sources biases the localization of each source; this bias can be reduced by projecting out all other sources mutually rather than iteratively. While the new method is applicable in all situations where MUSIC is applicable, we study here the localization of interacting sources using the imaginary part of the cross-spectrum, due to the robustness of this measure to the artifacts of volume conduction. For an odd number of sources this matrix is rank deficient, similar to covariance matrices of fully correlated sources. In such cases MUSIC and RAP-MUSIC fail completely, while the new method accurately localizes all sources. We present results of the method using simulations of odd and even numbers of interacting sources in the presence of different noise levels. We compare the method with three other source localization methods, RAP-MUSIC, dipole fit, and MOCA (combined with a minimum norm estimate), through simulations. SC-MUSIC shows substantial improvement in localization accuracy compared to these methods. We also show results for real MEG data of a single subject in the resting state. Four sources are localized in the sensorimotor area at f = 11 Hz, which is the expected
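The basic MUSIC scan underlying all of these variants can be sketched with a toy forward model (random leadfields standing in for a real EEG/MEG head model; the sensor count, grid size, and noise level are arbitrary): project each candidate leadfield onto the signal subspace of the sensor covariance and pick the grid points with the highest subspace correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_grid, n_samples = 12, 40, 500

# Hypothetical forward model: one unit-norm leadfield column per grid point.
L = rng.standard_normal((n_sensors, n_grid))
L /= np.linalg.norm(L, axis=0)

# Simulate two active sources at known grid indices plus sensor noise.
true_idx = [5, 27]
S = rng.standard_normal((2, n_samples))            # source time courses
X = L[:, true_idx] @ S + 0.05 * rng.standard_normal((n_sensors, n_samples))

# Signal subspace: leading eigenvectors of the sensor covariance.
C = X @ X.T / n_samples
w, V = np.linalg.eigh(C)                           # ascending eigenvalues
Us = V[:, -2:]                                     # 2 sources -> rank-2 subspace

# MUSIC scan: subspace correlation of each leadfield with the signal
# subspace (columns of L are unit norm, so this norm is the correlation).
music = np.linalg.norm(Us.T @ L, axis=0)
found = np.argsort(music)[-2:]                     # top-2 grid points
```

RAP-MUSIC would now project the first found leadfield out of both data and leadfields and rescan; SC-MUSIC, as described in the abstract, makes that deflation mutual across all sources rather than sequential.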
