Science.gov

Sample records for agri-food method based

  1. Attribute-Based Methods

    Treesearch

    Thomas P. Holmes; Wiktor L. Adamowicz

    2003-01-01

    Stated preference methods of environmental valuation have been used by economists for decades where behavioral data have limitations. The contingent valuation method (Chapter 5) is the oldest stated preference approach, and hundreds of contingent valuation studies have been conducted. More recently, and especially over the last decade, a class of stated preference...

  2. Research on BOM based composable modeling method

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxin; He, Qiang; Gong, Jianxing

    2013-03-01

    Composable modeling has long been a research hotspot in the modeling and simulation community. To increase the reuse and interoperability of BOM-based models, this paper puts forward a composable modeling method based on the Base Object Model (BOM): it studies the basic theory of BOM-based composable modeling, designs a general structure for BOM-based coupled models, and traverses the structure of BOM-based atomic and coupled models. Finally, the paper describes the process of BOM-based composable modeling and draws conclusions. Experience with the prototype we developed and our accumulated model repository indicates that this method can increase the reuse and interoperability of models.

  3. Method of recovering oil-based fluid

    SciTech Connect

    Brinkley, H.E.

    1993-07-13

    A method is described of recovering oil-based fluid, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.

  4. Impulse-based methods for fluid flow

    SciTech Connect

    Cortez, Ricardo

    1995-05-01

    A Lagrangian numerical method based on impulse variables is analyzed. A relation between impulse vectors and vortex dipoles with a prescribed dipole moment is presented. This relation is used to adapt the high-accuracy cutoff functions of vortex methods for use in impulse-based methods. A source of error in the long-time implementation of the impulse method is explained and two techniques for avoiding this error are presented. An application of impulse methods to the motion of a fluid surrounded by an elastic membrane is presented.

  5. METHOD OF JOINING CARBIDES TO BASE METALS

    DOEpatents

    Krikorian, N.H.; Farr, J.D.; Witteman, W.G.

    1962-02-13

    A method is described for joining a refractory metal carbide such as UC or ZrC to a refractory metal base such as Ta or Nb. The method comprises carburizing the surface of the metal base and then sintering the base and carbide at temperatures of about 2000 deg C in a non-oxidizing atmosphere, the base and carbide being held in contact during the sintering step. To reduce the sintering temperature and time, a sintering aid such as iron, nickel, or cobalt is added to the carbide, not to exceed 5 wt%. (AEC)

  6. Method for comparing content based image retrieval methods

    NASA Astrophysics Data System (ADS)

    Barnard, Kobus; Shirahatti, Nikhil V.

    2003-01-01

    We assume that the goal of content based image retrieval is to find images which are both semantically and visually relevant to users based on image descriptors. These descriptors are often provided by an example image--the query by example paradigm. In this work we develop a very simple method for evaluating such systems based on large collections of images with associated text. Examples of such collections include the Corel image collection, annotated museum collections, news photos with captions, and web images with associated text based on heuristic reasoning on the structure of typical web pages (such as used by Google(tm)). The advantage of using such data is that it is plentiful, and the method we propose can be automatically applied to hundreds of thousands of queries. However, it is critical that such a method be verified against human usage, and to do this we evaluate over 6000 query/result pairs. Our results strongly suggest that at least in the case of the Corel image collection, the automated measure is a good proxy for human evaluation. Importantly, our human evaluation data can be reused for the evaluation of any content based image retrieval system and/or the verification of additional proxy measures.
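    A minimal sketch of the kind of text-overlap proxy described above, assuming each image carries human-associated text (as in the Corel keywords); the Jaccard scoring rule and all names here are illustrative, not the paper's exact measure.

    ```python
    # Text-based relevance proxy for evaluating CBIR results: a retrieved image
    # is scored by how much its associated text overlaps the query image's text.
    def text_overlap_score(query_words: set, result_words: set) -> float:
        """Jaccard similarity between the texts associated with two images."""
        if not query_words and not result_words:
            return 0.0
        return len(query_words & result_words) / len(query_words | result_words)

    # Hypothetical usage: score a retrieval system's answers for one query image.
    query = {"tiger", "grass", "wildlife"}
    results = [{"tiger", "cat", "grass"}, {"sunset", "beach"}, {"wildlife", "tiger"}]
    scores = [text_overlap_score(query, r) for r in results]
    print(scores)  # higher score = result text agrees more with the query text
    ```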

  7. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods for the design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and that formal methods will play a major role in the development of future high-reliability digital systems.

  8. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).

  9. Wavelet-based Multiresolution Particle Methods

    NASA Astrophysics Data System (ADS)

    Bergdorf, Michael; Koumoutsakos, Petros

    2006-03-01

    Particle methods offer a robust numerical tool for solving transport problems across disciplines such as fluid dynamics, quantitative biology, and computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and in appealing numerical properties such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method are then assessed for transport and interface-capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.

  10. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce, and ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid the loss of ad clicks due to objective factors and can observe changes in the user's interests over time. Experiments show that the new method has a significant effect and can be further applied to online systems.

  11. Topology based methods for vector field comparisons

    NASA Astrophysics Data System (ADS)

    Batra, Rajesh Kumar

    Vector fields are commonly found in almost all branches of the physical sciences. Aerodynamics, dynamical systems, electromagnetism, and global climate modeling are a few examples. These multivariate data fields are often large, and no general, automated method exists for comparing these fields. Existing methods require either subjective visual judgments, or data interface compatibility, or domain specific knowledge. A topology based method intrinsically eliminates all of the above limitations and has the additional advantage of significantly compressing the vector field by representing only key features of the flow. Therefore, large databases are compactly represented and quickly searched. Topology is a natural framework for the study of many vector fields. It provides rules of an organizing principle, a flow grammar, that can describe and connect together the properties common to flows. Helman and Hesselink first introduced automated methods to extract and visualize this grammar. This work extends their method by introducing automated methods for vector topology comparison. Basic two-dimensional flows are first compared. The theory is extended to compare three-dimensional flow fields and the topology on no-slip surfaces. Concepts from graph theory and linear programming are utilized to solve these problems. Finally, the first automated method for higher order singularity comparisons is introduced using mathematical theories from geometric (Clifford) algebra.

  12. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW-surface-based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way of performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.

  13. A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2012-01-01

    Image registration is a crucial step in many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs; the algorithm is shown to be effective and robust under large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points, which usually requires an extensive search that is often computationally expensive. We introduce a non-regular data partition algorithm that uses K-means clustering to group the landmarks according to the number of available processing cores; this step optimizes memory usage and data transfer. We tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
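    A minimal sketch of the landmark-partitioning step described above, assuming scikit-learn's KMeans as the clustering backend; the landmark array and core count are invented for illustration.

    ```python
    # Partition landmarks across processing cores with K-means so that each
    # core matches a spatially compact group of points.
    import numpy as np
    from sklearn.cluster import KMeans

    def partition_landmarks(landmarks: np.ndarray, n_cores: int):
        """Group 2D landmark points into one spatially compact chunk per core."""
        km = KMeans(n_clusters=n_cores, n_init=10, random_state=0).fit(landmarks)
        return [landmarks[km.labels_ == c] for c in range(n_cores)]

    landmarks = np.random.rand(1000, 2) * 512           # hypothetical 2D landmarks
    chunks = partition_landmarks(landmarks, n_cores=8)  # one workload per core
    print([len(c) for c in chunks])
    ```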

  14. Lagrangian based methods for coherent structure detection

    SciTech Connect

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  15. Chapter 11. Community analysis-based methods

    SciTech Connect

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  16. Method for extruding pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch based foam is disclosed. The method includes the steps of: forming a viscous pitch foam; passing the precursor through an extrusion tube; and subjecting the precursor in said extrusion tube to a temperature gradient which varies along the length of the extrusion tube to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber, such that a viscous pitch foam formed in the chamber passes through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.

  17. Color-based lip localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

    This paper is concerned with lip localization for a visual speech recognition (VSR) system. We present an efficient method for localizing human lips/mouths in video images. The method first uses the YCbCr approach to find at least part of the lip as an initial step. We then use all available information about the segmented lip pixels, such as r, g, b, warped hue, etc., to segment the rest of the lip: the mean is calculated for each feature, and for each pixel in the ROI the Euclidean distance from the mean vector is computed. Pixels with smaller distances are clustered as lip pixels; thus the remaining pixels in the ROI are classified (as lip/non-lip) according to their distance from the mean vector of the initially segmented lip region. The method is evaluated on a newly recorded database of 780,000 frames; the experiments show that the method localizes the lips efficiently, with a high level of accuracy (91.15%) that outperforms existing lip detection approaches.
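    A minimal sketch of the distance-from-mean clustering step described above. The feature vector (plain r, g, b here, where the paper also uses warped hue and other channels) and the distance threshold are illustrative assumptions.

    ```python
    # Grow the lip region from an initial seed segmentation: compute the mean
    # feature vector of the seed lip pixels, then label remaining ROI pixels
    # by Euclidean distance to that mean.
    import numpy as np

    def grow_lip_region(roi: np.ndarray, seed_mask: np.ndarray, thresh: float):
        """roi: HxWx3 float image; seed_mask: HxW bool initial lip pixels."""
        mean_vec = roi[seed_mask].mean(axis=0)          # mean lip colour vector
        dist = np.linalg.norm(roi - mean_vec, axis=2)   # per-pixel distance
        return dist < thresh                            # True = lip pixel

    roi = np.random.rand(64, 64, 3)            # hypothetical ROI around the mouth
    seed = np.zeros((64, 64), dtype=bool)
    seed[30:34, 20:44] = True                  # pretend the YCbCr step found these
    lip_mask = grow_lip_region(roi, seed, thresh=0.25)
    ```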

  18. Dreamlet-based interpolation using POCS method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Geng, Yu; Chen, Xiaohong

    2014-10-01

    Due to incomplete and non-uniform coverage of the acquisition system and dead traces, real seismic data always have some missing traces, which affect the performance of multi-channel algorithms such as Surface-Related Multiple Elimination (SRME), imaging, and inversion. Therefore, it is necessary to interpolate seismic data. The dreamlet transform has been successfully used in the modeling of seismic wave propagation and imaging, and this paper explains its application to seismic data interpolation. To avoid spatial aliasing in the transform domain, and thus allow an arbitrary under-sampling rate, an improved jittered under-sampling strategy is proposed to better control the dataset. With an L0 constraint and the Projection Onto Convex Sets (POCS) method, the performance of dreamlet-based and curvelet-based interpolation is compared in terms of recovered signal-to-noise ratio (SNR) and convergence rate. Tests on synthetic and real cases demonstrate that the dreamlet transform has superior performance to the curvelet transform.
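    A minimal sketch of a POCS interpolation loop of the kind described above, with a plain 2D FFT standing in for the dreamlet (or curvelet) transform and a decaying hard-threshold schedule standing in for the L0 constraint; both substitutions are assumptions for illustration.

    ```python
    # POCS interpolation: alternate a sparsity projection (threshold transform
    # coefficients) with a data-consistency projection (re-insert observed traces).
    import numpy as np

    def pocs_interpolate(data, mask, n_iter=50):
        """data: 2D array, zeros at dead traces; mask: 1 = observed sample."""
        x = data.copy()
        c0 = np.abs(np.fft.fft2(data))
        t_max, t_min = c0.max(), 1e-3 * c0.max()
        for k in range(n_iter):
            thresh = t_max * (t_min / t_max) ** (k / (n_iter - 1))  # decay schedule
            C = np.fft.fft2(x)
            C[np.abs(C) < thresh] = 0.0                # sparsity projection
            x = np.real(np.fft.ifft2(C))
            x = mask * data + (1 - mask) * x           # data-consistency projection
        return x

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 64)
    clean = np.sin(2 * np.pi * 8 * t)[:, None] * np.ones((1, 32))  # toy flat event
    mask = (rng.random((1, 32)) < 0.6).astype(float)               # 60% live traces
    recovered = pocs_interpolate(clean * mask, mask)
    print(np.linalg.norm(recovered - clean) / np.linalg.norm(clean))
    ```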

  19. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  20. Pseudorange Measurement Method Based on AIS Signals

    PubMed Central

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-01-01

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system. PMID:28531153
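    A minimal sketch of fusing two noisy timestamp estimates (e.g., zero-crossing and differential-peak detections) by inverse-variance weighting, the standard minimum-mean-square-error combination for independent unbiased estimates; the variances and values are invented, and the paper's α difference filter is not reproduced.

    ```python
    # Inverse-variance (MMSE) fusion of two independent timestamp estimates:
    # the lower-variance detector gets the larger weight.
    def mmse_fuse(t1: float, var1: float, t2: float, var2: float) -> float:
        w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
        return w1 * t1 + (1.0 - w1) * t2

    t_fused = mmse_fuse(t1=12.40e-6, var1=4e-12, t2=12.55e-6, var2=9e-12)
    print(f"fused timestamp: {t_fused:.3e} s")  # closer to the lower-variance input
    ```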

  1. Decision Support Method with AHP Based on Evaluation Grid Method

    NASA Astrophysics Data System (ADS)

    Yumoto, Masaki

    In decision support with AHP, accuracy tends to fall markedly when alternatives are evaluated using only qualitative criteria. To solve this problem, the method for setting the criteria must be defined clearly. The Evaluation Grid Method can construct the recognition structure that forms an element of the target causality model; through verification of the hypothesis, the criteria for AHP can be extracted. This paper proposes how to model a human's recognition structure with the Evaluation Grid Method, and how to support decisions with AHP using the criteria from which the model is constructed. In practical experiments, the proposed method contributed to the creation of objective criteria, and examinees received good decision support.
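    A minimal sketch of the AHP step this method builds on: deriving criterion weights from a pairwise-comparison matrix via its principal eigenvector. The 3x3 matrix is an invented example on Saaty's 1-9 scale.

    ```python
    # AHP priority vector: the (normalized) principal eigenvector of the
    # pairwise-comparison matrix gives the relative criterion weights.
    import numpy as np

    def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
        vals, vecs = np.linalg.eig(pairwise)
        principal = np.real(vecs[:, np.argmax(np.real(vals))])
        weights = np.abs(principal)
        return weights / weights.sum()

    A = np.array([[1.0, 3.0, 5.0],      # criterion 1 vs criteria 1..3
                  [1/3, 1.0, 2.0],      # criterion 2 vs criteria 1..3
                  [1/5, 1/2, 1.0]])     # criterion 3 vs criteria 1..3
    print(ahp_weights(A))               # relative importance of the three criteria
    ```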

  2. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  3. Software detection method based on running state

    NASA Astrophysics Data System (ADS)

    Zhao, XiaoLin; Chen, Quanbao; Shan, Chun; Wang, Ting; Zhang, Yiman

    2017-08-01

    Software behavior modeling is extremely important for software security research. This article determines whether software runs securely by analyzing the credibility of software behavior and monitoring the software's running state. The paper focuses on a modeling algorithm, the GK-tail algorithm, which is based on the software behavior modeling method, and improves it with attention to data constraints and the interactions between software components. Constraints on the extended finite automaton can be obtained using a combination of the Daikon and ESC/Java tools; these constraints improve the accuracy of the generated model, so the behavior model captures more accurate information. Finally, the paper designs and implements a software running-state generator, and experiments show that determining software security from the software state diagram is feasible.

  4. A flocking based method for brain tractography.

    PubMed

    Aranda, Ramon; Rivera, Mariano; Ramirez-Manzanares, Alonso

    2014-04-01

    We propose a new method to estimate axonal fiber pathways from Multiple Intra-Voxel Diffusion Orientations. Our method uses the multiple local orientation information to lead stochastic walks of particles. These stochastic particles are modeled with mass and are thus subject to gravitational and inertial forces; as a result, we obtain smooth, filtered, and compact trajectory bundles. This gravitational interaction can be seen as a flocking behavior among particles that promotes better and more robust axon fiber estimates, because the particles use collective information to move. However, the stochastic walks may generate paths with low support (outliers), generally associated with incorrect brain connections. In order to eliminate the outlier pathways, we propose a filtering procedure based on principal component analysis and spectral clustering. The performance of the proposed method is evaluated on Multiple Intra-Voxel Diffusion Orientations from two realistic numerical diffusion phantoms and a physical diffusion phantom. Additionally, we qualitatively demonstrate the performance on in vivo human brain data. Copyright © 2014 Elsevier B.V. All rights reserved.
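    An illustrative single update step for mass-carrying particles whose walk follows the local fiber orientation while a gravity-like cohesion pull smooths the bundle; the force model and constants are assumptions, not the paper's exact equations.

    ```python
    # One inertial update step: each particle is driven by the local diffusion
    # orientation, plus a cohesion force pulling it toward the flock's center.
    import numpy as np

    def step(pos, vel, orientations, dt=0.1, mass=1.0, cohesion=0.05):
        """pos, vel: (N, 3); orientations: callable position -> unit direction."""
        drive = np.array([orientations(p) for p in pos])   # local fiber directions
        center = pos.mean(axis=0)                          # flock center of mass
        pull = cohesion * (center - pos)                   # cohesion ("gravity") force
        acc = (drive + pull) / mass                        # Newtonian update
        vel = vel + dt * acc
        return pos + dt * vel, vel

    orient = lambda p: np.array([1.0, 0.0, 0.0])           # toy uniform fiber field
    pos = np.random.randn(50, 3) * 0.1
    vel = np.zeros((50, 3))
    pos, vel = step(pos, vel, orient)
    ```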

  5. Imaging Earth's Interior Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.; Tape, C.; Maggi, A.

    2008-12-01

    Modern numerical methods in combination with rapid advances in parallel computing have enabled the simulation of seismic wave propagation in 3D Earth models at unprecedented resolution and accuracy. On a modest PC cluster one can now simulate global seismic wave propagation at periods of 20 s and longer, accounting for heterogeneity in the crust and mantle, topography, anisotropy, attenuation, fluid-solid interactions, self-gravitation, rotation, and the oceans. On the 'Ranger' system at the Texas Advanced Computing Center one can break the 2 s barrier. By drawing connections between seismic tomography, adjoint methods popular in climate and ocean dynamics, time-reversal imaging, and finite-frequency 'banana-doughnut' kernels, it has been demonstrated that Fréchet derivatives for tomographic and (finite) source inversions in complex 3D Earth models may be obtained based upon just two numerical simulations for each earthquake: one calculation for the current model and a second, 'adjoint', calculation that uses time-reversed signals at the receivers as simultaneous, fictitious sources. The adjoint wavefield is calculated while the regular wavefield is reconstructed on the fly by propagating the last frame of the wavefield, saved by a previous forward simulation, backward in time. This approach has been used to calculate sensitivity kernels in regional and global Earth models for various body- and surface-wave arrivals. These kernels illustrate the sensitivity of the observations to the structural parameters and form the basis of 'adjoint tomography'. We use a non-linear conjugate gradient method in combination with a source subspace projection preconditioning technique to iteratively minimize the misfit function. Using an automated time window selection algorithm, our emphasis is on matching targeted, frequency-dependent body-wave traveltimes and surface-wave phase anomalies, rather than entire waveforms. To avoid reaching a local minimum in the optimization procedure, we…

  6. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    SciTech Connect

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  7. Literature Based Discovery: models, methods, and trends.

    PubMed

    Henry, Sam; McInnes, Bridget T.

    2017-08-21

    This paper provides an introduction and overview of literature based discovery (LBD) in the biomedical domain. It introduces the reader to modern and historical LBD models, key system components, evaluation methodologies, and current trends. After completion, the reader will be familiar with the challenges and methodologies of LBD, will be capable of distinguishing between recent LBD systems and publications, and will be capable of designing an LBD system for a specific application. The intended audience ranges from biomedical researchers curious about LBD, to someone looking to design an LBD system, to an LBD expert trying to catch up on trends in the field. The reader need not be familiar with LBD, but knowledge of biomedical text processing tools is helpful. This paper describes a unifying framework for LBD systems. Within this framework, different models and methods are presented to both distinguish and show overlap between systems. Topics include term and document representation, system components, and an overview of models including co-occurrence models, semantic models, and distributional models. Other topics include uninformative term filtering, term ranking, results display, system evaluation, an overview of the application areas of drug development, drug repurposing, and adverse drug event prediction, and challenges and future directions. A timeline showing contributions to LBD and a table summarizing the works of several authors are provided. Topics are presented from a high-level perspective; references are given where more detailed analysis is required. Copyright © 2017. Published by Elsevier Inc.

  8. An inpainting-based deinterlacing method.

    PubMed

    Ballester, Coloma; Bertalmío, Marcelo; Caselles, Vicent; Garrido, Luis; Marques, Adrián; Ranchin, Florent

    2007-10-01

    Video is usually acquired in interlaced format, where each image frame is composed of two image fields, each field holding same-parity lines. However, many display devices require progressive video as input; also, many video processing tasks perform better on progressive material than on interlaced video. In the literature there exists a great number of algorithms for interlaced-to-progressive video conversion, with a tradeoff between the speed and the quality of the results. The best algorithms in terms of image quality require motion compensation; hence, they are computationally very intensive. In this paper, we propose a novel deinterlacing algorithm based on ideas from the image inpainting arena. We view the lines to interpolate as gaps that we need to inpaint. Numerically, this is implemented using a dynamic programming procedure, which ensures a complexity of O(S), where S is the number of pixels in the image. The results obtained with our algorithm compare favorably, in terms of image quality, with state-of-the-art methods, but at a lower computational cost, since we do not need to perform motion field estimation.

  9. Towards Seismic Tomography Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Liu, Q.; Tape, C.; Maggi, A.

    2006-12-01

    We outline the theory behind tomographic inversions based on 3D reference models, fully numerical 3D wave propagation, and adjoint methods. Our approach involves computing the Fréchet derivatives for tomographic inversions via the interaction between a forward wavefield, propagating from the source to the receivers, and an 'adjoint' wavefield, propagating from the receivers back to the source. The forward wavefield is computed using a spectral-element method (SEM) and a heterogeneous wave-speed model, and stored as synthetic seismograms at particular receivers for which there is data. We specify an objective or misfit function that defines a measure of misfit between data and synthetics. For a given receiver, the differences between the data and the synthetics are time-reversed and used as the source of the adjoint wavefield. For each earthquake, the interaction between the regular and adjoint wavefields is used to construct finite-frequency sensitivity kernels, which we call event kernels. These kernels may be thought of as weighted sums of measurement-specific banana-donut kernels, with weights determined by the measurements. The overall sensitivity is simply the sum of event kernels, which defines the misfit kernel. The misfit kernel is multiplied by convenient orthonormal basis functions that are embedded in the SEM code, resulting in the gradient of the misfit function, i.e., the Fréchet derivatives. A conjugate gradient algorithm is used to iteratively improve the model while reducing the misfit function. Using 2D examples for Rayleigh wave phase-speed maps of southern California, we illustrate the construction of the gradient and the minimization algorithm, and consider various tomographic experiments, including source inversions, structural inversions, and joint source…
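    A toy discrete adjoint-state calculation mirroring the two-simulation recipe above: one forward solve, one adjoint solve driven by the negated data residual, then a gradient assembled from their interaction. The system A(m) = diag(m) and all numbers are invented for illustration.

    ```python
    # Adjoint-state gradient of the misfit J = 0.5*||u - d||^2 subject to
    # A(m) u = f, with the illustrative choice A(m) = diag(m):
    #   forward:  diag(m) u = f
    #   adjoint:  diag(m) lam = -(u - d)
    #   gradient: dJ/dm_i = lam_i * u_i
    import numpy as np

    m = np.array([2.0, 3.0, 4.0])        # "model" parameters
    f = np.array([1.0, 1.0, 1.0])        # source term
    d = np.array([0.4, 0.5, 0.2])        # observed data

    u = f / m                            # forward simulation
    lam = -(u - d) / m                   # adjoint simulation
    grad = lam * u                       # one gradient from two solves

    # Finite-difference check of the first component:
    eps = 1e-6
    m2 = m.copy(); m2[0] += eps
    fd = (0.5*np.sum((f/m2 - d)**2) - 0.5*np.sum((f/m - d)**2)) / eps
    print(grad[0], fd)                   # the two values should agree closely
    ```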

  10. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew [Mill Valley, CA

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  11. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.
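    A minimal sketch of a residual-averaging iteration of the kind described, applied to a 1D Poisson problem: at each step a locally averaged residual is subtracted from the current approximation. The three-point averaging kernel and step size are illustrative choices, not the paper's abstract Hilbert-space operators.

    ```python
    # Residual-averaging iteration for -u'' = f on (0, 1), u(0) = u(1) = 0.
    import numpy as np

    n = 49                                  # interior grid points
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1 - h, n)
    f = np.pi**2 * np.sin(np.pi * x)        # exact solution is sin(pi*x)
    u = np.zeros(n)

    for _ in range(4000):
        lap = np.empty_like(u)
        lap[1:-1] = u[:-2] - 2*u[1:-1] + u[2:]
        lap[0] = u[1] - 2*u[0]              # boundary values are zero
        lap[-1] = u[-2] - 2*u[-1]
        r = -lap / h**2 - f                 # residual of -u'' = f
        r_avg = np.convolve(r, [0.25, 0.5, 0.25], mode="same")  # local average
        u -= 0.8 * h**2 * r_avg             # subtract the averaged residual

    print(np.max(np.abs(u - np.sin(np.pi * x))))  # ~3e-4, the discretization level
    ```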

  12. Multifractal Framework Based on Blanket Method

    PubMed Central

    Paskaš, Milorad P.; Reljin, Irini S.; Reljin, Branimir D.

    2014-01-01

    This paper proposes local multifractal measures motivated by the blanket method for calculating fractal dimension; they cover both fractal approaches familiar in image processing. Two of the proposed methods (Methods 1 and 3) support a model of the image with embedding dimension three, while the other (proposed Method 2) supports a model of the image embedded in a space of dimension three. While the classical blanket method provides only one value for an image (the fractal dimension), the multifractal spectrum obtained by any of the proposed measures gives a whole range of dimensional values. This means that the proposed multifractal blanket model generalizes the classical (monofractal) blanket method and other locally implemented versions of this monofractal approach. The proposed measures are validated on the Brodatz image database through texture classification. All proposed methods give similar classification results, while the average computation time of Method 3 is substantially longer. PMID:24578664
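    A minimal sketch of the classical (monofractal) blanket method that the proposed measures generalize: grow upper and lower blankets around the gray-level surface, measure the surface area at each scale, and read the fractal dimension off the log-log slope. Details (3x3 neighborhood, scale range) are illustrative.

    ```python
    # Peleg-style blanket method: at each scale k the blankets thicken by one
    # gray level and dilate/erode over the 3x3 neighborhood; the surface area
    # A(k) = sum(u - b) / (2k) shrinks with k at a rate set by the roughness.
    import numpy as np
    from scipy.ndimage import grey_dilation, grey_erosion

    def blanket_fractal_dimension(img: np.ndarray, scales=range(1, 8)) -> float:
        u = img.astype(float)
        b = img.astype(float)
        areas = []
        for k in scales:
            u = np.maximum(u + 1, grey_dilation(u, size=(3, 3)))  # upper blanket
            b = np.minimum(b - 1, grey_erosion(b, size=(3, 3)))   # lower blanket
            areas.append((u - b).sum() / (2 * k))                 # area A(k)
        slope = np.polyfit(np.log(list(scales)), np.log(areas), 1)[0]
        return 2.0 - slope

    texture = np.random.randint(0, 256, (128, 128))  # stand-in for a Brodatz patch
    print(blanket_fractal_dimension(texture))        # approaches 3 for rough surfaces
    ```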

  13. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, Andrew M.; Dawson, John

    1993-01-01

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.

  14. Servo Control Using Wave-Based Method

    NASA Astrophysics Data System (ADS)

    Marek, O.

    A wave-based control of flexible mechanical systems has been developed. It is based on the idea of sending waves into the mechanical system, measuring the incoming waves, and avoiding re-sending those waves into the continuum. This approach effectively absorbs the energy coming from the system and has been applied successfully in a number of simulations. This paper deals with implementing wave-based control in experiments using a servomotor; in particular, it describes the implementation on a Yaskawa servomotor and its PLC system.

  15. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, A.M.; Dawson, J.

    1993-12-14

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.

  16. Accelerator-based method of producing isotopes

    DOEpatents

    Nolen, Jr., Jerry A.; Gomes, Itacil C.

    2015-11-03

    The invention provides a method using accelerators to produce radioisotopes in high quantities. The method comprises supplying a "core" of low-enrichment fissile material arranged in a spherical array of LEU combined with a water moderator. The array is surrounded by substrates that serve as multipliers and moderators, as well as by neutron-shielding substrates. A flux of neutrons enters the low-enrichment fissile material and causes fissions therein for a time sufficient to generate desired quantities of isotopes from the fissile material. The radioisotopes are extracted from said fissile material by chemical processing or other means.

  17. HMM-Based Gene Annotation Methods

    SciTech Connect

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  18. Immunoassay control method based on light scattering

    NASA Astrophysics Data System (ADS)

    Bilyi, Olexander I.; Kiselyov, Eugene M.; Petrina, R. O.; Ferensovich, Yaroslav P.; Yaremyk, Roman Y.

    1999-11-01

    The physical principles of registering immune reactions by light-scattering methods are considered. The operation of a laser nephelometer for measuring the antigen-antibody reaction is described, along with a latex-agglutination technique for obtaining diagnostic immune reactions for diphtheria determination.

  19. Structural and Network-based Methods for Knowledge-Based Systems

    DTIC Science & Technology

    2011-12-01

    Structural and Network-based Methods for Knowledge-based Systems. A dissertation submitted to the Graduate School of Northwestern University. Abstract: In recent years, there has been…

  20. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    Multiscale collocation methods are developed in [3] for solving a system of integral equations which is a reformulation of the Tikhonov-regularized … Direct numerical solutions of the Tikhonov regularization equation require one to generate a matrix representation of the composition of the … issue, rather than directly solving the Tikhonov-regularized equation, we propose to solve an equivalent coupled system of integral equations. We apply a…

  1. Method of casting pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A process for producing molded pitch based foam is disclosed which minimizes cracking. The process includes forming a viscous pitch foam in a container, and then transferring the viscous pitch foam from the container into a mold. The viscous pitch foam in the mold is hardened to provide a carbon foam having a relatively uniform distribution of pore sizes and a highly aligned graphitic structure in the struts.

  2. Roadside-based communication system and method

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron D. (Inventor)

    2007-01-01

    A roadside-based communication system providing backup communication between emergency mobile units and emergency command centers. In the event of failure of a primary communication, the mobile units transmit wireless messages to nearby roadside controllers that may take the form of intersection controllers. The intersection controllers receive the wireless messages, convert the messages into standard digital streams, and transmit the digital streams along a citywide network to a destination intersection or command center.

  3. Alaska climate divisions based on objective methods

    NASA Astrophysics Data System (ADS)

    Angeloff, H.; Bieniek, P. A.; Bhatt, U. S.; Thoman, R.; Walsh, J. E.; Daly, C.; Shulski, M.

    2010-12-01

    Alaska is vast geographically, is located at high latitudes, is surrounded on three sides by oceans and has complex topography, encompassing several climate regions. While climate zones exist, there has not been an objective analysis to identify regions of homogeneous climate. In this study we use cluster analysis on a robust set of weather observation stations in Alaska to develop climate divisions for the state. Similar procedures have been employed in the contiguous United States and other parts of the world. Our analysis, based on temperature and precipitation, yielded a set of 10 preliminary climate divisions. These divisions include an eastern and western Arctic (bounded by the Brooks Range to the south), a west coast region along the Bering Sea, and eastern and western Interior regions (bounded to the south by the Alaska Range). South of the Alaska Range there were the following divisions: an area around Cook Inlet (also including Valdez), coastal and inland areas along Bristol Bay including Kodiak and Lake Iliamna, the Aleutians, and Southeast Alaska. To validate the climate divisions based on relatively sparse station data, additional sensitivity analysis was performed. Additional clustering analysis utilizing the gridded North American Regional Reanalysis (NARR) was also conducted. In addition, the divisions were evaluated using correlation analysis. These sensitivity tests support the climate divisions based on cluster analysis.
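    A minimal sketch of the station-clustering idea described above, assuming K-means on standardized temperature/precipitation climatologies; the synthetic station matrix is invented, and the paper's exact variables and cluster algorithm may differ.

    ```python
    # Cluster weather stations into candidate climate divisions from their
    # standardized monthly temperature and precipitation climatologies.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # hypothetical: rows = stations, cols = 12 monthly temps + 12 monthly precips
    station_climatology = rng.normal(size=(120, 24))

    X = StandardScaler().fit_transform(station_climatology)
    labels = KMeans(n_clusters=10, n_init=20, random_state=0).fit_predict(X)
    print(np.bincount(labels))   # station count per candidate climate division
    ```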

  4. Method for producing iron-based catalysts

    DOEpatents

    Farcasiu, Malvina; Kaufman, Phillip B.; Diehl, J. Rodney; Kathrein, Hendrik

    1999-01-01

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  5. Method for producing iron-based catalysts

    SciTech Connect

    Farcasiu, M.; Kaufman, P.B.; Diehl, J.R.; Kathrein, H.

    1999-09-07

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  6. A power function method for estimating base flow.

    PubMed

    Lott, Darline A; Stewart, Mark T

    2013-01-01

    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ: an analytical method calibrated against an integrated basin variable, specific conductance, that relates base flow to total discharge and is consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. Advantages of the method are that it is uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It better replicates base flow determined by mass balance methods than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or empirical basin and gauge-specific variables. Also, it can be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods.
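    A minimal sketch of calibrating the power function BF = aQ^b + cQ against a specific-conductance mass-balance estimate of base flow; the end-member conductances and the synthetic discharge record are invented for illustration.

    ```python
    # Calibrate a*Q**b + c*Q against the conductance mass balance
    # BF_mb = Q*(SC - SC_ro)/(SC_bf - SC_ro), the tracer-based reference.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_fn(Q, a, b, c):
        return a * Q**b + c * Q

    rng = np.random.default_rng(1)
    Q = rng.lognormal(mean=2.0, sigma=0.8, size=200)     # synthetic daily discharge
    SC_bf, SC_ro = 400.0, 50.0                           # end members (uS/cm)
    frac = np.clip(0.9 * Q.min() / Q + 0.1, 0, 1)        # toy base-flow fraction
    SC = SC_ro + frac * (SC_bf - SC_ro)                  # mixed stream conductance
    BF_mb = Q * (SC - SC_ro) / (SC_bf - SC_ro)           # mass-balance base flow

    (a, b, c), _ = curve_fit(power_fn, Q, BF_mb, p0=[1.0, 0.5, 0.1])
    print(a, b, c)                                       # coefficients for this gauge
    ```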

  7. Method for hardfacing a ferrous base material

    SciTech Connect

    Sakaguchi, S.; Ito, H.; Shiroyama, M.

    1984-10-23

    Tungsten carbide and nickel-phosphorus alloy coexist in individual particles. The composite powder, produced by mechanically mixing the two substances, consists of 30 to about 95 percent by weight of tungsten carbide, with the balance nickel-phosphorus alloy. This powder is sprayed onto the ferrous base material, resulting in a uniform dispersion of both tungsten carbide and nickel-phosphorus and tight adhesion to the surface, because the tungsten carbide and nickel-phosphorus alloy coexist in individual particles in the composite. A hard metal coating of high hardness and excellent wear resistance is produced after the surface of the coating is heated in a non-oxidizing atmosphere until the nickel-phosphorus alloy forms a liquid phase. This hard metal coating is used for various kinds of wear-resistant materials.

  8. PCLC flake-based apparatus and method

    DOEpatents

    Cox, Gerald P; Fromen, Cathy A; Marshall, Kenneth L; Jacobs, Stephen D

    2012-10-23

    A PCLC flake/fluid host suspension that enables dual-frequency, reverse-drive reorientation and relaxation of the PCLC flakes is composed of a fluid host that is a mixture of: 94 to 99.5 wt% of a non-aqueous fluid medium having a dielectric constant ε, where 1 < ε < 7, a conductivity σ between 10^-9 and 10^-7 Siemens per meter (S/m), and a resistivity r between 10^7 and 10^10 ohm-meters (Ω·m), and which is optically transparent in a selected wavelength range Δλ; 0.0025 to 0.25 wt% of an inorganic chloride salt; 0.0475 to 4.75 wt% water; and 0.25 to 2 wt% of an anionic surfactant; and 1 to 5 wt% of PCLC flakes suspended in the fluid host mixture. Various encapsulation forms and methods are disclosed, including a basic test cell, a microwell, a microcube, direct encapsulation (I), direct encapsulation (II), and coacervation encapsulation. Applications to display devices are disclosed.

  9. FOCUS: a deconvolution method based on algorithmic complexity

    NASA Astrophysics Data System (ADS)

    Delgado, C.

    2006-07-01

    A new method for improving the resolution of images is presented. It is based on Occam's razor principle implemented using algorithmic complexity arguments. The performance of the method is illustrated using artificial and real test data.

  10. Residual-based Methods for Controlling Discretization Error in CFD

    DTIC Science & Technology

    2015-08-24

    Residual-based Methods for Controlling Discretization Error in CFD. Chris Roy, Virginia Polytechnic Institute and State University; AFRL report AFRL-AFOSR-VA-TR-2015-0256, dated 30-04-2015; contract FA9550-12-1-0173. Cites "Adjoint-based h-p Adaptive Discontinuous Galerkin Methods for the 2D Compressible Euler Equations," Journal of Computational Physics, Vol. 228, No. 20.

  11. Brain Based Teaching: Fad or Promising Teaching Method.

    ERIC Educational Resources Information Center

    Winters, Clyde A.

    This paper discusses brain-based teaching and examines its relevance as a teaching method and knowledge base. Brain-based teaching is very popular among early childhood educators. Positive attributes of brain-based education include student engagement and active involvement in their own learning, teachers teaching for meaning and understanding,…

  12. A New Color-based Lawn Weed Detection Method and Its Integration with Texture-based Methods: A Hybrid Approach

    NASA Astrophysics Data System (ADS)

    Watchareeruetai, Ukrit; Ohnishi, Noboru

    We propose a color-based weed detection method specifically designed for detecting lawn weeds in winter. The proposed method exploits fuzzy logic to make inferences from color information; a genetic algorithm is adopted to search for the optimal combination of color information, fuzzy membership functions, and fuzzy rules used in the method. Experimental results show that the proposed color-based method outperforms conventional texture-based methods when tested on a winter dataset. In addition, we propose a hybrid system that incorporates both texture-based and color-based weed detection methods and can automatically select the better method for a given input image. The results show that the hybrid system significantly improves weed control performance across the overall datasets.

  13. Pyrolyzed-parylene based sensors and method of manufacture

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Liger, Matthieu (Inventor); Miserendino, Scott (Inventor); Konishi, Satoshi (Inventor)

    2007-01-01

    A method (and resulting structure) for fabricating a sensing device. The method includes providing a substrate comprising a surface region and forming an insulating material overlying the surface region. The method also includes forming a film of carbon-based material overlying the insulating material and treating the film to pyrolyze the carbon-based material, causing formation of a film of substantially carbon-based material having a resistivity within a predetermined range. At least a portion of the pyrolyzed carbon-based material is then provided in a sensor application, and that portion is used in the sensing application. In a specific embodiment, the sensing application is selected from chemical, humidity, piezoelectric, radiation, mechanical strain, or temperature sensing.

  14. Oriented Connectivity-Based Method for Segmenting Solar Loops

    NASA Technical Reports Server (NTRS)

    Lee, J. K.; Newman, T. S.; Gary, G. A.

    2005-01-01

    A method based on oriented connectivity that can automatically segment arc-like structures (solar loops) from intensity images of the Sun's corona is introduced. The method is a constructive approach that uses model-guided processing to enable extraction of credible loop structures. Since solar loops are vestiges of the solar magnetic field, the model-guided processing exploits external estimates of this field's local orientations that are derived from a physical magnetic field model. Empirical studies of the method's effectiveness are also presented. The Oriented Connectivity-Based Method is the first automatic method for the segmentation of solar loops.

  15. IFCM Based Segmentation Method for Liver Ultrasound Images.

    PubMed

    Jain, Nishant; Kumar, Vinod

    2016-11-01

    In this paper we propose an iterative Fuzzy C-Means (IFCM) method which divides the pixels in an image into a set of clusters; this set of clusters is then used to segment a focal liver lesion from a liver ultrasound image. The advantage of the IFCM method is that an n-cluster FCM method may lead to a non-uniform distribution of centroids, whereas with IFCM the centroids are always uniformly distributed. The proposed method is compared with the edge-based active contour Chan-Vese (CV) method and the MAP-MRF method, implemented in MATLAB, and with the region-based active contour region-scalable fitting energy (RSFE) method, whose MATLAB code is available on the author's website. Since no comparison is available on a common database, the performance of the three methods and the proposed method was compared on liver ultrasound (US) images available with us. The proposed method gives the best accuracy of 99.8%, compared with 99.46%, 95.81%, and 90.08% for the CV, MAP-MRF, and RSFE methods, respectively. The computation time of the proposed segmentation method is 14.25 s, compared with 44.71, 41.27, and 49.02 s for the CV, MAP-MRF, and RSFE methods, respectively.
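    A minimal sketch of the standard fuzzy C-means update that IFCM iterates on: alternate membership and centroid updates on pixel intensities. The fuzzifier m = 2 and the bimodal toy data are illustrative; IFCM's re-seeding step, which keeps centroids uniformly distributed, is not reproduced here.

    ```python
    # Standard FCM on 1D pixel intensities: soft memberships u and centroids v
    # are alternately updated until the centroids settle.
    import numpy as np

    def fcm(x, n_clusters, m=2.0, iters=100):
        """x: (N,) intensities. Returns centroids (C,) and memberships (C, N)."""
        rng = np.random.default_rng(0)
        v = rng.choice(x, size=n_clusters, replace=False)     # initial centroids
        for _ in range(iters):
            d = np.abs(x[None, :] - v[:, None]) + 1e-12       # (C, N) distances
            ratio = (d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))
            u = 1.0 / ratio.sum(axis=1)                       # membership update
            um = u ** m
            v = (um @ x) / um.sum(axis=1)                     # centroid update
        return v, u

    pixels = np.concatenate([np.random.normal(60, 10, 4000),    # lesion-like mode
                             np.random.normal(140, 15, 6000)])  # liver-like mode
    centroids, memberships = fcm(pixels, n_clusters=2)
    labels = memberships.argmax(axis=0)                         # hard segmentation
    print(np.sort(centroids))
    ```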

  16. Modeling electrokinetic flow by Lagrangian particle-based method

    NASA Astrophysics Data System (ADS)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre; Parks, Mike

    2015-11-01

    This work focuses on mathematical models and numerical schemes based on a Lagrangian particle-based method that can effectively capture the mesoscale multiphysics (hydrodynamics, electrostatics, and advection-diffusion) associated with applications in micro-/nano-transport and technology. The presented implicit, consistent numerical scheme significantly improves the order of accuracy of the particle-based method. Specifically, we show simulation results on electrokinetic flows and microfluidic mixing processes in micro-/nano-channels and through semi-permeable porous structures.

  17. Space target image fusion method based on image clarity criterion

    NASA Astrophysics Data System (ADS)

    Gao, Zhisheng; Yang, Miao; Xie, Chunzhi

    2017-05-01

    Optical and infrared imaging are often used in ground-based optical space target observation, and the fusion of the two types of images for a more detailed observation is the key problem to be solved. A space target multimodal image fusion scheme based on the joint sparsity model is proposed, which takes into consideration the correlations among the native sparse characteristics of the images, the clarity features of the images, and the multisource images. First, using an overcomplete dictionary, the source images are represented as a combination of a shared sparse component and exclusive sparse components. Second, a method for image clarity feature extraction is proposed to design the fusion rules for the exclusive sparse components and obtain the fused exclusive sparse components. Finally, the fused image is reconstructed from the fused sparse components and the overcomplete dictionary. The proposed method was tested on space target image and natural scene image data sets. Compared with traditional methods such as multiscale transform-based methods, sparse representation-based methods, and joint sparsity representation-based methods, the experimental results demonstrate that our method outperforms the existing state-of-the-art methods in both human visual effect and objective evaluation indexes. In particular, for the evaluation indexes Q and QE, the scores are nearly 10% higher than those of traditional methods, which indicates that the fused image of our method has better edge clarity.

  18. Language Practitioners' Reflections on Method-Based and Post-Method Pedagogies

    ERIC Educational Resources Information Center

    Soomro, Abdul Fattah; Almalki, Mansoor S.

    2017-01-01

    Method-based pedagogies are commonly applied in teaching English as a foreign language all over the world. However, in the last quarter of the 20th century, the concept of such pedagogies based on the application of a single best method in EFL started to be viewed with concerns by some scholars. In response to the growing concern against the…

  19. Fusion Segmentation Method Based on Fuzzy Theory for Color Images

    NASA Astrophysics Data System (ADS)

    Zhao, J.; Huang, G.; Zhang, J.

    2017-09-01

    The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainty in labeling pixels near the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method achieves better segmentation results on both natural scene images and optical remote sensing images than the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarimetric SAR images.
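
    The abstract does not specify the membership functions or the fuzzy reasoning rule, so the sketch below only illustrates the general idea: a soft sigmoid membership per color channel instead of a hard threshold, fused with a product-style rule. The function names, thresholds, and the geometric-mean combination are illustrative assumptions.

```python
import numpy as np

def sigmoid_membership(channel, threshold, width):
    """Soft 'object' membership: near 0 well below the threshold, near 1 well
    above it, and smooth in the uncertain band around it (width = fuzziness)."""
    return 1.0 / (1.0 + np.exp(-(channel.astype(float) - threshold) / width))

def fuzzy_fuse_segment(rgb, thresholds, widths, decision=0.5):
    """Fuse per-channel memberships (geometric mean as a simple t-norm-like
    'reasoning' step) and defuzzify with a decision level."""
    memberships = [
        sigmoid_membership(rgb[..., c], thresholds[c], widths[c])
        for c in range(3)
    ]
    fused = np.prod(memberships, axis=0) ** (1.0 / 3.0)
    return fused > decision

# rgb = ...  # H x W x 3 color image
# mask = fuzzy_fuse_segment(rgb, thresholds=(120, 110, 100), widths=(15, 15, 15))
```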

  20. Islanding detection scheme based on adaptive identifier signal estimation method.

    PubMed

    Bakhshi, M; Noroozian, R; Gharehpetian, G B

    2017-09-12

    This paper proposes a novel passive anti-islanding method for both inverter-based and synchronous machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are near zero, the majority of passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on an exponentially damped signal estimation method. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, so that it can detect the islanding condition with near-zero active power imbalance. The main advantage of the adaptive identifier method over other signal estimation methods is its small sampling window. In this paper, the adaptive identifier based islanding detection method introduces a new detection index, termed the decision signal, obtained by estimating the oscillation frequency of the PCC frequency, and can detect islanding conditions properly. In islanding conditions, the oscillation frequency of the PCC frequency approaches zero, so threshold setting for the decision signal is not a tedious job. Non-islanding transient events that can cause a significant deviation in the PCC frequency are considered in the simulations; these events include different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified with near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Structure-Based Subspace Method for Multichannel Blind System Identification

    NASA Astrophysics Data System (ADS)

    Mayyala, Qadri; Abed-Meraim, Karim; Zerguine, Azzedine

    2017-08-01

    In this work, a novel subspace-based method for blind identification of multichannel finite impulse response (FIR) systems is presented. Here, we directly exploit the embedded Toeplitz channel structure in the signal linear model to build a quadratic form whose minimization leads to the desired channel estimate up to a scalar factor. This method can be extended to estimate any predefined linear structure, e.g. Hankel, that is usually encountered in linear systems. Simulation findings are provided to highlight the appealing advantages of the new structure-based subspace (SSS) method over the standard subspace (SS) method in certain adverse identification scenarios.
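
    The SSS construction itself is not given in the abstract, but the underlying idea — a quadratic form built from the convolution (Toeplitz) structure whose minimizer is the channel up to a scalar — can be illustrated with the classical two-channel cross-relation formulation. This is a standard baseline, not the authors' SSS method.

```python
import numpy as np
from scipy.linalg import toeplitz

def cross_relation_identify(x1, x2, L):
    """Blind two-channel FIR identification (channel length L) from the
    cross-relation h2*x1 - h1*x2 = 0; returns (h1, h2) up to a common scale."""
    def conv_matrix(x, L):
        # Rows are sliding windows so that conv_matrix(x, L) @ h reproduces
        # the valid part of np.convolve(x, h).
        return toeplitz(x[L - 1:], x[L - 1::-1])
    A = np.hstack([conv_matrix(x1, L), -conv_matrix(x2, L)])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    v = vt[-1]              # right singular vector of the smallest singular value
    return v[L:], v[:L]

# Quick noiseless check with known channels:
rng = np.random.default_rng(1)
s = rng.standard_normal(500)
h1, h2 = np.array([1.0, 0.5, -0.2]), np.array([0.8, -0.3, 0.1])
x1 = np.convolve(s, h1)[:500]
x2 = np.convolve(s, h2)[:500]
e1, e2 = cross_relation_identify(x1, x2, L=3)
e1 *= h1[0] / e1[0]         # resolve the scalar ambiguity for comparison
print(np.round(e1, 3))      # ~ [ 1.   0.5 -0.2]
```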

  2. Comparison of conventional staining methods and monoclonal antibody-based methods for Cryptosporidium oocyst detection.

    PubMed Central

    Arrowood, M J; Sterling, C R

    1989-01-01

    The sensitivity and specificity of seven microscopy-based Cryptosporidium oocyst detection methods were compared after application to unconcentrated fecal smears. The seven methods were as follows: (i) a commercial acid-fast (AF) stain (VOLU-SOL) method, (ii) the Truant auramine-rhodamine (AR) stain method, (iii) a fluorescein-conjugated C1B3 monoclonal antibody (MAb) direct fluorescence method, (iv) an OW3 MAb indirect fluorescence method, (v) a biotinylated OW3 indirect fluorescence method, (vi) a biotinylated OW3-indirect diaminobenzidine (DAB) method, and (vii) a biotinylated OW3-aminoethylcarbazole (AEC) method. A total of 281 randomly collected Formalin-fixed fecal samples (submitted to the Maricopa County Health Department, Phoenix, Ariz.) and 30 known positives (Formalin-fixed and K2Cr2O7-preserved stools from our laboratory) were examined in a blind test; 32 of 311 samples (10.3%) were confirmed positive. Of the confirmed positives, 40.6% were identified by the AF method, 93.8% by the AR method, 93.8% by the C1B3 method, 81.3% by the OW3-DAB method, 71.9% by the OW3-AEC method, 100% by the OW3 indirect fluorescence method, and 100% by the biotinylated OW3 indirect fluorescence method. False-positives were encountered with the AF and AR methods (52.0% and 85.7% specificity, respectively), while no false-positives were encountered with the MAb-based methods. Oocysts in infected tissue sections were easily detected by the MAb-based methods. PMID:2475523

  3. DNA-Based Methods in the Immunohematology Reference Laboratory

    PubMed Central

    Denomme, Gregory A

    2010-01-01

    Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350

  4. A novel planarization method based on photoinduced confined chemical etching.

    PubMed

    Fang, Qiuyan; Zhou, Jian-Zhang; Zhan, Dongping; Shi, Kang; Tian, Zhao-Wu; Tian, Zhong-Qun

    2013-07-21

    A photoinduced confined chemical etching system based on TiO2 nanotube arrays is developed for the planarization of copper surfaces, and it proves to be a promising stress-free chemical planarization method for metals and semiconductors.

  5. Propensity Score-Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application.

    PubMed

    Zhou, Xiang; Xie, Yu

    2016-02-01

    Since the seminal introduction of the propensity score by Rosenbaum and Rubin, propensity-score-based (PS-based) methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the propensity score approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For situations where this assumption may be violated, Heckman and his associates have recently developed a novel approach based on marginal treatment effects (MTE). In this paper, we (1) explicate consequences for PS-based methods when aspects of the ignorability assumption are violated; (2) compare PS-based methods and MTE-based methods by making a close examination of their identification assumptions and estimation performances; (3) apply these two approaches in estimating the economic return to college using data from NLSY 1979 and discuss their discrepancies in results. When there is a sorting gain but no systematic baseline difference between treated and untreated units given observed covariates, PS-based methods can identify the treatment effect of the treated (TT). The MTE approach performs best when there is a valid and strong instrumental variable (IV). In addition, this paper introduces the "smoothing-difference PS-based method," which enables us to uncover heterogeneity across people of different propensity scores in both counterfactual outcomes and treatment effects.
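
    As a concrete illustration of the PS-based side of this comparison, the sketch below estimates the treatment effect of the treated (TT) by inverse-probability weighting on an estimated propensity score — a standard estimator under ignorability, not the paper's smoothing-difference method; all variable names are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def att_ipw(X, treated, y):
    """Treatment effect of the treated via propensity-score odds weighting:
    controls are reweighted by e(x) / (1 - e(x)) to match the treated group."""
    ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    t = treated.astype(bool)
    w = ps / (1.0 - ps)
    return y[t].mean() - np.average(y[~t], weights=w[~t])

# X = covariates (n x p), treated = 0/1 (e.g., college attendance),
# y = outcome (e.g., log earnings); identified only under ignorability.
```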

  6. Comparison of Two Distance Based Alignment Method in Medical Imaging

    DTIC Science & Technology

    2001-10-25

    Bulan, G.; Ozturk, C. (Institute of Biomedical Engineering, Bogazici University)

    Distance-based alignment methods are very helpful for registering large datasets of contours or surfaces, commonly encountered in medical imaging. They do not require special ordering or...

  7. An entropy-based objective evaluation method for image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Fritts, Jason E.; Goldman, Sally A.

    2003-12-01

    Accurate image segmentation is important for many image, video and computer vision applications. Over the last few decades, many image segmentation methods have been proposed. However, the results of these segmentation methods are usually evaluated only visually, qualitatively, or indirectly by the effectiveness of the segmentation on the subsequent processing steps. Such methods are either subjective or tied to particular applications. They do not judge the performance of a segmentation method objectively, and cannot be used as a means to compare the performance of different segmentation techniques. A few quantitative evaluation methods have been proposed, but these early methods have been based entirely on empirical analysis and have no theoretical grounding. In this paper, we propose a novel objective segmentation evaluation method based on information theory. The new method uses entropy as the basis for measuring the uniformity of pixel characteristics (luminance is used in this paper) within a segmentation region. The evaluation method provides a relative quality score that can be used to compare different segmentations of the same image. This method can be used to compare both various parameterizations of one particular segmentation method as well as fundamentally different segmentation techniques. The results from this preliminary study indicate that the proposed evaluation method is superior to the prior quantitative segmentation evaluation techniques, and identify areas for future research in objective segmentation evaluation.
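
    To make the criterion concrete, here is a minimal sketch of the region-uniformity half of such an entropy score (the bin count, 8-bit luminance range, and area weighting are our assumptions; measures of this kind also balance region entropy against a layout term that penalizes over-segmentation).

```python
import numpy as np

def region_entropy(values, bins=256):
    """Shannon entropy of the luminance values inside one region."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist[hist > 0] / values.size
    return -(p * np.log2(p)).sum()

def expected_region_entropy(luminance, labels):
    """Area-weighted entropy over all regions: lower = more uniform regions."""
    total = 0.0
    for r in np.unique(labels):
        vals = luminance[labels == r]
        total += (vals.size / luminance.size) * region_entropy(vals)
    return total

# luminance = grayscale image, labels = integer segmentation map (same shape).
# Two segmentations of the same image can then be compared by this score.
```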

  8. Copula Based Post-processing Method for Hydrologic Ensemble Forecast

    NASA Astrophysics Data System (ADS)

    Duan, Q.; Li, W.

    2016-12-01

    Hydrologic forecasts often contain uncertainties from model inputs, model parameters, and model structure. Post-processing methods can be applied to the original hydrologic ensemble forecasts to correct bias and spread error. Most existing post-processing methods apply the Normal Quantile Transform (NQT) to transform the hydrologic variables to a normal distribution for convenient statistical inference. However, NQT-based algorithms suffer from several problems, such as extrapolation in the back-transform process. In this research, a copula-based post-processing method was developed. The copula function estimates the joint distribution of observations and model forecasts directly, so the conditional distribution of the observation given the model forecast can be obtained without the NQT. The proposed post-processing method was tested and compared with two other popular NQT-based methods, namely the Hydrologic Uncertainty Processor (HUP) and the General Linear Model Post-Processor (GLMPP), using the observation and simulation dataset from the Model Parameter Estimation Experiment (MOPEX) project. The results show that the drawbacks of NQT-based post-processing methods can be alleviated by the proposed algorithm. Suitable conditions and suggestions for applying the copula-based post-processing method to hydrologic ensemble forecasts are also provided.
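
    The abstract does not name the copula family, so the sketch below uses a Gaussian copula with empirical margins as one concrete choice, returning a predictive quantile of the observation given a new forecast; the function and variable names are illustrative.

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_copula_conditional(fc_hist, obs_hist, fc_new, q=0.5):
    """Predictive q-quantile of the observation given a new forecast, from a
    Gaussian copula fitted to historical (forecast, observation) pairs."""
    n = len(fc_hist)
    z_f = norm.ppf(rankdata(fc_hist) / (n + 1))   # normal scores of margins
    z_o = norm.ppf(rankdata(obs_hist) / (n + 1))
    rho = np.corrcoef(z_f, z_o)[0, 1]
    # Normal score of the new forecast via the historical forecast margin
    u_new = (np.searchsorted(np.sort(fc_hist), fc_new) + 0.5) / (n + 1)
    z_new = norm.ppf(np.clip(u_new, 1e-6, 1 - 1e-6))
    # Conditional Gaussian: z_o | z_f ~ N(rho * z_f, 1 - rho^2)
    z_q = rho * z_new + np.sqrt(1.0 - rho ** 2) * norm.ppf(q)
    # Back-transform through the empirical quantiles of the observations
    return np.quantile(obs_hist, norm.cdf(z_q))

# corrected_median = gaussian_copula_conditional(sim_hist, obs_hist, new_forecast)
```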

  9. A new ultrasound based method for rapid microorganism detection

    NASA Astrophysics Data System (ADS)

    Shukla, Shiva Kant; Segura, Luis Elvira; Sánchez, Carlos José Sierra; López, Pablo Resa

    2012-05-01

    A new method for the rapid detection of catalase-positive microorganisms using an ultrasonic measuring method is proposed in this work. The developed technique is based on the detection of oxygen bubbles produced by the hydrolysis of hydrogen peroxide induced by the enzyme catalase, which is present in many microorganisms. The bubbles are trapped in a medium based on agar gel which was especially developed for microbiological evaluation. It is found that microorganism concentrations of the order of 10^5 c.f.u./ml can be detected using this method. The results obtained show that the proposed method is competitive with other modern commercial methods such as ATP luminescence systems. The method can also be used for characterization of enzyme activity.

  10. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  11. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods to each sub-channel independently; the arithmetic or geometric mean of the DOAs estimated from each sub-channel then gives the final result. Channelization-TOPS measures the orthogonality between the signal and noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. Moreover, the parallel processing architecture makes it easy to implement in hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented, and experiments carried out in a microwave anechoic chamber with the wideband DAR demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
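
    A textbook incoherent-ISM pipeline of the kind Channelization-ISM builds on can be sketched in a few lines: channelize with an FFT filter bank, run narrowband MUSIC per sub-band, and average the per-band DOA estimates. The uniform linear array geometry, sensor spacing d, propagation speed C, and single-source peak search below are illustrative assumptions, not the paper's receiver design.

```python
import numpy as np

C = 3e8  # propagation speed (m/s)

def music_doa(X, f, d, n_src=1, grid=np.linspace(-90, 90, 361)):
    """Narrowband MUSIC at frequency f for a ULA with element spacing d;
    X is (n_sensors, n_snapshots) of complex sub-band samples."""
    M = X.shape[0]
    R = X @ X.conj().T / X.shape[1]
    _, vecs = np.linalg.eigh(R)          # eigenvalues in ascending order
    En = vecs[:, :M - n_src]             # noise subspace
    best, best_p = 0.0, -1.0
    for th in grid:
        a = np.exp(-2j * np.pi * f * d * np.arange(M) * np.sin(np.deg2rad(th)) / C)
        p = 1.0 / np.linalg.norm(En.conj().T @ a) ** 2
        if p > best_p:
            best, best_p = th, p
    return best

def channelized_ism(x, fs, band_bins, n_fft=256, d=0.05, n_src=1):
    """Incoherent ISM: FFT channelization, MUSIC per sub-band, mean fusion."""
    M, N = x.shape
    frames = x[:, :N - N % n_fft].reshape(M, -1, n_fft)
    S = np.fft.rfft(frames, axis=2)      # sub-band snapshots per FFT bin
    doas = [music_doa(S[:, :, b], b * fs / n_fft, d, n_src) for b in band_bins]
    return float(np.mean(doas))          # arithmetic-mean fusion (one option)
```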

  12. A Novel Method for Learner Assessment Based on Learner Annotations

    ERIC Educational Resources Information Center

    Noorbehbahani, Fakhroddin; Samani, Elaheh Biglar Beigi; Jazi, Hossein Hadian

    2013-01-01

    Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

  13. [Lossless ECG hybrid compression method based on JPEG 2000].

    PubMed

    Lu, Ying-Ying; Duan, Hui-Long; Lu, Xu-Dong

    2008-07-01

    After a study of the characteristics of ECG data, we propose in this paper a lossless compression method for ECG data based on JPEG2000, which integrates both 1D and 2D compression. The method has been verified on all forty-eight records in the MIT-BIH Arrhythmia database, and the results show that it achieves a better compression ratio with good computational efficiency.

  14. [Synchrotron-based characterization methods applied to ancient materials (I)].

    PubMed

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article presents the first results of a transdisciplinary research programme in heritage sciences. Building on the growing use and potential of synchrotron-based micro- and nano-characterization methods for studying ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution identifies and tests conceptual and methodological elements of convergence between the physicochemical and historical sciences.

  15. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, G.F.; Steindler, M.J.

    1985-05-21

    A method of removing a phosphorus-based poisonous substance from contaminated water is presented, in which the toxicity of the phosphorus-based substance is also subsequently destroyed. A water-immiscible organic solvent is first immobilized on a supported liquid membrane before the contaminated water is contacted with one side of the supported liquid membrane to absorb the phosphorus-based substance into the organic solvent. The other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react with the phosphorus-based solvated species and form a non-toxic product.

  16. Method of recovering oil-based fluid and apparatus

    SciTech Connect

    Brinkley, H.E.

    1993-07-20

    A method is described for recovering oil-based fluid from a surface having oil-based fluid thereon comprising the steps of: applying to the oil-based fluid on the surface an oil-based fluid absorbent cloth of man-made fibers, the cloth having at least one napped surface that defines voids therein, the nap being formed of raised ends or loops of the fibers; absorbing, with the cloth, oil-based fluid; feeding the cloth having absorbed oil-based fluid to a means for applying a force to the cloth to recover oil-based fluid; and applying force to the cloth to recover oil-based fluid therefrom using the force applying means.

  17. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1990-10-09

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  18. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.

    1990-01-01

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.

  19. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1987-10-07

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  20. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  1. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
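
    The paper's own algorithm is not given in the abstract, but the memory problem it attacks is easy to see: a naive implementation materializes an n x n distance matrix. A minimal sketch that bounds memory by processing coordinate blocks follows; the block size, function name, and the pair-counting statistic are our illustrative choices.

```python
import numpy as np

def pair_count_within(coords, radius, chunk=2048):
    """Count ordered pairs of points closer than `radius` without ever
    materializing the full n x n distance matrix: O(chunk * n) memory."""
    n = len(coords)
    total = 0
    for start in range(0, n, chunk):
        block = coords[start:start + chunk]                   # (b, 2)
        d2 = ((block[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
        total += int((d2 < radius ** 2).sum()) - len(block)   # drop self-pairs
    return total

# coords = np.random.rand(1_000_000, 2); pair_count_within(coords, 0.001)
```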

  2. Evaluation of read count based RNAseq analysis methods.

    PubMed

    Guo, Yan; Li, Chung-I; Ye, Fei; Shyr, Yu

    2013-01-01

    RNAseq technology is replacing microarray technology as the tool of choice for gene expression profiling. While RNAseq provides much richer data than microarrays, its analysis has been much more challenging, and to date there has been no consensus on the best approach for conducting robust RNAseq analysis. In this study, we designed a thorough experiment to evaluate six read count-based RNAseq analysis methods (DESeq, DEGseq, edgeR, NBPSeq, TSPM and baySeq) using both real and simulated data. We found that the six methods produce similar fold changes and reasonable overlap of differentially expressed genes based on p-values. However, all six methods suffer from over-sensitivity. Based on the evaluation of runtime using real data and the area under the receiver operating characteristic curve (AUC-ROC) using simulated data, we found that edgeR achieves a better balance between speed and accuracy than the other methods.

  3. Computer based safety training: an investigation of methods

    PubMed Central

    Wallen, E; Mulloy, K

    2005-01-01

    Background: Computer based methods are increasingly being used for training workers, although our understanding of how to structure this training has not kept pace with the changing abilities of computers. Information on a computer can be presented in many different ways and the style of presentation can greatly affect learning outcomes and the effectiveness of the learning intervention. Many questions about how adults learn from different types of presentations and which methods best support learning remain unanswered. Aims: To determine if computer based methods, which have been shown to be effective on younger students, can also be an effective method for older workers in occupational health and safety training. Methods: Three versions of a computer based respirator training module were developed and presented to manufacturing workers: one consisting of text only; one with text, pictures, and animation; and one with narration, pictures, and animation. After instruction, participants were given two tests: a multiple choice test measuring low level, rote learning; and a transfer test measuring higher level learning. Results: Participants receiving the concurrent narration with pictures and animation scored significantly higher on the transfer test than did workers receiving the other two types of instruction. There were no significant differences between groups on the multiple choice test. Conclusions: Narration with pictures and text may be a more effective method for training workers about respirator safety than other popular methods of computer based training. Further study is needed to determine the conditions for the effective use of this technology. PMID:15778259

  4. An overview of modal-based damage identification methods

    SciTech Connect

    Farrar, C.R.; Doebling, S.W.

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods, and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that do not depend on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms, including difficulties associated with their implementation and their fidelity. Past, current, and planned future applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  5. Provenance graph query method based on double layer index structure

    NASA Astrophysics Data System (ADS)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In order to solve the problems of low query efficiency and high resource occupancy in existing provenance graphs, and considering the relationship between the provenance information and the data itself as well as the internal structure of the provenance information, a provenance graph query method based on a double-layer index structure is proposed. First, we propose a two-layer index structure consisting of a global index based on a dictionary table and a local index based on bitmaps. The global index is used to locate the server nodes on which the provenance graph is stored, and the local index is used to query within each node. Finally, based on the double-layer index structure, a provenance graph query method is designed. The experimental results show that the proposed method not only improves query efficiency but also reduces the waste of memory resources.
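
    A toy sketch of the two-layer idea follows; the class and method names are illustrative, not the paper's design. A dictionary-table global index maps an entity to the server nodes holding its provenance, and each node keeps a bitmap local index marking which stored records mention the entity.

```python
class GlobalIndex:
    """Dictionary-table layer: entity id -> set of server node ids."""
    def __init__(self):
        self.table = {}

    def add(self, entity, node):
        self.table.setdefault(entity, set()).add(node)

    def nodes_for(self, entity):
        return self.table.get(entity, set())

class NodeLocalIndex:
    """Bitmap layer on one node: entity id -> int bitmap over record slots."""
    def __init__(self):
        self.bitmaps = {}

    def mark(self, entity, record_slot):
        self.bitmaps[entity] = self.bitmaps.get(entity, 0) | (1 << record_slot)

    def records_for(self, entity):
        bm, slot = self.bitmaps.get(entity, 0), 0
        while bm:
            if bm & 1:
                yield slot
            bm >>= 1
            slot += 1

# A query first asks the global index which nodes to visit; each node's bitmap
# index (cheap to AND/OR for multi-entity queries) then selects record slots.
```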

  6. Integrated navigation method based on inertial navigation system and Lidar

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on an inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment is considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on this analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of the traditional integrated navigation method and the DR method, the proposed method achieved higher navigation precision; the IMU scale factor and misalignment errors were effectively compensated, showing that the new integrated navigation method is valid.

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
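
    The beam study lends itself to a tiny worked illustration. For a tip-mass-dominated cantilever idealized as a spring-mass system, the sensitivity equation df/dm = -f/(2m) integrates in closed form, exactly as the DEB method prescribes, and can be compared against the linear Taylor line; the numbers below are our own, not the paper's test data.

```python
import numpy as np

k, m0 = 1.0e4, 2.0                     # stiffness (N/m), baseline tip mass (kg)
f = lambda m: np.sqrt(k / m) / (2 * np.pi)
f0, df_dm = f(m0), -f(m0) / (2 * m0)   # response and sensitivity at baseline

for dm in (0.2, 0.6, 1.0):             # grow the perturbation
    m = m0 + dm
    taylor = f0 + df_dm * dm           # linear Taylor approximation
    deb = f0 * np.sqrt(m0 / m)         # closed-form solution of df/dm = -f/(2m)
    print(f"dm={dm}: exact={f(m):.4f}  DEB={deb:.4f}  Taylor={taylor:.4f}")

# DEB reproduces the exact frequency here because the sensitivity ODE is exact
# for this response, while the Taylor line drifts as the perturbation grows.
```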

  8. The Reality-Based Learning Method: A Simple Method for Keeping Teaching Activities Relevant and Effective

    ERIC Educational Resources Information Center

    Smith, Louise W.; Van Doren, Doris C.

    2004-01-01

    Active and experiential learning theory have not dramatically changed collegiate classroom teaching methods, although they have long been included in the pedagogical literature. This article presents an evolved method, reality-based learning, that aids professors in including active learning activities with feelings of clarity and confidence. The…

  9. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of the evaluation methods used in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigms (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability) and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it might not be useful to consider the evaluation methods of the qualitative paradigm in isolation from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  10. Fast simulation method for airframe analysis based on big data

    NASA Astrophysics Data System (ADS)

    Liu, Dongliang; Zhang, Lixin

    2016-10-01

    In this paper, we apply the big data method to structural analysis by considering the correlations between loads and loads, between loads and results, and between results and results. By means of fundamental mathematics and physical rules, the principle, feasibility, and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated by two examples, and the results show that the fast simulation method based on big data is fast and precise when applied to structural analysis.

  11. Energy-Based Acoustic Source Localization Methods: A Survey.

    PubMed

    Meng, Wei; Xiao, Wendong

    2017-02-15

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits and to point out possible future research directions.
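
    As one concrete instance of the NLS family the survey covers, the sketch below fits a source position to received energies under the standard energy-decay model E_i ≈ g / ||s - p_i||^alpha; the decay exponent, gain handling, and initialization are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

def locate_source(sensor_xy, energy, alpha=2.0):
    """Energy-based NLS localization: jointly fit source position and gain."""
    def residuals(theta):
        s, g = theta[:2], theta[2]
        d = np.linalg.norm(sensor_xy - s, axis=1)
        return g / d ** alpha - energy
    x0 = np.r_[sensor_xy.mean(axis=0), energy.max()]   # crude initialization
    return least_squares(residuals, x0).x[:2]

# Synthetic check: 6 sensors, source at (3, 4), gain 50
rng = np.random.default_rng(0)
sensors = rng.uniform(0, 10, size=(6, 2))
e = 50.0 / np.linalg.norm(sensors - np.array([3.0, 4.0]), axis=1) ** 2
print(locate_source(sensors, e))   # ~ [3. 4.]
```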

  12. Energy-Based Acoustic Source Localization Methods: A Survey

    PubMed Central

    Meng, Wei; Xiao, Wendong

    2017-01-01

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits and to point out possible future research directions. PMID:28212281

  13. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  14. Group-Based Image Retrieval Method for Video Image Annotation

    NASA Astrophysics Data System (ADS)

    Murabayashi, Noboru; Kurahashi, Setsuya; Yoshida, Kenichi

    This paper proposes a group-based image retrieval method for video image annotation systems. Although the widespread use of video camera recorders has increased the demand for automated annotation systems for personal videos, conventional image retrieval methods cannot achieve enough accuracy to be used as an annotation engine. Recording conditions, such as brightness changes due to weather, shadows cast by the surroundings, and so on, affect the quality of images recorded by personal video camera recorders, and the degraded images make the retrieval task difficult. Furthermore, it is difficult to discriminate similar images without auxiliary information. To cope with these difficulties, this paper proposes a group-based image retrieval method. Its characteristics are 1) the use of image similarity based on wavelet-transform-based features and scale-invariant feature transform based features, and 2) the pre-grouping of related images and screening using group information. Experimental results show that the proposed method improves image retrieval accuracy to 90%, up from 40% for the conventional method.

  15. A Minimum Spanning Tree Based Method for UAV Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wei, Zheng; Cui, Weihong; Lin, Zhiyong

    2016-06-01

    This paper proposes a Minimum Spanning Tree (MST) based image segmentation method for UAV images of coastal areas. An edge-weight-based optimal criterion (merging predicate) is defined, based on statistical learning theory (SLT), and a scale control parameter is used to control the segmentation scale. Experiments on high-resolution UAV images of a coastal area show that the proposed merging predicate preserves the integrity of objects and prevents over-segmentation. The segmentation results prove the method's efficiency in segmenting richly textured images while maintaining good object boundaries.
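
    As a readily available point of comparison, Felzenszwalb's graph-based segmentation — a closely related greedy merge over MST-ordered edges with an adaptive predicate and an explicit scale parameter — ships with scikit-image. This is an off-the-shelf stand-in, not the authors' SLT-derived predicate, and the file path is hypothetical.

```python
from skimage import io
from skimage.segmentation import felzenszwalb

img = io.imread("uav_coastal.png")   # hypothetical UAV image
# `scale` plays the role of the scale control parameter described above.
labels = felzenszwalb(img, scale=200, sigma=0.8, min_size=50)
print(labels.max() + 1, "segments")
```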

  16. A hybrid method for pancreas extraction from CT image based on level set methods.

    PubMed

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require locating the initial contour near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to address the level set method's sensitivity to the initial contour location, and a modified distance-regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcoming of over-segmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and less false segmentation in pancreas extraction.
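
    A rough stand-in for the seed-then-refine pipeline can be assembled from scikit-image primitives: a tolerance-based flood fill for the crude initial region, refined by a morphological Chan-Vese level set. This is not the paper's customized fast-marching or distance-regularized formulation, and the seed point, tolerance, and iteration count are illustrative.

```python
from skimage.segmentation import flood, morphological_chan_vese

def extract_region(ct_slice, seed, tol=40, n_iter=80):
    """Seeded rough region (flood fill), then level-set refinement."""
    init = flood(ct_slice, seed, tolerance=tol)   # boolean seed region
    return morphological_chan_vese(ct_slice, n_iter,
                                   init_level_set=init, smoothing=2)

# ct = 2-D CT slice as a float array; seed = (row, col) inside the pancreas
# mask = extract_region(ct, seed=(210, 260))
```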

  17. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state, and the full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.

  18. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state, and the full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  19. MProfiler: A Profile-Based Method for DNA Motif Discovery

    NASA Astrophysics Data System (ADS)

    Altarawy, Doaa; Ismail, Mohamed A.; Ghanem, Sahar M.

    Motif finding is one of the most important tasks in gene regulation, which is essential to understanding biological cell functions. Based on recent studies, the performance of current motif finders is not satisfactory, and a number of ensemble methods have been proposed to enhance the accuracy of the results. The overall performance of existing ensemble methods is better than that of stand-alone motif finders, and a recent ensemble method, MotifVoter, significantly outperforms all existing stand-alone and ensemble methods. In this paper, we propose a method, MProfiler, to increase the accuracy of MotifVoter without increasing the run time by introducing an idea called center profiling. Our experiments show improvement in the quality of the generated clusters over MotifVoter in both accuracy and cluster compactness. Over 56 datasets, the accuracy of the final results using our method achieves an 80% improvement in the correlation coefficient nCC and a 93% improvement in the performance coefficient nPC over MotifVoter.

  20. Scientific method by using project method in acid, base and salt material

    NASA Astrophysics Data System (ADS)

    Febriana, Beta Wulan; Arlianty, Widinda Normalia; Diniaty, Artina

    2017-03-01

    This study aims to determine the effect of the scientific method combined with the project method on students' achievement. The research was conducted at SMPN 2 Karanganyar using a descriptive quantitative method. Samples were taken from two classes using cluster random sampling, and data were obtained from cognitive instruments representing pretest and posttest values. The data were analyzed using descriptive analysis techniques. The results show that in the class taught with the scientific method combined with the project method, 37.50% of students reached high achievement, 37.50% moderate, and 4.16% very low. In contrast, in the class taught with the scientific method alone, the corresponding figures were 33.3% (high), 8.33% (moderate), and 20.83% (very low).

  1. Therapy Decision Support Based on Recommender System Methods

    PubMed Central

    Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen

    2017-01-01

    We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely a Collaborative Recommender and a Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and recommend the therapy which is assumed to provide the best outcome for a specific patient at a specific time, that is, consultation. The proposed methods are evaluated using a clinical database of patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and higher recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system.
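
    The collaborative idea can be sketched as neighborhood-based prediction over a patient-by-therapy outcome matrix; the matrix layout, cosine similarity, and k-neighbor weighting are illustrative assumptions. The sparsity limitation noted above shows up here as patients with no overlapping observed therapies.

```python
import numpy as np

def predict_response(outcomes, target, therapy, k=5):
    """`outcomes`: (patients x therapies) outcome scores, NaN = untried.
    Predict the target patient's response to `therapy` from the k most
    similar patients (cosine over co-observed therapies) who tried it."""
    candidates = np.where(~np.isnan(outcomes[:, therapy]))[0]
    sims = []
    for p in candidates:
        both = ~np.isnan(outcomes[p]) & ~np.isnan(target)
        if not both.any():
            sims.append(0.0)
            continue
        a, b = outcomes[p, both], target[both]
        sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    order = np.argsort(sims)[-k:]
    top, w = candidates[order], np.maximum(np.array(sims)[order], 1e-6)
    return np.average(outcomes[top, therapy], weights=w)

# Recommend by evaluating predict_response for each candidate therapy
# and picking the one with the best predicted outcome.
```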

  2. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  3. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order neighboring vertices are fitted to a cubic surface using the least squares method. Then, taking the locally fitted surface as the PSO search region and the best average quality of the local triangles as the objective, the vertex position is adjusted. Finally, a threshold on the normal angle between the original and adjusted vertices is used to determine whether a vertex should actually be moved, so as to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
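
    A minimal generic PSO of the kind the vertex-adjustment step relies on is easy to write down; here box bounds stand in for the locally fitted surface patch, and the parameter values are conventional defaults, not the paper's settings.

```python
import numpy as np

def pso_minimize(f, lo, hi, n_particles=30, n_iter=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimization over the box [lo, hi]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, *x.shape))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g

# In the mesh setting, f would score the (negated) average quality of the
# triangles incident to a vertex placed at p on the fitted surface patch.
print(pso_minimize(lambda p: (p ** 2).sum(), np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```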

  4. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  5. Simple noise-reduction method based on nonlinear forecasting.

    PubMed

    Tan, James P L

    2017-03-01

    Nonparametric detrending or noise reduction methods are often employed to separate trends from noisy time series when no satisfactory models exist to fit the data. However, conventional noise reduction methods depend on subjective choices of smoothing parameters. Here we present a simple multivariate noise reduction method based on available nonlinear forecasting techniques, which are in turn based on state-space reconstruction, for which a strong theoretical justification exists in nonparametric forecasting. The noise reduction method presented here is conceptually similar to Schreiber's noise reduction method using state-space reconstruction; however, we show that Schreiber's method has a minor flaw that can be overcome with forecasting. Furthermore, our method contains a simple but nontrivial extension to multivariate time series. We apply the method to multivariate time series generated from the Van der Pol oscillator, the Lorenz equations, the Hindmarsh-Rose model of neuronal spiking activity, and to two other univariate real-world data sets. It is demonstrated that noise reduction heuristics can be objectively optimized with in-sample forecasting errors that correlate well with actual noise reduction errors.

  6. Simple noise-reduction method based on nonlinear forecasting

    NASA Astrophysics Data System (ADS)

    Tan, James P. L.

    2017-03-01

    Nonparametric detrending or noise reduction methods are often employed to separate trends from noisy time series when no satisfactory models exist to fit the data. However, conventional noise reduction methods depend on subjective choices of smoothing parameters. Here we present a simple multivariate noise reduction method based on available nonlinear forecasting techniques, which are in turn based on state-space reconstruction, for which a strong theoretical justification exists in nonparametric forecasting. The noise reduction method presented here is conceptually similar to Schreiber's noise reduction method using state-space reconstruction; however, we show that Schreiber's method has a minor flaw that can be overcome with forecasting. Furthermore, our method contains a simple but nontrivial extension to multivariate time series. We apply the method to multivariate time series generated from the Van der Pol oscillator, the Lorenz equations, the Hindmarsh-Rose model of neuronal spiking activity, and to two other univariate real-world data sets. It is demonstrated that noise reduction heuristics can be objectively optimized with in-sample forecasting errors that correlate well with actual noise reduction errors.

  7. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substance is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be phosphoryl or a thiophosphoryl.

  8. Multilayer neural network models based on grid methods

    NASA Astrophysics Data System (ADS)

    Lazovskaya, T.; Tarkhov, D.

    2016-11-01

    The article discusses the building of hybrid models that relate classical numerical methods for solving ordinary and partial differential equations to the universal neural network approach being developed by D. Tarkhov and A. Vasilyev. Different ways of constructing multilayer neural network structures based on grid methods are considered, and a technique for building a continuous approximation using one simple modification of classical schemes is presented. The introduction of non-linear relationships into the classical models, with and without posterior learning, is investigated, and numerical experiments are conducted.

  9. Review of atom probe FIB-based specimen preparation methods.

    PubMed

    Miller, Michael K; Russell, Kaye F; Thompson, Keith; Alvis, Roger; Larson, David J

    2007-12-01

    Several FIB-based methods that have been developed to fabricate needle-shaped atom probe specimens from a variety of specimen geometries and site-specific regions are reviewed. These methods have enabled electronic device structures to be characterized. The atom probe may be used to quantify the level and range of gallium implantation, and it has been demonstrated that the use of low accelerating voltages during the final stages of milling can dramatically reduce the extent of gallium implantation.

  10. LINEAR SCANNING METHOD BASED ON THE SAFT COARRAY

    SciTech Connect

    Martin, C. J.; Martinez-Graullera, O.; Romero, D.; Ullate, L. G.; Higuti, R. T.

    2010-02-22

    This work presents a method to obtain B-scan images based on linear array scanning and 2R-SAFT. This technique offers several advantages: the ultrasonic system is very simple; it avoids the grating lobe formation characteristic of conventional SAFT; and the subaperture size and focusing lens (to compensate emission-reception) can be adapted dynamically to every image point. The proposed method has been experimentally tested in the inspection of CFRP samples.

  11. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2012-01-01

    The design of modern mechatronic products is knowledge-intensive engineering centered on information processing; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role of design knowledge and the features of information management for mechatronic products, a unified model of an XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based representations are proposed for product function elements, product structure elements, and the mapping relationships between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.
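
    As a rough illustration of the kind of representation described above, the sketch below uses Python's standard ElementTree; the tag and attribute names are hypothetical, since the record does not publish the actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: <product> holds <function>, <structure> and <map>
# elements; <map> links a function id to a structure id.
doc = """
<product name="parallel-friction-roller">
  <function id="F1" desc="transmit torque"/>
  <structure id="S1" desc="roller pair"/>
  <map function="F1" structure="S1"/>
</product>
"""
root = ET.fromstring(doc)
funcs = {f.get("id"): f.get("desc") for f in root.iter("function")}
structs = {s.get("id"): s.get("desc") for s in root.iter("structure")}
for m in root.iter("map"):                 # function-structure mapping
    print(funcs[m.get("function")], "->", structs[m.get("structure")])
```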

  12. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2011-12-01

    The design of modern mechatronic products is knowledge-intensive engineering centered on information processing; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role of design knowledge and the features of information management for mechatronic products, a unified model of an XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based representations are proposed for product function elements, product structure elements, and the mapping relationships between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.

  13. A Method for Industrial Base Analysis: An Aerospace Case Study

    DTIC Science & Technology

    1993-01-01

    industrial base to build advanced weapon systems in the quantities required and at a reasonable price. A particular concern is the health of the subcontractor industrial base, which provides critical parts and technologies to prime contractors and is less visible than major prime contractors. This report presents a method to assess the impact of DoD budget cuts on both prime contractors and their first-tier subcontractors. The method considers supplier-contractor relationships, by weapon system, and builds a time profile of...sector and/or on the

  14. Fault Diagnosis Method of Fault Indicator Based on Maximum Probability

    NASA Astrophysics Data System (ADS)

    Yin, Zili; Zhang, Wei

    2017-05-01

    In order to solve the problem of distribution network fault diagnosis when fault indicator information is misreported or missing, the characteristics of the fault indicator are analyzed, and the concept of the minimum fault judgment area of the distribution network is developed. On this basis, a mathematical model for fault diagnosis from fault indicators is established. The characteristics of fault indicator signals are analyzed, and, based on the two-in-three principle, a probabilistic method for processing combined fault indicator signals is proposed. Combining the minimum fault judgment area model, the combined fault indicator signals, and the interdependence between fault indicators, a fault diagnosis method based on maximum probability is proposed. The method rests on the similarity between the simulated fault signal and the real fault signal, and a detailed formula is given. The method has good fault tolerance when fault indicator information is misreported or missing, so it can determine the fault area more accurately. The probability of each area is given, and fault alternatives are provided. The proposed approach is feasible and valuable for dispatching and maintenance personnel dealing with faults.

  15. Robust wrapping-free phase retrieval method based on weighted least squares method

    NASA Astrophysics Data System (ADS)

    Wang, Minmin; Zhou, Canlin; Si, Shuchun; Li, XiaoLei; Lei, Zhenkun; Li, YanJie

    2017-10-01

    Phase unwrapping is one of the most challenging processes in many profilometry techniques. To sidestep the phase unwrapping process, Perciante et al. (2015) proposed a wrapping-free method that retrieves the phase by direct integration of the spatial derivatives of the fringe patterns. However, this method is only applicable to objects with phase continuity, so it may fail on fringe patterns containing complicated singularities such as noise, shadows, shears and surface discontinuities. In light of these problems, a robust wrapping-free phase retrieval method is proposed that is based on the combined use of Perciante's method and the weighted least squares method. Two partial derivatives of the desired phase are obtained from the fringe patterns, while the carrier is eliminated using the direct phase difference method. The phase singularities are located using a derivative variance correlation map (DVCM), and the weighting coefficients are obtained from the binary mask of the reverse DVCM. Simulations and experiments are conducted to prove the validity of the proposed method. The results are analyzed and compared with those of Perciante's method, demonstrating that in addition to retaining the advantage of sidestepping the phase unwrapping process, the proposed method can measure objects with several types of singularity sources.
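
    The weighted least-squares integration step can be sketched as follows: the phase is recovered from its two partial derivatives by solving a sparse weighted linear system, with binary weights standing in for the reverse-DVCM mask; the grid size and the quadratic test phase are illustrative assumptions.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def wls_integrate(gx, gy, w):
    """Weighted least-squares integration of a gradient field.
    gx: differences along x, shape (ny, nx-1); gy: differences along y,
    shape (ny-1, nx); w: per-pixel weights, shape (ny, nx)."""
    ny, nx = w.shape
    idx = lambda i, j: i * nx + j
    n_eq = ny * (nx - 1) + (ny - 1) * nx
    A = lil_matrix((n_eq, ny * nx))
    b = np.zeros(n_eq)
    r = 0
    for i in range(ny):                       # x-direction equations
        for j in range(nx - 1):
            A[r, idx(i, j + 1)] = w[i, j]
            A[r, idx(i, j)] = -w[i, j]
            b[r] = w[i, j] * gx[i, j]
            r += 1
    for i in range(ny - 1):                   # y-direction equations
        for j in range(nx):
            A[r, idx(i + 1, j)] = w[i, j]
            A[r, idx(i, j)] = -w[i, j]
            b[r] = w[i, j] * gy[i, j]
            r += 1
    phi = lsqr(A.tocsr(), b)[0]
    return phi.reshape(ny, nx)

# toy check: a smooth quadratic phase is recovered up to a constant
yy, xx = np.mgrid[0:32, 0:32].astype(float)
phi_true = 0.01 * (xx**2 + yy**2)
gx = phi_true[:, 1:] - phi_true[:, :-1]
gy = phi_true[1:, :] - phi_true[:-1, :]
phi = wls_integrate(gx, gy, np.ones((32, 32)))
phi += phi_true.mean() - phi.mean()           # fix the free constant
print(np.abs(phi - phi_true).max())
```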

  16. Global gravimetric geoid model based on a new method

    NASA Astrophysics Data System (ADS)

    Shen, W. B.; Han, J. C.

    2012-04-01

    The geoid, defined as the equipotential surface nearest to the mean sea level, plays a key role in physical geodesy and in the unification of height datum systems. In this study, we introduce a new method, quite different from the conventional geoid modeling methods (e.g., the Stokes method, the Molodensky method), to determine the global gravimetric geoid (GGG). Based on the new method, and using the database of the external Earth gravity field model EGM2008, the digital topographic model DTM2006.0, and the crust density distribution model CRUST2.0, we first determined the inner geopotential field down to a depth D, and then established a GGG model, the accuracy of which is evaluated by comparison with observations from the USA, Australia, parts of Canada, and parts of China. The main idea of the new method is as follows. Given the geopotential field (e.g., EGM2008) outside the Earth, we may determine the inner geopotential field down to the depth D by Newtonian integration, once the density distribution model (e.g., CRUST2.0) of a shallow layer down to the depth D is given. Then, based on the definition of the geoid (i.e., the equipotential surface nearest to the mean sea level), one may determine the GGG. This study is supported by the Natural Science Foundation of China (grants No. 40974015, No. 41174011, No. 41021061, and No. 41128003).

  17. How to Reach Evidence-Based Usability Evaluation Methods.

    PubMed

    Marcilly, Romaric; Peute, Linda

    2017-01-01

    This paper discusses how and why to build evidence-based knowledge on usability evaluation methods. At each step of building evidence, the prerequisites and difficulties of achieving it are highlighted. Specifically, the paper presents how usability evaluation studies should be designed to allow evidence to be capitalized. Reciprocally, it presents how evidence-based usability knowledge will help improve usability practice. Finally, it underlines that evaluation and evidence participate in a virtuous circle that will help improve both scientific knowledge and evaluation practice.

  18. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main modes of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for product innovation are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed that includes requirement analysis, total function analysis and decomposition, engineering problem analysis, solution finding for the engineering problem, and preliminary design; this establishes the basis for the innovative redesign of existing products.

  19. Biorthogonal wavelet-based method of moments for electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhang, Qinke

    Wavelet analysis is a technique developed in recent years in mathematics and has found use in signal processing and many other engineering areas. The practical use of wavelets for the solution of partial differential and integral equations in computational electromagnetics is investigated in this dissertation, with the emphasis on the development of a biorthogonal wavelet-based method of moments for the solution of electric and magnetic field integral equations. The fundamentals and numerical analysis aspects of wavelet theory have been studied. In particular, a family of compactly supported biorthogonal spline wavelet bases on the n-cube (0,1)^n has been studied in detail. The wavelet bases were used in this work as a building block to construct biorthogonal wavelet bases on general domain geometries. A specific and practical way of adapting the wavelet bases to certain n-dimensional blocks or elements is proposed, based on the domain decomposition and local transformation techniques used in traditional finite element methods and computer-aided graphics. The element, with the biorthogonal wavelet basis embedded in it, is called a wavelet element in this work. The physical domains which can be treated with this method include general curves, surfaces in 2D and 3D, and 3D volume domains. A two-step mapping is proposed for the purpose of taking full advantage of the zero moments of wavelets. The wavelet element approach appears to offer several important advantages. It avoids the need to generate the very complicated meshes required in traditional finite-element-based methods, and it makes adaptive analysis easy to implement. A specific implementation procedure for performing adaptive analysis is proposed. The proposed biorthogonal wavelet-based method of moments (BWMoM) has been implemented using object-oriented programming techniques. The main computational issues have been detailed, discussed, and implemented in the whole package. Numerical examples show

  20. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.

  1. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for establishing trust and managing risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It solves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. Finally, a case analysis of China Garment Network is provided for illustrative purposes.
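
    A small sketch of the AHP half of such a method, assuming a standard Saaty-style pairwise comparison matrix over three hypothetical credit indicators (the comparison values are invented, not from the paper): the priority weights come from the principal eigenvector, and the consistency ratio checks whether the judgments are coherent.

```python
import numpy as np

# Pairwise comparisons of 3 hypothetical indicators (illustrative values)
A = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)                   # principal eigenpair
w = np.abs(vecs[:, k].real)
w /= w.sum()                               # normalized priority weights

n = A.shape[0]
CI = (vals.real[k] - n) / (n - 1)          # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("weights:", w, "CR:", CI / RI)       # CR < 0.1 => acceptable
```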

  2. Sonoclot(®)-based method to detect iron enhanced coagulation.

    PubMed

    Nielsen, Vance G; Henderson, Jon

    2016-07-01

    Thrombelastographic methods have been recently introduced to detect iron mediated hypercoagulability in settings such as sickle cell disease, hemodialysis, mechanical circulatory support, and neuroinflammation. However, these inflammatory situations may have heme oxygenase-derived, coexistent carbon monoxide present, which also enhances coagulation as assessed by the same thrombelastographic variables that are affected by iron. This brief report presents a novel, Sonoclot-based method to detect iron enhanced coagulation that is independent of carbon monoxide influence. Future investigation will be required to assess the sensitivity of this new method to detect iron mediated hypercoagulability in clinical settings compared to results obtained with thrombelastographic techniques.

  3. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined together, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.

  4. A method for density estimation based on expectation identities

    NASA Astrophysics Data System (ADS)

    Peralta, Joaquín; Loyola, Claudia; Loguercio, Humberto; Davis, Sergio

    2017-06-01

    We present a simple and direct method for non-parametric estimation of a one-dimensional probability density, based on the application of the recent conjugate variables theorem. The method expands the logarithm of the probability density ln P(x|I) in terms of a complete basis and numerically solves for the coefficients of the expansion using a linear system of equations. No Monte Carlo sampling is needed. We present preliminary results that show the practical usefulness of the method for modeling statistical data.

  5. A new dataset evaluation method based on category overlap.

    PubMed

    Oh, Sejong

    2011-02-01

    The quality of a dataset has a profound effect on classification accuracy, and there is a clear need for some method to evaluate this quality. In this paper, we propose a new dataset evaluation method using the R-value measure. The proposed method is based on the ratio of overlapping areas among categories in a dataset. A high R-value for a dataset indicates that the dataset contains wide overlapping areas among its categories, and classification accuracy on the dataset may be low. We can use the R-value measure to understand the characteristics of a dataset, guide the feature selection process, and support the proper design of new classifiers.

  6. Sterility Test Method for Petrolatum-Based Ophthalmic Ointments

    PubMed Central

    Tsuji, Kiyoshi; Stapert, E. M.; Robertson, John H.; Waiyaki, Peter M.

    1970-01-01

    A sensitive sterility testing procedure for the detection of microbial contamination in petrolatum-based ointments is described. The method involves dissolving the ointment in filter-sterilized isopropyl myristate and filtering through a membrane filter. Improved sensitivity is obtained by blending the membrane in Trypticase Soy Broth before incubation. Filter-sterilized isopropyl myristate is shown to be less toxic to microorganisms than heat-sterilized isopropyl myristate. The isopropyl myristate method is more sensitive than the polyethylene glycol-ether method for the detection of microbial contamination. PMID:4991924

  7. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test. © 2015 WILEY PERIODICALS, INC.
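
    A minimal permutation sketch of the ARTP construction, assuming per-SNP p-values are already in hand and using uniform draws as stand-ins for the genotype-permutation null (the real test derives the null p-values by permuting phenotypes); the truncation points are illustrative.

```python
import numpy as np

def artp(p_obs, p_null, ks=(1, 5, 10)):
    """Adaptive rank truncated product. p_obs: length-m vector of per-SNP
    p-values; p_null: (B, m) p-values under the null. The same B
    permutations serve both adjustment levels."""
    B = p_null.shape[0]

    def log_w(p, k):               # log of the product of the k smallest p
        return np.sum(np.log(np.sort(p)[:k]))

    stats_obs = np.array([log_w(p_obs, k) for k in ks])
    stats_null = np.array([[log_w(p_null[b], k) for k in ks]
                           for b in range(B)])
    # per-truncation-point adjusted p-values (smaller statistic = stronger)
    p_k_obs = (stats_null <= stats_obs).mean(axis=0)
    ranks = np.argsort(np.argsort(stats_null, axis=0), axis=0) + 1
    p_k_null = ranks / B
    # ARTP statistic: best adjusted p over truncation points, re-adjusted
    t_obs, t_null = p_k_obs.min(), p_k_null.min(axis=1)
    return (t_null <= t_obs).mean()

rng = np.random.default_rng(0)
m, B = 50, 999
p_obs = rng.uniform(size=m)
p_obs[:3] = 1e-3                   # a few truly associated SNPs
print("ARTP p-value:", artp(p_obs, rng.uniform(size=(B, m))))
```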

  8. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse which is the iso-range for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Thirdly, the target function is optimized using the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031

  9. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse which is the iso-range for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by adding all model functions. Thirdly, the target function is optimized using the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments.

  10. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR. PMID:26781194
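
    A noiseless sketch of matrix pencil pole estimation for one subband, assuming a simple Hankel pencil and exact exponential data; in practice an SVD truncation step adds robustness in noise. The pole frequencies and signal length below are illustrative.

```python
import numpy as np

def matrix_pencil_poles(x, M, L=None):
    """Estimate M poles of x[n] = sum_i a_i * z_i**n via the matrix
    pencil formed from a Hankel data matrix (noiseless sketch)."""
    N = len(x)
    L = L or N // 2                                   # pencil parameter
    Y = np.array([x[i:i + L + 1] for i in range(N - L)])
    Y0, Y1 = Y[:, :-1], Y[:, 1:]
    lam = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    return lam[np.argsort(-np.abs(lam))[:M]]          # M dominant poles

# two-pole subband signal on the unit circle
n = np.arange(64)
z_true = np.exp(2j * np.pi * np.array([0.10, 0.22]))
x = (z_true[None, :] ** n[:, None]).sum(axis=1)
print(np.sort_complex(matrix_pencil_poles(x, M=2)))
print(np.sort_complex(z_true))
```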

  11. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method obtains better fusion results at low SNR.

  12. An agarose-gel based method for transporting cell lines.

    PubMed

    Yang, Lingzhi; Li, Chufang; Chen, Ling; Li, Zhiyuan

    2009-12-16

    Shipping cryopreserved cells in dry ice or liquid nitrogen is the classical method for transporting cells between research laboratories in different cities around the world while maintaining cell viability. An alternative method is to ship live cells in flasks filled with cell culture medium. Both methods have limitations: the former requires a special shipping container, and the latter gives the cells only a short time to survive the shipping process. We have recently developed an agarose gel based method for directly transporting live adherent cells in cell culture plates or dishes at ambient temperature. This convenient method simplifies the long-distance transportation of live cells and can maintain good cell viability for several days.

  13. A Novel Camera Calibration Method Based on Polar Coordinate

    PubMed Central

    Gai, Shaoyan; Da, Feipeng; Fang, Xu

    2016-01-01

    A novel calibration method based on polar coordinates is proposed. The world coordinates are expressed in the form of polar coordinates, which are converted to rectangular world coordinates during the calibration process. First, the calibration points are obtained in polar coordinates. By transformation between polar and rectangular coordinates, the points are converted to rectangular form. Then, the points are matched with the corresponding image coordinates. Finally, the parameters are obtained by objective function optimization. With the proposed method, the relationships between objects and cameras are expressed easily in polar coordinates. It is suitable for multi-camera calibration. Cameras can be calibrated with fewer points, and the calibration images can be positioned according to the location of the cameras. The experimental results demonstrate that the proposed method is an efficient calibration method with which cameras can be calibrated conveniently and with high accuracy. PMID:27798651

  14. Method of coating an iron-based article

    SciTech Connect

    Magdefrau, Neal; Beals, James T.; Sun, Ellen Y.; Yamanis, Jean

    2016-11-29

    A method of coating an iron-based article includes a first heating step of heating a substrate that includes an iron-based material in the presence of an aluminum source material and halide diffusion activator. The heating is conducted in a substantially non-oxidizing environment, to cause the formation of an aluminum-rich layer in the iron-based material. In a second heating step, the substrate that has the aluminum-rich layer is heated in an oxidizing environment to oxidize the aluminum in the aluminum-rich layer.

  15. A New Coronal Loop Identification Method Based on Phase Congruency

    NASA Astrophysics Data System (ADS)

    Li, Hong-bo; Zhao, Ming-yu; Liu, Yu

    2017-07-01

    We have applied image enhancement by phase congruency to the identification of coronal loop structures and propose a new coronal loop identification method (called the PCB method) based on the phase-congruency-enhanced image. Because the morphology varies smoothly along a coronal loop, the propelling direction of the loop identification is restricted to a small range to improve the identification result. In addition, inspired by the structural characteristics of coronal loops, we suggest for the first time that both the variation of the propelling direction during identification and the magnitude of phase congruency at the identified point be taken together as the criterion to terminate the identification. Finally, several coronal images are used to test our coronal loop identification method, and the results indicate that the phase-congruency-enhanced image is well suited to coronal loop identification, and that the loop structures identified by the PCB method achieve both good completeness and high accuracy; hence, the PCB method is a practical and feasible method for automated coronal loop identification.

  16. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
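
    As a minimal static-surface illustration of the closest point method (the moving-surface GBPM coupling in the paper is beyond a short sketch), the following solves the heat equation on the unit circle embedded in a 2D Cartesian grid; the grid spacing, tube width, and the initial data cos(θ), whose surface solution decays like e^(-t), are illustrative choices.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

h = 0.05
g = np.arange(-2, 2 + h, h)                  # embedding-space grid
X, Y = np.meshgrid(g, g, indexing="ij")
R = np.maximum(np.sqrt(X**2 + Y**2), 1e-9)   # avoid 0/0 at the origin
cp = np.stack([(X / R).ravel(), (Y / R).ravel()], axis=1)  # closest points
band = np.abs(np.sqrt(X**2 + Y**2) - 1.0) < 0.2            # computational tube

u = np.cos(np.arctan2(Y, X))                 # initial data, cp-extended
dt = 0.1 * h**2                              # explicit stability limit
for step in range(200):
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:]
                       + u[1:-1, :-2] - 4 * u[1:-1, 1:-1]) / h**2
    u = u + dt * lap                         # heat step in the embedding space
    # closest point extension: re-read values at the closest surface points
    u = RegularGridInterpolator((g, g), u)(cp).reshape(u.shape)

t = 200 * dt
print(u[band].max(), np.exp(-t))             # cos(theta) decays like e^{-t}
```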

  17. Acoustic radiation force-based elasticity imaging methods

    PubMed Central

    Palmeri, Mark L.; Nightingale, Kathryn R.

    2011-01-01

    Conventional diagnostic ultrasound images portray differences in the acoustic properties of soft tissues, whereas ultrasound-based elasticity images portray differences in the elastic properties of soft tissues (i.e. stiffness, viscosity). The benefit of elasticity imaging lies in the fact that many soft tissues can share similar ultrasonic echogenicities, but may have different mechanical properties that can be used to clearly visualize normal anatomy and delineate pathological lesions. Acoustic radiation force-based elasticity imaging methods use acoustic radiation force to transiently deform soft tissues, and the dynamic displacement response of those tissues is measured ultrasonically and is used to estimate the tissue's mechanical properties. Both qualitative images and quantitative elasticity metrics can be reconstructed from these measured data, providing complementary information to both diagnose and longitudinally monitor disease progression. Recently, acoustic radiation force-based elasticity imaging techniques have moved from the laboratory to the clinical setting, where clinicians are beginning to characterize tissue stiffness as a diagnostic metric, and commercial implementations of radiation force-based ultrasonic elasticity imaging are beginning to appear on the market. This article provides an overview of acoustic radiation force-based elasticity imaging, including a review of the relevant soft tissue material properties, a review of radiation force-based methods that have been proposed for elasticity imaging, and a discussion of current research and commercial realizations of radiation force-based elasticity imaging technologies. PMID:22419986

  18. Preparing Students for Flipped or Team-Based Learning Methods

    ERIC Educational Resources Information Center

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…

  19. Metaphoric Investigation of the Phonic-Based Sentence Method

    ERIC Educational Resources Information Center

    Dogan, Birsen

    2012-01-01

    This study aimed to understand the views of prospective teachers with "phonic-based sentence method" through metaphoric images. In this descriptive study, the participants involve the prospective teachers who take reading-writing instruction courses in Primary School Classroom Teaching Program of the Education Faculty of Pamukkale…

  20. Explorations in Using Arts-Based Self-Study Methods

    ERIC Educational Resources Information Center

    Samaras, Anastasia P.

    2010-01-01

    Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…

  1. Bead Collage: An Arts-Based Research Method

    ERIC Educational Resources Information Center

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  2. Preparing Students for Flipped or Team-Based Learning Methods

    ERIC Educational Resources Information Center

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…

  3. Effective Teaching Methods--Project-based Learning in Physics

    ERIC Educational Resources Information Center

    Holubova, Renata

    2008-01-01

    The paper presents results of the research of new effective teaching methods in physics and science. It is found out that it is necessary to educate pre-service teachers in approaches stressing the importance of the own activity of students, in competences how to create an interdisciplinary project. Project-based physics teaching and learning…

  4. A Quantum-Based Similarity Method in Virtual Screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2015-10-02

    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopts concepts from quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers. Hence, this study proposes three techniques to embed and re-represent molecular compounds in complex number format. The quantum-based similarity method developed in this study, which depends on a complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). Recall of retrieved active molecules was measured at the top 1% and top 5%, and a significance test was used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets were used for the experiments and were represented by 2D fingerprints. Simulated virtual screening experiments show that the effectiveness of the SQB method is significantly increased, owing to the representational power of molecular compounds in complex number form, compared to the Tanimoto benchmark similarity measure.
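
    The SQB representation itself is not specified here in enough detail to reproduce, but the Tanimoto benchmark it is compared against is standard; below is a sketch on random stand-in fingerprints (the fingerprint length and database size are illustrative).

```python
import numpy as np

def tanimoto(a, b):
    """Tanimoto coefficient between two binary fingerprints."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

rng = np.random.default_rng(0)
query = rng.integers(0, 2, 166)                # MACCS-like length, random
library = rng.integers(0, 2, (1000, 166))      # stand-in database
scores = np.array([tanimoto(query, fp) for fp in library])
top = np.argsort(-scores)[:10]                 # ranked hit list
print(scores[top])
```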

  5. A Natural Teaching Method Based on Learning Theory.

    ERIC Educational Resources Information Center

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  6. Highly efficient preparation of sphingoid bases from glucosylceramides by chemoenzymatic method

    PubMed Central

    Gowda, Siddabasave Gowda B.; Usuki, Seigo; Hammam, Mostafa A. S.; Murai, Yuta; Igarashi, Yasuyuki; Monde, Kenji

    2016-01-01

    Sphingoid base derivatives have attracted increasing attention as promising chemotherapeutic candidates against lifestyle diseases such as diabetes and cancer. Natural sphingoid bases can be a potential resource, instead of those derived by time-consuming total organic synthesis. In particular, glucosylceramides (GlcCers) in food plants are enriched sources of sphingoid bases, differing from those of animals. Several chemical methodologies to transform GlcCers into sphingoid bases have already been investigated; however, these conventional methods using acid or alkaline hydrolysis are not efficient, owing to poor reaction yields, complex by-products, and the resulting separation problems. In this study, an extremely efficient and practical chemoenzymatic transformation method has been developed using microwave-enhanced butanolysis of GlcCers and a large amount of readily available almond β-glucosidase for the deglycosylation of lysoGlcCers. The method is superior to conventional acid/base hydrolysis methods in its rapidity and its reaction cleanness (no isomerization, no rearrangement), with excellent overall yield. PMID:26667669

  7. Liver 4DMRI: A retrospective image-based sorting method

    SciTech Connect

    Paganelli, Chiara; Summers, Paul; Bellomi, Massimo; Baroni, Guido; Riboldi, Marco

    2015-08-15

    Purpose: Four-dimensional magnetic resonance imaging (4DMRI) is an emerging technique in radiotherapy treatment planning for organ motion quantification. In this paper, the authors present a novel 4DMRI retrospective image-based sorting method that produces fewer motion artifacts than a standard one-dimensional external respiratory surrogate. Methods: Serial interleaved 2D multislice MRI data were acquired from 24 liver cases (6 volunteers + 18 patients) to test the proposed 4DMRI sorting. Image similarity based on mutual information was applied to automatically identify a stable reference phase and sort the image sequence retrospectively, without the use of additional image or surrogate data to describe breathing motion. Results: The image-based 4DMRI provided a smoother liver profile than that obtained from standard resorting based on an external surrogate. Reduced motion artifacts were observed in the image-based 4DMRI datasets, with a fitting error of the liver profile of 1.2 ± 0.9 mm (median ± interquartile range) vs 2.1 ± 1.7 mm for the standard method. Conclusions: The authors present a novel methodology to derive a patient-specific 4DMRI model describing organ motion due to breathing, with improved image quality in the 4D reconstruction.
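
    The mutual-information image similarity used for the sorting can be sketched from a joint intensity histogram; the bin count and the random test images below are illustrative stand-ins for MRI slices.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Mutual information of two images from their joint histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0                                  # avoid log(0)
    return (p[nz] * np.log(p[nz] / (px @ py)[nz])).sum()

rng = np.random.default_rng(0)
ref = rng.normal(size=(128, 128))
same = ref + 0.1 * rng.normal(size=ref.shape)   # "same respiratory phase"
other = rng.normal(size=ref.shape)              # unrelated frame
print(mutual_information(ref, same), mutual_information(ref, other))
```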

  8. Structure-based Methods for Computational Protein Functional Site Prediction

    PubMed Central

    Dukka, B KC

    2013-01-01

    Due to the advent of high-throughput sequencing techniques and structural genomics projects, the number of gene and protein sequences has been ever increasing, and computational methods to annotate these genes and proteins are ever more indispensable. Proteins are important macromolecules, and the study of protein function is an important problem in structural bioinformatics. This paper discusses a number of methods to predict protein functional sites, focusing especially on protein-ligand binding site prediction. Initially, a short overview is presented of recent advances in methods for the selection of homologous sequences. Furthermore, a few recent structure-based and sequence-and-structure-based approaches for predicting protein functional sites are discussed in detail. PMID:24688745

  9. Kinetic Plasma Simulation Using a Quadrature-based Moment Method

    NASA Astrophysics Data System (ADS)

    Larson, David J.

    2008-11-01

    The recently developed quadrature-based moment method [Desjardins, Fox, and Villedieu, J. Comp. Phys. 227 (2008)] is an interesting alternative to standard Lagrangian particle simulations. The two-node quadrature formulation allows multiple flow velocities within a cell, thus correctly representing crossing particle trajectories and lower-order velocity moments without resorting to Lagrangian methods. Instead of following many particles per cell, the Eulerian transport equations are solved for selected moments of the kinetic equation. The moments are then inverted to obtain a discrete representation of the velocity distribution function. Potential advantages include reduced computational cost, elimination of statistical noise, and a simpler treatment of collisional effects. We present results obtained using the quadrature-based moment method applied to the Vlasov equation in simple one-dimensional electrostatic plasma simulations. In addition we explore the use of the moment inversion process in modeling collisional processes within the Complex Particle Kinetics framework.
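
    For the two-node case, the moment inversion mentioned above has a closed form; the sketch below is a standard two-node construction matching the first four moments, not necessarily the exact inversion used in the cited work, and the test distribution is invented.

```python
import numpy as np

def two_node_quadrature(m):
    """Invert moments m = (m0, m1, m2, m3) into a two-node quadrature
    {(w1, u1), (w2, u2)} matching all four moments (closed form)."""
    m0, m1, m2, m3 = m
    mu = m1 / m0
    c2 = m2 / m0 - mu**2                 # variance
    c3 = m3 / m0 - 3 * mu * c2 - mu**3   # third central moment
    q = c3 / c2**1.5                     # skewness
    s = np.sqrt(q**2 + 4)
    x1, x2 = (q - s) / 2, (q + s) / 2    # standardized abscissas
    w1, w2 = x2 / (x2 - x1), -x1 / (x2 - x1)
    u = mu + np.sqrt(c2) * np.array([x1, x2])
    return m0 * np.array([w1, w2]), u

# check against a known two-stream velocity distribution
w_true, u_true = np.array([0.3, 0.7]), np.array([-1.0, 2.0])
m = [np.sum(w_true * u_true**k) for k in range(4)]
w, u = two_node_quadrature(m)
print(w, u)   # recovers the weights and velocities exactly
```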

  10. A Reliability-Based Method to Sensor Data Fusion

    PubMed Central

    Zhuang, Miaoyan; Xie, Chunhe

    2017-01-01

    Multi-sensor data fusion technology based on Dempster–Shafer evidence theory is widely applied in many fields. However, how to determine basic belief assignment (BBA) is still an open issue. The existing BBA methods pay more attention to the uncertainty of information, but do not simultaneously consider the reliability of information sources. Real-world information is not only uncertain, but also partially reliable. Thus, uncertainty and partial reliability are strongly associated with each other. To take into account this fact, a new method to represent BBAs along with their associated reliabilities is proposed in this paper, which is named reliability-based BBA. Several examples are carried out to show the validity of the proposed method. PMID:28678179

  11. A Reliability-Based Method to Sensor Data Fusion.

    PubMed

    Jiang, Wen; Zhuang, Miaoyan; Xie, Chunhe

    2017-07-05

    Multi-sensor data fusion technology based on Dempster-Shafer evidence theory is widely applied in many fields. However, how to determine basic belief assignment (BBA) is still an open issue. The existing BBA methods pay more attention to the uncertainty of information, but do not simultaneously consider the reliability of information sources. Real-world information is not only uncertain, but also partially reliable. Thus, uncertainty and partial reliability are strongly associated with each other. To take into account this fact, a new method to represent BBAs along with their associated reliabilities is proposed in this paper, which is named reliability-based BBA. Several examples are carried out to show the validity of the proposed method.
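
    Since both records above rest on Dempster-Shafer combination, here is a minimal sketch of Dempster's rule over a small frame; the sensor masses are invented, and the paper's reliability weighting would modify the BBAs before combining.

```python
def dempster(m1, m2):
    """Dempster's rule of combination for two BBAs over a common frame.
    A BBA maps frozenset focal elements to masses summing to 1."""
    combined, conflict = {}, 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            C = A & B
            if C:
                combined[C] = combined.get(C, 0.0) + a * b
            else:
                conflict += a * b                  # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return {C: v / (1.0 - conflict) for C, v in combined.items()}

# two sensors over the frame {a, b, c}; the masses are illustrative
m1 = {frozenset("a"): 0.6, frozenset("ab"): 0.3, frozenset("abc"): 0.1}
m2 = {frozenset("a"): 0.5, frozenset("b"): 0.2, frozenset("abc"): 0.3}
print(dempster(m1, m2))
```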

  12. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient object edge detection in digital images, this paper studies traditional methods and algorithms based on the SVM. An analysis shows that the Canny edge detection algorithm produces some pseudo-edges and has poor anti-noise capability. In order to provide a reliable edge extraction method, a new detection algorithm based on the FSVM is proposed. It contains several steps: first, classification samples are trained, and different membership functions are assigned to different samples. Then, a new training sample set is formed by increasing the penalty on some misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that good edge detection images are obtained, and noise-addition experiments show that the method has good anti-noise capability.

  13. Matrix-based image reconstruction methods for tomography

    SciTech Connect

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
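
    A minimal sketch of the matrix-based Maximum Likelihood Estimator iteration (the standard MLEM update for Poisson data), with a random sparse matrix standing in for an instrument-specific system matrix; sizes and count levels are illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """MLEM reconstruction for emission tomography: y ~ Poisson(A x).
    A is the (detectors x pixels) system matrix; no inversion needed."""
    x = np.ones(A.shape[1])              # flat initial image
    sens = A.sum(axis=0)                 # sensitivity image, A^T 1
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = np.where(proj > 0, y / proj, 0.0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

rng = np.random.default_rng(3)
A = rng.random((200, 64)) * (rng.random((200, 64)) < 0.1)   # sparse system
x_true = rng.random(64)
y = rng.poisson(A @ x_true * 50) / 50.0                     # noisy data
print(np.corrcoef(mlem(A, y), x_true)[0, 1])
```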

  14. Method for locating and purifying DNA containing single base mismatches

    SciTech Connect

    Ford, J.P.; Novack, D.F.; Casna, N.J.

    1988-12-27

    A method is described for detecting guanine and thymine bases which are unpaired according to the Watson-Crick base pairing scheme in a double stranded polynucleotide molecule, each unpaired guanine or thymine base being immediately preceded by at least one base which is paired, and immediately followed by at least one base which is paired, the preceding and following paired bases being on the same polynucleotide sequence as the unpaired guanine or thymine base comprising: (a) reacting the double stranded polynucleotide molecule with a reagent capable of altering the electrophoretic mobility of a double stranded polynucleotide molecule by derivatizing unpaired guanine and thymine bases in the double stranded polynucleotide molecule, wherein the double stranded polynucleotide molecule is not a covalently closed circular DNA; (b) observing the electrophoretic mobility of the double stranded polynucleotide molecule which has been reacted with the reagent; and (c) determining the presence or absence of an alteration in the electrophoretic mobility; whereby the presence or absence of unpaired guanine and thymine bases in the double stranded polynucleotide molecule is detected.

  15. Discontinuous Galerkin method based on non-polynomial approximation spaces

    SciTech Connect

    Yuan Ling . E-mail: lyuan@dam.brown.edu; Shu Chiwang . E-mail: shu@dam.brown.edu

    2006-10-10

    In this paper, we develop discontinuous Galerkin (DG) methods based on non-polynomial approximation spaces for numerically solving time-dependent hyperbolic and parabolic and steady-state hyperbolic and elliptic partial differential equations (PDEs). The algorithm is based on approximation spaces consisting of non-polynomial elementary functions such as exponential functions, trigonometric functions, etc., with the objective of obtaining better approximations for specific types of PDEs and initial and boundary conditions. It is shown that L^2 stability and error estimates can be obtained when the approximation space is suitably selected. It is also shown with numerical examples that a careful selection of the approximation space to fit individual PDEs and initial and boundary conditions often provides more accurate results than the DG methods based on the polynomial approximation spaces of the same order of accuracy.

  16. Photonic arbitrary waveform generator based on Taylor synthesis method.

    PubMed

    Liao, Shasha; Ding, Yunhong; Dong, Jianji; Yan, Siqi; Wang, Xu; Zhang, Xinliang

    2016-10-17

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain the first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and the successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as a square waveform, triangular waveform, flat-top waveform, sawtooth waveform, Gaussian waveform and so on. Unlike other schemes based on Fourier synthesis or frequency-to-time mapping, our scheme is based on the Taylor synthesis method and does not require any spectral disperser or large dispersion, which are difficult to fabricate on chip. Our scheme is compact and suitable for integration with electronics.
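
    The Taylor-synthesis idea, sketched numerically: a target waveform is approximated as a weighted sum of a Gaussian pulse and its first three derivatives, the weights playing the role of the controlled amplitudes; the least-squares fit to a flat-top target is an illustrative stand-in for the on-chip amplitude/phase tuning.

```python
import numpy as np

t = np.linspace(-5, 5, 2000)
g = np.exp(-t**2)                         # input Gaussian pulse

# first three derivatives, as produced by cascaded differentiators
d1 = np.gradient(g, t)
d2 = np.gradient(d1, t)
d3 = np.gradient(d2, t)

# target: flat-top waveform on the central interval
target = np.where(np.abs(t) < 1.5, 1.0, 0.0)

# least-squares Taylor-synthesis coefficients for g and its derivatives
B = np.stack([g, d1, d2, d3], axis=1)
coef, *_ = np.linalg.lstsq(B, target, rcond=None)
synth = B @ coef
print(coef, np.abs(synth - target).mean())
```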

  17. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl.sub.4 gas into the chamber, flowing Ar gas into the chamber, and flowing H.sub.2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  18. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.

    1995-01-01

    Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.

  19. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.

    1995-04-11

    A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.

  20. Screw thread parameter measurement system based on image processing method

    NASA Astrophysics Data System (ADS)

    Rao, Zhimin; Huang, Kanggao; Mao, Jiandong; Zhang, Yaya; Zhang, Fan

    2013-08-01

    In industrial production, the screw thread, an important transmission part, is used extensively in automation equipment. Traditional methods for measuring screw thread parameters, including integrated multi-parameter tests and single-parameter measurements, are contact measurement methods. In practice, contact measurement has several disadvantages, such as relatively high time cost, susceptibility to human error, and thread damage. In this paper, a screw thread parameter measurement system based on image processing, a real-time and non-contact measurement method, is developed to accurately measure the outside diameter, inside diameter, pitch diameter, pitch, thread height and other parameters of a screw thread. In the system, an industrial camera acquires the image of the screw thread, image processing methods extract the image profile of the thread, and a mathematical model is established to compute the parameters. C++Builder 6.0 is employed as the software development platform to implement the image processing and the computation of the screw thread parameters. To verify the feasibility of the measurement system, experiments were carried out and the measurement errors were analyzed. The experimental results show that the image measurement system satisfies the measurement requirements and is suitable for real-time detection of the screw thread parameters mentioned above. Compared with traditional methods, the image-processing-based system has advantages such as non-contact operation, ease of use, high measuring accuracy, no workpiece damage, and fast error analysis. In industrial production, this measurement system can provide a useful reference for the development of similar parameter measurement systems.

  1. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectra classification. Furthermore, the method utilized, the nearest neighbor method, is one of the most typical, classic, and mature algorithms in pattern recognition and data mining, and it is often used as a benchmark in developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor (NN) method is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that the method does not need to be trained, which is useful in incremental learning and parallel computation for processing mass spectral data. In conclusion, the results in this work are helpful for the study of galaxy and quasar spectra classification.
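
    A toy 1-NN sketch in the spirit of the record above, using synthetic stand-ins for galaxy and quasar spectra; note that there is no training step, only a distance computation at query time, which is what makes incremental updates trivial.

```python
import numpy as np

def nn_classify(train_X, train_y, test_X):
    """1-nearest-neighbour classification of spectra (no training phase)."""
    d = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(axis=2)
    return train_y[d.argmin(axis=1)]

rng = np.random.default_rng(0)
wave = np.linspace(0, 1, 300)

def fake_spectrum(kind, n):       # toy stand-ins for two spectral classes
    base = np.exp(-((wave - (0.3 if kind else 0.6)) / 0.05) ** 2)
    return base + 0.3 * rng.normal(size=(n, wave.size))

X = np.vstack([fake_spectrum(0, 100), fake_spectrum(1, 100)])
y = np.repeat([0, 1], 100)
idx = rng.permutation(200)
tr, te = idx[:150], idx[150:]
print((nn_classify(X[tr], y[tr], X[te]) == y[te]).mean())
```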

  2. Nonlinear model-based method for clustering periodically expressed genes.

    PubMed

    Tian, Li-Ping; Liu, Li-Zhi; Zhang, Qian-Wei; Wu, Fang-Xiang

    2011-01-01

    Clustering periodically expressed genes from their time-course expression data can help in understanding the molecular mechanisms of the associated biological processes. In this paper, we propose a nonlinear model-based clustering method for periodically expressed gene profiles. As periodically expressed genes are associated with periodic biological processes, the proposed method naturally assumes that a periodically expressed gene dataset is generated by a number of periodic processes. Each periodic process is modelled by a linear combination of trigonometric sine and cosine functions in time plus a Gaussian noise term. A two-stage method is proposed to estimate the model parameters, and a relocation-iteration algorithm is employed to assign each gene to an appropriate cluster. A bootstrapping method and an average adjusted Rand index (AARI) are employed to measure the quality of clustering. One synthetic dataset and two biological datasets were employed to evaluate the performance of the proposed method. The results show that our method achieves better clustering quality than other clustering methods (e.g., k-means) for periodically expressed gene data, and thus it is an effective cluster analysis method for such data.
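
    The per-process model the abstract describes can be fitted by ordinary linear least squares, since the sine and cosine terms are linear in their coefficients once the period is fixed. Below is a hedged sketch (NumPy; the time points, period and harmonic count are illustrative, and the paper's two-stage estimator and relocation-iteration step are not reproduced):

      # Hedged sketch: least-squares fit of a trigonometric model for one periodic process.
      import numpy as np

      def fit_periodic(t, x, period, n_harmonics=2):
          """Fit x(t) ~ a0 + sum_k [a_k sin(2*pi*k*t/T) + b_k cos(2*pi*k*t/T)]."""
          cols = [np.ones_like(t)]
          for k in range(1, n_harmonics + 1):
              w = 2 * np.pi * k * t / period
              cols += [np.sin(w), np.cos(w)]
          A = np.column_stack(cols)
          coef, *_ = np.linalg.lstsq(A, x, rcond=None)
          return coef, x - A @ coef          # coefficients and residuals

      t = np.linspace(0, 48, 13)             # hypothetical sampling times (hours)
      x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.randn(t.size)
      coef, resid = fit_periodic(t, x, period=24.0)
      # A gene can be assigned to the cluster whose fitted model leaves the smallest residual.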

  3. Do dynamic-based MR knee kinematics methods produce the same results as static methods?

    PubMed

    d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P

    2013-06-01

    MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.

  4. Diabatization based on the dipole and quadrupole: The DQ method

    NASA Astrophysics Data System (ADS)

    Hoyer, Chad E.; Xu, Xuefei; Ma, Dongxia; Gagliardi, Laura; Truhlar, Donald G.

    2014-09-01

    In this work, we present a method, called the DQ scheme (where D and Q stand for dipole and quadrupole, respectively), for transforming a set of adiabatic electronic states to diabatic states by using the dipole and quadrupole moments to determine the transformation coefficients. It is more broadly applicable than methods based only on the dipole moment; for example, it is not restricted to electron transfer reactions, and it works with any electronic structure method and for molecules with and without symmetry, and it is convenient in not requiring orbital transformations. We illustrate this method by prototype applications to two cases, LiH and phenol, for which we compare the results to those obtained by the fourfold-way diabatization scheme.

  5. [Fast Implementation Method of Protein Spots Detection Based on CUDA].

    PubMed

    Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong

    2016-02-01

    In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, following the single-instruction multiple-thread execution model of CUDA, a data-space strategy of partitioning the two-dimensional (2D) image into blocks was adopted, together with optimization measures such as shared memory and 2D texture memory. The results show that the efficiency of this method is markedly improved compared with CPU computation, and the improvement grows with image size: for an image of 2,048 x 2,048 pixels, the CPU method needs 52,641 ms, while the GPU needs only 4,384 ms.

  6. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880

  7. Topography measurement of micro structure by modulation-based method

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Tang, Yan; Liu, Junbo; Deng, Qinyuan; Cheng, Yiguang; Hu, Song

    2016-10-01

    Dimensional metrology for microstructures plays an important role in addressing quality issues and assessing the performance of micro-fabricated products. Different from the traditional white-light interferometry approach, the modulation-based method measures the topography of a microstructure from the modulation computed for each interferometry image. By seeking the maximum modulation of each pixel along the Z direction, the method obtains the corresponding height of each pixel and thus the topography of the structure. Owing to this property of the modulation, the proposed method is not influenced by changes in background light intensity caused by an unstable light source or by the differing reflectance of the structure, and can therefore be widely applied with high stability. The paper both illustrates the principle of this novel method and presents an experiment verifying its feasibility.
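
    The core height-recovery step can be expressed in a few lines of array code. The sketch below (NumPy; the image stack is random and the contrast measure is a crude stand-in for the true per-image modulation computation) picks, for each pixel, the Z position at which its modulation peaks:

      # Hedged sketch: per-pixel height from the modulation maximum along Z.
      import numpy as np

      nz, h, w = 64, 128, 128
      stack = np.random.rand(nz, h, w)           # interferograms at successive Z positions
      z_positions = np.linspace(0.0, 6.3, nz)    # hypothetical scan range (micrometres)

      # Crude modulation proxy: deviation of each slice from the per-pixel Z-mean.
      modulation = np.abs(stack - stack.mean(axis=0, keepdims=True))

      # Height map: Z position where each pixel's modulation is maximal.
      height = z_positions[np.argmax(modulation, axis=0)]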

  8. A history-based method to estimate animal preference

    PubMed Central

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  9. Diabatization based on the dipole and quadrupole: The DQ method

    SciTech Connect

    Hoyer, Chad E.; Xu, Xuefei; Ma, Dongxia; Gagliardi, Laura; Truhlar, Donald G.

    2014-09-21

    In this work, we present a method, called the DQ scheme (where D and Q stand for dipole and quadrupole, respectively), for transforming a set of adiabatic electronic states to diabatic states by using the dipole and quadrupole moments to determine the transformation coefficients. It is more broadly applicable than methods based only on the dipole moment; for example, it is not restricted to electron transfer reactions, and it works with any electronic structure method and for molecules with and without symmetry, and it is convenient in not requiring orbital transformations. We illustrate this method by prototype applications to two cases, LiH and phenol, for which we compare the results to those obtained by the fourfold-way diabatization scheme.

  10. Orientation sampling for dictionary-based diffraction pattern indexing methods

    NASA Astrophysics Data System (ADS)

    Singh, S.; De Graef, M.

    2016-12-01

    A general framework for dictionary-based indexing of diffraction patterns is presented. A uniform sampling method of orientation space using the cubochoric representation is introduced and used to derive an empirical relation between the average disorientation between neighboring sampling points and the number of grid points sampled along the semi-edge of the cubochoric cube. A method to uniformly sample misorientation iso-surfaces is also presented. This method is used to show that the dot product serves as a proxy for misorientation. Furthermore, it is shown that misorientation iso-surfaces in Rodrigues space are quadratic surfaces. Finally, using the concept of Riesz energies, it is shown that the sampling method results in a near optimal covering of orientation space.

  11. A Novel Method for Pulsometry Based on Traditional Iranian Medicine

    PubMed Central

    Yousefipoor, Farzane; Nafisi, Vahidreza

    2015-01-01

    Arterial pulse measurement is one of the most important methods for evaluating health conditions. In traditional Iranian medicine (TIM), the physician detects the radial pulse by holding four fingers on the patient's wrist. With this method, even under standard conditions, the detected pulses are subjective and error-prone; in the case of weak and/or abnormal pulses, diagnostic ambiguity may increase. In this paper, we present a device designed and implemented to automate the traditional pulse detection method. The noninvasive diagnostic method and TIM-based database developed with this novel system are a way forward for applying traditional medicine and diagnosing patients with present-day technology. The accuracy is 76% for period measurement and 72% for systolic peak detection. PMID:26955566

  12. Moving sound source localization based on triangulation method

    NASA Astrophysics Data System (ADS)

    Miao, Feng; Yang, Diange; Wen, Junjie; Lian, Xiaomin

    2016-12-01

    This study develops a sound source localization method that extends traditional triangulation to moving sources. First, the plane of possible sound source locations is scanned. Secondly, for each hypothetical source location in this plane, the Doppler effect is removed through integration of the sound pressure. Taking advantage of the de-Dopplerized signals, the moving time difference of arrival (MTDOA) is calculated, and the sound source is located based on triangulation. Thirdly, the estimated sound source location is compared to the original hypothetical location and the deviations are recorded. Because the real sound source location leads to zero deviation, the sound source can finally be located by minimizing the deviation matrix. Simulations have shown the superiority of the MTDOA method over traditional triangulation in the case of moving sound sources. As shown in the experiments, the MTDOA method can locate moving sound sources with as high a resolution as DAMAS beamforming, thus offering a new method for locating moving sound sources.

  13. Spindle extraction method for ISAR image based on Radon transform

    NASA Astrophysics Data System (ADS)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle (principal axis) of a target in inverse synthetic aperture radar (ISAR) images is proposed, based on the Radon transform. Firstly, the Radon transform is used to detect all straight lines that are collinear with line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of that line segment, and the longest of all line segments is the spindle of the target. To evaluate the proposed method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees respectively, were used in experiments; the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.
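
    A hedged sketch of the same idea follows (OpenCV/NumPy; cv2.HoughLines stands in for the Radon-transform line detection, the two being closely related voting transforms, and the threshold values are illustrative): detect the contour with the Sobel operator, find the strongest straight line, and take the farthest pair of contour points lying on that line as the spindle ends.

      # Hedged sketch: spindle as the longest contour chord on the dominant line.
      import cv2
      import numpy as np

      img = cv2.imread("isar.png", cv2.IMREAD_GRAYSCALE)

      gx = cv2.Sobel(img, cv2.CV_32F, 1, 0)
      gy = cv2.Sobel(img, cv2.CV_32F, 0, 1)
      edges = (cv2.magnitude(gx, gy) > 50).astype(np.uint8) * 255   # contour pixels

      lines = cv2.HoughLines(edges, 1, np.pi / 180, 60)             # strongest lines first
      rho, theta = lines[0][0]

      ys, xs = np.nonzero(edges)
      on_line = np.abs(xs * np.cos(theta) + ys * np.sin(theta) - rho) < 1.5
      pts = np.column_stack([xs[on_line], ys[on_line]]).astype(float)
      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
      i, j = np.unravel_index(np.argmax(d), d.shape)
      print("spindle endpoints:", pts[i], pts[j], "length (px):", d[i, j])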

  14. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One result of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  15. An efficient frequency recognition method based on likelihood ratio test for SSVEP-based BCI.

    PubMed

    Zhang, Yangsong; Dong, Li; Zhang, Rui; Yao, Dezhong; Zhang, Yu; Xu, Peng

    2014-01-01

    An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this aspect, the likelihood ratio test (LRT) was utilized, for the first time, to propose a novel multichannel frequency recognition method for SSVEP data. The essence of the new method is to use the LRT to measure the association between the multichannel EEG signals and reference signals constructed according to the stimulus frequency. For both simulated and real SSVEP data, the proposed method yielded higher recognition accuracy and ITR with shorter time window lengths, and was more robust against noise, than the popular canonical correlation analysis (CCA)-based method and the least absolute shrinkage and selection operator (LASSO)-based method. These superior results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-BCIs.
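
    The reference-signal construction is standard in SSVEP work: for each candidate stimulus frequency, sines and cosines at the fundamental and its harmonics form a subspace against which the EEG window is scored. The sketch below (NumPy; sampling rate, window length and candidate frequencies are illustrative) uses a simple least-squares projection score where the paper uses the likelihood ratio test statistic:

      # Hedged sketch: harmonic reference signals plus a projection-based frequency score.
      import numpy as np

      def reference_signals(freq, fs, n_samples, n_harmonics=3):
          t = np.arange(n_samples) / fs
          cols = []
          for k in range(1, n_harmonics + 1):
              cols += [np.sin(2 * np.pi * k * freq * t), np.cos(2 * np.pi * k * freq * t)]
          return np.column_stack(cols)

      fs, n = 250, 500                       # hypothetical sampling rate and window length
      eeg = np.random.randn(n, 8)            # hypothetical 8-channel EEG window

      scores = {}
      for f in (8.0, 10.0, 12.0, 15.0):      # candidate stimulus frequencies
          Y = reference_signals(f, fs, n)
          proj = Y @ np.linalg.lstsq(Y, eeg, rcond=None)[0]   # projection onto the subspace
          scores[f] = np.linalg.norm(proj) / np.linalg.norm(eeg)
      print("recognized frequency:", max(scores, key=scores.get))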

  16. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  17. An endoscopic diffuse optical tomographic method with high resolution based on the improved FOCUSS method

    NASA Astrophysics Data System (ADS)

    Qin, Zhuanping; Ma, Wenjuan; Ren, Shuyan; Geng, Liqing; Li, Jing; Yang, Ying; Qin, Yingmei

    2017-02-01

    Endoscopic DOT has the potential to be applied to cancer-related imaging in tubular organs. Although DOT has a relatively large tissue penetration depth, endoscopic DOT is limited by the narrow space of the internal tubular tissue and therefore has a relatively small penetration depth. Because some adenocarcinomas, including cervical adenocarcinoma, are located deep in the canal, it is necessary to improve the imaging resolution under these limited measurement conditions. To improve the resolution, a new FOCUSS algorithm is developed along with an image reconstruction algorithm based on the effective detection range (EDR). The algorithm uses the region of interest (ROI) to reduce the dimensions of the matrix, and this shrinking cuts down the computational burden. To reduce the computational complexity further, the double conjugate gradient method is used in the matrix inversion. For a typical inner size and typical optical properties of cervix-like tubular tissue, reconstructed images from simulated data demonstrate that the proposed method achieves image quality equivalent to that of the EDR-based method when the target is close to the inner boundary of the model, and higher spatial resolution and quantitative ratio when the targets are far from the inner boundary. The quantitative ratios of the reconstructed absorption and reduced scattering coefficients can be up to 70% and 80%, respectively, at depths below 5 mm. Furthermore, two close targets at different depths can be separated from each other. The proposed method will be useful for the development of endoscopic DOT technologies for tubular organs.

  18. Inverse Method of Centrifugal Pump Impeller Based on Proper Orthogonal Decomposition (POD) Method

    NASA Astrophysics Data System (ADS)

    Zhang, Ren-Hui; Guo, Rong; Yang, Jun-Hu; Luo, Jia-Qi

    2017-07-01

    To improve the accuracy and reduce the computational cost of the inverse problem for centrifugal pump impellers, a new inverse method based on proper orthogonal decomposition (POD) is proposed. The pump blade shape is parameterized by a quartic Bezier curve, and the initial snapshots are generated by perturbing the blade shape control parameters. The internal flow field and the hydraulic performance are predicted by a CFD method. Each snapshot vector includes the blade shape parameters and the distribution of blade load. The POD basis for the snapshot set is deduced by proper orthogonal decomposition, and each sample vector is expressed as a linear combination of the orthogonal basis vectors. The objective blade shape corresponding to the objective distribution of blade load is obtained by a least squares fit. An iterative correction algorithm for the POD-based inverse method is proposed, in which the objective blade load distributions are corrected according to the difference between the CFD result and the POD result. Two-dimensional and three-dimensional blade calculation cases show that the proposed method has good convergence and high accuracy, and that the computational cost is greatly reduced. After two iterations, the deviations of the blade load and the pump hydraulic performance are limited to within 4.0% and 6.0%, respectively, for most of the flow rate range. This paper provides a promising inverse method for centrifugal pump impellers, which will benefit the hydraulic optimization of centrifugal pumps.
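
    The POD step itself reduces to a singular value decomposition of the centered snapshot matrix, after which a target load distribution is expanded in the retained modes and the corresponding shape parameters are read off the expansion. Below is a hedged sketch (NumPy; the snapshot layout, dimensions and mode count are illustrative, and the CFD loop and iterative correction are not reproduced):

      # Hedged sketch: POD basis from snapshots and least-squares inverse lookup.
      import numpy as np

      # Each column is one snapshot: [shape control parameters; blade-load samples].
      snapshots = np.random.rand(60, 25)         # 60-dim vectors from 25 perturbed designs
      mean = snapshots.mean(axis=1, keepdims=True)
      U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
      basis = U[:, :8]                           # retain the leading POD modes

      load_rows = slice(20, 60)                  # hypothetical layout: rows 20..59 hold loads
      target_load = np.random.rand(40)           # objective blade-load distribution

      # Fit expansion coefficients on the load rows, then read shape rows off the expansion.
      coef, *_ = np.linalg.lstsq(basis[load_rows], target_load - mean[load_rows, 0],
                                 rcond=None)
      reconstructed = mean[:, 0] + basis @ coef
      blade_shape = reconstructed[:20]           # shape parameters implied by the target load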

  19. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.

  20. Evaluation of base widening methods on flexible pavements in Wyoming

    NASA Astrophysics Data System (ADS)

    Offei, Edward

    The surface transportation system forms the biggest infrastructure investment in the United States, of which the roadway pavement is an integral part. Maintaining the roadways can involve rehabilitation in the form of widening, which requires a longitudinal joint between the existing and new pavement sections to accommodate wider travel lanes, additional travel lanes or modifications to shoulder widths. Several methods are utilized for the joint construction between the existing and new pavement sections, including vertical, tapered and stepped joints. The objective of this research is to develop a formal recommendation for the preferred joint construction method that provides the best base layer support for the state of Wyoming. Field collection of Dynamic Cone Penetrometer (DCP) data, Falling Weight Deflectometer (FWD) data, and base samples for gradation and moisture content was conducted on 28 existing and 4 newly constructed pavement widening projects. A survey of constructability issues on widening projects as experienced by WYDOT engineers was undertaken, and the costs of each joint type were compared as well. Results of the analyses indicate that the tapered joint type showed relatively better pavement strength than the vertical joint type and could be the preferred joint construction method. The tapered joint type also yielded significant base material savings compared with the vertical joint type; the vertical joint has an 18% increase in cost compared to the tapered joint. This research is intended to provide information and/or recommendations to state policy makers as to which of the base widening joint techniques (vertical, tapered, stepped) for flexible pavement provides better pavement performance.

  1. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forests and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classifier is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference, and the overall classification accuracy of the proposed method is 5.7% higher than that of ISODATA unsupervised classification and 15.7% higher than that of the traditional object-based classification method.
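
    The factor-importance step maps directly onto a standard random-forest call. A hedged sketch (scikit-learn; the factor names, cell counts and class count are placeholders for the DEM-derived data):

      # Hedged sketch: rank terrain factors by random-forest importance.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      factors = ["slope", "relief", "roughness", "curvature", "elevation"]  # illustrative
      X = np.random.rand(5000, len(factors))     # terrain-factor values per DEM cell
      y = np.random.randint(0, 7, size=5000)     # landform class labels

      rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
      ranked = sorted(zip(factors, rf.feature_importances_), key=lambda p: -p[1])
      for name, imp in ranked:                   # importances guide segmentation thresholds
          print(f"{name}: {imp:.3f}")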

  2. Springback Compensation Based on FDM-DTF Method

    SciTech Connect

    Liu Qiang; Kang Lan

    2010-06-15

    Stamping part error caused by springback is usually considered a tooling defect in the sheet metal forming process. This problem can be corrected by adjusting the tooling shape to an appropriate shape. In this paper, springback compensation based on the FDM-DTF method is proposed for the design and modification of the tooling shape. Firstly, based on the FDM method, the tooling shape is designed by reversing the direction of the inner forces at the end of the forming simulation; the required tooling shape is obtained after several iterations. Secondly, the actual tooling is produced based on the results of the first step. By examining the discrete surface data of the tooling and the part, the transfer function between the numerical springback error and the real springback error can be calculated from wavelet transform results and used to predict the tooling shape for the desired product. Finally, the FDM-DTF method is shown to control springback effectively after being applied to springback control for a 2D irregular product.

  3. Propensity Score-Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application

    ERIC Educational Resources Information Center

    Zhou, Xiang; Xie, Yu

    2016-01-01

    Since the seminal introduction of the propensity score (PS) by Rosenbaum and Rubin, PS-based methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the PS approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For…

  4. Calibration of base flow separation methods with streamflow conductivity.

    PubMed

    Stewart, Mark; Cimino, Joseph; Ross, Mark

    2007-01-01

    The conductivity mass-balance (CMB) method can be used to calibrate analytical base flow separation methods. The principal CMB assumptions are that base flow conductivity equals streamflow conductivity at the lowest flows, that runoff conductivity equals streamflow conductivity at the highest flows, and that base flow and runoff conductivities are constant over the period of record. To test the CMB assumptions, fluid conductivities of ground water, surface runoff, and streamflow were measured during wet and dry conditions in a 12-km(2) stream basin. Ground water conductivities at wells varied an average of 6% from dry to wet conditions, while stream conductivities varied 58%. Shallow ground water conductivity varied significantly with distance from the stream, with a lowest conductivity of 87 microS/cm near the divide, a maximum of 520 microS/cm 59 m from the stream, and 215 microS/cm 22 m from the stream. Runoff conductivities measured in three rain events remained nearly constant, with lower conductivities of 35 microS/cm near the divide and 50 microS/cm near the stream. The CMB method was applied to the records from 10 USGS stream-gauging stations in Texas, Kentucky, Georgia, and Florida to calibrate the USGS base flow separation technique, HYSEP, by varying the time parameter 2N*. There is a statistically significant relationship between basin area and calibrated values of 2N*, expressed as N = 0.46A(0.44), with N in days and A in km(2). The widely accepted relationship N = 0.83A(0.2) is not valid for these basins. Other analytic methods can also be calibrated with the CMB method.
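
    The mass balance behind the CMB method can be written compactly (standard two-component form with symbols as commonly used in the hydrology literature, not quoted from the paper):

      Q_t\,SC_t \;=\; BF_t\,SC_{BF} \;+\; (Q_t - BF_t)\,SC_{RO}
      \quad\Longrightarrow\quad
      BF_t \;=\; Q_t\,\frac{SC_t - SC_{RO}}{SC_{BF} - SC_{RO}},

    where Q_t is the streamflow on day t, SC_t its measured conductivity, SC_BF the base flow conductivity (streamflow conductivity at the lowest flows) and SC_RO the runoff conductivity (streamflow conductivity at the highest flows).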

  5. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    Creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed level of analyte concentration divided by the observed level of the urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature, that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors like age, gender, and race/ethnicity that affect UCR. Model-based creatinine correction in which observed UCRs are used as an independent variable in regression models has been proposed. This study was conducted to evaluate the performance of ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that ratio-based method leads to statistically significant pairwise differences, for example, between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method. However, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example, male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group (for example, males) in the numerator of this ratio, these ratios were higher for the model-based method, for example, male to female ratio of GMs. When estimated UCR were lower for the group (for example, NHW) in the numerator of this ratio, these ratios were higher for the ratio-based method, for example, NHW to NHB ratio of GMs. Model-based method is the method of choice if all factors that affect UCR are to be accounted for.
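
    Schematically, in our notation rather than the paper's, the two corrections differ as follows: the ratio-based method divides the analyte concentration by UCR, whereas a model-based correction enters UCR as a regressor alongside the demographic covariates (the particular regression form below is a generic illustration):

      C_{\mathrm{corr}} \;=\; \frac{C_{\mathrm{analyte}}}{UCR}
      \qquad\text{vs.}\qquad
      \log C_{\mathrm{analyte}} \;=\; \beta_0 + \beta_1\,\log UCR
        + \beta_2\,\mathrm{age} + \beta_3\,\mathrm{gender} + \beta_4\,\mathrm{race} + \varepsilon .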

  6. Performance of variable selection methods using stability-based selection.

    PubMed

    Lu, Danny; Weljie, Aalim; de Leon, Alexander R; McConnell, Yarrow; Bathe, Oliver F; Kopciuk, Karen

    2017-04-04

    Variable selection is frequently carried out during the analysis of many types of high-dimensional data, including those in metabolomics. This study compared the predictive performance of four variable selection methods using stability-based selection, a new secondary selection method that is implemented in the R package BioMark. Two of these methods were evaluated using the better-known false discovery rate (FDR) as well. Simulation studies varied factors relevant to biological data studies, with results based on the median values of 200 partial areas under the receiver operating characteristic curve. There was no single top performing method across all factor settings, but the Student t test based on stability selection or with FDR adjustment and the variable importance in projection (VIP) scores from partial least squares regression models obtained using a stability-based approach tended to perform well in most settings. Similar results were found with a real spiked-in metabolomics dataset. Group sample size, group effect size, number of significant variables and correlation structure were the most important factors, whereas the percentage of significant variables was the least important. Researchers can improve prediction scores for their study data by choosing VIP scores based on stability variable selection over the other approaches when the number of variables is small to modest, and by increasing the number of samples even moderately. When the number of variables is high and there is block correlation amongst the significant variables (i.e., true biomarkers), the FDR-adjusted Student t test performed best. The R package BioMark is an easy-to-use open-source program for variable selection that had excellent performance characteristics for the purposes of this study.

  7. Spline-based self-controlled case series method.

    PubMed

    Ghebremichael-Weldeselassie, Yonas; Whitaker, Heather J; Farrington, C Paddy

    2017-08-30

    The self-controlled case series (SCCS) method is an alternative to study designs such as cohort and case control methods and is used to investigate potential associations between the timing of vaccine or other drug exposures and adverse events. It requires information only on cases, individuals who have experienced the adverse event at least once, and automatically controls all fixed confounding variables that could modify the true association between exposure and adverse event. Time-varying confounders such as age, on the other hand, are not automatically controlled and must be allowed for explicitly. The original SCCS method used step functions to represent risk periods (windows of exposed time) and age effects. Hence, exposure risk periods and/or age groups have to be prespecified a priori, but a poor choice of group boundaries may lead to biased estimates. In this paper, we propose a nonparametric SCCS method in which both age and exposure effects are represented by spline functions at the same time. To avoid a numerical integration of the product of these two spline functions in the likelihood function of the SCCS method, we defined the first, second, and third integrals of I-splines based on the definition of integrals of M-splines. Simulation studies showed that the new method performs well. This new method is applied to data on pediatric vaccines. Copyright © 2017 John Wiley & Sons, Ltd.

  8. Efficient variational Bayesian approximation method based on subspace optimization.

    PubMed

    Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas

    2015-02-01

    Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. Indeed, the variational Bayesian problem can be seen as a functional optimization problem. The proposed method is based on the adaptation of subspace optimization methods in Hilbert spaces to the involved function space, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to get a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time.

  9. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  10. CEMS using hot wet extractive method based on DOAS

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Chi; Sun, Changku

    2011-11-01

    A continuous emission monitoring system (CEMS) using a hot wet extractive method based on differential optical absorption spectroscopy (DOAS) is designed. The developed system is applied to retrieving the concentrations of SO2 and NOx in flue gas on-site. The flue gas is carried along a heated sample line into the sample pool at a constant temperature above the dew point; in this way the adverse impact of water vapor on measurement accuracy is greatly reduced, and on-line calibration is made possible. The flue gas is then discharged from the sample pool after the measuring process is complete. The on-site applicability of the system is enhanced by using a Programmable Logic Controller (PLC) to control each valve in the system during the measurement and on-line calibration processes. The concentration retrieval method used in the system is based on nonlinear partial least squares (PLS) regression. The relationship between the known concentrations and the differential absorption features is established by the nonlinear PLS method during the on-line calibration process; the concentrations of SO2 and NOx can then be readily measured from this relationship. The retrieval method separates information from noise effectively, which improves the measuring accuracy of the system. SO2 at four different concentrations was measured by the system under laboratory conditions. The results prove that the full-scale error of this system is less than 2% FS.
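
    The calibration-and-retrieval step can be sketched with an off-the-shelf PLS regressor (scikit-learn's PLSRegression standing in for the system's nonlinear PLS implementation; the spectra and concentrations below are random placeholders for the calibration measurements):

      # Hedged sketch: PLS calibration from differential absorption spectra to concentration.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      spectra = np.random.rand(20, 300)      # 20 calibration spectra, 300 wavelength bins
      conc = np.linspace(10, 400, 20)        # known SO2 concentrations (hypothetical units)

      pls = PLSRegression(n_components=5).fit(spectra, conc)   # on-line calibration step
      unknown = np.random.rand(1, 300)                         # spectrum measured on-site
      print("retrieved concentration:", float(pls.predict(unknown)[0, 0]))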

  11. Development of redesign method of production system based on QFD

    NASA Astrophysics Data System (ADS)

    Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi

    In order to keep up with a rapidly changing market environment, rapid and flexible redesign of production systems is quite important, and a redesign support system is urgently needed. To this end, this paper proposes a redesign method for production systems based on Quality Function Deployment (QFD). The method represents the designer's intention in the form of QFD, collects experts' knowledge as “Production Method (PM) modules,” and formulates redesign guidelines as seven redesign operations, so as to support a designer in finding improvement ideas in a systematic manner. This paper also illustrates a redesign support tool for production systems that we have developed based on this method, and demonstrates its feasibility with a practical example: the production system of a contact probe. The results of this example show that a novice designer can achieve cost reductions comparable to those of veteran designers. From this result, we conclude that our redesign method is effective and feasible for supporting the redesign of a production system.

  12. Sensitivity based method for structural dynamic model improvement

    NASA Astrophysics Data System (ADS)

    Lin, R. M.; Du, H.; Ong, J. H.

    1993-05-01

    Sensitivity analysis, the study of how a structure's dynamic characteristics change with design variables, has been used to predict structural modification effects in design for many decades. In this paper, methods for calculating the eigensensitivity, frequency response function sensitivity and its modified new formulation are presented. The implementation of these sensitivity analyses to the practice of finite element model improvement using vibration test data, which is one of the major applications of experimental modal testing, is discussed. Since it is very difficult in practice to measure all the coordinates which are specified in the finite element model, sensitivity based methods become essential and are, in fact, the only appropriate methods of tackling the problem of finite element model improvement. Comparisons of these methods are made in terms of the amount of measured data required, the speed of convergence and the magnitudes of modelling errors. Also, it is identified that the inverse iteration technique can be effectively used to minimize the computational costs involved. The finite element model of a plane truss structure is used in numerical case studies to demonstrate the effectiveness of the applications of these sensitivity based methods to practical engineering structures.

  13. A decomposition method based on a model of continuous change.

    PubMed

    Horiuchi, Shiro; Wilmoth, John R; Pletcher, Scott D

    2008-11-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period.
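
    In generic notation (ours, not the paper's), the continuous-change model underlying such a decomposition reads: for a measure y = f(x_1, ..., x_n) whose covariates change along a dimension t,

      y(t_2) - y(t_1) \;=\; \sum_{i=1}^{n} c_i,
      \qquad
      c_i \;=\; \int_{t_1}^{t_2} \frac{\partial f}{\partial x_i}\,\frac{\mathrm{d}x_i(t)}{\mathrm{d}t}\,\mathrm{d}t ,

    so the covariate contributions c_i are additive by construction and are estimated numerically from data at the endpoints or, more accurately, over multiple subintervals.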

  14. Research on Assessment Method for Rural Informatization Level Based on AHP

    NASA Astrophysics Data System (ADS)

    Jing, Du; Li, Daoliang; Li, Hongwen; Zhang, Yanjun

    Based on the connotation of rural informatization and the five essential elements that affect its assessment (development environment, information infrastructure, information resources, information service system, and application of information technology in rural areas), this paper designs an indicator system for assessing the level of rural informatization. Through the AHP method, it sets up a hierarchical model for rural informatization assessment, and the weight of each indicator is calculated. The evaluation method proposed in this paper combines subjective evaluation with objective appraisal; it will help rural informatization management departments direct their work and promote rural informatization development.
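
    The AHP weighting step is standard: indicator weights come from the principal eigenvector of a pairwise-comparison matrix, checked by a consistency ratio. A hedged sketch (NumPy; the comparison values are illustrative, not the paper's survey data):

      # Hedged sketch: AHP weights from a pairwise-comparison matrix.
      import numpy as np

      A = np.array([[1.0, 3.0, 5.0],
                    [1 / 3.0, 1.0, 2.0],
                    [1 / 5.0, 1 / 2.0, 1.0]])   # illustrative importance judgments

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                               # normalized indicator weights

      n = A.shape[0]
      CI = (eigvals[k].real - n) / (n - 1)       # consistency index
      RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index table
      print("weights:", w, "consistency ratio:", CI / RI)   # CR < 0.1 is acceptable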

  15. An improved Bayesian matting method based on image statistic characteristics

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Luo, Siwei; Wu, Lina

    2015-03-01

    Image matting is an important task in image and video editing and has been studied for more than 30 years. In this paper we propose an improved interactive matting method. Starting from a coarse user-guided trimap, we first perform a color estimation based on texture and color information and use the result to refine the original trimap. Then, with the new trimap, we apply a soft matting process, which is improved Bayesian matting with smoothness constraints. Experimental results on natural images show that this method is useful, especially for images whose backgrounds have similar texture features or for which it is hard to give a precise trimap.

  16. Iterative-decreasing calibration method based on regional circle

    NASA Astrophysics Data System (ADS)

    Zhao, Hongyang

    2017-07-01

    In the field of computer vision, camera calibration is an active research topic. To address the coupled problem of calculating the distortion center and the distortion factor in camera calibration, this paper presents an iterative-decreasing calibration method based on regional circles: it uses local areas of the circle calibration plate to calculate the distortion center coordinates by iterative decreasing, then uses the distortion center to calculate the calibration factors of the local areas, and finally performs a global optimization of the distortion center and the distortion factor. The calibration results show that the proposed method achieves high calibration accuracy.

  17. A SAR Image Registration Method Based on SIFT Algorithm

    NASA Astrophysics Data System (ADS)

    Lu, W.; Yue, X.; Zhao, Y.; Han, C.

    2017-09-01

    In order to improve the stability and speed of synthetic aperture radar (SAR) image matching, an effective method is presented. Firstly, adaptive smoothing filtering based on Wallis filtering is employed for image denoising, so that noise is not amplified in the subsequent processing. Secondly, feature points are extracted by a simplified SIFT algorithm. Finally, exact matching of the images is achieved with these points. Compared with existing methods, this approach not only maintains the richness of the features but also reduces the noise in the image. The simulation results show that the proposed algorithm achieves a better matching effect.
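
    A hedged sketch of the matching stage (OpenCV; plain SIFT plus a ratio test and RANSAC stand in for the paper's Wallis-filtered, simplified-SIFT pipeline, and the file names are placeholders):

      # Hedged sketch: SIFT matching between two SAR images with outlier rejection.
      import cv2
      import numpy as np

      img1 = cv2.imread("sar_ref.png", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("sar_new.png", cv2.IMREAD_GRAYSCALE)

      sift = cv2.SIFT_create()
      kp1, des1 = sift.detectAndCompute(img1, None)
      kp2, des2 = sift.detectAndCompute(img2, None)

      matcher = cv2.BFMatcher()
      good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
              if m.distance < 0.75 * n.distance]               # Lowe's ratio test
      src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
      dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
      H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # registration transform
      print("inlier matches:", int(mask.sum()))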

  18. Microwave active filters based on coupled negative resistance method

    NASA Astrophysics Data System (ADS)

    Chang, Chi-Yang; Itoh, Tatsuo

    1990-12-01

    A novel coupled negative resistance method for building a microwave active bandpass filter is introduced. Based on this method, four microstrip line end-coupled filters were built. Two are fixed-frequency one-pole and two-pole filters, and two are tunable one-pole and two-pole filters. In order to broaden the bandwidth of the end-coupled filter, a modified end-coupled structure is proposed. Using the modified structure, an active filter with a bandwidth up to 7.5 percent was built. All of the filters show significant passband performance improvement. Specifically, the passband bandwidth was broadened by a factor of 5 to 20.

  19. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
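
    The core transform chain of a 2D cepstrum is compact; the sketch below (NumPy; a random patch stands in for a popcorn kernel image, and the mel/Mellin frequency warping of the paper is omitted) computes a plain 2D real cepstrum and keeps the low-quefrency coefficients as features:

      # Hedged sketch: 2D real cepstrum features for an image patch.
      import numpy as np

      def cepstrum_2d(patch):
          """FFT -> log magnitude -> inverse FFT; keep the real part."""
          spectrum = np.fft.fft2(patch)
          log_mag = np.log(np.abs(spectrum) + 1e-8)   # epsilon avoids log(0)
          return np.real(np.fft.ifft2(log_mag))

      patch = np.random.rand(64, 64)                  # placeholder kernel image patch
      features = cepstrum_2d(patch)[:8, :8].ravel()   # low-quefrency feature vector
      # In the paper, such feature vectors are classified with an SVM.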

  1. Improved Image Fusion Method Based on NSCT and Accelerated NMF

    PubMed Central

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. Simulated experiments show that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms. PMID:22778618

  2. Improved image fusion method based on NSCT and accelerated NMF.

    PubMed

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. Simulated experiments show that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted-NMF-based algorithms.

  3. Methods for preparing colloidal nanocrystal-based thin films

    DOEpatents

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  4. Test method on infrared system range based on space compression

    NASA Astrophysics Data System (ADS)

    Chen, Zhen-xing; Shi, Sheng-bing; Han, Fu-li

    2016-09-01

    An infrared thermal imaging system generates images from the difference in infrared radiation between an object and its background and operates in a passive mode. Range is an important performance characteristic of an infrared system and a necessary test item in its appraisal. In this paper, the aim is to carry out infrared system range tests in the laboratory. A simulated test ground is designed based on object equivalence, background simulation, object characteristic control, atmospheric attenuation characteristics, infrared jamming simulation and so on; repeatable and controllable tests are thus achieved, and the problems of the traditional field test method are solved.

  5. A Flow SPR Immunosensor Based on a Sandwich Direct Method

    PubMed Central

    Tomassetti, Mauro; Conta, Giorgia; Campanella, Luigi; Favero, Gabriele; Sanzò, Gabriella; Mazzei, Franco; Antiochia, Riccarda

    2016-01-01

    In this study, we report the development of an SPR (Surface Plasmon Resonance) immunosensor for the detection of ampicillin, operating under flow conditions. SPR sensors based on both direct (with immobilization of the antibody) and competitive (with immobilization of the antigen) methods did not allow the detection of ampicillin. Therefore, a sandwich-based sensor was developed, which showed a good linear response towards ampicillin between 10⁻³ and 10⁻¹ M, a measurement time of ≤20 min and a high selectivity both towards β-lactam antibiotics and antibiotics of different classes. PMID:27187486

  6. Endoscopic Skull Base Reconstruction: An Evolution of Materials and Methods.

    PubMed

    Sigler, Aaron C; D'Anza, Brian; Lobo, Brian C; Woodard, Troy; Recinos, Pablo F; Sindwani, Raj

    2017-03-31

    Endoscopic skull base surgery has developed rapidly over the last decade, in large part because of the expanding armamentarium of endoscopic repair techniques. This article reviews the available technologies and techniques, including vascularized and nonvascularized flaps, synthetic grafts, sealants and glues, and multilayer reconstruction. Understanding which of these repair methods is appropriate and under what circumstances is paramount to achieving success in this challenging but rewarding field. A graduated approach to skull base reconstruction is presented to provide a systematic framework to guide selection of repair technique to ensure a successful outcome while minimizing morbidity for the patient.

  7. Drug exposure in register-based research—An expert-opinion based evaluation of methods

    PubMed Central

    Taipale, Heidi; Koponen, Marjaana; Tolppanen, Anna-Maija; Hartikainen, Sirpa; Ahonen, Riitta; Tiihonen, Jari

    2017-01-01

    Background In register-based pharmacoepidemiological studies, construction of drug exposure periods from drug purchases is a major methodological challenge. Various methods have been applied but their validity is rarely evaluated. Our objective was to conduct an expert-opinion based evaluation of the correctness of drug use periods produced by different methods. Methods Drug use periods were calculated with three fixed methods: time windows, assumption of one Defined Daily Dose (DDD) per day and one tablet per day, and with PRE2DUP, which is based on modelling of individual drug purchasing behavior. The expert-opinion based evaluation was conducted with 200 randomly selected purchase histories of warfarin, bisoprolol, simvastatin, risperidone and mirtazapine in the MEDALZ-2005 cohort (28,093 persons with Alzheimer’s disease). Two experts reviewed the purchase histories and judged which methods had joined purchases correctly and gave the correct duration for each of 1000 drug exposure periods. Results The evaluated correctness of drug use periods was 70–94% for PRE2DUP and, depending on grace periods and time window lengths, 0–73% for tablet methods, 0–41% for DDD methods and 0–11% for time window methods. The highest rates of evaluated correct solutions within each method class were observed for 1 tablet per day with a 180-day grace period (TAB_1_180, 43–73%) and 1 DDD per day with a 180-day grace period (1–41%). Time window methods produced at most only 11% correct solutions. The best-performing fixed method, TAB_1_180, reached its highest correctness for simvastatin at 73% (95% CI 65–81%), whereas 89% (95% CI 84–94%) of PRE2DUP periods were judged as correct. Conclusions This study shows the inaccuracy of fixed methods and the urgent need for new data-driven methods. In the expert-opinion based evaluation, the lowest error rates were observed with the data-driven method PRE2DUP. PMID:28886089
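
    To make the fixed-method idea concrete, the sketch below (Python; the function name and purchase data are invented for illustration, and this is not the PRE2DUP algorithm) implements the "one tablet per day with grace period" rule: each purchase covers as many days as tablets dispensed, and purchases are joined into a single drug use period whenever the gap does not exceed the grace period.

    ```python
    def tablet_method(purchase_days, tablets, grace=180):
        """Fixed 'one tablet per day' exposure method: each purchase covers as
        many days as tablets dispensed; purchases are joined into one drug use
        period if the gap does not exceed the grace period."""
        periods, start, end = [], None, None
        for day, n in sorted(zip(purchase_days, tablets)):
            if start is None:              # first purchase opens a period
                start, end = day, day + n
            elif day <= end + grace:       # within grace: extend, carry stockpile
                end = max(end, day) + n
            else:                          # gap too long: close and restart
                periods.append((start, end))
                start, end = day, day + n
        if start is not None:
            periods.append((start, end))
        return periods

    # Three 100-tablet purchases; the gap before the last one exceeds the
    # 180-day grace period, so two separate use periods result.
    print(tablet_method([0, 90, 500], [100, 100, 100]))   # [(0, 200), (500, 600)]
    ```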

  8. Real reproduction and evaluation of color based on BRDF method

    NASA Astrophysics Data System (ADS)

    Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli

    2013-12-01

    It is difficult to faithfully reproduce the original color of targets under different illumination environments using traditional methods. A function that can reconstruct the reflection characteristics of every point on the surface of a target is therefore urgently required to improve the authenticity of color reproduction; this function is known as the Bidirectional Reflectance Distribution Function (BRDF). A method of color reproduction based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of the GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer (Photo Research, Inc., USA). The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch relative to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 ColorChecker patches are thus reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance measured with the PR-715, and the chromaticity coordinates in color space and the color difference between the two are analyzed. The experiments show that the average color difference and sample standard deviation between the proposed method and the traditional reflectance-based reconstruction method are 2.567 and 1.3049, respectively. Theoretical and experimental analysis indicates that color reproduction based on the BRDF describes the color information of an object in hemispherical space more completely than reflectance alone, and that the proposed method is effective and feasible for chromaticity reproduction.

  9. a Modeling Method of Fluttering Leaves Based on Point Cloud

    NASA Astrophysics Data System (ADS)

    Tang, J.; Wang, Y.; Zhao, Y.; Hao, W.; Ning, X.; Lv, K.; Shi, Z.; Zhao, M.

    2017-09-01

    Leaves falling gently or fluttering are a common phenomenon in natural scenes, and the realism of falling leaves plays an important part in the dynamic modeling of such scenes. Falling-leaf models have wide application in animation and virtual reality. We propose a novel modeling method for fluttering leaves based on point clouds. According to the shape and weight of the leaves and the wind speed, three basic falling trajectories are defined: rotational falling, rolling falling, and screw-roll falling. In addition, a parallel algorithm based on OpenMP is implemented to satisfy the real-time needs of practical applications. Experimental results demonstrate that the proposed method is amenable to the incorporation of a variety of desirable effects.

  10. Method of pectus excavatum measurement based on structured light technique

    NASA Astrophysics Data System (ADS)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej

    2009-07-01

    We present an automatic method for assessment of pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on the structured light projection method is built to capture the shape of the body surface of the patients. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained; its main steps are: normalization of trunk model orientation, cutting the model into slices, analysis of each slice shape, selection of the proper slice for the assessment of pectus excavatum, and calculation of its shape parameter. We develop a new shape parameter (I3ds) that shows high correlation with the computed tomography (CT) Haller index widely used for assessment of pectus excavatum. Clinical results and the evaluation of the developed indices are presented.

  11. Interferometric measurement method of thin film thickness based on FFT

    NASA Astrophysics Data System (ADS)

    Shuai, Gaolong; Su, Junhong; Yang, Lihong; Xu, Junqi

    2009-05-01

    The kernel of modern interferometry is to obtain the required surface shape and parameters by processing interferograms with suitable algorithms. This paper studies the basic principle of interferometry involving the 2-D FFT and proposes a new method for measuring thin-film thickness based on the FFT. A fringe interferogram of the measured thin film is obtained with a Twyman-Green interferometer, captured by a CCD and digitized by an acquisition card. Based on interferogram-processing techniques, software was developed to identify the film edges, perform regional extension and filtering, and unwrap the wrapped phase; in this way the phase distribution of the film-coated surface is obtained and the thickness of thin-film samples is measured automatically. The results indicate that the PV and RMS values of the measured film samples are 0.256 λ and 0.068 λ, respectively, and demonstrate that the new method has high precision.
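
    The FFT step above can be illustrated with a one-dimensional carrier-fringe sketch (Takeda-style Fourier fringe analysis on synthetic data; the fringe pattern, carrier frequency, and film phase profile are assumptions, not the authors' measurements): the positive carrier lobe is isolated in the spectrum, inverse-transformed, and its unwrapped phase yields the film-induced phase.

    ```python
    import numpy as np

    # Synthetic interferogram: unit-amplitude carrier fringes modulated by a
    # slowly varying "film" phase profile (toy Gaussian bump, in radians).
    N, f0 = 512, 40                                   # samples, carrier cycles
    x = np.arange(N) / N
    phase_true = 1.6 * np.exp(-((x - 0.5) / 0.2) ** 2)
    fringe = 1 + np.cos(2 * np.pi * f0 * x + phase_true)

    spec = np.fft.fft(fringe)
    k = np.fft.fftfreq(N, d=1 / N)                    # integer cycle index
    lobe = np.where((k > f0 / 2) & (k < 3 * f0 / 2), spec, 0)  # keep +f0 lobe only
    analytic = np.fft.ifft(lobe)                      # ~0.5*exp(i*(2*pi*f0*x + phase))
    phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
    phase -= phase[0] - phase_true[0]                 # align the arbitrary offset
    print("max phase error (rad):", np.abs(phase - phase_true).max())
    ```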

  12. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
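
    The reported linear relationship lends itself to a simple calibration sketch (Python; the numbers below are illustrative values consistent with the trend described, not the paper's data): fit lag time against log10 inoculum, then invert the line to estimate cell concentration from an observed lag.

    ```python
    import numpy as np

    # Illustrative calibration points: lag shortens ~60-70 min per tenfold
    # increase in inoculum, spanning ~7 h at 1 cell/ml to ~1 h at 1e6 cells/ml.
    log_inoculum = np.array([0, 1, 2, 3, 4, 5, 6])      # log10(cells/ml)
    lag_hours = np.array([7.0, 5.9, 4.8, 3.7, 2.6, 1.5, 1.0])

    slope, intercept = np.polyfit(log_inoculum, lag_hours, 1)

    def estimate_log_inoculum(observed_lag_hours):
        """Invert the fitted line: estimate log10(cells/ml) from a lag time."""
        return (observed_lag_hours - intercept) / slope

    print(f"fit: lag = {slope:.2f} * log10(N) + {intercept:.2f} h")
    print("log10(cells/ml) for a 4 h lag:", round(estimate_log_inoculum(4.0), 2))
    ```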

  14. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  15. Uncertainty-Based Design Methods for Flow-Structure Interactions

    DTIC Science & Technology

    2007-06-01

    Final report, 2/01/05-01/31/07. Title: Uncertainty-Based Design Methods for Flow-Structure Interactions. Contract number: N00014-04-1-0007. The goal of this project is to develop advanced tools for efficient simulations of flow-structure interactions that account for random excitation and uncertain input, with emphasis on realistic three-dimensional nonlinear representation of the structures of interest. This capability will set the foundation for the

  16. A model based security testing method for protocol implementation.

    PubMed

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation.

  18. A vision-based method for planar position measurement

    NASA Astrophysics Data System (ADS)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (XY-θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
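
    The core phase-to-position step can be sketched as follows (Python; the pattern period, image size, and single-frequency estimator are assumptions for illustration, and the paper's refined phase-estimation method is not reproduced): the phase of the fundamental frequency component of the 2D DFT shifts linearly with pattern translation.

    ```python
    import numpy as np

    # Synthetic image of a 2D periodic pattern translated by a known amount.
    period, N = 16, 256                 # pattern period (pixels), image size
    y, x = np.mgrid[0:N, 0:N]
    shift = (3.25, -1.5)                # ground-truth (x, y) translation, pixels
    img = (np.cos(2 * np.pi * (x - shift[0]) / period)
           + np.cos(2 * np.pi * (y - shift[1]) / period))

    F = np.fft.fft2(img)
    kf = N // period                    # bin index of the fundamental
    # cos(2*pi*(x - dx)/T) has DFT phase -2*pi*dx/T at its fundamental bin.
    dx = -np.angle(F[0, kf]) * period / (2 * np.pi)
    dy = -np.angle(F[kf, 0]) * period / (2 * np.pi)
    print(f"estimated shift: ({dx:.3f}, {dy:.3f})")  # ~(3.25, -1.50), modulo one period
    ```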

  19. A PDE-Based Fast Local Level Set Method

    NASA Astrophysics Data System (ADS)

    Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo

    1999-11-01

    We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys.79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.

  20. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as ‘TV merge’, has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem, and a TV constraint is added to the linear equation to robustly optimize the timing resolution. Moreover, to solve the computer-memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. Unlike conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution are sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies are performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, across various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns full width at half maximum (FWHM) to 2.31 ns FWHM.
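
    The linear-plus-TV formulation can be sketched on a toy problem (Python; the data model, sizes, penalty weight, and generic smoothed-TV solver below are assumptions for illustration, not the authors' TV-merge implementation): pairwise coincidence time differences give a linear system in the per-crystal offsets, and a TV penalty regularizes neighbouring crystals.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n = 40                                        # crystals (tiny toy ring)
    t_true = np.cumsum(rng.choice([0.0, 0.3], n, p=[0.9, 0.1]))  # piecewise-constant offsets

    # Each "measurement" is a noisy timing difference between two crystals.
    pairs = rng.integers(0, n, size=(600, 2))
    A = np.zeros((600, n))
    A[np.arange(600), pairs[:, 0]] = 1.0
    A[np.arange(600), pairs[:, 1]] -= 1.0
    b = A @ t_true + 0.05 * rng.standard_normal(600)

    def objective(t, lam=0.5, eps=1e-6):
        fit = 0.5 * np.sum((A @ t - b) ** 2)           # least-squares data term
        tv = np.sum(np.sqrt(np.diff(t) ** 2 + eps))    # smoothed total variation
        return fit + lam * tv

    res = minimize(objective, np.zeros(n), method="L-BFGS-B")
    t_est = res.x - res.x[0] + t_true[0]               # fix the global-offset gauge
    print("max offset error:", np.abs(t_est - t_true).max())
    ```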

  1. Expiratory model-based method to monitor ARDS disease state

    PubMed Central

    2013-01-01

    Introduction Model-based methods can be used to characterise patient-specific condition and response to mechanical ventilation (MV) during treatment for acute respiratory distress syndrome (ARDS). Conventional metrics of respiratory mechanics are based on inspiration only, neglecting data from the expiration cycle. However, it is hypothesised that expiratory data can be used to determine an alternative metric, offering another means to track patient condition and guide positive end expiratory pressure (PEEP) selection. Methods Three fully sedated, oleic acid induced ARDS piglets underwent three experimental phases. Phase 1 was a healthy-state recruitment manoeuvre. Phase 2 was a progression from a healthy state to an oleic acid induced ARDS state. Phase 3 was an ARDS-state recruitment manoeuvre. The expiratory time-constant model parameter was determined for every breathing cycle for each subject. Trends were compared to estimates of lung elastance determined by means of an end-inspiratory pause method and an integral-based method. All experimental procedures, protocols and the use of data in this study were reviewed and approved by the Ethics Committee of the University of Liege Medical Faculty. Results The overall median absolute percentage fitting error for the expiratory time-constant model across all three phases was less than 10% for each subject, indicating the capability of the model to capture the mechanics of breathing during expiration. Provided the respiratory resistance was constant, the model was able to adequately identify trends and fundamental changes in respiratory mechanics. Conclusion Overall, this is a proof-of-concept study that shows the potential of continuous monitoring of respiratory mechanics in clinical practice. Respiratory system mechanics vary with disease state development and in response to MV settings. Therefore, titrating PEEP to minimal elastance theoretically results in optimal PEEP selection. Trends matched clinical
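
    The expiratory time-constant idea admits a compact sketch (Python, synthetic data; the single-compartment model and parameter values are illustrative assumptions): passive expiratory flow decays as Q(t) = Q0·exp(−t/τ), so τ follows from a linear fit of log-flow, and with resistance R assumed constant, elastance trends follow from E ≈ R/τ.

    ```python
    import numpy as np

    # Synthetic passive-expiration flow with a known time constant and noise.
    tau_true = 0.8                               # s, illustrative value
    t = np.linspace(0.05, 2.0, 100)              # expiration time samples (s)
    rng = np.random.default_rng(0)
    flow = 0.5 * np.exp(-t / tau_true) * (1 + 0.02 * rng.standard_normal(t.size))

    slope, _ = np.polyfit(t, np.log(flow), 1)    # log-linear fit per breath
    tau_est = -1.0 / slope
    print(f"estimated expiratory time constant: {tau_est:.3f} s")   # ~0.8 s
    # With R assumed constant, E ~ R / tau tracks elastance trends breath by breath.
    ```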

  2. Robust PCA based method for discovering differentially expressed genes.

    PubMed

    Liu, Jin-Xing; Wang, Yu-Tian; Zheng, Chun-Hou; Sha, Wen; Mi, Jian-Xun; Xu, Yong

    2013-01-01

    How to identify a set of genes that are relevant to a key biological process is an important issue in current molecular biology. In this paper, we propose a novel method to discover differentially expressed genes based on robust principal component analysis (RPCA). In our method, we treat the differentially and non-differentially expressed genes as the perturbation signal S and the low-rank matrix A, respectively. The perturbation signal S can be recovered from the gene expression data by using RPCA. To discover the differentially expressed genes associated with specific biological processes or functions, the scheme is given as follows. Firstly, the matrix D of expression data is decomposed into the sum of two matrices A and S by using RPCA. Secondly, the differentially expressed genes are identified based on matrix S. Finally, the differentially expressed genes are evaluated by tools based on Gene Ontology. A large number of experiments on hypothetical and real gene expression data are also provided, and the experimental results show that our method is efficient and effective.
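
    The decomposition D = A + S can be illustrated with a compact RPCA solver (Python; a standard augmented-Lagrangian scheme in the spirit of Candès et al., offered as a sketch rather than the authors' exact solver, with toy data in place of expression matrices):

    ```python
    import numpy as np

    def soft(X, t):
        """Entrywise soft-thresholding (shrinkage) operator."""
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    def rpca(D, lam=None, mu=None, tol=1e-7, max_iter=500):
        """Decompose D into low-rank A plus sparse S via an augmented
        Lagrangian with singular-value thresholding (sketch)."""
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam
        mu = 0.25 * m * n / np.abs(D).sum() if mu is None else mu
        Y = np.zeros_like(D); A = np.zeros_like(D); S = np.zeros_like(D)
        for _ in range(max_iter):
            U, sig, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
            A = U @ np.diag(soft(sig, 1.0 / mu)) @ Vt   # low-rank update
            S = soft(D - A + Y / mu, lam / mu)          # sparse update
            R = D - A - S
            Y += mu * R                                 # dual update
            if np.linalg.norm(R) <= tol * np.linalg.norm(D):
                break
        return A, S

    # Toy check: low-rank "background" plus a few large sparse "perturbations".
    rng = np.random.default_rng(1)
    L = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40))
    E = np.zeros((60, 40)); E[rng.random((60, 40)) < 0.05] = 10.0
    A, S = rpca(L + E)
    print("support agreement:", np.mean((np.abs(S) > 1.0) == (E > 0)))
    ```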

  3. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex and their working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics, which makes noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, after which singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are extracted to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information, and a Gaussian mixture model is utilized to calculate confidence values, which directly reflect the health status. Case studies on Lorenz signals and bearing vibration datasets demonstrate the effectiveness of the proposed methods.

  4. Development of DNA-based Identification methods to track the ...

    EPA Pesticide Factsheets

    The ability to track the identity and abundance of larval fish, which are ubiquitous during spawning season, may lead to a greater understanding of fish species distributions in Great Lakes nearshore areas including early-detection of invasive fish species before they become established. However, larval fish are notoriously hard to identify using traditional morphological techniques. While DNA-based identification methods could increase the ability of aquatic resource managers to determine larval fish composition, use of these methods in aquatic surveys is still uncommon and presents many challenges. In response to this need, we have been working with the U. S. Fish and Wildlife Service to develop field and laboratory methods to facilitate the identification of larval fish using DNA-meta-barcoding. In 2012, we initiated a pilot-project to develop a workflow for conducting DNA-based identification, and compared the species composition at sites within the St. Louis River Estuary of Lake Superior using traditional identification versus DNA meta-barcoding. In 2013, we extended this research to conduct DNA-identification of fish larvae collected from multiple nearshore areas of the Great Lakes by the USFWS. The species composition of larval fish generally mirrored that of fish species known from the same areas, but was influenced by the timing and intensity of sampling. Results indicate that DNA-based identification needs only very low levels of biomass to detect pre

  6. Accurate measurement method for tube's endpoints based on machine vision

    NASA Astrophysics Data System (ADS)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or special tools and can realize online measurement.

  7. Evolutionary game theory using agent-based methods.

    PubMed

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutation rates, stochastic decisions, communication between agents, and spatial interactions require agent-based methods, where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations. Copyright © 2016 Elsevier B.V. All rights reserved.
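
    A minimal agent-based simulation in this spirit might look as follows (Python; the Prisoner's Dilemma payoffs, population size, and death-birth update rule are illustrative assumptions, not a model from the paper): each agent carries a strategy gene, and the population is evolved forward with selection and a non-vanishing mutation rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    payoff = np.array([[3.0, 0.0],      # row: cooperate vs (cooperate, defect)
                       [5.0, 1.0]])     # row: defect vs (cooperate, defect)
    N, mu, steps = 100, 0.01, 20_000
    pop = rng.integers(0, 2, N)         # strategy gene: 0 = cooperate, 1 = defect

    for _ in range(steps):
        counts = np.bincount(pop, minlength=2)
        fitness = (payoff @ counts) / N            # mean payoff of each strategy
        w = fitness[pop] / fitness[pop].sum()      # fitness-proportional birth
        child = pop[rng.choice(N, p=w)]
        if rng.random() < mu:                      # non-vanishing mutation rate
            child = rng.integers(0, 2)
        pop[rng.integers(0, N)] = child            # random death, birth replaces

    # Defection should dominate under these payoffs, up to mutation noise.
    print("final cooperator fraction:", (pop == 0).mean())
    ```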

  8. Crack Diagnosis of Wind Turbine Blades Based on EMD Method

    NASA Astrophysics Data System (ADS)

    Hong-yu, CUI; Ning, DING; Ming, HONG

    2016-11-01

    Wind turbine blades are both the source of power and the core technology of wind generators. After long periods of operation or in extreme conditions, cracks or damage can occur on the surface of the blades. If the wind generator continues to work in this state, the crack will expand until the blade breaks, which can lead to incalculable losses. Therefore, a crack diagnosis method for wind turbine blades based on EMD is proposed in this paper. Based on aerodynamics and fluid-structure coupling theory, an aero-elastic analysis of a wind turbine blade model is first performed in ANSYS Workbench. Second, based on the aero-elastic analysis and the EMD method, blade cracks are diagnosed and identified in the time and frequency domains, respectively. Finally, the blade model, strain gauges, dynamic signal acquisition and other equipment are used in an experimental study of the aero-elastic analysis and crack damage diagnosis of wind turbine blades to verify the crack diagnosis method proposed in this paper.

  9. Advances in nucleic acid-based detection methods.

    PubMed Central

    Wolcott, M J

    1992-01-01

    Laboratory techniques based on nucleic acid methods have increased in popularity over the last decade with clinical microbiologists and other laboratory scientists who are concerned with the diagnosis of infectious agents. This increase in popularity is a result primarily of advances made in nucleic acid amplification and detection techniques. Polymerase chain reaction, the original nucleic acid amplification technique, changed the way many people viewed and used nucleic acid techniques in clinical settings. After the potential of polymerase chain reaction became apparent, other methods of nucleic acid amplification and detection were developed. These alternative nucleic acid amplification methods may become serious contenders for application to routine laboratory analyses. This review presents some background information on nucleic acid analyses that might be used in clinical and anatomical laboratories and describes some recent advances in the amplification and detection of nucleic acids. PMID:1423216

  10. The professional portfolio: an evidence-based assessment method.

    PubMed

    Byrne, Michelle; Schroeter, Kathryn; Carter, Shannon; Mower, Julie

    2009-12-01

    Competency assessment is critical for a myriad of disciplines, including medicine, law, education, and nursing. Many nurse managers and educators are responsible for nursing competency assessment, and assessment results are often used for annual reviews, promotions, and satisfying accrediting agencies' requirements. Credentialing bodies continually seek methods to measure and document the continuing competence of licensees or certificants. Many methods and frameworks for continued competency assessment exist. The portfolio process is one method to validate personal and professional accomplishments in an interactive, multidimensional manner. This article illustrates how portfolios can be used to assess competence. One specialty nursing certification board's process of creating an evidence-based portfolio for recertification or reactivation of a credential is used as an example. The theoretical background, development process, implementation, and future implications may serve as a template for other organizations in developing their own portfolio models.

  11. Fatigue crack identification method based on strain amplitude changing

    NASA Astrophysics Data System (ADS)

    Guo, Tiancai; Gao, Jun; Wang, Yonghong; Xu, Youliang

    2017-09-01

    To address the difficulty of identifying the location and time of crack initiation in helicopter transmission system castings during fatigue tests, classification diagnostic criteria for similar failure modes are introduced to establish the commonality of fatigue crack initiation among castings, and an engineering method and quantitative criterion for detecting fatigue cracks based on strain amplitude change are proposed. The method was applied to the fatigue test of a gearbox housing: during the test, the system raised an alarm when the reading of the SC strain gauge reached the quantitative criterion, and subsequent inspection found a fatigue crack of less than 5 mm at the corresponding location of that gauge. The test result proves that the method can provide accurate test data for strength and life analysis.

  12. A Method for Weight Multiplicity Computation Based on Berezin Quantization

    NASA Astrophysics Data System (ADS)

    Bar-Moshe, David

    2009-09-01

    Let G be a compact semisimple Lie group and T be a maximal torus of G. We describe a method for weight multiplicity computation in unitary irreducible representations of G, based on the theory of Berezin quantization on G/T. Let Γhol(Lλ) be the reproducing kernel Hilbert space of holomorphic sections of the homogeneous line bundle Lλ over G/T associated with the highest weight λ of the irreducible representation πλ of G. The multiplicity of a weight m in πλ is computed from the functional analytic structure of the Berezin symbol of the projector in Γhol(Lλ) onto the subspace of weight m. We describe the construction of this symbol and the evaluation of the weight multiplicity as the rank of a Hermitian form. The application of this method is illustrated in a number of examples.

  13. Calibration method for misaligned catadioptric camera based on planar conic

    NASA Astrophysics Data System (ADS)

    Zhu, Qidan; Xu, Congying; Cai, Chengtao

    2013-03-01

    Building on conventional camera calibration methods, a flexible approach for calibrating a misaligned catadioptric camera from planar conics is presented. The catadioptric camera is composed of a perspective camera and a hyperboloid mirror. The projection model of the misaligned catadioptric camera is built and the projection functions are derived. With the camera parameters known, the method only requires the camera to observe a model plane containing three concentric conics on the back of the revolution mirror; the mirror pose relative to the camera can then be calculated linearly. The center of the concentric conics lies on the mirror axis. This technique is easy to use, and all computations are matrix operations in linear algebra. A closed-form solution is obtained without nonlinear optimization. Experiments are conducted on real images to evaluate the correctness and feasibility of the method.

  14. Automatic seamless image mosaic method based on SIFT features

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wen, Desheng

    2017-02-01

    An automatic seamless image mosaic method based on SIFT features is proposed. First, the scale-invariant feature extraction algorithm SIFT is used for feature extraction and matching, which attains sub-pixel precision. Then the transformation matrix H is computed with an improved PROSAC algorithm; compared with RANSAC, the computational efficiency is higher and more inliers are obtained. The matrix H is then refined with the Levenberg-Marquardt (LM) algorithm, and finally the mosaic is completed with a smoothing algorithm. The method runs automatically and avoids the disadvantages of traditional image mosaic methods under differing scale and illumination conditions. Experimental results show that the mosaic effect is excellent and the algorithm is very stable, which makes it highly valuable in practice.
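
    The matching and homography stage maps naturally onto standard OpenCV calls (a sketch under assumptions: cv2.USAC_PROSAC, where the OpenCV build provides it, stands in for the paper's improved PROSAC, with RANSAC as fallback, and LM refinement happens inside findHomography):

    ```python
    import cv2
    import numpy as np

    def estimate_homography(img1, img2):
        """SIFT matching + robust homography estimation between two images."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test
        src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        method = getattr(cv2, "USAC_PROSAC", cv2.RANSAC)   # PROSAC-style if available
        H, mask = cv2.findHomography(src, dst, method, 3.0)
        return H, int(mask.sum())                          # matrix and inlier count
    ```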

  15. Traffic speed data imputation method based on tensor completion.

    PubMed

    Ran, Bin; Tan, Huachun; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches.
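
    A compact HaLRTC-style solver can be sketched as follows (Python; a fixed ADMM penalty and equal unfolding weights are simplifying assumptions, so this is an illustration of the algorithm family rather than the paper's tuned implementation):

    ```python
    import numpy as np

    def unfold(T, k):
        return np.moveaxis(T, k, 0).reshape(T.shape[k], -1)

    def fold(M, k, shape):
        rest = [s for i, s in enumerate(shape) if i != k]
        return np.moveaxis(M.reshape([shape[k]] + rest), 0, k)

    def svt(M, tau):
        """Singular value thresholding: proximal operator of the nuclear norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

    def halrtc(T, mask, rho=0.1, iters=200, alphas=(1/3, 1/3, 1/3)):
        """ADMM on the sum of nuclear norms of the three unfoldings, with
        observed entries (mask == True) kept fixed (HaLRTC-style sketch)."""
        X = T * mask
        Ms = [np.zeros_like(X) for _ in range(3)]
        Ys = [np.zeros_like(X) for _ in range(3)]
        for _ in range(iters):
            for k in range(3):
                Ms[k] = fold(svt(unfold(X + Ys[k] / rho, k), alphas[k] / rho),
                             k, X.shape)
            X = sum(M - Y / rho for M, Y in zip(Ms, Ys)) / 3.0
            X[mask] = T[mask]                      # re-impose observed data
            for k in range(3):
                Ys[k] += rho * (X - Ms[k])
        return X

    # Toy check: a low-rank tensor observed at 30% of its entries.
    rng = np.random.default_rng(0)
    core = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
    T = np.stack([core * (1 + 0.1 * k) for k in range(10)], axis=2)
    mask = rng.random(T.shape) < 0.3
    rec = halrtc(T, mask)
    print("relative error:", np.linalg.norm(rec - T) / np.linalg.norm(T))
    ```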

  17. A Micromechanics-Based Method for Multiscale Fatigue Prediction

    NASA Astrophysics Data System (ADS)

    Moore, John Allan

    An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.

  18. Method of stereo matching based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Chaohui; An, Ping; Zhang, Zhaoyang

    2003-09-01

    A new stereo matching scheme based on image edges and a genetic algorithm (GA) is presented to improve on conventional stereo matching methods. In order to extract robust edge features for stereo matching, an infinite symmetric exponential filter (ISEF) is first applied to remove image noise, and a nonlinear Laplace operator together with the local intensity variance is then used to detect edges. Apart from the detected edges, the polarity of the edge pixels is also obtained. As an efficient search method, the genetic algorithm is applied to find the best matching pairs, and some new ideas are developed for applying genetic algorithms to stereo matching. Experimental results show that the proposed method is effective and obtains good results.

  19. Score-based resampling method for evolutionary algorithms.

    PubMed

    Park, Jonghwan; Jeon, Moongu; Pedrycz, Witold

    2008-10-01

    In this paper, a gene-handling method for evolutionary algorithms (EAs) is proposed. Such algorithms are characterized by a nonanalytic optimization process when dealing with complex systems as multiple behavioral responses occur in the realization of intelligent tasks. In generic EAs which optimize internal parameters of a given system, evaluation and selection are performed at the chromosome level. When a survived chromosome includes noneffective genes, the solution can be trapped in a local optimum during evolution, which causes an increase in the uncertainty of the results and reduces the quality of the overall system. This phenomenon also results in an unbalanced performance of partial behaviors. To alleviate this problem, a score-based resampling method is proposed, where a score function of a gene is introduced as a criterion of handling genes in each allele. The proposed method was empirically evaluated with various test functions, and the results show its effectiveness.

  20. A simple method to improve ensemble-based ozone forecasts

    NASA Astrophysics Data System (ADS)

    Pagowski, M.; Grell, G. A.; McKeen, S. A.; Dévényi, D.; Wilczak, J. M.; Bouchet, V.; Gong, W.; McHenry, J.; Peckham, S.; McQueen, J.; Moffet, R.; Tang, Y.

    2005-04-01

    Forecasts from seven air quality models and ozone data collected over the eastern USA and southern Canada during July and August 2004 are used in creating a simple method to improve ensemble-based forecasts of maximum daily 1-hr and 8-hr averaged ozone concentrations. The method minimizes least-square error of ensemble forecasts by assigning weights for its members. The real-time ozone (O3) forecasts from this ensemble of models are statistically evaluated against the ozone observations collected for the AIRNow database comprising more than 350 stations. Application of this method is shown to significantly improve overall statistics (e.g., bias, root mean square error, and index of agreement) of the weighted ensemble compared to the averaged ensemble or any individual ensemble member. If a sufficient number of observations is available, we recommend that weights be calculated daily; if not, a longer training phase will still provide a positive benefit.
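
    The weighting step reduces to a small regression problem, sketched below (Python; synthetic forecasts stand in for the seven models, and a nonnegativity constraint on the weights is an added assumption, solved here with scipy's nnls):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    obs = 60 + 15 * rng.random(200)          # "observed" ozone over training days, ppb
    # Seven ensemble members with different error levels around the truth.
    members = obs[:, None] + rng.standard_normal((200, 7)) * np.array([3, 5, 8, 4, 10, 6, 7])

    weights, _ = nnls(members, obs)          # least-squares member weights
    forecast = members @ weights
    print("weights:", np.round(weights, 3))
    print("RMSE, weighted ensemble:", np.sqrt(np.mean((forecast - obs) ** 2)))
    print("RMSE, plain average    :", np.sqrt(np.mean((members.mean(axis=1) - obs) ** 2)))
    ```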

  1. Tilt correction method of text image based on wavelet pyramid

    NASA Astrophysics Data System (ADS)

    Yu, Mingyang; Zhu, Qiguo

    2017-04-01

    Text images captured by cameras may be tilted and distorted, which is unfavorable for document character recognition. Therefore, a method of text image tilt correction based on the wavelet pyramid is proposed in this paper. The first step is to convert the text images captured by cameras into binary images. After binarization, the images are decomposed by the wavelet transform to achieve noise reduction, enhancement, and compression. Edges are then detected with the Canny operator, and straight lines are extracted by the Radon transform. In the final step, the method calculates the intersections of the straight lines and obtains the corrected text image from the intersection points via a perspective transformation. Experimental results show that this method corrects text images accurately.
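
    The skew-detection step can be sketched with OpenCV (a sketch under assumptions: the Hough line transform stands in for the Radon transform named above, thresholds are illustrative, and the final perspective correction from line intersections is omitted):

    ```python
    import cv2
    import numpy as np

    def deskew(binary_img):
        """Estimate the dominant text-line angle from edges and rotate it out."""
        edges = cv2.Canny(binary_img, 50, 150)
        lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=120)
        if lines is None:
            return binary_img
        # Near-horizontal text lines have normal angle theta close to pi/2,
        # so (theta - pi/2) is the skew; take the median over such lines.
        skews = [theta - np.pi / 2 for rho, theta in lines[:, 0]
                 if abs(theta - np.pi / 2) < np.pi / 6]
        if not skews:
            return binary_img
        angle_deg = np.degrees(np.median(skews))
        h, w = binary_img.shape[:2]
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
        return cv2.warpAffine(binary_img, M, (w, h), borderValue=255)  # white page
    ```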

  2. Gradient-based image recovery methods from incomplete Fourier measurements.

    PubMed

    Patel, Vishal M; Maleh, Ray; Gilbert, Anna C; Chellappa, Rama

    2012-01-01

    A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-square optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.
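
    The second stage, recovering the image from its two difference fields, is a discrete Poisson problem; a minimal FFT-based solver is sketched below (periodic boundaries are an assumption made for brevity; the paper also describes a least-squares variant):

    ```python
    import numpy as np

    def poisson_recover(gx, gy):
        """Recover u (up to a constant) from forward differences gx, gy by
        solving the periodic discrete Poisson equation lap(u) = div(g)."""
        H, W = gx.shape
        # Backward differences of the gradients give the discrete Laplacian.
        div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
        wx = 2 * np.pi * np.fft.fftfreq(W)
        wy = 2 * np.pi * np.fft.fftfreq(H)
        denom = (2 * np.cos(wx) - 2)[None, :] + (2 * np.cos(wy) - 2)[:, None]
        denom[0, 0] = 1.0                       # the mean of u is unobservable
        U = np.fft.fft2(div) / denom
        U[0, 0] = 0.0
        return np.real(np.fft.ifft2(U))

    # Round trip on a toy piecewise-constant image.
    img = np.kron(np.arange(8.0).reshape(4, 2), np.ones((16, 32)))
    gx = np.roll(img, -1, axis=1) - img         # periodic forward differences
    gy = np.roll(img, -1, axis=0) - img
    rec = poisson_recover(gx, gy)
    print("max error up to a constant:",
          np.abs((rec - rec.mean()) - (img - img.mean())).max())
    ```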

  3. An efficient neural network based method for medical image segmentation.

    PubMed

    Torbati, Nima; Ayatollahi, Ahmad; Kermani, Ali

    2014-01-01

    The aim of this research is to propose a new neural network based method for medical image segmentation. Firstly, a modified self-organizing map (SOM) network, named moving average SOM (MA-SOM), is utilized to segment medical images. After the initial segmentation stage, a merging process is designed to connect the objects of a joint cluster together. A two-dimensional (2D) discrete wavelet transform (DWT) is used to build the input feature space of the network. The experimental results show that MA-SOM is robust to noise and it determines the input image pattern properly. The segmentation results of breast ultrasound images (BUS) demonstrate that there is a significant correlation between the tumor region selected by a physician and the tumor region segmented by our proposed method. In addition, the proposed method segments X-ray computerized tomography (CT) and magnetic resonance (MR) head images much better than the incremental supervised neural network (ISNN) and SOM-based methods.

  4. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs offer disabled people a means to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC)-based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached 100%. The experimental results show that the proposed method attains good frequency resolution. In most situations, the recognition accuracy was higher than canonical correlation analysis, a typical method for multi-channel SSVEP signal processing. A virtual keyboard was also successfully controlled by different subjects in an unshielded environment, which proves the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
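
    The frequency-scoring idea can be sketched with spectral MUSIC on a synthetic single-channel trace (Python; the embedding dimension, noise level, and candidate set are illustrative assumptions, and the paper's multi-channel pipeline is not reproduced):

    ```python
    import numpy as np

    fs, f_true, M = 250, 12.0, 32            # sample rate, target SSVEP freq, embedding
    t = np.arange(2 * fs) / fs               # 2-second epoch, as in the paper
    rng = np.random.default_rng(0)
    x = np.sin(2 * np.pi * f_true * t) + 0.8 * rng.standard_normal(t.size)

    # Covariance of time-embedded snapshots; noise subspace from its eigenvectors.
    snaps = np.lib.stride_tricks.sliding_window_view(x, M)
    R = snaps.T @ snaps / snaps.shape[0]
    eigval, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
    En = eigvec[:, :-2]                      # drop 2 signal dims (one real sinusoid)

    def music(f):
        """MUSIC pseudospectrum: large when the steering vector at f leaves
        the noise subspace."""
        a = np.exp(2j * np.pi * f / fs * np.arange(M))
        return 1.0 / np.linalg.norm(En.conj().T @ a) ** 2

    cands = [8.0, 10.0, 12.0, 15.0]          # candidate stimulus frequencies
    print("detected:", cands[int(np.argmax([music(f) for f in cands]))], "Hz")
    ```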

  5. A novel duplicate images detection method based on PLSA model

    NASA Astrophysics Data System (ADS)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results; detecting and clustering duplicate images together facilitates users' viewing. A novel method is presented to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space, and a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to feature-level alterations of the images. Experiments demonstrate the effectiveness of the method.

  7. Quench Protection System based on Active Power Method

    NASA Astrophysics Data System (ADS)

    Nanato, Nozomu

    In superconducting coils, local and excessive Joule heating may damage the superconducting windings when a quench occurs; it is therefore essential that a quench is detected quickly and precisely so that the coils can be safely discharged. We have presented a quench protection system based on the active power method, which detects a quench by measuring the instantaneous active power generated in a superconducting coil. A protection system based on this method is robust against inductive voltage and noise, which may otherwise cause misdetection of a quench. However, while the proposed system is useful for a single coil, it is vulnerable in magnetically coupled multi-coil configurations, such as high-field superconducting magnets, because it cannot avoid misdetection caused by the mutual inductive voltage from the other coils. This paper presents a method to improve the characteristics of the active power method by cancelling the mutual inductive voltage. Experimental results of quench protection for small Bi2223 coils show that the proposed system is useful for magnetically coupled coils.
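
    The cancellation idea can be shown with synthetic signals (Python; inductance values, currents, and the quench-resistance profile are illustrative assumptions): subtracting the self- and mutual-inductive terms from the terminal voltage leaves the resistive component, whose product with the current is the active power used for detection.

    ```python
    import numpy as np

    fs = 10_000                                  # sample rate, Hz
    t = np.arange(0, 1.0, 1 / fs)
    L1, M12 = 0.5, 0.1                           # self and mutual inductance, H (assumed known)
    i1 = 10 * t                                  # coil-1 ramp current, A
    i2 = 5 * np.sin(2 * np.pi * 2 * t)           # current in a coupled neighbour, A
    r_quench = np.where(t > 0.6, 0.02 * (t - 0.6), 0.0)   # resistance growing after a quench

    # Terminal voltage = self-inductive + mutual-inductive + resistive terms.
    v1 = L1 * np.gradient(i1, t) + M12 * np.gradient(i2, t) + r_quench * i1

    # Cancel both inductive terms, then form the instantaneous active power.
    v_res = v1 - L1 * np.gradient(i1, t) - M12 * np.gradient(i2, t)
    p_active = v_res * i1
    print("quench flagged at t =", t[np.argmax(p_active > 1e-3)], "s")   # ~0.6 s
    ```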

  8. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
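
    The weighting-and-refinement pipeline can be sketched compactly (Python; the gradient and well-exposedness measures, the box-filter guided filter, and all parameters below are illustrative choices in the spirit of the description, not the authors' exact settings):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(I, p, r=8, eps=1e-3):
        """Box-filter guided filter: the output is locally a linear transform
        of the guidance I, so p is smoothed while following I's edges."""
        k = 2 * r + 1
        mI, mp = uniform_filter(I, k), uniform_filter(p, k)
        corrI, corrIp = uniform_filter(I * I, k), uniform_filter(I * p, k)
        a = (corrIp - mI * mp) / (corrI - mI * mI + eps)
        b = mp - a * mI
        return uniform_filter(a, k) * I + uniform_filter(b, k)

    def fuse(images, sigma=0.2):
        """Fuse grayscale exposures in [0, 1]: gradient x well-exposedness
        weights, guided-filter refinement (source as guidance), weighted sum."""
        grads = [np.hypot(*np.gradient(im)) + 1e-12 for im in images]
        expos = [np.exp(-((im - 0.5) ** 2) / (2 * sigma ** 2)) for im in images]
        w = [guided_filter(im, g * e) for im, g, e in zip(images, grads, expos)]
        w = np.clip(np.stack(w), 1e-12, None)
        w /= w.sum(axis=0)                       # normalize weights per pixel
        return (w * np.stack(images)).sum(axis=0)
    ```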

  9. Novel Cylinder Movement Modeling Method Based on Aerodynamics

    NASA Astrophysics Data System (ADS)

    Zhang, Jian-Qing; Hu, Xiao-Mei; Kang, Jin-Sheng; Xiong, Feng; Zeng, Ning

    2017-09-01

    Cylinder movement is affected by multiple factors, and it is difficult to establish an accurate movement model of the cylinder. In order to improve the reliability of production line design and to speed up production line debugging, a novel cylinder movement modeling method based on aerodynamics is proposed. Kinetic theory, thermodynamic theory and kinematics are applied and integrated, and the various factors which affect the movement characteristics of the cylinder are considered. Based on the proposed mathematical model of cylinder movement, combined simulation software for cylinder movement based on Visual Studio and Visual Components (3D Create) is developed to calculate the velocity, acceleration and movement time of the cylinders during the running of the assembly line. Comparison of the cylinders' movement times under different intake air and displacement conditions shows that the aerodynamics-based mathematical model of cylinder movement is accurate, with a degree of fit of 0.9846, which proves the effectiveness of the combined simulation software. With this cylinder movement modeling method, accurate takt values and debug parameters can be calculated as a reference for the designers and debuggers of cylinder-driven assembly lines.

  10. A refined wideband acoustical holography based on equivalent source method

    PubMed Central

    Ping, Guoli; Chu, Zhigang; Xu, Zhongming; Shen, Linbang

    2017-01-01

    This paper is concerned with acoustical engineering and mathematical physics problem for the near-field acoustical holography based on equivalent source method (ESM-based NAH). An important mathematical physics problem in ESM-based NAH is to solve the equivalent source strength, which has multiple solving algorithms, such as Tikhonov regularization ESM (TRESM), iterative weighted ESM (IWESM) and steepest descent iteration ESM (SDIESM). To explore a new solving algorithm which can achieve better reconstruction performance in wide frequency band, a refined wideband acoustical holography (RWAH) is proposed. RWAH adopts IWESM below a transition frequency and switches to SDIESM above that transition frequency, and the principal components of input data in RWAH have been truncated. Further, the superiority of RWAH is verified by the comparison of comprehensive performance of TRESM, IWESM, SDIESM and RWAH. Finally, the experiments are conducted, confirming that RWAH can achieve better reconstruction performance in wide frequency band. PMID:28266531

  12. Research on ghost imaging method based on wavelet transform

    NASA Astrophysics Data System (ADS)

    Li, Mengying; He, Ruiqing; Chen, Qian; Gu, Guohua; Zhang, Wenwen

    2017-09-01

    We present an algorithm for extracting the wavelet coefficients of an object in a ghost imaging (GI) system. By modifying the projected random patterns using a series of templates, wavelet transform GI (WTGI) can directly measure the high-frequency components of the wavelet coefficients without needing the original image. In this study, we theoretically and experimentally perform high-frequency wavelet coefficient detection for an arrow and the letter A based on GI and WTGI. Compared with the traditional method, the proposed algorithm significantly improves the quality of the wavelet coefficient images in both cases. The special advantages of GI make wavelet coefficient detection based on WTGI very valuable in real applications.

  13. A refined wideband acoustical holography based on equivalent source method.

    PubMed

    Ping, Guoli; Chu, Zhigang; Xu, Zhongming; Shen, Linbang

    2017-03-07

    This paper addresses an acoustical engineering and mathematical physics problem in near-field acoustical holography based on the equivalent source method (ESM-based NAH). A central problem in ESM-based NAH is solving for the equivalent source strength, for which multiple algorithms exist, such as Tikhonov regularization ESM (TRESM), iterative weighted ESM (IWESM) and steepest descent iteration ESM (SDIESM). To achieve better reconstruction performance over a wide frequency band, a refined wideband acoustical holography (RWAH) is proposed. RWAH adopts IWESM below a transition frequency and switches to SDIESM above that frequency, and the principal components of the input data in RWAH are truncated. The superiority of RWAH is verified by comparing the comprehensive performance of TRESM, IWESM, SDIESM and RWAH. Finally, experiments confirm that RWAH achieves better reconstruction performance over a wide frequency band.

  14. Sparse coding based feature representation method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring a convex optimization problem to be solved in order to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary from the sub-bands extracted from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources, and the spatial information of the HSI data is not included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with linear support vector machine (SVM) and composite kernels SVM (CKSVM) classifiers to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further
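
    The soft-threshold encoding step has a simple closed form; the sketch below applies it to a hypothetical random dictionary and pixel spectrum (the sub-band construction of the actual framework is not reproduced).

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft thresholding: shrink values toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

# Hypothetical dictionary D (atoms in columns) and pixel spectrum s.
rng = np.random.default_rng(1)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
s = rng.standard_normal(64)

# Sparse code: correlate the spectrum with the dictionary, then shrink.
code = soft_threshold(D.T @ s, t=0.5)
print('nonzeros:', np.count_nonzero(code), 'of', code.size)
```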

  15. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.

  16. Sphere-based calibration method for trinocular vision sensor

    NASA Astrophysics Data System (ADS)

    Lu, Rui; Shao, Mingwei

    2017-03-01

    A new method to calibrate a trinocular vision sensor is proposed, and two main tasks are accomplished in this paper: determining the transformation matrix between each pair of cameras and determining the trifocal tensor of the trinocular vision sensor. A flexible sphere target with several spherical circles is designed. Owing to the isotropy of a sphere, the trifocal tensor of the three cameras can be determined exactly from the features on the sphere target. The fundamental matrix between each pair of cameras can then be obtained, and compatible rotation and translation matrices can easily be deduced based on the singular value decomposition of the fundamental matrix. In the proposed calibration method, image points are not required to be in one-to-one correspondence: once the image points located on the same feature are obtained, the transformation matrix between each pair of cameras and the trifocal tensor of the trinocular vision sensor can be determined. Experimental results show that the proposed calibration method obtains precise results, in both measurement and matching. The root mean square error of distance is 0.026 mm for a field of view of about 200×200 mm, and the feature matching across the three images is strict. As the projection of a sphere does not depend on its orientation, the calibration method is robust and easy to operate. Moreover, our calibration method also provides a new approach to obtaining the trifocal tensor.
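
    For the decomposition step, a standard recipe recovers the rotation candidates and the translation direction from the SVD of the essential matrix E = [t]×R (obtained from the fundamental matrix and the camera intrinsics); the sketch below exercises it on a hypothetical ground-truth pose rather than calibrated data.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def decompose_essential(E):
    """Two rotation candidates and the translation direction (up to
    sign and scale) from the SVD of an essential matrix."""
    U, _, Vt = np.linalg.svd(E)
    W = np.array([[0., -1., 0.],
                  [1., 0., 0.],
                  [0., 0., 1.]])
    cands = []
    for Wi in (W, W.T):
        R = U @ Wi @ Vt
        if np.linalg.det(R) < 0:     # keep proper rotations only
            R = -R
        cands.append(R)
    return cands[0], cands[1], U[:, 2]

# Hypothetical ground-truth pose: small rotation about z plus a translation.
th = 0.1
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([1.0, 0.2, 0.0])
E = skew(t_true) @ R_true            # essential matrix E = [t]x R

R1, R2, t_dir = decompose_essential(E)
print(np.allclose(R1, R_true) or np.allclose(R2, R_true))
```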

  17. A Blade Tip Timing Method Based on a Microwave Sensor.

    PubMed

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-05-11

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. The method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an automatic gain control circuit to reduce the effect of tip clearance changes. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy.

  18. A Blade Tip Timing Method Based on a Microwave Sensor

    PubMed Central

    Zhang, Jilong; Duan, Fajie; Niu, Guangyue; Jiang, Jiajia; Li, Jie

    2017-01-01

    Blade tip timing is an effective method for blade vibration measurements in turbomachinery. The method is increasing in popularity because it is non-intrusive and has several advantages over the conventional strain gauge method. Different kinds of sensors have been developed for blade tip timing, including optical, eddy current and capacitance sensors. However, these sensors are unsuitable in environments with contaminants or high temperatures. Microwave sensors offer a promising solution to overcome these limitations. In this article, a microwave sensor-based blade tip timing measurement system is proposed. A patch antenna probe is used to transmit and receive the microwave signals. The signal model and processing method are analyzed. A zero intermediate frequency structure is employed to maintain timing accuracy and dynamic performance, and the received signal can also be used to measure tip clearance. The timing method uses the rising and falling edges of the signal and an automatic gain control circuit to reduce the effect of tip clearance changes. To validate the accuracy of the system, it is compared experimentally with a fiber optic tip timing system. The results show that the microwave tip timing system achieves good accuracy. PMID:28492469
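
    The essence of tip timing is converting the difference between expected and measured blade arrival times into a tip deflection via the blade radius and rotor speed; the sketch below illustrates that conversion with hypothetical numbers (it is not the authors' measurement system).

```python
import numpy as np

# Hypothetical rotor parameters.
radius = 0.3                         # blade tip radius, m
rpm = 3000.0
omega = 2 * np.pi * rpm / 60.0       # angular speed, rad/s

# Expected vs. measured arrival times of one blade at the sensor (s).
t_expected = np.array([0.0200000, 0.0400000, 0.0600000])
t_measured = np.array([0.0200012, 0.0399995, 0.0600008])

# Tip deflection: the tip moved r * omega * dt along the circumference.
deflection = radius * omega * (t_measured - t_expected)
print(deflection * 1e3, 'mm')
```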

  19. TRUST-TECH based Methods for Optimization and Learning

    NASA Astrophysics Data System (ADS)

    Reddy, Chandan K.

    2007-12-01

    Many problems that arise in the machine learning domain deal with nonlinearity and quite often demand global optimal solutions rather than local optimal ones. Optimization problems are inherent in machine learning algorithms, and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the solution obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are repeated alternately in the solution space to improve the quality of the solutions. Our methods were tested on both synthetic and real datasets, and the advantages of using this novel framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also gives practitioners the flexibility to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms, such as evolutionary algorithms and smoothing algorithms, are also studied, and frameworks for combining these methods with TRUST-TECH have been proposed and evaluated on several test systems.

  20. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  1. Variation block-based genomics method for crop plants

    PubMed Central

    2014-01-01

    Background In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which occurred by crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. Results We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. Conclusions We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding. PMID:24929792

  2. Numeric character recognition method based on fractal dimension

    NASA Astrophysics Data System (ADS)

    He, Tao; Xie, Yulang; Chen, Jiuyin; Cheng, Longfei; Yuan, Ye

    2013-10-01

    An image processing method based on fractal dimension is proposed in this paper. The method first uses the fractal dimension to process the character images, and raises the analysis of each individual grid to an analysis of the interrelation between grids in order to eliminate interference. The box-counting method is commonly used for calculating the fractal dimension of a fractal: boxes of side length r (topological dimension d) are used to cover the image. Because there are various levels of cavities and cracks, some boxes are empty and some cover a part of the fractal image; the latter are called non-empty boxes (here, boxes in which the average gray level of the covered part is larger than a certain threshold). The number of non-empty boxes is recorded, analyzed and used in the calculation. The method is applied to images of polluted characters: it removes ink and scratches around the contour of the characters while retaining the basic contour, after which the characters can be recognized by template matching. In a computer simulation experiment on polluted character recognition, this method recognized the polluted characters quickly, improving recognition accuracy.
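
    A minimal box-counting sketch for a binary image follows; the grid sizes and the diagonal-line test image are hypothetical choices, and the paper's gray-level thresholding and inter-grid interference analysis are not reproduced.

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the box-counting dimension of a binary image: count the
    non-empty r x r boxes for several r, then fit log N(r) vs log r."""
    counts = []
    for r in sizes:
        h, w = img.shape[0] // r * r, img.shape[1] // r * r
        blocks = img[:h, :w].reshape(h // r, r, w // r, r)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Hypothetical test image: a diagonal line, whose dimension is close to 1.
img = np.zeros((256, 256), dtype=bool)
idx = np.arange(256)
img[idx, idx] = True
print(round(box_counting_dimension(img), 2))
```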

  3. Mode separation of Lamb waves based on dispersion compensation method.

    PubMed

    Xu, Kailiang; Ta, Dean; Moilanen, Petro; Wang, Weiqi

    2012-04-01

    Ultrasonic Lamb modes typically propagate as a combination of multiple dispersive wave packets. The frequency components of each mode are distributed widely in the time domain due to dispersion, and it is very challenging to separate individual modes by traditional signal processing methods. In the present study, a dispersion compensation method is proposed for the purpose of mode separation. This numerical method compensates, i.e., compresses, the individual dispersive waveforms into temporal pulses, which thereby become nearly non-overlapping in time and frequency and can thus be extracted individually by rectangular time windows. It is further shown that dispersion compensation also provides a method for predicting the plate thickness. Finally, based on the reversibility of the numerical compensation method, an artificial dispersion technique is used to restore the original waveform of each mode from the separated compensated pulse. The performance of the compensation-separation technique was evaluated by processing synthetic and experimental signals consisting of multiple Lamb modes with high dispersion. Individual modes were extracted in good accordance with the original waveforms and theoretical predictions.
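
    Dispersion compensation multiplies the signal spectrum by a phase that undoes the propagation term exp(-i k(ω) d); a toy sketch with a hypothetical quadratic dispersion relation follows (actual Lamb-mode dispersion curves are not reproduced).

```python
import numpy as np

# Toy dispersive propagation and compensation. The wavenumber curve k(w)
# below is a hypothetical stand-in for a Lamb-mode dispersion relation.
fs = 1e6
t = np.arange(4096) / fs
pulse = np.exp(-((t - 2e-4) / 2e-5) ** 2) * np.cos(2 * np.pi * 1e5 * t)

w = 2 * np.pi * np.fft.rfftfreq(t.size, 1 / fs)
k = w / 3000.0 + 1e-9 * w ** 2       # hypothetical dispersion relation
d = 0.5                              # propagation distance, m

dispersed = np.fft.irfft(np.fft.rfft(pulse) * np.exp(-1j * k * d),
                         n=t.size)

# Compensation: multiply by the conjugate phase to re-compress the packet.
compensated = np.fft.irfft(np.fft.rfft(dispersed) * np.exp(1j * k * d),
                           n=t.size)
print('max residual:', np.abs(compensated - pulse).max())
```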

  4. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects. PMID:24453925

  5. A Fatigue Life Prediction Method Based on Strain Intensity Factor.

    PubMed

    Zhang, Wei; Liu, Huili; Wang, Qiang; He, Jingjing

    2017-06-22

    In this paper, a strain-intensity-factor-based method is proposed to calculate the fatigue crack growth under the fully reversed loading condition. A theoretical analysis is conducted in detail to demonstrate that the strain intensity factor is likely to be a better driving parameter correlated with the fatigue crack growth rate than the stress intensity factor (SIF), especially for some metallic materials (such as 316 austenitic stainless steel) in the low cycle fatigue region with negative stress ratios R (typically R = -1). For fully reversed cyclic loading, the constitutive relation between stress and strain should follow the cyclic stress-strain curve rather than the monotonic one (it is a nonlinear function even within the elastic region). Based on that, a transformation algorithm between the SIF and the strain intensity factor is developed, and the fatigue crack growth rate testing data of 316 austenitic stainless steel and AZ31 magnesium alloy are employed to validate the proposed model. It is clearly observed that the scatter band width of crack growth rate vs. strain intensity factor is narrower than that vs. the SIF for different load ranges (which indicates that the strain intensity factor is a better parameter than the stress intensity factor under the fully reversed load condition). It is also shown that the crack growth rate is not uniquely determined by the SIF range even under the same R, but is also influenced by the maximum loading. Additionally, the fatigue life data (strain-life curve) of smooth cylindrical specimens are also used for further comparison, where a modified Paris equation and the equivalent initial flaw size (EIFS) are involved. The results of the proposed method have a better agreement with the experimental data compared to the stress intensity factor based method. Overall, the strain intensity factor method shows a fairly good ability in calculating the fatigue crack propagation, especially for the fully reversed cyclic loading

  6. A Fatigue Life Prediction Method Based on Strain Intensity Factor

    PubMed Central

    Zhang, Wei; Liu, Huili; Wang, Qiang; He, Jingjing

    2017-01-01

    In this paper, a strain-intensity-factor-based method is proposed to calculate the fatigue crack growth under the fully reversed loading condition. A theoretical analysis is conducted in detail to demonstrate that the strain intensity factor is likely to be a better driving parameter correlated with the fatigue crack growth rate than the stress intensity factor (SIF), especially for some metallic materials (such as 316 austenitic stainless steel) in the low cycle fatigue region with negative stress ratios R (typically R = −1). For fully reversed cyclic loading, the constitutive relation between stress and strain should follow the cyclic stress-strain curve rather than the monotonic one (it is a nonlinear function even within the elastic region). Based on that, a transformation algorithm between the SIF and the strain intensity factor is developed, and the fatigue crack growth rate testing data of 316 austenitic stainless steel and AZ31 magnesium alloy are employed to validate the proposed model. It is clearly observed that the scatter band width of crack growth rate vs. strain intensity factor is narrower than that vs. the SIF for different load ranges (which indicates that the strain intensity factor is a better parameter than the stress intensity factor under the fully reversed load condition). It is also shown that the crack growth rate is not uniquely determined by the SIF range even under the same R, but is also influenced by the maximum loading. Additionally, the fatigue life data (strain-life curve) of smooth cylindrical specimens are also used for further comparison, where a modified Paris equation and the equivalent initial flaw size (EIFS) are involved. The results of the proposed method have a better agreement with the experimental data compared to the stress intensity factor based method. Overall, the strain intensity factor method shows a fairly good ability in calculating the fatigue crack propagation, especially for the fully reversed cyclic
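
    The paper's strain-intensity formulation is not reproduced here; for orientation, a conventional stress-based crack-growth life estimate integrates a Paris-type law da/dN = C(ΔK)^m from an initial (EIFS-like) flaw size to a critical size, as in this hypothetical sketch.

```python
import numpy as np

# Hypothetical Paris-law constants and geometry (center crack, Y = 1).
C, m = 1e-11, 3.0            # da/dN = C * (dK)^m, dK in MPa*sqrt(m)
dS = 100.0                   # stress range, MPa
a0, ac = 0.5e-3, 10e-3       # initial and critical crack sizes, m

def dK(a):
    """Stress intensity factor range dK = Y * dS * sqrt(pi * a), Y = 1."""
    return dS * np.sqrt(np.pi * a)

# Cycles to failure: N = integral of da / (C * dK(a)^m) from a0 to ac.
a = np.linspace(a0, ac, 20000)
N = np.trapz(1.0 / (C * dK(a) ** m), a)
print(f'{N:.3e} cycles')
```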

  7. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in achieving semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date for validating archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method using a software tool we implemented called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors occur in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes.

  8. SVM Method used to Study Gender Differences Based on Microelement

    NASA Astrophysics Data System (ADS)

    Chun, Yang; Yuan, Liu; Jun, Du; Bin, Tang

    [Objective] The intelligent SVM algorithm is used to study gender differences based on microelement data, providing a reference for the use of microelements in healthy people, such as technical support for the investigation of cases. [Method] Our long-term test results on the hair microelements of healthy people were consolidated. A support vector machine (SVM) was used to build a classification model of male and female based on the microelement data. The radial basis function (RBF) was adopted as the kernel function of the SVM, and the model adjusts C and σ to build the optimal classifier. [Result] Healthy men and women differ considerably in manganese, cadmium and nickel. The SVM-based classification model of microelements can separate male and female; the correct classification ratios were 81.71% and 66.47% for SVM models built on selections of 7 and 3 test variables, respectively. [Conclusion] The SVM-based classification model of microelement data can distinguish male from female.
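
    A minimal sketch of fitting an RBF-kernel SVM and tuning C and σ (via gamma = 1/(2σ²)) with a grid search; the data here are random stand-ins, not the hair-microelement measurements.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: 7 microelement concentrations per subject, binary label.
rng = np.random.default_rng(0)
X = rng.random((200, 7))
y = (X[:, 0] + 0.3 * rng.standard_normal(200) > 0.5).astype(int)

# RBF-kernel SVM; gamma plays the role of 1 / (2 * sigma^2).
grid = GridSearchCV(
    make_pipeline(StandardScaler(), SVC(kernel='rbf')),
    param_grid={'svc__C': [0.1, 1, 10, 100],
                'svc__gamma': [0.01, 0.1, 1, 10]},
    cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```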

  9. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
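
    The additive Holt-Winters prediction component can be reproduced with standard tooling; a minimal sketch using statsmodels on a synthetic error series (the hybrid propagator's actual residual series is not available here).

```python
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic stand-in for the missing-dynamics series the prediction
# technique models: a slow trend plus a periodic term plus noise.
rng = np.random.default_rng(0)
n, period = 200, 24
t = np.arange(n)
series = (0.01 * t + 0.5 * np.sin(2 * np.pi * t / period)
          + 0.05 * rng.standard_normal(n))

# Additive Holt-Winters: level + additive trend + additive seasonality.
fit = ExponentialSmoothing(series, trend='add', seasonal='add',
                           seasonal_periods=period).fit()
print(fit.forecast(5))    # predicted corrections for the next 5 epochs
```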

  10. A supervoxel-based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature, and the geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate, and a 3D level set is then used to obtain a smooth surface from the graph cut output. The performance of the proposed segmentation algorithm was evaluated against manual segmentation ground truth. Experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.

  11. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the application requirements of digitalized document images, using digital cameras to digitalize document images has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaiced, and the best mosaiced image is selected according to the OCR recognition results. In the best mosaiced image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction effectively.

  12. Erythrocyte shape classification using integral-geometry-based methods.

    PubMed

    Gual-Arnau, X; Herold-García, S; Simó, A

    2015-07-01

    Erythrocyte shape deformations are related to several important illnesses. In this paper, we focus on one of the most important: sickle cell disease. This disease causes hardening or polymerization of the hemoglobin contained in the erythrocytes. The study of this process using digital images of peripheral blood smears can offer useful results for the clinical diagnosis of these illnesses. In particular, it would be very valuable to find a rapid and reproducible automatic classification method to quantify the number of deformed cells and thereby gauge the severity of the illness. In this paper, we show the good results obtained in the automatic classification of erythrocytes into normal cells, sickle cells, and cells with other deformations, using a set of functions based on integral-geometry methods, an active contour-based segmentation method, and a k-NN classification algorithm. Blood specimens were obtained from patients with sickle cell disease. Seventeen peripheral blood smears were obtained for the study, and 45 images of different fields were acquired. A specialist selected the cells to use, determining which cells in the images were normal, elongated, or presented other deformations. Automatic classification, with cross-validation of errors, was carried out using the proposed descriptors and two other functions used in previous studies.

  13. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some "off-the-shelf" TIGs can be employed, the need for custom test or control systems makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with such fine delay steps are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods realizing an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its structure is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution up to 11 ps and an interval range up to 8 s.
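
    The idea of hybrid counting is that a coarse clock counter covers the wide range while a tapped-delay-line code resolves the fraction of a clock period; a toy reconstruction of the interval follows. The clock period and tap delay are hypothetical values, not the Kintex-7 figures, and real designs also handle tap-code wraparound across clock edges.

```python
# Toy hybrid-counting reconstruction: interval = coarse part + fine part.
T_CLK = 4e-9      # hypothetical system clock period: 4 ns (250 MHz)
TAP = 11e-12      # hypothetical effective delay per TDL tap: 11 ps

def interval(coarse_count, start_tap, stop_tap):
    """Combine the coarse clock count with the fine TDL codes captured
    at the start and stop edges."""
    return coarse_count * T_CLK + (stop_tap - start_tap) * TAP

print(interval(2500, start_tap=10, stop_tap=43))   # ~1.0000363e-05 s
```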

  14. Integrated method for the measurement of trace atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-09-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  15. Integrated method for the measurement of trace nitrogenous atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-12-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  16. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variants are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, owing to established relationships of its parameters with flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
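
    For reference, the classical linear Muskingum recursion that both variants generalize routes an inflow hydrograph through the storage relation S = K[XI + (1 - X)O]; a minimal sketch with hypothetical K, X and time step follows (neither the NLM storage exponent nor the VPMM parameter relationships are reproduced).

```python
import numpy as np

def muskingum_route(inflow, K=12.0, X=0.2, dt=6.0):
    """Classical Muskingum routing: O2 = C0*I2 + C1*I1 + C2*O1, with
    coefficients derived from storage S = K * (X*I + (1 - X)*O)."""
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom
    out = np.empty_like(inflow)
    out[0] = inflow[0]               # assume an initial steady state
    for i in range(1, len(inflow)):
        out[i] = c0 * inflow[i] + c1 * inflow[i - 1] + c2 * out[i - 1]
    return out

# Hypothetical triangular flood wave (m^3/s) sampled every 6 h.
inflow = np.array([10, 20, 50, 80, 60, 40, 25, 15, 10, 10], dtype=float)
print(muskingum_route(inflow).round(1))
```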

  17. Wave-equation based traveltime seismic tomography - Part 1: Method

    NASA Astrophysics Data System (ADS)

    Tong, P.; Zhao, D.; Yang, D.; Yang, X.; Chen, J.; Liu, Q.

    2014-08-01

    In this paper, we propose a wave-equation based traveltime seismic tomography method with a detailed description of its step-by-step process. First, a linear relationship between the traveltime residual Δt = T_obs − T_syn and the relative velocity perturbation δc(x)/c(x), connected by a finite-frequency traveltime sensitivity kernel K(x), is theoretically derived using the adjoint method. To accurately calculate the traveltime residual Δt, two automatic arrival-time picking techniques, the envelope energy ratio method and the combined ray and cross-correlation method, are then developed to compute the arrival times T_syn for synthetic seismograms. The arrival times T_obs of observed seismograms are usually determined by manual picking in real applications. The traveltime sensitivity kernel K(x) is constructed by convolving a forward wavefield u(t,x) with an adjoint wavefield q(t,x). The calculations of synthetic seismograms and sensitivity kernels rely on forward modelling. To make this computationally feasible for tomographic problems involving a large number of seismic records, the forward problem is solved in the two-dimensional (2-D) vertical plane passing through the source and the receiver by a high-order central difference method. The final model is parameterized on 3-D regular grid (inversion) nodes with variable spacings, while model values on each 2-D forward modelling node are linearly interpolated from the values at its eight surrounding 3-D inversion grid nodes. Finally, the tomographic inverse problem is formulated as a regularized optimization problem, which can be iteratively solved by either the LSQR solver or a non-linear conjugate-gradient method. To provide some insights into future 3-D tomographic inversions, Fréchet kernels for different seismic phases are also demonstrated in this study.

  18. Utility of Combining a Simulation-Based Method With a Lecture-Based Method for Fundoscopy Training in Neurology Residency.

    PubMed

    Gupta, Deepak K; Khandker, Namir; Stacy, Kristin; Tatsuoka, Curtis M; Preston, David C

    2017-09-11

    Fundoscopic examination is an essential component of the neurologic examination. Competence in its performance is mandated as a required clinical skill for neurology residents by the American Council of Graduate Medical Education. Government and private insurance agencies require its performance and documentation for moderate- and high-level neurologic evaluations. Traditionally, assessment and teaching of this key clinical examination technique have been difficult in neurology residency training. To evaluate the utility of a simulation-based method and the traditional lecture-based method for assessment and teaching of fundoscopy to neurology residents. This study was a prospective, single-blinded, education research study of 48 neurology residents recruited from July 1, 2015, through June 30, 2016, at a large neurology residency training program. Participants were equally divided into control and intervention groups after stratification by training year. Baseline and postintervention assessments were performed using questionnaire, survey, and fundoscopy simulators. After baseline assessment, both groups initially received lecture-based training, which covered fundamental knowledge on the components of fundoscopy and key neurologic findings observed on fundoscopic examination. The intervention group additionally received simulation-based training, which consisted of an instructor-led, hands-on workshop that covered practical skills of performing fundoscopic examination and identifying neurologically relevant findings on another fundoscopy simulator. The primary outcome measures were the postintervention changes in fundoscopy knowledge, skills, and total scores. A total of 30 men and 18 women were equally distributed between the 2 groups. The intervention group had significantly higher mean (SD) increases in skills (2.5 [2.3] vs 0.8 [1.8], P = .01) and total (9.3 [4.3] vs 5.3 [5.8], P = .02) scores compared with the control group. Knowledge scores (6.8 [3

  19. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft core processor which is the controller of this generator. Some test results are presented at the end.

  20. Dominant partition method. [based on a wave function formalism

    NASA Technical Reports Server (NTRS)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.

  1. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGES

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; ...

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  2. Effectiveness of Spray-Based Decontamination Methods for ...

    EPA Pesticide Factsheets

    Report The objective of this project was to assess the effectiveness of spray-based common decontamination methods for inactivating Bacillus (B.) atrophaeus (surrogate for B. anthracis) spores and bacteriophage MS2 (surrogate for foot and mouth disease virus [FMDV]) on selected test surfaces (with or without a model agricultural soil load). Relocation of viable viruses or spores from the contaminated coupon surfaces into aerosol or liquid fractions during the decontamination methods was investigated. This project was conducted to support jointly held missions of the U.S. Department of Homeland Security (DHS) and the U.S. Environmental Protection Agency (EPA). Within the EPA, the project supports the mission of EPA’s Homeland Security Research Program (HSRP) by providing relevant information pertinent to the decontamination of contaminated areas resulting from a biological incident.

  3. Improved Artificial Bee Colony Algorithm Based Gravity Matching Navigation Method

    PubMed Central

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-01-01

    The gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper, based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019

  4. Blue noise sampling method based on mixture distance

    NASA Astrophysics Data System (ADS)

    Qin, Hongxing; Hong, XiaoYang; Xiao, Bin; Zhang, Shaoting; Wang, Guoyin

    2014-11-01

    Blue noise sampling is a core component of a large number of computer graphics applications such as imaging, modeling, animation, and rendering. However, most existing methods concentrate on preserving spatial-domain properties like density and anisotropy while ignoring feature preservation. To solve this problem, we present a new distance metric called the mixture distance for blue noise sampling, which is a combination of geodesic and feature distances. Based on the mixture distance, the blue noise property and features can be preserved by controlling the ratio of the geodesic distance to the feature distance. To meet the different requirements of various applications, an adaptive adjustment of the parameters is also proposed to achieve a balance between the preservation of features and spatial properties. Finally, an implementation on a graphics processing unit is introduced to improve the efficiency of computation. The efficacy of the method is demonstrated by results on image stippling, surface sampling, and remeshing.
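
    The mixture distance can be read as a convex blend of the two distances; a toy sketch under that assumption follows (the paper's exact combination rule may differ, and the inputs here are placeholders rather than computed geodesic or feature distances).

```python
def mixture_distance(d_geodesic, d_feature, alpha=0.5):
    """Blend geodesic and feature distances; alpha controls the trade-off
    between spatial (blue noise) properties and feature preservation."""
    return (1.0 - alpha) * d_geodesic + alpha * d_feature

# Example: a sample pair that is spatially close but crosses a feature,
# so the blended distance keeps the samples from landing on both sides.
print(mixture_distance(d_geodesic=0.2, d_feature=0.9, alpha=0.3))
```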

  5. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper, based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
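
    The modified Hausdorff distance used for screening candidate matches averages point-to-set distances instead of taking the maximum; a small sketch follows, with random stand-in point sets rather than gravity-map data.

```python
import numpy as np

def modified_hausdorff(A, B):
    """Modified Hausdorff distance (Dubuisson & Jain, 1994): the larger
    of the two mean nearest-neighbor distances between point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

rng = np.random.default_rng(0)
track = rng.random((50, 2))                  # hypothetical measured track
candidate = track + 0.01 * rng.standard_normal((50, 2))
print(round(modified_hausdorff(track, candidate), 4))
```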

  6. Method for rectifying image deviation based on perspective transformation

    NASA Astrophysics Data System (ADS)

    Li, Xin; Li, Shengrong; Bai, Wei; Cui, Xiaoxiao; Yang, Guoqing; Zhou, Hao; Zhang, Chuanyou

    2017-09-01

    A new method for rectifying the image deviation of circular instruments based on perspective transformation is presented in this paper, realizing the correction of circular instrument images in a substation environment. First, digital image processing technology is used to pre-process the site image. Second, the Canny operator is used for edge detection; according to the edge feature points, the instrument area is detected and its regional parameters are computed. Then the perspective transformation is used to correct the image, and a frontal view of the circular instrument is obtained. Finally, the remaining tilt is corrected by a rotation operation. Experimental results show that the algorithm achieves image rectification and is simple, fast and precise. The proposed method is helpful for subsequent recognition.
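
    A minimal sketch of the perspective-correction step with OpenCV; the four corner points here are hypothetical, whereas in the paper they would come from the detected instrument region.

```python
import numpy as np
import cv2

# Hypothetical source image and the detected quadrilateral of the dial.
img = np.zeros((480, 640, 3), dtype=np.uint8)
src = np.float32([[120, 80], [520, 120], [540, 430], [100, 400]])

# Map the quadrilateral to an upright 400x400 square.
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])
M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, M, (400, 400))

# A residual tilt can then be removed with a rotation about the center.
R = cv2.getRotationMatrix2D((200, 200), angle=5.0, scale=1.0)
upright = cv2.warpAffine(rectified, R, (400, 400))
print(upright.shape)
```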

  7. Neural cell image segmentation method based on support vector machine

    NASA Astrophysics Data System (ADS)

    Niu, Shiwei; Ren, Kan

    2015-10-01

    In the analysis of neural cell images acquired by optical microscope, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the Support Vector Machine (SVM) is proposed to reduce the adverse impact caused by the low contrast ratio between objects and background and by the interference of adherent and clustered cells. Firstly, morphological filtering and the OTSU method are applied to preprocess the images and extract the neural cells roughly. Secondly, Stellate Vector, circularity and Histogram of Oriented Gradient (HOG) features are computed to train the SVM model. Finally, the incremental learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas identified by the SVM classifier are added to the library as positive samples for training the SVM model. Experimental results show that the proposed algorithm achieves much better segmentation results than the classic segmentation algorithms.

  8. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  9. Application of DNA-based methods in forensic entomology.

    PubMed

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  10. New method of underwater passive navigation based on gravity gradient

    NASA Astrophysics Data System (ADS)

    Wu, Lin; Gong, Jiaqi; Cheng, Hua; Ma, Jie; Tian, Jinwen

    2007-11-01

    A new method of underwater passive navigation based on the gravity gradient is proposed in this paper. In comparison with other geophysical characteristics such as gravity or the gravity anomaly, the gravity gradient, which is the second derivative of the gravitational potential, has better spatial resolution and is more sensitive to terrain changes. Digitally stored gravity gradient maps and real-time gravity gradient measurements are taken as input information; with gravity gradient linearization techniques and an extended Kalman filter, the navigation errors of the INS are estimated from the gravity gradient error, and the output of the inertial navigation system is corrected accordingly. Simulation tests have been carried out, and the results show that the method is effective and efficient for improving positioning precision.

  11. Effectiveness of Spray-Based Decontamination Methods for ...

    EPA Pesticide Factsheets

    The objective of this project was to assess the effectiveness of spray-based common decontamination methods for inactivating Bacillus (B.) atrophaeus (surrogate for B. anthracis) spores and bacteriophage MS2 (surrogate for foot and mouth disease virus [FMDV]) on selected test surfaces (with or without a model agricultural soil load). Relocation of viable viruses or spores from the contaminated coupon surfaces into aerosol or liquid fractions during the decontamination methods was investigated. This project was conducted to support jointly held missions of the U.S. Department of Homeland Security (DHS) and the U.S. Environmental Protection Agency (EPA). Within the EPA, the project supports the mission of EPA's Homeland Security Research Program (HSRP) by providing relevant information pertinent to the decontamination of contaminated areas resulting from a biological incident.

  12. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  13. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible-light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms, namely gray-scale weighted averaging, maximum selection and minimum selection, are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that the proposed system achieves favorable heterogeneous image fusion quality. The applicable range of the different fusion algorithms is also discussed.
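
    As a software model of the three fusion rules named above (the actual design is an RTL-level VHDL implementation on the FPGA), a hedged Python sketch; co-registered 8-bit visible and infrared frames are assumed.

```python
import numpy as np

def fuse(vis, ir, mode='weighted', w=0.5):
    """vis, ir: co-registered 8-bit grayscale frames (assumption)."""
    vis = vis.astype(np.float32)
    ir = ir.astype(np.float32)
    if mode == 'weighted':      # gray-scale weighted averaging
        out = w * vis + (1.0 - w) * ir
    elif mode == 'max':         # maximum selection
        out = np.maximum(vis, ir)
    else:                       # minimum selection
        out = np.minimum(vis, ir)
    return np.clip(out, 0, 255).astype(np.uint8)
```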

  14. Modified risk graph method using fuzzy rule-based approach.

    PubMed

    Nait-Said, R; Zidani, F; Ouzraoui, N

    2009-05-30

    The risk graph is one of the most popular methods used to determine the safety integrity level (SIL) for safety instrumented functions. However, the conventional risk graph as described in the IEC 61508 standard is subjective and suffers from an interpretation problem of its risk parameters. Thus, it can lead to inconsistent outcomes that may result in conservative SILs. To overcome this difficulty, a modified risk graph using a fuzzy rule-based system is proposed. This novel version of the risk graph uses fuzzy scales to assess risk parameters, and calibration may be performed by varying risk parameter values. Furthermore, the outcomes, which are numerical values of the risk reduction factor (the inverse of the probability of failure on demand), can be compared directly with those given by quantitative and semi-quantitative methods such as fault tree analysis (FTA), quantitative risk assessment (QRA) and layers of protection analysis (LOPA).
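
    As a minimal sketch of the fuzzy-rule idea (not the paper's calibrated rule base), triangular membership grades for two risk parameters are combined with a Mamdani-style min operation; all breakpoints and inputs are illustrative.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fire_rule(antecedent_grades):
    # Mamdani-style AND: the rule fires at the weakest antecedent grade
    return min(antecedent_grades)

# example: grades of "severity is serious" and "exposure is frequent"
grade = fire_rule([tri(0.7, 0.4, 0.8, 1.0), tri(0.5, 0.2, 0.6, 1.0)])
print(grade)  # 0.75 for these illustrative inputs
```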

  15. Feasible methods to estimate disease based price indexes.

    PubMed

    Bradley, Ralph

    2013-05-01

    There is a consensus that statistical agencies should report medical data by disease rather than by service. This study computes price indexes that are necessary to deflate nominal disease expenditures and to decompose their growth into price, treated prevalence and output per patient growth. Unlike previous studies, it uses methods that can be implemented by the Bureau of Labor Statistics (BLS). For the calendar years 2005-2010, I find that these feasible disease based indexes are approximately 1% lower on an annual basis than indexes computed by current methods at BLS. This gives evidence that traditional medical price indexes have not accounted for the more efficient use of medical inputs in treating most diseases.

  16. Recent advances in implicit solvent based methods for biomolecular simulations

    PubMed Central

    Chen, Jianhan; Brooks, Charles L.; Khandogin, Jana

    2008-01-01

    Implicit solvent based methods play an increasingly important role in molecular modeling of biomolecular structure and dynamics. Recent methodological developments have mainly focused on extension of the generalized Born (GB) formalism for variable dielectric environments and accurate treatment of nonpolar solvation. Extensive efforts in parameterization of GB models and implicit solvent force fields have enabled ab initio simulation of protein folding to native or near-native structures. Another exciting area that has benefitted from the advances in implicit solvent models is the development of constant pH molecular dynamics methods, which have recently been applied to calculations of protein pKa values and studies of pH-dependent peptide and protein folding. PMID:18304802

  17. Design Method for EPS Control System Based on KANSEI Structure

    NASA Astrophysics Data System (ADS)

    Saitoh, Yumi; Itoh, Hideaki; Ozaki, Fuminori; Nakamura, Takenobu; Kawaji, Shigeyasu

    Recently, it has been recognized that KANSEI engineering plays an important role in functional design for realizing highly sophisticated products. In practice, however, products are designed and the design optimized by trial and error, which means the result depends on the skills of experts. In this paper, we focus on an automobile electric power steering (EPS) system for which a functional design is required. First, the KANSEI structure is determined on the basis of the steering feel of an experienced driver, and an EPS control design based on this KANSEI structure is proposed. Then, the EPS control parameters are adjusted in accordance with the KANSEI index. Finally, by assessing the experimental results obtained from the driver, the effectiveness of the proposed design method is verified.

  18. Density functional theory based generalized effective fragment potential method

    SciTech Connect

    Nguyen, Kiet A. E-mail: ruth.pachter@wpafb.af.mil; Pachter, Ruth E-mail: ruth.pachter@wpafb.af.mil; Day, Paul N.

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  19. Optimal sensor placement using FRFs-based clustering method

    NASA Astrophysics Data System (ADS)

    Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe

    2016-12-01

    The purpose of this work is to develop an optimal sensor placement method that selects the most relevant degrees of freedom as actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. Using principal component analysis, the frequency response matrix is decomposed into principal directions and their corresponding singular values; a relatively small number of principal directions retains the system's dominant response information. According to the dynamic similarity of the degrees of freedom, the k-means clustering algorithm is used to group them, and the effective independence method deletes redundant sensors within each cluster. Finally, two numerical examples and a modal test demonstrate the efficiency of the derived method. The results show that the proposed method provides a way to extract sub-optimal sensor sets, and that the selected sensors are well distributed over the whole structure.
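
    A hedged sketch of the selection pipeline: SVD of an assumed FRF magnitude matrix, projection onto the dominant directions, and k-means grouping; the final per-cluster pruning here keeps the most energetic DOF, a simplification standing in for the effective independence step.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_sensors(frf, n_sensors):
    """frf: (n_dof, n_freq) FRF magnitude matrix (assumption);
    n_sensors: number of sensors to place."""
    U, s, Vt = np.linalg.svd(frf, full_matrices=False)
    # keep enough principal directions to capture 95% of the response energy
    k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95) + 1
    scores = frf @ Vt[:k].T              # project DOFs on dominant directions
    labels = KMeans(n_clusters=n_sensors, n_init=10).fit_predict(scores)
    chosen = []
    for c in range(n_sensors):
        idx = np.where(labels == c)[0]
        # keep the DOF with the largest projected energy in each cluster
        chosen.append(idx[np.argmax(np.linalg.norm(scores[idx], axis=1))])
    return sorted(chosen)
```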

  20. Evaluation of Anomaly Detection Method Based on Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke

    The number of threats on the Internet is rapidly increasing, and anomaly detection has become correspondingly important. High-speed backbone traffic is particularly affected, but its analysis is complicated by the volume of data, the lack of payload data, asymmetric routing, and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through its singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic flows with few packets.

  1. A novel virtual viewpoint merging method based on machine learning

    NASA Astrophysics Data System (ADS)

    Zheng, Di; Peng, Zongju; Wang, Hui; Jiang, Gangyi; Chen, Fen

    2014-11-01

    In a multi-view video system, multi-view video plus depth is the main data format for 3D scene representation. Continuous virtual views can be generated using the depth image based rendering (DIBR) technique, whose process includes geometric mapping, hole filling and merging. Traditionally, weights inversely proportional to the distances between the virtual and real cameras are used to merge the virtual views. However, these weights may not be optimal in terms of virtual view quality. In this paper, a novel virtual view merging algorithm is proposed in which a machine learning method is used to establish an optimal weight model; the model takes color, depth, color gradient and sequence parameters into consideration. Firstly, we render the same virtual view from the left and right views and select training samples using a threshold. Then, the feature values of the samples are extracted and the optimal merging weights are calculated as training labels. Finally, a support vector classifier (SVC) is adopted to establish the model, which is then used to guide virtual view rendering. Experimental results show that the proposed method improves the quality of virtual views for most sequences; it is especially effective when the distance between the virtual and real cameras is large. Compared to the original virtual view synthesis method, the proposed method obtains more than 0.1 dB gain for some sequences.
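
    A hedged sketch of the learning stage under strong assumptions: per-pixel features from the two warped views are assumed precomputed, ground-truth weights are quantized into classes, and an SVC maps features to a merging weight; pixel values are taken as grayscale intensities.

```python
import numpy as np
from sklearn.svm import SVC

def train_merge_model(features, best_weights,
                      levels=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """features: (n_pixels, n_feat) array (assumed precomputed);
    best_weights: (n_pixels,) optimal left-view weights in [0, 1]."""
    # quantize the continuous weights into class labels
    labels = np.argmin(np.abs(np.subtract.outer(best_weights, levels)), axis=1)
    clf = SVC(kernel='rbf', gamma='scale').fit(features, labels)
    return clf, levels

def merge_views(clf, levels, features, left_pix, right_pix):
    """left_pix, right_pix: per-pixel intensities of the two warped views."""
    w = np.take(levels, clf.predict(features))   # per-pixel weight for left view
    return w * left_pix + (1.0 - w) * right_pix
```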

  2. Rapid Mapping Method Based on Free Blocks of Surveys

    NASA Astrophysics Data System (ADS)

    Yu, Xianwen; Wang, Huiqing; Wang, Jinling

    2016-06-01

    When producing large-scale maps (larger than 1:2000) of cities or towns, obstruction by buildings makes measuring mapping control points difficult and laborious. To avoid measuring mapping control points and shorten fieldwork time, a rapid mapping method is proposed in this paper. This method adjusts many free blocks of surveys together and transforms the points from all free blocks into the same coordinate system. The entire survey area is divided into many free blocks, and connection points are set on the boundaries between them. An independent coordinate system for every free block is established via completely free station technology, and the coordinates of the connection points, detail points and control points of every free block in the corresponding independent coordinate system are obtained based on poly-directional open traverses. Error equations are established from the connection points, which are adjusted together to obtain the transformation parameters. All points are then transformed from the independent coordinate systems to a transitional coordinate system via these parameters. Several control points are measured by GPS in a geodetic coordinate system, and all points can then be transformed from the transitional coordinate system to the geodetic coordinate system. This paper presents the implementation process and mathematical formulas of the new method in detail and gives the formula for estimating survey precision. An example demonstrates that the precision of the new method meets large-scale mapping needs.
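
    The block-to-common transformation can be illustrated with a four-parameter (Helmert) least-squares fit on connection points; this sketch assumes planar coordinates and is not the paper's full joint adjustment of all blocks.

```python
import numpy as np

def helmert_2d(src, dst):
    """Estimate X = a*x - b*y + tx, Y = b*x + a*y + ty from connection points.
    src, dst: (n, 2) arrays of the same points in both systems (assumption)."""
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
    L = dst.reshape(-1)                          # interleaved X, Y observations
    params, *_ = np.linalg.lstsq(A, L, rcond=None)
    return params                                # a, b, tx, ty

def apply_helmert(params, pts):
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
```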

  3. Method of plasma etching GA-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2013-01-01

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent thereto. The chamber contains a Ga-based compound semiconductor sample in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. SiCl4 and Ar gases are flowed into the chamber. RF power is supplied to the platen at a first power level, and RF power is supplied to the source electrode. A plasma is generated. Then, RF power is supplied to the platen at a second power level lower than the first power level and no greater than about 30 W. Regions of a surface of the sample adjacent to one or more masked portions of the surface are etched at a rate of no more than about 25 nm/min to create a substantially smooth etched surface.

  4. Hybrid Modeling Method for a DEP Based Particle Manipulation

    PubMed Central

    Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad

    2013-01-01

    In this paper, a new modeling approach for dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS simulates the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. Initial results are consistent with experimental results. PMID:23364197

  5. An Improved Spectral Background Subtraction Method Based on Wavelet Energy.

    PubMed

    Zhao, Fengkui; Wang, Jian; Wang, Aimin

    2016-12-01

    Most spectral background subtraction methods rely on the difference in frequency response between the background and the characteristic peaks. It is difficult to extract the background components accurately from a spectrum when the characteristic peaks and the background overlap in the frequency domain. An improved background estimation algorithm based on the iterative wavelet transform (IWT) is presented. The wavelet entropy principle is used to select the best wavelet basis, and a criterion based on wavelet energy theory is proposed to determine the optimal number of iterations. The case of energy-dispersive X-ray spectroscopy is discussed for illustration. A simulated spectrum with an a priori known background and an experimental spectrum are tested. The processing results for the simulated spectrum are compared with those of a non-IWT method, demonstrating the superiority of the IWT approach. This is of great significance for improving the accuracy of spectral analysis.
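
    A hedged sketch of the iterative wavelet background estimate: each pass keeps only the approximation coefficients and clips the spectrum to the reconstructed background, which progressively strips peaks. The fixed wavelet and iteration count stand in for the entropy- and energy-based criteria of the paper.

```python
import numpy as np
import pywt

def iwt_background(spectrum, wavelet='sym8', level=6, n_iter=20):
    """spectrum: 1-D counts array, long enough for the chosen level (assumption)."""
    work = spectrum.astype(float).copy()
    for _ in range(n_iter):
        coeffs = pywt.wavedec(work, wavelet, level=level)
        # keep only the approximation; zero all detail coefficients
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        background = pywt.waverec(coeffs, wavelet)[:len(work)]
        work = np.minimum(work, background)   # strip peaks above the background
    return work
```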

  6. Big data mining analysis method based on cloud computing

    NASA Astrophysics Data System (ADS)

    Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao

    2017-08-01

    In the era of information explosion, the extremely large scale, discrete, and unstructured or semi-structured nature of big data has gone far beyond what traditional data management methods can handle. With the arrival of the cloud computing era, cloud computing provides a new technical approach to massive data mining, effectively addressing the inability of traditional data mining methods to scale to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs an association rule mining algorithm based on the MapReduce parallel processing architecture, and verifies it experimentally. Parallel association rule mining based on a cloud computing platform can greatly improve the execution speed of data mining.
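
    A single-process model of the MapReduce stage may clarify the idea: mappers emit candidate item pairs per transaction and a reducer accumulates the counts, from which support and confidence are computed; thresholds are illustrative, not from the paper.

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions):
    """Mapper: emit (item pair, 1) for every pair in each transaction."""
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            yield pair, 1

def mine_pair_rules(transactions, min_support=0.1, min_conf=0.6):
    n = len(transactions)
    pair_counts = Counter()
    for pair, one in map_phase(transactions):   # "shuffle" + reduce
        pair_counts[pair] += one
    item_counts = Counter(i for t in transactions for i in set(t))
    rules = []
    for (a, b), c in pair_counts.items():
        if c / n >= min_support:                # support threshold
            if c / item_counts[a] >= min_conf:  # confidence of a -> b
                rules.append((a, b, c / n, c / item_counts[a]))
            if c / item_counts[b] >= min_conf:  # confidence of b -> a
                rules.append((b, a, c / n, c / item_counts[b]))
    return rules
```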

  7. Transistor-based particle detection systems and methods

    DOEpatents

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  8. Method for fabricating beryllium-based multilayer structures

    DOEpatents

    Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.

    2003-02-18

    Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).

  9. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Miguel Jr.; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
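
    A hedged sketch of the feature-vector step: amplitude statistics plus a coarse time-frequency summary of a biosensor time series; the window length and the specific features are illustrative choices, not the patent's.

```python
import numpy as np
from scipy import signal, stats

def feature_vector(x, fs):
    """x: 1-D biosensor time series (assumed); fs: sampling rate in Hz."""
    # amplitude statistics
    amp = [np.mean(x), np.std(x), stats.skew(x), stats.kurtosis(x)]
    # coarse time-frequency analysis: mean spectral energy per frequency bin
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=256)
    band_energy = Sxx.mean(axis=1)
    band_energy = band_energy / max(band_energy.sum(), 1e-12)  # normalize
    return np.concatenate([amp, band_energy])
```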

  10. Detection of biological thiols based on a colorimetric method*

    PubMed Central

    Xu, Yuan-yuan; Sun, Yang-yang; Zhang, Yu-juan; Lu, Chen-he; Miao, Jin-feng

    2016-01-01

    Biological thiols (biothiols) such as cysteine (Cys) and glutathione (GSH) are an important class of functional biomolecules that play vital roles in maintaining the stability of the intracellular environment. In past decades, studies have demonstrated that metabolic disorders of biothiols are related to many serious disease processes and can cause severe damage in humans and numerous animals. We carried out a series of experiments to detect biothiols in biosamples, including bovine plasma and cell lysates of seven different cell lines, based on a simple colorimetric method. In a typical test, the color of the test solution gradually changes from blue to colorless after the addition of biothiols. Based on the color change displayed, the experimental results reveal that the percentage of biothiols in the embryonic fibroblast cell line is significantly higher than in the other six cell lines, which provides a basis for subsequent biothiol-related studies. PMID:27704750

  11. Multiresolution subspace-based optimization method for inverse scattering problems.

    PubMed

    Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea

    2011-10-01

    This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.

  12. Hybrid modeling method for a DEP based particle manipulation.

    PubMed

    Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad

    2013-01-30

    In this paper, a new modeling approach for dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps toward a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS simulates the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. Initial results are consistent with experimental results.

  13. Emerging Methods for Ensemble-Based Virtual Screening

    PubMed Central

    Amaro, Rommie E.; Li, Wilfred W.

    2011-01-01

    Ensemble based virtual screening refers to the use of conformational ensembles from crystal structures, NMR studies or molecular dynamics simulations. It has gained greater acceptance as advances in the theoretical framework, computational algorithms, and software packages enable simulations at longer time scales. Here we focus on the use of computationally generated conformational ensembles and emerging methods that use these ensembles for discovery, such as the Relaxed Complex Scheme or Dynamic Pharmacophore Model. We also discuss the more rigorous physics-based computational techniques such as accelerated molecular dynamics and thermodynamic integration and their applications in improving conformational sampling or the ranking of virtual screening hits. Finally, technological advances that will help make virtual screening tools more accessible to a wider audience in computer aided drug design are discussed. PMID:19929833

  14. Web-based methods in terrorism and disaster research.

    PubMed

    Schlenger, William E; Silver, Roxane Cohen

    2006-04-01

    This article provides an overview of the use of the Internet for conducting studies after terrorist attacks and other large-scale disasters. We begin with a brief summary of the scientific and logistical challenges of conducting such research, followed by a description of some of the most important design features that are required to produce valid findings. We then describe one approach to Internet surveys that, although not perfect, addresses many of the challenges well. We close with some thoughts about how the Internet-based methods available today are likely to develop further in coming years.

  15. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important step in ensuring reliable fuze operation. Given the good denoising characteristics of the wavelet packet transform (WPT), this paper uses the WPT to denoise the fuze signal under complex background interference, and the denoising results are simulated in Matlab. Simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal with high precision and little distortion, improving the reliability of torpedo fuze operation.
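
    A hedged sketch of WPT denoising (in Python rather than the paper's Matlab): decompose the signal into wavelet packets, soft-threshold the terminal-node coefficients, and reconstruct; the wavelet, decomposition depth and universal threshold are illustrative choices.

```python
import numpy as np
import pywt

def wpt_denoise(x, wavelet='db4', maxlevel=4):
    """x: 1-D noisy signal (assumed)."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=maxlevel)
    # rough noise scale from the median absolute deviation of the signal
    sigma = np.median(np.abs(x - np.median(x))) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(x)))      # universal threshold
    for node in wp.get_level(maxlevel, order='natural'):
        node.data = pywt.threshold(node.data, thr, mode='soft')
    return wp.reconstruct(update=False)[:len(x)]
```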

  16. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Smith, Timothy A. (Inventor); Urnes, James M., Sr. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
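
    The detection logic can be sketched as follows (a hedged illustration with placeholder thresholds, not the patented implementation):

```python
import numpy as np

def detect_anomaly(measured, predicted, sigma, n_sigma=3.0, persist_len=10):
    """measured, predicted: aligned 1-D sample arrays; sigma: expected error scale."""
    error = measured - predicted                   # modeling error signal
    significant = np.abs(error) > n_sigma * sigma  # statistical significance test
    run = 0
    for flag in significant:
        run = run + 1 if flag else 0
        if run >= persist_len:                     # persistence threshold
            return True                            # structural anomaly indicated
    return False
```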

  17. Efficient local-orbitals based method for ultrafast dynamics

    NASA Astrophysics Data System (ADS)

    Boleininger, Max; Horsfield, Andrew P.

    2017-07-01

    Computer simulations are invaluable for the study of ultrafast phenomena, as they allow us to directly access the electron dynamics. We present an efficient method for simulating the evolution of electrons in molecules under the influence of time-dependent electric fields, based on the Gaussian tight binding model. This model improves upon standard self-charge-consistent tight binding by the inclusion of polarizable orbitals and a self-consistent description of charge multipoles. Using the examples of bithiophene, terthiophene, and tetrathiophene, we show that this model produces electrostatic, electrodynamic, and explicitly time-dependent properties in strong agreement with density-functional theory, but at a small fraction of the cost.

  18. [Other physical methods in psychiatric treatment based on electromagnetic stimulation].

    PubMed

    Zyss, Tomasz; Rachel, Wojciech; Datka, Wojciech; Hese, Robert T; Gorczyca, Piotr; Zięba, Andrzej; Piekoszewski, Wojciech

    2016-01-01

    In recent decades, several new physical methods based on electromagnetic head stimulation have been subjected to clinical research. These include vagus nerve stimulation (VNS), magnetic seizure therapy/magnetoconvulsive therapy (MST/MCT), deep brain stimulation (DBS), and transcranial direct current stimulation (tDCS). The paper describes these techniques (nature, advantages, drawbacks, restrictions) and compares them with electroconvulsive therapy (ECT), the previously described transcranial magnetic stimulation (TMS), and pharmacotherapy (the basis of psychiatric treatment).

  19. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology where it is most useful to localize primary tumours and their metastases.

  20. Human Temporal Bone Removal: The Skull Base Block Method.

    PubMed

    Dinh, Christine; Szczupak, Mikhaylo; Moon, Seo; Angeli, Simon; Eshraghi, Adrien; Telischi, Fred F

    2015-08-01

    Objectives: To describe a technique for harvesting larger temporal bone specimens from human cadavers for the training of otolaryngology residents and fellows on the various approaches to the lateral and posterolateral skull base. Design: Human cadaveric anatomical study. The calvarium was excised 6 cm above the superior aspect of the ear canal. The brain and cerebellum were carefully removed, and the cranial nerves were cut sharply. Two bony cuts were performed, one in the midsagittal plane and the other in the coronal plane at the level of the optic foramen. Setting: Medical school anatomy laboratory. Participants: Human cadavers. Main Outcome Measures: Anatomical contents of specimens and technical effort required. Results: Larger temporal bone specimens containing portions of the parietal, occipital, and sphenoidal bones were consistently obtained using this technique of two bone cuts. All specimens were inspected and contained pertinent surface and skull base landmarks. Conclusions: The skull base block method allows for larger temporal bone specimens using a two-bone-cut technique that is efficient and reproducible. These specimens have the necessary anatomical bony landmarks for studying the complexity, utility, and limitations of lateral and posterolateral approaches to the skull base, important for the education of otolaryngology residents and fellows.

  1. Analysis of Hylocereus spp. diversity based on phenetic method

    NASA Astrophysics Data System (ADS)

    Hamidah, Tsawab, Husnus; Rosmanida

    2017-06-01

    This study aimed to determine the number of distinguishing characters, the most dominant characters in dragonfruit (Hylocereus) classification, and the classification relationships of dragonfruit based on morphological characters. Sampling was performed in Bhakti Alam Agrotourism, Pasuruan. A total of 63 characters were observed, including stem/branch segment, areola, flower, fruit, and seed characters. These characters were analyzed using descriptive and phenetic methods. Based on the descriptive results, there were 59 distinguishing characters that affected the classification of the five dragonfruit species: white, pink, red, purplish-red, and yellow dragonfruit. Based on the phenetic analysis, a dendrogram was obtained showing the classification relationships. Purplish-red and red dragonfruit were closely related with a similarity value of 50.7%; this group is referred to as group VI. Pink dragonfruit and group VI were closely related with a similarity value of 43.3%; this group is referred to as group IV. White dragonfruit and group IV were closely related with a similarity value of 21.5%; this group is referred to as group II. Meanwhile, yellow dragonfruit and group II were closely related with a similarity value of 8.5%. Based on principal component analysis, 34 characters strongly influenced dragonfruit classification. Two of them, stem curvature and the number of fruit bracteole remnants, were the most dominant characters affecting the classification, with a component value of 0.955.
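
    A hedged sketch of the phenetic step: a binary morphological character matrix (placeholder data) is turned into an average-linkage (UPGMA-style) dendrogram; the paper's actual character scoring is not reproduced.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# placeholder character matrix: 5 accessions x 63 binary morphological characters
rng = np.random.default_rng(0)
characters = rng.integers(0, 2, size=(5, 63))

# average linkage on Hamming distances (1 - simple matching similarity)
Z = linkage(characters, method='average', metric='hamming')
tree = dendrogram(Z, labels=['white', 'pink', 'red', 'purplish-red', 'yellow'],
                  no_plot=True)   # set no_plot=False to draw with matplotlib
print(tree['ivl'])               # leaf order of the dendrogram
```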

  2. Developing sub-domain verification methods based on GIS tools

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Foley, T. A.; Raby, J. W.

    2014-12-01

    The meteorological community makes extensive use of the Model Evaluation Tools (MET) developed by the National Center for Atmospheric Research for numerical weather prediction model verification through grid-to-point, grid-to-grid and object-based domain level analyses. MET Grid-Stat has been used to perform grid-to-grid neighborhood verification to account for the uncertainty inherent in high resolution forecasting, and the MET Method for Object-based Diagnostic Evaluation (MODE) has been used to develop techniques for object-based spatial verification of high resolution forecast grids for continuous meteorological variables. High resolution modeling requires more focused spatial and temporal verification over parts of the domain. With a Geographical Information System (GIS), researchers can now consider terrain type/slope and land use effects and other spatial and temporal variables as explanatory metrics in model assessments. GIS techniques, when coupled with high resolution point and gridded observation sets, allow location-based approaches that permit discovery of the spatial and temporal scales at which models do not sufficiently resolve the desired phenomena. In this paper we discuss our initial GIS approach to verifying WRF-ARW with a one-kilometer horizontal resolution inner domain centered over Southern California. Southern California contains a mixture of urban, suburban, agricultural and mountainous terrain types, along with a rich array of observational data with which to illustrate our ability to conduct sub-domain verification.

  3. Gradient-based optimum aerodynamic design using adjoint methods

    NASA Astrophysics Data System (ADS)

    Xie, Lei

    2002-09-01

    Continuous adjoint methods and optimal control theory are applied to a pressure-matching inverse design problem of quasi 1-D nozzle flows. Pontryagin's Minimum Principle is used to derive the adjoint system and the reduced gradient of the cost functional. The properties of the adjoint variables at the sonic throat and the shock location are studied, revealing a logarithmic singularity at the sonic throat and continuity at the shock location. A numerical method, based on the Steger-Warming flux-vector-splitting scheme, is proposed to solve the adjoint equations. This scheme can finely resolve the singularity at the sonic throat, and a non-uniform grid, with points clustered near the throat region, resolves it even better. Analytical solutions to the adjoint equations are also constructed via a Green's function approach for comparison with the numerical results. The pressure-matching inverse design is then conducted for a nozzle parameterized by a single geometric parameter. In the second part, the adjoint methods are applied to the problem of minimizing the drag coefficient, at fixed lift coefficient, for 2-D transonic airfoil flows. Reduced gradients of several functionals are derived through application of a Lagrange Multiplier Theorem. The adjoint system is carefully studied, including the adjoint characteristic boundary conditions at the far-field boundary. A super-reduced design formulation is also explored by treating the angle of attack as an additional state; super-reduced gradients can be constructed either by solving adjoint equations with non-local boundary conditions or by a direct Lagrange multiplier method. In this way, the constrained optimization reduces to an unconstrained design problem. Numerical methods based on Jameson's finite volume scheme are employed to solve the adjoint equations. The same grid system, generated by an efficient hyperbolic grid generator, is adopted in both the Euler flow solver and the adjoint solver. Several

  4. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    PubMed Central

    2011-01-01

    Background: Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has been undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. Methods: We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. Results: The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. Conclusions: The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients. PMID:21961846

  5. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. Estimating the conductivity field yields low-resolution images compared with other technologies and carries a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field, and the estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions: optimization approaches are sensitive to the model discretization, the type of cost function and the search algorithm, while data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also affect the results. To illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided good estimates in this case, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by the optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and the SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
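
    A hedged sketch of the optimization-based estimate: search for the anomaly position whose simulated boundary voltages best match the measurements, using a weighted error cost and a derivative-free algorithm; forward_model is an assumed external EIT solver, not part of this sketch.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_position(v_measured, forward_model, x0, weights):
    """v_measured: measured boundary voltages; forward_model(pos) -> simulated
    voltages (assumed external EIT solver); x0: initial position guess."""
    def cost(pos):
        r = forward_model(pos) - v_measured
        return np.sum(weights * r**2)               # weighted error cost function
    res = minimize(cost, x0, method='Nelder-Mead')  # derivative-free search
    return res.x
```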

  6. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
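
    The low-rank-plus-sparse split can be illustrated on the matricized image sequence (frames as columns) with alternating singular-value and soft thresholding; this is a generic sketch standing in for the paper's tensor-based, Tikhonov-regularized reconstruction algorithm.

```python
import numpy as np

def split_low_rank_sparse(M, tau=1.0, lam=0.1, n_iter=25):
    """M: (n_pixels, n_frames) matricized image sequence (assumption).
    Returns a slowly varying low-rank part L and a sparse perturbation part S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        # singular-value thresholding for the low-rank component
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - tau, 0.0)) @ Vt
        # soft thresholding for the sparse component
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S
```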

  7. Impact of merging methods on radar based nowcasting of rainfall

    NASA Astrophysics Data System (ADS)

    Shehu, Bora; Haberlandt, Uwe

    2017-04-01

    Radar data with high spatial and temporal resolution are commonly used to track and predict rainfall patterns that serve as input for hydrological applications. To mitigate the high errors associated with radar, many merging methods employing ground measurements have been developed. However, these methods have been investigated mainly for simulation purposes, while for nowcasting they have been limited to mean field bias correction. This study therefore investigates the impact of different merging methods on the nowcasting of rainfall volumes with respect to urban flooding. Radar bias corrections based on mean fields and on quantile mapping are analyzed individually and are also implemented in conditional merging. Special attention is given to the impact of spatial and temporal filters on the predictive skill of all methods. The relevance of the radar merging techniques is demonstrated by comparing the performance of the rainfall fields forecast by the radar tracking algorithm HyRaTrac for both raw and merged radar data. For this purpose, several extreme events are selected and the respective performance is evaluated by cross-validation of continuous criteria (bias and RMSE) and categorical criteria (POD, FAR and GSS) for lead times up to 2 hours. The study area lies within the 128 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5 min time steps for the period 2000-2012. The results reveal how the choice of merging method and the implementation of filters impact the performance of the forecast algorithm.
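
    As a minimal sketch of the mean field bias correction baseline named above: the radar field is scaled by the accumulated gauge-to-radar ratio at the station locations (inputs assumed co-located and in the same units).

```python
import numpy as np

def mean_field_bias(radar_field, gauge_values, radar_at_gauges):
    """radar_field: 2-D radar rainfall grid; gauge_values: station rainfall;
    radar_at_gauges: radar values sampled at the station locations."""
    bias = np.sum(gauge_values) / max(np.sum(radar_at_gauges), 1e-9)
    return bias * radar_field          # uniformly rescaled radar field
```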

  8. An improved unsupervised clustering-based intrusion detection method

    NASA Astrophysics Data System (ADS)

    Hai, Yong J.; Wu, Yu; Wang, Guo Y.

    2005-03-01

    Practical intrusion detection systems (IDSs) based on data mining face two key problems: discovering intrusion knowledge from real-time network data, and automatically updating it when new intrusions appear. Most data mining algorithms work on labeled data. To set up a basic data set for mining, huge volumes of network data need to be collected and labeled manually. In fact, it is difficult and impractical to label intrusions, which has been a major restriction for current IDSs and has limited their ability to identify all intrusion types. An improved unsupervised clustering-based intrusion detection model working on unlabeled training data is introduced. In this model, the center of a cluster is defined and used as a substitute for the cluster, and all cluster centers are then used to detect intrusions. In tests on the KDDCUP'99 data sets, experimental results demonstrate that our method performs well in terms of detection rate. Furthermore, an incremental-learning method is adopted to detect unknown-type intrusions, which decreases the false positive rate.
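
    A hedged sketch of the clustering stage: cluster unlabeled connection records, mark unusually small clusters as suspected intrusions (assuming normal traffic dominates), and classify new records by their nearest cluster center; all parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_detector(X, n_clusters=50, small_frac=0.05):
    """X: (n_records, n_features) unlabeled training data (assumption)."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(X)
    sizes = np.bincount(km.labels_, minlength=n_clusters)
    # clusters far smaller than average are treated as suspected intrusions
    suspicious = sizes < small_frac * len(X) / n_clusters
    return km, suspicious

def is_intrusion(km, suspicious, x):
    """Flag a new record by the label of its nearest cluster center."""
    return bool(suspicious[km.predict(x.reshape(1, -1))[0]])
```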

  9. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
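
    The FOSM step named above reduces to propagating the input covariance through the model sensitivity; a minimal sketch with an assumed Jacobian J of model outputs with respect to inputs:

```python
import numpy as np

def fosm_output_cov(J, C_in):
    """J: (n_out, n_in) sensitivity matrix (assumed); C_in: input covariance.
    Returns the first-order approximation of the output covariance."""
    return J @ C_in @ J.T
```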

  10. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a discrete wavelet transform (DWT) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
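
    A hedged sketch of the first EZW ingredients (concept (1) plus the significance test underlying concepts (2)-(3)): a 2-D DWT followed by one dominant pass that classifies coefficients against a power-of-two threshold; the zerotree bookkeeping and arithmetic coding stages are omitted.

```python
import numpy as np
import pywt

def dominant_pass(image, wavelet='haar', level=3):
    """image: 2-D array with at least one nonzero value (assumption)."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
    flat, _ = pywt.coeffs_to_array(coeffs)           # all subbands in one array
    T = 2.0 ** np.floor(np.log2(np.abs(flat).max())) # initial threshold
    symbols = np.where(np.abs(flat) >= T,
                       np.where(flat >= 0, 'P', 'N'),  # significant pos/neg
                       'Z')                            # insignificant
    return T, symbols
```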

  11. Microbial detection method based on sensing molecular hydrogen.

    PubMed

    Wilkins, J R; Stoner, G E; Boykin, E H

    1974-05-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (i) two electrodes, platinum and a reference electrode, (ii) a buffer amplifier, and (iii) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction and recorded on the strip-chart recorder. Hydrogen response curves consisted of (i) a lag period, (ii) a period of rapid buildup in potential due to hydrogen, and (iii) a period of decline in potential. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 10^6 cells/ml to 7 h for 10^0 cells/ml. For each 10-fold decrease in inoculum, the length of the lag period increased by 60 to 70 min. Mean cell concentrations at the time of hydrogen evolution were 10^6/ml. Based on the linear relationship between inoculum size and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.

  12. Methods and applications of structure based pharmacophores in drug discovery.

    PubMed

    Pirhadi, Somayeh; Shiri, Fereshteh; Ghasemi, Jahan B

    2013-01-01

    A pharmacophore model does not describe a real molecule or a real association of functional groups but illustrates the molecular recognition of a biological target shared by a group of compounds. Pharmacophores also represent the spatial arrangement of essential interactions in a receptor-binding pocket. Structure-based pharmacophores (SBPs) can work with either a ligand-free (apo) structure or a macromolecule-ligand complex (holo) structure. SBP methods that derive a pharmacophore from a protein-ligand complex use the potential interactions observed between ligand and protein, whereas SBP methods that derive a pharmacophore from a ligand-free protein use only the active-site information. SBPs therefore avoid challenging problems of ligand-based pharmacophore modeling such as ligand flexibility, molecular alignment and the proper selection of training set compounds. The current review covers 'Hot Spot' analysis of the binding site for feature generation, several approaches to feature reduction, and the incorporation of shape and excluded volumes into SBP model building. The review then presents several applications of SBPs in virtual screening, especially the parallel screening approach and multi-target drug design, and also reports applications of SBPs in QSAR. This review emphasizes that SBPs are valuable tools for hit-to-lead optimization, virtual screening, scaffold hopping, and multi-target drug design.

  13. A microarray-based method to perform nucleic acid selections.

    PubMed

    Aminova, Olga; Disney, Matthew D

    2010-01-01

    This method describes a microarray-based platform to perform nucleic acid selections. Chemical ligands to which a nucleic acid binder is desired are immobilized onto an agarose microarray surface; the array is then incubated with an RNA library. Bound RNA library members are harvested directly from the array surface via gel excision at the position on the array where a ligand was immobilized. The RNA is then amplified via RT-PCR, cloned, and sequenced. This method has the following advantages over traditional resin-based Systematic Evolution of Ligands by Exponential Enrichment (SELEX): (1) multiple selections can be completed in parallel on a single microarray surface; (2) kinetic biases in the selections are mitigated since all RNA binders are harvested from an array via gel excision; (3) the amount of chemical ligand needed to perform a selection is minimized; (4) selections do not require expensive resins or equipment; and (5) the matrix used for selections is inexpensive and easy to prepare. Although this protocol was demonstrated for RNA selections, it should be applicable for any nucleic acid selection.

  14. Evaluation methods for association rules in spatial knowledge base

    NASA Astrophysics Data System (ADS)

    Niu, X.; Ji, X.

    2014-04-01

    The association rule is an important model in data mining. It describes relationships between predicates in transactions and makes knowledge hidden in data explicit and specific. With the development and application of remote sensing technology and automatic data collection tools in recent decades, tremendous amounts of spatial and non-spatial data have been collected and stored in large spatial databases, so mining association rules from spatial databases has become a significant research area with extensive applications. How to find effective, reliable and interesting association rules in vast amounts of information, to help people analyze and make decisions, has become a significant issue. Evaluation methods measure spatial association rules against evaluation criteria. On the basis of analyzing the existing evaluation criteria, this paper improves the novelty evaluation method, builds a spatial knowledge base, and proposes a new evaluation process based on the support-confidence evaluation system. Finally, the feasibility of the new evaluation process is validated by an experiment with real-world geographical spatial data.
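
    The support-confidence system reduces to two counts over the transactions; a minimal sketch for a rule A -> B (transactions as sets; the paper's novelty measure, which needs the spatial knowledge base, is not reproduced here):

```python
def evaluate_rule(transactions, A, B):
    """transactions: list of sets of predicates; A, B: frozensets of predicates."""
    n = len(transactions)
    n_a = sum(A <= t for t in transactions)         # transactions containing A
    n_ab = sum((A | B) <= t for t in transactions)  # containing both A and B
    support = n_ab / n
    confidence = n_ab / n_a if n_a else 0.0
    return support, confidence

# usage: evaluate_rule([{'near_river', 'flood'}, {'near_river'}],
#                      frozenset({'near_river'}), frozenset({'flood'}))
```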

  15. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC); in this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is piecewise smooth: it consists of sharp contours and smooth interiors, and the contours matter more than the interiors in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into contour and interior, and two different procedures are employed to code them respectively. A vector-based strategy is applied to code the contour lines: straight-line segments cost few bits since they are coded as vectors, while pixels that fall outside straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, contours are coded losslessly whereas interiors are coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.
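
    The interior-painting step lends itself to a compact illustration. The sketch below fits a linear depth model d(x, y) = a*x + b*y + c to a region's interior by least squares, which is our reading of the paper's regression step; the function names and the toy region are invented for illustration.

```python
import numpy as np

def fit_interior(xs, ys, depths):
    """Least-squares fit of d = a*x + b*y + c to a region's interior pixels."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    coeffs, *_ = np.linalg.lstsq(A, depths, rcond=None)
    return coeffs  # only (a, b, c) would need to be transmitted

def paint_interior(coeffs, xs, ys):
    """Reconstruct ("paint") the interior depths from the coefficients."""
    a, b, c = coeffs
    return a * xs + b * ys + c

# Toy region: a planar depth ramp is recovered exactly.
xs = np.array([0.0, 1.0, 2.0, 0.0, 1.0, 2.0])
ys = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
depths = 2.0 * xs + 0.5 * ys + 10.0
coeffs = fit_interior(xs, ys, depths)
print(np.allclose(paint_interior(coeffs, xs, ys), depths))  # True
```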

  16. Iterative support detection-based split Bregman method for wavelet frame-based image inpainting.

    PubMed

    He, Liangtian; Wang, Yilun

    2014-12-01

    The wavelet frame systems have been extensively studied due to their capability of sparsely approximating piecewise smooth functions, such as images, and the corresponding wavelet frame-based image restoration models are mostly based on penalizing the l1 norm of the wavelet frame coefficients to enforce sparsity. In this paper, we focus on the image inpainting problem based on the wavelet frame, propose a weighted sparse restoration model, and develop a corresponding efficient algorithm. The new algorithm combines the iterative support detection method, first proposed by Wang and Yin for sparse signal reconstruction, with the split Bregman method for the wavelet frame l1 inpainting model and, more importantly, naturally makes use of the specific multilevel structure of the wavelet frame coefficients to enhance the recovery quality. The new algorithm can be considered as the incorporation of prior structural information about the wavelet frame coefficients into the traditional l1 model. Our numerical experiments show that the proposed method is superior to the original split Bregman method for the wavelet frame-based l1 norm image inpainting model, as well as to some typical lp (0 ≤ p < 1) norm-based nonconvex algorithms such as the mean doubly augmented Lagrangian method, in terms of better preservation of sharp edges, since those methods fail to make use of the structure of the wavelet frame coefficients.
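
    A minimal sketch of the two ingredients being combined may help. The snippet below shows plain soft thresholding (the shrinkage step inside split Bregman for the l1 term) together with an ISD-style binary weighting that exempts detected support from shrinkage; the threshold value and the 1-D toy coefficients are our own simplifications, not the authors' multilevel scheme.

```python
import numpy as np

def soft_threshold(coeffs, thresholds):
    """Component-wise soft shrinkage with per-coefficient thresholds."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - thresholds, 0.0)

def isd_weights(coeffs, eps):
    """ISD-style weights: 0 on the detected support (|c| > eps), 1 elsewhere."""
    return np.where(np.abs(coeffs) > eps, 0.0, 1.0)

coeffs = np.array([4.0, 0.2, -3.0, 0.05, 1.0])
lam = 0.5
w = isd_weights(coeffs, eps=0.8)        # large coefficients are "detected"
print(soft_threshold(coeffs, lam * w))  # detected entries pass through unshrunk
```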

  17. On Using a Fast Multipole Method-based Poisson Solver in an Approximate Projection Method

    SciTech Connect

    Williams, Sarah A.; Almgren, Ann S.; Puckett, E. Gerry

    2006-03-28

    Approximate projection methods are useful computational tools for solving the equations of time-dependent incompressible flow. In this report we will present a new discretization of the approximate projection in an approximate projection method. The discretizations of divergence and gradient will be identical to those in existing approximate projection methodology using cell-centered values of pressure; however, we will replace inversion of the five-point cell-centered discretization of the Laplacian operator by a Fast Multipole Method-based Poisson Solver (FMM-PS). We will show that the FMM-PS solver can be an accurate and robust component of an approximate projection method for constant-density, inviscid, incompressible flow problems. Computational examples exhibiting second-order accuracy for smooth problems will be shown. The FMM-PS solver will be found to be more robust than inversion of the standard five-point cell-centered discretization of the Laplacian for certain time-dependent problems that challenge the robustness of the approximate projection methodology.

  18. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    PubMed Central

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours were classified with this approach, and the method achieves good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of task, where generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system. PMID:22737037
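
    The classification stage can be sketched with standard tools. The snippet below trains an SVM on labeled cepstral-style feature vectors, assuming scikit-learn; the GFCC extraction itself is omitted, and the random `X` and `y` are placeholders for per-call feature vectors and behaviour labels.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))    # placeholder for per-call GFCC vectors
y = rng.integers(0, 3, size=300)  # 0 = foraging, 1 = flushing, 2 = landing

# RBF-kernel SVM; the feature scaling is our addition for good practice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X[:200], y[:200])
print(clf.score(X[200:], y[200:]))  # held-out accuracy (chance level here)
```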

  19. Wavelet-based group and phase velocity measurements: Method

    NASA Astrophysics Data System (ADS)

    Yang, H. Y.; Wang, W. W.; Hung, S. H.

    2016-12-01

    Measurements of group and phase velocities of surface waves are often carried out by applying a series of narrow bandpass or stationary Gaussian filters localized at specific frequencies to wave packets and estimating the corresponding arrival times from the peak envelopes and the phases of the Fourier spectra. However, it is known that seismic waves are inherently nonstationary and not well represented by a sum of sinusoids. Alternatively, a continuous wavelet transform (CWT), which decomposes a time series into a family of wavelets, translated and scaled copies of a generally fast oscillating and decaying function known as the mother wavelet, is capable of retaining localization in both the time and frequency domains and is well-suited for the time-frequency analysis of nonstationary signals. Here we develop a wavelet-based method to measure frequency-dependent group and phase velocities, an essential dataset used in crust and mantle tomography. For a given time series, we employ the complex Morlet wavelet to obtain the scalogram of amplitude modulus |Wg| and phase φ on the time-frequency plane. The instantaneous frequency (IF) is then calculated by taking the derivative of phase with respect to time, i.e., (1/2π)dφ(f, t)/dt. Time windows comprising strong energy arrivals to be measured can be identified by those IFs close to the frequencies with the maximum modulus and varying smoothly and monotonically with time. The respective IFs in each selected time window are further interpolated to yield a smooth branch of ridge points, or representative IFs, at which the arrival time, tridge(f), and phase, φridge(f), after unwrapping and correcting cycle skipping based on a priori knowledge of the possible velocity range, are determined for group and phase velocity estimation. We will demonstrate our measurement method using both ambient noise cross correlation functions and multi-mode surface waves from earthquakes. The obtained dispersion curves will be compared with those by a
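
    The core quantity, the instantaneous frequency of a complex Morlet scalogram, can be computed in a few lines. The sketch below is a single-frequency simplification in plain NumPy; ridge tracking, cycle-skipping correction, and the velocity estimation itself are omitted.

```python
import numpy as np

def morlet_cwt(signal, dt, f0, cycles=6.0):
    """Convolve `signal` with a complex Morlet wavelet centred at f0 (Hz)."""
    sigma_t = cycles / (2.0 * np.pi * f0)
    t = np.arange(-4 * sigma_t, 4 * sigma_t, dt)
    wavelet = np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))
    return np.convolve(signal, wavelet, mode="same")

dt = 0.01
t = np.arange(0.0, 20.0, dt)
sig = np.cos(2 * np.pi * 1.0 * t)        # 1 Hz test "wave packet"

W = morlet_cwt(sig, dt, f0=1.0)
phase = np.unwrap(np.angle(W))
inst_freq = np.gradient(phase, dt) / (2 * np.pi)  # IF = (1/2pi) dphi/dt
print(np.round(np.median(inst_freq), 2))          # ~1.0 Hz
```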

  20. [A Standing Balance Evaluation Method Based on Largest Lyapunov Exponent].

    PubMed

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang; Zhao, Qing

    2015-12-01

    In order to evaluate the ability of human standing balance scientifically, we propose in this study a new evaluation method based on chaotic nonlinear analysis theory. In this method, a sinusoidal acceleration stimulus in the forward/backward direction, supplied by a motion platform, was applied under the subjects' feet. In addition, three acceleration sensors, fixed to the shoulder, hip, and knee of each subject, were used to capture the dynamic data of balance adjustment. By reconstructing the system phase space, we calculated the largest Lyapunov exponent (LLE) of the dynamic data of the subjects' different segments, and then used the sum of the squares of the differences between the LLEs (SSDLLE) as the balance capability evaluation index. Finally, 20 subjects' indexes were calculated and compared with the evaluation results of existing methods. The results showed that the SSDLLE was more in line with the subjects' performance during the experiment and could measure the body's balance ability to some extent. Moreover, the results also illustrated that balance level is determined by the coordination ability of various joints, and there may be multiple balance control strategies in the process of maintaining balance.
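
    For readers unfamiliar with LLE estimation, the sketch below computes a largest-Lyapunov-exponent estimate in the spirit of Rosenstein's algorithm, which is our illustrative choice since the abstract does not spell out its LLE procedure: delay-embed the series, pair each point with its nearest neighbour, and fit the slope of the mean log divergence.

```python
import numpy as np

def largest_lyapunov(x, dim=3, tau=5, horizon=30):
    """Rosenstein-style LLE estimate (slope in units of 1/sample)."""
    n = len(x) - (dim - 1) * tau
    emb = np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None] - emb[None, :], axis=2)
    np.fill_diagonal(dist, np.inf)
    nn = np.argmin(dist, axis=1)          # nearest neighbour of each point
    div = []
    for k in range(1, horizon):
        idx = np.arange(n - k)
        ok = nn[idx] < n - k              # both trajectories must extend k steps
        gap = np.linalg.norm(emb[idx[ok] + k] - emb[nn[idx[ok]] + k], axis=1)
        div.append(np.mean(np.log(gap + 1e-12)))
    # Slope of mean log-divergence versus time is the LLE estimate.
    return np.polyfit(np.arange(1, horizon), div, 1)[0]

rng = np.random.default_rng(0)
accel = np.sin(0.3 * np.arange(400)) + 0.1 * rng.normal(size=400)
print(largest_lyapunov(accel))  # one segment's LLE; repeat per sensor
```

    The paper's SSDLLE index would then be the sum of squared pairwise differences between the LLEs obtained from the shoulder, hip, and knee recordings.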

  1. Development of Cross-Assembly Phage PCR-Based Methods ...

    EPA Pesticide Factsheets

    Technologies that can characterize human fecal pollution in environmental waters offer many advantages over traditional general indicator approaches. However, many human-associated methods cross-react with non-human animal sources and lack suitable sensitivity for fecal source identification applications. The genome of a newly discovered bacteriophage (~97 kbp), the Cross-Assembly phage or “crAssphage”, assembled from a human gut metagenome DNA sequence library, is predicted to be both highly abundant and to occur predominantly in human feces, suggesting that this double-stranded DNA virus may be an ideal human fecal pollution indicator. We report the development of two human-associated crAssphage endpoint PCR methods (crAss056 and crAss064). A shotgun strategy was employed in which 384 candidate primers were designed to cover ~41 kbp of the crAssphage genome deemed favorable for method development based on a series of bioinformatics analyses. Candidate primers were subjected to three rounds of testing to evaluate assay optimization, specificity, limit of detection (LOD95), geographic variability, and performance in environmental water samples. The top two performing candidate primer sets exhibited 100% specificity (n = 70 individual samples from 8 different animal species), >90% sensitivity (n = 10 raw sewage samples from different geographic locations), LOD95 of 0.01 ng/µL of total DNA per reaction, and successfully detected human fecal pollution in impaired envi

  2. Novel Parachlamydia acanthamoebae quantification method based on coculture with amoebae.

    PubMed

    Matsuo, Junji; Hayashi, Yasuhiro; Nakamura, Shinji; Sato, Marie; Mizutani, Yoshihiko; Asaka, Masahiro; Yamaguchi, Hiroyuki

    2008-10-01

    Parachlamydia acanthamoebae, belonging to the order Chlamydiales, is an obligately intracellular bacterium that infects free-living amoebae and is a potential human pathogen. However, no method exists to accurately quantify viable bacterial numbers. We present a novel quantification method for P. acanthamoebae based on coculture with amoebae. P. acanthamoebae was cultured either with Acanthamoeba spp. or with mammalian epithelial HEp-2 or Vero cells. The infection rate of P. acanthamoebae (amoeba-infectious dose [AID]) was determined by DAPI (4',6-diamidino-2-phenylindole) staining and was confirmed by fluorescent in situ hybridization. AIDs were plotted as logistic sigmoid dilution curves, and P. acanthamoebae numbers, defined as amoeba-infectious units (AIU), were calculated. During culture, amoeba numbers and viabilities did not change, and amoebae did not change from trophozoites to cysts. Eight amoeba strains showed similar levels of P. acanthamoebae growth, and bacterial numbers reached ca. 1,000-fold that of the preculture (10^9 AIU) after 4 days. In contrast, no increase was observed for P. acanthamoebae in either mammalian cell line. However, aberrant structures in epithelial cells, implying possible persistent infection, were seen by transmission electron microscopy. Thus, our method can monitor the numbers of P. acanthamoebae bacteria in host cells and may be useful for understanding chlamydiae present in the natural environment as human pathogens.

  3. Interior reconstruction method based on rotation-translation scanning model.

    PubMed

    Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian

    2014-01-01

    In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV that covers only the region of interest (ROI) for the sake of reducing the radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in CT reconstruction due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be obtained from the two scans without a data rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.

  4. Celestial positioning method based on centroid correction of STAR trajectory

    NASA Astrophysics Data System (ADS)

    Liu, Dianjian; Zhang, Zhili; Zhou, Zhaofa; Zhao, Junyang; Liu, Xianyi

    2017-05-01

    In order to reduce the position deviations between the centroid extracted from the star points and the actual centroid, and to reduce the influence of gross errors in the computed star data on the astronomical positioning accuracy of digital zenith equipment, an astronomical locating method that corrects the star centroid location is proposed. Based on the star trajectory equation obtained from the imaging model of the star image points on the image plane of the zenith equipment, the centroid trajectory parameters are obtained by the least-squares method from the star point centroid positions extracted from multi-frame star images shot at the same observation station; these parameters are used to identify the theoretical centroid position and correct the original centroid position, which then enters the astronomical positioning solution. For a simulated star map with Gaussian white noise of distribution N(0, 5^2), after the centroid correction the accuracy of the astronomical longitude is improved by 0.058″, the latitude by at most 0.176″, and the position by about 5 m. Experimental results show that this method has good applicability and can effectively improve the celestial positioning accuracy of digital zenith equipment.

  5. Improved reliability analysis method based on the failure assessment diagram

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Zhang, Zheng; Zhong, Qunpeng

    2012-07-01

    With the uncertainties related to operating conditions, in-service non-destructive testing (NDT) measurements and material properties considered in the structural integrity assessment, probabilistic analysis based on the failure assessment diagram (FAD) approach has recently become an important concern. However, the point density revealing the probabilistic distribution characteristics of the assessment points is usually ignored. To obtain more detailed and direct knowledge from the reliability analysis, an improved probabilistic fracture mechanics (PFM) assessment method is proposed. By integrating 2D kernel density estimation (KDE) technology into the traditional probabilistic assessment, the probabilistic density of the randomly distributed assessment points is visualized in the assessment diagram. Moreover, a modified interval sensitivity analysis is implemented and compared with probabilistic sensitivity analysis. The improved reliability analysis method is applied to the assessment of a high pressure pipe containing an axial internal semi-elliptical surface crack. The results indicate that these two methods can give consistent sensitivities of input parameters, but the interval sensitivity analysis is computationally more efficient. Meanwhile, the point density distribution and its contour are plotted in the FAD, thereby better revealing the characteristics of PFM assessment. This study provides a powerful tool for the reliability analysis of critical structures.
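
    The 2D KDE step that visualizes the density of assessment points can be sketched directly with SciPy's `gaussian_kde`, used here as a stand-in for the paper's KDE implementation; the sampled (Lr, Kr) points below are illustrative only.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
Lr = rng.normal(0.6, 0.08, 2000)   # toy samples of the load ratio
Kr = rng.normal(0.5, 0.10, 2000)   # toy samples of the toughness ratio

# 2D Gaussian KDE over the Monte-Carlo assessment points (Lr, Kr).
kde = gaussian_kde(np.vstack([Lr, Kr]))
grid_Lr, grid_Kr = np.mgrid[0:1.2:100j, 0:1.2:100j]
density = kde(np.vstack([grid_Lr.ravel(), grid_Kr.ravel()])).reshape(100, 100)
# `density` can now be drawn as contours over the FAD curve.
print(density.max() > 0.0)  # True
```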

  6. Diffusion-based method for producing density-equalizing maps

    PubMed Central

    Gastner, Michael T.; Newman, M. E. J.

    2004-01-01

    Map makers have for many years searched for a way to construct cartograms, maps in which the sizes of geographic regions such as countries or provinces appear in proportion to their population or some other analogous property. Such maps are invaluable for the representation of census results, election returns, disease incidence, and many other kinds of human data. Unfortunately, to scale regions and still have them fit together, one is normally forced to distort the regions' shapes, potentially resulting in maps that are difficult to read. Many methods for making cartograms have been proposed, some of them extremely complex, but all suffer either from this lack of readability or from other pathologies, like overlapping regions or strong dependence on the choice of coordinate axes. Here, we present a technique based on ideas borrowed from elementary physics that suffers from none of these drawbacks. Our method is conceptually simple and produces useful, elegant, and easily readable maps. We illustrate the method with applications to the results of the 2000 U.S. presidential election, lung cancer cases in the State of New York, and the geographical distribution of stories appearing in the news. PMID:15136719

  7. Methods for assessing relative importance in preference based outcome measures.

    PubMed

    Kaplan, R M; Feeny, D; Revicki, D A

    1993-12-01

    This paper reviews issues relevant to preference assessment for utility-based measures of health-related quality of life. Cost-utility studies require a common measurement of health outcome, such as the quality-adjusted life year (QALY). A key element in the QALY methodology is the measure of preference that estimates subjective health quality. Economists and psychologists differ in their preferred approach to preference measurement. Economists rely on utility assessment methods that formally consider economic trades; these methods include the standard gamble, time trade-off, and person trade-off. However, some evidence suggests that many of the assumptions that underlie economic measurements of choice are open to challenge, because human information processors do poorly at integrating complex probability information when making decisions that involve risk. Further, economic analysis assumes that choices accurately correspond to the way rational humans use information. Psychology experiments suggest that methods commonly used for economic analysis do not represent the underlying true preference continuum, and some evidence supports the use of simple rating scales. More recent research by economists attempts to integrate cognitive models, while contemporary research by psychologists considers economic models of choice. The review also suggests that differences in preference between different social groups tend to be small.

  8. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as balanced robot performance, a uniform scanning trajectory, and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation by off-line programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it was possible to reasonably distribute the robot motion among the axes during the process, achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  9. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  10. Histogram-Based Calibration Method for Pipeline ADCs

    PubMed Central

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method require a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the histogram results provide information about the errors in the prior stage. The composed histograms reduce the required number of samples and thus the calibration time, and the method can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively. PMID:26070196
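
    As background, the classic histogram (code-density) test that per-stage variants build on fits in a few lines: with a full-scale ramp input, every code of an ideal ADC is equally likely, so histogram deviations give the DNL and their running sum gives the INL. The helper below is a generic textbook sketch, not the paper's per-stage method.

```python
import numpy as np

def histogram_inl_dnl(codes, n_bits):
    """DNL/INL (in LSB) from the output codes of a uniform full-scale ramp."""
    h = np.bincount(codes, minlength=2 ** n_bits).astype(float)
    h = h[1:-1]                   # drop the clipped end codes
    dnl = h / h.mean() - 1.0      # per-code width error
    inl = np.cumsum(dnl)          # accumulated transition error
    return dnl, inl

# Ideal 8-bit ADC driven by a dense ramp: DNL and INL stay near 0 LSB.
ramp = np.linspace(0.0, 1.0, 200000)
codes = np.clip((ramp * 256).astype(int), 0, 255)
dnl, inl = histogram_inl_dnl(codes, 8)
print(np.abs(dnl).max() < 0.05, np.abs(inl).max() < 0.05)  # True True
```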

  11. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering

    PubMed Central

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research. PMID:28489867

  12. A shape-based inter-layer contours correspondence method for ICT-based reverse engineering.

    PubMed

    Duan, Liming; Yang, Shangpeng; Zhang, Gui; Feng, Fei; Gu, Minghui

    2017-01-01

    The correspondence of a stack of planar contours in ICT (industrial computed tomography)-based reverse engineering, a key step in surface reconstruction, is difficult when the contours or topology of the object are complex. Given the regularity of industrial parts and similarity of the inter-layer contours, a specialized shape-based inter-layer contours correspondence method for ICT-based reverse engineering was presented to solve the above problem based on the vectorized contours. In this paper, the vectorized contours extracted from the slices consist of three graphical primitives: circles, arcs and segments. First, the correspondence of the inter-layer primitives is conducted based on the characteristics of the primitives. Second, based on the corresponded primitives, the inter-layer contours correspond with each other using the proximity rules and exhaustive search. The proposed method can make full use of the shape information to handle industrial parts with complex structures. The feasibility and superiority of this method have been demonstrated via the related experiments. This method can play an instructive role in practice and provide a reference for the related research.

  13. Method of estimation of cloud base height using ground-based digital stereophotography

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Andreev, Maksim S.; Emilenko, Aleksandr S.; Ivanov, Victor A.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2015-11-01

    Errors in the retrieval of atmospheric composition using optical methods (DOAS et al.) are strongly influenced by cloudiness during the measurements. Information on cloud characteristics helps to adjust the optical model of the atmosphere used to interpret the measurements and to reduce the retrieval errors. For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and subsequently processing the obtained sequence of stereo frames to obtain the height of the cloud base. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In the second step, the method of morphological image analysis is used to determine the relative shift of the coordinates of a cloud fragment; the shift is used to estimate the sought cloud base height. The proposed method can be used for automatic processing of stereo data to obtain the cloud base height. The report describes a mathematical model of the stereophotography measurement, poses and solves the problem of adjusting the optical axes of the cameras, describes the method of searching for cloud fragments in the other frame by morphological image analysis, and formulates and solves the problem of estimating the cloud base height. Theoretical investigation shows that for a stereo base of 60 m and shooting with a resolution of 1600x1200 pixels in a field of view of 60°, the errors do not exceed 10% for cloud base heights up to 4 km. Optimization of camera settings can further improve the accuracy. An experimental setup available to the authors, with a stereo base of 17 m and a resolution of 640x480 pixels, preliminarily confirmed the theoretical accuracy estimates in comparison with a laser rangefinder.
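
    The underlying triangulation is simple enough to state in code. The sketch below applies the standard stereo relation H = B * f_pix / d for parallel, zenith-pointing cameras; the focal length and disparity values are illustrative, chosen to match the 60 m baseline and 60° field of view mentioned above.

```python
import math

def cloud_base_height(baseline_m, focal_px, disparity_px):
    """Triangulated height for parallel, zenith-pointing cameras."""
    return baseline_m * focal_px / disparity_px

# 1600-px-wide sensor with a 60 deg field of view:
# f_pix = (1600 / 2) / tan(30 deg) ~ 1386 px.
f_pix = 800.0 / math.tan(math.radians(30.0))
print(round(cloud_base_height(60.0, f_pix, 25.0)))  # ~3325 m for d = 25 px
```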

  14. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to the ones being diagnosed. Optical colonoscopy is a method of directly observing the colon and rectum to diagnose bowel diseases, and it is the most common procedure for screening, surveillance, and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of the colonic mucosa in UC inflammation. In order to address this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images in a database of images diagnosed as UC and can potentially furnish the medical records associated with the retrieved images to assist UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
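
    The retrieval backbone can be illustrated with the colour-histogram half of the feature set. The sketch below builds a joint RGB histogram per image and ranks database images by histogram intersection; the bin count and the intersection measure are our choices, and the HLAC features and mucosa enhancement are omitted.

```python
import numpy as np

def colour_histogram(img_rgb, bins=8):
    """Joint RGB histogram, L1-normalised, as one flat feature vector."""
    h, _ = np.histogramdd(img_rgb.reshape(-1, 3),
                          bins=(bins,) * 3, range=[(0, 256)] * 3)
    h = h.ravel()
    return h / h.sum()

def rank_by_similarity(query, database):
    """Histogram intersection: higher score means more similar."""
    q = colour_histogram(query)
    scores = [np.minimum(q, colour_histogram(img)).sum() for img in database]
    return np.argsort(scores)[::-1]

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
print(rank_by_similarity(db[2], db)[0])  # 2: the query matches itself best
```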

  15. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time, and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCAs), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results; a typical process involved comparing images from different periods. The success rates in identifying objects were low, and detections were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies and advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with an NIR band and Very High Resolution satellites, have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, multispectral (MS) classification, MS segmentation, object analysis, and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step towards updating the NTDB in the Survey of Israel.

  16. A window-based time series feature extraction method.

    PubMed

    Katircioglu-Öztürk, Deniz; Güvenir, H Altay; Ravens, Ursula; Baykal, Nazife

    2017-08-09

    This study proposes a robust similarity score-based time series feature extraction method that is termed Window-based Time series Feature ExtraCtion (WTC). Specifically, WTC generates domain-interpretable results and involves significantly low computational complexity, thereby rendering itself useful for densely sampled and populated time series datasets. In this study, WTC is applied to a proprietary action potential (AP) time series dataset on human cardiomyocytes and three precordial leads from a publicly available electrocardiogram (ECG) dataset. This is followed by comparing WTC in terms of predictive accuracy and computational complexity with shapelet transform and fast shapelet transform (which constitutes an accelerated variant of the shapelet transform). The results indicate that WTC achieves slightly higher classification performance with significantly lower execution time when compared to its shapelet-based alternatives. With respect to its interpretable features, WTC has the potential to enable medical experts to explore definitive common trends in novel datasets.

  17. An unbalanced spectra classification method based on entropy

    NASA Astrophysics Data System (ADS)

    Liu, Zhong-bao; Zhao, Wen-juan

    2017-05-01

    How to distinguish the minority spectra from the majority of the spectra is an important problem in astronomy. In view of this, an unbalanced spectra classification method based on entropy (USCM) is proposed in this paper to deal with the unbalanced spectra classification problem. USCM greatly improves the performance of traditional classifiers in distinguishing the minority spectra, as it takes the data distribution into consideration in the process of classification. However, its time complexity is exponential in the training size, and therefore it can only deal with small- and medium-scale classification problems; how to solve the large-scale classification problem is quite important to USCM. It can be shown by mathematical computation that the dual form of USCM is equivalent to the minimum enclosing ball (MEB) problem; the core vector machine (CVM) is therefore introduced, and USCM based on CVM is proposed to deal with the large-scale classification problem. Several comparative experiments on the 4 subclasses of K-type spectra, 3 subclasses of F-type spectra, and 3 subclasses of G-type spectra from the Sloan Digital Sky Survey (SDSS) verify that USCM and USCM based on CVM perform better than kNN (k nearest neighbour) and SVM (support vector machine) in mining rare spectra, on the small- and medium-scale datasets and the large-scale datasets respectively.

  18. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method.

    PubMed

    Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko

    2011-09-30

    Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system with a film-based system using the ABC method. We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater value services directly to patients.
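
    The ABC allocation step itself is mechanical and easy to sketch: indirect costs are assigned to activities, and each activity's cost is driven down to examination types in proportion to their consumption of a cost driver. All figures and driver units below are invented; only the activity names are taken from the abstract.

```python
# Indirect activity costs for one period (invented figures, in yen).
activity_costs = {"calling patient": 300.0, "take photographs": 900.0}

# Driver units (e.g. minutes of activity) consumed per examination type.
drivers = {
    "lumbar": {"calling patient": 2, "take photographs": 6},
    "knee":   {"calling patient": 2, "take photographs": 3},
    "wrist":  {"calling patient": 2, "take photographs": 2},
}

volumes = {"lumbar": 50, "knee": 80, "wrist": 40}  # examinations per period

for activity, cost in activity_costs.items():
    total_units = sum(drivers[exam][activity] * volumes[exam] for exam in drivers)
    rate = cost / total_units  # cost per driver unit
    for exam in drivers:
        print(f"{exam}: {activity} adds {rate * drivers[exam][activity]:.2f} yen")
```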

  19. Methods for Evaluating Respondent Attrition in Web-Based Surveys

    PubMed Central

    Sabo, Roy T; Krist, Alex H; Day, Teresa; Cyrus, John; Woolf, Steven H

    2016-01-01

    Background: Electronic surveys are convenient, cost effective, and increasingly popular tools for collecting information. While the online platform allows researchers to recruit and enroll more participants, there is an increased risk of participant dropout in Web-based research. Often, these dropout trends are simply reported, adjusted for, or ignored altogether. Objective: To propose a conceptual framework that analyzes respondent attrition and to demonstrate the utility of these methods with existing survey data. Methods: First, we suggest visualization of attrition trends using bar charts and survival curves. Next, we propose a generalized linear mixed model (GLMM) to detect or confirm significant attrition points. Finally, we suggest applications of existing statistical methods to investigate the effect of internal survey characteristics and patient characteristics on dropout. In order to apply this framework, we conducted a case study: a seventeen-item Informed Decision-Making (IDM) module addressing how and why patients make decisions about cancer screening. Results: Using the framework, we were able to find significant attrition points at Questions 4, 6, 7, and 9, and were also able to identify participant responses and characteristics associated with dropout at these points and overall. Conclusions: When these methods were applied to survey data, significant attrition trends were revealed, both visually and empirically, that can inspire researchers to investigate the factors associated with survey dropout, address whether survey completion is associated with health outcomes, and compare attrition patterns between groups. The framework can be used to extract information beyond simple responses, can be useful during survey development, and can help determine the external validity of survey results. PMID:27876687

  20. Artificial Boundary Conditions Based on the Difference Potentials Method

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1996-01-01

    While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting the artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy of computations. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC's technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct such ABC's that combine the advantages relevant to the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then

  1. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    SciTech Connect

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  2. Physics-Based Imaging Methods for Terahertz Nondestructive Evaluation Applications

    NASA Astrophysics Data System (ADS)

    Kniffin, Gabriel Paul

    Lying between the microwave and far infrared (IR) regions, the "terahertz gap" is a relatively unexplored frequency band in the electromagnetic spectrum that exhibits a unique combination of properties from its neighbors. As in IR, many materials have characteristic absorption spectra in the terahertz (THz) band, facilitating the spectroscopic "fingerprinting" of compounds such as drugs and explosives. In addition, non-polar dielectric materials such as clothing, paper, and plastic are transparent to THz, just as they are to microwaves and millimeter waves. These factors, combined with sub-millimeter wavelengths and non-ionizing energy levels, make sensing in the THz band uniquely suited for many NDE applications. In a typical nondestructive test, the objective is to detect a feature of interest within the object and provide an accurate estimate of some geometrical property of the feature. Notable examples include the thickness of a pharmaceutical tablet coating layer or the 3D location, size, and shape of a flaw or defect in an integrated circuit. While the material properties of the object under test are often tightly controlled and are generally known a priori, many objects of interest exhibit irregular surface topographies such as varying degrees of curvature over the extent of their surfaces. Common THz pulsed imaging (TPI) methods originally developed for objects with planar surfaces have been adapted for objects with curved surfaces through use of mechanical scanning procedures in which measurements are taken at normal incidence over the extent of the surface. While effective, these methods often require expensive robotic arm assemblies, the cost and complexity of which would likely be prohibitive should a large volume of tests need to be carried out on a production line. This work presents a robust and efficient physics-based image processing approach based on the mature field of parabolic equation methods, common to undersea acoustics, seismology

  3. Image-based method for automated phase correction of ghost.

    PubMed

    Chen, Chunxiao; Luo, Limin; Tao, Hua; Wang, Shijie

    2005-01-01

    One of the most common artifacts in echo planar imaging is the ghost artifact, typically overcome with the aid of a reference scan preceding the actual image acquisition. In this work, we describe an automated, reference-scan-free method for reducing the ghost artifact using image-based correction. The two-dimensional Fourier transformation of the entire image data matrix is used to reconstruct two new images, one from only the even rows and the other from only the odd rows, with the remaining rows zero-filled. The phase shift between even echoes and odd echoes can be computed using the two images. The unwrapped phase shift, obtained by Levenberg-Marquardt nonlinear fitting, can be used to suppress the ghost effectively.
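
    The even/odd image construction can be sketched in a few lines. The snippet below zero-fills alternate k-space rows, reconstructs the two sub-images, and reads their phase discrepancy over a signal mask; the fitting and the final correction are omitted, and the random k-space stands in for real EPI data.

```python
import numpy as np

def even_odd_images(kspace):
    """Reconstruct one image from even k-space rows and one from odd rows."""
    k_even = np.zeros_like(kspace)
    k_odd = np.zeros_like(kspace)
    k_even[0::2, :] = kspace[0::2, :]
    k_odd[1::2, :] = kspace[1::2, :]
    recon = lambda k: np.fft.ifft2(np.fft.ifftshift(k))
    return recon(k_even), recon(k_odd)

def phase_difference(img_even, img_odd, mask):
    """Even/odd phase discrepancy over a signal mask."""
    return np.angle(img_even[mask] * np.conj(img_odd[mask]))

rng = np.random.default_rng(0)
kspace = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
img_e, img_o = even_odd_images(kspace)
mask = np.abs(img_e) > np.abs(img_e).mean()
print(phase_difference(img_e, img_o, mask).shape)
```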

  4. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    NASA Astrophysics Data System (ADS)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, we need to identify simple and efficient flapping kinematics. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations for the study of flapping flight. Our results highlight that simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) produce sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.

  5. Material measurement method based on femtosecond laser plasma shock wave

    NASA Astrophysics Data System (ADS)

    Zhong, Dong; Li, Zhongming

    2017-03-01

    The acoustic emission signal of the laser plasma shock wave, generated when a femtosecond laser ablates pure Cu, Fe, and Al target materials, has been detected using a fiber Fabry-Perot (F-P) acoustic emission sensing probe. The spectral characteristics of the acoustic emission signals for the three materials have been analyzed using the Fourier transform. The results show that the frequencies of the acoustic emission signals detected from the three materials are different, while for the same material they are almost identical under different ablation energies and detection ranges. Moreover, the spectral amplitudes of the three materials show a fixed pattern. The experimental results and methods suggest a potential application to on-line material measurement based on the plasma shock wave generated by femtosecond laser ablation of the target, detected with the fiber F-P acoustic emission sensor probe.

  6. Note: A manifold ranking based saliency detection method for camera

    NASA Astrophysics Data System (ADS)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research focused on salient object regions in natural scenes has attracted much attention in computer vision and has been widely used in many applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. In order to deal with this problem, this paper presents a novel approach based on the human visual system, which works better through the use of both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then the object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped together into connected areas that have compact spatial distributions. The experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are improved considerably.

  7. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, Arthur J.; Menchhofer, Paul A.

    1995-01-01

    A method and apparatus for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product.

  8. Optical center alignment technique based on inner profile measurement method

    NASA Astrophysics Data System (ADS)

    Wakayama, Toshitaka; Yoshizawa, Toru

    2014-05-01

    Center alignment is an important technique for tuning the spindle of various precision machines in the manufacturing industry. Conventionally, a tool such as a dial indicator has been used to adjust and position the axis through the manual operations of a technician. However, it is not easy to precisely control the axis this way. In this paper, we developed an optical center alignment technique based on inner profile measurement using a ring beam device. In this approach, the center position of a cylinder hole can be determined from the circular profile detected by the optical sectioning method using the ring beam device. In our trials, the resolution of the center position proved to be less than 10 micrometers in extreme cases. This technique is available for practical applications in the machine tool industry.

  9. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, A.J.; Menchhofer, P.A.

    1995-12-19

    A method and apparatus are disclosed for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product. 10 figs.

  10. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46 ≤ α ≤ 150. The HEOA ensures convergence to a near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.

  11. Proteomic mass spectra classification using decision tree based ensemble methods.

    PubMed

    Geurts, Pierre; Fillet, Marianne; de Seny, Dominique; Meuwis, Marie-Alice; Malaise, Michel; Merville, Marie-Paule; Wehenkel, Louis

    2005-07-15

    Modern mass spectrometry allows the determination of proteomic fingerprints of body fluids like serum, saliva or urine. These measurements can be used in many medical applications in order to diagnose the current state or predict the evolution of a disease. Recent developments in machine learning allow one to exploit such datasets, characterized by small numbers of very high-dimensional samples. We propose a systematic approach based on decision tree ensemble methods, which is used to automatically determine proteomic biomarkers and predictive models. The approach is validated on two datasets of surface-enhanced laser desorption/ionization time of flight measurements, for the diagnosis of rheumatoid arthritis and inflammatory bowel diseases. The results suggest that the methodology can handle a broad class of similar problems.
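
    The approach is straightforward to reproduce with standard tooling. The sketch below, assuming scikit-learn, trains an extremely randomized trees ensemble on synthetic stand-ins for SELDI-TOF spectra and reads candidate biomarker positions from the feature importances; it is an illustration of the methodology, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5000))   # 60 spectra, 5000 m/z intensity bins (toy)
y = rng.integers(0, 2, size=60)   # disease vs. control labels (random here)

model = ExtraTreesClassifier(n_estimators=500, random_state=0)
print(cross_val_score(model, X, y, cv=5).mean())  # chance level on toy data

model.fit(X, y)
top_bins = np.argsort(model.feature_importances_)[::-1][:10]
print(top_bins)  # candidate biomarker (m/z bin) positions
```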

  12. Study of a smart platform based on backstepping control method

    NASA Astrophysics Data System (ADS)

    Li, Luyu; Cheng, Baowei; Zhang, Yu; Qin, Han

    2017-07-01

    A structural model is significant for the verification of structural control algorithms. However, for nonlinear behavior, experiments are mostly destructive tests that are costly, and conducting repetitive structural experiments is difficult. Therefore, a repetitive structural vibration model is important for structural vibration control. In this study, a smart platform to realize different structural behaviors is developed based on the backstepping control algorithm. Lyapunov functions are used to derive the control law. Simulations show that the designed model can track the structural responses of arbitrary linear structures very well. In addition, the proposed platform can track the responses of different piecewise linear structures and desired models with various hysteresis very well. Numerical results verify the effectiveness of the proposed backstepping tracking controller for the established platform.

  13. Fiber optic pressure sensing method based on Sagnac interferometer

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Zhuang, Zhi; Chen, Ying; Yang, Yuanhong

    2014-11-01

    A pressure-sensing method using polarization-maintaining photonic crystal fiber (PM-PCF) as the sensing element in a Sagnac interferometer is proposed to monitor interlayer pressure in especially compact structures. The sensing model is analyzed and a test system is set up and validated by experiment. The birefringence is modified by the deformation of the PM-PCF under transverse pressure, so pressure can be measured by detecting the wavelength shift of one specific valley in the Sagnac interferometer output. The experimental results show that the output interference fringes shift linearly with pressure. A dynamic range of 0-10 kN, a sensing precision of 2.6%, and a pressure sensitivity of 0.4414 nm/kN are achieved, and the strain relaxation of the cushion can be clearly observed. The sensor offers good engineering practicability and the capability to suppress interference caused by fluctuations in environmental temperature, with a temperature sensitivity of -11.8 pm/°C.
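
    A minimal sketch of the linear readout implied by the abstract: fit the valley-wavelength shift against load and invert the line to estimate pressure. The data points are invented; only the ~0.4414 nm/kN slope comes from the reported sensitivity.

```python
import numpy as np

load_kN = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
shift_nm = 0.4414 * load_kN + np.random.default_rng(2).normal(0, 0.05, 6)

sens, offset = np.polyfit(load_kN, shift_nm, 1)   # nm per kN, intercept
print(f"fitted sensitivity: {sens:.4f} nm/kN")

def pressure_from_shift(dlam_nm):
    """Invert the calibration line: wavelength shift (nm) -> load (kN)."""
    return (dlam_nm - offset) / sens

print("5.0 nm shift ->", round(pressure_from_shift(5.0), 2), "kN")
```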

  14. A novel classification method based on membership function

    NASA Astrophysics Data System (ADS)

    Peng, Yaxin; Shen, Chaomin; Wang, Lijia; Zhang, Guixu

    2011-03-01

    We propose a method for medical image classification using membership functions. Our aim is to classify an image into several classes based on prior knowledge. For every point we calculate its membership function, i.e., the probability that the point belongs to each class, and the point is finally labeled as the class with the highest membership value. The classification is thus reduced to minimizing a functional whose arguments are the membership functions. Our paper contains three novelties. First, bias correction and the Rudin-Osher-Fatemi (ROF) model are applied to the input image to enhance image quality. Second, an unconstrained functional is used: through variable substitution we avoid the constraints that the membership functions be positive and sum to one. Third, several techniques are used to speed up the computation. Experimental results on ventricle images show the validity of this approach.
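
    A hedged sketch of the variable-substitution idea: represent the memberships through a softmax of unconstrained variables so positivity and sum-to-one hold automatically. The data term below is a generic fuzzy-clustering functional on a synthetic 1-D "image", not the paper's exact energy (no bias correction or ROF smoothing).

```python
import numpy as np

rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
K = 2
c = np.array([0.3, 0.6])                  # class centers (updated below)
v = rng.standard_normal((img.size, K))    # unconstrained variables

def softmax(v):
    e = np.exp(v - v.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

lr = 0.5
for _ in range(300):
    u = softmax(v)                                    # memberships
    resid = (img[:, None] - c[None, :]) ** 2          # data fidelity term
    # gradient of sum(u * resid) w.r.t. v through the softmax substitution
    g = u * (resid - (u * resid).sum(axis=1, keepdims=True))
    v -= lr * g
    c = (u * img[:, None]).sum(axis=0) / u.sum(axis=0)  # update centers

labels = softmax(v).argmax(axis=1)                    # highest membership wins
print("class counts:", np.bincount(labels), "centers:", np.round(c, 3))
```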

  15. Classification data mining method based on dynamic RBF neural networks

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the rapid development of the Internet, the capacity to manufacture and collect data with information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining (DM) technology has developed rapidly to meet this need. DM, however, often faces data that are noisy, disordered and nonlinear. Fortunately, artificial neural networks (ANNs) are well suited to these problems because of their robustness, adaptability, parallel processing, distributed memory and error tolerance. This paper discusses in detail the application of ANN methods in DM, based on an analysis of the various data mining technologies, and lays particular stress on classification data mining based on RBF neural networks. Pattern classification is an important application of RBF neural networks. In an on-line environment the training dataset is variable, so batch learning algorithms (e.g., OLS), which generate a large amount of unnecessary retraining, have low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA adaptively adjusts the parameters of the RBF network by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
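
    A rough sketch of the incremental idea on a toy regression task: after each new sample, gradient descent nudges only the output weights of a fixed-center RBF network, avoiding batch retraining. The paper's full ILA (including its treatment of centers and widths) is not reproduced; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
centers = rng.uniform(-3, 3, (10, 1))     # fixed RBF centers
width = 1.0
w = np.zeros(10)                          # output weights

def phi(x):
    """Gaussian RBF activations for a scalar input."""
    return np.exp(-((x - centers[:, 0]) ** 2) / (2 * width ** 2))

eta = 0.1
for _ in range(2000):                     # stream of (x, y) samples
    x = rng.uniform(-3, 3)
    y = np.sin(x)                         # target function
    h = phi(x)
    err = y - w @ h
    w += eta * err * h                    # one-sample gradient step

xs = np.linspace(-3, 3, 7)
print("model :", np.round([w @ phi(x) for x in xs], 2))
print("target:", np.round(np.sin(xs), 2))
```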

  16. A quantitative dimming method for LED based on PWM

    NASA Astrophysics Data System (ADS)

    Wang, Jiyong; Mou, Tongsheng; Wang, Jianping; Tian, Xiaoqing

    2012-10-01

    Traditional light sources were required to provide stable and uniform illumination for living or working environments, given the performance of human visual function. This requirement seemed sufficient until non-visual functions of the ganglion cells in the retinal photosensitive layer were discovered. A new generation of lighting technology, however, is emerging, based on novel lighting materials such as LEDs and on the photobiological effects of light on human physiology and behavior. To realize dynamic LED lighting whose intensity and color are adjustable to the needs of photobiological effects, a quantitative dimming method based on Pulse Width Modulation (PWM) and light-mixing technology is presented. Beginning with two-channel PWM, this paper demonstrates the determinacy and limitation of PWM dimming for realizing Expected Photometric and Colorimetric Quantities (EPCQ), in accordance with an analysis of the geometrical, photometric, colorimetric and electrodynamic constraints. A quantitative model which maps the EPCQ into duty cycles is finally established. The deduced model suggests that determinacy is unique to two- and three-channel PWM, whereas the limitation is an inevitable commonness for any number of channels. To examine the model, a light-mixing experiment with two kinds of white LED simulated the variation of illuminance and Correlated Color Temperature (CCT) from dawn to midday. Mean deviations between theoretical and measured values were 15 lx and 23 K, respectively. The results show that this method can effectively realize a light spectrum with a specific EPCQ requirement, and it provides a theoretical basis and a practical way for dynamic LED lighting.
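
    A toy version of the two-channel mapping: with PWM, the time-averaged output scales linearly with duty cycle, so two target quantities yield a 2x2 linear system in the two duty cycles. The per-channel numbers below are invented stand-ins, not measured LED data.

```python
import numpy as np

# per-channel full-on contributions: [illuminance (lx), "warmth" index]
warm = np.array([400.0, 0.9])
cool = np.array([600.0, 0.3])
A = np.column_stack([warm, cool])

target = np.array([520.0, 0.55])          # desired mixed output
d = np.linalg.solve(A, target)            # duty cycles d1, d2
print("duty cycles:", np.round(d, 3))

# determinacy holds only if 0 <= d <= 1; otherwise the target quantities
# lie outside the gamut reachable by these two channels (the "limitation").
print("feasible:", bool(np.all((d >= 0) & (d <= 1))))
```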

  17. Comparison of Text-Based and Visual-Based Programming Input Methods for First-Time Learners

    ERIC Educational Resources Information Center

    Saito, Daisuke; Washizaki, Hironori; Fukazawa, Yoshiaki

    2017-01-01

    Aim/Purpose: When learning to program, both text-based and visual-based input methods are common. However, it is unclear which method is more appropriate for first-time learners (first learners). Background: The differences in the learning effect between text-based and visual-based input methods for first learners are compared using a…

  18. A Molecular Selection Index Method Based on Eigenanalysis

    PubMed Central

    Cerón-Rojas, J. Jesús; Castillo-González, Fernando; Sahagún-Castellanos, Jaime; Santacruz-Varela, Amalio; Benítez-Riquelme, Ignacio; Crossa, José

    2008-01-01

    The traditional molecular selection index (MSI) employed in marker-assisted selection maximizes the selection response by combining information on molecular markers linked to quantitative trait loci (QTL) and phenotypic values of the traits of the individuals of interest. This study proposes an MSI based on an eigenanalysis method (molecular eigen selection index method, MESIM), where the first eigenvector is used as a selection index criterion, and its elements determine the proportion of the trait's contribution to the selection index. This article develops the theoretical framework of MESIM. Simulation results show that the genotypic means and the expected selection response from MESIM for each trait are equal to or greater than those from the traditional MSI. When several traits are simultaneously selected, MESIM performs well for traits with relatively low heritability. The main advantages of MESIM over the traditional molecular selection index are that its statistical sampling properties are known and that it does not require economic weights and thus can be used in practical applications when all or some of the traits need to be improved simultaneously. PMID:18716338
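
    A bare-bones illustration of the eigenanalysis idea on synthetic data: the leading eigenvector of the covariance matrix of phenotypic values plus a marker score is taken as the vector of index weights. MESIM's actual estimation procedure and sampling theory are in the paper; everything below is a toy.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
traits = rng.standard_normal((n, 3))          # phenotypic values
marker_score = traits @ [0.6, 0.3, 0.1] + 0.5 * rng.standard_normal(n)
Z = np.column_stack([traits, marker_score])

C = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)          # eigenvalues in ascending order
b = eigvecs[:, -1]                            # first (leading) eigenvector
b = b / np.abs(b).max()                       # scale for readability

index = Z @ b                                 # selection index per individual
selected = np.argsort(index)[::-1][:20]       # top candidates
print("index weights:", np.round(b, 3))
print("top candidates:", selected[:5])
```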

  19. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT (Spline Reconstruction Technique). This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding which restricts reconstruction only within object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library called STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  20. A wavelet-based method for multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method is proposed for multispectral face recognition in this paper. Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to favor the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) by using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images from the same subject are available, the identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of applying five recognition algorithms to the three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset results in the best fusion performance compared to the other two datasets.
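
    A simplified take on the orientation-code idea, assuming a small self-built Gabor bank: keep the index of the strongest of four orientations at each pixel, binary-encode it, and compare images by Hamming distance. The FPB's multiband structure and order-statistics packing are omitted, and the "faces" here are random arrays.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(theta, lam=8.0, sigma=3.0, size=15):
    """Real Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def orientation_code(img, n_orient=4):
    """Index of strongest orientation per pixel, as a 2-bit binary code."""
    resp = np.stack([np.abs(convolve(img, gabor_kernel(k * np.pi / n_orient)))
                     for k in range(n_orient)])
    idx = resp.argmax(axis=0)
    return np.unpackbits(idx.astype(np.uint8)).reshape(*img.shape, 8)[..., -2:]

def hamming(a, b):
    return np.mean(a != b)

rng = np.random.default_rng(6)
face_a = rng.random((64, 64))
face_a2 = face_a + 0.05 * rng.standard_normal((64, 64))   # same "subject"
face_b = rng.random((64, 64))                             # different "subject"

print("same subject :", hamming(orientation_code(face_a), orientation_code(face_a2)))
print("diff subjects:", hamming(orientation_code(face_a), orientation_code(face_b)))
```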

  1. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper proposes a superpixel density-based clustering algorithm for automatic image classification and outlier identification. Pixel location coordinates and gray values are used to compute density and distance, from which images are classified automatically and outliers are extracted. Because the large number of pixels dramatically increases the computational complexity, the image is first preprocessed into a small number of superpixel sub-blocks before the density and distance calculations, and a normalized density-distance discrimination rule is designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that the method requires no human intervention, categorizes images faster than the standard density clustering algorithm, and performs automated classification and outlier extraction effectively.
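
    A sketch of the density/distance computation in the spirit of density-peak clustering, applied to toy pixel feature vectors [row, col, gray]. The superpixel pre-aggregation and the paper's normalized discrimination rule are not reproduced, and the thresholds below are arbitrary.

```python
import numpy as np
from scipy.spatial.distance import cdist

rng = np.random.default_rng(7)
# toy "pixels": two compact blobs plus one stray outlier point
pts = np.vstack([rng.normal([10, 10, 50], 2, (100, 3)),
                 rng.normal([40, 40, 200], 2, (100, 3)),
                 [[25, 5, 120]]])

D = cdist(pts, pts)
dc = np.percentile(D[D > 0], 5)              # cutoff distance
rho = (D < dc).sum(axis=1) - 1               # local density

# delta: distance to the nearest point of higher density
delta = np.empty(len(pts))
for i in range(len(pts)):
    higher = np.where(rho > rho[i])[0]
    delta[i] = D[i, higher].min() if higher.size else D[i].max()

gamma = rho * delta                          # cluster-center score
centers = np.argsort(gamma)[::-1][:2]
outliers = np.where((rho <= 1) & (delta > 10 * dc))[0]   # low rho, large delta
print("centers:", centers, "outlier candidates:", outliers)
```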

  2. a Mapping Method of Slam Based on Look up Table

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, J.; Wang, A.; Wang, J.

    2017-09-01

    In recent years several V-SLAM (Visual Simultaneous Localization and Mapping) approaches have appeared, showing impressive reconstructions of the world. However, these maps are built with far more information than required, a limitation that comes from processing each key-frame in its entirety. In this paper we present, for the first time, a mapping method for visual SLAM based on a look-up table (LUT) that can improve mapping effectively. Because the method extracts features in each cell into which the image is divided, it obtains a camera pose that is more representative of the whole key-frame. The tracking direction of key-frames is obtained by counting the parallax directions of the feature points. The LUT stores, for each tracking direction, the number of cells needed for mapping, which reduces redundant information in the key-frame and makes mapping more efficient. The results show that a better map with less noise is built in less than one-third of the time. We believe that the LUT's capacity to build maps efficiently makes it a good choice for the community to investigate in scene reconstruction problems.

  3. Unbiased methods for population-based association studies.

    PubMed

    Devlin, B; Roeder, K; Bacanu, S A

    2001-12-01

    Large, population-based samples and large-scale genotyping are being used to evaluate disease/gene associations. A substantial drawback to such samples is the fact that population substructure can induce spurious associations between genes and disease. We review two methods, called genomic control (GC) and structured association (SA), that obviate many of the concerns about population substructure by using the features of the genomes present in the sample to correct for stratification. The GC approach exploits the fact that population substructure generates "overdispersion" of the statistics used to assess association. By testing multiple polymorphisms throughout the genome, only some of which are pertinent to the disease of interest, the degree of overdispersion generated by population substructure can be estimated and taken into account. The SA approach assumes that the sampled population, although heterogeneous, is composed of subpopulations that are themselves homogeneous. By using multiple polymorphisms throughout the genome, this "latent class method" estimates the probability that sampled individuals derive from each of these latent subpopulations. GC has the advantage of robustness, simplicity, and wide applicability, even to experimental designs such as DNA pooling. SA is a bit more complicated but has the advantage of greater power in some realistic settings, such as admixed populations or when association varies widely across subpopulations. It, too, is widely applicable. Both also have weaknesses, as elaborated in our review.
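
    A quick sketch of the genomic-control correction described above: estimate the inflation factor lambda as the median of the genome-wide 1-df chi-square association statistics divided by the theoretical null median (~0.455), then deflate the statistics. The statistics are simulated, and choices such as clipping lambda at 1 follow common practice rather than this paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
# simulated null markers with stratification-induced inflation
chi2_stats = 1.3 * rng.chisquare(df=1, size=5000)

lam = np.median(chi2_stats) / stats.chi2.ppf(0.5, df=1)   # null median ~0.455
lam = max(lam, 1.0)                                       # never inflate power
adjusted = chi2_stats / lam
pvals = stats.chi2.sf(adjusted, df=1)
print(f"lambda = {lam:.2f}; min adjusted p = {pvals.min():.3g}")
```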

  4. Three-Dimensional Imaging Methods Based on Multiview Images

    NASA Astrophysics Data System (ADS)

    Son, Jung-Young; Javidi, Bahram

    2005-09-01

    Three-dimensional imaging methods, based on parallaxes as their depth cues, can be classified into the stereoscopic, providing binocular parallax only, and the multiview, providing both binocular and motion parallaxes. In these methods, the parallaxes are provided by creating a viewing zone with either special optical eyeglasses or a special optical plate as the viewing-zone-forming optics. For stereoscopic image generation, either the eyeglasses or the optical plate can be employed, but the multiview requires the optical plate or eyeglasses with a tracking device. The stereoscopic image pair and the multiview images are presented either simultaneously or as a time sequence using projectors or display panels. Multiview images can also be presented two at a time according to the viewer's movements. The presence of the viewing-zone-forming optics often causes undesirable problems, such as the appearance of moiré fringes, image quality deterioration, depth reversion, limited viewing regions, low image brightness, image blurring, and the inconvenience of wearing eyeglasses.

  5. GPU based contouring method on grid DEM data

    NASA Astrophysics Data System (ADS)

    Tan, Liheng; Wan, Gang; Li, Feng; Chen, Xiaohui; Du, Wenlong

    2017-08-01

    This paper presents a novel method to generate contour lines from grid DEM data based on the programmable GPU pipeline. Previous contouring approaches often use the CPU to construct a finite element mesh from the raw DEM data and then extract contour segments from the elements; they also need a tracing or sorting strategy to generate the final continuous contours. These approaches can be heavily CPU-costing and time-consuming, and the generated contours would be unsmooth if the raw data is sparsely distributed. Unlike the CPU approaches, we employ the GPU's vertex shader to generate a triangular mesh with arbitrary user-defined density, in which the height of each vertex is calculated through a third-order Cardinal spline function. Then, in the same frame, segments are extracted from the triangles by the geometry shader and translated to the CPU side with an internal order in the GPU's transform feedback stage. Finally, we propose a "Grid Sorting" algorithm to obtain the continuous contour lines by traversing the segments only once. Our method makes use of multiple stages of the GPU pipeline for computation, generates smooth contour lines, and is significantly faster than the previous CPU approaches. The algorithm can be easily implemented with the OpenGL 3.3 API or higher on consumer-level PCs.

  6. Bacteria counting method based on polyaniline/bacteria thin film.

    PubMed

    Zhihua, Li; Xuetao, Hu; Jiyong, Shi; Xiaobo, Zou; Xiaowei, Huang; Xucheng, Zhou; Tahir, Haroon Elrasheid; Holmes, Mel; Povey, Malcolm

    2016-07-15

    A simple and rapid bacteria counting method based on a polyaniline (PANI)/bacteria thin film is proposed. Owing to the negative effect of immobilized bacteria on the deposition of PANI on a glassy carbon electrode (GCE), PANI/bacteria thin films containing less PANI are obtained as the bacteria concentration increases. The prepared PANI/bacteria film was characterized by cyclic voltammetry (CV) to provide a quantitative index for determining the bacteria count, and electrochemical impedance spectroscopy (EIS) was also performed to further investigate the differences among the PANI/bacteria films. A good linear relationship between the CV peak currents and the log total count of bacteria (Bacillus subtilis) could be established using the equation Y = -30.413X + 272.560 (R² = 0.982) over the range of 5.3×10⁴ to 5.3×10⁸ CFU mL⁻¹, with acceptable stability, reproducibility and switchable ability. The proposed method is feasible for simple and rapid counting of bacteria.
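
    Direct use of the reported calibration line Y = -30.413X + 272.560, with Y the CV peak current and X = log10(total count); inverting it maps a measured peak current to an estimated count. The example currents are invented, and the inversion is only meaningful inside the reported range.

```python
def count_from_peak_current(peak_current):
    """Invert the reported calibration: CV peak current -> CFU/mL."""
    log_count = (272.560 - peak_current) / 30.413
    return 10 ** log_count

for i_peak in (60.0, 90.0, 120.0):
    print(f"peak {i_peak} -> {count_from_peak_current(i_peak):.3g} CFU/mL")
```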

  7. Trinocular stereo vision method based on mesh candidates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Xu, Gang; Li, Haibin

    2010-10-01

    One of the most interesting goals of machine vision is 3D structure recovery of the scenes. This recovery has many applications, such as object recognition, reverse engineering, automatic cartography, autonomous robot navigation, etc. To meet the demand of measuring the complex prototypes in reverse engineering, a trinocular stereo vision method based on mesh candidates was proposed. After calibration of the cameras, the joint field of view can be defined in the world coordinate system. Mesh grid is established along the coordinate axes, and the mesh nodes are considered as potential depth data of the object surface. By similarity measure of the correspondence pairs which are projected from a certain group of candidates, the depth data can be obtained readily. With mesh nodes optimization, the interval between the neighboring nodes in depth direction could be designed reasonably. The potential ambiguity can be eliminated efficiently in correspondence matching with the constraint of a third camera. The cameras can be treated as two independent pairs, left-right and left-centre. Due to multiple peaks of the correlation values, the binocular method may not satisfy the accuracy of the measurement. Another image pair is involved if the confidence coefficient is less than the preset threshold. The depth is determined by the highest sum of correlation of both camera pairs. The measurement system was simulated using 3DS MAX and Matlab software for reconstructing the surface of the object. The experimental result proved that the trinocular vision system has good performance in depth measurement.

  8. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered by using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data according to the misalignment can be repeated until the desired degree of registration is achieved. The method presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data.
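
    One way to obtain the local-area translations the method relies on, assuming FFT-based circular cross-correlation of binary boundary maps: the correlation peak gives the misalignment. Boundary extraction and the rotation solve are simplified away, and the maps below are random stand-ins.

```python
import numpy as np

rng = np.random.default_rng(12)
N = 128
ref = (rng.random((N, N)) > 0.97).astype(float)   # sparse binary boundary map
moved = np.roll(np.roll(ref, 5, axis=0), -3, axis=1)

# circular cross-correlation via the FFT; the peak gives the offset
corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(moved))).real
dy, dx = np.unravel_index(corr.argmax(), corr.shape)
dy = dy - N if dy > N // 2 else dy                # unwrap to signed shifts
dx = dx - N if dx > N // 2 else dx
print("shift that maps 'moved' back onto 'ref':", (dy, dx))   # ~ (-5, 3)
```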

  9. Data Bases in Writing: Method, Practice, and Metaphor.

    ERIC Educational Resources Information Center

    Schwartz, Helen J.

    1985-01-01

    Points out the need for informed and experienced users of data bases. Discusses the definition of a data base, creating a data base for research, comparison use, and checking written text as a data base. (EL)

  11. Region-based and pathway-based QTL mapping using a p-value combination method.

    PubMed

    Yang, Hsin-Chou; Chen, Chia-Wei

    2011-11-29

    Quantitative trait locus (QTL) mapping using deep DNA sequencing data is a challenging task. In this study we performed region-based and pathway-based QTL mappings using a p-value combination method to analyze the simulated quantitative traits Q1 and Q4 and the exome sequencing data. The aims were to evaluate the performance of the QTL mapping approaches that were used and to suggest plausible strategies for QTL mapping of DNA sequencing data. We conducted single-locus QTL mappings using a linear regression model with adjustments for age and smoking status, and we also conducted region-based and pathway-based QTL mappings using a truncated product method for combining p-values from the single-locus QTL mapping. To account for the features of rare variants and common single-nucleotide polymorphisms (SNPs), we considered independently rare-variant-only, common-SNP-only, and combined analyses. An analysis of 200 simulated replications showed that the three region-based methods reasonably controlled type I error, whereas the combined analysis yielded the greatest statistical power. Rare-variant-only, common-SNP-only, and combined analyses were also applied to pathway-based QTL mappings. We found that pathway-based QTL mappings had a power of approximately 100% when the significance of the vascular endothelial growth factor pathway was evaluated, but type I errors were slightly inflated. Our approach complements single-locus QTL mapping. An integrated approach using single-locus, combined region-based, and combined pathway-based analyses should yield promising results for QTL mapping of DNA sequencing data.
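
    A minimal truncated product method (in the spirit of Zaykin et al.) for combining single-locus p-values within a region: multiply the p-values at or below a cutoff and calibrate the product against a Monte Carlo null. The region p-values below are invented, and the paper's covariate adjustment and gene/pathway grouping are omitted.

```python
import numpy as np

rng = np.random.default_rng(9)

def tpm_stat(pvals, tau=0.05):
    """Product of p-values at or below the truncation point tau."""
    trunc = pvals[pvals <= tau]
    return np.prod(trunc) if trunc.size else 1.0

def tpm_pvalue(pvals, tau=0.05, n_mc=20000):
    """Monte Carlo p-value: smaller products are more significant."""
    obs = tpm_stat(pvals, tau)
    null = [tpm_stat(rng.uniform(size=len(pvals)), tau) for _ in range(n_mc)]
    return np.mean([w <= obs for w in null])

region_p = np.array([0.001, 0.03, 0.20, 0.45, 0.008, 0.65])
print("region-based TPM p-value:", tpm_pvalue(region_p))
```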

  12. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictable accuracy of our model when using an organism's data set for the training procedure, and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.

  13. Methods for Evaluating Respondent Attrition in Web-Based Surveys.

    PubMed

    Hochheimer, Camille J; Sabo, Roy T; Krist, Alex H; Day, Teresa; Cyrus, John; Woolf, Steven H

    2016-11-22

    Electronic surveys are convenient, cost effective, and increasingly popular tools for collecting information. While the online platform allows researchers to recruit and enroll more participants, there is an increased risk of participant dropout in Web-based research. Often, these dropout trends are simply reported, adjusted for, or ignored altogether. The objective of this study is to propose a conceptual framework that analyzes respondent attrition and to demonstrate the utility of these methods with existing survey data. First, we suggest visualization of attrition trends using bar charts and survival curves. Next, we propose a generalized linear mixed model (GLMM) to detect or confirm significant attrition points. Finally, we suggest applications of existing statistical methods to investigate the effect of internal survey characteristics and patient characteristics on dropout. To apply this framework, we conducted a case study using a seventeen-item Informed Decision-Making (IDM) module addressing how and why patients make decisions about cancer screening. Using the framework, we found significant attrition points at Questions 4, 6, 7, and 9, and we also identified participant responses and characteristics associated with dropout at these points and overall. When these methods were applied to survey data, significant attrition trends were revealed, both visually and empirically, that can inspire researchers to investigate the factors associated with survey dropout, address whether survey completion is associated with health outcomes, and compare attrition patterns between groups. The framework can be used to extract information beyond simple responses, can be useful during survey development, and can help determine the external validity of survey results.

  14. High accuracy operon prediction method based on STRING database scores

    PubMed Central

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-01-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8–a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412–D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictable accuracy of our model when using an organism's data set for the training procedure, and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even for these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580

  15. IR-based method for copper electrolysis short circuit detection

    NASA Astrophysics Data System (ADS)

    Makipaa, Esa; Tanttu, Juha T.; Virtanen, Henri

    1997-04-01

    In the copper electrorefining process, short circuits between the anodes and cathodes are harmful. They reduce the production rate and degrade cathode copper quality, so short circuits should be detected and eliminated as soon as possible. Manual inspection methods take a lot of time, and excessive walking on the electrodes cannot be avoided. For these reasons there is considerable interest in developing short-circuit detection and quality control. In this paper an IR-based method for short-circuit detection is presented. When an anode-cathode pair is short-circuited, the cathode bar in particular becomes significantly warmer than a bar in normal condition. Using an IR camera mounted on a moving crane, these hot spots among the electrodes were easily detected. IR imaging was tested in the harsh conditions of the refinery hall at various crane speeds. Image processing is the tool used to interpret the obtained IR images, and an algorithm for locating short circuits in the electrolytic cell is proposed, using the imaging results as test material. The basic idea of the algorithm is first to find and compute the necessary edges and initial lines of the electrolytic cell, and then to determine the exact position of each cathode plate in the cell so that, using thresholding, the location of the short-circuited cathode can be determined. IR imaging combined with image processing has proven superior to manual methods for predictive maintenance and process control in the copper electrorefining process. It also makes it possible to collect valuable information for quality control purposes.

  16. Structural topology design of container ship based on knowledge-based engineering and level set method

    NASA Astrophysics Data System (ADS)

    Cui, Jin-ju; Wang, De-yu; Shi, Qi-qi

    2015-06-01

    Knowledge-Based Engineering (KBE) is introduced into the ship structural design in this paper. From the implementation of KBE, the design solutions for both Rules Design Method (RDM) and Interpolation Design Method (IDM) are generated. The corresponding Finite Element (FE) models are generated. Topological design of the longitudinal structures is studied where the Gaussian Process (GP) is employed to build the surrogate model for FE analysis. Multi-objective optimization methods inspired by Pareto Front are used to reduce the design tank weight and outer surface area simultaneously. Additionally, an enhanced Level Set Method (LSM) which employs implicit algorithm is applied to the topological design of typical bracket plate which is used extensively in ship structures. Two different sets of boundary conditions are considered. The proposed methods show satisfactory efficiency and accuracy.

  17. A GIS-based method for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Kleomenis; Stathopoulos, Nikos; Psarogiannis, Athanasios; Penteris, Dimitris; Tsiakos, Chrisovalantis; Karagiannopoulou, Aikaterini; Krikigianni, Eleni; Karymbalis, Efthimios; Chalkias, Christos

    2016-04-01

    Floods are global physical hazards with negative environmental and socio-economic impacts on local and regional scales. The technological evolution of recent decades, especially in the field of geoinformatics, has offered new advantages in hydrological modelling, and this study uses that technology to quantify flood risk. The study area is an ungauged catchment; using mainly GIS-based hydrological and geomorphological analysis together with a GIS-based distributed unit hydrograph model, a series of outcomes has been produced. More specifically, this paper examines the behaviour of the Kladeos basin (Peloponnese, Greece) using real rainfall data as well as hypothetical storms. The hydrological analysis was carried out on a Digital Elevation Model of 5x5 m pixel size, while the quantitative drainage basin characteristics were calculated and studied in terms of stream order and its contribution to flooding. Unit hydrographs are, as is well known, useful when data are scarce, and in this work a sequence of flood risk assessments was produced with the time-area method using GIS technology. Essentially, the proposed methodology estimates parameters such as discharge and flow velocity in order to quantify flood risk. Keywords: flood risk assessment quantification; GIS; hydrological analysis; geomorphological analysis.

  18. A DNA-based method for detecting homologous blood doping.

    PubMed

    Manokhina, Irina; Rupert, James L

    2013-12-01

    Homologous (or allogeneic) blood doping, in which blood is transferred from a donor into a recipient athlete, is the easiest, cheapest, and fastest way to increase red cell mass (hematocrit) and therefore the oxygen-carrying capacity of the blood. Although thought to have been rendered obsolete as a doping strategy by the increased use of rhEPO to raise hematocrit, there is evidence that athletes are still using this potentially dangerous method to improve endurance performance. Current testing for homologous blood doping is based on identification of mixed populations of red blood cells by flow cytometry. This paper proposes that homologous blood doping could also be tested for by high-resolution qPCR-based genotyping and demonstrates that assays could be developed that would detect second populations of cells even if the "donor" blood was depleted of 99% of the DNA-containing leukocytes. Issues of test specificity and sensitivity are discussed, as well as some of the ethical considerations that would have to be addressed if athletes' genotypes were to be used by the anti-doping authorities to prevent, or detect, the use of prohibited ergogenic practices.

  19. A rail wear measurement method based on structured light scanning

    NASA Astrophysics Data System (ADS)

    Chen, Peng; Wang, Peijun; Lauer, Martin; Tang, Xiaomin; Wang, Jindong

    2017-02-01

    Rail wear measurement is a necessary task in railway infrastructure inspection. To acquire the wear amounts accurately with more continuous scanning data, a rail wear measurement method based on structured light scanning is proposed in this paper. The CAD model of the rail is converted into a point set, and the data registration is implemented by aligning the scanning data to the point cloud generated by the CAD model. On a cross section plane of the rail, the vertical and lateral wear amounts are calculated by the nearby points projected onto the plane. To verify the accuracy of wear measurement based on structured light scanning, the wear amounts calculated by laser scanning data are compared. For the comparison, an experiment is designed to ensure that the same plane is sliced in two different kinds of measurement. On the cross section plane, the wear amounts are calculated by the distances from these points to the 2D profile of the rail CAD model, and then the results are compared with those from laser scanning data for the accuracy evaluation. It indicates that the accuracy of the structured light scanning is sufficient for railway track wear measurement.

  20. Post-Fragmentation Whole Genome Amplification-Based Method

    NASA Technical Reports Server (NTRS)

    Benardini, James; LaDuc, Myron T.; Langmore, John

    2011-01-01

    This innovation is derived from a proprietary amplification scheme that is based upon random fragmentation of the genome into a series of short, overlapping templates. The resulting shorter DNA strands (<400 bp) constitute a library of DNA fragments with defined 3′ and 5′ termini. Specific primers to these termini are then used to isothermally amplify this library into potentially unlimited quantities that can be used immediately for multiple downstream applications including gel electrophoresis, quantitative polymerase chain reaction (QPCR), comparative genomic hybridization microarray, SNP analysis, and sequencing. The standard reaction can be performed with minimal hands-on time, and can produce amplified DNA in as little as three hours. Post-fragmentation whole genome amplification-based technology provides a robust and accurate method of amplifying femtogram levels of starting material into microgram yields with no detectable allele bias. The amplified DNA also facilitates the preservation of samples (spacecraft samples) by amplifying scarce amounts of template DNA into microgram concentrations in just a few hours. Based on further optimization of this technology, this could be a feasible technology to use in sample preservation for potential future sample return missions. The research and technology development described here can be pivotal in dealing with backward/forward biological contamination from planetary missions. Such efforts rely heavily on an increasing understanding of the burden and diversity of microorganisms present on spacecraft surfaces throughout assembly and testing. The development and implementation of these technologies could significantly improve the comprehensiveness and resolving power of spacecraft-associated microbial population censuses, and are important to the continued evolution and advancement of planetary protection capabilities. Current molecular procedures for assaying spacecraft-associated microbial burden and diversity have

  1. Geomorphometry-based method of landform assessment for geodiversity

    NASA Astrophysics Data System (ADS)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and the key elements of natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world; however, recognition of the problem is still at an early stage, and little progress has been made concerning its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studies of the Earth's surface, where the main limitation is often the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a processing algorithm for assessing landform geodiversity using geomorphometric parameters, and to compare the final maps with those resulting from the thematic-layers method. The study area consists of two distinctive valleys characterized by diverse landscape units and a complex geological setting: Sucha Woda in the Polish part of the Tatra Mts. and Wrzosowka in the Sudetes Mts., both located in national park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index and Relative Heights. After the data integration and the necessary geoinformation analysis, the next step, with a certain degree of subjectivity, is score classification of the input maps using an expert system and geostatistical analysis. The crucial point in generating the final geodiversity maps by multi-criteria evaluation (MCE) with the GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by

  2. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    DOEpatents

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  3. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,dV + \int_{t_0}^{t_1}\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,dS\,dt$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $dV$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $dS$ is the surface element on $\partial M$. The `viscosity

  4. A Satellite Based Method for Wetland Inundation Mapping

    NASA Astrophysics Data System (ADS)

    Di Vittorio, C.; Georgakakos, A. P.

    2016-12-01

    Hydrologic models of wetlands enable hydrologists and water resources managers to appreciate the environmental and societal roles of wetlands and manage them in ways that preserve their integrity and sustain their valuable services. However, wetland model reliability and accuracy are often unsatisfactory due to the complexity of the underlying processes and the lack of adequate in-situ data. In this research, we demonstrate how MODIS satellite imagery can be used to characterize wetland flooding over time and to support the development of more reliable wetland models. We apply this method to the Sudd, a seasonal wetland in South Sudan that is part of the Nile River Basin. The database consists of 16 years of 8-day composite ground surface reflectance data with a 500 m spatial resolution downloaded from Earth Explorer. After masking poor quality pixels, monthly mean NDWI and NDVI values were extracted. Based on literature and personal accounts describing the Sudd as well as Google Earth imagery, a set of ground truth locations were identified for each land class and monthly distributions of the indices were derived. The indices were then combined in a unique way and statistics of the new distributions were used to classify land types present in the full area of interest. Subsequently, annual statistics were derived from the same indices and used to identify pixels that undergo flooding as well as the timing and duration of flooding for each year (2000-2015). An independent set of ground truth locations were selected for method validation. The combined indices demonstrate high land classification accuracy and outperform the individual indices as well as other existing land classification algorithms. The derived monthly inundation series agrees well with literature and anecdotal observations. This information is currently being used to develop wetland models as part of a comprehensive modeling system for the Nile River Basin. The new method is general and can be used

  5. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving continues to be an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial feature-extraction methods for medical image retrieval systems. The algorithms investigated rely on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. In the experiments, a database was built comprising hundreds of medical images of the brain, lung, sinus, and bone. The results show that queries based on statistics obtained from the GLCM are satisfactory; however, the Gabor wavelet was observed to be the most effective and accurate method.
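
    Typical GLCM feature extraction with scikit-image, as a concrete instance of the class of features the study compares; the exact retrieval pipeline and the GLRLM/Gabor variants are not reproduced. (The functions are spelled greycomatrix/greycoprops in older scikit-image releases.)

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(10)
img = (rng.random((128, 128)) * 255).astype(np.uint8)   # stand-in image

# co-occurrence matrices for two distances and two directions
glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = {prop: graycoprops(glcm, prop).ravel()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
for name, vals in features.items():
    print(name, np.round(vals, 4))
```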

  6. Optimal higher order modeling methodology based on method of moments and finite element method for electromagnetics

    NASA Astrophysics Data System (ADS)

    Klopf, Eve Marian

    General guidelines and quantitative recipes for adoptions of optimal higher order parameters for computational electromagnetics (CEM) modeling using the method of moments and the finite element method are established and validated, based on an exhaustive series of numerical experiments and comprehensive case studies on higher order hierarchical CEM models of metallic and dielectric scatterers. The modeling parameters considered are: electrical dimensions of elements (subdivisions) in the model (h -refinement), polynomial orders of basis and testing functions ( p-refinement), orders of Gauss-Legendre integration formulas (numbers of integration points -- integration accuracy), and geometrical orders of elements (orders of Lagrange-type curvature) in the model. The goal of the study, which is the first such study of higher order parameters in CEM, is to reduce the dilemmas and uncertainties associated with the great modeling flexibility of higher order elements, basis and testing functions, and integration procedures (this flexibility is the principal advantage but also the greatest shortcoming of the higher order CEM), and to ease and facilitate the decisions to be made on how to actually use them, by both CEM developers and practitioners. The ultimate goal is to close the large gap between the rising academic interest in higher order CEM, which evidently shows great numerical potential, and its actual usefulness and application to electromagnetics research and engineering applications.

  7. 3D modeling method for computer animate based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models are playing an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while, on the other hand, precision is not as critical a factor in that setting. In this paper, a new low-cost 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. In an ordinary weak-structured-light configuration, one or two reference planes are required, and the shadows on these planes must be tracked during scanning, which destroys the convenience of the method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded considerably. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints into a full description of the object and, after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method; errors range from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.

  8. Rough Precipitation Forecasts based on Analogue Method: an Operational System

    NASA Astrophysics Data System (ADS)

    Raffa, Mario; Mercogliano, Paola; Lacressonnière, Gwendoline; Guillaume, Bruno; Deandreis, Céline; Castanier, Pierre

    2017-04-01

    In the framework of the Climate-KIC partnership, the Wat-Ener-Cast (WEC) project, coordinated by ARIA Technologies, was funded with the goal of adapting, through tailored weather-related forecasts, water and energy operations to increased weather fluctuation and to climate change. The WEC products provide high-quality forecasts suited to risk-and-opportunity assessment dashboards for water and energy operational decisions, addressing the needs of sewage/water distribution operators, energy transport and distribution system operators, energy managers and wind energy producers. A common "energy-water" web platform, able to interface with the newest smart water-energy IT networks, has been developed. The main benefit of sharing resources through the WEC platform is the possibility to optimize the cost and the procedures of safety and maintenance teams in case of alerts and, finally, to reduce overflows. Among the different services implemented on the WEC platform, ARIA has developed a product to support sewage/water distribution operators, based on a graduated forecast information system (at 48 h/24 h/12 h horizons) for heavy precipitation. For each deadline a different type of operation is implemented: 1) at the 48-hour horizon, organisation of the on-call team; 2) at the 24-hour horizon, updating and confirming the on-call team; 3) at the 12-hour horizon, securing human resources and equipment (emptying storage basins, pipe manipulations …). More specifically, CMCC has provided a statistical downscaling method that yields a "rough" daily local precipitation forecast at 24 hours, especially when high precipitation values are expected. This technique is an adaptation of the analogue method based on ECMWF data (analysis and 24-hour forecasts). One of its main advantages is a lower computational burden and budget compared to running a Numerical Weather Prediction (NWP) model, even if, of course, it provides only this

  9. Method of Heating a Foam-Based Catalyst Bed

    NASA Technical Reports Server (NTRS)

    Fortini, Arthur J.; Williams, Brian E.; McNeal, Shawn R.

    2009-01-01

    A method of heating a foam-based catalyst bed has been developed using silicon carbide as the catalyst support due to its readily accessible, high surface area that is oxidation-resistant and is electrically conductive. The foam support may be resistively heated by passing an electric current through it. This allows the catalyst bed to be heated directly, requiring less power to reach the desired temperature more quickly. Designed for heterogeneous catalysis, the method can be used by the petrochemical, chemical processing, and power-generating industries, as well as automotive catalytic converters. Catalyst beds must be heated to a light-off temperature before they catalyze the desired reactions. This typically is done by heating the assembly that contains the catalyst bed, which results in much of the power being wasted and/or lost to the surrounding environment; the catalyst bed is heated indirectly, thus requiring excessive power. With the electrically heated catalyst bed, virtually all of the power is used to heat the support, and only a small fraction is lost to the surroundings. Although the light-off temperature of most catalysts is only a few hundred degrees Celsius, the electrically heated foam is able to achieve temperatures of 1,200 C. Lower temperatures are achievable by supplying less electrical power to the foam. Furthermore, because of the foam's open-cell structure, the catalyst can be applied either directly to the foam ligaments or in the form of a catalyst-containing washcoat. This innovation would be very useful for heterogeneous catalysis where elevated temperatures are needed to drive the reaction.

  10. Ensemble-based methods for forecasting census in hospital units

    PubMed Central

    2013-01-01

    Background: The ability to accurately forecast census counts in hospital departments has considerable implications for hospital resource allocation. In recent years several different methods have been proposed for forecasting census counts, however many of these approaches do not use available patient-specific information. Methods: In this paper we present an ensemble-based methodology for forecasting the census under a framework that simultaneously incorporates both (i) arrival trends over time and (ii) patient-specific baseline and time-varying information. The proposed model for predicting census has three components, namely: current census count, number of daily arrivals and number of daily departures. To model the number of daily arrivals, we use a seasonality adjusted Poisson Autoregressive (PAR) model where the parameter estimates are obtained via conditional maximum likelihood. The number of daily departures is predicted by modeling the probability of departure from the census using logistic regression models that are adjusted for the amount of time spent in the census and incorporate both patient-specific baseline and time-varying patient-specific covariate information. We illustrate our approach using neonatal intensive care unit (NICU) data collected at Women & Infants Hospital, Providence RI, which consists of 1001 consecutive NICU admissions between April 1st 2008 and March 31st 2009. Results: Our results demonstrate statistically significant improved prediction accuracy for 3, 5, and 7 day census forecasts and increased precision of our forecasting model compared to a forecasting approach that ignores patient-specific information. Conclusions: Forecasting models that utilize patient-specific baseline and time-varying information make the most of data typically available and have the capacity to substantially improve census forecasts. PMID:23721123
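
    A skeleton of the two model components described above, using statsmodels on synthetic data: a Poisson regression for daily arrivals with day-of-week terms plus a one-day lag, and a logistic model for the daily departure probability as a function of length of stay. The paper's full PAR estimation and census assembly are not reproduced.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
days = 365
dow = np.arange(days) % 7
arrivals = rng.poisson(4 + (dow < 5))            # busier on weekdays

# (i) arrivals: Poisson regression, day-of-week dummies + one-day lag
dummies = np.eye(7)[dow][:, 1:]                  # drop one level vs. intercept
X = sm.add_constant(np.column_stack([dummies[1:], arrivals[:-1]]))
arr_model = sm.GLM(arrivals[1:], X, family=sm.families.Poisson()).fit()

# (ii) departures: P(leave today) as a function of length of stay so far
los = rng.integers(1, 30, 2000).astype(float)
left = rng.binomial(1, 1 / (1 + np.exp(-(0.15 * los - 2))))
dep_model = sm.GLM(left, sm.add_constant(los),
                   family=sm.families.Binomial()).fit()

print("expected arrivals tomorrow:", arr_model.predict(X[-1:])[0])
print("P(depart | LOS = 10 days):",
      dep_model.predict(np.array([[1.0, 10.0]]))[0])
```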

  11. Knowledge Discovery from Climate Data using Graph-Based Methods

    NASA Astrophysics Data System (ADS)

    Steinhaeuser, K.

    2012-04-01

    Climate and Earth sciences have recently experienced a rapid transformation from a historically data-poor to a data-rich environment, thus bringing them into the realm of the Fourth Paradigm of scientific discovery - a term coined by the late Jim Gray (Hey et al. 2009), the other three being theory, experimentation and computer simulation. In particular, climate-related observations from remote sensors on satellites and weather radars, in situ sensors and sensor networks, as well as outputs of climate or Earth system models from large-scale simulations, provide terabytes of spatio-temporal data. These massive and information-rich datasets offer a significant opportunity for advancing climate science and our understanding of the global climate system, yet current analysis techniques are not able to fully realize their potential benefits. We describe a class of computational approaches, specifically from the data mining and machine learning domains, which may be novel to the climate science domain and can assist in the analysis process. Computer scientists have developed spatial and spatio-temporal analysis techniques for a number of years now, and many of them may be applicable and/or adaptable to problems in climate science. We describe a large-scale, NSF-funded project aimed at addressing climate science questions using computational analysis methods; team members include computer scientists, statisticians, and climate scientists from various backgrounds. One of the major thrusts is in the development of graph-based methods, and several illustrative examples of recent work in this area will be presented.
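
    As a generic, minimal example of the graph-based flavor of analysis described here (not the project's specific algorithms), grid-point time series can be turned into a correlation network whose structure is then mined:

      import numpy as np

      def climate_network(series, threshold=0.7):
          """Nodes are grid points; edges link points whose time series
          correlate strongly. Returns the adjacency matrix and node degrees."""
          corr = np.corrcoef(series)            # (n_points, n_points)
          adj = np.abs(corr) >= threshold
          np.fill_diagonal(adj, False)          # no self-loops
          return adj, adj.sum(axis=1)

      rng = np.random.default_rng(0)
      series = rng.standard_normal((50, 120))   # 50 grid points, 120 months
      adj, degree = climate_network(series)
      print("most connected grid point:", int(degree.argmax()))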

  12. Hazard identification by methods of animal-based toxicology.

    PubMed

    Barlow, S M; Greig, J B; Bridges, J W; Carere, A; Carpy, A J M; Galli, C L; Kleiner, J; Knudsen, I; Koëter, H B W M; Levy, L S; Madsen, C; Mayer, S; Narbonne, J-F; Pfannkuch, F; Prodanchuk, M G; Smith, M R; Steinberg, P

    2002-01-01

    This paper is one of several prepared under the project "Food Safety In Europe: Risk Assessment of Chemicals in Food and Diet" (FOSIE), a European Commission Concerted Action Programme, organised by the International Life Sciences Institute, Europe (ILSI). The aim of the FOSIE project is to review the current state of the science of risk assessment of chemicals in food and diet, by consideration of the four stages of risk assessment, that is, hazard identification, hazard characterisation, exposure assessment and risk characterisation. The contribution of animal-based methods in toxicology to hazard identification of chemicals in food and diet is discussed. The importance of first applying existing technical and chemical knowledge to the design of safety testing programs for food chemicals is emphasised. There is consideration of the presently available and commonly used toxicity testing approaches and methodologies, including acute and repeated dose toxicity, reproductive and developmental toxicity, neurotoxicity, genotoxicity, carcinogenicity, immunotoxicity and food allergy. They are considered from the perspective of whether they are appropriate for assessing food chemicals and whether they are adequate to detect currently known or anticipated hazards from food. Gaps in knowledge and future research needs are identified; research on these could lead to improvements in the methods of hazard identification for food chemicals. The potential impact of some emerging techniques and toxicological issues on hazard identification for food chemicals, such as new measurement techniques, the use of transgenic animals, assessment of hormone balance and the possibilities for conducting studies in which common human diseases have been modelled, is also considered.

  13. Chemical Sensors Based On Oxygen Detection By Optical Methods

    NASA Astrophysics Data System (ADS)

    Parker, Jennifer W.; Cox, M. E.; Dunn, Bruce S.

    1986-08-01

    Fluorescence quenching is shown to be a viable method of measuring oxygen concentration. Two oxygen/optical transducers based on fluorescence quenching have been developed and characterized: one is hydrophobic and the other is hydrophilic. The development of both transducers provides great flexibility in the application of fluorescence to oxygen measurement. One transducer is produced by entrapping a fluorophor, 9,10-diphenyl anthracene, in poly(dimethyl siloxane) to yield a homogeneous composite polymer matrix. The resulting matrix is hydrophobic. This transducer is extremely sensitive to PO2 as a result of oxygen quenching the fluorescence of 9,10-diphenyl anthracene. This quenching is utilized in the novel method employed to measure the transport properties of oxygen within the matrix. Results show large values for the diffusion coefficient at 25°C, D = 3.5 × 10⁻⁵ cm²/s. The fluorescence intensity varies inversely with PO2. The second oxygen transducer is fabricated by entrapping 9,10-diphenyl anthracene in poly(hydroxy ethyl methacrylate). Free radical, room temperature polymerization is employed. This transducer is hydrophilic, and contains 37% water. The transport properties of oxygen within this transducer are compared with those of the hydrophobic transducer. The feasibility of generalizing the oxygen transducers to a wider class of chemical sensors through coupling to other chemistries is proposed. An example of such coupling is given in a glucose/oxygen transducer. The glucose transducer is produced by entrapping an enzyme, glucose oxidase, in the composite matrix of the hydrophilic oxygen transducer. Glucose oxidase catalyzes a reaction between glucose and oxygen, thereby lowering the local oxygen concentration. This transducer yields a glucose modified optical oxygen signal. The operation of this transducer and preliminary results of its characterization are presented.
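
    The transduction principle is the standard Stern-Volmer relation for collisional quenching; a minimal sketch, with an assumed calibration constant rather than one reported for these transducers:

      def oxygen_from_intensity(i0, i, ksv):
          """Stern-Volmer: I0/I = 1 + Ksv*pO2, so pO2 = (I0/I - 1)/Ksv.
          ksv is an assumed calibration constant (per torr)."""
          return (i0 / i - 1.0) / ksv

      # Unquenched intensity 1000, measured 640 -> ~208 torr with Ksv = 0.0027
      print(oxygen_from_intensity(i0=1000.0, i=640.0, ksv=0.0027))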

  14. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    Research and case studies have shown that convection plays a significant role in large-scale environmental circulations. Convective momentum fluxes (CMFs) have been studied for many years using in-situ and aircraft measurements, along with numerical simulations. However, despite these successes, little work has been conducted on methods that use satellite remote sensing as a tool to diagnose these fluxes. Satellite data can provide continuous analysis across regions devoid of ground-based remote sensing. Therefore, the project's overall goal is to develop a synergistic approach for retrieving CMFs using a collection of instruments including GOES, TRMM, CloudSat, MODIS, and QuikScat. However, this particular study will focus on the work using TRMM and QuikScat, and the methodology of using CloudSat. Sound research has already been conducted for computing CMFs using the GOES instruments (Jewett and Mecikalski 2009, submitted to J. Geophys. Res.). Using satellite-derived winds, namely mesoscale atmospheric motion vectors (MAMVs) as described by Bedka and Mecikalski (2005), one can obtain the actual winds occurring within a convective environment as perturbed by convection. Surface outflow boundaries and upper-tropospheric anvil outflow will produce “perturbation” winds on smaller, convective scales. Combined with estimated vertical motion retrieved using geostationary infrared imagery, CMFs were estimated using MAMVs, with an average profile being calculated across a convective regime or a domain covered by active storms. This study involves estimating draft-tilt from TRMM PR radar reflectivity and sub-cloud base fluxes using QuikScat data. The “slope” of falling hydrometeors (relative to Earth) in these data is related to the u', v' and w' winds within convection. The main up- and down-drafts within convection are described by precipitation patterns (Mecikalski 2003). Vertical motion estimates are made using model results for deep convection
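
    At its core the retrieved quantity is the eddy momentum flux; a textbook sketch of that final computation from perturbation winds (the satellite retrieval chain itself is far more involved):

      import numpy as np

      def convective_momentum_flux(u_prime, w_prime, rho=1.0):
          """Eddy momentum flux rho*<u'w'> from perturbation winds, e.g.
          MAMV perturbations and retrieved vertical motion; rho in kg/m^3."""
          u = np.asarray(u_prime, dtype=float)
          w = np.asarray(w_prime, dtype=float)
          return rho * np.mean(u * w)

      print(convective_momentum_flux([2.0, -1.5, 3.2], [0.8, -0.4, 1.1]))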

  15. Using Corporate-Based Methods To Assess Technical Communication Programs.

    ERIC Educational Resources Information Center

    Faber, Brenton; Bekins, Linn; Karis, Bill

    2002-01-01

    Investigates methods of program assessment used by corporate learning sites and profiles value-added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…

  17. Evaluation of medical students of teacher-based and student-based teaching methods in Infectious diseases course

    PubMed Central

    Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F

    2015-01-01

    Introduction: In recent years, medical education has changed dramatically and many medical schools around the world have been trying to expand modern training methods. The purpose of this research was to assess medical students' evaluations of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan University of Medical Sciences. Methods: In this interventional study, a total of 52 medical students who took the Infectious diseases course were included. About 50% of this course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). The satisfaction of students regarding these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed using SPSS 19 and the paired t-test. Results: Student satisfaction with the student-based teaching method (problem-based learning) was more positive than with the teacher-based teaching method (lecture). The mean score of students in the teacher-based teaching method was 12.03 (SD=4.08) and in the student-based teaching method it was 15.50 (SD=4.26), a significant difference (p<0.001). Conclusion: The use of the student-based teaching method (problem-based learning) rather than the teacher-based teaching method (lecture) to present the Infectious diseases course led to greater student satisfaction and provided additional learning opportunities.
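
    The comparison itself is a paired t-test on per-student scores; a minimal equivalent of the reported SPSS analysis, with invented numbers in place of the study's data:

      from scipy import stats

      # Invented per-student scores for illustration, not the study's data.
      lecture = [10, 12, 9, 14, 11, 13]     # teacher-based method
      pbl = [14, 15, 13, 17, 14, 16]        # student-based method
      t, p = stats.ttest_rel(pbl, lecture)  # paired t-test
      print(f"t = {t:.2f}, p = {p:.4f}")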

  18. Evaluation of medical students of teacher-based and student-based teaching methods in Infectious diseases course.

    PubMed

    Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F

    2015-01-01

    Introduction: In recent years, medical education has changed dramatically and many medical schools around the world have been trying to expand modern training methods. The purpose of this research was to assess medical students' evaluations of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan University of Medical Sciences. Methods: In this interventional study, a total of 52 medical students who took the Infectious diseases course were included. About 50% of this course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). The satisfaction of students regarding these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed using SPSS 19 and the paired t-test. Results: Student satisfaction with the student-based teaching method (problem-based learning) was more positive than with the teacher-based teaching method (lecture). The mean score of students in the teacher-based teaching method was 12.03 (SD=4.08) and in the student-based teaching method it was 15.50 (SD=4.26), a significant difference (p<0.001). Conclusion: The use of the student-based teaching method (problem-based learning) rather than the teacher-based teaching method (lecture) to present the Infectious diseases course led to greater student satisfaction and provided additional learning opportunities.

  19. Gene-based segregation method for identifying rare variants in family-based sequencing studies.

    PubMed

    Qiao, Dandi; Lange, Christoph; Laird, Nan M; Won, Sungho; Hersh, Craig P; Morrow, Jarrett; Hobbs, Brian D; Lutz, Sharon M; Ruczinski, Ingo; Beaty, Terri H; Silverman, Edwin K; Cho, Michael H

    2017-02-13

    Whole-exome sequencing using family data has identified rare coding variants in Mendelian diseases or complex diseases with Mendelian subtypes, using filters based on variant novelty, functionality, and segregation with the phenotype within families. However, formal statistical approaches are limited. We propose a gene-based segregation test (GESE) that quantifies the uncertainty of the filtering approach. It is constructed using the probability of segregation events under the null hypothesis of Mendelian transmission. This test takes into account different degrees of relatedness in families, the number of functional rare variants in the gene, and their minor allele frequencies in the corresponding population. In addition, a weighted version of this test allows incorporating additional subject phenotypes to improve statistical power. We show via simulations that the GESE and weighted GESE tests maintain appropriate type I error rate, and have greater power than several commonly used region-based methods. We apply our method to whole-exome sequencing data from 49 extended pedigrees with severe, early-onset chronic obstructive pulmonary disease (COPD) in the Boston Early-Onset COPD study (BEOCOPD) and identify several promising candidate genes. Our proposed methods show great potential for identifying rare coding variants of large effect and high penetrance for family-based sequencing data. The proposed tests are implemented in an R package that is available on CRAN (https://cran.r-project.org/web/packages/GESE/).
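
    The core intuition reduces to a one-liner: under the null of Mendelian transmission each informative meiosis passes a rare heterozygous variant with probability 1/2, so perfect co-segregation is exponentially unlikely in large pedigrees. A toy sketch only; the published GESE test additionally accounts for family structure, the number of functional variants, and population allele frequencies:

      def cosegregation_p(n_meioses):
          """P(variant co-segregates with disease through n informative
          meioses) under the Mendelian-transmission null."""
          return 0.5 ** n_meioses

      print(cosegregation_p(6))   # 0.015625: unlikely to occur by chance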

  20. A Monitoring Method Based on FBG for Concrete Corrosion Cracking

    PubMed Central

    Mao, Jianghong; Xu, Fangyuan; Gao, Qian; Liu, Shenglin; Jin, Weiliang; Xu, Yidong

    2016-01-01

    Corrosion cracking of reinforced concrete caused by chloride salt is one of the main determinants of structure durability. Monitoring the entire process of concrete corrosion cracking is critical for assessing the remaining life of the structure and determining if maintenance is needed. Fiber Bragg Grating (FBG) sensing is a well-developed photoelectric monitoring technology that has been used on many projects. FBG can detect the quasi-distribution of strain and temperature in corrosive environments, and thus it is suitable for monitoring reinforced concrete cracking. Based on the mechanical principle that corrosion expansion is responsible for reinforced concrete cracking, a package design for FBG-based reinforced concrete cracking sensors was proposed and investigated in this study. The corresponding relationship between the grating wavelength and strain was calibrated by an equal strength beam test. The effectiveness of the proposed method was verified by an electrically accelerated corrosion experiment. The fiber grating sensing technology was able to track the corrosion expansion and corrosion cracking in real time and provided data to inform decision-making for the maintenance and management of the engineering structure. PMID:27428972
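
    The calibrated wavelength-strain relationship has the standard FBG form; a minimal sketch with generic constants (a typical photoelastic coefficient rather than the paper's equal strength beam calibration, and no temperature compensation):

      def fbg_strain(wavelength_nm, base_wavelength_nm=1550.0, pe=0.22):
          """Convert an FBG wavelength shift to strain via
          dL/L = (1 - pe) * strain, with a typical photoelastic
          coefficient pe ~ 0.22 (generic, not the paper's calibration)."""
          shift = wavelength_nm - base_wavelength_nm
          return shift / (base_wavelength_nm * (1.0 - pe))

      print(f"{fbg_strain(1550.6) * 1e6:.0f} microstrain")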

  1. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
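
    The routing decision in such a system boils down to comparing the new document vector with each cluster's representative vector; a minimal sketch using cosine similarity, where the threshold and vector scheme are assumptions rather than the patent's specifics:

      import numpy as np

      def route_document(doc_vec, centroids, threshold=0.3):
          """Return the index of the most similar cluster, or -1 to signal
          that a new cluster agent should be created."""
          doc_vec = np.asarray(doc_vec, dtype=float)
          sims = [doc_vec @ c / (np.linalg.norm(doc_vec) * np.linalg.norm(c))
                  for c in (np.asarray(c, dtype=float) for c in centroids)]
          best = int(np.argmax(sims))
          return best if sims[best] >= threshold else -1

      print(route_document([0.9, 0.1, 0.0],
                           [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]))   # -> 0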

  2. An efficient method for DEM-based overland flow routing

    NASA Astrophysics Data System (ADS)

    Huang, Pin-Chun; Lee, Kwan Tun

    2013-05-01

    The digital elevation model (DEM) is frequently used to represent watershed topographic features based on a raster or a vector data format. It has been widely linked with flow routing equations for watershed runoff simulation. In this study, a recursive formulation was encoded into the conventional kinematic- and diffusion-wave routing algorithms to permit a larger time increment, even when the Courant–Friedrichs–Lewy condition is violated. To meet the requirement of the recursive formulation, a novel routing sequence was developed to determine the cell-to-cell computational procedure for the DEM database. The routing sequence can be set either according to the grid elevation in descending order for the kinematic-wave routing or according to the water stage of the grid in descending order for the diffusion-wave routing. The recursive formulation for 1D runoff routing was first applied to a conceptual overland plane to demonstrate the precision of the formulation using an analytical solution for verification. The proposed novel routing sequence with the recursive formulation was then applied to two mountain watersheds for 2D runoff simulations. The results showed that the efficiency of the proposed method was significantly superior to that of the conventional algorithm, especially when applied to a steep watershed.
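
    A minimal sketch of the elevation-ordered routing-sequence idea on a one-dimensional cascade of DEM cells (kinematic wave with Manning friction; the paper's recursive large-time-step formulation is not reproduced here):

      import numpy as np

      def route_cascade(elev, depth, dt, dx, n_manning=0.05):
          """Cells 0..N-1 form a flow path, cell i draining to cell i+1.
          Processing cells from highest to lowest means each cell's inflow
          has already been updated when it is visited."""
          depth = depth.astype(float).copy()
          for i in np.argsort(elev)[::-1]:           # descending elevation
              slope = (elev[i] - elev[i + 1]) / dx if i + 1 < len(elev) else 0.01
              slope = max(slope, 1e-6)
              q = depth[i] ** (5.0 / 3.0) * slope ** 0.5 / n_manning  # unit width
              moved = min(q * dt / dx, depth[i])     # cannot move more than stored
              depth[i] -= moved
              if i + 1 < len(elev):
                  depth[i + 1] += moved
          return depth

      elev = np.array([10.0, 8.0, 5.0, 1.0])
      print(route_cascade(elev, np.array([0.1, 0.1, 0.1, 0.1]), dt=10.0, dx=30.0))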

  3. A Monitoring Method Based on FBG for Concrete Corrosion Cracking.

    PubMed

    Mao, Jianghong; Xu, Fangyuan; Gao, Qian; Liu, Shenglin; Jin, Weiliang; Xu, Yidong

    2016-07-14

    Corrosion cracking of reinforced concrete caused by chloride salt is one of the main determinants of structure durability. Monitoring the entire process of concrete corrosion cracking is critical for assessing the remaining life of the structure and determining if maintenance is needed. Fiber Bragg Grating (FBG) sensing is a well-developed photoelectric monitoring technology that has been used on many projects. FBG can detect the quasi-distribution of strain and temperature in corrosive environments, and thus it is suitable for monitoring reinforced concrete cracking. Based on the mechanical principle that corrosion expansion is responsible for reinforced concrete cracking, a package design for FBG-based reinforced concrete cracking sensors was proposed and investigated in this study. The corresponding relationship between the grating wavelength and strain was calibrated by an equal strength beam test. The effectiveness of the proposed method was verified by an electrically accelerated corrosion experiment. The fiber grating sensing technology was able to track the corrosion expansion and corrosion cracking in real time and provided data to inform decision-making for the maintenance and management of the engineering structure.

  4. Correction of placement error in EBL using model based method

    NASA Astrophysics Data System (ADS)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  5. Jet-based methods to print living cells.

    PubMed

    Ringeisen, Bradley R; Othon, Christina M; Barron, Jason A; Young, Daniel; Spargo, Barry J

    2006-09-01

    Cell printing has been popularized over the past few years as a revolutionary advance in tissue engineering that could potentially enable heterogeneous 3-D scaffolds to be built cell-by-cell. This review article summarizes the state-of-the-art cell printing techniques that utilize fluid jetting phenomena to deposit 2- and 3-D patterns of living eukaryotic cells. There are four distinct categories of jet-based approaches to printing cells. Laser guidance direct write (LG DW) was the first reported technique to print viable cells by forming patterns of embryonic-chick spinal-cord cells on a glass slide (1999). Shortly after this, modified laser-induced forward transfer techniques (LIFT) and modified ink jet printers were also used to print viable cells, followed by the most recent demonstration using an electrohydrodynamic jetting (EHDJ) method. The low cost of some of these printing technologies has spurred debate as to whether they could be used on a large scale to manufacture tissue and possibly even whole organs. This review summarizes the published results of these cell printers (cell viability, retained genotype and phenotype), and also includes a physical description of the various jetting processes with a discussion of the stresses and forces that may be encountered by cells during printing. We conclude the review by comparing and contrasting the different jet-based techniques, while providing a map for future experiments that could lead to significant advances in the field of tissue engineering.

  6. Spectral curvature correction method based on inverse distance weighted interpolation

    NASA Astrophysics Data System (ADS)

    Jing, Juanjuan; Zhou, Jinsong; Li, Yacan; Feng, Lei

    2016-10-01

    Spectral curvature (the smile effect) exists universally in dispersive imaging spectrometers. Since most image processing systems assume that all spatial pixels share the same wavelength, spectral curvature destroys the consistency of the radiometric response in the spatial dimension, so it is necessary to correct the spectral curvature based on the spectral calibration data of the imaging spectrometer. Interpolation is widely used to resample the measured spectra at the non-offset wavelength, but it is not versatile because its accuracy varies as the spectral resolution changes. In this paper, we introduce the inverse distance weighted (IDW) method for spectrum resampling. First, calculate the Euclidean distance between the non-offset wavelength and the points near it; the number of points can be two, three, four, or five, as many as you define. Then use the Euclidean distances to calculate the weight values of these points. Finally, calculate the radiance at the non-offset wavelength using the weight values and the corresponding radiances. The method proved effective on practical data acquired by the instrument, and it is versatile, simple, and fast.
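
    A minimal sketch of the resampling step described above, with the number of neighbors and the distance power left as user choices:

      import numpy as np

      def idw_resample(wavelengths, radiances, target_wl, k=3, power=1.0):
          """Resample a measured spectrum at a non-offset (smile-corrected)
          wavelength by inverse-distance weighting of its k nearest samples."""
          d = np.abs(np.asarray(wavelengths, dtype=float) - target_wl)
          nearest = np.argsort(d)[:k]
          if d[nearest[0]] == 0.0:              # exact hit, no interpolation
              return float(np.asarray(radiances)[nearest[0]])
          w = 1.0 / d[nearest] ** power         # inverse-distance weights
          return float(np.sum(w * np.asarray(radiances)[nearest]) / np.sum(w))

      wl = [400.0, 410.0, 420.0, 430.0]
      rad = [0.12, 0.15, 0.11, 0.09]
      print(idw_resample(wl, rad, 415.0))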

  7. Residual Stress Analysis Based on Acoustic and Optical Methods

    PubMed Central

    Yoshida, Sanichiro; Sasaki, Tomohiro; Usui, Masaru; Sakamoto, Shuichi; Gurney, David; Park, Ik-Keun

    2016-01-01

    Co-application of acoustoelasticity and optical interferometry to residual stress analysis is discussed. The underlying idea is to combine the advantages of both methods. Acoustoelasticity is capable of evaluating a residual stress absolutely but it is a single point measurement. Optical interferometry is able to measure deformation yielding two-dimensional, full-field data, but it is not suitable for absolute evaluation of residual stresses. By theoretically relating the deformation data to residual stresses, and calibrating it with absolute residual stress evaluated at a reference point, it is possible to measure residual stresses quantitatively, nondestructively and two-dimensionally. The feasibility of the idea has been tested with a butt-jointed dissimilar plate specimen. A steel plate 18.5 mm wide, 50 mm long and 3.37 mm thick is braze-jointed to a cemented carbide plate of the same dimension along the 18.5 mm-side. Acoustoelasticity evaluates the elastic modulus at reference points via acoustic velocity measurement. A tensile load is applied to the specimen at a constant pulling rate in a stress range substantially lower than the yield stress. Optical interferometry measures the resulting acceleration field. Based on the theory of harmonic oscillation, the acceleration field is correlated to compressive and tensile residual stresses qualitatively. The acoustic and optical results show reasonable agreement in the compressive and tensile residual stresses, indicating the feasibility of the idea. PMID:28787912

  8. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: different ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining the color histogram. In this paper the performance of CBIR based on different global and local color histograms is surveyed in three different color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, Quadratic, and Histogram intersection, to choose an appropriate method for future research.
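
    A minimal sketch of one surveyed combination, a global HSV histogram compared with histogram intersection (the binning choices here are assumptions for illustration):

      import numpy as np

      def hsv_histogram(pixels_hsv, bins=(8, 4, 4)):
          """Global color histogram in HSV space, normalized to sum to 1.
          pixels_hsv: (N, 3) array with H, S, V scaled to [0, 1]."""
          hist, _ = np.histogramdd(pixels_hsv, bins=bins, range=[(0, 1)] * 3)
          return hist.ravel() / hist.sum()

      def histogram_intersection(h1, h2):
          """Similarity in [0, 1]; one of the three measures surveyed."""
          return np.minimum(h1, h2).sum()

      rng = np.random.default_rng(1)
      img1, img2 = rng.random((500, 3)), rng.random((500, 3))
      print(histogram_intersection(hsv_histogram(img1), hsv_histogram(img2)))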

  9. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global, which misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: different ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining the color histogram. In this paper the performance of CBIR based on different global and local color histograms is surveyed in three different color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, Quadratic, and Histogram intersection, to choose an appropriate method for future research.

  10. Landslide Monitoring in Three Gorges Area by Joint Use of Phase Based and Amplitude Based Methods

    NASA Astrophysics Data System (ADS)

    Shi, Xuguo; Zhang, Lu; Liao, Mingsheng; Balz, Timo

    2015-05-01

    Landslides are serious geohazards in the Three Gorges area, China, especially after the impoundment of the Three Gorges Reservoir. It is very urgent to monitor landslides for early warning and disaster prevention purposes. In this paper, phase-based methods such as traditional differential InSAR and the small baseline subset method were used to investigate slow-moving landslides. Point-like targets offset tracking (PTOT) was used to investigate fast-moving landslides. Furthermore, in order to describe the displacement on a landslide, two TerraSAR-X datasets obtained from different descending orbits were combined to obtain the three-dimensional displacements on the Shuping landslide with the PTOT measurements in the azimuth and range directions.

  11. Kinetic theory based new upwind methods for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, S. M.

    1986-01-01

    Two new upwind methods called the Kinetic Numerical Method (KNM) and the Kinetic Flux Vector Splitting (KFVS) method for the solution of the Euler equations have been presented. Both of these methods can be regarded as suitable moments of an upwind scheme for the solution of the Boltzmann equation provided the distribution function is Maxwellian. This moment-method strategy leads to a unification of the Riemann approach and the pseudo-particle approach used earlier in the development of upwind methods for the Euler equations. A very important aspect of the moment-method strategy is that the new upwind methods satisfy the entropy condition because of the Boltzmann H-Theorem and suggest a possible way of extending the Total Variation Diminishing (TVD) principle within the framework of the H-Theorem. The ability of these methods to obtain accurate, wiggle-free solutions is demonstrated by applying them to two test problems.

  12. Chord-based versus voxel-based methods of electron transport in the skeletal tissues

    SciTech Connect

    Shah, Amish P.; Jokisch, Derek W.; Rajon, Didier A.; Watchman, Christopher J.; Patton, Phillip W.; Bolch, Wesley E.

    2005-10-15

    Anatomic models needed for internal dose assessment have traditionally been developed using mathematical surface equations to define organ boundaries, shapes, and their positions within the body. Many researchers, however, are now advocating the use of tomographic models created from segmented patient computed tomography (CT) or magnetic resonance (MR) scans. In the skeleton, however, the tissue structures of the bone trabeculae, marrow cavities, and endosteal layer are exceedingly small and of complex shape, and thus do not lend themselves easily to either stylistic representations or in-vivo CT imaging. Historically, the problem of modeling the skeletal tissues has been addressed through the development of chord-based methods of radiation particle transport, as given by studies at the University of Leeds (Leeds, UK) using a 44-year-old male subject. We have proposed an alternative approach to skeletal dosimetry in which excised sections of marrow-intact cadaver spongiosa are imaged directly via microCT scanning. The cadaver selected for initial investigation of this technique was a 66-year-old male subject of nominal body mass index (22.7 kg m⁻²). The objectives of the present study were to compare chord-based versus voxel-based methods of skeletal dosimetry using data from the UF 66-year-old male subject. Good agreement between chord-based and voxel-based transport was noted for marrow irradiation by either bone surface or bone volume sources up to 500-1000 keV (depending upon the skeletal site). In contrast, chord-based models of electron transport yielded consistently lower values of the self-absorbed fraction to marrow tissues than seen under voxel-based transport at energies above 100 keV, a feature directly attributed to the inability of chord-based models to account for nonlinear electron trajectories. Significant differences were also noted in the dosimetry of the endosteal layer (for all source tissues), with chord-based transport predicting a higher fraction

  13. Metal Temperature Evaluation Method Based on Microstructure Change of Ni Based Superalloy

    NASA Astrophysics Data System (ADS)

    Okada, Ikuo; Taneike, Masaki; Oguma, Hidetaka

    It is known that, among numerous factors, turbine inlet gas temperature (TIT) has the greatest influence on the thermal efficiency of an industrial gas turbine. In order to achieve higher thermal efficiency, TIT has already reached as high as 1500°C, which has been realized owing to the improvement and development of cooling configurations and materials for hot parts. On the other hand, to operate gas turbines soundly, it is necessary to evaluate the metal temperature of hot parts precisely. Therefore, metal temperature evaluation methods for Ni-based superalloys were investigated and developed based on the coarsening of γ' phase and carbide particles.
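
    Such methods typically invert an LSW-type (Ostwald ripening) coarsening law for temperature; a minimal sketch in which the rate constants are placeholders, since real use requires alloy-specific calibration of the kind described above:

      import numpy as np

      def temperature_from_coarsening(r, r0, hours, k0=1.0e14, q=2.7e5):
          """Invert r^3 - r0^3 = k0 * exp(-Q / (R*T)) * t for T.
          r, r0: gamma-prime radius after/before service (nm);
          k0 (nm^3/h) and q (J/mol) are placeholder calibration constants."""
          R = 8.314                              # J/(mol K)
          rate = (r ** 3 - r0 ** 3) / hours      # observed coarsening rate
          return q / (R * np.log(k0 / rate))     # temperature in K

      t_k = temperature_from_coarsening(r=120.0, r0=40.0, hours=8000.0)
      print(f"estimated metal temperature ~{t_k - 273.15:.0f} C")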

  14. A new method for base flow separation based on heads illustrated in the Pang catchment (UK)

    NASA Astrophysics Data System (ADS)

    Peters, E.; van Lanen, H. A. J.

    2003-04-01

    A new separation filter based on observed groundwater heads was developed to separate streamflow into two components: base flow and direct runoff. Base flow was estimated using heads and direct runoff was estimated using excess precipitation. Together they were calibrated on the observed total streamflow. Instead of one best solution, a range of satisfactory solutions derived from a Monte Carlo simulation was accepted. For the calibration, data from two nested gauging stations in the Pang catchment (UK) were used. The streamflow at the upstream station is strongly dominated by base flow from the main aquifer. The downstream station also includes a significant flow component from a fast responding region with low permeability deposits. The results of this separation filter were compared to the results from three other filters, namely an arithmetic filter (BFI), the Boughton two-parameter digital filter and another filter based on heads developed by Kliner and Knĕžek. For the upstream station three of the filters gave reasonable, consistent estimates; only the estimates from the Kliner and Knĕžek filter, which are minimum estimates, were lower. For the downstream station, however, the base flow estimates differ. The base flow estimate from the method proposed in this paper is considerably lower than for the BFI and Boughton filters. These filters do not distinguish between interflow from the low permeability deposits in the downstream part of the catchment and the much more delayed outflow from the main aquifer, and thus the base flow estimate was only slightly smaller for the downstream station than for the upstream station. The head-based filter proposed in this paper estimates only the base flow component derived from the main aquifer. The main difference occurs during winter. Apparently in this period of the year a large interflow component occurs that cannot be related directly to precipitation but is not derived from the main aquifer either.
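
    For comparison, streamflow-only filters of the kind cited above are one-line recursions; a minimal sketch of a one-parameter recursive digital filter, given only as an illustration of that family and not as the head-based method of this paper:

      import numpy as np

      def baseflow_filter(q, alpha=0.925):
          """One-parameter (Lyne-Hollick type) digital filter: split total
          streamflow q into quick flow and base flow, constraining
          0 <= quick flow <= q at every step."""
          q = np.asarray(q, dtype=float)
          quick = np.zeros_like(q)
          for t in range(1, len(q)):
              f = alpha * quick[t - 1] + 0.5 * (1 + alpha) * (q[t] - q[t - 1])
              quick[t] = min(max(f, 0.0), q[t])
          return q - quick                       # base flow component

      flow = [5, 5, 20, 35, 18, 10, 7, 6, 5.5, 5.2]
      print(np.round(baseflow_filter(flow), 2))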

  15. Project-Based Learning in Undergraduate Environmental Chemistry Laboratory: Using EPA Methods to Guide Student Method Development for Pesticide Quantitation

    ERIC Educational Resources Information Center

    Davis, Eric J.; Pauls, Steve; Dick, Jonathan

    2017-01-01

    Presented is a project-based learning (PBL) laboratory approach for an upper-division environmental chemistry or quantitative analysis course. In this work, a combined laboratory class of 11 environmental chemistry students developed a method based on published EPA methods for the extraction of dichlorodiphenyltrichloroethane (DDT) and its…

  16. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications. These methods include spectroscopic, chromatographic, colorimetric, and electrochemical. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described. Specific applications from the literature are also presented.

  17. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  19. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  1. Alternative modeling methods for plasma-based Rf ion sources

    SciTech Connect

    Veitzer, Seth A. Kundrapu, Madhusudhan Stoltz, Peter H. Beckwith, Kristian R. C.

    2016-02-15

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H− source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H− ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two

  2. Alternative modeling methods for plasma-based Rf ion sources

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models

  3. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD

  4. Online prediction model based on the SVD-KPCA method.

    PubMed

    Elaissi, Ilyes; Jaffel, Ines; Taouali, Okba; Messaoud, Hassani

    2013-01-01

    This paper proposes a new method for the online identification of a nonlinear system modelled on a Reproducing Kernel Hilbert Space (RKHS). The proposed SVD-KPCA method uses the Singular Value Decomposition (SVD) technique to update the principal components. Then we use Reduced Kernel Principal Component Analysis (RKPCA) to approximate the principal components that represent the observations selected by the KPCA method.
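
    For reference, the batch KPCA step that the online SVD update accelerates can be written compactly; a minimal sketch with an RBF kernel (the paper's SVD/RKPCA updating itself is not reproduced):

      import numpy as np

      def kernel_pca(X, n_components=2, gamma=0.5):
          """Batch kernel PCA: center the RBF Gram matrix and keep the
          leading eigenvectors, scaled to give projected coordinates."""
          sq = np.sum(X ** 2, axis=1)
          K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
          n = K.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n
          Kc = J @ K @ J                              # double-centering
          vals, vecs = np.linalg.eigh(Kc)
          idx = np.argsort(vals)[::-1][:n_components]
          return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

      X = np.random.default_rng(2).standard_normal((40, 3))
      print(kernel_pca(X).shape)   # (40, 2) projected coordinates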

  5. An improved segmentation-based HMM learning method for Condition-based Maintenance

    NASA Astrophysics Data System (ADS)

    Liu, T.; Lemeire, J.; Cartella, F.; Meganck, S.

    2012-05-01

    In the domain of condition-based maintenance (CBM), persistence of machine states is a valid assumption. Based on this assumption, we present an improved Hidden Markov Model (HMM) learning algorithm for the assessment of equipment states. By a good estimation of initial parameters, more accurate learning can be achieved than by regular HMM learning methods, which start with randomly chosen initial parameters. It is also better at avoiding getting trapped in local maxima. The data is segmented with a change-point analysis method which uses a combination of cumulative sum charts (CUSUM) and bootstrapping techniques. The method determines a confidence level that a state change happens. After the data is segmented, in order to label and combine the segments corresponding to the same states, a clustering technique is used based on a low-pass filter or root mean square (RMS) values of the features. The segments with their labelled hidden states are taken as 'evidence' to estimate the parameters of an HMM. Then, the estimated parameters serve as initial parameters for the traditional Baum-Welch (BW) learning algorithm, which is used to improve the parameters and train the model. Experiments on simulated and real data demonstrate that both performance and convergence speed are improved.
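
    The segmentation step can be illustrated compactly: a CUSUM amplitude compared against shuffled copies of the series yields the confidence that a state change occurred. A minimal sketch of that change-point test (the clustering and Baum-Welch stages are omitted):

      import numpy as np

      def change_confidence(x, n_boot=1000, seed=0):
          """CUSUM-with-bootstrap change detection: compare the CUSUM
          amplitude of the observed series with amplitudes of shuffled
          copies; returns the confidence that a state change occurred."""
          x = np.asarray(x, dtype=float)

          def amplitude(s):
              c = np.cumsum(s - s.mean())
              return c.max() - c.min()

          a0 = amplitude(x)
          rng = np.random.default_rng(seed)
          boot = [amplitude(rng.permutation(x)) for _ in range(n_boot)]
          return np.mean(np.asarray(boot) < a0)

      rng = np.random.default_rng(1)
      signal = np.concatenate([np.full(50, 1.0), np.full(50, 1.6)])
      signal += rng.normal(0.0, 0.2, signal.size)
      print(change_confidence(signal))   # ~1.0: near-certain state change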

  6. ILP/SMT-Based Method for Design of Boolean Networks Based on Singleton Attractors.

    PubMed

    Kobayashi, Koichi; Hiraishi, Kunihiko

    2014-01-01

    Attractors in gene regulatory networks represent cell types or states of cells. In systems biology and synthetic biology, it is important to generate gene regulatory networks with desired attractors. In this paper, we focus on a singleton attractor, which is also called a fixed point. Using a Boolean network (BN) model, we consider the problem of finding Boolean functions such that the system has desired singleton attractors and has no undesired singleton attractors. To solve this problem, we propose a matrix-based representation of BNs. Using this representation, the problem of finding Boolean functions can be rewritten as an Integer Linear Programming (ILP) problem and a Satisfiability Modulo Theories (SMT) problem. Furthermore, the effectiveness of the proposed method is shown by a numerical example on a WNT5A network, which is related to melanoma. The proposed method provides a basis for the design of gene regulatory networks.
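
    For small networks the singleton-attractor condition x = F(x) can be checked by brute force, which makes the design target concrete; a minimal sketch (the paper's ILP/SMT encoding searches over the Boolean functions themselves and scales much further):

      from itertools import product

      def singleton_attractors(fns):
          """Enumerate fixed points x = F(x) of a Boolean network: fns is a
          list of update functions, one per node, each mapping the full
          state tuple to that node's next value."""
          n = len(fns)
          return [s for s in product((0, 1), repeat=n)
                  if all(f(s) == s[i] for i, f in enumerate(fns))]

      # Toy 3-node network: x0' = x1 AND x2, x1' = x0, x2' = NOT x1
      fns = [lambda s: s[1] & s[2], lambda s: s[0], lambda s: 1 - s[1]]
      print(singleton_attractors(fns))   # -> [(0, 0, 1)]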

  7. Comparing the Principle-Based SBH Maieutic Method to Traditional Case Study Methods of Teaching Media Ethics

    ERIC Educational Resources Information Center

    Grant, Thomas A.

    2012-01-01

    This quasi-experimental study at a Northwest university compared two methods of teaching media ethics, a class taught with the principle-based SBH Maieutic Method (n = 25) and a class taught with a traditional case study method (n = 27), with a control group (n = 21) that received no ethics training. Following a 16-week intervention, a one-way…

  8. Polyphony: superposition independent methods for ensemble-based drug discovery.

    PubMed

    Pitt, William R; Montalvão, Rinaldo W; Blundell, Tom L

    2014-09-30

    Structure-based drug design is an iterative process, following cycles of structural biology, computer-aided design, synthetic chemistry and bioassay. In favorable circumstances, this process can lead to hundreds of protein-ligand crystal structures. In addition, molecular dynamics simulations are increasingly being used to further explore the conformational landscape of these complexes. Currently, methods capable of the analysis of ensembles of crystal structures and MD trajectories are limited and usually rely upon least squares superposition of coordinates. Novel methodologies are described for the analysis of multiple structures of a protein. Statistical approaches that rely upon residue equivalence, but not superposition, are developed. Tasks that can be performed include the identification of hinge regions, allosteric conformational changes and transient binding sites. The approaches are tested on crystal structures of CDK2 and other CMGC protein kinases and a simulation of p38α. Known interaction-conformational change relationships are highlighted, and new ones are revealed. A transient but druggable allosteric pocket in CDK2 is predicted to occur under the CMGC insert. Furthermore, an evolutionarily-conserved conformational link from the location of this pocket, via the αEF-αF loop, to phosphorylation sites on the activation loop is discovered. New methodologies are described and validated for the superimposition independent conformational analysis of large collections of structures or simulation snapshots of the same protein. The methodologies are encoded in a Python package called Polyphony, which is released as open source to accompany this paper [http://wrpitt.bitbucket.org/polyphony/].

  9. Ensemble-based methods for forecasting census in hospital units.

    PubMed

    Koestler, Devin C; Ombao, Hernando; Bender, Jesse

    2013-05-30

    The ability to accurately forecast census counts in hospital departments has considerable implications for hospital resource allocation. In recent years several different methods have been proposed for forecasting census counts; however, many of these approaches do not use available patient-specific information. In this paper we present an ensemble-based methodology for forecasting the census under a framework that simultaneously incorporates both (i) arrival trends over time and (ii) patient-specific baseline and time-varying information. The proposed model for predicting census has three components, namely: current census count, number of daily arrivals and number of daily departures. To model the number of daily arrivals, we use a seasonality adjusted Poisson Autoregressive (PAR) model where the parameter estimates are obtained via conditional maximum likelihood. The number of daily departures is predicted by modeling the probability of departure from the census using logistic regression models that are adjusted for the amount of time spent in the census and incorporate both patient-specific baseline and time-varying covariate information. We illustrate our approach using neonatal intensive care unit (NICU) data collected at Women & Infants Hospital, Providence RI, which consists of 1001 consecutive NICU admissions between April 1st 2008 and March 31st 2009. Our results demonstrate statistically significant improved prediction accuracy for 3, 5, and 7 day census forecasts and increased precision of our forecasting model compared to a forecasting approach that ignores patient-specific information. Forecasting models that utilize patient-specific baseline and time-varying information make the most of data typically available and have the capacity to substantially improve census forecasts.

  10. Geometric correction methods for Timepix based large area detectors

    NASA Astrophysics Data System (ADS)

    Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.

    2017-01-01

    X-ray micro radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven itself especially suitable for samples with low intrinsic attenuation contrast (e.g., soft tissue in biology, plastics in material sciences, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm²) was recently overcome by the construction of the large-area WidePIX detectors assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm² without any physical gaps between sensors. The pixel resolution of this device is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions. It is necessary to apply several geometric compensations after the standard correction methods typical for this type of pixel detector (i.e., flat-field and beam-hardening corrections). The proposed geometric compensations cover both concept features and particular detector-assembly misalignments of the individual chip rows of large-area detectors based on Timepix assemblies. The former deals with the larger border pixels in the individual edgeless sensors and their behaviour, while the latter grapples with shifts, tilts, and steps between detector rows. The real position of every pixel is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. The results of the geometric corrections for test wire phantoms and paleobotanic material are presented in this article.
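
    The final interpolation step can be sketched with a generic scattered-data resampler: given the measured Cartesian center of every pixel (with row shifts, tilts, and steps already folded in), the counts are regridded onto a regular image. The function below is a simplified stand-in for that pipeline, assuming pixel positions are already expressed in output-grid units and omitting the reliability mask:

      import numpy as np
      from scipy.interpolate import griddata

      def correct_geometry(counts, pixel_x, pixel_y, out_shape):
          """Resample per-pixel counts onto a regular grid from measured
          pixel-center positions (flattened arrays, output-grid units)."""
          gy, gx = np.mgrid[0:out_shape[0], 0:out_shape[1]].astype(float)
          points = np.column_stack([pixel_x, pixel_y])
          # e.g. counts = raw.ravel(), with pixel_x/pixel_y from the map
          return griddata(points, counts, (gx, gy), method='linear',
                          fill_value=0.0)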

  11. Connectivity-Based Hierarchy for theoretical thermochemistry: assessment using wave function-based methods.

    PubMed

    Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2012-07-19

    The Connectivity-Based Hierarchy (CBH) is a generalized method we have developed recently to accurately predict the thermochemical properties of large closed-shell organic molecules-hydrocarbons as well as nonhydrocarbons. The performance of the different rungs of the hierarchy was initially evaluated using density functional theory. In this study, we have carried out a wave function-based analysis of the CBH method to analyze the influence of electron correlation effects on the reaction energies and enthalpies of formation. For a test set containing unstrained molecules, all levels of theory (HF, MP2, and CCSD(T)) yield small reaction energies and accurate enthalpies of formation even with modest-sized polarized double-ζ or triple-ζ basis sets. For an initial test set of five strained molecules, however, the computed reaction energies are not small, though correlated schemes still yield accurate enthalpies of formation. Thus, small reaction energies cannot be used as the principal criterion to calibrate the success of thermochemical reaction schemes for molecules possessing special features (such as ring strain or aromaticity). Overall, for the relatively large nonaromatic molecules considered in this study, the mean absolute deviation with the MP2 method at the isoatomic CBH-2 rung is comparable to that with the more expensive CCSD(T) method at the higher CBH-3 rung.

  12. Novel method of manufacturing hydrogen storage materials combining with numerical analysis based on discrete element method

    NASA Astrophysics Data System (ADS)

    Zhao, Xuzhe

    High-efficiency hydrogen storage is significant to the development of fuel cell vehicles, and finding a high-energy-density fuel material is the key to their wide adoption. The LiBH4 + MgH2 system is a strong candidate due to its high hydrogen storage density, and the reaction between the two hydrides is reversible. However, the LiBH4 + MgH2 system usually requires high temperature and hydrogen pressure for the hydrogen release and uptake reactions. To relax these requirements, nanoengineering is a simple and efficient route to improve the thermodynamic properties and reduce the kinetic barrier of the reaction between LiBH4 and MgH2. Based on ab initio density functional theory (DFT) calculations, a previous study indicated that the reaction between LiBH4 and MgH2 can take place at temperatures near 200°C or below; these predictions, however, have been shown to be inconsistent with many experiments. This work is therefore the first to use ball milling with aerosol spraying (BMAS) to show that the reaction between LiBH4 and MgH2 can occur during high-energy ball milling at room temperature. Through the BMAS process we clearly observed the formation of MgB2 and LiH during ball milling of MgH2 while aerosol-spraying the LiBH4/THF solution. Aerosol nanoparticles from the LiBH4/THF solution lead to the formation of Li2B12H12 during the BMAS process; the Li2B12H12 formed then reacts with MgH2 in situ during ball milling to form MgB2 and LiH. Discrete element modeling (DEM) is a useful tool to describe the operation of various ball milling processes, and EDEM is DEM-based software that predicts power consumption, liner and media wear, and mill output. To further improve the milling efficiency of the BMAS process, an EDEM analysis of the complicated ball milling process was conducted, with milling speed and the balls' filling ratio inside the canister considered as the variables determining the milling efficiency. The average and maximum

  13. Research on Knowledge-Based Optimization Method of Indoor Location Based on Low Energy Bluetooth

    NASA Astrophysics Data System (ADS)

    Li, C.; Li, G.; Deng, Y.; Wang, T.; Kang, Z.

    2017-09-01

    With the rapid development of LBS (Location-Based Services), the demand for commercial indoor location has been increasing, but the technology is not yet mature. Currently, the accuracy of indoor location, the complexity of the algorithms, and the cost of positioning are difficult to balance simultaneously, which still restricts the adoption and application of a mainstream positioning technology. This paper therefore proposes a knowledge-based optimization method for indoor location based on low-energy Bluetooth. The main steps include: 1) the establishment and application of a priori and a posteriori knowledge bases; 2) primary selection of signal sources; 3) elimination of positioning gross errors; 4) accumulation of positioning knowledge. The experimental results show that the proposed algorithm can eliminate outlier signal sources and improve the accuracy of single-point positioning on the simulation data. The proposed scheme is a dynamic accumulation of knowledge rather than a single positioning process. The scheme uses cheap equipment and provides a new idea for the theory and methods of indoor positioning. Moreover, the high-accuracy positioning results on the simulation data show that the scheme has application value in commercial settings.
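    A minimal sketch of steps 2) and 3), under assumptions that are not from the paper: a hypothetical beacon knowledge base (position plus calibrated 1 m power per source), log-distance path-loss ranging, and residual-based rejection of gross-error sources before a least-squares re-fix.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical beacon table (a minimal "knowledge base"): position (m) and
    # calibrated received power at 1 m for each Bluetooth source.
    BEACONS = {"b1": ((0.0, 0.0), -59), "b2": ((5.0, 0.0), -61),
               "b3": ((0.0, 5.0), -60), "b4": ((5.0, 5.0), -58)}
    PATHLOSS_N = 2.2   # assumed indoor path-loss exponent

    def rssi_to_range(rssi, tx_power):
        """Log-distance path-loss model inverted to a range estimate."""
        return 10 ** ((tx_power - rssi) / (10 * PATHLOSS_N))

    def locate(readings):
        """Least-squares fix with a simple gross-error rejection step."""
        ids = list(readings)
        pos = np.array([BEACONS[i][0] for i in ids])
        rng_est = np.array([rssi_to_range(readings[i], BEACONS[i][1]) for i in ids])

        def residuals(p):
            return np.hypot(*(pos - p).T) - rng_est

        fix = least_squares(residuals, x0=pos.mean(axis=0)).x
        res = np.abs(residuals(fix))
        keep = res < res.mean() + 2 * res.std()      # reject outlier sources
        if keep.sum() >= 3 and not keep.all():
            pos, rng_est = pos[keep], rng_est[keep]
            fix = least_squares(residuals, x0=fix).x  # re-solve without outliers
        return fix

    print(locate({"b1": -70, "b2": -75, "b3": -68, "b4": -88}))
    ```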

  14. Blended General Linear Methods based on Generalized BDF

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Magherini, Cecilia

    2008-09-01

    General Linear Methods were introduced to encompass a large family of numerical methods for the solution of ODE-IVPs, ranging from LMF to RK formulae. In so doing, it is possible to obtain methods able to overcome typical drawbacks of the previous classes of methods, for example the stability limitations of LMF and the order reduction of RK methods. Nevertheless, these goals are usually achieved at the price of a higher computational cost. Consequently, much effort has been devoted to deriving GLMs with particular features that can be exploited for their efficient implementation. In recent years, the derivation of GLMs from particular Boundary Value Methods (BVMs), namely the family of Generalized BDF (GBDF), has been proposed for the numerical solution of stiff ODE-IVPs. Here, this approach is further developed to derive GLMs that combine good stability and accuracy properties with the possibility of efficiently solving the generated discrete problems via the blended implementation of the methods.

  15. Breast augmentation with anatomic implants: a method based on the breast implantation base.

    PubMed

    Martin del Yerro, Jose L; Vegas, Manuel R; Sanz, Ignacio; Moreno, Emilio; Fernandez, Veronica; Puga, Susana; Vecino, Maria G; Biggs, Thomas M

    2014-04-01

    Currently, aesthetic and reconstructive surgery of the breast should be considered in terms of contouring, and hence in terms of dimensions. Based on experience performing more than 5,000 breast augmentations with highly cohesive anatomic implants, the authors explore the aesthetic anatomy of the (augmented) breast and explain the importance of the breast implantation base (BIB), the aesthetic proportions of the lower breast pole, and the patient's somatotype in the implant selection for a natural-appearing breast augmentation. A method is described for transferring all these concepts and proportions to the preoperative marking of the individual patient. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.

  16. The simulation of the recharging method of active medical implant based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Kong, Xianyue; Song, Yong; Hao, Qun; Cao, Jie; Zhang, Xiaoyu; Dai, Pantao; Li, Wansong

    2014-11-01

    The recharging of an Active Medical Implant (AMI) is an important issue for its future application. In this paper, a method for recharging an active medical implant using a wearable incoherent light source is proposed. Firstly, models of the recharging method are developed. Secondly, the recharging processes of the proposed method are simulated using the Monte Carlo (MC) method. Finally, some important conclusions are reached. The results indicate that the proposed approach can lead to a convenient, safe and low-cost recharging method for AMIs, which will promote the application of this kind of implantable device.
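    A toy version of such an MC simulation, assuming made-up tissue optical properties (MU_A, MU_S) and implant depth, and reducing the geometry to 1D; the paper's actual model is certainly more detailed.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Assumed tissue optical properties (not from the paper): absorption and
    # scattering coefficients in 1/mm, and implant depth in mm.
    MU_A, MU_S, DEPTH = 0.1, 1.0, 4.0
    MU_T = MU_A + MU_S

    def fraction_reaching_implant(n_photons=100_000):
        """Crude 1D Monte Carlo: photons take exponential free paths; each
        interaction is absorption with probability mu_a/mu_t, otherwise the
        photon scatters into a new random direction cosine in [-1, 1]."""
        reached = 0
        for _ in range(n_photons):
            z, mu = 0.0, 1.0                      # depth and direction cosine
            while True:
                z += mu * rng.exponential(1 / MU_T)
                if z >= DEPTH:
                    reached += 1                  # photon delivers energy to the AMI
                    break
                if z < 0 or rng.random() < MU_A / MU_T:
                    break                         # escaped or absorbed en route
                mu = rng.uniform(-1, 1)           # isotropic scattering
        return reached / n_photons

    print(f"fraction of light reaching the implant: {fraction_reaching_implant():.3f}")
    ```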

  17. Ground-based ULF methods of monitoring the magnetospheric plasma

    NASA Astrophysics Data System (ADS)

    Romanova, Natalia; Pilipenko, Viacheslav; Stepanova, Marina; Kozyreva, Olga; Kawano, Hideaki

    The terrestrial magnetosphere is a giant natural MHD resonator: the magnetospheric Alfven resonator is formed by geomagnetic field lines terminated by the conductive ionospheres. Though the source of Pc3-5 waves is not reliably known, the identification of the resonant frequency enables one to determine the magnetospheric plasma density and ionospheric conductance from ground magnetometer observations. However, a spectral peak does not necessarily correspond to a local resonant frequency, and the width of a spectral peak cannot be directly used to determine the quality factor of the magnetospheric resonator. This ambiguity can be resolved with the help of various gradient and polarization methods, reviewed in this presentation: the Gradient Method (GM), the Amplitude-Phase Gradient Method (APGM), polarization methods (including the H/D method), and the Hodograph (H) method. These methods can be regarded as tools for "hydromagnetic spectroscopy" to diagnose the magnetosphere. The H-method has additional possibilities compared with the gradient method: one can determine a continuous distribution of the magnetospheric resonant frequencies and Q-factors over a range of latitudes beyond the observation baseline. These methods are illustrated by the results of their application to data from the SAMBA magnetometer array.
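    The gradient method can be sketched as follows for two latitudinally separated stations: compute the amplitude-ratio and cross-phase spectra and look for the resonance signature (the ratio crossing unity, the cross-phase reaching its extremum). The band limits and window length are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.signal import csd, welch

    def gradient_method(b_north, b_south, fs):
        """Amplitude-ratio and cross-phase spectra for two latitudinally
        separated magnetometer stations; near the field-line resonance the
        ratio crosses unity and the cross-phase reaches its extremum."""
        f, p_nn = welch(b_north, fs=fs, nperseg=1024)
        _, p_ss = welch(b_south, fs=fs, nperseg=1024)
        _, p_ns = csd(b_north, b_south, fs=fs, nperseg=1024)
        ratio = np.sqrt(p_nn / p_ss)              # amplitude ratio spectrum
        phase = np.angle(p_ns, deg=True)          # cross-phase spectrum
        band = (f > 0.002) & (f < 0.1)            # rough Pc3-5 band (Hz)
        f_res = f[band][np.argmax(np.abs(phase[band]))]
        return f, ratio, phase, f_res
    ```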

  18. A genetic algorithm based method for docking flexible molecules

    SciTech Connect

    Judson, R.S.; Jaeger, E.P.; Treasurywala, A.M.

    1993-11-01

    The authors describe a computational method for docking flexible molecules into protein binding sites. The method uses a genetic algorithm (GA) to search the combined conformation/orientation space of the molecule to find low-energy conformations. Several techniques are described that increase the efficiency of the basic search method. These include the use of several interacting GA subpopulations or niches; the use of a growing algorithm that initially docks only a small part of the molecule; and the use of gradient minimization during the search. To illustrate the method, they dock Cbz-GlyP-Leu-Leu (ZGLL) into thermolysin. This system was chosen because a well-refined crystal structure is available and because another docking method had previously been tested on it. Their method is able to find conformations that lie physically close to, and in some cases lower in energy than, the crystal conformation in reasonable periods of time on readily available hardware.
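    A generic GA of the kind described, shown on a stand-in energy function over torsion angles; niching, the growing algorithm, and gradient minimization from the paper are all omitted, and `energy` is a placeholder for a real ligand-protein scoring function.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    def energy(torsions):
        """Stand-in scoring function; a real docking code would evaluate the
        ligand-protein interaction energy of this conformation/orientation."""
        return np.sum(1 - np.cos(torsions)) + 0.1 * np.sum(torsions ** 2)

    def ga_dock(n_genes=6, pop_size=40, generations=200, p_mut=0.1):
        pop = rng.uniform(-np.pi, np.pi, (pop_size, n_genes))
        for _ in range(generations):
            fitness = np.array([energy(ind) for ind in pop])
            parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
            cut = rng.integers(1, n_genes, pop_size // 2)
            mates = parents[rng.permutation(len(parents))]
            children = np.where(np.arange(n_genes) < cut[:, None],
                                parents, mates)                  # one-point crossover
            mutate = rng.random(children.shape) < p_mut
            children = children + mutate * rng.normal(0, 0.3, children.shape)
            pop = np.vstack([parents, children])
        best = pop[np.argmin([energy(ind) for ind in pop])]
        return best, energy(best)

    best, e = ga_dock()
    print(e)
    ```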

  19. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    NASA Astrophysics Data System (ADS)

    Gu, Lingyun; Harris, John G.; Shrivastav, Rahul; Sapienza, Christine

    2005-12-01

    Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
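    A standard DTW distance, one of the two building blocks named in the abstract, computed on frame-wise feature sequences (the Itakura-Saito measure could be substituted for the Euclidean frame cost used here):

    ```python
    import numpy as np

    def dtw_distance(x, y):
        """Classic dynamic-time-warping distance between two feature
        sequences (rows = frames), as used to align a patient's utterance
        with a reference before scoring."""
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    ref = np.random.rand(50, 12)      # e.g. 12-dimensional cepstral frames
    test = np.random.rand(60, 12)
    print(dtw_distance(ref, test))
    ```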

  20. Analysis of surface asperity flattening based on two different methods

    NASA Astrophysics Data System (ADS)

    Li, Hejie; Öchsner, Andreas; Ni, Guowei; Wei, Dongbin; Jiang, Zhengyi

    2016-11-01

    The stress state is an important parameter in metal forming processes; it significantly influences the strain state and microstructure of products, affecting their surface quality. For metal products to have good surface quality, the surface stress state must be optimised. In this study, two classical methods, the upper bound method and the crystal plasticity finite element method, were investigated. The differences between the two methods were discussed with regard to the model, the velocity field, and the strain field. The related surface roughness is then deduced.

  1. The historical bases of the Rayleigh and Ritz methods

    NASA Astrophysics Data System (ADS)

    Leissa, A. W.

    2005-11-01

    Rayleigh's classical book Theory of Sound was first published in 1877. In it are many examples of calculating fundamental natural frequencies of free vibration of continuum systems (strings, bars, beams, membranes, plates) by assuming the mode shape, and setting the maximum values of potential and kinetic energy in a cycle of motion equal to each other. This procedure is well known as "Rayleigh's Method." In 1908, Ritz laid out his famous method for determining frequencies and mode shapes, choosing multiple admissible displacement functions, and minimizing a functional involving both potential and kinetic energies. He then demonstrated it in detail in 1909 for the completely free square plate. In 1911, Rayleigh wrote a paper congratulating Ritz on his work, but stating that he himself had used Ritz's method in many places in his book and in another publication. Subsequently, hundreds of research articles and many books have appeared which use the method, some calling it the "Ritz method" and others the "Rayleigh-Ritz method." The present article examines the method in detail, as Ritz presented it, and as Rayleigh claimed to have used it. It concludes that, although Rayleigh did solve a few problems which involved minimization of a frequency, these solutions were not by the straightforward, direct method presented by Ritz and used subsequently by others. Therefore, Rayleigh's name should not be attached to the method.
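    Rayleigh's procedure is easy to reproduce numerically. The sketch below estimates the fundamental frequency of a uniform cantilever from an assumed admissible shape and confirms that the Rayleigh quotient overestimates the exact value (frequency coefficient about 3.66 versus 3.516); the material and section values are arbitrary.

    ```python
    import numpy as np
    from scipy.integrate import trapezoid

    # Rayleigh's method for a uniform cantilever: assume an admissible shape,
    # equate the cycle maxima of potential and kinetic energy, solve for omega.
    E, I, rho, A, L = 210e9, 1e-8, 7800.0, 1e-4, 1.0   # illustrative values

    x = np.linspace(0.0, L, 2001)
    k = np.pi / (2 * L)
    w = 1 - np.cos(k * x)          # satisfies w(0) = w'(0) = 0
    w2 = k ** 2 * np.cos(k * x)    # curvature w''(x)

    num = E * I * trapezoid(w2 ** 2, x)      # 2 x max potential energy
    den = rho * A * trapezoid(w ** 2, x)     # 2 x max kinetic energy / omega^2
    omega = np.sqrt(num / den)

    exact = 1.875104 ** 2 * np.sqrt(E * I / (rho * A)) / L ** 2
    print(omega, exact)   # the Rayleigh estimate is an upper bound
    ```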

  2. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are “clumped” together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
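    A rough analogue of Method B, under assumed parameters: single-linkage grouping of points lying within a gap distance of one another, with each sufficiently large clump wrapped in its convex hull (VSP's actual polygon construction is not published here).

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial import ConvexHull

    def delineate_clumps(points, max_gap=1.0, min_size=3):
        """Group points within max_gap of one another (single linkage) and
        wrap each clump in a polygon, ignoring isolated points."""
        labels = fcluster(linkage(points, method='single'),
                          t=max_gap, criterion='distance')
        polygons = []
        for lab in np.unique(labels):
            clump = points[labels == lab]
            if len(clump) >= min_size:
                hull = ConvexHull(clump)
                polygons.append(clump[hull.vertices])  # polygon vertex list
        return polygons

    pts = np.vstack([np.random.normal(0, 0.3, (20, 2)),
                     np.random.normal(5, 0.3, (25, 2))])
    print([len(p) for p in delineate_clumps(pts)])
    ```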

  3. Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling

    SciTech Connect

    Randall S. Seright

    2007-09-30

    This final technical progress report describes work performed from October 1, 2004, through May 16, 2007, for the project, 'Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling'. We explored the potential of pore-filling gels for reducing excess water production from both fractured and unfractured production wells. Several gel formulations were identified that met the requirements--i.e., providing water residual resistance factors greater than 2,000 and ultimate oil residual resistance factors (F{sub rro}) of 2 or less. Significant oil throughput was required to achieve low F{sub rro} values, suggesting that gelant penetration into porous rock must be small (a few feet or less) for existing pore-filling gels to provide effective disproportionate permeability reduction. Compared with adsorbed polymers and weak gels, strong pore-filling gels can provide greater reliability and behavior that is insensitive to the initial rock permeability. Guidance is provided on where relative-permeability-modification/disproportionate-permeability-reduction treatments can be successfully applied for use in either oil or gas production wells. When properly designed and executed, these treatments can be successfully applied to a limited range of oilfield excessive-water-production problems. We examined whether gel rheology can explain behavior during extrusion through fractures. The rheology behavior of the gels tested showed a strong parallel to the results obtained from previous gel extrusion experiments. However, for a given aperture (fracture width or plate-plate separation), the pressure gradients measured during the gel extrusion experiments were much higher than anticipated from rheology measurements. Extensive experiments established that wall slip and first normal stress difference were not responsible for the pressure gradient discrepancy. To explain the discrepancy, we noted that the aperture for gel flow (for mobile gel wormholing through concentrated immobile

  4. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for the GPU hardware architecture is of great significance. In order to solve the problems of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel character with the GPU's single-instruction, multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration, suitable for GPU stream computing, is proposed. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of the data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate can work around the memory-bandwidth limitation. The results show that the new algorithm significantly increases operational speed and effectively improves the real-time performance of image restoration, especially for high-resolution images.

  5. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method which particularly aims to increase the segment-ability of echocardiography features such as the endocardium and to improve image contrast. In addition, it seeks to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and the discrete wavelet transform. For evaluation, the results of several well-known techniques were compared against the proposed method, and different metrics were implemented to evaluate the performance of the proposed algorithm. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best results for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
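    One plausible reading of a PCA-plus-DWT pixel-level fusion, not the authors' exact algorithm: PCA weights for the approximation band and max-absolute selection for the detail bands, using PyWavelets.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def fuse_pca_dwt(img1, img2, wavelet='db2'):
        """Pixel-level fusion sketch: PCA weights for the approximation band,
        max-absolute selection for the detail bands."""
        cA1, det1 = pywt.dwt2(img1.astype(float), wavelet)
        cA2, det2 = pywt.dwt2(img2.astype(float), wavelet)

        # PCA on the two approximation bands: weight by the dominant eigenvector.
        data = np.vstack([cA1.ravel(), cA2.ravel()])
        _, vecs = np.linalg.eigh(np.cov(data))
        w = np.abs(vecs[:, -1])
        w /= w.sum()
        cA = w[0] * cA1 + w[1] * cA2

        # keep the stronger detail coefficient at each position
        det = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                    for d1, d2 in zip(det1, det2))
        return pywt.idwt2((cA, det), wavelet)
    ```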

  6. Exploring the Query Expansion Methods for Concept Based Representation

    DTIC Science & Technology

    2014-11-01

    This work maps documents from a term-based representation to a concept-based representation, then utilizes the Cases Database and UMLS relations to expand the key concepts; the same query expansion techniques are also applied to the term-based representation. The results show that using the UMLS relations could help to improve performance.

  7. Ensemble ROCK Methods and Ensemble SWFM Methods for Clustering of Cross Citrus Accessions Based on Mixed Numerical and Categorical Dataset

    NASA Astrophysics Data System (ADS)

    Alvionita; Sutikno; Suharsono, A.

    2017-03-01

    Cluster analysis is a multivariate technique that reduces data by classifying the objects of observation into groups based on their characteristics. Cluster analysis is not only used for numerical or categorical data but has also been developed for mixed data. Several methods exist for analyzing mixed data, such as ensemble methods and the Similarity Weight and Filter Method (SWFM). There is a lot of research on these methods, but prior studies have not compared their performance. Therefore, this paper compares the performance of the ensemble ROCK clustering method and the ensemble SWFM method. These methods are used to cluster cross citrus accessions based on fruit and leaf characteristics, which involve a mixture of numerical and categorical variables. The clustering method with the best performance is determined by the ratio of the standard deviation within groups (SW) to the standard deviation between groups (SB); the method with the best performance has the smallest ratio. From the results, we find that the ensemble ROCK method performs better than the ensemble SWFM method.
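    The SW/SB selection criterion can be computed directly; the precise definition of the pooled deviations may differ from the paper's, so this is an assumed variant.

    ```python
    import numpy as np

    def sw_sb_ratio(X, labels):
        """Ratio of pooled within-group to between-group standard deviation;
        smaller values indicate tighter, better separated clusters."""
        groups = [X[labels == g] for g in np.unique(labels)]
        sw = np.mean([g.std(axis=0).mean() for g in groups])
        centroids = np.array([g.mean(axis=0) for g in groups])
        sb = centroids.std(axis=0).mean()
        return sw / sb
    ```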

  8. Classification of Polarimetric SAR Image Based on the Subspace Method

    NASA Astrophysics Data System (ADS)

    Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.

    2013-07-01

    Land cover classification is one of the most significant applications of remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate clouds and has all-weather capability, so land cover classification from SAR imagery is important in remote sensing. The subspace method is a novel approach for SAR data that reduces data dimensionality by incorporating feature extraction into the classification process. This paper uses the averaged learning subspace method (ALSM), which can be applied to fully polarimetric SAR images for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition, and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. By conducting experiments on a fully polarimetric Radarsat-2 image, we conclude that the proposed method yields higher classification accuracy. Therefore, the ALSM classification method is a feasible alternative for SAR image classification.
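    A minimal subspace classifier of the kind the ALSM builds on: one principal subspace per class, with assignment by the largest squared projection. The averaged-learning rotation step that defines ALSM proper is omitted.

    ```python
    import numpy as np

    class SubspaceClassifier:
        """CLAFIC-style subspace classifier. (ALSM additionally rotates the
        class subspaces iteratively using misclassified training samples;
        that refinement is omitted here.)"""

        def fit(self, X, y, dim=5):
            self.classes_ = np.unique(y)
            self.bases_ = {}
            for c in self.classes_:
                # class subspace from the dominant right singular vectors
                _, _, vt = np.linalg.svd(X[y == c], full_matrices=False)
                self.bases_[c] = vt[:dim]
            return self

        def predict(self, X):
            # squared projection of each sample onto each class subspace
            scores = np.stack([np.sum((X @ self.bases_[c].T) ** 2, axis=1)
                               for c in self.classes_], axis=1)
            return self.classes_[np.argmax(scores, axis=1)]
    ```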

  9. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  10. Method of detecting meter base on image-processing

    NASA Astrophysics Data System (ADS)

    Wang, Hong-ping; Wang, Peng; Yu, Zheng-lin

    2008-03-01

    This paper proposes a new approach to meter verification using image arithmetic-logic operations and a high-precision raster sensor. The method regards the data measured by the precision raster as the true value and the data obtained by digital image processing as the measured value, and verifies the meter by comparing the two. It uses the dynamic change of the meter pointer to perform image subtraction, realize image segmentation, and obtain the displacement of the image pointer between frames. This image-segmentation technique replaces the traditional approach of manual operation and visual reading, whose accuracy and repeatability are low. The precision of the method meets the technical requirements of the national verification regulations, and experiments indicate that it is reliable and highly accurate. The paper presents the overall scheme of the meter verification system, the method of capturing the image pointer, and an analysis of the indicated-value error.
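    The frame-differencing segmentation at the heart of the method might look like this; the threshold and the principal-axis angle estimate are illustrative choices, not the paper's.

    ```python
    import numpy as np

    def pointer_mask(frame_t0, frame_t1, thresh=25):
        """Segment the moving pointer by differencing two frames taken as the
        pointer sweeps, then thresholding; the static meter face cancels out."""
        diff = np.abs(frame_t1.astype(np.int16) - frame_t0.astype(np.int16))
        return diff > thresh          # boolean mask of the pointer region

    def pointer_angle(mask):
        """Estimate the pointer direction from the mask's principal axis."""
        ys, xs = np.nonzero(mask)
        xs, ys = xs - xs.mean(), ys - ys.mean()
        vals, vecs = np.linalg.eigh(np.cov(np.vstack([xs, ys])))
        vx, vy = vecs[:, -1]                     # dominant axis
        return np.degrees(np.arctan2(vy, vx))
    ```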

  11. A detection method of signal frequency based on optimization theory

    NASA Astrophysics Data System (ADS)

    Nie, Chunyan; Shi, Yaowu; Wang, Zhuwen; Guo, Bin

    2006-11-01

    The sensitivity of chaotic systems to initial values and their immunity to noise demonstrate their superiority in weak-signal detection. In this paper the Duffing equation is used as the detection model and, on the basis of optimization theory, an optimization-based search method that takes the variance of the output x as the detected value is presented. The basic principle of detecting a weak signal with this method and the theoretical algorithm are given, together with simulation experiments and an analysis of the results. The results indicate that this method is fast, simple, convenient, and highly accurate, providing a novel way of detecting signal frequency. Applied in signal processing or other fields, it would have practical significance.
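    A sketch of the variance-based detection idea: drive a Duffing oscillator near its chaotic threshold, scan candidate frequencies, and flag the frequency at which adding a weak input changes the output variance most. All constants are illustrative.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    DELTA, GAMMA = 0.5, 0.825    # damping and reference-drive amplitude chosen
                                 # near the chaotic threshold (illustrative)

    def output_variance(omega, weak_amp=0.0, t_end=300.0):
        """Variance of the Duffing output x(t) at a given drive frequency;
        a weak input near resonance flips the regime and shifts the variance."""
        def rhs(t, s):
            x, v = s
            drive = (GAMMA + weak_amp) * np.cos(omega * t)
            return [v, -DELTA * v + x - x ** 3 + drive]
        sol = solve_ivp(rhs, (0, t_end), [0.0, 0.0], max_step=0.05)
        x = sol.y[0][sol.t > 100]        # discard the transient
        return x.var()

    # Scan candidate frequencies; the detected frequency is where the variance
    # deviates most from the unperturbed reference.
    freqs = np.linspace(0.8, 1.2, 21)
    ref = np.array([output_variance(w) for w in freqs])
    sig = np.array([output_variance(w, weak_amp=0.01) for w in freqs])
    print(freqs[np.argmax(np.abs(sig - ref))])
    ```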

  12. A Speckle Reduction Filter Using Wavelet-Based Methods for Medical Imaging Application

    DTIC Science & Technology

    2001-10-25

  13. Pressure-based impact method to count bedload particles

    NASA Astrophysics Data System (ADS)

    Antico, Federica; Mendes, Luís; Aleixo, Rui; Ferreira, Rui M. L.

    2017-04-01

    -channel flow, was analysed. All tests featured a period of 90 s of data collection. For a detailed description of the laboratory facilities and test conditions see Mendes et al. (2016). Results from the MiCas system were compared with those obtained from the analysis of high-speed video footage, and the two techniques showed good agreement. The measurements carried out established that the MiCas system is able to track particle impacts in real time within an error margin of 2.0%, and repeated tests under the same conditions made it possible to determine the repeatability of the system. Derived quantities such as bedload transport rates, Eulerian auto-correlation functions and structure functions are also in close agreement with measurements based on optical methods. The main advantages of the MiCas system relative to digital image processing methods are: a) independence from optical access, thus avoiding problems with light intensity variations and oscillating free surfaces; b) the small volume of data associated with particle counting, which makes it possible to acquire very long data series (hours, days) of particle impacts. In the cases considered, it would take more than two hours to generate 1 MB of data; for the current validation tests, 90 s of acquisition generated 25 GB of images but only 11 kB of MiCas data, and the time necessary to process the digital images may amount to days, effectively limiting their use to small time series; c) the possibility of real-time measurements, allowing the detection of problems during the experiments and minimizing some post-processing steps. This research was partially supported by Portuguese and European funds, within programs COMPETE2020 and PORL-FEDER, through project PTDC/ECM-HID/6387/2014 granted by the National Foundation for Science and Technology (FCT). References Mendes L., Antico F., Sanches P., Alegria F., Aleixo R., and Ferreira RML. (2016). A particle counting system for

  14. The research of positioning methods based on Internet of Things

    NASA Astrophysics Data System (ADS)

    Zou, Dongyao; Liu, Jia; Sun, Hui; Li, Nana; Han, Xueqin

    2013-03-01

    With the advent of the Internet of Things era, more and more applications require location-based services. This article describes the concepts and basic principles of several Internet of Things positioning technologies, such as GPS positioning, base-station positioning, and ZigBee positioning, and then compares the advantages and disadvantages of these types of positioning technologies.

  15. [Comparison of sustainable development status in Heilongjiang Province based on traditional ecological footprint method and emergy ecological footprint method].

    PubMed

    Chen, Chun-feng; Wang, Hong-yan; Xiao, Du-ning; Wang, Da-qing

    2008-11-01

    Using the traditional ecological footprint method and its modification, the emergy ecological footprint method, the sustainable development status of Heilongjiang Province in 2005 was analyzed. The results showed that the ecological deficits of Heilongjiang Province in 2005 based on the emergy and the conventional ecological footprint methods were 1.919 and 0.6256 hm2 per capita, respectively. The ecological footprint values from both methods exceeded the carrying capacity, indicating that the social and economic development of the study area was not sustainable. The emergy ecological footprint method was used to examine the relationship between human material demand and the resource supply of the ecosystem, and more stable parameters such as emergy transformity and emergy density were introduced into it, which overcomes some of the shortcomings of the conventional ecological footprint method.

  16. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to the analysis of financial time series. The method examines the similarity of different sequences by calculating the distances between them, and we use it to quantify the similarity of different stock markets. We report similarity results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the markets differs across periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, showing that the method can distinguish the markets of different regions via the resulting phylogenetic trees. These results show that satisfactory information can be extracted from financial markets by this method, which can be used not only for physiologic time series but also for financial time series.

  17. 3D face recognition by projection-based methods

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Sankur, Bülent; Yemez, Yücel

    2006-02-01

    In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data driven, such as ICA-based or NNMF-based features; others are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely 3D point clouds, 2D depth images, and 3D voxel representations. We consider both global and local features: global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.
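    As an example of the DCT-based global features, one can keep a low-frequency block of 2D DCT coefficients of a registered depth image (the block size k is an arbitrary choice here):

    ```python
    import numpy as np
    from scipy.fft import dctn

    def dct_features(depth_image, k=8):
        """Global projection-based features: 2D DCT of a registered depth
        image, keeping the k-by-k low-frequency block as the feature vector."""
        coeffs = dctn(depth_image.astype(float), norm='ortho')
        return coeffs[:k, :k].ravel()

    face = np.random.rand(96, 96)     # stand-in for a registered depth image
    print(dct_features(face).shape)   # (64,)
    ```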

  18. Comparison of Different Recruitment Methods for Sexual and Reproductive Health Research: Social Media–Based Versus Conventional Methods

    PubMed Central

    Motoki, Yoko; Taguri, Masataka; Asai-Sato, Mikiko; Enomoto, Takayuki; Wark, John Dennis; Garland, Suzanne Marie

    2017-01-01

    Background Prior research about the sexual and reproductive health of young women has relied mostly on self-reported survey studies. Thus, participant recruitment using Web-based methods can improve sexual and reproductive health research about cervical cancer prevention. In our prior study, we reported that Facebook is a promising way to reach young women for sexual and reproductive health research. However, it remains unknown whether Web-based or other conventional recruitment methods (ie, face-to-face or flyer distribution) yield comparable survey responses from similar participants. Objective We conducted a survey to determine whether there was a difference in the sexual and reproductive health survey responses of young Japanese women based on recruitment methods: social media–based and conventional methods. Methods From July 2012 to March 2013 (9 months), we invited women of ages 16-35 years in Kanagawa, Japan, to complete a Web-based questionnaire. They were recruited through either a social media–based (social networking site, SNS, group) or by conventional methods (conventional group). All participants enrolled were required to fill out and submit their responses through a Web-based questionnaire about their sexual and reproductive health for cervical cancer prevention. Results Of the 243 participants, 52.3% (127/243) were recruited by SNS, whereas 47.7% (116/243) were recruited by conventional methods. We found no differences between recruitment methods in responses to behaviors and attitudes to sexual and reproductive health survey, although more participants from the conventional group (15%, 14/95) chose not to answer the age of first intercourse compared with those from the SNS group (5.2%, 6/116; P=.03). Conclusions No differences were found between recruitment methods in the responses of young Japanese women to a Web–based sexual and reproductive health survey. PMID:28283466

  19. Bayesian Stereo Matching Method Based on Edge Constraints.

    PubMed

    Li, Jie; Shi, Wenxuan; Deng, Dexiang; Jia, Wenyan; Sun, Mingui

    2012-12-01

    A new global stereo matching method is presented that focuses on the handling of disparity, discontinuity and occlusion. The Bayesian approach is utilized for dense stereo matching problem formulated as a maximum a posteriori Markov Random Field (MAP-MRF) problem. In order to improve stereo matching performance, edges are incorporated into the Bayesian model as a soft constraint. Accelerated belief propagation is applied to obtain the maximum a posteriori estimates in the Markov random field. The proposed algorithm is evaluated using the Middlebury stereo benchmark. Our experimental results comparing with some state-of-the-art stereo matching methods demonstrate that the proposed method provides superior disparity maps with a subpixel precision.

  20. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  2. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution of the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence": the optimal regularized solution is obtained after a few iterations, but if the iteration is not stopped, the method converges to a solution that is generally totally corrupted by the measurement errors. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR, and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate our results.
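    A compact CGLS iteration with a discrepancy-principle stopping rule illustrates the semi-convergence point made above; a real NAH system would use the complex BEM matrix rather than this generic A.

    ```python
    import numpy as np

    def cgls(A, b, n_iters):
        """Conjugate gradients on the normal equations; all iterates are
        returned so a stopping rule can pick the semi-convergence optimum."""
        x = np.zeros(A.shape[1])
        r = b.copy()
        s = A.T @ r
        p, norms2 = s.copy(), s @ s
        history = []
        for _ in range(n_iters):
            q = A @ p
            alpha = norms2 / (q @ q)
            x = x + alpha * p
            r = r - alpha * q
            s = A.T @ r
            new_norms2 = s @ s
            p = s + (new_norms2 / norms2) * p
            norms2 = new_norms2
            history.append(x.copy())
        return history

    def discrepancy_stop(A, b, history, noise_level):
        """Morozov discrepancy principle: stop once ||Ax - b|| drops to the
        estimated measurement-noise level; later iterates amplify noise."""
        for k, x in enumerate(history):
            if np.linalg.norm(A @ x - b) <= noise_level:
                return k, x
        return len(history) - 1, history[-1]
    ```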

  3. Analyzing ligation mixtures using a PCR based method

    PubMed Central

    Wikel, Stephen K.

    2005-01-01

    We have developed a simple and effective method (Lig-PCR) for monitoring ligation reactions using PCR and primers that are common to many cloning vectors. Ligation mixtures can be used directly as templates, and the results can be analyzed by conventional gel electrophoresis. The PCR products are representative of the recombinant molecules created during ligation and of the corresponding transformants. The orientation of inserts can also be determined using an internal primer. The usefulness of this method has been demonstrated using ligation mixtures of two cDNAs derived from the salivary glands of Aedes aegypti mosquitoes. The method described here is sensitive and easy to perform compared with currently available methods. PMID:16136227

  4. A Comparison of Satellite-Based Multilayered Cloud Detection Methods

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Chang, Fu-Lung; Khaiyer, Mandana M.; Ayers, Jeffrey K.; Palikonda, Rabindra; Nordeen, Michele L.; Spangenberg, Douglas A.

    2007-01-01

    Both techniques show skill in detecting multilayered clouds, but they disagree more than 50% of the time. The BTD method tends to detect more multilayered clouds than the CO2 method and has slightly higher detection accuracy; the CO2 method might be better for minimizing false positives, but further study is needed. Neither method has been optimized for GOES data: the BTD technique was developed on AVHRR, which has better BTD signals and resolution, while the CO2 method was developed on MODIS, which has better resolution and four CO2 channels. Many additional comparisons with ARSCL data will be used to optimize both techniques, and a combined technique will be examined using MODIS and Meteosat-8 data. After optimization, the techniques will be implemented in the ARM operational satellite cloud processing.

  5. Improved color interpolation method based on Bayer image

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2012-10-01

    Image sensors are important components of lunar exploration devices. Considering volume and cost, image sensors at present generally adopt a single CCD or CMOS whose surface is covered with a layer of color filter array (CFA), usually a Bayer CFA. In a Bayer CFA each pixel captures only one of the three color channels, so color interpolation is necessary to obtain a full-color image. An improved Bayer image interpolation method is presented which is novel, practical, and easy to implement. Experimental results demonstrating the effect of the interpolation are shown. Compared with classic methods, this method finds image edges more accurately, reduces sawtooth artifacts in edge areas, and keeps the image smooth elsewhere. The method has been applied successfully in an exploration imaging system.
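    For reference, the baseline bilinear demosaic that such improved methods start from (the edge-adaptive step of the paper is not reproduced here):

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def demosaic_bilinear(raw):
        """Baseline bilinear demosaicing of an RGGB Bayer image via masked
        convolution; improved methods add edge-direction tests first."""
        h, w = raw.shape
        r_m = np.zeros((h, w)); r_m[0::2, 0::2] = 1
        g_m = np.zeros((h, w)); g_m[0::2, 1::2] = 1; g_m[1::2, 0::2] = 1
        b_m = np.zeros((h, w)); b_m[1::2, 1::2] = 1

        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

        def interp(mask, kernel):
            # averaging neighbours of the sampled sites fills the missing pixels
            return convolve2d(raw * mask, kernel, mode='same', boundary='symm')

        return np.dstack([interp(r_m, k_rb), interp(g_m, k_g), interp(b_m, k_rb)])
    ```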

  6. Optimization based inversion method for the inverse heat conduction problems

    NASA Astrophysics Data System (ADS)

    Mu, Huaiping; Li, Jingtao; Wang, Xueyao; Liu, Shi

    2017-05-01

    Precise estimation of the thermal physical properties of materials, boundary conditions, heat flux distributions, heat sources and initial conditions is highly desired for real-world applications, and the inverse heat conduction problem (IHCP) analysis method provides an alternative approach for acquiring such parameters. The effectiveness of the inversion algorithm plays an important role in practical applications of the IHCP method. Different from traditional inversion models, a new inversion model that simultaneously accounts for the measurement errors and the inaccurate properties of the forward problem is proposed in this paper to improve inversion accuracy and robustness. A generalized cost function is constructed to convert the original IHCP into an optimization problem. An iterative scheme that splits a complicated optimization problem into several simpler sub-problems and integrates the strengths of the alternating optimization method and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is developed for solving the proposed cost function. Numerical experiment results validate the effectiveness of the proposed inversion method.

  7. Human body region enhancement method based on Kinect infrared imaging

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

    To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is utilized. Firstly, to improve the overall contrast of the infrared images acquired by Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images. Secondly, to better enhance the human body region, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, Laplacian Pyramid decomposition is adopted to further enhance the contour-improved human body region, while the background area without the human body is processed by bilateral filtering to improve the overall effect. Theoretical analysis and experimental verification show that the proposed method can effectively enhance the human body region of such infrared images.

  8. Respiratory Pattern Variability Analysis Based on Nonlinear Prediction Methods

    DTIC Science & Technology

    2007-11-02

    These methods use the volume signals generated by the respiratory system in order to construct a model of its dynamics, and then to estimate the predictability of the signal. Several prediction definitions have been considered, and the influence of different prediction depths and embedding dimensions has been analyzed. A group of 12 patients on

  9. Cleaning Verification Monitor Technique Based on Infrared Optical Methods

    DTIC Science & Technology

    2004-10-01

    Real-time methods to provide both qualitative and quantitative assessments of surface cleanliness are needed. The …detection VCPI method offers a wide range of complementary capabilities in real-time surface cleanliness verification, and also has great potential to reduce or eliminate premature failures of surface coatings caused by a lack of surface cleanliness.

  10. A Finger Vein Identification Method Based on Template Matching

    NASA Astrophysics Data System (ADS)

    Zou, Hui; Zhang, Bing; Tao, Zhigang; Wang, Xiaoping

    2016-01-01

    New methods for extracting vein features from finger vein images and generating templates for matching are proposed. In the template generation algorithm, we propose a parameter, the template quality factor (TQF), to measure the quality of the generated templates, so that fewer finger vein samples are needed to generate templates that meet the quality requirement for identification. The recognition accuracy of the proposed feature extraction and template generation strategy for identification is 97.14%.
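    Template matching itself can be as simple as zero-mean normalised cross-correlation against the enrolled templates; the TQF-based template generation is not reproduced here, and the acceptance threshold is an assumption.

    ```python
    import numpy as np

    def ncc(a, b):
        """Zero-mean normalised cross-correlation of two equally sized
        vein templates (binary or grey-level)."""
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def identify(probe, templates, accept=0.6):
        """Match a probe vein image against enrolled templates; the best score
        wins if it clears the acceptance threshold, otherwise reject."""
        scores = {sid: ncc(probe, t) for sid, t in templates.items()}
        best = max(scores, key=scores.get)
        return (best, scores[best]) if scores[best] >= accept else (None, scores[best])
    ```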

  11. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs applications, and it provides better security than recognition of other human features such as fingerprints or faces. Iris image quality is crucial to recognition performance, so reliable image quality assessment is necessary for evaluating iris images. However, there is no uniform criterion for image quality assessment. Image quality assessment comprises objective and subjective evaluation methods; in practice, subjective evaluation is laborious and not effective for iris recognition, so objective evaluation should be used. Exploiting the multi-scale and selectivity characteristics of the human visual system (HVS) model, this paper presents a new iris image quality assessment method: the region of interest (ROI) is found, wavelet-transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In the experiments, both objective and subjective evaluation methods are used to assess iris images, and the results show that the method is effective for iris image quality assessment.

  12. Comparison of two PCR-based human papillomavirus genotyping methods.

    PubMed

    Castle, Philip E; Porras, Carolina; Quint, Wim G; Rodriguez, Ana Cecilia; Schiffman, Mark; Gravitt, Patti E; González, Paula; Katki, Hormuzd A; Silva, Sandra; Freer, Enrique; Van Doorn, Leen-Jan; Jiménez, Silvia; Herrero, Rolando; Hildesheim, Allan

    2008-10-01

    We compared two consensus primer PCR human papillomavirus (HPV) genotyping methods for the detection of individual HPV genotypes and carcinogenic HPV genotypes as a group, using a stratified sample of enrollment cervical specimens from sexually active women participating in the NCI/Costa Rica HPV16/18 Vaccine Efficacy Trial. For the SPF(10) method, DNA was extracted from 0.1% of the cervical specimen by using a MagNA Pure LC instrument, a 65-bp region of the HPV L1 gene was targeted for PCR amplification by using SPF(10) primers, and 25 genotypes were detected by reverse-line blot hybridization of the amplicons. For the Linear Array (LA) method, DNA was extracted from 0.5% of the cervical specimen by using an MDx robot, a 450-bp region of the HPV L1 gene was targeted for PCR amplification by using PGMY09/11 L1 primers, and 37 genotypes were detected by reverse-line blot hybridization of the amplicons. Specimens (n = 1,427) for testing by the LA method were randomly selected from strata defined on the basis of enrollment test results from the SPF(10) method, cytology, and Hybrid Capture 2. LA results were extrapolated to the trial cohort (n = 5,659). The LA and SPF(10) methods detected 21 genotypes in common; HPV16, -18, -31, -33, -35, -39, -45, -51, -52, -56, -58, -59, -66, -68, and -73 were considered the carcinogenic HPV genotypes. There was no difference in the overall results for grouped detection of carcinogenic HPV by the SPF(10) and LA methods (35.3% versus 35.9%, respectively; P = 0.5), with a 91.8% overall agreement and a kappa value of 0.82. In comparisons of individual HPV genotypes, the LA method detected significantly more HPV16, HPV18, HPV39, HPV58, HPV59, HPV66, and HPV68/73 and less HPV31 and HPV52 than the SPF(10) method; inclusion of genotype-specific testing for HPV16 and HPV18 for those specimens testing positive for HPV by the SPF(10) method but for which no individual HPV genotype was detected abrogated any differences between the LA and SPF

  13. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied comparatively, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of the CS-based tracker, a comparative study of tracking accuracy and speed against six state-of-the-art trackers, namely the particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
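    A generic cuckoo search with Mantegna-style Lévy flights; in the tracker, the objective f would score a candidate target state against the appearance model, whereas here it is a stand-in quadratic.

    ```python
    import numpy as np
    from scipy.special import gamma as G

    rng = np.random.default_rng(3)
    BETA = 1.5
    # Mantegna's algorithm for Lévy-stable step lengths
    SIGMA = (G(1 + BETA) * np.sin(np.pi * BETA / 2) /
             (G((1 + BETA) / 2) * BETA * 2 ** ((BETA - 1) / 2))) ** (1 / BETA)

    def levy(size):
        u = rng.normal(0, SIGMA, size)
        v = rng.normal(0, 1, size)
        return u / np.abs(v) ** (1 / BETA)

    def cuckoo_search(f, dim, n_nests=15, pa=0.25, iters=200, scale=0.1):
        nests = rng.uniform(-1, 1, (n_nests, dim))
        fit = np.array([f(n) for n in nests])
        for _ in range(iters):
            best = nests[np.argmin(fit)]
            # Lévy-flight moves biased toward the current best nest
            new = nests + scale * levy((n_nests, dim)) * (nests - best)
            new_fit = np.array([f(n) for n in new])
            improve = new_fit < fit
            nests[improve], fit[improve] = new[improve], new_fit[improve]
            # abandon a fraction pa of the worst nests
            n_drop = int(pa * n_nests)
            worst = np.argsort(fit)[-n_drop:]
            nests[worst] = rng.uniform(-1, 1, (n_drop, dim))
            fit[worst] = np.array([f(n) for n in nests[worst]])
        return nests[np.argmin(fit)], fit.min()

    best, val = cuckoo_search(lambda s: np.sum((s - 0.3) ** 2), dim=4)
    print(best, val)
    ```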

  14. A Cluster-Based Method for Test Construction. Research Report 88-3.

    ERIC Educational Resources Information Center

    Boekkooi-Timminga, Ellen

    A new test construction method based on integer linear programming is described. This method selects optimal tests in small amounts of computer time. The new method, called the Cluster-Based Method, assumes that the items in the bank have been grouped according to their item information curves so that items within a group, or cluster, are…

  15. Silicon-Based Anode and Method for Manufacturing the Same

    NASA Technical Reports Server (NTRS)

    Yushin, Gleb Nikolayevich (Inventor); Luzinov, Igor (Inventor); Zdyrko, Bogdan (Inventor); Magasinski, Alexandre (Inventor)

    2017-01-01

    A silicon-based anode comprising silicon, a carbon coating that coats the surface of the silicon, a polyvinyl acid that binds to at least a portion of the silicon, and vinylene carbonate that seals the interface between the silicon and the polyvinyl acid. Because of its properties, polyvinyl acid binders offer improved anode stability, tunable properties, and many other attractive attributes for silicon-based anodes, which enable the anode to withstand silicon cycles of expansion and contraction during charging and discharging.

  16. Seamless Method- and Model-based Software and Systems Engineering

    NASA Astrophysics Data System (ADS)

    Broy, Manfred

    Today, engineering software-intensive systems is still more or less a handicraft or, at most, at the level of manufacturing. Many steps are done ad hoc and not in a fully systematic way. The applied methods, if any, are not scientifically justified or supported by empirical data, and as a result carrying out large software projects is still an adventure. However, there is no reason why the development of software-intensive systems cannot be done in the future with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods, together with comprehensive tool support, as well as deep insight into the obstacles of developing software-intensive systems and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when each method is ready for application. In the following we argue that there is scientific evidence, and there are enough research results so far, to be confident that solid engineering of software-intensive systems can be achieved in the future. However, quite a number of scientific research problems still have to be solved.

  17. A speaker change detection method based on coarse searching

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-yuan; He, Qian-hua; Li, Yan-xiong; He, Jun

    2013-03-01

    The conventional speaker change detection (SCD) method using the Bayesian Information Criterion (BIC) has been widely used; however, its performance relies on the choice of the penalty factor, and it suffers from heavy computation. Two-step SCD is less time-consuming but generates more detection errors. The performance limitation of the conventional method originates from its two adjacent data windows. We propose a strategy that inserts an interval between the two adjacent fixed-size data windows in each analysis window. The dissimilarity value between the data windows is regarded as the probability of a speaker identity change within the interval area. The analysis window is slid along the audio in large steps to locate areas where speaker change points may appear; we then focus only on these areas to locate the change points precisely, while areas where a speaker change point is unlikely are abandoned. The proposed method is computationally efficient and more robust to noise and to the penalty factor than the conventional method. Evaluated on a corpus of China Central Television (CCTV) news, the proposed method obtains a 74.18% reduction in computation time and a 22.24% improvement in F1-measure compared with the conventional approach.
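    The ΔBIC dissimilarity and the coarse scan could be sketched as follows; window, gap, and step sizes are illustrative, and the refinement pass over the flagged intervals is left out.

    ```python
    import numpy as np

    def delta_bic(X1, X2, lam=1.0):
        """Delta-BIC for a speaker change between two feature windows under
        single full-covariance Gaussian models; positive values favour a change."""
        X = np.vstack([X1, X2])
        n, n1, n2, d = len(X), len(X1), len(X2), X.shape[1]
        logdet = lambda M: np.linalg.slogdet(np.cov(M, rowvar=False))[1]
        penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
        return 0.5 * (n * logdet(X) - n1 * logdet(X1) - n2 * logdet(X2)) - lam * penalty

    def coarse_scan(features, win=300, gap=100, step=50, thresh=0.0):
        """Coarse search: slide [win | gap | win] along the audio and flag
        intervals whose dissimilarity suggests a change inside the gap."""
        hits = []
        for start in range(0, len(features) - 2 * win - gap, step):
            X1 = features[start:start + win]
            X2 = features[start + win + gap:start + 2 * win + gap]
            if delta_bic(X1, X2) > thresh:
                hits.append((start + win, start + win + gap))  # refine later
        return hits
    ```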

  18. Bootstrap embedding: An internally consistent fragment-based method.

    PubMed

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy

    2016-08-21

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems.

  19. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by the optical distortion of a digital projector always affect the precision and cannot be ignored. Existing methods of calibrating projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and of the imaging system. This paper proposes a new projector calibration approach that uses photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
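    Fitting a polynomial distortion map to point correspondences is a small least-squares problem; the sketch below assumes ideal-versus-measured pixel pairs such as those the photodiode module would provide, and the polynomial order is an arbitrary choice.

    ```python
    import numpy as np

    def poly_terms(x, y, order=3):
        """Bivariate polynomial design matrix up to the given total order."""
        return np.column_stack([x ** i * y ** j
                                for i in range(order + 1)
                                for j in range(order + 1 - i)])

    def fit_distortion(ideal_xy, measured_xy, order=3):
        """Least-squares fit of a polynomial distortion map from ideal
        projector pixels to the coordinates recovered from the photodiodes."""
        A = poly_terms(ideal_xy[:, 0], ideal_xy[:, 1], order)
        coef, *_ = np.linalg.lstsq(A, measured_xy, rcond=None)
        return coef                   # shape: (n_terms, 2)

    def apply_distortion(coef, xy, order=3):
        return poly_terms(xy[:, 0], xy[:, 1], order) @ coef
    ```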

  20. Bootstrap embedding: An internally consistent fragment-based method

    NASA Astrophysics Data System (ADS)

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy

    2016-08-01

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems.