Science.gov

Sample records for fvm-bem method based

  1. Classification method based on KCCA

    NASA Astrophysics Data System (ADS)

    Wang, Zhanqing; Zhang, Guilin; Zhao, Guangzhou

    2007-11-01

    Nonlinear CCA extends linear CCA in that it operates in the kernel space and thus captures nonlinear combinations in the original space. This paper presents a classification method based on kernel canonical correlation analysis (KCCA). We introduce probabilistic label vectors (PLV) for a given pattern, which extend the conventional concept of a class label, and investigate the correlation between feature variables and PLV variables. A PLV predictor based on KCCA is presented, and classification is then performed on the predicted PLV. We formulate a framework for classification by integrating class information through PLV. Experimental results on Iris data set classification and facial expression recognition show the efficiency of the proposed method.
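
    A minimal sketch of the classify-by-predicted-PLV idea follows. Kernel ridge regression stands in for the paper's KCCA-based predictor, and all names and parameter values are illustrative assumptions.

        # Predict a probabilistic label vector (PLV) for each test pattern and
        # classify by its largest component. Kernel ridge regression is a
        # stand-in for the paper's KCCA-based PLV predictor; gamma and lam are
        # illustrative values, not taken from the paper.
        import numpy as np

        def rbf_kernel(A, B, gamma=0.5):
            sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2 * A @ B.T
            return np.exp(-gamma * sq)

        def fit_plv_predictor(X, y, n_classes, lam=1e-2):
            Y = np.eye(n_classes)[y]                  # one-hot vectors play the PLV role
            K = rbf_kernel(X, X)
            return np.linalg.solve(K + lam * np.eye(len(X)), Y)

        def classify(X_train, alpha, X_test):
            plv = rbf_kernel(X_test, X_train) @ alpha  # predicted PLVs
            return plv.argmax(axis=1)                  # class with largest PLV component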

  2. DOM Based XSS Detecting Method Based on Phantomjs

    NASA Astrophysics Data System (ADS)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because malicious code does not appear in the HTML source code, DOM-based XSS cannot be detected by traditional methods. By analyzing the causes of DOM-based XSS, this paper proposes a detection method for DOM-based XSS based on PhantomJS. The method uses function hijacking to detect dangerous operations, and a prototype system has been implemented. Comparison with existing tools shows that the system improves the detection rate and that the method is effective in detecting DOM-based XSS.

  3. Impulse-based methods for fluid flow

    SciTech Connect

    Cortez, R.

    1995-05-01

    A Lagrangian numerical method based on impulse variables is analyzed. A relation between impulse vectors and vortex dipoles with a prescribed dipole moment is presented. This relation is used to adapt the high-accuracy cutoff functions of vortex methods for use in impulse-based methods. A source of error in the long-time implementation of the impulse method is explained and two techniques for avoiding this error are presented. An application of impulse methods to the motion of a fluid surrounded by an elastic membrane is presented.

  4. A new automatic baseline correction method based on iterative method

    NASA Astrophysics Data System (ADS)

    Bao, Qingjia; Feng, Jiwen; Chen, Fang; Mao, Wenping; Liu, Zao; Liu, Kewen; Liu, Chaoyang

    2012-05-01

    A new automatic baseline correction method for Nuclear Magnetic Resonance (NMR) spectra is presented. It is based on an improved baseline recognition method and a new iterative baseline modeling method. The baseline recognition method combines the advantages of three baseline recognition algorithms in order to recognize all signals in the spectra. In the iterative baseline modeling method, besides the well-recognized baseline points in signal-free regions, 'quasi-baseline points' in the signal-crowded regions are also identified and then utilized to improve robustness by preventing negative regions. Experimental results on both simulated data and real metabolomics spectra with overcrowded peaks show the efficiency of this automatic method.
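
    The iterative idea (fit a smooth baseline model, clamp points that rise above it, refit) can be sketched as follows; the low-order polynomial and all parameter values are illustrative stand-ins for the paper's baseline model.

        # Iterative baseline modeling sketch: points rising more than k*sigma
        # above the current fit are treated as signal and clamped, so the next
        # fit follows the baseline rather than the peaks.
        import numpy as np

        def iterative_baseline(spectrum, order=5, n_iter=20, k=2.0):
            x = np.arange(len(spectrum))
            work = spectrum.astype(float).copy()
            for _ in range(n_iter):
                model = np.polyval(np.polyfit(x, work, order), x)
                resid = work - model
                mask = resid > k * resid.std()
                work[mask] = model[mask] + k * resid.std()  # clamp signal points
            return model

        # usage: corrected = spectrum - iterative_baseline(spectrum)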

  5. METHOD OF JOINING CARBIDES TO BASE METALS

    DOEpatents

    Krikorian, N.H.; Farr, J.D.; Witteman, W.G.

    1962-02-13

    A method is described for joining a refractory metal carbide such as UC or ZrC to a refractory metal base such as Ta or Nb. The method comprises carburizing the surface of the metal base and then sintering the base and carbide at temperatures of about 2000 deg C in a non-oxidizing atmosphere, the base and carbide being held in contact during the sintering step. To reduce the sintering temperature and time, a sintering aid such as iron, nickel, or cobalt is added to the carbide, not to exceed 5 wt%. (AEC)

  6. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Consequently, these methods can only predict the most probable faulty sensors, and the predictions are subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concept of analytical redundancy relations (ARRs).

  7. Manifold based methods in facial expression recognition

    NASA Astrophysics Data System (ADS)

    Xie, Kun

    2013-07-01

    This paper describes a novel method for facial expression recognition based on non-linear manifold techniques. Graph-based algorithms are designed to exploit structure in data and regularize accordingly. This goal is shared by several other algorithms, from the linear method of principal component analysis (PCA) to modern variants such as Laplacian eigenmaps. In this paper we focus on manifold learning for dimensionality reduction and clustering using Laplacian eigenmaps for facial expression recognition. We evaluate the algorithm using all pixels and selected features respectively, and compare the performance of the proposed non-linear manifold method with the previous linear manifold approach; the non-linear method produces a higher recognition rate than facial expression representations using linear methods.

  8. Annular subaperture stitching method based on autocollimation

    NASA Astrophysics Data System (ADS)

    Yiwei, Chen; Erlong, Miao; Yongxin, Sui; Huaijiang, Yang

    2014-11-01

    In this paper, we propose an annular subaperture stitching method based on an autocollimation method to relax the requirements on mechanical location accuracy. In this approach, we move a ball instead of the interferometer and the aspheric surface so that testing results for adjacent annular subapertures are registered. Thus, the stitching algorithm can easily stitch the subaperture testing results together when large mechanical location errors exist. To verify this new method, we perform a simulation experiment. The simulation results demonstrate that this method can stitch together the subaperture testing results under large mechanical location errors.

  9. A Property Restriction Based Knowledge Merging Method

    NASA Astrophysics Data System (ADS)

    Che, Haiyan; Chen, Wei; Feng, Tie; Zhang, Jiachen

    Merging new instance knowledge, extracted from the Web according to a domain ontology, into the knowledge base (KB for short) is essential for knowledge management and should be processed carefully, since it may introduce redundant or contradictory knowledge; the quality of the knowledge in the KB, which is crucial for a knowledge-based system to provide users with high-quality services, would suffer from such "bad" knowledge. This paper advocates a property restriction based knowledge merging method that can identify equivalent instances and redundant or contradictory knowledge according to the property restrictions defined in the domain ontology, consolidate the knowledge about equivalent instances, and discard the redundancy and conflicts to keep the KB compact and consistent. This knowledge merging method has been used in a semantic-based search engine project, CRAB, with satisfactory results.

  10. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Yin, Xin-Chun; Chen, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce, and ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid ad clicks being lost for objective reasons and can track changes in the user's interests over time. Experiments show that the new method has a significant effect and can further be applied in online systems.

  11. Bayesian individualization via sampling-based methods.

    PubMed

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
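
    A minimal sketch of the Monte Carlo step described above, under an assumed one-compartment steady-state model and a quadratic loss (neither is taken from the paper):

        # Draw pharmacokinetic parameters from the patient's posterior, estimate
        # the expected loss for each candidate dose by Monte Carlo, and pick the
        # minimizer. The lognormal "posterior" is a stand-in for samples from
        # the paper's sampling scheme; target, tau and the dose grid are
        # illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def expected_loss(dose, cl_samples, target=10.0, tau=12.0):
            conc = dose / (cl_samples * tau)        # steady-state average concentration
            return np.mean((conc - target) ** 2)    # Monte Carlo quadratic loss

        cl_samples = rng.lognormal(mean=0.0, sigma=0.3, size=2000)  # clearance samples
        doses = np.linspace(50, 500, 46)
        best_dose = doses[np.argmin([expected_loss(d, cl_samples) for d in doses])]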

  12. Topology based methods for vector field comparisons

    NASA Astrophysics Data System (ADS)

    Batra, Rajesh Kumar

    Vector fields are commonly found in almost all branches of the physical sciences. Aerodynamics, dynamical systems, electromagnetism, and global climate modeling are a few examples. These multivariate data fields are often large, and no general, automated method exists for comparing these fields. Existing methods require either subjective visual judgments, or data interface compatibility, or domain specific knowledge. A topology based method intrinsically eliminates all of the above limitations and has the additional advantage of significantly compressing the vector field by representing only key features of the flow. Therefore, large databases are compactly represented and quickly searched. Topology is a natural framework for the study of many vector fields. It provides rules of an organizing principle, a flow grammar, that can describe and connect together the properties common to flows. Helman and Hesselink first introduced automated methods to extract and visualize this grammar. This work extends their method by introducing automated methods for vector topology comparison. Basic two-dimensional flows are first compared. The theory is extended to compare three-dimensional flow fields and the topology on no-slip surfaces. Concepts from graph theory and linear programming are utilized to solve these problems. Finally, the first automated method for higher order singularity comparisons is introduced using mathematical theories from geometric (Clifford) algebra.

  13. A multicore based parallel image registration method.

    PubMed

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search, which is often computationally expensive. We introduce a nonregular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
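
    The landmark-grouping step can be sketched as follows; the plain k-means loop and all sizes are illustrative, with one cluster per processing core.

        # Partition landmarks into spatially coherent groups, one per core, so
        # that correspondence searches run independently per group.
        import numpy as np

        def kmeans(points, k, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            centers = points[rng.choice(len(points), k, replace=False)]
            for _ in range(n_iter):
                d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
                labels = d.argmin(axis=1)            # nearest center per landmark
                for j in range(k):
                    if (labels == j).any():
                        centers[j] = points[labels == j].mean(axis=0)
            return labels

        landmarks = np.random.rand(500, 2)           # 2D landmark coordinates
        groups = kmeans(landmarks, k=8)              # e.g. 8 available cores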

  14. Lagrangian based methods for coherent structure detection.

    PubMed

    Allshouse, Michael R; Peacock, Thomas

    2015-09-01

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows. PMID:26428570

  15. Lagrangian based methods for coherent structure detection

    SciTech Connect

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  16. Chapter 11. Community analysis-based methods

    SciTech Connect

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  17. A flocking based method for brain tractography.

    PubMed

    Aranda, Ramon; Rivera, Mariano; Ramirez-Manzanares, Alonso

    2014-04-01

    We propose a new method to estimate axonal fiber pathways from Multiple Intra-Voxel Diffusion Orientations. Our method uses the multiple local orientation information to lead stochastic walks of particles. These stochastic particles are modeled with mass and are thus subject to gravitational and inertial forces. As a result, we obtain smooth, filtered and compact trajectory bundles. This gravitational interaction can be seen as a flocking behavior among particles that promotes better and more robust axon fiber estimates, because the particles use collective information to move. However, the stochastic walks may generate paths with low support (outliers), generally associated with incorrect brain connections. In order to eliminate the outlier pathways, we propose a filtering procedure based on principal component analysis and spectral clustering. The performance of the proposed method is evaluated on Multiple Intra-Voxel Diffusion Orientations from two realistic numerical diffusion phantoms and a physical diffusion phantom. Additionally, we qualitatively demonstrate the performance on in vivo human brain data. PMID:24583805

  18. Method for extruding pitch based foam

    SciTech Connect

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch based foam is disclosed. The method includes the steps of: forming a viscous pitch foam precursor; passing the precursor through an extrusion tube; and subjecting the precursor in said extrusion tube to a temperature gradient which varies along the length of the extrusion tube to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber in which a viscous pitch foam is formed, the foam passing through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.

  19. Homogenization method based on the inverse problem

    SciTech Connect

    Tota, A.; Makai, M.

    2013-07-01

    We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, provided that the fluxes and the currents on the external boundary, and the region-averaged fluxes, are preserved. The method is developed using the diffusion approximation to the neutron transport equation in a symmetrical slab geometry. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined. The first derives the boundary current from the boundary flux; the second derives the flux integral over the region from the boundary flux. Assuming that these RMs are known, we present a formula which reconstructs the multi-group cross-section matrix and the diffusion coefficients from the RMs of a homogeneous slab. Applying this formula to the RMs of a slab with multiple homogeneous regions yields a homogenization method which produces homogenized multi-group cross sections and diffusion coefficients such that the fluxes and the currents on the external boundary, and the region-averaged fluxes, are preserved. The method is based on the determination of the eigenvalues and the eigenvectors of the RMs. We reproduce the four-group cross-section matrix and the diffusion constants from the RMs in numerical examples. We give conditions for replacing a heterogeneous region by a homogeneous one so that the boundary current and the region-averaged flux are preserved for a given boundary flux. (authors)

  20. Dreamlet-based interpolation using POCS method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Geng, Yu; Chen, Xiaohong

    2014-10-01

    Due to incomplete and non-uniform coverage of the acquisition system and dead traces, real seismic data always contain missing traces, which affect the performance of multi-channel algorithms such as Surface-Related Multiple Elimination (SRME), imaging and inversion. Therefore, it is necessary to interpolate seismic data. The dreamlet transform has been successfully used in the modeling of seismic wave propagation and imaging, and this paper examines its application to seismic data interpolation. In order to avoid spatial aliasing in the transform domain and thus allow an arbitrary under-sampling rate, an improved jittered under-sampling strategy is proposed to better control the dataset. With an L0 constraint and the Projection Onto Convex Sets (POCS) method, the performance of dreamlet-based and curvelet-based interpolation is compared in terms of recovered signal-to-noise ratio (SNR) and convergence rate. Tests on synthetic and real cases demonstrate that the dreamlet transform has superior performance to the curvelet transform.
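
    A minimal POCS sketch, with a 2D FFT standing in for the dreamlet transform and an assumed linearly decaying threshold schedule:

        # Project alternately onto the sparsity constraint (transform-domain
        # thresholding) and the data constraint (re-inserting observed traces).
        import numpy as np

        def pocs_interpolate(data, mask, n_iter=100):
            # data: section with zeros at missing traces; mask: 1.0 = observed
            x = data.copy()
            t_max = np.abs(np.fft.fft2(data)).max()
            for i in range(n_iter):
                thresh = t_max * (1.0 - i / n_iter)   # decaying threshold
                C = np.fft.fft2(x)
                C[np.abs(C) < thresh] = 0             # L0-style projection
                x = np.real(np.fft.ifft2(C))
                x = data * mask + x * (1 - mask)      # keep observed traces
            return x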

  1. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  2. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  3. Numerical manifold method based on the method of weighted residuals

    NASA Astrophysics Data System (ADS)

    Li, S.; Cheng, Y.; Wu, Y.-F.

    2005-05-01

    Usually, the governing equations of the numerical manifold method (NMM) are derived from the minimum potential energy principle. For many applied problems it is difficult to derive at the outset the functional forms of the governing equations. This strongly restricts the implementation of the minimum potential energy principle or other variational principles in NMM. In fact, the governing equations of NMM can be derived from the more general method of weighted residuals. By choosing suitable weight functions, the derivation of the governing equations of the NMM from the weighted residual method leads to the same result as that derived from the minimum potential energy principle. This is demonstrated in the paper by deriving the governing equations of the NMM for linear elasticity problems, and also for Laplace's equation, for which the governing equations of the NMM cannot be derived from the minimum potential energy principle. The performance of the method is illustrated by three numerical examples.
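
    For reference, a compact statement of the weighted-residual formulation the abstract refers to, in standard notation (not taken from the paper): for a governing equation L(u) = f on a domain Omega, the method of weighted residuals seeks an approximation u_h whose residual is orthogonal to a chosen set of weight functions w_i,

        \int_\Omega w_i \, \bigl( L(u_h) - f \bigr) \, d\Omega = 0, \qquad i = 1, \dots, n.

    Choosing the w_i as the NMM cover (partition-of-unity) functions yields the discrete NMM equations directly, which is how the approach extends to problems for which no potential functional is available.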

  4. Subjective evidence based ethnography: method and applications.

    PubMed

    Lahlou, Saadi; Le Bellu, Sophie; Boesen-Mariani, Sabine

    2015-06-01

    Subjective Evidence Based Ethnography (SEBE) is a method designed to access subjective experience. It uses First Person Perspective (FPP) digital recordings as a basis for analytic Replay Interviews (RIW) with the participants. This triggers their memory and enables a detailed, step-by-step understanding of activity: goals, subgoals, determinants of actions, decision-making processes, etc. This paper describes the technique and two applications. First, the analysis of professional practices for know-how transfer purposes in industry is illustrated with the analysis of nuclear power-plant operators' gestures. This shows how SEBE enables modelling activity and describing good and bad practices, risky situations, and expert tacit knowledge. Second, the analysis of full days lived by Polish mothers taking care of their children is described, with a specific focus on how they manage their eating and drinking. This research was done on a sub-sample of a large-scale intervention designed to increase plain water drinking versus sweet beverages. It illustrates the value of SEBE as an exploratory technique complementing more classic approaches such as questionnaires and behavioural diaries. It provides the detailed "how" of the effects that are measured at an aggregate level by other techniques.

  5. Subjective evidence based ethnography: method and applications.

    PubMed

    Lahlou, Saadi; Le Bellu, Sophie; Boesen-Mariani, Sabine

    2015-06-01

    Subjective Evidence Based Ethnography (SEBE) is a method designed to access subjective experience. It uses First Person Perspective (FPP) digital recordings as a basis for analytic Replay Interviews (RIW) with the participants. This triggers their memory and enables a detailed, step-by-step understanding of activity: goals, subgoals, determinants of actions, decision-making processes, etc. This paper describes the technique and two applications. First, the analysis of professional practices for know-how transfer purposes in industry is illustrated with the analysis of nuclear power-plant operators' gestures. This shows how SEBE enables modelling activity and describing good and bad practices, risky situations, and expert tacit knowledge. Second, the analysis of full days lived by Polish mothers taking care of their children is described, with a specific focus on how they manage their eating and drinking. This research was done on a sub-sample of a large-scale intervention designed to increase plain water drinking versus sweet beverages. It illustrates the value of SEBE as an exploratory technique complementing more classic approaches such as questionnaires and behavioural diaries. It provides the detailed "how" of the effects that are measured at an aggregate level by other techniques. PMID:25579747

  6. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  7. Iterative methods based upon residual averaging

    NASA Technical Reports Server (NTRS)

    Neuberger, J. W.

    1980-01-01

    Iterative methods for solving boundary value problems for systems of nonlinear partial differential equations are discussed. The methods involve subtracting an average of residuals from one approximation in order to arrive at a subsequent approximation. Two abstract methods in Hilbert space are given and application of these methods to quasilinear systems to give numerical schemes for such problems is demonstrated. Potential theoretic matters related to the iteration schemes are discussed.

  8. Multifractal Framework Based on Blanket Method

    PubMed Central

    Paskaš, Milorad P.; Reljin, Irini S.; Reljin, Branimir D.

    2014-01-01

    This paper proposes local multifractal measures motivated by the blanket method for calculating fractal dimension. They cover both fractal approaches familiar in image processing: two of the measures (proposed Methods 1 and 3) support a model of the image with embedding dimension three, while the other supports a model of the image embedded in a space of dimension three (proposed Method 2). While the classical blanket method provides only one value for an image (the fractal dimension), the multifractal spectrum obtained by any of the proposed measures gives a whole range of dimensional values. This means that the proposed multifractal blanket model generalizes the classical (monofractal) blanket method and other locally implemented versions of this monofractal approach. The proposed measures are validated on the Brodatz image database through texture classification. All proposed methods give similar classification results, while the average computation time of Method 3 is substantially longer. PMID:24578664
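
    A sketch of the classical (monofractal) blanket computation that the proposed local measures build on; the window size and scale range are illustrative.

        # Grow an upper blanket u and lower blanket b around the gray-level
        # surface; the blanket area A(k) at scale k behaves like k^(2 - D),
        # so the fractal dimension D comes from the slope of log A vs log k.
        import numpy as np
        from scipy.ndimage import grey_dilation, grey_erosion

        def blanket_dimension(img, k_max=10):
            u = img.astype(float).copy()
            b = img.astype(float).copy()
            areas = []
            for k in range(1, k_max + 1):
                u = np.maximum(u + 1, grey_dilation(u, size=(3, 3)))
                b = np.minimum(b - 1, grey_erosion(b, size=(3, 3)))
                areas.append((u - b).sum() / (2.0 * k))
            scales = np.arange(1, k_max + 1)
            slope = np.polyfit(np.log(scales), np.log(areas), 1)[0]
            return 2.0 - slope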

  9. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, Andrew M.; Dawson, John

    1993-01-01

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.

  10. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, A.M.; Dawson, J.

    1993-12-14

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.

  11. New ITF measure method based on fringes

    NASA Astrophysics Data System (ADS)

    Fang, Qiaoran; Liu, Shijie; Gao, Wanrong; Zhou, You; Liu, HuanHuan

    2016-01-01

    With the rapid development of intense-laser and aerospace projects, interferometers are widely used to measure mid-spatial-frequency indicators of optical elements, which places very high demands on the interferometer system transfer function (ITF). Conventionally, the ITF is measured by comparing the power spectra of known phase objects such as a high-quality phase step. However, the fabrication of a phase step is complex and costly, especially for the measurement of large-aperture interferometers. In this paper, a new fringe-based method is proposed to measure the ITF without additional objects. The spatial frequency is changed by adjusting the number of fringes, and the normalized transfer function value is measured at different frequencies. The ITF value measured by the fringe method is consistent with that of the traditional phase-step method, which confirms the feasibility of the proposed method. Moreover, the measurement error caused by defocus is analyzed. The proposed method does not require the preparation of a step artifact, which greatly reduces the test cost, and is of great significance for the ITF measurement of large-aperture interferometers.

  12. HMM-Based Gene Annotation Methods

    SciTech Connect

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  13. Method of casting pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A process for producing molded pitch based foam is disclosed which minimizes cracking. The process includes forming a viscous pitch foam in a container, and then transferring the viscous pitch foam from the container into a mold. The viscous pitch foam in the mold is hardened to provide a carbon foam having a relatively uniform distribution of pore sizes and a highly aligned graphitic structure in the struts.

  14. Roadside-based communication system and method

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron D. (Inventor)

    2007-01-01

    A roadside-based communication system providing backup communication between emergency mobile units and emergency command centers. In the event of failure of a primary communication, the mobile units transmit wireless messages to nearby roadside controllers that may take the form of intersection controllers. The intersection controllers receive the wireless messages, convert the messages into standard digital streams, and transmit the digital streams along a citywide network to a destination intersection or command center.

  15. Method for producing iron-based catalysts

    DOEpatents

    Farcasiu, Malvina; Kaufman, Phillip B.; Diehl, J. Rodney; Kathrein, Hendrik

    1999-01-01

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  16. [Culture based diagnostic methods for tuberculosis].

    PubMed

    Baylan, Orhan

    2005-01-01

    Culture methods, which provide isolates for identification and drug susceptibility testing, still represent the gold standard for the definitive diagnosis of tuberculosis, although the delay in obtaining results remains a problem. Traditional solid media are recommended for use along with liquid media in the primary isolation of mycobacteria. At present, a number of elaborate culture systems are available commercially. They range from simple bottles and tubes such as MGIT (BD Diagnostic Systems, USA), Septi-Chek AFB (BD, USA) and MB Redox (Biotest Diagnostics, USA) to a semiautomated system (BACTEC 460TB, BD, USA) and fully automated systems (BACTEC 9000 MB [BD, USA], BACTEC MGIT 960 [BD, USA], ESP Culture System II [Trek Diagnostics, USA], MB/BacT ALERT 3D System [BioMérieux, NC], TK Culture System [Salubris Inc, Turkey]). The culture methods available today are sufficient to permit laboratories to develop an algorithm that is optimal for patient and administrative needs. In this review article, the culture systems used for the diagnosis of tuberculosis, their mechanisms, and their advantages and disadvantages are discussed in the light of recent literature.

  17. PCLC flake-based apparatus and method

    DOEpatents

    Cox, Gerald P; Fromen, Cathy A; Marshall, Kenneth L; Jacobs, Stephen D

    2012-10-23

    A PCLC flake/fluid host suspension that enables dual-frequency, reverse-drive reorientation and relaxation of the PCLC flakes is composed of a fluid host that is a mixture of: 94 to 99.5 wt% of a non-aqueous fluid medium having a dielectric constant ε, where 1 < ε < 7, a conductivity σ in the range 10^-9 to 10^-7 Siemens per meter (S/m), and a resistivity r in the range 10^7 to 10^10 ohm-meters (Ω·m), and which is optically transparent in a selected wavelength range Δλ; 0.0025 to 0.25 wt% of an inorganic chloride salt; 0.0475 to 4.75 wt% water; and 0.25 to 2 wt% of an anionic surfactant; and 1 to 5 wt% of PCLC flakes suspended in the fluid host mixture. Various encapsulation forms and methods are disclosed, including a basic test cell, a microwell, a microcube, direct encapsulation (I), direct encapsulation (II), and coacervation encapsulation. Applications to display devices are disclosed.

  18. Brain Based Teaching: Fad or Promising Teaching Method.

    ERIC Educational Resources Information Center

    Winters, Clyde A.

    This paper discusses brain-based teaching and examines its relevance as a teaching method and knowledge base. Brain-based teaching is very popular among early childhood educators. Positive attributes of brain-based education include student engagement and active involvement in their own learning, teachers teaching for meaning and understanding,…

  19. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background: Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than other types of methods on many testing microarray datasets. Results: To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Conclusions: Imputation of missing values is a very important aspect of microarray data analyses because most downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
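
    A minimal sketch of the shrinkage-regression idea (select the most correlated genes, fit least squares on the observed entries, shrink the coefficients, predict the missing value); the constant shrinkage factor is a crude stand-in for the paper's shrinkage estimator, and all sizes are illustrative.

        import numpy as np

        def impute(target, candidates, miss_idx, k=10, shrink=0.9):
            # target: expression vector with a missing entry at miss_idx
            # candidates: (g, m) matrix of genes with complete profiles
            obs = np.ones(len(target), dtype=bool)
            obs[miss_idx] = False
            corr = np.array([abs(np.corrcoef(target[obs], g[obs])[0, 1])
                             for g in candidates])
            top = np.argsort(corr)[::-1][:k]          # k most similar genes
            A = candidates[top][:, obs].T             # design matrix, observed entries
            beta, *_ = np.linalg.lstsq(A, target[obs], rcond=None)
            beta *= shrink                            # shrinkage adjustment
            return candidates[top][:, miss_idx] @ beta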

  20. New search method based on hash table and heuristic search method

    NASA Astrophysics Data System (ADS)

    Wang, He-Chen; Xian, Wu; Luo, Da L.; Song, Xiang

    1991-02-01

    The article puts forward a new method of dynamic search based on a hash table and a heuristic search method. The method can improve the speed of the search operation when full control knowledge about the solution space of objects is known. An example of using the search method to decode Huffman codes is discussed in detail.

  1. A Cluster-Based Method for Test Construction.

    ERIC Educational Resources Information Center

    Boekkooi-Timminga, Ellen

    1990-01-01

    A new test construction model based on the Rasch model is proposed. This model, the cluster-based method, considers groups of interchangeable items rather than individual items and uses integer programming. Results for six test construction problems indicate that the method produces accurate results in small amounts of time. (SLD)

  2. Pyrolyzed-parylene based sensors and method of manufacture

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Liger, Matthieu (Inventor); Miserendino, Scott (Inventor); Konishi, Satoshi (Inventor)

    2007-01-01

    A method (and resulting structure) for fabricating a sensing device. The method includes providing a substrate comprising a surface region and forming an insulating material overlying the surface region. The method also includes forming a film of carbon based material overlying the insulating material and treating the film to pyrolyze the carbon based material, causing formation of a film of substantially carbon based material having a resistivity within a predetermined range. The method also provides at least a portion of the pyrolyzed carbon based material in a sensor application and uses that portion in the sensing application. In a specific embodiment, the sensing application is selected from chemical, humidity, piezoelectric, radiation, mechanical strain or temperature sensing.

  3. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve this accuracy, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation; it outperforms the adaptive notch filter, discrete Fourier transform and autocorrelation methods in frequency estimation, and the data extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods in phase difference estimation, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
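
    A sketch of the correlation idea for the two sensor signals of a CMF, using analytic signals; it illustrates the principle rather than reproducing the paper's estimators, and fs and the signal names are assumptions.

        # Frequency from the unwrapped phase of one analytic signal; phase
        # difference from the zero-lag complex correlation of the two analytic
        # signals, which is largely insensitive to non-integer-period windows.
        import numpy as np
        from scipy.signal import hilbert

        def freq_and_phase_diff(s1, s2, fs):
            a1, a2 = hilbert(s1), hilbert(s2)
            phase1 = np.unwrap(np.angle(a1))
            freq = np.mean(np.diff(phase1)) * fs / (2 * np.pi)
            phase_diff = np.angle(np.vdot(a1, a2))   # angle of sum(conj(a1)*a2)
            return freq, phase_diff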

  4. Highly sensitive methods for electroanalytical chemistry based on nanotubule membranes.

    PubMed

    Kobayashi, Y; Martin, C R

    1999-09-01

    Two new methods of electroanalysis are described. These methods are based on membranes containing monodisperse Au nanotubules with inside diameters approaching molecular dimensions. In one method, the analyte species is detected by measuring the change in trans-membrane current when the analyte is added to the nanotubule-based cell. The second method entails the use of a concentration cell based on the nanotubule membrane. In this case, the change in membrane potential is used to detect the analyte. Detection limits as low as 10^-11 M have been achieved. Hence, these methods compete with even the most sensitive of modern analytical methodologies. In addition, excellent molecular-size-based selectivity is observed.

  5. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not give good image quality, since their thresholds cannot modify and remove too many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
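
    A NeighShrink-style sketch of the neighbouring-coefficient family discussed above: each detail coefficient is shrunk by a factor computed from the energy of its 3x3 neighbourhood. The wavelet, single-level decomposition and universal threshold are illustrative choices.

        import numpy as np
        import pywt
        from scipy.ndimage import uniform_filter

        def neigh_shrink(noisy, sigma, wavelet='db4'):
            cA, (cH, cV, cD) = pywt.dwt2(noisy, wavelet)
            lam2 = 2.0 * sigma**2 * np.log(noisy.size)    # squared universal threshold

            def shrink(c):
                S2 = uniform_filter(c**2, size=3) * 9.0   # 3x3 neighbourhood energy
                return c * np.maximum(1.0 - lam2 / np.maximum(S2, 1e-12), 0.0)

            return pywt.idwt2((cA, (shrink(cH), shrink(cV), shrink(cD))), wavelet)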

  6. New Robust Face Recognition Methods Based on Linear Regression

    PubMed Central

    Mi, Jian-Xun; Liu, Jin-Xing; Wen, Jiajun

    2012-01-01

    Nearest subspace (NS) classification based on the linear regression technique is a very straightforward and efficient method for face recognition. A recently developed NS method, the linear regression-based classification (LRC), uses downsampled face images as features to perform face recognition. The basic assumption behind this kind of method is that samples from a certain class lie on their own class-specific subspace. Since there are only a few training samples for each individual class, the small sample size (SSS) problem arises and gives rise to misclassification in previous NS methods. In this paper, we propose two novel LRC methods based on the idea that every class-specific subspace has its unique basis vectors. We consider that each class-specific subspace is spanned by two kinds of basis vectors: the common basis vectors shared by many classes and the class-specific basis vectors owned by one class only. Based on this concept, two classification methods, robust LRC 1 and 2 (RLRC 1 and 2), are given to achieve more robust face recognition. Unlike some previous methods which need to extract class-specific basis vectors, the proposed methods are developed merely based on the existence of the class-specific basis vectors, without actually calculating them. Experiments on three well-known face databases demonstrate very good performance of the new methods compared with other state-of-the-art methods. PMID:22879992
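
    The baseline LRC scheme that the proposed RLRC methods build on can be sketched in a few lines; this is the standard nearest-subspace computation, not the paper's new variants.

        import numpy as np

        def lrc_predict(class_matrices, y):
            # class_matrices: list of (d, n_c) arrays whose columns are
            # downsampled training images of one class; y: probe image, length d
            residuals = []
            for Xc in class_matrices:
                beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
                residuals.append(np.linalg.norm(y - Xc @ beta))
            return int(np.argmin(residuals))          # class with best reconstruction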

  7. An image restoration method based on sparse constraint

    NASA Astrophysics Data System (ADS)

    Qiang, Zhenping; Liu, Hui; Chen, Xu; Shang, Zhenhong; Zeng, Lingjun

    2013-07-01

    In this paper, an image restoration method based on a sparsity constraint is proposed. Following the principle of compressed sensing, the observed image is transformed into the wavelet domain, and the image restoration problem is converted into an unconstrained convex optimization problem by limiting the number of non-zero elements in the wavelet domain; the gradient projection method is then used to solve the optimization problem and restore the input image. Experiments show that the presented method has fast convergence and good robustness compared to the traditional total variation regularization restoration method.

  8. Recent methods for the determination of peroxide-based explosives.

    PubMed

    Schulte-Ladbeck, Rasmus; Vogel, Martin; Karst, Uwe

    2006-10-01

    In the last few years, the need to determine peroxide-based explosives in solid samples and air samples has resulted in the development of a series of new analytical methods for triacetonetriperoxide (TATP, acetone peroxide) and hexamethylenetriperoxidediamine (HMTD). In this review, after a short introduction describing the state of the art in the field, these new analytical methods are critically discussed. Particular emphasis is placed on spectroscopic and mass spectrometric methods as well as on chromatographic techniques with selective detection schemes. The potential of these methods to analyse unknown solid samples that might contain one or more of the explosives and to analyse peroxide-based explosives in air is evaluated.

  9. DNA-Based Methods in the Immunohematology Reference Laboratory

    PubMed Central

    Denomme, Gregory A

    2010-01-01

    Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs. PMID:21257350

  10. DNA-based methods in the immunohematology reference laboratory.

    PubMed

    Reid, Marion E; Denomme, Gregory A

    2011-02-01

    Although hemagglutination serves the immunohematology reference laboratory well, when used alone, it has limited capability to resolve complex problems. This overview discusses how molecular approaches can be used in the immunohematology reference laboratory. In order to apply molecular approaches to immunohematology, knowledge of genes, DNA-based methods, and the molecular bases of blood groups are required. When applied correctly, DNA-based methods can predict blood groups to resolve ABO/Rh discrepancies, identify variant alleles, and screen donors for antigen-negative units. DNA-based testing in immunohematology is a valuable tool used to resolve blood group incompatibilities and to support patients in their transfusion needs.

  11. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) is used, and the selection process based on the decision tree is performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), is chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
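
    A minimal sketch of the PCA -> decision tree -> SVM pipeline, with assumed component counts and an assumed importance cut-off (neither is specified above):

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.tree import DecisionTreeClassifier

        def train_bci(X, y, n_components=30, top_k=10):
            Z = PCA(n_components=n_components).fit_transform(X)
            tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
            # keep the features the tree found most useful for splitting
            keep = np.argsort(tree.feature_importances_)[::-1][:top_k]
            clf = SVC(kernel='linear').fit(Z[:, keep], y)
            return clf, keep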

  12. Propensity Score–Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application*

    PubMed Central

    ZHOU, XIANG; XIE, YU

    2012-01-01

    Since the seminal introduction of the propensity score by Rosenbaum and Rubin, propensity-score-based (PS-based) methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the propensity score approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For situations where this assumption may be violated, Heckman and his associates have recently developed a novel approach based on marginal treatment effects (MTE). In this paper, we (1) explicate consequences for PS-based methods when aspects of the ignorability assumption are violated; (2) compare PS-based methods and MTE-based methods by making a close examination of their identification assumptions and estimation performances; (3) apply these two approaches in estimating the economic return to college using data from NLSY 1979 and discuss their discrepancies in results. When there is a sorting gain but no systematic baseline difference between treated and untreated units given observed covariates, PS-based methods can identify the treatment effect of the treated (TT). The MTE approach performs best when there is a valid and strong instrumental variable (IV). In addition, this paper introduces the “smoothing-difference PS-based method,” which enables us to uncover heterogeneity across people of different propensity scores in both counterfactual outcomes and treatment effects. PMID:26877562
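
    As a concrete reference point for the PS-based side of the comparison, here is a minimal inverse-probability-weighting sketch of the treatment effect of the treated (TT), valid only under the ignorability assumption discussed above; variable meanings are assumptions.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def tt_ipw(X, treat, y):
            # X: covariates; treat: 0/1 treatment indicator; y: outcome
            ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
            t, c = treat == 1, treat == 0
            w = ps[c] / (1.0 - ps[c])       # re-weight controls toward the treated
            return y[t].mean() - np.average(y[c], weights=w)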

  13. Evaluation of PCR-based beef sexing methods.

    PubMed

    Zeleny, Reinhard; Bernreuther, Alexander; Schimmel, Heinz; Pauwels, Jean

    2002-07-17

    Analysis of the sex of beef by fast and reliable molecular methods is an important measure to ensure the correct allocation of export refunds, which are considerably higher for male beef. Two PCR-based beef sexing methods have been optimized and evaluated. The amelogenin-type method showed excellent accuracy and robustness, whereas the bovine satellite/Y-chromosome duplex PCR procedure gave more ambiguous results. In addition, an interlaboratory comparison was organized to evaluate the PCR-based sexing methods currently applied in European customs laboratories. From a total of 375 samples sent out, only one false result was reported (a female sample identified as male). However, differences in the performance of the applied methods became apparent. The collected data contribute to specifying the technical requirements for a common European PCR-based beef sexing methodology. PMID:12105941

  14. A new ultrasound based method for rapid microorganism detection

    NASA Astrophysics Data System (ADS)

    Shukla, Shiva Kant; Segura, Luis Elvira; Sánchez, Carlos José Sierra; López, Pablo Resa

    2012-05-01

    A new method for the rapid detection of catalase-positive microorganisms using an ultrasonic measuring technique is proposed in this work. The developed technique is based on the detection of oxygen bubbles produced by the decomposition of hydrogen peroxide induced by the enzyme catalase, which is present in many microorganisms. The bubbles are trapped in a medium based on agar gel which was especially developed for microbiological evaluation. It is found that microorganism concentrations of the order of 10^5 c.f.u./ml can be detected using this method. The results obtained show that the proposed method is competitive with other modern commercial methods such as ATP luminescence systems. The method can also be used for the characterization of enzyme activity.

  15. A Channelization-Based DOA Estimation Method for Wideband Signals.

    PubMed

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  16. A Novel Method for Learner Assessment Based on Learner Annotations

    ERIC Educational Resources Information Center

    Noorbehbahani, Fakhroddin; Samani, Elaheh Biglar Beigi; Jazi, Hossein Hadian

    2013-01-01

    Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

  17. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bands effectively and improves the output SNR. It outperforms the conventional ISM and TOPS methods in estimation accuracy and dynamic range, especially in real environments. In addition, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  18. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, G.F.; Steindler, M.J.

    1985-05-21

    A method of removing a phosphorus-based poisonous substance from contaminated water, and of subsequently destroying the toxicity of the substance, is presented. A water-immiscible organic solvent is first immobilized on a supported liquid membrane, and the contaminated water is contacted with one side of the supported liquid membrane to absorb the phosphorus-based substance into the organic solvent. The other side of the supported liquid membrane is contacted with a hydroxy-affording strong base, which reacts with the solvated phosphorus-based species to form a non-toxic product.

  19. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
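
    The two stages the abstract describes, selecting training samples inside a sphere in camera response space and reconstructing reflectance by Wiener estimation, can be sketched as follows; the 31-band reflectances, the random stand-in sensitivities, and the sphere radius are illustrative assumptions, and the noise term of the full Wiener formulation is omitted.

        import numpy as np

        def select_by_response(train_resp, train_refl, test_resp, radius):
            """Keep training samples whose RGB camera response lies inside
            a sphere drawn around the test response."""
            dist = np.linalg.norm(train_resp - test_resp, axis=1)
            mask = dist <= radius
            return train_resp[mask], train_refl[mask]

        def wiener_matrix(resp, refl):
            """Wiener estimation matrix W = K_rd inv(K_dd), with the cross-
            and auto-correlations estimated from the selected samples."""
            K_rd = refl.T @ resp / len(resp)         # 31 x 3
            K_dd = resp.T @ resp / len(resp)         # 3 x 3
            return K_rd @ np.linalg.pinv(K_dd)

        # Stand-in data: n x 31 reflectances, n x 3 camera responses.
        rng = np.random.default_rng(0)
        train_refl = rng.uniform(0, 1, (500, 31))
        sens = rng.uniform(0, 1, (31, 3))            # fake sensitivities
        train_resp = train_refl @ sens
        test_resp = train_resp[0]
        sub_resp, sub_refl = select_by_response(train_resp, train_refl,
                                                test_resp, radius=1.0)
        W = wiener_matrix(sub_resp, sub_refl)
        refl_hat = W @ test_resp                     # 31-band estimate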

  20. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1990-10-09

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  1. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1987-10-07

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  2. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.

    1990-01-01

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.

  3. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  4. An algorithmic method for reducing conductance-based neuron models.

    PubMed

    Sorensen, Michael E; DeWeerth, Stephen P

    2006-08-01

    Although conductance-based neural models provide a realistic depiction of neuronal activity, their complexity often limits effective implementation and analysis. Neuronal model reduction methods provide a means to reduce model complexity while retaining the original model's realism and relevance. Such methods, however, typically include ad hoc components that require that the modeler already be intimately familiar with the dynamics of the original model. We present an automated, algorithmic method for reducing conductance-based neuron models using the method of equivalent potentials (Kepler et al., Biol Cybern 66(5):381-387, 1992). Our results demonstrate that this algorithm is able to reduce the complexity of the original model with minimal performance loss, and requires minimal prior knowledge of the model's dynamics. Furthermore, by utilizing a cost function based on the contribution of each state variable to the total conductance of the model, the performance of the algorithm can be significantly improved.

  5. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
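
    The core trick behind constant-memory distance statistics, accumulating pairwise distances into a fixed set of bins instead of materializing the n x n distance matrix, can be sketched as below; this generic illustration is not the authors' optimized algorithm, and the D&O or M statistics would then be computed from the binned counts.

        import numpy as np

        def distance_histogram(coords, n_bins=100, d_max=1.0):
            """Accumulate all n(n-1)/2 pairwise distances into a fixed-size
            histogram: O(n^2) time, but only one row of distances is ever
            held in memory rather than the full n x n matrix."""
            counts = np.zeros(n_bins, dtype=np.int64)
            edges = np.linspace(0.0, d_max, n_bins + 1)
            for i in range(len(coords) - 1):
                d = np.linalg.norm(coords[i + 1:] - coords[i], axis=1)
                counts += np.histogram(d, bins=edges)[0]
            return edges, counts

        pts = np.random.default_rng(1).random((10_000, 2))   # firm locations
        edges, counts = distance_histogram(pts, d_max=np.sqrt(2))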

  6. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by disease, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor, the local generalized Hurst exponent, denoted LHq, based on MF-DFA. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq values of pixels from a specific region. Consequently, a series of f(LHq) values for the different regions can be obtained. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of six kinds of corn disease leaves are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method recognizes the lesion regions more effectively and provides more robust segmentations.

  7. An overview of modal-based damage identification methods

    SciTech Connect

    Farrar, C.R.; Doebling, S.W.

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that do not depend on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.
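
    The basic idea, that a physical stiffness change produces a detectable shift in modal frequencies, is easy to demonstrate on a toy fixed-free spring-mass chain; the chain, the unit masses, and the 30% stiffness loss below are illustrative assumptions, not any of the reviewed methods.

        import numpy as np
        from scipy.linalg import eigh

        def chain_matrices(k):
            """Mass and stiffness matrices of a fixed-free chain with unit
            masses; spring i connects mass i to mass i-1 (or the support)."""
            n = len(k)
            K = np.zeros((n, n))
            for i in range(n):
                K[i, i] += k[i]
                if i + 1 < n:
                    K[i, i] += k[i + 1]
                    K[i, i + 1] = K[i + 1, i] = -k[i + 1]
            return np.eye(n), K

        k_healthy = np.ones(6)
        k_damaged = k_healthy.copy()
        k_damaged[2] *= 0.7                    # 30% stiffness loss in spring 3

        for label, k in [("healthy", k_healthy), ("damaged", k_damaged)]:
            M, K = chain_matrices(k)
            w2 = eigh(K, M, eigvals_only=True) # generalized eigenvalues omega^2
            print(label, np.sqrt(w2))          # damaged frequencies shift down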

  8. Integrated navigation method based on inertial navigation system and Lidar

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigational method and dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment was considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on the analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigational method. Compared with the test results of a traditional integrated navigation method and the DR method, the proposed method resulted in higher navigation precision. Consequently, the IMU scale factor and misalignment errors were effectively compensated by the proposed method, and the new integrated navigational method is valid.

  9. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
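
    The key step, reading a sensitivity equation as a differential equation and solving it in closed form, can be shown on a one-variable toy that is far simpler than the paper's beam model: for a frequency omega = sqrt(k/m), the sensitivity d(omega)/dk = omega/(2k) integrates to an exact closed form, while the linear Taylor series does not.

        import numpy as np

        k0, m = 2.0e4, 5.0                  # baseline stiffness and mass
        w0 = np.sqrt(k0 / m)                # baseline frequency

        k = 1.5 * k0                        # 50% design perturbation
        exact = np.sqrt(k / m)
        taylor = w0 + (w0 / (2 * k0)) * (k - k0)   # linear Taylor series
        deb = w0 * np.sqrt(k / k0)          # closed form of dw/dk = w/(2k)

        print(exact, taylor, deb)           # the DEB form matches exactly here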

  10. Dense Stereo Matching Method Based on Local Affine Model.

    PubMed

    Li, Jie; Shi, Wenxuan; Deng, Dexiang; Jia, Wenyan; Sun, Mingui

    2013-07-01

    A new method for constructing an accurate disparity space image and performing an efficient cost aggregation in stereo matching based on a local affine model is proposed in this paper. The key algorithm includes a new self-adapting dissimilarity measurement used for calculating the matching cost and a local affine model used in the cost aggregation stage. Different from the traditional region-based methods, which try to change the matching window size or to calculate an adaptive weight to do the aggregation, the proposed method focuses on obtaining an efficient and accurate local affine model to aggregate the cost volume while preserving the disparity discontinuity. Moreover, the local affine model can be extended to the color space. Experimental results demonstrate that the proposed method is able to provide subpixel precision disparity maps compared with some state-of-the-art stereo matching methods. PMID:24163727

  11. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted content analysis regarding evaluation methods of qualitative healthcare research. We extracted descriptions on four types of evaluation paradigm (validity/credibility, reliability/credibility, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it might not be useful to consider evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  12. Empirical comparison of structure-based pathway methods

    PubMed Central

    Jaakkola, Maria K.

    2016-01-01

    Multiple methods have been proposed to estimate pathway activities from expression profiles, and yet, there is not enough information available about the performance of those methods. This makes selection of a suitable tool for pathway analysis difficult. Although methods based on simple gene lists have remained the most common approach, various methods that also consider pathway structure have emerged. To provide practical insight about the performance of both list-based and structure-based methods, we tested six different approaches to estimate pathway activities in two different case study settings of different characteristics. The first case study setting involved six renal cell cancer data sets, and the differences between expression profiles of case and control samples were relatively big. The second case study setting involved four type 1 diabetes data sets, and the profiles of case and control samples were more similar to each other. In general, there were marked differences in the outcomes of the different pathway tools even with the same input data. In the cancer studies, the results of a tested method were typically consistent across the different data sets, yet different between the methods. In the more challenging diabetes studies, almost all the tested methods detected as significant only few pathways if any. PMID:26197809

  13. Consistency-based ellipse detection method for complicated images

    NASA Astrophysics Data System (ADS)

    Zhang, Lijun; Huang, Xuexiang; Feng, Weichun; Liang, Shuli; Hu, Tianjian

    2016-05-01

    Accurate ellipse detection in complicated images is a challenging problem due to corruptions from image clutter, noise, or occlusion by other objects. To cope with this problem, an edge-following-based ellipse detection method is proposed which improves the performance of its subprocesses by enforcing consistency. The ellipse detector models edge connectivity by line segments and exploits inconsistent endpoints of the line segments to split the edge contours into smooth arcs. The smooth arcs are further refined with a novel arc refinement method which iteratively improves the consistency degree of the smooth arc. A two-phase arc integration method is developed to group disconnected elliptical arcs belonging to the same ellipse, and two constraints based on consistency are defined to increase the effectiveness and speed of the merging process. Finally, an efficient ellipse validation method is proposed to evaluate the saliency of the elliptic hypotheses. Detailed evaluation on synthetic images shows that our method outperforms other state-of-the-art ellipse detection methods in terms of effectiveness and speed. Additionally, we test our detector on three challenging real-world datasets. The F-measure score and execution time of the results demonstrate that our method is effective and fast in complicated images. Therefore, the proposed method is suitable for practical applications.

  14. Spatial clustering method based on three-dimensional cloud model

    NASA Astrophysics Data System (ADS)

    Wang, Haijun; Wang, Li; Deng, Yu; Liu, Jia

    2008-12-01

    Spatial clustering is one of the major methods applied in spatial data mining and knowledge discovery. The purpose of this paper is to set forth a spatial clustering method based on the multidimensional cloud model, which can be widely applied to classification and hierarchy research in spatial data mining and knowledge discovery. The paper summarizes the various kinds of cloud models and analyzes the form best suited to spatial data, the three-dimensional cloud model. The limitations of traditional approaches, namely subjectively set weighting values and the propagation of error, can thus be avoided. The implementation procedure of the method is presented, and its feasibility is effectively proven through experiments.
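
    The forward normal cloud generator that cloud-model clustering builds on can be sketched per dimension (the paper extends this to three dimensions); Ex, En and He are the expectation, entropy and hyper-entropy of the concept, and the numeric values below are arbitrary.

        import numpy as np

        def normal_cloud(Ex, En, He, n_drops=1000, rng=None):
            """Forward normal cloud generator: cloud drops x and their
            certainty degrees mu for characteristics (Ex, En, He)."""
            rng = rng or np.random.default_rng()
            En_p = rng.normal(En, He, n_drops)             # perturbed entropy
            x = rng.normal(Ex, np.abs(En_p))               # one drop per En_p
            mu = np.exp(-(x - Ex) ** 2 / (2 * En_p ** 2))  # certainty degree
            return x, mu

        # e.g. a concept "about 25" with entropy 3 and hyper-entropy 0.3
        x, mu = normal_cloud(Ex=25.0, En=3.0, He=0.3)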

  15. A perceptual hashing method based on luminance features

    NASA Astrophysics Data System (ADS)

    Luo, Siqing

    2011-02-01

    With the rapid development of multimedia technology, content-based search and image authentication have become strong requirements. Image hashing techniques have been proposed to meet them. In this paper, an RST (Rotation, Scaling, and Translation) resistant image hash algorithm is presented. In this method, the geometric distortions are extracted and adjusted by normalization. The features of the image are generated from the high-rank moments of the luminance distribution. With the help of the efficient image representation capability of high-rank moments, the robustness and discrimination of the proposed method are improved. The experimental results show that the proposed method is better than some existing methods in robustness under rotation attack.
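
    A schematic of the moment-based hashing idea, omitting the geometric normalization step that provides the RST resistance; the block layout, moment orders, and median binarization are illustrative choices, not the paper's exact algorithm.

        import numpy as np

        def luminance_hash(img_rgb, orders=(2, 3, 4, 5), blocks=4):
            """Hash from high-rank central moments of the luminance
            distribution, computed per block and binarized at the median."""
            lum = img_rgb[..., :3] @ np.array([0.299, 0.587, 0.114])
            h, w = lum.shape
            feats = []
            for bi in np.array_split(np.arange(h), blocks):
                for bj in np.array_split(np.arange(w), blocks):
                    patch = lum[np.ix_(bi, bj)]
                    mu = patch.mean()
                    feats += [np.mean((patch - mu) ** n) for n in orders]
            feats = np.array(feats)
            return (feats > np.median(feats)).astype(np.uint8)

        def hamming(h1, h2):
            """Distance between two hashes; small means perceptually close."""
            return int(np.sum(h1 != h2))

        img = np.random.default_rng(3).random((128, 128, 3))
        print(hamming(luminance_hash(img), luminance_hash(img.copy())))  # 0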

  16. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  17. A Minimum Spanning Tree Based Method for UAV Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wei, Zheng; Cui, Weihong; Lin, Zhiyong

    2016-06-01

    This paper proposes a minimum spanning tree (MST) based image segmentation method for UAV images of coastal areas. An edge-weight-based optimal criterion (merging predicate) is defined, based on statistical learning theory (SLT), and a scale control parameter is used to control the segmentation scale. Experiments on high-resolution UAV images of coastal areas show that the proposed merging predicate can keep the integrity of objects and prevent over-segmentation. The segmentation results prove its efficiency in segmenting richly textured images with good object boundaries.
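
    The merging loop of MST-style segmentation can be sketched with the classic Felzenszwalb-Huttenlocher predicate standing in for the paper's SLT-derived criterion; the parameter k plays the role of the scale control parameter.

        import numpy as np

        def mst_segment(edges, n_nodes, k=300.0):
            """edges: list of (weight, a, b). Kruskal-order merging with the
            Felzenszwalb-Huttenlocher predicate; k controls segment scale."""
            parent = list(range(n_nodes))
            size = [1] * n_nodes
            max_w = [0.0] * n_nodes      # internal difference per component

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            for w, a, b in sorted(edges):
                ra, rb = find(a), find(b)
                if ra == rb:
                    continue
                # merge only if the edge is no heavier than either side's
                # internal difference plus its scale tolerance k/|C|
                if w <= min(max_w[ra] + k / size[ra],
                            max_w[rb] + k / size[rb]):
                    parent[rb] = ra
                    size[ra] += size[rb]
                    max_w[ra] = max(max_w[ra], max_w[rb], w)
            return [find(i) for i in range(n_nodes)]

        # toy: two clusters of nodes joined by a heavy edge stay separate
        edges = [(0.1, 0, 1), (0.1, 1, 2), (0.1, 3, 4), (5.0, 2, 3)]
        print(mst_segment(edges, 5, k=1.0))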

  18. Quaternion-Based Discriminant Analysis Method for Color Face Recognition

    PubMed Central

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer categories, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications such as color face recognition and classification of ground targets shown in multispectral remote images. This method first uses a quaternion number to denote each pixel in the color image and exploits a quaternion vector to represent the color image. This method then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054

  19. Two DL-based methods for auditing medical terminological systems.

    PubMed

    Cornet, Ronald; Abu-Hanna, Ameen

    2005-01-01

    Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. In this paper we describe two methods based on description logics (DLs) for the audit of TSs. One method uses non-primitive definitions to detect concepts with equivalent definitions. The other method is characterized by stringent assumptions that are made about concept definitions, in order to detect inconsistent definitions. We discuss the possibility of applying these methods to the Foundational Model of Anatomy (FMA) to demonstrate the potentials and pitfalls of these methods. We show that the methods are complementary, and can indeed improve the contents of medical TSs.

  20. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization.

  1. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  2. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer categories, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications such as color face recognition and classification of ground targets shown in multispectral remote images. This method first uses a quaternion number to denote each pixel in the color image and exploits a quaternion vector to represent the color image. This method then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054

  3. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  4. A Hybrid Method for Pancreas Extraction from CT Image Based on Level Set Methods

    PubMed Central

    Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the level set method's sensitivity to the initial contour location, and a modified distance regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcomings of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluated results demonstrate that our method outperforms the other methods by achieving higher accuracy and less false segmentation in pancreas extraction. PMID:24066016

  5. A hybrid method for pancreas extraction from CT image based on level set methods.

    PubMed

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final boundary of the object, suffer from leakage into the tissues neighboring the pancreas region. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the level set method's sensitivity to the initial contour location, and a modified distance regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes the shortcomings of oversegmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluated results demonstrate that our method outperforms the other methods by achieving higher accuracy and less false segmentation in pancreas extraction.

  6. Moving Mesh Methods in Multiple Dimensions Based on Harmonic Maps

    NASA Astrophysics Data System (ADS)

    Li, Ruo; Tang, Tao; Zhang, Pingwen

    2001-07-01

    In practice, there are three types of adaptive methods using the finite element approach, namely the h-method, p-method, and r-method. In the h-method, the overall method contains two parts, a solution algorithm and a mesh selection algorithm. These two parts are independent of each other in the sense that the change of the PDEs will affect the first part only. However, in some of the existing versions of the r-method (also known as the moving mesh method), these two parts are strongly associated with each other and as a result any change of the PDEs will result in the rewriting of the whole code. In this work, we will propose a moving mesh method which also contains two parts, a solution algorithm and a mesh-redistribution algorithm. Our efforts are to keep the advantages of the r-method (e.g., keep the number of nodes unchanged) and of the h-method (e.g., the two parts in the code are independent). A framework for adaptive meshes based on the Hamilton-Schoen-Yau theory was proposed by Dvinsky. In this work, we will extend Dvinsky's method to provide an efficient solver for the mesh-redistribution algorithm. The key idea is to construct the harmonic map between the physical space and a parameter space by an iteration procedure. Each iteration step is to move the mesh closer to the harmonic map. This procedure is simple and easy to program and also enables us to keep the map harmonic even after long times of numerical integration. The numerical schemes are applied to a number of test problems in two dimensions. It is observed that the mesh-redistribution strategy based on the harmonic maps adapts the mesh extremely well to the solution without producing skew elements for multi-dimensional computations.

  7. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization.

    PubMed

    Wang, Wuli; Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order neighbor vertices are fitted to a cubic curved surface by using the least squares method. The vertex position is then regulated by PSO, with the local fitted surface as the search region and the best average quality of the local triangles as the goal. Finally, a threshold on the normal angle between the original vertex and the regulated vertex is used to determine whether the vertex needs to be adjusted, so as to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
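
    A generic PSO loop of the kind the method relies on, here optimizing one vertex position to maximize the mean quality of its incident triangles; the radius-ratio quality measure, the initialization spread, and the omission of the fitted-surface constraint are simplifying assumptions.

        import numpy as np

        def tri_quality(p, q, r):
            """Quality in [0, 1]: 1 for equilateral, 0 for degenerate."""
            a, b, c = (np.linalg.norm(q - r), np.linalg.norm(p - r),
                       np.linalg.norm(p - q))
            s = 0.5 * (a + b + c)
            area = max(s * (s - a) * (s - b) * (s - c), 0.0) ** 0.5
            return 4.0 * np.sqrt(3.0) * area / (a * a + b * b + c * c)

        def pso_regulate(v0, ring, n_particles=20, iters=50, rng=None):
            """Move vertex v0 to maximize mean quality of triangles formed
            with its one-ring neighbours (ring: list of point pairs)."""
            rng = rng or np.random.default_rng()
            fitness = lambda v: np.mean([tri_quality(v, p, q) for p, q in ring])
            X = v0 + 0.1 * rng.standard_normal((n_particles, 3))  # positions
            V = np.zeros_like(X)                                  # velocities
            P, pf = X.copy(), np.array([fitness(x) for x in X])   # personal bests
            g = P[np.argmax(pf)]                                  # global best
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, 1))
                V = 0.7 * V + 1.5 * r1 * (P - X) + 1.5 * r2 * (g - X)
                X = X + V
                f = np.array([fitness(x) for x in X])
                better = f > pf
                P[better], pf[better] = X[better], f[better]
                g = P[np.argmax(pf)]
            return g

        ring = [(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
                (np.array([0.0, 1.0, 0.0]), np.array([-1.0, 0.0, 0.0]))]
        v_new = pso_regulate(np.array([0.3, 0.2, 0.5]), ring)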

  8. [DNA-based methods for identification of seafood species].

    PubMed

    Zhang, Li; Zhang, Liang; Liu, Shu-Cheng; Zhang, Yi-Jun; Han, Yi

    2010-06-01

    With the development of molecular biotechnology, methods for the identification of seafood species have developed from protein-based to DNA-based approaches. At present, the main DNA-based methods for species identification are FINS, PCR-RFLP, and specific PCR, which have been used to identify the species of fresh, frozen, and pickled or canned seafood. However, qualitative and quantitative methods for the identification of mixed seafood species remain to be developed. Gene databases play an important role in identifying species and are valuable information resources for the identification of seafood species. In this paper, recent progress in the major DNA-based methods for identification of seafood species is reviewed and the perspectives of this field are discussed. PMID:20566458

  9. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.
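
    The rezone phase's "one or several steps toward centroidal Voronoi tessellation" is essentially Lloyd iteration; a Monte Carlo sketch on the unit square, with nearest-generator sampling standing in for exact Voronoi cell integration.

        import numpy as np

        def lloyd_step(generators, n_samples=50_000, rng=None):
            """One Lloyd iteration: move each Voronoi generator to the
            centroid of its cell, with cells estimated by sampling."""
            rng = rng or np.random.default_rng()
            pts = rng.random((n_samples, 2))             # unit-square domain
            d = np.linalg.norm(pts[:, None, :] - generators[None, :, :], axis=2)
            owner = np.argmin(d, axis=1)
            new_gen = generators.copy()
            for i in range(len(generators)):
                cell = pts[owner == i]
                if len(cell):
                    new_gen[i] = cell.mean(axis=0)
            return new_gen

        gen = np.random.default_rng(2).random((50, 2))   # cell generators
        for _ in range(3):                               # a few smoothing steps
            gen = lloyd_step(gen)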

  10. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  11. An automatic registration method based on runway detection

    NASA Astrophysics Data System (ADS)

    Zhang, Xiuqiong; Yu, Li; Huang, Guo

    2014-04-01

    A distinctly visible runway is a crucial condition during approach and landing. One enhanced-vision technique is the fusion of infrared and visible images in an EVS (Enhanced Vision System), and image registration plays a very important role in image fusion. Therefore, an automatic image registration method based on accurate runway detection is proposed. First, the runway is detected in the infrared and visible images respectively using the DWT (discrete wavelet transform). Then, a fitting triangle is constructed from the edges of the runway. The corresponding feature points, extracted from the midpoints of the edges and the centroid of the triangle, are used to compute the transform parameters. The registration results are more accurate and efficient than those of registration based on mutual information. The method is robust, requires little computation, and can be applied in real-time systems.
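
    The final registration step, computing transform parameters from the matched feature points (edge midpoints and the triangle centroid), reduces to a small least-squares problem; the sketch below assumes an affine model and at least three correspondences, with made-up coordinates.

        import numpy as np

        def affine_from_points(src, dst):
            """Solve dst ~ A @ src + t from >= 3 point pairs by least
            squares; returns the 2x2 matrix A and translation t."""
            n = len(src)
            G = np.zeros((2 * n, 6))
            G[0::2, 0:2] = src
            G[0::2, 4] = 1.0
            G[1::2, 2:4] = src
            G[1::2, 5] = 1.0
            b = dst.reshape(-1)
            p, *_ = np.linalg.lstsq(G, b, rcond=None)
            A = np.array([[p[0], p[1]], [p[2], p[3]]])
            t = p[4:6]
            return A, t

        src = np.array([[10.0, 5.0], [40.0, 5.0], [25.0, 20.0]])  # IR features
        dst = 1.05 * src + np.array([3.0, -2.0])                  # visible features
        A, t = affine_from_points(src, dst)                       # recovers both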

  12. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order neighbor vertices are fitted to a cubic curved surface by using the least squares method. The vertex position is then regulated by PSO, with the local fitted surface as the search region and the best average quality of the local triangles as the goal. Finally, a threshold on the normal angle between the original vertex and the regulated vertex is used to determine whether the vertex needs to be adjusted, so as to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129

  13. Method of removing and detoxifying a phosphorus-based substance

    SciTech Connect

    Vandegrift, G.F.; Steindler, M.J.

    1989-07-25

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substances is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be a phosphoryl or a thiophosphoryl compound.

  14. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substance is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be a phosphoryl or a thiophosphoryl compound.

  15. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2011-12-01

    Design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. Based on an analysis of the role of design knowledge and the features of information management in mechatronic products, a unified XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. XML-based representations of product function elements, product structure elements, and the mapping between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.
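
    A minimal ElementTree sketch of how function elements, structure elements, and the function-structure mapping might be encoded in XML; the element and attribute names are invented for illustration and are not the paper's schema.

        import xml.etree.ElementTree as ET

        product = ET.Element("product", name="friction_roller")

        functions = ET.SubElement(product, "functions")
        ET.SubElement(functions, "function", id="F1",
                      desc="transmit torque between shafts")

        structures = ET.SubElement(product, "structures")
        ET.SubElement(structures, "structure", id="S1", desc="roller pair")
        ET.SubElement(structures, "structure", id="S2", desc="pressure spring")

        # mapping: which structure elements realize which function
        mapping = ET.SubElement(product, "mapping")
        ET.SubElement(mapping, "realizes", function="F1", structure="S1")
        ET.SubElement(mapping, "realizes", function="F1", structure="S2")

        ET.indent(product)                     # pretty-print (Python 3.9+)
        print(ET.tostring(product, encoding="unicode"))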

  16. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2012-01-01

    Design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. Based on an analysis of the role of design knowledge and the features of information management in mechatronic products, a unified XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. XML-based representations of product function elements, product structure elements, and the mapping between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.

  17. Review of atom probe FIB-based specimen preparation methods.

    PubMed

    Miller, Michael K; Russell, Kaye F; Thompson, Keith; Alvis, Roger; Larson, David J

    2007-12-01

    Several FIB-based methods that have been developed to fabricate needle-shaped atom probe specimens from a variety of specimen geometries and site-specific regions are reviewed. These methods have enabled electronic device structures to be characterized. The atom probe may be used to quantify the level and range of gallium implantation, and it has been demonstrated that the use of low accelerating voltages during the final stages of milling can dramatically reduce the extent of gallium implantation. PMID:18001509

  18. Linear Scanning Method Based on the SAFT Coarray

    SciTech Connect

    Martin, C. J.; Martinez-Graullera, O.; Romero, D.; Ullate, L. G.; Higuti, R. T.

    2010-02-22

    This work presents a method to obtain B-scan images based on linear array scanning and 2R-SAFT. Using this technique several advantages are obtained: the ultrasonic system is very simple; it avoids the grating-lobe formation characteristic of conventional SAFT; and the subaperture size and focusing lens (to compensate emission-reception) can be adapted dynamically to every image point. The proposed method has been experimentally tested in the inspection of CFRP samples.
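
    The delay-and-sum core that SAFT variants share can be sketched as follows, in a monostatic form where each element fires and receives in turn; the 2R-SAFT subaperture layout and emission-reception compensation are simplified away, and all parameter names are illustrative.

        import numpy as np

        def saft_bscan(ascans, elem_x, fs, c, z_axis, x_axis):
            """ascans: (n_elem, n_t) pulse-echo traces, one per element at
            position elem_x[i]; each image pixel sums every trace at its
            round-trip delay (delay-and-sum synthetic aperture focusing)."""
            img = np.zeros((len(z_axis), len(x_axis)))
            n_t = ascans.shape[1]
            for ix, x in enumerate(x_axis):
                for iz, z in enumerate(z_axis):
                    r = np.hypot(elem_x - x, z)          # element-pixel range
                    idx = np.round(2.0 * r / c * fs).astype(int)
                    ok = idx < n_t                       # delays inside trace
                    img[iz, ix] = ascans[ok, idx[ok]].sum()
            return img

        # usage sketch: saft_bscan(ascans, elem_x, 50e6, 3000.0,
        #                          np.linspace(1e-3, 20e-3, 200),
        #                          np.linspace(0.0, 30e-3, 150))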

  19. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    NASA Astrophysics Data System (ADS)

    Bo, Wurigen; Shashkov, Mikhail

    2015-10-01

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35,34,6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. In the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way. In the current paper we present a new adaptive ReALE method, A-ReALE, that is based on the following design principles. First, a monitor function (or error indicator) based on the Hessian of some flow parameter(s) is utilized. Second, an equi-distribution principle for the monitor function is used as a criterion for adapting the mesh. Third, a centroidal Voronoi tessellation is used to adapt the mesh. Fourth, we scale the monitor function to avoid very small and large cells and then smooth it to permit the use of theoretical results related to weighted centroidal Voronoi tessellation. In the A-ReALE method, both the number of cells and their locations are allowed to change at the rezone stage on each time step. The number of generators at each time step is chosen to guarantee the required spatial resolution in regions where the monitor function reaches its maximum value. We present all details required for implementation of the new adaptive A-ReALE method and demonstrate its performance in comparison with the standard ReALE method on a series of numerical examples.

  20. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main ways of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for innovating products are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed, comprising requirement analysis, total function analysis and decomposition, engineering problem analysis, solution finding for engineering problems, and preliminary design; this establishes the basis for the innovative design of existing products.

  1. Improving merge methods for grid-based digital elevation models

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; Prodanović, D.; Maksimović, Č.

    2016-03-01

    Digital Elevation Models (DEMs) are used to represent the terrain in applications such as, for example, overland flow modelling or viewshed analysis. DEMs generated from digitising contour lines or obtained by LiDAR or satellite data are now widely available. However, in some cases, the area of study is covered by more than one of the available elevation data sets. In these cases the relevant DEMs may need to be merged. The merged DEM must retain the most accurate elevation information available while generating consistent slopes and aspects. In this paper we present a thorough analysis of three conventional grid-based DEM merging methods that are available in commercial GIS software. These methods are evaluated for their applicability in merging DEMs and, based on the evaluation results, a method for improving the merging of grid-based DEMs is proposed. DEMs generated by the proposed method, called MBlend, showed significant improvements when compared to DEMs produced by the three conventional methods in terms of elevation, slope and aspect accuracy, ensuring also smooth elevation transitions between the original DEMs. The results produced by the improved method are highly relevant to different applications in terrain analysis, e.g., visibility analysis or spotting irregularities in landforms, and to modelling terrain phenomena such as overland flow.
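
    A generic feathered merge over the overlap band illustrates the kind of smooth elevation transition the paper evaluates; this distance-weighted blend is an illustrative baseline, not the MBlend algorithm itself.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def feather_merge(dem_a, dem_b):
            """Merge two co-registered DEMs (NaN outside coverage) by
            blending across the overlap, weighted by distance into each."""
            in_a, in_b = ~np.isnan(dem_a), ~np.isnan(dem_b)
            w_a = distance_transform_edt(in_a)    # distance from a's edge
            w_b = distance_transform_edt(in_b)
            out = np.where(in_a, dem_a, dem_b)    # start from a, fill with b
            both = in_a & in_b
            alpha = w_a[both] / (w_a[both] + w_b[both])
            out[both] = alpha * dem_a[both] + (1 - alpha) * dem_b[both]
            return out

        # two flat DEMs that disagree by 2 m over a 20-column overlap
        a = np.full((50, 80), np.nan); a[:, :50] = 100.0
        b = np.full((50, 80), np.nan); b[:, 30:] = 102.0
        merged = feather_merge(a, b)  # ramps smoothly from 100 to 102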

  2. Global gravimetric geoid model based on a new method

    NASA Astrophysics Data System (ADS)

    Shen, W. B.; Han, J. C.

    2012-04-01

    The geoid, defined as the equipotential surface nearest to the mean sea level, plays a key role in physical geodesy and in the unification of height datum systems. In this study, we introduce a new method, quite different from the conventional geoid modeling methods (e.g., the Stokes method, the Molodensky method), to determine the global gravimetric geoid (GGG). Based on the new method, using the external Earth gravity field model EGM2008, the digital topographic model DTM2006.0 and the crust density distribution model CRUST2.0, we first determined the inner geopotential field down to a depth D, and then established a GGG model, the accuracy of which is evaluated by comparison with observations from the USA, Australia, parts of Canada, and parts of China. The main idea of the new method is as follows. Given the geopotential field (e.g. EGM2008) outside the Earth, we may determine the inner geopotential field down to the depth D by using the Newtonian integral, once the density distribution model (e.g. CRUST2.0) of a shallow layer down to the depth D is given. Then, based on the definition of the geoid (i.e. the equipotential surface nearest to the mean sea level), one may determine the GGG. This study is supported by the Natural Science Foundation of China (grant No.40974015; No.41174011; No.41021061; No.41128003).

  3. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It addresses some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then derived from the optimal perspective. In the end, a case analysis of China Garment Network is provided for illustrative purposes.
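
    The AHP half of the approach in brief: criterion weights come from the principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio; the three criteria and comparison values below are made up for illustration.

        import numpy as np

        # Pairwise comparisons (Saaty 1-9 scale) of three credit criteria,
        # e.g. transaction history vs. complaint rate vs. delivery record.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        vals, vecs = np.linalg.eig(A)
        k = np.argmax(vals.real)
        weights = vecs[:, k].real
        weights /= weights.sum()             # principal eigenvector -> weights

        n = len(A)
        CI = (vals[k].real - n) / (n - 1)    # consistency index
        CR = CI / 0.58                       # random index RI = 0.58 for n = 3
        print(weights, CR)                   # CR < 0.1: acceptably consistent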

  4. Deformable target tracking method based on Lie algebra

    NASA Astrophysics Data System (ADS)

    Liu, Yunpeng; Shi, Zelin; Li, Guangwei

    2007-11-01

    Conventional approaches to object tracking use area correlation, but they have difficulty handling deformation of the object region during tracking. A novel target tracking method based on Lie algebra is presented. We use Gabor features as the target token, model deformation using the affine Lie group, and optimize the parameters directly on the manifold, which can be done via the exponential mapping between the Lie group and its Lie algebra. We analyze the essence of our method and test the algorithm on real image sequences. The experimental results demonstrate that the Lie algebra method outperforms traditional algorithms in efficiency, stability and accuracy.
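
    The exponential mapping between the affine Lie algebra and the Lie group that such optimization relies on, sketched for 2-D affine transforms in homogeneous coordinates; the parameter vector below is arbitrary.

        import numpy as np
        from scipy.linalg import expm, logm

        def affine_exp(v):
            """Map a 6-vector in the Lie algebra aff(2) to a group element
            [[a, b, tx], [c, d, ty], [0, 0, 1]] via the matrix exponential."""
            a, b, c, d, tx, ty = v
            X = np.array([[a, b, tx],
                          [c, d, ty],
                          [0, 0, 0.0]])     # algebra element (last row zero)
            return expm(X)

        v = np.array([0.05, -0.2, 0.2, 0.05, 1.0, -0.5])
        G = affine_exp(v)                   # group element (3x3 matrix)
        v_back = logm(G)                    # back to the algebra

    In tracking, a parameter update is computed in the algebra and pushed through the exponential map, which keeps every iterate a valid affine transform on the manifold.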

  5. A Star Pattern Recognition Method Based on Decreasing Redundancy Matching

    NASA Astrophysics Data System (ADS)

    Yao, Lu; Xiao-xiang, Zhang; Rong-yu, Sun

    2016-04-01

    During optical observation of space objects, it is difficult to match background stars when the telescope pointing and tracking errors are significant. Based on the idea of decreasing redundancy matching, an effective recognition method for background stars is proposed in this paper. Simulated images under different conditions and observed images are used to verify the proposed method. The experimental results show that the proposed method raises the recognition rate and reduces time consumption; it can be used to match star patterns accurately and rapidly.

  6. Network motif-based method for identifying coronary artery disease

    PubMed Central

    LI, YIN; CONG, YAN; ZHAO, YUN

    2016-01-01

    The present study aimed to develop a more efficient method for identifying coronary artery disease (CAD) than the conventional method using individual differentially expressed genes (DEGs). GSE42148 gene microarray data were downloaded, preprocessed, and screened for DEGs. Additionally, based on transcriptional regulation data obtained from the ENCODE database and protein-protein interaction data from the HPRD, the common genes were downloaded and compared with genes annotated from the gene microarrays to screen additional common genes in order to construct an integrated regulation network. FANMOD was then used to detect significant three-gene network motifs. Subsequently, GlobalAncova was used to screen differential three-gene network motifs between the CAD group and the normal control data from GSE42148. Genes involved in the differential network motifs were then subjected to functional annotation and pathway enrichment analysis. Finally, clustering analysis of the CAD and control samples was performed based on individual DEGs and the top 20 network motifs identified. In total, 9,008 significant three-node network motifs were detected from the integrated regulation network; these were categorized into 22 interaction modes, each containing a minimum of one transcription factor. Subsequently, 1,132 differential network motifs involving 697 genes were screened between the CAD and control groups. The 697 genes were enriched in 154 gene ontology terms, including 119 biological processes, and 14 KEGG pathways. Identifying patients with CAD based on the top 20 network motifs provided increased accuracy compared with the conventional method based on individual DEGs. The results of the present study indicate that the network motif-based method is more efficient and accurate for identifying CAD patients than the conventional method based on individual DEGs. PMID:27347046
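
    As a rough illustration of the three-node motif-counting step, a directed triadic census over a toy regulation network can be computed with networkx; the edge list is invented, and FANMOD additionally scores motif significance against randomized networks, which is omitted here.

```python
# A rough stand-in for the motif-detection step using networkx's directed
# triadic census on an invented toy regulation network.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([("TF1", "geneA"), ("TF1", "geneB"), ("geneA", "geneB"),
                  ("TF2", "geneA"), ("geneB", "TF2")])

census = nx.triadic_census(G)   # counts of all 16 directed triad types
connected = {code: n for code, n in census.items()
             if n and code not in ("003", "012", "102")}  # drop unconnected triads
print(connected)
```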

  7. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for synthetic aperture radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on the localization accuracy is analyzed. First, using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse that is the iso-range contour for its T/R pair. Second, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Third, the target function is optimized with a gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031
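
    A toy version may clarify the three steps: each T/R pair contributes a model function peaked on its iso-range ellipse (where the ranges to transmitter and receiver sum to the measured BRS), the functions are summed, and the sum is maximized by ascent. The geometry, the Gaussian model-function width, and the fixed-step ascent below are invented for illustration; the paper's PCA-assisted gradient descent is not reproduced.

```python
# A toy version of the three steps, with invented geometry and parameters.
import numpy as np

tx = np.array([[0.0, 0.0], [5e3, 0.0], [0.0, 5e3]])   # transmitters (m)
rx = np.array([[1e4, 0.0], [0.0, 1e4], [1e4, 1e4]])   # receivers (m)
target = np.array([6e3, 4e3])
brs = [np.linalg.norm(target - t) + np.linalg.norm(target - r)
       for t, r in zip(tx, rx)]                       # measured range sums

def target_function(p, sigma=1e3):
    """Sum of model functions, each peaked on its iso-range ellipse."""
    return sum(np.exp(-((np.linalg.norm(p - t) + np.linalg.norm(p - r) - s) ** 2)
                      / sigma ** 2)
               for t, r, s in zip(tx, rx, brs))

p = np.array([5.5e3, 4.5e3])                          # initial guess
for _ in range(2000):                                 # fixed-step ascent
    g = np.array([(target_function(p + e) - target_function(p - e)) / 2.0
                  for e in np.eye(2)])                # central-difference gradient
    p = p + 5.0 * g / (np.linalg.norm(g) + 1e-12)
print("estimated position:", np.round(p))             # close to (6000, 4000)
```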

  9. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easy to obtain at a low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194
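
    The pole-estimation core that replaces root-MUSIC can be sketched compactly: stack the signal into a Hankel matrix, split it into two shifted blocks, and take the generalized eigenvalues of the pencil. The signal parameters, the pencil parameter L, and the plain least-squares pencil (no SVD truncation) below are illustrative choices.

```python
# A minimal sketch of matrix pencil pole estimation for a sum of complex
# exponentials; parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(64)
poles_true = [np.exp(1j * 0.4), np.exp(1j * 1.1)]        # unit-modulus poles
x = sum(p ** n for p in poles_true) + 0.05 * rng.standard_normal(64)

L = 20                                                   # pencil parameter
Y = np.array([x[i:i + L + 1] for i in range(len(x) - L)])
Y0, Y1 = Y[:, :-1], Y[:, 1:]                             # shifted Hankel blocks

# generalized eigenvalues of the pencil (Y1, Y0); the two largest in
# magnitude estimate the signal poles, the rest are noise eigenvalues
vals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
est = sorted(vals, key=lambda z: -abs(z))[:2]
print("estimated pole angles:", np.round(np.angle(est), 3))  # ~ 0.4 and 1.1
```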

  11. A phase match based frequency estimation method for sinusoidal signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-Lin; Tu, Ya-Qing; Chen, Lin-Jun; Shen, Ting-Ao

    2015-04-01

    Accurate frequency estimation significantly affects the ranging precision of linear frequency modulated continuous wave (LFMCW) radars. To improve the ranging precision of LFMCW radars, a phase match based frequency estimation method is proposed. To obtain the frequency estimate, the linear prediction property, autocorrelation, and cross correlation of sinusoidal signals are utilized. The analysis of computational complexity shows that the computational load of the proposed method is smaller than those of two-stage autocorrelation (TSA) and maximum likelihood. Simulations and field experiments are performed to validate the proposed method, and the results demonstrate that the proposed method has better frequency estimation precision than the Pisarenko harmonic decomposition, modified covariance, and TSA methods, which contributes to effectively improving the precision of LFMCW radars.
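
    Of the three ingredients listed, the linear prediction property is the easiest to make concrete: a noisy sinusoid satisfies x[n-1] + x[n+1] = 2cos(w)x[n], so cos(w) can be estimated by least squares over all samples. The sketch below shows just that property, not the paper's full phase-match algorithm; the signal parameters are invented.

```python
# Least-squares frequency estimate from the linear prediction property of a
# sinusoid: x[n-1] + x[n+1] = 2*cos(w)*x[n].
import numpy as np

rng = np.random.default_rng(0)
f_true, fs, N = 123.4, 1000.0, 512          # invented signal parameters
n = np.arange(N)
x = np.cos(2 * np.pi * f_true / fs * n + 0.7) + 0.05 * rng.standard_normal(N)

num = np.sum(x[1:-1] * (x[:-2] + x[2:]))    # least squares for a = 2*cos(w)
den = 2 * np.sum(x[1:-1] ** 2)
f_est = fs * np.arccos(np.clip(num / den, -1.0, 1.0)) / (2 * np.pi)
print(f"estimated frequency: {f_est:.2f} Hz (true value {f_true} Hz)")
```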

  13. [Segmentation Method for Liver Organ Based on Image Sequence Context].

    PubMed

    Zhang, Meiyun; Fang, Bin; Wang, Yi; Zhong, Nanchang

    2015-10-01

    In view of the problems of extensive manual intervention and segmentation defects in existing two-dimensional segmentation methods, and of abnormal liver segmentation errors in three-dimensional segmentation methods, this paper presents a semi-automatic liver organ segmentation method based on image sequence context. The method takes advantage of the similarity between image sequence contexts as prior knowledge of liver organs, and combines region growing with the level set method to carry out semi-automatic segmentation of livers, with a small amount of manual intervention to deal with atypical liver variations. The experimental results showed that the liver segmentation algorithm presented in this paper has high precision and a good segmentation effect on livers with greater variability, and can meet clinical application demands quite well.
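
    The region-growing ingredient can be sketched on its own: grow from a seed, absorbing 4-connected neighbors whose intensity stays within a tolerance of the running region mean. The synthetic image, tolerance, and function name region_grow below are illustrative; the paper's level-set refinement and sequence-context prior are omitted.

```python
# A minimal region-growing sketch on a synthetic image.
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    mask = np.zeros(img.shape, dtype=bool)
    queue, total, count = deque([seed]), 0.0, 0
    while queue:
        y, x = queue.popleft()
        if mask[y, x]:
            continue
        mean = total / count if count else img[seed]
        if abs(img[y, x] - mean) > tol:
            continue                          # too far from the region mean
        mask[y, x] = True
        total, count = total + img[y, x], count + 1
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] and not mask[ny, nx]:
                queue.append((ny, nx))
    return mask

img = np.full((64, 64), 40.0)
img[20:45, 15:50] = 100.0                     # bright "organ" region
print(region_grow(img, (30, 30)).sum())       # 25 * 35 = 875 pixels
```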

  14. Acoustic radiation force-based elasticity imaging methods

    PubMed Central

    Palmeri, Mark L.; Nightingale, Kathryn R.

    2011-01-01

    Conventional diagnostic ultrasound images portray differences in the acoustic properties of soft tissues, whereas ultrasound-based elasticity images portray differences in the elastic properties of soft tissues (i.e., stiffness, viscosity). The benefit of elasticity imaging lies in the fact that many soft tissues can share similar ultrasonic echogenicities but may have different mechanical properties that can be used to clearly visualize normal anatomy and delineate pathological lesions. Acoustic radiation force-based elasticity imaging methods use acoustic radiation force to transiently deform soft tissues, and the dynamic displacement response of those tissues is measured ultrasonically and used to estimate the tissue's mechanical properties. Both qualitative images and quantitative elasticity metrics can be reconstructed from these measured data, providing complementary information to both diagnose and longitudinally monitor disease progression. Recently, acoustic radiation force-based elasticity imaging techniques have moved from the laboratory to the clinical setting, where clinicians are beginning to characterize tissue stiffness as a diagnostic metric, and commercial implementations of radiation force-based ultrasonic elasticity imaging are appearing on the market. This article provides an overview of acoustic radiation force-based elasticity imaging, including a review of the relevant soft tissue material properties, a review of radiation force-based methods that have been proposed for elasticity imaging, and a discussion of current research and commercial realizations of radiation force-based elasticity imaging technologies. PMID:22419986

  15. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.

    1995-01-01

    Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.

  16. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.

    1995-04-11

    A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.

  17. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple Python-based tutorial on computational methods for hydrodynamics. It includes 2D solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid, and is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  18. Metaphoric Investigation of the Phonic-Based Sentence Method

    ERIC Educational Resources Information Center

    Dogan, Birsen

    2012-01-01

    This study aimed to understand the views of prospective teachers of the "phonic-based sentence method" through metaphoric images. In this descriptive study, the participants were prospective teachers taking reading-writing instruction courses in the Primary School Classroom Teaching Program of the Education Faculty of Pamukkale University. The…

  19. A Natural Teaching Method Based on Learning Theory.

    ERIC Educational Resources Information Center

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  20. Docking methods for structure-based library design.

    PubMed

    Cavasotto, Claudio N; Phatak, Sharangdhar S

    2011-01-01

    The drug discovery process mainly relies on the experimental high-throughput screening of huge compound libraries in their pursuit of new active compounds. However, spiraling research and development costs and unimpressive success rates have driven the development of more rational, efficient, and cost-effective methods. With the increasing availability of protein structural information, advancement in computational algorithms, and faster computing resources, in silico docking-based methods are increasingly used to design smaller and focused compound libraries in order to reduce screening efforts and costs and at the same time identify active compounds with a better chance of progressing through the optimization stages. This chapter is a primer on the various docking-based methods developed for the purpose of structure-based library design. Our aim is to elucidate some basic terms related to the docking technique and explain the methodology behind several docking-based library design methods. This chapter also aims to guide the novice computational practitioner by laying out the general steps involved for such an exercise. Selected successful case studies conclude this chapter. PMID:20981523

  1. A New IRT-Based Small Sample DIF Method.

    ERIC Educational Resources Information Center

    Tang, Huixing

    This paper describes an item response theory (IRT) based method of differential item functioning (DIF) detection that involves neither separate calibration nor ability grouping. IRT is used to generate residual scores, scores free of the effects of person or group ability and item difficulty. Analysis of variance is then used to test the group…

  3. Bead Collage: An Arts-Based Research Method

    ERIC Educational Resources Information Center

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  4. A Quantum-Based Similarity Method in Virtual Screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2015-10-02

    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopted concepts of quantum mechanics to present a state-of-the-art similarity method for molecules inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers. Hence, this study proposed three techniques to embed and re-represent molecular compounds in a complex number format. The quantum-based similarity method developed in this study, which depends on the complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). The recall of retrieved active molecules was measured at the top 1% and top 5%, and significance tests were used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV), and Directory of Useful Decoys (DUD) data sets, represented by 2D fingerprints, were used for the experiments. Simulated virtual screening experiments show that the effectiveness of the SQB method increased significantly, owing to the representational power of molecular compounds in complex number form, compared with the Tanimoto benchmark similarity measure.
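
    For orientation, the benchmark named in the abstract is straightforward to state in code: Tanimoto similarity of binary fingerprints, shown below next to a naive complex-valued overlap of the general kind quantum-inspired measures build on. The 166-bit fingerprints and the particular complex embedding are invented; this is not the SQB formulation itself.

```python
# Tanimoto similarity versus a naive complex inner-product similarity on
# invented binary fingerprints.
import numpy as np

rng = np.random.default_rng(0)
fp_a = rng.integers(0, 2, 166).astype(float)   # hypothetical 166-bit keys
fp_b = rng.integers(0, 2, 166).astype(float)

def tanimoto(a, b):
    ab = np.dot(a, b)
    return ab / (np.dot(a, a) + np.dot(b, b) - ab)

def complex_similarity(a, b):
    """Embed each fingerprint as a complex vector, normalize, and compare
    via the squared inner product, as in a quantum state overlap."""
    za, zb = a + 1j * np.roll(a, 1), b + 1j * np.roll(b, 1)
    za, zb = za / np.linalg.norm(za), zb / np.linalg.norm(zb)
    return abs(np.vdot(za, zb)) ** 2

print(round(tanimoto(fp_a, fp_b), 3), round(complex_similarity(fp_a, fp_b), 3))
```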

  5. A New Intrusion Detection Method Based on Antibody Concentration

    NASA Astrophysics Data System (ADS)

    Zeng, Jie; Li, Tao; Li, Guiyang; Li, Haibo

    Antibody is a kind of protein that fights against harmful antigens in the human immune system. In modern medical examination, the health status of a human body can be diagnosed by detecting the intrusion intensity of a specific antigen and the concentration indicator of the corresponding antibody in the body's serum. In this paper, inspired by the principle of antigen-antibody reactions, we present a New Intrusion Detection Method Based on Antibody Concentration (NIDMBAC) to reduce the false alarm rate without affecting the detection rate. In our proposed method, the basic definitions of self, nonself, antigen, and detector in the intrusion detection domain are given. Then, according to the antigen intrusion intensity, the change in antibody number is recorded during the clone proliferation process for detectors, based on classified antigen recognition. Finally, building upon the above work, a probabilistic calculation method for intrusion alarm production, based on the correlation between the antigen intrusion intensity and the antibody concentration, is proposed. Our theoretical analysis and experimental results show that our proposed method performs better than traditional methods.

  6. Kinetic Plasma Simulation Using a Quadrature-based Moment Method

    NASA Astrophysics Data System (ADS)

    Larson, David J.

    2008-11-01

    The recently developed quadrature-based moment method [Desjardins, Fox, and Villedieu, J. Comp. Phys. 227 (2008)] is an interesting alternative to standard Lagrangian particle simulations. The two-node quadrature formulation allows multiple flow velocities within a cell, thus correctly representing crossing particle trajectories and lower-order velocity moments without resorting to Lagrangian methods. Instead of following many particles per cell, the Eulerian transport equations are solved for selected moments of the kinetic equation. The moments are then inverted to obtain a discrete representation of the velocity distribution function. Potential advantages include reduced computational cost, elimination of statistical noise, and a simpler treatment of collisional effects. We present results obtained using the quadrature-based moment method applied to the Vlasov equation in simple one-dimensional electrostatic plasma simulations. In addition we explore the use of the moment inversion process in modeling collisional processes within the Complex Particle Kinetics framework.
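
    The moment-inversion step can be made concrete for the two-node case: from the first four velocity moments, recover two abscissas (velocities) and weights via the standard Jacobi-matrix (Golub-Welsch) construction. A minimal sketch under that construction, tested on a synthetic two-stream distribution:

```python
# Two-node moment inversion: nodes are eigenvalues of the 2x2 Jacobi matrix
# built from the moments; weights come from the eigenvectors.
import numpy as np

def two_node_quadrature(M0, M1, M2, M3):
    m = M1 / M0                            # mean velocity
    c2 = M2 / M0 - m ** 2                  # variance
    c3 = M3 / M0 - 3 * m * c2 - m ** 3     # third central moment
    a0, a1, b1 = m, m + c3 / c2, c2        # recurrence coefficients
    J = np.array([[a0, np.sqrt(b1)], [np.sqrt(b1), a1]])
    nodes, vecs = np.linalg.eigh(J)
    weights = M0 * vecs[0, :] ** 2         # squared first components
    return nodes, weights

# moments of a synthetic two-stream velocity distribution
u, w = np.array([-1.0, 2.0]), np.array([0.3, 0.7])
M = [np.sum(w * u ** k) for k in range(4)]
print(two_node_quadrature(*M))             # recovers u and w exactly
```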

  7. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient detection of object edges in digital images, this paper studies traditional methods and an algorithm based on the support vector machine (SVM). The analysis shows that the Canny edge detection algorithm suffers from pseudo-edges and poor anti-noise capability. In order to provide a reliable edge extraction method, a new detection algorithm based on the fuzzy support vector machine (FSVM) is proposed. It contains several steps: first, the classification samples are trained, and different membership functions are assigned to different samples. Then, a new training set is formed by increasing the penalty on misclassified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that a good edge detection image is obtained, and noise-addition experiments show that the method has good noise robustness.

  8. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl4 gas into the chamber, flowing Ar gas into the chamber, and flowing H2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  9. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program, and the Large Synoptic Survey Telescope (LSST) program), celestial observational data are pouring in at an unprecedented rate. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various noise. Therefore, recognizing these two types of spectra is a typical problem in automatic spectra classification. Furthermore, the utilized method, nearest neighbor, is one of the most typical, classic, and mature algorithms in pattern recognition and data mining, and is often used as a benchmark in developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that this method does not need to be trained, which is useful for incremental learning and parallel computation in massive spectral data processing. In conclusion, the results in this work are helpful for the study of galaxy and quasar spectra classification.
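
    Since NN needs no training phase, the whole classifier fits in a few lines. The sketch below runs 1-nearest-neighbor on synthetic two-class "spectra" standing in for galaxies and quasars; the feature construction is invented, and real survey spectra would first be preprocessed (redshift correction, normalization).

```python
# 1-nearest-neighbor classification of synthetic two-class "spectra".
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
wave = np.linspace(0, 1, 200)
galaxies = [np.exp(-((wave - 0.3) / 0.10) ** 2) + 0.1 * rng.standard_normal(200)
            for _ in range(200)]
quasars = [np.exp(-((wave - 0.6) / 0.05) ** 2) + 0.1 * rng.standard_normal(200)
           for _ in range(200)]
X = np.vstack(galaxies + quasars)
y = np.array([0] * 200 + [1] * 200)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=1)      # plain NN, no training phase
knn.fit(Xtr, ytr)
print("recognition ratio:", knn.score(Xte, yte))
```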

  10. A Novel Method for Pulsometry Based on Traditional Iranian Medicine

    PubMed Central

    Yousefipoor, Farzane; Nafisi, Vahidreza

    2015-01-01

    Arterial pulse measurement is one of the most important methods for evaluating health conditions. In traditional Iranian medicine (TIM), the physician may detect the radial pulse by holding four fingers on the patient's wrist. With this method, even under standard conditions, the detected pulses are subjective and error-prone; in the case of weak and/or abnormal pulses, diagnostic ambiguity may increase. In this paper, we present a device designed and implemented to automate the traditional pulse detection method. With this novel system, the developed noninvasive diagnostic method and database based on TIM are a way forward for applying traditional medicine and diagnosing patients with present-day technology. The accuracy is 76% for period measurement and 72% for systolic peak detection. PMID:26955566

  11. Spindle extraction method for ISAR image based on Radon transform

    NASA Astrophysics Data System (ADS)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle of a target in an inverse synthetic aperture radar (ISAR) image is proposed, based on the Radon transform. First, the Radon transform is used to detect all straight lines that are collinear with line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of a line segment, and the longest of all line segments is the spindle of the target. To evaluate the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30, and 40 degrees, respectively, are used in experiments; the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.
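
    The first step can be illustrated directly: the Radon transform of an image containing a bright segment peaks at that segment's (offset, angle), which is how candidate spindle directions emerge. A sketch using scikit-image on a synthetic diagonal segment; the test image is invented.

```python
# Radon-transform line detection on a synthetic image.
import numpy as np
from skimage.transform import radon

img = np.zeros((128, 128))
d = np.arange(20, 108)
img[d, d] = 1.0                                    # a bright diagonal segment

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)   # rows: offsets, cols: angles
offset, angle = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print("dominant line angle (deg):", theta[angle])  # aligned with the diagonal
```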

  12. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880

  13. Measurement-based method for verifying quantum discord

    NASA Astrophysics Data System (ADS)

    Rahimi-Keshari, Saleh; Caves, Carlton M.; Ralph, Timothy C.

    2013-01-01

    We introduce a measurement-based method for verifying quantum discord of any bipartite quantum system. We show that by performing an informationally complete positive operator valued measurement (IC-POVM) on one subsystem and checking the commutativity of the conditional states of the other subsystem, quantum discord from the second subsystem to the first can be verified. This is an improvement upon previous methods, which enables us to efficiently apply our method to continuous-variable systems, as IC-POVMs are readily available from homodyne or heterodyne measurements. We show that quantum discord for Gaussian states can be verified by checking whether the peaks of the conditional Wigner functions corresponding to two different outcomes of heterodyne measurement coincide at the same point in phase space. Using this method, we also prove that the only Gaussian states with zero discord are product states; hence, Gaussian states with Gaussian discord have nonzero quantum discord.

  14. A history-based method to estimate animal preference.

    PubMed

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  17. Do dynamic-based MR knee kinematics methods produce the same results as static methods?

    PubMed

    d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P

    2013-06-01

    MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.

  18. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  19. Adaptive Ripple Down Rules Method based on Description Length

    NASA Astrophysics Data System (ADS)

    Yoshida, Tetsuya; Wada, Takuya; Motoda, Hiroshi; Washio, Takashi

    A knowledge acquisition method, Ripple Down Rules (RDR), can directly acquire and encode knowledge from human experts. It is an incremental acquisition method, and each new piece of knowledge is added as an exception to the existing knowledge base. Past research on the RDR method assumes that the problem domain is stable. This is not the case in reality, especially when the environment changes; things change over time. This paper proposes an adaptive Ripple Down Rules method based on the Minimum Description Length principle, aimed at knowledge acquisition in a dynamically changing environment. We consider a change in the correspondence between attribute values and class labels as a typical change in the environment. When such a change occurs, some pieces of previously acquired knowledge become worthless, and the existence of such knowledge may hinder the acquisition of new knowledge. In our approach, knowledge deletion is carried out as well as knowledge acquisition, so that useless knowledge is properly discarded to ensure efficient knowledge acquisition while maintaining prediction accuracy for future data. Furthermore, pruning is incorporated into the incremental knowledge acquisition in RDR to improve the prediction accuracy of the constructed knowledge base. Experiments were conducted by simulating changes in the correspondence between attribute values and class labels using datasets from the UCI repository. The results are encouraging.

  20. Lunar-base construction equipment and methods evaluation

    NASA Astrophysics Data System (ADS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-07-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  1. Weaving a Formal Methods Education with Problem-Based Learning

    NASA Astrophysics Data System (ADS)

    Gibson, J. Paul

    The idea of weaving formal methods through computing (or software engineering) degrees is not a new one. However, there has been little success in developing and implementing such a curriculum. Formal methods continue to be taught as stand-alone modules, and students, in general, fail to see how fundamental these methods are to the engineering of software. A major problem is one of motivation: how can students be expected to enthusiastically embrace a challenging subject when the learning benefits, beyond passing an exam and achieving curriculum credits, are not clear? Problem-based learning has gradually moved from being an innovative pedagogical technique, commonly used to better motivate students, to being widely adopted in the teaching of many different disciplines, including computer science and software engineering. Our experience shows that a good problem can be re-used throughout a student's academic life. In fact, the best computing problems can be used with children (young and old), undergraduates, and postgraduates. In this paper we present a process for weaving formal methods through a university curriculum that is founded on the application of problem-based learning and a library of good software engineering problems, where students learn about formal methods without sitting a traditional formal methods module. The process of constructing good problems and integrating them into the curriculum is shown to be analogous to the process of engineering software. This approach is not intended to replace more traditional formal methods modules: it will better prepare students for such specialised modules and ensure that all students have an understanding and appreciation of formal methods even if they do not go on to specialise in them.

  2. Springback Compensation Based on FDM-DTF Method

    SciTech Connect

    Liu Qiang; Kang Lan

    2010-06-15

    Stamping part errors caused by springback are usually considered a tooling defect in the sheet metal forming process. This problem can be corrected by adjusting the tooling shape to an appropriate shape. In this paper, springback compensation based on the FDM-DTF method is proposed for the design and modification of the tooling shape. First, based on the FDM method, the tooling shape is designed by reversing the direction of the inner forces at the end of the forming simulation; the required tooling shape can be obtained after several iterations. Second, the actual tooling is produced based on the results of the first step. When the discrete data of the tooling and part surfaces are investigated, the transfer function between the numerical springback error and the real springback error can be calculated from wavelet transform results, which can be used to predict the tooling shape for the desired product. Finally, the FDM-DTF method is proven to control springback effectively after being applied to springback control for a 2D irregular product.

  3. Evaluation of base widening methods on flexible pavements in Wyoming

    NASA Astrophysics Data System (ADS)

    Offei, Edward

    The surface transportation system forms the biggest infrastructure investment in the United States, of which the roadway pavement is an integral part. Maintaining the roadways can involve rehabilitation in the form of widening, which requires a longitudinal joint between the existing and new pavement sections to accommodate wider travel lanes, additional travel lanes, or modifications to shoulder widths. Several methods are utilized for the joint construction between the existing and new pavement sections, including vertical, tapered, and stepped joints. The objective of this research is to develop a formal recommendation for the preferred joint construction method that provides the best base layer support for the state of Wyoming. Field collection of Dynamic Cone Penetrometer (DCP) data, Falling Weight Deflectometer (FWD) data, and base samples for gradation and moisture content was conducted on 28 existing and 4 newly constructed pavement widening projects. A survey of constructability issues on widening projects as experienced by WYDOT engineers was undertaken, and the costs of each joint type were compared as well. Results of the analyses indicate that the tapered joint type showed relatively better pavement strength than the vertical joint type and could be the preferred joint construction method. The tapered joint type also showed significantly greater base material savings than the vertical joint type; the vertical joint has an 18% higher cost than the tapered joint. This research is intended to provide information and/or recommendations to state policy makers as to which of the base widening joint techniques (vertical, tapered, stepped) for flexible pavement provides better pavement performance.

  4. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on the 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classifier is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to form the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference; the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
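
    The factor-importance step has a standard form: fit a random forest and read off feature importances to rank the terrain factors. In the sketch below, the three "factors" and the label rule are synthetic stand-ins for the paper's DEM-derived variables.

```python
# Random-forest factor importance on invented terrain-factor data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
slope = rng.gamma(2.0, 5.0, n)
relief = rng.gamma(2.0, 40.0, n)
roughness = rng.normal(1.1, 0.1, n)
X = np.column_stack([slope, relief, roughness])
# hypothetical landform label driven mostly by relief and slope
y = (relief + 3 * slope + rng.normal(0, 10, n) > 120).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
for name, imp in zip(["slope", "relief", "roughness"], rf.feature_importances_):
    print(f"{name:10s} importance = {imp:.3f}")
```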

  5. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method is considered a dual class of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. This work was supported by the China Scholarship Council.

  6. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes, and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  7. Methods for reconstructing acoustic quantities based on acoustic pressure measurements.

    PubMed

    Wu, Sean F

    2008-11-01

    This paper presents an overview of the acoustic imaging methods developed over the past three decades that enable one to reconstruct all acoustic quantities based on the acoustic pressure measurements taken around a target source at close distances. One such method that has received the most attention is known as near-field acoustical holography (NAH). The original NAH relies on Fourier transforms that are suitable for a surface containing a level of constant coordinate in a source-free region. Other methods are developed to reconstruct the acoustic quantities in three-dimensional space and on an arbitrary three-dimensional source surface. Note that there is a fine difference between Fourier transform based NAH and other methods that is largely overlooked. The former can offer a wave number spectrum, thus enabling visualization of various structural waves of different wavelengths that travel on the surface of a structure; the latter cannot provide such information, which is critical to acquire an in-depth understanding of the interrelationships between structural vibrations and sound radiation. All these methods are discussed in this paper, their advantages and limitations are compared, and the need for further development to analyze the root causes of noise and vibration problems is discussed.

  8. Development of redesign method of production system based on QFD

    NASA Astrophysics Data System (ADS)

    Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi

    In order to keep up with a rapidly changing market environment, rapid and flexible redesign of production systems is quite important. For effective and rapid redesign of a production system, a redesign support system is strongly needed. To this end, this paper proposes a redesign method for production systems based on Quality Function Deployment (QFD). This method represents the designer's intention in the form of QFD, collects experts' knowledge as "Production Method (PM) modules," and formulates redesign guidelines as seven redesign operations so as to support a designer in finding improvement ideas in a systematic manner. This paper also illustrates a redesign support tool for production systems that we have developed based on this method, and demonstrates its feasibility with a practical example of a production system for a contact probe. The results of this example show that a novice designer can achieve cost reductions comparable to those of veteran designers. From this result, we conclude that our redesign method is effective and feasible for supporting the redesign of a production system.

  9. A Decomposition Method Based on a Model of Continuous Change

    PubMed Central

    HORIUCHI, SHIRO; WILMOTH, JOHN R.; PLETCHER, SCOTT D.

    2008-01-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period. PMID:19110897
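
    Numerically, the idea is a line integral: move the covariates from population 1 to population 2 in many small steps and attribute to each covariate the change in the dependent variable produced by its own increment; the contributions then sum to the total difference up to discretization error, with no interaction term. A sketch with an invented nonadditive function:

```python
# Continuous-change decomposition by midpoint-rule integration along the
# straight path between two covariate vectors.
import numpy as np

def f(x):
    return x[0] * x[1] + np.sqrt(x[2])    # deliberately nonadditive

x1 = np.array([2.0, 5.0, 16.0])           # covariate values, population 1
x2 = np.array([3.0, 4.0, 25.0])           # covariate values, population 2

steps, contrib = 1000, np.zeros(3)
for s in range(steps):
    x = x1 + (x2 - x1) * (s + 0.5) / steps     # midpoint of the subinterval
    for i in range(3):
        h = (x2[i] - x1[i]) / steps            # covariate i's own increment
        e = np.zeros(3); e[i] = 1.0
        contrib[i] += f(x + 0.5 * h * e) - f(x - 0.5 * h * e)

print("contributions:", np.round(contrib, 4))
print("sum vs. total:", round(contrib.sum(), 4), round(f(x2) - f(x1), 4))
```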

  10. CEMS using hot wet extractive method based on DOAS

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Chi; Sun, Changku

    2011-11-01

    A continuous emission monitoring system (CEMS) using a hot wet extractive method based on differential optical absorption spectroscopy (DOAS) is designed. The developed system is applied to retrieving the concentrations of SO2 and NOx in flue gas on-site. The flue gas is carried along a heated sample line into the sample pool at a constant temperature above the dew point. In this case, the adverse impact of water vapor on the measurement accuracy is greatly reduced, and on-line calibration is implemented. The flue gas is then discharged from the sample pool after the measuring process is complete. The on-site applicability of the system is enhanced by using a programmable logic controller (PLC) to control each valve in the system during the measuring and on-line calibration processes. The concentration retrieval method used in the system is based on nonlinear partial least squares (PLS) regression. The relationship between the known concentration and the differential absorption features obtained by the nonlinear PLS method can be determined after the on-line calibration process; the concentrations of SO2 and NOx can then easily be measured according to this relationship. The concentration retrieval method separates the information and noise effectively, which improves the measuring accuracy of the system. SO2 at four different concentrations was measured by the system under laboratory conditions. The results prove that the full-scale error of this system is less than 2% FS.

  11. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
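
    The cepstral core is compact: take the log magnitude of the 2D FFT, then an inverse 2D FFT. The sketch below shows that computation only; the mel-scale warping and the SVM classifier are omitted, and the input image is synthetic.

```python
# 2-D cepstrum: inverse FFT of the log magnitude spectrum.
import numpy as np

def cepstrum_2d(img, eps=1e-8):
    spectrum = np.fft.fft2(img)
    log_mag = np.log(np.abs(spectrum) + eps)   # eps guards against log(0)
    return np.real(np.fft.ifft2(log_mag))

img = np.random.default_rng(0).random((64, 64))
c = cepstrum_2d(img)
feat = c[:8, :8].ravel()                        # low-quefrency block as features
print(feat.shape, round(float(c[0, 0]), 3))
```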

  12. Methods for preparing colloidal nanocrystal-based thin films

    DOEpatents

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  13. Design of a Password-Based EAP Method

    NASA Astrophysics Data System (ADS)

    Manganaro, Andrea; Koblensky, Mingyur; Loreti, Michele

    In recent years, amendments to IEEE standards for wireless networks added support for authentication algorithms based on the Extensible Authentication Protocol (EAP). Available solutions generally use digital certificates or pre-shared keys but the management of the resulting implementations is complex or unlikely to be scalable. In this paper we present EAP-SRP-256, an authentication method proposal that relies on the SRP-6 protocol and provides a strong password-based authentication mechanism. It is intended to meet the IETF security and key management requirements for wireless networks.
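
    The EAP-SRP-256 message flow is not reproduced in this abstract, but the underlying SRP registration step can be sketched as follows. The group parameters below are the 1024-bit example group published in RFC 5054, used purely for illustration; the actual method would fix its own parameters and hash construction.

      import hashlib, secrets

      # Illustrative SRP group: the 1024-bit prime and generator from RFC 5054.
      N = 0xEEAF0AB9ADB38DD69C33F80AFA8FC5E86072618775FF3C0B9EA2314C9C256576D674DF7496EA81D3383B4813D692C6E0E0D5D8E250B98BE48E495C1D6089DAD15DC7D7B46154D6B6CE8EF4AD69B15D4982559B297BCF1885C529F566660E57EC68EDBC3C05726CC02FD4CBF4976EAA9AFD5138FE8376435B9FC61D2FC0EB06E3
      g = 2

      def H(*parts: bytes) -> int:
          h = hashlib.sha256()
          for p in parts:
              h.update(p)
          return int.from_bytes(h.digest(), "big")

      # Registration: the client derives a verifier from the password; the
      # server stores (salt, verifier) and never sees the password itself.
      username, password = b"alice", b"correct horse"
      salt = secrets.token_bytes(16)
      x = H(salt, hashlib.sha256(username + b":" + password).digest())
      verifier = pow(g, x, N)
      print(hex(verifier)[:34], "...")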

  14. Real reproduction and evaluation of color based on BRDF method

    NASA Astrophysics Data System (ADS)

    Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli

    2013-12-01

    It is difficult to faithfully reproduce the original color of targets under different illumination environments using traditional methods. A function that can reconstruct the reflection characteristics of every point on the surface of a target is therefore urgently required to improve the authenticity of color reproduction; this function is known as the Bidirectional Reflectance Distribution Function (BRDF). A method of color reproduction based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of a GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer (PHOTO RESEARCH, Inc., USA). The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch relative to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 patches are thereby reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance measured by the PR-715, and the chromaticity coordinates in color space and the color differences between the two are analyzed. The experimental results show that the average color difference and sample standard deviation between the method proposed in this paper and the traditional reflectance-based reconstruction method are 2.567 and 1.3049, respectively. The theoretical and experimental analysis indicates that color reproduction based on BRDF describes the color information of an object in hemispherical space more completely than reflectance alone, and that the proposed method is effective and feasible for chromaticity reproduction research.
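
    The core radiometric step reduces to two ratios. The sketch below assumes the usual definitions BRDF = L/E and BRF = pi*L/E for a fixed illumination/viewing geometry; the irradiance and radiance values are made up for one patch.

      import numpy as np

      # Illustrative measured values for one color patch:
      # E: irradiance on the patch [W m^-2], L: reflected radiance [W m^-2 sr^-1].
      E = np.array([1.20, 1.15, 1.10])   # e.g., three spectral bands
      L = np.array([0.10, 0.14, 0.09])

      brdf = L / E          # BRDF for this geometry [sr^-1]
      brf = np.pi * L / E   # BRF: ratio to an ideal Lambertian reflector
      print(brdf, brf)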

  15. Genomic comparisons of Brucella spp. and closely related bacteria using base compositional and proteome based methods

    PubMed Central

    2010-01-01

    Background Classification of bacteria within the genus Brucella has been difficult due in part to considerable genomic homogeneity between the different species and biovars, in spite of clear differences in phenotypes. Therefore, many different methods have been used to assess Brucella taxonomy. In the current work, we examine 32 sequenced genomes from genus Brucella representing the six classical species, as well as more recently described species, using bioinformatic methods. Comparisons were made at the level of genomic DNA using oligonucleotide based methods (Markov chain based genomic signatures, and genomic codon and amino acid frequency based comparisons) and proteomes (all-against-all BLAST protein comparisons and pan-genomic analyses). Results We found that the oligonucleotide based methods gave different results compared to those of the proteome based methods. Differences were also found among the oligonucleotide based methods. Whilst the Markov chain based genomic signatures grouped the different species in genus Brucella according to host preference, the codon and amino acid frequency based methods reflected small differences between the Brucella species. Only minor differences could be detected between all genera included in this study using the codon and amino acid frequency based methods. Proteome comparisons were found to be in strong accordance with current Brucella taxonomy, indicating a remarkable association between gene gain or loss on one hand and mutations in marker genes on the other. The proteome based methods found greater similarity between Brucella species and Ochrobactrum species than between species within genus Agrobacterium compared to each other. In other words, proteome comparisons of species within genus Agrobacterium were found to be more diverse than proteome comparisons between species in genus Brucella and genus Ochrobactrum. Pan-genomic analyses indicated that uptake of DNA from outside genus Brucella appears to be
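
    As an illustration of the oligonucleotide-signature idea (not the authors' exact Markov chain formulation), the sketch below computes the classic dinucleotide relative-abundance signature and a simple distance between two sequences; the sequences are toy stand-ins for genomes.

      from collections import Counter
      from itertools import product

      def dinucleotide_signature(seq: str) -> dict:
          # Relative-abundance signature rho(xy) = f(xy) / (f(x) * f(y)).
          seq = seq.upper()
          mono = Counter(seq)
          di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
          n, m = len(seq), len(seq) - 1
          sig = {}
          for x, y in product("ACGT", repeat=2):
              fx, fy = mono[x] / n, mono[y] / n
              fxy = di[x + y] / m
              sig[x + y] = fxy / (fx * fy) if fx and fy else 0.0
          return sig

      # Genomes would be compared by a distance between their signatures,
      # e.g. the mean absolute difference over the 16 dinucleotides:
      sig1 = dinucleotide_signature("ATGCGCGATATCGCGGCTA" * 100)
      sig2 = dinucleotide_signature("ATGCATGCAATTGGCCTTA" * 100)
      delta = sum(abs(sig1[k] - sig2[k]) for k in sig1) / 16
      print(round(delta, 4))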

  16. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
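
    A minimal sketch of how the reported linear inoculum-lag relationship could be used to estimate an unknown inoculum; the lag values below are illustrative numbers consistent with the ranges quoted above, not the original data.

      import numpy as np

      # Lag times (h) for 10-fold dilutions from 1e6 down to 1 cell/ml.
      log10_count = np.array([6, 5, 4, 3, 2, 1, 0])
      lag_h = np.array([1.0, 2.0, 3.1, 4.2, 5.0, 6.1, 7.0])  # illustrative

      # Fit the linear inoculum-lag relationship described in the abstract.
      slope, intercept = np.polyfit(lag_h, log10_count, 1)

      # Estimate an unknown inoculum from an observed hydrogen-detection lag.
      lag_observed = 3.5
      print(10 ** (slope * lag_observed + intercept), "cells/ml (approx.)")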

  17. A New Power Flow Tracing Method Based on Directed Circuit

    NASA Astrophysics Data System (ADS)

    Zhang, Weimin; Guo, Xiaojing; Liu, Yaonian; Ni, Defu; Wang, Lina; Jia, Yanbing

    In this paper, a power-sharing principle is proposed, based on the generation of directed paths. The principle is used to calculate the contributions of individual generators and loads to line flows, and the real power transfer among individual generators and loads, which are significant for transmission open access. The effectiveness and validity of the method are verified on the IEEE 14-bus power system.
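
    The directed-path bookkeeping itself is not detailed in the abstract; as a related illustration, the sketch below implements the well-known Bialek-style proportional-sharing (upstream tracing) calculation on a hypothetical 3-bus system, showing how generator contributions to loads fall out of a single linear solve.

      import numpy as np

      # Hypothetical 3-bus system: generation, loads, and directed line flows (MW).
      gen  = np.array([100.0, 50.0, 0.0])
      load = np.array([0.0, 30.0, 120.0])
      flow = {(0, 1): 60.0, (0, 2): 40.0, (1, 2): 80.0}  # (from, to): MW

      # Total through-flow at each bus = generation + sum of inflows.
      P = gen.copy()
      for (i, j), f in flow.items():
          P[j] += f

      # Upstream distribution matrix A_u (Bialek): A[j, i] = -|P_ij| / P_i.
      n = len(gen)
      A = np.eye(n)
      for (i, j), f in flow.items():
          A[j, i] = -f / P[i]
      Ainv = np.linalg.inv(A)

      # Share of generator k in the load at bus i (proportional sharing).
      share = np.zeros((n, n))
      for i in range(n):
          for k in range(n):
              share[i, k] = load[i] * Ainv[i, k] * gen[k] / P[i]

      print(share.round(2))            # rows: loads, columns: generators
      print(share.sum(), load.sum())   # shares sum to the total load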

  18. A Model Based Security Testing Method for Protocol Implementation

    PubMed Central

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases for verifying the security of a protocol implementation. PMID:25105163

  19. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the orbits of the sun and moon. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  20. Accurate measurement method for tube's endpoints based on machine vision

    NASA Astrophysics Data System (ADS)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2016-08-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects the assembly reliability and the quality of the product. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and requires complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm feasibility, 11 tubes were processed to remove reflected light, and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability is 0.167 mm and the absolute accuracy is 0.328 mm, with each measurement taking less than 1 min. The proposed method can measure a tube's endpoints without any surface treatment or tools and can realize on-line measurement.
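
    A minimal sketch of the stereo step using OpenCV; the intrinsics, rotation, baseline, and matched pixel coordinates are all illustrative assumptions, not the paper's calibration.

      import numpy as np
      import cv2

      # Assumed calibration: shared intrinsics K, a small rotation, and a
      # 100 mm baseline between the two cameras (illustrative values).
      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])
      R, _ = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))
      t = np.array([[-100.0], [0.0], [0.0]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([R, t])

      # Matched pixel coordinates of one tube endpoint in the two views
      # (illustrative; found by stereo matching in the actual method).
      pt1 = np.array([[340.0], [250.0]])
      pt2 = np.array([[285.0], [251.0]])

      X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)   # homogeneous 4x1
      endpoint = (X_h[:3] / X_h[3]).ravel()           # 3D endpoint coordinates
      print(endpoint)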

  1. Advances in nucleic acid-based detection methods.

    PubMed Central

    Wolcott, M J

    1992-01-01

    Laboratory techniques based on nucleic acid methods have increased in popularity over the last decade with clinical microbiologists and other laboratory scientists who are concerned with the diagnosis of infectious agents. This increase in popularity is a result primarily of advances made in nucleic acid amplification and detection techniques. Polymerase chain reaction, the original nucleic acid amplification technique, changed the way many people viewed and used nucleic acid techniques in clinical settings. After the potential of polymerase chain reaction became apparent, other methods of nucleic acid amplification and detection were developed. These alternative nucleic acid amplification methods may become serious contenders for application to routine laboratory analyses. This review presents some background information on nucleic acid analyses that might be used in clinical and anatomical laboratories and describes some recent advances in the amplification and detection of nucleic acids. PMID:1423216

  2. A Swarm-Based Learning Method Inspired by Social Insects

    NASA Astrophysics Data System (ADS)

    He, Xiaoxian; Zhu, Yunlong; Hu, Kunyuan; Niu, Ben

    Inspired by the cooperative transport behaviors of ants and built on the basis of Q-learning, a new learning method, the Neighbor-Information-Reference (NIR) learning method, is presented in this paper. It is a swarm-based learning method in which the principles of swarm intelligence are strictly complied with. In NIR learning, the i-interval neighbor's information, namely its discounted reward, is referenced when an individual selects its next state, so that it can make the best decision within a computable local neighborhood. In application, different policies of NIR learning are recommended by controlling the parameters according to the time-relativity of concrete tasks. NIR learning can remarkably improve individual efficiency and make the swarm more "intelligent".

  3. General conformal transformation method based on Schwarz-Christoffel approach

    NASA Astrophysics Data System (ADS)

    Tang, Linlong; Yin, Jinchan; Yuan, Guishan; Du, Jinglei; Gao, Hongtao; Dong, Xiaochun; Lu, Yueguang; Du, Chunlei

    2011-08-01

    A general conformal transformation method (CTM) is proposed to construct the conformal mapping between two irregular geometries. In order to find the material parameters corresponding to the conformal transformation between two irregular geometries, two polygons are utilized to approximate the two irregular geometries, and an intermediate geometry is used to connect the mapping relations between the two polygons. Based on these manipulations, the approximate material parameters for TE and TM waves are finally obtained by calculating the Schwarz-Christoffel (SC) mappings. To demonstrate the validity of the method, a phase modulator and a plane focal surface Luneburg lens are designed and simulated by the finite element method. The results show that the conformal transformation can be extended to cases in which the transformed objects have irregular geometries.

  4. CT Scanning Imaging Method Based on a Spherical Trajectory.

    PubMed

    Chen, Ping; Han, Yan; Gui, Zhiguo

    2016-01-01

    In industrial computed tomography (CT), the mismatch between the X-ray energy and the effective thickness makes it difficult to ensure the integrity of projection data using the traditional scanning model, because of the limitations of the object's complex structure. So, we have developed a CT imaging method that is based on a spherical trajectory. Considering an unrestrained trajectory for iterative reconstruction, an iterative algorithm can be used to realise the CT reconstruction of a spherical trajectory for complete projection data only. Also, an inclined circle trajectory is used as an example of a spherical trajectory to illustrate the accuracy and feasibility of this new scanning method. The simulation results indicate that the new method produces superior results for a larger cone-beam angle, a limited angle and tabular objects compared with traditional circle trajectory scanning.

  5. CT Scanning Imaging Method Based on a Spherical Trajectory

    PubMed Central

    2016-01-01

    In industrial computed tomography (CT), the mismatch between the X-ray energy and the effective thickness makes it difficult to ensure the integrity of projection data using the traditional scanning model, because of the limitations of the object’s complex structure. So, we have developed a CT imaging method that is based on a spherical trajectory. Considering an unrestrained trajectory for iterative reconstruction, an iterative algorithm can be used to realise the CT reconstruction of a spherical trajectory for complete projection data only. Also, an inclined circle trajectory is used as an example of a spherical trajectory to illustrate the accuracy and feasibility of this new scanning method. The simulation results indicate that the new method produces superior results for a larger cone-beam angle, a limited angle and tabular objects compared with traditional circle trajectory scanning. PMID:26934744

  6. Metabolomics-Based Methods for Early Disease Diagnostics: A Review

    PubMed Central

    Nagana Gowda, G. A.; Zhang, Shucha; Gu, Haiwei; Asiago, Vincent; Shanaiah, Narasimhamurthy; Raftery, Daniel

    2013-01-01

    The emerging field of “metabolomics,” in which a large number of small molecule metabolites from body fluids or tissues are detected quantitatively in a single step, promises immense potential for early diagnosis, therapy monitoring and for understanding the pathogenesis of many diseases. Metabolomics methods are mostly focused on the information rich analytical techniques of nuclear magnetic resonance (NMR) spectroscopy and mass spectrometry (MS). Analysis of the data from these high-resolution methods using advanced chemometric approaches provides a powerful platform for translational and clinical research, and diagnostic applications. In this review, the current trends and recent advances in NMR- and MS-based metabolomics are described with a focus on the development of advanced NMR and MS methods, improved multivariate statistical data analysis and recent applications in the area of cancer, diabetes, inborn errors of metabolism, and cardiovascular diseases. PMID:18785810

  7. A finite mass based method for Vlasov-Poisson simulations

    NASA Astrophysics Data System (ADS)

    Larson, David; Young, Christopher

    2014-10-01

    A method for the numerical simulation of plasma dynamics using discrete particles is introduced. The shape function kinetics (SFK) method is based on decomposing the mass into discrete particles using shape functions of compact support. The particle positions and shape evolve in response to internal velocity spread and external forces. Remapping is necessary in order to maintain accuracy and two strategies for remapping the particles are discussed. Numerical simulations of standard test problems illustrate the advantages of the method which include very low noise compared to the standard particle-in-cell technique, inherent positivity, large dynamic range, and ease of implementation. This work was performed under the auspices of the U.S. Department of Energy by LLNL under Contract DE-AC52-07NA27344. C. V. Young acknowledges the support of the DOE NNSA Stewardship Science Graduate Fellowship under Contract DE-FC52-08NA28752.

  8. Grid-based Methods in Relativistic Hydrodynamics and Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Martí, José María; Müller, Ewald

    2015-12-01

    An overview of grid-based numerical methods used in relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) is presented. Special emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods. Results of a set of demanding test bench simulations obtained with different numerical methods are compared in an attempt to assess the present capabilities and limits of the various numerical strategies. Applications to three astrophysical phenomena are briefly discussed to motivate the need for and to demonstrate the success of RHD and RMHD simulations in their understanding. The review further provides FORTRAN programs to compute the exact solution of the Riemann problem in RMHD, and to simulate 1D RMHD flows in Cartesian coordinates.

  9. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    The research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs offer disabled people a means of communicating with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are more widely used because of their higher signal to noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC) based method is proposed for multi-dimensional SSVEP feature extraction. 2-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
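
    A minimal single-channel sketch of the MUSIC step; the sampling rate, snapshot length, candidate frequencies, and synthetic 10 Hz epoch are illustrative, and the signal-subspace dimension is fixed at 2 (one real sinusoid spans two complex exponentials).

      import numpy as np

      fs = 250.0                      # sampling rate (Hz), illustrative
      t = np.arange(0, 2.0, 1 / fs)   # 2-second epoch, as in the abstract
      rng = np.random.default_rng(2)

      # Synthetic single-channel SSVEP epoch: 10 Hz response plus noise.
      x = np.sin(2 * np.pi * 10 * t) + 0.8 * rng.normal(size=t.size)

      # Covariance of overlapping signal snapshots.
      m = 40
      snaps = np.array([x[i:i + m] for i in range(x.size - m)])
      R = snaps.T @ snaps / snaps.shape[0]

      # Noise subspace: eigenvectors beyond the assumed signal dimension.
      w, V = np.linalg.eigh(R)        # eigenvalues in ascending order
      En = V[:, :-2]                  # drop the 2 signal-subspace directions

      def music_pseudospectrum(f):
          n = np.arange(m)
          a = np.exp(2j * np.pi * f * n / fs)   # steering vector at freq f
          return 1.0 / np.real(np.conj(a) @ En @ En.conj().T @ a)

      cands = [8.0, 10.0, 12.0, 15.0]           # stimulus frequencies
      scores = {f: music_pseudospectrum(f) for f in cands}
      print(max(scores, key=scores.get))        # detected frequency: 10.0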

  10. Method of creating microscale prototypes using SLM based holographic lithography

    NASA Astrophysics Data System (ADS)

    Lawson, Joseph L.; Jenness, Nathan; Wilson, Scott; Clark, Robert L.

    2013-03-01

    A method of generating arbitrary structures using spatial light modulator (SLM) based holograms with multiphoton absorption is presented. Current methodologies for 3D prototyping, such as G-code, are not ideally suited for holographic lithography and therefore limit its functionality or require additional complex processing. The process outlined here allows a microstructure to be fabricated based on designs from commercially available CAD software. CAD software enables the microstructures to be designed and then realized using dynamic holographic lithography methods, giving designers a simple, quick, and robust way of fabricating novel microstructures. Holographic patterning routines such as raster scans of one or multiple focal points, holograms encoded with two or three dimensional spatial information, or a combination of both techniques may be utilized with this methodology. The process described allows for the development of complex structures that would be difficult to program using traditional methods. No limitations are placed on the form or function of the designed components, enabling undercut and interlocking features to be fabricated. This methodology also enables the location and orientation of the structures to be controlled dynamically, simplifying the creation of multi-scaled structures or complex arrays of arbitrary structures. As a proof of concept demonstration, a simple cantilever beam was modeled and fabricated.

  11. Effect of changing journal clubs from traditional method to evidence-based method on psychiatry residents

    PubMed Central

    Faridhosseini, Farhad; Saghebi, Ali; Khadem-Rezaiyan, Majid; Moharari, Fatemeh; Dadgarmoghaddam, Maliheh

    2016-01-01

    Introduction Journal club is a valuable educational tool in the medical field. This method follows different goals. This study aims to investigate the effect on psychiatry residents of changing journal clubs from the traditional method to the evidence-based method. Method This study was conducted using a before–after design. First- and second-year residents of psychiatry were included in the study. First, the status quo was evaluated by standardized questionnaire regarding the effect of journal club. Then, ten sessions were held to familiarize the residents with the concept of journal club. After that, evidence-based journal club sessions were held. The questionnaire was given to the residents again after the final session. Data were analyzed through descriptive statistics (frequency and percentage frequency, mean and standard deviation), and analytic statistics (paired t-test) using SPSS 22. Results Of a total of 20 first- and second-year residents of psychiatry, the data of 18 residents were finally analyzed. Most of the subjects (17 [93.7%]) were females. The mean overall score before and after the intervention was 1.83±0.45 and 2.85±0.57, respectively, which showed a significant increase (P<0.001). Conclusion Moving toward evidence-based journal clubs seems like an appropriate measure to reach the goals set by this educational tool. PMID:27570469

  12. Recursive approach to the moment-based phase unwrapping method.

    PubMed

    Langley, Jason A; Brice, Robert G; Zhao, Qun

    2010-06-01

    The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm. PMID:20517381
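
    The Legendre approximation at the heart of the method can be sketched as a linear least-squares fit; the phase surface and polynomial orders below are illustrative, and the recursive derivative property used by the actual algorithm is not reproduced.

      import numpy as np
      from numpy.polynomial import legendre as L

      # Sample grid on [-1, 1]^2 (where Legendre polynomials are orthogonal)
      # and an illustrative smooth "unwrapped phase" surface to approximate.
      x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
      phase = 3.0 * x**2 + 2.0 * x * y - 1.5 * y + 0.5

      deg = (4, 4)                                   # polynomial orders in x and y
      V = L.legvander2d(x.ravel(), y.ravel(), deg)   # pseudo-Vandermonde matrix
      coef, *_ = np.linalg.lstsq(V, phase.ravel(), rcond=None)
      coef = coef.reshape(deg[0] + 1, deg[1] + 1)

      recon = L.legval2d(x, y, coef)
      print(np.max(np.abs(recon - phase)))           # near machine precision here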

  13. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

    Catchment regionalization is an important step in estimating hydrologic parameters of ungaged basins. This paper proposes a multiscale entropy method using a wavelet transform and k-means based hybrid approach for clustering hydrologic catchments. Multi-resolution wavelet transform of a time series reveals structure that is often obscured in streamflow records by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of the randomness of a given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that provides an approximation of the entropy pattern of the streamflow data in the cluster. The tests for homogeneity reveal that the proposed approach works very well in regionalization.
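
    A minimal sketch of the WME-plus-clustering pipeline using PyWavelets and scikit-learn; the wavelet choice, decomposition level, and the random stand-in "streamflow" series are assumptions, not the study's configuration.

      import numpy as np
      import pywt                       # PyWavelets
      from sklearn.cluster import KMeans

      def wavelet_multiscale_entropy(series, wavelet="db4", level=5):
          # Shannon entropy of normalized coefficient energies at each scale.
          coeffs = pywt.wavedec(series, wavelet, level=level)
          ent = []
          for c in coeffs[1:]:                  # detail coefficients per scale
              e = c ** 2
              p = e / e.sum()
              ent.append(-np.sum(p * np.log(p + 1e-15)))
          return np.array(ent)

      # Illustrative stand-ins for streamflow records of several catchments.
      rng = np.random.default_rng(3)
      flows = [np.cumsum(rng.normal(size=1024)) for _ in range(20)]

      # Each catchment's entropy signature is its feature vector for k-means.
      features = np.array([wavelet_multiscale_entropy(f) for f in flows])
      clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
      print(clusters)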

  14. Current trends in virtual high throughput screening using ligand-based and structure-based methods.

    PubMed

    Sukumar, Nagamani; Das, Sourav

    2011-12-01

    High throughput in silico methods have offered the tantalizing potential to drastically accelerate the drug discovery process. Yet despite significant efforts expended by academia, national labs and industry over the years, many of these methods have not lived up to their initial promise of reducing the time and costs associated with the drug discovery enterprise, a process that can typically take over a decade and cost hundreds of millions of dollars from conception to final approval and marketing of a drug. Nevertheless structure-based modeling has become a mainstay of computational biology and medicinal chemistry, helping to leverage our knowledge of the biological target and the chemistry of protein-ligand interactions. While ligand-based methods utilize the chemistry of molecules that are known to bind to the biological target, structure-based drug design methods rely on knowledge of the three-dimensional structure of the target, as obtained through crystallographic, spectroscopic or bioinformatics techniques. Here we review recent developments in the methodology and applications of structure-based and ligand-based methods and target-based chemogenomics in Virtual High Throughput Screening (VHTS), highlighting some case studies of recent applications, as well as current research in further development of these methods. The limitations of these approaches will also be discussed, to give the reader an indication of what might be expected in years to come. PMID:21843144

  15. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  16. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects. PMID:24453925

  17. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects.

  18. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework that can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. PMID:23246613

  19. A beam hardening correction method based on HL consistency

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Tang, Shaojie; Yu, Hengyong

    2006-08-01

    X-ray CT with a polychromatic tube spectrum produces artifacts known as the beam hardening effect. The correction currently used in CT devices relies on an a priori polynomial obtained from water phantom experiments. This paper proposes a new beam hardening correction algorithm in which the correction polynomial is determined from the consistency of the projection data across angles, as expressed by the Helgason-Ludwig consistency condition (HL consistency). Firstly, a bi-polynomial is constructed to characterize the beam hardening effect based on the physical model of medical x-ray imaging. In this bi-polynomial, a factor r(γ,β) represents the ratio of the attenuation contributions of high-density material (bone, etc.) to low-density material (muscle, vessels, blood, soft tissue, fat, etc.) at projection angle β and fan angle γ. Secondly, letting r(γ,β)=0, the bi-polynomial degenerates to a single polynomial whose coefficients can be calculated based on HL consistency; this yields the primary correction, which is also theoretically more efficient than the correction method in current CT devices. Thirdly, r(γ,β) is estimated from a normal CT reconstruction of the corrected projection data. Fourthly, the coefficients of the bi-polynomial are likewise calculated based on HL consistency and the final correction is achieved. Experiments on circular cone-beam CT show that the method performs excellently. Correcting the beam hardening effect based on HL consistency not only achieves a self-adaptive and more precise correction, but also eliminates the inconvenient water phantom experiments, and could renovate the correction technique of current CT devices.
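
    For intuition, the zeroth-order HL condition in a parallel-beam setting requires the detector sum of each projection to be angle-independent. The toy sketch below uses a two-disk phantom and a one-term correction polynomial (much simpler than the paper's fan-beam bi-polynomial) and picks the correction coefficient that best restores this consistency; all values are illustrative.

      import numpy as np

      # Toy parallel-beam sinogram of two disks; the true zeroth moment
      # (sum over the detector) is the same at every angle (HL-0 condition).
      angles = np.linspace(0, np.pi, 180, endpoint=False)
      s = np.linspace(-1, 1, 129)

      def disk_projection(x0, y0, r):
          sc = s[None, :] - (x0 * np.cos(angles) + y0 * np.sin(angles))[:, None]
          return 2 * np.sqrt(np.clip(r**2 - sc**2, 0.0, None))

      true_sino = disk_projection(0.3, 0.0, 0.30) + disk_projection(-0.2, 0.25, 0.35)

      # Simulated beam hardening: projections are bent nonlinearly.
      measured = true_sino - 0.2 * true_sino**2

      def hl0_violation(sino):
          # Spread of row sums across angles; zero for consistent data.
          return np.std(sino.sum(axis=1))

      # One-term correction p ~ m + a*m^2; choose the coefficient that best
      # restores consistency -- no water-phantom calibration involved.
      a_grid = np.linspace(0.0, 1.0, 501)
      scores = [hl0_violation(measured + a * measured**2) for a in a_grid]
      a_best = a_grid[int(np.argmin(scores))]
      print(round(a_best, 3), hl0_violation(measured), min(scores))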

  20. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the growing need to digitize document images, using digital cameras for document digitization has become an irresistible trend. However, warping of the document surface seriously degrades the performance of Optical Character Recognition (OCR) systems. To improve the visual quality and the OCR rate of warped document images, this paper proposes a warped document image correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. These two feature points are then registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR recognition results. As a result, distortions in the best mosaicked image are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method can resolve the issue of warped document image correction effectively.

  1. A comparison between two probabilistic radar-based nowcasting methods

    NASA Astrophysics Data System (ADS)

    Buil, Alejandro; Berenguer, Marc; Sempere-Torres, Daniel

    2013-04-01

    To date, several algorithms have been developed for very short-term precipitation forecasting based on radar data. Unlike deterministic methods, probabilistic nowcasting techniques aim at describing the uncertainty in the forecasts. This work presents a comparison of two probabilistic nowcasting techniques based on Lagrangian extrapolation of recent radar observations. Germann and Zawadzki (2004) described and evaluated four probabilistic techniques. We have chosen to compare the so-called Local Lagrangian technique [the one that demonstrated the best skill among those of Germann and Zawadzki (2004)] with the ensemble nowcasting technique SBMcast (Berenguer et al., 2011). These two methods are conceptually different: while the Local Lagrangian technique forecasts pdfs of point rainfall values calculated by examining the spatial variability of the radar field, SBMcast generates a set of future rainfall scenarios (ensemble members) compatible with the observations, keeping the spatial and temporal structure of the rainfall field according to the String of Beads model. The comparison of these methods has been carried out in the vicinity of Barcelona, Catalunya (Spain) using observations from the Catalan Weather Service radar network. References: Berenguer, M., D. Sempere-Torres, and G. Pegram, 2011: SBMcast - An ensemble nowcasting technique to assess the uncertainty in rainfall forecasts by Lagrangian extrapolation. Journal of Hydrology, 404, 226-240. Germann, U. and I. Zawadzki, 2004: Scale dependence of the predictability of precipitation from the continental radar image. Part II: Probability forecasts. Journal of Applied Meteorology, 43, 74-89.

  2. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which affect the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a search algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. The point cloud is segmented regionally along the axis and then iteratively fitted as a smooth elliptic cylindrical surface. This processing enables automatic filtering of the inner-wall non-points. Two groups of experiments gave consistent results, showing that the elliptic cylindrical model based method can effectively filter out the non-points and meet the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.

  3. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
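
    As a sketch of the statistical component, an additive Holt-Winters model of the theory-minus-truth residuals could be fitted with statsmodels as below; the "residual" series, period, and horizon are illustrative stand-ins, not an actual orbit comparison.

      import numpy as np
      from statsmodels.tsa.holtwinters import ExponentialSmoothing

      # Illustrative stand-in for the residuals between the analytical theory
      # and precise positions, sampled once per orbital revolution.
      rng = np.random.default_rng(5)
      n, period = 200, 12
      t = np.arange(n)
      eps = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / period) + 0.05 * rng.normal(size=n)

      # Additive Holt-Winters model of the missing dynamics (trend + seasonality).
      model = ExponentialSmoothing(eps, trend="add", seasonal="add",
                                   seasonal_periods=period).fit()

      # Hybrid propagation: analytical estimate + predicted residual correction.
      predicted_residuals = model.forecast(24)
      print(predicted_residuals[:5])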

  4. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some off-the-shelf TIGs can be employed, the need for a custom test system or control system makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. Correspondingly, FPGA-based TIGs with a fine delay step are preferable, allowing customized particle physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods to realize an integratable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its special structure is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid-counting-based implementation of a TIG, providing a resolution of up to 11 ps and an interval range of up to 8 s.

  5. Integrated method for the measurement of trace atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-09-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  6. Integrated method for the measurement of trace nitrogenous atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-12-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  7. A Robust PCT Method Based on Complex Least Squares Adjustment Method

    NASA Astrophysics Data System (ADS)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving vegetation vertical structure. However, errors in vegetation height and ground phase caused by temporal decorrelation always propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimates of temporal decorrelation and vegetation height, and then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the acquisition of vegetation vertical structure.

  8. Wave-equation based traveltime seismic tomography - Part 1: Method

    NASA Astrophysics Data System (ADS)

    Tong, P.; Zhao, D.; Yang, D.; Yang, X.; Chen, J.; Liu, Q.

    2014-08-01

    In this paper, we propose a wave-equation based traveltime seismic tomography method with a detailed description of its step-by-step process. First, a linear relationship between the traveltime residual Δt = Tobs - Tsyn and the relative velocity perturbation δc(x) / c(x) connected by a finite-frequency traveltime sensitivity kernel K(x) is theoretically derived using the adjoint method. To accurately calculate the traveltime residual Δt, two automatic arrival-time picking techniques including the envelop energy ratio method and the combined ray and cross-correlation method are then developed to compute the arrival times Tsyn for synthetic seismograms. The arrival times Tobs of observed seismograms are usually determined by manual hand picking in real applications. Traveltime sensitivity kernel K(x) is constructed by convolving a forward wavefield u(t,x) with an adjoint wavefield q(t,x). The calculations of synthetic seismograms and sensitivity kernels rely on forward modelling. To make it computationally feasible for tomographic problems involving a large number of seismic records, the forward problem is solved in the two-dimensional (2-D) vertical plane passing through the source and the receiver by a high-order central difference method. The final model is parameterized on 3-D regular grid (inversion) nodes with variable spacings, while model values on each 2-D forward modelling node are linearly interpolated by the values at its eight surrounding 3-D inversion grid nodes. Finally, the tomographic inverse problem is formulated as a regularized optimization problem, which can be iteratively solved by either the LSQR solver or a non-linear conjugate-gradient method. To provide some insights into future 3-D tomographic inversions, Fréchet kernels for different seismic phases are also demonstrated in this study.
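
    A minimal sketch of the cross-correlation arrival-time step; Ricker wavelets with an imposed 0.35 s delay stand in for the observed and synthetic seismograms, and the sampling interval is illustrative.

      import numpy as np

      dt = 0.01                         # sampling interval (s)
      t = np.arange(0, 20, dt)

      def ricker(t0, f=1.0):
          # Ricker wavelet centered at time t0 with peak frequency f.
          a = (np.pi * f * (t - t0)) ** 2
          return (1 - 2 * a) * np.exp(-a)

      syn = ricker(t0=8.0)              # synthetic arrival
      obs = ricker(t0=8.35)             # "observed" arrival, 0.35 s later

      # Traveltime residual dt = T_obs - T_syn from the cross-correlation peak.
      cc = np.correlate(obs, syn, mode="full")
      lag = (np.argmax(cc) - (len(t) - 1)) * dt
      print(lag)                        # ~0.35 s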

  9. Density functional theory based generalized effective fragment potential method

    SciTech Connect

    Nguyen, Kiet A.; Pachter, Ruth; Day, Paul N.

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  10. Density functional theory based generalized effective fragment potential method

    NASA Astrophysics Data System (ADS)

    Nguyen, Kiet A.; Pachter, Ruth; Day, Paul N.

    2014-06-01

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  11. Cartridge case image matching using effective correlation area based method.

    PubMed

    Yammen, S; Muneesawang, P

    2013-06-10

    A firearm leaves a unique impression on fired cartridge cases. The cross-correlation function plays an important role in matching the characteristic features on the cartridge case found at the crime scene with a specific firearm, for accurate firearm identification. This paper proposes that the computational forensic techniques of alignment and effective correlation area-based approaches to image matching are essential to firearm identification. Specifically, the reference and the corresponding cartridge cases are aligned according to the phase-correlation criterion on the transform domain. The informative segments of the breech face marks are identified by a cross-covariance coefficient using the coefficient value in a window located locally in the image space. The segments are then passed to the measurement of edge density for computing effective correlation areas. Experimental results on a new dataset show that the correlation system can make use of the best properties of alignment and effective correlation area-based approaches, and can attain significant improvement of image-correlation results, compared with the traditional image-matching methods for firearm identification, which employ cartridge-case samples. An analysis of image-alignment score matrices suggests that all translation and scaling parameters are estimated correctly, and contribute to the successful extraction of effective correlation areas. It was found that the proposed method has a high discriminant power, compared with the conventional correlator. This paper advocates that this method will enable forensic science to compile a large-scale image database to perform correlation of cartridge case bases, in order to identify firearms that involve pairwise alignments and comparisons.
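
    A minimal sketch of the phase-correlation alignment step; a randomly generated image and an integer shift stand in for the breech-face impressions, and subpixel refinement and scaling estimation are omitted.

      import numpy as np

      def phase_correlation(ref, img):
          # Estimate the integer (dy, dx) shift mapping ref onto img.
          F1, F2 = np.fft.fft2(ref), np.fft.fft2(img)
          cross = np.conj(F1) * F2
          corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
          dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
          # Map peak positions past the midpoint to negative shifts.
          if dy > ref.shape[0] // 2: dy -= ref.shape[0]
          if dx > ref.shape[1] // 2: dx -= ref.shape[1]
          return dy, dx

      rng = np.random.default_rng(6)
      ref = rng.random((128, 128))                     # stand-in breech-face image
      img = np.roll(ref, shift=(5, -9), axis=(0, 1))   # shifted copy
      print(phase_correlation(ref, img))               # (5, -9)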

  12. Feasible methods to estimate disease-based price indexes.

    PubMed

    Bradley, Ralph

    2013-05-01

    There is a consensus that statistical agencies should report medical data by disease rather than by service. This study computes the price indexes that are necessary to deflate nominal disease expenditures and to decompose their growth into price, treated prevalence, and output-per-patient growth. Unlike previous studies, it uses methods that can be implemented by the Bureau of Labor Statistics (BLS). For the calendar years 2005-2010, I find that these feasible disease-based indexes are approximately 1% lower on an annual basis than indexes computed by current methods at the BLS. This gives evidence that traditional medical price indexes have not accounted for the more efficient use of medical inputs in treating most diseases.

  13. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper, based on an improved ABC algorithm that uses external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.
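
    The screening step relies on a modified Hausdorff distance. A minimal NumPy sketch of the common mean-of-minima formulation is given below; the abstract does not reproduce the authors' exact variant, so this form is an assumption.

        import numpy as np

        def modified_hausdorff(A, B):
            """Modified Hausdorff distance between point sets A and B,
            each of shape (n_points, n_dims)."""
            # all pairwise Euclidean distances (memory-heavy for large sets)
            d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
            d_ab = d.min(axis=1).mean()  # mean distance from A to nearest in B
            d_ba = d.min(axis=0).mean()  # mean distance from B to nearest in A
            return max(d_ab, d_ba)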

  14. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  15. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  16. Application of DNA-based methods in forensic entomology.

    PubMed

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  17. Neural cell image segmentation method based on support vector machine

    NASA Astrophysics Data System (ADS)

    Niu, Shiwei; Ren, Kan

    2015-10-01

    In the analysis of neural cell images acquired by optical microscope, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the Support Vector Machine (SVM) is proposed to reduce the adverse impact caused by the low contrast ratio between objects and background, interference from adherent and clustered cells, etc. Firstly, morphological filtering and the Otsu method are applied to preprocess the images and roughly extract the neural cells. Secondly, the Stellate Vector, circularity and Histogram of Oriented Gradient (HOG) features are computed to train the SVM model. Finally, the incremental learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas identified by the SVM classifier are added to the library as positive samples for training the SVM model. Experimental results show that the proposed algorithm achieves much better segmentation results than the classic segmentation algorithms.
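
    A minimal sketch of the pipeline's skeleton (Otsu preprocessing, HOG features, SVM training) is shown below, assuming scikit-image and scikit-learn; the Stellate Vector and circularity features and the incremental learning step are omitted, and the patch handling is illustrative rather than the authors' code.

        import numpy as np
        from skimage.filters import threshold_otsu
        from skimage.feature import hog
        from sklearn.svm import SVC

        def extract_features(patch):
            """Binarize a grayscale patch with Otsu's threshold and
            compute HOG features on the resulting mask."""
            mask = patch > threshold_otsu(patch)
            return hog(mask.astype(float), orientations=9,
                       pixels_per_cell=(8, 8), cells_per_block=(2, 2))

        def train_classifier(patches, labels):
            """Train an RBF-kernel SVM on equally sized labeled patches
            (1 = cell, 0 = background)."""
            X = np.array([extract_features(p) for p in patches])
            return SVC(kernel='rbf').fit(X, labels)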

  18. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which acts as the controller of this generator. Some test results are presented at the end.

  19. Dominant partition method. [based on a wave function formalism

    NASA Technical Reports Server (NTRS)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.

  20. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGESBeta

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  1. A novel non-uniformity correction method based on ROIC

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoming; Li, Yujue; Di, Chao; Wang, Xinxing; Cao, Yi

    2011-11-01

    Infrared focal plane arrays (IRFPA) suffer from inherent low-frequency and fixed pattern noise (FPN). They are thus limited by their inability to calibrate out individual detector variations, including detector dark current (offset) and responsivity (gain). To achieve high-quality infrared images by mitigating the FPN of IRFPAs, we have developed a novel non-uniformity correction (NUC) method based on the read-out integrated circuit (ROIC). The offset and gain correction coefficients can be calculated by fitting the linear relationship between the detector's output and a reference voltage in the ROIC. We tested the proposed method on an infrared imaging system built around the ULIS 03 19 1 detector with real non-uniformity. A set of 384×288 12-bit infrared images was collected to evaluate the performance. In the experiments, the non-uniformity was greatly reduced. We also used the non-uniformity (NU) parameter to estimate the performance. The NU parameters calculated for the two-point calibration (TPC) and the proposed method imply that the proposed method performs almost as well as TPC.
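
    Two-point calibration, used above as the performance baseline, computes a per-pixel gain and offset from frames of two uniform (blackbody) scenes. The NumPy sketch below is a generic illustration of TPC, not the ROIC-based method itself; it assumes the hot and cold responses differ at every pixel.

        import numpy as np

        def two_point_nuc(cold, hot):
            """Per-pixel gain and offset from averaged frames of two
            uniform scenes (cold and hot blackbody references)."""
            gain = (hot.mean() - cold.mean()) / (hot - cold)
            offset = cold.mean() - gain * cold
            return gain, offset

        def correct(raw, gain, offset):
            # maps each pixel so both references land on their scene means
            return gain * raw + offset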

  2. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin-film micromagnetic materials form a broad class with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization, which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has the advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures against conventional FFT-based methods.

  3. Transistor-based particle detection systems and methods

    DOEpatents

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  4. Biosensor method and system based on feature vector extraction

    SciTech Connect

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
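
    The amplitude-statistics half of the feature extraction can be sketched compactly; the specific statistics chosen below (mean, standard deviation, skewness, kurtosis) are assumptions for illustration, as the patent text does not enumerate them.

        import numpy as np
        from scipy.stats import skew, kurtosis

        def amplitude_features(signal):
            """Amplitude-statistics feature vector from a 1-D
            time-dependent biosensor signal."""
            return np.array([signal.mean(), signal.std(),
                             skew(signal), kurtosis(signal)])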

  5. Detection of biological thiols based on a colorimetric method*

    PubMed Central

    Xu, Yuan-yuan; Sun, Yang-yang; Zhang, Yu-juan; Lu, Chen-he; Miao, Jin-feng

    2016-01-01

    Biological thiols (biothiols), an important kind of functional biomolecule, such as cysteine (Cys) and glutathione (GSH), play vital roles in maintaining the stability of the intracellular environment. In past decades, studies have demonstrated that metabolic disorders of biothiols are related to many serious disease processes and lead to extreme damage in humans and numerous animals. We carried out a series of experiments to detect biothiols in biosamples, including bovine plasma and cell lysates of seven different cell lines, based on a simple colorimetric method. In a typical test, the color of the test solution gradually changes from blue to colorless after the addition of biothiols. Based on the color change displayed, the experimental results reveal that the percentage of biothiols in the embryonic fibroblast cell line is significantly higher than those in the other six cell lines, which provides the basis for subsequent biothiol-related studies. PMID:27704750

  6. Hybrid Modeling Method for a DEP Based Particle Manipulation

    PubMed Central

    Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad

    2013-01-01

    In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between the multiphysics simulation and the biological behavior. This technique is among the first steps in developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results. PMID:23364197

  7. Method for fabricating beryllium-based multilayer structures

    DOEpatents

    Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.

    2003-02-18

    Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).

  8. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2013-01-01

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent thereto. The chamber contains a Ga-based compound semiconductor sample in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. SiCl4 and Ar gases are flowed into the chamber. RF power is supplied to the platen at a first power level, and RF power is supplied to the source electrode. A plasma is generated. Then, RF power is supplied to the platen at a second power level lower than the first power level and no greater than about 30 W. Regions of a surface of the sample adjacent to one or more masked portions of the surface are etched at a rate of no more than about 25 nm/min to create a substantially smooth etched surface.

  9. Emerging methods for ensemble-based virtual screening.

    PubMed

    Amaro, Rommie E; Li, Wilfred W

    2010-01-01

    Ensemble based virtual screening refers to the use of conformational ensembles from crystal structures, NMR studies or molecular dynamics simulations. It has gained greater acceptance as advances in the theoretical framework, computational algorithms, and software packages enable simulations at longer time scales. Here we focus on the use of computationally generated conformational ensembles and emerging methods that use these ensembles for discovery, such as the Relaxed Complex Scheme or Dynamic Pharmacophore Model. We also discuss the more rigorous physics-based computational techniques such as accelerated molecular dynamics and thermodynamic integration and their applications in improving conformational sampling or the ranking of virtual screening hits. Finally, technological advances that will help make virtual screening tools more accessible to a wider audience in computer aided drug design are discussed.

  10. [Others physical methods in psychiatric treatment based on electromagnetic stimulation].

    PubMed

    Zyss, Tomasz; Rachel, Wojciech; Datka, Wojciech; Hese, Robert T; Gorczyca, Piotr; Zięba, Andrzej; Piekoszewski, Wojciech

    2016-01-01

    In recent decades, several new physical methods based on electromagnetic stimulation of the head have been subjected to clinical research. These include vagus nerve stimulation (VNS), magnetic seizure therapy/magnetoconvulsive therapy (MST/MCT), deep brain stimulation (DBS), and transcranial direct current stimulation (tDCS). The paper presents a description of these techniques (nature, advantages, drawbacks, restrictions), which are compared with electroconvulsive therapy (ECT), the previously described transcranial magnetic stimulation (TMS), and pharmacotherapy (the basis of psychiatric treatment). PMID:27197431

  11. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Smith, Timothy A. (Inventor); Urnes, James M., Sr. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
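
    A minimal sketch of the error-significance and persistence test is given below; the z-score significance measure and the consecutive-sample persistence counter are assumptions for illustration, since the patent text does not fix these choices.

        import numpy as np

        def detect_anomaly(measured, expected, noise_std,
                           z_thresh=3.0, persist_thresh=10):
            """Flag a structural anomaly when the modeling error stays
            statistically significant for more than persist_thresh
            consecutive samples."""
            significance = np.abs(measured - expected) / noise_std
            run = 0
            for s in significance:
                run = run + 1 if s > z_thresh else 0
                if run > persist_thresh:
                    return True
            return False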

  12. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important measure to ensure the reliable operation of the fuze. Based on the good characteristics of the wavelet packet transform (WPT) in signal denoising, this paper uses the wavelet packet transform to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in Matlab. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal, with higher precision and less distortion, thereby improving the reliability of torpedo fuze operation.
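
    A minimal sketch of wavelet packet threshold denoising is shown below, assuming PyWavelets; the wavelet, decomposition level, and universal-threshold rule are illustrative choices rather than those of the paper.

        import numpy as np
        import pywt

        def wpt_denoise(x, wavelet='db4', level=4):
            """Denoise a 1-D signal by hard-thresholding the leaf
            coefficients of its wavelet packet decomposition."""
            wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
            # noise level estimated from the finest detail coefficients
            detail = pywt.wavedec(x, wavelet, level=1)[-1]
            sigma = np.median(np.abs(detail)) / 0.6745
            thresh = sigma * np.sqrt(2.0 * np.log(len(x)))
            for node in wp.get_level(level, order='natural'):
                node.data = pywt.threshold(node.data, thresh, mode='hard')
            return wp.reconstruct(update=True)[:len(x)]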

  13. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology where it is most useful to localize primary tumours and their metastases.

  14. [Others physical methods in psychiatric treatment based on electromagnetic stimulation].

    PubMed

    Zyss, Tomasz; Rachel, Wojciech; Datka, Wojciech; Hese, Robert T; Gorczyca, Piotr; Zięba, Andrzej; Piekoszewski, Wojciech

    2016-01-01

    In recent decades, several new physical methods based on electromagnetic stimulation of the head have been subjected to clinical research. These include vagus nerve stimulation (VNS), magnetic seizure therapy/magnetoconvulsive therapy (MST/MCT), deep brain stimulation (DBS), and transcranial direct current stimulation (tDCS). The paper presents a description of these techniques (nature, advantages, drawbacks, restrictions), which are compared with electroconvulsive therapy (ECT), the previously described transcranial magnetic stimulation (TMS), and pharmacotherapy (the basis of psychiatric treatment).

  15. Supersampling method for efficient grid-based electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent, vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacing, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.

  16. Comparison of Wavelet-Based and HHT-Based Feature Extraction Methods for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Huang, X.-M.; Hsu, P.-H.

    2012-07-01

    Hyperspectral images, which contain rich and fine spectral information, can be used to identify surface objects and improve land use/cover classification accuracy. Due to the high dimensionality of hyperspectral data, traditional statistics-based classifiers cannot be directly applied to such images with limited training samples, a problem referred to as the "curse of dimensionality". The common solution to this problem is dimensionality reduction, and feature extraction is the approach most frequently used to reduce the dimensionality of hyperspectral images. There are two types of feature extraction methods: the first is based on the statistical properties of the data, and the other is based on time-frequency analysis. In this study, time-frequency analysis methods are used to extract the features for hyperspectral image classification. It has been shown that wavelet-based feature extraction provides an effective tool for spectral feature extraction. On the other hand, the Hilbert-Huang transform (HHT), a relatively new time-frequency analysis tool, has been widely used in nonlinear and nonstationary data analysis. In this study, the wavelet transform and HHT are applied to the hyperspectral data for physical spectral analysis. We can thus obtain a small number of salient features, reduce the dimensionality of the hyperspectral images, and preserve classification accuracy. An AVIRIS data set is used to test the performance of the proposed HHT-based feature extraction methods, and the results are compared with wavelet-based feature extraction. According to the experimental results, HHT-based feature extraction methods are effective tools, and their results are similar to those of wavelet-based feature extraction methods.
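
    As an illustration of the wavelet branch of the comparison, a per-pixel spectrum can be reduced to the energies of its wavelet sub-bands; the sketch below (PyWavelets, log-energy features) is one common variant, assumed here rather than taken from the paper.

        import numpy as np
        import pywt

        def wavelet_features(spectrum, wavelet='db4', level=3):
            """Reduce a 1-D pixel spectrum to the log-energies of its
            multilevel wavelet sub-bands."""
            coeffs = pywt.wavedec(spectrum, wavelet, level=level)
            return np.array([np.log1p(np.sum(c ** 2)) for c in coeffs])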

  17. Human Temporal Bone Removal: The Skull Base Block Method.

    PubMed

    Dinh, Christine; Szczupak, Mikhaylo; Moon, Seo; Angeli, Simon; Eshraghi, Adrien; Telischi, Fred F

    2015-08-01

    Objectives To describe a technique for harvesting larger temporal bone specimens from human cadavers for the training of otolaryngology residents and fellows on the various approaches to the lateral and posterolateral skull base. Design Human cadaveric anatomical study. The calvarium was excised 6 cm above the superior aspect of the ear canal. The brain and cerebellum were carefully removed, and the cranial nerves were cut sharply. Two bony cuts were performed, one in the midsagittal plane and the other in the coronal plane at the level of the optic foramen. Setting Medical school anatomy laboratory. Participants Human cadavers. Main Outcome Measures Anatomical contents of specimens and technical effort required. Results Larger temporal bone specimens containing portions of the parietal, occipital, and sphenoidal bones were consistently obtained using this technique of two bone cuts. All specimens were inspected and contained pertinent surface and skull base landmarks. Conclusions The skull base block method allows for larger temporal bone specimens using a two bone cut technique that is efficient and reproducible. These specimens have the necessary anatomical bony landmarks for studying the complexity, utility, and limitations of lateral and posterolateral approaches to the skull base, important for the education of otolaryngology residents and fellows.

  18. Human Temporal Bone Removal: The Skull Base Block Method.

    PubMed

    Dinh, Christine; Szczupak, Mikhaylo; Moon, Seo; Angeli, Simon; Eshraghi, Adrien; Telischi, Fred F

    2015-08-01

    Objectives To describe a technique for harvesting larger temporal bone specimens from human cadavers for the training of otolaryngology residents and fellows on the various approaches to the lateral and posterolateral skull base. Design Human cadaveric anatomical study. The calvarium was excised 6 cm above the superior aspect of the ear canal. The brain and cerebellum were carefully removed, and the cranial nerves were cut sharply. Two bony cuts were performed, one in the midsagittal plane and the other in the coronal plane at the level of the optic foramen. Setting Medical school anatomy laboratory. Participants Human cadavers. Main Outcome Measures Anatomical contents of specimens and technical effort required. Results Larger temporal bone specimens containing portions of the parietal, occipital, and sphenoidal bones were consistently obtained using this technique of two bone cuts. All specimens were inspected and contained pertinent surface and skull base landmarks. Conclusions The skull base block method allows for larger temporal bone specimens using a two bone cut technique that is efficient and reproducible. These specimens have the necessary anatomical bony landmarks for studying the complexity, utility, and limitations of lateral and posterolateral approaches to the skull base, important for the education of otolaryngology residents and fellows. PMID:26225316

  19. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    PubMed Central

    2011-01-01

    Background Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. Methods We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. Results The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both the filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. Conclusions The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients. PMID:21961846

  20. An acoustic intensity-based method and its aeroacoustic applications

    NASA Astrophysics Data System (ADS)

    Yu, Chao

    Aircraft noise prediction and control is one of the most urgent and challenging tasks worldwide. A hybrid approach is usually considered for predicting aerodynamic noise. The approach separates the field into an aerodynamic source region and an acoustic propagation region. Conventional CFD solvers are typically used to evaluate the flow field in the source region. Once the sound source is predicted, the linearized Euler equations (LEE) can be used to extend the near-field CFD solution to the mid-field acoustic radiation. However, the far-field extension is very time consuming and is often prohibited by excessive computer memory requirements. The FW-H method, instead, predicts the far-field radiation using the flow-field quantities on a closed control surface (one that encloses the entire aerodynamic source region), assuming the wave equation holds outside. The surface integration, however, has to be carried out for each far-field location, which remains computationally intensive for a practical 3D problem even though the CPU time required is much lower than that of the LEE methods. For an accurate far-field prediction, another difficulty with the FW-H method is that a complete control surface may be infeasible to realize for most practical applications. Motivated by the need for accurate and efficient far-field prediction techniques, an Acoustic Intensity-Based Method (AIBM) has been developed based on acoustic input from an OPEN control surface. The AIBM assumes that the sound propagation is governed by the modified Helmholtz equation on and outside a control surface that encloses all the nonlinear effects and noise sources. The prediction of the acoustic radiation field is carried out by the inverse method with an input of the acoustic pressure derivative and its simultaneous, co-located acoustic pressure. The acoustic radiation field reconstructed using the AIBM is unique due to the unique continuation theory

  1. Scanning-fiber-based imaging method for tissue engineering

    NASA Astrophysics Data System (ADS)

    Hofmann, Matthias C.; Whited, Bryce M.; Mitchell, Josh; Vogt, William C.; Criswell, Tracy; Rylander, Christopher; Rylander, Marissa Nichole; Soker, Shay; Wang, Ge; Xu, Yong

    2012-06-01

    A scanning-fiber-based method developed for imaging bioengineered tissue constructs such as synthetic carotid arteries is reported. Our approach is based on directly embedding one or more hollow-core silica fibers within the tissue scaffold to function as micro-imaging channels (MIC). The imaging process is carried out by translating and rotating an angle-polished fiber micro-mirror within the MIC to scan excitation light across the tissue scaffold. The locally emitted fluorescent signals are captured using an electron multiplying CCD camera and then mapped into fluorophore distributions according to fiber micro-mirror positions. Using an optical phantom composed of fluorescent microspheres, tissue scaffolds, and porcine skin, we demonstrated single-cell-level imaging resolution (20 to 30 μm) at an imaging depth that exceeds the photon transport mean free path by one order of magnitude. This result suggests that the imaging depth is no longer constrained by photon scattering, but rather by the requirement that the fluorophore signal overcomes the background "noise" generated by processes such as scaffold autofluorescence. Finally, we demonstrated the compatibility of our imaging method with tissue engineering by visualizing endothelial cells labeled with green fluorescent protein through a ~500 μm thick and highly scattering electrospun scaffold.

  2. Method of estimation of cloud base height using ground-based digital stereophotography

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Andreev, Maksim S.; Emilenko, Aleksandr S.; Ivanov, Victor A.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2015-11-01

    Errors in the retrieval of atmospheric composition using optical methods (DOAS and others) are strongly influenced by cloudiness during the measurements. Information on cloud characteristics helps to adjust the optical model of the atmosphere used to interpret the measurements and to reduce the retrieval errors. For the reconstruction of some geometrical characteristics of clouds, a method was developed based on taking pictures of the sky with a pair of digital photo cameras and subsequently processing the obtained sequence of stereo frames to obtain the height of the cloud base. Since the directions of the optical axes of the stereo cameras are not exactly known, a procedure for adjusting the obtained frames was developed which uses photographs of the night starry sky. In the second step, the method of morphological image analysis is used to determine the relative shift of the coordinates of a cloud fragment. The shift is used to estimate the sought cloud base height. The proposed method can be used for automatic processing of stereo data to obtain the cloud base height. The report describes a mathematical model of the stereophotography measurement, poses and solves the problem of adjusting the optical axes of the cameras, and describes the method of searching for cloud fragments in the other frame by morphological image analysis; the problem of estimating the cloud base height is formulated and solved. Theoretical investigation shows that for a stereo base of 60 m and shooting with a resolution of 1600x1200 pixels in a field of view of 60°, the errors do not exceed 10% for cloud base heights up to 4 km. Optimization of camera settings can further improve the accuracy. An experimental setup available to the authors, with a stereo base of 17 m and a resolution of 640x480 pixels, preliminarily confirmed the theoretical accuracy estimates in comparison with a laser rangefinder.
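
    For the simplest parallel-axis geometry with upward-looking cameras, the cloud base height follows directly from the disparity of a matched cloud fragment. The sketch below illustrates that relation only; the camera parameters are assumed, and the morphological matching step itself is not reproduced.

        def cloud_base_height(baseline_m, focal_mm, pixel_um, disparity_px):
            """Stereo height estimate H = B * f / (d * p) for parallel
            upward-looking cameras: B stereo base, f focal length,
            p pixel pitch, d disparity of the matched cloud fragment."""
            f_m = focal_mm * 1e-3
            p_m = pixel_um * 1e-6
            return baseline_m * f_m / (disparity_px * p_m)

        # e.g. a 60 m base, a 12 mm lens, 5 um pixels and a 36 px
        # disparity give a cloud base height of 4000 m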

  3. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to ones being diagnosed. Optical colonoscopy is a method of direct observation for colons and rectums to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearances of colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images from a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearances of colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
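
    The color-histogram part of the feature set can be sketched as follows; the bin count and the histogram-intersection similarity are illustrative assumptions, and the HLAC features and mucosa enhancement step are omitted.

        import numpy as np

        def color_histogram(img, bins=8):
            """Normalized 3-D RGB histogram of an (H, W, 3) uint8 image,
            flattened into a feature vector."""
            h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                                  range=((0, 256),) * 3)
            return (h / h.sum()).ravel()

        def similarity(h1, h2):
            """Histogram intersection: 1.0 for identical histograms."""
            return np.minimum(h1, h2).sum()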

  4. A Vocal-Based Analytical Method for Goose Behaviour Recognition

    PubMed Central

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves a good recognition of foraging behaviour (86–97% sensitivity, 89–98% precision) and a reasonable recognition of flushing (79–86%, 66–80%) and landing behaviour (73–91%, 79–92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species, and as such may be used as an integrated part of a wildlife management system. PMID:22737037

  5. A vocal-based analytical method for goose behaviour recognition.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves a good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and a reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species, and as such may be used as an integrated part of a wildlife management system.

  6. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics and validated on a virtual robot. Robot kinematic parameters were obtained from the simulation by offline programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it was possible to reasonably assign the amount of robot motion to each axis during the process, thus achieving a constant nozzle speed. It is therefore possible to optimize robot performance and to economize robot energy.

  7. Histogram-Based Calibration Method for Pipeline ADCs

    PubMed Central

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the results of the histogram provide information on the errors in the prior stage. The composed histograms reduce the number of required samples and thus the calibration time, and the method can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively. PMID:26070196
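
    For background, the conventional code-density (histogram) test from which DNL and INL are derived can be sketched as follows; this is the textbook whole-ADC test under an assumed uniform ramp input, not the per-stage method proposed in the paper.

        import numpy as np

        def histogram_inl(codes, n_bits=14):
            """DNL and INL (in LSB) from a code-density test with a
            uniform full-scale ramp input; codes is an int array."""
            h = np.bincount(codes, minlength=2 ** n_bits).astype(float)
            h = h[1:-1]                   # drop saturated end codes
            dnl = h / h.mean() - 1.0      # bin-width deviation from ideal
            inl = np.cumsum(dnl)
            return dnl, inl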

  8. A method for MREIT-based source imaging: simulation studies.

    PubMed

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data. PMID:27401235

  9. Histogram-Based Calibration Method for Pipeline ADCs.

    PubMed

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the results of the histogram provide information on the errors in the prior stage. The composed histograms reduce the number of required samples and thus the calibration time, and the method can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively.

  10. Interior reconstruction method based on rotation-translation scanning model.

    PubMed

    Wang, Xianchao; Tang, Ziyue; Yan, Bin; Li, Lei; Bao, Shanglian

    2014-01-01

    In various applications of computed tomography (CT), it is common that the reconstructed object extends beyond the field of view (FOV), or we may intend to use a FOV which only covers the region of interest (ROI) for the sake of reducing the radiation dose. These kinds of imaging situations often lead to interior reconstruction problems, which are difficult cases in the CT reconstruction field due to the truncated projection data at every view angle. In this paper, an interior reconstruction method is developed based on a rotation-translation (RT) scanning model. The method is implemented by first scanning the reconstructed region, and then scanning a small region outside the support of the reconstructed object after translating the rotation centre. The differentiated backprojection (DBP) images of the reconstruction region and the small region outside the object can be obtained from the data of the two scans without a rebinning process. Finally, the projection onto convex sets (POCS) algorithm is applied to reconstruct the interior region. Numerical simulations are conducted to validate the proposed reconstruction method.

  11. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  12. Cardiac rate detection method based on the beam splitter prism

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Liu, Xiaohua; Liu, Ming; Zhao, Yuejin; Dong, Liquan; Zhao, Ruirui; Jin, Xiaoli; Zhao, Jingsheng

    2013-09-01

    A new cardiac rate measurement method is proposed. Through the beam splitter prism, a common-path optical system for transmitting and receiving signals is achieved. By the focusing effect of the lens, small-amplitude motion artifacts are inhibited and the signal-to-noise ratio is improved. The cardiac rate is obtained based on PhotoPlethysmoGraphy (PPG). We use an LED as the light source and a photoelectric diode as the receiving tube. The LED and the photoelectric diode are on different sides of the beam splitter prism and together form the optical system. The signal processing and display unit is composed of the signal processing circuit, a data acquisition device and a computer. The light emitted by the modulated LED is collimated by the lens and irradiates the measurement target through the beam splitter prism. The light reflected by the target is focused on the receiving tube through the beam splitter prism and another lens. The signal received by the photoelectric diode is processed by the analog circuit and captured by the data acquisition device. Through filtering and the Fast Fourier Transform, the cardiac rate is obtained. We get the real-time cardiac rate by the moving average method. We experimented with 30 volunteers of different genders and ages, and compared the signals captured by this method to a conventional PPG signal captured concurrently from a finger. The experimental results are in good agreement, and the biggest deviation is about 2 bpm.
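
    A minimal sketch of the rate extraction from a sampled PPG trace is given below; the spectral-peak estimate and the physiological band limits are assumptions for illustration, not the authors' processing chain.

        import numpy as np

        def heart_rate_bpm(ppg, fs, lo=0.7, hi=3.5):
            """Cardiac rate as the dominant FFT peak of the PPG trace
            within an assumed 0.7-3.5 Hz (42-210 bpm) band."""
            x = ppg - ppg.mean()                    # remove DC component
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
            band = (freqs >= lo) & (freqs <= hi)
            return 60.0 * freqs[band][np.argmax(spectrum[band])]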

  13. [A Standing Balance Evaluation Method Based on Largest Lyapunov Exponent].

    PubMed

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang; Zhao, Qing

    2015-12-01

    In order to evaluate the ability of human standing balance scientifically, in this study we propose a new evaluation method based on chaos nonlinear analysis theory. In this method, a sinusoidal acceleration stimulus in the forward/backward direction was applied under the subjects' feet, supplied by a motion platform. In addition, three acceleration sensors, fixed to the shoulder, hip and knee of each subject, were used to capture the balance adjustment dynamic data. Through reconstructing the system phase space, we calculated the largest Lyapunov exponent (LLE) of the dynamic data of the subjects' different segments, and then used the sum of the squared differences between the LLEs (SSDLLE) as the balance capability evaluation index. Finally, 20 subjects' indexes were calculated and compared with the evaluation results of existing methods. The results showed that the SSDLLE was more in line with the subjects' performance during the experiment, and that it could measure the body's balance ability to some extent. Moreover, the results also illustrated that the balance level is determined by the coordination ability of the various joints, and that there may be multiple balance control strategies in the process of maintaining balance. PMID:27079089

  14. Improved reliability analysis method based on the failure assessment diagram

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Zhang, Zheng; Zhong, Qunpeng

    2012-07-01

    With the uncertainties related to operating conditions, in-service non-destructive testing (NDT) measurements and material properties considered in the structural integrity assessment, probabilistic analysis based on the failure assessment diagram (FAD) approach has recently become an important concern. However, the point density revealing the probabilistic distribution characteristics of the assessment points is usually ignored. To obtain more detailed and direct knowledge from the reliability analysis, an improved probabilistic fracture mechanics (PFM) assessment method is proposed. By integrating 2D kernel density estimation (KDE) technology into the traditional probabilistic assessment, the probabilistic density of the randomly distributed assessment points is visualized in the assessment diagram. Moreover, a modified interval sensitivity analysis is implemented and compared with probabilistic sensitivity analysis. The improved reliability analysis method is applied to the assessment of a high pressure pipe containing an axial internal semi-elliptical surface crack. The results indicate that these two methods can give consistent sensitivities of input parameters, but the interval sensitivity analysis is computationally more efficient. Meanwhile, the point density distribution and its contour are plotted in the FAD, thereby better revealing the characteristics of PFM assessment. This study provides a powerful tool for the reliability analysis of critical structures.
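
    The 2D kernel density estimation step can be sketched with SciPy's Gaussian KDE; the FAD coordinates (Lr, Kr) and the regular evaluation grid are assumptions of this sketch, not the authors' implementation.

        import numpy as np
        from scipy.stats import gaussian_kde

        def assessment_point_density(lr, kr, grid_n=200):
            """Estimate the 2-D density of Monte Carlo assessment points
            (Lr, Kr) and return it on a regular grid for contouring."""
            kde = gaussian_kde(np.vstack([lr, kr]))
            xs = np.linspace(lr.min(), lr.max(), grid_n)
            ys = np.linspace(kr.min(), kr.max(), grid_n)
            X, Y = np.meshgrid(xs, ys)
            Z = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)
            return X, Y, Z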

  15. Histogram-Based Calibration Method for Pipeline ADCs.

    PubMed

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the results of the histogram provide information on the errors in the prior stage. The composed histograms reduce the number of required samples and thus the calibration time, and the method can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively. PMID:26070196

  16. Gradient-based methods for full waveform inversion

    NASA Astrophysics Data System (ADS)

    Métivier, L.; Brossier, R.; Operto, S.; Virieux, J.

    2012-12-01

    The minimization of the distance between recorded and synthetic seismograms (namely, the misfit function) for the reconstruction of subsurface velocity models leads to large-scale non-linear inverse problems. These problems are generally solved using gradient-based methods, such as the (preconditioned) steepest-descent method, the (preconditioned) non-linear conjugate gradient method, the Gauss-Newton approach and, more recently, the l-BFGS quasi-Newton method. Except for the Gauss-Newton approach, these methods only require the capability of computing (and storing) the gradient of the misfit function, efficiently performed through the adjoint state method, leading to the resolution of one forward problem and one adjoint problem per source. However, the inverse Hessian operator could be considered for compensating for target illumination variations coming from the acquisition geometry and medium velocity variations. This operator acts as a filter in the model space when the velocity is updated. For example, the l-BFGS method estimates an approximation of the inverse of the Hessian from the gradients of previous iterations without significant extra computational cost. The Gauss-Newton approximation of the Hessian not only adds an extra computational cost but also neglects multi-scattering effects. Exact Newton methods do consider multi-scattering effects and may be more accurate than the l-BFGS approximation. For such investigation, we shall introduce the second-order adjoint formulation for the efficient estimation of the product of the Hessian operator and any vector in the model space. Using this product, we may update the velocity model through the resolution of the linear system associated with the computation of the Newton descent direction, using a "matrix free" iterative linear solver such as the conjugate gradient method. This implementation could be performed for Newton approaches (and also the Gauss-Newton approximation) and requires an additional state and adjoint
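
    The l-BFGS approximation mentioned above is commonly realized with the two-loop recursion, which applies an approximate inverse Hessian to the gradient using only stored model steps and gradient differences. The sketch below is a generic implementation of that standard formulation, not code from any particular FWI package.

```python
import numpy as np

def lbfgs_direction(grad, s_list, y_list):
    """Return -H_approx @ grad from the m most recent steps s_k and gradient
    differences y_k (s_k = x_{k+1} - x_k, y_k = g_{k+1} - g_k), oldest first."""
    q = grad.copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # first loop, newest first
        rho = 1.0 / np.dot(y, s)
        a = rho * np.dot(s, q)
        q -= a * y
        alphas.append((a, rho))
    if s_list:                                             # initial Hessian scaling
        s, y = s_list[-1], y_list[-1]
        q *= np.dot(s, y) / np.dot(y, y)
    for (a, rho), s, y in zip(reversed(alphas), s_list, y_list):  # second loop
        b = rho * np.dot(y, q)
        q += (a - b) * s
    return -q   # quasi-Newton descent direction
```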

  17. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCA), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process included comparing images from different periods. The success rates in identifying objects were low, and most results were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing algorithms and computer vision, together with the development of digital aerial cameras with an NIR band and Very High Resolution satellites, have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, multispectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step toward updating the NTDB at the Survey of Israel.

  18. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny.
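
    Whether a given dissimilarity map already lies in the tree-metric fan can be tested with the classical four-point condition, as in the minimal sketch below; the example matrix is an illustrative quartet-tree metric.

```python
import numpy as np
from itertools import combinations

def is_tree_metric(D, tol=1e-9):
    """Four-point condition: for all i,j,k,l the two largest of the three sums
    D[i,j]+D[k,l], D[i,k]+D[j,l], D[i,l]+D[j,k] must be equal."""
    n = D.shape[0]
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted([D[i, j] + D[k, l], D[i, k] + D[j, l], D[i, l] + D[j, k]])
        if sums[2] - sums[1] > tol:
            return False
    return True

# a metric from the quartet tree ab|cd with all five edges of length 1
D = np.array([[0, 2, 3, 3],
              [2, 0, 3, 3],
              [3, 3, 0, 2],
              [3, 3, 2, 0]], float)
print(is_tree_metric(D))   # True
```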

  19. Change detection methods for distinction task of stochastic textures based on nonparametric method

    NASA Astrophysics Data System (ADS)

    Sultanov, Albert K.

    2016-03-01

    This article describes the use of a nonparametric method for detecting multivariate changes in random processes, intended for image processing. The aim is to find the borders of irregular phenomena against background terrain. For the task of detecting a change and estimating the change point in a sequential setting, test statistics based on the values of sampling characteristic functions are proposed. The corresponding criterion has a predetermined asymptotic significance level over a wide range of alternatives. The work also proposes a texture segmentation algorithm for the two-dimensional case, given as a sequence of column-wise and row-wise processing operations on the test-statistic values obtained while scanning the image. Test results are reported.

  20. Case-based explanation of non-case-based learning methods.

    PubMed

    Caruana, R; Kangarloo, H; Dionisio, J D; Sinha, U; Johnson, D

    1999-01-01

    We show how to generate case-based explanations for non-case-based learning methods such as artificial neural nets or decision trees. The method uses the trained model (e.g., the neural net or the decision tree) as a distance metric to determine which cases in the training set are most similar to the case that needs to be explained. This approach is well suited to medical domains, where it is important to understand predictions made by complex machine learning models, and where training and clinical practice make users adept at case interpretation.
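
    One concrete way to realize a model-defined distance is the random-forest proximity: two cases are similar when many trees route them to the same leaf. The sketch below uses that reading; the paper's exact metric for neural nets or single decision trees may differ, and the dataset is a stand-in.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def explain_by_cases(x_new, X_train, model, k=3):
    """Return indices and scores of the k training cases most similar to x_new,
    where similarity = fraction of trees assigning both to the same leaf."""
    leaves_train = model.apply(X_train)            # (n_samples, n_trees) leaf ids
    leaves_new = model.apply(x_new.reshape(1, -1))[0]
    proximity = (leaves_train == leaves_new).mean(axis=1)
    order = np.argsort(proximity)[::-1][:k]
    return order, proximity[order]

# the retrieved training cases serve as the case-based explanation
idx, prox = explain_by_cases(X[0], X, model)
print(idx, prox)
```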

  1. Artificial Boundary Conditions Based on the Difference Potentials Method

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1996-01-01

    While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting the artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy of computations. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC's technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct such ABC's that combine the advantages relevant to the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then

  2. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    SciTech Connect

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  3. Physics-Based Imaging Methods for Terahertz Nondestructive Evaluation Applications

    NASA Astrophysics Data System (ADS)

    Kniffin, Gabriel Paul

    Lying between the microwave and far infrared (IR) regions, the "terahertz gap" is a relatively unexplored frequency band in the electromagnetic spectrum that exhibits a unique combination of properties from its neighbors. As in IR, many materials have characteristic absorption spectra in the terahertz (THz) band, facilitating the spectroscopic "fingerprinting" of compounds such as drugs and explosives. In addition, non-polar dielectric materials such as clothing, paper, and plastic are transparent to THz, just as they are to microwaves and millimeter waves. These factors, combined with sub-millimeter wavelengths and non-ionizing energy levels, make sensing in the THz band uniquely suited for many NDE applications. In a typical nondestructive test, the objective is to detect a feature of interest within the object and provide an accurate estimate of some geometrical property of the feature. Notable examples include the thickness of a pharmaceutical tablet coating layer or the 3D location, size, and shape of a flaw or defect in an integrated circuit. While the material properties of the object under test are often tightly controlled and are generally known a priori, many objects of interest exhibit irregular surface topographies such as varying degrees of curvature over the extent of their surfaces. Common THz pulsed imaging (TPI) methods originally developed for objects with planar surfaces have been adapted for objects with curved surfaces through use of mechanical scanning procedures in which measurements are taken at normal incidence over the extent of the surface. While effective, these methods often require expensive robotic arm assemblies, the cost and complexity of which would likely be prohibitive should a large volume of tests need to be carried out on a production line. This work presents a robust and efficient physics-based image processing approach based on the mature field of parabolic equation methods, common to undersea acoustics, seismology

  4. Ascent and reentry guidance concept based on NLP-methods

    NASA Astrophysics Data System (ADS)

    Gräßlin, M. H.; Telaar, J.; Schöttle, U. M.

    2004-08-01

    This paper addresses the application of an autonomous guidance concept to the ascent flight of the reusable launch vehicle Hopper and to the reentry mission of the space plane X-38. Recently, guidance requirements with respect to autonomy, accuracy and mission flexibility have steadily increased for RLV applications. Nonlinear programming (NLP)-based guidance strategies have been proposed that offer the potential to meet these demands. Autonomous guidance is achieved by combining onboard flight path prediction with NLP methods for flight optimization. Such guidance strategies hold promise for reduced pre-mission trajectory-planning analyses and for improved adaptability to non-nominal mission conditions. The concept's applicability, autonomy and performance are discussed on the basis of numerical results obtained with a flight-simulation environment.

  5. Image processing methods for visual prostheses based on DSP

    NASA Astrophysics Data System (ADS)

    Liu, Huwei; Zhao, Ying; Tian, Yukun; Ren, Qiushi; Chai, Xinyu

    2008-12-01

    Visual prostheses for extreme vision impairment have come closer to reality during the past few years. The task of this research has been to design external devices and to study image processing algorithms and methods for images of different complexity. We have developed a real-time system, based on a DSP (digital signal processor), capable of image capture and processing to obtain the most useful and important image features for recognition and simulation experiments. Beyond developing the hardware system, we introduce algorithms such as resolution reduction, information extraction, dilation and erosion, square (circular) pixelization and Gaussian pixelization. We also classify images into stages of different complexity: simple images, moderately complex images and complex images. As a result, this work yields the signal needed for transmission to the electrode array, as well as images for simulation experiments.

  6. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, Arthur J.; Menchhofer, Paul A.

    1995-01-01

    A method and apparatus for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product.

  7. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, A.J.; Menchhofer, P.A.

    1995-12-19

    A method and apparatus are disclosed for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product. 10 figs.

  8. Note: A manifold ranking based saliency detection method for camera

    NASA Astrophysics Data System (ADS)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research on salient object regions in natural scenes has attracted much attention in computer vision and is widely used in applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system that performs better by using both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped into connected areas with compact spatial distributions. Experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are clearly improved.
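
    The ranking step can be written in closed form. Below is a minimal sketch of manifold ranking with background queries on a node-affinity graph, following the usual normalized-graph formulation; the toy affinity matrix and query vector stand in for the paper's superpixel graph construction.

```python
import numpy as np

def manifold_rank(W, y, alpha=0.99):
    """Closed-form manifold ranking f* = (I - alpha*S)^-1 y with the
    symmetrically normalized affinity S = D^-1/2 W D^-1/2."""
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, y)

# toy usage: a 5-node chain graph; boundary nodes 0 and 4 are background queries
W = np.array([[0, 1, 0, 0, 0],
              [1, 0, 1, 0, 0],
              [0, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
y = np.array([1, 0, 0, 0, 1.0])           # background (boundary) queries
scores = manifold_rank(W, y)
saliency = scores.max() - scores          # high where unlike the background
print(saliency)
```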

  9. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    NASA Astrophysics Data System (ADS)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with a larger payload-carrying capacity, we need to identify simple and efficient flapping kinematics. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) produce sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and we find that introducing optimal wing flexibility further enhances the lift.

  10. A cosmological hydrodynamic code based on the piecewise parabolic method

    NASA Astrophysics Data System (ADS)

    Gheller, Claudio; Pantano, Ornella; Moscardini, Lauro

    1998-04-01

    We present a hydrodynamical code for cosmological simulations which uses the piecewise parabolic method (PPM) to follow the dynamics of the gas component and an N-body particle-mesh algorithm for the evolution of the collisionless component. The gravitational interaction between the two components is regulated by the Poisson equation, which is solved by a standard fast Fourier transform (FFT) procedure. In order to simulate cosmological flows, we have introduced several modifications to the original PPM scheme, which we describe in detail. Various tests of the code are presented, including adiabatic expansion, single and multiple pancake formation, and three-dimensional cosmological simulations with initial conditions based on the cold dark matter scenario.

  11. Computational nano-optic technology based on discrete sources method

    NASA Astrophysics Data System (ADS)

    Eremina, Elena; Eremin, Yuri; Wriedt, Thomas

    2011-03-01

    Continuous advances in the fabrication and utilization of nanostructures for different applications require adequate tools for the analysis and characterization of such structures. Computer simulation of the light scattered by nanostructures is a reliable way to investigate their properties and functional abilities. In particular, nano-features embedded in layered structures are of growing interest for many practical applications. Mathematical modeling of light scattering allows us to predict the functional properties and behavior of nanostructures prior to their fabrication, which helps to reduce manufacturing and experimental costs. In the present paper, the Discrete Sources Method (DSM) is used as a tool of computational nano-optics. Mathematical models based on the DSM are applied to several practical applications. We demonstrate that computer simulation analysis not only allows prediction and investigation of system properties, but can also help in the development and design of new setups.

  12. Measurement matrix optimization method based on matrix orthogonal similarity transformation

    NASA Astrophysics Data System (ADS)

    Pan, Jinfeng

    2016-05-01

    Optimization of the measurement matrix is one of the important research aspects of compressive sensing theory. A measurement matrix optimization method is presented based on the orthogonal similarity transformation of the information operator's Gram matrix. Given that the information operator's Gram matrix is a singular symmetric matrix, a simplified orthogonal similarity transformation is deduced, and thus the simplified diagonal matrix that is orthogonally similar to it is obtained. An approximation of the Gram matrix is then obtained by setting all the nonzero diagonal entries of the simplified diagonal matrix to their average value. An optimized measurement matrix can thus be acquired from its relationship with the information operator. Experimental results show that the optimized measurement matrix is less coherent with dictionaries than the random measurement matrix, and the relative signal recovery error declines when the proposed measurement matrix is used.
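
    A minimal sketch of the eigenvalue-averaging step follows, assuming the information operator is A = Phi Psi (measurement matrix times dictionary). Mapping the optimized factor back to a measurement matrix via the pseudo-inverse of Psi is one plausible reading of "its relationship with the information operator", not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, d = 32, 64, 128                 # measurements, signal dim, dictionary atoms
Phi = rng.standard_normal((m, n))     # initial random measurement matrix
Psi = rng.standard_normal((n, d))     # sparsifying dictionary (placeholder)

A = Phi @ Psi                         # information operator
G = A.T @ A                           # singular symmetric Gram matrix, rank <= m

w, Q = np.linalg.eigh(G)              # orthogonal similarity: G = Q diag(w) Q^T
nonzero = w > 1e-10 * w.max()
w_avg = w[nonzero].mean()             # set all nonzero eigenvalues to their mean

B = np.sqrt(w_avg) * Q[:, nonzero].T  # factor of the optimized Gram: G_opt = B^T B
Phi_opt = B @ np.linalg.pinv(Psi)     # map back to an m x n measurement matrix

def coherence(M):
    """Largest absolute normalized inner product between distinct columns."""
    Mn = M / np.linalg.norm(M, axis=0)
    return np.abs(np.triu(Mn.T @ Mn, 1)).max()

print(coherence(Phi @ Psi), coherence(Phi_opt @ Psi))  # coherence before vs after
```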

  13. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357

  14. An image feature data compressing method based on product RSOM

    NASA Astrophysics Data System (ADS)

    Wang, Jianming; Liu, Lihua; Xia, Shengping

    2015-12-01

    Data explosion and information redundancy are the main characteristics of the era of big data. Digging out valuable information from massive data is the premise of efficient information processing and a key technology in object recognition with massive feature databases. In large-scale image processing, both the massive image data and the high-dimensional image features pose great challenges to object recognition and information retrieval. As with big data, a large-scale image feature database, which contains an extensive amount of information redundancy, can also be quantitatively represented by finite clustering models without degrading recognition performance. Inspired by the ideas of product quantization and high-dimensional feature division, a data compression method based on the recursive self-organizing mapping (RSOM) algorithm is proposed in this paper.

  15. Classification data mining method based on dynamic RBF neural networks

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the rapid development of the Internet, the capacity to manufacture and collect data using information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining technology has developed rapidly to meet this need. But data mining (DM) often faces large volumes of data that are noisy, disordered and nonlinear. Fortunately, artificial neural networks (ANN) are suitable for solving the aforementioned problems of DM because of their merits of good robustness, adaptability, parallel processing, distributed memory and high error tolerance. This paper gives a detailed discussion of the application of ANN methods in DM based on an analysis of various data mining technologies, and it especially focuses on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment the training dataset is variable, so a batch learning algorithm (e.g., OLS), which generates plenty of unnecessary retraining, has low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA can adaptively adjust the parameters of RBF networks by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.

  16. A quantitative dimming method for LED based on PWM

    NASA Astrophysics Data System (ADS)

    Wang, Jiyong; Mou, Tongsheng; Wang, Jianping; Tian, Xiaoqing

    2012-10-01

    Traditional light sources were required to provide stable and uniform illumination for living or working environments in consideration of human visual function. This requirement was reasonable until the non-visual functions of the ganglion cells in the photosensitive layer of the retina were discovered. A new generation of lighting technology, however, is emerging, based on novel lighting materials such as LEDs and on photobiological effects on human physiology and behavior. To realize dynamic LED lighting whose intensity and color are adjustable to the needs of photobiological effects, a quantitative dimming method based on pulse width modulation (PWM) and light-mixing technology is presented. Beginning with two-channel PWM, this paper demonstrates the determinacy and limitations of PWM dimming for realizing expected photometric and colorimetric quantities (EPCQ), based on an analysis of geometrical, photometric, colorimetric and electrodynamic constraints. A quantitative model which maps the EPCQ onto duty cycles is finally established. The deduced model suggests that determinacy is unique to two-channel and three-channel PWM, whereas the limitation is an inevitable feature of all multi-channel configurations. To examine the model, a light-mixing experiment with two kinds of white LEDs simulated variations of illuminance and correlated color temperature (CCT) from dawn to midday. The mean deviations between theoretical and measured values were 15 lx and 23 K, respectively. The results show that this method can effectively realize a light spectrum meeting specific EPCQ requirements, and it provides a theoretical basis and a practical way for dynamic LED lighting.
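
    Under additive light mixing, the tristimulus values of the mixed light are linear in the channel duty cycles, so target quantities reduce to a linear solve. The sketch below treats the two-channel case in that spirit; the channel tristimulus data are placeholders, and with three equations and two unknowns the solve is a least-squares fit, echoing the limitation noted above.

```python
import numpy as np

# tristimulus values (X, Y, Z) of each channel at 100% duty cycle (placeholders)
ch_warm = np.array([1100.0, 1000.0, 350.0])    # warm-white LED
ch_cool = np.array([900.0, 1000.0, 1400.0])    # cool-white LED

def duty_cycles(target_Y, target_xy):
    """Solve for duty cycles (d1, d2) whose mix approximates an illuminance-
    proportional Y and chromaticity (x, y); feasible only if 0 <= d <= 1."""
    x, y = target_xy
    X = target_Y * x / y                       # xyY -> XYZ
    Z = target_Y * (1 - x - y) / y
    M = np.column_stack([ch_warm, ch_cool])    # (3, 2): XYZ per unit duty cycle
    d, *_ = np.linalg.lstsq(M, np.array([X, target_Y, Z]), rcond=None)
    return d

d = duty_cycles(target_Y=800.0, target_xy=(0.38, 0.38))
print(d, "feasible" if np.all((d >= 0) & (d <= 1)) else "out of gamut")
```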

  17. Data Bases in Writing: Method, Practice, and Metaphor.

    ERIC Educational Resources Information Center

    Schwartz, Helen J.

    1985-01-01

    Points out the need for informed and experienced users of data bases. Discusses the definition of a data base, creating a data base for research, comparison use, and checking written text as a data base. (EL)

  18. Correlation-Based Image Reconstruction Methods for Magnetic Particle Imaging

    NASA Astrophysics Data System (ADS)

    Ishihara, Yasutoshi; Kuwabara, Tsuyoshi; Honma, Takumi; Nakagawa, Yohei

    Magnetic particle imaging (MPI), in which the nonlinear interaction between internally administered magnetic nanoparticles (MNPs) and electromagnetic waves irradiated from outside of the body is utilized, has attracted attention for its potential to achieve early diagnosis of diseases such as cancer. In MPI, the local magnetic field distribution is scanned, and the magnetization signal from MNPs within a selected region is detected. However, the signal sensitivity and image resolution are degraded by interference from magnetization signals generated by MNPs outside of the selected region, mainly because of imperfections (limited gradients) in the local magnetic field distribution. Here, we propose new methods based on correlation information between the observed signal and the system function—defined as the interaction between the magnetic field distribution and the magnetizing properties of MNPs. We performed numerical analyses and found that, although the images were somewhat blurred, image artifacts could be significantly reduced and accurate images could be reconstructed without the inverse-matrix operation used in conventional image reconstruction methods.
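
    The contrast with inverse-matrix reconstruction can be shown in a few lines. Below is a minimal sketch in which each voxel value is the normalized correlation of the observed signal with that voxel's column of the system function; the system matrix and phantom are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_time, n_voxels = 400, 100
S = rng.standard_normal((n_time, n_voxels))     # system function (placeholder)
x_true = np.zeros(n_voxels); x_true[40:45] = 1  # simple 1D phantom
signal = S @ x_true + 0.05 * rng.standard_normal(n_time)

# conventional approach: regularized inverse-matrix (least-squares) solution
x_inv = np.linalg.lstsq(S, signal, rcond=1e-2)[0]

# correlation-based approach: matched filtering with each system-function column
S_norm = S / np.linalg.norm(S, axis=0)
x_corr = S_norm.T @ (signal / np.linalg.norm(signal))

print(np.argmax(x_corr), np.argmax(x_inv))      # both peak inside the phantom
```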

  19. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. In this paper, an automatic image classification and outlier identification method based on the density of superpixel cluster centers is presented. Pixel location coordinates and gray values are used to compute density and distance, from which automatic classification and outlier extraction are achieved. Because a dramatic increase in the number of pixels greatly increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified automatically and outliers are identified. Extensive experiments show that our method requires no human intervention, computes faster than density clustering on raw pixels, and performs automated image classification and outlier extraction effectively.
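
    Read as a density-peaks-style procedure, the computation assigns each superpixel a local density rho and a distance delta to the nearest higher-density point; cluster centers score high on both, while outliers combine low rho with large delta. The sketch below illustrates this on placeholder (row, column, gray value) features.

```python
import numpy as np

rng = np.random.default_rng(0)
# features per superpixel: (row, col, mean gray value), as in the abstract
F = np.vstack([rng.normal([20, 20, 100], 3, (50, 3)),
               rng.normal([60, 60, 180], 3, (50, 3)),
               [[90, 10, 30]]])                    # one isolated outlier

D = np.linalg.norm(F[:, None] - F[None, :], axis=2)
dc = np.percentile(D[D > 0], 5)                    # cutoff distance
rho = (D < dc).sum(axis=1) - 1                     # local density

delta = np.empty(len(F))                           # distance to higher density
for i in range(len(F)):
    higher = rho > rho[i]
    delta[i] = D[i, higher].min() if higher.any() else D[i].max()

gamma = rho * delta                                # cluster-center score
centers = np.argsort(gamma)[-2:]                   # pick two cluster centers
outliers = np.where((delta > 3 * dc) & (rho <= 1))[0]
print(centers, outliers)
```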

  20. Molecular Dynamics and Energy Minimization Based on Embedded Atom Method

    1995-03-01

    This program performs atomic-scale computer simulations of the structure and dynamics of metallic systems using energetics based on the Embedded Atom Method. The program performs two types of calculations. First, it performs local energy minimization of all atomic positions to determine ground state and saddle point energies and structures. Second, it performs molecular dynamics simulations to determine the thermodynamics or microscopic dynamics of the system. In both cases, various constraints can be applied to the system. The volume of the system can be varied automatically to achieve any desired external pressure. The temperature in molecular dynamics simulations can be controlled by a variety of methods. Further, the temperature control can be applied either to the entire system or just to a subset of the atoms that would act as a thermal source/sink. The motion of one or more of the atoms can be constrained either to simulate the effects of bulk boundary conditions or to facilitate the determination of saddle point configurations. The simulations are performed with periodic boundary conditions.

  1. A wavelet-based method for multispectral face recognition

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Zhang, Chaoyang; Zhou, Zhaoxian

    2012-06-01

    A wavelet-based method for multispectral face recognition is proposed in this paper. The Gabor wavelet transform is a common tool for orientation analysis of a 2D image, whereas the Hamming distance is an efficient distance measurement for face identification. Specifically, at each frequency band, an index number representing the strongest orientational response is selected and then encoded in binary format to facilitate the Hamming distance calculation. Multiband orientation bit codes are then organized into a face pattern byte (FPB) using order statistics. With the FPB, Hamming distances are calculated and compared to achieve face identification. The FPB algorithm was initially created using thermal images, while the EBGM method originated with visible images. When two or more spectral images of the same subject are available, identification accuracy and reliability can be enhanced using score fusion. We compare the identification performance of five recognition algorithms applied to three-band (visible, near infrared, thermal) face images, and explore the fusion performance of combining the multiple scores from three recognition algorithms and from the three-band face images, respectively. The experimental results show that the FPB is the best recognition algorithm, the HMM yields the best fusion result, and the thermal dataset gives the best fusion performance of the three datasets.

  2. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component of tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic 2D reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated the method using simulated PET data. We present reconstructed images from several phantoms. Sinograms were generated at various Poisson noise levels, and 20 noise realizations were created at each level. In addition to visual comparisons of the reconstructed images, the contrast was determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  3. Trinocular stereo vision method based on mesh candidates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Xu, Gang; Li, Haibin

    2010-10-01

    One of the most interesting goals of machine vision is 3D structure recovery of scenes. Such recovery has many applications, including object recognition, reverse engineering, automatic cartography and autonomous robot navigation. To meet the demand of measuring complex prototypes in reverse engineering, a trinocular stereo vision method based on mesh candidates was proposed. After calibration of the cameras, the joint field of view can be defined in the world coordinate system. A mesh grid is established along the coordinate axes, and the mesh nodes are considered potential depth data of the object surface. By measuring the similarity of the correspondence pairs projected from a certain group of candidates, the depth data can be obtained readily. With mesh node optimization, the interval between neighboring nodes in the depth direction can be designed reasonably. Potential ambiguity can be eliminated efficiently in correspondence matching with the constraint of a third camera. The cameras can be treated as two independent pairs, left-right and left-centre. Due to multiple peaks in the correlation values, the binocular method alone may not achieve the required measurement accuracy, so another image pair is involved if the confidence coefficient is less than the preset threshold. The depth is determined by the highest sum of correlations of both camera pairs. The measurement system was simulated using 3DS MAX and Matlab software for reconstructing the surface of the object. The experimental results show that the trinocular vision system performs well in depth measurement.

  4. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that, by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data accordingly can be repeated until the desired degree of registration is achieved. The method presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data itself.
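
    Treating the misalignment of a local area as a pure translation, the shift between two binary boundary maps can be estimated from the peak of their cross-correlation. The sketch below uses an FFT-based correlation on synthetic outline maps; it illustrates the translation step only, not the full iterative rotation-and-translation procedure.

```python
import numpy as np
from scipy.signal import fftconvolve

def estimate_shift(ref, moved):
    """Return the (dy, dx) translation maximizing the cross-correlation of two
    equally sized binary boundary maps."""
    corr = fftconvolve(moved, ref[::-1, ::-1], mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = np.array(corr.shape) // 2
    return np.array(peak) - center

# synthetic boundary map: a rectangle outline, plus a copy shifted by (3, -5)
ref = np.zeros((64, 64))
ref[20:40, 20] = ref[20:40, 44] = 1     # vertical edges
ref[20, 20:45] = ref[39, 20:45] = 1     # horizontal edges
moved = np.roll(ref, (3, -5), axis=(0, 1))
print(estimate_shift(ref, moved))       # -> [ 3 -5 ]
```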

  5. Bacteria counting method based on polyaniline/bacteria thin film.

    PubMed

    Zhihua, Li; Xuetao, Hu; Jiyong, Shi; Xiaobo, Zou; Xiaowei, Huang; Xucheng, Zhou; Tahir, Haroon Elrasheid; Holmes, Mel; Povey, Malcolm

    2016-07-15

    A simple and rapid bacteria counting method based on a polyaniline (PANI)/bacteria thin film is proposed. Because immobilized bacteria hinder the deposition of PANI on a glassy carbon electrode (GCE), PANI/bacteria thin films containing less PANI are obtained as the bacteria concentration increases. The prepared PANI/bacteria film was characterized by cyclic voltammetry (CV) to provide a quantitative index for determining the bacteria count, and electrochemical impedance spectroscopy (EIS) was also performed to further investigate the differences between the PANI/bacteria films. A good linear relationship between the CV peak currents and the log total count of bacteria (Bacillus subtilis) was established with the equation Y = -30.413X + 272.560 (R² = 0.982) over the range of 5.3×10⁴ to 5.3×10⁸ CFU mL⁻¹, and the film also showed acceptable stability, reproducibility and switching ability. The proposed method is feasible for simple and rapid counting of bacteria. PMID:26921555
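
    A minimal sketch of applying the reported calibration line, assuming Y is the CV peak current in the authors' units and X = log10 of the total count:

```python
def bacteria_count(peak_current):
    """Invert the calibration line Y = -30.413*X + 272.560 to get CFU/mL."""
    log_count = (peak_current - 272.560) / (-30.413)
    return 10 ** log_count   # valid over ~5.3e4 to 5.3e8 CFU/mL

print(f"{bacteria_count(100.0):.2e} CFU/mL")
```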

  6. Mass spectrometry-based carboxyl footprinting of proteins: Method evaluation

    SciTech Connect

    Zhang, Hao; Wen, Jianzhong; Huang, Richard Y-C.; Blankenship, Robert E.; Gross, Michael L.

    2012-02-01

    Protein structure determines function in biology, and a variety of approaches have been employed to obtain structural information about proteins. Mass spectrometry-based protein footprinting is one fast-growing approach. One labeling-based footprinting approach is the use of a water-soluble carbodiimide, 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC), and glycine ethyl ester (GEE) to modify solvent-accessible carboxyl groups on glutamate (E) and aspartate (D). This paper describes method development of carboxyl-group modification in protein footprinting. The modification protocol was evaluated using the protein calmodulin as a model. Because carboxyl-group modification is a slow reaction relative to protein folding and unfolding, there is a concern that modifications at certain sites may induce protein unfolding and lead to additional modification at sites that are not solvent-accessible in the wild-type protein. We investigated this possibility by using hydrogen-deuterium amide exchange (H/DX). The study demonstrated that application of carboxyl-group modification in probing conformational changes in calmodulin induced by Ca²⁺ binding provides useful information that is not compromised by modification-induced protein unfolding.

  7. A GIS-based method for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Kleomenis; Stathopoulos, Nikos; Psarogiannis, Athanasios; Penteris, Dimitris; Tsiakos, Chrisovalantis; Karagiannopoulou, Aikaterini; Krikigianni, Eleni; Karymbalis, Efthimios; Chalkias, Christos

    2016-04-01

    Floods are global physical hazards with negative environmental and socio-economic impacts on local and regional scales. The technological evolution of recent decades, especially in the field of geoinformatics, has offered new advantages in hydrological modelling, and this study seeks to use this technology to quantify flood risk. The study area is an ungauged catchment; using mainly GIS-based hydrological and geomorphological analysis together with a GIS-based distributed unit hydrograph model, a series of outcomes was obtained. More specifically, this paper examined the behaviour of the Kladeos basin (Peloponnese, Greece) using real rainfall data as well as hypothetical storms. The hydrological analysis was carried out using a digital elevation model with a 5×5 m pixel size, and the quantitative drainage basin characteristics were calculated and studied in terms of stream order and its contribution to the flood. Unit hydrographs are, as is known, useful when there is a lack of data; in this work, based on the time-area method, a sequence of flood risk assessments was made using GIS technology. Essentially, the proposed methodology estimates parameters such as discharge and flow velocity in order to quantify flood risk. Keywords: flood risk assessment quantification; GIS; hydrological analysis; geomorphological analysis.
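
    The time-area idea reduces to a convolution: the histogram of contributing area per travel-time bin acts as the transfer function applied to the effective rainfall series. Below is a minimal sketch with placeholder travel times and rainfall; the cell size matches the 5×5 m DEM mentioned above.

```python
import numpy as np

dt = 0.5                                    # time step (h)
rng = np.random.default_rng(0)
travel_time = rng.uniform(0, 6, 10_000)     # h, per DEM cell (placeholder)
cell_area = 25.0                            # m^2 (5 m x 5 m DEM cells)

# time-area histogram: contributing area reaching the outlet in each time bin
bins = np.arange(0, travel_time.max() + dt, dt)
area_hist, _ = np.histogram(travel_time, bins=bins)
area_hist = area_hist * cell_area           # m^2 per bin

rain_excess = np.array([0.0, 2.0, 5.0, 3.0, 1.0, 0.0]) / 1000  # m per dt

# outlet discharge: convolution of rainfall excess with the time-area curve
Q = np.convolve(rain_excess, area_hist) / (dt * 3600)          # m^3/s
print(Q.round(3))
```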

  8. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when one organism's data set is used for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential of our method for accurately predicting the operons of any other organism. Our operon predictions for fully sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580
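
    A minimal sketch of the two-feature classifier follows, using synthetic stand-ins for the curated gene-pair data (intergenic distance in bp and a STRING association score); the network size and preprocessing are illustrative, not the paper's exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n = 1000
# same-operon pairs: short intergenic distances, high STRING scores (synthetic)
pos = np.column_stack([rng.normal(20, 40, n), rng.uniform(0.6, 1.0, n)])
# operon-boundary pairs: long distances, low scores (synthetic)
neg = np.column_stack([rng.normal(250, 120, n), rng.uniform(0.0, 0.5, n)])
X = np.vstack([pos, neg])
y = np.r_[np.ones(n), np.zeros(n)]

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(5,), max_iter=2000,
                                  random_state=0)).fit(X, y)

# predict whether a contiguous gene pair is co-transcribed
print(clf.predict([[35, 0.9], [400, 0.1]]))   # -> [1. 0.]
```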

  9. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    DOEpatents

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  10. An image fusion method based on biorthogonal wavelet

    NASA Astrophysics Data System (ADS)

    Li, Jianlin; Yu, Jiancheng; Sun, Shengli

    2008-03-01

    Image fusion processes and utilizes source images, combining their complementary information, to achieve a more objective and essential understanding of the same object. Recently, image fusion has been extensively applied in many fields such as medical imaging, microphotographic imaging, remote sensing, and computer vision as well as robotics. Various methods have been proposed in past years, such as pyramid decomposition and wavelet transform algorithms. Thanks to its multi-resolution property, the wavelet transform has been applied successfully in image processing. Another advantage of the wavelet transform is that it can be realized much more easily in hardware, because its data format is very simple, so it can save a lot of resources; besides, to some extent, it can address the real-time problem of fusing very large images. However, because orthogonal wavelet filters lack linear phase, phase distortion leads to distortion of image edges. To make up for this shortcoming, the biorthogonal wavelet is introduced here, and a novel image fusion scheme based on biorthogonal wavelet decomposition is presented in this paper. For the low-frequency and high-frequency wavelet decomposition coefficients, a local-area-energy-weighted-coefficient fusion rule is adopted, with different thresholds set for the low-frequency and high-frequency bands. Based on the biorthogonal wavelet transform and a traditional pyramid decomposition algorithm, an MMW image and a visible image are fused in the experiment. Compared with traditional pyramid decomposition, the biorthogonal-wavelet-based fusion scheme is more capable of retaining and extracting image information and of compensating for edge distortion, so it has wide application potential.
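
    A minimal sketch of a local-area-energy-weighted fusion rule with a biorthogonal wavelet is given below, using PyWavelets; the wavelet, decomposition level and window size are illustrative choices, and the two source images are synthetic placeholders.

```python
import numpy as np
import pywt
from scipy.ndimage import uniform_filter

def fuse(img_a, img_b, wavelet='bior2.2', level=2, win=3):
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)

    def blend(a, b):
        # local-area energy of each coefficient map sets the fusion weights
        ea = uniform_filter(a * a, win)
        eb = uniform_filter(b * b, win)
        w = ea / (ea + eb + 1e-12)
        return w * a + (1 - w) * b

    fused = [blend(ca[0], cb[0])]                      # low-frequency band
    for da, db in zip(ca[1:], cb[1:]):                 # (H, V, D) detail bands
        fused.append(tuple(blend(a, b) for a, b in zip(da, db)))
    return pywt.waverec2(fused, wavelet)

a = np.zeros((128, 128)); a[:, :64] = 255.0            # placeholder source images
b = np.zeros((128, 128)); b[32:96, 32:96] = 200.0
print(fuse(a, b).shape)
```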

  11. Post-Fragmentation Whole Genome Amplification-Based Method

    NASA Technical Reports Server (NTRS)

    Benardini, James; LaDuc, Myron T.; Langmore, John

    2011-01-01

    This innovation is derived from a proprietary amplification scheme that is based upon random fragmentation of the genome into a series of short, overlapping templates. The resulting shorter DNA strands (<400 bp) constitute a library of DNA fragments with defined 3' and 5' termini. Specific primers to these termini are then used to isothermally amplify this library into potentially unlimited quantities that can be used immediately for multiple downstream applications including gel electrophoresis, quantitative polymerase chain reaction (QPCR), comparative genomic hybridization microarray, SNP analysis, and sequencing. The standard reaction can be performed with minimal hands-on time and can produce amplified DNA in as little as three hours. Post-fragmentation whole genome amplification-based technology provides a robust and accurate method of amplifying femtogram levels of starting material into microgram yields with no detectable allele bias. The amplified DNA also facilitates the preservation of samples (spacecraft samples) by amplifying scarce amounts of template DNA into microgram concentrations in just a few hours. With further optimization, this could be a feasible technology for sample preservation in potential future sample return missions. The research and technology development described here can be pivotal in dealing with backward/forward biological contamination from planetary missions. Such efforts rely heavily on an increasing understanding of the burden and diversity of microorganisms present on spacecraft surfaces throughout assembly and testing. The development and implementation of these technologies could significantly improve the comprehensiveness and resolving power of spacecraft-associated microbial population censuses, and are important to the continued evolution and advancement of planetary protection capabilities. Current molecular procedures for assaying spacecraft-associated microbial burden and diversity have

  12. Adaptive enhancement method of infrared image based on scene feature

    NASA Astrophysics Data System (ADS)

    Zhang, Xiao; Bai, Tingzhu; Shang, Fei

    2008-12-01

    All objects emit radiation in amounts related to their temperature and their ability to emit radiation, and an infrared image directly shows this invisible infrared radiation. Because of these advantages, infrared imaging technology is applied in many fields. Compared with visible images, however, the disadvantages of infrared images are obvious: low luminance, low contrast and an inconspicuous difference between target and background. The aim of infrared image enhancement is to improve the interpretability or perception of information in infrared images for human viewers, or to provide better input for other automated image processing techniques. Most adaptive image enhancement algorithms are based mainly on the gray-scale distribution of the infrared image and are not associated with the actual features of the image scene, so the enhancement is poorly targeted and not conducive to infrared surveillance applications. In this paper we develop a scene-feature-based algorithm to enhance the contrast of infrared images adaptively. First, after analyzing the scene features of different infrared images, we chose feasible parameters to describe the infrared image. Second, we constructed a new histogram distribution based on the chosen parameters using a Gaussian function. Finally, the infrared image is enhanced by specifying this new histogram. Experimental results show that the algorithm performs better for infrared scene images than the other methods mentioned in this paper.
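
    The histogram-specification step can be sketched as follows: the image's cumulative distribution is mapped onto that of a Gaussian-shaped target histogram. In the paper the Gaussian's parameters would come from the scene-feature analysis; here they are placeholders.

```python
import numpy as np

def gaussian_hist_match(img, mu=128.0, sigma=40.0):
    """Enhance an 8-bit image by matching its histogram to a Gaussian target."""
    levels = np.arange(256)
    target = np.exp(-0.5 * ((levels - mu) / sigma) ** 2)
    target_cdf = np.cumsum(target) / target.sum()

    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    cdf = np.cumsum(hist) / hist.sum()

    # map each gray level to the level with the closest target CDF value
    lut = np.searchsorted(target_cdf, cdf).clip(0, 255).astype(np.uint8)
    return lut[img]

# placeholder low-contrast "IR" image
rng = np.random.default_rng(0)
ir = rng.gamma(2.0, 12.0, (240, 320)).clip(0, 255).astype(np.uint8)
enhanced = gaussian_hist_match(ir)
print(ir.std(), enhanced.std())   # gray levels redistributed toward the target
```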

  13. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  14. Geomorphometry-based method of landform assessment for geodiversity

    NASA Astrophysics Data System (ADS)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and key elements of natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world. However, recognition of the problem is still at an early stage, and little progress has been made concerning its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studying the Earth's surface. Very often, the main limitation of such analysis is the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a processing algorithm for landform geodiversity assessment using geomorphometric parameters, and to compare the final maps with those resulting from the thematic-layers method. The study area consists of two distinctive valleys, characterized by diverse landscape units and a complex geological setting: Sucha Woda in the Polish part of the Tatra Mts. and Wrzosowka in the Sudetes Mts. Both valleys are located in National Park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index and Relative Heights. After data integration and the necessary geoinformation analysis, the next step, which involves a certain degree of subjectivity, is the score classification of the input maps using an expert system and geostatistical analysis. The crucial point in generating the final maps of geodiversity by multi-criteria evaluation (MCE) with the GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by
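
    The final weighted-sum step reduces to a per-cell linear combination of the score-classified factor rasters. Below is a minimal sketch with placeholder arrays for the seven layers and illustrative expert weights.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (200, 200)
# seven factor maps, already score-classified to 1..5 (placeholders)
factors = {name: rng.integers(1, 6, shape)
           for name in ['curvature', 'openness', 'solar', 'tpi',
                        'twi', 'convergence', 'rel_height']}
# illustrative expert weights summing to 1
weights = {'curvature': 0.10, 'openness': 0.15, 'solar': 0.10, 'tpi': 0.20,
           'twi': 0.15, 'convergence': 0.10, 'rel_height': 0.20}

# GIS-style Weighted Sum: the geodiversity index per cell
geodiversity = sum(weights[k] * factors[k].astype(float) for k in factors)

# reclassify the continuous index into five geodiversity classes by quantiles
classes = np.digitize(geodiversity, np.quantile(geodiversity, [0.2, 0.4, 0.6, 0.8]))
print(classes.min(), classes.max())   # 0..4 -> very low .. very high
```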

  15. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems used for image archiving continues and remains an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The presented study examines the retrieval efficiency of spatial feature-extraction methods for medical image retrieval systems. The investigated algorithms depend on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, accepted as spatial methods. In the experiments, a database was built containing hundreds of medical images of, for example, the brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory; however, the Gabor wavelet was observed to be the most effective and accurate method. PMID:25227014
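
    A minimal sketch of GLCM-based feature extraction with scikit-image follows; the distances, angles and texture properties are typical choices rather than necessarily those of the study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_u8):
    """Texture feature vector for a grayscale (uint8) medical image."""
    glcm = graycomatrix(img_u8, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ['contrast', 'homogeneity', 'energy', 'correlation']
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# placeholder image; in CBIR, retrieval = nearest neighbours of the query vector
img = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)
print(glcm_features(img).shape)   # (16,) = 4 properties x 2 distances x 2 angles
```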

  17. Research on pavement crack recognition methods based on image processing

    NASA Astrophysics Data System (ADS)

    Cai, Yingchun; Zhang, Yamin

    2011-06-01

    In order to briefly review and analyze pavement crack recognition methods and identify the existing problems in pavement crack image processing, popular crack image processing methods such as the neural network method, the morphology method, the fuzzy logic method, and traditional image processing are discussed, and some effective solutions to those problems are presented.

  18. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    Research and case studies have shown that convection plays a significant role in large-scale environmental circulations. Convective momentum fluxes (CMFs) have been studied for many years using in-situ and aircraft measurements, along with numerical simulations. However, despite these successes, little work has been conducted on methods that use satellite remote sensing as a tool to diagnose these fluxes. Satellite data can provide continuous analysis across regions devoid of ground-based remote sensing. Therefore, the project's overall goal is to develop a synergistic approach for retrieving CMFs using a collection of instruments including GOES, TRMM, CloudSat, MODIS, and QuikScat. This particular study, however, focuses on the work using TRMM and QuikScat, and on the methodology of using CloudSat. Sound research has already been conducted for computing CMFs using the GOES instruments (Jewett and Mecikalski 2009, submitted to J. Geophys. Res.). Using satellite-derived winds, namely mesoscale atmospheric motion vectors (MAMVs) as described by Bedka and Mecikalski (2005), one can obtain the actual winds occurring within a convective environment as perturbed by convection. Surface outflow boundaries and upper-tropospheric anvil outflow will produce “perturbation” winds on smaller, convective scales. Combined with estimated vertical motion retrieved using geostationary infrared imagery, CMFs were estimated using MAMVs, with an average profile being calculated across a convective regime or a domain covered by active storms. This study involves estimating draft tilt from TRMM PR radar reflectivity and sub-cloud-base fluxes using QuikScat data. The “slope” of falling hydrometeors (relative to Earth) in the data is related to the u', v', and w' winds within convection. The main up- and downdrafts within convection are described by precipitation patterns (Mecikalski 2003). Vertical motion estimates are made using model results for deep convection

  19. Chemical Sensors Based On Oxygen Detection By Optical Methods

    NASA Astrophysics Data System (ADS)

    Parker, Jennifer W.; Cox, M. E.; Dunn, Bruce S.

    1986-08-01

    Fluorescence quenching is shown to be a viable method of measuring oxygen concentration. Two oxygen/optical transducers based on fluorescence quenching have been developed and characterized: one is hydrophobic and the other is hydrophilic. The development of both transducers provides great flexibility in the application of fluorescence to oxygen measurement. One transducer is produced by entrapping a fluorophor, 9,10-diphenyl anthracene, in poly(dimethyl siloxane) to yield a homogeneous composite polymer matrix. The resulting matrix is hydrophobic. This transducer is extremely sensitive to PO2 as a result of oxygen quenching the fluorescence of 9,10-diphenyl anthracene. This quenching is utilized in the novel method employed to measure the transport properties of oxygen within the matrix. Results show large values for the diffusion coefficient at 25°C, D = 3.5 x 10^-5 cm^2/s. The fluorescence intensity varies inversely with PO2. The second oxygen transducer is fabricated by entrapping 9,10-diphenyl anthracene in poly(hydroxy ethyl methacrylate). Free radical, room temperature polymerization is employed. This transducer is hydrophilic, and contains 37% water. The transport properties of oxygen within this transducer are compared with those of the hydrophobic transducer. The feasibility of generalizing the oxygen transducers to a wider class of chemical sensors through coupling to other chemistries is proposed. An example of such coupling is given in a glucose/oxygen transducer. The glucose transducer is produced by entrapping an enzyme, glucose oxidase, in the composite matrix of the hydrophilic oxygen transducer. Glucose oxidase catalyzes a reaction between glucose and oxygen, thereby lowering the local oxygen concentration. This transducer yields a glucose modified optical oxygen signal. The operation of this transducer and preliminary results of its characterization are presented.
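
    The inverse intensity-oxygen relationship is commonly modeled with Stern-Volmer quenching kinetics; the sketch below assumes that model and an invented quenching constant, not the paper's calibration data.

    ```python
    # Stern-Volmer model: I0/I = 1 + Ksv * pO2  =>  pO2 = (I0/I - 1) / Ksv
    I0 = 1.00      # normalized fluorescence intensity at zero oxygen
    KSV = 0.025    # Stern-Volmer constant, 1/torr (illustrative value)

    def po2_from_intensity(i):
        """Invert the quenching curve: lower intensity -> more oxygen."""
        return (I0 / i - 1.0) / KSV

    print(f"{po2_from_intensity(0.5):.0f} torr at half intensity")   # 40 torr
    ```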

  20. Method of Heating a Foam-Based Catalyst Bed

    NASA Technical Reports Server (NTRS)

    Fortini, Arthur J.; Williams, Brian E.; McNeal, Shawn R.

    2009-01-01

    A method of heating a foam-based catalyst bed has been developed using silicon carbide as the catalyst support due to its readily accessible, high surface area that is oxidation-resistant and is electrically conductive. The foam support may be resistively heated by passing an electric current through it. This allows the catalyst bed to be heated directly, requiring less power to reach the desired temperature more quickly. Designed for heterogeneous catalysis, the method can be used by the petrochemical, chemical processing, and power-generating industries, as well as automotive catalytic converters. Catalyst beds must be heated to a light-off temperature before they catalyze the desired reactions. This typically is done by heating the assembly that contains the catalyst bed, which results in much of the power being wasted and/or lost to the surrounding environment. The catalyst bed is heated indirectly, thus requiring excessive power. With the electrically heated catalyst bed, virtually all of the power is used to heat the support, and only a small fraction is lost to the surroundings. Although the light-off temperature of most catalysts is only a few hundred degrees Celsius, the electrically heated foam is able to achieve temperatures of 1,200 °C. Lower temperatures are achievable by supplying less electrical power to the foam. Furthermore, because of the foam's open-cell structure, the catalyst can be applied either directly to the foam ligaments or in the form of a catalyst-containing washcoat. This innovation would be very useful for heterogeneous catalysis where elevated temperatures are needed to drive the reaction.
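
    A back-of-envelope sketch of the direct-heating argument, assuming radiation-dominated losses and invented foam properties (emissivity, area, resistance); the record reports no such figures.

    ```python
    # Rough steady-state power balance for a directly heated foam:
    # electrical input P = I^2 * R balances radiative loss to the surroundings.
    SIGMA = 5.670e-8            # Stefan-Boltzmann constant, W/(m^2 K^4)
    eps, area = 0.85, 0.002     # emissivity and radiating area (assumed)
    T, T0 = 1473.0, 300.0       # 1200 C foam and ambient, in kelvin
    P = eps * SIGMA * area * (T**4 - T0**4)
    R = 0.05                    # assumed foam resistance, ohms
    I = (P / R) ** 0.5          # current needed, since P = I^2 * R
    print(f"about {P:.0f} W, i.e. {I:.0f} A through a {R} ohm foam")
    ```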

  1. Riding comfort optimization of railway trains based on pseudo-excitation method and symplectic method

    NASA Astrophysics Data System (ADS)

    Zhang, You-Wei; Zhao, Yan; Zhang, Ya-Hui; Lin, Jia-Hao; He, Xing-Wen

    2013-10-01

    This research develops an FEM-based riding comfort optimization approach for railway trains that considers the coupling effect of the vehicle-track system. To obtain an accurate dynamic response, the car body is modeled with finite elements, while the bogie frames and wheel-sets are idealized as rigid bodies. The differential equations of motion of the dynamic vehicle-track system are derived considering wheel-track interaction, in which the pseudo-excitation method and the symplectic mathematical method are effectively applied to simplify the calculation. Then, the min-max optimization approach is utilized to improve the train riding comfort with related parameters of the suspension structure adopted as design variables, in which 54 design points on the car floor are chosen as estimation locations. The K-S function is applied to fit the objective function to make it smooth and differentiable. Analytical sensitivities of the K-S function are then derived to solve the optimization problem. Finally, the effectiveness of the proposed approach is demonstrated through numerical examples and some useful discussions are made.
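
    The K-S (Kreisselmeier-Steinhauser) aggregation and its analytical sensitivity admit a compact sketch. The snippet below uses invented comfort responses and sensitivities purely for illustration; it is not the paper's FEM pipeline.

    ```python
    import numpy as np

    def ks_aggregate(g, rho=50.0):
        """Smooth, differentiable envelope of max(g); tightens as rho grows."""
        gmax = np.max(g)
        return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

    def ks_gradient(g, dg_dx, rho=50.0):
        """Analytical K-S sensitivity; dg_dx[i, j] = d g_i / d x_j."""
        w = np.exp(rho * (g - np.max(g)))
        w /= w.sum()                 # softmax weights over estimation points
        return w @ dg_dx

    g = np.array([0.82, 0.91, 0.87])                         # toy comfort indices
    dg_dx = np.array([[0.1, -0.2], [0.3, 0.0], [-0.1, 0.4]])
    print(ks_aggregate(g), ks_gradient(g, dg_dx))
    ```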

  2. 3D modeling method for computer animate based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models play an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while on the other hand precision is not as critical a factor in that situation. In this paper, a new, cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. For an ordinary weak structured light configuration, one or two reference planes are required, and the shadows on these planes must be tracked in the scanning process, which undermines the convenience of this method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded widely. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to get a full description of the object, and after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method, with errors ranging from 0.7783 mm to 1.4326 mm compared with the ground truth measurement.
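
    For orientation, here is a minimal single-stage ICP sketch (nearest-neighbor matching plus an SVD-based rigid fit); the paper's two-stage variant and real scan data are not reproduced.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def icp_step(src, dst):
        """One ICP iteration: match each source point to its nearest
        destination point, then fit the best rigid transform (Kabsch)."""
        nearest = dst[cKDTree(dst).query(src)[1]]
        sc, dc = src.mean(0), nearest.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (nearest - dc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dc - R @ sc
        return src @ R.T + t

    rng = np.random.default_rng(2)
    src = rng.normal(size=(500, 3))
    a = 0.1                              # small rotation about z
    Rz = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
    dst = src @ Rz.T + np.array([0.5, -0.2, 0.1])
    for _ in range(20):                  # iterate matching + alignment
        src = icp_step(src, dst)
    print("mean residual:", np.linalg.norm(src - dst, axis=1).mean())
    ```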

  3. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content-based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global: it misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: different ways of extracting local histograms to obtain spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and the fuzzy linking method to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining the color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV and L*a*b*, and with three distance measures, Euclidean, quadratic and histogram intersection, to choose an appropriate method for future research.
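
    Two of the surveyed ingredients, a global HSV histogram and the histogram intersection measure, fit in a few lines; the bin counts and toy data below are arbitrary assumptions.

    ```python
    import numpy as np

    def color_histogram(img_hsv, bins=(8, 4, 4)):
        """Global HSV color histogram, normalized to unit sum."""
        h, _ = np.histogramdd(img_hsv.reshape(-1, 3), bins=bins,
                              range=((0, 180), (0, 256), (0, 256)))
        return (h / h.sum()).ravel()

    def histogram_intersection(h1, h2):
        """Similarity in [0, 1]; 1 means identical normalized histograms."""
        return np.minimum(h1, h2).sum()

    rng = np.random.default_rng(3)
    img1 = rng.integers(0, 180, size=(64, 64, 3))   # toy "HSV" images
    img2 = rng.integers(0, 180, size=(64, 64, 3))
    print(histogram_intersection(color_histogram(img1), color_histogram(img2)))
    ```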

  4. Using Corporate-Based Methods To Assess Technical Communication Programs.

    ERIC Educational Resources Information Center

    Faber, Brenton; Bekins, Linn; Karis, Bill

    2002-01-01

    Investigates methods of program assessment used by corporate learning sites and profiles value added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…

  5. Chord-based versus voxel-based methods of electron transport in the skeletal tissues

    SciTech Connect

    Shah, Amish P.; Jokisch, Derek W.; Rajon, Didier A.; Watchman, Christopher J.; Patton, Phillip W.; Bolch, Wesley E.

    2005-10-15

    Anatomic models needed for internal dose assessment have traditionally been developed using mathematical surface equations to define organ boundaries, shapes, and their positions within the body. Many researchers, however, are now advocating the use of tomographic models created from segmented patient computed tomography (CT) or magnetic resonance (MR) scans. In the skeleton, however, the tissue structures of the bone trabeculae, marrow cavities, and endosteal layer are exceedingly small and of complex shape, and thus do not lend themselves easily to either stylistic representations or in-vivo CT imaging. Historically, the problem of modeling the skeletal tissues has been addressed through the development of chord-based methods of radiation particle transport, as given by studies at the University of Leeds (Leeds, UK) using a 44-year-old male subject. We have proposed an alternative approach to skeletal dosimetry in which excised sections of marrow-intact cadaver spongiosa are imaged directly via microCT scanning. The cadaver selected for initial investigation of this technique was a 66-year-old male subject of nominal body mass index (22.7 kg m^-2). The objectives of the present study were to compare chord-based and voxel-based methods of skeletal dosimetry using data from the UF 66-year-old male subject. Good agreement between chord-based and voxel-based transport was noted for marrow irradiation by either bone surface or bone volume sources up to 500-1000 keV (depending upon the skeletal site). In contrast, chord-based models of electron transport yielded consistently lower values of the self-absorbed fraction to marrow tissues than seen under voxel-based transport at energies above 100 keV, a feature directly attributed to the inability of chord-based models to account for nonlinear electron trajectories. Significant differences were also noted in the dosimetry of the endosteal layer (for all source tissues), with chord-based transport predicting a higher fraction

  6. Inter-Domain Redundancy Path Computation Methods Based on PCE

    NASA Astrophysics Data System (ADS)

    Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei

    This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high-quality and high-reliability transfer, such as telephony over IP and premium virtual private networks (VPNs). It is, therefore, important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation. It does not need any extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidates of primary paths that have corresponding secondary paths. The goal is to reduce blocking probability; corresponding secondary paths may be found more often after a primary path is decided; no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates of primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and path pair total cost, while the overheads imposed by protocol revision and the increase in the amount of calculation and information to be exchanged are large. This suggests that the first scheme, the most basic and simple one, is the best choice.
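
    A toy version of the redundancy pair computation can be sketched with networkx: find a primary shortest path, then a link-disjoint secondary path by pruning the primary's links. This two-step heuristic only illustrates the idea; the paper's PCE/VSPT schemes, and jointly optimal pairs (e.g., Suurballe's algorithm), are more involved.

    ```python
    import networkx as nx

    def primary_and_secondary(g, src, dst):
        """Primary shortest path, then a link-disjoint secondary path."""
        primary = nx.shortest_path(g, src, dst, weight="weight")
        pruned = g.copy()
        pruned.remove_edges_from(zip(primary, primary[1:]))   # drop used links
        secondary = nx.shortest_path(pruned, src, dst, weight="weight")
        return primary, secondary

    # toy topology with link costs
    g = nx.Graph()
    g.add_weighted_edges_from([("A", "B", 1), ("B", "D", 1), ("A", "C", 2),
                               ("C", "D", 2), ("B", "C", 1)])
    print(primary_and_secondary(g, "A", "D"))   # (['A','B','D'], ['A','C','D'])
    ```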

  7. Conformational thermodynamics of biomolecular complexes: The histogram-based method

    NASA Astrophysics Data System (ADS)

    Das, Amit; Sikdar, Samapan; Ghosh, Mahua; Chakrabarti, J.

    2015-09-01

    Conformational changes in biomacromolecules govern the majority of biological processes. Complete characterization of the conformational contributions to the thermodynamics of complexation of biomacromolecules has been challenging. Although advances in NMR relaxation experiments and several computational studies have revealed important aspects of conformational entropy changes, efficient and large-scale estimation still remains an intriguing facet. The recent histogram-based method (HBM) offers a simple yet rigorous route to estimate both conformational entropy and free energy changes from the same set of histograms in an efficient manner. The HBM utilizes the power of histograms, which can be generated as accurately as desired from an arbitrarily large sample space from atomistic simulation trajectories. Here we discuss some recent applications of the HBM, using dihedral angles of amino acid residues as conformational variables, which provide a good measure of the conformational thermodynamics of several protein-peptide complexes obtained from NMR, metal-ion binding to an important metalloprotein, interfacial changes in a protein-protein complex, and insight into protein function coupled with conformational changes. We conclude the paper with a few future directions worth pursuing.
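
    The core histogram step is easy to illustrate for a single dihedral: estimate bin probabilities from trajectory frames and form S = -R * sum(p ln p). The "free" and "bound" samples below are invented; the HBM applies this over all residue dihedrals.

    ```python
    import numpy as np

    R = 8.314e-3   # gas constant, kJ/(mol K)

    def dihedral_entropy(angles_deg, bins=36):
        """Conformational entropy of one dihedral from its histogram."""
        counts, _ = np.histogram(angles_deg, bins=bins, range=(-180, 180))
        p = counts / counts.sum()
        p = p[p > 0]                      # skip empty bins (avoid log 0)
        return -R * np.sum(p * np.log(p))

    rng = np.random.default_rng(4)
    free = rng.uniform(-180, 180, size=10000)     # broad sampling
    bound = rng.normal(60, 15, size=10000)        # narrowed upon binding
    dS = dihedral_entropy(bound) - dihedral_entropy(free)
    print(f"T*dS at 300 K: {300 * dS:.2f} kJ/mol")   # negative: entropy lost
    ```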

  8. A Monitoring Method Based on FBG for Concrete Corrosion Cracking.

    PubMed

    Mao, Jianghong; Xu, Fangyuan; Gao, Qian; Liu, Shenglin; Jin, Weiliang; Xu, Yidong

    2016-07-14

    Corrosion cracking of reinforced concrete caused by chloride salt is one of the main determinants of structure durability. Monitoring the entire process of concrete corrosion cracking is critical for assessing the remaining life of the structure and determining if maintenance is needed. Fiber Bragg grating (FBG) sensing is an extensively developed optoelectronic monitoring technology that has been used on many projects. FBG sensors can perform quasi-distributed detection of strain and temperature in corrosive environments, and thus they are suitable for monitoring reinforced concrete cracking. Based on the mechanical principle that corrosion expansion is responsible for reinforced concrete cracking, a package design of reinforced concrete cracking sensors based on FBG was proposed and investigated in this study. The corresponding relationship between the grating wavelength and strain was calibrated by an equal strength beam test. The effectiveness of the proposed method was verified by an electrically accelerated corrosion experiment. The fiber grating sensing technology was able to track the corrosion expansion and corrosion cracking in real time and provided data to inform decision-making for the maintenance and management of the engineering structure.
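
    For orientation, a minimal sketch of converting an FBG wavelength shift into strain, assuming the standard Bragg relation with typical photo-elastic and temperature coefficients; the study instead calibrated the wavelength-strain relationship directly with the equal strength beam test.

    ```python
    LAMBDA_B = 1550.0   # nominal Bragg wavelength, nm (assumed)
    P_E = 0.22          # effective photo-elastic coefficient (typical value)

    def microstrain_from_shift(d_lambda_nm, d_temp_c=0.0, k_t_pm_per_c=10.0):
        """Strain from a wavelength shift after removing the temperature
        contribution (k_t in pm/degC is a typical order of magnitude)."""
        d_lambda_pm = d_lambda_nm * 1000.0 - k_t_pm_per_c * d_temp_c
        # delta_lambda / lambda = (1 - p_e) * strain
        strain = (d_lambda_pm * 1e-12) / (LAMBDA_B * 1e-9) / (1.0 - P_E)
        return strain * 1e6

    print(f"{microstrain_from_shift(0.1):.1f} microstrain for a 0.1 nm shift")
    ```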

  11. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN]; Reed, Joel W [Knoxville, TN]; Elmore, Mark T [Oak Ridge, TN]; Treadwell, Jim N [Louisville, TN]

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
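
    The routing logic in the claim can be caricatured in a few lines: score the new document vector against each cluster centroid and either hand it to the best cluster or spawn a new one. The cosine measure, threshold, and random vectors are illustrative assumptions, not the patented implementation.

    ```python
    import numpy as np

    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def route_document(doc_vec, clusters, threshold=0.3):
        """Assign a document vector to the most similar cluster, updating
        its centroid, or create a new cluster when nothing is similar."""
        if clusters:
            scores = [cosine(doc_vec, c["centroid"]) for c in clusters]
            best = int(np.argmax(scores))
            if scores[best] >= threshold:
                c = clusters[best]
                n = len(c["docs"])
                c["centroid"] = (c["centroid"] * n + doc_vec) / (n + 1)
                c["docs"].append(doc_vec)
                return best
        clusters.append({"centroid": doc_vec.copy(), "docs": [doc_vec]})
        return len(clusters) - 1

    rng = np.random.default_rng(5)
    clusters = []
    for _ in range(20):
        route_document(rng.normal(size=8), clusters)
    print(f"{len(clusters)} clusters formed")
    ```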

  12. Hyperspectral image-based methods for spectral diversity

    NASA Astrophysics Data System (ADS)

    Sotomayor, Alejandro; Medina, Ollantay; Chinea, J. D.; Manian, Vidya

    2015-05-01

    Hyperspectral images are an important tool for assessing ecosystem biodiversity. To obtain more precise analyses of biodiversity indicators that agree with indicators obtained using field data, analyses of spectral diversity calculated from images have to be validated against field-based diversity estimates. Plant species richness is one of the most important indicators of biodiversity. This indicator can be measured in hyperspectral images by considering the Spectral Variation Hypothesis (SVH), which states that spectral heterogeneity is related to spatial heterogeneity and thus to species richness. The goal of this research is to capture spectral heterogeneity from hyperspectral images for a terrestrial neotropical forest site using the Vector Quantization (VQ) method and then use the result for the prediction of plant species richness. The results are compared with those of Hierarchical Agglomerative Clustering (HAC). The validation of the process index is done by calculating the Pearson correlation coefficient between the Shannon entropy from actual field data and the Shannon entropy computed from the images. One of the advantages of developing more accurate analysis tools would be the extension of the analysis to larger zones. Multispectral imagery with lower spatial resolution has also been evaluated as a prospective tool for spectral diversity.
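
    A minimal sketch of the heterogeneity index: vector-quantize pixel spectra and take the Shannon entropy of codeword usage. Treating k-means as the quantizer and using a random cube as data are assumptions for illustration only.

    ```python
    import numpy as np
    from scipy.cluster.vq import kmeans2

    def spectral_shannon_entropy(cube, k=16):
        """VQ the spectra of a (rows, cols, bands) cube, then compute the
        Shannon entropy of codeword frequencies as a diversity proxy (SVH)."""
        pixels = cube.reshape(-1, cube.shape[-1]).astype(float)
        _, labels = kmeans2(pixels, k, minit="++")
        p = np.bincount(labels, minlength=k) / labels.size
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    rng = np.random.default_rng(6)
    cube = rng.normal(size=(50, 50, 30))      # toy hyperspectral cube
    print(f"spectral entropy: {spectral_shannon_entropy(cube):.3f}")
    ```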

  13. Kinetic theory based new upwind methods for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, S. M.

    1986-01-01

    Two new upwind methods, called the Kinetic Numerical Method (KNM) and the Kinetic Flux Vector Splitting (KFVS) method, for the solution of the Euler equations are presented. Both of these methods can be regarded as suitable moments of an upwind scheme for the solution of the Boltzmann equation, provided the distribution function is Maxwellian. This moment-method strategy leads to a unification of the Riemann approach and the pseudo-particle approach used earlier in the development of upwind methods for the Euler equations. A very important aspect of the moment-method strategy is that the new upwind methods satisfy the entropy condition because of the Boltzmann H-theorem, and it suggests a possible way of extending the Total Variation Diminishing (TVD) principle within the framework of the H-theorem. The ability of these methods to obtain accurate, wiggle-free solutions is demonstrated by applying them to two test problems.
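
    The KFVS idea admits a compact sketch: the split fluxes are half-range moments of a Maxwellian, and a first-order interface flux is F(i+1/2) = F+(U_i) + F-(U_{i+1}). Below is a minimal 1D Euler version with a consistency check; treat it as a sketch of the standard splitting rather than the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.special import erf

    GAMMA = 1.4

    def kfvs_split_fluxes(rho, u, p):
        """Positive/negative KFVS fluxes for the 1D Euler equations."""
        beta = rho / (2.0 * p)                       # 1/(2RT)
        s = u * np.sqrt(beta)
        B = np.exp(-s * s) / (2.0 * np.sqrt(np.pi * beta))
        E = p / ((GAMMA - 1.0) * rho) + 0.5 * u * u  # specific total energy

        def F(A, sign):                              # half-range moments
            return np.array([rho * (u * A + sign * B),
                             (rho * u * u + p) * A + sign * rho * u * B,
                             (rho * E + p) * u * A + sign * (rho * E + 0.5 * p) * B])

        return F(0.5 * (1 + erf(s)), +1.0), F(0.5 * (1 - erf(s)), -1.0)

    # consistency: the split fluxes must sum to the full Euler flux
    rho, u, p = 1.0, 0.3, 0.8
    Fp, Fm = kfvs_split_fluxes(rho, u, p)
    E = p / ((GAMMA - 1.0) * rho) + 0.5 * u * u
    print(np.allclose(Fp + Fm, [rho * u, rho * u * u + p, (rho * E + p) * u]))
    # first-order upwind interface flux: F+(left state) + F-(right state)
    ```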

  14. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications. These methods include spectroscopic, chromatographic, colorimetric, and electrochemical. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described. Specific applications from the literature are also presented.

  15. Method ruggedness studies incorporating a risk based approach: a tutorial.

    PubMed

    Borman, Phil J; Chatfield, Marion J; Damjanov, Ivana; Jackson, Patrick

    2011-10-10

    This tutorial explains how well-thought-out application of design and analysis methodology, combined with risk assessment, leads to improved assessment of method ruggedness. The authors define analytical method ruggedness as an experimental evaluation of noise factors such as analyst, instrument or stationary phase batch. Ruggedness testing is usually performed upon transfer of a method to another laboratory; however, it can also be employed during method development when an assessment of the method's inherent variability is required. The use of a ruggedness study provides a more rigorous method for assessing method precision than a simple comparative intermediate precision study, which is typically performed as part of method validation. Prior to designing a ruggedness study, factors that are likely to have a significant effect on the performance of the method should be identified (via a risk assessment) and controlled where appropriate. Noise factors that are not controlled are considered for inclusion in the study. The purpose of the study should be to challenge the method and identify whether any noise factors significantly affect the method's precision. The results from the study are first used to identify any special cause variability due to specific attributable circumstances. Second, common cause variability is apportioned to determine which factors are responsible for most of the variability. The total common cause variability can then be used to assess whether the method's precision requirements are achievable. The approach used to design and analyse method ruggedness studies is covered in this tutorial using a real example.
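
    The apportioning of common cause variability can be illustrated with a one-factor random-effects calculation; the analyst/replicate layout and numbers are invented, and a real ruggedness study usually spans several crossed or nested noise factors.

    ```python
    import numpy as np

    def variance_components(data):
        """One-factor random-effects ANOVA for an (a, n) array:
        a factor levels (e.g. analysts), n replicates per level."""
        a, n = data.shape
        msb = n * np.sum((data.mean(axis=1) - data.mean()) ** 2) / (a - 1)
        msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (a * (n - 1))
        var_factor = max((msb - msw) / n, 0.0)    # truncate negative estimates
        return var_factor, msw                    # factor and repeatability variances

    rng = np.random.default_rng(7)
    # toy assay results: 4 analysts x 5 replicates (% label claim)
    data = 100 + rng.normal(0, 0.5, (4, 1)) + rng.normal(0, 0.3, (4, 5))
    v_analyst, v_rep = variance_components(data)
    print(f"analyst sd {v_analyst**0.5:.2f}, repeatability sd {v_rep**0.5:.2f}")
    ```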

  16. ILP/SMT-Based Method for Design of Boolean Networks Based on Singleton Attractors.

    PubMed

    Kobayashi, Koichi; Hiraishi, Kunihiko

    2014-01-01

    Attractors in gene regulatory networks represent cell types or states of cells. In systems biology and synthetic biology, it is important to generate gene regulatory networks with desired attractors. In this paper, we focus on the singleton attractor, which is also called a fixed point. Using a Boolean network (BN) model, we consider the problem of finding Boolean functions such that the system has desired singleton attractors and has no undesired singleton attractors. To solve this problem, we propose a matrix-based representation of BNs. Using this representation, the problem of finding Boolean functions can be rewritten as an Integer Linear Programming (ILP) problem and a Satisfiability Modulo Theories (SMT) problem. Furthermore, the effectiveness of the proposed method is shown by a numerical example on a WNT5A network, which is related to melanoma. The proposed method provides a basic approach for the design of gene regulatory networks.
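
    For intuition, a singleton attractor is just a state with x = f(x). The brute-force check below on an invented three-gene network shows what the ILP/SMT formulation searches for at scales where enumeration becomes impossible.

    ```python
    from itertools import product

    # toy 3-gene Boolean network: each function maps state (x1, x2, x3)
    # to one gene's next value
    fns = [
        lambda x: x[1] and not x[2],   # x1' = x2 AND NOT x3
        lambda x: x[0],                # x2' = x1
        lambda x: x[0] or x[2],        # x3' = x1 OR x3
    ]

    def singleton_attractors(fns):
        """Enumerate fixed points x = f(x) of a small Boolean network."""
        n = len(fns)
        return [x for x in product([0, 1], repeat=n)
                if all(int(f(x)) == x[i] for i, f in enumerate(fns))]

    print(singleton_attractors(fns))   # [(0, 0, 0)] for this toy network
    ```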

  17. A fracture enhancement method based on the histogram equalization of eigenstructure-based coherence

    NASA Astrophysics Data System (ADS)

    Dou, Xi-Ying; Han, Li-Guo; Wang, En-Li; Dong, Xue-Hua; Yang, Qing; Yan, Gao-Han

    2014-06-01

    Eigenstructure-based coherence attributes are efficient and mature techniques for large-scale fracture detection. However, in horizontally bedded and continuous strata, buried fractures in zones with high grayscale values are difficult to detect. Furthermore, middle- and small-scale fractures in fractured zones, where migration image energies are usually not concentrated perfectly, are also hard to detect because of the fuzzy, clouded shadows owing to low grayscale values. A new fracture enhancement method combined with histogram equalization is proposed to solve these problems. With this method, the contrast between discontinuities and background in coherence images is increased, linear structures are highlighted by stepwise adjustment of the threshold of the coherence image, and fractures are detected at different scales. Application of the method shows that it can also improve fracture recognition and accuracy.
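
    Histogram equalization itself is standard: remap gray levels through the normalized cumulative histogram so a narrow band of coherence values spreads over the full range. A minimal numpy sketch with an invented low-contrast image:

    ```python
    import numpy as np

    def equalize(img, levels=256):
        """Histogram equalization via the normalized cumulative histogram."""
        hist = np.bincount(img.ravel(), minlength=levels)
        cdf = hist.cumsum().astype(float)
        cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())
        lut = np.round(cdf * (levels - 1)).astype(img.dtype)
        return lut[img]

    rng = np.random.default_rng(8)
    coh = (180 + 20 * rng.random((128, 128))).astype(np.uint8)  # narrow band
    eq = equalize(coh)
    print(coh.min(), coh.max(), "->", eq.min(), eq.max())       # full range
    ```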

  18. Alternative modeling methods for plasma-based Rf ion sources

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD); extended, gas-dynamic, and Hall MHD; and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  20. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  1. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  2. Comparing the Principle-Based SBH Maieutic Method to Traditional Case Study Methods of Teaching Media Ethics

    ERIC Educational Resources Information Center

    Grant, Thomas A.

    2012-01-01

    This quasi-experimental study at a Northwest university compared two methods of teaching media ethics, a class taught with the principle-based SBH Maieutic Method (n = 25) and a class taught with a traditional case study method (n = 27), with a control group (n = 21) that received no ethics training. Following a 16-week intervention, a one-way…

  3. Harmonic Golay coded excitation based on harmonic quadrature demodulation method.

    PubMed

    Kim, Sang-Min; Song, Jae-Hee; Song, Tai-Kyong

    2008-01-01

    Harmonic coded excitation techniques have been used to increase the SNR of harmonic imaging with limited peak voltage. Harmonic Golay coded excitation, in particular, generates each scan line using four transmit-receive cycles, unlike the conventional Golay coded excitation method, thus resulting in low frame rates. In this paper we propose a method of increasing the frame rate of this technique without impacting image quality. The proposed method performs two transmit-receive cycles using a QPSK code to ensure that the harmonic components of the incoming signals are Golay coded, and uses harmonic quadrature demodulation to extract only the compressed second harmonic component. The proposed method has been validated through mathematical analysis and MATLAB simulation, and has been verified to yield a limited error of -52.08 dB compared with the ideal case. Therefore, the proposed method doubles the frame rate compared with the existing harmonic Golay coded excitation method without significantly deteriorating the image quality.
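
    The property being exploited is that a Golay complementary pair has autocorrelations whose sidelobes cancel exactly, leaving a single compressed peak. A minimal numeric demonstration (not the proposed QPSK/demodulation chain):

    ```python
    import numpy as np

    def autocorr(x):
        return np.correlate(x, x, mode="full")

    # A length-4 Golay pair; longer pairs follow from the recursion
    # a' = [a, b], b' = [a, -b] (concatenation).
    a = np.array([1, 1, 1, -1], dtype=float)
    b = np.array([1, 1, -1, 1], dtype=float)

    # Range sidelobes cancel, leaving a peak of height 2N at zero lag.
    print(autocorr(a) + autocorr(b))   # [0. 0. 0. 8. 0. 0. 0.]
    ```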

  4. Novel method of manufacturing hydrogen storage materials combining with numerical analysis based on discrete element method

    NASA Astrophysics Data System (ADS)

    Zhao, Xuzhe

    A high-efficiency hydrogen storage method is significant to the development of fuel cell vehicles. Finding a high energy density material for the fuel is the key to the wide adoption of fuel cell vehicles. The LiBH4 + MgH2 system is a strong candidate due to its high hydrogen storage density, and the reaction between the two is reversible. However, the LiBH4 + MgH2 system usually requires high temperature and hydrogen pressure for the hydrogen release and uptake reactions. In order to reduce the requirements of this system, nanoengineering is a simple and efficient method to improve the thermodynamic properties and reduce the kinetic barrier of the reaction between LiBH4 and MgH2. Based on ab initio density functional theory (DFT) calculations, a previous study indicated that the reaction between LiBH4 and MgH2 can take place at temperatures near 200°C or below. However, the predictions have been shown to be inconsistent with many experiments. Therefore, our experiment is the first to use ball milling with aerosol spraying (BMAS) to show that the reaction between LiBH4 and MgH2 can occur during high-energy ball milling at room temperature. Through this BMAS process we have clearly observed the formation of MgB2 and LiH during ball milling of MgH2 while aerosol spraying the LiBH4/THF solution. Aerosol nanoparticles from the LiBH4/THF solution lead to the formation of Li2B12H12 during the BMAS process. The Li2B12H12 formed then reacts with MgH2 in situ during ball milling to form MgB2 and LiH. Discrete element modeling (DEM) is a useful tool to describe the operation of various ball milling processes. EDEM is software based on DEM that predicts power consumption, liner and media wear, and mill output. In order to further improve the milling efficiency of the BMAS process, EDEM analyses are conducted for the complicated ball milling process. Milling speed and the balls' filling ratio inside the canister are considered as variables to determine the milling efficiency. The average and maximum

  5. A genetic algorithm based method for docking flexible molecules

    SciTech Connect

    Judson, R.S.; Jaeger, E.P.; Treasurywala, A.M.

    1993-11-01

    The authors describe a computational method for docking flexible molecules into protein binding sites. The method uses a genetic algorithm (GA) to search the combined conformation/orientation space of the molecule to find low-energy conformations. Several techniques are described that increase the efficiency of the basic search method. These include the use of several interacting GA subpopulations, or niches; the use of a growing algorithm that initially docks only a small part of the molecule; and the use of gradient minimization during the search. To illustrate the method, they dock Cbz-GlyP-Leu-Leu (ZGLL) into thermolysin. This system was chosen because a well-refined crystal structure is available and because another docking method had previously been tested on it. Their method is able to find conformations that lie physically close to, and in some cases lower in energy than, the crystal conformation in reasonable periods of time on readily available hardware.
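
    A skeletal GA of the kind described (population, selection, crossover, mutation, elitism) is sketched below against a stand-in energy function; a docking code would instead score ligand poses and torsions, use several interacting niches, and add gradient minimization.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    def energy(x):
        """Stand-in for pose scoring against the binding site."""
        return np.sum((x - 0.7) ** 2) + 0.1 * np.sum(np.sin(8 * np.pi * x) ** 2)

    def ga_minimize(dim=6, pop_size=40, generations=200, p_mut=0.1):
        pop = rng.random((pop_size, dim))           # genes encode DOFs in [0,1]
        for _ in range(generations):
            fit = np.array([energy(ind) for ind in pop])
            parents = pop[np.argsort(fit)[: pop_size // 2]]   # truncation selection
            pick = rng.integers(0, len(parents), size=(pop_size, 2))
            mask = rng.random((pop_size, dim)) < 0.5          # uniform crossover
            pop = np.where(mask, parents[pick[:, 0]], parents[pick[:, 1]])
            mut = rng.random(pop.shape) < p_mut               # gaussian mutation
            pop = np.clip(pop + mut * rng.normal(0, 0.1, pop.shape), 0, 1)
            pop[0] = parents[0]                               # elitism
        fit = np.array([energy(ind) for ind in pop])
        return pop[np.argmin(fit)], fit.min()

    best, e = ga_minimize()
    print(f"best energy: {e:.4f}")
    ```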

  6. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are “clumped” together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
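
    Method B-style delineation can be approximated with off-the-shelf pieces: group points congregated within a fixed distance of each other, then wrap a polygon (here a convex hull) around each clump. This is an illustration under those assumptions, not VSP's implementation.

    ```python
    import numpy as np
    from scipy.spatial import ConvexHull
    from sklearn.cluster import DBSCAN

    rng = np.random.default_rng(11)
    # toy sample locations: two clumps of interest plus scattered background
    pts = np.vstack([rng.normal((2, 2), 0.3, (40, 2)),
                     rng.normal((8, 5), 0.4, (30, 2)),
                     rng.uniform(0, 10, (30, 2))])

    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(pts)

    for lab in set(labels) - {-1}:        # -1 marks background points
        clump = pts[labels == lab]
        hull = ConvexHull(clump)
        polygon = clump[hull.vertices]    # delineating polygon for this clump
        print(f"cluster {lab}: {len(clump)} points, "
              f"{len(polygon)}-vertex polygon")
    ```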

  7. Analysis of surface asperity flattening based on two different methods

    NASA Astrophysics Data System (ADS)

    Li, Hejie; Öchsner, Andreas; Ni, Guowei; Wei, Dongbin; Jiang, Zhengyi

    2016-11-01

    The stress state is an important parameter in metal forming processes: it significantly influences the strain state and microstructure of products, affecting their surface quality. In order for metal products to have good surface quality, the surface stress state must be optimised. In this study, two classical methods, the upper bound method and the crystal plasticity finite element method, were investigated. The differences between the two methods are discussed with regard to the model, the velocity field, and the strain field. The related surface roughness is then deduced.

  8. Study on project schedule management based on comprehensive comparison methods

    NASA Astrophysics Data System (ADS)

    Ge, Jun-ying

    2011-10-01

    Project schedule management is the central element of a project organization plan, affecting both project duration and investment. The traditional representation methods for schedules are the Gantt chart, network chart, S-curve, etc. With engineering scale constantly increasing and techniques and management level improving, a single method cannot meet the requirements of project schedule management. The comprehensive comparison method is receiving more and more attention and has become a symbol of project management modernization thanks to its vivid and concise form. The paper analyzes the factors that affect the project, compares the different schedule control methods, and finally proposes some management measures for project schedules.

  9. Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling

    SciTech Connect

    Randall S. Seright

    2007-09-30

    This final technical progress report describes work performed from October 1, 2004, through May 16, 2007, for the project, 'Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling'. We explored the potential of pore-filling gels for reducing excess water production from both fractured and unfractured production wells. Several gel formulations were identified that met the requirements--i.e., providing water residual resistance factors greater than 2,000 and ultimate oil residual resistance factors (F_rro) of 2 or less. Significant oil throughput was required to achieve low F_rro values, suggesting that gelant penetration into porous rock must be small (a few feet or less) for existing pore-filling gels to provide effective disproportionate permeability reduction. Compared with adsorbed polymers and weak gels, strong pore-filling gels can provide greater reliability and behavior that is insensitive to the initial rock permeability. Guidance is provided on where relative-permeability-modification/disproportionate-permeability-reduction treatments can be successfully applied for use in either oil or gas production wells. When properly designed and executed, these treatments can be successfully applied to a limited range of oilfield excessive-water-production problems. We examined whether gel rheology can explain behavior during extrusion through fractures. The rheology behavior of the gels tested showed a strong parallel to the results obtained from previous gel extrusion experiments. However, for a given aperture (fracture width or plate-plate separation), the pressure gradients measured during the gel extrusion experiments were much higher than anticipated from rheology measurements. Extensive experiments established that wall slip and first normal stress difference were not responsible for the pressure gradient discrepancy. To explain the discrepancy, we noted that the aperture for gel flow (for mobile gel wormholing through concentrated immobile

  10. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  11. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  12. WormBase: methods for data mining and comparative genomics.

    PubMed

    Harris, Todd W; Stein, Lincoln D

    2006-01-01

    WormBase is a comprehensive repository for information on Caenorhabditis elegans and related nematodes. Although the primary web-based interface of WormBase (http://www.wormbase.org/) is familiar to most C. elegans researchers, WormBase also offers powerful data-mining features for addressing questions of comparative genomics, genome structure, and evolution. In this chapter, we focus on data mining at WormBase through the use of flexible web interfaces, custom queries, and scripts. The intended audience includes users wishing to query the database beyond the confines of the web interface or fetch data en masse. No knowledge of programming is necessary or assumed, although users with intermediate skills in the Perl scripting language will be able to utilize additional data-mining approaches.

  13. Segment-based vs. element-based integration for mortar methods in computational contact mechanics

    NASA Astrophysics Data System (ADS)

    Farah, Philipp; Popp, Alexander; Wall, Wolfgang A.

    2015-01-01

    Mortar finite element methods provide a very convenient and powerful discretization framework for geometrically nonlinear applications in computational contact mechanics, because they allow for a variationally consistent treatment of contact conditions (mesh tying, non-penetration, frictionless or frictional sliding) despite the fact that the underlying contact surface meshes are non-matching and possibly also geometrically non-conforming. However, one of the major issues with regard to mortar methods is the design of adequate numerical integration schemes for the resulting interface coupling terms, i.e. curve integrals for 2D contact problems and surface integrals for 3D contact problems. The way how mortar integration is performed crucially influences the accuracy of the overall numerical procedure as well as the computational efficiency of contact evaluation. Basically, two different types of mortar integration schemes, which will be termed as segment-based integration and element-based integration here, can be found predominantly in the literature. While almost the entire existing literature focuses on either of the two mentioned mortar integration schemes without questioning this choice, the intention of this paper is to provide a comprehensive and unbiased comparison. The theoretical aspects covered here include the choice of integration rule, the treatment of boundaries of the contact zone, higher-order interpolation and frictional sliding. Moreover, a new hybrid scheme is proposed, which beneficially combines the advantages of segment-based and element-based mortar integration. Several numerical examples are presented for a detailed and critical evaluation of the overall performance of the different schemes within several well-known benchmark problems of computational contact mechanics.

  14. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and we report the similarity results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the two stock markets differs across time periods and that it became larger after these two crises. We also obtain similarity results for 10 stock indices in three areas, showing that the method can distinguish the markets of different areas from the phylogenetic trees. The results show that we can get satisfactory information from financial markets by this method. The information categorization method can be used not only for physiological time series but also for financial time series.
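
    A minimal sketch of the distance-then-tree workflow: build a pairwise distance matrix between return series and cluster it into a phylogenetic-style tree. The correlation-based distance is a common stand-in; the paper's information categorization distance would replace it.

    ```python
    import numpy as np
    from scipy.cluster.hierarchy import linkage
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(12)
    # toy return series for four hypothetical indices (rows)
    base = rng.normal(size=500)
    series = np.vstack([base + rng.normal(0, s, 500) for s in (0.2, 0.3, 1.0, 1.2)])

    corr = np.corrcoef(series)
    dist = np.sqrt(2 * (1 - corr))        # correlation-based distance matrix

    # average-linkage tree; similar markets merge at small heights
    tree = linkage(squareform(dist, checks=False), method="average")
    print(tree)
    ```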

  15. A Comparison of Satellite-Based Multilayered Cloud Detection Methods

    NASA Technical Reports Server (NTRS)

    Minnis, Patrick; Chang, Fu-Lung; Khaiyer, Mandana M.; Ayers, Jeffrey K.; Palikonda, Rabindra; Nordeen, Michele L.; Spangenberg, Douglas A.

    2007-01-01

    Both techniques show skill in detecting multilayered clouds, but they disagree more than 50% of the time. The BTD method tends to detect more ML clouds than the CO2 method and has slightly higher detection accuracy. The CO2 method might be better for minimizing false positives, but further study is needed. Neither method has been optimized for GOES data: the BTD technique was developed on AVHRR, with better BTD signals and resolution, while the CO2 method was developed on MODIS, with better resolution and 4 CO2 channels. Many additional comparisons with ARSCL data will be used to optimize both techniques. A combined technique will be examined using MODIS and Meteosat-8 data. After optimization, the techniques will be implemented in the ARM operational satellite cloud processing.

  16. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs applications, and it provides superior security compared with other human feature recognition methods such as fingerprint and face recognition. Iris image quality is crucial to the recognition effect, so reliable image quality assessments are necessary for evaluating iris image quality. However, there is no uniform criterion for image quality assessment. Image quality assessment comprises objective and subjective evaluation methods; in practice, however, the subjective evaluation method is cumbersome and not effective for iris recognition, so an objective evaluation method should be used. Based on the multi-scale and selectivity characteristics of the human visual system (HVS) model, a new iris image quality assessment method is presented. In the paper, the ROI is found, wavelet transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In the experiments, objective and subjective evaluation methods are used to assess iris images. The results show that the method is effective for iris image quality assessment.

  17. Reconstruction of Banknote Fragments Based on Keypoint Matching Method.

    PubMed

    Gwo, Chih-Ying; Wei, Chia-Hung; Li, Yue; Chiu, Nan-Hsing

    2015-07-01

    Banknotes may be shredded by a scrap machine, ripped up by hand, or damaged in accidents. This study proposes an image registration method for the reconstruction of multiple sheets of banknotes. The proposed method first constructs different scale spaces to identify keypoints in the underlying banknote fragments. Next, the features of those keypoints are extracted to represent the local patterns around them. Then, similarity is computed to find the keypoint pairs between each fragment and the reference banknote, so the fragment's coordinates can be determined and its orientation corrected. Finally, an assembly strategy is proposed to piece multiple sheets of banknote fragments together. Experimental results show that the proposed method causes, on average, a deviation of 0.12457 ± 0.12810° per fragment, while the SIFT method deviates by 1.16893 ± 2.35254° on average. The proposed method not only reconstructs the banknotes but also decreases the computing cost. Furthermore, the proposed method can estimate the orientation of the banknote fragments relatively precisely for assembly.
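
    Keypoint-based registration of a fragment against a reference can be sketched with OpenCV. The ORB detector, Hamming matcher, and RANSAC similarity fit below are generic stand-ins for the authors' scale-space construction and assembly strategy.

    ```python
    import cv2
    import numpy as np

    def register_fragment(fragment, reference):
        """Estimate the fragment's position and rotation on the reference.
        Both inputs are grayscale uint8 images."""
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(fragment, None)
        k2, d2 = orb.detectAndCompute(reference, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:100]
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        # RANSAC rejects false pairs; M is a 2x3 similarity transform
        M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
        angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))   # recovered rotation
        return M, angle
    ```

    Given a fragment and reference image, the returned transform carries the fragment's shift and rotation, which is the information needed before assembly.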

  18. A Cluster-Based Method for Test Construction. Research Report 88-3.

    ERIC Educational Resources Information Center

    Boekkooi-Timminga, Ellen

    A new test construction method based on integer linear programming is described. This method selects optimal tests in small amounts of computer time. The new method, called the Cluster-Based Method, assumes that the items in the bank have been grouped according to their item information curves so that items within a group, or cluster, are…

  19. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm that is based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely, particle filter, mean-shift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
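
    The core optimizer is easy to sketch. The following minimal cuckoo search (a generic implementation with Lévy flights via Mantegna's algorithm, not the authors' tracker code) minimizes an objective f over box bounds; in a tracking system, f would score the dissimilarity between a candidate window and the target template.

```python
# Minimal cuckoo search sketch: nests are candidate solutions, updated by
# Levy flights; a fraction pa of the worst nests is abandoned each round.
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=None):
    # Mantegna's algorithm for Levy-stable step lengths.
    rng = rng if rng is not None else np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, lo, hi, n_nests=15, pa=0.25, iters=100):
    rng = np.random.default_rng(0)
    nests = rng.uniform(lo, hi, (n_nests, len(lo)))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            cand = np.clip(nests[i] + 0.01 * levy_step(len(lo), rng=rng) * (nests[i] - best), lo, hi)
            fc = f(cand)
            if fc < fit[i]:
                nests[i], fit[i] = cand, fc
        worst = fit.argsort()[-max(1, int(pa * n_nests)):]   # abandoned nests
        nests[worst] = rng.uniform(lo, hi, (len(worst), len(lo)))
        fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

# Example: minimize the sphere function over [-5, 5]^2.
x_best, f_best = cuckoo_search(lambda x: float(np.sum(x ** 2)),
                               lo=np.array([-5.0, -5.0]), hi=np.array([5.0, 5.0]))
```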

  20. A comparison between boat-based and diver-based methods for quantifying coral bleaching

    USGS Publications Warehouse

    Zawada, David G.; Ruzicka, Rob; Colella, Michael A.

    2015-01-01

    Recent increases in both the frequency and severity of coral bleaching events have spurred numerous surveys to quantify the immediate impacts and monitor the subsequent community response. Most of these efforts utilize conventional diver-based methods, which are inherently time-consuming, expensive, and limited in spatial scope unless they deploy large teams of scientifically-trained divers. In this study, we evaluated the effectiveness of the Along-Track Reef Imaging System (ATRIS), an automated image-acquisition technology, for assessing a moderate bleaching event that occurred in the summer of 2011 in the Florida Keys. More than 100,000 images were collected over 2.7 km of transects spanning four patch reefs in a 3-h period. In contrast, divers completed 18, 10-m long transects at nine patch reefs over a 5-day period. Corals were assigned to one of four categories: not bleached, pale, partially bleached, and bleached. The prevalence of bleaching estimated by ATRIS was comparable to the results obtained by divers, but only for corals > 41 cm in size. The coral size-threshold computed for ATRIS in this study was constrained by prevailing environmental conditions (turbidity and sea state) and, consequently, needs to be determined on a study-by-study basis. Both ATRIS and diver-based methods have innate strengths and weaknesses that must be weighed with respect to project goals.

  1. Comparison of sequencing-based methods to profile DNA methylation and identification of monoallelic epigenetic modifications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analysis of DNA methylation patterns relies increasingly on sequencing-based profiling methods. The four most frequently used sequencing-based technologies are the bisulfite-based methods MethylC-seq and reduced representation bisulfite sequencing (RRBS), and the enrichment-based techniques methylat...

  2. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    SciTech Connect

    Paganelli, Chiara; Peroni, Marta

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  3. A silica gel based method for extracting insect surface hydrocarbons.

    PubMed

    Choe, Dong-Hwan; Ramírez, Santiago R; Tsutsui, Neil D

    2012-02-01

    Here, we describe a novel method for the extraction of insect cuticular hydrocarbons using silica gel, herein referred to as "silica-rubbing". This method permits the selective sampling of external hydrocarbons from insect cuticle surfaces for subsequent analysis using gas chromatography-mass spectrometry (GC-MS). The cuticular hydrocarbons are first adsorbed to silica gel particles by rubbing the cuticle of insect specimens with the materials, and then are subsequently eluted using organic solvents. We compared the cuticular hydrocarbon profiles that resulted from extractions using silica-rubbing and solvent-soaking methods in four ant and one bee species: Linepithema humile, Azteca instabilis, Camponotus floridanus, Pogonomyrmex barbatus (Hymenoptera: Formicidae), and Euglossa dilemma (Hymenoptera: Apidae). We also compared the hydrocarbon profiles of Euglossa dilemma obtained via silica-rubbing and solid phase microextraction (SPME). Comparison of hydrocarbon profiles obtained by different extraction methods indicates that silica rubbing selectively extracts the hydrocarbons that are present on the surface of the cuticular wax layer, without extracting hydrocarbons from internal glands and tissues. Due to its surface specificity, efficiency, and low cost, this new method may be useful for studying the biology of insect cuticular hydrocarbons.

  4. Morphology-based fusion method of hyperspectral image

    NASA Astrophysics Data System (ADS)

    Yue, Song; Zhang, Zhijie; Ren, Tingting; Wang, Chensheng; Yu, Hui

    2014-11-01

    Hyperspectral image analysis is widely used in applications including agricultural identification, forest investigation, and atmospheric pollution monitoring. To analyze hyperspectral images accurately and robustly, the spectral and the spatial information provided by the hyperspectral data must be considered together. Hyperspectral images are characterized by a large number of bands and a large amount of information. Matching these characteristics, a fast fusion method that can fuse hyperspectral images with high fidelity is studied and proposed in this paper. First, the hyperspectral image is preprocessed before a morphological close operation. The close operation is used to extract band characteristics and reduce the dimensionality of the hyperspectral image; at the same time, the spectral data are smoothed to avoid discontinuities, combining spatial and spectral information. On this basis, the mean-shift method is adopted to register key frames. Finally, the selected key frames are fused into one image by the pyramid fusion method. The experimental results show that this method can fuse hyperspectral images with high quality: the attributes of the fused image are better than those of the original spectral images, achieving the objective of fusion.
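
    A minimal sketch of the band-level preprocessing described above, assuming a hyperspectral cube of shape (bands, rows, cols): a grey-scale morphological closing applied per band plus smoothing along the spectral axis. This is illustrative only; the paper's key-frame registration and pyramid fusion steps are not shown.

```python
# Per-band morphological closing plus spectral smoothing (illustrative).
import numpy as np
from scipy.ndimage import grey_closing, uniform_filter1d

def preprocess_cube(cube, spatial_size=3, spectral_size=5):
    # Morphological close on each band extracts band characteristics.
    closed = np.stack([grey_closing(band, size=(spatial_size, spatial_size))
                       for band in cube])
    # Smooth along the spectral axis (axis 0) to avoid data discontinuities.
    return uniform_filter1d(closed, size=spectral_size, axis=0)
```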

  5. Bootstrap embedding: An internally consistent fragment-based method.

    PubMed

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy

    2016-08-21

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems. PMID:27544082

  6. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured-light measurement systems or 3D printing systems, the errors caused by the optical distortion of a digital projector always affect precision and cannot be ignored. Existing methods for calibrating projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that uses photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be obtained accurately by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable to evaluating the geometric optical performance of other optical projection systems. PMID:26492247
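
    The polynomial distortion fit itself reduces to linear least squares once pixel correspondences are available. Below is a hedged sketch, assuming matched arrays of ideal projector coordinates (x, y) and measured positions (xd, yd) recovered from the photodiode sequence; the third-order basis is an illustrative choice, not necessarily the paper's.

```python
# Least-squares fit of a bivariate polynomial distortion model.
import numpy as np

def poly_terms(x, y):
    # Third-order bivariate polynomial basis (one column per term).
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2,
                     x**2 * y, x * y**2, x**3, y**3], axis=1)

def fit_distortion(x, y, xd, yd):
    A = poly_terms(x, y)
    cx, *_ = np.linalg.lstsq(A, xd, rcond=None)    # ideal -> measured x
    cy, *_ = np.linalg.lstsq(A, yd, rcond=None)    # ideal -> measured y
    residual = np.hypot(A @ cx - xd, A @ cy - yd)  # per-point fit error
    return cx, cy, residual
```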

  7. Bootstrap embedding: An internally consistent fragment-based method

    NASA Astrophysics Data System (ADS)

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy

    2016-08-01

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems.

  9. [Problem-based learning, description of a pedagogical method leading to evidence-based medicine].

    PubMed

    Chalon, P; Delvenne, C; Pasleau, F

    2000-04-01

    Problem-Based Learning is an educational method which uses health care scenarios to provide a context for learning and to elaborate knowledge through discussion. Additional expectations are to stimulate critical thinking and problem-solving skills, and to develop clinical reasoning taking into account the patient's psychosocial environment and preferences, the economic requirements as well as the best evidence from biomedical research. Appearing at the end of the 60's, it has been adopted by 10% of medical schools world-wide. PBL follows the same rules as Evidence-Based Medicine but is student-centered and provides the information-seeking skills necessary for self-directed life long learning. In this short article, we review the theoretical basis and process of PBL, emphasizing the teacher-student relationship and discussing the suggested advantages and disadvantages of this curriculum. Students in PBL programs make greater use of self-selected references and online searching. From this point of view, PBL strengthens the role of health libraries in medical education, and prepares the future physician for Evidence-Based Medicine. PMID:10909306

  10. An Adaptive Derivative-based Method for Function Approximation

    SciTech Connect

    Tong, C

    2008-10-22

    To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessian with respect to the input parameters under study. When higher order terms (third and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.
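
    A toy sketch of the idea, under assumed details: each sample stores the function value, gradient, and Hessian, and the surrogate evaluates the second-order Taylor expansion of the nearest sample. The report's adaptive sampling and error control are not reproduced here.

```python
# Derivative-informed surrogate: nearest-sample second-order Taylor model.
import numpy as np

class TaylorSurrogate:
    def __init__(self):
        self.pts, self.models = [], []   # models: (f, grad, hess) per point

    def add(self, x, f, g, H):
        self.pts.append(np.asarray(x, dtype=float))
        self.models.append((f, np.asarray(g, dtype=float), np.asarray(H, dtype=float)))

    def __call__(self, x):
        x = np.asarray(x, dtype=float)
        i = int(np.argmin([np.linalg.norm(x - p) for p in self.pts]))
        f, g, H = self.models[i]
        d = x - self.pts[i]
        return f + g @ d + 0.5 * d @ H @ d   # local Taylor expansion
```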

  11. General method for quantifying base adducts in specific mammalian genes.

    PubMed Central

    Thomas, D C; Morton, A G; Bohr, V A; Sancar, A

    1988-01-01

    A general method has been developed to measure the formation and removal of DNA adducts in defined sequences of mammalian genomes. Adducted genomic DNA is digested with an appropriate restriction enzyme, treated with Escherichia coli UvrABC excision nuclease (ABC excinuclease), subjected to alkaline gel electrophoresis, and probed for specific sequences by Southern hybridization. The ABC excinuclease incises DNA containing bulky adducts and thus reduces the intensity of the full-length fragments in Southern hybridization in proportion to the number of adducts present in the probed sequence. This method is similar to that developed by Bohr et al. [Bohr, V. A., Smith, C. A., Okumoto, D. S. & Hanawalt, P. C. (1985) Cell 40, 359-369] for quantifying pyrimidine dimers by using T4 endonuclease V. Because of the wide substrate range of ABC exinuclease, however, our method can be used to quantify a large variety of DNA adducts in specific genomic sequences. Images PMID:2836856

  12. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    High-accuracy recognition of the handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the drawings in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has been truly successful in training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences, and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
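
    The paper does not publish its architecture, so the following PyTorch sketch is only a plausible minimal CNN for this task: 1x32x32 grayscale character crops in, one of n_classes labels out, trained with cross-entropy.

```python
# Minimal CNN character classifier (illustrative architecture).
import torch
import torch.nn as nn

class CharCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One training step on a dummy batch of labeled character crops.
model = CharCNN(n_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 1, 32, 32), torch.randint(0, 10, (8,))
opt.zero_grad()
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```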

  13. Calibration of EMI data based on different electrical methods

    NASA Astrophysics Data System (ADS)

    Nüsch, Anne-Kathrin; Werban, Ulrike; Dietrich, Peter

    2013-04-01

    The advantages of the electromagnetic induction (EMI) method have been known to soil scientists for many years, and it is used for many soil investigations, ranging from salinity measurements to water-content monitoring to the classification of different soil types. Several companies provide instruments for each type of investigation. However, a major disadvantage of the method is that measurements obtained under different conditions (e.g., with different instruments, or at different times or field sites) are not easily comparable. The values yielded by the instruments are not absolute, although absolute values are an important prerequisite for the correct application of EMI, especially at the landscape scale. Furthermore, drifts can occur, potentially caused by weather conditions or instrument errors, producing variations in conductivity that do not reflect actual conditions. With the help of reference lines and repeated measurements, drifts can be detected and eliminated; different measurements (spatial and temporal) then become more comparable, but the corrected values are still not absolute. The best solution for obtaining absolute values is to calibrate the EMI data against known conductivities from other electrical methods. In a series of test measurements, we studied which electrical method is most feasible for calibrating EMI data. The chosen field site is situated on the floodplain of the river Mulde in Saxony (Germany). We chose a profile 100 meters in length that is very heterogeneous and crosses a buried backwater channel. Results show a significant variance of conductivities. Several EMI instruments were tested, among them the EM38DD and EM31 devices from Geonics, which are capable of investigating the subsurface to a depth of up to six meters. For the calibration process, we chose electrical resistivity tomography (ERT), Vertical Electrical Sounding (VES), and

  14. Space Object Tracking Method Based on a Snake Model

    NASA Astrophysics Data System (ADS)

    Zhan-wei, Xu; Xin, Wang

    2016-04-01

    In this paper, to address the problem of unstable tracking of low-orbit, variable-brightness space objects, an improved GVF (Gradient Vector Flow) Snake algorithm adopting an active contour model is proposed to search the real object contour on the CCD image in real time. Combined with a Kalman filter for prediction, a new adaptive tracking method is proposed for space objects. Experiments show that this method can overcome the tracking error caused by a fixed window and improve tracking robustness.

  15. Numerical simulation of thermal discharge based on FVM method

    NASA Astrophysics Data System (ADS)

    Yu, Yunli; Wang, Deguan; Wang, Zhigang; Lai, Xijun

    2006-01-01

    A two-dimensional numerical model is proposed to simulate the thermal discharge from a power plant in Jiangsu Province. The equations in the model consist of two-dimensional non-steady shallow water equations and thermal waste transport equations. Finite volume method (FVM) is used to discretize the shallow water equations, and flux difference splitting (FDS) scheme is applied. The calculated area with the same temperature increment shows the effect of thermal discharge on sea water. A comparison between simulated results and the experimental data shows good agreement. It indicates that this method can give high precision in the heat transfer simulation in coastal areas.
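
    To illustrate the finite-volume idea (not the paper's 2D shallow-water solver with FDS fluxes), the sketch below advances a 1D advection-diffusion equation for excess temperature using first-order upwind advective fluxes and central diffusive fluxes between cell averages; all coefficients are hypothetical.

```python
# Toy 1D finite-volume update for excess-temperature transport.
import numpy as np

def fvm_step(T, u, D, dx, dt):
    F_adv = u * T                       # upwind advective flux (u > 0 assumed)
    F_dif = -D * np.diff(T) / dx        # diffusive flux at interior faces
    flux = F_adv[:-1] + F_dif           # total flux through each interior face
    Tn = T.copy()
    Tn[1:-1] -= dt / dx * (flux[1:] - flux[:-1])   # conservative cell update
    return Tn

T = np.zeros(100)
T[0] = 5.0                              # 5 K excess temperature at the outfall
for _ in range(500):                    # CFL: u*dt/dx = 0.25 < 1
    T = fvm_step(T, u=0.5, D=0.01, dx=1.0, dt=0.5)
```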

  16. Gender-based violence: concepts, methods, and findings.

    PubMed

    Russo, Nancy Felipe; Pirlott, Angela

    2006-11-01

    The United Nations has identified gender-based violence against women as a global health and development issue, and a host of policies, public education, and action programs aimed at reducing gender-based violence have been undertaken around the world. This article highlights new conceptualizations, methodological issues, and selected research findings that can inform such activities. In addition to describing recent research findings that document relationships between gender, power, sexuality, and intimate violence cross-nationally, it identifies cultural factors, including linkages between sex and violence through media images that may increase women's risk for violence, and profiles a host of negative physical, mental, and behavioral health outcomes associated with victimization including unwanted pregnancy and abortion. More research is needed to identify the causes, dynamics, and outcomes of gender-based violence, including media effects, and to articulate how different forms of such violence vary in outcomes depending on cultural context.

  17. CRISPR-Based Methods for Caenorhabditis elegans Genome Engineering

    PubMed Central

    Dickinson, Daniel J.; Goldstein, Bob

    2016-01-01

    The advent of genome editing techniques based on the clustered regularly interspaced short palindromic repeats (CRISPR)–Cas9 system has revolutionized research in the biological sciences. CRISPR is quickly becoming an indispensable experimental tool for researchers using genetic model organisms, including the nematode Caenorhabditis elegans. Here, we provide an overview of CRISPR-based strategies for genome editing in C. elegans. We focus on practical considerations for successful genome editing, including a discussion of which strategies are best suited to producing different kinds of targeted genome modifications. PMID:26953268

  18. A New Activity-Based Financial Cost Management Method

    NASA Astrophysics Data System (ADS)

    Qingge, Zhang

    The standard activity-based financial cost management model is a new model of financial cost management that builds on the standard cost system and activity-based costing and integrates the advantages of the two. By taking R&D expenses as the accounting starting point and after-sale service expenses as the end point, it covers the whole production and operating process, the whole activity chain, and the whole value chain, and it provides more accurate and more adequate cost information for internal management and decision-making.

  19. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson maximum-likelihood estimation (PMLE) method for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, and an automatic acceleration method is used to speed up the iterative process; an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and PSNR (peak signal-to-noise ratio) are used to validate the restoration effect. The simulation results show that the number of iterations affects the PSNR of the processed image and the computation time, and that this method can restore images of earthquake ruin scenes effectively and has good practicability.
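
    The standard multiplicative update for Poisson maximum-likelihood deblurring is the Richardson-Lucy iteration; the sketch below shows that baseline scheme (the paper's automatic acceleration step is omitted, and the PSF is assumed known).

```python
# Richardson-Lucy iteration for Poisson maximum-likelihood restoration.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]                       # adjoint of the blur
    for _ in range(n_iter):
        denom = fftconvolve(est, psf, mode='same')
        ratio = blurred / np.maximum(denom, 1e-12)   # data / model prediction
        est *= fftconvolve(ratio, psf_flip, mode='same')
    return est
```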

  20. A multivariate based event detection method and performance comparison with two baseline methods.

    PubMed

    Liu, Shuming; Smith, Kate; Che, Han

    2015-09-01

    Early warning systems have been widely deployed to protect water systems from accidental and intentional contamination events. Conventional detection algorithms are often criticized for having high false positive rates and low true positive rates. This mainly stems from the inability of these methods to determine whether variation in sensor measurements is caused by equipment noise or by the presence of contamination. This paper presents a new detection method that identifies the existence of contamination by comparing Euclidean distances of correlation indicators, which are derived from the correlation coefficients of multiple water quality sensors. The performance of the proposed method was evaluated using data from a contaminant injection experiment and compared with two baseline detection methods. The results show that the proposed method can differentiate between fluctuations caused by equipment noise and those due to the presence of contamination. It yielded a higher probability of detection and a lower false alarm rate than the two baseline methods. With optimized parameter values, the proposed method can correctly detect 95% of all contamination events with a 2% false alarm rate.
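
    A minimal sketch of this detection logic, with assumed details (window length, threshold): build a correlation-indicator vector from the pairwise correlation coefficients of the sensors over a sliding window, and raise an alarm when its Euclidean distance from an event-free baseline exceeds a threshold.

```python
# Correlation-indicator event detection over multi-sensor data.
import numpy as np

def correlation_indicator(window):
    # window: (n_samples, n_sensors); keep upper-triangle correlations.
    C = np.corrcoef(window, rowvar=False)
    return C[np.triu_indices_from(C, k=1)]

def detect(data, win=60, threshold=0.8):
    base = correlation_indicator(data[:win])   # baseline from event-free data
    alarms = []
    for t in range(win, len(data)):
        ind = correlation_indicator(data[t - win:t])
        alarms.append(np.linalg.norm(ind - base) > threshold)
    return np.array(alarms)
```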

  1. [A hyperspectral subpixel target detection method based on inverse least squares method].

    PubMed

    Li, Qing-Bo; Nie, Xin; Zhang, Guang-Jun

    2009-01-01

    In the present paper, an inverse least squares (ILS) method combined with Mahalanobis-distance outlier detection is discussed for detecting subpixel targets in hyperspectral images. First, an inverse model relating the target spectrum to all pixel spectra was established, the accurate target spectrum having been obtained beforehand, and the SNV algorithm was employed to preprocess each original pixel spectrum separately. After the pretreatment, the regression coefficients of the ILS model were calculated with the partial least squares (PLS) algorithm. Each point in the regression coefficient vector corresponds to a pixel in the image, and the Mahalanobis distance was calculated for each point. Because the Mahalanobis distance measures the extent to which samples deviate from the total population, points with Mahalanobis distances larger than 3σ were regarded as subpixel targets. This algorithm requires no other prior information, such as a representative background spectrum or a background model; only the target spectrum is needed. In addition, the detection result is insensitive to the complexity of the background. The method was applied to AVIRIS remote sensing data. For this simulation experiment, the AVIRIS data were downloaded free from the official NASA website, the spectrum of a ground object in the AVIRIS hyperspectral image was taken as the target spectrum, and the subpixel target was simulated through a linear mixing method. The subpixel detection results of the method described above were compared with those of the orthogonal subspace projection (OSP) method. The results show that the ILS method performs better than the traditional OSP method. The ROC (receiver operating characteristic) curve and SNR were calculated, indicating that the ILS method possesses higher detection accuracy and requires less computing time than the OSP algorithm. PMID:19385196
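
    A hedged sketch of this chain using scikit-learn (an illustration, not the authors' code): SNV-normalize the pixel spectra, fit the inverse model in which the pixel spectra explain the known target spectrum via PLS, then flag pixels whose regression coefficient lies beyond 3 sigma (for a scalar coefficient per pixel, the Mahalanobis distance reduces to a z-score).

```python
# ILS + PLS subpixel-target detection sketch.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def snv(spectra):
    # Standard normal variate: per-spectrum mean-centering and scaling.
    return ((spectra - spectra.mean(axis=1, keepdims=True)) /
            spectra.std(axis=1, keepdims=True))

def detect_subpixel(pixels, target, n_components=5):
    # pixels: (n_pixels, n_bands); target: (n_bands,)
    X = snv(pixels)
    pls = PLSRegression(n_components=n_components)
    pls.fit(X.T, target)            # inverse model: pixels explain the target
    coef = np.ravel(pls.coef_)      # one regression coefficient per pixel
    z = np.abs(coef - coef.mean()) / coef.std()
    return z > 3.0                  # subpixel-target candidates
```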

  2. Objective, Way and Method of Faculty Management Based on Ergonomics

    ERIC Educational Resources Information Center

    WANG, Hong-bin; Liu, Yu-hua

    2008-01-01

    The core problem that influences educational quality of talents in colleges and universities is the faculty management. Without advanced faculty, it is difficult to cultivate excellent talents. With regard to some problems in present faculty construction of colleges and universities, this paper puts forward the new objectives, ways and methods of…

  3. Methods of use for sensor based fluid detection devices

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor)

    2001-01-01

    Methods of use and devices for detecting analyte in fluid. A system for detecting an analyte in a fluid is described comprising a substrate having a sensor comprising a first organic material and a second organic material where the sensor has a response to permeation by an analyte. A detector is operatively associated with the sensor. Further, a fluid delivery appliance is operatively associated with the sensor. The sensor device has information storage and processing equipment, which is operably connected with the device. This device compares a response from the detector with a stored ideal response to detect the presence of analyte. An integrated system for detecting an analyte in a fluid is also described where the sensing device, detector, information storage and processing device, and fluid delivery device are incorporated in a substrate. Methods for use for the above system are also described where the first organic material and a second organic material are sensed and the analyte is detected with a detector operatively associated with the sensor. The method provides for a device, which delivers fluid to the sensor and measures the response of the sensor with the detector. Further, the response is compared to a stored ideal response for the analyte to determine the presence of the analyte. In different embodiments, the fluid measured may be a gaseous fluid, a liquid, or a fluid extracted from a solid. Methods of fluid delivery for each embodiment are accordingly provided.

  4. Comparison of an EMG-based and a stress-based method to predict shoulder muscle forces.

    PubMed

    Engelhardt, Christoph; Malfroy Camine, Valérie; Ingram, David; Müllhaupt, Philippe; Farron, Alain; Pioletti, Dominique; Terrier, Alexandre

    2015-01-01

    The estimation of muscle forces in musculoskeletal shoulder models is still controversial. Two different methods are widely used to solve the indeterminacy of the system: electromyography (EMG)-based methods and stress-based methods. The goal of this work was to evaluate the influence of these two methods on the prediction of muscle forces, glenohumeral load and joint stability after total shoulder arthroplasty. An EMG-based and a stress-based method were implemented into the same musculoskeletal shoulder model. The model replicated the glenohumeral joint after total shoulder arthroplasty. It contained the scapula, the humerus, the joint prosthesis, the rotator cuff muscles supraspinatus, subscapularis and infraspinatus and the middle, anterior and posterior deltoid muscles. A movement of abduction was simulated in the plane of the scapula. The EMG-based method replicated muscular activity of experimentally measured EMG. The stress-based method minimised a cost function based on muscle stresses. We compared muscle forces, joint reaction force, articular contact pressure and translation of the humeral head. The stress-based method predicted a lower force of the rotator cuff muscles. This was partly counter-balanced by a higher force of the middle part of the deltoid muscle. As a consequence, the stress-based method predicted a lower joint load (16% reduced) and a higher superior-inferior translation of the humeral head (increased by 1.2 mm). The EMG-based method has the advantage of replicating the observed cocontraction of stabilising muscles of the rotator cuff. This method is, however, limited to available EMG measurements. The stress-based method has thus an advantage of flexibility, but may overestimate glenohumeral subluxation.

  5. New method of contour-based mask-shape compiler

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Sugiyama, Akiyuki; Onizawa, Akira; Sato, Hidetoshi; Toyoda, Yasutaka

    2007-10-01

    We have developed a new method of accurately profiling a mask shape by utilizing a Mask CD-SEM. The method is intended to realize high accuracy, stability, and reproducibility on the Mask CD-SEM by adopting an edge detection algorithm, the key technology used in CD-SEM for high-accuracy CD measurement. In comparison with conventional image processing methods for contour profiling, it can create profiles with much higher accuracy, comparable with CD-SEM measurements of semiconductor devices. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As the shrinkage of semiconductor design rules has advanced, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation) and the surge in mask-making cost have become big concerns for device manufacturers. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose a DFM solution in which two-dimensional data are extracted for error-free practical simulation by precisely reproducing the real mask shape in addition to the mask data simulation. The flow, centered around the design data, is fully automated and provides an environment for optimization and verification with fully automated model calibration and much less error. It also allows complete consolidation of input and output functions with an EDA system by constructing a design-data-oriented system structure. This method can therefore be regarded as a strategic DFM approach in semiconductor metrology.

  6. Methods and Strategies: Modeling Problem-Based Instruction

    ERIC Educational Resources Information Center

    Sterling, Donna R.

    2007-01-01

    Students get excited about science when they investigate real scientific problems in the classroom, especially when the investigation extends over several weeks. This article describes a health-science problem-based learning (PBL) investigation that a group of teachers and teacher educators devised together for a group of fourth- to sixth-grade…

  7. The Teaching of Protein Synthesis--A Microcomputer Based Method.

    ERIC Educational Resources Information Center

    Goodridge, Frank

    1983-01-01

    Describes two computer programs (BASIC for 32K Commodore PET) for teaching protein synthesis. The first is an interactive test of base-pairing knowledge, and the second generates random DNA nucleotide sequences, with instructions for substitution, insertion, and deletion printed out for each student. (JN)

  8. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  9. A Methods-Based Biotechnology Course for Undergraduates

    ERIC Educational Resources Information Center

    Chakrabarti, Debopam

    2009-01-01

    This new course in biotechnology for upper division undergraduates provides a comprehensive overview of the process of drug discovery that is relevant to biopharmaceutical industry. The laboratory exercises train students in both cell-free and cell-based assays. Oral presentations by the students delve into recent progress in drug discovery.…

  10. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue over time because of increasing immigration, in order to estimate accurately the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different developmental stages of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to verify the interobserver error. Results showed that the interobserver error varies between 43 and 57 % for the right lower third molar (M48) and between 23 and 49 % for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the teeth or the type of professional figure. The results prove that Olze's method is not easy to apply by personnel without adequate training, because of an intrinsic interobserver error. Since it is nevertheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  11. An Network Attack Modeling Method Based on MLL-AT

    NASA Astrophysics Data System (ADS)

    Fen, Yan; Xinchun, Yin; Hao, Huang

    In this paper, methods of modeling attacks using attack trees are researched. The main goal is to use attack trees effectively to model and express multi-stage network attacks. We expand and improve the traditional attack tree: the attack nodes of the traditional attack tree are redefined, and the attack risk of leaf nodes is quantified. On this basis, the approach to building an MLL-AT (Multi-Level & Layer Attack Tree) is proposed. The improved attack tree can model attacks more accurately, in particular multi-stage network attacks. The new model can also be used to evaluate a system's risk and to distinguish the varying degrees of security threat posed by different attack sequences.

  12. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  13. Novel nanorods based on PANI / PEO polymers using electrospinning method

    NASA Astrophysics Data System (ADS)

    Al-Hazeem, Nabeel Z.; Ahmed, Naser M.; Matjafri, M. Z.; Sabah, Fayroz A.; Rasheed, Hiba S.

    2016-07-01

    In this work, we fabricated nanorods by applying an electric potential to a polymer solution of poly(ethylene oxide) (PEO) and polyaniline (PANI) using the electrospinning method. The samples were characterized by field-emission scanning electron microscopy (FE-SEM), X-ray diffraction (XRD), and photoluminescence. The results showed the formation of nanorods on a glass substrate, with diameters ranging from 52.78 to 122.40 nm and lengths between 1.15 and 1.32 μm. To our knowledge, these results are reported for the first time: nanorods have not previously been fabricated from these polymers using the method employed in this research.

  14. Optimizing methods for PCR-based analysis of predation

    PubMed Central

    Sint, Daniela; Raso, Lorna; Kaufmann, Rüdiger; Traugott, Michael

    2011-01-01

    Molecular methods have become an important tool for studying feeding interactions under natural conditions. Despite their growing importance, many methodological aspects have not yet been evaluated but need to be considered to fully exploit the potential of this approach. Using feeding experiments with high alpine carabid beetles and lycosid spiders, we investigated how PCR annealing temperature affects prey DNA detection success and how post-PCR visualization methods differ in their sensitivity. Moreover, the replicability of prey DNA detection among individual PCR assays was tested using beetles and spiders that had digested their prey for extended times postfeeding. By screening all predators for three differently sized prey DNA fragments (range 116–612 bp), we found that only in the longest PCR product, a marked decrease in prey detection success occurred. Lowering maximum annealing temperatures by 4 °C resulted in significantly increased prey DNA detection rates in both predator taxa. Among the three post-PCR visualization methods, an eightfold difference in sensitivity was observed. Repeated screening of predators increased the total number of samples scoring positive, although the proportion of samples testing positive did not vary significantly between different PCRs. The present findings demonstrate that assay sensitivity, in combination with other methodological factors, plays a crucial role to obtain robust trophic interaction data. Future work employing molecular prey detection should thus consider and minimize the methodologically induced variation that would also allow for better cross-study comparisons. PMID:21507208

  15. A multithread based new sparse matrix method in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Tian, Jie; Liu, Dan; Sun, Li; Yang, Xin; Han, Dong

    2010-03-01

    Among many molecular imaging modalities, bioluminescence tomography (BLT) stands out as an effective approach for in vivo imaging because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, large-scale problems arise when the mesh representing the small animal or phantom contains a large number of points and elements. The system matrix generated from the diffusion approximation (DA) model using the finite element method (FEM) is then large, so there may not be enough random access memory (RAM) for the program, and the related inverse problem cannot be solved. Considering the sparsity of the BLT system matrix, we developed a new sparse matrix format (ZSM) to overcome this problem, and the related algorithms were all sped up by multi-threading. The inverse problem is then solved by the Tikhonov regularization method in an adaptive finite element (AFE) framework. Finally, the performance of this method is tested on a heterogeneous phantom, with boundary data obtained through Monte Carlo simulation. In solving the forward model, the ZSM saves more processing time and memory space than the usual approaches, such as those not using a sparse matrix or those using triples or cross-linked sparse matrices. Numerical experiments show that the processing speed increases as more CPU cores are used. By incorporating ZSM, BLT can be applied to large-scale problems with large system matrices.

  16. Evaluation of roundness error based on improved area hunting method

    NASA Astrophysics Data System (ADS)

    Zhan, Weiwei; Xue, Zi; Wu, Yongbo

    2010-08-01

    Rotary parts are commonly used in the field of precision machinery, and their roundness error can greatly affect installation accuracy, which determines the performance of the machine. It is essential to establish a proper method for evaluating roundness error to ensure accurate assessment. However, various evaluation algorithms are time-consuming, complex, and inaccurate, and cannot meet the demands of precision measurement. In this paper, an improved area hunting method using the minimum zone circle (MZC), minimum circumscribed circle (MCC), and maximum inscribed circle (MIC) as reference circles is proposed. According to the specific area hunting rules of the different reference circles, a new marked point, closer to the real center of the reference circle, is located among the grid cross points around the previous marked point. The searched area decreases, and the area hunting process terminates when the iteration accuracy is satisfied. This approach was realized in a precision form measurement instrument developed at NIM. The test results indicate that the improved method is efficient and accurate and can be easily implemented in precision roundness measurement.
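
    The shrinking-grid iteration is simple to illustrate. The sketch below (illustrative, for the minimum zone circle only) evaluates the zone width, i.e. the difference between the largest and smallest point radii, at the grid cross points around the current center, moves to the best one, and halves the searched area when no neighbor improves.

```python
# "Area hunting" iteration for the minimum zone circle center.
import numpy as np

def zone_width(center, pts):
    r = np.linalg.norm(pts - center, axis=1)
    return r.max() - r.min()

def minimum_zone_center(pts, step=0.1, tol=1e-7):
    center = pts.mean(axis=0)                  # initial marked point
    while step > tol:
        grid = [center + step * np.array([i, j])
                for i in (-1, 0, 1) for j in (-1, 0, 1)]
        best = min(grid, key=lambda c: zone_width(c, pts))
        if np.allclose(best, center):
            step /= 2                          # shrink the searched area
        center = best
    return center, zone_width(center, pts)
```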

  17. An adaptive unsupervised hyperspectral classification method based on Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa

    2014-11-01

    To achieve high-precision adaptive unsupervised clustering, a method that uses Gaussian distributions to fit the inter-class similarity and the noise distribution is proposed in this paper; the automatic segmentation threshold is then determined from the fitting result. First, in accordance with the similarity measure of the spectral curves, the method assumes that both target and background follow Gaussian distributions; the distribution characteristics are obtained by fitting the similarity measures of the minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is thereby achieved. Second, the pixel minimum related windows are used to merge adjacent similar pixels into blocks, completing the dimensionality reduction and realizing unsupervised classification. AVIRIS data and a set of hyperspectral data we acquired are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm is not only adaptive but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition, and robustness.
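
    The thresholding step can be sketched as follows, with assumed details: fit a Gaussian to the histogram of similarity values (center pixel vs. its minimum related window) and place the segmentation threshold at mu + 3*sigma of the fitted distribution.

```python
# Gaussian fit of the similarity distribution -> adaptive threshold.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma):
    return a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def adaptive_threshold(similarities, bins=100):
    hist, edges = np.histogram(similarities, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), similarities.mean(), similarities.std()]  # initial guess
    (a, mu, sigma), _ = curve_fit(gaussian, centers, hist, p0=p0)
    return mu + 3 * abs(sigma)
```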

  18. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  19. Novel multilevel inverter carrier-based PWM method

    SciTech Connect

    Tolbert, L.M.; Habetler, T.G.

    1999-10-01

    The advent of the transformerless multilevel inverter topology has brought forth various pulsewidth modulation (PWM) schemes as a means to control the switching of the active devices in each of the multiple voltage levels in the inverter. An analysis of how existing multilevel carrier-based PWM affects switch utilization for the different levels of a diode-clamped inverter is conducted. Two novel carrier-based multilevel PWM schemes are presented which help to optimize or balance the switch utilization in multilevel inverters. A 10-kW prototype six-level diode-clamped inverter has been built and controlled with the novel PWM strategies proposed in this paper to act as a voltage-source inverter for a motor drive.
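
    For context, the sketch below shows the generic level-shifted (phase-disposition) carrier comparison for a six-level leg, the baseline that the paper's balancing schemes modify; the novel switch-utilization strategies themselves are not reproduced. Five stacked triangular carriers tile the reference range, and the switching state is the number of carriers below the reference.

```python
# Generic phase-disposition PWM for a six-level diode-clamped leg.
import numpy as np

def pd_pwm(ref, t, f_carrier, n_levels=6):
    n_car = n_levels - 1
    tri = 2 * np.abs((t * f_carrier) % 1 - 0.5)        # triangle wave in [0, 1]
    # Carrier k occupies one voltage band so the stack tiles [-1, 1].
    bands = -1 + 2 * (np.arange(n_car)[:, None] + tri) / n_car
    return (ref >= bands).sum(axis=0)                  # output level 0..n_levels-1

t = np.linspace(0, 0.02, 20000)                        # one 50 Hz fundamental cycle
ref = 0.9 * np.sin(2 * np.pi * 50 * t)                 # modulation index 0.9
levels = pd_pwm(ref, t, f_carrier=2000)
```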

  20. A Random Forest-based ensemble method for activity recognition.

    PubMed

    Feng, Zengtao; Mo, Lingfei; Li, Meng

    2015-01-01

    This paper presents a multi-sensor ensemble approach to human physical activity (PA) recognition using random forests. We designed an ensemble learning algorithm that integrates several independent Random Forest classifiers based on different sensor feature sets to build a more stable, more accurate, and faster classifier for human activity recognition. To evaluate the algorithm, PA data collected from PAMAP (Physical Activity Monitoring for Aging People), a standard, publicly available database, were used for training and testing. The experimental results show that the algorithm correctly recognizes 19 PA types with an accuracy of 93.44%, while training is faster than for other methods. The ensemble classifier system based on the RF (Random Forest) algorithm achieves high recognition accuracy and fast calculation. PMID:26737432
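
    A minimal scikit-learn sketch of the ensemble idea, with assumed details: one Random Forest per sensor feature set, combined by averaging class probabilities.

```python
# Per-sensor Random Forest ensemble with probability averaging.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class SensorEnsemble:
    def __init__(self, n_estimators=100):
        self.n_estimators = n_estimators
        self.forests = []

    def fit(self, feature_sets, y):
        # feature_sets: list of (n_samples, n_features_i) arrays, one per sensor.
        self.forests = [RandomForestClassifier(n_estimators=self.n_estimators).fit(X, y)
                        for X in feature_sets]
        return self

    def predict(self, feature_sets):
        probs = np.mean([f.predict_proba(X)
                         for f, X in zip(self.forests, feature_sets)], axis=0)
        return self.forests[0].classes_[probs.argmax(axis=1)]
```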

  1. Ensemble method: Community detection based on game theory

    NASA Astrophysics Data System (ADS)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many businesses and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. The problem of selecting a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network: the choice of methodology can affect the outcome of the experiments, because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on the notion of game theory, which can effectively combine the advantages of previous algorithms to obtain a better community classification result. Experiments on standard datasets verify that our game-theory-based community detection model is valid and performs better.

  2. Method of polishing nickel-base alloys and stainless steels

    DOEpatents

    Steeves, Arthur F.; Buono, Donald P.

    1981-01-01

    A chemical attack polish and polishing procedure for use on metal surfaces such as nickel-base alloys and stainless steels. The chemical attack polish comprises Fe(NO₃)₃, concentrated CH₃COOH, concentrated H₂SO₄, and H₂O. The polishing procedure includes saturating a polishing cloth with the chemical attack polish and submicron abrasive particles and buffing the metal surface.

  3. Bacteriophage Transduction in Staphylococcus aureus: Broth-Based Method.

    PubMed

    Krausz, Kelsey L; Bose, Jeffrey L

    2016-01-01

    The ability to move DNA between Staphylococcus strains is essential for the genetic manipulation of this bacterium. In the staphylococci, this is often accomplished through transduction using a generalized transducing phage, which can be performed in different ways; hence the presence of two transduction procedures in this book. The following protocol is a relatively easy-to-perform, broth-based procedure that we have used extensively to move both plasmids and chromosomal fragments between strains of Staphylococcus aureus.

  4. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for detecting the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales differ significantly. We can therefore distinguish them easily in wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting-based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also works significantly better than the spectral-fitting-based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
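
    The separation step can be illustrated with a discrete wavelet decomposition (the paper uses a continuous transform; PyWavelets' DWT is used here only to show the idea, and the number of zeroed levels is an assumed parameter): coefficients at the coarsest scales carry the smooth foreground and are discarded before reconstruction.

```python
# Wavelet-domain foreground subtraction sketch for one spectrum.
import numpy as np
import pywt

def subtract_foreground(spectrum, wavelet='db4', smooth_levels=3):
    coeffs = pywt.wavedec(spectrum, wavelet)
    # coeffs[0] is the coarsest approximation; zero it and the coarsest
    # detail levels, which carry the smooth foreground component.
    for i in range(min(smooth_levels, len(coeffs))):
        coeffs[i] = np.zeros_like(coeffs[i])
    return pywt.waverec(coeffs, wavelet)[:len(spectrum)]
```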

  5. An algebra-based method for inferring gene regulatory networks

    PubMed Central

    2014-01-01

    Background The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating into the inference process, prior knowledge about the network, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. Results This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also

  6. Distributed Cooperation Solution Method of Complex System Based on MAS

    NASA Astrophysics Data System (ADS)

    Weijin, Jiang; Yuhui, Xu

    To adapt fault-diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of complex systems, this paper introduces multi-agent technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on a hierarchical model of diagnostic decision-making and a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent, and decision agent are analyzed; the organization and evolution of agents in the system are proposed; the corresponding conflict resolution algorithm is given; and a layered structure of abstract agents with public attributes is built. The system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in distributed domains.

  7. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.
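
    As a flavor of the analysis, here is a minimal sketch of wavelet denoising with a noise-adaptive threshold applied to a synthetic acoustic emission burst; the signal, wavelet choice, and thresholds are illustrative assumptions and do not reproduce the RAMS processing chain.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(1)
    fs = 100_000
    t = np.arange(0, 0.1, 1 / fs)

    # Synthetic background noise with a short AE-like burst at 50 ms.
    x = rng.normal(0, 1.0, t.size)
    burst = np.exp(-((t - 0.05) * 2000) ** 2) * np.sin(2 * np.pi * 20_000 * t)
    x += 6.0 * burst

    # Wavelet denoising with a noise-adaptive (universal) threshold, the
    # noise scale estimated from the finest detail band.
    coeffs = pywt.wavedec(x, 'sym8', level=4)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(x.size))
    coeffs[1:] = [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
    denoised = pywt.waverec(coeffs, 'sym8')[:x.size]

    # Flag samples where the denoised envelope exceeds an adaptive multiple
    # of its own robust scale.
    envelope = np.abs(denoised)
    detections = envelope > 5 * np.median(envelope + 1e-12)
    print("burst samples flagged:", detections.sum())
    ```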

  8. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    PubMed

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. First, models of the proposed method are developed. Second, the recharging process is simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method can contribute to a low-cost, convenient, and safe way of recharging implantable biosensors.
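
    A heavily simplified sketch of the idea follows: a one-dimensional Monte Carlo random walk depositing photon energy in two skin layers. The optical coefficients and layer thicknesses are placeholders, not values from the paper, and real simulations track 3-D directions, boundaries, and reflection.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical absorption/scattering coefficients (1/mm) and thicknesses
    # (mm) for epidermis and dermis -- placeholders, not the paper's values.
    layers = [  # (mu_a, mu_s, thickness)
        (0.4, 40.0, 0.1),   # epidermis
        (0.2, 20.0, 2.0),   # dermis
    ]
    absorbed = np.zeros(len(layers))

    def layer_at(z):
        top = 0.0
        for i, (_, _, d) in enumerate(layers):
            if z < top + d:
                return i
            top += d
        return None  # photon passed below the deepest layer

    for _ in range(2_000):                # photon packets
        z, weight = 0.0, 1.0
        while weight > 0.05:
            if z < 0:
                break                     # back-scattered out of the skin
            i = layer_at(z)
            if i is None:
                break                     # transmitted through all layers
            mu_a, mu_s, _ = layers[i]
            mu_t = mu_a + mu_s
            absorbed[i] += weight * mu_a / mu_t   # deposit at interaction site
            weight *= mu_s / mu_t                 # surviving weight scatters on
            step = -np.log(rng.random()) / mu_t
            z += step * rng.choice([-1.0, 1.0])   # crude 1-D direction choice
    print("relative energy deposited per layer:", absorbed / absorbed.sum())
    ```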

  9. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on the outer race, inner race, and balls are processed in the time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, peak, crest factor, and kurtosis in the time domain; amplitudes of the outer race, inner race, and ball defect frequencies in the FFT and HFRT spectra in the frequency domain; and the peak of the HHT spectrum in the time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, the Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of each DIP, and the DIPs with positive gain are again fused into an MD using the Gram-Schmidt orthogonalization process (GSP) in order to calculate Gram-Schmidt vectors (GSVs). Among the remaining DIPs, the sign of the GSVs of the frequency-domain DIPs is checked to classify the probable defect. The approach uses the MTGS method to combine the damage parameters and, in conjunction with the GSVs, classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type, and shape/configuration of the defect. The proposed methodology is also applied to vibration data acquired from an accelerated life test on bearings, which establishes its applicability to naturally induced and naturally progressed defects. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real-time environment.
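
    The fusion step can be sketched compactly. The example below standardizes synthetic damage identification parameters (DIPs) against a healthy reference group and fuses them into a scaled Mahalanobis distance; it uses correlation-matrix inversion rather than the paper's Gram-Schmidt computation, and all data are simulated for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Rows = observations, columns = damage identification parameters (DIPs),
    # e.g. RMS, peak, crest factor, kurtosis, defect-frequency amplitudes ...
    healthy = rng.normal(0, 1, (200, 11)) @ np.diag(np.linspace(0.5, 2.0, 11))
    test = healthy[:5] + 4.0          # shifted rows mimic a defective bearing

    # Standardise against the healthy ("reference") group, then fuse the
    # DIPs into one Mahalanobis distance per observation (scaled by the
    # number of DIPs, as in the Mahalanobis-Taguchi literature).
    mean, std = healthy.mean(axis=0), healthy.std(axis=0, ddof=1)
    z_ref = (healthy - mean) / std
    R_inv = np.linalg.inv(np.corrcoef(z_ref, rowvar=False))

    def mahalanobis(samples):
        z = (samples - mean) / std
        k = z.shape[1]
        return np.einsum('ij,jk,ik->i', z, R_inv, z) / k

    print("healthy MD ~", mahalanobis(healthy).mean())   # close to 1 by construction
    print("faulty  MD ~", mahalanobis(test).mean())      # much greater than 1
    ```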

  10. Optical tissue phantoms based on spin coating method

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Ha, Myungjin; Yu, Sung Kon; Radfar, Edalat; Jun, Eunkwon; Lee, Nara; Jung, Byungjo

    2015-03-01

    Fabrication of optical tissue phantoms (OTPs) simulating the whole skin structure has been regarded as laborious and time-consuming work. This study fabricated a multilayer OTP optically and structurally simulating the epidermis-dermis structure, including a blood vessel. The spin coating method was used to produce a thin layer mimicking the epidermal layer and was then optimized for the reference epoxy and silicone matrices. The adequacy of both materials for phantom fabrication was assessed by comparing the fabrication results. In addition, the similarity between the OTP and biological tissue in optical properties and thickness was measured to evaluate the fabrication process.

  11. Differentiated protection method in passive optical networks based on OPEX

    NASA Astrophysics Data System (ADS)

    Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2011-12-01

    Reliable service delivery becomes more significant as society's dependency on electronic services increases. As the capability of PONs grows, both residential and business customers may be served by a single PON. Meanwhile, operational expenditures (OPEX) have been proven to be a very important part of the total cost for a telecommunication operator. Thus, in this paper, we present a partial-protection PON architecture and compare the OPEX of fully duplicated and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements, and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.

  12. The decoding method based on wavelet image En vector quantization

    NASA Astrophysics Data System (ADS)

    Liu, Chun-yang; Li, Hui; Wang, Tao

    2013-12-01

    With the rapid progress of internet technology, large-scale integrated circuits, and computer technology, digital image processing has developed greatly. Vector quantization plays a very important role in digital image compression. It has advantages over scalar quantization, offering a higher compression ratio and a simple image decoding algorithm, and has therefore been widely used in many practical fields. This paper efficiently combines the wavelet analysis method with vector quantization encoding and tests the result on standard images. The experimental results show a great improvement in PSNR compared with the LBG algorithm.
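
    A minimal sketch of the idea, with k-means standing in for LBG codebook training and a random array standing in for a test image: decompose with a 2-D wavelet transform, vector-quantise the detail subbands in 2x2 blocks, reconstruct, and report PSNR.

    ```python
    import numpy as np
    import pywt
    from scipy.cluster.vq import kmeans2

    rng = np.random.default_rng(4)
    img = rng.random((64, 64)).astype(np.float64)   # stand-in for a test image

    # One-level 2-D DWT, then vector-quantise each detail subband with a
    # k-means (LBG-like) codebook of 4-dimensional vectors (2x2 blocks).
    cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')

    def vq(subband, codebook_size=32):
        h, w = subband.shape
        blocks = (subband.reshape(h // 2, 2, w // 2, 2)
                         .transpose(0, 2, 1, 3).reshape(-1, 4))
        codebook, labels = kmeans2(blocks, codebook_size, minit='++')
        quantised = codebook[labels].reshape(h // 2, w // 2, 2, 2)
        return quantised.transpose(0, 2, 1, 3).reshape(h, w)

    recon = pywt.idwt2((cA, (vq(cH), vq(cV), vq(cD))), 'haar')
    mse = np.mean((img - recon) ** 2)
    psnr = 10 * np.log10(1.0 / mse)                 # peak value 1.0 for this data
    print(f"PSNR: {psnr:.1f} dB")
    ```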

  13. Significance of norms and completeness in variational based methods

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1989-01-01

    By means of a simple structural problem, an important requirement often overlooked in practice on the basis functions used in Rayleigh-Ritz-Galerkin type methods is brought into focus. The problem of the static deformation of a uniformly loaded beam is solved variationally by expanding the beam displacement in a Fourier Cosine series. The potential energy functional is rendered stationary subject to the geometric boundary conditions. It is demonstrated that the variational approach does not converge to the true solution. The object is to resolve this paradox, and in so doing, indicate the practical implications of norms and completeness in an appropriate inner product space.

  14. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large-scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large-scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated through the use of image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  15. Determining the base resistance of InP HBTs: An evaluation of methods and structures

    NASA Astrophysics Data System (ADS)

    Nardmann, Tobias; Krause, Julia; Pawlak, Andreas; Schroter, Michael

    2016-09-01

    Many different methods can be found in the literature for determining both the internal and external base series resistance based on single transistor terminal characteristics. Those methods are not equally reliable or applicable for all technologies, device sizes and speeds. In this review, the most common methods are evaluated regarding their suitability for InP heterojunction bipolar transistors (HBTs) based on both measured and simulated data. Using data generated by a sophisticated physics-based compact model allows an evaluation of the extraction method precision by comparing the extracted parameter value to its known value. Based on these simulations, this study provides insight into the limitations of the applied methods, causes for errors and possible error mitigation. In addition to extraction methods based on just transistor terminal characteristics, test structures for separately determining the components of the base resistance from sheet and specific contact resistances are discussed and applied to serve as reference for the experimental evaluation.

  16. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
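
    The three operators are easy to illustrate. The sketch below runs a generic real-coded genetic algorithm (tournament reproduction, single-point crossover, Gaussian mutation) on a toy objective standing in for a structural criterion; none of the parameters are taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy objective standing in for a structural criterion (e.g. maximising
    # the fundamental frequency of a truss as a function of member areas).
    def fitness(x):
        return -np.sum((x - 0.3) ** 2)   # optimum at x = 0.3 for every variable

    pop, n_vars, n_gen = 40, 6, 60
    population = rng.random((pop, n_vars))

    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in population])
        # Reproduction: binary tournament selection.
        idx = rng.integers(0, pop, (pop, 2))
        parents = population[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                                      idx[:, 0], idx[:, 1])]
        # Crossover: single cut point per pair, tails exchanged.
        children = parents.copy()
        for i in range(0, pop - 1, 2):
            cut = rng.integers(1, n_vars)
            children[i, cut:], children[i + 1, cut:] = \
                parents[i + 1, cut:].copy(), parents[i, cut:].copy()
        # Mutation: small Gaussian perturbation with low probability.
        mask = rng.random(children.shape) < 0.05
        children[mask] += rng.normal(0, 0.1, mask.sum())
        population = np.clip(children, 0.0, 1.0)

    best = population[np.argmax([fitness(ind) for ind in population])]
    print("best design variables:", np.round(best, 3))
    ```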

  17. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on the development of stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that can realize interaction between AutoCAD and a digital photogrammetry system is offered through ObjectARX and pipeline technology. An experiment with the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) was conducted to verify feasibility; the results show that this scheme is feasible and of great significance for realizing integrated acquisition and editing.

  18. Fast, moment-based estimation methods for delay network tomography

    SciTech Connect

    Lawrence, Earl Christophre; Michailidis, George; Nair, Vijayan N

    2008-01-01

    Consider the delay network tomography problem where the goal is to estimate distributions of delays at the link-level using data on end-to-end delays. These measurements are obtained using probes that are injected at nodes located on the periphery of the network and sent to other nodes also located on the periphery. Much of the previous literature deals with discrete delay distributions by discretizing the data into small bins. This paper considers more general models with a focus on computationally efficient estimation. The moment-based schemes presented here are designed to function well for larger networks and for applications like monitoring that require speedy solutions.
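
    The flavor of moment-based estimation can be shown on the smallest tree: two probes that share one link and then diverge, where the covariance of the end-to-end delays isolates the shared link's variance. The delay distributions below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Two probes share link A, then split onto links B and C:
    #   X1 = A + B,  X2 = A + C  (all link delays independent).
    n = 20_000
    A = rng.gamma(2.0, 1.5, n)     # hypothetical link-delay distributions
    B = rng.gamma(3.0, 0.5, n)
    C = rng.gamma(1.5, 2.0, n)
    X1, X2 = A + B, A + C          # the only quantities actually measured

    # Moment-based identification: cov(X1, X2) = var(A) because B and C
    # are independent of A and of each other.
    var_A_hat = np.cov(X1, X2)[0, 1]
    var_B_hat = np.var(X1, ddof=1) - var_A_hat
    var_C_hat = np.var(X2, ddof=1) - var_A_hat
    print("var(A):", var_A_hat, "true:", 2.0 * 1.5 ** 2)
    print("var(B):", var_B_hat, "true:", 3.0 * 0.5 ** 2)
    print("var(C):", var_C_hat, "true:", 1.5 * 2.0 ** 2)
    ```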

  19. A novel gene detection method based on period-3 property.

    PubMed

    Huang, Lun; Bataineh, Mohammad Al; Atkin, G E; Wang, Siyun; Zhang, Wei

    2009-01-01

    Processing of biomolecular sequences using communication theory techniques provides powerful approaches for solving highly relevant problems in bioinformatics by properly mapping character strings into numerical sequences. We provide an optimized procedure for predicting protein-coding regions in DNA sequences based on the period-3 property of coding region. We present a digital correlating and filtering approach in the process of predicting these regions, and find out their locations by using the magnitude of the output sequence. These approaches result in improved computational techniques for the solution of useful problems in genomic information science and technology. PMID:19963599
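
    A minimal sketch of period-3 detection follows, using the common Voss indicator mapping and the DFT evaluated at frequency 2*pi/3; the paper's optimized correlating-and-filtering procedure is more elaborate than this single-bin measure.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Map a DNA string to four binary indicator sequences (Voss mapping) and
    # measure spectral power at the period-3 frequency; protein-coding
    # regions tend to show a pronounced peak there.
    def period3_power(seq):
        n = len(seq)
        power = 0.0
        for base in "ACGT":
            x = np.array([1.0 if c == base else 0.0 for c in seq])
            X = np.sum(x * np.exp(-2j * np.pi * np.arange(n) / 3))  # DFT at k = n/3
            power += abs(X) ** 2
        return power / n

    coding_like = "ATGGCC" * 50                        # strong 3-periodic structure
    random_seq = "".join(rng.choice(list("ACGT"), 300))
    print("coding-like :", period3_power(coding_like))
    print("random      :", period3_power(random_seq))
    ```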

  20. METHOD FOR ANNEALING AND ROLLING ZIRCONIUM-BASE ALLOYS

    DOEpatents

    Picklesimer, M.L.

    1959-07-14

    A fabrication procedure is presented for alpha-stabilized zirconium-base alloys, and in particular Zircaloy-2. The alloy is initially worked at a temperature outside the alpha-plus-beta range (810 to 970 deg C), held at a temperature above 970 deg C for 30 minutes, and cooled rapidly. The alloy is then cold-worked to reduce the size by at least 20% and annealed at a temperature from 700 to 810 deg C. This procedure serves both to prevent the formation of stringers and to provide a randomly oriented crystal structure.

  1. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report presents preliminary results for a load-flow analysis using a decomposition principle and shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  2. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1995-01-03

    A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  3. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1994-01-25

    A system and a method for imaging desired surfaces of a workpiece are described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  4. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1995-01-01

    A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  5. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1994-01-01

    A system and a method for imaging desired surfaces of a workpiece are described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  6. Speckle reduction methods in laser-based picture projectors

    NASA Astrophysics Data System (ADS)

    Akram, M. Nadeem; Chen, Xuyuan

    2016-02-01

    Laser sources have for many years promised to be better light sources than traditional lamps or light-emitting diodes (LEDs) for projectors, enabling projectors with a wide colour gamut for vivid images, super brightness and high contrast for the best picture quality, long lifetime for maintenance-free operation, mercury-free construction, and low power consumption for a green environment. A major technological obstacle to using lasers for projection has been the speckle noise caused by the coherent nature of lasers. For speckle reduction, current state-of-the-art solutions apply moving parts with large physical space demands. Solutions beyond the state of the art need to be developed, such as integrated optical components, hybrid MOEMS devices, and active phase modulators for compact speckle reduction. In this article, the major methods reported in the literature for speckle reduction in laser projectors are presented and explained. With the advancement of semiconductor lasers, with largely reduced cost for the red, green, and blue primary colours, and with the methods developed for their speckle reduction, it is hoped that lasers will be widely utilized in different projector applications in the near future.

  7. Method for forming bismuth-based superconducting ceramics

    DOEpatents

    Maroni, Victor A.; Merchant, Nazarali N.; Parrella, Ronald D.

    2005-05-17

    A method for reducing the concentration of non-superconducting phases during the heat treatment of Pb doped Ag/Bi-2223 composites having Bi-2223 and Bi-2212 superconducting phases is disclosed. A Pb doped Ag/Bi-2223 composite having Bi-2223 and Bi-2212 superconducting phases is heated in an atmosphere having an oxygen partial pressure not less than about 0.04 atmospheres and the temperature is maintained at the lower of a non-superconducting phase take-off temperature and the Bi-2223 superconducting phase grain growth take-off temperature. The oxygen partial pressure is varied and the temperature is varied between about 815.degree. C. and about 835.degree. C. to produce not less than 80 percent conversion to Pb doped Bi-2223 superconducting phase and not greater than about 20 volume percent non-superconducting phases. The oxygen partial pressure is preferably varied between about 0.04 and about 0.21 atmospheres. A product by the method is disclosed.

  8. Spectral methods and cluster structure in correlation-based networks

    NASA Astrophysics Data System (ADS)

    Heimo, Tapio; Tibély, Gergely; Saramäki, Jari; Kaski, Kimmo; Kertész, János

    2008-10-01

    We investigate how, in complex systems, the eigenpairs of the matrices derived from the correlations of multichannel observations reflect the cluster structure of the underlying networks. For this we use daily return data from the NYSE and focus specifically on the spectral properties of the weight matrix W, with elements W_ij = |C_ij| − δ_ij, and the diffusion matrix D, with elements D_ij = W_ij/s_j − δ_ij, where C is the correlation matrix and s_i = ∑_j W_ij is the strength of node i. The eigenvalues (and corresponding eigenvectors) of the weight matrix are ranked in descending order. As in earlier observations, the first eigenvector stands for a measure of the market correlations. Its components are, to a first approximation, equal to the strengths of the nodes, with a second-order, roughly linear, correction. The high-ranking eigenvectors, excluding the highest-ranking one, are usually assigned to market sectors and industrial branches. Our study shows that for both weight and diffusion matrices the eigenpair analysis is not capable of easily deducing the cluster structure of the network without a priori knowledge. In addition, we have studied the clustering of stocks using the asset graph approach with and without spectrum-based noise filtering. It turns out that asset graphs are quite insensitive to noise and there is no sharp percolation transition as a function of the ratio of bonds included; thus no natural threshold value for that ratio seems to exist. We suggest that these observations can be of use for other correlation-based networks as well.

  9. A New Quaternion-Based Encryption Method for DICOM Images.

    PubMed

    Dzwonkowski, Mariusz; Papaj, Michal; Rykaczewski, Roman

    2015-11-01

    In this paper, a new quaternion-based lossless encryption technique for Digital Imaging and Communications in Medicine (DICOM) images is proposed. We have scrutinized and slightly modified the concept of the DICOM network to point out the best location for the proposed encryption scheme, which significantly improves the speed of DICOM image encryption in comparison with the advanced encryption standard and triple data encryption standard algorithms originally embedded into DICOM. The proposed algorithm decomposes a DICOM image into two 8-bit gray-tone images in order to perform encryption. The algorithm implements a Feistel network similar to the scheme proposed by Sastry and Kumar. It uses special properties of quaternions to perform rotations of data sequences in 3D space for each of the cipher rounds. The images are written as Lipschitz quaternions, and modular arithmetic is implemented for operations with the quaternions. A computer-based analysis has been carried out, and the obtained results are shown at the end of this paper. PMID:26276993

  10. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research, an evaluation has been made of different methods of content-based image retrieval for logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we compared different retrieval methods. Two of these methods were available in commercial packages, QBIC and Imatch, where the implementation of the content-based image retrieval methods is not exactly known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding-box, and region-based shape methods. In addition, we tested the log-polar method that is available from our own research.

  11. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    PubMed

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with those of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI=97.6 ± 0.5%; false positive error, FPE = 2.2 ± 0.7%; false negative error, FNE=2.5 ± 0.8%; average symmetric surface distance, ASD=1.4 ± 0.5mm) than the 2D (SI=94.0 ± 1.9%; FPE = 5.3 ± 1.1%; FNE=6.5 ± 3.7%; ASD=6.7 ± 3.8mm) region growing method. The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and a computer of the hybrid method (28 ± 4 s) is significantly shorter than the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferred for liver segmentation in preoperative virtual liver surgery planning.

  12. Formal Methods for Autonomic and Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Swarms of intelligent rovers and spacecraft are being considered for a number of future NASA missions. These missions will provide NASA scientists and explorers greater flexibility and the chance to gather more science than traditional single-spacecraft missions. These swarms of spacecraft are intended to operate for long periods of time without contact with the Earth. To do this, they must be highly autonomous, have autonomic properties, and utilize sophisticated artificial intelligence. The Autonomous Nano Technology Swarm (ANTS) mission is an example of one of the swarm-type missions NASA is considering. This mission will explore the asteroid belt using an insect colony analogy, cataloging the mass, density, morphology, and chemical composition of the asteroids, including any anomalous concentrations of specific minerals. Verifying such a system would be a huge task. This paper discusses ongoing work to develop a formal method for verifying swarm and autonomic systems.

  13. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and to expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  14. Well casing-based geophysical sensor apparatus, system and method

    DOEpatents

    Daily, William D.

    2010-03-09

    A geophysical sensor apparatus, system, and method for use in, for example, oil well operations, and in particular using a network of sensors emplaced along and outside oil well casings to monitor critical parameters in an oil reservoir and provide geophysical data remote from the wells. Centralizers are affixed to the well casings and the sensors are located in the protective spheres afforded by the centralizers to keep from being damaged during casing emplacement. In this manner, geophysical data may be detected of a sub-surface volume, e.g. an oil reservoir, and transmitted for analysis. Preferably, data from multiple sensor types, such as ERT and seismic data are combined to provide real time knowledge of the reservoir and processes such as primary and secondary oil recovery.

  15. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information; it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it gives a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165
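
    The square-root scaling is easy to verify with a classical state-vector simulation of Grover search; this sketch is purely illustrative and unrelated to the palmprint data.

    ```python
    import numpy as np

    # State-vector simulation of Grover search over N = 2^n items, showing
    # the O(sqrt(N)) iteration count the matching stage relies on.
    n, target = 10, 123
    N = 2 ** n
    state = np.full(N, 1 / np.sqrt(N))          # uniform superposition

    iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
    for _ in range(iterations):
        state[target] *= -1                     # oracle: phase-flip the target
        state = 2 * state.mean() - state        # diffusion: inversion about the mean

    print(f"N = {N}, Grover iterations = {iterations} (~sqrt(N))")
    print(f"probability of measuring the target: {state[target] ** 2:.4f}")
    ```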

  16. Property Exchange Method for Designing Computer-Based Learning Game

    NASA Astrophysics Data System (ADS)

    Umetsu, Takanobu; Hirashima, Tsukasa

    Motivation is one of the most important factors in learning. Many researchers of learning environments therefore pay special attention to learning games as a remarkable approach to realizing highly motivated learning. However, making a learning game is not an easy task. Although there are several investigations into design methods for learning games, most of them only propose guidelines for the design or characteristics that learning games should have. Developers of learning games are therefore required to have sufficient knowledge of and experience with learning and games in order to understand the guidelines or to deal with the characteristics. It is thus very difficult for teachers to obtain learning games suited to their learning issues.

  17. Nanotunneling Junction-based Hyperspectal Polarimetric Photodetector and Detection Method

    NASA Technical Reports Server (NTRS)

    Son, Kyung-ah (Inventor); Moon, Jeongsun J. (Inventor); Chattopadhyay, Goutam (Inventor); Liao, Anna (Inventor); Ting, David (Inventor)

    2009-01-01

    A photodetector, detector array, and method of operation thereof in which nanojunctions are formed by crossing layers of nanowires. The crossing nanowires are separated by a few nm thick electrical barrier layer which allows tunneling. Each nanojunction is coupled to a slot antenna for efficient and frequency-selective coupling to photo signals. The nanojunctions formed at the intersection of the crossing wires defines a vertical tunneling diode that rectifies the AC signal from a coupled antenna and generates a DC signal suitable for reforming a video image. The nanojunction sensor allows multi/hyper spectral imaging of radiation within a spectral band ranging from terahertz to visible light, and including infrared (IR) radiation. This new detection approach also offers unprecedented speed, sensitivity and fidelity at room temperature.

  18. Physically Based Landslide Hazard Model: Method and Issues

    NASA Astrophysics Data System (ADS)

    Dhakal, A. S.; Sidle, R. C.

    An Integrated Dynamic Slope Stability Model (IDSSM) that integrates GIS with topographic, distributed hydrologic, and vegetation models to assess slope stability at a basin scale is described, addressing the issues related to prediction of landslide hazards with physically based landslide models. Data limitations, which can be regarded as one of the major problems, range from a lack of spatially distributed data on soil depth, soil physical and engineering properties, and vegetation root strength to the need for better digital elevation models to characterize topography. Often, point data and their averages, such as for soil depth and soil cohesion, need to be used as the representative values at the element scale. These factors result in a great degree of uncertainty in the simulation results. Since the factors related to landsliding differ in their importance in causing landslides, the introduced uncertainties may not be identical for all variables. The sensitivities of different parameters associated with landsliding were examined using the IDSSM. Since many variables are important for landslide occurrence, the effects of most of the soil and vegetation parameters were evaluated. To test for parameter uncertainty, one variable was altered while the others were held constant, and cumulative areas (percentage of the drainage area) with a safety factor less than certain values were compared. The sensitivity analysis suggests that the safety factor is most sensitive to changes in soil cohesion, soil depth, and internal friction angle. Changes in hydraulic conductivity greatly influenced the ground water table and thus slope stability. Parameters such as soil unit weight and tree surcharge were less sensitive for landsliding. Considering the possible fine spatial variation of soil depth and hydraulic conductivity in a forest soil, these two factors seem to produce large uncertainties. In forest soil, the presence of macropores and preferential flow presents

  19. DESI MS based screening method for phthalates in consumer goods.

    PubMed

    Schulz, Sabine; Wagner, Sebastian; Gerbig, Stefanie; Wächter, Herbert; Sielaff, Detlef; Bohn, Dieter; Spengler, Bernhard

    2015-05-21

    Phthalates are used as plasticizers in many everyday items, but some of them are known as hormone disruptors, being especially harmful during childhood. The European Union therefore restricted their application in children's toys and certain food packaging to 0.1%w. Due to the ever-increasing number of plastic-containing consumer goods, rapid screening methods are needed to ensure and improve consumer safety in the future. In this study we evaluated the performance of desorption electrospray ionization (DESI) mass spectrometry (MS) for rapid quantitative screening of phthalates in toys. DESI allowed for direct surface sampling of the toys under atmospheric conditions with minimal sample preparation, while the high-performance mass spectrometer used provided high sensitivity and reliable identification via accurate mass measurements, high mass resolving power and MS/MS capabilities. External calibration curves for six banned phthalates (DBP, BBP, DEHP, DNOP, DINP and DIDP) were obtained from matrix-matched reference materials. Coefficients of determination were greater than 0.985, LOQs ranged from 0.02%w (DIDP) to 2.26%w (DINP) and the relative standard deviation of the calibration curve slope was less than 7.8% for intraday and 11.4% for interday comparison. The phthalate contents of eleven authentic samples were determined in a proof-of-concept approach using DESI MS and the results were compared to those from confirmatory methods. The phthalate content was correctly assigned with relative deviations ranging from -20% to +10% for the majority of samples. Given further optimization and automation, DESI MS is likely to become a useful tool for rapid and accurate phthalate screening in the future. PMID:25827613

  1. Tetraethyl orthosilicate-based glass composition and method

    DOEpatents

    Wicks, G.G.; Livingston, R.R.; Baylor, L.C.; Whitaker, M.J.; O'Rourke, P.E.

    1997-06-10

    A tetraethyl orthosilicate-based, sol-gel glass composition with additives selected for various applications is described. The composition is made by mixing ethanol, water, and tetraethyl orthosilicate, adjusting the pH into the acid range, and aging the mixture at room temperature. The additives, such as an optical indicator, filler, or catalyst, are then added to the mixture to form the composition which can be applied to a substrate before curing. If the additive is an indicator, the light-absorbing characteristics of which vary upon contact with a particular analyte, the indicator can be applied to a lens, optical fiber, reagent strip, or flow cell for use in chemical analysis. Alternatively, an additive such as alumina particles is blended into the mixture to form a filler composition for patching cracks in metal, glass, or ceramic piping. 12 figs.

  2. Tetraethyl orthosilicate-based glass composition and method

    DOEpatents

    Wicks, George G.; Livingston, Ronald R.; Baylor, Lewis C.; Whitaker, Michael J.; O'Rourke, Patrick E.

    1997-01-01

    A tetraethyl orthosilicate-based, sol-gel glass composition with additives selected for various applications is described. The composition is made by mixing ethanol, water, and tetraethyl orthosilicate, adjusting the pH into the acid range, and aging the mixture at room temperature. The additives, such as an optical indicator, filler, or catalyst, are then added to the mixture to form the composition which can be applied to a substrate before curing. If the additive is an indicator, the light-absorbing characteristics of which vary upon contact with a particular analyte, the indicator can be applied to a lens, optical fiber, reagent strip, or flow cell for use in chemical analysis. Alternatively, an additive such as alumina particles is blended into the mixture to form a filler composition for patching cracks in metal, glass, or ceramic piping.

  3. A T Matrix Method Based upon Scalar Basis Functions

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Kahnert, F. M.; Mishchenko, Michael I.

    2013-01-01

    A surface integral formulation is developed for the T matrix of a homogenous and isotropic particle of arbitrary shape, which employs scalar basis functions represented by the translation matrix elements of the vector spherical wave functions. The formulation begins with the volume integral equation for scattering by the particle, which is transformed so that the vector and dyadic components in the equation are replaced with associated dipole and multipole level scalar harmonic wave functions. The approach leads to a volume integral formulation for the T matrix, which can be extended, by use of Green's identities, to the surface integral formulation. The result is shown to be equivalent to the traditional surface integral formulas based on the VSWF basis.

  4. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection based on video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire properly, mainly because the scene may contain many objects whose colour is similar to fire. The temporal intensity variation of pixels is used to separate such objects from the fire; these variations are averaged over a series of several frames. The algorithm works robustly and was implemented as a computer program using the OpenCV library.
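
    A minimal sketch of the two ingredients, assuming an illustrative RGB colour rule and thresholds (the file name and parameter values below are hypothetical, not taken from the article):

    ```python
    import numpy as np
    import cv2

    def fire_mask(frames, r_thresh=180, var_thresh=20.0):
        """Combine a colour rule with temporal intensity variation.

        frames: list of equal-size BGR uint8 images (consecutive frames).
        """
        last = frames[-1].astype(np.int16)
        b, g, r = last[..., 0], last[..., 1], last[..., 2]
        colour = (r > r_thresh) & (r > g) & (g > b)      # classic RGB fire rule

        # Temporal filter: flame pixels flicker, static fire-coloured
        # objects do not.  Average per-pixel variation over the frames.
        grey = np.stack([cv2.cvtColor(f, cv2.COLOR_BGR2GRAY).astype(np.float32)
                         for f in frames])
        variation = np.abs(np.diff(grey, axis=0)).mean(axis=0)

        return colour & (variation > var_thresh)

    # Usage sketch: feed a sliding window of frames from a video capture.
    cap = cv2.VideoCapture("candidate.avi")   # hypothetical input file
    window = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        window.append(frame)
        if len(window) > 5:
            window.pop(0)
            mask = fire_mask(window)
            if mask.mean() > 0.001:
                print("possible fire region:", int(mask.sum()), "pixels")
    cap.release()
    ```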

  5. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    SciTech Connect

    Samet Y. Kadioglu

    2011-12-01

    We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge-averaged quantities, which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, the method of [17] is problematic when applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (in both space and time) for smooth flows and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. The results indicate that fourth-order accuracy in both space and time is achieved when the flow is smooth, and they also demonstrate the shock-capturing ability of the method.
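
    The time-integration idea can be sketched in isolation. Below is a minimal spectral deferred corrections integrator for a scalar ODE on uniform substeps with a forward-Euler predictor; it is not coupled to PPM, and the node choice and sweep count are illustrative assumptions.

    ```python
    import numpy as np

    def sdc_step(f, t0, u0, h, n_nodes=4, n_sweeps=3):
        """One SDC time step for u' = f(t, u) (scalar), uniform substeps.

        A forward-Euler predictor is refined by n_sweeps correction sweeps;
        each sweep uses the integral of the polynomial interpolating f.
        """
        tau = t0 + np.linspace(0.0, h, n_nodes)
        u = np.empty(n_nodes)
        u[0] = u0
        for m in range(n_nodes - 1):                       # predictor
            u[m + 1] = u[m] + (tau[m + 1] - tau[m]) * f(tau[m], u[m])
        for _ in range(n_sweeps):                          # correction sweeps
            fk = np.array([f(t, v) for t, v in zip(tau, u)])
            poly = np.polyint(np.polyfit(tau, fk, n_nodes - 1))
            unew = np.empty(n_nodes)
            unew[0] = u0
            for m in range(n_nodes - 1):
                dt = tau[m + 1] - tau[m]
                quad = np.polyval(poly, tau[m + 1]) - np.polyval(poly, tau[m])
                unew[m + 1] = unew[m] + dt * (f(tau[m], unew[m]) - fk[m]) + quad
            u = unew
        return u[-1]

    # Convergence check on u' = -u, u(0) = 1 (exact solution exp(-t)).
    for h in (0.2, 0.1, 0.05):
        u, t = 1.0, 0.0
        while t < 1.0 - 1e-12:
            u = sdc_step(lambda t_, u_: -u_, t, u, h)
            t += h
        print(f"h = {h:5.2f}  error = {abs(u - np.exp(-1.0)):.2e}")
    ```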

  6. Base flow separation: A comparison of analytical and mass balance methods

    NASA Astrophysics Data System (ADS)

    Lott, Darline A.; Stewart, Mark T.

    2016-04-01

    Base flow is the ground water contribution to stream flow. Many activities, such as water resource management, calibrating hydrological and climate models, and studies of basin hydrology, require good estimates of base flow. The base flow component of stream flow is usually determined by separating a stream hydrograph into two components, base flow and runoff. Analytical methods, mathematical functions or algorithms used to calculate base flow directly from discharge, are the most widely used base flow separation methods and are often used without calibration to basin or gage-specific parameters other than basin area. In this study, six analytical methods are compared to a mass balance method, the conductivity mass-balance (CMB) method. The base flow index (BFI) values for 35 stream gages are obtained from each of the seven methods with each gage having at least two consecutive years of specific conductance data and 30 years of continuous discharge data. BFI is cumulative base flow divided by cumulative total discharge over the period of record of analysis. The BFI value is dimensionless, and always varies from 0 to 1. Areas of basins used in this study range from 27 km2 to 68,117 km2. BFI was first determined for the uncalibrated analytical methods. The parameters of each analytical method were then calibrated to produce BFI values as close to the CMB derived BFI values as possible. One of the methods, the power function (aQb + cQ) method, is inherently calibrated and was not recalibrated. The uncalibrated analytical methods have an average correlation coefficient of 0.43 when compared to CMB-derived values, and an average correlation coefficient of 0.93 when calibrated with the CMB method. Once calibrated, the analytical methods can closely reproduce the base flow values of a mass balance method. Therefore, it is recommended that analytical methods be calibrated against tracer or mass balance methods.
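
    The CMB separation itself is a one-line mass balance: with conductance end members for base flow and runoff, Qb = Q (SC − SC_ro) / (SC_bf − SC_ro). A sketch with a hypothetical record follows; the end-member values and the daily series are invented for illustration.

    ```python
    import numpy as np

    def cmb_baseflow(Q, SC, sc_bf, sc_ro):
        """Conductivity mass-balance (CMB) base flow separation.

        Q     : stream discharge time series
        SC    : specific conductance time series
        sc_bf : conductance of pure base flow (ground water end member)
        sc_ro : conductance of pure runoff (low end member)
        """
        Qb = Q * (SC - sc_ro) / (sc_bf - sc_ro)
        return np.clip(Qb, 0.0, Q)      # base flow cannot exceed total flow

    # Hypothetical daily record: a runoff event diluting a ground-water-fed
    # stream (discharge rises while conductance drops).
    Q = np.array([10.0, 12, 40, 80, 55, 30, 18, 12, 11, 10])            # m^3/s
    SC = np.array([400.0, 390, 250, 180, 230, 300, 350, 380, 390, 395])  # uS/cm

    Qb = cmb_baseflow(Q, SC, sc_bf=400.0, sc_ro=50.0)
    BFI = Qb.sum() / Q.sum()            # base flow index over the record
    print("base flow:", np.round(Qb, 1))
    print(f"BFI = {BFI:.2f}")
    ```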

  7. Alternative processing methods for tungsten-base composite materials

    SciTech Connect

    Ohriner, E.K.; Sikka, V.K.

    1995-12-31

    Tungsten composite materials contain large amounts of tungsten distributed in a continuous matrix phase. Current commercial materials include the tungsten-nickel-iron with cobalt replacing some or all of the iron, and also tungsten-copper materials. Typically, these are fabricated by liquid-phase sintering of blended powders. Liquid-phase sintering offers the advantages of low processing costs, established technology, and generally attractive mechanical properties. However, liquid-phase sintering is restricted to a very limited number of matrix alloying elements and a limited range of tungsten and alloying compositions. In the past few years, there has been interest in a wider range of matrix materials that offer the potential for superior composite properties. These must be processed by solid-state processes and at sufficiently low temperatures to avoid undesired reactions between the tungsten and the matrix phase. These processes, in order of decreasing process temperature requirements, include hot-isostatic pressing (HIPing), hot extrusion, and dynamic compaction. The HIPing and hot extrusion processes have also been used to improve mechanical properties of conventional liquid-phase-sintered materials. Results of laboratory-scale investigations of solid-state consolidation of a variety of matrix materials, including titanium, hafnium, nickel aluminide, and steels are reviewed. The potential advantages and disadvantages of each of the possible alternative consolidation processes are identified. Postconsolidation processing to control microstructure and macrostructure is discussed, including novel methods of controlling microstructure alignment.

  8. Alternative processing methods for tungsten-base composite materials

    SciTech Connect

    Ohriner, E.K.; Sikka, V.K.

    1996-06-01

    Tungsten composite materials contain large amounts of tungsten distributed in a continuous matrix phase. Current commercial materials include the tungsten-nickel-iron with cobalt replacing some or all of the iron, and also tungsten-copper materials. Typically, these are fabricated by liquid-phase sintering of blended powders. Liquid-phase sintering offers the advantages of low processing costs, established technology, and generally attractive mechanical properties. However, liquid-phase sintering is restricted to a very limited number of matrix alloying elements and a limited range of tungsten and alloying compositions. In the past few years, there has been interest in a wider range of matrix materials that offer the potential for superior composite properties. These must be processed by solid-state processes and at sufficiently low temperatures to avoid undesired reactions between the tungsten and the matrix phase. These processes, in order of decreasing process temperature requirements, include hot isostatic pressing (HIPing), hot extrusion, and dynamic compaction. The HIPing and hot extrusion processes have also been used to improve mechanical properties of conventional liquid-phase-sintered materials. The results of laboratory-scale investigations of solid-state consolidation of a variety of matrix materials, including titanium, hafnium, nickel aluminide, and steels are reviewed. The potential advantages and disadvantages of each of the possible alternative consolidation processes are identified. Postconsolidation processing to control microstructure and macrostructure is discussed, including novel methods of controlling microstructure alignment.

  9. New methods to titrate EIAV-based lentiviral vectors.

    PubMed

    Martin-Rendon, Enca; White, Linda J; Olsen, Anna; Mitrophanous, Kyriacos A; Mazarakis, Nicholas D

    2002-05-01

    Ideally, gene transfer vectors used in clinical protocols should only express the gene of interest. So far most vectors have contained marker genes to aid their titration. We have used quantitative real-time PCR to titrate equine infectious anemia virus (EIAV) vectors for gene therapy applications. Viral RNA was isolated from vector preparations and analyzed in a one-step RT-PCR reaction in which reverse transcription and amplification were combined in one tube. The PCR assay of vector stocks was quantitative and linear over four orders of magnitude. In tandem, the integration efficiency of these vectors has also been determined by real-time PCR, measuring the number of vector genomes in the target cells. We have found that these methods permit reliable and sensitive titration of lentiviral vectors independent from the expression of a transgene. They also allow us to determine the integration efficiency of different vector genomes. This technology has proved very useful, especially in the absence of marker genes and where vectors express multiple genes.

  10. Control method for mixed refrigerant based natural gas liquefier

    DOEpatents

    Kountz, Kenneth J.; Bishop, Patrick M.

    2003-01-01

    In a natural gas liquefaction system having a refrigerant storage circuit, a refrigerant circulation circuit in fluid communication with the refrigerant storage circuit, and a natural gas liquefaction circuit in thermal communication with the refrigerant circulation circuit, a method for liquefaction of natural gas in which pressure in the refrigerant circulation circuit is adjusted to below about 175 psig by exchange of refrigerant with the refrigerant storage circuit. A variable speed motor is started whereby operation of a compressor is initiated. The compressor is operated at full discharge capacity. Operation of an expansion valve is initiated whereby suction pressure at the suction pressure port of the compressor is maintained below about 30 psig and discharge pressure at the discharge pressure port of the compressor is maintained below about 350 psig. Refrigerant vapor is introduced from the refrigerant holding tank into the refrigerant circulation circuit until the suction pressure is reduced to below about 15 psig, after which flow of the refrigerant vapor from the refrigerant holding tank is terminated. Natural gas is then introduced into a natural gas liquefier, resulting in liquefaction of the natural gas.

  11. An efficient liposome based method for antioxidants encapsulation.

    PubMed

    Paini, Marco; Daly, Sean Ryan; Aliakbarian, Bahar; Fathi, Ali; Tehrany, Elmira Arab; Perego, Patrizia; Dehghani, Fariba; Valtchev, Peter

    2015-12-01

    Apigenin is an antioxidant that has shown preventive activity against different cancers and cardiovascular disorders. In this study, we encapsulate apigenin in liposomes to tackle the issue of its poor bioavailability and low stability. Apigenin-loaded liposomes are fabricated with food-grade rapeseed lecithin in an aqueous medium in the absence of any organic solvent. The liposome particle characteristics, such as particle size and polydispersity, are optimised by tuning ultrasonic processing parameters. In addition, to measure the liposome encapsulation efficiency accurately, we establish a unique high-performance liquid chromatography technique in which an alkaline buffer mobile phase is used to prevent apigenin precipitation in the column; salt is added to separate lipid particles from the aqueous phase. Our results demonstrate that the apigenin encapsulation efficiency is nearly 98%, which is remarkably higher than any other reported value for encapsulation of this compound. In addition, the average particle size of these liposomes is 158.9 ± 6.1 nm, which is suitable for the formulation of many food products, such as fortified fruit juice. The encapsulation method developed in this study therefore has high potential for the production of innovative functional foods or nutraceutical products. PMID:26590900

  12. Nuclear-based methods for the study of selenium

    SciTech Connect

    Spyrou, N.M.; Akanle, O.A.; Dhani, A. )

    1988-01-01

    The essentiality of selenium to the human being and in particular its deficiency state, associated with prolonged inadequate dietary intake, have received considerable attention. In addition, the possible relationship between selenium and cancer and the claim that selenium may possess cancer-prevention properties have focused research effort. It has been observed in a number of studies on laboratory animals that selenium supplementation protects the animals against carcinogen-induced neoplastic growth in various organ sites, reduces the incidence of spontaneous mammary tumors, and suppresses the growth of transplanted tumor cells. In these research programs on the relationship between trace element levels and senile dementia and depression and the elemental changes in blood associated with selenium supplementation in a normal group of volunteers, it became obvious that in addition to establishing normal levels of elements in the population of interest, there was a more fundamental requirement for methods to be developed that would allow the study of the distribution of selenium in the body and its binding sites. The authors propose emission tomography and perturbed angular correlation as techniques worth exploring.

  13. Colorize magnetic nanoparticles using a search coil based testing method

    NASA Astrophysics Data System (ADS)

    Wu, Kai; Wang, Yi; Feng, Yinglong; Yu, Lina; Wang, Jian-Ping

    2015-04-01

    Different magnetic nanoparticles (MNPs) possess unique spectral responses to an AC magnetic field, and this specific magnetic property can be used as a "color" in detection. In this paper, a detection scheme for magnetic nanoparticle size distribution is demonstrated using an MNP and search-coil integrated detection system. A low-frequency (50 Hz) sinusoidal magnetic field is applied to drive the MNPs into the saturated region. Then a high-frequency sinusoidal field sweeping from 5 kHz to 35 kHz is applied in order to generate mixing-frequency signals, which are collected by a pair of balanced search coils. These harmonics are highly specific to the nonlinearity of the magnetization curve of the MNPs. Previous work focused on using the amplitude and phase of the 3rd harmonic or the amplitude ratio of the 5th harmonic to the 3rd harmonic. Here we demonstrate the use of the amplitude and phase information of both the 3rd and 5th harmonics as magnetic "colors" of MNPs. This method is found to effectively reduce the magnetic colorization error.
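
    The harmonic readout can be sketched as follows: a saturating magnetization curve (here a Langevin function, a common modeling assumption) driven by a 50 Hz tone produces odd harmonics whose amplitude and phase are extracted with single-bin DFTs. The real system mixes the low-frequency drive with a swept high-frequency field; this single-tone version only illustrates the 3rd/5th-harmonic "color" readout.

    import numpy as np

    fs, f0, T = 200_000.0, 50.0, 0.2      # sample rate, drive frequency, duration
    t = np.arange(0, T, 1 / fs)
    drive = np.sin(2 * np.pi * f0 * t)

    def langevin(x):
        """Generic saturating magnetization curve (modeling assumption)."""
        x = np.where(np.abs(x) < 1e-8, 1e-8, x)
        return 1.0 / np.tanh(x) - 1.0 / x

    signal = langevin(4.0 * drive)        # nonlinear response of the MNPs

    def harmonic(sig, n):
        """Amplitude and phase of the n-th harmonic via a single-bin DFT."""
        c = 2.0 * np.mean(sig * np.exp(-2j * np.pi * n * f0 * t))
        return np.abs(c), np.angle(c)

    a3, p3 = harmonic(signal, 3)
    a5, p5 = harmonic(signal, 5)
    print(f"3rd: A={a3:.4f} phase={p3:.2f} rad; 5th: A={a5:.4f} phase={p5:.2f} rad")
    print(f"amplitude ratio A5/A3 = {a5 / a3:.3f}")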

  14. A 2D/1D coupling neutron transport method based on the matrix MOC and NEM methods

    SciTech Connect

    Zhang, H.; Zheng, Y.; Wu, H.; Cao, L.

    2013-07-01

    A new 2D/1D coupling method based on the matrix MOC method (MMOC) and the nodal expansion method (NEM) is proposed for solving the three-dimensional heterogeneous neutron transport problem. The MMOC method, used for the radial two-dimensional calculation, constructs a response matrix between source and flux with only one sweep and then solves the linear system using the restarted GMRES algorithm, instead of the traditional trajectory-sweeping process, during the within-group iteration for the angular flux update. Long characteristics are generated by customizing the commercial software AutoCAD. A one-dimensional diffusion calculation is carried out in the axial direction by employing the NEM method. The 2D and 1D solutions are coupled through the transverse leakage terms. The 3D CMFD method is used to ensure the global neutron balance and to reconcile the different convergence properties of the radial and axial solvers. A computational code is developed based on these theories. Two benchmarks are calculated to verify the coupling method and the code. The numerical results agree well with references, which indicates that the new method is capable of solving the 3D heterogeneous neutron transport problem directly. (authors)
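
    The solver detail, replacing repeated trajectory sweeps with a restarted GMRES solve of the within-group linear system, can be illustrated generically. The sparse matrix below is a random, well-conditioned stand-in, not an actual MMOC response matrix.

    import numpy as np
    from scipy.sparse import identity, random as sprandom
    from scipy.sparse.linalg import gmres

    rng = np.random.default_rng(0)
    n = 500

    # Stand-in system (I - S) phi = q, with S a weak "scattering-like" operator.
    S = sprandom(n, n, density=0.02, random_state=0) * 0.05
    A = identity(n) - S
    q = rng.random(n)                     # stand-in for the fixed source term

    # Restarted GMRES plays the role of the within-group iteration.
    phi, info = gmres(A, q, restart=30, maxiter=1000)
    print("converged" if info == 0 else f"info={info}",
          "| residual:", np.linalg.norm(A @ phi - q))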

  15. A molecular beacon-based method for screening cervical cancer.

    PubMed

    Han, Suxia; Li, Ling; Jia, Xi; Ou, Wei; Ma, Jinlu; Wang, Hao; Zhao, Jing; Zhu, Qing

    2012-11-01

    The aim of this study is to develop a new screening method, molecular beacon (MB) imaging, for the detection of cervical cancer, and to determine its potential clinical applications by examining the sensitivity and specificity of target-specific MBs. Two target-specific molecular beacons were designed and synthesized for survivin and HPV16E6 mRNA. The two designed MBs and a random control MB were used to probe cervical cancer cell lines and a normal cell line. RT-PCR and western blotting targeting survivin and HPV16E6 were performed for verification. Furthermore, the sensitivity and specificity of the survivin and HPV16E6 mRNA MBs were examined in smears from 125 clinical cervical patients. The survivin and HPV16E6 mRNA MBs generated a strong fluorescence signal in the cervical cancer cell lines but not in the normal cell line, while the random control MB did not generate any signal in either cell line. The fluorescence intensity correlated well with the gene expression levels in the cells determined by reverse transcription-PCR and western blot analysis. The clinical sensitivity and specificity of survivin MB-FITC were 72.5% and 77%, while those of HPV16E6 MB-Cy3 were 96.1% and 71.6%, respectively. A parallel test of the two target MBs increased the sensitivity to 98% with a specificity of 70.2%. The survivin and HPV16E6 mRNA MBs showed good reliability and sensitivity. They have great potential for clinical use in cervical cancer screening.

  16. Methods of noninvasive electrophysiological heart examination basing on solution of inverse problem of electrocardiography

    NASA Astrophysics Data System (ADS)

    Grigoriev, M.; Babich, L.

    2015-09-01

    The article reviews the main noninvasive methods of examining the electrical activity of the heart, the theoretical basis for solving the inverse problem of electrocardiography, the application of different methods of heart examination in clinical practice, and the achievements accumulated in this sphere in global experience.

  17. Wurfelspiel-based training data methods for ATR

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-09-01

    A data object is constructed from a P by M Wurfelspiel matrix W by choosing an entry from each column to construct a sequence A0A1...AM-1. Each of the P^M possibilities is designed to correspond to the same category according to some chosen measure. This matrix could encode many types of data. (1) Musical fragments, all of which evoke sadness; each column entry is a 4-beat sequence, with a chosen A0A1A2 thus 16 beats long (W is P by 3). (2) Paintings, all of which evoke happiness; each column entry is a layer, and a given A0A1A2 is a painting constructed using these layers (W is P by 3). (3) Abstract feature vectors corresponding to action potentials evoked by a biological cell's exposure to a toxin. The action potential is divided into four relevant regions, and each column entry represents the feature vector of a region. A given choice of entries is then an abstraction of the excitable cell's output (W is P by 4). (4) Abstract feature vectors corresponding to an object such as a face or vehicle. The object is divided into four categories, each assigned an abstract feature vector, with the resulting concatenation an abstract representation of the object (W is P by 4). All of the examples above correspond to one particular measure (sad music, happy paintings, an introduced toxin, an object to recognize) and hence, when a Wurfelspiel matrix is constructed, relevant training information for recognition is encoded that can be used in many algorithms. The focus of this paper is on the application of these ideas to automatic target recognition (ATR). In addition, we discuss a larger biologically based model of temporal cortex polymodal sensor fusion which can use the feature vectors extracted from the ATR Wurfelspiel data.
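
    A sketch of the combinatorics: choosing one entry per column of a P by M matrix yields P^M sequences, all carrying the same label, which is what makes the matrix a compact encoding of training data. The fragments below are arbitrary placeholders.

    from itertools import product

    # Toy 3-by-4 Wurfelspiel matrix: each column holds 3 interchangeable fragments.
    W = [
        ["a0", "b0", "c0", "d0"],
        ["a1", "b1", "c1", "d1"],
        ["a2", "b2", "c2", "d2"],
    ]
    P, M = len(W), len(W[0])

    # One pick per column -> P**M sequences A0A1...A(M-1), all the same category.
    columns = [[W[p][m] for p in range(P)] for m in range(M)]
    sequences = ["".join(choice) for choice in product(*columns)]

    assert len(sequences) == P ** M       # 3**4 = 81 training exemplars
    print(len(sequences), sequences[:3])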

  18. Evaluation of the Ves-Matic Cube 200 erythrocyte sedimentation method: comparison with Westergren-based methods.

    PubMed

    Curvers, Joyce; Kooren, Jurgen; Laan, Maartje; van Lierop, Edwin; van de Kerkhof, Daan; Scharnhorst, Volkher; Herruer, Martien

    2010-10-01

    The erythrocyte sedimentation rate (ESR) is still a widely used parameter for acute phase inflammation. Recently, new methods based on direct undiluted measurement of ESR in a standard EDTA tube have been developed. We evaluated the analytic performance of one of these new methods, the Ves-Matic Cube 200 (Diesse Diagnostica Senese, Siena, Italy), and compared it with several established Westergren-based diluted methods. The Ves-Matic Cube 200 showed a poor correlation (r = 0.83) with the International Council for Standardization in Haematology Westergren reference method, mainly caused by a considerable negative bias at low ESR levels. Moreover, a random bias was found at higher ESR levels that correlated with hematocrit levels, suggesting a differential influence of packed cell volume on the Ves-Matic Cube 200 results compared with Westergren results. We conclude that the Ves-Matic Cube 200 method is not interchangeable with Westergren-based diluted methods and generates ESR results that are too deviant to be clinically acceptable.

  19. [Spectra Classification Based on Local Mean-Based K-Nearest Centroid Neighbor Method].

    PubMed

    Tu, Liang-ping; Wei, Hui-ming; Wang, Zhi-heng; Wei, Peng; Luo, A-li; Zhao, Yong-heng

    2015-04-01

    In the present paper, a local mean-based K-nearest centroid neighbor (LMKNCN) technique is used for the classification of stars, galaxies and quasars (QSOs). The main idea of LMKNCN is that it depends on the principle of the nearest centroid neighborhood (NCN): it selects K centroid neighbors of each class as training samples and then assigns a query pattern to the class whose local centroid mean vector lies closest to the query. In this paper, KNN, KNCN and LMKNCN were experimentally compared on three kinds of spectral data drawn from SDSS DR8. Among these three methods, the rate of correct classification of the LMKNCN algorithm is higher than, or comparable to, that of the other two algorithms, and its average rate of correct classification is the highest, especially for the identification of quasars. The experiments show that the results of this work are of significance for the spectral classification of galaxies, stars and quasars.
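
    A minimal sketch of the local-mean idea, using plain k-nearest-neighbor selection within each class (the paper's LMKNCN additionally applies the nearest centroid neighborhood rule when choosing the K neighbors):

    import numpy as np

    def local_mean_predict(X_train, y_train, x, k=3):
        """Assign x to the class whose local mean of its k nearest
        same-class neighbors lies closest to x."""
        best_label, best_dist = None, np.inf
        for label in np.unique(y_train):
            Xc = X_train[y_train == label]
            d = np.linalg.norm(Xc - x, axis=1)
            local_mean = Xc[np.argsort(d)[:k]].mean(axis=0)
            dist = np.linalg.norm(local_mean - x)
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    # Toy 2-D demo standing in for spectral feature vectors.
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    print(local_mean_predict(X, y, np.array([3.5, 3.9])))   # -> 1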

  1. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to separately fit models to each species and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods were also discussed for evaluating the sensitivity of the conclusions because of outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
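
    A sketch of the distance-then-cluster pipeline, with random stand-ins for the fitted coefficient vectors and a pooled covariance as the metric (the paper derives the generalized Mahalanobis form from the fitted models themselves):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    n_species, n_coef = 8, 4
    B = rng.normal(size=(n_species, n_coef))   # stand-in coefficient vectors
    S = np.cov(B, rowvar=False) + 0.1 * np.eye(n_coef)
    S_inv = np.linalg.inv(S)

    # Pairwise generalized Mahalanobis distances between coefficient vectors.
    D = np.zeros((n_species, n_species))
    for i in range(n_species):
        for j in range(i + 1, n_species):
            d = B[i] - B[j]
            D[i, j] = D[j, i] = np.sqrt(d @ S_inv @ d)

    # Group species from the distance matrix; cut the dendrogram into 3 groups.
    Z = linkage(squareform(D), method="average")
    print(fcluster(Z, t=3, criterion="maxclust"))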

  2. A novel conformational B-cell epitope prediction method based on mimotope and patch analysis.

    PubMed

    Sun, Pingping; Qi, Jialiang; Zhao, Yizhu; Huang, Yanxin; Yang, Guifu; Ma, Zhiqiang; Li, Yuxin

    2016-04-01

    A B-cell epitope is a group of residues on the surface of an antigen that stimulates humoral immune responses. Identifying B-cell epitopes is important for effective vaccine design. Predicting epitopes by experimental methods is expensive in terms of time, cost and effort; therefore, computational methods that have low cost and high speed are widely used to predict B-cell epitopes. Recently, epitope prediction based on random peptide library screening has been viewed as a promising approach. Some novel software and web-based servers have been proposed that have succeeded in some test cases. Herein, we propose a novel epitope prediction method based on amino acid pairs and patch analysis. The method first divides antigen surfaces into overlapping patches based on both radius (R) and number (N), and then predicts epitopes based on Amino Acid Pairs (AAPs) from mimotopes and the surface patch. The proposed method yields a mean sensitivity of 0.53, specificity of 0.77, ACC of 0.75 and F-measure of 0.45 for 39 test cases. Compared with mimotope-based methods, patch-based methods and two other prediction methods, the sensitivity of the new method offers a certain improvement. Our findings demonstrate that the proposed method is effective for patch and AAP analysis and allows conformational B-cell epitope prediction. PMID:26804644

  3. Locating the Optic Nerve in Retinal Images: Comparing Model-Based and Bayesian Decision Methods

    SciTech Connect

    Karnowski, Thomas Paul; Tobin Jr, Kenneth William; Muthusamy Govindasamy, Vijaya Priya; Chaum, Edward

    2006-01-01

    In this work we compare two methods for automatic optic nerve (ON) localization in retinal imagery. The first method uses a Bayesian decision theory discriminator based on four spatial features of the retina imagery. The second method uses a principal component-based reconstruction to model the ON. We report on an improvement to the model-based technique by incorporating linear discriminant analysis and Bayesian decision theory methods. We explore a method to combine both techniques to produce a composite technique with high accuracy and rapid throughput. Results are shown for a data set of 395 images with 2-fold validation testing.

  4. Comparing internet-based and venue-based methods to sample MSM in the San Francisco Bay Area.

    PubMed

    Raymond, H Fisher; Rebchook, Greg; Curotto, Alberto; Vaudrey, Jason; Amsden, Matthew; Levine, Deb; McFarland, Willi

    2010-02-01

    Methods of collecting behavioral surveillance data, including Web-based methods, have recently been explored in the United States. Questions have arisen as to what extent Internet recruitment methods yield samples of MSM comparable to those obtained using venue-based recruitment methods. We compare three recruitment methods among MSM with respect to demographics and risk behaviors: one sample was obtained using time-location sampling at venues in San Francisco, one using a venue-based-like approach on the Internet, and one using direct-marketing advertisements to recruit participants. The physical venue approach was more successful in completing interviews with approached men than either Internet approach. Respondents recruited via the three methods reported slight differences in risk behavior. Direct-marketing Internet recruitment can obtain large samples of MSM in a short time. PMID:19160034

  5. Comparison of a silver nanoparticle-based method and the modified spectrophotometric methods for assessing antioxidant capacity of rapeseed varieties.

    PubMed

    Szydłowska-Czerniak, Aleksandra; Tułodziecka, Agnieszka

    2013-12-01

    The antioxidant capacity of 15 rapeseed varieties was determined by the proposed silver nanoparticle-based (AgNP) method and three modified assays: ferric reducing antioxidant power (FRAP), 2,2'-diphenyl-1-picrylhydrazyl (DPPH) and Folin-Ciocalteu reducing capacity (FC). The average antioxidant capacities of the studied rapeseed cultivars ranged between 5261-9462, 3708-7112, 18864-31245 and 5816-9937 μmol sinapic acid (SA)/100 g for the AgNP, FRAP, DPPH and FC methods, respectively. There are significant, positive correlations between the antioxidant capacities of the studied rapeseed cultivars determined by the four analytical methods (r=0.5971-0.9149, p<0.05). The comparable precision of the proposed AgNP method (RSD=1.4-4.4%) and the modified FRAP, DPPH and FC methods (RSD=1.0-4.4%, 0.7-2.1% and 0.8-3.6%, respectively) demonstrates the benefit of the AgNP method in the routine analysis of the antioxidant capacity of rapeseed cultivars. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used to discriminate the quality of the studied rapeseed varieties based on their antioxidant potential determined by the different analytical methods. Three main groups were identified by HCA, while the classification and characterisation of rapeseed varieties within each of these groups were obtained from PCA. The chemometric analyses demonstrated that rapeseed variety S13 had the highest antioxidant capacity; thus this cultivar should be considered the richest source of natural antioxidants.
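
    A sketch of the PCA/HCA discrimination step on a small made-up table of assay values (rows are varieties, columns the four assays; the numbers are illustrative, not the paper's data):

    import numpy as np
    from scipy.cluster.hierarchy import fcluster, linkage
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Columns: AgNP, FRAP, DPPH, FC capacities in umol SA/100 g (made up).
    X = np.array([
        [9462, 7112, 31245, 9937],    # a high-antioxidant variety
        [5261, 3708, 18864, 5816],
        [6050, 4200, 21000, 6400],
        [8900, 6800, 29800, 9500],
        [5600, 3900, 19500, 6000],
    ])

    Xs = StandardScaler().fit_transform(X)   # assays live on different scales
    scores = PCA(n_components=2).fit_transform(Xs)
    groups = fcluster(linkage(Xs, method="ward"), t=3, criterion="maxclust")
    for s, g in zip(scores, groups):
        print(f"PC1={s[0]:+.2f}  PC2={s[1]:+.2f}  group={g}")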

  6. a Range Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false-colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facade by a height map in grayscale. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier which is well known in computer graphics: the Displacement modifier allows one to simulate on a planar surface the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the similar bump map, the displacement modifier does not merely simulate the effect; it actually deforms the planar surface. In this way the 3D model can be used not only in a static representation but also in dynamic animation or interactive applications. The setting of the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the facade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as building facades in city models or large-scale representations. It can also be used to represent particular effects, such as the deformation of walls, in a fully 3D way.
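
    A sketch of the rasterization step, turning signed point-to-plane distances into an 8-bit grayscale height map at a chosen pixel size, using a synthetic point cloud:

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy facade points: x, y on the average plane (m), z = signed offset (m).
    pts = np.column_stack([rng.uniform(0, 10, 5000),
                           rng.uniform(0, 6, 5000),
                           rng.normal(0, 0.05, 5000)])

    pixel_size = 0.05                  # metres per pixel: sets the map scale
    nx, ny = int(10 / pixel_size), int(6 / pixel_size)

    # Average the point-to-plane distances falling in each pixel.
    ix = np.clip((pts[:, 0] / pixel_size).astype(int), 0, nx - 1)
    iy = np.clip((pts[:, 1] / pixel_size).astype(int), 0, ny - 1)
    height = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(height, (iy, ix), pts[:, 2])
    np.add.at(count, (iy, ix), 1)
    height = np.divide(height, count, out=np.zeros_like(height), where=count > 0)

    # Normalize signed distances to 8-bit gray for the Displacement modifier.
    span = np.ptp(height) or 1.0
    gray = (255 * (height - height.min()) / span).astype(np.uint8)
    print(gray.shape, gray.min(), gray.max())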

  7. Parameter correction method for dual position-sensitive-detector-based unit.

    PubMed

    Mao, Shuai; Hu, Pengcheng; Ding, XueMei; Tan, JiuBin

    2016-05-20

    A dual position-sensitive-detector (PSD)-based unit can be used for angular measurements of a multi-degree-of-freedom measurement system and a laser interferometry-based sensing and tracking system. In order to ensure the precision of incident beam direction measurement for a PSD-based unit, model and autoreflection alignment methods for correction of PSD-based unit parameters are proposed. Experimental results demonstrate that the deviations between the angular measurements obtained using a dual PSD-based unit and an autocollimator varied by 70″, 20″, and 1″ for three runs of the autoreflection alignment method, respectively, whereas the model method deviations all varied by 1″ in the 1000″ measurement range for three runs. It is therefore concluded that the model method is more reliable than the autoreflection alignment method for ensuring the accuracy of a dual PSD-based unit. PMID:27411134

  9. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level independent variables were defined, teaching approach (contextual and non-contextual…

  10. WebMail versus WebApp: Comparing Problem-Based Learning Methods in a Business Research Methods Course

    ERIC Educational Resources Information Center

    Williams van Rooij, Shahron

    2007-01-01

    This study examined the impact of two Problem-Based Learning (PBL) approaches on knowledge transfer, problem-solving self-efficacy, and perceived learning gains among four intact classes of adult learners engaged in a group project in an online undergraduate business research methods course. With two of the classes using a text-only PBL workbook…

  11. Method for PE Pipes Fusion Jointing Based on TRIZ Contradictions Theory

    NASA Astrophysics Data System (ADS)

    Sun, Jianguang; Tan, Runhua; Gao, Jinyong; Wei, Zihui

    The core of the TRIZ theories is contradiction detection and solution. TRIZ provides various methods for contradiction solution, but these are not systematized. Combined with the conception of the technique system, this paper summarizes an integrated solution method for contradictions based on the TRIZ contradiction theory. According to the method, a flowchart of the integrated solution method for contradictions is given. As a case study, the method of fusion jointing PE pipes is analysed.

  12. Exploring Methods of Analysing Talk in Problem-Based Learning Tutorials

    ERIC Educational Resources Information Center

    Clouston, Teena J.

    2007-01-01

    This article explores the use of discourse analysis and conversation analysis as an evaluation tool in problem-based learning. The basic principles of the methods are discussed and their application in analysing talk in problem-based learning considered. Findings suggest that these methods could enable an understanding of how effective…

  13. System and method for integrating hazard-based decision making tools and processes

    DOEpatents

    Hodgin, C. Reed

    2012-03-20

    A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.

  14. Powder-based adsorbents having high adsorption capacities for recovering dissolved metals and methods thereof

    DOEpatents

    Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra

    2016-05-03

    A powder-based adsorbent and a related method of manufacture are provided. The powder-based adsorbent includes polymer powder with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the powder-based adsorbent includes irradiating polymer powder, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Powder-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.

  15. Foam-based adsorbents having high adsorption capacities for recovering dissolved metals and methods thereof

    DOEpatents

    Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra

    2015-06-02

    Foam-based adsorbents and a related method of manufacture are provided. The foam-based adsorbents include polymer foam with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the foam-based adsorbents includes irradiating polymer foam, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Foam-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.

  16. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed continuous-spectra learning matrix. The parametric method, by contrast, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition to demonstrating the utility of the proposed methods and comparing them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. The results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. PMID:26917856
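
    A sketch of the non-parametric route, assuming baselines can be learned from baseline-only training spectra: fit PCA to a learning matrix of smooth curves, project the measured spectrum onto that basis, and subtract. Real implementations typically also mask signal regions before projecting; that refinement is omitted here.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 500)

    # Learning matrix of continuous baseline-only spectra (synthetic quadratics).
    B = np.array([a + b * x + c * x**2
                  for a, b, c in rng.uniform(-1, 1, (200, 3))])
    pca = PCA(n_components=3).fit(B)      # sampled baseline basis vectors

    # Measured spectrum = narrow peaks + smooth baseline.
    peaks = (np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2)
             + 0.6 * np.exp(-0.5 * ((x - 0.7) / 0.02) ** 2))
    baseline = 0.4 + 0.8 * x - 0.5 * x**2
    spectrum = peaks + baseline

    # Estimate the baseline by projection onto the PCA basis, then subtract.
    est = pca.inverse_transform(pca.transform(spectrum[None, :]))[0]
    corrected = spectrum - est
    print("baseline RMSE:", np.sqrt(np.mean((est - baseline) ** 2)))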

  17. A shallow landslide analysis method consisting of contour line based method and slope stability model with critical slip surface

    NASA Astrophysics Data System (ADS)

    Tsutsumi, D.

    2015-12-01

    To mitigate sediment-related disasters triggered by rainfall events, it is necessary to predict landslide occurrence and subsequent debris flow behavior. Many landslide analysis methods have been developed and proposed by numerous researchers over several decades. Among them, distributed slope stability models simulating the temporal and spatial instability of local slopes are most essential for early warning or evacuation in areas at the lower part of hill-slopes. In the present study, a distributed, physically based landslide analysis method is developed, consisting of a contour-line-based method that subdivides a watershed area into stream tubes, and a slope stability analysis in which a critical slip surface is searched to identify the location and shape of the most unstable slip surface in each stream tube. A target watershed area is divided into stream tubes using GIS techniques, groundwater flow in each stream tube during a rainfall event is analyzed by a kinematic wave model, and slope stability for each stream tube is calculated by a simplified Janbu method searching for a critical slip surface using a dynamic programming method. Compared to previous methods that assume an infinite slope for the slope stability analysis, the proposed method has the advantage of simulating landslides more accurately in space and time, and of estimating the collapsed slope mass, which can be delivered to a debris flow simulation model as input data. We applied this method to a small watershed on Izu Oshima, Tokyo, Japan, where shallow and wide landslides triggered by heavy rainfall and subsequent debris flows struck Oshima Town in 2013. The figure shows the temporal and spatial change of the simulated groundwater level and landslide distribution. The simulated landslides correspond to the uppermost part of the actual landslide area, and the timing of the occurrence of the landslides agrees well with the actual landslides.

  18. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. Then a multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is defined following the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency is obtained with the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method in the centered case and the MF-DFS-based algorithms. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the gray-value fluctuation in nutrient-deficient areas is much more severe than that in non-deficient areas.

  19. A method for data base management and analysis for wind tunnel data

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1987-01-01

    To respond to the need for improved data base management and analysis capabilities for wind-tunnel data at the Langley 16-Foot Transonic Tunnel, research was conducted into current methods of managing wind-tunnel data and a method was developed as a solution to this need. This paper describes the development of the data base management and analysis method for wind-tunnel data. The design and implementation of the software system are discussed and examples of its use are shown.

  20. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor

    PubMed Central

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using Monte Carlo (MC) method and the energy distributions of sunlight within the different layers of human skin have been achieved and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieve a low-cost, convenient and safe method for recharging implantable biosensors. PMID:27626422
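
    A minimal 1-D Monte Carlo sketch of the photon-transport idea, depositing photon weight layer by layer; photons crossing the outer boundaries are simply lost. The layer thicknesses and optical coefficients are invented for illustration and are not the tissue parameters used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    boundaries = [0.0, 0.01, 0.2, 0.4]        # epidermis, dermis, subcutis (cm)
    mu_a = np.array([4.0, 2.0, 1.0])          # absorption coefficients (1/cm)
    mu_s = np.array([40.0, 20.0, 15.0])       # scattering coefficients (1/cm)
    mu_t = mu_a + mu_s
    absorbed = np.zeros(3)
    n_photons = 5000

    def layer(z):
        for i in range(3):
            if boundaries[i] <= z < boundaries[i + 1]:
                return i
        return None                           # photon has left the tissue

    for _ in range(n_photons):                # photons at normal incidence
        z, direction, weight = 0.0, 1.0, 1.0
        while weight > 1e-4:
            i = layer(z)
            if i is None:
                break
            z += direction * rng.exponential(1.0 / mu_t[i])   # free path
            j = layer(z)
            if j is None:
                break                         # transmitted or backscattered out
            absorbed[j] += weight * mu_a[j] / mu_t[j]         # deposit energy
            weight *= mu_s[j] / mu_t[j]
            direction = rng.choice([-1.0, 1.0])               # 1-D scattering
    print("fraction absorbed per layer:", absorbed / n_photons)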

  1. [A method for redshift determination of quasars based on cross correlation].

    PubMed

    Liu, Rong; Duan, Fu-qing; Luo, A-li

    2005-07-01

    This paper presents a novel method for redshift determination of quasars. Firstly, a group of candidate redshifts is determined using the emission-line information extracted from the observed spectrum. Secondly, the template is redshifted according to each candidate, and the correlation between the observed spectrum and the redshifted template is measured. Finally, the redshift candidate corresponding to the highest correlation is chosen as the redshift. Compared with existing methods based on spectral line matching, the proposed method has a lower dependence on the quality of spectral line extraction. Experiments show that this method is robust and superior to methods based on spectral line matching.
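
    A sketch of the shift-and-correlate step, assuming a log-wavelength grid (on which a redshift is a rigid shift) and a toy emission-line template:

    import numpy as np

    loglam = np.linspace(3.0, 4.0, 6000)     # log10 wavelength, 1000-10000 A
    step = loglam[1] - loglam[0]

    def line_spectrum(z, rest_lines=(1216.0, 1549.0, 2798.0)):
        """Gaussian emission lines (Ly-a, C IV, Mg II) redshifted by z."""
        f = np.zeros_like(loglam)
        for lam in rest_lines:
            f += np.exp(-0.5 * ((loglam - np.log10(lam * (1 + z))) / 0.001) ** 2)
        return f

    template = line_spectrum(0.0)            # rest-frame template
    rng = np.random.default_rng(0)
    observed = line_spectrum(2.13) + rng.normal(0, 0.05, loglam.size)

    candidates = np.arange(0.0, 3.0, 0.001)  # e.g. proposed from emission lines
    corr = [np.dot(observed,
                   np.roll(template, int(round(np.log10(1 + z) / step))))
            for z in candidates]
    print("best z =", candidates[int(np.argmax(corr))])   # ~2.13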

  2. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor.

    PubMed

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, the models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using Monte Carlo (MC) method and the energy distributions of sunlight within the different layers of human skin have been achieved and discussed. Finally, the simulation results are verified experimentally, which indicates that the proposed method will contribute to achieve a low-cost, convenient and safe method for recharging implantable biosensors. PMID:27626422

  3. Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials

    DOEpatents

    Wang, Yifeng; Miller, Andy; Bryan, Charles R.; Kruichak, Jessica Nicole

    2015-11-17

    Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials are described. For example, a method of capturing and immobilizing radioactive nuclei includes flowing a gas stream through an exhaust apparatus. The exhaust apparatus includes a metal fluorite-based inorganic material. The gas stream includes a radioactive species. The radioactive species is removed from the gas stream by adsorbing the radioactive species to the metal fluorite-based inorganic material of the exhaust apparatus.

  4. Cryptanalysis of "an improvement over an image encryption method based on total shuffling"

    NASA Astrophysics Data System (ADS)

    Akhavan, A.; Samsudin, A.; Akhshani, A.

    2015-09-01

    In the past two decades, several image encryption algorithms based on chaotic systems have been proposed. Many of the proposed algorithms are meant to improve other chaos-based and conventional cryptographic algorithms, yet many of the proposed improvement methods themselves suffer from serious security problems. In this paper, the security of a recently proposed improvement method for a chaos-based image encryption algorithm is analyzed. The results indicate the weakness of the analyzed algorithm against chosen plain-text attacks.

  5. Method to produce nanocrystalline powders of oxide-based phosphors for lighting applications

    DOEpatents

    Loureiro, Sergio Paulo Martins; Setlur, Anant Achyut; Williams, Darryl Stephen; Manoharan, Mohan; Srivastava, Alok Mani

    2007-12-25

    Some embodiments of the present invention are directed toward nanocrystalline oxide-based phosphor materials, and methods for making same. Typically, such methods comprise a steric entrapment route for converting precursors into such phosphor material. In some embodiments, the nanocrystalline oxide-based phosphor materials are quantum splitting phosphors. In some or other embodiments, such nanocrystalline oxide based phosphor materials provide reduced scattering, leading to greater efficiency, when used in lighting applications.

  6. Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials

    SciTech Connect

    Wang, Yifeng; Miller, Andy; Bryan, Charles R; Kruichar, Jessica Nicole

    2015-04-07

    Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials are described. For example, a method of capturing and immobilizing radioactive nuclei includes flowing a gas stream through an exhaust apparatus. The exhaust apparatus includes a metal fluorite-based inorganic material. The gas stream includes a radioactive species. The radioactive species is removed from the gas stream by adsorbing the radioactive species to the metal fluorite-based inorganic material of the exhaust apparatus.

  7. Vector-based plane-wave spectrum method for the propagation of cylindrical electromagnetic fields.

    PubMed

    Shi, S; Prather, D W

    1999-11-01

    We present a vector-based plane-wave spectrum (VPWS) method for efficient propagation of cylindrical electromagnetic fields. In comparison with electromagnetic propagation integrals, the VPWS method significantly reduces time of propagation. Numerical results that illustrate the utility of this method are presented.
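
    A scalar angular-spectrum sketch of the plane-wave-spectrum idea: decompose the aperture field into plane waves with an FFT, multiply each by the propagator exp(i*kz*d), and recompose. The vector method applies the same machinery per field component; the parameters below are arbitrary.

    import numpy as np

    wavelength = 0.633e-6                    # metres (illustrative)
    k = 2 * np.pi / wavelength
    n, dx = 2048, 0.5e-6                     # samples and pitch in the aperture

    x = (np.arange(n) - n // 2) * dx
    field = np.exp(-(x / 20e-6) ** 2).astype(complex)   # Gaussian aperture field

    kx = 2 * np.pi * np.fft.fftfreq(n, dx)
    kz = np.sqrt((k**2 - kx**2).astype(complex))        # evanescent -> imaginary
    d = 200e-6                                          # propagation distance
    propagated = np.fft.ifft(np.fft.fft(field) * np.exp(1j * kz * d))

    print("power before/after:",
          float(np.sum(np.abs(field) ** 2)),
          float(np.sum(np.abs(propagated) ** 2)))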

  8. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…

  9. Betweenness-Based Method to Identify Critical Transmission Sectors for Supply Chain Environmental Pressure Mitigation.

    PubMed

    Liang, Sai; Qu, Shen; Xu, Ming

    2016-02-01

    To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (the production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to those sectors that are important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using fewer upstream inputs to produce unitary output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths extracted from structural path analysis that pass through a particular sector. We take China as an example and find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, which cannot be obtained with existing methods, into the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods for guiding sector-level environmental pressure mitigation strategies. PMID:26727352
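
    A toy version of the path-based betweenness: enumerate structural-path-analysis paths up to a fixed order in a small input-output table, and accumulate for each sector the value of paths in which it appears as an intermediate node. The coefficient matrix and final demand are invented, and the paper's formulation may differ in normalization and maximum path order.

    import numpy as np
    from itertools import product

    A = np.array([[0.10, 0.20, 0.05, 0.00],   # toy technical coefficients
                  [0.15, 0.05, 0.20, 0.10],
                  [0.05, 0.10, 0.05, 0.30],
                  [0.00, 0.05, 0.10, 0.05]])
    f = np.array([100.0, 50.0, 80.0, 60.0])   # toy final demand
    n = len(f)

    # A path j0 <- j1 <- ... <- jt carries A[j0,j1]*...*A[j(t-1),jt]*f[jt].
    # Sum, for each sector k, the value of all paths (order <= 3) in which
    # k occupies an interior (transmission) position.
    betweenness = np.zeros(n)
    for t in (2, 3):
        for path in product(range(n), repeat=t + 1):
            value = f[path[-1]]
            for a, b in zip(path, path[1:]):
                value *= A[a, b]
            for k in set(path[1:-1]):         # interior positions only
                betweenness[k] += value
    print("transmission importance per sector:", betweenness.round(2))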

  10. Implementing a Problem-Based Learning Approach for Teaching Research Methods in Geography

    ERIC Educational Resources Information Center

    Spronken-Smith, Rachel

    2005-01-01

    This paper first describes problem-based learning; second describes how a research methods course in geography is taught using a problem-based learning approach; and finally relates student and staff experiences of this approach. The course is run through regular group meetings, two residential field trips and optional skills-based workshops.…

  11. An Inquiry-Based Approach to Teaching Research Methods in Information Studies

    ERIC Educational Resources Information Center

    Albright, Kendra; Petrulis, Robert; Vasconcelos, Ana; Wood, Jamie

    2012-01-01

    This paper presents the results of a project that aimed at restructuring the delivery of research methods training at the Information School at the University of Sheffield, UK, based on an Inquiry-Based Learning (IBL) approach. The purpose of this research was to implement inquiry-based learning that would allow customization of research methods…

  12. A Comparison of Traditional Teaching Methods and Problem-Based Learning in an Addiction Studies Class.

    ERIC Educational Resources Information Center

    Sevening, Diane; Baron, Mark

    2002-01-01

    Study compared students' achievement gains and attitudes using traditional (lecture-based) teaching methods and problem-based learning (PBL) techniques in an addiction studies class. Results showed students did not respond well to PBL and preferred a lecture-based format. Pretest mean scores indicated the PBL group entered the course at a higher…

  13. An improved poly(A) motifs recognition method based on decision level fusion.

    PubMed

    Zhang, Shanxin; Han, Jiuqiang; Liu, Jun; Zheng, Jiguang; Liu, Ruiling

    2015-02-01

    Polyadenylation is the process of addition of a poly(A) tail to mRNA 3' ends. Identification of the motifs controlling polyadenylation plays an essential role in improving genome annotation accuracy and in better understanding the mechanisms governing gene regulation. Bioinformatics methods used for poly(A) motif recognition have demonstrated that information extracted from sequences surrounding the candidate motifs can differentiate true motifs from false ones to a great extent. However, these methods depend on either domain features or string kernels. To date, methods combining information from different sources have not been found. Here, we propose an improved poly(A) motif recognition method that combines different sources based on decision-level fusion. First, two novel prediction methods were proposed based on the support vector machine (SVM): one uses domain-specific features and the principal component analysis (PCA) method to eliminate redundancy (PCA-SVM); the other is based on the Oligo string kernel (Oligo-SVM). We then propose a novel machine-learning method for poly(A) motif prediction by combining four poly(A) motif recognition methods: two state-of-the-art methods (Random Forest (RF) and HMM-SVM) and the two newly proposed methods (PCA-SVM and Oligo-SVM). A decision-level information fusion method was employed to combine the decision values of the different classifiers by applying Dempster-Shafer (DS) evidence theory. We evaluated our method on a comprehensive poly(A) dataset that consists of 14,740 samples of 12 variants of poly(A) motifs and 2750 samples containing none of these motifs. Our method achieved an accuracy of up to 86.13%. Compared with the four classifiers, our evidence-theory-based method reduces the average error rate by about 30%, 27%, 26% and 16%, respectively. The experimental results suggest that the proposed method is more effective for poly(A) motif recognition. PMID:25594576
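
    The decision-level fusion step can be sketched with Dempster's combination rule over the two-class frame {motif, non-motif}; the mass values below are hypothetical stand-ins for classifier decision values mapped to beliefs.

    def dempster(m1, m2):
        """Combine two mass functions keyed by frozenset focal elements."""
        combined, conflict = {}, 0.0
        for s1, v1 in m1.items():
            for s2, v2 in m2.items():
                inter = s1 & s2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    MOTIF, NON = frozenset({"motif"}), frozenset({"non-motif"})
    THETA = MOTIF | NON                       # ignorance: either outcome

    m_pca_svm = {MOTIF: 0.6, NON: 0.2, THETA: 0.2}    # hypothetical masses
    m_oligo_svm = {MOTIF: 0.5, NON: 0.3, THETA: 0.2}

    fused = dempster(m_pca_svm, m_oligo_svm)
    print({tuple(sorted(s)): round(v, 3) for s, v in fused.items()})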

  15. Advances on Empirical Mode Decomposition-based Time-Frequency Analysis Methods in Hydrocarbon Detection

    NASA Astrophysics Data System (ADS)

    Chen, H. X.; Xue, Y. J.; Cao, J.

    2015-12-01

    Empirical mode decomposition (EMD), which is a data-driven adaptive decomposition method not limited by time-frequency uncertainty spreading, is proved to be more suitable for seismic signals, which are nonlinear and non-stationary. Compared with other Fourier-based and wavelet-based time-frequency methods, EMD-based time-frequency methods have higher temporal and spatial resolution and yield hydrocarbon interpretations with more statistical significance. The empirical mode decomposition algorithm has now evolved from EMD to ensemble EMD (EEMD) to complete ensemble EMD (CEEMD). Even though EMD-based time-frequency methods offer many promising features for analyzing and processing geophysical data, they also have limitations. This presentation reports a comparative study on hydrocarbon detection using seven EMD-based time-frequency analysis methods: (1) EMD combined with the Hilbert transform (HT) as a time-frequency analysis method for hydrocarbon detection; (2) the normalized Hilbert transform (NHT) and the HU method, each combined with HT, as improved time-frequency analysis methods; (3) EMD combined with Teager-Kaiser energy (EMD/TK); (4) EMD combined with the wavelet transform (EMDWave) as a seismic attenuation estimation method; and (5) EEMD- and CEEMD-based time-frequency analysis methods used as highlight-volume technology. The differences between these methods in hydrocarbon detection will be discussed. The question of obtaining a meaningful instantaneous frequency via HT and the mode-mixing issues in EMD will be analysed. The work was supported by NSFC under grant Nos. 41430323, 41404102 and 41274128.
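
    The HT step shared by these methods can be sketched on a single intrinsic mode function: take the analytic signal and differentiate its unwrapped phase to obtain the instantaneous frequency. The EMD itself is omitted (libraries such as PyEMD provide it); the chirp below stands in for one IMF.

    import numpy as np
    from scipy.signal import hilbert

    fs = 500.0
    t = np.arange(0, 2.0, 1 / fs)

    # Stand-in for one IMF produced by EMD/EEMD/CEEMD.
    imf = np.exp(-0.5 * t) * np.sin(2 * np.pi * (30 + 10 * t) * t)

    analytic = hilbert(imf)                   # analytic signal via the HT
    amplitude = np.abs(analytic)              # instantaneous amplitude
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.gradient(phase, 1 / fs) / (2 * np.pi)

    mid = slice(100, -100)                    # ignore edge effects
    print(f"instantaneous frequency spans {inst_freq[mid].min():.1f}"
          f" to {inst_freq[mid].max():.1f} Hz")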

  16. A hybrid method based upon nonlinear Lamb wave response for locating a delamination in composite laminates.

    PubMed

    Yelve, Nitesh P; Mitra, Mira; Mujumdar, P M; Ramadas, C

    2016-08-01

    A new hybrid method based upon the nonlinear Lamb wave response in the time and frequency domains is introduced to locate a delamination in composite laminates. In the Lamb-wave-based nonlinear method, the presence of damage is shown by the appearance of higher harmonics in the Lamb wave response. The proposed method uses not only this spectral information but also the corresponding temporal response data for locating the delamination; thus, the method is termed a hybrid method. The paper includes the formulation of the method and its application to locating a Barely Visible Impact Damage (BVID) induced delamination in a Carbon Fiber Reinforced Polymer (CFRP) laminate. The method gives the damage location fairly well. It is a baseline-free method, as it does not need data from the pristine specimen. PMID:27115575

  17. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography imaging reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method by experimenting with a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media.

  19. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with input extended by a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the predicted values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness. PMID:24772020
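
    A sketch of the extended-input construction, assuming k-means RBF centres and a Gaussian hidden layer, followed by ordinary PLS on the concatenated matrix (data and hyperparameters are arbitrary):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))            # stand-in NIR spectra
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

    # RBF hidden layer: centres from k-means, Gaussian activations.
    centres = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
    dists = np.linalg.norm(X[:, None] - centres[None], axis=2)
    width = np.median(dists)
    H = np.exp(-dists ** 2 / (2 * width ** 2))

    # Extended input = original variables plus hidden-layer outputs, then PLS.
    X_ext = np.hstack([X, H])
    pls = PLSRegression(n_components=8).fit(X_ext, y)
    rmse = np.sqrt(np.mean((pls.predict(X_ext).ravel() - y) ** 2))
    print(f"training RMSE of the extended-input PLS: {rmse:.3f}")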

  1. A Component Prediction Method for Flue Gas of Natural Gas Combustion Based on Nonlinear Partial Least Squares Method

    PubMed Central

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is important for energy conservation and emission reduction. The traditional partial least squares (PLS) method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness. PMID:24772020
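
    A minimal sketch of the extended-input idea under stated assumptions: the abstract does not specify how the RBF centers are chosen, so k-means centroids and a single gamma are used here for illustration; PLSRegression comes from scikit-learn, and the data are synthetic stand-ins for the NIR spectra.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

def _rbf(X, centers, gamma):
    # Hidden-layer activations exp(-gamma * ||x - c||^2), one column per center.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def rbf_extended_pls(X, Y, n_centers=10, n_components=5, gamma=1.0):
    """Fit PLS on the input matrix extended with RBF hidden-layer outputs."""
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    pls = PLSRegression(n_components=n_components)
    pls.fit(np.hstack([X, _rbf(X, centers, gamma)]), Y)
    return pls, centers

# 200 samples x 50 wavelengths, three gas concentrations (invented shapes);
# new samples must be extended the same way before calling predict.
X, Y = np.random.rand(200, 50), np.random.rand(200, 3)
model, centers = rbf_extended_pls(X, Y)
Y_hat = model.predict(np.hstack([X, _rbf(X, centers, 1.0)]))
```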

  2. MTC: A Fast and Robust Graph-Based Transductive Learning Method.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Geng, Guang-Gang; Liu, Cheng-Lin

    2015-09-01

    Despite the great success of graph-based transductive learning methods, most of them have serious problems with scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate a graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves on graph-based methods, which typically have polynomial time complexity. Moreover, we show theoretically and empirically that the performance of MTC is robust to the graph construction, overcoming another big problem of traditional graph-based methods. Extensive experiments on public data sets and applications to web-spam detection and interactive image segmentation demonstrate our method's advantages in accuracy, speed, and robustness.
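
    A simplified sketch of the tree-based idea, under loud assumptions: SciPy's minimum spanning tree stands in for the spanning-tree approximation, and a nearest-seed label spread stands in for the authors' linear-time minimum-cut labeling, which is not reproduced here.

```python
import numpy as np
from collections import deque
from scipy.sparse.csgraph import minimum_spanning_tree

def tree_transduce(W, seeds):
    """W: symmetric sparse affinity matrix (CSR, higher = more similar);
    seeds: dict {node index: class label}. Returns a label per node."""
    # MST of the negated weights keeps the strongest-affinity edges.
    T = minimum_spanning_tree(-W)
    T = ((T + T.T) != 0).tocsr()          # symmetric tree adjacency
    labels = -np.ones(W.shape[0], dtype=int)
    queue = deque(seeds)
    for node, lab in seeds.items():
        labels[node] = lab
    # Multi-source BFS: each node takes the label of its nearest seed,
    # which tends to cut few tree edges.
    while queue:
        u = queue.popleft()
        for v in T.indices[T.indptr[u]:T.indptr[u + 1]]:
            if labels[v] == -1:
                labels[v] = labels[u]
                queue.append(v)
    return labels
```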

  3. Evaluation of Hybridization Capture Versus Amplicon-Based Methods for Whole-Exome Sequencing

    PubMed Central

    Samorodnitsky, Eric; Jewell, Benjamin M.; Hagopian, Raffi; Miya, Jharna; Wing, Michele R.; Lyon, Ezra; Damodaran, Senthilkumar; Bhatt, Darshna; Reeser, Julie W.; Datta, Jharna

    2015-01-01

    Next-generation sequencing has aided characterization of genomic variation. While whole-genome sequencing may capture all possible mutations, whole-exome sequencing remains cost-effective and captures most phenotype-altering mutations. Initial strategies for exome enrichment utilized a hybridization-based capture approach. Recently, amplicon-based methods were designed to simplify preparation and utilize smaller DNA inputs. We evaluated two hybridization capture-based and two amplicon-based whole-exome sequencing approaches, utilizing both Illumina and Ion Torrent sequencers, comparing on-target alignment, uniformity, and variant calling. While the amplicon methods had higher on-target rates, the hybridization capture-based approaches demonstrated better uniformity. All methods identified many of the same single-nucleotide variants, but each amplicon-based method missed variants detected by the other three methods and reported additional variants discordant with all three other technologies. Many of these potential false positives or negatives appear to result from limited coverage, low variant frequency, vicinity to read starts/ends, or the need for platform-specific variant calling algorithms. All methods demonstrated effective copy-number variant calling when evaluated against a single-nucleotide polymorphism array. This study illustrates some differences between whole-exome sequencing approaches, highlights the need for selecting appropriate variant calling based on capture method, and will aid laboratories in selecting their preferred approach. PMID:26110913

  4. Assessment of health-care waste disposal methods using a VIKOR-based fuzzy multi-criteria decision making method

    SciTech Connect

    Liu, Hu-Chen; Wu, Jing; Li, Ping

    2013-12-15

    Highlights:
    • Propose a VIKOR-based fuzzy MCDM technique for evaluating HCW disposal methods.
    • Linguistic variables are used to assess the ratings and weights for the criteria.
    • The OWA operator is utilized to aggregate individual opinions of decision makers.
    • A case study is given to illustrate the procedure of the proposed framework.

    Abstract: Nowadays, selection of the appropriate treatment method in health-care waste (HCW) management has become a challenging task for municipal authorities, especially in developing countries. Assessment of HCW disposal alternatives can be regarded as a complicated multi-criteria decision making (MCDM) problem which requires consideration of multiple alternative solutions and conflicting tangible and intangible criteria. The objective of this paper is to present a new MCDM technique based on fuzzy set theory and the VIKOR method for evaluating HCW disposal methods. Linguistic variables are used by decision makers to assess the ratings and weights of the established criteria. The ordered weighted averaging (OWA) operator is utilized to aggregate individual opinions of decision makers into a group assessment. The computational procedure of the proposed framework is illustrated through a case study in Shanghai, one of the largest cities in China. The HCW treatment alternatives considered in this study include “incineration”, “steam sterilization”, “microwave” and “landfill”. The results obtained using the proposed approach are analyzed in a comparative way.

  5. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis

    PubMed Central

    Kang, Mengjun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction. PMID:25996691

  6. Computed Tomography Analysis of Postsurgery Femoral Component Rotation Based on a Force Sensing Device Method versus Hypothetical Rotational Alignment Based on Anatomical Landmark Methods: A Pilot Study.

    PubMed

    Kreuzer, Stefan W; Pourmoghaddam, Amir; Leffers, Kevin J; Johnson, Clint W; Dettmer, Marius

    2016-01-01

    Rotation of the femoral component is an important aspect of knee arthroplasty, due to its effects on postsurgery knee kinematics and associated functional outcomes. It is still debated which method for establishing rotational alignment is preferable in orthopedic surgery. We compared force sensing based femoral component rotation with traditional anatomic landmark methods to investigate which method is more accurate in terms of alignment to the true transepicondylar axis. Thirty-one patients underwent computer-navigated total knee arthroplasty for osteoarthritis with femoral rotation established via a force sensor. During surgery, three alternative hypothetical femoral rotational alignments were assessed, based on transepicondylar axis, anterior-posterior axis, or the utilization of a posterior condyles referencing jig. Postoperative computed tomography scans were obtained to investigate rotation characteristics. Significant differences in rotation characteristics were found between rotation according to DKB and other methods (P < 0.05). Soft tissue balancing resulted in smaller deviation from anatomical epicondylar axis than any other method. 77% of operated knees were within a range of ±3° of rotation. Only between 48% and 52% of knees would have been rotated appropriately using the other methods. The current results indicate that force sensors may be valuable for establishing correct femoral rotation. PMID:26881086

  7. Provider payment methods and health worker motivation in community-based health insurance: a mixed-methods study.

    PubMed

    Robyn, Paul Jacob; Bärnighausen, Till; Souares, Aurélia; Traoré, Adama; Bicaba, Brice; Sié, Ali; Sauerborn, Rainer

    2014-05-01

    In a community-based health insurance (CBHI) introduced in 2004 in Nouna health district, Burkina Faso, poor perceived quality of care by CBHI enrollees has been a key factor in observed high drop-out rates. The poor quality perceptions have been previously attributed to health worker dissatisfaction with the provider payment method used by the scheme and the resulting financial risk of health centers. This study applied a mixed-methods approach to investigate how health workers working in facilities contracted by the CBHI view the methods of provider payment used by the CBHI. In order to analyze these relationships, we conducted 23 in-depth interviews and a quantitative survey with 98 health workers working in the CBHI intervention zone. The qualitative in-depth interviews identified that insufficient levels of capitation payments, the infrequent schedule of capitation payment, and lack of a payment mechanism for reimbursing service fees were perceived as significant sources of health worker dissatisfaction and loss of work-related motivation. Combining qualitative interview and quantitative survey data in a mixed-methods analysis, this study identified that the declining quality of care due to the CBHI provider payment method was a source of significant professional stress and role strain for health workers. Health workers felt that the following five changes due to the provider payment methods introduced by the CBHI impeded their ability to fulfill professional roles and responsibilities: (i) increased financial volatility of health facilities, (ii) dissatisfaction with eligible costs to be covered by capitation; (iii) increased pharmacy stock-outs; (iv) limited financial and material support from the CBHI; and (v) the lack of mechanisms to increase provider motivation to support the CBHI. To address these challenges and improve CBHI uptake and health outcomes in the targeted populations, the health care financing and delivery model in the study zone should be

  8. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation.

    PubMed

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.

  9. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation

    NASA Astrophysics Data System (ADS)

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn on subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. An experiment has been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
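
    The generic numerical step behind such an identification is a stacked linear least-squares solve of the linearized error model; the sketch below assumes the per-pose identification Jacobians have already been derived from the exoskeleton kinematics (device-specific, not shown here).

```python
import numpy as np

def identify_parameters(jacobians, position_errors):
    """Estimate kinematic parameter errors from a linearized error model.

    Stacks per-pose equations  e_i = J_i @ dtheta  and solves the
    overdetermined system in the least-squares sense.
    jacobians       : list of (3, p) identification Jacobians, one per pose
    position_errors : list of (3,) measured-minus-modeled end-point errors
    """
    J = np.vstack(jacobians)               # (3m, p)
    e = np.concatenate(position_errors)    # (3m,)
    dtheta, *_ = np.linalg.lstsq(J, e, rcond=None)
    return dtheta                          # parameter corrections
```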

  10. Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson

    2002-01-01

    This report consists of a survey of the state of the art in uncertainty-based design together with recommendations for a base research activity in this area for the NASA Langley Research Center. The report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of the use of such methods are explained. Particular research needs are listed.

  11. Multidisciplinary and Evidence-based Method for Prioritizing Diseases of Food-producing Animals and Zoonoses

    PubMed Central

    Humblet, Marie-France; Vandeputte, Sébastien; Albert, Adelin; Gosset, Christiane; Kirschvink, Nathalie; Haubruge, Eric; Fecher-Bourgeois, Fabienne; Pastoret, Paul-Pierre

    2012-01-01

    To prioritize 100 animal diseases and zoonoses in Europe, we used a multicriteria decision-making procedure based on expert opinion and evidence-based data. Forty international experts performed intracategory and intercategory weighting of 57 prioritization criteria. Two methods (a deterministic one using the mean of each weight, and a probabilistic one using distribution functions of the weights in a Monte Carlo simulation) were used to calculate a score for each disease, and a ranking was then established. Few differences were observed between the two methods. Compared with previous prioritization methods, our procedure is evidence based, includes a range of fields and criteria while considering uncertainty, and will be useful for analyzing diseases that affect public health. PMID:22469519
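
    A minimal sketch of the two scoring variants under assumed inputs: the experts' weight distributions are approximated here by clipped normals, whereas the study elicited its own distribution functions; shapes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_disease_scores(criteria, weight_mean, weight_sd, n_draws=10000):
    """Probabilistic multicriteria scoring: draw criterion weights from
    distributions and accumulate weighted-sum scores per disease.

    criteria    : (n_diseases, n_criteria) matrix of criterion scores
    weight_mean : (n_criteria,) mean weight per criterion
    weight_sd   : (n_criteria,) uncertainty of each weight
    Returns the mean score and its Monte Carlo standard deviation.
    """
    w = rng.normal(weight_mean, weight_sd, size=(n_draws, len(weight_mean)))
    w = np.clip(w, 0, None)          # weights stay non-negative
    scores = w @ criteria.T          # (n_draws, n_diseases)
    return scores.mean(axis=0), scores.std(axis=0)

# The deterministic variant reduces to a single weighted sum:
#   score = criteria @ weight_mean
```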

  12. An Efficient Minimum Free Energy Structure-Based Search Method for Riboswitch Identification Based on Inverse RNA Folding

    PubMed Central

    Drory Retwitzer, Matan; Kifer, Ilona; Sengupta, Supratim; Yakhini, Zohar; Barash, Danny

    2015-01-01

    Riboswitches are RNA genetic control elements that were originally discovered in bacteria and provide a unique mechanism of gene regulation. They work without the participation of proteins and are believed to represent ancient regulatory systems on the evolutionary timescale. One of the biggest challenges in riboswitch research is to find additional eukaryotic riboswitches, since more than 20 riboswitch classes have been found in prokaryotes but only one class has been found in eukaryotes. Moreover, this single known class of eukaryotic riboswitch, namely the TPP riboswitch class, has been found in bacteria, archaea, fungi and plants but not in animals. The few examples of eukaryotic riboswitches were identified using sequence-based bioinformatics search methods such as a combination of BLAST and pattern matching techniques that incorporate base-pairing considerations. None of these approaches perform energy minimization structure predictions. There is a clear motivation to develop new bioinformatics methods, aside from the ongoing advances in covariance models, that will sample the sequence search space more flexibly using structural guidance while retaining the computational efficiency of sequence-based methods. We present a new energy minimization approach that transforms structure-based search into a sequence-based search, thereby enabling the utilization of well established sequence-based search utilities such as BLAST and FASTA. The transformation to sequence space is obtained by using an extended inverse RNA folding problem solver with sequence and structure constraints, available within RNAfbinv. Examples of applying the new method are presented for the purine and preQ1 riboswitches. The method is described in detail along with its findings in prokaryotes. Potential uses in finding novel eukaryotic riboswitches and optimizing pre-designed synthetic riboswitches based on ligand simulations are discussed. The method components are freely available for use.

  13. On Development of a Problem Based Learning System for Linear Algebra with Simple Input Method

    NASA Astrophysics Data System (ADS)

    Yokota, Hisashi

    2011-08-01

    Learning how to express a matrix using keyboard input takes a lot of time for most college students. Therefore, for a problem based learning system for linear algebra to be accessible to college students, it is essential to develop a simple method for expressing matrices. By studying the two most widely used input methods for expressing matrices, a simpler input method is obtained. Furthermore, using this input method and the educator's knowledge structure as a concept map, a problem based learning system for linear algebra which is capable of assessing students' knowledge structure and skill is developed.
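
    For illustration only (the paper's actual input method is not reproduced here), one plausible "simple input method" is a MATLAB-style string with semicolon-separated rows:

```python
import numpy as np

def parse_matrix(text):
    """Parse a MATLAB-style matrix string such as "1 2; 3 4" into an array.

    Rows are separated by semicolons, entries by spaces or commas; this is
    a hypothetical input convention, not the one developed in the paper.
    """
    rows = [r.replace(",", " ").split() for r in text.split(";") if r.strip()]
    return np.array([[float(v) for v in row] for row in rows])

print(parse_matrix("1 2; 3 4"))   # [[1. 2.] [3. 4.]]
```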

  15. Aqueous based reflux method for green synthesis of nanostructures: Application in CZTS synthesis.

    PubMed

    Aditha, Sai Kiran; Kurdekar, Aditya Dileep; Chunduri, L A Avinash; Patnaik, Sandeep; Kamisetti, Venkataramaniah

    2016-01-01

    The aqueous based reflux method, useful for the green synthesis of nanostructures, is described in detail. In this method, the parameters (the order of addition of the precursors, the reflux time, and the cooling rate) should be optimized in order to obtain the desired phase and morphology of the nanostructures. The application of this method is discussed with reference to the synthesis of CZTS nanoparticles, which have great potential as an absorber material in photovoltaic devices. The highlights of this method are:
    • Simple.
    • Low cost.
    • Aqueous based.
    PMID:27408826

  16. Surface impedance based microwave imaging method for breast cancer screening: contrast-enhanced scenario.

    PubMed

    Güren, Onan; Çayören, Mehmet; Ergene, Lale Tükenmez; Akduman, Ibrahim

    2014-10-01

    A new microwave imaging method that uses microwave contrast agents is presented for the detection and localization of breast tumours. The method is based on the reconstruction of the breast surface impedance from the measured scattered field. Surface impedance modelling allows the electrical properties of the breast to be represented in terms of impedance boundary conditions, which enables us to map the inner structure of the breast into surface impedance functions. A simple quantitative method is then proposed to screen breasts for malignant tumours, in which the detection procedure is based on weighted cross-correlations among the impedance functions. Numerical results demonstrate that the method is capable of detecting small malignancies and provides reasonable localization.
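
    A hedged sketch of a correlation-based screening step: the paper's exact weighting and decision rule are not reproduced; here a normalized cross-correlation against healthy reference impedance functions is weight-averaged, with a low score flagging a possible anomaly.

```python
import numpy as np

def impedance_anomaly_score(z_test, z_refs, weights):
    """Weighted cross-correlation screening of surface impedance functions.

    z_test  : sampled (possibly complex) impedance function of the screened breast
    z_refs  : list of reference (healthy) impedance functions, same length
    weights : emphasis per reference (a modeling assumption, e.g. by geometry)
    """
    def ncc(a, b):
        # Normalized cross-correlation at zero lag.
        a = (a - a.mean()) / (np.linalg.norm(a - a.mean()) + 1e-12)
        b = (b - b.mean()) / (np.linalg.norm(b - b.mean()) + 1e-12)
        return np.abs(np.vdot(a, b))
    corrs = np.array([ncc(z_test, z) for z in z_refs])
    return float(np.average(corrs, weights=weights))  # low => suspicious
```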

  17. Multiscale Design of Advanced Materials based on Hybrid Ab Initio and Quasicontinuum Methods

    SciTech Connect

    Luskin, Mitchell

    2014-03-12

    This project united researchers from mathematics, chemistry, computer science, and engineering for the development of new multiscale methods for the design of materials. Our approach was highly interdisciplinary, but it had two unifying themes: first, we utilized modern mathematical ideas about change-of-scale and state-of-the-art numerical analysis to develop computational methods and codes to solve real multiscale problems of DOE interest; and, second, we took very seriously the need for quantum mechanics-based atomistic forces, and based our methods on fast solvers of chemically accurate methods.

  18. A parallel multiple path tracing method based on OptiX for infrared image generation

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu

    2015-12-01

    Infrared image generation technology is widely used in infrared imaging system performance evaluation, battlefield environment simulation and military personnel training, which require a physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX is proposed to solve this problem; it not only increases computational efficiency compared to serial ray tracing on a CPU, but also produces relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation are analyzed, and a multiple path tracing method based on OptiX is developed. Furthermore, Monte Carlo integration is employed to solve the radiative transfer equation, in which importance sampling is applied to accelerate the convergence rate of the integral. After that, the framework of the simulation platform and its sensor effects simulation diagram are given. Finally, the results show that the method can generate relatively accurate radiation images provided a precise importance sampling method is available.
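
    The quoted acceleration comes from importance sampling in the Monte Carlo estimate of the transfer integral. Below is a minimal, self-contained illustration of that principle on a toy integrand (not the radiative transfer equation itself): sampling from a density shaped like the integrand reduces estimator variance.

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_importance(f, pdf, sampler, n=100000):
    """Monte Carlo estimate of integral f(x) dx via E_p[f(X)/p(X)];
    converges fastest when the sampling density p is shaped like f."""
    x = sampler(n)
    return np.mean(f(x) / pdf(x))

# Toy radiance-like integrand, peaked near zero, integrated over [0, inf):
f = lambda x: np.exp(-2.0 * x) * (1 + 0.1 * np.sin(5 * x))
# Importance density p(x) = 2 exp(-2x) matches the integrand's decay.
pdf = lambda x: 2.0 * np.exp(-2.0 * x)
sampler = lambda n: rng.exponential(scale=0.5, size=n)
print(mc_importance(f, pdf, sampler))   # ~0.517 analytically
```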

  19. A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.

    PubMed

    Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R

    2008-04-01

    Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.

  20. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy

    PubMed Central

    Chang, Ching-Wei; Mycek, Mary-Ann

    2014-01-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging. PMID:22415891
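
    A runnable illustration of the trade-off described, assuming scikit-image is available; the synthetic "lifetime image" and noise level are invented stand-ins for the paper's FLIM data, not its actual processing pipeline.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_wavelet

rng = np.random.default_rng(2)

# Synthetic low-light "lifetime image": piecewise-constant lifetimes + noise.
truth = np.full((64, 64), 2.0)
truth[20:44, 20:44] = 3.0                     # region with longer lifetime (ns)
noisy = truth + rng.normal(0, 0.4, truth.shape)

tv = denoise_tv_chambolle(noisy, weight=0.2)  # edge-preserving, slower
wav = denoise_wavelet(noisy)                  # faster, may bias values

for name, img in [("noisy", noisy), ("TV", tv), ("wavelet", wav)]:
    print(name, "RMSE:", np.sqrt(np.mean((img - truth) ** 2)))
```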

  1. Properties of a Formal Method for Prediction of Emergent Behaviors in Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Autonomous intelligent swarms of satellites are being proposed for NASA missions that have complex behaviors and interactions. The emergent properties of swarms make these missions powerful, but at the same time more difficult to design and assure that proper behaviors will emerge. This paper gives the results of research into formal methods techniques for verification and validation of NASA swarm-based missions. Multiple formal methods were evaluated to determine their effectiveness in modeling and assuring the behavior of swarms of spacecraft. The NASA ANTS mission was used as an example of swarm intelligence for which to apply the formal methods. This paper will give the evaluation of these formal methods and give partial specifications of the ANTS mission using four selected methods. We then give an evaluation of the methods and the needed properties of a formal method for effective specification and prediction of emergent behavior in swarm-based systems.

  2. Method based on the double sideband technique for the dynamic tracking of micrometric particles

    NASA Astrophysics Data System (ADS)

    Ramirez, Claudio; Lizana, Angel; Iemmi, Claudio; Campos, Juan

    2016-06-01

    Digital holography (DH) methods are of interest in a large number of applications. Recently, the double sideband (DSB) technique was proposed: a DH based method that, by using double filtering in an in-line configuration, provides reconstructed images that are free of distortions and of the twin image. In this work, we implement a method for investigating the mobility of particles based on the DSB technique. Particle holographic images obtained using the DSB method are processed with digital image recognition methods, allowing us to accurately track the spatial positions of the particles. The dynamic nature of the method is achieved experimentally by using a spatial light modulator. The suitability of the proposed tracking method is validated by determining the trajectory and velocity of moving glass microspheres.
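
    A minimal sketch of the recognition step, assuming SciPy (the paper's own recognition algorithm is not reproduced): threshold-and-label each reconstructed frame and take intensity-weighted centroids; linking centroids across frames, e.g. by nearest neighbour, then yields trajectories and velocities.

```python
import numpy as np
from scipy import ndimage

def frame_centroids(frame, threshold=0.5):
    """Segment bright particle spots in one reconstructed DSB frame and
    return their intensity-weighted centroids as an (n, 2) array."""
    mask = frame > threshold                     # assumed intensity threshold
    labels, n = ndimage.label(mask)              # connected bright regions
    return np.array(ndimage.center_of_mass(frame, labels, range(1, n + 1)))
```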

  3. A Novel Ship-rocking Forecasting Method based on Hilbert- Huang Transform

    NASA Astrophysics Data System (ADS)

    De-yong, Kang; Yu-jian, Li; Xu-liang, Wang; Zhi, Chen

    2016-02-01

    Ship rocking is a crucial factor affecting the accuracy of ocean-based aerospace vehicle measurement. Here we analyse groups of ship-rocking time series in the horizontal and vertical directions utilizing a Hilbert based method from statistical physics. Based on these results, we can predict a certain number of future values of the ship-rocking time series from the current and previous values. Our predictions are as accurate as those of conventional methods from stochastic processes and provide a much wider prediction time range.
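
    A small sketch of the Hilbert half of such an analysis, via SciPy's analytic signal; the forecasting rule built on these quantities in the paper is not reproduced, and the sampling rate and roll series below are invented.

```python
import numpy as np
from scipy.signal import hilbert

def hilbert_features(x, fs):
    """Instantaneous amplitude and frequency of a rocking series from the
    analytic signal x_a = x + i*H(x)."""
    analytic = hilbert(x)
    amplitude = np.abs(analytic)
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)   # Hz
    return amplitude, inst_freq

fs = 10.0                                  # samples per second (assumed)
t = np.arange(0, 60, 1 / fs)
roll = 3.0 * np.sin(2 * np.pi * 0.1 * t) * (1 + 0.2 * np.sin(2 * np.pi * 0.01 * t))
amp, freq = hilbert_features(roll, fs)
```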

  4. Molecular cancer classification using a meta-sample-based regularized robust coding method

    PubMed Central

    2014-01-01

    Motivation: Previous studies have demonstrated that machine learning based molecular cancer classification using gene expression profiling (GEP) data is promising for the clinical diagnosis and treatment of cancer. Novel classification methods with high efficiency and prediction accuracy are still needed to deal with the high dimensionality and small sample size of typical GEP data. Recently the sparse representation (SR) method has been successfully applied to cancer classification. Nevertheless, its efficiency needs to be improved when analyzing large-scale GEP data.
    Results: In this paper we present the meta-sample-based regularized robust coding classification (MRRCC), a novel effective cancer classification technique that combines the idea of the meta-sample-based cluster method with the regularized robust coding (RRC) method. It assumes that the coding residual and the coding coefficient are respectively independent and identically distributed. Similar to meta-sample-based SR classification (MSRC), MRRCC extracts a set of meta-samples from the training samples, and then encodes a testing sample as the sparse linear combination of these meta-samples. The representation fidelity is measured by the l2-norm or l1-norm of the coding residual.
    Conclusions: Extensive experiments on publicly available GEP datasets demonstrate that the proposed method is more efficient while its prediction accuracy is equivalent to existing MSRC-based methods and better than other state-of-the-art dimension-reduction based methods. PMID:25473795

  5. Handbook of methods for risk-based analysis of Technical Specification requirements

    SciTech Connect

    Samanta, P.K.; Vesely, W.E.

    1993-12-31

    Technical Specifications (TS) requirements for nuclear power plants define the Limiting Conditions for Operation (LCOs) and Surveillance Requirements (SRs) to assure safety during operation. In general, these requirements were based on deterministic analysis and engineering judgment. Experience with plant operation indicates that some elements of the requirements are unnecessarily restrictive, while others may not be conducive to safety. Improvements in these requirements are facilitated by the availability of plant-specific Probabilistic Safety Assessments (PSAs). The use of risk- and reliability-based methods to improve TS requirements has gained wide interest because these methods can quantitatively evaluate the risk impact and justify changes based on objective risk arguments, and can provide a defensible basis for these requirements in regulatory applications. The United States Nuclear Regulatory Commission (USNRC) Office of Research is sponsoring research to develop systematic risk-based methods to improve various aspects of TS requirements. The handbook of methods, which is being prepared, summarizes such risk-based methods. The scope of the handbook includes reliability and risk-based methods for evaluating allowed outage times (AOTs), action statements requiring shutdown where shutdown risk may be substantial, surveillance test intervals (STIs), defenses against common-cause failures, managing plant configurations, and scheduling maintenance. For each topic, the handbook summarizes methods of analysis and data needs, outlines the insights to be gained, lists additional references, and presents examples of evaluations.

  6. Comparing the Cloud Vertical Structure Derived from Several Methods Based on Radiosonde Profiles and Ground-based Remote Sensing Measurements

    SciTech Connect

    Costa-Suros, M.; Calbo, J.; Gonzalez, J. A.; Long, Charles N.

    2014-08-27

    The cloud vertical distribution, and especially the cloud base height, which is linked to cloud type, is an important characteristic for describing the impact of clouds in a changing climate. In this work several methods to estimate the cloud vertical structure (CVS) from atmospheric sounding profiles are compared, considering the number and position of cloud layers, with a ground based system taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods establish some conditions on the relative humidity, and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study these methods are applied to 125 radiosonde profiles acquired at the ARM Southern Great Plains site during all seasons of the year 2009 and endorsed by GOES images, which confirm that the cloudiness conditions are homogeneous enough across their trajectory. The overall agreement of the methods ranges between 44% and 88%; four methods produce total agreements around 85%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, which could be useful in atmospheric modeling. The total agreement, even when using low-resolution profiles, can be improved up to 91% if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set.
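
    A minimal sketch of the common core of such methods, a two-threshold relative-humidity rule: contiguous moist layers are promoted to cloud layers when their peak RH is high enough. The threshold values below are illustrative assumptions, not the values tuned in the study.

```python
import numpy as np

def cloud_layers_from_rh(heights, rh, rh_cloud=87.0, rh_moist=84.0):
    """Detect cloud layers in a sounding profile.

    heights : array of heights (m), bottom to top
    rh      : array of relative humidity (%) at those heights
    Returns a list of (base_height, top_height) pairs.
    """
    layers, start = [], None
    for i in range(len(rh) + 1):
        inside = i < len(rh) and rh[i] >= rh_moist
        if inside and start is None:
            start = i                      # moist layer begins
        elif not inside and start is not None:
            if max(rh[start:i]) >= rh_cloud:   # moist layer becomes cloud
                layers.append((heights[start], heights[i - 1]))
            start = None
    return layers
```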

  7. A comparative study on hydrocarbon detection using three EMD-based time-frequency analysis methods

    NASA Astrophysics Data System (ADS)

    Xue, Ya-juan; Cao, Jun-xing; Tian, Ren-fei

    2013-02-01

    Due to the strong heterogeneity of marine carbonate reservoirs, seismic signals become more complex, making hydrocarbon detection very difficult. In a hydrocarbon reservoir there are usually changes in seismic wave energy and frequency, and the instantaneous spectra often show attenuation of high-frequency energy and enhancement of low-frequency energy. In this paper, we introduce the Normalized Hilbert Transform (NHT) and a new method named the HU method for hydrocarbon detection, alongside the Hilbert-Huang transform (HHT); the instantaneous spectra of all three EMD-based time-frequency analysis methods have a certain oil and gas detection capability. The three EMD-based methods were applied to seismic data from the Jingbian Gas Field, located in the eastern Ordos Basin, China. Firstly, the seismic signals were decomposed into a finite number of intrinsic mode functions (IMFs) by the empirical mode decomposition (EMD) method. The second IMF (IMF2) of the original seismic section best indicates the distribution of the reservoir, and information on the hydrocarbon-bearing reservoir is mainly contained in IMF2. Secondly, the HHT, NHT and HU methods were each used to obtain different frequency division sections from IMF2, and hydrocarbon detection was realized from the energy distribution of these sections. The practical application results show that all three EMD-based methods can be employed for hydrocarbon detection, and that the frequency division section of IMF2 obtained with the NHT method was better for the seismic data from the Jingbian Gas Field than those obtained with the HHT and HU methods.
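
    A short sketch of the first step, assuming the third-party PyEMD package (distributed as EMD-signal) is installed; the seismic trace is a synthetic stand-in, and the frequency-division step built on IMF2 is not reproduced here.

```python
import numpy as np
from PyEMD import EMD            # pip install EMD-signal (assumed available)
from scipy.signal import hilbert

# Decompose a trace into IMFs and inspect IMF2, the component the study
# found most indicative of the gas-bearing reservoir.
fs = 500.0                                        # Hz, assumed sampling rate
t = np.arange(0, 2, 1 / fs)
trace = (np.sin(2 * np.pi * 30 * t) * np.exp(-t) +
         0.5 * np.sin(2 * np.pi * 8 * t))         # synthetic stand-in

imfs = EMD().emd(trace)                           # rows: IMF1, IMF2, ...
if imfs.shape[0] >= 2:
    imf2 = imfs[1]
    envelope = np.abs(hilbert(imf2))              # instantaneous amplitude
    print("IMFs:", imfs.shape[0], "IMF2 mean energy:", (imf2 ** 2).mean())
```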

  8. A method based on moving least squares for XRII image distortion correction

    SciTech Connect

    Yan Shiju; Wang Chengtao; Ye Ming

    2007-11-15

    This paper presents a novel integrated method to correct geometric distortions of XRII (x-ray image intensifier) images. The method has been compared, in terms of mean-squared residual error measured at control and intermediate points, with two traditional local methods and a traditional global method. The proposed method is based on moving least squares (MLS) and polynomial fitting. Extensive experiments were performed on simulated and real XRII images. In simulation, the effects of pincushion distortion, sigmoidal distortion, local distortion, noise, and the number of control points were tested. The traditional local methods were sensitive to pincushion and sigmoidal distortion. The traditional global method was only sensitive to sigmoidal distortion. The proposed method was found to be sensitive neither to pincushion distortion nor to sigmoidal distortion. Its sensitivity to local distortion was lower than or comparable with that of the traditional global method. Its sensitivity to noise was higher than that of all three traditional methods; nevertheless, provided the standard deviation of the noise was not greater than 0.1 pixels, the accuracy of the proposed method is still higher than that of the traditional methods. Its sensitivity to the number of control points was much lower than that of the traditional methods. Provided that a proper cutoff radius is chosen, the accuracy of the proposed method is higher than that of the traditional methods. Experiments on real images, carried out using a 9 in. XRII, showed that the residual error of the proposed method (0.2544±0.2479 pixels) is lower than that of the traditional global method (0.4223±0.3879 pixels) and the local methods (0.4555±0.3518 pixels and 0.3696±0.4019 pixels, respectively).
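
    A compact sketch of an MLS-style correction under stated assumptions: a locally weighted affine fit from distorted to true control coordinates with a Gaussian weight. The paper's polynomial order and exact cutoff handling are not reproduced.

```python
import numpy as np

def mls_correct(points, ctrl_dist, ctrl_true, radius=50.0):
    """Correct distorted image coordinates by moving least squares.

    points    : (m, 2) distorted query coordinates to correct
    ctrl_dist : (n, 2) distorted control-point coordinates
    ctrl_true : (n, 2) corresponding true (undistorted) coordinates
    """
    out = np.empty((len(points), 2))
    for k, p in enumerate(points):
        d2 = ((ctrl_dist - p) ** 2).sum(axis=1)
        sw = np.sqrt(np.exp(-d2 / (2 * radius ** 2)))   # Gaussian weights
        A = np.hstack([ctrl_dist, np.ones((len(ctrl_dist), 1))])  # [x y 1]
        # Weighted least-squares affine map, refit locally for each point.
        coef, *_ = np.linalg.lstsq(A * sw[:, None], ctrl_true * sw[:, None],
                                   rcond=None)
        out[k] = np.array([p[0], p[1], 1.0]) @ coef
    return out
```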

  9. Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental scale products has been demonstrated by several projects including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition and assumes a fixed continental aerosol type and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET corrected ETM+ data for 95 10 km×10 km 30 m subsets, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer wavelength bands.

  10. Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-05-01

    An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of the surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white light optimization method with the LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an entropy based contrast evaluation model proposed specifically for the surgical setting. Compared with neutral-white-LED based and traditional algorithm based image enhancing methods, the illumination based enhancing method shows better contrast enhancement, improving the average contrast value by about 9% and 6%, respectively. This low cost method is simple and practicable, and thus may provide an alternative to expensive vision-enhancing medical instruments.
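
    A minimal sketch of an entropy-style contrast score (the paper's evaluation model is not reproduced): the Shannon entropy of the grey-level histogram, assuming intensities scaled to [0, 1]; richer, more spread-out intensity distributions score higher.

```python
import numpy as np

def entropy_contrast(image, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an image
    whose intensities are assumed to lie in [0, 1]."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 1))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

# Comparing the score across illumination SPDs ranks the lighting scenes:
# a higher entropy is read as better scene contrast.
```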

  11. From molecules to management: adopting DNA-based methods for monitoring biological invasions in aquatic environments

    EPA Science Inventory

    Recent technological advances have driven rapid development of DNA-based methods designed to facilitate detection and monitoring of invasive species in aquatic environments. These tools promise to significantly alleviate difficulties associated with traditional monitoring approaches.

  12. Systems for column-based separations, methods of forming packed columns, and methods of purifying sample components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2000-01-01

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  13. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components.

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2004-08-24

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  14. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2006-02-21

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  15. Properties of a Formal Method to Model Emergence in Swarm-Based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Truszkowski, Walt; Rash, James; Hinchey, Mike

    2004-01-01

    Future space missions will require cooperation between multiple satellites and/or rovers. Developers are proposing intelligent autonomous swarms for these missions, but swarm-based systems are difficult or impossible to test with current techniques. This viewgraph presentation examines the use of formal methods in testing swarm-based systems. The potential usefulness of formal methods in modeling the ANTS asteroid encounter mission is also examined.

  16. System and method for air temperature control in an oxygen transport membrane based reactor

    DOEpatents

    Kelly, Sean M

    2016-09-27

    A system and method for air temperature control in an oxygen transport membrane based reactor are provided. They involve introducing a specific quantity of cooling air or trim air between stages in a multistage oxygen transport membrane based reactor or furnace to maintain generally consistent surface temperatures of the oxygen transport membrane elements and associated reactors. The associated reactors may include reforming reactors, boilers or process gas heaters.

  17. [Application of case-based method in genetics and eugenics teaching].

    PubMed

    Li, Ya-Xuan; Zhao, Xin; Zhang, Fei-Xiong; Hu, Ying-Kao; Yan, Yue-Ming; Cai, Min-Hua; Li, Xiao-Hui

    2012-05-01

    Genetics and Eugenics is a cross-discipline between genetics and eugenics and a common course in many Chinese universities. In order to increase learning interest, we introduced the case-based teaching method and achieved a better teaching effect. Based on our teaching practice, we summarize some experiences with this subject. In this article, the main problems of applying the case-based method in Genetics and Eugenics teaching are discussed.

  18. EOTAS dynamic scheduling method based on wearable man-machine synergy

    NASA Astrophysics Data System (ADS)

    Liu, Zhijun; Wang, Dongmei; Yang, Yukun; Zhao, Jie

    2011-12-01

    By analyzing the inherent nature of EOTAS dynamic scheduling requirements, wearable computing based on natural human-computer interaction is made the foundation of the EOTAS dynamic scheduling method, and a new concept of wearable man-machine cooperation is built for this purpose. Around its concrete implementation and application, an EOTAS dynamic scheduling method based on a colored extended fuzzy Petri net is developed, which provides a preliminary solution to the fast scheduling problem of EOTAS field applications in the operational environment.

  19. EOTAS dynamic scheduling method based on wearable man-machine synergy

    NASA Astrophysics Data System (ADS)

    Liu, ZhiJun; Wang, DongMei; Yang, YuKun; Zhao, Jie

    2012-01-01

    By analyzing the inherent nature of EOTAS dynamic scheduling requirements, wearable computing based on natural human-computer interaction is made the foundation of the EOTAS dynamic scheduling method, and a new concept of wearable man-machine cooperation is built for this purpose. Around its concrete implementation and application, an EOTAS dynamic scheduling method based on a colored extended fuzzy Petri net is developed, which provides a preliminary solution to the fast scheduling problem of EOTAS field applications in the operational environment.

  20. A case-base sampling method for estimating recurrent event intensities.

    PubMed

    Saarela, Olli

    2016-10-01

    Case-base sampling provides an alternative to risk set sampling based methods to estimate hazard regression models, in particular when absolute hazards are also of interest in addition to hazard ratios. The case-base sampling approach results in a likelihood expression of the logistic regression form, but instead of categorized time, such an expression is obtained through sampling of a discrete set of person-time coordinates from all follow-up data. In this paper, in the context of a time-dependent exposure such as vaccination, and a potentially recurrent adverse event outcome, we show that the resulting partial likelihood for the outcome event intensity has the asymptotic properties of a likelihood. We contrast this approach to self-matched case-base sampling, which involves only within-individual comparisons. The efficiency of the case-base methods is compared to that of standard methods through simulations, suggesting that the information loss due to sampling is minimal.