Science.gov

Sample records for fvm-bem method based

  1. Research on BOM based composable modeling method

    NASA Astrophysics Data System (ADS)

    Zhang, Mingxin; He, Qiang; Gong, Jianxing

    2013-03-01

    Composable modeling has been a research hotspot in the area of Modeling and Simulation for a long time. In order to increase the reuse and interoperability of BOM-based models, this paper puts forward a composable modeling method based on BOM: it studies the basic theory of BOM-based composable modeling, designs a general structure for the BOM-based coupled model, and traverses the structures of BOM-based atomic and coupled models. Finally, the paper describes the process of BOM-based composable modeling and draws conclusions about the method. From the prototype we developed and the accumulated stock of models, we found that this method can increase the reuse and interoperability of models.

  2. Method of recovering oil-based fluid

    SciTech Connect

    Brinkley, H.E.

    1993-07-13

    A method is described of recovering oil-based fluid, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.

  3. METHOD OF JOINING CARBIDES TO BASE METALS

    DOEpatents

    Krikorian, N.H.; Farr, J.D.; Witteman, W.G.

    1962-02-13

    A method is described for joining a refractory metal carbide such as UC or ZrC to a refractory metal base such as Ta or Nb. The method comprises carburizing the surface of the metal base and then sintering the base and carbide at temperatures of about 2000 deg C in a non-oxidizing atmosphere, the base and carbide being held in contact during the sintering step. To reduce the sintering temperature and time, a sintering aid such as iron, nickel, or cobalt is added to the carbide, not to exceed 5 wt%. (AEC)

  4. Method for comparing content based image retrieval methods

    NASA Astrophysics Data System (ADS)

    Barnard, Kobus; Shirahatti, Nikhil V.

    2003-01-01

    We assume that the goal of content based image retrieval is to find images which are both semantically and visually relevant to users based on image descriptors. These descriptors are often provided by an example image--the query by example paradigm. In this work we develop a very simple method for evaluating such systems based on large collections of images with associated text. Examples of such collections include the Corel image collection, annotated museum collections, news photos with captions, and web images with associated text based on heuristic reasoning on the structure of typical web pages (such as used by Google(tm)). The advantage of using such data is that it is plentiful, and the method we propose can be automatically applied to hundreds of thousands of queries. However, it is critical that such a method be verified against human usage, and to do this we evaluate over 6000 query/result pairs. Our results strongly suggest that at least in the case of the Corel image collection, the automated measure is a good proxy for human evaluation. Importantly, our human evaluation data can be reused for the evaluation of any content based image retrieval system and/or the verification of additional proxy measures.

  5. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and also that formal methods will play a major role in the development of future high-reliability digital systems.

  6. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. As a consequence, these methods can only predict the most probable faulty sensors, and the predictions are subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).

  7. Wavelet-based Multiresolution Particle Methods

    NASA Astrophysics Data System (ADS)

    Bergdorf, Michael; Koumoutsakos, Petros

    2006-03-01

    Particle methods offer a robust numerical tool for solving transport problems across disciplines, such as fluid dynamics, quantitative biology, or computer graphics. Their strength lies in their stability, as they do not discretize the convection operator, and in their appealing numerical properties, such as small dissipation and dispersion errors. Many problems of interest are inherently multiscale, and their efficient solution requires either multiscale modeling approaches or spatially adaptive numerical schemes. We present a hybrid particle method that employs a multiresolution analysis to identify and adapt to small scales in the solution. The method combines the versatility and efficiency of grid-based wavelet collocation methods while retaining the numerical properties and stability of particle methods. The accuracy and efficiency of this method are then assessed for transport and interface-capturing problems in two and three dimensions, illustrating the capabilities and limitations of our approach.

  8. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; YIN, Xin-Chun; CHEN, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid the loss of ad clicks due to objective factors and can track changes in the user's interests in a timely manner. Experiments show that our new method has a significant effect and can be further applied to online systems.

  9. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
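
    As a rough illustration of the computation the treecode accelerates, the sketch below evaluates the direct O(N^2) GB charge-charge pair sum using Still's f_GB form. This is a hedged stand-in: the function name and the precomputed effective Born radii are assumptions, and GBr6's actual pairwise expressions differ.

    ```python
    import numpy as np

    def gb_pair_energy(q, pos, r_eff, eps_in=1.0, eps_out=80.0):
        """Direct O(N^2) generalized Born pair sum (what the treecode speeds up).
        q: charges (N,), pos: coordinates (N, 3), r_eff: effective Born radii
        (assumed precomputed). Uses Still's f_GB form for illustration only."""
        pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
        energy = 0.0
        for i in range(len(q)):
            for j in range(len(q)):          # i == j terms give the Born self-energies
                r2 = np.sum((pos[i] - pos[j]) ** 2)
                rr = r_eff[i] * r_eff[j]
                f_gb = np.sqrt(r2 + rr * np.exp(-r2 / (4.0 * rr)))
                energy += pref * q[i] * q[j] / f_gb
        return energy
    ```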

  10. A MULTICORE BASED PARALLEL IMAGE REGISTRATION METHOD

    PubMed Central

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L.; Foran, David J.

    2012-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a non-regular data partition algorithm using K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
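
    A minimal sketch of the partitioning idea, assuming 2D landmark coordinates in a NumPy float array; plain Lloyd-style k-means stands in for the clustering step, and all names here are hypothetical.

    ```python
    import numpy as np

    def partition_landmarks(landmarks, n_cores, n_iter=50, seed=0):
        """Group landmark points into n_cores spatially coherent clusters so
        that each core can match one subset. landmarks: (N, 2) float array."""
        rng = np.random.default_rng(seed)
        centers = landmarks[rng.choice(len(landmarks), n_cores, replace=False)]
        for _ in range(n_iter):
            dists = np.linalg.norm(landmarks[:, None, :] - centers[None, :, :], axis=2)
            assign = dists.argmin(axis=1)            # nearest center per landmark
            for k in range(n_cores):
                if np.any(assign == k):              # keep old center if a cluster empties
                    centers[k] = landmarks[assign == k].mean(axis=0)
        return assign, centers
    ```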

  11. Lagrangian based methods for coherent structure detection

    SciTech Connect

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  12. Chapter 11. Community analysis-based methods

    SciTech Connect

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  13. Method for extruding pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch-based foam is disclosed. The method includes the steps of: forming a viscous pitch foam precursor; passing the precursor through an extrusion tube; and subjecting the precursor in said extrusion tube to a temperature gradient which varies along the length of the extrusion tube, to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber, such that a viscous pitch foam formed in the chamber passes through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.

  14. Dreamlet-based interpolation using POCS method

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Wu, Ru-Shan; Geng, Yu; Chen, Xiaohong

    2014-10-01

    Due to incomplete and non-uniform coverage of the acquisition system and dead traces, real seismic data always has some missing traces, which affect the performance of multi-channel algorithms such as Surface-Related Multiple Elimination (SRME), imaging, and inversion. Therefore, it is necessary to interpolate seismic data. The dreamlet transform has been successfully used in the modeling of seismic wave propagation and imaging, and this paper explores its application to seismic data interpolation. In order to avoid spatial aliasing in the transform domain and thus allow an arbitrary under-sampling rate, an improved jittered under-sampling strategy is proposed to better control the dataset. With an L0 constraint and the Projection Onto Convex Sets (POCS) method, the performance of dreamlet-based and curvelet-based interpolation is compared in terms of recovered signal-to-noise ratio (SNR) and convergence rate. Tests on synthetic and real cases demonstrate that the dreamlet transform has superior performance to the curvelet transform.
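
    As a hedged illustration of the POCS iteration described above: since no dreamlet transform exists in standard libraries, a 2D Fourier transform stands in for it below, and the linearly decreasing threshold schedule and parameter names are assumptions.

    ```python
    import numpy as np

    def pocs_interpolate(data, mask, n_iter=100, p_max=0.99, p_min=0.01):
        """POCS-style trace interpolation sketch. data: 2D section with missing
        traces zeroed; mask: 1 where traces were observed, 0 where missing.
        The sparsifying transform here is a 2D FFT, not a dreamlet transform."""
        x = data.copy()
        for k in range(n_iter):
            coeffs = np.fft.fft2(x)
            frac = p_max - k * (p_max - p_min) / (n_iter - 1)
            tau = frac * np.abs(coeffs).max()        # decreasing threshold schedule
            coeffs[np.abs(coeffs) < tau] = 0.0       # hard threshold ~ L0 constraint
            x = np.real(np.fft.ifft2(coeffs))
            x = data * mask + x * (1 - mask)         # project onto the observed data
        return x
    ```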

  15. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  16. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  17. Imaging Earth's Interior Based Upon Adjoint Methods

    NASA Astrophysics Data System (ADS)

    Tromp, J.; Komatitsch, D.; Liu, Q.; Tape, C.; Maggi, A.

    2008-12-01

    Modern numerical methods in combination with rapid advances in parallel computing have enabled the simulation of seismic wave propagation in 3D Earth models at unprecedented resolution and accuracy. On a modest PC cluster one can now simulate global seismic wave propagation at periods of 20 s and longer, accounting for heterogeneity in the crust and mantle, topography, anisotropy, attenuation, fluid-solid interactions, self-gravitation, rotation, and the oceans. On the 'Ranger' system at the Texas Advanced Computing Center one can break the 2 s barrier. By drawing connections between seismic tomography, adjoint methods popular in climate and ocean dynamics, time-reversal imaging, and finite-frequency 'banana-doughnut' kernels, it has been demonstrated that Fréchet derivatives for tomographic and (finite) source inversions in complex 3D Earth models may be obtained based upon just two numerical simulations for each earthquake: one calculation for the current model and a second, 'adjoint', calculation that uses time-reversed signals at the receivers as simultaneous, fictitious sources. The adjoint wavefield is calculated while the regular wavefield is reconstructed on the fly by propagating the last frame of the wavefield, saved by a previous forward simulation, backward in time. This approach has been used to calculate sensitivity kernels in regional and global Earth models for various body- and surface-wave arrivals. These kernels illustrate the sensitivity of the observations to the structural parameters and form the basis of 'adjoint tomography'. We use a non-linear conjugate gradient method in combination with a source subspace projection preconditioning technique to iteratively minimize the misfit function. Using an automated time window selection algorithm, our emphasis is on matching targeted, frequency-dependent body-wave traveltimes and surface-wave phase anomalies, rather than entire waveforms. To avoid reaching a local minimum in the optimization procedure, we

  18. Adaptive Discontinuous Galerkin Methods in Multiwavelets Bases

    SciTech Connect

    Archibald, Richard K; Fann, George I; Shelton Jr, William Allison

    2011-01-01

    We use a multiwavelet basis with the Discontinuous Galerkin (DG) method to produce a multi-scale DG method. We apply this Multiwavelet DG method to convection and convection-diffusion problems in multiple dimensions. Merging the DG method with multiwavelets allows the adaptivity in the DG method to be resolved through manipulation of multiwavelet coefficients rather than grid manipulation. Additionally, the Multiwavelet DG method is tested on non-linear equations in one dimension and on the cubed sphere.

  19. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew [Mill Valley, CA]

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  20. Multifractal Framework Based on Blanket Method

    PubMed Central

    Paskaš, Milorad P.; Reljin, Irini S.; Reljin, Branimir D.

    2014-01-01

    This paper proposes local multifractal measures motivated by the blanket method for the calculation of fractal dimension. They cover both fractal approaches familiar in image processing: two of the measures (proposed Methods 1 and 3) support a model of an image with embedded dimension three, while the other supports a model of an image embedded in a space of dimension three (proposed Method 2). While the classical blanket method provides only one value for an image (the fractal dimension), the multifractal spectrum obtained by any of the proposed measures gives a whole range of dimensional values. This means that the proposed multifractal blanket model generalizes the classical (monofractal) blanket method and other versions of this monofractal approach implemented locally. The proposed measures are validated on the Brodatz image database through texture classification. All proposed methods give similar classification results, while the average computation time of Method 3 is substantially longer. PMID:24578664

  1. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, Andrew M.; Dawson, John

    1993-01-01

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.

  2. Servo Control Using Wave-Based Method

    NASA Astrophysics Data System (ADS)

    Marek, O.

    The wave-based control of flexible mechanical systems has been developed. It is based on the idea of sending waves into the mechanical system, measuring the incoming waves, and avoiding re-sending these waves into the continuum again. This approach effectively absorbs the energy coming from the system. It has been successfully applied in a number of simulations. This paper deals with the implementation of wave-based control in experiments using a servomotor. In particular, it describes the implementation on a Yaskawa servomotor and its PLC system.

  3. Adaptive Kernel Based Machine Learning Methods

    DTIC Science & Technology

    2012-10-15

    multiscale collocation method with a matrix compression strategy to discretize the system of integral equations and then use the multilevel ... augmentation method to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are developed for these methods. The ... performance of the proximity algorithms for the L1/TV denoising model. This leads us to a new characterization of all solutions to the L1/TV model via fixed

  4. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, A.M.; Dawson, J.

    1993-12-14

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.

  5. HMM-Based Gene Annotation Methods

    SciTech Connect

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  6. Roadside-based communication system and method

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron D. (Inventor)

    2007-01-01

    A roadside-based communication system providing backup communication between emergency mobile units and emergency command centers. In the event of failure of a primary communication, the mobile units transmit wireless messages to nearby roadside controllers that may take the form of intersection controllers. The intersection controllers receive the wireless messages, convert the messages into standard digital streams, and transmit the digital streams along a citywide network to a destination intersection or command center.

  7. Method of casting pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A process for producing molded pitch-based foam is disclosed which minimizes cracking. The process includes forming a viscous pitch foam in a container, and then transferring the viscous pitch foam from the container into a mold. The viscous pitch foam in the mold is hardened to provide a carbon foam having a relatively uniform distribution of pore sizes and a highly aligned graphitic structure in the struts.

  8. Alaska climate divisions based on objective methods

    NASA Astrophysics Data System (ADS)

    Angeloff, H.; Bieniek, P. A.; Bhatt, U. S.; Thoman, R.; Walsh, J. E.; Daly, C.; Shulski, M.

    2010-12-01

    Alaska is vast geographically, is located at high latitudes, is surrounded on three sides by oceans and has complex topography, encompassing several climate regions. While climate zones exist, there has not been an objective analysis to identify regions of homogeneous climate. In this study we use cluster analysis on a robust set of weather observation stations in Alaska to develop climate divisions for the state. Similar procedures have been employed in the contiguous United States and other parts of the world. Our analysis, based on temperature and precipitation, yielded a set of 10 preliminary climate divisions. These divisions include an eastern and western Arctic (bounded by the Brooks Range to the south), a west coast region along the Bering Sea, and eastern and western Interior regions (bounded to the south by the Alaska Range). South of the Alaska Range there were the following divisions: an area around Cook Inlet (also including Valdez), coastal and inland areas along Bristol Bay including Kodiak and Lake Iliamna, the Aleutians, and Southeast Alaska. To validate the climate divisions based on relatively sparse station data, additional sensitivity analysis was performed. Additional clustering analysis utilizing the gridded North American Regional Reanalysis (NARR) was also conducted. In addition, the divisions were evaluated using correlation analysis. These sensitivity tests support the climate divisions based on cluster analysis.

  9. Method for producing iron-based catalysts

    DOEpatents

    Farcasiu, Malvina; Kaufman, Phillip B.; Diehl, J. Rodney; Kathrein, Hendrik

    1999-01-01

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  10. Method for producing iron-based catalysts

    SciTech Connect

    Farcasiu, M.; Kaufman, P.B.; Diehl, J.R.; Kathrein, H.

    1999-09-07

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  11. A power function method for estimating base flow.

    PubMed

    Lott, Darline A; Stewart, Mark T

    2013-01-01

    Analytical base flow separation techniques are often used to determine the base flow contribution to total stream flow. Most analytical methods derive base flow from discharge records alone, without using basin-specific variables other than basin area. This paper derives a power function for estimating base flow, of the form aQ^b + cQ: an analytical method calibrated against an integrated basin variable, specific conductance, that relates base flow to total discharge and is consistent with the observed mathematical behavior of dissolved solids in stream flow with varying discharge. Advantages of the method are that it is uncomplicated, reproducible, and applicable to hydrograph separation in basins with limited specific conductance data. The power function relationship between base flow and discharge holds over a wide range of basin areas. It better replicates base flow determined by mass balance methods than analytical methods such as filters or smoothing routines that are not calibrated to natural tracers or to empirical basin- and gauge-specific variables. Also, it can be used with discharge during periods without specific conductance values, including separating base flow from quick flow for single events. However, it may overestimate base flow during very high flow events. Application of geochemical mass balance and power function base flow separation methods to stream flow and specific conductance records from multiple gauges in the same basin suggests that analytical base flow separation methods must be calibrated at each gauge. Using average values of coefficients introduces a potentially significant and unknown error in base flow as compared with mass balance methods.
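
    A minimal sketch of applying the power-function form, assuming the coefficients a, b, and c have already been calibrated for the gauge against specific-conductance mass-balance separations.

    ```python
    def base_flow(Q, a, b, c):
        """Power-function base flow estimate of the form a*Q**b + c*Q.
        Q: total discharge; a, b, c: gauge-specific calibrated coefficients."""
        return a * Q**b + c * Q

    # e.g., separating a single event into base flow and quick flow:
    # qb = base_flow(Q, a, b, c); quick = Q - qb
    ```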

  12. Method for hardfacing a ferrous base material

    SciTech Connect

    Sakaguchi, S.; Ito, H.; Shiroyama, M.

    1984-10-23

    Tungsten carbide and nickel-phosphorus alloy coexist in individual particles. The composite powder, produced by mechanically mixing these two substances, consists of 30 to about 95 percent by weight of tungsten carbide, with the balance nickel-phosphorus alloy. This powder is sprayed onto the ferrous base material, resulting in a uniform dispersion of both tungsten carbide and nickel-phosphorus and tight adhesion to the surface, because the tungsten carbide and nickel-phosphorus alloy coexist in individual particles in the composite. A hard metal coating having high hardness and excellent wear resistance is produced after the surface of the coating is heated in a non-oxidizing atmosphere until the nickel-phosphorus alloy forms a liquid phase. This hard metal coating is used for various kinds of wear-resistant materials.

  13. PCLC flake-based apparatus and method

    DOEpatents

    Cox, Gerald P; Fromen, Cathy A; Marshall, Kenneth L; Jacobs, Stephen D

    2012-10-23

    A PCLC flake/fluid host suspension that enables dual-frequency, reverse-drive reorientation and relaxation of the PCLC flakes is composed of a fluid host that is a mixture of: 94 to 99.5 wt% of a non-aqueous fluid medium having a dielectric constant ε, where 1 < ε < 7, a conductivity σ, where 10^-9 S/m < σ < 10^-7 S/m, and a resistivity r, where 10^7 Ω·m < r < 10^10 Ω·m, and which is optically transparent in a selected wavelength range Δλ; 0.0025 to 0.25 wt% of an inorganic chloride salt; 0.0475 to 4.75 wt% water; and 0.25 to 2 wt% of an anionic surfactant; with 1 to 5 wt% of PCLC flakes suspended in the fluid host mixture. Various encapsulation forms and methods are disclosed, including a basic test cell, a microwell, a microcube, direct encapsulation (I), direct encapsulation (II), and coacervation encapsulation. Applications to display devices are disclosed.

  14. FOCUS: a deconvolution method based on algorithmic complexity

    NASA Astrophysics Data System (ADS)

    Delgado, C.

    2006-07-01

    A new method for improving the resolution of images is presented. It is based on Occam's razor principle implemented using algorithmic complexity arguments. The performance of the method is illustrated using artificial and real test data.

  15. Pyrolyzed-parylene based sensors and method of manufacture

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Liger, Matthieu (Inventor); Miserendino, Scott (Inventor); Konishi, Satoshi (Inventor)

    2007-01-01

    A method (and resulting structure) for fabricating a sensing device. The method includes providing a substrate comprising a surface region and forming an insulating material overlying the surface region. The method also includes forming a film of carbon-based material overlying the insulating material and treating the film to pyrolyze the carbon-based material, causing formation of a film of substantially carbon-based material having a resistivity within a predetermined range. The method also provides at least a portion of the pyrolyzed carbon-based material in a sensor application and uses that portion in the sensing application. In a specific embodiment, the sensing application is selected from chemical, humidity, piezoelectric, radiation, mechanical strain, or temperature sensing.

  16. Exploring the Query Expansion Methods for Concept Based Representation

    DTIC Science & Technology

    2014-11-01

    Exploring the Query Expansion Methods for Concept Based Representation Yue Wang and Hui Fang Department of Electrical and Computer Engineering...physicians find relevant medical cases for patients they are dealing with. Concept based representation has been shown to be effective in biomedical...in this paper, we explored two external resources to perform query expansion for the basic concept based representation method, and discussed the

  17. Modeling electrokinetic flow by Lagrangian particle-based method

    NASA Astrophysics Data System (ADS)

    Pan, Wenxiao; Kim, Kyungjoo; Perego, Mauro; Tartakovsky, Alexandre; Parks, Mike

    2015-11-01

    This work focuses on mathematical models and numerical schemes based on the Lagrangian particle-based method that can effectively capture the mesoscale multiphysics (hydrodynamics, electrostatics, and advection-diffusion) associated with applications in micro-/nano-transport and technology. The order of accuracy of the particle-based method is significantly improved with the presented implicit consistent numerical scheme. Specifically, we show simulation results on electrokinetic flows and microfluidic mixing processes in micro-/nano-channels and through semi-permeable porous structures.

  18. Oriented Connectivity-Based Method for Segmenting Solar Loops

    NASA Technical Reports Server (NTRS)

    Lee, J. K.; Newman, T. S.; Gary, G. A.

    2005-01-01

    A method based on oriented connectivity that can automatically segment arc-like structures (solar loops) from intensity images of the Sun's corona is introduced. The method is a constructive approach that uses model-guided processing to enable extraction of credible loop structures. Since the solar loops are vestiges of the solar magnetic field, the model-guided processing exploits external estimates of this field's local orientations that are derived from a physical magnetic field model. Empirical studies of the method's effectiveness are also presented. The oriented connectivity-based method is the first automatic method for the segmentation of solar loops.

  19. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve the measurement accuracy of CMFs, a correlation theory-based signal processing method for CMF signals is proposed, which comprises a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it outperforms the adaptive notch filter, discrete Fourier transform, and autocorrelation methods; for phase difference estimation it outperforms the data extension-based correlation, Hilbert transform, quadrature delay estimator, and discrete Fourier transform methods. This contributes to improving the measurement accuracy of Coriolis mass flowmeters.
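
    For illustration, a sketch of one plausible correlation-based estimator in this spirit; the windowed spectral-peak frequency step and the peak-lag phase conversion are simplifications, not the paper's exact estimators.

    ```python
    import numpy as np

    def cmf_freq_phase(x, y, fs):
        """Estimate the common frequency of two CMF sensor signals and their
        phase difference. Cross-correlation suppresses uncorrelated noise;
        the lag of its peak gives the time delay, converted here to phase."""
        n = len(x)
        spec = np.abs(np.fft.rfft(x * np.hanning(n)))
        k = spec[1:].argmax() + 1                    # spectral peak, skipping DC
        f = np.fft.rfftfreq(n, 1.0 / fs)[k]
        r = np.correlate(x, y, mode="full")          # cross-correlation
        lag = r.argmax() - (n - 1)                   # peak lag in samples
        return f, 2.0 * np.pi * f * lag / fs         # (frequency, phase difference)
    ```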

  20. IFCM Based Segmentation Method for Liver Ultrasound Images.

    PubMed

    Jain, Nishant; Kumar, Vinod

    2016-11-01

    In this paper we propose an iterative Fuzzy C-Means (IFCM) method which divides the pixels present in the image into a set of clusters. This set of clusters is then used to segment a focal liver lesion from a liver ultrasound image. The advantage of the IFCM method is that the n-cluster FCM method may lead to a non-uniform distribution of centroids, whereas in the IFCM method the centroids are always uniformly distributed. The proposed method is compared with the active contour Chan-Vese (CV) method and the MAP-MRF method, both implemented in MATLAB. The proposed method is also compared with the region-based active contour region-scalable fitting energy (RSFE) method, whose MATLAB code is available on the authors' website. Since no comparison is available on a common database, the performance of the three methods and the proposed method has been compared on the liver ultrasound (US) images available to us. The proposed method gives the best accuracy of 99.8%, compared to accuracies of 99.46%, 95.81%, and 90.08% given by the CV, MAP-MRF, and RSFE methods, respectively. The computation time taken by the proposed segmentation method is 14.25 s, compared to 44.71, 41.27, and 49.02 s taken by the CV, MAP-MRF, and RSFE methods, respectively.
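
    For context, a minimal sketch of the standard fuzzy c-means updates that an iterative variant like IFCM builds on; this is plain FCM with hypothetical names, not the authors' IFCM.

    ```python
    import numpy as np

    def fcm(X, c, m=2.0, n_iter=100, seed=0):
        """Plain fuzzy c-means. X: (n_points, n_features); c: cluster count;
        m > 1: fuzzifier. Returns memberships U (n_points, c) and centers."""
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))       # initial memberships
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            inv = d ** (-2.0 / (m - 1.0))
            U = inv / inv.sum(axis=1, keepdims=True)     # standard FCM membership update
        return U, centers
    ```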

  1. Comparison of conventional staining methods and monoclonal antibody-based methods for Cryptosporidium oocyst detection.

    PubMed Central

    Arrowood, M J; Sterling, C R

    1989-01-01

    The sensitivity and specificity of seven microscopy-based Cryptosporidium oocyst detection methods were compared after application to unconcentrated fecal smears. The seven methods were as follows: (i) a commercial acid-fast (AF) stain (VOLU-SOL) method, (ii) the Truant auramine-rhodamine (AR) stain method, (iii) the fluorescein-conjugated C1B3 monoclonal antibody (MAb) direct fluorescence method, (iv) the OW3 MAb indirect fluorescence method, (v) the biotinylated OW3 indirect fluorescence method, (vi) the biotinylated OW3-indirect diaminobenzidine (DAB) method, and (vii) the biotinylated OW3-aminoethylcarbazole (AEC) method. A total of 281 randomly collected Formalin-fixed fecal samples (submitted to the Maricopa County Health Department, Phoenix, Ariz.) and 30 known positives (Formalin-fixed and K2Cr2O7-preserved stools from our laboratory) were examined in a blind test; 32 of 311 samples (10.3%) were confirmed positive. Of the confirmed positives, 40.6% were identified by the AF method, 93.8% by the AR method, 93.8% by the C1B3 method, 81.3% by the OW3-DAB method, 71.9% by the OW3-AEC method, 100% by the OW3 indirect fluorescence method, and 100% by the biotinylated OW3 indirect fluorescence method. False-positives were encountered with the AF and AR methods (52.0% and 85.7% specificity, respectively), while no false-positives were encountered with the MAb-based methods. Oocysts in infected tissue sections were easily detected by the MAb-based methods. PMID:2475523

  2. Propensity Score-Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application.

    PubMed

    Zhou, Xiang; Xie, Yu

    2016-02-01

    Since the seminal introduction of the propensity score by Rosenbaum and Rubin, propensity-score-based (PS-based) methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the propensity score approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For situations where this assumption may be violated, Heckman and his associates have recently developed a novel approach based on marginal treatment effects (MTE). In this paper, we (1) explicate consequences for PS-based methods when aspects of the ignorability assumption are violated; (2) compare PS-based methods and MTE-based methods by making a close examination of their identification assumptions and estimation performances; (3) apply these two approaches in estimating the economic return to college using data from NLSY 1979 and discuss their discrepancies in results. When there is a sorting gain but no systematic baseline difference between treated and untreated units given observed covariates, PS-based methods can identify the treatment effect of the treated (TT). The MTE approach performs best when there is a valid and strong instrumental variable (IV). In addition, this paper introduces the "smoothing-difference PS-based method," which enables us to uncover heterogeneity across people of different propensity scores in both counterfactual outcomes and treatment effects.

  3. An entropy-based objective evaluation method for image segmentation

    NASA Astrophysics Data System (ADS)

    Zhang, Hui; Fritts, Jason E.; Goldman, Sally A.

    2003-12-01

    Accurate image segmentation is important for many image, video and computer vision applications. Over the last few decades, many image segmentation methods have been proposed. However, the results of these segmentation methods are usually evaluated only visually, qualitatively, or indirectly by the effectiveness of the segmentation on the subsequent processing steps. Such methods are either subjective or tied to particular applications. They do not judge the performance of a segmentation method objectively, and cannot be used as a means to compare the performance of different segmentation techniques. A few quantitative evaluation methods have been proposed, but these early methods have been based entirely on empirical analysis and have no theoretical grounding. In this paper, we propose a novel objective segmentation evaluation method based on information theory. The new method uses entropy as the basis for measuring the uniformity of pixel characteristics (luminance is used in this paper) within a segmentation region. The evaluation method provides a relative quality score that can be used to compare different segmentations of the same image. This method can be used to compare both various parameterizations of one particular segmentation method as well as fundamentally different segmentation techniques. The results from this preliminary study indicate that the proposed evaluation method is superior to the prior quantitative segmentation evaluation techniques, and identify areas for future research in objective segmentation evaluation.
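
    A minimal sketch of the region-uniformity part of such a measure, assuming 8-bit luminance values and an integer label map; the published measure may combine this with a term penalizing over-segmentation, which is omitted here.

    ```python
    import numpy as np

    def region_entropy_score(luminance, labels):
        """Area-weighted Shannon entropy of the luminance histogram inside
        each segmentation region; lower values mean more uniform regions.
        luminance: 2D array of 0..255 values; labels: same-shape int map."""
        total = labels.size
        score = 0.0
        for r in np.unique(labels):
            vals = luminance[labels == r]
            hist, _ = np.histogram(vals, bins=256, range=(0, 256))
            p = hist[hist > 0] / vals.size           # per-region luminance pmf
            score += (vals.size / total) * -(p * np.log2(p)).sum()
        return score
    ```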

  4. A new ultrasound based method for rapid microorganism detection

    NASA Astrophysics Data System (ADS)

    Shukla, Shiva Kant; Segura, Luis Elvira; Sánchez, Carlos José Sierra; López, Pablo Resa

    2012-05-01

    A new method for the rapid detection of catalase-positive microorganisms using an ultrasonic measuring method is proposed in this work. The developed technique is based on the detection of oxygen bubbles produced by the decomposition of hydrogen peroxide induced by the enzyme catalase, which is present in many microorganisms. The bubbles are trapped in a medium based on agar gel which was especially developed for microbiological evaluation. It is found that microorganism concentrations of the order of 10^5 c.f.u./ml can be detected by using this method. The results obtained show that the proposed method is competitive with other modern commercial methods such as ATP luminescence systems. The method can also be used for the characterization of enzyme activity.

  5. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  6. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  7. A Novel Method for Learner Assessment Based on Learner Annotations

    ERIC Educational Resources Information Center

    Noorbehbahani, Fakhroddin; Samani, Elaheh Biglar Beigi; Jazi, Hossein Hadian

    2013-01-01

    Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

  8. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, G.F.; Steindler, M.J.

    1985-05-21

    A method of removing a phosphorus-based poisonous substance from contaminated water is presented; the toxicity of the phosphorus-based substance is also subsequently destroyed. A water-immiscible organic solvent is first immobilized on a supported liquid membrane before the contaminated water is contacted with one side of the supported liquid membrane to absorb the phosphorus-based substance into the organic solvent. The other side of the supported liquid membrane is contacted with a hydroxy-affording strong base, which reacts with the solvated phosphorus-based species to form a non-toxic product.

  9. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1987-10-07

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  10. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1990-10-09

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  11. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.

    1990-01-01

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.

  12. Method of recovering oil-based fluid and apparatus

    SciTech Connect

    Brinkley, H.E.

    1993-07-20

    A method is described for recovering oil-based fluid from a surface having oil-based fluid thereon comprising the steps of: applying to the oil-based fluid on the surface an oil-based fluid absorbent cloth of man-made fibers, the cloth having at least one napped surface that defines voids therein, the nap being formed of raised ends or loops of the fibers; absorbing, with the cloth, oil-based fluid; feeding the cloth having absorbed oil-based fluid to a means for applying a force to the cloth to recover oil-based fluid; and applying force to the cloth to recover oil-based fluid therefrom using the force applying means.

  13. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  14. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  15. An overview of modal-based damage identification methods

    SciTech Connect

    Farrar, C.R.; Doebling, S.W.

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods, and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that are not dependent on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms, including difficulties associated with their implementation and their fidelity. Past, current, and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  16. Computer based safety training: an investigation of methods

    PubMed Central

    Wallen, E; Mulloy, K

    2005-01-01

    Background: Computer based methods are increasingly being used for training workers, although our understanding of how to structure this training has not kept pace with the changing abilities of computers. Information on a computer can be presented in many different ways and the style of presentation can greatly affect learning outcomes and the effectiveness of the learning intervention. Many questions about how adults learn from different types of presentations and which methods best support learning remain unanswered. Aims: To determine if computer based methods, which have been shown to be effective on younger students, can also be an effective method for older workers in occupational health and safety training. Methods: Three versions of a computer based respirator training module were developed and presented to manufacturing workers: one consisting of text only; one with text, pictures, and animation; and one with narration, pictures, and animation. After instruction, participants were given two tests: a multiple choice test measuring low level, rote learning; and a transfer test measuring higher level learning. Results: Participants receiving the concurrent narration with pictures and animation scored significantly higher on the transfer test than did workers receiving the other two types of instruction. There were no significant differences between groups on the multiple choice test. Conclusions: Narration with pictures and text may be a more effective method for training workers about respirator safety than other popular methods of computer based training. Further study is needed to determine the conditions for the effective use of this technology. PMID:15778259

  17. Integrated navigation method based on inertial navigation system and Lidar

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment is considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on this analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed. According to the results of the observability analysis, the integrated system was optimized. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of the traditional integrated navigation method and the DR method, the proposed method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment errors are effectively compensated by the proposed method, and the new integrated navigation method is valid.
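
    As a hedged illustration of why augmenting the error model with an IMU scale-factor state pays off, the toy below runs a one-dimensional error-state Kalman filter in which the INS position error grows through an unknown accelerometer scale factor, and a Lidar-like 1 Hz position fix makes that state observable. The dynamics, noise levels and update rate are invented for the sketch and are not the paper's model.

        import numpy as np

        rng = np.random.default_rng(1)
        dt, steps = 0.01, 6000
        s_true = 0.02                      # 2% accelerometer scale-factor error

        p = v = 0.0                        # true position and velocity
        p_ins = v_ins = 0.0                # INS-mechanized position and velocity
        x = np.zeros(3)                    # error state [dp, dv, s]
        P = np.diag([1.0, 1.0, 1e-2])
        Q = np.diag([0.0, 1e-6, 0.0])      # process noise (illustrative)
        Rm = np.array([[0.05 ** 2]])       # Lidar position-fix variance

        for k in range(steps):
            a = np.sin(0.5 * k * dt)                        # true acceleration
            a_m = (1.0 + s_true) * a + rng.normal(0, 1e-3)  # IMU output
            v += a * dt; p += v * dt
            v_ins += a_m * dt; p_ins += v_ins * dt

            # error-state dynamics: dp' = dv, dv' = a*s, s' = 0
            F = np.array([[1.0, dt, 0.0], [0.0, 1.0, a * dt], [0.0, 0.0, 1.0]])
            x = F @ x
            P = F @ P @ F.T + Q

            if k % 100 == 0:                                # 1 Hz Lidar position fix
                z = p_ins - (p + rng.normal(0, 0.05))       # observed INS position error
                H = np.array([[1.0, 0.0, 0.0]])
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + Rm)
                x = x + K @ (z - H @ x)
                P = (np.eye(3) - K @ H) @ P

        print("estimated scale factor:", x[2], "true:", s_true)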

  18. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
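
    A minimal reconstruction of the idea (not the paper's code): for a cantilever whose tip deflection scales as delta ~ h**-3 in the section height h, the sensitivity d(delta)/dh = -3*delta/h can be read as a differential equation and solved in closed form, while the linear Taylor approximation degrades for large perturbations.

        import numpy as np

        h0, d0 = 1.0, 1.0              # baseline height and tip deflection (normalized)

        def exact(h):                  # bending model: deflection ~ h**-3
            return d0 * (h0 / h) ** 3

        def taylor(h):                 # linear Taylor series about h0
            return d0 - 3.0 * d0 / h0 * (h - h0)

        def deb(h):                    # sensitivity d(delta)/dh = -3*delta/h,
            return d0 * (h0 / h) ** 3  # integrated in closed form

        for h in (1.1, 1.3, 1.5):      # DEB stays exact, Taylor drifts
            print(h, exact(h), taylor(h), deb(h))

    Because this toy model is a pure power law, the closed-form DEB approximation coincides with the exact solution; for general structures it is only an approximation, but typically a better one than the linear series.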

  19. [Reconstituting evaluation methods based on both qualitative and quantitative paradigms].

    PubMed

    Miyata, Hiroaki; Okubo, Suguru; Yoshie, Satoru; Kai, Ichiro

    2011-01-01

    Debate about the relationship between quantitative and qualitative paradigms is often muddled and confusing, and the clutter of terms and arguments has resulted in the concepts becoming obscure and unrecognizable. In this study we conducted a content analysis of evaluation methods in qualitative healthcare research. We extracted descriptions of four types of evaluation paradigm (validity/credibility, reliability/dependability, objectivity/confirmability, and generalizability/transferability), and classified them into subcategories. In quantitative research, there have been many evaluation methods based on qualitative paradigms, and vice versa. Thus, it might not be useful to consider evaluation methods of the qualitative paradigm as isolated from those of quantitative methods. Choosing practical evaluation methods based on the situation and prior conditions of each study is an important approach for researchers.

  20. The Reality-Based Learning Method: A Simple Method for Keeping Teaching Activities Relevant and Effective

    ERIC Educational Resources Information Center

    Smith, Louise W.; Van Doren, Doris C.

    2004-01-01

    Active and experiential learning theory have not dramatically changed collegiate classroom teaching methods, although they have long been included in the pedagogical literature. This article presents an evolved method, reality based learning, that aids professors in including active learning activities with feelings of clarity and confidence. The…

  1. Energy-Based Acoustic Source Localization Methods: A Survey

    PubMed Central

    Meng, Wei; Xiao, Wendong

    2017-01-01

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits and to point out possible future research directions. PMID:28212281
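
    The nonlinear-least-squares formulation surveyed here can be made concrete with a toy example: each sensor reads energy y_i = S / ||x - s_i||^2 plus noise, and the source position is the minimizer of the squared residuals. The sketch below uses a coarse grid search and assumes the source power S and unit sensor gains are known, both simplifying assumptions.

        import numpy as np

        rng = np.random.default_rng(2)
        sensors = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
        src, S = np.array([3.0, 7.0]), 100.0

        d2 = np.sum((sensors - src) ** 2, axis=1)
        y = S / d2 + rng.normal(0, 0.01, len(sensors))   # noisy energy readings

        # nonlinear least squares via a coarse grid search (unit gains assumed)
        gx, gy = np.meshgrid(np.linspace(0, 10, 201), np.linspace(0, 10, 201))
        cost = np.zeros_like(gx)
        for s_i, y_i in zip(sensors, y):
            r2 = (gx - s_i[0]) ** 2 + (gy - s_i[1]) ** 2 + 1e-9
            cost += (y_i - S / r2) ** 2
        i, j = np.unravel_index(np.argmin(cost), cost.shape)
        print("estimate:", gx[i, j], gy[i, j], "truth:", src)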

  2. Energy-Based Acoustic Source Localization Methods: A Survey.

    PubMed

    Meng, Wei; Xiao, Wendong

    2017-02-15

    Energy-based source localization is an important problem in wireless sensor networks (WSNs), which has been studied actively in the literature. Numerous localization algorithms, e.g., maximum likelihood estimation (MLE) and nonlinear-least-squares (NLS) methods, have been reported. In the literature, there are relevant review papers for localization in WSNs, e.g., for distance-based localization. However, not much work related to energy-based source localization is covered in the existing review papers. Energy-based methods are proposed and specially designed for a WSN due to its limited sensor capabilities. This paper aims to give a comprehensive review of these different algorithms for energy-based single and multiple source localization problems, their merits and demerits and to point out possible future research directions.

  3. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  4. Fast simulation method for airframe analysis based on big data

    NASA Astrophysics Data System (ADS)

    Liu, Dongliang; Zhang, Lixin

    2016-10-01

    In this paper, we apply a big data method to structural analysis by considering the correlations between loads and loads, between loads and results, and between results and results. By means of fundamental mathematics and physical rules, the principle, feasibility and error control of the method are discussed. We then establish the analysis process and procedures. The method is validated by two examples. The results show that the fast simulation method based on big data is fast and precise when applied to structural analysis.

  5. A Minimum Spanning Tree Based Method for UAV Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wei, Zheng; Cui, Weihong; Lin, Zhiyong

    2016-06-01

    This paper proposes a Minimum Spanning Tree (MST) based image segmentation method for UAV images of coastal areas. An edge-weight-based optimality criterion (merging predicate) is defined based on statistical learning theory (SLT), and a scale control parameter is used to control the segmentation scale. Experiments on high-resolution UAV images of coastal areas show that the proposed merging predicate preserves the integrity of objects and prevents over-segmentation. The segmentation results prove the method's efficiency in segmenting richly textured images with good object boundaries.
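
    To make the merging idea concrete, here is a generic MST-style region-merging sketch in the spirit of Felzenszwalb-Huttenlocher, with a scale parameter k; the authors' SLT-derived predicate is not reproduced, and a 1-D signal stands in for the image graph for brevity.

        import numpy as np

        def mst_segment(values, k=2.0):
            """Kruskal-style merging: visit edges by increasing weight and merge
            two regions when the weight is below min(internal + k/size)."""
            n = len(values)
            parent = list(range(n))
            size = [1] * n
            internal = [0.0] * n           # max edge weight inside each region

            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a

            edges = sorted((abs(values[i + 1] - values[i]), i, i + 1)
                           for i in range(n - 1))
            for w, i, j in edges:
                a, b = find(i), find(j)
                if a != b and w <= min(internal[a] + k / size[a],
                                       internal[b] + k / size[b]):
                    parent[b] = a          # the merging predicate allows it
                    size[a] += size[b]
                    internal[a] = max(internal[a], internal[b], w)
            return [find(i) for i in range(n)]

        signal = np.r_[np.zeros(20), np.full(20, 5.0)]
        signal += np.random.default_rng(3).normal(0, 0.1, 40)
        labels = mst_segment(signal, k=2.0)
        print(len(set(labels)), "regions")   # expect 2: the step is preserved

    On images, the same loop runs over the 4-connected pixel graph with intensity differences as edge weights.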

  6. Group-Based Image Retrieval Method for Video Image Annotation

    NASA Astrophysics Data System (ADS)

    Murabayashi, Noboru; Kurahashi, Setsuya; Yoshida, Kenichi

    This paper proposes a group-based image retrieval method for video image annotation systems. Although the widespread use of video camera recorders has increased the demand for automated annotation of personal videos, conventional image retrieval methods cannot achieve sufficient accuracy to be used as an annotation engine. Recording conditions, such as brightness changes due to weather and shadows cast by the surroundings, affect the quality of images recorded by personal video camera recorders. The degraded images of personal videos make the retrieval task difficult. Furthermore, it is difficult to discriminate similar images without auxiliary information. To cope with these difficulties, this paper proposes a group-based image retrieval method. Its characteristics are 1) the use of image similarity based on wavelet-transform-based features and scale-invariant feature transform (SIFT) based features, and 2) the pre-grouping of related images and screening using group information. Experimental results show that the proposed method improves image retrieval accuracy to 90%, up from 40% for the conventional method.

  7. A hybrid method for pancreas extraction from CT image based on level set methods.

    PubMed

    Jiang, Huiyan; Tan, Hanqing; Fujita, Hiroshi

    2013-01-01

    This paper proposes a novel semiautomatic method to extract the pancreas from abdominal CT images. Traditional level set and region growing methods, which require the initial contour to be located near the final object boundary, suffer from leakage into the tissues neighboring the pancreas. The proposed method consists of a customized fast-marching level set method, which generates an optimal initial pancreas region to overcome the sensitivity of the level set method to the initial contour location, and a modified distance regularized level set method, which extracts the pancreas accurately. The novelty of our method lies in the proper selection and combination of level set methods; furthermore, an energy-decrement algorithm and an energy-tune algorithm are proposed to reduce the negative impact of the bonding force caused by connected tissue whose intensity is similar to that of the pancreas. As a result, our method overcomes over-segmentation at weak boundaries and can accurately extract the pancreas from CT images. The proposed method is compared to five other state-of-the-art medical image segmentation methods on a CT image dataset containing abdominal images from 10 patients. The evaluation results demonstrate that our method outperforms the other methods by achieving higher accuracy and less false segmentation in pancreas extraction.

  8. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state transitions. The full glide trajectory consists of several optimal trajectory sequences. The geographic constraints newly emphasized in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight show the feasible application of the multistage pseudospectral method to reentry trajectory optimization.
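
    The computational kernel of any pseudospectral method is a differentiation matrix on the collocation nodes. The sketch below builds the Chebyshev differentiation matrix (following Trefethen's classic construction; trajectory codes more often use Legendre-Gauss nodes, so this is a stand-in) and verifies spectral accuracy on a smooth function.

        import numpy as np

        def cheb(N):
            """Chebyshev differentiation matrix D and nodes x (Trefethen, 2000)."""
            if N == 0:
                return np.zeros((1, 1)), np.array([1.0])
            x = np.cos(np.pi * np.arange(N + 1) / N)
            c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
            X = np.tile(x, (N + 1, 1)).T
            dX = X - X.T
            D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
            D -= np.diag(D.sum(axis=1))                      # rows sum to zero
            return D, x

        D, x = cheb(16)
        err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))  # derivative of e^x is e^x
        print("max derivative error with 17 nodes:", err)  # tiny: spectral accuracy

    In trajectory optimization, the dynamics are enforced as D X = f(X, U) at the nodes, turning the optimal control problem into a nonlinear program that the multistage strategy re-solves on each segment.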

  9. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of the trajectory as the flight state transitions. The full glide trajectory consists of several optimal trajectory sequences. The geographic constraints newly emphasized in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight show the feasible application of the multistage pseudospectral method to reentry trajectory optimization. PMID:24574929

  10. Scientific method by using project method in acid, base and salt material

    NASA Astrophysics Data System (ADS)

    Febriana, Beta Wulan; Arlianty, Widinda Normalia; Diniaty, Artina

    2017-03-01

    This study aims to determine the effect of the scientific method combined with the project method on students' achievement. The research was conducted at SMPN 2 Karanganyar using a descriptive quantitative method. Two classes were sampled using cluster random sampling. Data were obtained from cognitive instruments, representing pretest and posttest scores, and analyzed using descriptive analysis techniques. The results show that in the class taught with the scientific method using the project method, 37.50% of students reached high achievement, 37.50% moderate, and 4.16% very low. In the class taught with the scientific method alone, 33.3% of students reached high achievement, 8.33% moderate, and 20.83% very low.

  11. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith, and of subsequently destroying the toxicity of the substance, is disclosed. Initially, a water-immiscible organic extractant is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved in the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be a phosphoryl or a thiophosphoryl compound.

  12. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order neighboring vertices are fitted to a cubic surface using the least squares method. Then, taking the locally fitted surface as the PSO search region and the best average quality of the local triangles as the objective, the vertex positions of the mesh are adjusted. Finally, a threshold on the normal angle between the original and adjusted vertex is used to determine whether the vertex should be moved, so as to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method effectively improves the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
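
    For readers unfamiliar with PSO, a minimal global-best variant is sketched below on the sphere function; the paper's actual objective (average quality of the local triangles over the fitted surface) and its mesh bookkeeping are omitted, so this is a generic sketch, not the authors' implementation.

        import numpy as np

        def pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
            rng = np.random.default_rng(4)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n, dim))          # particle positions
            v = np.zeros((n, dim))                     # particle velocities
            pbest = x.copy()                           # per-particle best positions
            pval = np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pval)]                 # global best position
            for _ in range(iters):
                r1, r2 = rng.random((2, n, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                val = np.apply_along_axis(f, 1, x)
                better = val < pval
                pbest[better], pval[better] = x[better], val[better]
                g = pbest[np.argmin(pval)]
            return g, pval.min()

        best, fbest = pso(lambda p: np.sum(p ** 2), dim=3)
        print(best, fbest)   # converges near the origin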

  13. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGES

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  14. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  15. Simple noise-reduction method based on nonlinear forecasting

    NASA Astrophysics Data System (ADS)

    Tan, James P. L.

    2017-03-01

    Nonparametric detrending or noise reduction methods are often employed to separate trends from noisy time series when no satisfactory models exist to fit the data. However, conventional noise reduction methods depend on subjective choices of smoothing parameters. Here we present a simple multivariate noise reduction method based on available nonlinear forecasting techniques. These are in turn based on state-space reconstruction for which a strong theoretical justification exists for their use in nonparametric forecasting. The noise reduction method presented here is conceptually similar to Schreiber's noise reduction method using state-space reconstruction. However, we show that Schreiber's method has a minor flaw that can be overcome with forecasting. Furthermore, our method contains a simple but nontrivial extension to multivariate time series. We apply the method to multivariate time series generated from the Van der Pol oscillator, the Lorenz equations, the Hindmarsh-Rose model of neuronal spiking activity, and to two other univariate real-world data sets. It is demonstrated that noise reduction heuristics can be objectively optimized with in-sample forecasting errors that correlate well with actual noise reduction errors.
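
    The flavor of forecasting-based noise reduction can be shown in a few lines: embed the series in delay space, then replace each value with the average one-step forecast made from the nearest neighbors of the preceding delay vector. This is a simplified sketch of the general idea, not the authors' exact scheme.

        import numpy as np

        def nn_denoise(series, m=3, k=10):
            """Replace y[t] by the mean one-step forecast from the k nearest
            neighbors (in delay-embedding space) of the state preceding t."""
            y = np.asarray(series, dtype=float)
            # delay vectors E[i] = (y[i], ..., y[i+m-1]); forecast target is y[i+m]
            E = np.lib.stride_tricks.sliding_window_view(y, m)[:-1]
            targets = y[m:]
            out = y.copy()
            for i in range(len(E)):
                d = np.linalg.norm(E - E[i], axis=1)
                d[i] = np.inf                      # exclude the point itself
                idx = np.argpartition(d, k)[:k]    # k nearest states
                out[m + i] = targets[idx].mean()   # forecast replaces noisy value
            return out

        t = np.linspace(0, 20 * np.pi, 2000)
        clean = np.sin(t)
        noisy = clean + np.random.default_rng(5).normal(0, 0.2, t.size)
        den = nn_denoise(noisy)
        print(np.std(noisy - clean), np.std(den - clean))   # residual noise shrinks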

  16. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2012-01-01

    Design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is thus essentially innovation in knowledge and information processing. After analyzing the role of mechatronic product design knowledge and the features of information management, a unified XML-based model of product information processing is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. XML-based expressions of product function elements, product structure elements, and the mapping relationship between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.

  17. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2011-12-01

    Design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is thus essentially innovation in knowledge and information processing. After analyzing the role of mechatronic product design knowledge and the features of information management, a unified XML-based model of product information processing is proposed. The information processing model of product design includes functional knowledge, structural knowledge and their relationships. XML-based expressions of product function elements, product structure elements, and the mapping relationship between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.

  18. Multilayer neural network models based on grid methods

    NASA Astrophysics Data System (ADS)

    Lazovskaya, T.; Tarkhov, D.

    2016-11-01

    The article discusses the building of hybrid models relating classical numerical methods for solving ordinary and partial differential equations to the universal neural network approach being developed by D. Tarkhov and A. Vasilyev. Different ways of constructing multilayer neural network structures based on grid methods are considered. A technique for building a continuous approximation using one simple modification of classical schemes is presented. The introduction of non-linear relationships into the classical models, with and without posterior learning, is investigated. Numerical experiments are conducted.

  19. LINEAR SCANNING METHOD BASED ON THE SAFT COARRAY

    SciTech Connect

    Martin, C. J.; Martinez-Graullera, O.; Romero, D.; Ullate, L. G.; Higuti, R. T.

    2010-02-22

    This work presents a method to obtain B-scan images based on linear array scanning and 2R-SAFT. This technique offers several advantages: the ultrasonic system is very simple; it avoids the grating lobe formation characteristic of conventional SAFT; and the subaperture size and focusing lens (to compensate emission-reception) can be adapted dynamically to every image point. The proposed method has been experimentally tested in the inspection of CFRP samples.

  20. Review of atom probe FIB-based specimen preparation methods.

    PubMed

    Miller, Michael K; Russell, Kaye F; Thompson, Keith; Alvis, Roger; Larson, David J

    2007-12-01

    Several FIB-based methods developed to fabricate needle-shaped atom probe specimens from a variety of specimen geometries and site-specific regions are reviewed. These methods have enabled electronic device structures to be characterized. The atom probe may be used to quantify the level and range of gallium implantation, and it has been demonstrated that the use of low accelerating voltages during the final stages of milling can dramatically reduce the extent of gallium implantation.

  1. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main modes of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for product innovation are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed, comprising requirement analysis, total function analysis and decomposition, engineering problem analysis, solution finding for engineering problems, and preliminary design; this establishes the basis for the innovative design of existing products.

  2. How to Reach Evidence-Based Usability Evaluation Methods.

    PubMed

    Marcilly, Romaric; Peute, Linda

    2017-01-01

    This paper discusses how and why to build evidence-based knowledge on usability evaluation methods. At each step of building evidence, the requisites for and difficulties in achieving it are highlighted. Specifically, the paper presents how usability evaluation studies should be designed to allow the accumulation of evidence. Reciprocally, it presents how evidence-based usability knowledge will help improve usability practice. Finally, it underlines that evaluation and evidence participate in a virtuous circle that will help improve scientific knowledge and evaluation practice.

  3. Biorthogonal wavelet-based method of moments for electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Zhang, Qinke

    Wavelet analysis is a technique developed in recent years in mathematics that has found use in signal processing and many other engineering areas. The practical use of wavelets for the solution of partial differential and integral equations in computational electromagnetics is investigated in this dissertation, with emphasis on the development of a biorthogonal wavelet based method of moments for the solution of electric and magnetic field integral equations. The fundamentals and numerical analysis aspects of wavelet theory have been studied. In particular, a family of compactly supported biorthogonal spline wavelet bases on the n-cube (0,1)^n has been studied in detail. The wavelet bases were used in this work as building blocks to construct biorthogonal wavelet bases on general domain geometries. A specific and practical way of adapting the wavelet bases to certain n-dimensional blocks or elements is proposed, based on the domain decomposition and local transformation techniques used in traditional finite element methods and computer-aided graphics. The element, with the biorthogonal wavelet basis embedded in it, is called a wavelet element in this work. The physical domains that can be treated with this method include general curves, surfaces in 2D and 3D, and 3D volume domains. A two-step mapping is proposed for the purpose of taking full advantage of the zero moments of wavelets. The wavelet element approach appears to offer several important advantages. It avoids the need to generate the very complicated meshes required in traditional finite element based methods, and it makes adaptive analysis easy to implement. A specific implementation procedure for performing adaptive analysis is proposed. The proposed biorthogonal wavelet based method of moments (BWMoM) has been implemented using object-oriented programming techniques. The main computational issues have been detailed, discussed, and implemented in the whole package. Numerical examples show

  4. Global gravimetric geoid model based a new method

    NASA Astrophysics Data System (ADS)

    Shen, W. B.; Han, J. C.

    2012-04-01

    The geoid, defined as the equipotential surface nearest to the mean sea level, plays a key role in physical geodesy and in the unification of height datum systems. In this study, we introduce a new method, quite different from the conventional geoid modeling methods (e.g., the Stokes method, the Molodensky method), to determine the global gravimetric geoid (GGG). Based on the new method and using the external Earth gravity field model EGM2008, the digital topographic model DTM2006.0 and the crust density distribution model CRUST2.0, we first determined the inner geopotential field down to a depth D, and then established a GGG model, the accuracy of which is evaluated by comparison with observations from the USA, Australia, parts of Canada, and parts of China. The main idea of the new method is stated as follows. Given the geopotential field (e.g. EGM2008) outside the Earth, we may determine the inner geopotential field down to the depth D by using the Newtonian integral, once the density distribution model (e.g. CRUST2.0) of a shallow layer down to the depth D is given. Then, based on the definition of the geoid (i.e. the equipotential surface nearest to the mean sea level), one may determine the GGG. This study is supported by the Natural Science Foundation of China (grant No. 40974015; No. 41174011; No. 41021061; No. 41128003).

  5. Connecting clinical and actuarial prediction with rule-based methods.

    PubMed

    Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H

    2015-06-01

    Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from a data and decision analytic as well as a practical perspective. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, and with accuracy comparable to traditional actuarial methods.
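
    For illustration, a two-rule model of the kind described can be written as a fast and frugal tree; the cues and cut-offs below are hypothetical placeholders, not the rules derived from the Penninx et al. data.

        def chronic_course_risk(severity_score, duration_months):
            """Fast and frugal tree with two sequential rules (hypothetical
            cues and cut-offs, for illustration only)."""
            if severity_score >= 30:        # rule 1: high baseline severity
                return "high risk"
            if duration_months >= 24:       # rule 2: long symptom duration
                return "high risk"
            return "low risk"               # exit: neither rule fired

        print(chronic_course_risk(severity_score=22, duration_months=30))

    At most two cues are evaluated before the tree exits, which is what makes such tools quick to apply in practice.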

  6. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for the establishment of trust and for the management of risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It addresses some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. Finally, a case analysis of China Garment Network is provided for illustrative purposes.
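
    The AHP half of such a method has a standard computational core: criterion weights are the principal eigenvector of a pairwise comparison matrix, checked by a consistency ratio. A sketch with an illustrative 3x3 matrix follows; the SPA fusion step is omitted.

        import numpy as np

        # pairwise comparison matrix for 3 credit criteria (Saaty 1-9 scale;
        # the values here are illustrative only)
        A = np.array([[1.0,   3.0, 5.0],
                      [1/3.0, 1.0, 2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                           # criterion weights

        n = A.shape[0]
        CI = (eigvals.real[k] - n) / (n - 1)   # consistency index
        CR = CI / 0.58                         # random index RI = 0.58 for n = 3
        print("weights:", w, "consistency ratio:", CR)   # CR < 0.1 is acceptable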

  7. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages incorrectly classified as spam) can be reduced. The experimental results demonstrate that the method is effective in reducing the false positive rate.

  8. Sonoclot(®)-based method to detect iron enhanced coagulation.

    PubMed

    Nielsen, Vance G; Henderson, Jon

    2016-07-01

    Thrombelastographic methods have been recently introduced to detect iron mediated hypercoagulability in settings such as sickle cell disease, hemodialysis, mechanical circulatory support, and neuroinflammation. However, these inflammatory situations may have heme oxygenase-derived, coexistent carbon monoxide present, which also enhances coagulation as assessed by the same thrombelastographic variables that are affected by iron. This brief report presents a novel, Sonoclot-based method to detect iron enhanced coagulation that is independent of carbon monoxide influence. Future investigation will be required to assess the sensitivity of this new method to detect iron mediated hypercoagulability in clinical settings compared to results obtained with thrombelastographic techniques.

  9. Adaptive Set-Based Methods for Association Testing.

    PubMed

    Su, Yu-Chen; Gauderman, William James; Berhane, Kiros; Lewinger, Juan Pablo

    2016-02-01

    With a typical sample size of a few thousand subjects, a single genome-wide association study (GWAS) using traditional one single nucleotide polymorphism (SNP)-at-a-time methods can only detect genetic variants conferring a sizable effect on disease risk. Set-based methods, which analyze sets of SNPs jointly, can detect variants with smaller effects acting within a gene, a pathway, or other biologically relevant sets. Although self-contained set-based methods (those that test sets of variants without regard to variants not in the set) are generally more powerful than competitive set-based approaches (those that rely on comparison of variants in the set of interest with variants not in the set), there is no consensus as to which self-contained methods are best. In particular, several self-contained set tests have been proposed to directly or indirectly "adapt" to the a priori unknown proportion and distribution of effects of the truly associated SNPs in the set, which is a major determinant of their power. A popular adaptive set-based test is the adaptive rank truncated product (ARTP), which seeks the set of SNPs that yields the best-combined evidence of association. We compared the standard ARTP, several ARTP variations we introduced, and other adaptive methods in a comprehensive simulation study to evaluate their performance. We used permutations to assess significance for all the methods and thus provide a level playing field for comparison. We found the standard ARTP test to have the highest power across our simulations followed closely by the global model of random effects (GMRE) and a least absolute shrinkage and selection operator (LASSO)-based test.

  10. Method of coating an iron-based article

    SciTech Connect

    Magdefrau, Neal; Beals, James T.; Sun, Ellen Y.; Yamanis, Jean

    2016-11-29

    A method of coating an iron-based article includes a first heating step of heating a substrate that includes an iron-based material in the presence of an aluminum source material and halide diffusion activator. The heating is conducted in a substantially non-oxidizing environment, to cause the formation of an aluminum-rich layer in the iron-based material. In a second heating step, the substrate that has the aluminum-rich layer is heated in an oxidizing environment to oxidize the aluminum in the aluminum-rich layer.

  11. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analysed. Firstly, using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse that is the iso-range contour for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Thirdly, the target function is optimized by the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves localization accuracy. The effectiveness of the localization approach is validated by simulation experiments.
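
    The essence of the optimization step can be sketched as minimizing the sum of squared bistatic range-sum residuals by gradient descent; the geometry below is invented, and the paper's model functions and PCA step are not reproduced.

        import numpy as np

        tx = np.array([[0., 0.], [5., 0.], [0., 5.]])      # transmitter positions
        rx = np.array([[10., 0.], [0., 10.], [10., 10.]])  # receiver positions
        target = np.array([6.0, 4.0])

        brs = (np.linalg.norm(target - tx, axis=1)
               + np.linalg.norm(target - rx, axis=1))      # measured range sums

        x = np.array([2.0, 2.0])                           # initial guess
        for _ in range(500):
            dt_ = x - tx; dr_ = x - rx
            nt = np.linalg.norm(dt_, axis=1); nr = np.linalg.norm(dr_, axis=1)
            res = nt + nr - brs                            # per-pair residuals
            # gradient of 0.5 * sum(res^2) w.r.t. x
            grad = ((res / nt)[:, None] * dt_ + (res / nr)[:, None] * dr_).sum(axis=0)
            x -= 0.05 * grad                               # descent step
        print("estimate:", x, "truth:", target)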

  12. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analysed. Firstly, using the information of each transmitter and receiver (T/R) pair and the target in the SAR image, the model functions of the T/R pairs are constructed. Each model function's maximum lies on the circumference of the ellipse that is the iso-range contour for its T/R pair. Secondly, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Thirdly, the target function is optimized by the gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is applied to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031

  13. An agarose-gel based method for transporting cell lines.

    PubMed

    Yang, Lingzhi; Li, Chufang; Chen, Ling; Li, Zhiyuan

    2009-12-16

    Shipping cryopreserved cells in dry ice or liquid nitrogen is the classical method for transporting cells between research laboratories in different cities around the world while maintaining cell viability. An alternative method is to ship live cells in flasks filled with cell culture medium. Both methods have limitations: either the requirement for a special shipping container or the short time cells can survive the shipping process. We have recently developed an agarose gel based method for directly transporting live adherent cells in cell culture plates or dishes at ambient temperature. This convenient method simplifies the long-distance transportation of live cells and can maintain good cell viability for several days.

  14. A Novel Camera Calibration Method Based on Polar Coordinate

    PubMed Central

    Gai, Shaoyan; Da, Feipeng; Fang, Xu

    2016-01-01

    A novel calibration method based on polar coordinates is proposed. The world coordinates are expressed in the form of polar coordinates, which are converted to rectangular world coordinates during the calibration process. In the beginning, the calibration points are obtained in polar coordinates. By transformation between polar and rectangular coordinates, the points are brought into rectangular form. Then, the points are matched with the corresponding image coordinates. At last, the parameters are obtained by objective function optimization. With the proposed method, the relationships between objects and cameras are easily expressed in polar coordinates, which makes it suitable for multi-camera calibration. Cameras can be calibrated with fewer points, and the calibration images can be positioned according to the location of the cameras. The experimental results demonstrate that the proposed method is efficient and calibrates cameras conveniently with high accuracy. PMID:27798651

  15. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique for improving the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, since the proper poles are hard to obtain at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.
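
    The core of the matrix pencil step can be illustrated on a single band: stack the samples into shifted Hankel matrices, truncate to the model order with an SVD, and read the poles off the reduced pencil. The toy below recovers two damped exponentials; the multiband ICP estimation and iterative fusion are not shown.

        import numpy as np

        # synthetic signal: two damped exponentials sampled uniformly
        n, L = 100, 40                       # samples, pencil parameter
        k = np.arange(n)
        z_true = np.array([0.98 * np.exp(1j * 0.3), 0.95 * np.exp(1j * 1.1)])
        y = sum(z ** k for z in z_true)
        y = y + 0.01 * np.random.default_rng(6).normal(size=n)

        # Hankel data matrix; Y0 and Y1 differ by a one-sample row shift
        Y = np.array([y[i:i + L] for i in range(n - L)])
        Y0, Y1 = Y[:-1], Y[1:]

        # rank-2 truncation via SVD, then solve the pencil Y1 v = z Y0 v
        U, s, Vh = np.linalg.svd(Y0, full_matrices=False)
        r = 2
        Y0r = U[:, :r].conj().T @ Y0 @ Vh[:r].conj().T
        Y1r = U[:, :r].conj().T @ Y1 @ Vh[:r].conj().T
        poles = np.linalg.eigvals(np.linalg.solve(Y0r, Y1r))
        print(np.sort_complex(poles), np.sort_complex(z_true))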

  16. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique for improving the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, since the proper poles are hard to obtain at low signal-to-noise ratio (SNR). To eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation between corresponding poles of each subband. Then, an iterative algorithm that minimizes the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194

  17. A Natural Teaching Method Based on Learning Theory.

    ERIC Educational Resources Information Center

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  18. Explorations in Using Arts-Based Self-Study Methods

    ERIC Educational Resources Information Center

    Samaras, Anastasia P.

    2010-01-01

    Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…

  19. Bead Collage: An Arts-Based Research Method

    ERIC Educational Resources Information Center

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  20. Metaphoric Investigation of the Phonic-Based Sentence Method

    ERIC Educational Resources Information Center

    Dogan, Birsen

    2012-01-01

    This study aimed to understand the views of prospective teachers with "phonic-based sentence method" through metaphoric images. In this descriptive study, the participants involve the prospective teachers who take reading-writing instruction courses in Primary School Classroom Teaching Program of the Education Faculty of Pamukkale…

  1. Preparing Students for Flipped or Team-Based Learning Methods

    ERIC Educational Resources Information Center

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…

  2. A Quantum-Based Similarity Method in Virtual Screening.

    PubMed

    Al-Dabbagh, Mohammed Mumtaz; Salim, Naomie; Himmat, Mubarak; Ahmed, Ali; Saeed, Faisal

    2015-10-02

    One of the most widely used techniques for ligand-based virtual screening is similarity searching. This study adopts concepts from quantum mechanics to present a state-of-the-art molecular similarity method inspired by quantum theory. The representation of molecular compounds in a mathematical quantum space plays a vital role in the development of the quantum-based similarity approach. One of the key concepts of quantum theory is the use of complex numbers; hence, this study proposes three techniques to embed and re-represent molecular compounds in complex-number format. The quantum-based similarity method developed in this study, which depends on the complex pure Hilbert space of molecules, is called Standard Quantum-Based (SQB). Recall of retrieved active molecules was measured at the top 1% and top 5%, and significance tests were used to evaluate the proposed methods. The MDL Drug Data Report (MDDR), Maximum Unbiased Validation (MUV) and Directory of Useful Decoys (DUD) data sets, represented by 2D fingerprints, were used in the experiments. Simulated virtual screening experiments show that the effectiveness of the SQB method increases significantly, owing to the representational power of molecular compounds in complex-number form, compared to the Tanimoto benchmark similarity measure.

  3. Effective Teaching Methods--Project-based Learning in Physics

    ERIC Educational Resources Information Center

    Holubova, Renata

    2008-01-01

    The paper presents results of research on new effective teaching methods in physics and science. It was found necessary to educate pre-service teachers in approaches stressing the importance of students' own activity and in the competences needed to create an interdisciplinary project. Project-based physics teaching and learning…

  4. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.

    1995-04-11

    A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.

  5. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.

    1995-01-01

    Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.

  6. Segmentation of pituitary adenoma: a graph-based method vs. a balloon inflation method.

    PubMed

    Egger, Jan; Zukić, Dženan; Freisleben, Bernd; Kolb, Andreas; Nimsky, Christopher

    2013-06-01

    Among all abnormal growths inside the skull, tumors in the sellar region account for approximately 10-15%, and the pituitary adenoma is the most common sellar lesion. Manual segmentation of pituitary adenomas is a time-consuming process that can be shortened by using adequate algorithms. In this contribution, two methods for pituitary adenoma segmentation in the human brain are presented and compared using magnetic resonance imaging (MRI) patient data from the clinical routine: Method A is a graph-based method that sets up a directed and weighted graph and performs a min-cut for optimal segmentation results; Method B is a balloon inflation method that uses balloon inflation forces to detect the pituitary adenoma boundaries. The ground truth of the pituitary adenoma boundaries - used for the evaluation of the methods - was manually extracted by neurosurgeons. Comparison is done using the Dice Similarity Coefficient (DSC), a measure of spatial overlap between different segmentation results. The average DSC for all data sets is 77.5±4.5% for the graph-based method and 75.9±7.2% for the balloon inflation method, showing no significant difference. The overall segmentation time of the implemented approaches was less than 4 s, compared with a manual segmentation that took, on average, 3.9±0.5 min.
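
    The DSC used for the evaluation is simple to state in code; the sketch below assumes binary masks given as boolean arrays.

        import numpy as np

        def dice(seg, truth):
            """Dice Similarity Coefficient between two binary masks:
            DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap."""
            seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
            inter = np.logical_and(seg, truth).sum()
            return 2.0 * inter / (seg.sum() + truth.sum())

        a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
        b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
        print(dice(a, b))   # 9 overlapping pixels of 16 + 16 -> 0.5625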

  7. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.
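
    A compact illustration of the (static-surface) closest point method is the heat equation on the unit circle: alternate a standard Cartesian heat step with re-sampling of the solution at each grid point's closest point on the circle. The sketch below checks the result against the exact Laplace-Beltrami decay e^{-t} cos(theta); grid size and step count are arbitrary choices, and linear interpolation is used only to keep the sketch short (practical CPM codes use higher-order interpolation).

        import numpy as np
        from scipy.interpolate import RegularGridInterpolator

        # embedding grid around the unit circle
        n, Lbox = 81, 2.0
        g = np.linspace(-Lbox, Lbox, n); h = g[1] - g[0]
        X, Y = np.meshgrid(g, g, indexing="ij")
        R = np.maximum(np.hypot(X, Y), 1e-12)
        cp = np.stack([(X / R).ravel(), (Y / R).ravel()], axis=-1)  # closest points
        theta = np.arctan2(Y, X)

        u = np.cos(theta)              # initial data, already constant along normals
        dt = 0.1 * h ** 2
        steps = 200
        for _ in range(steps):
            lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                   + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4 * u) / h ** 2
            u = u + dt * lap                               # heat step in the plane
            interp = RegularGridInterpolator((g, g), u)
            u = interp(cp).reshape(n, n)                   # closest point extension

        exact = np.exp(-steps * dt) * np.cos(theta)        # Laplace-Beltrami decay
        band = np.abs(R - 1.0) < 0.2                       # ignore the grid origin
        print("max error near the circle:", np.abs(u - exact)[band].max())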

  8. Liver 4DMRI: A retrospective image-based sorting method

    SciTech Connect

    Paganelli, Chiara; Summers, Paul; Bellomi, Massimo; Baroni, Guido; Riboldi, Marco

    2015-08-15

    Purpose: Four-dimensional magnetic resonance imaging (4DMRI) is an emerging technique in radiotherapy treatment planning for organ motion quantification. In this paper, the authors present a novel 4DMRI retrospective image-based sorting method that provides reduced motion artifacts compared with using a standard one-dimensional external respiratory surrogate. Methods: Serial interleaved 2D multislice MRI data were acquired from 24 liver cases (6 volunteers + 18 patients) to test the proposed 4DMRI sorting. Image similarity based on mutual information was applied to automatically identify a stable reference phase and sort the image sequence retrospectively, without the use of additional image or surrogate data to describe breathing motion. Results: The image-based 4DMRI provided a smoother liver profile than that obtained from standard resorting based on an external surrogate. Reduced motion artifacts were observed in the image-based 4DMRI datasets, with a fitting error of the liver profile of 1.2 ± 0.9 mm (median ± interquartile range) vs 2.1 ± 1.7 mm for the standard method. Conclusions: The authors present a novel methodology to derive a patient-specific 4DMRI model describing organ motion due to breathing, with improved image quality in the 4D reconstruction.
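
    The similarity measure doing the work here is mutual information computed from a joint intensity histogram; a minimal sketch follows (bin count and test images are arbitrary). Sorting then amounts to picking, at each slice position, the frames most similar to the reference phase.

        import numpy as np

        def mutual_information(img_a, img_b, bins=32):
            """Mutual information between two images from their joint histogram."""
            joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
            p = joint / joint.sum()
            px, py = p.sum(axis=1), p.sum(axis=0)
            nz = p > 0                                 # skip empty histogram cells
            return float(np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz])))

        rng = np.random.default_rng(7)
        frame = rng.random((64, 64))
        shifted = np.roll(frame, 3, axis=0) + rng.normal(0, 0.05, frame.shape)
        print(mutual_information(frame, frame), mutual_information(frame, shifted))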

  9. Efficient method of image edge detection based on FSVM

    NASA Astrophysics Data System (ADS)

    Cai, Aiping; Xiong, Xiaomei

    2013-07-01

    For efficient detection of object cover edges in digital images, this paper studied traditional methods and an algorithm based on the SVM. The analysis showed that the Canny edge detection algorithm produces some pseudo-edges and has poor anti-noise capability. In order to provide a reliable edge extraction method, a new detection algorithm based on a fuzzy support vector machine (FSVM) is proposed. It contains several steps: first, the classifier is trained on samples, with a different membership function assigned to different samples. Then, a new training sample set is formed by increasing the punishment of some wrongly classified sub-samples, and the new FSVM classification model is trained and tested on it. Finally, the edges of the object image are extracted using the model. Experimental results show that a good edge detection image is obtained, and experiments with added noise show that this method has good anti-noise capability.
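
    The abstract gives no implementation details. As a rough sketch of the fuzzy-membership idea, scikit-learn's SVC accepts per-sample weights, which can play the role of membership values that down-weight likely-noisy training samples; the feature model and membership rule below are assumptions for illustration only.

        import numpy as np
        from sklearn.svm import SVC

        rng = np.random.default_rng(1)

        # toy training set: 2-D gradient features for "edge" vs "non-edge" pixels
        X_edge = rng.normal(loc=[2.0, 2.0], scale=0.7, size=(200, 2))
        X_flat = rng.normal(loc=[0.0, 0.0], scale=0.7, size=(200, 2))
        X = np.vstack([X_edge, X_flat])
        y = np.array([1] * 200 + [0] * 200)

        # fuzzy memberships: samples far from their class centre get lower
        # weight, so outliers (often noise) contribute less to the penalty
        centres = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
        dist = np.linalg.norm(X - centres[y], axis=1)
        membership = 1.0 / (1.0 + dist)       # an assumed membership function

        clf = SVC(kernel="rbf", C=10.0)
        clf.fit(X, y, sample_weight=membership)   # weighted penalty ~ fuzzy SVM
        print("training accuracy:", clf.score(X, y))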

  10. Structure-based Methods for Computational Protein Functional Site Prediction

    PubMed Central

    Dukka, B KC

    2013-01-01

    Due to the advent of high-throughput sequencing techniques and structural genomics projects, the number of gene and protein sequences has been ever increasing, making computational methods to annotate these genes and proteins all the more indispensable. Proteins are important macromolecules, and the study of protein function is an important problem in structural bioinformatics. This paper discusses a number of methods to predict protein functional sites, especially focusing on protein-ligand binding site prediction. Initially, a short overview is presented of recent advances in methods for the selection of homologous sequences. Furthermore, a few recent structure-based and sequence-and-structure-based approaches for predicting protein functional sites are discussed in detail. PMID:24688745

  11. Matrix-based image reconstruction methods for tomography

    SciTech Connect

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
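
    For orientation, the sketch below runs the standard MLEM iteration for Poisson data on a toy system matrix; it illustrates the "no matrix inversion" point of the abstract, and is not the Lawrence Berkeley Laboratory implementation.

        import numpy as np

        rng = np.random.default_rng(0)

        # toy tomography: A maps a 16-pixel image to 24 detector bins
        n_pix, n_det = 16, 24
        A = rng.random((n_det, n_pix))
        x_true = rng.random(n_pix)
        y = rng.poisson(A @ (50 * x_true))       # noisy projection counts

        # MLEM update: x <- x / (A^T 1) * A^T ( y / (A x) )
        x = np.ones(n_pix)
        sens = A.T @ np.ones(n_det)              # sensitivity image A^T 1
        for _ in range(200):
            ratio = y / np.maximum(A @ x, 1e-12)
            x = x / sens * (A.T @ ratio)

        rel_err = np.linalg.norm(x / 50 - x_true) / np.linalg.norm(x_true)
        print(f"relative reconstruction error: {rel_err:.3f}")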

  12. Photonic arbitrary waveform generator based on Taylor synthesis method.

    PubMed

    Liao, Shasha; Ding, Yunhong; Dong, Jianji; Yan, Siqi; Wang, Xu; Zhang, Xinliang

    2016-10-17

    Arbitrary waveform generation has been widely used in optical communication, radar systems and many other applications. We propose and experimentally demonstrate a silicon-on-insulator (SOI) on-chip optical arbitrary waveform generator based on the Taylor synthesis method. In our scheme, a Gaussian pulse is launched into cascaded microrings to obtain the first-, second- and third-order differentiations. By controlling the amplitude and phase of the initial pulse and its successive differentiations, we can realize an arbitrary waveform generator according to the Taylor expansion. We obtain several typical waveforms such as square, triangular, flat-top, sawtooth and Gaussian waveforms. Unlike schemes based on Fourier synthesis or frequency-to-time mapping, our Taylor-synthesis-based scheme does not require any spectral disperser or large dispersion, which are difficult to fabricate on chip. Our scheme is compact and capable of integration with electronics.
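
    A small numerical analogue of the Taylor-synthesis idea: weight a Gaussian pulse and its finite-difference derivatives to reshape the output waveform. The coefficients below are hand-picked for illustration and do not model the photonic chip itself.

        import numpy as np

        t = np.linspace(-5, 5, 2001)
        dt = t[1] - t[0]
        g = np.exp(-t**2)                    # input Gaussian pulse

        def deriv(f, order):
            """Repeated centred-difference differentiation."""
            for _ in range(order):
                f = np.gradient(f, dt)
            return f

        # Taylor synthesis: a weighted sum of the pulse and its derivatives.
        # The 1/6 weight cancels the quadratic term of the Gaussian at its
        # peak, flattening the top.
        flat_top = g + deriv(g, 2) / 6.0
        skewed = g + 0.6 * deriv(g, 1)       # first derivative skews the pulse

        centre = np.abs(t) < 0.4
        print("flat-top ripple near the centre:", float(np.ptp(flat_top[centre])))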

  13. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl.sub.4 gas into the chamber, flowing Ar gas into the chamber, and flowing H.sub.2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  14. Discontinuous Galerkin method based on non-polynomial approximation spaces

    SciTech Connect

    Yuan, Ling (E-mail: lyuan@dam.brown.edu); Shu, Chi-Wang (E-mail: shu@dam.brown.edu)

    2006-10-10

    In this paper, we develop discontinuous Galerkin (DG) methods based on non-polynomial approximation spaces for numerically solving time-dependent hyperbolic and parabolic, as well as steady-state hyperbolic and elliptic, partial differential equations (PDEs). The algorithm is based on approximation spaces consisting of non-polynomial elementary functions such as exponential functions, trigonometric functions, etc., with the objective of obtaining better approximations for specific types of PDEs and initial and boundary conditions. It is shown that L² stability and error estimates can be obtained when the approximation space is suitably selected. It is also shown with numerical examples that a careful selection of the approximation space to fit the individual PDE and its initial and boundary conditions often provides more accurate results than DG methods based on polynomial approximation spaces of the same order of accuracy.

  15. Screw thread parameter measurement system based on image processing method

    NASA Astrophysics Data System (ADS)

    Rao, Zhimin; Huang, Kanggao; Mao, Jiandong; Zhang, Yaya; Zhang, Fan

    2013-08-01

    In industrial production, the screw thread, as an important transmission part, is applied extensively in automation equipment. The traditional measurement methods for screw thread parameters, including integrated multi-parameter test methods and single-parameter measurement methods, are contact measurement methods. In practice, contact measurement has some disadvantages, such as relatively high time cost, easy introduction of human error, and possible thread damage. In this paper, a screw thread parameter measurement system based on an image processing method, a real-time and non-contact approach, is developed to accurately measure the outside diameter, inside diameter, pitch diameter, pitch, thread height and other parameters of a screw thread. In the system, an industrial camera is employed to acquire the image of the screw thread, image processing methods are used to obtain the image profile of the thread, and a mathematical model is established to compute the parameters. C++ Builder 6.0 is employed as the software development platform to realize the image processing and the computation of the screw thread parameters. To verify the feasibility of the measurement system, experiments were carried out and the measurement errors were analyzed. The experimental results show that the image measurement system satisfies the measurement requirements and is suitable for real-time detection of the screw thread parameters mentioned above. Compared with the traditional methods, the system based on image processing has advantages such as non-contact operation, ease of use, high measuring accuracy, no workpiece damage, and fast error analysis. In industrial production, this measurement system can provide an important reference for the development of similar parameter measurement systems.

  16. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectrum imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program), celestial observational data are arriving in torrents. Therefore, to utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from their spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise, so recognizing these two types of spectra is a typical problem in automatic spectra classification. Furthermore, the utilized method, nearest neighbor, is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark when developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in mass spectral data processing. In conclusion, the results of this work are helpful for the study of galaxy and quasar spectra classification.
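
    A minimal version of this benchmark with scikit-learn's nearest-neighbour classifier on synthetic two-class "spectra" (the continuum shapes and noise level are invented for illustration):

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        wave = np.linspace(0, 1, 200)            # normalized wavelength axis

        def spectra(kind, n):
            """Toy continua: one class steeper/redder than the other."""
            slope = -1.5 if kind else 0.5
            base = (wave + 0.1) ** slope
            return base + rng.normal(scale=0.2, size=(n, wave.size))

        X = np.vstack([spectra(0, 300), spectra(1, 300)])
        y = np.repeat([0, 1], 300)               # 0 = galaxy, 1 = quasar
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        clf = KNeighborsClassifier(n_neighbors=1)
        clf.fit(X_tr, y_tr)                      # NN "training" just stores data
        print("test recognition ratio:", clf.score(X_te, y_te))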

  17. Nonlinear model-based method for clustering periodically expressed genes.

    PubMed

    Tian, Li-Ping; Liu, Li-Zhi; Zhang, Qian-Wei; Wu, Fang-Xiang

    2011-01-01

    Clustering periodically expressed genes from their time-course expression data could help understand the molecular mechanism of those biological processes. In this paper, we propose a nonlinear model-based clustering method for periodically expressed gene profiles. As periodically expressed genes are associated with periodic biological processes, the proposed method naturally assumes that a periodically expressed gene dataset is generated by a number of periodical processes. Each periodical process is modelled by a linear combination of trigonometric sine and cosine functions in time plus a Gaussian noise term. A two-stage method is proposed to estimate the model parameters, and a relocation-iteration algorithm is employed to assign each gene to an appropriate cluster. A bootstrapping method and an average adjusted Rand index (AARI) are employed to measure the quality of clustering. One synthetic dataset and two biological datasets were employed to evaluate the performance of the proposed method. The results show that our method produces better quality clustering than other clustering methods (e.g., k-means) for periodically expressed gene data, and thus is an effective cluster analysis method for such data.
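
    Under the model stated in the abstract, fitting one periodic process reduces to linear least squares in the sine/cosine coefficients, as the sketch below shows for a single simulated gene profile (illustrative only, not the authors' two-stage estimator):

        import numpy as np

        rng = np.random.default_rng(2)
        t = np.linspace(0, 48, 13)               # sampling times (hours)
        omega = 2 * np.pi / 24.0                 # assumed 24 h period

        # simulated profile: a*sin + b*cos + offset + Gaussian noise
        a, b, c = 1.2, -0.7, 5.0
        expr = a * np.sin(omega * t) + b * np.cos(omega * t) + c
        expr += rng.normal(scale=0.2, size=t.size)

        # linear least squares for the trigonometric coefficients
        D = np.column_stack([np.sin(omega * t), np.cos(omega * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(D, expr, rcond=None)
        print("estimated (a, b, offset):", np.round(coef, 3))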

  18. Do dynamic-based MR knee kinematics methods produce the same results as static methods?

    PubMed

    d'Entremont, Agnes G; Nordmeyer-Massner, Jurek A; Bos, Clemens; Wilson, David R; Pruessmann, Klaas P

    2013-06-01

    MR-based methods provide low risk, noninvasive assessment of joint kinematics; however, these methods often use static positions or require many identical cycles of movement. The study objective was to compare the 3D kinematic results approximated from a series of sequential static poses of the knee with the 3D kinematic results obtained from continuous dynamic movement of the knee. To accomplish this objective, we compared kinematic data from a validated static MR method to a fast static MR method, and compared kinematic data from both static methods to a newly developed dynamic MR method. Ten normal volunteers were imaged using the three kinematic methods (dynamic, static standard, and static fast). Results showed that the two sets of static results were in agreement, indicating that the sequences (standard and fast) may be used interchangeably. Dynamic kinematic results were significantly different from both static results in eight of 11 kinematic parameters: patellar flexion, patellar tilt, patellar proximal translation, patellar lateral translation, patellar anterior translation, tibial abduction, tibial internal rotation, and tibial anterior translation. Three-dimensional MR kinematics measured from dynamic knee motion are often different from those measured in a static knee at several positions, indicating that dynamic-based kinematics provides information that is not obtainable from static scans.

  19. Moving sound source localization based on triangulation method

    NASA Astrophysics Data System (ADS)

    Miao, Feng; Yang, Diange; Wen, Junjie; Lian, Xiaomin

    2016-12-01

    This study develops a sound source localization method that extends traditional triangulation to moving sources. First, the possible sound source locating plane is scanned. Secondly, for each hypothetical source location in this plane, the Doppler effect is removed through the integration of sound pressure. Taking advantage of the de-Dopplerized signals, the moving time difference of arrival (MTDOA) is calculated, and the sound source is located based on triangulation. Thirdly, the estimated sound source location is compared to the original hypothetical location and the deviations are recorded. Because the real sound source location leads to zero deviation, the sound source can finally be located by minimizing the deviation matrix. Simulations have shown the superiority of the MTDOA method over traditional triangulation in the case of moving sound sources. As shown in the experiments, the MTDOA method can locate moving sound sources with as high a resolution as DAMAS beamforming, thus offering a new method for locating moving sound sources.

  20. Diabatization based on the dipole and quadrupole: The DQ method

    SciTech Connect

    Hoyer, Chad E.; Xu, Xuefei; Ma, Dongxia; Gagliardi, Laura; Truhlar, Donald G. (E-mail: truhlar@umn.edu)

    2014-09-21

    In this work, we present a method, called the DQ scheme (where D and Q stand for dipole and quadrupole, respectively), for transforming a set of adiabatic electronic states to diabatic states by using the dipole and quadrupole moments to determine the transformation coefficients. It is more broadly applicable than methods based only on the dipole moment; for example, it is not restricted to electron transfer reactions, and it works with any electronic structure method and for molecules with and without symmetry, and it is convenient in not requiring orbital transformations. We illustrate this method by prototype applications to two cases, LiH and phenol, for which we compare the results to those obtained by the fourfold-way diabatization scheme.

  1. A history-based method to estimate animal preference

    PubMed Central

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213
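
    The abstract does not give the exact weighting rule; the sketch below implements one plausible recency-weighted preference index in which each successive test counts more than the last. The exponential decay and its half-life are assumptions for illustration.

        import numpy as np

        def preference_index(choice_history, half_life=3.0):
            """Recency-weighted share of choices for each item.

            choice_history lists the chosen item per test, oldest first;
            recent tests receive exponentially larger weights.
            """
            items = sorted(set(choice_history))
            age = np.arange(len(choice_history))[::-1]   # 0 = most recent
            w = 0.5 ** (age / half_life)
            return {it: float(w[[c == it for c in choice_history]].sum() / w.sum())
                    for it in items}

        # a fish whose choices drifted from "blue" towards "yellow"
        history = ["blue", "blue", "blue", "yellow", "blue", "yellow", "yellow"]
        print(preference_index(history))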

  2. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
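
    A rough sketch of the landmark-extraction step: match SIFT features between two panoramic snapshots with OpenCV and keep only matches passing Lowe's ratio test. This is a generic pipeline on synthetic images, not the paper's distribution-based mismatch elimination.

        import cv2
        import numpy as np

        # synthetic "panoramas": random blobs stand in for real snapshots
        rng = np.random.default_rng(7)
        img_home = np.zeros((240, 640), np.uint8)
        for _ in range(80):
            cx, cy = int(rng.integers(20, 620)), int(rng.integers(20, 220))
            cv2.circle(img_home, (cx, cy), int(rng.integers(3, 12)),
                       int(rng.integers(80, 255)), -1)
        img_curr = np.roll(img_home, 40, axis=1)   # simulated robot displacement

        sift = cv2.SIFT_create()
        kp1, des1 = sift.detectAndCompute(img_home, None)
        kp2, des2 = sift.detectAndCompute(img_curr, None)

        # nearest/second-nearest matching with Lowe's ratio test
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        matches = matcher.knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]

        # matched keypoint pairs then serve as landmarks for the homing step
        landmarks = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
        print(f"{len(landmarks)} landmark pairs retained")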

  3. A Novel Method for Pulsometry Based on Traditional Iranian Medicine

    PubMed Central

    Yousefipoor, Farzane; Nafisi, Vahidreza

    2015-01-01

    Arterial pulse measurement is one of the most important methods for evaluating health conditions. In traditional Iranian medicine (TIM), the physician detects the radial pulse by holding four fingers on the patient's wrist. With this method, even under standard conditions, the detected pulses are subjective and error-prone, and in the case of weak and/or abnormal pulses the diagnostic ambiguity may rise. In this paper, we present a device designed and implemented to automate the traditional pulse detection method. With this novel system, the developed noninvasive diagnostic method and database based on TIM are a way forward for applying traditional medicine and diagnosing patients with present-day technology. The accuracy is 76% for period measurement and 72% for the systolic peak. PMID:26955566

  4. Topography measurement of micro structure by modulation-based method

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Tang, Yan; Liu, Junbo; Deng, Qinyuan; Cheng, Yiguang; Hu, Song

    2016-10-01

    Dimensional metrology for micro structures plays an important role in addressing quality issues and observing the performance of micro-fabricated products. Different from the traditional white-light interferometry approach, the modulation-based method measures the topography of a micro structure from the modulation of each interferometry image. By seeking the maximum modulation of every pixel along the Z direction, the method obtains the corresponding height of each pixel and finally the topography of the structure. Owing to the characteristics of modulation, the proposed method is not influenced by changes of background light intensity caused by an unstable light source or by the different reflection indices of the structure, and so can be widely applied with high stability. The paper both illustrates the principle of this novel method and presents an experiment verifying its feasibility.
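
    A small numerical sketch of the per-pixel rule: for a stack of interference images taken at successive Z positions, take each pixel's height as the Z with the largest fringe modulation. The data are synthetic, and the local max-min contrast used as a modulation estimate is a simplification of a real demodulation.

        import numpy as np

        rng = np.random.default_rng(3)
        nz, ny, nx = 200, 32, 32
        z = np.linspace(0.0, 10.0, nz)              # scan positions (um)

        # synthetic surface and its white-light-like fringe stack I(z; y, x)
        height = 3.0 + 2.0 * rng.random((ny, nx))
        envelope = np.exp(-((z[:, None, None] - height) / 1.0) ** 2)
        fringes = envelope * np.cos(2 * np.pi * (z[:, None, None] - height) / 0.3)
        stack = 1.0 + 0.5 * fringes + rng.normal(scale=0.02, size=(nz, ny, nx))

        # modulation estimate: local contrast in a small window along Z
        win, pad = 7, 3
        mod = np.empty_like(stack)
        for k in range(nz):
            lo, hi = max(0, k - pad), min(nz, k + pad + 1)
            mod[k] = stack[lo:hi].max(axis=0) - stack[lo:hi].min(axis=0)

        height_map = z[np.argmax(mod, axis=0)]      # Z of maximum modulation
        rms = float(np.sqrt(np.mean((height_map - height) ** 2)))
        print(f"rms height error: {rms:.3f} um")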

  5. [Fast Implementation Method of Protein Spots Detection Based on CUDA].

    PubMed

    Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong

    2016-02-01

    In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, according to the single-instruction multiple-thread execution model of CUDA, a data-space strategy of separating two-dimensional (2D) images into blocks was adopted, together with optimization measures such as shared memory and 2D texture memory. The results show that the operating efficiency of this method is obviously improved compared to CPU calculation, and the improvement grows with image size; for example, for an image of size 2,048 x 2,048, the CPU needs 52,641 ms while the GPU needs only 4,384 ms.

  7. Spindle extraction method for ISAR image based on Radon transform

    NASA Astrophysics Data System (ADS)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle (major axis) of a target in an inverse synthetic aperture radar (ISAR) image is proposed, based on the Radon transform. Firstly, the Radon transform is used to detect all straight lines that are collinear with line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the ends of that line segment, and the longest of all line segments is the spindle of the target. To evaluate the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees respectively, were used in experiments; the detection results are closer to the real spindle of the target than those of the method based on the Hough transform.

  8. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  9. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  10. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies, hazard prediction, etc. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classification tree is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed to form the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as reference, and the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
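
    A compact sketch of the factor-importance step with scikit-learn's random forest; the terrain factors and labels below are synthetic stand-ins for the DEM-derived variables in the paper.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(4)

        # toy terrain factors per DEM cell
        n = 2000
        X = rng.random((n, 4))
        # synthetic landform label driven mostly by relief and slope
        y = (1.5 * X[:, 0] + X[:, 1] + 0.1 * rng.normal(size=n) > 1.3).astype(int)

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(X, y)
        for name, imp in zip(["relief", "slope", "roughness", "curvature"],
                             rf.feature_importances_):
            print(f"{name:10s} importance = {imp:.3f}")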

  11. Frame synchronization methods based on channel symbol measurements

    NASA Technical Reports Server (NTRS)

    Dolinar, S.; Cheung, K.-M.

    1989-01-01

    The current DSN frame synchronization procedure is based on monitoring the decoded bit stream for the appearance of a sync marker sequence that is transmitted once every data frame. The possibility of obtaining frame synchronization by processing the raw received channel symbols rather than the decoded bits is explored. Performance results are derived for three channel symbol sync methods, and these are compared with results for decoded bit sync methods reported elsewhere. It is shown that each class of methods has advantages or disadvantages under different assumptions on the frame length, the global acquisition strategy, and the desired measure of acquisition timeliness. It is shown that the sync statistics based on decoded bits are superior to the statistics based on channel symbols, if the desired operating region utilizes a probability of miss many orders of magnitude higher than the probability of false alarm. This operating point is applicable for very large frame lengths and minimal frame-to-frame verification strategy. On the other hand, the statistics based on channel symbols are superior if the desired operating point has a miss probability only a few orders of magnitude greater than the false alarm probability. This happens for small frames or when frame-to-frame verifications are required.

  12. Evaluation of base widening methods on flexible pavements in Wyoming

    NASA Astrophysics Data System (ADS)

    Offei, Edward

    The surface transportation system forms the biggest infrastructure investment in the United States of which the roadway pavement is an integral part. Maintaining the roadways can involve rehabilitation in the form of widening, which requires a longitudinal joint between the existing and new pavement sections to accommodate wider travel lanes, additional travel lanes or modification to shoulder widths. Several methods are utilized for the joint construction between the existing and new pavement sections including vertical, tapered and stepped joints. The objective of this research is to develop a formal recommendation for the preferred joint construction method that provides the best base layer support for the state of Wyoming. Field collection of Dynamic Cone Penetrometer (DCP) data, Falling Weight Deflectometer (FWD) data, base samples for gradation and moisture content were conducted on 28 existing and 4 newly constructed pavement widening projects. A survey of constructability issues on widening projects as experienced by WYDOT engineers was undertaken. Costs of each joint type were compared as well. Results of the analyses indicate that the tapered joint type showed relatively better pavement strength compared to the vertical joint type and could be the preferred joint construction method. The tapered joint type also showed significant base material savings than the vertical joint type. The vertical joint has an 18% increase in cost compared to the tapered joint. This research is intended to provide information and/or recommendation to state policy makers as to which of the base widening joint techniques (vertical, tapered, stepped) for flexible pavement provides better pavement performance.

  13. Calibration of base flow separation methods with streamflow conductivity.

    PubMed

    Stewart, Mark; Cimino, Joseph; Ross, Mark

    2007-01-01

    The conductivity mass-balance (CMB) method can be used to calibrate analytical base flow separation methods. The principal CMB assumptions are that base flow conductivity is equal to streamflow conductivity at lowest flows, runoff conductivity is equal to streamflow conductivity at highest flows, and base flow and runoff conductivities are assumed to be constant over the period of record. To test the CMB assumptions, fluid conductivities of ground water, surface runoff, and streamflow were measured during wet and dry conditions in a 12-km² stream basin. Ground water conductivities at wells varied an average of 6% from dry to wet conditions, while stream conductivities varied 58%. Shallow ground water conductivity varied significantly with distance from the stream, with lowest conductivities of 87 μS/cm near the divide, a maximum of 520 μS/cm 59 m from the stream, and 215 μS/cm 22 m from the stream. Runoff conductivities measured in three rain events remained nearly constant, with lower conductivities of 35 μS/cm near the divide and 50 μS/cm near the stream. The CMB method was applied to the records from 10 USGS stream-gauging stations in Texas, Kentucky, Georgia, and Florida to calibrate the USGS base flow separation technique, HYSEP, by varying the time parameter 2N*. There is a statistically significant relationship between basin areas and calibrated values of 2N*, expressed as N = 0.46A^0.44, with N in days and A in km². The widely accepted relationship N = 0.83A^0.2 is not valid for these basins. Other analytic methods can also be calibrated with the CMB method.
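
    For reference, the CMB separation itself is a two-component mixing calculation under the stated assumptions (constant base flow and runoff conductivities); a minimal sketch, with made-up numbers, is:

        import numpy as np

        def cmb_baseflow(q, c, c_bf, c_ro):
            """Conductivity mass-balance separation of streamflow.

            q    : streamflow series
            c    : streamflow conductivity series
            c_bf : base flow conductivity (streamflow conductivity at lowest flows)
            c_ro : runoff conductivity (streamflow conductivity at highest flows)
            """
            frac = (c - c_ro) / (c_bf - c_ro)      # base flow fraction of flow
            return q * np.clip(frac, 0.0, 1.0)

        q = np.array([1.0, 5.0, 12.0, 6.0, 2.0])     # discharge, m^3/s
        c = np.array([500., 180., 90., 210., 430.])  # conductivity, uS/cm
        bf = cmb_baseflow(q, c, c_bf=520.0, c_ro=50.0)
        print("base flow fraction of total flow:", float(bf.sum() / q.sum()))

        # calibrated HYSEP time parameter from basin area (relationship above)
        A = 12.0                                     # basin area, km^2
        print("N =", 0.46 * A ** 0.44, "days")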

  14. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.

  15. A decomposition method based on a model of continuous change.

    PubMed

    Horiuchi, Shiro; Wilmoth, John R; Pletcher, Scott D

    2008-11-01

    A demographic measure is often expressed as a deterministic or stochastic function of multiple variables (covariates), and a general problem (the decomposition problem) is to assess contributions of individual covariates to a difference in the demographic measure (dependent variable) between two populations. We propose a method of decomposition analysis based on an assumption that covariates change continuously along an actual or hypothetical dimension. This assumption leads to a general model that logically justifies the additivity of covariate effects and the elimination of interaction terms, even if the dependent variable itself is a nonadditive function. A comparison with earlier methods illustrates other practical advantages of the method: in addition to an absence of residuals or interaction terms, the method can easily handle a large number of covariates and does not require a logically meaningful ordering of covariates. Two empirical examples show that the method can be applied flexibly to a wide variety of decomposition problems. This study also suggests that when data are available at multiple time points over a long interval, it is more accurate to compute an aggregated decomposition based on multiple subintervals than to compute a single decomposition for the entire study period.

  16. Sensitivity based method for structural dynamic model improvement

    NASA Astrophysics Data System (ADS)

    Lin, R. M.; Du, H.; Ong, J. H.

    1993-05-01

    Sensitivity analysis, the study of how a structure's dynamic characteristics change with design variables, has been used to predict structural modification effects in design for many decades. In this paper, methods for calculating the eigensensitivity, frequency response function sensitivity and its modified new formulation are presented. The implementation of these sensitivity analyses to the practice of finite element model improvement using vibration test data, which is one of the major applications of experimental modal testing, is discussed. Since it is very difficult in practice to measure all the coordinates which are specified in the finite element model, sensitivity based methods become essential and are, in fact, the only appropriate methods of tackling the problem of finite element model improvement. Comparisons of these methods are made in terms of the amount of measured data required, the speed of convergence and the magnitudes of modelling errors. Also, it is identified that the inverse iteration technique can be effectively used to minimize the computational costs involved. The finite element model of a plane truss structure is used in numerical case studies to demonstrate the effectiveness of the applications of these sensitivity based methods to practical engineering structures.

  17. CEMS using hot wet extractive method based on DOAS

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Chi; Sun, Changku

    2011-11-01

    A continuous emission monitoring system (CEMS) using a hot-wet extractive method based on differential optical absorption spectroscopy (DOAS) is designed. The developed system is applied to retrieving the concentrations of SO2 and NOx in flue gas on site. The flue gas is carried along a heated sample line into the sample pool at a constant temperature above the dew point. In this way, the adverse impact of water vapor on measurement accuracy is greatly reduced, and on-line calibration is implemented. The flue gas is then discharged from the sample pool after the measuring process is complete. The on-site applicability of the system is enhanced by using a Programmable Logic Controller (PLC) to control each valve in the system during the measuring and on-line calibration processes. The concentration retrieval method used in the system is based on the nonlinear partial least squares (PLS) regression method. The relationship between the known concentrations and the differential absorption features gathered by the PLS nonlinear method can be determined after the on-line calibration process; the concentrations of SO2 and NOx can then be easily measured according to this relationship. The concentration retrieval method can separate information from noise effectively, which improves the measuring accuracy of the system. SO2 at four different concentrations was measured by the system under laboratory conditions. The results prove that the full-scale error of this system is less than 2% FS.
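
    A toy version of the PLS retrieval step with scikit-learn: calibrate on known-concentration spectra, then predict an unknown. The synthetic absorption band and noise level are assumptions, not the system's real spectra.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(5)

        # synthetic calibration set: differential absorption spectra vs concentration
        n_wl = 120
        conc = np.linspace(0.0, 1.0, 20)           # known SO2 concentrations
        band = np.exp(-0.5 * ((np.arange(n_wl) - 60) / 8.0) ** 2)
        spectra = conc[:, None] * band + rng.normal(scale=0.01, size=(20, n_wl))

        pls = PLSRegression(n_components=3)
        pls.fit(spectra, conc)                     # on-line calibration step

        # retrieve the concentration of an "unknown" flue-gas spectrum
        unknown = 0.42 * band + rng.normal(scale=0.01, size=n_wl)
        print("retrieved concentration:", float(pls.predict(unknown[None, :])[0, 0]))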

  18. Improved image fusion method based on NSCT and accelerated NMF.

    PubMed

    Wang, Juan; Lai, Siyu; Li, Mingdong

    2012-01-01

    In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed in multi-scale and multi-direction using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to get the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. Simulated experiments prove that our method indeed improves performance when compared to PCA, NSCT-based, NMF-based and weighted NMF-based algorithms.

  20. Methods for preparing colloidal nanocrystal-based thin films

    DOEpatents

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  1. Endoscopic Skull Base Reconstruction: An Evolution of Materials and Methods.

    PubMed

    Sigler, Aaron C; D'Anza, Brian; Lobo, Brian C; Woodard, Troy; Recinos, Pablo F; Sindwani, Raj

    2017-03-31

    Endoscopic skull base surgery has developed rapidly over the last decade, in large part because of the expanding armamentarium of endoscopic repair techniques. This article reviews the available technologies and techniques, including vascularized and nonvascularized flaps, synthetic grafts, sealants and glues, and multilayer reconstruction. Understanding which of these repair methods is appropriate and under what circumstances is paramount to achieving success in this challenging but rewarding field. A graduated approach to skull base reconstruction is presented to provide a systematic framework to guide selection of repair technique to ensure a successful outcome while minimizing morbidity for the patient.

  2. Test method on infrared system range based on space compression

    NASA Astrophysics Data System (ADS)

    Chen, Zhen-xing; Shi, Sheng-bing; Han, Fu-li

    2016-09-01

    An infrared thermal imaging system generates images based on the infrared radiation difference between object and background, and operates in a passive mode. Range is an important performance characteristic and a necessary test item in appraisal tests for infrared systems. In this paper, with the aim of carrying out infrared system range tests in the laboratory, a simulated test ground is designed based on object equivalence, background simulation, object characteristic control, atmospheric attenuation characteristics, infrared jamming simulation and so on. Repeatable and controllable tests are achieved, solving the problems of the traditional field test method.

  3. A Flow SPR Immunosensor Based on a Sandwich Direct Method

    PubMed Central

    Tomassetti, Mauro; Conta, Giorgia; Campanella, Luigi; Favero, Gabriele; Sanzò, Gabriella; Mazzei, Franco; Antiochia, Riccarda

    2016-01-01

    In this study, we report the development of an SPR (Surface Plasmon Resonance) immunosensor for the detection of ampicillin, operating under flow conditions. SPR sensors based on both the direct (with immobilization of the antibody) and competitive (with immobilization of the antigen) methods did not allow the detection of ampicillin. Therefore, a sandwich-based sensor was developed, which showed a good linear response towards ampicillin between 10^-3 and 10^-1 M, a measurement time of ≤20 min, and a high selectivity both towards β-lactam antibiotics and antibiotics of different classes. PMID:27187486

  4. Interferometric measurement method of thin film thickness based on FFT

    NASA Astrophysics Data System (ADS)

    Shuai, Gaolong; Su, Junhong; Yang, Lihong; Xu, Junqi

    2009-05-01

    The kernel of modern interferometry is to obtain the required surface shape and parameters by processing interferograms with reasonable algorithms. This paper studies the basic principle of interferometry involving 2-D FFT and proposes a new method for measuring thin film thickness based on FFT: with a CCD receiving the image and an acquisition card collecting it, with the help of a Twyman-Green interferometer, a fringe interferogram of the measured thin film is obtained. Based on interferogram processing knowledge, algorithm software was developed to realize identification of the film edges, regional extension, filtering, unwrapping of the wrapped phase, etc. In this way the distribution of the film-coated surface can be obtained and the thickness of thin film samples automatically measured. The findings indicate that the PV value and RMS value of the measured film samples are 0.256 λ and 0.068 λ respectively, and prove that the new method has high precision.

  5. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
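
    The abstract's linear inoculum/lag relationship can be turned into a small calibration: fit the line to the quoted endpoints and invert it to estimate an unknown inoculum from an observed lag (the two anchor points come from the abstract; the 4 h observation is invented):

        import numpy as np

        # points quoted above: ~1 h lag at 10^6 cells/ml, ~7 h at 1 cell/ml
        log_n = np.array([6.0, 0.0])         # log10(cells/ml)
        lag_h = np.array([1.0, 7.0])

        # linear model: lag = a * log10(N) + b
        a, b = np.polyfit(log_n, lag_h, 1)
        print(f"lag shortens by {-a * 60:.0f} min per 10-fold inoculum increase")

        # invert the calibration for an observed 4-hour lag
        observed_lag = 4.0
        print(f"estimated inoculum ~ 10^{(observed_lag - b) / a:.1f} cells/ml")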

  6. Method of pectus excavatum measurement based on structured light technique

    NASA Astrophysics Data System (ADS)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej

    2009-07-01

    We present an automatic method for assessing pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on a structured light projection method is built to capture the shape of the body surface of the patients. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained; its main steps are: normalization of the trunk model orientation, cutting the model into slices, analysis of each slice's shape, selection of the proper slice for the assessment of the patient's pectus excavatum, and calculation of its shape parameter. We develop a new shape parameter (I3ds) that shows high correlation with the computed tomography (CT) Haller index widely used for the assessment of pectus excavatum. Clinical results and the evaluation of the developed indexes are presented.

  7. A Model Based Security Testing Method for Protocol Implementation

    PubMed Central

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementation is important and hard to be verified. Since the penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate the suitable test cases to verify the security of protocol implementation. PMID:25105163

  8. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  9. Expiratory model-based method to monitor ARDS disease state

    PubMed Central

    2013-01-01

    Introduction: Model-based methods can be used to characterise patient-specific condition and response to mechanical ventilation (MV) during treatment for acute respiratory distress syndrome (ARDS). Conventional metrics of respiratory mechanics are based on inspiration only, neglecting data from the expiration cycle. However, it is hypothesised that expiratory data can be used to determine an alternative metric, offering another means to track patient condition and guide positive end expiratory pressure (PEEP) selection. Methods: Three fully sedated, oleic acid induced ARDS piglets underwent three experimental phases. Phase 1 was a healthy state recruitment manoeuvre. Phase 2 was a progression from a healthy state to an oleic acid induced ARDS state. Phase 3 was an ARDS state recruitment manoeuvre. The expiratory time-constant model parameter was determined for every breathing cycle for each subject. Trends were compared to estimates of lung elastance determined by means of an end-inspiratory pause method and an integral-based method. All experimental procedures, protocols and the use of data in this study were reviewed and approved by the Ethics Committee of the University of Liege Medical Faculty. Results: The overall median absolute percentage fitting error for the expiratory time-constant model across all three phases was less than 10% for each subject, indicating the capability of the model to capture the mechanics of breathing during expiration. Provided the respiratory resistance was constant, the model was able to adequately identify trends and fundamental changes in respiratory mechanics. Conclusion: Overall, this is a proof-of-concept study that shows the potential of continuous monitoring of respiratory mechanics in clinical practice. Respiratory system mechanics vary with disease state development and in response to MV settings. Therefore, titrating PEEP to minimal elastance theoretically results in optimal PEEP selection. Trends matched clinical

  10. A PDE-Based Fast Local Level Set Method

    NASA Astrophysics Data System (ADS)

    Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo

    1999-11-01

    We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys.79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.

  11. A vision-based method for planar position measurement

    NASA Astrophysics Data System (ADS)

    Chen, Zong-Hao; Huang, Peisen S.

    2016-12-01

    In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (XY-θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis and 4 μrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 μrad, respectively.
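
    To make the phase-based principle concrete, the sketch below recovers a sub-pixel shift of a 2D periodic pattern from the phase of its fundamental DFT component. This is the textbook phase-shift estimate, not the paper's refined estimator.

        import numpy as np

        n, period = 256, 16                  # image size and pattern period (px)
        k = n // period                      # fundamental frequency index
        u = np.arange(n)

        def pattern(sx, sy):
            """Crossed-grating intensity pattern shifted by (sx, sy) pixels."""
            gx = np.cos(2 * np.pi * (u - sx) / period)
            gy = np.cos(2 * np.pi * (u - sy) / period)
            return 1.0 + 0.5 * (gx[None, :] + gy[:, None])

        def fundamental_phases(img):
            F = np.fft.fft2(img)
            return np.angle(F[0, k]), np.angle(F[k, 0])  # x- and y-grating phases

        ref = fundamental_phases(pattern(0.0, 0.0))
        mov = fundamental_phases(pattern(0.37, -0.21))   # shift to recover

        # for cos(w(u - s)), the DFT phase at the fundamental is -w*s
        dx = -(mov[0] - ref[0]) * period / (2 * np.pi)
        dy = -(mov[1] - ref[1]) * period / (2 * np.pi)
        print(f"estimated shift: dx = {dx:.3f}, dy = {dy:.3f} px")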

  12. Novel crystal timing calibration method based on total variation

    NASA Astrophysics Data System (ADS)

    Yu, Xingjian; Isobe, Takashi; Watanabe, Mitsuo; Liu, Huafeng

    2016-11-01

    A novel crystal timing calibration method based on total variation (TV), abbreviated as 'TV merge', has been developed for a high-resolution positron emission tomography (PET) system. The proposed method was developed for a system with a large number of crystals and can provide timing calibration at the crystal level. In the proposed method, the timing calibration process is formulated as a linear problem. To robustly optimize the timing resolution, a TV constraint is added to the linear equation. Moreover, to solve the computer memory problem associated with calculating the timing calibration factors for systems with a large number of crystals, a merge component is used to obtain the crystal-level timing calibration values. In contrast to other conventional methods, data measured from a standard cylindrical phantom filled with a radioisotope solution were sufficient for performing a high-precision crystal-level timing calibration. In this paper, both simulation and experimental studies were performed to demonstrate the effectiveness and robustness of the TV merge method. We compare the timing resolutions of a 22Na point source, located in the field of view (FOV) of the brain PET system, obtained with various calibration techniques. After implementing the TV merge method, the timing resolution improved from 3.34 ns at full width at half maximum (FWHM) to 2.31 ns FWHM.

  13. Evolutionary game theory using agent-based methods.

    PubMed

    Adami, Christoph; Schossau, Jory; Hintze, Arend

    2016-12-01

    Evolutionary game theory is a successful mathematical framework geared towards understanding the selective pressures that affect the evolution of the strategies of agents engaged in interactions with potential conflicts. While a mathematical treatment of the costs and benefits of decisions can predict the optimal strategy in simple settings, more realistic settings such as finite populations, non-vanishing mutations rates, stochastic decisions, communication between agents, and spatial interactions, require agent-based methods where each agent is modeled as an individual, carries its own genes that determine its decisions, and where the evolutionary outcome can only be ascertained by evolving the population of agents forward in time. While highlighting standard mathematical results, we compare those to agent-based methods that can go beyond the limitations of equations and simulate the complexity of heterogeneous populations and an ever-changing set of interactors. We conclude that agent-based methods can predict evolutionary outcomes where purely mathematical treatments cannot tread (for example in the weak selection-strong mutation limit), but that mathematics is crucial to validate the computational simulations.
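
    As a minimal agent-based illustration, the sketch below evolves a finite population of cooperate/defect agents in a donation game with mutation; with no reciprocity or assortment, cooperation collapses, matching the analytic prediction. All parameters are arbitrary.

        import numpy as np

        rng = np.random.default_rng(6)

        N, gens, mu = 200, 400, 0.01       # population, generations, mutation rate
        b, c = 3.0, 1.0                    # donation-game benefit and cost

        # each agent carries one "gene": 1 = cooperate, 0 = defect
        pop = rng.integers(0, 2, size=N)

        for _ in range(gens):
            # payoff from one random pairwise interaction per agent
            partners = rng.permutation(N)
            payoff = b * pop[partners] - c * pop
            # fitness-proportional reproduction (exponential fitness mapping)
            fitness = np.exp(0.5 * payoff)
            parents = rng.choice(N, size=N, p=fitness / fitness.sum())
            pop = pop[parents]
            # mutation flips a strategy with probability mu
            flip = rng.random(N) < mu
            pop[flip] = 1 - pop[flip]

        print("final fraction of cooperators:", pop.mean())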

  14. Development of DNA-based Identification methods to track the ...

    EPA Pesticide Factsheets

    The ability to track the identity and abundance of larval fish, which are ubiquitous during spawning season, may lead to a greater understanding of fish species distributions in Great Lakes nearshore areas, including early detection of invasive fish species before they become established. However, larval fish are notoriously hard to identify using traditional morphological techniques. While DNA-based identification methods could increase the ability of aquatic resource managers to determine larval fish composition, use of these methods in aquatic surveys is still uncommon and presents many challenges. In response to this need, we have been working with the U.S. Fish and Wildlife Service to develop field and laboratory methods to facilitate the identification of larval fish using DNA meta-barcoding. In 2012, we initiated a pilot project to develop a workflow for conducting DNA-based identification, and compared the species composition at sites within the St. Louis River Estuary of Lake Superior using traditional identification versus DNA meta-barcoding. In 2013, we extended this research to conduct DNA identification of fish larvae collected from multiple nearshore areas of the Great Lakes by the USFWS. The species composition of larval fish generally mirrored that of fish species known from the same areas, but was influenced by the timing and intensity of sampling. Results indicate that DNA-based identification needs only very low levels of biomass to detect pre

  15. Crack Diagnosis of Wind Turbine Blades Based on EMD Method

    NASA Astrophysics Data System (ADS)

    Hong-yu, CUI; Ning, DING; Ming, HONG

    2016-11-01

    Wind turbine blades are both the source of power and the core technology of wind generators. After long periods of operation or under extreme conditions, cracks or damage can occur on the surface of the blades. If the wind generator continues to operate in this state, the crack will expand until the blade breaks, which can lead to incalculable losses. Therefore, a crack diagnosis method based on EMD for wind turbine blades is proposed in this paper. Based on aerodynamics and fluid-structure coupling theory, an aero-elastic analysis of a wind turbine blade model is first performed in ANSYS Workbench. Second, based on the aero-elastic analysis and the EMD method, blade cracks are diagnosed and identified in the time and frequency domains, respectively. Finally, the blade model, strain gauges, dynamic signal acquisition and other equipment are used in an experimental study of the aero-elastic behavior and crack damage diagnosis of wind turbine blades to verify the crack diagnosis method proposed in this paper.
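
    A minimal sketch of the signal-processing half of such a pipeline is shown below, assuming the third-party PyEMD package ("EMD-signal" on PyPI) for the decomposition; the strain signal and frequencies are synthetic stand-ins for measured blade data.

    ```python
    # Sketch: decompose a (synthetic) blade strain signal into IMFs with EMD,
    # then inspect each IMF's instantaneous frequency via the Hilbert transform.
    import numpy as np
    from PyEMD import EMD              # assumed dependency: pip install EMD-signal
    from scipy.signal import hilbert

    fs = 1000.0
    t = np.arange(0, 2.0, 1 / fs)
    signal = np.sin(2 * np.pi * 12 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)

    imfs = EMD().emd(signal)           # intrinsic mode functions
    for k, imf in enumerate(imfs):
        phase = np.unwrap(np.angle(hilbert(imf)))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)
        print(f"IMF {k}: mean instantaneous frequency {inst_freq.mean():.1f} Hz")
    ```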

  16. Accurate measurement method for tube's endpoints based on machine vision

    NASA Astrophysics Data System (ADS)

    Liu, Shaoli; Jin, Peng; Liu, Jianhua; Wang, Xiao; Sun, Peng

    2017-01-01

    Tubes are used widely in aerospace vehicles, and their accurate assembly directly affects assembly reliability and product quality. It is important to measure a processed tube's endpoints and then correct any geometric errors accordingly. However, the traditional tube inspection method is time-consuming and involves complex operations. Therefore, a new measurement method for a tube's endpoints based on machine vision is proposed. First, reflected light on the tube's surface is removed by using photometric linearization. Then, based on the optimization model for the tube's endpoint measurements and the principle of stereo matching, the global coordinates and the relative distance of the tube's endpoints are obtained. To confirm the feasibility, 11 tubes were processed to remove the reflected light, and the endpoint positions of the tubes were measured. The experimental results show that the measurement repeatability accuracy is 0.167 mm and the absolute accuracy is 0.328 mm. The measurement takes less than 1 min. The proposed method based on machine vision can measure a tube's endpoints without any surface treatment or additional tools and can realize online measurement.
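
    The stereo step at the heart of such a measurement reduces to triangulating matched endpoint positions from two calibrated views; the sketch below uses OpenCV's triangulatePoints with made-up projection matrices and image coordinates.

    ```python
    # Sketch: triangulate a tube endpoint from two calibrated camera views.
    import numpy as np
    import cv2

    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # left camera
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])  # right camera, 10 cm baseline
    pt_left = np.array([[320.0], [240.0]])    # endpoint in left image (illustrative coords)
    pt_right = np.array([[300.0], [240.0]])   # matched point in right image

    X_h = cv2.triangulatePoints(P1, P2, pt_left, pt_right)     # homogeneous 4x1
    X = (X_h[:3] / X_h[3]).ravel()                             # 3D endpoint
    print("endpoint (camera coordinates):", X)
    ```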

  17. Bearing diagnostics: A method based on differential geometry

    NASA Astrophysics Data System (ADS)

    Tian, Ye; Wang, Zili; Lu, Chen; Wang, Zhipeng

    2016-12-01

    The structures around bearings are complex, and their working environment is variable. These conditions cause the collected vibration signals to exhibit nonlinear, non-stationary, and chaotic characteristics that make noise reduction, feature extraction, fault diagnosis, and health assessment significantly challenging. Thus, a set of differential geometry-based methods with advantages in nonlinear analysis is presented in this study. For noise reduction, the Local Projection method is modified by both selecting the neighborhood radius based on empirical mode decomposition and determining the noise subspace constrained by neighborhood distribution information. For feature extraction, Hessian locally linear embedding is introduced to acquire manifold features from the manifold topological structures, and singular values of eigenmatrices as well as several specific frequency amplitudes in spectrograms are subsequently extracted to reduce the complexity of the manifold features. For fault diagnosis, an information geometry-based support vector machine is applied to classify the fault states. For health assessment, the manifold distance is employed to represent the health information, and a Gaussian mixture model is utilized to calculate confidence values, which directly reflect the health status. Case studies on Lorenz signals and vibration datasets of bearings demonstrate the effectiveness of the proposed methods.
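
    For the feature-extraction step, Hessian locally linear embedding is available in scikit-learn; the sketch below applies it to a random stand-in for delay-embedded vibration frames, just to show the call pattern.

    ```python
    # Sketch: manifold features via Hessian LLE (scikit-learn).
    import numpy as np
    from sklearn.manifold import LocallyLinearEmbedding

    X = np.random.default_rng(0).normal(size=(300, 20))  # stand-in vibration frames
    hlle = LocallyLinearEmbedding(n_neighbors=12, n_components=2, method="hessian")
    Y = hlle.fit_transform(X)   # low-dimensional manifold features
    print(Y.shape)              # (300, 2)
    ```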

  18. Robust PCA based method for discovering differentially expressed genes.

    PubMed

    Liu, Jin-Xing; Wang, Yu-Tian; Zheng, Chun-Hou; Sha, Wen; Mi, Jian-Xun; Xu, Yong

    2013-01-01

    How to identify a set of genes that are relevant to a key biological process is an important issue in current molecular biology. In this paper, we propose a novel method to discover differentially expressed genes based on robust principal component analysis (RPCA). In our method, we treat the differentially and non-differentially expressed genes as the perturbation signals S and the low-rank matrix A, respectively. The perturbation signals S can be recovered from the gene expression data by using RPCA. To discover the differentially expressed genes associated with specific biological processes or functions, the scheme is as follows. First, the matrix D of expression data is decomposed into the sum of two matrices A and S by using RPCA. Second, the differentially expressed genes are identified based on matrix S. Finally, the differentially expressed genes are evaluated by tools based on Gene Ontology. A large number of experiments on hypothetical and real gene expression data are also provided, and the experimental results show that our method is efficient and effective.
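
    A compact way to see the decomposition D = A + S is the alternating-thresholding sketch below (singular-value thresholding for the low-rank part, entrywise soft-thresholding for the sparse part); it is a simplification of published RPCA solvers, not the authors' exact algorithm.

    ```python
    # Toy RPCA: D = A (low-rank, non-differential genes) + S (sparse, differential).
    import numpy as np

    def soft(X, t):
        return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

    def rpca(D, lam=None, tau=None, iters=100):
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam  # usual PCP weight
        tau = 0.1 * np.abs(D).max() if tau is None else tau
        S = np.zeros_like(D)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(D - S, full_matrices=False)
            A = (U * soft(s, tau)) @ Vt     # singular-value thresholding
            S = soft(D - A, lam * tau)      # entrywise soft-thresholding
        return A, S
    ```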

  19. Method of stereo matching based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lu, Chaohui; An, Ping; Zhang, Zhaoyang

    2003-09-01

    In this paper, a new stereo matching scheme based on image edges and a genetic algorithm (GA) is presented to improve on conventional stereo matching methods. In order to extract robust edge features for stereo matching, an infinite symmetric exponential filter (ISEF) is first applied to remove image noise, and a nonlinear Laplace operator together with the local variance of intensity is then used to detect edges. Apart from the detected edges, the polarity of the edge pixels is also obtained. As an efficient search method, a genetic algorithm is applied to find the best matching pairs. For this purpose, some new ideas are developed for applying genetic algorithms to stereo matching. Experimental results show that the proposed methods are effective and can obtain good results.

  20. The professional portfolio: an evidence-based assessment method.

    PubMed

    Byrne, Michelle; Schroeter, Kathryn; Carter, Shannon; Mower, Julie

    2009-12-01

    Competency assessment is critical for a myriad of disciplines, including medicine, law, education, and nursing. Many nurse managers and educators are responsible for nursing competency assessment, and assessment results are often used for annual reviews, promotions, and satisfying accrediting agencies' requirements. Credentialing bodies continually seek methods to measure and document the continuing competence of licensees or certificants. Many methods and frameworks for continued competency assessment exist. The portfolio process is one method to validate personal and professional accomplishments in an interactive, multidimensional manner. This article illustrates how portfolios can be used to assess competence. One specialty nursing certification board's process of creating an evidence-based portfolio for recertification or reactivation of a credential is used as an example. The theoretical background, development process, implementation, and future implications may serve as a template for other organizations in developing their own portfolio models.

  1. Advances in nucleic acid-based detection methods.

    PubMed Central

    Wolcott, M J

    1992-01-01

    Laboratory techniques based on nucleic acid methods have increased in popularity over the last decade with clinical microbiologists and other laboratory scientists who are concerned with the diagnosis of infectious agents. This increase in popularity is a result primarily of advances made in nucleic acid amplification and detection techniques. Polymerase chain reaction, the original nucleic acid amplification technique, changed the way many people viewed and used nucleic acid techniques in clinical settings. After the potential of polymerase chain reaction became apparent, other methods of nucleic acid amplification and detection were developed. These alternative nucleic acid amplification methods may become serious contenders for application to routine laboratory analyses. This review presents some background information on nucleic acid analyses that might be used in clinical and anatomical laboratories and describes some recent advances in the amplification and detection of nucleic acids. PMID:1423216

  2. Quench Protection System based on Active Power Method

    NASA Astrophysics Data System (ADS)

    Nanato, Nozomu

    In superconducting coils, local and excessive Joule heating may damage the superconducting windings when a quench occurs, and it is therefore essential that the quench be detected quickly and precisely so that the coils can be safely discharged. We have previously presented a quench protection system based on the active power method, which detects a quench by measuring the instantaneous active power generated in a superconducting coil. A protection system based on this method is robust against the inductive voltage and noise that may otherwise cause incorrect quench recognition. However, while the system is useful for a single coil, it is vulnerable in magnetically coupled multi-coil configurations, such as high-field superconducting coils, because it cannot avoid incorrect quench recognition caused by the mutual inductive voltage from the other coils. This paper presents a method to improve the characteristics of the active power method by cancelling the mutual inductive voltage. Experimental results on quench protection for small Bi2223 coils show that the proposed system is useful for magnetically coupled coils.
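
    In signal terms, the improvement amounts to subtracting both the self- and mutual-inductive voltages before forming the instantaneous power; the sketch below shows that arithmetic with illustrative inductances and sampled waveforms.

    ```python
    # Sketch: active (resistive) power in coil 1 with mutual-inductance cancellation.
    import numpy as np

    def active_power(v1, i1, i2, L1, M, dt):
        """Instantaneous active power in coil 1: remove the self-inductive
        voltage L1*di1/dt and the mutual voltage M*di2/dt from the terminal
        voltage, then multiply by the coil current."""
        di1 = np.gradient(i1, dt)
        di2 = np.gradient(i2, dt)
        return (v1 - L1 * di1 - M * di2) * i1

    # a quench would be flagged when this power stays above a threshold
    ```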

  3. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs offer disabled people a means of communicating with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC)-based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attained good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, which is a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proved the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications.
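
    A bare-bones temporal MUSIC detector for SSVEP might look like the sketch below: delay-embed one channel, split signal and noise subspaces by eigendecomposition, and score candidate stimulus frequencies by the pseudospectrum. All parameters are illustrative.

    ```python
    # Sketch: temporal MUSIC pseudospectrum over candidate SSVEP frequencies.
    import numpy as np

    def music_scores(x, fs, freqs, m=40, n_sin=1):
        N = len(x) - m + 1
        X = np.stack([x[i:i + m] for i in range(N)], axis=1)  # delay embedding
        w, V = np.linalg.eigh(X @ X.T / N)                    # ascending eigenvalues
        En = V[:, : m - 2 * n_sin]        # noise subspace (2 dims per sinusoid)
        n = np.arange(m)
        return np.array([1.0 / np.linalg.norm(En.T @ np.exp(2j * np.pi * f * n / fs)) ** 2
                         for f in freqs])

    fs = 250.0
    t = np.arange(0, 2, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.default_rng(0).normal(size=t.size)
    cands = [8.0, 10.0, 12.0, 15.0]
    print(cands[int(np.argmax(music_scores(x, fs, cands)))])  # expect 10.0
    ```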

  4. An efficient neural network based method for medical image segmentation.

    PubMed

    Torbati, Nima; Ayatollahi, Ahmad; Kermani, Ali

    2014-01-01

    The aim of this research is to propose a new neural network based method for medical image segmentation. Firstly, a modified self-organizing map (SOM) network, named moving average SOM (MA-SOM), is utilized to segment medical images. After the initial segmentation stage, a merging process is designed to connect the objects of a joint cluster together. A two-dimensional (2D) discrete wavelet transform (DWT) is used to build the input feature space of the network. The experimental results show that MA-SOM is robust to noise and it determines the input image pattern properly. The segmentation results of breast ultrasound images (BUS) demonstrate that there is a significant correlation between the tumor region selected by a physician and the tumor region segmented by our proposed method. In addition, the proposed method segments X-ray computerized tomography (CT) and magnetic resonance (MR) head images much better than the incremental supervised neural network (ISNN) and SOM-based methods.

  5. A novel duplicate images detection method based on PLSA model

    NASA Astrophysics Data System (ADS)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2012-01-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results. Detecting and clustering the duplicate images together facilitates users' viewing. A novel method is presented in this paper to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing the affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to feature-level alterations of the images. Experiments demonstrate the effectiveness of this method.

  6. A novel duplicate images detection method based on PLSA model

    NASA Astrophysics Data System (ADS)

    Liao, Xiaofeng; Wang, Yongji; Ding, Liping; Gu, Jian

    2011-12-01

    Web image search results usually contain duplicate copies. This paper considers the problem of detecting and clustering duplicate images contained in web image search results. Detecting and clustering the duplicate images together facilitates users' viewing. A novel method is presented in this paper to detect and cluster duplicate images by measuring the similarity between their topics. More specifically, images are viewed as documents consisting of visual words formed by vector-quantizing the affine-invariant visual features. Then a statistical model widely used in the text domain, the PLSA (Probabilistic Latent Semantic Analysis) model, is utilized to map images into a probabilistic latent semantic space. Because the main content remains unchanged despite small digital alterations, duplicate images will be close to each other in the derived semantic space. Based on this, a simple clustering process can successfully detect duplicate images and cluster them together. Compared with methods based on comparing hash values of visual words, this method is more robust to feature-level alterations of the images. Experiments demonstrate the effectiveness of this method.

  7. Gradient-based image recovery methods from incomplete Fourier measurements.

    PubMed

    Patel, Vishal M; Maleh, Ray; Gilbert, Anna C; Chellappa, Rama

    2012-01-01

    A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-squares optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.
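
    The second stage (image from gradients) can be illustrated with the classical FFT Poisson solver under periodic boundary conditions; this is a generic reconstruction sketch, not GradientRec itself.

    ```python
    # Sketch: recover an image (up to a constant) from its forward-difference
    # gradients via an FFT-based Poisson solve with periodic boundaries.
    import numpy as np

    def poisson_reconstruct(gx, gy):
        H, W = gx.shape
        lap = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
        u = np.fft.fftfreq(W)[None, :]
        v = np.fft.fftfreq(H)[:, None]
        denom = 2 * np.cos(2 * np.pi * u) + 2 * np.cos(2 * np.pi * v) - 4
        F = np.fft.fft2(lap)
        denom[0, 0] = 1.0          # DC term is unconstrained by gradients
        F /= denom
        F[0, 0] = 0.0
        return np.real(np.fft.ifft2(F))

    img = np.random.default_rng(1).random((64, 64))
    gx = np.roll(img, -1, axis=1) - img
    gy = np.roll(img, -1, axis=0) - img
    rec = poisson_reconstruct(gx, gy)
    print(np.allclose(rec - rec.mean(), img - img.mean()))  # True
    ```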

  8. A refined wideband acoustical holography based on equivalent source method

    PubMed Central

    Ping, Guoli; Chu, Zhigang; Xu, Zhongming; Shen, Linbang

    2017-01-01

    This paper is concerned with an acoustical engineering and mathematical physics problem in near-field acoustical holography based on the equivalent source method (ESM-based NAH). An important mathematical physics problem in ESM-based NAH is solving for the equivalent source strength, for which multiple algorithms exist, such as Tikhonov regularization ESM (TRESM), iterative weighted ESM (IWESM) and steepest descent iteration ESM (SDIESM). To obtain a solving algorithm that achieves better reconstruction performance over a wide frequency band, a refined wideband acoustical holography (RWAH) is proposed. RWAH adopts IWESM below a transition frequency and switches to SDIESM above that transition frequency, and the principal components of the input data in RWAH are truncated. Further, the superiority of RWAH is verified by comparing the overall performance of TRESM, IWESM, SDIESM and RWAH. Finally, experiments were conducted, confirming that RWAH can achieve better reconstruction performance over a wide frequency band. PMID:28266531

  9. Method based on bioinspired sample improves autofocusing performances

    NASA Astrophysics Data System (ADS)

    Cao, Jie; Cheng, Yang; Wang, Peng; Peng, Yuxin; Zhang, Kaiyu; Wu, Leina; Xia, Wenze; Yu, Haoyong

    2016-10-01

    In order to resolve the trade-off between fast autofocusing speed and high-volume data processing, we propose a bioinspired sampling method based on a retina-like structure. We develop retina-like models and analyze the division of the sampling structure. The optimal retina-like sample is obtained by analyzing two key parameters of the retina-like structure (the number of sectors and the radius of the blind area) through experiments. Under typical autofocus functions, including Vollath-4, Laplacian, Tenengrad, spatial frequency, and sum-modified-Laplacian (SML), we carry out comparative experiments on computation time for the retina-like sample and a traditional uniform sample. The results show that the retina-like sample is suitable for these autofocus functions. With the SML autofocus function, the average computation time decreases from 3.5 s for the uniform sample to 2.1 s for the retina-like sample.
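
    To make the focus-measure side concrete, the sketch below evaluates the sum-modified-Laplacian (one of the autofocus functions listed above) only at pixels kept by a radius-dependent, retina-like mask; the mask construction is our own illustration.

    ```python
    # Sketch: SML focus measure restricted to a retina-like sampling mask.
    import numpy as np

    def sml(img, mask=None, step=1):
        f = img.astype(float)
        ml = (np.abs(2 * f - np.roll(f, step, axis=1) - np.roll(f, -step, axis=1))
              + np.abs(2 * f - np.roll(f, step, axis=0) - np.roll(f, -step, axis=0)))
        return ml[mask].sum() if mask is not None else ml.sum()

    # retina-like mask: dense sampling near the center, sparse toward the edge
    h, w = 480, 640
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    rng = np.random.default_rng(0)
    mask = rng.random((h, w)) < np.clip(1.0 - r / r.max(), 0.05, 1.0)
    ```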

  10. A refined wideband acoustical holography based on equivalent source method

    NASA Astrophysics Data System (ADS)

    Ping, Guoli; Chu, Zhigang; Xu, Zhongming; Shen, Linbang

    2017-03-01

    This paper is concerned with an acoustical engineering and mathematical physics problem in near-field acoustical holography based on the equivalent source method (ESM-based NAH). An important mathematical physics problem in ESM-based NAH is solving for the equivalent source strength, for which multiple algorithms exist, such as Tikhonov regularization ESM (TRESM), iterative weighted ESM (IWESM) and steepest descent iteration ESM (SDIESM). To obtain a solving algorithm that achieves better reconstruction performance over a wide frequency band, a refined wideband acoustical holography (RWAH) is proposed. RWAH adopts IWESM below a transition frequency and switches to SDIESM above that transition frequency, and the principal components of the input data in RWAH are truncated. Further, the superiority of RWAH is verified by comparing the overall performance of TRESM, IWESM, SDIESM and RWAH. Finally, experiments were conducted, confirming that RWAH can achieve better reconstruction performance over a wide frequency band.

  11. Sparse coding based feature representation method for remote sensing images

    NASA Astrophysics Data System (ADS)

    Oguslu, Ender

    In this dissertation, we study a sparse coding based feature representation method for the classification of multispectral and hyperspectral images (HSI). Existing feature representation systems based on the sparse signal model are computationally expensive, requiring the solution of a convex optimization problem to learn a dictionary. A sparse coding feature representation framework for the classification of HSI is presented that alleviates the complexity of sparse coding through sub-band construction, dictionary learning, and encoding steps. In the framework, we construct the dictionary based upon the extracted sub-bands from the spectral representation of a pixel. In the encoding step, we utilize a soft threshold function to obtain sparse feature representations for HSI. Experimental results showed that a randomly selected dictionary could be as effective as a dictionary learned from optimization. The new representation usually has a very high dimensionality, requiring substantial computational resources. In addition, the spatial information of the HSI data has not been included in the representation. Thus, we modify the framework by incorporating the spatial information of the HSI pixels and reducing the dimension of the new sparse representations. The enhanced model, called sparse coding based dense feature representation (SC-DFR), is integrated with a linear support vector machine (SVM) and a composite kernels SVM (CKSVM) classifier to discriminate different types of land cover. We evaluated the proposed algorithm on three well known HSI datasets and compared our method to four recently developed classification methods: SVM, CKSVM, simultaneous orthogonal matching pursuit (SOMP) and image fusion and recursive filtering (IFRF). The results from the experiments showed that the proposed method can achieve better overall and average classification accuracies with a much more compact representation, leading to more efficient sparse models for HSI classification. To further
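
    The encoding step the dissertation describes can be pictured as below: correlate a pixel's spectral (sub-band) vector with unit-norm dictionary atoms, then soft-threshold; the dimensions, threshold, and random dictionary are placeholders.

    ```python
    # Sketch: sparse feature by soft-thresholded correlation with a random dictionary.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, n_atoms = 50, 200
    D = rng.normal(size=(n_bands, n_atoms))
    D /= np.linalg.norm(D, axis=0)            # unit-norm atoms

    def encode(x, D, alpha=0.5):
        c = D.T @ x
        return np.sign(c) * np.maximum(np.abs(c) - alpha, 0.0)

    f = encode(rng.normal(size=n_bands), D)   # sparse representation of one pixel
    print(np.count_nonzero(f), "of", n_atoms, "coefficients active")
    ```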

  12. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in achieving semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need to make serious efforts to improve archetype design processes.

  13. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects. PMID:24453925

  14. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  15. Mode separation of Lamb waves based on dispersion compensation method.

    PubMed

    Xu, Kailiang; Ta, Dean; Moilanen, Petro; Wang, Weiqi

    2012-04-01

    Ultrasonic Lamb modes typically propagate as a combination of multiple dispersive wave packets. The frequency components of each mode are distributed widely in the time domain due to dispersion, and it is very challenging to separate individual modes by traditional signal processing methods. In the present study, a method of dispersion compensation is proposed for the purpose of mode separation. This numerical method compensates, i.e., compresses, the individual dispersive waveforms into temporal pulses, which thereby become nearly non-overlapping in time and frequency and can thus be extracted individually by rectangular time windows. It was further illustrated that the dispersion compensation also provides a method for predicting the plate thickness. Finally, based on the reversibility of the numerical compensation method, an artificial dispersion technique was used to restore the original waveform of each mode from the separated compensated pulse. The performance of the compensation separation techniques was evaluated by processing synthetic and experimental signals consisting of multiple Lamb modes with high dispersion. Individual modes were extracted in good accordance with the original waveforms and theoretical predictions.

  16. Variation block-based genomics method for crop plants

    PubMed Central

    2014-01-01

    Background In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which occurred by crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. Results We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. Conclusions We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding. PMID:24929792

  17. Selection of construction methods: a knowledge-based approach.

    PubMed

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.

  18. Sphere-based calibration method for trinocular vision sensor

    NASA Astrophysics Data System (ADS)

    Lu, Rui; Shao, Mingwei

    2017-03-01

    A new method to calibrate a trinocular vision sensor is proposed, and two main tasks are accomplished in this paper: determining the transformation matrix between each pair of cameras, and determining the trifocal tensor of the trinocular vision sensor. A flexible sphere target with several spherical circles is designed. Owing to the isotropy of a sphere, the trifocal tensor of the three cameras can be determined exactly from the features on the sphere target. The fundamental matrix between each pair of cameras can then be obtained, and a compatible rotation matrix and translation vector can easily be deduced based on the singular value decomposition of the fundamental matrix. In our proposed calibration method, image points are not required to be in one-to-one correspondence. Once the image points lying on the same feature are obtained, the transformation matrix between each pair of cameras, together with the trifocal tensor of the trinocular vision sensor, can be determined. Experimental results show that the proposed calibration method obtains precise results, both in measurement and in matching. The root mean square error of distance is 0.026 mm for a field of view of about 200×200 mm, and the feature matching across the three images is strict. As the projection of a sphere does not depend on its orientation, the calibration method is robust and easy to operate. Moreover, our calibration method also provides a new approach to obtaining the trifocal tensor.
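
    The decomposition step mentioned above is the classical SVD-based recovery of rotation and translation; given an essential matrix E (formed from the fundamental matrix and the intrinsics), a sketch looks like this, with the usual four candidate solutions.

    ```python
    # Sketch: recover compatible (R, t) candidates from an essential matrix.
    import numpy as np

    def decompose_essential(E):
        U, _, Vt = np.linalg.svd(E)
        if np.linalg.det(U) < 0:  U = -U    # enforce proper rotations
        if np.linalg.det(Vt) < 0: Vt = -Vt
        W = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
        R1, R2 = U @ W @ Vt, U @ W.T @ Vt
        t = U[:, 2]
        # the physical solution puts triangulated points in front of both cameras
        return [(R1, t), (R1, -t), (R2, t), (R2, -t)]
    ```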

  19. SVM Method used to Study Gender Differences Based on Microelement

    NASA Astrophysics Data System (ADS)

    Chun, Yang; Yuan, Liu; Jun, Du; Bin, Tang

    [Objective] The SVM intelligent algorithm is used to study gender differences based on microelement data, providing a reference for the application of microelements in healthy people, such as technical support for case investigations. [Method] Long-term test results on hair microelements of healthy people were consolidated. A support vector machine (SVM) was used to build a classification model for males and females based on the microelement data. The radial basis function (RBF) was adopted as the kernel function of the SVM, and the model adjusts C and σ to build the optimal classifier. [Result] Manganese, cadmium and nickel differ considerably between healthy men and women. The SVM-based microelement classification model can distinguish males from females; the correct classification ratios were 81.71% and 66.47% for SVM models based on selections of 7 and 3 test data, respectively. [Conclusion] The SVM-based classification model built on microelement data can distinguish males from females.
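
    A present-day rendering of this classification step, with an RBF kernel and a grid search over C and gamma (playing the role of σ), might look like the scikit-learn sketch below; the microelement data are random placeholders.

    ```python
    # Sketch: RBF-kernel SVM for male/female classification from microelements.
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 7))    # 7 hair-microelement measurements per subject
    y = rng.integers(0, 2, 120)      # 0 = female, 1 = male (synthetic labels)

    grid = GridSearchCV(
        make_pipeline(StandardScaler(), SVC(kernel="rbf")),
        {"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1]},
        cv=5)
    grid.fit(X, y)
    print(grid.best_params_, grid.best_score_)
    ```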

  20. Erythrocyte shape classification using integral-geometry-based methods.

    PubMed

    Gual-Arnau, X; Herold-García, S; Simó, A

    2015-07-01

    Erythrocyte shape deformations are related to several important illnesses. In this paper, we focus on one of the most important: sickle cell disease. This disease causes hardening or polymerization of the hemoglobin contained in the erythrocytes. The study of this process using digital images of peripheral blood smears can offer useful results for the clinical diagnosis of these illnesses. In particular, it would be very valuable to find a rapid and reproducible automatic classification method to quantify the number of deformed cells and so gauge the severity of the illness. In this paper, we show the good results obtained in the automatic classification of erythrocytes into normal cells, sickle cells, and cells with other deformations, using a set of functions based on integral-geometry methods, an active-contour-based segmentation method, and a k-NN classification algorithm. Blood specimens were obtained from patients with sickle cell disease. Seventeen peripheral blood smears were obtained for the study, and 45 images of different fields were acquired. A specialist selected the cells to use, determining which cells in the images were normal, elongated, or showed other deformations. A process of automatic classification, with cross-validation of errors, was carried out with the proposed descriptors and with two other functions used in previous studies.

  1. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the growing need to digitalize document images, using digital cameras to digitalize documents has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies that mosaics two warped images of the same document taken from different viewpoints. First, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaicked, and the best mosaicked image is selected according to the OCR recognition results. As a result, the distortions in the best mosaicked image are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method can resolve the issue of warped document image correction effectively.

  2. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies, derived from the fact that, in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, not to mention the fact that mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the missing dynamics in the previously integrated approximation. This combination results in improved precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, formed by combining three different orders of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components considered are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
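
    As a pointer to how the Holt-Winters component might be wired in, the sketch below fits an additive Holt-Winters model (via statsmodels) to a synthetic residual series, i.e. truth minus the analytical propagation, and forecasts the correction to add back.

    ```python
    # Sketch: model the analytical theory's residuals with additive Holt-Winters.
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    t = np.arange(200)
    residuals = 0.01 * t + 0.5 * np.sin(2 * np.pi * t / 16)  # synthetic trend + periodic error

    model = ExponentialSmoothing(residuals, trend="add",
                                 seasonal="add", seasonal_periods=16).fit()
    correction = model.forecast(50)   # predicted "missing dynamics"
    # hybrid prediction = analytical propagation + correction
    ```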

  3. Evaluation of a physically based quasi-linear and a conceptually based nonlinear Muskingum methods

    NASA Astrophysics Data System (ADS)

    Perumal, Muthiah; Tayfur, Gokmen; Rao, C. Madhusudana; Gurarslan, Gurhan

    2017-03-01

    Two variants of the Muskingum flood routing method formulated to account for the nonlinearity of the channel routing process are investigated in this study. These variant methods are: (1) the three-parameter conceptual Nonlinear Muskingum (NLM) method advocated by Gill in 1978, and (2) the Variable Parameter McCarthy-Muskingum (VPMM) method recently proposed by Perumal and Price in 2013. The VPMM method does not require the rigorous calibration and validation procedures required by the NLM method, owing to established relationships between its parameters and the flow and channel characteristics based on hydrodynamic principles. The parameters of the conceptual nonlinear storage equation used in the NLM method were calibrated using Artificial Intelligence Application (AIA) techniques, such as the Genetic Algorithm (GA), Differential Evolution (DE), Particle Swarm Optimization (PSO) and Harmony Search (HS). The calibration was carried out on a given set of hypothetical flood events obtained by routing a given inflow hydrograph through a set of 40 km long prismatic channel reaches using the Saint-Venant (SV) equations. The validation of the calibrated NLM method was investigated using a different set of hypothetical flood hydrographs obtained in the same set of channel reaches used for the calibration studies. Both sets of solutions obtained in the calibration and validation cases using the NLM method were compared with the corresponding solutions of the VPMM method based on pertinent evaluation measures. The results of the study reveal that the physically based VPMM method accounts for the nonlinear characteristics of flood wave movement better than the conceptually based NLM method, which requires tedious calibration and validation procedures.
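
    For orientation, routing with the three-parameter nonlinear storage relation S = K[xI + (1 − x)Q]^m can be sketched as below; the update is the usual continuity-based finite difference, and all parameter values are illustrative.

    ```python
    # Sketch: nonlinear Muskingum routing with S = K*(x*I + (1-x)*Q)**m.
    import numpy as np

    def nlm_route(inflow, K, x, m, dt):
        n = len(inflow)
        Q = np.zeros(n)
        S = K * inflow[0] ** m                 # start from steady state (Q = I)
        for t in range(n):
            Q[t] = ((S / K) ** (1.0 / m) - x * inflow[t]) / (1.0 - x)
            if t < n - 1:
                S += dt * (inflow[t] - Q[t])   # continuity: dS/dt = I - Q
        return Q
    ```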

  4. Design of time interval generator based on hybrid counting method

    NASA Astrophysics Data System (ADS)

    Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge

    2016-10-01

    Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle physics experiments. Though some off-the-shelf TIGs can be employed, the need for custom test or control systems makes TIGs implemented in a programmable device desirable. Nowadays, the feasibility of using Field Programmable Gate Arrays (FPGAs) to implement particle physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. Correspondingly, FPGA-based TIGs with a fine delay step are preferable, allowing the implementation of customized particle physics instrumentation and other utilities on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods for realizing an integratable TIG is described in detail, and a specially designed multiplexer for tap selection is introduced. The special structure of the multiplexer is devised to minimize the differing additional delays caused by the unpredictable routings from different taps to the output. A Kintex-7 FPGA is used for the hybrid counting-based implementation of a TIG, providing a resolution of 11 ps and an interval range of up to 8 s.
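
    Conceptually, the hybrid scheme combines a coarse clock count (for range) with a fine tapped-delay-line code (for resolution); the arithmetic reduces to something like the sketch below, with the ~11 ps bin quoted above and the other numbers purely illustrative.

    ```python
    # Sketch: interval = coarse clock counts combined with fine TDL interpolation.
    T_CLK_PS = 4_000    # illustrative coarse clock period (250 MHz), in ps
    TAP_PS = 11         # effective delay per TDL tap, in ps

    def interval_ps(coarse_start, fine_start, coarse_stop, fine_stop):
        start = coarse_start * T_CLK_PS - fine_start * TAP_PS
        stop = coarse_stop * T_CLK_PS - fine_stop * TAP_PS
        return stop - start

    print(interval_ps(0, 12, 2_000_000, 37))  # ~8 ms span with ps-level granularity
    ```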

  5. Wave-equation based traveltime seismic tomography - Part 1: Method

    NASA Astrophysics Data System (ADS)

    Tong, P.; Zhao, D.; Yang, D.; Yang, X.; Chen, J.; Liu, Q.

    2014-08-01

    In this paper, we propose a wave-equation based traveltime seismic tomography method with a detailed description of its step-by-step process. First, a linear relationship between the traveltime residual Δt = Tobs - Tsyn and the relative velocity perturbation δc(x) / c(x) connected by a finite-frequency traveltime sensitivity kernel K(x) is theoretically derived using the adjoint method. To accurately calculate the traveltime residual Δt, two automatic arrival-time picking techniques including the envelop energy ratio method and the combined ray and cross-correlation method are then developed to compute the arrival times Tsyn for synthetic seismograms. The arrival times Tobs of observed seismograms are usually determined by manual hand picking in real applications. Traveltime sensitivity kernel K(x) is constructed by convolving a forward wavefield u(t,x) with an adjoint wavefield q(t,x). The calculations of synthetic seismograms and sensitivity kernels rely on forward modelling. To make it computationally feasible for tomographic problems involving a large number of seismic records, the forward problem is solved in the two-dimensional (2-D) vertical plane passing through the source and the receiver by a high-order central difference method. The final model is parameterized on 3-D regular grid (inversion) nodes with variable spacings, while model values on each 2-D forward modelling node are linearly interpolated by the values at its eight surrounding 3-D inversion grid nodes. Finally, the tomographic inverse problem is formulated as a regularized optimization problem, which can be iteratively solved by either the LSQR solver or a non-linear conjugate-gradient method. To provide some insights into future 3-D tomographic inversions, Fréchet kernels for different seismic phases are also demonstrated in this study.
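
    The cross-correlation part of the arrival-time picking can be illustrated in a few lines: shift the synthetic trace against the observed one and take the lag that maximizes the correlation. The traces below are synthetic pulses.

    ```python
    # Sketch: traveltime residual from the peak of the cross-correlation.
    import numpy as np

    def cc_delay(obs, syn, dt):
        """Lag (s) maximizing the cross-correlation; positive means the
        observed arrival is later than the synthetic one."""
        cc = np.correlate(obs, syn, mode="full")
        return (np.argmax(cc) - (len(syn) - 1)) * dt

    dt = 0.01
    t = np.arange(0, 10, dt)
    syn = np.exp(-((t - 4.00) / 0.3) ** 2)
    obs = np.exp(-((t - 4.25) / 0.3) ** 2)   # arrives 0.25 s later
    print(cc_delay(obs, syn, dt))            # ~0.25
    ```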

  6. Density functional theory based generalized effective fragment potential method

    SciTech Connect

    Nguyen, Kiet A. E-mail: ruth.pachter@wpafb.af.mil; Pachter, Ruth E-mail: ruth.pachter@wpafb.af.mil; Day, Paul N.

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  7. Dominant partition method. [based on a wave function formalism

    NASA Technical Reports Server (NTRS)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.

  8. Application of DNA-based methods in forensic entomology.

    PubMed

    Wells, Jeffrey D; Stevens, Jamie R

    2008-01-01

    A forensic entomological investigation can benefit from a variety of widely practiced molecular genotyping methods. The most commonly used is DNA-based specimen identification. Other applications include the identification of insect gut contents and the characterization of the population genetic structure of a forensically important insect species. The proper application of these procedures demands that the analyst be technically expert. However, one must also be aware of the extensive list of standards and expectations that many legal systems have developed for forensic DNA analysis. We summarize the DNA techniques that are currently used in, or have been proposed for, forensic entomology and review established genetic analyses from other scientific fields that address questions similar to those in forensic entomology. We describe how accepted standards for forensic DNA practice and method validation are likely to apply to insect evidence used in a death or other forensic entomological investigation.

  9. Modified risk graph method using fuzzy rule-based approach.

    PubMed

    Nait-Said, R; Zidani, F; Ouzraoui, N

    2009-05-30

    The risk graph is one of the most popular methods used to determine the safety integrity level (SIL) for safety instrumented functions. However, the conventional risk graph described in the IEC 61508 standard is subjective and suffers from an interpretation problem with its risk parameters. Thus, it can lead to inconsistent outcomes that may result in overly conservative SILs. To overcome this difficulty, a modified risk graph using a fuzzy rule-based system is proposed. This novel version of the risk graph uses fuzzy scales to assess the risk parameters, and calibration may be performed by varying the risk parameter values. Furthermore, the outcomes, which are numerical values of the risk reduction factor (the inverse of the probability of failure on demand), can be compared directly with those given by quantitative and semi-quantitative methods such as fault tree analysis (FTA), quantitative risk assessment (QRA) and layers of protection analysis (LOPA).
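
    To give a flavor of the fuzzy machinery, the sketch below evaluates triangular memberships for one risk parameter and fires a single Mamdani-style rule; the scales, rule, and numbers are illustrative, not the calibrated values of the proposed method.

    ```python
    # Sketch: fuzzy evaluation of one risk-graph parameter and one rule.
    import numpy as np

    def trimf(x, a, b, c):
        """Triangular membership function with feet a, c and peak b."""
        return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

    consequence = 7.0                          # severity score on a 0-10 scale
    mu_serious = trimf(consequence, 3, 5, 8)
    mu_severe = trimf(consequence, 6, 8, 10)

    mu_exposure_frequent = 0.8                 # membership from another parameter
    # rule: IF consequence is severe AND exposure is frequent THEN high risk reduction
    rule_strength = min(mu_severe, mu_exposure_frequent)
    print(mu_serious, mu_severe, rule_strength)
    ```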

  10. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-07-18

    The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. First, suitable modifications are proposed to improve the performance of the basic ABC algorithm. Second, a new search mechanism based on an improved ABC algorithm using external speed information is presented in this paper. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position.

  11. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  12. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest-neighbor interactions.

  13. Effectiveness of Spray-Based Decontamination Methods for ...

    EPA Pesticide Factsheets

    The objective of this project was to assess the effectiveness of spray-based common decontamination methods for inactivating Bacillus (B.) atrophaeus (surrogate for B. anthracis) spores and bacteriophage MS2 (surrogate for foot and mouth disease virus [FMDV]) on selected test surfaces (with or without a model agricultural soil load). Relocation of viable viruses or spores from the contaminated coupon surfaces into aerosol or liquid fractions during the decontamination methods was investigated. This project was conducted to support jointly held missions of the U.S. Department of Homeland Security (DHS) and the U.S. Environmental Protection Agency (EPA). Within the EPA, the project supports the mission of EPA’s Homeland Security Research Program (HSRP) by providing relevant information pertinent to the decontamination of contaminated areas resulting from a biological incident.

  14. Design Method for EPS Control System Based on KANSEI Structure

    NASA Astrophysics Data System (ADS)

    Saitoh, Yumi; Itoh, Hideaki; Ozaki, Fuminori; Nakamura, Takenobu; Kawaji, Shigeyasu

    Recently, it has been recognized that KANSEI engineering plays an important role in developing functional designs for highly sophisticated products. In practical development, however, products are designed and optimized by trial and error, which means that development depends on the skill set of experts. In this paper, we focus on an automobile electric power steering (EPS) system for which a functional design is required. First, the KANSEI structure is determined on the basis of the steering feel of an experienced driver, and an EPS control design based on this KANSEI structure is proposed. Then, the EPS control parameters are adjusted in accordance with the KANSEI index. Finally, by assessing the experimental results obtained from the driver, the effectiveness of the proposed design method is verified.

  15. Feasible methods to estimate disease based price indexes.

    PubMed

    Bradley, Ralph

    2013-05-01

    There is a consensus that statistical agencies should report medical data by disease rather than by service. This study computes the price indexes that are necessary to deflate nominal disease expenditures and to decompose their growth into price growth, treated-prevalence growth, and output-per-patient growth. Unlike previous studies, it uses methods that can be implemented by the Bureau of Labor Statistics (BLS). For the calendar years 2005-2010, I find that these feasible disease-based indexes are approximately 1% lower on an annual basis than indexes computed by current BLS methods. This gives evidence that traditional medical price indexes have not accounted for the more efficient use of medical inputs in treating most diseases.
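
    As a worked illustration of the decomposition described above, the snippet below assumes nominal expenditure factors multiplicatively into a price index, treated prevalence, and output per patient; both the numbers and the factor structure are illustrative assumptions, not the paper's estimator.

```python
# Toy decomposition of nominal disease expenditure growth into price,
# treated-prevalence, and output-per-patient factors. The multiplicative
# structure (spend = price * patients * output per patient) is an
# illustrative assumption, not the BLS estimator described in the paper.
year0 = {"spend": 100.0, "patients": 1000, "output_per_patient": 1.00}
year1 = {"spend": 112.0, "patients": 1040, "output_per_patient": 1.03}

prevalence_growth = year1["patients"] / year0["patients"]                  # 1.040
output_growth = year1["output_per_patient"] / year0["output_per_patient"]  # 1.030
spend_growth = year1["spend"] / year0["spend"]                             # 1.120

# Residual factor attributable to price once quantity effects are removed.
price_growth = spend_growth / (prevalence_growth * output_growth)
print(f"implied disease price index growth: {price_growth:.4f}")           # ~1.0456
```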

  16. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGES

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; ...

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  17. Evaluation of Anomaly Detection Method Based on Pattern Recognition

    NASA Astrophysics Data System (ADS)

    Fontugne, Romain; Himura, Yosuke; Fukuda, Kensuke

    The number of threats on the Internet is rapidly increasing, and anomaly detection has become of increasing importance. High-speed backbone traffic is particularly affected, and its analysis is a complicated task due to the amount of data, the lack of payload data, asymmetric routing, and the use of sampling techniques. Most anomaly detection schemes focus on the statistical properties of network traffic and highlight anomalous traffic through their singularities. In this paper, we concentrate on unusual traffic distributions, which are easily identifiable in temporal-spatial space (e.g., time/address or port). We present an anomaly detection method that uses a pattern recognition technique to identify anomalies in pictures representing traffic. The main advantage of this method is its ability to detect attacks involving mice flows. We evaluate the parameter set and the effectiveness of this approach by analyzing six years of Internet traffic collected from a trans-Pacific link. We show several examples of detected anomalies and compare our results with those of two other methods. The comparison indicates that the anomalies detected only by the pattern-recognition-based method are mainly malicious traffic with a few packets.
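
    The paper's pattern recognition technique is not reproduced here; the sketch below only illustrates the underlying "picture" representation, a 2D histogram over time and address bins built from synthetic flow records, with a crude deviation test standing in for the image-based detector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic flow records: (timestamp in seconds, destination address bin).
t = rng.uniform(0, 600, 50_000)
addr = rng.integers(0, 256, 50_000)
# Inject an anomaly: a burst of small flows toward one address bin.
t = np.concatenate([t, rng.uniform(300, 310, 400)])
addr = np.concatenate([addr, np.full(400, 42)])

# The temporal-spatial "picture": flow counts per (time bin, address bin).
H, _, _ = np.histogram2d(t, addr, bins=[60, 256])

# Crude stand-in for the paper's pattern recognition: flag cells whose
# count deviates strongly from the picture's overall statistics.
z = (H - H.mean()) / H.std()
for ti, ai in np.argwhere(z > 6):
    print(f"suspicious cell: time bin {ti}, address bin {ai}, count {int(H[ti, ai])}")
```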

  18. Optimal sensor placement using FRFs-based clustering method

    NASA Astrophysics Data System (ADS)

    Li, Shiqi; Zhang, Heng; Liu, Shiping; Zhang, Zhe

    2016-12-01

    The purpose of this work is to develop an optimal sensor placement method by selecting the most relevant degrees of freedom as actual measurement positions. Based on the observation matrix of a structure's frequency response, two optimality criteria are used to avoid information redundancy among the candidate degrees of freedom. By using principal component analysis, the frequency response matrix can be decomposed into principal directions and their corresponding singular values. A relatively small number of principal directions will retain a system's dominant response information. According to the dynamic similarity of the degrees of freedom, a k-means clustering algorithm is designed to classify them, and the effective independence method deletes the redundant sensors within each cluster. Finally, two numerical examples and a modal test are included to demonstrate the efficiency of the derived method. It is shown that the proposed method provides a way to extract sub-optimal sensor sets, and the selected sensors are well distributed over the whole structure.
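
    A minimal sketch of the pipeline described above, under illustrative assumptions: a mock frequency response matrix, PCA via SVD, k-means grouping of dynamically similar DOFs (using scikit-learn), and, in place of the effective independence deletion step, simply keeping the DOF closest to each cluster center.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Mock frequency-response matrix: rows = candidate DOFs, cols = frequency lines.
n_dof, n_freq = 120, 400
frf = rng.standard_normal((n_dof, n_freq))

# PCA via SVD: keep the principal directions carrying most response energy.
frf_c = frf - frf.mean(axis=0)
U, s, Vt = np.linalg.svd(frf_c, full_matrices=False)
k = np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.95) + 1
scores = U[:, :k] * s[:k]          # each DOF described in the reduced space

# Group dynamically similar DOFs, then keep one representative per cluster
# (a simplification of the effective independence deletion step).
n_sensors = 8
km = KMeans(n_clusters=n_sensors, n_init=10, random_state=0).fit(scores)
chosen = [int(np.argmin(np.linalg.norm(scores - c, axis=1)))
          for c in km.cluster_centers_]
print("selected DOFs:", sorted(chosen))
```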

  19. Multiresolution subspace-based optimization method for inverse scattering problems.

    PubMed

    Oliveri, Giacomo; Zhong, Yu; Chen, Xudong; Massa, Andrea

    2011-10-01

    This paper investigates an approach to inverse scattering problems based on the integration of the subspace-based optimization method (SOM) within a multifocusing scheme in the framework of the contrast source formulation. The scattering equations are solved by a nested three-step procedure composed of (a) an outer multiresolution loop dealing with the identification of the regions of interest within the investigation domain through an iterative information-acquisition process, (b) a spectrum analysis step devoted to the reconstruction of the deterministic components of the contrast sources, and (c) an inner optimization loop aimed at retrieving the ambiguous components of the contrast sources through a conjugate gradient minimization of a suitable objective function. A set of representative reconstruction results is discussed to provide numerical evidence of the effectiveness of the proposed algorithmic approach as well as to assess the features and potentialities of the multifocusing integration in comparison with the state-of-the-art SOM implementation.

  20. Hybrid modeling method for a DEP based particle manipulation.

    PubMed

    Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad

    2013-01-30

    In this paper, a new modeling approach for dielectrophoresis (DEP)-based particle manipulation is presented. The proposed method fills missing links in finite element modeling between multiphysics simulation and biological behavior. This technique is among the first steps toward a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electric field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight, and electrical properties. Initial results are consistent with experimental results.

  1. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias [Knoxville, TN]; Rodriguez, Miguel, Jr.; Qi, Hairong [Knoxville, TN]; Wang, Xiaoling [San Jose, CA]

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
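
    A hedged sketch of the feature-vector step described above: amplitude statistics plus a simple spectrogram-based time-frequency summary, compared against a control signal. The signals and the exact feature set are illustrative assumptions, not the patent's specification.

```python
import numpy as np
from scipy import signal, stats

def feature_vector(x, fs):
    """Build a feature vector from a time-dependent biosensor signal using
    amplitude statistics and a simple time-frequency (spectrogram) analysis;
    the exact feature set here is illustrative."""
    amp = [x.mean(), x.std(), stats.skew(x), stats.kurtosis(x)]
    f, t, S = signal.spectrogram(x, fs=fs, nperseg=256)
    band_energy = S.mean(axis=1)            # average power per frequency bin
    band_energy /= band_energy.sum()        # normalize to a distribution
    return np.concatenate([amp, band_energy[:8]])   # keep a few low bands

fs = 100.0
control = np.sin(2 * np.pi * 1.0 * np.arange(0, 60, 1 / fs))     # control signal
exposed = control * np.exp(-np.arange(control.size) / 2000.0)    # decaying response

# Compare the monitored signal's features against the control's.
d = np.linalg.norm(feature_vector(exposed, fs) - feature_vector(control, fs))
print(f"feature-space deviation from control: {d:.3f}")
```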

  2. Method for fabricating beryllium-based multilayer structures

    DOEpatents

    Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.

    2003-02-18

    Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but also the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).

  3. Transistor-based particle detection systems and methods

    SciTech Connect

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  4. Detection of biological thiols based on a colorimetric method*

    PubMed Central

    Xu, Yuan-yuan; Sun, Yang-yang; Zhang, Yu-juan; Lu, Chen-he; Miao, Jin-feng

    2016-01-01

    Biological thiols (biothiols), an important class of functional biomolecules including cysteine (Cys) and glutathione (GSH), play vital roles in maintaining the stability of the intracellular environment. In past decades, studies have demonstrated that metabolic disorders of biothiols are related to many serious disease processes and lead to severe damage in humans and numerous animals. We carried out a series of experiments to detect biothiols in biosamples, including bovine plasma and cell lysates of seven different cell lines, based on a simple colorimetric method. In a typical test, the color of the test solution gradually changes from blue to colorless after the addition of biothiols. Based on the color change displayed, the experimental results reveal that the percentage of biothiols in the embryonic fibroblast cell line is significantly higher than in the other six cell lines, which provides the basis for subsequent biothiol-related studies. PMID:27704750

  5. Method of plasma etching Ga-based compound semiconductors

    DOEpatents

    Qiu, Weibin; Goddard, Lynford L.

    2013-01-01

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent thereto. The chamber contains a Ga-based compound semiconductor sample in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. SiCl4 and Ar gases are flowed into the chamber. RF power is supplied to the platen at a first power level, and RF power is supplied to the source electrode. A plasma is generated. Then, RF power is supplied to the platen at a second power level lower than the first power level and no greater than about 30 W. Regions of a surface of the sample adjacent to one or more masked portions of the surface are etched at a rate of no more than about 25 nm/min to create a substantially smooth etched surface.

  6. An Improved Spectral Background Subtraction Method Based on Wavelet Energy.

    PubMed

    Zhao, Fengkui; Wang, Jian; Wang, Aimin

    2016-12-01

    Most spectral background subtraction methods rely on the difference in frequency response between the background and the characteristic peaks. It is difficult to extract the background components accurately when characteristic peaks and background overlap in the frequency domain. An improved background estimation algorithm based on iterative wavelet transform (IWT) is presented. The wavelet entropy principle is used to select the best wavelet basis, and a criterion based on wavelet energy theory is proposed to determine the optimal number of iterations. The case of energy-dispersive X-ray spectroscopy is discussed for illustration. A simulated spectrum with an a priori known background and an experimental spectrum are tested. The processing results for the simulated spectrum are compared with a non-IWT method, demonstrating the superiority of the IWT approach. This is of great significance for improving the accuracy of spectral analysis.
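
    A minimal sketch of iterative wavelet background estimation using PyWavelets, assuming a fixed iteration count in place of the paper's wavelet-energy stopping criterion and an arbitrary wavelet basis instead of the entropy-selected one.

```python
import numpy as np
import pywt

def iwt_background(spectrum, wavelet="sym8", level=6, n_iter=20):
    """Estimate a smooth spectral background by iterative wavelet transform:
    low-pass reconstruct, then clip the working signal to the estimate so
    characteristic peaks are progressively suppressed. The fixed iteration
    count stands in for the paper's wavelet-energy stopping criterion."""
    work = spectrum.astype(float).copy()
    for _ in range(n_iter):
        coeffs = pywt.wavedec(work, wavelet, level=level)
        # Zero the detail coefficients: keep only the coarse approximation.
        coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
        background = pywt.waverec(coeffs, wavelet)[: work.size]
        work = np.minimum(work, background)   # peaks cannot pull the estimate up
    return work

# Simulated spectrum: curved background + Gaussian peaks + noise.
x = np.linspace(0, 1, 2048)
truth = 50 * np.exp(-3 * x)
peaks = (40 * np.exp(-0.5 * ((x - 0.3) / 0.010) ** 2)
         + 25 * np.exp(-0.5 * ((x - 0.7) / 0.008) ** 2))
spec = truth + peaks + np.random.default_rng(0).normal(0, 1, x.size)

est = iwt_background(spec)
print(f"mean background error: {np.abs(est - truth).mean():.2f}")
```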

  7. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Smith, Timothy A. (Inventor); Urnes, James M., Sr. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model-based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
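
    A minimal sketch of the detection chain described above (modeling error, then error significance, then a persistence check); the thresholds, noise model, and signals are illustrative assumptions.

```python
import numpy as np

def detect_anomaly(measured, expected, noise_sigma, z_thresh=3.0, persist_thresh=25):
    """Sketch of the patent's logic: modeling error -> statistical
    significance -> persistence check. Thresholds are illustrative."""
    error = measured - expected                 # modeling error signal
    z = np.abs(error) / noise_sigma             # error significance
    persist = 0
    for zi in z:
        persist = persist + 1 if zi > z_thresh else 0
        if persist > persist_thresh:            # sustained, not a transient
            return True
    return False

rng = np.random.default_rng(2)
expected = np.zeros(1000)
healthy = rng.normal(0, 1.0, 1000)
damaged = healthy.copy()
damaged[600:] += 5.0                            # sustained structural shift

print(detect_anomaly(healthy, expected, 1.0))   # False
print(detect_anomaly(damaged, expected, 1.0))   # True
```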

  8. Web-based methods in terrorism and disaster research.

    PubMed

    Schlenger, William E; Silver, Roxane Cohen

    2006-04-01

    This article provides an overview of the use of the Internet for conducting studies after terrorist attacks and other large-scale disasters. We begin with a brief summary of the scientific and logistical challenges of conducting such research, followed by a description of some of the most important design features that are required to produce valid findings. We then describe one approach to Internet surveys that, although not perfect, addresses many of the challenges well. We close with some thoughts about how the Internet-based methods available today are likely to develop further in coming years.

  9. [Others physical methods in psychiatric treatment based on electromagnetic stimulation].

    PubMed

    Zyss, Tomasz; Rachel, Wojciech; Datka, Wojciech; Hese, Robert T; Gorczyca, Piotr; Zięba, Andrzej; Piekoszewski, Wojciech

    2016-01-01

    In recent decades, several new physical methods based on electromagnetic stimulation of the head have been the subject of clinical research, including: vagus nerve stimulation (VNS), magnetic seizure therapy/magnetoconvulsive therapy (MST/MCT), deep brain stimulation (DBS), and transcranial direct current stimulation (tDCS). The paper describes these techniques (their nature, advantages, defects and restrictions) and compares them with electroconvulsive therapy (ECT), the previously described transcranial magnetic stimulation (TMS), and pharmacotherapy (the basis of psychiatric treatment).

  10. Supersampling method for efficient grid-based electronic structure calculations.

    PubMed

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-07

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent, vexing problem in grid-based electronic structure calculations. Its effective suppression, allowing for large grid spacing, is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the sinc filtering function performs best because, as an ideal low-pass filter, it cleanly cuts out the high-frequency region beyond that allowed by a given grid spacing.

  11. Study on torpedo fuze signal denoising method based on WPT

    NASA Astrophysics Data System (ADS)

    Zhao, Jun; Sun, Changcun; Zhang, Tao; Ren, Zhiliang

    2013-07-01

    Torpedo fuze signal denoising is an important step in ensuring reliable operation of the fuze. Based on the good characteristics of the wavelet packet transform (WPT) in signal denoising, this paper uses the WPT to denoise the fuze signal under complex background interference, and a simulation of the denoising results is performed in Matlab. The simulation results show that the WPT denoising method can effectively eliminate the background noise present in the torpedo fuze target signal, with higher precision and less distortion, improving the reliability of torpedo fuze operation.
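
    A short sketch of WPT denoising with PyWavelets: decompose to a chosen level, soft-threshold the packet coefficients, and reconstruct. The universal-threshold rule used here is a common generic choice, not necessarily the paper's, and the signals are synthetic.

```python
import numpy as np
import pywt

def wpt_denoise(x, wavelet="db4", maxlevel=4):
    """Denoise a pulse-like signal by thresholding wavelet packet
    coefficients (a generic WPT denoising sketch, not the paper's exact
    threshold rule)."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, mode="symmetric",
                            maxlevel=maxlevel)
    # Universal threshold estimated from the finest detail band.
    detail = wp["d"].data
    sigma = np.median(np.abs(detail)) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(x.size))
    for node in wp.get_level(maxlevel, "natural"):
        node.data = pywt.threshold(node.data, thr, mode="soft")
    return wp.reconstruct(update=False)[: x.size]

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 40 * t)  # target echo
noisy = clean + rng.normal(0, 0.3, t.size)                             # interference

den = wpt_denoise(noisy)
print(f"noise std before/after: {np.std(noisy - clean):.3f} / {np.std(den - clean):.3f}")
```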

  12. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET, with rapidly increasing installations, is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology, where it is most useful to localize primary tumours and their metastases.

  13. Rapid Mapping Method Based on Free Blocks of Surveys

    NASA Astrophysics Data System (ADS)

    Yu, Xianwen; Wang, Huiqing; Wang, Jinling

    2016-06-01

    When producing large-scale maps (larger than 1:2000) of cities or towns, obstruction by buildings makes measuring mapping control points a difficult and heavy task. In order to avoid measuring mapping control points and to shorten the fieldwork time, a quick mapping method is proposed in this paper. This method adjusts many free blocks of surveys together and transforms the points from all free blocks into the same coordinate system. The entire surveying area is divided into many free blocks, and connection points are set on the boundaries between them. An independent coordinate system for every free block is established via completely free station technology, and the coordinates of the connection points, detail points and control points in every free block are obtained in the corresponding independent coordinate systems based on poly-directional open traverses. Error equations are established based on the connection points, which are adjusted together to obtain the transformation parameters. All points are then transformed from the independent coordinate systems to a transitional coordinate system via the transformation parameters. Several control points are then measured by GPS in a geodetic coordinate system, and all the points can be transformed from the transitional coordinate system to the geodetic coordinate system. This paper presents the implementation process and mathematical formulas of the new method in detail and gives a formula to estimate the precision of the surveys. An example demonstrates that the precision of the new method can meet large-scale mapping needs.
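
    The joint adjustment of many blocks is reduced here to its core ingredient: estimating a 2D similarity (Helmert) transform for one free block from connection points known in both frames, by least squares. All point values are synthetic.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares 2D similarity (Helmert) transform dst ~ s*R*src + t,
    parameterized as x' = a*x - b*y + tx, y' = b*x + a*y + ty."""
    A, L = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, -y, 1, 0]); L.append(X)
        A.append([y,  x, 0, 1]); L.append(Y)
    (a, b, tx, ty), *_ = np.linalg.lstsq(np.array(A), np.array(L), rcond=None)
    return a, b, tx, ty

def apply_similarity(params, pts):
    a, b, tx, ty = params
    x, y = pts[:, 0], pts[:, 1]
    return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])

# Connection points known in both the free-block frame and the target frame.
block_pts = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 80.0], [0.0, 80.0]])
target_pts = apply_similarity((0.9998, 0.0175, 5012.3, 7031.8), block_pts)

params = fit_similarity(block_pts, target_pts)
# Any detail point surveyed in the free block can now be transformed.
detail = np.array([[37.2, 55.1]])
print(apply_similarity(params, detail))
```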

  14. Developing sub-domain verification methods based on GIS tools

    NASA Astrophysics Data System (ADS)

    Smith, J. A.; Foley, T. A.; Raby, J. W.

    2014-12-01

    The meteorological community makes extensive use of the Model Evaluation Tools (MET), developed by the National Center for Atmospheric Research, for numerical weather prediction model verification through grid-to-point, grid-to-grid and object-based domain level analyses. MET Grid-Stat has been used to perform grid-to-grid neighborhood verification to account for the uncertainty inherent in high resolution forecasting, and the MET Method for Object-based Diagnostic Evaluation (MODE) has been used to develop techniques for object-based spatial verification of high resolution forecast grids for continuous meteorological variables. High resolution modeling requires more focused spatial and temporal verification over parts of the domain. With a Geographical Information System (GIS), researchers can now consider terrain type/slope and land use effects and other spatial and temporal variables as explanatory metrics in model assessments. GIS techniques, when coupled with high resolution point and gridded observation sets, allow location-based approaches that permit discovery of the spatial and temporal scales where models do not sufficiently resolve the desired phenomena. In this paper we discuss our initial GIS approach to verify WRF-ARW with a one-kilometer horizontal resolution inner domain centered over Southern California. Southern California contains a mixture of urban, suburban, agricultural and mountainous terrain types, along with a rich array of observational data with which to illustrate our ability to conduct sub-domain verification.

  15. Gradient-based optimum aerodynamic design using adjoint methods

    NASA Astrophysics Data System (ADS)

    Xie, Lei

    2002-09-01

    Continuous adjoint methods and optimal control theory are applied to a pressure-matching inverse design problem of quasi 1-D nozzle flows. Pontryagin's Minimum Principle is used to derive the adjoint system and the reduced gradient of the cost functional. The properties of adjoint variables at the sonic throat and the shock location are studied, revealing a logarithmic singularity at the sonic throat and continuity at the shock location. A numerical method, based on the Steger-Warming flux-vector-splitting scheme, is proposed to solve the adjoint equations. This scheme can finely resolve the singularity at the sonic throat. A non-uniform grid, with points clustered near the throat region, can resolve it even better. The analytical solutions to the adjoint equations are also constructed via a Green's function approach for the purpose of comparing the numerical results. The pressure-matching inverse design is then conducted for a nozzle parameterized by a single geometric parameter. In the second part, the adjoint methods are applied to the problem of minimizing the drag coefficient, at fixed lift coefficient, for 2-D transonic airfoil flows. Reduced gradients of several functionals are derived through application of a Lagrange Multiplier Theorem. The adjoint system is carefully studied, including the adjoint characteristic boundary conditions at the far-field boundary. A super-reduced design formulation is also explored by treating the angle of attack as an additional state; super-reduced gradients can be constructed either by solving adjoint equations with non-local boundary conditions or by a direct Lagrange multiplier method. In this way, the constrained optimization reduces to an unconstrained design problem. Numerical methods based on Jameson's finite volume scheme are employed to solve the adjoint equations. The same grid system generated from an efficient hyperbolic grid generator is adopted in both the Euler flow solver and the adjoint solver. Several

  16. Tensor-based dynamic reconstruction method for electrical capacitance tomography

    NASA Astrophysics Data System (ADS)

    Lei, J.; Mu, H. P.; Liu, Q. B.; Li, Z. H.; Liu, S.; Wang, X. Y.

    2017-03-01

    Electrical capacitance tomography (ECT) is an attractive visualization measurement method, in which the acquisition of high-quality images is beneficial for the understanding of the underlying physical or chemical mechanisms of the dynamic behaviors of the measurement objects. In real-world measurement environments, imaging objects are often in a dynamic process, and the exploitation of the spatial-temporal correlations related to the dynamic nature will contribute to improving the imaging quality. Different from existing imaging methods that are often used in ECT measurements, in this paper a dynamic image sequence is stacked into a third-order tensor that consists of a low rank tensor and a sparse tensor within the framework of the multiple measurement vectors model and the multi-way data analysis method. The low rank tensor models the similar spatial distribution information among frames, which is slowly changing over time, and the sparse tensor captures the perturbations or differences introduced in each frame, which is rapidly changing over time. With the assistance of the Tikhonov regularization theory and the tensor-based multi-way data analysis method, a new cost function, with the considerations of the multi-frames measurement data, the dynamic evolution information of a time-varying imaging object and the characteristics of the low rank tensor and the sparse tensor, is proposed to convert the imaging task in the ECT measurement into a reconstruction problem of a third-order image tensor. An effective algorithm is developed to search for the optimal solution of the proposed cost function, and the images are reconstructed via a batching pattern. The feasibility and effectiveness of the developed reconstruction method are numerically validated.
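
    The paper's regularized tensor reconstruction is not reproduced here; the sketch below only illustrates the low-rank + sparse idea on a frame-stacked matrix (frames as columns), using a crude alternating singular-value/soft thresholding loop in the spirit of robust PCA. All sizes and data are synthetic.

```python
import numpy as np

def lowrank_sparse_split(M, lam=None, n_iter=50):
    """Crude RPCA-style split M ~ L + S: alternate singular-value
    thresholding for the slowly varying low-rank part L and soft
    thresholding for the rapidly changing sparse part S. A stand-in
    for the paper's regularized tensor decomposition."""
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    mu = M.size / (4.0 * np.abs(M).sum())        # common heuristic step size
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt        # low-rank update
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam / mu, 0.0)  # sparse update
    return L, S

rng = np.random.default_rng(0)
# 20 frames of a 16x16 "image" flattened into the columns of a 256x20 matrix.
base = np.outer(rng.standard_normal(256), np.ones(20))           # shared structure
spikes = np.zeros((256, 20))
spikes[rng.integers(0, 256, 30), rng.integers(0, 20, 30)] = 5.0  # per-frame changes

L, S = lowrank_sparse_split(base + spikes)
print(f"low-rank error: {np.abs(L - base).mean():.3f}, "
      f"sparse error: {np.abs(S - spikes).mean():.3f}")
```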

  17. Microbial detection method based on sensing molecular hydrogen.

    PubMed

    Wilkins, J R; Stoner, G E; Boykin, E H

    1974-05-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (i) two electrodes, platinum and a reference electrode, (ii) a buffer amplifier, and (iii) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction and recorded on a strip-chart recorder. Hydrogen response curves consisted of (i) a lag period, (ii) a period of rapid buildup in potential due to hydrogen, and (iii) a period of decline in potential. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 10^6 cells/ml to 7 h for 10^0 cells/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Mean cell concentrations at the time of hydrogen evolution were 10^6/ml. Based on the linear relationship between inoculum size and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
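
    The reported linear relationship invites a small calibration example: fit log10(inoculum) against lag time and invert it to estimate cell density from an observed hydrogen lag. The intermediate lag values below are illustrative interpolations of the reported range (1 h at 10^6 cells/ml to 7 h at 10^0 cells/ml), not the paper's data.

```python
import numpy as np

# Calibration points patterned on the abstract: lag grows ~60-70 min per
# 10-fold drop in inoculum. Intermediate values are illustrative.
log_inoculum = np.array([6, 5, 4, 3, 2, 1, 0], dtype=float)   # log10(cells/ml)
lag_hours = np.array([1.0, 2.0, 3.0, 4.1, 4.9, 6.0, 7.0])

slope, intercept = np.polyfit(lag_hours, log_inoculum, 1)

def estimate_inoculum(lag_h):
    """Invert the calibration: observed hydrogen lag time -> cell density."""
    return 10 ** (slope * lag_h + intercept)

print(f"lag of 3.5 h -> ~{estimate_inoculum(3.5):.0f} cells/ml")
```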

  18. Methods and applications of structure based pharmacophores in drug discovery.

    PubMed

    Pirhadi, Somayeh; Shiri, Fereshteh; Ghasemi, Jahan B

    2013-01-01

    A pharmacophore model does not describe a real molecule or a real association of functional groups but illustrates the molecular recognition of a biological target shared by a group of compounds. Pharmacophores also represent the spatial arrangement of essential interactions in a receptor-binding pocket. Structure-based pharmacophores (SBPs) can work with either a free (apo) structure or a macromolecule-ligand complex (holo) structure. SBP methods that derive a pharmacophore from protein-ligand complexes use the potential interactions observed between ligand and protein, whereas SBP methods that derive a pharmacophore from a ligand-free protein use only the protein active site information. Therefore, SBPs do not encounter challenging problems such as ligand flexibility, molecular alignment, and the proper selection of training set compounds found in ligand-based pharmacophore modeling. The current review covers 'hot spot' analysis of the binding site for feature generation, several approaches to feature reduction, and the use of shape and excluded volumes in SBP model building. The review goes on to present several applications of SBPs in virtual screening, especially parallel screening approaches and multi-target drug design, and also reports applications of SBPs in QSAR. This review emphasizes that SBPs are valuable tools for hit-to-lead optimization, virtual screening, scaffold hopping, and multi-target drug design.

  19. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
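
    A worked FOSM illustration: the output covariance is obtained from the sensitivity (Jacobian) and the input covariance as C_h = J C_k J^T, and a QDE-style score then ranks parameters by their contribution to total output variance. All matrices below are illustrative, not the paper's model.

```python
import numpy as np

# First-Order Second Moment (FOSM) propagation: the covariance of a model
# output h = f(k) is approximated from the input covariance C_k and the
# sensitivity (Jacobian) J as  C_h = J C_k J^T.  Numbers are illustrative.
J = np.array([[0.8, 0.3, 0.1],      # d(head_i)/d(logK_j) at two locations
              [0.2, 0.5, 0.6]])
C_k = np.array([[0.30, 0.10, 0.02], # spatially correlated logK uncertainty
                [0.10, 0.25, 0.08],
                [0.02, 0.08, 0.40]])

C_h = J @ C_k @ J.T
print("head variances:", np.diag(C_h))

# A QDE-style criterion: sample next where an input contributes most to
# total output variance (contribution of each logK taken one at a time).
contrib = [((J[:, [j]] * C_k[j, j]) @ J[:, [j]].T).trace() for j in range(3)]
print("variance contribution per parameter:", np.round(contrib, 3))
```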

  20. An improved unsupervised clustering-based intrusion detection method

    NASA Astrophysics Data System (ADS)

    Hai, Yong J.; Wu, Yu; Wang, Guo Y.

    2005-03-01

    Practical Intrusion Detection Systems (IDSs) based on data mining face two key problems: discovering intrusion knowledge from real-time network data, and automatically updating it when new intrusions appear. Most data mining algorithms work on labeled data. In order to set up a basic data set for mining, huge volumes of network data need to be collected and labeled manually. In fact, it is rather difficult and impractical to label intrusions, which has been a major restriction for current IDSs and has led to a limited ability to identify all kinds of intrusion types. An improved unsupervised clustering-based intrusion model working on unlabeled training data is introduced. In this model, the center of a cluster is defined and used as a substitute for the cluster, and all cluster centers are then used to detect intrusions. Testing on the KDDCUP'99 data sets, experimental results demonstrate that our method achieves a good detection rate. Furthermore, an incremental-learning method is adopted to detect unknown-type intrusions, and it decreases the false positive rate.
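
    A minimal sketch of the cluster-center idea on synthetic, unlabeled data (using scikit-learn): cluster the training set, treat each center as a substitute for its cluster, label unusually small clusters as intrusion-like, and classify new records by their nearest center. The paper's labeling heuristic and incremental-learning step are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Unlabeled "connection feature" vectors: mostly normal traffic plus a few
# anomalies (a synthetic stand-in for KDDCUP'99-style records).
normal = rng.normal(0, 1, (2000, 5))
attacks = rng.normal(6, 1, (20, 5))
train = np.vstack([normal, attacks])

# Cluster the unlabeled data; each center substitutes for its cluster.
km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(train)
centers = km.cluster_centers_

# Heuristic: clusters far below average size are treated as intrusion-like.
sizes = np.bincount(km.labels_, minlength=10)
suspect = set(np.where(sizes < 0.02 * train.shape[0])[0])

def classify(x):
    c = int(np.argmin(np.linalg.norm(centers - x, axis=1)))
    return "intrusion" if c in suspect else "normal"

print(classify(rng.normal(0, 1, 5)))   # likely "normal"
print(classify(rng.normal(6, 1, 5)))   # likely "intrusion"
```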

  1. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets)(1), Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees)(2), and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding)(3). The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) the degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  2. A microarray-based method to perform nucleic acid selections.

    PubMed

    Aminova, Olga; Disney, Matthew D

    2010-01-01

    This method describes a microarray-based platform to perform nucleic acid selections. Chemical ligands to which a nucleic acid binder is desired are immobilized onto an agarose microarray surface; the array is then incubated with an RNA library. Bound RNA library members are harvested directly from the array surface via gel excision at the position on the array where a ligand was immobilized. The RNA is then amplified via RT-PCR, cloned, and sequenced. This method has the following advantages over traditional resin-based Systematic Evolution of Ligands by Exponential Enrichment (SELEX): (1) multiple selections can be completed in parallel on a single microarray surface; (2) kinetic biases in the selections are mitigated since all RNA binders are harvested from an array via gel excision; (3) the amount of chemical ligand needed to perform a selection is minimized; (4) selections do not require expensive resins or equipment; and (5) the matrix used for selections is inexpensive and easy to prepare. Although this protocol was demonstrated for RNA selections, it should be applicable for any nucleic acid selection.

  3. Evaluation methods for association rules in spatial knowledge base

    NASA Astrophysics Data System (ADS)

    Niu, X.; Ji, X.

    2014-04-01

    The association rule is an important model in data mining. It describes the relationships between predicates in transactions and makes the expression of knowledge hidden in data more specific and clear. With the development and application of remote sensing technology and automatic data collection tools in recent decades, tremendous amounts of spatial and non-spatial data have been collected and stored in large spatial databases, so mining association rules from spatial databases has become a significant research area with extensive applications. Finding effective, reliable and interesting association rules in vast amounts of information, to help people analyze and make decisions, has become a significant issue. Evaluation methods measure spatial association rules against evaluation criteria. On the basis of an analysis of the existing evaluation criteria, this paper improves the novelty evaluation method, builds a spatial knowledge base, and proposes a new evaluation process based on the support-confidence evaluation system. Finally, the feasibility of the new evaluation process is validated by an experiment with real-world geographical spatial data.
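
    A minimal illustration of the support-confidence system that the evaluation process builds on, using a toy set of spatial transactions; the predicates and data are invented for the example.

```python
# Minimal support/confidence evaluation for a spatial association rule
# A -> B over a set of transactions (illustrative data).
transactions = [
    {"near_river", "flood_risk"},
    {"near_river", "flood_risk", "clay_soil"},
    {"near_river"},
    {"clay_soil"},
    {"near_river", "flood_risk"},
]

def support(itemset):
    """Fraction of transactions containing every predicate in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) estimated from the transactions."""
    return support(antecedent | consequent) / support(antecedent)

A, B = {"near_river"}, {"flood_risk"}
print(f"support(A and B) = {support(A | B):.2f}")     # 0.60
print(f"confidence(A -> B) = {confidence(A, B):.2f}") # 0.75
```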

  4. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC); in this system, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is a piecewise-smooth image, filled with sharp contours and smooth interiors, and the contours are more important than the interior for the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. A region is further divided into its contour and its interior, and two different procedures are employed to code them respectively. A vector-based strategy is applied to code the contour lines: straight lines in contours cost few bits since they are regarded as vectors, while pixels outside straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (from contour rebuilding and interior painting) is not sent to the decoder. In this proposal, the contour is coded losslessly whereas the interior is coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.

  5. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to those being diagnosed. Optical colonoscopy is a method of direct observation of the colon and rectum to diagnose bowel diseases, and it is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of the colonic mucosa in UC inflammation. In order to address this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images in a database of images diagnosed as UC and can potentially furnish the medical records associated with the retrieved images to assist UC diagnosis. Within the proposed method, color histogram features and higher-order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
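
    A small sketch of the retrieval idea using only the color-histogram half of the feature set (the HLAC features and the enhancement step are omitted), ranking synthetic images by histogram intersection with a query. All data and parameters are illustrative.

```python
import numpy as np

def color_histogram(img, bins=8):
    """Normalized joint RGB histogram as a simple color descriptor
    (a stand-in for the paper's color + HLAC feature set)."""
    h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,) * 3,
                          range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

def retrieve(query, database, top_k=3):
    """Rank database images by histogram intersection with the query."""
    q = color_histogram(query)
    sims = [np.minimum(q, color_histogram(img)).sum() for img in database]
    return np.argsort(sims)[::-1][:top_k]

rng = np.random.default_rng(0)
# Synthetic images, each concentrated around its own base color.
database = []
for _ in range(20):
    base = rng.integers(0, 200, 3)
    database.append(base + rng.integers(0, 56, (64, 64, 3)))

query = np.clip(database[7] + rng.integers(-5, 6, (64, 64, 3)), 0, 255)
print("most similar images:", retrieve(query, database))   # index 7 should rank first
```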

  6. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criteria, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  7. Methods for assessing relative importance in preference based outcome measures.

    PubMed

    Kaplan, R M; Feeny, D; Revicki, D A

    1993-12-01

    This paper reviews issues relevant to preference assessment for utility-based measures of health-related quality of life. Cost-utility studies require a common measurement of health outcome, such as the quality-adjusted life year (QALY). A key element in the QALY methodology is the measure of preference that estimates subjective health quality. Economists and psychologists differ in their preferred approaches to preference measurement. Economists rely on utility assessment methods that formally consider economic trades, including the standard gamble, time trade-off and person trade-off. However, some evidence suggests that many of the assumptions underlying economic measurements of choice are open to challenge, because human information processors do poorly at integrating complex probability information when making decisions that involve risk. Further, economic analysis assumes that choices accurately correspond to the way rational humans use information. Psychology experiments suggest that methods commonly used for economic analysis do not represent the underlying true preference continuum, and some evidence supports the use of simple rating scales. More recent research by economists attempts integrated cognitive models, while contemporary research by psychologists considers economic models of choice. The review also suggests that differences in preference between different social groups tend to be small.

  8. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, robot performance was optimized with respect to the mounting of the nozzle on the robot. An optimized mounting for a type F4 nozzle was designed, based on the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation in offline programming software and analyzed by statistical methods, and the energy consumption of the different nozzle mounting methods was also compared. The results showed that it is possible to reasonably distribute the robot motion among the axes during the process, thus achieving a constant nozzle speed. In this way, robot performance can be optimized and robot energy economized.

  9. Novel Parachlamydia acanthamoebae quantification method based on coculture with amoebae.

    PubMed

    Matsuo, Junji; Hayashi, Yasuhiro; Nakamura, Shinji; Sato, Marie; Mizutani, Yoshihiko; Asaka, Masahiro; Yamaguchi, Hiroyuki

    2008-10-01

    Parachlamydia acanthamoebae, belonging to the order Chlamydiales, is an obligate intracellular bacterium that infects free-living amoebae and is a potential human pathogen. However, no method has existed to accurately quantify viable bacterial numbers. We present a novel quantification method for P. acanthamoebae based on coculture with amoebae. P. acanthamoebae was cultured either with Acanthamoeba spp. or with mammalian epithelial HEp-2 or Vero cells. The infection rate of P. acanthamoebae (amoeba-infectious dose [AID]) was determined by DAPI (4',6-diamidino-2-phenylindole) staining and confirmed by fluorescence in situ hybridization. AIDs were plotted as logistic sigmoid dilution curves, and P. acanthamoebae numbers, defined as amoeba-infectious units (AIU), were calculated. During culture, amoeba numbers and viabilities did not change, and the amoebae did not change from trophozoites to cysts. Eight amoeba strains showed similar levels of P. acanthamoebae growth, and bacterial numbers reached ca. 1,000-fold the preculture level (10^9 AIU) after 4 days. In contrast, no increase was observed for P. acanthamoebae in either mammalian cell line; however, aberrant structures in the epithelial cells, implying possible persistent infection, were seen by transmission electron microscopy. Thus, our method can monitor the numbers of P. acanthamoebae in host cells and may be useful for understanding chlamydiae present in the natural environment as human pathogens.

  10. Development of Cross-Assembly Phage PCR-Based Methods ...

    EPA Pesticide Factsheets

    Technologies that can characterize human fecal pollution in environmental waters offer many advantages over traditional general indicator approaches. However, many human-associated methods cross-react with non-human animal sources and lack suitable sensitivity for fecal source identification applications. The genome of a newly discovered bacteriophage (~97 kbp), the cross-assembly phage or "crAssphage", assembled from a human gut metagenome DNA sequence library, is predicted to be both highly abundant and to occur predominantly in human feces, suggesting that this double-stranded DNA virus may be an ideal human fecal pollution indicator. We report the development of two human-associated crAssphage endpoint PCR methods (crAss056 and crAss064). A shotgun strategy was employed in which 384 candidate primers were designed to cover ~41 kbp of the crAssphage genome deemed favorable for method development based on a series of bioinformatics analyses. Candidate primers were subjected to three rounds of testing to evaluate assay optimization, specificity, limit of detection (LOD95), geographic variability, and performance in environmental water samples. The top two performing candidate primer sets exhibited 100% specificity (n = 70 individual samples from 8 different animal species), >90% sensitivity (n = 10 raw sewage samples from different geographic locations), an LOD95 of 0.01 ng/µL of total DNA per reaction, and successfully detected human fecal pollution in impaired envi

  11. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time, and the development of specialized procedures. In many National Mapping and Cadastre Agencies (NMCAs), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results: a typical process compared images from different periods, the success rates in identifying objects were low, and most detections were accompanied by a high percentage of false alarms. As a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing algorithms and computer vision, and the availability of digital aerial cameras with an NIR band and Very High Resolution satellites have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, multispectral (MS) classification, MS segmentation, object analysis and shape-forming algorithms. This article reviews the results of a novel change detection methodology as a first step toward updating the NTDB at the Survey of Israel.

  12. Methods for Evaluating Respondent Attrition in Web-Based Surveys

    PubMed Central

    Sabo, Roy T; Krist, Alex H; Day, Teresa; Cyrus, John; Woolf, Steven H

    2016-01-01

    Background Electronic surveys are convenient, cost-effective, and increasingly popular tools for collecting information. While the online platform allows researchers to recruit and enroll more participants, there is an increased risk of participant dropout in Web-based research. Often, these dropout trends are simply reported, adjusted for, or ignored altogether. Objective To propose a conceptual framework that analyzes respondent attrition and to demonstrate the utility of these methods with existing survey data. Methods First, we suggest visualization of attrition trends using bar charts and survival curves. Next, we propose a generalized linear mixed model (GLMM) to detect or confirm significant attrition points. Finally, we suggest applications of existing statistical methods to investigate the effects of internal survey characteristics and patient characteristics on dropout. To apply this framework, we conducted a case study: a seventeen-item Informed Decision-Making (IDM) module addressing how and why patients make decisions about cancer screening. Results Using the framework, we found significant attrition points at Questions 4, 6, 7, and 9, and we identified participant responses and characteristics associated with dropout at these points and overall. Conclusions When these methods were applied to survey data, significant attrition trends were revealed, both visually and empirically, that can inspire researchers to investigate the factors associated with survey dropout, address whether survey completion is associated with health outcomes, and compare attrition patterns between groups. The framework can be used to extract information beyond simple responses, can be useful during survey development, and can help determine the external validity of survey results. PMID:27876687

  13. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    SciTech Connect

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  14. Artificial Boundary Conditions Based on the Difference Potentials Method

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1996-01-01

    While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting the artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy of computations. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC's technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct such ABC's that combine the advantages relevant to the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then

  15. Physics-Based Imaging Methods for Terahertz Nondestructive Evaluation Applications

    NASA Astrophysics Data System (ADS)

    Kniffin, Gabriel Paul

    Lying between the microwave and far infrared (IR) regions, the "terahertz gap" is a relatively unexplored frequency band in the electromagnetic spectrum that exhibits a unique combination of properties from its neighbors. Like in IR, many materials have characteristic absorption spectra in the terahertz (THz) band, facilitating the spectroscopic "fingerprinting" of compounds such as drugs and explosives. In addition, non-polar dielectric materials such as clothing, paper, and plastic are transparent to THz, just as they are to microwaves and millimeter waves. These factors, combined with sub-millimeter wavelengths and non-ionizing energy levels, makes sensing in the THz band uniquely suited for many NDE applications. In a typical nondestructive test, the objective is to detect a feature of interest within the object and provide an accurate estimate of some geometrical property of the feature. Notable examples include the thickness of a pharmaceutical tablet coating layer or the 3D location, size, and shape of a flaw or defect in an integrated circuit. While the material properties of the object under test are often tightly controlled and are generally known a priori, many objects of interest exhibit irregular surface topographies such as varying degrees of curvature over the extent of their surfaces. Common THz pulsed imaging (TPI) methods originally developed for objects with planar surfaces have been adapted for objects with curved surfaces through use of mechanical scanning procedures in which measurements are taken at normal incidence over the extent of the surface. While effective, these methods often require expensive robotic arm assemblies, the cost and complexity of which would likely be prohibitive should a large volume of tests be needed to be carried out on a production line. This work presents a robust and efficient physics-based image processing approach based on the mature field of parabolic equation methods, common to undersea acoustics, seismology

  16. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, A.J.; Menchhofer, P.A.

    1995-12-19

    A method and apparatus are disclosed for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product. 10 figs.

  17. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, Arthur J.; Menchhofer, Paul A.

    1995-01-01

    A method and apparatus for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product.

  18. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to a unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem.
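    The following sketch shows the inversion idea under stated assumptions: a plain (mu+lambda) evolution strategy stands in for the paper's HEOA, a single-sphere Fraunhofer kernel stands in for the forward model, and the "measured" pattern is synthetic.

```python
import numpy as np
from scipy.special import j1

theta = np.linspace(1e-3, 0.2, 120)   # scattering angles (rad), assumed
alpha = np.linspace(46, 150, 80)      # modal size-parameter range from the abstract

def fraunhofer(a, th):
    # Single-sphere Fraunhofer intensity (the approximation named above).
    u = np.outer(a, np.sin(th))
    return (a[:, None] ** 2 * (j1(u) / u)) ** 2

def pattern(params):
    # PSD-weighted pattern for a lognormal PSD with parameters (mu, sigma).
    mu, sigma = params
    psd = np.exp(-np.log(alpha / mu) ** 2 / (2 * sigma**2)) / alpha
    I = psd @ fraunhofer(alpha, theta)
    return I / I.max()

rng = np.random.default_rng(0)
I_meas = pattern((90.0, 0.15))        # synthetic "measurement" to invert

pop = np.column_stack([rng.uniform(50, 140, 30), rng.uniform(0.05, 0.4, 30)])
for _ in range(150):                  # (mu+lambda)-ES: mutate, pool, keep the best
    kids = pop + rng.normal(0.0, [2.0, 0.01], pop.shape)
    kids[:, 0] = np.clip(kids[:, 0], 46.0, 150.0)
    kids[:, 1] = np.clip(kids[:, 1], 0.02, 1.0)
    both = np.vstack([pop, kids])
    err = np.array([np.sum((pattern(p) - I_meas) ** 2) for p in both])
    pop = both[np.argsort(err)[:30]]
print("recovered (mu, sigma):", pop[0])   # should approach (90.0, 0.15)
```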

  19. Note: A manifold ranking based saliency detection method for camera

    NASA Astrophysics Data System (ADS)

    Zhang, Libo; Sun, Yihan; Luo, Tiejian; Rahman, Mohammad Muntasir

    2016-09-01

    Research focused on salient object regions in natural scenes has attracted a lot of attention in computer vision and has been widely used in many applications such as object detection and segmentation. However, accurately focusing on the salient region while taking photographs of real-world scenery is still a challenging task. To deal with this problem, this paper presents a novel approach based on the human visual system that makes use of both a background prior and a compactness prior. In the proposed method, we eliminate unsuitable boundaries with a fixed threshold to optimize the image boundary selection, which provides more precise estimations. Then the object detection, optimized with the compactness prior, is obtained by ranking with background queries. Salient objects are generally grouped together into connected areas that have compact spatial distributions. The experimental results on three public datasets demonstrate that the precision and robustness of the proposed algorithm are noticeably improved.

  20. Optical center alignment technique based on inner profile measurement method

    NASA Astrophysics Data System (ADS)

    Wakayama, Toshitaka; Yoshizawa, Toru

    2014-05-01

    Center alignment is an important technique for tuning up the spindle of various precision machines in the manufacturing industry. Conventionally, a tool such as a dial indicator has been used to adjust and position the axis through the manual operations of a skilled worker. However, it is not easy to control the axis precisely this way. In this paper, we developed an optical center alignment technique based on inner profile measurement using a ring beam device. The center position of the cylinder hole is determined from the circular profile detected by the optical sectioning method using the ring beam device. In our trials, the resolution of the center position proved to be less than 10 micrometers in extreme cases. This technique is available for practical applications in the machine tool industry.
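    A minimal sketch of the center-finding step, assuming the ring beam's optical section yields sample points on the hole's inner circle: a linear least-squares (Kasa) circle fit recovers the center, whose offset from the spindle axis is the misalignment. The points below are synthetic.

```python
import numpy as np

def fit_circle(x, y):
    # Kasa fit: solve 2a*x + 2b*y + c = x^2 + y^2 in least squares;
    # center is (a, b), radius is sqrt(c + a^2 + b^2).
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    a, b, c = np.linalg.lstsq(A, x**2 + y**2, rcond=None)[0]
    return (a, b), np.sqrt(c + a**2 + b**2)

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 360, endpoint=False)
x = 1.2 + 25.0 * np.cos(t) + rng.normal(0, 0.005, t.size)   # synthetic ring-beam
y = -0.8 + 25.0 * np.sin(t) + rng.normal(0, 0.005, t.size)  # profile points (mm)

center, radius = fit_circle(x, y)
print("hole center:", center, "radius:", radius)   # offset from origin = misalignment
```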

  1. Image-based method for automated phase correction of ghost.

    PubMed

    Chen, Chunxiao; Luo, Limin; Tao, Hua; Wang, Shijie

    2005-01-01

    One of the most common artifacts in echo planar imaging is the ghost artifact, typically overcome with the aid of a reference scan preceding the actual image acquisition. In this work, we describe an automated, reference-scan-free method for reducing the ghost artifact using image-based correction. The two-dimensional Fourier transform of the entire image data matrix is used to reconstruct two new images, one from only the even rows and the other from only the odd rows, with the remaining rows zero-filled. The phase shift between even echoes and odd echoes can be computed from the two images. The unwrapped phase shift, obtained by Levenberg-Marquardt nonlinear fitting, can be used to suppress the ghost effectively.
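    A minimal numpy sketch of the image-based step described above, assuming a complete 2D EPI k-space matrix: even and odd rows are reconstructed separately (zero-filling the rest), and the wrapped even/odd phase difference is read from the two images. The unwrapping and nonlinear fitting stages are only indicated.

```python
import numpy as np

def even_odd_phase(kspace):
    # Reconstruct one image from even k-space rows and one from odd rows,
    # zero-filling the rest, then read the wrapped even/odd phase difference.
    even = np.zeros_like(kspace)
    odd = np.zeros_like(kspace)
    even[0::2, :] = kspace[0::2, :]
    odd[1::2, :] = kspace[1::2, :]
    img_e = np.fft.ifft2(even)
    img_o = np.fft.ifft2(odd)
    return np.angle(img_e * np.conj(img_o))   # unwrapping + fitting would follow

kspace = np.fft.fft2(np.random.default_rng(4).random((64, 64))).astype(complex)
dphi = even_odd_phase(kspace)
print(dphi.shape, float(dphi.mean()))
```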

  2. A novel classification method based on membership function

    NASA Astrophysics Data System (ADS)

    Peng, Yaxin; Shen, Chaomin; Wang, Lijia; Zhang, Guixu

    2011-03-01

    We propose a method for medical image classification using membership functions. Our aim is to classify the image into several classes based on prior knowledge. For every point, we calculate its membership function, i.e., the probability that the point belongs to each class. The point is finally labeled as the class with the highest value of the membership function. The classification is reduced to a minimization problem of a functional whose arguments are the membership functions. Our paper contains three novelties. First, bias correction and the Rudin-Osher-Fatemi (ROF) model are applied to the input image to enhance the image quality. Second, an unconstrained functional is used: we use variable substitution to avoid the constraints that the membership functions be positive and sum to one. Third, several techniques are used to speed up the computation. Experimental results on ventricle images show the validity of this approach.

  3. Material measurement method based on femtosecond laser plasma shock wave

    NASA Astrophysics Data System (ADS)

    Zhong, Dong; Li, Zhongming

    2017-03-01

    The acoustic emission signal of the laser plasma shock wave, generated when a femtosecond laser ablates pure Cu, Fe, and Al target materials, has been detected using a fiber Fabry-Perot (F-P) acoustic emission sensing probe. The spectral characteristics of the acoustic emission signals for the three materials have been analyzed and studied using the Fourier transform. The results show that the frequencies of the acoustic emission signals detected from the three materials are different, while for the same material the frequencies are almost identical under different ablation energies and detection ranges. Moreover, the spectral amplitudes of the three materials show a fixed pattern. The experimental results and methods suggest a potential application to on-line measurement of the plasma shock wave generated by femtosecond laser ablation of a target, using the fiber F-P acoustic emission sensor probe.

  4. Fiber optic pressure sensing method based on Sagnac interferometer

    NASA Astrophysics Data System (ADS)

    Zhang, Yi; Zhuang, Zhi; Chen, Ying; Yang, Yuanhong

    2014-11-01

    A pressure-sensing method using polarization-maintaining photonic crystal fiber (PM-PCF) as the sensing element in a Sagnac interferometer is proposed to monitor interlayer pressure in especially compact structures. The sensing model is analyzed and a test system is set up and validated by experiment. The birefringence is modified by the deformation of the PM-PCF under transverse pressure, so pressure can be measured by detecting the wavelength shift of one specific valley in the output of the Sagnac interferometer. The experimental results show that the output interference fringes shift linearly with pressure. A dynamic range of 0-10 kN, a sensing precision of 2.6%, and a pressure sensitivity of 0.4414 nm/kN are achieved, and the strain relaxation of the cushion can be observed clearly. The sensor has good engineering practicability and the capability to suppress interference caused by fluctuations of the environmental temperature, its temperature sensitivity being -11.8 pm/°C.

  5. Classification data mining method based on dynamic RBF neural networks

    NASA Astrophysics Data System (ADS)

    Zhou, Lijuan; Xu, Min; Zhang, Zhang; Duan, Luping

    2009-04-01

    With the wide application of databases and the rapid development of the Internet, the capacity to manufacture and collect data using information technology has improved greatly. Mining useful information or knowledge from large databases or data warehouses is an urgent problem, and data mining technology has developed rapidly to meet this need. However, data mining (DM) often faces data that are noisy, disordered, and nonlinear. Fortunately, the artificial neural network (ANN) is well suited to these problems because of its robustness, adaptability, parallel processing, distributed memory, and high error tolerance. This paper gives a detailed discussion of ANN methods used in DM, based on an analysis of various data mining technologies, and lays particular stress on classification data mining based on RBF neural networks. Pattern classification is an important part of RBF neural network applications. In an on-line environment the training dataset is variable, so batch learning algorithms (e.g., OLS), which generate plenty of unnecessary retraining, have low efficiency. This paper derives an incremental learning algorithm (ILA) from the gradient descent algorithm to remove this bottleneck. ILA adaptively adjusts the parameters of the RBF network, driven by minimizing the error cost, without any redundant retraining. Using the proposed method, an on-line classification system was constructed to solve the IRIS classification problem. Experimental results show that the algorithm has a fast convergence rate and excellent on-line classification performance.
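    A minimal sketch of the incremental idea, not the paper's exact ILA: each streamed sample updates the RBF output weights with one gradient-descent step on the squared error, so no batch retraining occurs. Centers and widths are fixed, assumed values.

```python
import numpy as np

class IncrementalRBF:
    # Fixed Gaussian centers/width (an assumption); only the linear output
    # weights adapt, one gradient step per streamed sample.
    def __init__(self, centers, width, n_out, lr=0.1):
        self.c, self.s, self.lr = centers, width, lr
        self.W = np.zeros((centers.shape[0], n_out))

    def _phi(self, x):
        return np.exp(-np.sum((self.c - x) ** 2, axis=1) / (2 * self.s**2))

    def predict(self, x):
        return self._phi(x) @ self.W

    def partial_fit(self, x, y):
        phi = self._phi(x)
        err = phi @ self.W - y
        self.W -= self.lr * np.outer(phi, err)   # gradient of 0.5*||err||^2

rng = np.random.default_rng(1)
net = IncrementalRBF(centers=rng.uniform(0, 1, (20, 4)), width=0.3, n_out=3)
for _ in range(1000):                            # simulated on-line stream
    x = rng.uniform(0, 1, 4)
    y = np.eye(3)[int(x[0] * 3)]                 # toy label from the first feature
    net.partial_fit(x, y)
print(net.predict(np.array([0.9, 0.5, 0.5, 0.5])))
```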

  6. A quantitative dimming method for LED based on PWM

    NASA Astrophysics Data System (ADS)

    Wang, Jiyong; Mou, Tongsheng; Wang, Jianping; Tian, Xiaoqing

    2012-10-01

    Traditional light sources were required to provide stable and uniform illumination for living or working environments, in view of the performance of human visual functions. This requirement was entirely reasonable until the non-visual functions of the ganglion cells in the retina's photosensitive layer were discovered. A new generation of lighting technology is now emerging, based on novel light sources such as LEDs and on the photobiological effects of light on human physiology and behavior. To realize dynamic LED lighting whose intensity and color are adjustable to the needs of these photobiological effects, a quantitative dimming method based on Pulse Width Modulation (PWM) and light-mixing technology is presented. Beginning with two-channel PWM, this paper demonstrates the determinacy and limitations of PWM dimming for realizing Expected Photometric and Colorimetric Quantities (EPCQ), through an analysis of the geometrical, photometric, colorimetric, and electrodynamic constraints. A quantitative model that maps the EPCQ onto duty cycles is finally established. The model suggests that determinacy holds only for two- and three-channel PWM, whereas the limitation is an inevitable feature of multi-channel systems. To examine the model, a light-mixing experiment with two kinds of white LED simulated the variations of illuminance and Correlated Color Temperature (CCT) from dawn to midday. The mean deviations between theoretical and measured values were 15 lx and 23 K, respectively. The results show that this method can effectively realize a light spectrum with specific EPCQ requirements, and it provides a theoretical basis and a practical way to implement dynamic LED lighting.
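    To make the two-channel determinacy concrete, the sketch below assumes each channel's photometric output scales linearly with its duty cycle (a deliberate simplification: CCT does not actually mix linearly) and solves the resulting 2x2 system. The per-channel values are invented.

```python
import numpy as np

# Column j holds channel j's contribution at 100% duty; all numbers invented.
M = np.array([[420.0, 380.0],        # illuminance (lx) of warm / cool channel
              [2700.0, 6500.0]])     # "CCT-like" channel quantity (linearized)
target = np.array([450.0, 4000.0])   # desired EPCQ pair

d = np.linalg.solve(M, target)       # unique solution -> two-channel "determinacy"
if np.all((d >= 0) & (d <= 1)):
    print("duty cycles:", d)
else:
    # the "limitation": feasible duty cycles must lie in [0, 1]
    print("target outside the reachable gamut:", d)
```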

  7. Data Bases in Writing: Method, Practice, and Metaphor.

    ERIC Educational Resources Information Center

    Schwartz, Helen J.

    1985-01-01

    Points out the need for informed and experienced users of data bases. Discusses the definition of a data base, creating a data base for research, comparison use, and checking written text as a data base. (EL)

  8. A Molecular Selection Index Method Based on Eigenanalysis

    PubMed Central

    Cerón-Rojas, J. Jesús; Castillo-González, Fernando; Sahagún-Castellanos, Jaime; Santacruz-Varela, Amalio; Benítez-Riquelme, Ignacio; Crossa, José

    2008-01-01

    The traditional molecular selection index (MSI) employed in marker-assisted selection maximizes the selection response by combining information on molecular markers linked to quantitative trait loci (QTL) and phenotypic values of the traits of the individuals of interest. This study proposes an MSI based on an eigenanalysis method (molecular eigen selection index method, MESIM), where the first eigenvector is used as a selection index criterion, and its elements determine the proportion of the trait's contribution to the selection index. This article develops the theoretical framework of MESIM. Simulation results show that the genotypic means and the expected selection response from MESIM for each trait are equal to or greater than those from the traditional MSI. When several traits are simultaneously selected, MESIM performs well for traits with relatively low heritability. The main advantages of MESIM over the traditional molecular selection index are that its statistical sampling properties are known and that it does not require economic weights and thus can be used in practical applications when all or some of the traits need to be improved simultaneously. PMID:18716338
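    A hedged numerical sketch of the eigenanalysis step only: the first eigenvector of a covariance matrix of phenotypic and marker scores serves as the vector of index coefficients, so each trait's weight reflects its contribution to total variability. The data matrix is a random stand-in, not MESIM's full theoretical machinery.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))          # 200 individuals x (traits + marker scores)
T = np.cov(X, rowvar=False)            # stand-in covariance matrix

vals, vecs = np.linalg.eigh(T)         # symmetric eigendecomposition
b = vecs[:, np.argmax(vals)]           # first eigenvector = index coefficients
if b.sum() < 0:                        # fix the arbitrary sign
    b = -b
index = X @ b                          # selection index value per individual
selected = np.argsort(index)[-20:]     # keep the top 10% of individuals
print("index weights:", np.round(b, 3))
```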

  9. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper presents an automatic image classification and outlier identification method based on clustering by super-pixel density peaks. The image pixels' location coordinates and gray values are used to compute a density and a distance for each point, from which the classification and the outliers are obtained automatically. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of super-pixel sub-blocks before the density and distance calculations. A normalized density-and-distance discrimination rule is designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention and classifies images faster than the density clustering algorithm, while effectively automating classification and outlier extraction.
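    The sketch below implements the density-and-distance idea in the spirit of the abstract (density-peak clustering): points with both high local density rho and high distance delta to any denser point are centers, and isolated low-density points are flagged as outliers. Superpixel preprocessing is assumed to have supplied the points; random 2D features stand in for them.

```python
import numpy as np

rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(c, 0.3, (60, 2))          # stand-ins for super-pixel
                 for c in ((0, 0), (3, 3), (0, 4))])  # feature points

D = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
dc = np.percentile(D, 2)                              # cutoff distance, rule of thumb
rho = np.exp(-(D / dc) ** 2).sum(axis=1) - 1.0        # Gaussian local density

delta = np.empty(len(pts))                            # distance to nearest denser point
for i in range(len(pts)):
    higher = np.where(rho > rho[i])[0]
    delta[i] = D[i, higher].min() if higher.size else D.max()

gamma = rho * delta                                   # decision value
centers = np.argsort(gamma)[-3:]                      # the clearest density peaks
outliers = np.where((rho < np.percentile(rho, 5)) & (delta > dc))[0]
print("centers:", centers, "outlier count:", outliers.size)
```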

  10. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered by using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data according to the misalignment can be repeated until the desired degree of registration is achieved. The method to be presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data.
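    A minimal sketch of the local translation step, under the stated assumption that each local area moves by pure translation: cross-correlating the two binary boundary maps via FFT and locating the peak yields the local shift, from which the larger-area rotation can later be recovered.

```python
import numpy as np

def local_shift(bmap_ref, bmap_new):
    # FFT cross-correlation; the peak location is the translation of the
    # new map relative to the reference (wrapped to signed shifts).
    F = np.fft.fft2(bmap_new) * np.conj(np.fft.fft2(bmap_ref))
    corr = np.fft.ifft2(F).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = bmap_ref.shape
    return dy - n if dy > n // 2 else dy, dx - m if dx > m // 2 else dx

ref = np.zeros((64, 64))
ref[20:40, 20:40] = 1.0
edge = (np.abs(ref - np.roll(ref, 1, 0)) + np.abs(ref - np.roll(ref, 1, 1))) > 0
shifted = np.roll(np.roll(edge, 3, axis=0), -5, axis=1)
print(local_shift(edge, shifted))   # -> (3, -5)
```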

  11. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures the in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated this method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels and 20 realizations of noise have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  12. Unbiased methods for population-based association studies.

    PubMed

    Devlin, B; Roeder, K; Bacanu, S A

    2001-12-01

    Large, population-based samples and large-scale genotyping are being used to evaluate disease/gene associations. A substantial drawback to such samples is the fact that population substructure can induce spurious associations between genes and disease. We review two methods, called genomic control (GC) and structured association (SA), that obviate many of the concerns about population substructure by using the features of the genomes present in the sample to correct for stratification. The GC approach exploits the fact that population substructure generates "overdispersion" of the statistics used to assess association. By testing multiple polymorphisms throughout the genome, only some of which are pertinent to the disease of interest, the degree of overdispersion generated by population substructure can be estimated and taken into account. The SA approach assumes that the sampled population, although heterogeneous, is composed of subpopulations that are themselves homogeneous. By using multiple polymorphisms throughout the genome, this "latent class method" estimates the probability that sampled individuals derive from each of these latent subpopulations. GC has the advantage of robustness, simplicity, and wide applicability, even to experimental designs such as DNA pooling. SA is a bit more complicated but has the advantage of greater power in some realistic settings, such as admixed populations or when association varies widely across subpopulations. It, too, is widely applicable. Both also have weaknesses, as elaborated in our review.
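    A minimal sketch of the GC correction described above: the overdispersion factor lambda is estimated from the median of chi-square statistics at many presumably null markers and used to deflate the candidate statistic. The inflated null statistics are simulated.

```python
import numpy as np
from scipy.stats import chi2

null_stats = chi2.rvs(df=1, size=5000, random_state=0) * 1.3   # simulated inflation

lam = np.median(null_stats) / chi2.ppf(0.5, df=1)   # chi2(1) median ~ 0.4549
lam = max(lam, 1.0)                                 # never inflate the statistic

candidate_stat = 12.0                               # observed 1-df statistic
corrected = candidate_stat / lam
print(f"lambda = {lam:.2f}, corrected = {corrected:.2f}, "
      f"p = {chi2.sf(corrected, df=1):.2e}")
```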

  13. Three-Dimensional Imaging Methods Based on Multiview Images

    NASA Astrophysics Data System (ADS)

    Son, Jung-Young; Javidi, Bahram

    2005-09-01

    Three-dimensional imaging methods based on parallax as the depth cue can be classified into stereoscopic methods, which provide binocular parallax only, and multiview methods, which provide both binocular and motion parallax. In these methods, the parallax is provided by creating a viewing zone using either special optical eyeglasses or a special optical plate as the viewing-zone-forming optics. For stereoscopic image generation, either the eyeglasses or the optical plate can be employed; for multiview, the optical plate is used, or the eyeglasses combined with a tracking device. The stereoscopic image pair and the multiview images are presented either simultaneously or as a time sequence, using projectors or display panels. Multiview images can also be presented two at a time according to the viewer's movements. The presence of the viewing-zone-forming optics often causes undesirable problems, such as the appearance of moiré fringes, image quality deterioration, depth reversal, limited viewing regions, low image brightness, image blurring, and the inconvenience of wearing eyeglasses.

  14. Structural topology design of container ship based on knowledge-based engineering and level set method

    NASA Astrophysics Data System (ADS)

    Cui, Jin-ju; Wang, De-yu; Shi, Qi-qi

    2015-06-01

    Knowledge-Based Engineering (KBE) is introduced into the ship structural design in this paper. From the implementation of KBE, the design solutions for both Rules Design Method (RDM) and Interpolation Design Method (IDM) are generated. The corresponding Finite Element (FE) models are generated. Topological design of the longitudinal structures is studied where the Gaussian Process (GP) is employed to build the surrogate model for FE analysis. Multi-objective optimization methods inspired by Pareto Front are used to reduce the design tank weight and outer surface area simultaneously. Additionally, an enhanced Level Set Method (LSM) which employs implicit algorithm is applied to the topological design of typical bracket plate which is used extensively in ship structures. Two different sets of boundary conditions are considered. The proposed methods show satisfactory efficiency and accuracy.

  15. A GIS-based method for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Kleomenis; Stathopoulos, Nikos; Psarogiannis, Athanasios; Penteris, Dimitris; Tsiakos, Chrisovalantis; Karagiannopoulou, Aikaterini; Krikigianni, Eleni; Karymbalis, Efthimios; Chalkias, Christos

    2016-04-01

    Floods are physical global hazards with negative environmental and socio-economic impacts on local and regional scales. The technological evolution of recent decades, especially in the field of geoinformatics, has offered new advantages in hydrological modelling, and this study uses this technology to quantify flood risk. The study area is an ungauged catchment; using GIS-based hydrological and geomorphological analysis together with a GIS-based distributed Unit Hydrograph model, a series of outcomes was obtained. More specifically, this paper examines the behaviour of the Kladeos basin (Peloponnese, Greece) using real rainfall data as well as hypothetical storms. The hydrological analysis was carried out using a Digital Elevation Model with a 5x5 m pixel size, while the quantitative drainage basin characteristics were calculated and studied in terms of stream order and its contribution to flooding. Unit Hydrographs are useful when there is a lack of data, and in this work, based on the time-area method, a sequence of flood risk assessments was made using GIS technology. Essentially, the proposed methodology estimates parameters such as discharge and flow velocity in order to quantify flood risk. Keywords: flood risk assessment quantification; GIS; hydrological analysis; geomorphological analysis.
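    A hedged sketch of the time-area unit hydrograph idea used above: the area draining between successive isochrones forms the unit hydrograph ordinates, and the outlet hydrograph is the convolution of rainfall excess with those ordinates. All areas and rainfall depths are invented numbers, not values for the Kladeos basin.

```python
import numpy as np

dt = 0.5                                            # time step (h), assumed
area_km2 = np.array([2.1, 4.5, 6.0, 3.2, 1.2])      # area between isochrones
uh = area_km2 * 1e6 / (dt * 3600.0)                 # m^3/s per metre of excess

excess_m = np.array([0.002, 0.006, 0.003])          # rainfall excess per step (m)
Q = np.convolve(excess_m, uh)                       # outlet hydrograph (m^3/s)
for i, q in enumerate(Q):
    print(f"t = {i * dt:4.1f} h   Q = {q:8.1f} m^3/s")
```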

  16. A DNA-based method for detecting homologous blood doping.

    PubMed

    Manokhina, Irina; Rupert, James L

    2013-12-01

    Homologous (or allogeneic) blood doping, in which blood is transferred from a donor into a recipient athlete, is the easiest, cheapest, and fastest way to increase red cell mass (hematocrit) and therefore the oxygen-carrying capacity of the blood. Although thought to have been rendered obsolete as a doping strategy by the increased use of rhEPO to increase hematocrits, there is evidence that athletes are still using this potentially dangerous method to improve endurance performance. Current testing for homologous blood doping is based on identification of mixed populations of red blood cells by flow cytometry. This paper proposes that homologous blood doping could also be tested for by high-resolution qPCR-based genotyping and demonstrates that assays could be developed that would detect second populations of cells even if the "donor" blood was depleted of 99% of the DNA-containing leukocytes. Issues of test specificity and sensitivity are discussed, as well as some of the ethical considerations that would have to be addressed if athletes' genotypes were to be used by the anti-doping authorities to prevent, or detect, the use of prohibited ergogenic practices.

  17. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6 and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when one organism's data set is used for training and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5 and 93%, respectively. These results show the potential of our method for accurately predicting the operons of any organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/.
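    As a hedged stand-in for the paper's neural network, the sketch below trains a logistic classifier on the same two features (intergenic distance and a STRING-style functional-association score), using toy data that mimic the tendency of same-operon gene pairs to be close together and functionally linked.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 2000
same = rng.integers(0, 2, n)                        # 1 = pair in the same operon
dist = np.where(same == 1, rng.normal(15, 30, n), rng.normal(120, 60, n))
score = np.where(same == 1, rng.beta(5, 2, n), rng.beta(2, 5, n))

X = np.column_stack([(dist - dist.mean()) / dist.std(), score, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):                               # gradient descent on log-loss
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - same) / n

pred = (1.0 / (1.0 + np.exp(-X @ w))) > 0.5
print(f"training accuracy: {np.mean(pred == same):.3f}")
```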

  18. Post-Fragmentation Whole Genome Amplification-Based Method

    NASA Technical Reports Server (NTRS)

    Benardini, James; LaDuc, Myron T.; Langmore, John

    2011-01-01

    This innovation is derived from a proprietary amplification scheme that is based upon random fragmentation of the genome into a series of short, overlapping templates. The resulting shorter DNA strands (<400 bp) constitute a library of DNA fragments with defined 3' and 5' termini. Specific primers to these termini are then used to isothermally amplify this library into potentially unlimited quantities that can be used immediately for multiple downstream applications including gel electrophoresis, quantitative polymerase chain reaction (qPCR), comparative genomic hybridization microarray, SNP analysis, and sequencing. The standard reaction can be performed with minimal hands-on time and can produce amplified DNA in as little as three hours. Post-fragmentation whole genome amplification-based technology provides a robust and accurate method of amplifying femtogram levels of starting material into microgram yields with no detectable allele bias. The amplified DNA also facilitates the preservation of spacecraft samples by amplifying scarce amounts of template DNA into microgram concentrations in just a few hours. With further optimization, this could be a feasible technology for sample preservation on potential future sample return missions. The research and technology development described here can be pivotal in dealing with backward/forward biological contamination from planetary missions. Such efforts rely heavily on an increasing understanding of the burden and diversity of microorganisms present on spacecraft surfaces throughout assembly and testing. The development and implementation of these technologies could significantly improve the comprehensiveness and resolving power of spacecraft-associated microbial population censuses, and are important to the continued evolution and advancement of planetary protection capabilities. Current molecular procedures for assaying spacecraft-associated microbial burden and diversity have

  19. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    DOEpatents

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  20. Geomorphometry-based method of landform assessment for geodiversity

    NASA Astrophysics Data System (ADS)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and key elements of natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world. However, recognition of the problem is still at an early stage, and little progress has been made concerning its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studies of the Earth's surface. Very often, the main limitation in such analysis is the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a processing algorithm for landform geodiversity assessment using geomorphometric parameters, and to compare the final maps with those resulting from the thematic-layers method. The study area consists of two distinctive valleys characterized by diverse landscape units and a complex geological setting: Sucha Woda in the Polish part of the Tatra Mts. and Wrzosowka in the Sudetes Mts. Both valleys are located in National Park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index and Relative Heights. After the data integration and the necessary geoinformation analysis, the next step, which carries a certain degree of subjectivity, is score classification of the input maps using an expert system and geostatistical analysis. The crucial point in generating the final maps of geodiversity by multi-criteria evaluation (MCE) with the GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by
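    A minimal sketch of the final multi-criteria step, assuming seven score-classified factor rasters like those listed above: a weighted sum combines them into a geodiversity map. The rasters, quantile-based score classes, and weights are placeholders, not the study's expert-derived values.

```python
import numpy as np

rng = np.random.default_rng(11)
names = ["curvature", "openness", "solar", "tpi", "twi", "convergence", "rel_height"]
factors = {k: rng.random((200, 200)) for k in names}     # placeholder rasters
weights = dict(zip(names, [0.20, 0.15, 0.10, 0.15, 0.15, 0.10, 0.15]))

def score(raster, n_classes=5):
    # Score classification into classes 1..n_classes by quantile breaks.
    breaks = np.quantile(raster, np.linspace(0, 1, n_classes + 1)[1:-1])
    return np.digitize(raster, breaks) + 1

geodiv = sum(weights[k] * score(factors[k]) for k in names)   # Weighted Sum (MCE)
geodiv_classes = score(geodiv)             # very low .. very high geodiversity
print(np.bincount(geodiv_classes.ravel())[1:])
```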

  1. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving continues to be one of the important research topics. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The algorithms investigated in this study depend on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. In the experiments, a database was built including hundreds of medical images of, for example, the brain, lung, sinus, and bone. The results obtained in this study show that queries based on statistics obtained from the GLCM are satisfactory. However, it is observed that the Gabor wavelet has been the most effective and accurate method.
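    A minimal sketch of the GLCM feature extraction evaluated above, using scikit-image; a random array stands in for a medical image, and the retrieval step (ranking database images by the distance between feature vectors) is only indicated in a comment.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

img = (np.random.default_rng(0).random((128, 128)) * 255).astype(np.uint8)

glcm = graycomatrix(img, distances=[1, 2], angles=[0, np.pi / 4, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy", "correlation")])
# 'features' is stored per database image; retrieval ranks images by the
# distance between the query's vector and each stored vector.
print(features.shape)   # 4 properties x 2 distances x 3 angles = 24 features
```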

  2. Sensitivity kernels for viscoelastic loading based on adjoint methods

    NASA Astrophysics Data System (ADS)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\,\delta\ln\eta\,\mathrm{d}V + \int_{t_0}^{t_1}\int_{\partial M} K_{\dot{\sigma}}\,\delta\dot{\sigma}\,\mathrm{d}S\,\mathrm{d}t$, where $\delta\ln\eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $\mathrm{d}V$ is the volume element, $\delta\dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $\mathrm{d}S$ is the surface element on $\partial M$. The `viscosity

  3. A Novel Method for Functional Annotation Prediction Based on Combination of Classification Methods

    PubMed Central

    Jung, Jaehee; Lee, Heung Ki

    2014-01-01

    Automated protein function prediction designates the functions of proteins of unknown function using computational methods. This technique is useful for automatically assigning gene functional annotations to undefined sequences in next-generation genome analysis. Next-generation sequencing (NGS) is a popular research method, since high-throughput technologies such as DNA sequencing and microarrays have created large sets of genes, and these huge sequence sets have greatly increased the need for analysis. Previous research has been based on sequence similarity, as this is strongly related to functional homology. However, this study aimed to designate protein functions by automatically predicting the function of the genome utilizing InterPro (IPR), which represents the properties of protein families and groups of protein functions. Moreover, we used the Gene Ontology (GO), the controlled vocabulary used to comprehensively describe protein function. To define the relationship between IPR and GO terms, three pattern recognition techniques were employed under different conditions, such as feature selection and weighted values instead of binary ones. PMID:25133242

  4. Estimation of Convective Momentum Fluxes Using Satellite-Based Methods

    NASA Astrophysics Data System (ADS)

    Jewett, C.; Mecikalski, J. R.

    2009-12-01

    Research and case studies have shown that convection plays a significant role in large-scale environmental circulations. Convective momentum fluxes (CMFs) have been studied for many years using in-situ and aircraft measurements, along with numerical simulations. However, despite these successes, little work has been conducted on methods that use satellite remote sensing as a tool to diagnose these fluxes. Satellite data can provide continuous analysis across regions devoid of ground-based remote sensing. Therefore, the project's overall goal is to develop a synergistic approach for retrieving CMFs using a collection of instruments including GOES, TRMM, CloudSat, MODIS, and QuikScat. This particular study focuses on the work using TRMM and QuikScat and on the methodology of using CloudSat. Sound research has already been conducted for computing CMFs using the GOES instruments (Jewett and Mecikalski 2009, submitted to J. Geophys. Res.). Using satellite-derived winds, namely mesoscale atmospheric motion vectors (MAMVs) as described by Bedka and Mecikalski (2005), one can obtain the actual winds occurring within a convective environment as perturbed by convection. Surface outflow boundaries and upper-tropospheric anvil outflow produce “perturbation” winds on smaller, convective scales. Combined with estimated vertical motion retrieved using geostationary infrared imagery, CMFs were estimated using MAMVs, with an average profile calculated across a convective regime or a domain covered by active storms. This study involves estimating draft tilt from TRMM PR radar reflectivity and sub-cloud-base fluxes using QuikScat data. The “slope” of falling hydrometeors (relative to Earth) in the data is related to the u', v' and w' winds within convection. The main up- and downdrafts within convection are described by precipitation patterns (Mecikalski 2003). Vertical motion estimates are made using model results for deep convection

  5. Knowledge Discovery from Climate Data using Graph-Based Methods

    NASA Astrophysics Data System (ADS)

    Steinhaeuser, K.

    2012-04-01

    Climate and Earth sciences have recently experienced a rapid transformation from a historically data-poor to a data-rich environment, thus bringing them into the realm of the Fourth Paradigm of scientific discovery - a term coined by the late Jim Gray (Hey et al. 2009), the other three being theory, experimentation and computer simulation. In particular, climate-related observations from remote sensors on satellites and weather radars, in situ sensors and sensor networks, as well as outputs of climate or Earth system models from large-scale simulations, provide terabytes of spatio-temporal data. These massive and information-rich datasets offer a significant opportunity for advancing climate science and our understanding of the global climate system, yet current analysis techniques are not able to fully realize their potential benefits. We describe a class of computational approaches, specifically from the data mining and machine learning domains, which may be novel to the climate science domain and can assist in the analysis process. Computer scientists have developed spatial and spatio-temporal analysis techniques for a number of years now, and many of them may be applicable and/or adaptable to problems in climate science. We describe a large-scale, NSF-funded project aimed at addressing climate science question using computational analysis methods; team members include computer scientists, statisticians, and climate scientists from various backgrounds. One of the major thrusts is in the development of graph-based methods, and several illustrative examples of recent work in this area will be presented.

  6. Method of Heating a Foam-Based Catalyst Bed

    NASA Technical Reports Server (NTRS)

    Fortini, Arthur J.; Williams, Brian E.; McNeal, Shawn R.

    2009-01-01

    A method of heating a foam-based catalyst bed has been developed using silicon carbide as the catalyst support due to its readily accessible, high surface area that is oxidation-resistant and is electrically conductive. The foam support may be resistively heated by passing an electric current through it. This allows the catalyst bed to be heated directly, requiring less power to reach the desired temperature more quickly. Designed for heterogeneous catalysis, the method can be used by the petrochemical, chemical processing, and power-generating industries, as well as automotive catalytic converters. Catalyst beds must be heated to a light-off temperature before they catalyze the desired reactions. This typically is done by heating the assembly that contains the catalyst bed, which results in much of the power being wasted and/or lost to the surrounding environment. The catalyst bed is heated indirectly, thus requiring excessive power. With the electrically heated catalyst bed, virtually all of the power is used to heat the support, and only a small fraction is lost to the surroundings. Although the light-off temperature of most catalysts is only a few hundred degrees Celsius, the electrically heated foam is able to achieve temperatures of 1,200 °C. Lower temperatures are achievable by supplying less electrical power to the foam. Furthermore, because of the foam's open-cell structure, the catalyst can be applied either directly to the foam ligaments or in the form of a catalyst-containing washcoat. This innovation would be very useful for heterogeneous catalysis where elevated temperatures are needed to drive the reaction.

  7. Evaluation of medical students of teacher-based and student-based teaching methods in Infectious diseases course.

    PubMed

    Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F

    2015-01-01

    Introduction: In recent years, medical education has changed dramatically, and many medical schools around the world have been trying to expand modern training methods. The purpose of this research was to obtain medical students' appraisals of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan Medical Sciences University. Methods: In this interventional study, a total of 52 medical students taking the Infectious diseases course were included. About 50% of the course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). The students' satisfaction with these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed using SPSS 19 and paired t-tests. Results: The students' satisfaction with the student-based teaching method (problem-based learning) was more positive than their satisfaction with the teacher-based teaching method (lecture). The mean score of students under the teacher-based teaching method was 12.03 (SD=4.08) and under the student-based teaching method it was 15.50 (SD=4.26), a considerable difference (p<0.001). Conclusion: The use of the student-based teaching method (problem-based learning), in comparison with the teacher-based teaching method (lecture), to present the Infectious diseases course led to student satisfaction and provided additional learning opportunities.

  8. Evaluation of medical students of teacher-based and student-based teaching methods in Infectious diseases course

    PubMed Central

    Ghasemzadeh, I; Aghamolaei, T; Hosseini-Parandar, F

    2015-01-01

    Introduction: In recent years, medical education has changed dramatically, and many medical schools around the world have been trying to expand modern training methods. The purpose of this research was to obtain medical students' appraisals of teacher-based and student-based teaching methods in the Infectious diseases course at the Medical School of Hormozgan Medical Sciences University. Methods: In this interventional study, a total of 52 medical students taking the Infectious diseases course were included. About 50% of the course was presented by a teacher-based teaching method (lecture) and 50% by a student-based teaching method (problem-based learning). The students' satisfaction with these methods was assessed by a questionnaire, and a test was used to measure their learning. Data were analyzed using SPSS 19 and paired t-tests. Results: The students' satisfaction with the student-based teaching method (problem-based learning) was more positive than their satisfaction with the teacher-based teaching method (lecture). The mean score of students under the teacher-based teaching method was 12.03 (SD=4.08) and under the student-based teaching method it was 15.50 (SD=4.26), a considerable difference (p<0.001). Conclusion: The use of the student-based teaching method (problem-based learning), in comparison with the teacher-based teaching method (lecture), to present the Infectious diseases course led to student satisfaction and provided additional learning opportunities.

  9. Using Corporate-Based Methods To Assess Technical Communication Programs.

    ERIC Educational Resources Information Center

    Faber, Brenton; Bekins, Linn; Karis, Bill

    2002-01-01

    Investigates methods of program assessment used by corporate learning sites and profiles value added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…

  10. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content-based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global: it misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: ways of extracting local histograms to capture spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.
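    A minimal sketch of one surveyed combination, a global HSV histogram compared with the histogram-intersection measure; the bin counts, OpenCV-style HSV ranges, and images are assumptions for illustration.

```python
import numpy as np

def hsv_histogram(img_hsv, bins=(8, 4, 4)):
    # Global HSV histogram (OpenCV-style ranges assumed), normalized to sum 1.
    h, _ = np.histogramdd(img_hsv.reshape(-1, 3), bins=bins,
                          range=((0, 180), (0, 256), (0, 256)))
    return h.ravel() / h.sum()

def intersection(h1, h2):
    return np.minimum(h1, h2).sum()     # 1.0 means identical histograms

rng = np.random.default_rng(5)
query = rng.integers(0, 180, (64, 64, 3)).astype(float)    # stand-in images
db_img = rng.integers(0, 180, (64, 64, 3)).astype(float)
print("similarity:", intersection(hsv_histogram(query), hsv_histogram(db_img)))
```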

  11. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2017-02-01

    Different global and local color histogram methods for content-based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global: it misses the spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: ways of extracting local histograms to capture spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining a color histogram. In this paper the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.

  12. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.

  13. Jet-based methods to print living cells.

    PubMed

    Ringeisen, Bradley R; Othon, Christina M; Barron, Jason A; Young, Daniel; Spargo, Barry J

    2006-09-01

    Cell printing has been popularized over the past few years as a revolutionary advance in tissue engineering that could potentially enable heterogeneous 3-D scaffolds to be built cell-by-cell. This review article summarizes the state-of-the-art cell printing techniques that utilize fluid jetting phenomena to deposit 2- and 3-D patterns of living eukaryotic cells. There are four distinct categories of jet-based approaches to printing cells. Laser guidance direct write (LG DW) was the first reported technique to print viable cells, forming patterns of embryonic-chick spinal-cord cells on a glass slide (1999). Shortly after this, modified laser-induced forward transfer (LIFT) techniques and modified ink jet printers were also used to print viable cells, followed by the most recent demonstration using an electrohydrodynamic jetting (EHDJ) method. The low cost of some of these printing technologies has spurred debate as to whether they could be used on a large scale to manufacture tissue and possibly even whole organs. This review summarizes the published results of these cell printers (cell viability, retained genotype and phenotype) and includes a physical description of the various jetting processes, with a discussion of the stresses and forces that may be encountered by cells during printing. We conclude by comparing and contrasting the different jet-based techniques, while providing a map for future experiments that could lead to significant advances in the field of tissue engineering.

  14. An efficient method for DEM-based overland flow routing

    NASA Astrophysics Data System (ADS)

    Huang, Pin-Chun; Lee, Kwan Tun

    2013-05-01

    The digital elevation model (DEM) is frequently used to represent watershed topographic features based on a raster or a vector data format. It has been widely linked with flow routing equations for watershed runoff simulation. In this study, a recursive formulation was encoded into the conventional kinematic- and diffusion-wave routing algorithms to permit a larger time increment, even when the Courant-Friedrichs-Lewy condition is violated. To meet the requirement of the recursive formulation, a novel routing sequence was developed to determine the cell-to-cell computational procedure for the DEM database. The routing sequence can be set either according to the grid elevation in descending order for the kinematic-wave routing or according to the water stage of the grid in descending order for the diffusion-wave routing. The recursive formulation for 1D runoff routing was first applied to a conceptual overland plane to demonstrate the precision of the formulation, using an analytical solution for verification. The proposed novel routing sequence with the recursive formulation was then applied to two mountain watersheds for 2D runoff simulations. The results showed that the efficiency of the proposed method was significantly superior to that of the conventional algorithm, especially when applied to a steep watershed.

  15. A Monitoring Method Based on FBG for Concrete Corrosion Cracking.

    PubMed

    Mao, Jianghong; Xu, Fangyuan; Gao, Qian; Liu, Shenglin; Jin, Weiliang; Xu, Yidong

    2016-07-14

    Corrosion cracking of reinforced concrete caused by chloride salt is one of the main determinants of structure durability. Monitoring the entire process of concrete corrosion cracking is critical for assessing the remaining life of the structure and determining if maintenance is needed. Fiber Bragg Grating (FBG) sensing technology is extensively developed in photoelectric monitoring technology and has been used on many projects. FBG can detect the quasi-distribution of strain and temperature under corrosive environments, and thus it is suitable for monitoring reinforced concrete cracking. According to the mechanical principle that corrosion expansion is responsible for the reinforced concrete cracking, a package design of reinforced concrete cracking sensors based on FBG was proposed and investigated in this study. The corresponding relationship between the grating wavelength and strain was calibrated by an equal strength beam test. The effectiveness of the proposed method was verified by an electrically accelerated corrosion experiment. The fiber grating sensing technology was able to track the corrosion expansion and corrosion cracking in real time and provided data to inform decision-making for the maintenance and management of the engineering structure.
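    A minimal sketch of the readout step, assuming the equal-strength-beam calibration has produced a wavelength-to-strain coefficient: the corrosion-induced expansion strain then follows from the measured Bragg wavelength shift after removing any temperature-induced shift. All numbers are illustrative, not the paper's calibrated values.

```python
# The real coefficient comes from the equal-strength-beam calibration
# described in the abstract; these values are assumed for illustration.
LAMBDA0_NM = 1550.000      # Bragg wavelength of the unstrained grating
K_PM_PER_UE = 1.2          # calibrated sensitivity, pm per microstrain (assumed)

def strain_microstrain(lambda_nm, temp_shift_pm=0.0):
    # Remove the temperature-induced wavelength shift, then convert to strain.
    shift_pm = (lambda_nm - LAMBDA0_NM) * 1e3 - temp_shift_pm
    return shift_pm / K_PM_PER_UE

print(strain_microstrain(1550.180))   # 180 pm shift -> 150.0 microstrain
```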

  16. Correction of placement error in EBL using model based method

    NASA Astrophysics Data System (ADS)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  17. A Monitoring Method Based on FBG for Concrete Corrosion Cracking

    PubMed Central

    Mao, Jianghong; Xu, Fangyuan; Gao, Qian; Liu, Shenglin; Jin, Weiliang; Xu, Yidong

    2016-01-01

    Corrosion cracking of reinforced concrete caused by chloride salt is one of the main determinants of structure durability. Monitoring the entire process of concrete corrosion cracking is critical for assessing the remaining life of the structure and determining if maintenance is needed. Fiber Bragg Grating (FBG) sensing is a well-developed photoelectric monitoring technology that has been used on many projects. FBG can detect the quasi-distribution of strain and temperature under corrosive environments, and thus it is suitable for monitoring reinforced concrete cracking. Based on the mechanical principle that corrosion expansion drives reinforced concrete cracking, a package design of reinforced concrete cracking sensors based on FBG was proposed and investigated in this study. The corresponding relationship between the grating wavelength and strain was calibrated by an equal-strength beam test. The effectiveness of the proposed method was verified by an electrically accelerated corrosion experiment. The fiber grating sensing technology was able to track the corrosion expansion and corrosion cracking in real time and provided data to inform decision-making for the maintenance and management of the engineering structure. PMID:27428972

  18. Kinetic theory based new upwind methods for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, S. M.

    1986-01-01

    Two new upwind methods, called the Kinetic Numerical Method (KNM) and the Kinetic Flux Vector Splitting (KFVS) method, for the solution of the Euler equations have been presented. Both methods can be regarded as suitable moments of an upwind scheme for the solution of the Boltzmann equation, provided the distribution function is Maxwellian. This moment-method strategy leads to a unification of the Riemann approach and the pseudo-particle approach used earlier in the development of upwind methods for the Euler equations. A very important aspect of the moment-method strategy is that the new upwind methods satisfy the entropy condition because of the Boltzmann H-Theorem, and it suggests a possible way of extending the Total Variation Diminishing (TVD) principle within the framework of the H-Theorem. The ability of these methods to obtain accurate, wiggle-free solutions is demonstrated by applying them to two test problems.
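    To make the moment-method idea concrete, here is the standard one-dimensional KFVS mass-flux split in textbook form (included for illustration; the notation is not quoted from this record). With $\beta = 1/(2RT)$ and $s = u\sqrt{\beta}$, half-range moments of the Maxwellian divide the mass flux into forward- and backward-moving parts:

    $$F_\rho^{\pm} = \rho\left[\,u\,\frac{1 \pm \operatorname{erf}(s)}{2} \pm \frac{e^{-s^{2}}}{2\sqrt{\pi\beta}}\,\right],$$

    and the interface flux is assembled upwind as $F_{i+1/2} = F^{+}(U_i) + F^{-}(U_{i+1})$; the momentum and energy fluxes split analogously.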

  19. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications. These methods include spectroscopic, chromatographic, colorimetric, and electrochemical. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described. Specific applications from the literature are also presented.

  20. An improved segmentation-based HMM learning method for Condition-based Maintenance

    NASA Astrophysics Data System (ADS)

    Liu, T.; Lemeire, J.; Cartella, F.; Meganck, S.

    2012-05-01

    In the domain of condition-based maintenance (CBM), persistence of machine states is a valid assumption. Based on this assumption, we present an improved Hidden Markov Model (HMM) learning algorithm for the assessment of equipment states. With a good estimation of the initial parameters, more accurate learning can be achieved than with regular HMM learning methods, which start from randomly chosen initial parameters; the approach is also better at avoiding local maxima. The data are segmented with a change-point analysis method that uses a combination of cumulative sum charts (CUSUM) and bootstrapping techniques. The method determines a confidence level that a state change has happened. After the data are segmented, a clustering technique based on a low-pass filter or root-mean-square (RMS) values of the features is used to label and combine the segments corresponding to the same states. The segments with their labelled hidden states are taken as 'evidence' to estimate the parameters of an HMM. The estimated parameters then serve as initial parameters for the traditional Baum-Welch (BW) learning algorithm, which is used to improve the parameters and train the model. Experiments on simulated and real data demonstrate that both performance and convergence speed are improved.
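    A minimal sketch of the CUSUM-with-bootstrapping segmentation step described above (illustrative only; the record gives no implementation details, and the significance test here follows the common Taylor-style bootstrap):

    ```python
    import numpy as np

    def cusum(x):
        """Cumulative sum of deviations from the mean; a single change
        point shows up as the extremum of this curve."""
        return np.cumsum(x - x.mean())

    def change_confidence(x, n_boot=1000, seed=None):
        """Bootstrap confidence that the series contains a state change:
        the fraction of shuffled surrogates whose CUSUM range is smaller
        than the observed one."""
        rng = np.random.default_rng(seed)
        s = cusum(x)
        observed = s.max() - s.min()
        count = 0
        for _ in range(n_boot):
            t = cusum(rng.permutation(x))
            if t.max() - t.min() < observed:
                count += 1
        split = int(np.argmax(np.abs(s)))   # most likely change location
        return count / n_boot, split
    ```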

  1. Alternative modeling methods for plasma-based Rf ion sources

    SciTech Connect

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-15

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H{sup −} source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H{sup −} ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  2. Alternative modeling methods for plasma-based Rf ion sources

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  3. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended, gas dynamic, and Hall MHD, and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim. We demonstrate plasma temperature equilibration in two-temperature MHD models.

  4. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  5. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  6. Online prediction model based on the SVD-KPCA method.

    PubMed

    Elaissi, Ilyes; Jaffel, Ines; Taouali, Okba; Messaoud, Hassani

    2013-01-01

    This paper proposes a new method for online identification of a nonlinear system modelled in a Reproducing Kernel Hilbert Space (RKHS). The proposed SVD-KPCA method uses the Singular Value Decomposition (SVD) technique to update the principal components. We then use Reduced Kernel Principal Component Analysis (RKPCA) to approximate the principal components that represent the observations selected by the KPCA method.

  7. Geometric correction methods for Timepix based large area detectors

    NASA Astrophysics Data System (ADS)

    Zemlicka, J.; Dudak, J.; Karch, J.; Krejci, F.

    2017-01-01

    X-ray micro radiography with hybrid pixel detectors provides a versatile tool for object inspection in various fields of science. It has proven especially suitable for samples with low intrinsic attenuation contrast (e.g. soft tissue in biology, plastics in material sciences, thin paint layers in cultural heritage, etc.). The limited size of a single Medipix-type detector (1.96 cm2) was recently overcome by the construction of large-area WidePIX detectors assembled from Timepix chips equipped with edgeless silicon sensors. The largest device built so far consists of 100 chips and provides a fully sensitive area of 14.3 × 14.3 cm2 without any physical gaps between sensors. The pixel resolution of this device is 2560 × 2560 pixels (6.5 Mpix). The unique modular detector layout requires special processing of the acquired data to avoid image distortions. Several geometric compensations must be applied after the standard correction methods typical for this type of pixel detector (i.e. flat-field and beam-hardening corrections). The proposed geometric compensations cover both design features and assembly misalignment of the individual chip rows of large-area detectors based on Timepix assemblies. The former deals with the larger border pixels of individual edgeless sensors and their behaviour, while the latter grapples with shifts, tilts and steps between detector rows. The real position of all pixels is defined in a Cartesian coordinate system and, together with a non-binary reliability mask, is used for the final image interpolation. The results of geometric corrections for test wire phantoms and paleobotanic material are presented in this article.
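    As a reminder of the standard pre-processing step mentioned above, here is a minimal flat-field correction sketch (generic; the WidePIX-specific geometric compensations are not reproduced, and the optional dark-frame handling is an assumption):

    ```python
    import numpy as np

    def flat_field_correct(raw, flat, dark=None):
        """Standard flat-field correction: divide out each pixel's
        relative response, measured under uniform illumination."""
        raw = raw.astype(float)
        flat = flat.astype(float)
        if dark is not None:                 # optional dark-frame subtraction
            raw = raw - dark
            flat = flat - dark
        gain = flat / flat.mean()            # per-pixel relative response
        with np.errstate(divide="ignore", invalid="ignore"):
            corrected = np.where(gain > 0, raw / gain, np.nan)
        return corrected
    ```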

  8. Comparing the Principle-Based SBH Maieutic Method to Traditional Case Study Methods of Teaching Media Ethics

    ERIC Educational Resources Information Center

    Grant, Thomas A.

    2012-01-01

    This quasi-experimental study at a Northwest university compared two methods of teaching media ethics, a class taught with the principle-based SBH Maieutic Method (n = 25) and a class taught with a traditional case study method (n = 27), with a control group (n = 21) that received no ethics training. Following a 16-week intervention, a one-way…

  9. Novel method of manufacturing hydrogen storage materials combining with numerical analysis based on discrete element method

    NASA Astrophysics Data System (ADS)

    Zhao, Xuzhe

    High-efficiency hydrogen storage methods are significant to the development of fuel cell vehicles. Finding a high-energy-density material to serve as the fuel is the key to the wide spread of fuel cell vehicles. The LiBH4 + MgH2 system is a strong candidate due to its high hydrogen storage density, and the reaction between the two compounds is reversible. However, the LiBH4 + MgH2 system usually requires high temperature and hydrogen pressure for the hydrogen release and uptake reactions. In order to reduce these requirements, nanoengineering is a simple and efficient way to improve the thermodynamic properties and reduce the kinetic barrier of the reaction between LiBH4 and MgH2. Based on ab initio density functional theory (DFT) calculations, a previous study indicated that the reaction between LiBH4 and MgH2 can take place at temperatures near 200°C or below. However, these predictions have been shown to be inconsistent with many experiments. Our experiment is therefore the first to use ball milling with aerosol spraying (BMAS) to show that the reaction between LiBH4 and MgH2 can happen during high-energy ball milling at room temperature. Through this BMAS process we clearly observed the formation of MgB2 and LiH during ball milling of MgH2 while aerosol-spraying the LiBH4/THF solution. Aerosol nanoparticles from the LiBH4/THF solution lead to the formation of Li2B12H12 during the BMAS process. The Li2B12H12 formed then reacts with MgH2 in situ during ball milling to form MgB2 and LiH. Discrete element modeling (DEM) is a useful tool to describe the operation of various ball milling processes. EDEM is software based on DEM that predicts power consumption, liner and media wear, and mill output. In order to further improve the milling efficiency of the BMAS process, EDEM was used to analyze the complicated ball milling process. Milling speed and the balls' filling ratio inside the canister were considered as the variables determining milling efficiency. The average and maximum

  10. Ground-based ULF methods of monitoring the magnetospheric plasma

    NASA Astrophysics Data System (ADS)

    Romanova, Natalia; Pilipenko, Viacheslav; Stepanova, Marina; Kozyreva, Olga; Kawano, Hideaki

    The terrestrial magnetosphere is a giant natural MHD resonator. The magnetospheric Alfven resonator is formed by the geomagnetic field lines terminated by the conductive ionospheres. Though the source of Pc3-5 waves is not reliably known, the identification of the resonant frequency enables one to determine the magnetospheric plasma density and ionospheric conductance from ground magnetometer observations. However, a spectral peak does not necessarily correspond to a local resonant frequency, and the width of a spectral peak cannot be directly used to determine the quality factor of the magnetospheric resonator. This ambiguity can be resolved with the help of various gradient and polarization methods, reviewed in this presentation: the Gradient method (GM), the Amplitude-Phase Gradient method (APGM), polarization methods (including the H/D method), and the Hodograph (H) method. These methods can be regarded as tools for "hydromagnetic spectroscopy" to diagnose the magnetosphere. The H-method has additional possibilities compared with the gradient method: one can determine a continuous distribution of the magnetospheric resonant frequencies and Q-factors in the range of latitudes beyond the observation baseline. These methods are illustrated by results of their application to data from the SAMBA magnetometer array.

  11. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to address the high computational complexity and poor real-time performance of blind image restoration, the midfrequency-based algorithm for blind image restoration is analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is used to restore the image with hardly any recursion or iteration. Combining the algorithm's data intensiveness and data-parallel nature with the GPU's single-instruction, multiple-thread execution model, a new parallel midfrequency-based algorithm for blind image restoration is proposed that is suitable for GPU stream computing. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and the midfrequency-based filtering. For better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after optimization of data access and of the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data so that the transmission rate works around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  12. A genetic algorithm based method for docking flexible molecules

    SciTech Connect

    Judson, R.S.; Jaeger, E.P.; Treasurywala, A.M.

    1993-11-01

    The authors describe a computational method for docking flexible molecules into protein binding sites. The method uses a genetic algorithm (GA) to search the combined conformation/orientation space of the molecule to find low-energy conformations. Several techniques are described that increase the efficiency of the basic search method. These include the use of several interacting GA subpopulations, or niches; the use of a growing algorithm that initially docks only a small part of the molecule; and the use of gradient minimization during the search. To illustrate the method, the authors dock Cbz-GlyP-Leu-Leu (ZGLL) into thermolysin. This system was chosen because a well-refined crystal structure is available and because another docking method had previously been tested on it. The method is able to find conformations that lie physically close to, and in some cases lower in energy than, the crystal conformation in reasonable periods of time on readily available hardware.
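    A minimal real-valued GA loop of the kind underlying such docking searches (a sketch only: the niching, growing, and gradient-minimization refinements described above are omitted, and the gene encoding as torsion angles plus rigid-body pose is an assumption):

    ```python
    import numpy as np

    def genetic_search(fitness, n_genes, pop_size=60, n_gen=200,
                       mut_sigma=0.1, seed=None):
        """Minimal GA: tournament selection, uniform crossover, Gaussian
        mutation. `fitness` maps a gene vector (e.g., torsion angles plus
        pose parameters) to an energy to be minimized."""
        rng = np.random.default_rng(seed)
        pop = rng.uniform(-np.pi, np.pi, (pop_size, n_genes))
        for _ in range(n_gen):
            scores = np.array([fitness(ind) for ind in pop])
            new_pop = [pop[scores.argmin()].copy()]            # elitism
            while len(new_pop) < pop_size:
                i, j = rng.integers(pop_size, size=2)
                a = pop[i] if scores[i] < scores[j] else pop[j]  # tournament
                i, j = rng.integers(pop_size, size=2)
                b = pop[i] if scores[i] < scores[j] else pop[j]
                mask = rng.random(n_genes) < 0.5                 # uniform crossover
                child = np.where(mask, a, b) + rng.normal(0, mut_sigma, n_genes)
                new_pop.append(child)
            pop = np.array(new_pop)
        scores = np.array([fitness(ind) for ind in pop])
        return pop[scores.argmin()], scores.min()
    ```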

  13. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other. Areas where data are “clumped” together spatially will be delineated. Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) that indicate they were outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
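    As an illustration of Method B above, here is a minimal sketch using the shapely library (an assumption; the record does not say how VSP implements this). Buffering each point by half the congregation distance and unioning the buffers yields one polygon per spatial clump:

    ```python
    from shapely.geometry import Point
    from shapely.ops import unary_union

    def delineate_clusters(points, distance):
        """Surround points that congregate within `distance` of each
        other: buffer each point by half that distance and merge the
        overlapping buffers into one polygon per spatial clump."""
        buffers = [Point(x, y).buffer(distance / 2.0) for x, y in points]
        merged = unary_union(buffers)
        # unary_union returns a Polygon or a MultiPolygon
        return list(merged.geoms) if hasattr(merged, "geoms") else [merged]
    ```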

  14. Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling

    SciTech Connect

    Randall S. Seright

    2007-09-30

    This final technical progress report describes work performed from October 1, 2004, through May 16, 2007, for the project, 'Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling'. We explored the potential of pore-filling gels for reducing excess water production from both fractured and unfractured production wells. Several gel formulations were identified that met the requirements--i.e., providing water residual resistance factors greater than 2,000 and ultimate oil residual resistance factors (F{sub rro}) of 2 or less. Significant oil throughput was required to achieve low F{sub rro} values, suggesting that gelant penetration into porous rock must be small (a few feet or less) for existing pore-filling gels to provide effective disproportionate permeability reduction. Compared with adsorbed polymers and weak gels, strong pore-filling gels can provide greater reliability and behavior that is insensitive to the initial rock permeability. Guidance is provided on where relative-permeability-modification/disproportionate-permeability-reduction treatments can be successfully applied for use in either oil or gas production wells. When properly designed and executed, these treatments can be successfully applied to a limited range of oilfield excessive-water-production problems. We examined whether gel rheology can explain behavior during extrusion through fractures. The rheology behavior of the gels tested showed a strong parallel to the results obtained from previous gel extrusion experiments. However, for a given aperture (fracture width or plate-plate separation), the pressure gradients measured during the gel extrusion experiments were much higher than anticipated from rheology measurements. Extensive experiments established that wall slip and first normal stress difference were not responsible for the pressure gradient discrepancy. To explain the discrepancy, we noted that the aperture for gel flow (for mobile gel wormholing through concentrated immobile

  15. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automating ventricle segmentation and tracking in echocardiography, the problem remains challenging because of low-quality images with missing anatomical details or speckle noise and a restricted field of view. This paper presents a fusion method that particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it aims to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information with an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, the results of some well-known techniques were compared with those of the proposed method, and different metrics were implemented to evaluate performance. It is concluded that the presented pixel-based method based on the integration of PCA and DWT gives the best results for the segment-ability of cardiac ultrasound images and better performance on all metrics. PMID:26089965
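    A minimal sketch of the classic PCA fusion rule that such PCA-DWT schemes build on (illustrative; the record's method applies the weighting within DWT subbands, which is not reproduced here):

    ```python
    import numpy as np

    def pca_fusion_weights(img_a, img_b):
        """Classic PCA fusion rule: the weights are the components of
        the leading eigenvector of the 2x2 covariance of the two images."""
        data = np.vstack([img_a.ravel(), img_b.ravel()])
        cov = np.cov(data)
        vals, vecs = np.linalg.eigh(cov)
        v = np.abs(vecs[:, np.argmax(vals)])
        return v / v.sum()

    def fuse(img_a, img_b):
        """Weighted average of the two (aligned, same-size) images."""
        w_a, w_b = pca_fusion_weights(img_a, img_b)
        return w_a * img_a + w_b * img_b
    ```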

  16. Classification of Polarimetric SAR Image Based on the Subspace Method

    NASA Astrophysics Data System (ADS)

    Xu, J.; Li, Z.; Tian, B.; Chen, Q.; Zhang, P.

    2013-07-01

    Land cover classification is one of the most significant applications of remote sensing. Compared to optical sensing technologies, synthetic aperture radar (SAR) can penetrate clouds and has all-weather capability. Therefore, land cover classification from SAR images is important in remote sensing. The subspace method is a novel method for SAR data that reduces data dimensionality by incorporating feature extraction into the classification process. This paper applies the averaged learning subspace method (ALSM) to fully polarimetric SAR images for classification. The ALSM algorithm integrates three-component decomposition, eigenvalue/eigenvector decomposition, and textural features derived from the gray-level co-occurrence matrix (GLCM). The study site is located in Dingxing County, Hebei Province, China. We compare the subspace method with the traditional supervised Wishart classification. Experiments on a fully polarimetric Radarsat-2 image show that the proposed method yields higher classification accuracy. The ALSM classification method is therefore a feasible alternative for SAR image classification.

  17. Analysis of spatiotemporal signals: A method based on perturbation theory

    NASA Astrophysics Data System (ADS)

    Hutt, A.; Uhl, C.; Friedrich, R.

    1999-08-01

    We present a method of analyzing spatiotemporal signals with respect to their underlying dynamics. The algorithm aims at determining spatial modes and a criterion for the number of interacting modes. Simultaneously, a way of filtering out nonorthogonal noise is shown. The method is illustrated with examples of simulated stable fixed points and the Lorenz attractor.

  18. Job Search Methods: Consequences for Gender-based Earnings Inequality.

    ERIC Educational Resources Information Center

    Huffman, Matt L.; Torres, Lisa

    2001-01-01

    Data from adults in Atlanta, Boston, and Los Angeles (n=1,942) who searched for work using formal (ads, agencies) or informal (networks) methods indicated that type of method used did not contribute to the gender gap in earnings. Results do not support formal job search as a way to reduce gender inequality. (Contains 55 references.) (SK)

  19. Method of detecting meter based on image-processing

    NASA Astrophysics Data System (ADS)

    Wang, Hong-ping; Wang, Peng; Yu, Zheng-lin

    2008-03-01

    This paper proposes a new approach to meter verification that combines image arithmetic-logic operations with a high-precision raster sensor. The method treats the data measured by the precision raster as the true value and the data obtained by digital image processing as the measured value, and it verifies the meter by comparing the two. The method exploits the dynamic movement of the meter pointer to perform image subtraction, realize image segmentation, and obtain the deviation of the image pointer at the boundary instants. This image-segmentation technique replaces the traditional approach, whose accuracy and repeatability are low because of manual operation and visual reading. Its precision meets the technical requirements of the national verification regulations, and experiments indicate that it is reliable and highly accurate. The paper introduces the overall scheme of meter verification, the method of capturing the image pointer, and an analysis of the indicating-value error precision.
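    A minimal sketch of the image-subtraction step described above (illustrative; the threshold value and grayscale-array handling are assumptions):

    ```python
    import numpy as np

    def pointer_change_mask(frame_t0, frame_t1, threshold=30):
        """Image-subtraction step: the moving pointer is the only thing
        that changes between the two frames, so thresholding the absolute
        difference segments it from the static dial face."""
        diff = np.abs(frame_t1.astype(int) - frame_t0.astype(int))
        return diff > threshold
    ```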

  20. The research of positioning methods based on Internet of Things

    NASA Astrophysics Data System (ADS)

    Zou, Dongyao; Liu, Jia; Sun, Hui; Li, Nana; Han, Xueqin

    2013-03-01

    With the advent of the Internet of Things era, more and more applications require location-based services. This article describes the concepts and basic principles of several Internet of Things positioning technologies, such as GPS positioning, base station positioning, and ZigBee positioning, and then compares the advantages and disadvantages of these positioning technologies.

  1. 3D face recognition by projection-based methods

    NASA Astrophysics Data System (ADS)

    Dutagaci, Helin; Sankur, Bülent; Yemez, Yücel

    2006-02-01

    In this paper, we investigate the recognition performance of various projection-based features applied to registered 3D scans of faces. Some features are data-driven, such as ICA-based or NNMF-based features. Other features are obtained using DFT- or DCT-based schemes. We apply the feature extraction techniques to three different representations of registered faces, namely 3D point clouds, 2D depth images, and 3D voxel grids. We consider both global and local features. Global features are extracted from the whole face data, whereas local features are computed over blocks partitioned from the 2D depth images. The block-based local features are fused both at the feature level and at the decision level. The resulting feature vectors are matched using Linear Discriminant Analysis. Experiments using different combinations of representation types and feature vectors are conducted on the 3D-RMA dataset.

  2. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  3. Comparison of Different Recruitment Methods for Sexual and Reproductive Health Research: Social Media–Based Versus Conventional Methods

    PubMed Central

    Motoki, Yoko; Taguri, Masataka; Asai-Sato, Mikiko; Enomoto, Takayuki; Wark, John Dennis; Garland, Suzanne Marie

    2017-01-01

    Background Prior research about the sexual and reproductive health of young women has relied mostly on self-reported survey studies. Thus, participant recruitment using Web-based methods can improve sexual and reproductive health research about cervical cancer prevention. In our prior study, we reported that Facebook is a promising way to reach young women for sexual and reproductive health research. However, it remains unknown whether Web-based or other conventional recruitment methods (ie, face-to-face or flyer distribution) yield comparable survey responses from similar participants. Objective We conducted a survey to determine whether there was a difference in the sexual and reproductive health survey responses of young Japanese women based on recruitment methods: social media–based and conventional methods. Methods From July 2012 to March 2013 (9 months), we invited women of ages 16-35 years in Kanagawa, Japan, to complete a Web-based questionnaire. They were recruited through either a social media–based (social networking site, SNS, group) or by conventional methods (conventional group). All participants enrolled were required to fill out and submit their responses through a Web-based questionnaire about their sexual and reproductive health for cervical cancer prevention. Results Of the 243 participants, 52.3% (127/243) were recruited by SNS, whereas 47.7% (116/243) were recruited by conventional methods. We found no differences between recruitment methods in responses to behaviors and attitudes to sexual and reproductive health survey, although more participants from the conventional group (15%, 14/95) chose not to answer the age of first intercourse compared with those from the SNS group (5.2%, 6/116; P=.03). Conclusions No differences were found between recruitment methods in the responses of young Japanese women to a Web–based sexual and reproductive health survey. PMID:28283466

  4. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    This paper applies the information categorization method to analyze financial time series. The method examines the similarity of different sequences by calculating the distances between them. We apply it to quantify the similarity of different stock markets, and we report similarity results for the US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show how the similarity between the two markets differs across time periods and that their similarity became larger after these two crises. We also obtain similarity results for 10 stock indices in three regions, showing that the method can distinguish the markets of different regions via phylogenetic trees. The results show that satisfactory information about financial markets can be obtained with this method. The information categorization method can be used not only for physiologic time series but also for financial time series.

  5. [Comparison of sustainable development status in Heilongjiang Province based on traditional ecological footprint method and emergy ecological footprint method].

    PubMed

    Chen, Chun-feng; Wang, Hong-yan; Xiao, Du-ning; Wang, Da-qing

    2008-11-01

    By using the traditional ecological footprint method and its modification, the emergy ecological footprint method, the sustainable development status of Heilongjiang Province in 2005 was analyzed. The results showed that the ecological deficits of Heilongjiang Province in 2005 based on the emergy and conventional ecological footprint methods were 1.919 and 0.6256 hm2 x cap(-1), respectively. The ecological footprint values based on the two methods both exceeded the carrying capacity, which indicated that the social and economic development of the study area was not sustainable. The emergy ecological footprint method was used to discuss the relationships between humans' material demands and ecosystem resource supply, and more stable parameters such as emergy transformity and emergy density were introduced into the method, which overcame some of the shortcomings of the conventional ecological footprint method.

  6. Bayesian Stereo Matching Method Based on Edge Constraints.

    PubMed

    Li, Jie; Shi, Wenxuan; Deng, Dexiang; Jia, Wenyan; Sun, Mingui

    2012-12-01

    A new global stereo matching method is presented that focuses on the handling of disparity, discontinuity, and occlusion. The Bayesian approach is utilized for the dense stereo matching problem, formulated as a maximum a posteriori Markov Random Field (MAP-MRF) problem. In order to improve stereo matching performance, edges are incorporated into the Bayesian model as a soft constraint. Accelerated belief propagation is applied to obtain the maximum a posteriori estimates in the Markov random field. The proposed algorithm is evaluated on the Middlebury stereo benchmark. Experimental comparisons with some state-of-the-art stereo matching methods demonstrate that the proposed method provides superior disparity maps with subpixel precision.
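    In the generic MAP-MRF formulation (a textbook form; the record does not give its exact energy), the disparity map $d$ minimizes

    $$E(d) = \sum_{p} D_p(d_p) + \lambda \sum_{(p,q)\in\mathcal{N}} w_{pq}\, V(d_p, d_q),$$

    where $D_p$ is the cost of assigning disparity $d_p$ to pixel $p$, $V$ penalizes disparity differences between neighboring pixels, and edge-dependent weights $w_{pq}$ relax the smoothness penalty across detected edges, which is one way to realize the soft edge constraint described above.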

  7. Human body region enhancement method based on Kinect infrared imaging

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Fan, Yubo; Song, Xiaowei; Cai, Wenjing

    2016-10-01

    To effectively improve the low contrast of the human body region in infrared images, a combination of several enhancement methods is utilized. First, for the infrared images acquired by Kinect, an Optimal Contrast-Tone Mapping (OCTM) method with multiple iterations is applied to balance the contrast of low-luminosity infrared images and improve their overall contrast. Second, a Level Set algorithm is employed to improve the contour edges of the human body region. Finally, Laplacian pyramid decomposition is adopted to further enhance the contour-improved human body region, while the background area outside the human body region is processed by bilateral filtering to improve the overall effect. Theoretical analysis and experimental verification show that the proposed method can effectively enhance the human body region of such infrared images.

  8. Improved color interpolation method based on Bayer image

    NASA Astrophysics Data System (ADS)

    Wang, Jin

    2012-10-01

    Image sensors are important components of lunar exploration devices. Considering volume and cost, image sensors at present generally adopt a single CCD or CMOS chip whose surface is covered with a color filter array (CFA), usually a Bayer CFA. In the Bayer CFA, each pixel captures only one of the three primary colors, so color interpolation is necessary to obtain a full-color image. An improved Bayer image interpolation method is presented that is novel, practical, and easy to implement. Experimental results demonstrating the effect of the interpolation are shown. Compared with classic methods, this method finds image edges more accurately, reduces the sawtooth phenomenon in edge areas, and keeps the image smooth in other areas. The method has been applied successfully in an exploration imaging system.
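    For reference, here is the baseline bilinear green-channel step that edge-aware demosaicing methods like the one above improve upon (a generic sketch; the record's edge-adaptive rule is not reproduced):

    ```python
    import numpy as np

    def interpolate_green(bayer, green_mask):
        """Bilinear green interpolation on a Bayer mosaic: at non-green
        sites, average the available green neighbors. Edge-aware variants
        would compare horizontal and vertical gradients before averaging."""
        g = np.where(green_mask, bayer, 0.0).astype(float)
        pad = np.pad(g, 1)
        m = np.pad(green_mask.astype(float), 1)
        neigh = pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
        count = m[:-2, 1:-1] + m[2:, 1:-1] + m[1:-1, :-2] + m[1:-1, 2:]
        out = g.copy()
        fill = ~green_mask & (count > 0)
        out[fill] = neigh[fill] / count[fill]
        return out
    ```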

  9. Cleaning Verification Monitor Technique Based on Infrared Optical Methods

    DTIC Science & Technology

    2004-10-01

    Cleaning Verification Techniques." Real-time methods to provide both qualitative and quantitative assessments of surface cleanliness are needed for a...detection VCPI method offers a wide range of complementary capabilities in real-time surface cleanliness verification. Introduction Currently...also has great potential to reduce or eliminate premature failures of surface coatings caused by a lack of surface cleanliness. Additional

  10. Respiratory Pattern Variability Analysis Based on Nonlinear Prediction Methods

    DTIC Science & Technology

    2007-11-02

    Brobely. All-night sleep EEG and artificial stochastic control signals have similar correlation dimensions. Electroencephalogr. Clin. Neurophysiol...methods. These methods use the volume signals generated by the respiratory system in order to construct a model of its dynamics, and then to estimate the...definition have been considered. The influence of different prediction depths and embedding dimensions has been analyzed. A group of 12 patients on

  11. A Finger Vein Identification Method Based on Template Matching

    NASA Astrophysics Data System (ADS)

    Zou, Hui; Zhang, Bing; Tao, Zhigang; Wang, Xiaoping

    2016-01-01

    New methods for extracting vein features from finger vein images and generating templates for matching are proposed. In the template-generation algorithm, we propose a parameter, the template quality factor (TQF), to measure the quality of the generated templates, so that fewer finger vein samples are needed to generate templates that meet the quality requirement for identification. The recognition accuracy achieved using the proposed feature extraction and template generation methods is 97.14%.

  12. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution of the resulting matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularized solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least-squares QR, and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules is included. A vibrating plate is considered as an example to validate the results.
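    A minimal illustration of iteration-count regularization with SciPy's LSQR (a sketch under the assumption of a transfer matrix `A` and measured pressures `b`; re-running the solver for each iteration count is wasteful but keeps the example short):

    ```python
    import numpy as np
    from scipy.sparse.linalg import lsqr

    def reconstruct(A, b, n_iter):
        """Early-stopped LSQR: the iteration count acts as the
        regularization parameter (semi-convergence)."""
        return lsqr(A, b, iter_lim=n_iter)[0]

    def discrepancy_stop(A, b, noise_norm, max_iter=200):
        """Pick the first iterate whose residual reaches the estimated
        noise level (discrepancy principle), one common stopping rule."""
        for k in range(1, max_iter + 1):
            x = reconstruct(A, b, k)
            if np.linalg.norm(A @ x - b) <= noise_norm:
                return x, k
        return x, max_iter
    ```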

  13. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species combined with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is studied comparatively, and the sensitivity and adjustment of the CS parameters in the tracking system are studied experimentally. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
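    The Lévy-flight move at the core of cuckoo search is commonly generated with Mantegna's algorithm; a minimal sketch (standard formulation, not quoted from this record; the step-size constant `alpha` and the move rule in the comment are assumptions):

    ```python
    import numpy as np
    from math import gamma, pi, sin

    def levy_step(dim, beta=1.5, rng=None):
        """Mantegna's algorithm for Levy-distributed step vectors."""
        rng = rng or np.random.default_rng()
        sigma = (gamma(1 + beta) * sin(pi * beta / 2)
                 / (gamma((1 + beta) / 2) * beta
                    * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    # One cuckoo move around the current best solution:
    #   x_new = x + alpha * levy_step(dim) * (x - x_best)
    ```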

  14. Silicon-Based Anode and Method for Manufacturing the Same

    NASA Technical Reports Server (NTRS)

    Yushin, Gleb Nikolayevich (Inventor); Luzinov, Igor (Inventor); Zdyrko, Bogdan (Inventor); Magasinski, Alexandre (Inventor)

    2017-01-01

    A silicon-based anode comprising silicon, a carbon coating that coats the surface of the silicon, a polyvinyl acid that binds to at least a portion of the silicon, and vinylene carbonate that seals the interface between the silicon and the polyvinyl acid. Because of its properties, polyvinyl acid binders offer improved anode stability, tunable properties, and many other attractive attributes for silicon-based anodes, which enable the anode to withstand silicon cycles of expansion and contraction during charging and discharging.

  15. Comparison of sequencing-based methods to profile DNA methylation and identification of monoallelic epigenetic modifications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analysis of DNA methylation patterns relies increasingly on sequencing-based profiling methods. The four most frequently used sequencing-based technologies are the bisulfite-based methods MethylC-seq and reduced representation bisulfite sequencing (RRBS), and the enrichment-based techniques methylat...

  16. A comparison between boat-based and diver-based methods for quantifying coral bleaching

    USGS Publications Warehouse

    Zawada, David G.; Ruzicka, Rob; Colella, Michael A.

    2015-01-01

    Recent increases in both the frequency and severity of coral bleaching events have spurred numerous surveys to quantify the immediate impacts and monitor the subsequent community response. Most of these efforts utilize conventional diver-based methods, which are inherently time-consuming, expensive, and limited in spatial scope unless they deploy large teams of scientifically-trained divers. In this study, we evaluated the effectiveness of the Along-Track Reef Imaging System (ATRIS), an automated image-acquisition technology, for assessing a moderate bleaching event that occurred in the summer of 2011 in the Florida Keys. More than 100,000 images were collected over 2.7 km of transects spanning four patch reefs in a 3-h period. In contrast, divers completed 18, 10-m long transects at nine patch reefs over a 5-day period. Corals were assigned to one of four categories: not bleached, pale, partially bleached, and bleached. The prevalence of bleaching estimated by ATRIS was comparable to the results obtained by divers, but only for corals > 41 cm in size. The coral size-threshold computed for ATRIS in this study was constrained by prevailing environmental conditions (turbidity and sea state) and, consequently, needs to be determined on a study-by-study basis. Both ATRIS and diver-based methods have innate strengths and weaknesses that must be weighed with respect to project goals.

  17. Bootstrap embedding: An internally consistent fragment-based method

    NASA Astrophysics Data System (ADS)

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy

    2016-08-01

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems.

  18. Bootstrap embedding: An internally consistent fragment-based method.

    PubMed

    Welborn, Matthew; Tsuchimochi, Takashi; Van Voorhis, Troy

    2016-08-21

    Strong correlation poses a difficult problem for electronic structure theory, with computational cost scaling quickly with system size. Fragment embedding is an attractive approach to this problem. By dividing a large complicated system into smaller manageable fragments "embedded" in an approximate description of the rest of the system, we can hope to ameliorate the steep cost of correlated calculations. While appealing, these methods often converge slowly with fragment size because of small errors at the boundary between fragment and bath. We describe a new electronic embedding method, dubbed "Bootstrap Embedding," a self-consistent wavefunction-in-wavefunction embedding theory that uses overlapping fragments to improve the description of fragment edges. We apply this method to the one dimensional Hubbard model and a translationally asymmetric variant, and find that it performs very well for energies and populations. We find Bootstrap Embedding converges rapidly with embedded fragment size, overcoming the surface-area-to-volume-ratio error typical of many embedding methods. We anticipate that this method may lead to a low-scaling, high accuracy treatment of electron correlation in large molecular systems.

  19. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems and 3D printing systems, the errors caused by the optical distortion of a digital projector always affect the precision and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by a curve-fitting method. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
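    A common baseline for such models is the radial-tangential polynomial (a standard form from camera/projector calibration, not quoted from this record); the polynomial representation above extends this kind of expansion. With normalized ideal coordinates $(x, y)$ and $r^2 = x^2 + y^2$, the distorted coordinates are

    $$x_d = x\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + 2p_1 x y + p_2\,(r^2 + 2x^2),$$
    $$y_d = y\,(1 + k_1 r^2 + k_2 r^4 + k_3 r^6) + p_1\,(r^2 + 2y^2) + 2p_2 x y,$$

    with the coefficients $k_i$, $p_i$ fitted by least squares to the photodiode-detected pixel coordinates.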

  20. A silica gel based method for extracting insect surface hydrocarbons.

    PubMed

    Choe, Dong-Hwan; Ramírez, Santiago R; Tsutsui, Neil D

    2012-02-01

    Here, we describe a novel method for the extraction of insect cuticular hydrocarbons using silica gel, herein referred to as "silica-rubbing". This method permits the selective sampling of external hydrocarbons from insect cuticle surfaces for subsequent analysis using gas chromatography-mass spectrometry (GC-MS). The cuticular hydrocarbons are first adsorbed to silica gel particles by rubbing the cuticle of insect specimens with the materials, and then are subsequently eluted using organic solvents. We compared the cuticular hydrocarbon profiles that resulted from extractions using silica-rubbing and solvent-soaking methods in four ant and one bee species: Linepithema humile, Azteca instabilis, Camponotus floridanus, Pogonomyrmex barbatus (Hymenoptera: Formicidae), and Euglossa dilemma (Hymenoptera: Apidae). We also compared the hydrocarbon profiles of Euglossa dilemma obtained via silica-rubbing and solid phase microextraction (SPME). Comparison of hydrocarbon profiles obtained by different extraction methods indicates that silica rubbing selectively extracts the hydrocarbons that are present on the surface of the cuticular wax layer, without extracting hydrocarbons from internal glands and tissues. Due to its surface specificity, efficiency, and low cost, this new method may be useful for studying the biology of insect cuticular hydrocarbons.

  1. Areal Feature Matching Based on Similarity Using Critic Method

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yu, K.

    2015-10-01

    In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, i.e., matching a single polygon with an aggregate of several polygons, or two aggregates of several polygons, with less user intervention. To this end, an affine transformation is applied to the two datasets using polygon pairs for which the building name is the same. The two datasets are then overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect, we calculate an inclusion function between them; when its value exceeds 0.4, the polygons are aggregated into single polygons using a convex hull. Finally, the shape similarity between the candidate pairs is calculated as the linear sum of the position similarity, shape-ratio similarity, and overlap similarity, with weights computed by the CRITIC method. Candidate pairs whose shape similarity exceeds 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets: the digital topographic map and the KAIS map of South Korea. The visual evaluation showed that polygon pairs were well detected by the proposed method, and the statistical evaluation indicates that the proposed method is accurate on our test dataset, with a high F-measure of 0.91.
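    A minimal sketch of the CRITIC weighting step named above (standard formulation; the matrix layout is an assumption, with one row per candidate pair and one column per similarity criterion, scaled to [0, 1]):

    ```python
    import numpy as np

    def critic_weights(X):
        """CRITIC weighting: a criterion's weight grows with its
        standard deviation (contrast intensity) and with its conflict
        with the other criteria, measured as (1 - correlation)."""
        std = X.std(axis=0, ddof=1)
        corr = np.corrcoef(X, rowvar=False)
        conflict = (1.0 - corr).sum(axis=0)
        c = std * conflict
        return c / c.sum()
    ```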

  2. Seamless Method- and Model-based Software and Systems Engineering

    NASA Astrophysics Data System (ADS)

    Broy, Manfred

    Today, engineering software intensive systems is still more or less handicraft or, at most, at the level of manufacturing. Many steps are done ad hoc and not in a fully systematic way. Applied methods, if any, are not scientifically justified or justified by empirical data, and as a result carrying out large software projects is still an adventure. However, there is no reason why the development of software intensive systems cannot be done in the future with the same precision and scientific rigor as in established engineering disciplines. To do that, however, a number of scientific and engineering challenges have to be mastered. The first aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods, together with comprehensive tool support, as well as deep insights into the obstacles of developing software intensive systems and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when each method is ready for application. In the following we argue that there is scientific evidence and enough research so far to be confident that solid engineering of software intensive systems can be achieved in the future. However, quite a number of scientific research problems still have to be solved.

  3. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    High-accuracy recognition of the handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the results in a database. This paper presents a robust deep learning method for recognizing handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has proven successful in training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences, and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves a high recognition accuracy.
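
    The abstract does not give the network architecture, so the following is only a minimal sketch of the kind of CNN classifier described, written with PyTorch; the 32x32 patch size, layer widths, and 10-class output are invented for illustration.

```python
import torch
import torch.nn as nn

# Minimal CNN for 32x32 grayscale character patches cropped from drawings.
class CharCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 32 -> 16
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 16 -> 8
        )
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = CharCNN()
logits = model(torch.randn(4, 1, 32, 32))  # batch of four dummy patches
```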

  4. Aerosol detection methods in lidar-based atmospheric profiling

    NASA Astrophysics Data System (ADS)

    Elbakary, Mohamed I.; Iftekharuddin, Khan M.; De Young, Russell; Afrifa, Kwasi

    2016-09-01

    A compact light detection and ranging (LiDAR) system provides aerosol profile measurements by identifying the aerosol scattering ratio as a function of altitude. The aerosol scattering ratios are used to obtain multiple intensive aerosol parameters known as the backscatter color ratio, depolarization ratio, and lidar ratio. These parameters are known to vary with aerosol type, size, and shape. Different methods in the literature are employed for the detection and classification of aerosols from such measurements. In this paper, a comprehensive review of aerosol detection methods is presented. In addition, results of methods for quantifying aerosols in the atmosphere, implemented on real data, are compared and presented, showing how the backscatter color, depolarization, and lidar ratios vary with the presence of aerosols in the atmosphere.
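
    For orientation, the sketch below computes the three intensive parameters from per-altitude-bin backscatter, polarization, and extinction profiles. The variable names and the exact ratio definitions (e.g., a 1064/532 nm color ratio) follow common lidar conventions and are assumptions here, not necessarily the paper's definitions.

```python
import numpy as np

def aerosol_ratios(beta_532, beta_1064, beta_perp_532, beta_par_532, alpha_532):
    """Intensive aerosol parameters per altitude bin (arrays over range gates)."""
    color_ratio = beta_1064 / beta_532           # backscatter color ratio
    depol_ratio = beta_perp_532 / beta_par_532   # volume depolarization ratio
    lidar_ratio = alpha_532 / beta_532           # extinction-to-backscatter ratio
    return color_ratio, depol_ratio, lidar_ratio
```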

  5. Calibration of EMI data based on different electrical methods

    NASA Astrophysics Data System (ADS)

    Nüsch, Anne-Kathrin; Werban, Ulrike; Dietrich, Peter

    2013-04-01

    The advantages of the electromagnetic induction (EMI) method have been known to soil scientists for many years. It is therefore used for many soil investigations, ranging from salinity measurement and water-content monitoring to the classification of different soil types. Several companies provide instruments for each type of investigation. However, a major disadvantage of the method is that measurements obtained under different conditions (e.g. with different instruments, or at different times or field sites) are not easily comparable. The values yielded by the instruments are not absolute, although absolute values are an important prerequisite for the correct application of EMI, especially at the landscape scale. Furthermore, drifts can occur, potentially caused by weather conditions or instrument errors, producing apparent variations in conductivity that do not reflect actual soil conditions. With the help of reference lines and repeated measurements, drifts can be detected and eliminated. Different measurements (spatial and temporal) then become more comparable, but the corrected values are still not absolute. The best way to obtain absolute values is to calibrate the EMI data against a known conductivity from other electrical methods. In a series of test measurements, we studied which electrical method is most feasible for calibrating EMI data. The chosen field site is situated on the floodplain of the river Mulde in Saxony (Germany). We chose a profile 100 meters in length which is very heterogeneous and crosses a buried backwater channel. Results show a significant variance of conductivities. Several EMI instruments were tested, among them the EM38DD and EM31 devices from Geonics. These instruments are capable of investigating the subsurface to a depth of up to six meters. For the calibration process, we chose electrical resistivity tomography (ERT), Vertical Electrical Sounding (VES), and

  6. Space Object Tracking Method Based on a Snake Model

    NASA Astrophysics Data System (ADS)

    Zhan-wei, Xu; Xin, Wang

    2016-04-01

    In this paper, to address the unstable tracking of low-orbit space objects of variable brightness, an improved GVF (Gradient Vector Flow)-Snake algorithm based on an active contour model is proposed to search for the true object contour on the CCD image in real time. Combined with a Kalman filter for prediction, a new adaptive tracking method is proposed for space objects. Experiments show that this method overcomes the tracking error caused by a fixed window and improves tracking robustness.
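
    A minimal sketch of the prediction component, assuming a constant-velocity state model on the image-plane centroid and treating the snake-extracted centroid as the measurement; the noise covariances are tuning assumptions, and the snake itself is not reproduced here.

```python
import numpy as np

# State: (x, y, vx, vy); the snake contour supplies the measured centroid z.
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)   # process noise (tuning assumption)
R = 1.0 * np.eye(2)    # measurement noise (tuning assumption)

def predict(x, P):
    """Predicted state seeds the snake's search window in the next frame."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with the snake-extracted centroid z."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    return x, (np.eye(4) - K @ H) @ P
```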

  7. IR decoys modeling method based on particle system

    NASA Astrophysics Data System (ADS)

    Liu, Jun-yu; Wu, Kai-feng; Dong, Yan-bing

    2016-10-01

    Due to the complexity of the combustion processes of IR decoys, it is difficult to describe their infrared radiation characteristics with a deterministic model. In this work, an IR decoy simulation based on a particle system was developed. Measured data of an IR decoy are used to analyze its typical characteristics. A semi-empirical model of the IR decoy motion law was set up based on friction factors, and an IR decoy simulation model was built on a particle system. The infrared imaging characteristics and time-varying characteristics of the IR decoy were simulated using particle attributes such as lifetime, velocity, and color. The dynamic IR decoy simulation was realized with VC++ 6.0 and OpenGL.
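
    The sketch below shows a particle-system core in Python rather than the VC++/OpenGL implementation the paper uses; the attribute ranges, drag coefficient, and cooling rate are invented purely to illustrate lifetime-, velocity-, and temperature-driven behaviour.

```python
import random

# One decoy particle; lifetime, velocity and colour temperature are the
# attributes the abstract mentions. All numeric ranges are illustrative.
class Particle:
    def __init__(self):
        self.life = random.uniform(1.0, 3.0)             # seconds remaining
        self.vel = [random.uniform(-5, 5) for _ in range(3)]
        self.temp = random.uniform(1500, 2200)           # kelvin, sets IR intensity

    def step(self, dt, drag=0.3):
        self.life -= dt
        self.vel = [v * (1 - drag * dt) for v in self.vel]  # friction-factor decay
        self.temp *= (1 - 0.1 * dt)                          # cooling -> dimming

particles = [Particle() for _ in range(500)]
for _ in range(100):                          # 100 frames at dt = 0.02 s
    particles = [p for p in particles if p.life > 0]
    for p in particles:
        p.step(0.02)
```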

  8. Gender-based violence: concepts, methods, and findings.

    PubMed

    Russo, Nancy Felipe; Pirlott, Angela

    2006-11-01

    The United Nations has identified gender-based violence against women as a global health and development issue, and a host of policies, public education, and action programs aimed at reducing gender-based violence have been undertaken around the world. This article highlights new conceptualizations, methodological issues, and selected research findings that can inform such activities. In addition to describing recent research findings that document relationships between gender, power, sexuality, and intimate violence cross-nationally, it identifies cultural factors, including linkages between sex and violence through media images that may increase women's risk for violence, and profiles a host of negative physical, mental, and behavioral health outcomes associated with victimization including unwanted pregnancy and abortion. More research is needed to identify the causes, dynamics, and outcomes of gender-based violence, including media effects, and to articulate how different forms of such violence vary in outcomes depending on cultural context.

  9. A multivariate based event detection method and performance comparison with two baseline methods.

    PubMed

    Liu, Shuming; Smith, Kate; Che, Han

    2015-09-01

    Early warning systems have been widely deployed to protect water systems from accidental and intentional contamination events. Conventional detection algorithms are often criticized for having high false positive rates and low true positive rates. This mainly stems from the inability of these methods to determine whether variation in sensor measurements is caused by equipment noise or the presence of contamination. This paper presents a new detection method that identifies the existence of contamination by comparing Euclidean distances of correlation indicators, which are derived from the correlation coefficients of multiple water quality sensors. The performance of the proposed method was evaluated using data from a contaminant injection experiment and compared with two baseline detection methods. The results show that the proposed method can differentiate between fluctuations caused by equipment noise and those due to the presence of contamination. It yielded a higher probability of detection and a lower false alarm rate than the two baseline methods. With optimized parameter values, the proposed method can correctly detect 95% of all contamination events with a 2% false alarm rate.
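
    A compact sketch of the core idea as described, assuming a sliding window over a (time x sensors) array and a baseline indicator vector learned from contamination-free data; the window length and alarm threshold are illustrative tuning parameters, not values from the paper.

```python
import numpy as np

def corr_indicator(window):
    """Upper-triangle correlation coefficients of a (time x sensors) window."""
    c = np.corrcoef(window, rowvar=False)
    return c[np.triu_indices_from(c, k=1)]

def detect(data, baseline, win=60, thresh=0.5):
    """Flag times where the correlation structure departs from baseline,
    e.g. baseline = corr_indicator(clean_window). Noise perturbs individual
    sensors independently; contamination shifts their joint correlations."""
    alarms = []
    for t in range(win, data.shape[0]):
        d = np.linalg.norm(corr_indicator(data[t - win:t]) - baseline)
        alarms.append(d > thresh)
    return np.array(alarms)
```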

  10. Edge-based finite element method for shallow water equations

    NASA Astrophysics Data System (ADS)

    Ribeiro, F. L. B.; Galeão, A. C.; Landau, L.

    2001-07-01

    This paper describes an edge-based implementation of the generalized minimal residual (GMRES) solver for the fully coupled solution of non-linear systems arising from finite element discretization of the shallow water equations (SWEs). The gain in terms of memory, floating point operations and indirect addressing is quantified for semi-discrete and space-time analyses. Stabilized formulations, including Petrov-Galerkin models and discontinuity-capturing operators, are also discussed for both types of discretization. Results illustrating the quality of the stabilized solutions and the advantages of using the edge-based approach are presented at the end of the paper.

  11. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this paradigm. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly. PMID:25879050
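
    The sketch below imitates this setup with an off-the-shelf RBF-kernel support vector classifier standing in for the paper's kernel-based human model: a linear separator in the induced feature space is trained on binary (bivalent) user judgements and then used to rank unevaluated candidates. All data and the elite-selection rule are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Pseudo-IEC data: candidate solutions (rows) with synthetic binary
# "preferred" / "not preferred" user judgements.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 5))                  # 40 evaluated individuals
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# RBF kernel: a linear separation in the projected feature space.
human_model = SVC(kernel="rbf", C=1.0).fit(X, y)

candidates = rng.normal(size=(200, 5))        # unevaluated offspring
predicted_pref = human_model.decision_function(candidates)
elite = candidates[np.argsort(predicted_pref)[-10:]]  # steer the EC search
```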

  12. Methods and Approaches to Mass Spectroscopy Based Protein Identification

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter is a review of current mass spectrometers and the role in the field of proteomics. Various instruments are discussed and their strengths and weaknesses are highlighted. In addition, the methods of protein identification using a mass spectrometer are explained as well as data vali...

  13. Neural Network Based Method for Estimating Helicopter Low Airspeed

    DTIC Science & Technology

    1996-10-24

    The present invention relates generally to virtual sensors and, more particularly, to a means and method utilizing a neural network for estimating ... helicopter airspeed at speeds below about 50 knots using only fixed system parameters (i.e., parameters measured or determined in a reference frame fixed relative to the helicopter fuselage) as inputs to the neural network.

  14. Kernel method based human model for enhancing interactive evolutionary optimization.

    PubMed

    Pei, Yan; Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not provide sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this paradigm. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation with a pseudo-IEC user shows that our proposed model and method can enhance IEC search significantly.

  15. Objective, Way and Method of Faculty Management Based on Ergonomics

    ERIC Educational Resources Information Center

    WANG, Hong-bin; Liu, Yu-hua

    2008-01-01

    The core problem that influences educational quality of talents in colleges and universities is the faculty management. Without advanced faculty, it is difficult to cultivate excellent talents. With regard to some problems in present faculty construction of colleges and universities, this paper puts forward the new objectives, ways and methods of…

  16. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under a wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by special-purpose edge detection and an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned with a special template to select a number of candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present results of a large number of experiments demonstrating that the proposed face localization method is efficient and achieves a high level of accuracy, outperforming existing general-purpose face detection methods.
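
    As a hedged illustration of the wavelet stage only, the sketch below extracts the LL sub-band with PyWavelets and binarizes it; the Haar wavelet, two-level decomposition, and mean-based threshold are assumptions, and the edge-detection, template-scanning, and fuzzy-fusion stages of the paper are not reproduced.

```python
import numpy as np
import pywt

def candidate_map(gray, level=2):
    """LL sub-band of a grayscale image at the given level, binarized, as a
    coarse map in which face-candidate regions can be searched."""
    coeffs = pywt.wavedec2(gray.astype(float), "haar", level=level)
    ll = coeffs[0]                              # approximation sub-band
    return (ll > ll.mean()).astype(np.uint8)    # simple global threshold
```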

  17. Methods of use for sensor based fluid detection devices

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor)

    2001-01-01

    Methods of use and devices for detecting an analyte in a fluid. A system for detecting an analyte in a fluid is described, comprising a substrate having a sensor comprising a first organic material and a second organic material, where the sensor has a response to permeation by an analyte. A detector is operatively associated with the sensor. Further, a fluid delivery appliance is operatively associated with the sensor. The sensor device has information storage and processing equipment, which is operably connected with the device. This device compares a response from the detector with a stored ideal response to detect the presence of the analyte. An integrated system for detecting an analyte in a fluid is also described, where the sensing device, detector, information storage and processing device, and fluid delivery device are incorporated in a substrate. Methods of use for the above system are also described, wherein the first organic material and second organic material are sensed and the analyte is detected with a detector operatively associated with the sensor. The method provides a device that delivers fluid to the sensor and measures the response of the sensor with the detector. Further, the response is compared to a stored ideal response for the analyte to determine the presence of the analyte. In different embodiments, the fluid measured may be a gaseous fluid, a liquid, or a fluid extracted from a solid. Methods of fluid delivery for each embodiment are accordingly provided.

  18. Contact resistance calculations based on a variational method

    NASA Astrophysics Data System (ADS)

    Leong, M. S.; Choo, S. C.; Tan, L. S.; Goh, T. L.

    1988-07-01

    Noble's variational method is used to solve the contact resistance problem that arises when a circular disc source electrode is in contact with a semiconductor slab through an infinitesimally thin layer of resistive material. The method assumes that the source current density distribution J(r) has the form $J(r) = K_1(1 - r^2)^{-\mu} + K_2(1 - r^2)^{1/2} + K_3(1 - r^2)^{3/2}$, where the parameters $K_1$, $K_2$, $K_3$ and $\mu$ are determined by variational principles. Calculations of the source current density and the total slab resistance, performed for a wide range of contact resistivities, show that the results are practically indistinguishable from those derived from an exact mixed boundary value method proposed earlier by us. Whilst this method of using an optimised $\mu$ is very accurate, it is computationally slow. By fixing $\mu$ at a constant value of $1/4$, we find that we can drastically reduce the computation time for each calculation of the total slab resistance to 1.5 s on an Apple II microcomputer, and still achieve an overall accuracy of 1%. Tables of the abscissas and weights required for implementation of the numerical scheme are provided in the paper.

  19. Analysis of Methods for Collecting Test-based Judgments.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    Standard setting is a fairly widespread activity in educational and psychological measurement, but there is no formal psychometric theory to guide the development of standard setting methodology. This paper presents a conceptual framework for such a psychometric theory and uses the conceptual framework to analyze a number of methods for setting…

  20. New method of contour-based mask-shape compiler

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Sugiyama, Akiyuki; Onizawa, Akira; Sato, Hidetoshi; Toyoda, Yasutaka

    2007-10-01

    We have developed a new method of accurately profiling a mask shape by utilizing a Mask CD-SEM. The method is intended to realize the high accuracy, stability and reproducibility of the Mask CD-SEM, adopting the edge detection algorithm used in CD-SEM for high-accuracy CD measurement as the key technology. In comparison with conventional image processing methods for contour profiling, it is possible to create profiles with much higher accuracy, comparable with CD-SEM for semiconductor device CD measurement. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As semiconductor design rules continue to shrink, aggressive OPC (Optical Proximity Correction) is indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask making cost have become major concerns for device manufacturers. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered on the mask business. To cope with this problem, we propose a DFM solution in which two-dimensional data are extracted for error-free practical simulation by precisely reproducing the real mask shape in addition to the mask data simulation. The flow, centered on the design data, is fully automated and provides an environment for optimization and verification of fully automated model calibration with much less error. It also allows complete consolidation of input and output functions with an EDA system by constructing a design-data-oriented system structure. This method can therefore be regarded as a strategic DFM approach in semiconductor metrology.

  1. A Methods-Based Biotechnology Course for Undergraduates

    ERIC Educational Resources Information Center

    Chakrabarti, Debopam

    2009-01-01

    This new course in biotechnology for upper division undergraduates provides a comprehensive overview of the process of drug discovery that is relevant to biopharmaceutical industry. The laboratory exercises train students in both cell-free and cell-based assays. Oral presentations by the students delve into recent progress in drug discovery.…

  2. The Teaching of Protein Synthesis--A Microcomputer Based Method.

    ERIC Educational Resources Information Center

    Goodridge, Frank

    1983-01-01

    Describes two computer programs (BASIC for 32K Commodore PET) for teaching protein synthesis. The first is an interactive test of base-pairing knowledge, and the second generates random DNA nucleotide sequences, with instructions for substitution, insertion, and deletion printed out for each student. (JN)
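
    In the same spirit as the second program, here is a minimal modern re-sketch in Python (the originals were BASIC for a 32K Commodore PET): it generates a random DNA strand plus one worked substitution, insertion, and deletion for a student exercise. The sequence length and seed are arbitrary choices.

```python
import random

def mutation_exercise(length=15, seed=None):
    """Random DNA strand and one example of each mutation type."""
    rng = random.Random(seed)
    strand = [rng.choice("ACGT") for _ in range(length)]
    pos = rng.randrange(length)
    sub = strand[:]                                   # substitution at pos
    sub[pos] = rng.choice([b for b in "ACGT" if b != sub[pos]])
    ins = strand[:pos] + [rng.choice("ACGT")] + strand[pos:]   # insertion
    dele = strand[:pos] + strand[pos + 1:]                     # deletion
    return "".join(strand), "".join(sub), "".join(ins), "".join(dele)

print(mutation_exercise(seed=1))
```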

  3. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  4. Comparison of an EMG-based and a stress-based method to predict shoulder muscle forces.

    PubMed

    Engelhardt, Christoph; Malfroy Camine, Valérie; Ingram, David; Müllhaupt, Philippe; Farron, Alain; Pioletti, Dominique; Terrier, Alexandre

    2015-01-01

    The estimation of muscle forces in musculoskeletal shoulder models is still controversial. Two different methods are widely used to solve the indeterminacy of the system: electromyography (EMG)-based methods and stress-based methods. The goal of this work was to evaluate the influence of these two methods on the prediction of muscle forces, glenohumeral load and joint stability after total shoulder arthroplasty. An EMG-based and a stress-based method were implemented into the same musculoskeletal shoulder model. The model replicated the glenohumeral joint after total shoulder arthroplasty. It contained the scapula, the humerus, the joint prosthesis, the rotator cuff muscles supraspinatus, subscapularis and infraspinatus, and the middle, anterior and posterior deltoid muscles. A movement of abduction was simulated in the plane of the scapula. The EMG-based method replicated experimentally measured EMG activity. The stress-based method minimised a cost function based on muscle stresses. We compared muscle forces, joint reaction force, articular contact pressure and translation of the humeral head. The stress-based method predicted a lower force of the rotator cuff muscles. This was partly counter-balanced by a higher force of the middle part of the deltoid muscle. As a consequence, the stress-based method predicted a lower joint load (reduced by 16%) and a higher superior-inferior translation of the humeral head (increased by 1.2 mm). The EMG-based method has the advantage of replicating the observed cocontraction of stabilising muscles of the rotator cuff. This method is, however, limited to available EMG measurements. The stress-based method thus has the advantage of flexibility, but may overestimate glenohumeral subluxation.
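
    A toy sketch of the stress-based approach as described: minimize the sum of squared muscle stresses subject to a single joint-moment equilibrium constraint. The six-muscle layout, moment arms, PCSA values, and target moment are all invented; the paper's musculoskeletal model is far richer.

```python
import numpy as np
from scipy.optimize import minimize

# Six shoulder muscles balancing one abduction moment (illustrative values).
r = np.array([0.02, 0.025, 0.015, 0.03, 0.028, 0.022])   # moment arms, m
pcsa = np.array([6.0, 13.0, 8.0, 8.0, 5.0, 5.0])          # cross-sections, cm^2
M_target = 20.0                                           # joint moment, N*m

cost = lambda F: np.sum((F / pcsa) ** 2)                  # sum of squared stresses
cons = {"type": "eq", "fun": lambda F: r @ F - M_target}  # moment equilibrium
res = minimize(cost, x0=np.full(6, 100.0), constraints=[cons],
               bounds=[(0, None)] * 6)                    # muscles only pull
print(res.x)   # predicted muscle forces, N
```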

  5. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue over time because of increasing immigration and the need to accurately estimate the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the different stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method to daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist, without specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis took into consideration the lower third molars. The results provided by the different observers were then compared in order to quantify the interobserver error. Results showed that the interobserver error varies between 43% and 57% for the right lower third molar (M48) and between 23% and 49% for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the tooth or the professional background of the observer. The results prove that Olze's method is not easy to apply when used by personnel without adequate training, because of an intrinsic interobserver error. Since it is nonetheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training.

  6. Modified van der Pauw method based on formulas solvable by the Banach fixed point method

    NASA Astrophysics Data System (ADS)

    Cieśliński, Jan L.

    2012-11-01

    We propose a modification of the standard van der Pauw method for determining the resistivity and Hall coefficient of flat thin samples of arbitrary shape. Considering a different choice of resistance measurements we derive a new formula which can be numerically solved (with respect to sheet resistance) by the Banach fixed point method for any values of experimental data. The convergence is especially fast in the case of almost symmetric van der Pauw configurations (e.g., clover shaped samples).
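
    The paper's new formula is not reproduced in the abstract, so the sketch below instead applies a damped fixed-point iteration to the classical van der Pauw equation exp(-pi*R_A/R_s) + exp(-pi*R_B/R_s) = 1 to show the flavour of the approach; the damping factor and starting value are stability choices made here, not the paper's scheme.

```python
import math

def sheet_resistance(RA, RB, tol=1e-12, max_iter=200):
    """Solve exp(-pi*RA/Rs) + exp(-pi*RB/Rs) = 1 for Rs by iterating on
    u = exp(-pi*RA/Rs); the fixed point of u = 1 - u**k (k = RB/RA) is
    approached with 0.5 damping, which keeps the map contractive near k = 1."""
    k = RB / RA
    u = 0.5                      # exact for the symmetric case RA == RB
    for _ in range(max_iter):
        u_new = 0.5 * (u + 1.0 - u ** k)
        if abs(u_new - u) < tol:
            break
        u = u_new
    return -math.pi * RA / math.log(u)

print(sheet_resistance(1.0, 1.0))   # pi/ln 2 ~ 4.5324 for a symmetric sample
```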

  7. Method and apparatus to detoxify aqueous based hazardous waste

    SciTech Connect

    Schultheis, A.; Landrigan, M.A.; Lakhani, A.

    1992-02-11

    This patent describes an apparatus for processing aqueous waste, wherein the waste comprises a plurality of solid components and an aqueous component including organics and heavy metals. It comprises: means for substantially concurrently removing at least some organics and at least some heavy metals from the aqueous waste, including means for receiving the waste with a first means for removing at least one of the plurality of solid components from the waste; and a plurality of storage means for holding the waste, at least some of the storage means receiving the waste from the means for receiving, at least one of the storage means being for storing the waste, and at least one being for phase-separating a second one of the solid components and establishing an aqueous stream comprising the at least some organics and the at least some heavy metals. This patent also describes a process. It comprises: substantially concurrently removing organics and heavy metals from an aqueous-based stream by removing solids from the aqueous-based stream; storing the aqueous-based stream; adding a chelating agent to the aqueous-based stream to generate an organometallic complex comprising at least some of the heavy metals; dissolving some of the organics and the organometallic complex in a solvent; separating an extract from the aqueous-based stream, the extract comprising the solvent, the organometallic complex, at least some of the organics and at least some of the heavy metals, to generate a clean aqueous stream; and recovering the solvent from the extract.

  8. A new template matching method based on contour information

    NASA Astrophysics Data System (ADS)

    Cai, Huiying; Zhu, Feng; Wu, Qingxiao; Li, Sicong

    2014-11-01

    Template matching is a significant approach in machine vision due to its effectiveness and robustness. However, most template matching methods are so time consuming that they cannot be used in many real-time applications. The closed-contour matching method is a popular kind of template matching. This paper presents a new closed-contour template matching method suitable for two-dimensional objects. A coarse-to-fine searching strategy is used to improve the matching efficiency, and a partial computation elimination scheme is proposed to further speed up the searching process. The method consists of offline model construction and online matching. In the process of model construction, triples and a distance image are obtained from the template image. A certain number of triples, each composed of three points, are created from the contour information extracted from the template image; the rule for selecting the three points is that they divide the template contour into three equal parts. The distance image is obtained by a distance transform: each point of the distance image represents the nearest distance between the current point and the points on the template contour. During matching, triples of the searching image are created with the same rule as the triples of the model. Through the similarity between triangles, which is invariant to rotation, translation and scaling, the triples corresponding to the triples of the model are found. We can then obtain the initial RST (rotation, translation and scaling) parameters mapping the searching contour to the template contour. In order to speed up the searching process, the points on the searching contour are sampled to reduce the number of triples. To verify the RST parameters, the searching contour is projected into the distance image, and the mean distance can be computed rapidly by simple operations of addition and multiplication. In the fine searching process
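
    A small sketch of the verification step, assuming SciPy: the distance image is a Euclidean distance transform of the template-contour mask, and a candidate contour is scored by its mean distance after the RST mapping (the mapping itself is omitted). Function and variable names are ours.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_image(template_mask):
    """Each pixel holds the distance to the nearest template-contour pixel;
    template_mask is a boolean array that is True on contour pixels."""
    return distance_transform_edt(~template_mask)

def verify(dist_img, contour_pts):
    """Mean distance of a candidate contour (N x 2 array of (x, y) points,
    already RST-mapped) projected into the distance image; small values
    indicate a good RST hypothesis."""
    pts = np.round(contour_pts).astype(int)
    return dist_img[pts[:, 1], pts[:, 0]].mean()
```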

  9. A point cloud modeling method based on geometric constraints mixing the robust least squares method

    NASA Astrophysics Data System (ADS)

    Yue, Jianping; Pan, Yi; Yue, Shun; Liu, Dapeng; Liu, Bin; Huang, Nan

    2016-10-01

    The advent of 3D laser scanning technology has provided a new method for the acquisition of spatial 3D information, and it has been widely used in the field of surveying and mapping engineering, characterized by automation and high precision. The 3D laser scanning workflow mainly includes field laser data acquisition, in-office laser data registration, and subsequent 3D modeling and data integration. Domestic and foreign researchers have done a great deal of work on point cloud modeling. Surface reconstruction techniques mainly include the point-shape model, the triangle model, the triangular Bezier surface model, and the rectangular surface model; neural networks and Alpha shapes have also been used in surface reconstruction. These methods, however, often focus on fitting single surfaces, or on automatic or manual block-wise fitting, which ignores the integrity of the model as a whole. This leads to a serious problem after stitching: the separately fitted surfaces often do not satisfy well-known geometric constraints, such as parallelism, perpendicularity, a fixed angle, or a fixed distance. Research on modeling theory that accounts for such dimensional and positional constraints is not yet widely applied. One traditional modeling method that adds geometric constraints combines the penalty function method with the Levenberg-Marquardt algorithm (L-M algorithm); its stability is quite good, but it is strongly influenced by the initial value. In this paper, we propose an improved point cloud modeling method that takes geometric constraints into account. We first apply robust least squares to enhance the accuracy of the initial value, then use the penalty function method to transform the constrained optimization problem into an unconstrained one, and finally solve the problem using the L-M algorithm. The experimental results
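
    A compact sketch of the combined strategy under stated assumptions: two planes are fitted to two point sets with a shared normal (so parallelism holds by construction), a quadratic penalty enforces the unit-normal constraint, and SciPy's robust soft-L1 loss stands in for the robust least-squares initializer. This illustrates the penalty-plus-robust-fitting idea, not the authors' exact algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, P1, P2, w=10.0):
    """Point-to-plane residuals for two parallel planes n.p = d1, n.p = d2,
    plus a penalty term keeping the shared normal n at unit length."""
    n, d1, d2 = params[:3], params[3], params[4]
    r = np.concatenate([P1 @ n - d1, P2 @ n - d2])
    return np.append(r, w * (n @ n - 1.0))

P1 = np.random.randn(100, 3) * [1, 1, 0.01]             # plane near z = 0
P2 = P1 + [0, 0, 2.0] + np.random.randn(100, 3) * 0.01  # parallel, near z = 2
fit = least_squares(residuals, x0=[0, 0, 1, 0, 2], args=(P1, P2),
                    loss="soft_l1", method="trf")        # robust loss
print(fit.x)   # normal (3), then the two plane offsets d1, d2
```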

  10. The Impact of Group Technology-Based Shipbuilding Methods on Naval Ship Design and Acquisition Practices

    DTIC Science & Technology

    1988-05-01

    THE IMPACT OF GROUP TECHNOLOGY-BASED SHIPBUILDING METHODS ON NAVAL SHIP DESIGN AND ACQUISITION PRACTICES, by John Sutherland Heffron, Department of Ocean Engineering. ... stimulated their search for more efficient and productive ship construction methods. As a result, group technology-based shipbuilding methods have been

  11. A Dynamic Interval Decision-Making Method Based on GRA

    NASA Astrophysics Data System (ADS)

    Xue-jun, Tang; Jia, Chen

    According to the basic theory of grey relational analysis, this paper constructs a three-dimensional grey interval relation degree model over the three dimensions of time, index and scheme. On this basis, it sets up and solves a single-objective optimization model, obtains each scheme's affiliation degree to the positive/negative ideal scheme, and ranks the schemes. The result shows that the three-dimensional grey relation degree simplifies the traditional dynamic multi-attribute decision-making method and can better solve dynamic multi-attribute decision-making problems with interval numbers. Finally, the paper demonstrates the practicality and efficiency of the model through a case study.
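
    For readers unfamiliar with the underlying machinery, the sketch below computes the classical one-dimensional grey relational degree of each scheme to an ideal reference sequence; the distinguishing coefficient rho = 0.5 is the customary default and the data are invented. The paper's three-dimensional interval extension is not reproduced.

```python
import numpy as np

def grey_relational_degree(ref, alts, rho=0.5):
    """Grey relational degree of each alternative (row of alts) to the
    reference sequence ref, averaged over indices."""
    delta = np.abs(alts - ref)                 # deviation sequences
    dmin, dmax = delta.min(), delta.max()
    coef = (dmin + rho * dmax) / (delta + rho * dmax)
    return coef.mean(axis=1)

ideal = np.array([1.0, 1.0, 1.0])              # positive ideal scheme
schemes = np.array([[0.9, 0.7, 0.8],
                    [0.6, 0.9, 0.5],
                    [0.8, 0.8, 0.9]])
print(grey_relational_degree(ideal, schemes))  # rank schemes by degree
```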

  12. Computational aeroacoustics applications based on a discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Delorme, Philippe; Mazet, Pierre; Peyret, Christophe; Ventribout, Yoan

    2005-09-01

    CAA simulation requires computing the propagation of acoustic waves with low numerical dissipation and dispersion error, while taking complex geometries into account. To answer both challenges at once, a discontinuous Galerkin method is developed for computational aeroacoustics. Euler's linearized equations are solved with the discontinuous Galerkin method using flux splitting techniques. Boundary conditions are established for rigid walls, non-reflective boundaries and imposed values. A first validation, for in-duct propagation, is carried out. Applications then illustrate: the Chu and Kovasznay decomposition of perturbations inside a uniform flow into independent acoustic and rotational modes, the Kelvin-Helmholtz instability, and acoustic diffraction by a wing. To cite this article: Ph. Delorme et al., C. R. Mecanique 333 (2005).

  13. Iris-based cyclotorsional image alignment method for wavefront registration.

    PubMed

    Chernyak, Dimitri A

    2005-12-01

    In refractive surgery, especially wavefront-guided refractive surgery, correct registration of the treatment to the cornea is of paramount importance. The specificity of the custom ablation formula requires that the ablation be applied to the cornea only when it has been precisely aligned with the mapped area. If, however, the eye has rotated between measurement and ablation, and this cyclotorsion is not compensated for, the rotational misalignment could impair the effectiveness of the refractive surgery. To achieve precise registration, a noninvasive method for torsional rotational alignment of the captured wavefront image to the patient's eyes at surgery has been developed. This method applies a common coordinate system to the wavefront and the eye. Video cameras on the laser and wavefront devices precisely establish the spatial relationship between the optics of the eye and the natural features of the iris, enabling the surgeon to identify and compensate for cyclotorsional eye motion, whatever its cause.

  14. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  15. Blood grouping based on PCR methods and agarose gel electrophoresis.

    PubMed

    Sell, Ana Maria; Visentainer, Jeane Eliete Laguila

    2015-01-01

    The study of erythrocyte antigens continues to be an intense field of research, particularly after the development of molecular testing methods. More than 300 specificities have been described by the International Society of Blood Transfusion as belonging to 33 blood group systems. The polymerase chain reaction (PCR) is a central tool for red blood cell (RBC) genotyping. PCR and agarose gel electrophoresis are low-cost, easy, and versatile in vitro methods for amplifying defined target DNA (RBC polymorphic regions). Multiplex-PCR, AS-PCR (allele-specific PCR), and PCR-RFLP (restriction fragment length polymorphism) techniques are usually used to identify RBC polymorphisms, and the methodology is easy to implement. This chapter describes the PCR methodology and agarose gel electrophoresis used to identify polymorphisms of the Kell, Duffy, Kidd, and MNS blood group systems.

  16. Vulnerability Assessment Using a Fuzzy Logic Based Method

    DTIC Science & Technology

    1993-12-01

    evaluating computer security vulnerabilities is very labor intensive. To help ease this workload, this thesis presents two automated methods possibly...

  17. Residual-based Methods for Controlling Discretization Error in CFD

    DTIC Science & Technology

    2015-08-24

    Jackson, PhD (expected, 2017); William Tyson, PhD (expected 2018). I. Introduction: Computational Fluid Dynamics (CFD) has enormous potential to... are not included for the approximate TE method as the DE estimates were much worse than the rest. X. Mesh Adaptation: Performing... Computational Physics, October 2014. 2. J. M. Derlaga, T. S. Phillips, and C. J. Roy, "SENSEI Computational Fluid Dynamics Code: A Case Study in Modern

  18. Hyperspectral image classification based on NMF Features Selection Method

    NASA Astrophysics Data System (ADS)

    Abe, Bolanle T.; Jordaan, J. A.

    2013-12-01

    Hyperspectral instruments are capable of collecting hundreds of images, corresponding to wavelength channels, for the same area on the earth's surface. Due to the huge number of features (bands) in hyperspectral imagery, land cover classification procedures are computationally expensive and suffer from the curse of dimensionality. In addition, high correlation among contiguous bands increases the redundancy within the bands. Hence, dimension reduction of hyperspectral data is crucial for obtaining good classification accuracy. This paper presents a new feature selection technique: a Non-negative Matrix Factorization (NMF) algorithm is proposed to obtain reduced, relevant features in the input domain of each class label, with the aim of reducing both classification error and the dimensionality of the classification task. The Indian Pines dataset from Northwest Indiana is used to evaluate the performance of the proposed method through feature selection and classification experiments. The Waikato Environment for Knowledge Analysis (WEKA) data mining framework is used to implement the classification with Support Vector Machines and a Neural Network. The selected feature subsets are subjected to land cover classification to investigate the performance of the classifiers and how feature size affects classification accuracy. The results show that the classifiers perform well. The study makes a positive contribution to hyperspectral imagery problems by exploring NMF, SVMs and NNs to improve classification accuracy, and the classifier results are valuable for decision makers weighing tradeoffs between method accuracy and complexity.
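
    A minimal sketch of the dimension-reduction step with scikit-learn, assuming a nonnegative (pixels x bands) matrix; the number of components, initialization, and random data are illustrative stand-ins for the Indian Pines processing.

```python
import numpy as np
from sklearn.decomposition import NMF

# Reduce a (pixels x bands) nonnegative hyperspectral matrix to r features.
X = np.abs(np.random.rand(1000, 200))     # 1000 pixels, 200 bands (dummy data)
model = NMF(n_components=20, init="nndsvda", max_iter=400, random_state=0)
W = model.fit_transform(X)                # per-pixel reduced features
H = model.components_                     # band loadings per latent feature
# W can now feed an SVM or neural network classifier, as in the paper.
```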

  19. Optimizing methods for PCR-based analysis of predation

    PubMed Central

    Sint, Daniela; Raso, Lorna; Kaufmann, Rüdiger; Traugott, Michael

    2011-01-01

    Molecular methods have become an important tool for studying feeding interactions under natural conditions. Despite their growing importance, many methodological aspects have not yet been evaluated but need to be considered to fully exploit the potential of this approach. Using feeding experiments with high alpine carabid beetles and lycosid spiders, we investigated how PCR annealing temperature affects prey DNA detection success and how post-PCR visualization methods differ in their sensitivity. Moreover, the replicability of prey DNA detection among individual PCR assays was tested using beetles and spiders that had digested their prey for extended times postfeeding. By screening all predators for three differently sized prey DNA fragments (range 116–612 bp), we found that only in the longest PCR product, a marked decrease in prey detection success occurred. Lowering maximum annealing temperatures by 4 °C resulted in significantly increased prey DNA detection rates in both predator taxa. Among the three post-PCR visualization methods, an eightfold difference in sensitivity was observed. Repeated screening of predators increased the total number of samples scoring positive, although the proportion of samples testing positive did not vary significantly between different PCRs. The present findings demonstrate that assay sensitivity, in combination with other methodological factors, plays a crucial role to obtain robust trophic interaction data. Future work employing molecular prey detection should thus consider and minimize the methodologically induced variation that would also allow for better cross-study comparisons. PMID:21507208

  20. Accurate optical CD profiler based on specialized finite element method

    NASA Astrophysics Data System (ADS)

    Carrero, Jesus; Perçin, Gökhan

    2012-03-01

    As the semiconductor industry moves to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. Critical dimension scanning electron microscope (CD-SEM) measurements are usually distorted by the high aspect ratio of the photoresist and hard mask layers, and cease to correlate with complex three-dimensional profiles, as in the cases of double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods that complements advanced OCD equipment, enabling it to operate at higher accuracy, is developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations, and the implementation uses specialized finite element spaces to solve Maxwell's equations in two dimensions.

  1. Optimizing methods for PCR-based analysis of predation.

    PubMed

    Sint, Daniela; Raso, Lorna; Kaufmann, Rüdiger; Traugott, Michael

    2011-09-01

    Molecular methods have become an important tool for studying feeding interactions under natural conditions. Despite their growing importance, many methodological aspects have not yet been evaluated but need to be considered to fully exploit the potential of this approach. Using feeding experiments with high alpine carabid beetles and lycosid spiders, we investigated how PCR annealing temperature affects prey DNA detection success and how post-PCR visualization methods differ in their sensitivity. Moreover, the replicability of prey DNA detection among individual PCR assays was tested using beetles and spiders that had digested their prey for extended times postfeeding. By screening all predators for three differently sized prey DNA fragments (range 116-612 bp), we found that only in the longest PCR product, a marked decrease in prey detection success occurred. Lowering maximum annealing temperatures by 4 °C resulted in significantly increased prey DNA detection rates in both predator taxa. Among the three post-PCR visualization methods, an eightfold difference in sensitivity was observed. Repeated screening of predators increased the total number of samples scoring positive, although the proportion of samples testing positive did not vary significantly between different PCRs. The present findings demonstrate that assay sensitivity, in combination with other methodological factors, plays a crucial role to obtain robust trophic interaction data. Future work employing molecular prey detection should thus consider and minimize the methodologically induced variation that would also allow for better cross-study comparisons.

  2. CHull: a generic convex-hull-based model selection method.

    PubMed

    Wilderjans, Tom F; Ceulemans, Eva; Meers, Kristof

    2013-03-01

    When analyzing data, researchers are often confronted with a model selection problem (e.g., determining the number of components/factors in principal components analysis [PCA]/factor analysis or identifying the most important predictors in a regression analysis). To tackle such a problem, researchers may apply some objective procedure, like parallel analysis in PCA/factor analysis or stepwise selection methods in regression analysis. A drawback of these procedures is that they can only be applied to the model selection problem at hand. An interesting alternative is the CHull model selection procedure, which was originally developed for multiway analysis (e.g., multimode partitioning). However, the key idea behind the CHull procedure--identifying a model that optimally balances model goodness of fit/misfit and model complexity--is quite generic. Therefore, the procedure may also be used when applying many other analysis techniques. The aim of this article is twofold. First, we demonstrate the wide applicability of the CHull method by showing how it can be used to solve various model selection problems in the context of PCA, reduced K-means, best-subset regression, and partial least squares regression. Moreover, a comparison of CHull with standard model selection methods for these problems is performed. Second, we present the CHULL software, which may be downloaded from http://ppw.kuleuven.be/okp/software/CHULL/, to assist the user in applying the CHull procedure.
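
    As a compact re-sketch of the generic idea (not the authors' reference implementation), the code below places candidate models on the upper convex hull of the (complexity, goodness-of-fit) plane and selects the hull point with the highest scree-test ratio of adjacent slopes; the example fit values are invented.

```python
import numpy as np

def chull_select(complexity, fit):
    """Index of the model chosen by a hull-plus-scree-ratio heuristic."""
    order = np.argsort(complexity)
    c, f = np.asarray(complexity, float)[order], np.asarray(fit, float)[order]
    hull = [0]
    for i in range(1, len(c)):              # build the upper hull, left to right
        while len(hull) >= 2:
            (c1, f1), (c2, f2) = (c[hull[-2]], f[hull[-2]]), (c[hull[-1]], f[hull[-1]])
            if (f2 - f1) * (c[i] - c2) <= (f[i] - f2) * (c2 - c1):
                hull.pop()                   # hull[-1] lies under the chord
            else:
                break
        hull.append(i)
    best, best_st = hull[0], -np.inf         # default: simplest hull model
    for j in range(1, len(hull) - 1):
        a, b, d = hull[j - 1], hull[j], hull[j + 1]
        rise = (f[b] - f[a]) / (c[b] - c[a])
        run = (f[d] - f[b]) / (c[d] - c[b])
        if run > 0 and rise / run > best_st:
            best, best_st = b, rise / run
    return order[best]

k = chull_select([1, 2, 3, 4, 5], [0.50, 0.80, 0.90, 0.93, 0.94])  # -> index 2
```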

  3. Video-based Nearshore Depth Inversion using WDM Method

    NASA Astrophysics Data System (ADS)

    Hampson, R. W.; Kirby, J. T.

    2008-12-01

    A new remote sensing method for estimating nearshore water depths from video imagery has been developed and applied as part of an ongoing field study at Bethany Beach, Delaware. The new method applies Donelan et al.'s Wavelet Direction Method (WDM) to compact arrays of pixel intensity time series extracted from video images. The WDM generates a non-stationary time series of the wavenumber and wave direction at different frequencies that can be used to create frequency-wavenumber and directional spectra. The water depth is estimated at the center of each compact array by fitting the linear dispersion relation to the frequency-wavenumber spectrum. Directional spectral results show good correlation with directional spectra obtained from a slope array located just offshore of Bethany Beach. Additionally, depth estimates from the WDM are compared to depth measurements taken with a kayak survey system at Bethany Beach. Continuous measurements of the bathymetry at Bethany Beach are needed as inputs to fluid dynamics and sediment transport models for studying the morphodynamics of the nearshore zone, and can be used to monitor the success of the recent beach replenishment project along the Delaware coast.
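
    For a single spectral peak, the depth-fitting step reduces to inverting the linear dispersion relation omega^2 = g*k*tanh(k*h) for h; the sketch below does exactly that (in practice one would fit over many (omega, k) pairs by least squares). The example frequency and wavenumber are invented.

```python
import numpy as np

def depth_from_dispersion(omega, k, g=9.81):
    """Depth h solving omega^2 = g*k*tanh(k*h), given one (omega, k) pair
    taken from the WDM frequency-wavenumber spectrum."""
    ratio = omega ** 2 / (g * k)
    if ratio >= 1.0:
        return np.inf          # deep-water limit: depth unresolvable
    return np.arctanh(ratio) / k

# e.g. an 8 s wave with wavenumber 0.08 rad/m gives roughly 13 m of water:
h = depth_from_dispersion(2 * np.pi / 8, 0.08)
```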

  4. Methods for SAXS-Based Structure Determination of Biomolecular Complexes

    DOE PAGES

    Yang, Sichun

    2014-05-30

    Measurements from small-angle X-ray scattering (SAXS) are highly informative for determining the structures of biomolecular complexes in solution. Here, we describe current and recent SAXS-driven developments, with an emphasis on computational modeling. In particular, accurate methods for computing a theoretical scattering profile from a given structure model are discussed, with a key focus on structure factor coarse-graining and the hydration contribution. Methods for reconstructing topological structures from an experimental SAXS profile are currently under active development. We report on several modeling tools designed for conformation generation that make use of either atomic-level or coarse-grained representations. Furthermore, since large, flexible biomolecules can adopt multiple well-defined conformations, a traditional single-conformation SAXS analysis is inappropriate, so we also discuss recent methods that utilize the concept of ensemble optimization, weighing the SAXS contributions of a heterogeneous mixture of conformations. These tools will ultimately extend the usefulness of SAXS data beyond a simple space-filling approach by providing a reliable structure characterization of biomolecular complexes under physiological conditions.

  5. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact detection of respiration and heartbeat rates could be applied to finding survivors trapped in disasters or to the remote monitoring of a patient's respiration and heartbeat. This study presents an improved algorithm that extracts human respiration and heartbeat rates using terahertz radar, further lessening the effects of noise, suppressing cross-terms, and enhancing detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, a low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that, compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for detecting respiration and heartbeat signals in a complicated environment.

  6. A New Quaternion-Based Encryption Method for DICOM Images.

    PubMed

    Dzwonkowski, Mariusz; Papaj, Michal; Rykaczewski, Roman

    2015-11-01

    In this paper, a new quaternion-based lossless encryption technique for Digital Imaging and Communications in Medicine (DICOM) images is proposed. We have scrutinized and slightly modified the concept of the DICOM network to identify the best location for the proposed encryption scheme, which significantly improves the speed of DICOM image encryption in comparison with the Advanced Encryption Standard and Triple Data Encryption Standard algorithms originally embedded in DICOM. The proposed algorithm decomposes a DICOM image into two 8-bit gray-tone images in order to perform encryption. The algorithm implements a Feistel network like the scheme proposed by Sastry and Kumar, and uses special properties of quaternions to perform rotations of data sequences in 3D space for each of the cipher rounds. The images are written as Lipschitz quaternions, and modular arithmetic is implemented for operations with the quaternions. A computer-based analysis has been carried out, and the obtained results are shown at the end of this paper.
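
    To make the core primitive concrete, the sketch below rotates a triple of pixel values with a unit quaternion via v' = q v q*; a 120-degree rotation about (1,1,1)/sqrt(3) cyclically permutes the components, illustrating the kind of reversible scrambling a cipher round can apply. The example is ours and omits the Feistel structure, Lipschitz-integer arithmetic, and modular reduction of the actual scheme.

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(v, q):
    """Rotate the 3D data point v by unit quaternion q: v' = q v q*."""
    qc = (q[0], -q[1], -q[2], -q[3])
    return qmul(qmul(q, (0.0, *v)), qc)[1:]

# 120-degree rotation about (1,1,1)/sqrt(3) permutes the three values.
axis = 1 / math.sqrt(3)
q120 = (math.cos(math.pi / 3), *(axis * math.sin(math.pi / 3),) * 3)
print(rotate((10.0, 20.0, 30.0), q120))   # -> approximately (30, 10, 20)
```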

  7. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  8. Analog filter diagnosis using the oscillation based method

    NASA Astrophysics Data System (ADS)

    Andrejević Stošović, Miona; Milić, Miljana; Litovski, Vanćo

    2012-12-01

    Oscillation Based Testing (OBT) is an effective and simple solution to the testing problem of continuous time analogue electronic filters. In this paper, diagnosis based on OBT is described for the first time. It will be referred to as OBD. A fault dictionary is created and used to perform diagnosis with artificial neural networks (ANNs) implemented as classifiers. The robustness of the ANN diagnostic concept is also demonstrated by the addition of white noise to the “measured” signals. The implementation of the new concept is demonstrated by testing and diagnosis of a second order notch cell realized with one operational amplifier. Single soft and catastrophic faults are considered in detail and an example of the diagnosis of double soft faults is also given.

  9. A new method of cardiographic image segmentation based on grammar

    NASA Astrophysics Data System (ADS)

    Hamdi, Salah; Ben Abdallah, Asma; Bedoui, Mohamed H.; Alimi, Adel M.

    2011-10-01

    The measurement of the most common ultrasound parameters, such as the aortic area, mitral area and left ventricle (LV) volume, requires the delineation of the organ in order to estimate the area. In terms of medical image processing, this translates into the need to segment the image and define the contours as accurately as possible. The aim of this work is to segment an image and make an automated area estimation based on grammar. The entity "language" is projected onto the entity "image" to perform structural analysis and parsing of the image. We show how the idea of segmentation and grammar-based area estimation is applied to real problems of cardiographic image processing.

  10. Highly selective CNTFET based sensors using metal diversification methods

    NASA Astrophysics Data System (ADS)

    Bondavalli, P.; Gorintin, L.; Longnos, F.; Feugnet, G.

    2011-10-01

    This contribution deals with carbon nanotube field effect transistor (CNTFET) based gas sensors fabricated using a new dynamic spray technique for SWCNT deposition. This technique is compatible with large surfaces and flexible substrates, and allows the fabrication of high-performance transistors exploiting the percolation effect of SWCNT networks, with extremely reproducible characteristics. Recently, we have achieved extremely selective measurements of NO2, NH3 and DMMP using four CNTFETs fabricated with different electrode metals (Pt, Au, Ti, Pd), exploiting the specific interaction between the gas and the metal/SWCNT junction. In this way we have identified a sort of electronic fingerprint of each gas. The time response is evaluated at less than 30 s, and the sensitivity can reach 20 ppb for NO2, 100 ppb for NH3 and 1 ppm for DMMP (di-methyl-methyl-phosphonate).

  11. Ensemble method: Community detection based on game theory

    NASA Astrophysics Data System (ADS)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks have emerged as a key ingredient for success in many businesses and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. The choice of a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network, and it can affect the outcome of the experiments, because different algorithms have different advantages and depend on tuning specific parameters. In this paper, we propose a community division model based on the notion of game theory, which combines the advantages of previous algorithms effectively to get a better community classification result. Experiments on standard datasets verify that our game-theory-based community detection model is valid and performs better.

  12. Method of polishing nickel-base alloys and stainless steels

    DOEpatents

    Steeves, Arthur F.; Buono, Donald P.

    1981-01-01

    A chemical attack polish and polishing procedure for use on metal surfaces such as nickel base alloys and stainless steels. The chemical attack polish comprises Fe(NO.sub.3).sub.3, concentrated CH.sub.3 COOH, concentrated H.sub.2 SO.sub.4 and H.sub.2 O. The polishing procedure includes saturating a polishing cloth with the chemical attack polish and submicron abrasive particles and buffing the metal surface.

  13. Gold Based Electrical Contact Materials, and Method Therefor.

    DTIC Science & Technology

    carburized by internal carburization by exposing a gold based solid solution containing the refractory element to an atmosphere of a gaseous oxide of carbon ... at an elevated temperature. The elevated temperature is chosen to be below the melting point of the solid solution and high enough to cause gaseous ... decomposition of a carbon material packed with the solid solution within an enclosing container. The carburizable refractory element with the solid ...

  14. Diffusion Geometry Based Nonlinear Methods for Hyperspectral Change Detection

    DTIC Science & Technology

    2010-05-12

    A. Schaum and A. Stocker, "Hyperspectral change detection and supervised matched filtering based on covariance equalization," Proceedings of the SPIE, vol. 5425, pp. 77-90 (2004); A. Schaum and A. Stocker, "Linear chromodynamics models for hyperspectral target detection," Proceedings of the IEEE Aerospace Conference (February 2003).

  15. New mobile methods for dietary assessment: review of image-assisted and image-based dietary assessment methods.

    PubMed

    Boushey, C J; Spoden, M; Zhu, F M; Delp, E J; Kerr, D A

    2016-12-12

    For nutrition practitioners and researchers, assessing the dietary intake of children and adults with a high level of accuracy continues to be a challenge. Developments in mobile technologies have created a role for images in the assessment of dietary intake. The objective of this review was to examine peer-reviewed published papers covering the development, evaluation and/or validation of image-assisted or image-based dietary assessment methods from December 2013 to January 2016. Images taken with handheld devices or wearable cameras have been used to assist traditional dietary assessment methods for portion size estimations made by dietitians (image-assisted methods). Image-assisted approaches can supplement either dietary records or 24-h dietary recalls. In recent years, image-based approaches integrating application technology for mobile devices have been developed (image-based methods). Image-based approaches aim at capturing all eating occasions by images as the primary record of dietary intake, and therefore follow the methodology of food records. The present paper reviews several image-assisted and image-based methods and their benefits and challenges, followed by details on an image-based mobile food record. Mobile technology offers a wide range of feasible options for dietary assessment that are easier to incorporate into daily routines. The presented studies illustrate that image-assisted methods can improve the accuracy of conventional dietary assessment methods by adding eating occasion detail via pictures captured by an individual (dynamic images). All of the studies reduced underreporting with the help of images compared with results from traditional assessment methods. Studies with larger sample sizes are needed to better delineate attributes with regard to age of user, degree of error and cost.

  16. Distributed Cooperation Solution Method of Complex System Based on MAS

    NASA Astrophysics Data System (ADS)

    Weijin, Jiang; Yuhui, Xu

    To adapt fault-diagnosis models to dynamic environments and to fully meet the needs of solving the tasks of a complex system, this paper introduces multi-agent technology into complicated fault diagnosis and studies an integrated intelligent control system. Based on a hierarchical diagnostic decision structure and a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed, the organization and evolution of agents in the system are proposed, and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in distributed domains.

  17. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. The method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore easily distinguish them in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral-fitting based method, our method is more tolerant to complex foregrounds. Furthermore, we find that when the instrument has uncorrected response errors, our method also works significantly better than the spectral-fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
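
    The scale-separation idea is easy to prototype. The paper uses a continuous wavelet transform; as a runnable stand-in, the sketch below uses a stationary wavelet transform from PyWavelets, which is trivially invertible, drops the coarse (smooth foreground) coefficients and keeps only the fine scales. The wavelet, decomposition level and number of fine scales kept are illustrative choices, not the authors' settings.

        import numpy as np
        import pywt

        def subtract_foreground(spectrum, wavelet="db4", level=5, keep_fine=2):
            # Smooth foreground lives at coarse scales, the saw-tooth 21 cm
            # signal at fine scales.  len(spectrum) must be a multiple of
            # 2**level for the stationary transform.
            coeffs = pywt.swt(spectrum, wavelet, level=level)  # coarsest first
            cleaned = [(np.zeros_like(cA),                     # drop smooth part
                        cD if i >= level - keep_fine else np.zeros_like(cD))
                       for i, (cA, cD) in enumerate(coeffs)]
            return pywt.iswt(cleaned, wavelet)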

  18. An algebra-based method for inferring gene regulatory networks

    PubMed Central

    2014-01-01

    Background: The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to understand how a given inference method is affected by experimental and other noise in the data used. Results: This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also
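
    For readers unfamiliar with the formalism, the sketch below steps a tiny Boolean polynomial dynamical system over F2, where each gene's next state is a polynomial (an XOR of AND-monomials) in the current states. The three update rules are invented for illustration and are not inferred from any data.

        # Minimal BPDS sketch: state in {0,1}^3, updates are polynomials over F2.
        def step(state):
            x1, x2, x3 = state
            return (
                (x2 * x3) % 2,   # f1 = x2*x3   (AND)
                (x1 + x3) % 2,   # f2 = x1 + x3 (XOR)
                (1 + x1) % 2,    # f3 = 1 + x1  (NOT)
            )

        state = (1, 0, 1)
        for t in range(5):       # print a short trajectory
            print(t, state)
            state = step(state)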

  19. An aquaculture-based method for calibrated bivalve isotope paleothermometry

    NASA Astrophysics Data System (ADS)

    Wanamaker, Alan D.; Kreutz, Karl J.; Borns, Harold W.; Introne, Douglas S.; Feindel, Scott; Barber, Bruce J.

    2006-09-01

    To quantify species-specific relationships between bivalve carbonate isotope geochemistry (δ18Oc) and water conditions (temperature and salinity, related to water isotopic composition [δ18Ow]), an aquaculture-based methodology was developed and applied to Mytilus edulis (blue mussel). The four-by-three factorial design consisted of four circulating temperature baths (7, 11, 15, and 19°C) and three salinity ranges (23, 28, and 32 parts per thousand (ppt); monitored for δ18Ow weekly). In mid-July of 2003, 4800 juvenile mussels were collected in Salt Bay, Damariscotta, Maine, and were placed in each configuration. The size distribution of harvested mussels, based on 105 specimens, ranged from 10.9 mm to 29.5 mm with a mean size of 19.8 mm. The mussels were grown in controlled conditions for up to 8.5 months, and a paleotemperature relationship based on juvenile M. edulis from Maine was developed from animals harvested at months 4, 5, and 8.5. This relationship [T°C = 16.19 (±0.14) - 4.69 (±0.21) {δ18Oc VPDB - δ18Ow VSMOW} + 0.17 (±0.13) {δ18Oc VPDB - δ18Ow VSMOW}^2; r2 = 0.99; N = 105; P < 0.0001] is nearly identical to the Kim and O'Neil (1997) abiogenic calcite equation over the entire temperature range (7-19°C), and it closely resembles the commonly used paleotemperature equations of Epstein et al. (1953) and Horibe and Oba (1972). Further, the comparison of the M. edulis paleotemperature equation with the Kim and O'Neil (1997) equilibrium-based equation indicates that M. edulis specimens used in this study precipitated their shell in isotopic equilibrium with ambient water within the experimental uncertainties of both studies. The aquaculture-based methodology described here allows similar species-specific isotope paleothermometer calibrations to be performed with other bivalve species and thus provides improved quantitative paleoenvironmental reconstructions.
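
    As a numeric check, the quoted equation can be evaluated directly. The sketch below encodes the central coefficient values (ignoring their uncertainties); the example input is arbitrary.

        def mussel_temperature(d18o_calcite, d18o_water):
            # Paleotemperature equation quoted above for juvenile M. edulis
            # (calcite vs VPDB, water vs VSMOW).
            d = d18o_calcite - d18o_water
            return 16.19 - 4.69 * d + 0.17 * d ** 2

        # Equal shell and water values give T = 16.19 degC, inside the
        # 7-19 degC calibration range of the experiment.
        print(mussel_temperature(0.0, 0.0))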

  20. Theoretical research and comparison of forces in optical tweezers based on ray optics method and T matrix method

    NASA Astrophysics Data System (ADS)

    Li, Zhenggang; Hu, Huizhu; Fu, ZhenHai; Zhu, Qi; Shen, Yu

    2016-10-01

    Based on the ray tracing method of ray optics (RO) theory and the T-matrix method of electromagnetic scattering theory, we establish optical trap force models and calculate the optical trapping force on microspheres whose size is on the scale of the beam wavelength. The calculated axial and transverse trapping efficiencies of the two models agree qualitatively, but differ quantitatively. We then introduce a trapping efficiency deviation parameter to characterize the difference between the two methods, and analyze how this deviation is influenced by the trapped microsphere radius and the trapping beam waist radius. Simulation results show that the best agreement between the RO model and the T-matrix method is met when a strongly focused laser beam traps a large microsphere near the beam waist plane; in such cases both the ray optics approximation conditions and the T-matrix method approximation conditions are satisfied. Numerical results coincide well with theoretical expectations.

  1. Comparison of a new cobinamide-based method to a standard laboratory method for measuring cyanide in human blood.

    PubMed

    Swezey, Robert; Shinn, Walter; Green, Carol; Drover, David R; Hammer, Gregory B; Schulman, Scott R; Zajicek, Anne; Jett, David A; Boss, Gerry R

    2013-01-01

    Most hospital laboratories do not measure blood cyanide concentrations, and samples must be sent to reference laboratories. A simple method is needed for measuring cyanide in hospitals. The authors previously developed a method to quantify cyanide based on the high binding affinity of the vitamin B12 analog, cobinamide, for cyanide and a major spectral change observed for cyanide-bound cobinamide. This method is now validated in human blood, and the findings include a mean inter-assay accuracy of 99.1%, precision of 8.75% and a lower limit of quantification of 3.27 µM cyanide. The method was applied to blood samples from children treated with sodium nitroprusside and it yielded measurable results in 88 of 172 samples (51%), whereas the reference laboratory yielded results in only 19 samples (11%). In all 19 samples, the cobinamide-based method also yielded measurable results. The two methods showed reasonable agreement when analyzed by linear regression, but not when analyzed by a standard error of the estimate or paired t-test. Differences in results between the two methods may be because samples were assayed at different times on different sample types. The cobinamide-based method is applicable to human blood, and can be used in hospital laboratories and emergency rooms.

  2. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on the outer race, inner race and balls are processed in the time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, peak, crest factor, and kurtosis in the time domain; amplitudes of the outer race, inner race, and ball defect frequencies in the FFT and HFRT spectra in the frequency domain; and the peak of the HHT spectrum in the time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, the Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of a DIP, and the DIPs with positive gain are again fused into MD by using the Gram-Schmidt orthogonalization process (GSP) in order to calculate Gram-Schmidt vectors (GSVs). Among the remaining DIPs, the sign of the GSVs of the frequency domain DIPs is checked to classify the probable defect. The approach uses the MTGS method for combining the damage parameters and, in conjunction with the GSVs, classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type and shape/configuration of the defect. The proposed methodology is also applied to vibration data acquired from an accelerated life test on bearings, which established the applicability of the method to naturally induced and naturally progressed defects. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real-time environment.
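
    The fusion step is the classical Mahalanobis distance measured against a reference (healthy) group. A minimal sketch with random stand-in data follows, using the usual Mahalanobis-Taguchi scaling by the number of variables; the 11-DIP dimensionality follows the abstract, everything else is illustrative.

        import numpy as np

        def mahalanobis_distance(x, healthy):
            # Fuse the damage identification parameters (DIPs) of one
            # observation into a single scalar MD against the healthy group.
            mu = healthy.mean(axis=0)
            cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))
            d = x - mu
            return float(d @ cov_inv @ d) / len(x)  # MT-style scaling

        rng = np.random.default_rng(0)
        healthy = rng.normal(0.0, 1.0, size=(50, 11))  # 50 baselines x 11 DIPs
        faulty = rng.normal(3.0, 1.0, size=11)
        print(mahalanobis_distance(faulty, healthy))   # MD >> 1 flags damage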

  3. Hybrid modelling framework by using mathematics-based and information-based methods

    NASA Astrophysics Data System (ADS)

    Ghaboussi, J.; Kim, J.; Elnashai, A.

    2010-06-01

    Mathematics-based computational mechanics involves idealization in going from the observed behaviour of a system to mathematical equations representing the underlying mechanics of that behaviour. Idealization may lead to mathematical models that exclude certain aspects of the complex behaviour that may be significant. An alternative approach is data-centric modelling, which constitutes a fundamental shift from mathematical equations to data that contain the required information about the underlying mechanics. However, purely data-centric methods often fail for infrequent events and large state changes. In this article, a new hybrid modelling framework is proposed to improve accuracy in the simulation of real-world systems. In the hybrid framework, a mathematical model is complemented by information-based components. The role of the informational components is to model aspects which the mathematical model leaves out. The missing aspects are extracted and identified through Autoprogressive Algorithms. The proposed hybrid modelling framework has a wide range of potential applications for natural and engineered systems. Its potential is illustrated through modelling the highly pinched hysteretic behaviour of beam-to-column connections in steel frames.

  4. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    An approach involving the measurement of hydrogen evolution by test organisms was used to detect and enumerate various members of the Enterobacteriaceae group. The experimental setup for measuring hydrogen evolution consisted of a test tube containing two electrodes plus broth and organisms. The test tube was kept in a water bath at a temperature of 35 C. It is pointed out that the hydrogen-sensing method, coupled with the pressure transducer technique reported by Wilkins (1974), could be used in various experiments in which gas production by microorganisms is being measured.

  5. System and method for attitude determination based on optical imaging

    NASA Technical Reports Server (NTRS)

    Junkins, John L. (Inventor); Pollock, Thomas C. (Inventor); Mortari, Daniele (Inventor)

    2003-01-01

    A method and apparatus are provided for receiving a first set of optical data from a first field of view and receiving a second set of optical data from a second field of view. A portion of the first set of optical data is communicated and a portion of the second set of optical data is reflected, both toward an optical combiner. The optical combiner then focuses the portions onto the image plane such that information at the image plane that is associated with the first and second fields of view is received by an optical detector and used to determine an attitude characteristic.

  6. Optical tissue phantoms based on spin coating method

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Ha, Myungjin; Yu, Sung Kon; Radfar, Edalat; Jun, Eunkwon; Lee, Nara; Jung, Byungjo

    2015-03-01

    The fabrication of optical tissue phantoms (OTPs) simulating the whole skin structure has been regarded as laborious and time-consuming work. This study fabricated a multilayer OTP optically and structurally simulating the epidermis-dermis structure, including a blood vessel. A spin coating method was used to produce the thin layer mimicking the epidermal layer and was then optimized for the reference epoxy and silicone matrices. The adequacy of both materials for phantom fabrication was considered by comparing the fabrication results. In addition, the similarity between the OTP and biological tissue in optical properties and thickness was measured to evaluate the fabrication process.

  7. Differentiated protection method in passive optical networks based on OPEX

    NASA Astrophysics Data System (ADS)

    Zhang, Zhicheng; Guo, Wei; Jin, Yaohui; Sun, Weiqiang; Hu, Weisheng

    2011-12-01

    Reliable service delivery is becoming more significant due to society's increased dependency on electronic services. As the capability of PONs increases, both residential and business customers may be included in a single PON. Meanwhile, operational expenditure (OPEX) has been proven to be a very important factor in the total cost for a telecommunication operator. Thus, in this paper, we present a partial-protection PON architecture and compare the OPEX of fully duplicated and partly duplicated protection for ONUs with different distributed fiber lengths, reliability requirements and penalty costs per hour. Finally, we propose a differentiated protection method to minimize OPEX.

  8. Pavement crack identification based on automatic threshold iterative method

    NASA Astrophysics Data System (ADS)

    Lu, Guofeng; Zhao, Qiancheng; Liao, Jianguo; He, Yongbiao

    2017-01-01

    Crack detection is an important issue in concrete infrastructure. First, the accuracy with which crack geometry parameters are measured, and hence the accuracy of the detection system, is directly affected by the extraction accuracy; because cracks are unpredictable, random and irregular, it is difficult to establish a recognition model for them. Second, various kinds of image noise, caused by irregular lighting conditions, dark spots, freckles and bumps, influence the crack detection accuracy. In this paper, the peak threshold selection method is improved: enhancement, smoothing and denoising are performed before iterative threshold selection, so that the threshold value can be selected automatically, in real time and stably.
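
    The automatic threshold iteration the title refers to is easy to sketch: split the pixels at a guess T, recompute T as the midpoint of the two class means, and stop when it settles (Ridler-Calvard style). The preprocessing shown in comments and the file name are illustrative; the paper's improved peak-threshold variant adds its own refinements.

        import numpy as np

        def iterative_threshold(gray, eps=0.5):
            # Automatic threshold iteration: T converges to the midpoint of
            # the means of the two pixel classes it separates.
            t = float(gray.mean())
            while True:
                lo, hi = gray[gray <= t], gray[gray > t]
                t_new = 0.5 * (lo.mean() + hi.mean())
                if abs(t_new - t) < eps:
                    return t_new
                t = t_new

        # Cracks are darker than pavement, so keep pixels below the threshold:
        # gray = cv2.GaussianBlur(cv2.imread("road.png", 0), (5, 5), 0)  # denoise
        # mask = gray < iterative_threshold(gray)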

  9. Determining the base resistance of InP HBTs: An evaluation of methods and structures

    NASA Astrophysics Data System (ADS)

    Nardmann, Tobias; Krause, Julia; Pawlak, Andreas; Schroter, Michael

    2016-09-01

    Many different methods can be found in the literature for determining both the internal and external base series resistance based on single transistor terminal characteristics. Those methods are not equally reliable or applicable for all technologies, device sizes and speeds. In this review, the most common methods are evaluated regarding their suitability for InP heterojunction bipolar transistors (HBTs) based on both measured and simulated data. Using data generated by a sophisticated physics-based compact model allows an evaluation of the extraction method precision by comparing the extracted parameter value to its known value. Based on these simulations, this study provides insight into the limitations of the applied methods, causes for errors and possible error mitigation. In addition to extraction methods based on just transistor terminal characteristics, test structures for separately determining the components of the base resistance from sheet and specific contact resistances are discussed and applied to serve as reference for the experimental evaluation.

  10. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on developments in stereo mapping and on the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that realizes interaction between AutoCAD and a digital photogrammetry system is offered. An experiment was carried out to verify the feasibility of the scheme using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation); the experimental results show that the scheme is feasible and of great significance for integrating data acquisition and editing.

  11. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
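
    For concreteness, this is what the three operators named above look like in a minimal real-coded genetic algorithm minimizing a generic objective. The population size, rates, bounds and test function are arbitrary stand-ins; a structural problem would substitute its own fitness (e.g. negative fundamental frequency with constraint penalties).

        import random

        def genetic_minimize(fitness, n_vars, pop=40, gens=100, pm=0.1):
            # Reproduction (elitism), one-point crossover and mutation.
            P = [[random.uniform(-5, 5) for _ in range(n_vars)] for _ in range(pop)]
            for _ in range(gens):
                P.sort(key=fitness)
                elite = P[: pop // 2]                  # reproduction
                children = []
                while len(children) < pop - len(elite):
                    a, b = random.sample(elite, 2)
                    cut = random.randrange(1, n_vars)  # crossover point
                    child = a[:cut] + b[cut:]
                    if random.random() < pm:           # mutation
                        child[random.randrange(n_vars)] += random.gauss(0, 0.5)
                    children.append(child)
                P = elite + children
            return min(P, key=fitness)

        # Toy objective: minimize sum of squares; optimum is near the origin.
        print(genetic_minimize(lambda x: sum(v * v for v in x), n_vars=2))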

  12. METHOD FOR ANNEALING AND ROLLING ZIRCONIUM-BASE ALLOYS

    DOEpatents

    Picklesimer, M.L.

    1959-07-14

    A fabrication procedure is presented for alpha-stabilized zirconium-base alloys, in particular Zircaloy-2. The alloy is initially worked at a temperature outside the alpha-plus-beta range (810 to 970 deg C), held at a temperature above 970 deg C for 30 minutes and cooled rapidly. The alloy is then cold-worked to a reduction of at least 20% and annealed at a temperature of from 700 to 810 deg C. This procedure serves both to prevent the formation of stringers and to provide a randomly oriented crystal structure.

  13. State Machine Based Method for Consolidating Vehicle Data

    NASA Astrophysics Data System (ADS)

    Dittmann, Florian; Geramani, Konstantina; Fäßler, Victor; Damiani, Sergio

    The increasing number of information and assistance systems built into modern vehicles raises the demand for appropriate preparation of their output. On one side, crucial information has to be emphasized and prioritized, as well as relevant changes in the driving situation and surrounding environment have to be recognized and transmitted. On the other side, marginal alterations should be suitably filtered, while duplications of messages should be avoided completely. These issues hold in particular when assistance systems overlap each other in terms of their situation coverage. In this work it is described how such a consolidation of information can be meaningfully supported. The method is integrated in a system that collects messages from various data acquisition units and prepares them to be forwarded. Thus, subsequent actions can be taken on a consolidated and tailored set of messages. Situation assessment modules that rely on immediate estimation of situations are primary recipients of the messages. To meet their major demand—rapid decision taking—the method generates events by applying the concept of state machines. The state machines form the anchor to merge and fuse input, track changes, and generate output messages on higher levels. Besides this feature of consolidating vehicle data, the state machines also facilitate the transformation of continuous data to event messages for the rapid decision taking. Eventually, comprehensive driver support is facilitated, also enabling unprecedented features to improve road safety by decreasing the cognitive workload of drivers.
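
    One way to picture the approach: a small state machine per monitored quantity coarsens a continuous input into discrete states and emits an event only on a state change, so duplicates and marginal alterations are filtered before forwarding to situation assessment. The states and thresholds below are invented for illustration.

        # Illustrative per-signal state machine for message consolidation.
        class DistanceWarningFSM:
            STATES = [(10.0, "CRITICAL"), (25.0, "WARN"), (float("inf"), "OK")]

            def __init__(self):
                self.state = "OK"

            def feed(self, distance_m):
                for limit, name in self.STATES:
                    if distance_m <= limit:
                        new = name
                        break
                if new != self.state:       # suppress duplicates and jitter
                    self.state = new
                    return f"event: {new}"  # forwarded to situation assessment
                return None                 # marginal change, filtered out

        fsm = DistanceWarningFSM()
        for d in [40, 24, 23, 9, 9, 30]:
            print(d, fsm.feed(d))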

  14. Satellite image based methods for fuels maps updating

    NASA Astrophysics Data System (ADS)

    Alonso-Benito, Alfonso; Hernandez-Leal, Pedro A.; Arbelo, Manuel; Gonzalez-Calvo, Alejandro; Moreno-Ruiz, Jose A.; Garcia-Lazaro, Jose R.

    2016-10-01

    Regular updating of fuels maps is important for forest fire management. Nevertheless, complex and time-consuming field work is usually necessary for this purpose, which prevents more frequent updates. That is why assessing the usefulness of satellite data and developing remote sensing techniques that enable the automatic updating of these maps is of vital interest. In this work, we have tested the use of the spectral bands of the OLI (Operational Land Imager) sensor on board the Landsat 8 satellite for updating the fuels map of El Hierro Island (Spain). From the previously digitized map, a set of 200 reference plots for different fuel types was created. Half of the plots were randomly selected as a training set and the rest were used for validation. Six supervised and 2 unsupervised classification methods were applied, considering two levels of detail: a first level with only 5 classes (Meadow, Brushwood, Undergrowth canopy cover >50%, Undergrowth canopy cover <15%, and Xeric formations), and a second one containing 19 fuel types. The level 1 classification methods yielded an overall accuracy ranging from 44% for Parallelepiped to 84% for Maximum Likelihood. Meanwhile, the level 2 results showed, at best, an unacceptable overall accuracy of 34%, which prevents the use of these data for such a detailed characterization. Nevertheless, it has been demonstrated that under some conditions, images of medium spatial resolution, like Landsat 8-OLI, can be a valid tool for the automatic updating of fuels maps, minimizing costs and complementing traditional methodologies.

  15. PSO based Gabor wavelet feature extraction and tracking method

    NASA Astrophysics Data System (ADS)

    Sun, Hongguang; Bu, Qian; Zhang, Huijie

    2008-12-01

    This paper studies the 2D Gabor wavelet and its application to target recognition and tracking in grey-scale images. New optimization algorithms and technologies for the system realization are studied and discussed in theory and practice. The Gabor wavelet's translation, orientation and scale parameters are optimized so that it approximates a local image contour region. Sobel edge detection is used to obtain the initial position and orientation values for the optimization in order to improve convergence speed. In the wavelet feature space, a particle swarm optimization (PSO) algorithm is adopted to identify points on the secure boundary of the system, which ensures reliable convergence to the target, improves convergence speed and shortens feature extraction time. Tests on low-contrast images, carried out on a VC++ simulation platform, demonstrate the feasibility and effectiveness of the algorithm. The improved Gabor wavelet method is then adopted for target tracking, and a tracking framework is built that realizes moving-target tracking and remains steady under rotation and affine distortion.
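
    A generic PSO loop of the kind the abstract relies on, minimizing a stand-in objective: in the paper the decision variables would be the Gabor parameters (translation, orientation, scale) and the objective the match between the wavelet and the local image region. All constants here are conventional defaults, not the authors' settings.

        import random

        def pso_minimize(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5):
            # Each particle is pulled toward its own best position (Pb)
            # and the swarm's best position (gb).
            X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
            V = [[0.0] * dim for _ in range(n)]
            Pb = [x[:] for x in X]
            gb = min(Pb, key=f)[:]
            for _ in range(iters):
                for i in range(n):
                    for d in range(dim):
                        V[i][d] = (w * V[i][d]
                                   + c1 * random.random() * (Pb[i][d] - X[i][d])
                                   + c2 * random.random() * (gb[d] - X[i][d]))
                        X[i][d] += V[i][d]
                    if f(X[i]) < f(Pb[i]):
                        Pb[i] = X[i][:]
                gb = min(Pb + [gb], key=f)[:]
            return gb

        print(pso_minimize(lambda x: sum(v * v for v in x), dim=2))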

  16. Method for forming bismuth-based superconducting ceramics

    DOEpatents

    Maroni, Victor A.; Merchant, Nazarali N.; Parrella, Ronald D.

    2005-05-17

    A method for reducing the concentration of non-superconducting phases during the heat treatment of Pb doped Ag/Bi-2223 composites having Bi-2223 and Bi-2212 superconducting phases is disclosed. A Pb doped Ag/Bi-2223 composite having Bi-2223 and Bi-2212 superconducting phases is heated in an atmosphere having an oxygen partial pressure not less than about 0.04 atmospheres and the temperature is maintained at the lower of a non-superconducting phase take-off temperature and the Bi-2223 superconducting phase grain growth take-off temperature. The oxygen partial pressure is varied and the temperature is varied between about 815.degree. C. and about 835.degree. C. to produce not less than 80 percent conversion to Pb doped Bi-2223 superconducting phase and not greater than about 20 volume percent non-superconducting phases. The oxygen partial pressure is preferably varied between about 0.04 and about 0.21 atmospheres. A product by the method is disclosed.

  17. Speckle reduction methods in laser-based picture projectors

    NASA Astrophysics Data System (ADS)

    Akram, M. Nadeem; Chen, Xuyuan

    2016-02-01

    Laser sources have been promised for many years to be better light sources than traditional lamps or light-emitting diodes (LEDs) for projectors: they enable projectors with a wide colour gamut for vivid images, high brightness and high contrast for the best picture quality, a long lifetime for maintenance-free operation, mercury-free construction, and low power consumption for a green environment. A major technological obstacle to using lasers for projection has been the speckle noise caused by the coherent nature of lasers. For speckle reduction, current state-of-the-art solutions apply moving parts with large physical space demands. Solutions beyond the state of the art need to be developed, such as integrated optical components, hybrid MOEMS devices, and active phase modulators for compact speckle reduction. In this article, the major methods reported in the literature for speckle reduction in laser projectors are presented and explained. With the advancement of semiconductor lasers at greatly reduced cost for the red, green and blue primary colours, and the methods developed for their speckle reduction, it is hoped that lasers will be widely utilized in different projector applications in the near future.

  18. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1994-01-25

    A system and a method for imaging desired surfaces of a workpiece is described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  19. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1994-01-01

    A system and a method for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  20. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1995-01-03

    A system and a method is provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  1. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1995-01-01

    A system and a method is provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  2. Stable modeling based control methods using a new RBF network.

    PubMed

    Beyhan, Selami; Alci, Musa

    2010-10-01

    This paper presents a novel model with radial basis functions (RBFs), which is applied successively for online stable identification and control of nonlinear discrete-time systems. First, the proposed model is utilized for direct inverse modeling of the plant to generate the control input where it is assumed that inverse plant dynamics exist. Second, it is employed for system identification to generate a sliding-mode control input. Finally, the network is employed to tune PID (proportional + integrative + derivative) controller parameters automatically. The adaptive learning rate (ALR), which is employed in the gradient descent (GD) method, provides the global convergence of the modeling errors. Using the Lyapunov stability approach, the boundedness of the tracking errors and the system parameters are shown both theoretically and in real time. To show the superiority of the new model with RBFs, its tracking results are compared with the results of a conventional sigmoidal multi-layer perceptron (MLP) neural network and the new model with sigmoid activation functions. To see the real-time capability of the new model, the proposed network is employed for online identification and control of a cascaded parallel two-tank liquid-level system. Even though there exist large disturbances, the proposed model with RBFs generates a suitable control input to track the reference signal better than other methods in both simulations and real time.
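
    A bare-bones version of the underlying pieces: an RBF model's forward pass and an online gradient-descent update of the output weights. The adaptive learning rate, sliding-mode and PID-tuning layers of the paper are omitted; the centers, widths and toy target below are arbitrary.

        import numpy as np

        def rbf_forward(x, centers, widths, weights):
            # y = sum_k w_k * exp(-||x - c_k||^2 / (2 s_k^2))
            phi = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
            return phi @ weights, phi

        def gd_weight_step(weights, phi, y, y_hat, lr=0.05):
            # One GD step on the squared error; the paper's adaptive learning
            # rate would replace the fixed lr here.
            return weights + lr * (y - y_hat) * phi

        centers = np.array([[0.0], [0.5], [1.0]])
        widths = np.array([0.3, 0.3, 0.3])
        w = np.zeros(3)
        for _ in range(200):                       # fit y = x^2 on three points
            for xv in (0.1, 0.4, 0.9):
                y_hat, phi = rbf_forward(np.array([xv]), centers, widths, w)
                w = gd_weight_step(w, phi, xv ** 2, y_hat)
        print(rbf_forward(np.array([0.4]), centers, widths, w)[0])  # approx 0.16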

  3. Energetics-Based Methods for Protein Folding and Stability Measurements

    NASA Astrophysics Data System (ADS)

    Geer, M. Ariel; Fitzgerald, Michael C.

    2014-06-01

    Over the past 15 years, a series of energetics-based techniques have been developed for the thermodynamic analysis of protein folding and stability. These techniques include Stability of Unpurified Proteins from Rates of amide H/D Exchange (SUPREX), pulse proteolysis, Stability of Proteins from Rates of Oxidation (SPROX), slow histidine H/D exchange, lysine amidination, and quantitative cysteine reactivity (QCR). The above techniques, which are the subject of this review, all utilize chemical or enzymatic modification reactions to probe the chemical denaturant- or temperature-induced equilibrium unfolding properties of proteins and protein-ligand complexes. They employ various mass spectrometry-, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE)-, and optical spectroscopy-based readouts that are particularly advantageous for high-throughput and in some cases multiplexed analyses. This has created the opportunity to use protein folding and stability measurements in new applications such as in high-throughput screening projects to identify novel protein ligands and in mode-of-action studies to identify protein targets of a particular ligand.

  4. Spectral methods and cluster structure in correlation-based networks

    NASA Astrophysics Data System (ADS)

    Heimo, Tapio; Tibély, Gergely; Saramäki, Jari; Kaski, Kimmo; Kertész, János

    2008-10-01

    We investigate how, in complex systems, the eigenpairs of the matrices derived from the correlations of multichannel observations reflect the cluster structure of the underlying networks. For this we use daily return data from the NYSE and focus specifically on the spectral properties of the weight matrix W_ij = |C_ij| - δ_ij and the diffusion matrix D_ij = W_ij/s_j - δ_ij, where C is the correlation matrix and s_i = Σ_j W_ij is the strength of node i. The eigenvalues (and corresponding eigenvectors) of the weight matrix are ranked in descending order. As in the earlier observations, the first eigenvector stands for a measure of the market correlations. Its components are, to first approximation, equal to the strengths of the nodes, and there is a second order, roughly linear, correction. The high ranking eigenvectors, excluding the highest ranking one, are usually assigned to market sectors and industrial branches. Our study shows that for both weight and diffusion matrices the eigenpair analysis is not capable of easily deducing the cluster structure of the network without a priori knowledge. In addition we have studied the clustering of stocks using the asset graph approach with and without spectrum based noise filtering. It turns out that asset graphs are quite insensitive to noise and there is no sharp percolation transition as a function of the ratio of bonds included, thus no natural threshold value for that ratio seems to exist. We suggest that these observations can be of use for other correlation based networks as well.
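
    Under the reconstruction above, the two matrices are straightforward to build. The sketch below does so for a random stand-in return matrix (20 assets, 250 days) and extracts the eigenpairs the paper examines; the data shape is an assumption for illustration.

        import numpy as np

        def weight_and_diffusion(C):
            # W_ij = |C_ij| - delta_ij ;  s_j = sum_i W_ij ;
            # D_ij = W_ij / s_j - delta_ij
            n = len(C)
            W = np.abs(C) - np.eye(n)
            s = W.sum(axis=0)
            D = W / s - np.eye(n)
            return W, D

        returns = np.random.default_rng(0).normal(size=(20, 250))
        C = np.corrcoef(returns)              # 20 x 20 correlation matrix
        W, D = weight_and_diffusion(C)
        vals, vecs = np.linalg.eig(D)         # eigenpairs examined in the paper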

  5. Modeling Damage in Composite Materials Using an Enrichment Based Multiscale Method

    DTIC Science & Technology

    2015-03-01

    Multiscale Enrichment Technique: the approach to implementing structural based enrichment varies depending on the governing method. ... Using the enrichment method without any damaged RVEs still reduced the error by 6% over the homogenization approaches. When using ... (Technical Report ARWSB-TR-15002)

  6. Evaluation of contents-based image retrieval methods for a database of logos on drug tablets

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Hardy, Huub; Poortman, Anneke; Bijhold, Jurrien

    2001-02-01

    In this research an evaluation has been made of different methods for content-based image retrieval of logos on drug tablets. On a database of 432 illicitly produced tablets (mostly containing MDMA), we have compared different retrieval methods. Two of these methods were available from the commercial packages QBIC and Imatch, where the exact implementation of the content-based image retrieval methods is not known. We compared the results for this database with the MPEG-7 shape comparison methods, which are the contour-shape, bounding-box and region-based shape methods. In addition, we have tested the log-polar method that is available from our own research.

  7. A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points.

    PubMed

    Yang, Xiaopeng; Yu, Hee Chul; Choi, Younggeun; Lee, Wonsup; Wang, Baojian; Yang, Jaedo; Hwang, Hongpil; Kim, Ji Hyun; Song, Jisoo; Cho, Baik Hwan; You, Heecheon

    2014-01-01

    The present study developed a hybrid semi-automatic method to extract the liver from abdominal computerized tomography (CT) images. The proposed hybrid method consists of a customized fast-marching level-set method for detection of an optimal initial liver region from multiple seed points selected by the user and a threshold-based level-set method for extraction of the actual liver region based on the initial liver region. The performance of the hybrid method was compared with that of the 2D region growing method implemented in OsiriX using abdominal CT datasets of 15 patients. The hybrid method showed a significantly higher accuracy in liver extraction (similarity index, SI=97.6 ± 0.5%; false positive error, FPE=2.2 ± 0.7%; false negative error, FNE=2.5 ± 0.8%; average symmetric surface distance, ASD=1.4 ± 0.5 mm) than the 2D region growing method (SI=94.0 ± 1.9%; FPE=5.3 ± 1.1%; FNE=6.5 ± 3.7%; ASD=6.7 ± 3.8 mm). The total liver extraction time per CT dataset of the hybrid method (77 ± 10 s) is significantly less than that of the 2D region growing method (575 ± 136 s). The interaction time per CT dataset between the user and the computer for the hybrid method (28 ± 4 s) is significantly shorter than for the 2D region growing method (484 ± 126 s). The proposed hybrid method was found preferable for liver segmentation in preoperative virtual liver surgery planning.

  8. Research on palmprint identification method based on quantum algorithms.

    PubMed

    Li, Hui; Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  9. Nanotunneling Junction-based Hyperspectal Polarimetric Photodetector and Detection Method

    NASA Technical Reports Server (NTRS)

    Son, Kyung-ah (Inventor); Moon, Jeongsun J. (Inventor); Chattopadhyay, Goutam (Inventor); Liao, Anna (Inventor); Ting, David (Inventor)

    2009-01-01

    A photodetector, detector array, and method of operation thereof in which nanojunctions are formed by crossing layers of nanowires. The crossing nanowires are separated by a few nm thick electrical barrier layer which allows tunneling. Each nanojunction is coupled to a slot antenna for efficient and frequency-selective coupling to photo signals. The nanojunctions formed at the intersection of the crossing wires defines a vertical tunneling diode that rectifies the AC signal from a coupled antenna and generates a DC signal suitable for reforming a video image. The nanojunction sensor allows multi/hyper spectral imaging of radiation within a spectral band ranging from terahertz to visible light, and including infrared (IR) radiation. This new detection approach also offers unprecedented speed, sensitivity and fidelity at room temperature.

  10. Phase measurement profilometry based on a virtual reference plane method

    NASA Astrophysics Data System (ADS)

    Ren, Hongbing; Lee, Jinlong; Gao, Xiaorong

    2016-09-01

    In phase measurement profilometry (PMP), the setting of the reference plane plays an important role, and capturing the grating fringe projected onto the reference plane is a critical step. However, it is sometimes difficult to choose and place the reference plane in practical applications. In this paper, a virtual reference plane is introduced into PMP, with which 3D measurement can be realized without using a physical reference plane. The virtual reference plane is generated by extracting a partial area of the deformed fringe image that corresponds to a planar region and employing an interpolation algorithm. The method is verified theoretically and through simulation experiments, providing a new option for actual measurement by PMP.

  11. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation due to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and the Grover algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm only needs on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  12. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  13. Well casing-based geophysical sensor apparatus, system and method

    DOEpatents

    Daily, William D.

    2010-03-09

    A geophysical sensor apparatus, system, and method for use in, for example, oil well operations, and in particular using a network of sensors emplaced along and outside oil well casings to monitor critical parameters in an oil reservoir and provide geophysical data remote from the wells. Centralizers are affixed to the well casings and the sensors are located in the protective spheres afforded by the centralizers to keep from being damaged during casing emplacement. In this manner, geophysical data may be detected of a sub-surface volume, e.g. an oil reservoir, and transmitted for analysis. Preferably, data from multiple sensor types, such as ERT and seismic data are combined to provide real time knowledge of the reservoir and processes such as primary and secondary oil recovery.

  14. Formal Methods for Autonomic and Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Swarms of intelligent rovers and spacecraft are being considered for a number of future NASA missions. These missions will provide NASA scientists and explorers greater flexibility and the chance to gather more science than traditional single spacecraft missions. These swarms of spacecraft are intended to operate for long periods of time without contact with the Earth. To do this, they must be highly autonomous, have autonomic properties and utilize sophisticated artificial intelligence. The Autonomous Nano Technology Swarm (ANTS) mission is an example of the swarm type of missions NASA is considering. This mission will explore the asteroid belt using an insect colony analogy, cataloging the mass, density, morphology, and chemical composition of the asteroids, including any anomalous concentrations of specific minerals. Verifying such a system would be a huge task. This paper discusses ongoing work to develop a formal method for verifying swarm and autonomic systems.

  15. 3D model retrieval method based on mesh segmentation

    NASA Astrophysics Data System (ADS)

    Gan, Yuanchao; Tang, Yan; Zhang, Qingchen

    2012-04-01

    In the process of feature description and extraction, current 3D model retrieval algorithms focus on the global features of 3D models but ignore the combination of global and local features. For this reason, they perform less effectively on models with similar global shape but different local shape. This paper proposes a novel algorithm for 3D model retrieval based on mesh segmentation. The key idea is to extract the structural feature and the local shape feature of 3D models, and then to compare the similarities of the two characteristics and the total similarity between the models. A system realizing this approach was built and tested on a database of 200 objects and achieved the expected results. The results show that the proposed algorithm effectively improves precision and recall.

  16. PCNN document segmentation method based on bacterial foraging optimization algorithm

    NASA Astrophysics Data System (ADS)

    Liao, Yanping; Zhang, Peng; Guo, Qiang; Wan, Jian

    2014-04-01

    Pulse Coupled Neural Networks (PCNNs) are widely used in the field of image processing, but properly defining the relevant parameters remains a difficult task in applied PCNN research; so far, determining the parameters of the model requires extensive experimentation. To deal with this problem, a document segmentation method based on an improved PCNN is proposed. It uses the maximum entropy function as the fitness function of a bacterial foraging optimization algorithm, which searches for the optimal parameters and eliminates the trouble of setting the experimental parameters manually. Experimental results show that the proposed algorithm can effectively complete document segmentation, and the segmentation result is better than those of the contrast algorithms.

  17. Tetraethyl orthosilicate-based glass composition and method

    DOEpatents

    Wicks, G.G.; Livingston, R.R.; Baylor, L.C.; Whitaker, M.J.; O'Rourke, P.E.

    1997-06-10

    A tetraethyl orthosilicate-based, sol-gel glass composition with additives selected for various applications is described. The composition is made by mixing ethanol, water, and tetraethyl orthosilicate, adjusting the pH into the acid range, and aging the mixture at room temperature. The additives, such as an optical indicator, filler, or catalyst, are then added to the mixture to form the composition which can be applied to a substrate before curing. If the additive is an indicator, the light-absorbing characteristics of which vary upon contact with a particular analyte, the indicator can be applied to a lens, optical fiber, reagent strip, or flow cell for use in chemical analysis. Alternatively, an additive such as alumina particles is blended into the mixture to form a filler composition for patching cracks in metal, glass, or ceramic piping. 12 figs.

  18. A T Matrix Method Based upon Scalar Basis Functions

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Kahnert, F. M.; Mishchenko, Michael I.

    2013-01-01

    A surface integral formulation is developed for the T matrix of a homogeneous and isotropic particle of arbitrary shape, which employs scalar basis functions represented by the translation matrix elements of the vector spherical wave functions. The formulation begins with the volume integral equation for scattering by the particle, which is transformed so that the vector and dyadic components in the equation are replaced with associated dipole and multipole level scalar harmonic wave functions. The approach leads to a volume integral formulation for the T matrix, which can be extended, by use of Green's identities, to the surface integral formulation. The result is shown to be equivalent to the traditional surface integral formulas based on the VSWF basis.

  19. Tetraethyl orthosilicate-based glass composition and method

    DOEpatents

    Wicks, George G.; Livingston, Ronald R.; Baylor, Lewis C.; Whitaker, Michael J.; O'Rourke, Patrick E.

    1997-01-01

    A tetraethyl orthosilicate-based, sol-gel glass composition with additives selected for various applications. The composition is made by mixing ethanol, water, and tetraethyl orthosilicate, adjusting the pH into the acid range, and aging the mixture at room temperature. The additives, such as an optical indicator, filler, or catalyst, are then added to the mixture to form the composition which can be applied to a substrate before curing. If the additive is an indicator, the light-absorbing characteristics of which vary upon contact with a particular analyte, the indicator can be applied to a lens, optical fiber, reagent strip, or flow cell for use in chemical analysis. Alternatively, an additive such as alumina particles is blended into the mixture to form a filler composition for patching cracks in metal, glass, or ceramic piping.

  20. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    Developments in video and computing technologies and in computer vision make automatic fire detection from video possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour-based fire detection algorithm. Colour information alone, however, is not enough to detect fire reliably, mainly because the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels, averaged over a series of several frames, is therefore used to separate such objects from fire. The algorithm works robustly and was implemented as a computer program using the OpenCV library.
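
    A minimal sketch of this colour-plus-flicker combination, using OpenCV as the abstract mentions; the colour rule, window length and variance threshold are illustrative assumptions rather than the paper's values.

    ```python
    import cv2
    import numpy as np

    def fire_coloured(frame):
        # Rule-of-thumb fire colour test: bright, red-dominant pixels.
        b = frame[:, :, 0].astype(np.int32)
        g = frame[:, :, 1].astype(np.int32)
        r = frame[:, :, 2].astype(np.int32)
        return (r > 180) & (r >= g) & (g >= b)

    def fire_masks(video_path, window=10, var_thresh=100.0):
        """Yield one binary mask per frame: fire-coloured pixels whose
        intensity also varies strongly over the last `window` frames."""
        cap = cv2.VideoCapture(video_path)
        history = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
            history.append(gray)
            if len(history) > window:
                history.pop(0)
            if len(history) == window:
                # Temporal intensity variation averaged over the frame series:
                # fire-coloured but static objects show little variation.
                variation = np.var(np.stack(history), axis=0)
                mask = fire_coloured(frame) & (variation > var_thresh)
                yield mask.astype(np.uint8) * 255
        cap.release()
    ```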

  1. Understanding exoplanet populations with simulation-based methods

    NASA Astrophysics Data System (ADS)

    Morehead, Robert Charles

    The Kepler candidate catalog represents an unprecedented sample of exoplanet host stars. This dataset is ideal for probing the populations of exoplanet systems and exploring their architectures. Confirming transiting exoplanet candidates through traditional follow-up methods is challenging, especially for faint host stars. Most of Kepler's validated planets relied on statistical methods to separate true planets from false-positives. Multiple transiting planet systems (MTPS) have been previously shown to have low false-positive rates and over 850 planets in MTPSs have been statistically validated so far. We show that the period-normalized transit duration ratio (xi) offers additional information that can be used to establish the planetary nature of these systems. We briefly discuss the observed distribution of xi for the Q1-Q17 Kepler Candidate Search. We also use xi to develop a Bayesian statistical framework combined with Monte Carlo methods to determine which pairs of planet candidates in an MTPS are consistent with the planet hypothesis for a sample of 862 MTPSs that include candidate planets, confirmed planets, and known false-positives. This analysis proves to be efficient and advantageous in that it only requires catalog-level bulk candidate properties and galactic population modeling to compute the probabilities of a myriad of feasible scenarios composed of background and companion stellar blends in the photometric aperture, without needing additional observational follow-up. Our results agree with the previous results of a low false-positive rate in the Kepler MTPSs. This implies, independently of any other estimates, that most of the MTPSs detected by Kepler are planetary in nature, but that a substantial fraction could be orbiting stars other than the putative target star, and therefore may be subject to significant error in the inferred planet parameters resulting from unknown or mismeasured stellar host attributes. We also apply approximate

  2. Module Based Differential Coexpression Analysis Method for Type 2 Diabetes

    PubMed Central

    Yuan, Lin; Zheng, Chun-Hou; Xia, Jun-Feng; Huang, De-Shuang

    2015-01-01

    More and more studies have shown that many complex diseases arise from the joint alteration of numerous genes. Genes often coordinate as functional biological pathways or networks and are highly correlated. Differential coexpression analysis, a more comprehensive technique than differential expression analysis, was proposed to study gene regulatory networks and the biological pathways of phenotypic changes by measuring changes in gene correlation between disease and normal conditions. In this paper, we propose a gene differential coexpression analysis algorithm at the level of gene sets and apply it to a publicly available type 2 diabetes (T2D) expression dataset. First, we calculate biweight midcorrelation coefficients between all gene pairs. Then, we select informative correlation pairs using the "differential coexpression threshold" strategy. Finally, we identify the differential coexpression gene modules using the maximum clique concept and the k-clique algorithm. We apply the proposed differential coexpression analysis method to simulated data and the T2D data. Two differential coexpression gene modules related to T2D were detected, which should be useful for exploring the biological function of the related genes. PMID:26339648
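
    A hedged sketch of the first two steps, assuming the standard biweight midcorrelation formula and an illustrative threshold value; the selected pairs would then be assembled into modules by clique finding, which is not reproduced here.

    ```python
    import numpy as np

    def bicor(x, y, c=9.0):
        """Biweight midcorrelation of two 1-D arrays (standard formula)."""
        def robust_centred(v):
            med = np.median(v)
            mad = np.median(np.abs(v - med)) + 1e-12
            u = (v - med) / (c * mad)
            w = (1 - u ** 2) ** 2 * (np.abs(u) < 1)
            return (v - med) * w
        a, b = robust_centred(x), robust_centred(y)
        return (a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum())

    def differential_pairs(expr_disease, expr_normal, threshold=0.7):
        """expr_*: (n_genes, n_samples) arrays for the two conditions. Returns
        gene pairs whose coexpression changes by more than `threshold` (the
        'differential coexpression threshold'; 0.7 is an assumed value)."""
        n = expr_disease.shape[0]
        pairs = []
        for i in range(n):
            for j in range(i + 1, n):
                dc = abs(bicor(expr_disease[i], expr_disease[j]) -
                         bicor(expr_normal[i], expr_normal[j]))
                if dc > threshold:
                    pairs.append((i, j, dc))
        # The pair list defines a graph; maximal cliques / k-cliques of that
        # graph (e.g. via networkx) would give the candidate modules.
        return pairs
    ```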

  3. Alternative processing methods for tungsten-base composite materials

    SciTech Connect

    Ohriner, E.K.; Sikka, V.K.

    1996-06-01

    Tungsten composite materials contain large amounts of tungsten distributed in a continuous matrix phase. Current commercial materials include the tungsten-nickel-iron with cobalt replacing some or all of the iron, and also tungsten-copper materials. Typically, these are fabricated by liquid-phase sintering of blended powders. Liquid-phase sintering offers the advantages of low processing costs, established technology, and generally attractive mechanical properties. However, liquid-phase sintering is restricted to a very limited number of matrix alloying elements and a limited range of tungsten and alloying compositions. In the past few years, there has been interest in a wider range of matrix materials that offer the potential for superior composite properties. These must be processed by solid-state processes and at sufficiently low temperatures to avoid undesired reactions between the tungsten and the matrix phase. These processes, in order of decreasing process temperature requirements, include hot isostatic pressing (HIPing), hot extrusion, and dynamic compaction. The HIPing and hot extrusion processes have also been used to improve mechanical properties of conventional liquid-phase-sintered materials. The results of laboratory-scale investigations of solid-state consolidation of a variety of matrix materials, including titanium, hafnium, nickel aluminide, and steels are reviewed. The potential advantages and disadvantages of each of the possible alternative consolidation processes are identified. Post-consolidation processing to control microstructure and macrostructure is discussed, including novel methods of controlling microstructure alignment.

  4. Alternative processing methods for tungsten-base composite materials

    SciTech Connect

    Ohriner, E.K.; Sikka, V.K.

    1995-12-31

    Tungsten composite materials contain large amounts of tungsten distributed in a continuous matrix phase. Current commercial materials include the tungsten-nickel-iron with cobalt replacing some or all of the iron, and also tungsten-copper materials. Typically, these are fabricated by liquid-phase sintering of blended powders. Liquid-phase sintering offers the advantages of low processing costs, established technology, and generally attractive mechanical properties. However, liquid-phase sintering is restricted to a very limited number of matrix alloying elements and a limited range of tungsten and alloying compositions. In the past few years, there has been interest in a wider range of matrix materials that offer the potential for superior composite properties. These must be processed by solid-state processes and at sufficiently low temperatures to avoid undesired reactions between the tungsten and the matrix phase. These processes, in order of decreasing process temperature requirements, include hot-isostatic pressing (HIPing), hot extrusion, and dynamic compaction. The HIPing and hot extrusion processes have also been used to improve mechanical properties of conventional liquid-phase-sintered materials. Results of laboratory-scale investigations of solid-state consolidation of a variety of matrix materials, including titanium, hafnium, nickel aluminide, and steels are reviewed. The potential advantages and disadvantages of each of the possible alternative consolidation processes are identified. Postconsolidation processing to control microstructure and macrostructure is discussed, including novel methods of controlling microstructure alignment.

  5. Control method for mixed refrigerant based natural gas liquefier

    SciTech Connect

    Kountz, Kenneth J.; Bishop, Patrick M.

    2003-01-01

    In a natural gas liquefaction system having a refrigerant storage circuit, a refrigerant circulation circuit in fluid communication with the refrigerant storage circuit, and a natural gas liquefaction circuit in thermal communication with the refrigerant circulation circuit, a method for liquefaction of natural gas in which pressure in the refrigerant circulation circuit is adjusted to below about 175 psig by exchange of refrigerant with the refrigerant storage circuit. A variable speed motor is started whereby operation of a compressor is initiated. The compressor is operated at full discharge capacity. Operation of an expansion valve is initiated whereby suction pressure at the suction pressure port of the compressor is maintained below about 30 psig and discharge pressure at the discharge pressure port of the compressor is maintained below about 350 psig. Refrigerant vapor is introduced from the refrigerant holding tank into the refrigerant circulation circuit until the suction pressure is reduced to below about 15 psig, after which flow of the refrigerant vapor from the refrigerant holding tank is terminated. Natural gas is then introduced into a natural gas liquefier, resulting in liquefaction of the natural gas.

  6. Printer resolution measurement based on slanted edge method

    NASA Astrophysics Data System (ADS)

    Bang, Yousun; Kim, Sang Ho; Choi, Don Chul

    2008-01-01

    Printer resolution is an important attribute for determining print quality, and it has frequently been equated with hardware optical resolution. However, the spatial addressability of hardcopy is not directly related to optical resolution, because it is affected by the printing mechanism, the media, and software data processing such as resolution enhancement techniques (RET). The international organization ISO/IEC SC28 addresses this issue and is working to develop a new metric to measure this effective resolution. As part of that development process, this paper proposes a candidate metric for measuring printer resolution. The slanted edge method has been used to evaluate image sharpness for scanners and digital still cameras; in this paper, it is applied to monochrome laser printers. A test chart is modified to reduce the effect of halftone patterns. Using a flatbed scanner, the spatial frequency response (SFR) is measured and modeled with a spline function. The frequency corresponding to 0.1 SFR is used in the metric for printer resolution. The stability of the metric is investigated in five separate experiments: (1) page to page variations, (2) different ROI locations, (3) different ROI sizes, (4) variations of toner density, and (5) correlation with visual quality. The 0.1 SFR frequencies of ten printers are analyzed. Experimental results show a strong correlation between the proposed metric and perceptual quality.
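
    A simplified sketch of the slanted-edge pipeline and the 0.1-SFR metric; a full ISO 12233-style implementation would also estimate the edge angle and bin the edge profile at sub-pixel resolution, which is omitted here.

    ```python
    import numpy as np

    def sfr_from_edge(roi):
        """roi: 2-D grayscale array containing a near-vertical slanted edge.
        Returns (frequencies in cycles/pixel, normalised SFR)."""
        esf = roi.mean(axis=0)              # coarse edge spread function
        lsf = np.gradient(esf)              # line spread function
        lsf = lsf * np.hanning(lsf.size)    # window to reduce spectral leakage
        sfr = np.abs(np.fft.rfft(lsf))
        sfr = sfr / sfr[0]                  # normalise so SFR(0) = 1
        freqs = np.fft.rfftfreq(lsf.size)
        return freqs, sfr

    def freq_at_sfr(freqs, sfr, level=0.1):
        """First frequency at which the SFR falls to the given level -- the
        candidate resolution metric described in the abstract."""
        below = np.where(sfr <= level)[0]
        return freqs[below[0]] if below.size else freqs[-1]
    ```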

  7. Nuclear-based methods for the study of selenium

    SciTech Connect

    Spyrou, N.M.; Akanle, O.A.; Dhani, A.

    1988-01-01

    The essentiality of selenium to the human being and in particular its deficiency state, associated with prolonged inadequate dietary intake, have received considerable attention. In addition, the possible relationship between selenium and cancer and the claim that selenium may possess cancer-prevention properties have focused research effort. It has been observed in a number of studies on laboratory animals that selenium supplementation protects the animals against carcinogen-induced neoplastic growth in various organ sites, reduces the incidence of spontaneous mammary tumors, and suppresses the growth of transplanted tumor cells. In these research programs on the relationship between trace element levels and senile dementia and depression and the elemental changes in blood associated with selenium supplementation in a normal group of volunteers, it became obvious that in addition to establishing normal levels of elements in the population of interest, there was a more fundamental requirement for methods to be developed that would allow the study of the distribution of selenium in the body and its binding sites. The authors propose emission tomography and perturbed angular correlation as techniques worth exploring.

  8. A rapid wire-based sampling method for DNA profiling.

    PubMed

    Chen, Tong; Catcheside, David E A; Stephenson, Alice; Hefford, Chris; Kirkbride, K Paul; Burgoyne, Leigh A

    2012-03-01

    This paper reports the results of a commission to develop a field deployable rapid short tandem repeat (STR)-based DNA profiling system to enable discrimination between tissues derived from a small number of individuals. Speed was achieved by truncation of sample preparation and field deployability by use of an Agilent 2100 Bioanalyser(TM). Human blood and tissues were stabbed with heated stainless steel wire and the resulting sample dehydrated with isopropanol prior to direct addition to a PCR. Choice of a polymerase tolerant of tissue residues and cycles of amplification appropriate for the amount of template expected yielded useful profiles with a custom-designed quintuplex primer set suitable for use with the Bioanalyser(TM). Samples stored on wires remained amplifiable for months, allowing their transportation unrefrigerated from remote locations to a laboratory for analysis using AmpFlSTR(®) Profiler Plus(®) without further processing. The field system meets the requirements for discrimination of samples from small sets and retains access to full STR profiling when required.

  9. Forecasting method in multilateration accuracy based on laser tracker measurement

    NASA Astrophysics Data System (ADS)

    Aguado, Sergio; Santolaria, Jorge; Samper, David; José Aguilar, Juan

    2017-02-01

    Multilateration based on a laser tracker (LT) requires the measurement of a set of points from three or more positions. Although the LTs' angular information is not used, multilateration still produces a volume of measurement uncertainty. This paper presents two new coefficients for determining, before performing the necessary measurements, whether the measurement of a set of points will improve or worsen the accuracy of the multilateration results, avoiding unnecessary measurements and reducing the time and economic cost required. The first, the specific measurement coefficient (MCLT), is unique to each laser tracker; it determines the relationship between the radial and angular laser tracker measurement noise. The second coefficient, β, is related to the specific measurement conditions, namely the spatial angle α between the laser tracker positions and its effect on error reduction. Both parameters, MCLT and β, are linked through the error reduction limits. Besides these, a new methodology is presented for determining the multilateration error reduction limit for an ideal laser tracker distribution and for a random one. It provides general rules and advice from synthetic tests that are validated through a real test carried out on a coordinate measuring machine.

  10. Affinity-based methods in drug-target discovery.

    PubMed

    Rylova, Gabriela; Ozdian, Tomas; Varanasi, Lakshman; Soural, Miroslav; Hlavac, Jan; Holub, Dusan; Dzubak, Petr; Hajduch, Marian

    2015-01-01

    Target discovery using the molecular approach, as opposed to the more traditional systems approach, requires the study of the cellular or biological process underlying a condition or disease. The approaches employed by the "bench" scientist may be genetic, genomic or proteomic, and each has its rightful place in the drug-target discovery process. Affinity-based proteomic techniques currently used in drug discovery draw upon several disciplines: synthetic chemistry, cell biology, biochemistry and mass spectrometry. An important component of such techniques is the probe, which is specifically designed to pick out a protein or set of proteins from amongst the varied thousands in a cell lysate. A second component, just as important, is liquid chromatography tandem mass spectrometry (LC-MS/MS). LC-MS/MS and its supporting theoretical framework have come of age, and it is the tool of choice for protein identification and quantification. These proteomic tools are critical to maintaining the drug-candidate supply in the larger context of drug discovery.

  11. A physically based method for correcting temperature profile measurements made using thermocouples

    NASA Astrophysics Data System (ADS)

    Cathles, L. Maclagan, IV; Cathles, L. M., III; Albert, M. R.

    High-frequency (diurnal) temperature variations occur simultaneously at multiple depths separated by meters of snow in at least several and probably many Arctic and Antarctic thermocouple datasets. These temperature variations cannot be caused by heat conduction from the surface, because their amplitudes are too large and there is no phase lag with depth, and they cannot be caused by heat advection, because the air flux required is greater than is available. Rather, the simultaneous temperature variations (STVs) appear to originate within the box that houses the data logger, as thermocouple-like offset voltages, wire heating or thermistor error. The STVs can be corrected by requiring that the temperatures vary smoothly with time at the greatest depth at which temperature is measured. The correction voltage determined in this fashion, when applied to the thermocouples at other depths, corrects the entire dataset. The method successfully removes STVs with a 24 hour period that are up to 3.8°C in amplitude, and is superior to the averaging techniques commonly used to correct thermocouple data because it introduces no spurious (non-physical) temperature variations. The correction method described can be applied to all thermocouple data where temperature measurements have been made at depths > ~0.5 m into the snowpack. The corrections should allow more physical process and parameter information to be extracted more confidently from existing firn temperature data. Identification of the STVs and their probable cause also suggests how better data might be collected in the future.
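
    A minimal sketch of the correction idea under the stated assumption that the deepest record should vary smoothly: the high-frequency residual at the deepest sensor is treated as a common-mode logger error and subtracted from every depth. The moving-average smoother is an illustrative choice; any low-pass filter would do.

    ```python
    import numpy as np

    def smooth(x, window=25):
        """Simple moving average as the 'smooth' reference (an assumption)."""
        kernel = np.ones(window) / window
        return np.convolve(x, kernel, mode="same")

    def correct_profiles(temps):
        """temps: array of shape (n_depths, n_times), row 0 shallowest,
        row -1 deepest. Returns the corrected temperature profiles."""
        deepest = temps[-1]
        offset = deepest - smooth(deepest)   # common-mode error vs. time
        return temps - offset                # subtract from every depth
    ```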

  12. Wurfelspiel-based training data methods for ATR

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-09-01

    A data object is constructed from a P by M Wurfelspiel matrix W by choosing an entry from each column to construct a sequence A0A1...AM-1. Each of the P^M possibilities is designed to correspond to the same category according to some chosen measure. This matrix could encode many types of data. (1) Musical fragments, all of which evoke sadness; each column entry is a 4 beat sequence with a chosen A0A1A2 thus 16 beats long (W is P by 3). (2) Paintings, all of which evoke happiness; each column entry is a layer and a given A0A1A2 is a painting constructed using these layers (W is P by 3). (3) Abstract feature vectors corresponding to action potentials evoked by a biological cell's exposure to a toxin; the action potential is divided into four relevant regions and each column entry represents the feature vector of a region, so a given A0A1A2 is an abstraction of the excitable cell's output (W is P by 4). (4) Abstract feature vectors corresponding to an object such as a face or vehicle; the object is divided into four categories, each assigned an abstract feature vector, with the resulting concatenation an abstract representation of the object (W is P by 4). All of the examples above correspond to one particular measure (sad music, happy paintings, an introduced toxin, an object to recognize) and hence, when a Wurfelspiel matrix is constructed, relevant training information for recognition is encoded that can be used in many algorithms. The focus of this paper is on the application of these ideas to automatic target recognition (ATR). In addition, we discuss a larger biologically based model of temporal cortex polymodal sensor fusion which can use the feature vectors extracted from the ATR Wurfelspiel data.

  13. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods are also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
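
    A hedged sketch of the workflow: per-species logistic habitat models, a generalized Mahalanobis distance between coefficient vectors (scaled here by the summed estimation covariances, an assumption about the paper's exact form), and hierarchical clustering of the resulting distance matrix.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.cluster.hierarchy import fcluster, linkage
    from scipy.spatial.distance import squareform

    def species_distance_matrix(X, presence_by_species):
        """X: (n_sites, n_covariates) habitat matrix; presence_by_species maps
        species name -> 0/1 presence vector over the same sites."""
        names, coefs, covs = [], [], []
        for sp, y in presence_by_species.items():
            fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
            names.append(sp)
            coefs.append(np.asarray(fit.params))
            covs.append(np.asarray(fit.cov_params()))
        n = len(names)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                d = coefs[i] - coefs[j]
                V = covs[i] + covs[j]   # summed estimation covariance (assumed)
                D[i, j] = D[j, i] = np.sqrt(d @ np.linalg.solve(V, d))
        return names, D

    # Group species from the distance matrix with hierarchical clustering:
    # names, D = species_distance_matrix(X, presence)
    # groups = fcluster(linkage(squareform(D), method="average"),
    #                   t=3, criterion="maxclust")
    ```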

  14. Monte Carlo based angular distribution estimation method of multiply scattered photons for underwater imaging

    NASA Astrophysics Data System (ADS)

    Li, Shengfu; Chen, Guanghua; Wang, Rongbo; Luo, Zhengxiong; Peng, Qixian

    2016-12-01

    This paper proposes a Monte Carlo (MC) based method for estimating the angular distribution of multiply scattered photons for underwater imaging. The method targets turbid waters. It is based on applying typical Monte Carlo ideas to the present problem by combining all the points on a spherical surface. The proposed method is validated against the numerical solution of the radiative transfer equation (RTE). Simulation results based on typical optical parameters of turbid waters show that the proposed method is effective in terms of computational speed and sensitivity.

  15. Comparison of a silver nanoparticle-based method and the modified spectrophotometric methods for assessing antioxidant capacity of rapeseed varieties.

    PubMed

    Szydłowska-Czerniak, Aleksandra; Tułodziecka, Agnieszka

    2013-12-01

    The antioxidant capacity of 15 rapeseed varieties was determined by the proposed silver nanoparticle-based (AgNP) method and three modified assays: ferric reducing antioxidant power (FRAP), 2,2'-diphenyl-1-picrylhydrazyl (DPPH) and Folin-Ciocalteu reducing capacity (FC). The average antioxidant capacities of the studied rapeseed cultivars ranged between 5261-9462, 3708-7112, 18864-31245 and 5816-9937 μmol sinapic acid (SA)/100 g for the AgNP, FRAP, DPPH and FC methods, respectively. There are significant, positive correlations between the antioxidant capacities of the studied rapeseed cultivars determined by the four analytical methods (r=0.5971-0.9149, p<0.05). The comparable precision of the proposed AgNP method (RSD=1.4-4.4%) and the modified FRAP, DPPH and FC methods (RSD=1.0-4.4%, 0.7-2.1% and 0.8-3.6%, respectively) demonstrates the benefit of the AgNP method in the routine analysis of the antioxidant capacity of rapeseed cultivars. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were used to discriminate the quality of the studied rapeseed varieties based on their antioxidant potential determined by the different analytical methods. Three main groups were identified by HCA, while the classification and characterisation of rapeseed varieties within each of these groups were obtained from PCA. The chemometric analyses demonstrated that rapeseed variety S13 had the highest antioxidant capacity; this cultivar should therefore be considered the richest source of natural antioxidants.

  16. a Range Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false-colour map depending on the distance of each point from the average plane. In this way we can represent each point of the facades by a grayscale height map. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier well known in computer graphics: the Displacement modifier makes it possible to simulate, on a planar surface, the original roughness of the object according to a grayscale map. The value of gray is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the bump map, the displacement modifier does not merely simulate the effect: it really deforms the planar surface. In this way the 3D model can be used not only in a static representation, but also in dynamic animation or interactive applications. Setting up the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the façade and also by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way we can consider the modified surface as a 3D raster representation where each quadrangular face (corresponding to a traditional pixel) is displaced according to the value of gray (= distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered as 2.5-dimensional, such as facades of architecture in city models or large-scale representations. It can also be used to represent particular effects such as the deformation of walls in a complete 3D way.
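
    An illustrative sketch of the height-map step described above: points are referenced to the façade's average plane (fitted here by SVD) and their signed distances are rasterized into a grayscale image suitable for a displacement modifier; the pixel size and the last-point-wins binning are simplifying assumptions.

    ```python
    import numpy as np

    def height_map(points, pixel_size=0.01):
        """points: (N, 3) façade point cloud. Returns an 8-bit grayscale
        displacement map referenced to the average plane."""
        centroid = points.mean(axis=0)
        # Average plane via SVD: the last right-singular vector is the normal.
        _, _, vt = np.linalg.svd(points - centroid)
        u, v, normal = vt
        d = (points - centroid) @ normal      # signed distance from the plane
        x = (points - centroid) @ u           # in-plane coordinates
        y = (points - centroid) @ v
        cols = ((x - x.min()) / pixel_size).astype(int)
        rows = ((y - y.min()) / pixel_size).astype(int)
        img = np.zeros((rows.max() + 1, cols.max() + 1))
        img[rows, cols] = d                   # last point wins; mean binning
                                              # would be more robust
        # Normalise to 0..255 so gray value encodes distance from the plane.
        img = (img - img.min()) / (img.max() - img.min() + 1e-12)
        return (img * 255).astype(np.uint8)
    ```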

  17. WebMail versus WebApp: Comparing Problem-Based Learning Methods in a Business Research Methods Course

    ERIC Educational Resources Information Center

    Williams van Rooij, Shahron

    2007-01-01

    This study examined the impact of two Problem-Based Learning (PBL) approaches on knowledge transfer, problem-solving self-efficacy, and perceived learning gains among four intact classes of adult learners engaged in a group project in an online undergraduate business research methods course. With two of the classes using a text-only PBL workbook…

  18. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level-independent variables were defined, teaching approach (contextual and non-contextual…

  19. Novel Fingertip Image-Based Heart Rate Detection Methods for a Smartphone.

    PubMed

    Zaman, Rifat; Cho, Chae Ho; Hartmann-Vaccarezza, Konrad; Phan, Tra Nguyen; Yoon, Gwonchan; Chong, Jo Woon

    2017-02-12

    We hypothesize that our smartphone-based, fingertip-image heart rate detection methods reliably detect the heart rhythm and rate of subjects. We propose fingertip curve line movement-based and fingertip image intensity-based detection methods, which both use the movement of successive fingertip images obtained from smartphone cameras. To investigate the performance of the proposed methods, the heart rhythm and rate obtained by the proposed methods are compared to those of the conventional method, which is based on average image pixel intensity. Using a smartphone, we collected 120 s pulsatile time series from each recruited subject. The results show that the proposed fingertip curve line movement-based method detects heart rate with a maximum deviation of 0.0832 Hz and 0.124 Hz using time- and frequency-domain based estimation, respectively, compared to the conventional method. Moreover, the other proposed fingertip image intensity-based method detects heart rate with a maximum deviation of 0.125 Hz and 0.03 Hz using time- and frequency-based estimation, respectively.
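
    A minimal sketch of the image-intensity style of estimation discussed above: each fingertip frame is reduced to its mean intensity, and the dominant frequency in a plausible heart-rate band is taken as the pulse rate; the frame rate and band limits are assumptions.

    ```python
    import numpy as np

    def heart_rate_hz(frames, fps=30.0, band=(0.7, 3.0)):
        """frames: iterable of 2-D grayscale fingertip images. Returns the
        dominant pulsatile frequency in Hz (multiply by 60 for bpm)."""
        signal = np.array([f.mean() for f in frames])   # one sample per frame
        signal -= signal.mean()                         # remove the DC term
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        return freqs[in_band][np.argmax(spectrum[in_band])]

    # Example: 1.2 Hz corresponds to 72 beats per minute.
    # bpm = 60 * heart_rate_hz(frames)
    ```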

  20. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods were based on a deterministic approach, since their purpose is to find an exact solution. However, these methods exhibit initial-condition dependence and the risk of falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory were optimized. The numerical calculation results showed that the method has sufficient performance.
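
    A generic illustration, not the authors' exact algorithm: the optimum is estimated as a stochastic average, with sampled candidates combined under Boltzmann-like weights so that no single starting point or trajectory dominates.

    ```python
    import numpy as np

    def stochastic_average_minimize(f, dim, iters=200, samples=64,
                                    sigma=1.0, temperature=1.0, seed=0):
        rng = np.random.default_rng(seed)
        mean = rng.normal(size=dim)       # arbitrary start; the result should
                                          # not depend strongly on it
        for _ in range(iters):
            x = mean + sigma * rng.normal(size=(samples, dim))
            fx = np.apply_along_axis(f, 1, x)
            w = np.exp(-(fx - fx.min()) / temperature)   # Boltzmann weights
            mean = (w[:, None] * x).sum(axis=0) / w.sum()  # expected value
            sigma *= 0.98                 # slowly concentrate the sampling
        return mean

    # Example: minimise a multimodal 2-D function.
    # xstar = stochastic_average_minimize(
    #     lambda v: np.sum(v ** 2) + np.sin(5 * v).sum(), dim=2)
    ```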

  1. Hybrid statistics-simulations based method for atom-counting from ADF STEM images.

    PubMed

    De Wael, Annelies; De Backer, Annick; Jones, Lewys; Nellist, Peter D; Van Aert, Sandra

    2017-01-25

    A hybrid statistics-simulations based method for atom-counting from annular dark field scanning transmission electron microscopy (ADF STEM) images of monotype crystalline nanostructures is presented. Different atom-counting methods already exist for model-like systems. However, the increasing relevance of radiation damage in the study of nanostructures demands a method that allows atom-counting from low dose images with a low signal-to-noise ratio. Therefore, the hybrid method directly includes prior knowledge from image simulations into the existing statistics-based method for atom-counting, and accounts in this manner for possible discrepancies between actual and simulated experimental conditions. It is shown by means of simulations and experiments that this hybrid method outperforms the statistics-based method, especially for low electron doses and small nanoparticles. The analysis of a simulated low dose image of a small nanoparticle suggests that this method allows for far more reliable quantitative analysis of beam-sensitive materials.

  2. Method for PE Pipes Fusion Jointing Based on TRIZ Contradictions Theory

    NASA Astrophysics Data System (ADS)

    Sun, Jianguang; Tan, Runhua; Gao, Jinyong; Wei, Zihui

    The core of the TRIZ theories is contradiction detection and solution. TRIZ provides various methods for contradiction solution, but these are not systematized. Combined with the conception of a technique system, this paper summarizes an integrated solution method for contradictions based on the TRIZ contradiction theory. According to the method, a flowchart of the integrated solution method for contradictions is given. As a case study, a method of fusion jointing PE pipes is analysed.

  3. Novel Fingertip Image-Based Heart Rate Detection Methods for a Smartphone

    PubMed Central

    Zaman, Rifat; Cho, Chae Ho; Hartmann-Vaccarezza, Konrad; Phan, Tra Nguyen; Yoon, Gwonchan; Chong, Jo Woon

    2017-01-01

    We hypothesize that our fingertip image-based heart rate detection methods using a smartphone reliably detect the heart rhythm and rate of subjects. We propose fingertip curve line movement-based and fingertip image intensity-based detection methods, which both use the movement of successive fingertip images obtained from smartphone cameras. To investigate the performance of the proposed methods, heart rhythm and rate of the proposed methods are compared to those of the conventional method, which is based on average image pixel intensity. Using a smartphone, we collected 120 s pulsatile time series data from each recruited subject. The results show that the proposed fingertip curve line movement-based method detects heart rate with a maximum deviation of 0.0832 Hz and 0.124 Hz using time- and frequency-domain based estimation, respectively, compared to the conventional method. Moreover, the other proposed fingertip image intensity-based method detects heart rate with a maximum deviation of 0.125 Hz and 0.03 Hz using time- and frequency-based estimation, respectively. PMID:28208678

  4. System and method for integrating hazard-based decision making tools and processes

    DOEpatents

    Hodgin, C Reed [Westminster, CO]

    2012-03-20

    A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.

  5. Powder-based adsorbents having high adsorption capacities for recovering dissolved metals and methods thereof

    DOEpatents

    Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra

    2016-05-03

    A powder-based adsorbent and a related method of manufacture are provided. The powder-based adsorbent includes polymer powder with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the powder-based adsorbent includes irradiating polymer powder, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Powder-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.

  6. Foam-based adsorbents having high adsorption capacities for recovering dissolved metals and methods thereof

    DOEpatents

    Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra

    2015-06-02

    Foam-based adsorbents and a related method of manufacture are provided. The foam-based adsorbents include polymer foam with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the foam-based adsorbents includes irradiating polymer foam, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Foam-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.

  7. A shallow landslide analysis method consisting of contour line based method and slope stability model with critical slip surface

    NASA Astrophysics Data System (ADS)

    Tsutsumi, D.

    2015-12-01

    To mitigate sediment-related disasters triggered by rainfall events, it is necessary to predict landslide occurrence and subsequent debris flow behavior. Many landslide analysis methods have been developed and proposed by numerous researchers over several decades. Among them, distributed slope stability models simulating the temporal and spatial instability of local slopes are the most essential for early warning or evacuation in areas at the lower parts of hill-slopes. In the present study, a distributed, physically based landslide analysis method is developed, consisting of a contour-line-based method that subdivides a watershed area into stream tubes, and a slope stability analysis in which a critical slip surface is searched for to identify the location and shape of the most unstable slip surface in each stream tube. A target watershed area is divided into stream tubes using GIS techniques, groundwater flow in each stream tube during a rainfall event is analyzed by a kinematic wave model, and slope stability for each stream tube is calculated by a simplified Janbu method searching for a critical slip surface using a dynamic programming method. Compared to previous methods that assume an infinite slope for the stability analysis, the proposed method has the advantage of simulating landslides more accurately in space and time, and of estimating the amount of collapsed slope mass, which can be delivered to a debris flow simulation model as input data. We applied this method to a small watershed on Izu Oshima, Tokyo, Japan, where shallow and wide landslides triggered by heavy rainfall, and the subsequent debris flows, struck Oshima Town in 2013. The figure shows the temporal and spatial change of the simulated groundwater level and landslide distribution. The simulated landslides correspond to the uppermost part of the actual landslide area, and the timing of their occurrence agrees well with the actual landslides.

  8. Comparative analysis of machine learning methods in ligand-based virtual screening of large compound libraries.

    PubMed

    Ma, Xiao H; Jia, Jia; Zhu, Feng; Xue, Ying; Li, Ze R; Chen, Yu Z

    2009-05-01

    Machine learning methods have been explored as ligand-based virtual screening tools for facilitating drug lead discovery. These methods predict compounds of specific pharmacodynamic, pharmacokinetic or toxicological properties based on structural and physicochemical properties derived from their structures. Increasing attention has been directed at these methods because of their capability to predict compounds of diverse structures and complex structure-activity relationships without requiring knowledge of the target 3D structure. This article reviews current progress in using machine learning methods for virtual screening of pharmacodynamically active compounds from large compound libraries, and analyzes and compares the reported performances of machine learning tools with those of structure-based and other ligand-based (such as pharmacophore and clustering) virtual screening methods. The feasibility of improving the performance of machine learning methods in screening large libraries is discussed.

  9. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined following the idea of the box-counting dimension method. We therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely, backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely, the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(−10)) when using the MF-DMS-based method. An interesting finding is that D(h(−10)) outperforms other parameters both for the MF-DMS-based method with the centered case and for the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.

  10. Fast calculation with point-based method to make CGHs of the polygon model

    NASA Astrophysics Data System (ADS)

    Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji

    2014-02-01

    Holography is a three-dimensional display technology: light waves from an object are recorded and reconstructed by using a hologram. Computer generated holograms (CGHs), which are made by simulating light propagation using a computer, are able to represent virtual objects. However, an enormous amount of computation time is required to make CGHs. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using a fast Fourier transform (FFT); the calculation of complex objects composed of multiple polygons requires as many FFTs, so the calculation time unfortunately becomes enormous. In the point-based method, in contrast, it is easy to express complex objects, but an enormous calculation time is still required. Graphics processing units (GPUs) have been used to speed up point-based calculations, because a GPU is specialized for parallel computation and the CGH can be calculated independently for each pixel. However, expressing a planar object by the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm for expressing planar objects by the point-based method with a GPU. The proposed method accelerates the calculation by obtaining the distance between a pixel and a point light source from that of the adjacent point light source by a difference method. Under certain specified conditions, the difference between adjacent object points becomes constant, so the distance is obtained by additions only. Experimental results showed that the proposed method is more effective than the polygon-based method with FFT when the number of polygons composing an object is high.
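
    A small sketch of the difference idea for equally spaced object points at a common depth (a 2-D simplification): the squared distance from a fixed pixel obeys a second-order recurrence with a constant increment, so successive squared distances need only additions. The paper's actual GPU kernel is not reproduced here.

    ```python
    import numpy as np

    def squared_distances(x0, dx, n, px, z):
        """Squared distances from pixel (px, 0) to points (x0 + i*dx, z),
        i = 0..n-1, using only additions inside the loop."""
        r2 = (x0 - px) ** 2 + z * z           # r_0^2, computed once
        d = 2.0 * dx * (x0 - px) + dx * dx    # first difference of r^2
        dd = 2.0 * dx * dx                    # constant second difference
        out = []
        for _ in range(n):
            out.append(r2)
            r2 += d                           # additions only from here on
            d += dd
        return out

    # Sanity check against the direct formula:
    # all(abs(v - ((x0 + i * dx - px) ** 2 + z * z)) < 1e-9
    #     for i, v in enumerate(squared_distances(x0, dx, n, px, z)))
    ```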

  11. Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials

    SciTech Connect

    Wang, Yifeng; Miller, Andy; Bryan, Charles R; Kruichar, Jessica Nicole

    2015-04-07

    Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials are described. For example, a method of capturing and immobilizing radioactive nuclei includes flowing a gas stream through an exhaust apparatus. The exhaust apparatus includes a metal fluorite-based inorganic material. The gas stream includes a radioactive species. The radioactive species is removed from the gas stream by adsorbing the radioactive species to the metal fluorite-based inorganic material of the exhaust apparatus.

  12. Method to produce nanocrystalline powders of oxide-based phosphors for lighting applications

    DOEpatents

    Loureiro, Sergio Paulo Martins; Setlur, Anant Achyut; Williams, Darryl Stephen; Manoharan, Mohan; Srivastava, Alok Mani

    2007-12-25

    Some embodiments of the present invention are directed toward nanocrystalline oxide-based phosphor materials, and methods for making same. Typically, such methods comprise a steric entrapment route for converting precursors into such phosphor material. In some embodiments, the nanocrystalline oxide-based phosphor materials are quantum splitting phosphors. In some or other embodiments, such nanocrystalline oxide based phosphor materials provide reduced scattering, leading to greater efficiency, when used in lighting applications.

  13. Microwave Plasma Based Single-Step Method for Generation of Carbon Nanostructures

    DTIC Science & Technology

    2013-07-01

    Nowadays, carbon based two-dimensional (2D) nanostructures are one of the ongoing strategic research areas in science and technology. Graphene, an… …fabrication, to obtain transferable sheets [1]. A plasma based method to synthesize substrate free, i.e., "free-standing" graphene at ambient conditions has…

  14. Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials

    SciTech Connect

    Wang, Yifeng; Miller, Andy; Bryan, Charles R.; Kruichak, Jessica Nicole

    2015-11-17

    Methods of capturing and immobilizing radioactive nuclei with metal fluorite-based inorganic materials are described. For example, a method of capturing and immobilizing radioactive nuclei includes flowing a gas stream through an exhaust apparatus. The exhaust apparatus includes a metal fluorite-based inorganic material. The gas stream includes a radioactive species. The radioactive species is removed from the gas stream by adsorbing the radioactive species to the metal fluorite-based inorganic material of the exhaust apparatus.

  15. The Simulation of the Recharging Method Based on Solar Radiation for an Implantable Biosensor

    PubMed Central

    Li, Yun; Song, Yong; Kong, Xianyue; Li, Maoyuan; Zhao, Yufei; Hao, Qun; Gao, Tianxin

    2016-01-01

    A method of recharging implantable biosensors based on solar radiation is proposed. Firstly, models of the proposed method are developed. Secondly, the recharging processes based on solar radiation are simulated using the Monte Carlo (MC) method, and the energy distributions of sunlight within the different layers of human skin are obtained and discussed. Finally, the simulation results are verified experimentally, indicating that the proposed method will contribute to achieving a low-cost, convenient and safe way of recharging implantable biosensors. PMID:27626422

  16. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach since their purpose is to find out an exact solution. However, such methods have initial condition dependence and the risk of falling into local solution. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experiences. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical calculation results prove that performance of the method is sufficient for practical use.

  17. Steam turbine start up method based on predictive monitoring and control of thermal stresses

    SciTech Connect

    Matsumura, J.; Matsumoto, H.; Niyawara, S.; Urushidani, H.

    1985-04-01

    A turbine start-up program decision and control method based on rotor thermal stresses has been developed. The method features scheduling punctuality for the start-up program in addition to start-up optimization, and is especially suited to daily start-up and shut-down (DSS) units. The method was applied to a 375 MW DSS unit, which verified its effectiveness.

  18. Nodal Analysis Optimization Based on the Use of Virtual Current Sources: A Powerful New Pedagogical Method

    ERIC Educational Resources Information Center

    Chatzarakis, G. E.

    2009-01-01

    This paper presents a new pedagogical method for nodal analysis optimization based on the use of virtual current sources, applicable to any linear electric circuit (LEC), regardless of its complexity. The proposed method leads to straightforward solutions, mostly arrived at by inspection. Furthermore, the method is easily adapted to computer…

  19. An Inquiry-Based Approach to Teaching Research Methods in Information Studies

    ERIC Educational Resources Information Center

    Albright, Kendra; Petrulis, Robert; Vasconcelos, Ana; Wood, Jamie

    2012-01-01

    This paper presents the results of a project that aimed at restructuring the delivery of research methods training at the Information School at the University of Sheffield, UK, based on an Inquiry-Based Learning (IBL) approach. The purpose of this research was to implement inquiry-based learning that would allow customization of research methods…

  20. Implementing a Problem-Based Learning Approach for Teaching Research Methods in Geography

    ERIC Educational Resources Information Center

    Spronken-Smith, Rachel

    2005-01-01

    This paper first describes problem-based learning; second describes how a research methods course in geography is taught using a problem-based learning approach; and finally relates student and staff experiences of this approach. The course is run through regular group meetings, two residential field trips and optional skills-based workshops.…

  1. Implementing Web-Based Scientific Inquiry in Preservice Science Methods Courses

    ERIC Educational Resources Information Center

    Bodzin, Alec M.

    2005-01-01

    This paper describes how the Web-based Inquiry for Learning Science (WBI) instrument was used with preservice elementary and secondary science teachers in science methods courses to enhance their understanding of Web-based scientific inquiry. The WBI instrument is designed to help teachers identify Web-based inquiry activities for learning science…

  2. A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization

    PubMed Central

    Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao

    2016-01-01

    The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a ferromagnetic target localization method with high real-time performance. Only one measurement point is required in the STAR method, and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by asphericity errors, and an inaccurate position value leads to larger errors in the estimation of the magnetic moment. To improve the localization accuracy, a modified STAR method is proposed, in which the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate, which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than those of the traditional STAR method. PMID:27999322

  3. An improved poly(A) motifs recognition method based on decision level fusion.

    PubMed

    Zhang, Shanxin; Han, Jiuqiang; Liu, Jun; Zheng, Jiguang; Liu, Ruiling

    2015-02-01

    Polyadenylation is the process of addition of a poly(A) tail to mRNA 3' ends. Identification of the motifs controlling polyadenylation plays an essential role in improving genome annotation accuracy and in better understanding the mechanisms governing gene regulation. Bioinformatics methods used for poly(A) motif recognition have demonstrated that information extracted from the sequences surrounding candidate motifs can greatly differentiate true motifs from false ones. However, these methods depend on either domain features or string kernels; to date, no method combining information from different sources has been reported. Here, we propose an improved poly(A) motif recognition method that combines different sources based on decision-level fusion. First, two novel prediction methods based on support vector machines (SVM) are proposed: one uses domain-specific features together with the principal component analysis (PCA) method to eliminate redundancy (PCA-SVM); the other is based on an oligo string kernel (Oligo-SVM). We then propose a novel machine-learning method for poly(A) motif prediction by marrying four poly(A) motif recognition methods, including two state-of-the-art methods (Random Forest (RF) and HMM-SVM) and the two newly proposed methods (PCA-SVM and Oligo-SVM). A decision-level information fusion method is employed to combine the decision values of the different classifiers by applying Dempster-Shafer (DS) evidence theory. We evaluated our method on a comprehensive poly(A) dataset that consists of 14,740 samples of 12 variants of poly(A) motifs and 2750 samples containing none of these motifs. Our method achieves accuracy of up to 86.13%. Compared with the four classifiers, our evidence-theory-based method reduces the average error rate by about 30%, 27%, 26% and 16%, respectively. The experimental results suggest that the proposed method is more effective for poly(A) motif recognition.
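
    A hedged sketch of decision-level fusion with Dempster's rule for the two-class case (motif / non-motif); the mapping from classifier scores to basic probability assignments is an illustrative assumption, not the paper's calibration.

    ```python
    import numpy as np

    def bpa_from_score(p, ignorance=0.1):
        """Map a calibrated motif probability to masses over
        (motif, non-motif, either); some mass stays on 'either' to
        express ignorance. The 0.1 value is an assumption."""
        return np.array([p * (1 - ignorance), (1 - p) * (1 - ignorance),
                         ignorance])

    def dempster(m1, m2):
        """Combine two BPAs (motif, non, either) with Dempster's rule."""
        conflict = m1[0] * m2[1] + m1[1] * m2[0]
        k = 1.0 - conflict
        motif = (m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]) / k
        non = (m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]) / k
        either = (m1[2] * m2[2]) / k
        return np.array([motif, non, either])

    def fuse(scores):
        """scores: per-classifier motif probabilities, e.g. from RF, HMM-SVM,
        PCA-SVM and Oligo-SVM (names from the abstract)."""
        m = bpa_from_score(scores[0])
        for s in scores[1:]:
            m = dempster(m, bpa_from_score(s))
        return m[0] > m[1]   # decide 'motif' if its belief dominates

    # fuse([0.9, 0.7, 0.55, 0.8])  ->  True
    ```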

  4. Seed based registration for intraoperative brachytherapy dosimetry: a comparison of methods

    NASA Astrophysics Data System (ADS)

    Su, Yi; Davis, Brian J.; Herman, Michael G.; Robb, Richard A.

    2006-03-01

    Several approaches for registering a subset of imaged points to their true origins were analyzed and compared for seed-based TRUS-fluoroscopy registration. The methods include the Downhill Simplex method (DS), Powell's method (POW), the Iterative Closest Point (ICP) method, the Robust Point Matching method (RPM) and variants of RPM. Several modifications were made to the standard RPM method to improve its performance. One hundred simulations were performed for each combination of noise level, seed detection rate and number of spurious points, and the registration accuracy was evaluated and compared. The noise level ranges from 0 to 5 mm, the seed detection ratio ranges from 0.2 to 0.6, and the number of spurious points ranges from 0 to 20. An actual clinical post-implant dataset from permanent prostate brachytherapy was used for the simulation study. The experiments provided evidence that our modified RPM method is superior to the other methods, especially when there are many outliers. The RPM-based method produced the best results at all noise levels and seed detection rates. The DS-based method performed reasonably well, especially at low noise levels without spurious points. There was no significant performance difference between the standard and modified RPM methods without spurious points, but the modified RPM methods outperformed the standard RPM method with a large number of spurious points. The registration error was within 2 mm, even with 20 outlier points and a noise level of 3 mm.
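
    Of the methods compared, ICP is the easiest to sketch: alternate nearest-neighbour correspondence with a closed-form (SVD/Kabsch) rigid fit. The following is a generic textbook ICP on 3-D point sets, not the authors' implementation.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def rigid_fit(P, Q):
        """Least-squares rotation R and translation t mapping P onto Q."""
        cp, cq = P.mean(0), Q.mean(0)
        H = (P - cp).T @ (Q - cq)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cq - R @ cp

    def icp(source, target, iters=50):
        """Register 'source' seed positions onto 'target' seed positions."""
        tree = cKDTree(target)
        src = source.copy()
        for _ in range(iters):
            _, idx = tree.query(src)       # closest target seed per source seed
            R, t = rigid_fit(src, target[idx])
            src = src @ R.T + t
        return src
    ```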

  5. A component prediction method for flue gas of natural gas combustion based on nonlinear partial least squares method.

    PubMed

    Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for predicting the components of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension of the original independent input matrix. Partial least squares regression is then performed on the extended input matrix and the output matrix to establish the components prediction model for the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to estimate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared with those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
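
    The extended-input construction is straightforward to sketch: RBF hidden-layer outputs are appended to the original predictors before an ordinary PLS fit. The sketch below assumes k-means centres and a Gaussian width sigma, which are illustrative choices, as is the use of scikit-learn's PLSRegression.

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.cluster import KMeans
    from sklearn.cross_decomposition import PLSRegression

    def rbf_features(X, centers, sigma):
        """Gaussian RBF activations of each sample w.r.t. each centre."""
        return np.exp(-cdist(X, centers) ** 2 / (2 * sigma ** 2))

    def fit_rbf_pls(X, Y, n_hidden=10, n_components=5, sigma=1.0):
        centers = KMeans(n_clusters=n_hidden, n_init=10).fit(X).cluster_centers_
        X_ext = np.hstack([X, rbf_features(X, centers, sigma)])  # extended input
        pls = PLSRegression(n_components=n_components).fit(X_ext, Y)
        return pls, centers

    def predict_rbf_pls(pls, centers, X, sigma=1.0):
        return pls.predict(np.hstack([X, rbf_features(X, centers, sigma)]))
    ```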

  6. A hybrid method based upon nonlinear Lamb wave response for locating a delamination in composite laminates.

    PubMed

    Yelve, Nitesh P; Mitra, Mira; Mujumdar, P M; Ramadas, C

    2016-08-01

    A new hybrid method based upon the nonlinear Lamb wave response in the time and frequency domains is introduced to locate a delamination in composite laminates. In the nonlinear Lamb wave method, the presence of damage is indicated by the appearance of higher harmonics in the Lamb wave response. The proposed method uses not only this spectral information but also the corresponding temporal response data to locate the delamination; hence, it is termed a hybrid method. The paper includes the formulation of the method and its application to locating a Barely Visible Impact Damage (BVID) induced delamination in a Carbon Fiber Reinforced Polymer (CFRP) laminate. The method locates the delamination fairly accurately. It is a baseline-free method, as it does not need data from a pristine specimen.

  7. Evaluation of path-history-based fluorescence Monte Carlo method for photon migration in heterogeneous media.

    PubMed

    Jiang, Xu; Deng, Yong; Luo, Zhaoyang; Wang, Kan; Lian, Lichao; Yang, Xiaoquan; Meglinski, Igor; Luo, Qingming

    2014-12-29

    The path-history-based fluorescence Monte Carlo method used for fluorescence tomography image reconstruction has attracted increasing attention. In this paper, we first validate the standard fluorescence Monte Carlo (sfMC) method with an experiment on a cylindrical phantom. Then, we describe a path-history-based decoupled fluorescence Monte Carlo (dfMC) method, analyze different perturbation fluorescence Monte Carlo (pfMC) methods, and compare the calculation accuracy and computational efficiency of the dfMC and pfMC methods using the sfMC method as a reference. The results show that the dfMC method is more accurate and efficient than the pfMC method in heterogeneous media.

  8. MTC: A Fast and Robust Graph-Based Transductive Learning Method.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Geng, Guang-Gang; Liu, Cheng-Lin

    2015-09-01

    Despite the great success of graph-based transductive learning methods, most of them have serious problems with scalability and robustness. In this paper, we propose an efficient and robust graph-based transductive classification method, called minimum tree cut (MTC), which is suitable for large-scale data. Motivated by the sparse representation of graphs, we approximate a graph by a spanning tree. Exploiting this simple structure, we develop a linear-time algorithm to label the tree such that the cut size of the tree is minimized. This significantly improves on graph-based methods, which typically have polynomial time complexity. Moreover, we show theoretically and empirically that the performance of MTC is robust to the graph construction, overcoming another major problem of traditional graph-based methods. Extensive experiments on public data sets, and applications to web-spam detection and interactive image segmentation, demonstrate our method's advantages in accuracy, speed, and robustness.
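
    As a rough illustration of why a spanning-tree approximation makes transduction cheap, the sketch below builds a kNN graph, extracts a minimum spanning tree, and propagates labels outward from the labelled nodes by breadth-first search over the tree (each unlabelled node takes the label of the closest labelled node in hop count). This is a simplified heuristic in the spirit of tree-based labelling, not the authors' cut-minimizing algorithm; the neighbourhood size is an assumption.

    ```python
    import numpy as np
    from collections import deque
    from scipy.sparse.csgraph import minimum_spanning_tree
    from sklearn.neighbors import kneighbors_graph

    def tree_transduction(X, labels):
        """labels: integer array with -1 marking unlabelled nodes."""
        W = kneighbors_graph(X, n_neighbors=5, mode='distance')
        W = 0.5 * (W + W.T)                  # symmetrise before the MST
        T = minimum_spanning_tree(W)
        T = T + T.T                          # undirected tree adjacency
        adj = T.tolil().rows
        out = labels.copy()
        queue = deque(np.flatnonzero(labels >= 0))
        while queue:                         # multi-source BFS over the tree
            u = queue.popleft()
            for v in adj[u]:
                if out[v] < 0:
                    out[v] = out[u]
                    queue.append(v)
        return out
    ```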

  9. Assessment of health-care waste disposal methods using a VIKOR-based fuzzy multi-criteria decision making method

    SciTech Connect

    Liu, Hu-Chen; Wu, Jing; Li, Ping

    2013-12-15

    Highlights: • Propose a VIKOR-based fuzzy MCDM technique for evaluating HCW disposal methods. • Linguistic variables are used to assess the ratings and weights for the criteria. • The OWA operator is utilized to aggregate individual opinions of decision makers. • A case study is given to illustrate the procedure of the proposed framework. - Abstract: Nowadays, selection of the appropriate treatment method in health-care waste (HCW) management has become a challenging task for municipal authorities, especially in developing countries. Assessment of HCW disposal alternatives can be regarded as a complicated multi-criteria decision making (MCDM) problem which requires consideration of multiple alternative solutions and conflicting tangible and intangible criteria. The objective of this paper is to present a new MCDM technique based on fuzzy set theory and the VIKOR method for evaluating HCW disposal methods. Linguistic variables are used by decision makers to assess the ratings and weights for the established criteria. The ordered weighted averaging (OWA) operator is utilized to aggregate individual opinions of decision makers into a group assessment. The computational procedure of the proposed framework is illustrated through a case study in Shanghai, one of the largest cities of China. The HCW treatment alternatives considered in this study include “incineration”, “steam sterilization”, “microwave” and “landfill”. The results obtained using the proposed approach are analyzed in a comparative way.
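
    For reference, the crisp core of the VIKOR ranking (without the fuzzy linguistic ratings and OWA aggregation the paper adds) can be sketched as follows; the decision matrix, weights, and benefit/cost flags are illustrative.

    ```python
    import numpy as np

    def vikor(X, w, benefit, v=0.5):
        """X: alternatives x criteria; w: weights; benefit: True where
        larger is better. Returns Q; the lowest Q is the best compromise."""
        f_best = np.where(benefit, X.max(0), X.min(0))
        f_worst = np.where(benefit, X.min(0), X.max(0))
        d = w * (f_best - X) / (f_best - f_worst)   # weighted normalised regret
        S, R = d.sum(1), d.max(1)                   # group utility / max regret
        Q = v * (S - S.min()) / (S.max() - S.min()) \
            + (1 - v) * (R - R.min()) / (R.max() - R.min())
        return Q

    # Example: 4 disposal alternatives scored on 3 criteria (3rd is a cost).
    X = np.array([[7., 5., 3.], [8., 3., 5.], [6., 6., 4.], [5., 7., 2.]])
    print(vikor(X, w=np.array([0.5, 0.3, 0.2]),
                benefit=np.array([True, True, False])))
    ```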

  10. A homotopy method based on WENO schemes for solving steady state problems of hyperbolic conservation laws

    DTIC Science & Technology

    2012-09-03

    [Fragmentary DTIC record; only disjoint excerpts survive.] Title: A homotopy method based on WENO schemes for solving steady state problems of hyperbolic conservation laws. Authors: Wenrui Hao, Jonathan D. Hauenstein, Chi… (list truncated). Recoverable fragments mention the use of so-called probability-one methods [22], a significant advantage of the homotopy method for computing steady state solutions being freedom from Courant… (truncated), and the robustness of the new method. Keywords: homotopy continuation, hyperbolic conservation laws, WENO scheme, steady state problems.

  11. Provider payment methods and health worker motivation in community-based health insurance: a mixed-methods study.

    PubMed

    Robyn, Paul Jacob; Bärnighausen, Till; Souares, Aurélia; Traoré, Adama; Bicaba, Brice; Sié, Ali; Sauerborn, Rainer

    2014-05-01

    In a community-based health insurance (CBHI) scheme introduced in 2004 in Nouna health district, Burkina Faso, poor perceived quality of care by CBHI enrollees has been a key factor in the observed high drop-out rates. The poor quality perceptions have previously been attributed to health worker dissatisfaction with the provider payment method used by the scheme and the resulting financial risk of health centers. This study applied a mixed-methods approach to investigate how health workers working in facilities contracted by the CBHI view the methods of provider payment used by the CBHI. In order to analyze these relationships, we conducted 23 in-depth interviews and a quantitative survey with 98 health workers working in the CBHI intervention zone. The qualitative in-depth interviews identified that insufficient levels of capitation payments, the infrequent schedule of capitation payment, and the lack of a payment mechanism for reimbursing service fees were perceived as significant sources of health worker dissatisfaction and loss of work-related motivation. Combining qualitative interview and quantitative survey data in a mixed-methods analysis, this study identified that the declining quality of care due to the CBHI provider payment method was a source of significant professional stress and role strain for health workers. Health workers felt that the following five changes due to the provider payment methods introduced by the CBHI impeded their ability to fulfill professional roles and responsibilities: (i) increased financial volatility of health facilities; (ii) dissatisfaction with eligible costs to be covered by capitation; (iii) increased pharmacy stock-outs; (iv) limited financial and material support from the CBHI; and (v) the lack of mechanisms to increase provider motivation to support the CBHI. To address these challenges and improve CBHI uptake and health outcomes in the targeted populations, the health care financing and delivery model in the study zone should be

  12. An optimized fast image resizing method based on content-aware

    NASA Astrophysics Data System (ADS)

    Lu, Yan; Gao, Kun; Wang, Kewang; Xu, Tingfa

    2014-11-01

    In traditional interpolation-based image resizing, prominent objects may become distorted, so content-aware image resizing, which accounts for the prominent content and structural features of images, has become a research focus in image processing. In this paper, we present an optimized fast content-aware image resizing method. First, an appropriate energy function model is constructed on the basis of image meshes, and multiple energy constraint templates are established. In addition, image saliency constraints are derived, and the image resizing problem is then reformulated as a convex quadratic program. Second, a neural-network-based method is presented for solving this convex quadratic program: the corresponding neural network model is constructed, and sufficient conditions for its stability are given. Compared with traditional numerical algorithms such as iterative methods, the neural network method is inherently parallel and distributed, which speeds up the computation. Finally, the results of resizing by the proposed method and by the traditional interpolation-based method are compared in MATLAB. Experimental results show that the method identifies prominent objects more reliably and preserves prominent features effectively after the image is resized. It also offers high portability and good real-time performance with low visual distortion.
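
    As a toy stand-in for the neural-network QP solver described above, the sketch below runs a discrete-time projected-gradient iteration on a small convex quadratic program with nonnegativity constraints; Q, c, and the step size are illustrative, not the paper's resizing formulation.

    ```python
    import numpy as np

    def projected_gradient_qp(Q, c, steps=500):
        """Minimise 0.5*x'Qx - c'x subject to x >= 0 for PSD Q."""
        x = np.zeros(len(c))
        lr = 1.0 / np.linalg.norm(Q, 2)    # step below 1/L for stability
        for _ in range(steps):
            x = np.maximum(0.0, x - lr * (Q @ x - c))  # step + projection
        return x

    Q = np.array([[2.0, 0.5], [0.5, 1.0]])  # symmetric positive definite
    c = np.array([1.0, 1.0])
    print(projected_gradient_qp(Q, c))
    ```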

  13. [An automatic peak detection method for LIBS spectrum based on continuous wavelet transform].

    PubMed

    Chen, Peng-Fei; Tian, Di; Qiao, Shu-Jun; Yang, Guang

    2014-07-01

    Spectral peak detection in laser-induced breakdown spectroscopy (LIBS) is an essential step, but the presence of background and noise seriously disturbs the accuracy of peak positions. This paper proposes an automatic peak detection method for LIBS spectra intended to improve both the detection of overlapping peaks and adaptivity. We introduced the ridge peak detection method based on the continuous wavelet transform to LIBS, discussed the choice of the mother wavelet, and optimized the scale and shift factors. We also improved the ridge peak detection method with a ridge-correction step. The experimental results show that, compared with other peak detection methods (direct comparison, derivative, and ridge peak search), our method has a significant advantage in distinguishing overlapping peaks and in the precision of peak detection, and can be applied to data processing in LIBS.
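
    SciPy ships a CWT ridge-line peak detector of the same family, which makes the idea easy to try; the paper's specific mother-wavelet choice and ridge-correction step are not reproduced in this sketch, and the synthetic spectrum is illustrative.

    ```python
    import numpy as np
    from scipy.signal import find_peaks_cwt

    # Synthetic spectrum: two overlapping lines plus noise.
    x = np.linspace(0, 100, 2000)
    spectrum = (np.exp(-(x - 40) ** 2 / 2.0)
                + 0.8 * np.exp(-(x - 43) ** 2 / 2.0)
                + 0.05 * np.random.randn(x.size))

    # Ridge-line detection across a range of wavelet widths (in samples).
    peak_idx = find_peaks_cwt(spectrum, widths=np.arange(5, 60))
    print(x[peak_idx])   # estimated peak positions
    ```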

  14. A Method of DTM Construction Based on Quadrangular Irregular Networks and Related Error Analysis.

    PubMed

    Kang, Mengjun; Wang, Mingjun; Du, Qingyun

    2015-01-01

    A new method of DTM construction based on quadrangular irregular networks (QINs) that considers all the original data points and has a topological matrix is presented. A numerical test and a real-world example are used to comparatively analyse the accuracy of QINs against classical interpolation methods and other DTM representation methods, including SPLINE, KRIGING and triangulated irregular networks (TINs). The numerical test finds that the QIN method is the second-most accurate of the four methods. In the real-world example, DTMs are constructed using QINs and the three classical interpolation methods. The results indicate that the QIN method is the most accurate method tested. The difference in accuracy rank seems to be caused by the locations of the data points sampled. Although the QIN method has drawbacks, it is an alternative method for DTM construction.

  15. Evaluation of Vacuum Blasting and Heat Guns as Methods for Abating Lead- Based Paint on Buildings

    DTIC Science & Technology

    1993-09-01

    [Fragmentary DTIC record; only disjoint excerpts survive.] Title: Evaluation of Vacuum Blasting and Heat Guns as Methods for Abating Lead-Based Paint on Buildings. Authors: Jan W. Gooch, Susan A. Drozdz. Recoverable fragments mention the surface profile measured on metal and wood after cleaning, the use of vacuum abrasive cleaning units to remove lead-based paint, and U.S. Army facilities with exterior surfaces painted with lead-based paint; to minimize potential health problems resulting from exposure to lead-based paint, the Army is… (truncated).

  16. Needs and Opportunities for Uncertainty-Based Multidisciplinary Design Methods for Aerospace Vehicles

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson

    2002-01-01

    This report consists of a survey of the state of the art in uncertainty-based design, together with recommendations for a base research activity in this area for the NASA Langley Research Center. The report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of using such methods are explained. Particular research needs are listed.

  17. Note: Model-based identification method of a cable-driven wearable device for arm rehabilitation

    NASA Astrophysics Data System (ADS)

    Cui, Xiang; Chen, Weihai; Zhang, Jianbin; Wang, Jianhua

    2015-09-01

    Cable-driven exoskeletons use active cables to actuate the system and are worn by subjects to provide motion assistance. However, this kind of wearable device usually contains uncertain kinematic parameters. In this paper, a model-based identification method is proposed for a cable-driven arm exoskeleton to estimate its uncertainties. The identification method is based on the linearized error model derived from the kinematics of the exoskeleton. Experiments have been conducted to demonstrate the feasibility of the proposed model-based method in practical application.
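
    The linearized error model reduces identification to a stacked least-squares problem of the form dx = J·dθ. A minimal sketch follows, with a synthetic Jacobian and measurements standing in for whatever the exoskeleton's actual kinematic model provides.

    ```python
    import numpy as np

    def identify_parameters(J_stack, dx_stack):
        """J_stack: (m, p) stacked error Jacobians; dx_stack: (m,) stacked
        measured errors. Returns least-squares parameter corrections."""
        dtheta, _, _, _ = np.linalg.lstsq(J_stack, dx_stack, rcond=None)
        return dtheta

    # Example with synthetic data: 2 uncertain parameters, 50 measurements.
    rng = np.random.default_rng(0)
    J = rng.normal(size=(50, 2))
    true_dtheta = np.array([0.01, -0.02])
    dx = J @ true_dtheta + 1e-4 * rng.normal(size=50)
    print(identify_parameters(J, dx))   # ~ [0.01, -0.02]
    ```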

  18. Multidisciplinary and Evidence-based Method for Prioritizing Diseases of Food-producing Animals and Zoonoses

    PubMed Central

    Humblet, Marie-France; Vandeputte, Sébastien; Albert, Adelin; Gosset, Christiane; Kirschvink, Nathalie; Haubruge, Eric; Fecher-Bourgeois, Fabienne; Pastoret, Paul-Pierre

    2012-01-01

    To prioritize 100 animal diseases and zoonoses in Europe, we used a multicriteria decision-making procedure based on expert opinion and evidence-based data. Forty international experts performed intracategory and intercategory weighting of 57 prioritization criteria. Two methods (deterministic, using the mean of each weight, and probabilistic, using distribution functions of the weights in a Monte Carlo simulation) were used to calculate a score for each disease, and a ranking was then established. Few differences were observed between the two methods. Compared with previous prioritization methods, our procedure is evidence based, covers a range of fields and criteria while considering uncertainty, and will be useful for analyzing diseases that affect public health. PMID:22469519

  19. Fast-Fourier-transform based numerical integration method for the Rayleigh-Sommerfeld diffraction formula

    NASA Astrophysics Data System (ADS)

    Shen, Fabin; Wang, Anbo

    2006-02-01

    The numerical calculation of the Rayleigh-Sommerfeld diffraction integral is investigated. The implementation of a fast-Fourier-transform (FFT) based direct integration (FFT-DI) method is presented, and Simpson's rule is used to improve the calculation accuracy. The sampling interval, the size of the computation window, and their influence on numerical accuracy and on computational complexity are discussed for the FFT-DI and the FFT-based angular spectrum (FFT-AS) methods. The performance of the FFT-DI method is verified by numerical simulation and compared with that of the FFT-AS method.
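
    Because the Rayleigh-Sommerfeld integral is a convolution of the source field with a fixed impulse response, the direct integration can be carried out with an FFT-based linear convolution. The sketch below uses Goodman's form of the RS-I kernel, h = z·exp(ikr)/(iλr²), together with Simpson weighting of the source plane as the paper suggests; the grid parameters are illustrative, and n must be odd for the Simpson weights.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def simpson_weights(n):                 # n must be odd
        w = np.ones(n); w[1:-1:2] = 4.0; w[2:-1:2] = 2.0
        return w / 3.0

    def rs_fft_di(u0, dx, wavelength, z):
        """Propagate an n x n source field u0 (sample spacing dx) a
        distance z using FFT-based direct integration of RS-I."""
        n = u0.shape[0]
        k = 2 * np.pi / wavelength
        w = simpson_weights(n)
        u0w = u0 * np.outer(w, w) * dx * dx        # weighted source samples
        coords = np.arange(-(n - 1), n) * dx       # kernel support offsets
        X, Y = np.meshgrid(coords, coords)
        r = np.sqrt(X ** 2 + Y ** 2 + z ** 2)
        h = z * np.exp(1j * k * r) / (1j * wavelength * r ** 2)
        full = fftconvolve(u0w, h)                 # linear convolution via FFT
        m = n - 1
        return full[m:m + n, m:m + n]              # field on the source grid
    ```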

  20. Quantitative evaluation of registration methods for atlas-based diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Wu, Xue; Eggebrecht, Adam T.; Culver, Joseph P.; Zhan, Yuxuan; Basevi, Hector; Dehghani, Hamid

    2013-06-01

    In Diffuse Optical Tomography (DOT), an atlas-based model can be used as an alternative to a subject-specific anatomical model for the recovery of brain activity. The main step in generating the atlas-based subject model is the registration of the atlas model to the subject's head, so the accuracy of the DOT reconstruction relies on the accuracy of the registration method. In this work, 11 registration methods are quantitatively evaluated. The method based on the EEG 10/20 system with 19 landmarks and a non-iterative point-to-point algorithm yields approximately 1.4 mm surface error and is considered the most efficient registration method.

  1. Surface impedance based microwave imaging method for breast cancer screening: contrast-enhanced scenario.

    PubMed

    Güren, Onan; Çayören, Mehmet; Ergene, Lale Tükenmez; Akduman, Ibrahim

    2014-10-07

    A new microwave imaging method that uses microwave contrast agents is presented for the detection and localization of breast tumours. The method is based on the reconstruction of the breast surface impedance from a measured scattered field. Surface impedance modelling represents the electrical properties of the breast in terms of impedance boundary conditions, which enable the inner structure of the breast to be mapped into surface impedance functions. A simple quantitative method is then proposed to screen breasts for malignant tumours, with a detection procedure based on weighted cross-correlations among impedance functions. Numerical results demonstrate that the method is capable of detecting small malignancies and provides reasonable localization.

  2. Infrared image segmentation method based on spatial coherence histogram and maximum entropy

    NASA Astrophysics Data System (ADS)

    Liu, Songtao; Shen, Tongsheng; Dai, Yao

    2014-11-01

    In order to segment the target well and suppress background noise effectively, an infrared image segmentation method based on a spatial coherence histogram and maximum entropy is proposed. First, the spatial coherence histogram is constructed by weighting pixels of the same gray level according to their position, with weights obtained from the local density. Then, after enhancing the image with the spatial coherence histogram, the 1D maximum entropy method is used to segment it. The method not only yields better segmentation results but also runs faster than traditional 2D histogram-based segmentation methods.
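
    The 1D maximum entropy criterion itself is compact: choose the threshold that maximizes the sum of the background and foreground histogram entropies (Kapur's method). A minimal sketch on a plain gray-level histogram follows; the spatial-coherence enhancement step is not reproduced.

    ```python
    import numpy as np

    def max_entropy_threshold(image):
        """Kapur 1D maximum entropy threshold for a uint8 image."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()
        best_t, best_h = 0, -np.inf
        for t in range(1, 256):
            w0, w1 = p[:t].sum(), p[t:].sum()
            if w0 <= 0 or w1 <= 0:
                continue
            p0, p1 = p[:t] / w0, p[t:] / w1
            h = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0])) \
                - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
            if h > best_h:
                best_t, best_h = t, h
        return best_t        # pixels >= best_t form the foreground
    ```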

  3. On Development of a Problem Based Learning System for Linear Algebra with Simple Input Method

    NASA Astrophysics Data System (ADS)

    Yokota, Hisashi

    2011-08-01

    Entering a matrix from the keyboard takes considerable time for most college students. Therefore, for a problem-based learning system for linear algebra to be accessible to college students, it is essential to develop a simple method for entering matrices. By studying the two most widely used input methods for matrices, a simpler input method is obtained. Furthermore, using this input method and the educator's knowledge structure as a concept map, a problem-based learning system for linear algebra is developed that is capable of assessing students' knowledge structure and skills.

  4. Multiscale Design of Advanced Materials based on Hybrid Ab Initio and Quasicontinuum Methods

    SciTech Connect

    Luskin, Mitchell

    2014-03-12

    This project united researchers from mathematics, chemistry, computer science, and engineering for the development of new multiscale methods for the design of materials. Our approach was highly interdisciplinary, but it had two unifying themes: first, we utilized modern mathematical ideas about change-of-scale and state-of-the-art numerical analysis to develop computational methods and codes to solve real multiscale problems of DOE interest; and, second, we took very seriously the need for quantum mechanics-based atomistic forces, and based our methods on fast solvers of chemically accurate methods.

  5. [A method for the medical image registration based on the statistics samples averaging distribution theory].

    PubMed

    Xu, Peng; Yao, Dezhong; Luo, Fen

    2005-08-01

    The registration method based on mutual information is currently a popular technique for medical image registration, but computing the mutual information is complex and registration is slow. In practice, a subsampling technique is used to accelerate registration at the cost of registration accuracy. In this paper, a new method based on statistical sampling theory is developed that offers both higher speed and higher accuracy than normal subsampling; simulation results confirm the validity of the new method.
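
    The similarity measure underlying the method is the mutual information of the two images' joint gray-level histogram; a minimal sketch follows (the paper's statistical sampling scheme itself is not shown, and the bin count is an assumption).

    ```python
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        """Histogram-based mutual information between two images."""
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p = joint / joint.sum()                  # joint probability
        px = p.sum(axis=1, keepdims=True)        # marginals
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz]))
    ```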

  6. Method based on the double sideband technique for the dynamic tracking of micrometric particles

    NASA Astrophysics Data System (ADS)

    Ramirez, Claudio; Lizana, Angel; Iemmi, Claudio; Campos, Juan

    2016-06-01

    Digital holography (DH) methods are of interest in a large number of applications. Recently, the double sideband (DSB) technique was proposed: a DH-based method that, by double filtering, provides reconstructed images free of distortion and of the twin image while using an in-line configuration. In this work, we implement a method for investigating the mobility of particles based on the DSB technique. Particle holographic images obtained with the DSB method are processed with digital picture recognition methods, allowing us to accurately track the spatial positions of particles. The dynamic nature of the method is achieved experimentally by using a spatial light modulator. The suitability of the proposed tracking method is validated by determining the trajectory and velocity of moving glass microspheres.

  7. Total variation versus wavelet-based methods for image denoising in fluorescence lifetime imaging microscopy.

    PubMed

    Chang, Ching-Wei; Mycek, Mary-Ann

    2012-05-01

    We report the first application of wavelet-based denoising (noise removal) methods to time-domain box-car fluorescence lifetime imaging microscopy (FLIM) images and compare the results to novel total variation (TV) denoising methods. Methods were tested first on artificial images and then applied to low-light live-cell images. Relative to undenoised images, TV methods could improve lifetime precision up to 10-fold in artificial images, while preserving the overall accuracy of lifetime and amplitude values of a single-exponential decay model and improving local lifetime fitting in live-cell images. Wavelet-based methods were at least 4-fold faster than TV methods, but could introduce significant inaccuracies in recovered lifetime values. The denoising methods discussed can potentially enhance a variety of FLIM applications, including live-cell, in vivo animal, or endoscopic imaging studies, especially under challenging imaging conditions such as low-light or fast video-rate imaging.
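
    Both families of denoisers are available in scikit-image, which makes a quick comparison easy to reproduce on a generic image; the parameters below are illustrative, not those tuned for FLIM data.

    ```python
    import numpy as np
    from skimage import data, img_as_float
    from skimage.restoration import denoise_tv_chambolle, denoise_wavelet

    noisy = img_as_float(data.camera()) + 0.08 * np.random.randn(512, 512)

    tv = denoise_tv_chambolle(noisy, weight=0.1)   # total variation (Chambolle)
    wl = denoise_wavelet(noisy)                    # wavelet thresholding
    print(tv.std(), wl.std())                      # crude smoothness comparison
    ```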

  8. Properties of a Formal Method for Prediction of Emergent Behaviors in Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Autonomous intelligent swarms of satellites are being proposed for NASA missions that have complex behaviors and interactions. The emergent properties of swarms make these missions powerful but, at the same time, more difficult to design and to assure that proper behaviors will emerge. This paper gives the results of research into formal methods techniques for the verification and validation of NASA swarm-based missions. Multiple formal methods were evaluated to determine their effectiveness in modeling and assuring the behavior of swarms of spacecraft, using the NASA ANTS mission as an example of swarm intelligence. We give partial specifications of the ANTS mission in four selected methods, an evaluation of those methods, and the properties a formal method needs for effective specification and prediction of emergent behavior in swarm-based systems.

  9. A comparison of field-based similarity searching methods: CatShape, FBSS, and ROCS.

    PubMed

    Moffat, Kirstin; Gillet, Valerie J; Whittle, Martin; Bravi, Gianpaolo; Leach, Andrew R

    2008-04-01

    Three field-based similarity methods are compared in retrospective virtual screening experiments. The methods are the CatShape module of CATALYST, ROCS, and an in-house program developed at the University of Sheffield called FBSS. The programs are used in both rigid and flexible searches carried out in the MDL Drug Data Report. UNITY 2D fingerprints are also used to provide a comparison with a more traditional approach to similarity searching, and similarity based on simple whole-molecule properties is used to provide a baseline for the more sophisticated searches. Overall, UNITY 2D fingerprints and ROCS with the chemical force field option gave comparable performance and were superior to the shape-only 3D methods. When the flexible methods were compared with the rigid methods, it was generally found that the flexible methods gave slightly better results than their respective rigid methods; however, the increased performance did not justify the additional computational cost required.

  10. A parallel multiple path tracing method based on OptiX for infrared image generation

    NASA Astrophysics Data System (ADS)

    Wang, Hao; Wang, Xia; Liu, Li; Long, Teng; Wu, Zimu

    2015-12-01

    Infrared image generation technology is widely used in infrared imaging system performance evaluation, battlefield environment simulation, and military personnel training, all of which require a more physically accurate and efficient method for infrared scene simulation. A parallel multiple path tracing method based on OptiX is proposed to solve this problem; it not only increases computational efficiency compared with serial CPU ray tracing but also produces relatively accurate results. First, the flaws of current ray tracing methods in infrared simulation were analyzed, and a multiple path tracing method based on OptiX was developed. Furthermore, Monte Carlo integration was employed to solve the radiation transfer equation, with importance sampling applied to accelerate the convergence of the integral. After that, the framework of the simulation platform and its sensor-effects simulation diagram are given. Finally, the results showed that the method can generate relatively accurate radiation images when a precise importance sampling method is available.
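
    The estimator at the heart of such a renderer, Monte Carlo integration with importance sampling, is shown below on a one-dimensional toy integrand rather than actual path radiance; the densities are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def mc_importance(f, sample, pdf, n=100_000):
        """Estimate the integral of f with samples drawn from pdf."""
        x = sample(n)
        return np.mean(f(x) / pdf(x))

    # Integrand peaked near 0: f(x) = exp(-5x) on [0, inf).
    f = lambda x: np.exp(-5 * x)
    # Importance density p(x) = 2*exp(-2x) matches the integrand's decay.
    sample = lambda n: rng.exponential(scale=0.5, size=n)
    pdf = lambda x: 2.0 * np.exp(-2.0 * x)

    print(mc_importance(f, sample, pdf))   # ~ 1/5 = 0.2
    ```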

  11. Single-track absolute position encoding method based on spatial frequency of stripes

    NASA Astrophysics Data System (ADS)

    Xiang, Xiansong; Lu, Yancong; Wei, Chunlong; Zhou, Changhe

    2014-11-01

    A new method of single-track absolute position encoding based on the spatial frequency of stripes is proposed. Instead of using stripes arranged by a pseudorandom sequence, as in conventional encoders, this encoding method stores the location information in the frequency domain of the stripes: the spatial frequency of the stripes varies with, and therefore indicates, position. The encoding has strong tolerance to single-stripe detection errors. The method can be applied to absolute linear encoders, absolute photoelectric angle encoders, or two-dimensional absolute linear encoders. The measuring apparatus consists of a CCD image sensor and a microscope system, and the frequency code is decoded with an FFT-based algorithm. This method should be highly interesting for practical applications in absolute position encoding.
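
    The decoding side reduces to estimating the dominant spatial frequency of the stripe window seen by the sensor and mapping it back to position. A minimal sketch with a made-up linear frequency-to-position code follows; the windowing and mapping are assumptions, not the paper's design.

    ```python
    import numpy as np

    def dominant_frequency(window, pitch):
        """Cycles per unit length of a 1-D intensity profile sampled at 'pitch'."""
        w = window - window.mean()                        # remove DC component
        spec = np.abs(np.fft.rfft(w * np.hanning(w.size)))
        freqs = np.fft.rfftfreq(w.size, d=pitch)
        return freqs[np.argmax(spec[1:]) + 1]             # skip the DC bin

    # Synthetic stripes whose frequency encodes position: f = f0 + a*position.
    pitch, f0, a = 0.01, 5.0, 0.2
    x = np.arange(1024) * pitch
    position_true = 7.5
    signal = np.cos(2 * np.pi * (f0 + a * position_true) * x)
    f_est = dominant_frequency(signal, pitch)
    print((f_est - f0) / a)    # recovered position, approximately 7.5
    ```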

  12. Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study

    PubMed Central

    Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M

    2017-01-01

    Background The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms

  13. Comparing the Cloud Vertical Structure Derived from Several Methods Based on Radiosonde Profiles and Ground-based Remote Sensing Measurements

    SciTech Connect

    Costa-Suros, M.; Calbo, J.; Gonzalez, J. A.; Long, Charles N.

    2014-08-27

    The cloud vertical distribution, and especially the cloud base height, which is linked to cloud type, is an important characteristic for describing the impact of clouds in a changing climate. In this work, several methods that estimate the cloud vertical structure (CVS) from atmospheric sounding profiles are compared, in terms of the number and position of cloud layers, against a ground-based system taken as a reference: the Active Remote Sensing of Clouds (ARSCL). All methods impose conditions on the relative humidity and differ in the use of other variables, the thresholds applied, or the vertical resolution of the profile. In this study these methods are applied to 125 radiosonde profiles acquired at the ARM Southern Great Plains site during all seasons of 2009 and checked against GOES images to confirm that cloudiness conditions were sufficiently homogeneous along the sonde trajectories. The overall agreement of the methods ranges between 44% and 88%; four methods produce total agreements around 85%. Further tests and improvements are applied to one of these methods. In addition, we attempt to make this method suitable for low-resolution vertical profiles, which could be useful in atmospheric modeling. The total agreement, even when using low-resolution profiles, can be improved to as much as 91% if the thresholds for a moist layer to become a cloud layer are modified to minimize false negatives with the current data set.
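
    The shared core of such sounding-based methods can be sketched as a relative-humidity threshold test followed by merging contiguous moist levels into layers; the threshold profile and minimum thickness below are illustrative, not the tuned values from the study.

    ```python
    import numpy as np

    def cloud_layers(height_m, rh_percent, min_thickness=100.0):
        """Return (base, top) pairs in metres for detected cloud layers."""
        thresh = np.where(height_m < 2000, 95.0, 90.0)   # assumed RH thresholds
        moist = rh_percent >= thresh
        layers, start = [], None
        for i, m in enumerate(moist):
            if m and start is None:
                start = i                                 # layer begins
            if (not m or i == len(moist) - 1) and start is not None:
                top = i if m else i - 1                   # layer ends
                if height_m[top] - height_m[start] >= min_thickness:
                    layers.append((height_m[start], height_m[top]))
                start = None
        return layers
    ```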

  14. Novel lattice Boltzmann method based on integrated edge and region information for medical image segmentation.

    PubMed

    Wen, Junling; Yan, Zhuangzhi; Jiang, Jiehui

    2014-01-01

    The lattice Boltzmann (LB) method is a mesoscopic method based on kinetic theory and statistical mechanics. The main advantage of the LB method is parallel computation, which increases the speed of calculation. In the past decade, LB methods have gradually been introduced for image processing, e.g., image segmentation. However, a major shortcoming of existing LB methods is that they can only be applied to the processing of medical images with intensity homogeneity. In practice, however, many medical images possess intensity inhomogeneity. In this study, we developed a novel LB method to integrate edge and region information for medical image segmentation. In contrast to other segmentation methods, we added edge information as a relaxing factor and used region information as a source term. The proposed method facilitates the segmentation of medical images with intensity inhomogeneity and it still allows parallel computation. Preliminary tests of the proposed method are presented in this paper.

  15. Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental-scale products has been demonstrated by several projects, including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large-volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but differ in their atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition, assumes a fixed continental aerosol type, and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET-corrected ETM+ data for 95 subsets of 10 km × 10 km at 30 m resolution, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method is more accurate than the LEDAPS method for the ETM+ red and longer-wavelength bands.

  16. Systems for column-based separations, methods of forming packed columns, and methods of purifying sample components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2000-01-01

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  17. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components.

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2004-08-24

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  18. Systems For Column-Based Separations, Methods Of Forming Packed Columns, And Methods Of Purifying Sample Components

    DOEpatents

    Egorov, Oleg B.; O'Hara, Matthew J.; Grate, Jay W.; Chandler, Darrell P.; Brockman, Fred J.; Bruckner-Lea, Cynthia J.

    2006-02-21

    The invention encompasses systems for column-based separations, methods of packing and unpacking columns and methods of separating components of samples. In one aspect, the invention includes a method of packing and unpacking a column chamber, comprising: a) packing a matrix material within a column chamber to form a packed column; and b) after the packing, unpacking the matrix material from the column chamber without moving the column chamber. In another aspect, the invention includes a system for column-based separations, comprising: a) a fluid passageway, the fluid passageway comprising a column chamber and a flow path in fluid communication with the column chamber, the flow path being obstructed by a retaining material permeable to a carrier fluid and impermeable to a column matrix material suspended in the carrier fluid, the flow path extending through the column chamber and through the retaining material, the flow path being configured to form a packed column within the column chamber when a suspension of the fluid and the column matrix material is flowed along the flow path; and b) the fluid passageway extending through a valve intermediate the column chamber and the retaining material.

  19. The method of registration of screw dislocations in polychromatic light based on the Young's interference scheme

    NASA Astrophysics Data System (ADS)

    Shostka, N. V.

    2011-06-01

    A new experimental method for the registration of phase dislocations in polychromatic light is proposed and described; it is based on the Young interference scheme, using a screen with many pairs of holes.

  20. Investigation of self-adaptive LED surgical lighting based on entropy contrast enhancing method

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Huihui; Zhang, Yaqin; Shen, Junfei; Wu, Rengmao; Zheng, Zhenrong; Li, Haifeng; Liu, Xu

    2014-05-01

    An investigation was performed to explore the possibility of enhancing contrast by varying the spectral power distribution (SPD) of surgical lighting. Illumination scenes with different SPDs were generated by combining a self-adaptive white-light optimization method with an LED ceiling system; images of a biological sample were taken by a CCD camera and then processed by an entropy-based contrast evaluation model proposed specifically for surgical settings. Compared with image enhancement based on neutral white LEDs and with a traditional algorithm-based method, the illumination-based enhancing method performs better, improving the average contrast value by about 9% and 6%, respectively. This low-cost method is simple and practicable, and may thus provide an alternative to expensive visual-facility medical instruments.
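
    A minimal sketch of the kind of entropy score such a contrast evaluation model builds on, namely the Shannon entropy of the gray-level histogram (higher entropy loosely corresponds to richer contrast); the paper's exact model is not reproduced.

    ```python
    import numpy as np

    def image_entropy(gray_uint8):
        """Shannon entropy (bits per pixel) of a uint8 image's histogram."""
        hist = np.bincount(gray_uint8.ravel(), minlength=256).astype(float)
        p = hist[hist > 0] / hist.sum()
        return -np.sum(p * np.log2(p))
    ```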