Science.gov

Sample records for fvm-bem method based

  1. DISPLACEMENT BASED SEISMIC DESIGN METHODS.

    SciTech Connect

    Hofmayer, C.; Miller, C.; Wang, Y.; Costello, J.

    2003-07-15

    A research effort was undertaken to determine the need for any changes to the USNRC's seismic regulatory practice to reflect the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The research explored the extent to which displacement based seismic design methods, such as given in FEMA 273, could be useful for reviewing nuclear power stations. Two structures common to nuclear power plants were chosen to compare the results of the analysis models used. The first structure is a four-story frame structure with shear walls providing the primary lateral load system, referred to herein as the shear wall model. The second structure is the turbine building of the Diablo Canyon nuclear power plant. The models were analyzed using both displacement based (pushover) analysis and nonlinear dynamic analysis. In addition, for the shear wall model an elastic analysis with ductility factors applied was also performed. The objectives of the work were to compare the results between the analyses, and to develop insights regarding the work that would be needed before the displacement based analysis methodology could be considered applicable to facilities licensed by the NRC. A summary of the research results, which were published in NUREG/CR-6719 in July 2001, is presented in this paper.

  2. DOM Based XSS Detecting Method Based on Phantomjs

    NASA Astrophysics Data System (ADS)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because malicious code does not appear in the HTML source code, DOM based XSS cannot be detected by traditional methods. By analyzing the causes of DOM based XSS, this paper proposes a detection method for DOM based XSS based on phantomjs. The paper uses function hijacking to detect dangerous operations and implements a prototype system. Comparison with existing tools shows that the system improves the detection rate and that the method is effective at detecting DOM based XSS.

  3. Method of recovering oil-based fluid

    SciTech Connect

    Brinkley, H.E.

    1993-07-13

    A method of recovering oil-based fluid is described, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.

  4. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and also that formal methods will play a major role in the development of future high reliability digital systems.

  5. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. These methods can therefore only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and provides the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).

  6. [Family planning methods based on fertility awareness].

    PubMed

    Haghenbeck-Altamirano, Francisco Javier; Ayala-Yáñez, Rodrigo; Herrera-Meillón, Héctor

    2012-04-01

    The desire to limit fertility is recognized both by individuals and by nations. The concept of family planning is based on the right of individuals and couples to regulate their fertility, and sits at the intersection of health, human rights, and population policy. Despite changes in policies and family planning programs worldwide, there are large geographic areas that have not yet met the minimum requirements in this regard; the reasons are multiple, including economic but also ideological or religious ones. Knowledge of the physiology of the menstrual cycle, specifically the ovulation process, has been greatly enhanced by advances in reproductive medicine research. The series of events around ovulation is used to detect the "fertile window", so that women can either postpone a pregnancy or actively pursue one. The aim of this article is to review the current methods of family planning based on fertility awareness, from historical methods like core temperature determination and rhythm, to the most popular ones like the Billings ovulation method and the Sympto-thermal method, and current methods like the TwoDay and Standard Days methods. Methods that require electronic devices, or devices specifically designed to detect this "window of fertility", are also mentioned. The spread and popularity of these methods is low, and knowledge of them among physicians, including gynecologists, is also quite scarce. The effectiveness of these methods has been difficult to quantify due to the lack of well-designed randomized studies, which are hampered by the small populations of patients using these methods. The publications report high effectiveness with proper use, but not with typical use, which indicates the need for increased awareness among medical practitioners and trainers, leading to better use and understanding of the methods and reducing these discrepancies. PMID:22808858

  7. A Property Restriction Based Knowledge Merging Method

    NASA Astrophysics Data System (ADS)

    Che, Haiyan; Chen, Wei; Feng, Tie; Zhang, Jiachen

    Merging new instance knowledge, extracted from the Web according to a given domain ontology, into the knowledge base (KB for short) is essential for knowledge management and must be done carefully, since it may introduce redundant or contradictory knowledge; the quality of the knowledge in the KB, which is very important for a knowledge-based system to provide users with high quality services, suffers from such "bad" knowledge. This paper advocates a property restriction based knowledge merging method that can identify equivalent instances and redundant or contradictory knowledge according to the property restrictions defined in the domain ontology, and that can consolidate the knowledge about equivalent instances and discard the redundancy and conflict to keep the KB compact and consistent. This knowledge merging method has been used in a semantic-based search engine project, CRAB, and the results are satisfactory.

  8. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Yin, Xin-Chun; Chen, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting which can avoid the loss of advertising clicks due to objective reasons and can track changes in the user's interests in a timely manner. Experiments show that the new method is effective and can further be applied to online systems.

  9. Bayesian individualization via sampling-based methods.

    PubMed

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis then a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
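
    A minimal sketch of the sampling-based scheme the abstract describes: draw posterior samples of the patient's pharmacokinetic parameters, form a Monte Carlo estimate of the expected loss for each candidate dosage regimen, and minimize. The one-compartment steady-state model, the quadratic loss, and all numerical values below are illustrative assumptions, not the paper's specification.

      import numpy as np

      rng = np.random.default_rng(0)

      # Posterior samples of clearance CL (L/h) would come from the patient's
      # sparse concentration data; a lognormal posterior is assumed here
      # purely for illustration.
      CL = rng.lognormal(np.log(5.0), 0.2, size=2000)

      target = 10.0   # desired average steady-state concentration (mg/L), assumed
      tau = 12.0      # dosing interval (h), assumed

      def expected_loss(dose):
          css = dose / (CL * tau)              # average steady-state concentration
          return np.mean((css - target) ** 2)  # Monte Carlo estimate, quadratic loss

      doses = np.arange(100, 2001, 25)         # candidate maintenance doses (mg)
      best = doses[np.argmin([expected_loss(d) for d in doses])]
      print("dose minimizing the estimated expected loss:", best, "mg")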

  10. A T-EOF Based Prediction Method.

    NASA Astrophysics Data System (ADS)

    Lee, Yung-An

    2002-01-01

    A new statistical time series prediction method based on temporal empirical orthogonal functions (T-EOFs) is introduced in this study. The method first applies singular spectrum analysis (SSA) to extract dominant T-EOFs from historical data. Then, the most recent data are projected onto an optimal subset of the T-EOFs to estimate the corresponding temporal principal components (T-PCs). Finally, a forecast is constructed from these T-EOFs and T-PCs. Results from forecast experiments on the El Niño sea surface temperature (SST) indices from 1993 to 2000 showed that this method consistently yielded better correlation skill than autoregressive models for lead times longer than 6 months. Furthermore, the correlation skill of this method in predicting the Niño-3 index remained above 0.5 for lead times up to 36 months during this period. However, this method still encountered the `spring barrier' problem. Because the 1990s exhibited a relatively weak spring barrier, these results indicate that the T-EOF based prediction method has a certain extended forecasting capability in periods when the spring barrier is weak. They also suggest that the potential predictability of ENSO in a certain period may be longer than previously thought.
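
    A compact numpy sketch of the core mechanics only (the window length, truncation order, and toy series are assumptions, and the paper's actual forecast-extension step is omitted): SSA extracts the T-EOFs from a trajectory matrix, and projecting the most recent window onto a truncated basis yields the T-PCs.

      import numpy as np

      def teof_basis(x, window, k):
          n = len(x) - window + 1
          X = np.stack([x[i:i + window] for i in range(n)])  # SSA trajectory matrix
          _, _, Vt = np.linalg.svd(X, full_matrices=False)
          return Vt[:k]                        # rows are the leading k T-EOFs

      x = np.sin(2 * np.pi * np.arange(300) / 40) + 0.1 * np.random.randn(300)
      E = teof_basis(x, window=60, k=4)
      recent = x[-60:]                         # most recent data window
      tpcs = E @ recent                        # T-PCs: projections onto the T-EOFs
      approx = E.T @ tpcs                      # truncated reconstruction
      print("relative error:", np.linalg.norm(recent - approx) / np.linalg.norm(recent))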

  11. Bare PCB test method based on AI

    NASA Astrophysics Data System (ADS)

    Li, Aihua; Zhou, Huiyang; Wan, Nianhong; Qu, Liangsheng

    1995-08-01

    Conventional methods for developing test sets on current automated printed circuit board (PCB) test machines overlook information from CAD, historical test data, and expert knowledge. As a result, the generated test sets and proposed test sequences may be sub-optimal and inefficient. This paper presents a weighted bare PCB test method based on analysis and utilization of the CAD information. An AI technique is applied for fault statistics and fault identification. The generation of test sets and the planning of the test procedure are also discussed. A faster and more efficient test system is achieved.

  12. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on Harris corners. First, a Harris operator, combined with a constructed spline-based low-pass smoothing filter and a circular window search, is applied to detect image corners, which gives better localisation performance and effectively avoids corner clustering. Second, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus. Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosting and improve the accuracy of image mosaicking.
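
    A rough OpenCV sketch of this pipeline under stated substitutions: goodFeaturesToTrack with the Harris option stands in for the paper's spline-smoothed Harris detector, pyramidal Lucas-Kanade tracking stands in for the correlation registration, and the weighted fusion step is reduced to a plain paste. File names are placeholders.

      import cv2
      import numpy as np

      img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
      img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

      # Harris-based corner detection.
      p1 = cv2.goodFeaturesToTrack(img1, 500, 0.01, 10, useHarrisDetector=True)

      # Local patch tracking as a stand-in for correlation feature registration.
      p2, st, _ = cv2.calcOpticalFlowPyrLK(img1, img2, p1, None)
      good1, good2 = p1[st == 1], p2[st == 1]

      # RANSAC removes false registrations while fitting a homography.
      H, mask = cv2.findHomography(good2, good1, cv2.RANSAC, 3.0)

      warped = cv2.warpPerspective(img2, H, (img1.shape[1] * 2, img1.shape[0]))
      canvas = warped.copy()
      canvas[:, :img1.shape[1]] = img1  # simple paste; feathered blending omitted
      cv2.imwrite("mosaic.jpg", canvas)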

  13. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on the GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation, and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW surface based Poisson solvation energy with an average relative error less than 0.6% while providing an almost linear-scaling calculation for a representative set of 25 proteins with different sizes (from 2815 atoms to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way for performing implicit solvent GB simulations of larger biomolecular systems at longer time scales.
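
    For orientation, here is a direct O(N²) summation of the GB solvation energy using the common Still-style effective distance f_GB (a widely used form, not necessarily GBr6's); the paper's contribution is to evaluate this kind of pair sum with a treecode in O(N log N), which is not reproduced in this sketch.

      import numpy as np

      def gb_energy(q, pos, R, eps_in=1.0, eps_out=80.0):
          """q: charges, pos: Nx3 coordinates, R: effective Born radii."""
          pref = -0.5 * (1.0 / eps_in - 1.0 / eps_out)
          E = 0.0
          N = len(q)
          for i in range(N):
              for j in range(N):
                  # Still-style effective distance; i == j gives the self term.
                  r2 = np.sum((pos[i] - pos[j]) ** 2)
                  f = np.sqrt(r2 + R[i] * R[j] * np.exp(-r2 / (4 * R[i] * R[j])))
                  E += pref * q[i] * q[j] / f
          return E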

  14. A multicore based parallel image registration method.

    PubMed

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark based registration, the most important step is establishing the correspondence among the selected landmark points. This usually requires an extensive search which is often computationally expensive. We introduce a nonregular data partition algorithm using the K-means clustering algorithm to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method using the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
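
    A small sketch of the nonregular partition idea (the per-core matching work is a placeholder, and the data are synthetic): K-means groups the landmarks into as many clusters as there are cores, so each core receives a spatially coherent subset.

      import numpy as np
      from multiprocessing import Pool, cpu_count
      from sklearn.cluster import KMeans

      def match_group(pts):
          # Placeholder for the per-core landmark correspondence search.
          return len(pts)

      if __name__ == "__main__":
          landmarks = np.random.rand(1000, 2)       # (x, y) landmark positions
          k = cpu_count()                           # one cluster per core
          labels = KMeans(n_clusters=k, n_init=10).fit_predict(landmarks)
          groups = [landmarks[labels == c] for c in range(k)]
          with Pool(k) as pool:
              print(pool.map(match_group, groups))  # one result per core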

  15. Lagrangian based methods for coherent structure detection

    SciTech Connect

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  16. Lagrangian based methods for coherent structure detection.

    PubMed

    Allshouse, Michael R; Peacock, Thomas

    2015-09-01

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows. PMID:26428570

  17. Chapter 11. Community analysis-based methods

    SciTech Connect

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  18. Method for extruding pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch based foam is disclosed. The method includes the steps of: forming a viscous pitch foam; passing the precursor through an extrusion tube; and subjecting the precursor in said extrusion tube to a temperature gradient which varies along the length of the extrusion tube to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber, in which a viscous pitch foam formed in the chamber passes through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.

  19. Homogenization method based on the inverse problem

    SciTech Connect

    Tota, A.; Makai, M.

    2013-07-01

    We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, provided that the fluxes and the currents on the external boundary, and the region averaged fluxes, are preserved. The method is developed using the diffusion approximation to the neutron transport equation in a symmetrical slab geometry. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined: the first derives the boundary current from the boundary flux, the second derives the flux integral over the region from the boundary flux. Assuming that these RMs are known, we present a formula which reconstructs the multi-group cross-section matrix and the diffusion coefficients from the RMs of a homogeneous slab. Applying this formula to the RMs of a slab with multiple homogeneous regions yields a homogenization method which produces homogenized multi-group cross sections and homogenized diffusion coefficients such that the fluxes and the currents on the external boundary, and the region averaged fluxes, are preserved. The method is based on the determination of the eigenvalues and the eigenvectors of the RMs. We reproduce the four-group cross section matrix and the diffusion constants from the RMs in numerical examples. We give conditions for replacing a heterogeneous region by a homogeneous one so that the boundary current and the region-averaged flux are preserved for a given boundary flux. (authors)

  20. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  1. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  2. Subjective evidence based ethnography: method and applications.

    PubMed

    Lahlou, Saadi; Le Bellu, Sophie; Boesen-Mariani, Sabine

    2015-06-01

    Subjective Evidence Based Ethnography (SEBE) is a method designed to access subjective experience. It uses First Person Perspective (FPP) digital recordings as a basis for analytic Replay Interviews (RIW) with the participants. This triggers their memory and enables a detailed step by step understanding of activity: goals, subgoals, determinants of actions, decision-making processes, etc. This paper describes the technique and two applications. First, the analysis of professional practices for know-how transferring purposes in industry is illustrated with the analysis of nuclear power-plant operators' gestures. This shows how SEBE enables modelling activity, describing good and bad practices, risky situations, and expert tacit knowledge. Second, the analysis of full days lived by Polish mothers taking care of their children is described, with a specific focus on how they manage their eating and drinking. This research has been done on a sub-sample of a large scale intervention designed to increase plain water drinking vs sweet beverages. It illustrates the interest of SEBE as an exploratory technique in complement to other more classic approaches such as questionnaires and behavioural diaries. It provides the detailed "how" of the effects that are measured at aggregate level by other techniques. PMID:25579747

  3. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  4. Multifractal Framework Based on Blanket Method

    PubMed Central

    Paskaš, Milorad P.; Reljin, Irini S.; Reljin, Branimir D.

    2014-01-01

    This paper proposes local multifractal measures motivated by the blanket method for the calculation of fractal dimension. They cover both fractal approaches familiar in image processing: two of the measures (proposed Methods 1 and 3) support a model of the image with embedding dimension three, while the other supports a model of the image embedded in a space of dimension three (proposed Method 2). While the classical blanket method provides only one value for an image (the fractal dimension), the multifractal spectrum obtained by any of the proposed measures gives a whole range of dimensional values. This means that the proposed multifractal blanket model generalizes the classical (monofractal) blanket method and other versions of this monofractal approach implemented locally. The proposed measures are validated on the Brodatz image database through texture classification. All proposed methods give similar classification results, while the average computation time of Method 3 is substantially longer. PMID:24578664
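
    For reference, a numpy/scipy sketch of the classical (monofractal) blanket method of Peleg et al. that the paper generalizes: upper and lower blankets are dilated and eroded scale by scale, and the fractal dimension follows from the slope of log-area versus log-scale. The window size and number of scales are assumptions.

      import numpy as np
      from scipy.ndimage import grey_dilation, grey_erosion

      def blanket_dimension(img, n_scales=10):
          u = img.astype(float)  # upper blanket
          b = img.astype(float)  # lower blanket
          areas, scales = [], []
          for eps in range(1, n_scales + 1):
              u = np.maximum(u + 1, grey_dilation(u, size=(3, 3)))
              b = np.minimum(b - 1, grey_erosion(b, size=(3, 3)))
              areas.append((u - b).sum() / (2 * eps))  # blanket surface area
              scales.append(eps)
          # A(eps) ~ eps^(2 - D)  =>  D = 2 - slope of log A vs log eps
          slope = np.polyfit(np.log(scales), np.log(areas), 1)[0]
          return 2 - slope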

  5. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, Andrew M.; Dawson, John

    1993-01-01

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.

  6. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, A.M.; Dawson, J.

    1993-12-14

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.

  7. New ITF measure method based on fringes

    NASA Astrophysics Data System (ADS)

    Fang, Qiaoran; Liu, Shijie; Gao, Wanrong; Zhou, You; Liu, HuanHuan

    2016-01-01

    With the rapid development of high-power laser and aerospace projects, interferometers are widely used to measure mid-spatial-frequency specifications of optical elements, which places very high demands on the interferometer system transfer function (ITF). Conventionally, the ITF is measured by comparing the power spectra of known phase objects such as a high-quality phase step. However, the fabrication of a phase step is complex and costly, especially for the measurement of large-aperture interferometers. In this paper, a new fringe method is proposed to measure the ITF without additional objects. The frequency is changed by adjusting the number of fringes, and the normalized transfer function is measured at different frequencies. The ITF values measured by the fringe method are consistent with those from the traditional phase-step method, which confirms the feasibility of the proposed method. Moreover, the measurement error caused by defocus is analyzed. The proposed method does not require the preparation of a step artifact, which greatly reduces the test cost and is of great significance for the ITF measurement of large-aperture interferometers.
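
    A hedged sketch of the measurement idea, under simplifying assumptions about fringe orientation, normalization, and the ideal reference: each tilt setting puts an integer number of fringes across the aperture, i.e. one spatial frequency, and the ratio of measured to ideal fringe modulation at that FFT bin samples the ITF there.

      import numpy as np

      def fringe_modulation(interferogram, n_fringes):
          """Modulation at the fringe frequency: rfft bin k = k fringes/aperture."""
          row = interferogram.mean(axis=0)   # fringes assumed vertical
          return np.abs(np.fft.rfft(row))[n_fringes]

      # For each fringe count k, the normalized ITF sample would then be
      #   itf[k] = fringe_modulation(measured[k], k) / fringe_modulation(ideal[k], k)
      # where measured[k] / ideal[k] are the recorded and ideal fringe patterns.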

  8. HMM-Based Gene Annotation Methods

    SciTech Connect

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  9. Immunoassay control method based on light scattering

    NASA Astrophysics Data System (ADS)

    Bilyi, Olexander I.; Kiselyov, Eugene M.; Petrina, R. O.; Ferensovich, Yaroslav P.; Yaremyk, Roman Y.

    1999-11-01

    The physical principles of registering immune reactions by light scattering methods are considered. The operation of laser nephelometry for measuring antigen-antibody reactions is described. The technique of obtaining diagnostic latex-agglutination immune reactions for diphtheria determination is described.

  10. Method of casting pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A process for producing molded pitch based foam is disclosed which minimizes cracking. The process includes forming a viscous pitch foam in a container, and then transferring the viscous pitch foam from the container into a mold. The viscous pitch foam in the mold is hardened to provide a carbon foam having a relatively uniform distribution of pore sizes and a highly aligned graphitic structure in the struts.

  11. Roadside-based communication system and method

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron D. (Inventor)

    2007-01-01

    A roadside-based communication system providing backup communication between emergency mobile units and emergency command centers. In the event of failure of a primary communication, the mobile units transmit wireless messages to nearby roadside controllers that may take the form of intersection controllers. The intersection controllers receive the wireless messages, convert the messages into standard digital streams, and transmit the digital streams along a citywide network to a destination intersection or command center.

  12. Method for producing iron-based catalysts

    DOEpatents

    Farcasiu, Malvina; Kaufman, Phillip B.; Diehl, J. Rodney; Kathrein, Hendrik

    1999-01-01

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  13. PCLC flake-based apparatus and method

    DOEpatents

    Cox, Gerald P; Fromen, Cathy A; Marshall, Kenneth L; Jacobs, Stephen D

    2012-10-23

    A PCLC flake/fluid host suspension that enables dual-frequency, reverse drive reorientation and relaxation of the PCLC flakes is composed of a fluid host that is a mixture of: 94 to 99.5 wt % of a non-aqueous fluid medium having a dielectric constant ε, where 1 < ε < 7, a conductivity σ between 10⁻⁹ and 10⁻⁷ Siemens per meter (S/m), and a resistivity r between 10⁷ and 10¹⁰ ohm-meters (Ω·m), and which is optically transparent in a selected wavelength range Δλ; 0.0025 to 0.25 wt % of an inorganic chloride salt; 0.0475 to 4.75 wt % water; and 0.25 to 2 wt % of an anionic surfactant; and 1 to 5 wt % of PCLC flakes suspended in the fluid host mixture. Various encapsulation forms and methods are disclosed, including a basic test cell, a microwell, a microcube, direct encapsulation (I), direct encapsulation (II), and coacervation encapsulation. Applications to display devices are disclosed.

  14. The Consistency and Ranking Method Based on Comparison Linguistic Variable

    NASA Astrophysics Data System (ADS)

    Zhao, Qisheng; Wei, Fajie; Zhou, Shenghan

    The study develops a consistency approximation and ranking method based on comparison linguistic variables. The method constructs a consistent fuzzy complementary judgment matrix from the judgment matrix of linguistic variables; the judgment matrix is defined by the fuzzy set or vague set of the comparison linguistic variables. The method obtains the VPIS and VNIS based on the TOPSIS method, and the relative similarity degrees are defined from the distances between the alternatives and the VPIS or VNIS. The study then analyzes the impact on the quality of evaluation caused by the evaluation method, the index weights, and the appraiser. Finally, possible improvements are discussed, and an example is presented to illustrate the proposed method.

  15. Brain Based Teaching: Fad or Promising Teaching Method.

    ERIC Educational Resources Information Center

    Winters, Clyde A.

    This paper discusses brain-based teaching and examines its relevance as a teaching method and knowledge base. Brain-based teaching is very popular among early childhood educators. Positive attributes of brain-based education include student engagement and active involvement in their own learning, teachers teaching for meaning and understanding,…

  16. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. Results To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. Besides, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Conclusions Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
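
    An illustrative sketch of the overall recipe, with ridge-style shrinkage standing in for the paper's specific shrinkage estimator: pick the k genes most correlated with the target gene, fit a shrunken least-squares regression on the commonly observed samples, and predict the missing entry. The values of k and the shrinkage constant are assumptions.

      import numpy as np

      def impute(X, gene, sample, k=10, shrink=0.1):
          """X: genes x samples with NaNs; estimate the missing X[gene, sample]."""
          obs = ~np.isnan(X[gene])
          obs[sample] = False
          # Candidate genes fully observed where needed.
          cand = [g for g in range(X.shape[0])
                  if g != gene and not np.isnan(X[g, obs]).any()
                  and not np.isnan(X[g, sample])]
          corr = [abs(np.corrcoef(X[gene, obs], X[g, obs])[0, 1]) for g in cand]
          near = [cand[i] for i in np.argsort(corr)[-k:]]   # most similar genes
          A = X[near][:, obs].T                             # predictors
          y = X[gene, obs]
          # Ridge-shrunken least squares coefficients.
          beta = np.linalg.solve(A.T @ A + shrink * np.eye(len(near)), A.T @ y)
          return X[near, sample] @ beta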

  17. Decision Making Method Based on Paraconsistent Annotated Logic and Statistical Method: a Comparison

    NASA Astrophysics Data System (ADS)

    de Carvalho, Fábio Romeu; Brunstein, Israel; Abe, Jair Minoro

    2008-10-01

    Presently, there are new kinds of logic capable of handling uncertain and contradictory data without becoming trivial. Decision making theories based on these logics have been shown to be powerful in many respects compared with more traditional methods based on statistics. In this paper we outline a first study of a decision making theory based on Paraconsistent Annotated Evidential Logic Eτ (the Paraconsistent Decision Method (PDM)) in comparison with the classical Statistical Decision Method (SDM). Some discussion is presented below.
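
    A minimal sketch of the para-analyzer idea underlying logic Eτ, as commonly presented: each alternative carries a degree of favorable evidence mu and of unfavorable evidence lam, from which certainty and contradiction degrees are formed. The decision thresholds here are illustrative assumptions, not the paper's values.

      # mu, lam in [0, 1]: favorable and unfavorable evidence degrees.
      def para_analyze(mu, lam, requirement=0.6):
          certainty = mu - lam             # degree of certainty
          contradiction = mu + lam - 1.0   # degree of contradiction
          if certainty >= requirement:
              return "viable"
          if certainty <= -requirement:
              return "non-viable"
          return "inconclusive (|contradiction| = %.2f)" % abs(contradiction)

      print(para_analyze(0.8, 0.1))  # -> viable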

  18. Pyrolyzed-parylene based sensors and method of manufacture

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Liger, Matthieu (Inventor); Miserendino, Scott (Inventor); Konishi, Satoshi (Inventor)

    2007-01-01

    A method (and resulting structure) for fabricating a sensing device. The method includes providing a substrate comprising a surface region and forming an insulating material overlying the surface region. The method also includes forming a film of carbon based material overlying the insulating material and treating the film of carbon based material to pyrolyze it, causing formation of a film of substantially carbon based material having a resistivity within a predetermined range. The method also provides at least a portion of the pyrolyzed carbon based material in a sensor application and uses that portion of the pyrolyzed carbon based material in the sensing application. In a specific embodiment, the sensing application is selected from chemical, humidity, piezoelectric, radiation, mechanical strain or temperature sensing.

  19. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The signal processing precision of Coriolis mass flowmeter (CMF) signals directly affects the measurement accuracy of Coriolis mass flowmeters. To improve the measurement accuracy of CMFs, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it performs better than the adaptive notch filter, discrete Fourier transform and autocorrelation methods, and for phase difference estimation it performs better than the data extension-based correlation, Hilbert transform, quadrature delay estimator and discrete Fourier transform methods, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
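
    A hedged numpy sketch of the correlation idea on synthetic signals (the quarter-period lag is rounded to whole samples, a small bias a real implementation would handle more carefully): time-averaged products of the two sensor signals give the cosine and sine of the phase difference, with uncorrelated noise averaging out.

      import numpy as np

      fs, f0, n = 10000.0, 97.0, 100000
      t = np.arange(n) / fs
      x1 = np.sin(2 * np.pi * f0 * t) + 0.1 * np.random.randn(n)
      x2 = np.sin(2 * np.pi * f0 * t + 0.5) + 0.1 * np.random.randn(n)

      # Correlation at zero lag gives ~cos(dphi); delaying x1 by roughly a
      # quarter period gives ~sin(dphi). Noise cross terms average out.
      lag = int(round(fs / f0 / 4))                # ~ quarter period in samples
      c = np.mean(x1[:-lag] * x2[:-lag])           # proportional to cos(dphi)
      s = np.mean(x1[lag:] * x2[:-lag])            # proportional to sin(dphi)
      print("estimated phase difference:", np.arctan2(s, c))  # ~ 0.5 rad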

  20. EPA (ENVIRONMENTAL PROTECTION AGENCY) METHOD STUDY 30, METHOD 625 - BASE/NEUTRALS, ACIDS AND PESTICIDES

    EPA Science Inventory

    The work which is described in this report was performed for the purpose of validating, through an interlaboratory study, Method 625 for the analysis of the base/neutral, acid, and pesticide priority pollutants. This method is based on the extraction and concentration of the vari...

  1. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many good techniques have been discussed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive wiener filter (LAWF), wavelet packet thresholding using median and wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not give good image quality because, owing to their thresholds, they cannot modify and remove too many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
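
    To make the family of methods concrete, here is a PyWavelets sketch of NeighShrink-style neighbouring-coefficient shrinkage with the universal threshold; the paper's contribution, an adaptive threshold, is not reproduced. The wavelet, decomposition level, and window size are assumptions.

      import numpy as np
      import pywt
      from scipy.ndimage import uniform_filter

      def neigh_shrink(img, wavelet="db8", win=3):
          coeffs = pywt.wavedec2(img.astype(float), wavelet, level=2)
          # Robust noise estimate from the finest diagonal subband.
          sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
          T2 = 2 * sigma ** 2 * np.log(img.size)   # universal threshold, squared
          out = [coeffs[0]]
          for level in coeffs[1:]:
              shrunk = []
              for d in level:
                  # Energy of the win x win neighbourhood around each coefficient.
                  S2 = uniform_filter(d * d, size=win) * win * win
                  beta = np.clip(1.0 - T2 / np.maximum(S2, 1e-12), 0.0, None)
                  shrunk.append(d * beta)
              out.append(tuple(shrunk))
          return pywt.waverec2(out, wavelet)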

  2. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) is used, and the selection process based on the decision tree is performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results. PMID:26405856
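
    A scikit-learn sketch of the described pipeline on placeholder data (array shapes, component counts, and the number of kept features are assumptions): PCA extracts features, a decision tree's feature importances drive the selection, and a linear SVM classifies.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.svm import SVC
      from sklearn.model_selection import cross_val_score

      X = np.random.randn(200, 500)        # trials x EEG samples (placeholder data)
      y = np.random.randint(0, 2, 200)     # binary labels

      Z = PCA(n_components=30).fit_transform(X)              # feature extraction
      tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
      keep = np.argsort(tree.feature_importances_)[-10:]     # tree-selected features

      print(cross_val_score(SVC(kernel="linear"), Z[:, keep], y, cv=5).mean())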

  3. Propensity Score–Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application*

    PubMed Central

    ZHOU, XIANG; XIE, YU

    2012-01-01

    Since the seminal introduction of the propensity score by Rosenbaum and Rubin, propensity-score-based (PS-based) methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the propensity score approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For situations where this assumption may be violated, Heckman and his associates have recently developed a novel approach based on marginal treatment effects (MTE). In this paper, we (1) explicate consequences for PS-based methods when aspects of the ignorability assumption are violated; (2) compare PS-based methods and MTE-based methods by making a close examination of their identification assumptions and estimation performances; (3) apply these two approaches in estimating the economic return to college using data from NLSY 1979 and discuss their discrepancies in results. When there is a sorting gain but no systematic baseline difference between treated and untreated units given observed covariates, PS-based methods can identify the treatment effect of the treated (TT). The MTE approach performs best when there is a valid and strong instrumental variable (IV). In addition, this paper introduces the “smoothing-difference PS-based method,” which enables us to uncover heterogeneity across people of different propensity scores in both counterfactual outcomes and treatment effects. PMID:26877562
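
    For the PS-based side of the comparison, a minimal sketch of estimating the treatment effect of the treated (TT) by propensity-score matching; logistic regression and one-to-one nearest-neighbour matching are common choices, not necessarily those of the paper.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def att_match(X, treat, y):
          """X: covariates, treat: 0/1 treatment, y: outcomes (numpy arrays)."""
          ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
          t, c = np.where(treat == 1)[0], np.where(treat == 0)[0]
          # Match each treated unit to the control with the closest score.
          matches = c[np.abs(ps[c][None, :] - ps[t][:, None]).argmin(axis=1)]
          return np.mean(y[t] - y[matches])  # TT estimate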

  4. Modeling of Tumor Growth Based on Adomian Decomposition Method

    NASA Astrophysics Data System (ADS)

    Mahiddin, Norhasimah; Ali, Siti Aishah Hashim

    2008-01-01

    Modeling a growing tumor over time is extremely difficult, due to the complex biological phenomena underlying cancer growth. Existing models are mostly based on numerical methods and can describe spherically shaped avascular tumors, but they cannot match the highly heterogeneous and complex-shaped tumors seen in cancer patients. We propose a new technique based on the Adomian decomposition method to solve a cancer model analytically.

  5. Optimal assignment methods for ligand-based virtual screening

    PubMed Central

    2009-01-01

    Background Ligand-based virtual screening experiments are an important task in the early drug discovery stage. An ambitious aim in each experiment is to disclose active structures based on new scaffolds. To perform these "scaffold-hoppings" for individual problems and targets, a plethora of different similarity methods based on diverse techniques have been published in recent years. The optimal assignment approach on molecular graphs, a successful method in the field of quantitative structure-activity relationships, had not been tested as a ligand-based virtual screening method so far. Results We evaluated two already published and two new optimal assignment methods on various data sets. To emphasize the "scaffold-hopping" ability, we used the information of chemotype clustering analyses in our evaluation metrics. Comparisons with literature results show an improved early recognition performance and comparable results over the complete data set. A new method based on two different assignment steps shows increased "scaffold-hopping" behavior together with good early recognition performance. Conclusion The presented methods show a good combination of chemotype discovery and enrichment of active structures. Additionally, the optimal assignment on molecular graphs has the advantage that the mappings can be investigated and interpreted, allowing precise modifications of internal parameters of the similarity measure for specific targets. All methods have low computation times, which makes them applicable to screening large data sets. PMID:20150995
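
    A sketch of the optimal-assignment similarity at the heart of these methods, with a generic RBF atom-pair similarity standing in for the published atom descriptors: scipy's Hungarian solver finds the atom mapping that maximizes total similarity, and the score is normalized by molecule size.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      def oa_similarity(desc_a, desc_b):
          """desc_a, desc_b: per-atom descriptor matrices (n_a x d, n_b x d)."""
          diff = desc_a[:, None, :] - desc_b[None, :, :]
          sim = np.exp(-np.linalg.norm(diff, axis=2))   # RBF-style atom similarity
          rows, cols = linear_sum_assignment(-sim)      # maximize total similarity
          score = sim[rows, cols].sum()
          return score / max(len(desc_a), len(desc_b))  # size normalization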

  6. Spectrum reconstruction based on the constrained optimal linear inverse methods.

    PubMed

    Ren, Wenyi; Zhang, Chunmin; Mu, Tingkui; Dai, Haishan

    2012-07-01

    The dispersion effect of birefringent material results in a spectrally varying Nyquist frequency for a Fourier transform spectrometer based on a birefringent prism. Correct spectral information cannot be retrieved from the observed interferogram if the dispersion effect is not appropriately compensated. Some methods, such as nonuniform fast Fourier transforms and a compensation method, have been proposed to reconstruct the spectrum. In this Letter, an alternative constrained spectrum reconstruction method is suggested for the stationary polarization interference imaging spectrometer (SPIIS) based on the Savart polariscope. In the theoretical model of the interferogram, the noise and the total measurement error are included, and the spectrum reconstruction is performed using constrained optimal linear inverse methods. Numerical simulation shows that the proposed method is much more effective and robust than the unconstrained spectrum reconstruction method proposed by Jian, and it provides a useful spectrum reconstruction approach for the SPIIS. PMID:22743461
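
    A hedged sketch of one constrained optimal linear inverse: Tikhonov-regularized least squares with a nonnegativity constraint, solved by stacking the regularizer into an NNLS problem. The actual measurement operator, constraints, and error model of the SPIIS method are more detailed than this.

      import numpy as np
      from scipy.optimize import nnls

      def reconstruct(A, i_meas, lam=1e-3):
          """min ||A s - i||^2 + lam ||s||^2  subject to  s >= 0.
          A: measurement matrix (spectrum -> interferogram samples)."""
          n = A.shape[1]
          A_aug = np.vstack([A, np.sqrt(lam) * np.eye(n)])  # stacked regularizer
          b_aug = np.concatenate([i_meas, np.zeros(n)])
          s, _ = nnls(A_aug, b_aug)
          return s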

  7. A Novel Method for Learner Assessment Based on Learner Annotations

    ERIC Educational Resources Information Center

    Noorbehbahani, Fakhroddin; Samani, Elaheh Biglar Beigi; Jazi, Hossein Hadian

    2013-01-01

    Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

  8. Learning to Teach within Practice-Based Methods Courses

    ERIC Educational Resources Information Center

    Kazemi, Elham; Waege, Kjersti

    2015-01-01

    Supporting prospective teachers to enact high quality instruction requires transforming their methods preparation. This study follows three teachers through a practice-based elementary methods course. Weekly class sessions took place in an elementary school. The setting afforded opportunities for prospective teachers to engage in cycles of…

  9. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  10. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  11. A Channelization-Based DOA Estimation Method for Wideband Signals.

    PubMed

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
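
    A compact numpy sketch of the Channelization-ISM branch, under simplifying assumptions (uniform linear array, plain FFT channelizer, baseband model, crude peak picking): each FFT sub-channel is treated as narrowband, MUSIC runs per channel at that channel's centre frequency, and the pseudospectra are averaged incoherently.

      import numpy as np

      def music_spectrum(R, freq, d, grid, n_src):
          m = R.shape[0]
          w, V = np.linalg.eigh(R)
          En = V[:, : m - n_src]                       # noise subspace
          c = 3e8
          out = []
          for th in grid:                              # angles in radians
              a = np.exp(-2j * np.pi * freq * d * np.arange(m) * np.sin(th) / c)
              p = a.conj() @ En @ En.conj().T @ a
              out.append(1.0 / np.real(p))             # MUSIC pseudospectrum
          return np.array(out)

      def channelized_doa(x, fs, fc, d, grid, n_src, nfft=64):
          m, n = x.shape                               # sensors x samples
          blocks = x[:, : n // nfft * nfft].reshape(m, -1, nfft)
          X = np.fft.fft(blocks, axis=2)               # sub-channel snapshots
          spec = np.zeros(len(grid))
          for k in range(nfft):
              f_k = fc + np.fft.fftfreq(nfft, 1 / fs)[k]
              Xk = X[:, :, k]
              R = Xk @ Xk.conj().T / Xk.shape[1]       # sub-channel covariance
              spec += music_spectrum(R, f_k, d, grid, n_src)
          # Crude peak picking; a real implementation would locate local maxima.
          return grid[np.argsort(spec)[-n_src:]]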

  12. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, G.F.; Steindler, M.J.

    1985-05-21

    A method of removing a phosphorus-based poisonous substance from contaminated water is presented, in which the toxicity of the phosphorus-based substance is also subsequently destroyed. A water-immiscible organic solvent is first immobilized on a supported liquid membrane before the contaminated water is contacted with one side of the supported liquid membrane to absorb the phosphorus-based substance into the organic solvent. The other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react with the phosphorus-based solvated species to form a non-toxic product.

  13. [Synchrotron-based characterization methods applied to ancient materials (I)].

    PubMed

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article presents the first results of a transdisciplinary research programme in heritage sciences. Based on the growing use and potential of synchrotron-based micro- and nano-characterization methods for studying ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution identifies and tests conceptual and methodological points of convergence between the physicochemical and historical sciences. PMID:25200450

  14. Isolate Speech Recognition Based on Time-Frequency Analysis Methods

    NASA Astrophysics Data System (ADS)

    Mantilla-Caeiros, Alfredo; Nakano Miyatake, Mariko; Perez-Meana, Hector

    A feature extraction method for isolated speech recognition is proposed, based on a time-frequency analysis using a critical band concept similar to that of the inner ear model; it emulates the inner ear's behavior by performing a signal decomposition similar to that carried out by the basilar membrane. Evaluation results show that the proposed method performs better than other previously proposed feature extraction methods when used to characterize normal as well as esophageal speech signals.

  15. Fertility awareness-based methods: another option for family planning.

    PubMed

    Pallone, Stephen R; Bergus, George R

    2009-01-01

    Modern fertility awareness-based methods (FABMs) of family planning have been offered as alternative methods of family planning. Billings Ovulation Method, the Creighton Model, and the Symptothermal Method are the more widely used FABMs and can be more narrowly defined as natural family planning. The first 2 methods are based on the examination of cervical secretions to assess fertility. The Symptothermal Method combines characteristics of cervical secretions, basal body temperature, and historical cycle data to determine fertility. FABMs also include the more recently developed Standard Days Method and TwoDays Method. All are distinct from the more traditional rhythm and basal body temperature methods alone. Although these older methods are not highly effective, modern FABMs have typical-use unintended pregnancy rates of 1% to 3% in both industrialized and nonindustrialized nations. Studies suggest that in the United States physician knowledge of FABMs is frequently incomplete. We review the available evidence about the effectiveness for preventing unintended pregnancy, prognostic social demographics of users of the methods, and social outcomes related to FABMs, all of which suggest that family physicians can offer modern FABMs as effective means of family planning. We also provide suggestions about useful educational and instructional resources for family physicians and their patients. PMID:19264938

  16. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
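
    A minimal sketch of the two steps named above, assuming training reflectances R_train (n x 31) and camera responses C_train (n x 3) as arrays; the sphere radius and the regularization constant are illustrative, not values from the paper:

    import numpy as np

    def select_by_response_sphere(C_train, c_test, radius):
        # Keep only training samples whose camera response lies inside a
        # sphere around the test response (the paper's selection idea)
        d = np.linalg.norm(C_train - c_test, axis=1)
        return np.where(d <= radius)[0]

    def wiener_matrix(R_train, C_train, noise_var=1e-4):
        # Wiener estimation: W = K_rc @ inv(K_cc), with a small diagonal
        # loading term standing in for measurement noise
        n = len(R_train)
        K_rc = R_train.T @ C_train / n       # reflectance-response cross-correlation
        K_cc = C_train.T @ C_train / n       # response autocorrelation
        return K_rc @ np.linalg.inv(K_cc + noise_var * np.eye(C_train.shape[1]))

    # idx = select_by_response_sphere(C, c_test, radius)
    # r_hat = wiener_matrix(R[idx], C[idx]) @ c_test   reconstructs one spectrum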

  17. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1987-10-07

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  18. Method of recovering oil-based fluid and apparatus

    SciTech Connect

    Brinkley, H.E.

    1993-07-20

    A method is described for recovering oil-based fluid from a surface having oil-based fluid thereon comprising the steps of: applying to the oil-based fluid on the surface an oil-based fluid absorbent cloth of man-made fibers, the cloth having at least one napped surface that defines voids therein, the nap being formed of raised ends or loops of the fibers; absorbing, with the cloth, oil-based fluid; feeding the cloth having absorbed oil-based fluid to a means for applying a force to the cloth to recover oil-based fluid; and applying force to the cloth to recover oil-based fluid therefrom using the force applying means.

  19. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.

    1990-01-01

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.

  20. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1990-10-09

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  1. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  2. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have gained increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
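
    The O(n^2) memory cost comes from materializing the full pairwise distance matrix. A hedged sketch of the constant-memory idea (not the authors' exact algorithm) accumulates pairwise distances directly into a fixed histogram, one point at a time:

    import numpy as np

    def distance_histogram(points, bin_edges):
        # Histogram of all pairwise distances without storing an n x n matrix;
        # memory is O(n) for one row of distances plus the fixed bin counts
        counts = np.zeros(len(bin_edges) - 1, dtype=np.int64)
        for i in range(len(points) - 1):
            d = np.linalg.norm(points[i + 1:] - points[i], axis=1)  # each pair once
            counts += np.histogram(d, bins=bin_edges)[0]
        return counts

    pts = np.random.rand(5000, 2)
    edges = np.linspace(0.0, 1.5, 51)
    print(distance_histogram(pts, edges).sum())   # 5000 * 4999 / 2 pairs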

  3. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by disease, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor, the local generalized Hurst exponent, denoted LHq, based on MF-DFA. Then, the box-counting dimension f(LHq) is calculated for sub-images constituted by the LHq values of pixels from a specific region, yielding a series of f(LHq) values for the different regions. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of corn leaves with six kinds of diseases are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method recognizes the lesion regions more effectively and provides more robust segmentations.
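
    For orientation, the sketch below shows ordinary one-dimensional DFA, the machinery from which the generalized Hurst exponent derives; the paper's multifractal, two-dimensional variant over image regions is not reproduced here:

    import numpy as np

    def dfa_hurst(series, scales=(8, 16, 32, 64, 128)):
        # Estimate the Hurst exponent of a 1-D series by detrended
        # fluctuation analysis: slope of log F(s) versus log s
        profile = np.cumsum(series - np.mean(series))    # integrated profile
        flucts = []
        for s in scales:
            n_seg = len(profile) // s
            rms = []
            for k in range(n_seg):
                seg = profile[k * s:(k + 1) * s]
                t = np.arange(s)
                trend = np.polyval(np.polyfit(t, seg, 1), t)  # local linear detrend
                rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
            flucts.append(np.mean(rms))
        return np.polyfit(np.log(scales), np.log(flucts), 1)[0]

    print(dfa_hurst(np.random.randn(4096)))   # close to 0.5 for white noise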

  4. An overview of modal-based damage identification methods

    SciTech Connect

    Farrar, C.R.; Doebling, S.W.

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria, such as the level of damage detection provided, model-based vs. non-model-based methods, and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that do not depend on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms, including difficulties associated with their implementation and their fidelity. Past, current, and planned future applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  5. Correction of Misclassifications Using a Proximity-Based Estimation Method

    NASA Astrophysics Data System (ADS)

    Niemistö, Antti; Shmulevich, Ilya; Lukin, Vladimir V.; Dolia, Alexander N.; Yli-Harja, Olli

    2004-12-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
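
    A hedged sketch of the sliding-window idea for nominal classes: each output label becomes the class with the smallest total proximity to the labels in the window, given a proximity matrix P (hypothetical here; the paper obtains P from perception studies or learns it with genetic algorithms):

    import numpy as np

    def proximity_correct(labels, P, half_window=2):
        # labels: 1-D integer class sequence; P[a, b] is the proximity
        # ("distance") between classes a and b, 0 on the diagonal
        out = labels.copy()
        n, k = len(labels), P.shape[0]
        for i in range(n):
            window = labels[max(0, i - half_window):i + half_window + 1]
            costs = [P[c, window].sum() for c in range(k)]  # cost of each class
            out[i] = int(np.argmin(costs))                  # closest class overall
        return out

    P = np.array([[0, 1, 3], [1, 0, 2], [3, 2, 0]], float)  # hypothetical proximities
    print(proximity_correct(np.array([0, 0, 2, 0, 0, 1, 1]), P))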

  6. Integrated navigation method based on inertial navigation system and Lidar

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on the inertial navigation system (INS) and Lidar was proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment is considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy was analyzed. Based on the analysis, the integrated system error model of INS and Lidar was established, in which the IMU scale factor and misalignment error states were included. Then the observability of the IMU error states was analyzed, and the integrated system was optimized according to the results. Finally, numerical simulation and a vehicle test were carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of a traditional integrated navigation method and the DR method, the proposed method achieved higher navigation precision. Consequently, the IMU scale factor and misalignment errors are effectively compensated by the proposed method, and the new integrated navigation method is valid.

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
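
    An illustrative one-variable instance of the idea, not taken from the paper: for a quantity with exact dependence delta ~ h^-3 (a cantilever tip deflection versus beam height), the sensitivity equation d(delta)/dh = -3*delta/h integrates to a closed form that tracks the exact curve where the linear Taylor series drifts:

    # DEB-style closed-form approximation vs. linear Taylor series for a
    # quantity with exact dependence delta ~ h**-3 (illustrative example)
    h0, delta0 = 0.10, 5.0   # baseline height [m] and tip deflection [mm]
    for h in (0.10, 0.11, 0.12, 0.13):
        exact = delta0 * (h0 / h) ** 3
        deb = delta0 * (h0 / h) ** 3                   # solves d(delta)/dh = -3*delta/h
        taylor = delta0 * (1.0 - 3.0 * (h - h0) / h0)  # first-order Taylor series
        print(f"h={h:.2f}  exact={exact:.3f}  DEB={deb:.3f}  Taylor={taylor:.3f}")
    # For this simple case the DEB form coincides with the exact solution,
    # while the Taylor line already errs by roughly 30% at a 20% height change.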

  8. Consistency-based ellipse detection method for complicated images

    NASA Astrophysics Data System (ADS)

    Zhang, Lijun; Huang, Xuexiang; Feng, Weichun; Liang, Shuli; Hu, Tianjian

    2016-05-01

    Accurate ellipse detection in complicated images is a challenging problem due to corruption from image clutter, noise, or occlusion by other objects. To cope with this problem, an edge-following-based ellipse detection method is proposed which improves the performance of its subprocesses by exploiting consistency. The ellipse detector models edge connectivity by line segments and exploits inconsistent endpoints of the line segments to split the edge contours into smooth arcs. The smooth arcs are further refined with a novel arc refinement method which iteratively improves the consistency degree of each smooth arc. A two-phase arc integration method is developed to group disconnected elliptical arcs belonging to the same ellipse, and two consistency-based constraints are defined to increase the effectiveness and speed of the merging process. Finally, an efficient ellipse validation method is proposed to evaluate the saliency of the elliptic hypotheses. Detailed evaluation on synthetic images shows that our method outperforms other state-of-the-art ellipse detection methods in terms of effectiveness and speed. Additionally, we test our detector on three challenging real-world datasets. The F-measure scores and execution times of the results demonstrate that our method is effective and fast on complicated images. Therefore, the proposed method is suitable for practical applications.

  9. Multi-Point Combinatorial Optimization Method with Distance Based Interaction

    NASA Astrophysics Data System (ADS)

    Yasuda, Keiichiro; Jinnai, Hiroyuki; Ishigame, Atsushi

    This paper proposes a multi-point combinatorial optimization method based on the Proximate Optimality Principle (POP), which has several advantages for solving large-scale combinatorial optimization problems. The proposed algorithm uses not only the distance between search points but also the interaction among search points in order to exploit POP in several types of combinatorial optimization problems. The proposed algorithm is applied to several typical combinatorial optimization problems: a knapsack problem, a traveling salesman problem, and a flow shop scheduling problem, in order to verify its performance. The simulation results indicate that the proposed method achieves higher optimality than conventional combinatorial optimization methods.

  10. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve image edges in the segmentation. The shortcomings of Otsu's method based on gray-level histograms are analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, and the edge energy function of an image is simulated by discretizing the integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
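
    One plausible discretization of the edge-energy criterion, offered as a sketch rather than the authors' exact functional: score each candidate threshold by the gradient magnitude collected along the resulting binarization boundary, and keep the maximizer (8-bit grayscale input assumed):

    import numpy as np

    def edge_energy_threshold(img):
        # Pick the threshold that maximizes gradient energy along the
        # binarization boundary (one plausible discretization of the idea)
        gy, gx = np.gradient(img.astype(float))
        gmag = np.hypot(gx, gy)
        best_t, best_e = 0, -1.0
        for t in range(1, 255):
            b = img >= t
            # boundary pixels: binary value differs from a 4-neighbour
            edge = np.zeros_like(b)
            edge[:-1, :] |= b[:-1, :] != b[1:, :]
            edge[:, :-1] |= b[:, :-1] != b[:, 1:]
            e = gmag[edge].sum()
            if e > best_e:
                best_t, best_e = t, e
        return best_t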

  11. Unstructured road segmentation based on Otsu-entropy method

    NASA Astrophysics Data System (ADS)

    Shi, Chaoxia; Wang, Yanqing; Liu, Hanxiang; Yang, Jingyu

    2011-10-01

    Unstructured road segmentation plays an important role in vision-guided navigation for intelligent vehicles. A novel vision-based road segmentation method that combines the Otsu double-threshold method with the maximum-entropy double-threshold method is proposed to handle problems caused by illumination variations and road surface dilapidation. Spatial correlation, obtained by analyzing the grey-level histogram of the original image, and temporal correlation, obtained by matching a selected reference region, are used to estimate the coarse extent of the road region. Road segmentation experiments in different road scenes demonstrate that the proposed method is robust against illumination variations and surface dilapidation.

  12. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  13. A Minimum Spanning Tree Based Method for UAV Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wei, Zheng; Cui, Weihong; Lin, Zhiyong

    2016-06-01

    This paper proposes a Minimum Spanning Tree (MST) based image segmentation method for UAV images of coastal areas. An edge-weight-based optimality criterion (merging predicate) is defined based on statistical learning theory (SLT), and a scale parameter is used to control the segmentation scale. Experiments on high-resolution UAV images of coastal areas show that the proposed merging predicate preserves the integrity of objects and prevents over-segmentation. The segmentation results prove the method's efficiency in segmenting richly textured images while maintaining good object boundaries.
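
    A hedged sketch of MST-style region merging; the paper's SLT-derived merging predicate is replaced here by the well-known Felzenszwalb-Huttenlocher relaxation, in which the constant k plays the role of the scale control parameter:

    import numpy as np

    def mst_segment(img, k=500.0):
        # Sort 4-neighbour edges by weight, then merge regions when the edge
        # weight passes a relaxed internal-difference test (FH-style predicate)
        h, w = img.shape
        idx = lambda y, x: y * w + x
        edges = []
        for y in range(h):
            for x in range(w):
                if x + 1 < w:
                    edges.append((abs(float(img[y, x]) - img[y, x + 1]),
                                  idx(y, x), idx(y, x + 1)))
                if y + 1 < h:
                    edges.append((abs(float(img[y, x]) - img[y + 1, x]),
                                  idx(y, x), idx(y + 1, x)))
        edges.sort()
        parent = list(range(h * w))
        internal = [0.0] * (h * w)   # max internal edge weight per region
        size = [1] * (h * w)

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        for wgt, a, b in edges:
            ra, rb = find(a), find(b)
            if ra != rb and wgt <= min(internal[ra] + k / size[ra],
                                       internal[rb] + k / size[rb]):
                parent[rb] = ra
                size[ra] += size[rb]
                internal[ra] = max(internal[ra], internal[rb], wgt)
        return np.array([find(i) for i in range(h * w)]).reshape(h, w)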

  14. The Reality-Based Learning Method: A Simple Method for Keeping Teaching Activities Relevant and Effective

    ERIC Educational Resources Information Center

    Smith, Louise W.; Van Doren, Doris C.

    2004-01-01

    Active and experiential learning theory have not dramatically changed collegiate classroom teaching methods, although they have long been included in the pedagogical literature. This article presents an evolved method, reality based learning, that aids professors in including active learning activities with feelings of clarity and confidence. The…

  15. Camera self-calibration method based on two vanishing points

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Xu, Mengmeng; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Liang, Erjun; Liu, Xiaomin

    2015-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as on other occasions such as zooming. In this paper, a self-calibration method based on two vanishing points is proposed, in which the geometric properties of the vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. Using the orthogonality of the vectors connecting the optical center with the two vanishing points, constraint equations on the camera intrinsic parameters are established. With this method, four intrinsic parameters of the camera can be solved from only four images taken from different viewpoints in a scene. Compared with two other self-calibration methods, based on the absolute quadric and on a calibration plate, the method based on two vanishing points requires no calibration objects, no camera movement, and no information on the size and location of the parallel lines, needs no strict experimental equipment, and has a convenient calibration process and a simple algorithm. Comparison with the experimental results of the calibration-plate method and of self-calibration using the machine vision software Halcon verifies the practicability and effectiveness of the proposed method.
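
    When the principal point is assumed known (e.g., the image centre) and the skew is zero, one pair of orthogonal vanishing points already fixes the focal length, because the back-projected rays must be perpendicular: (v1 - p) . (v2 - p) + f^2 = 0. A minimal sketch with made-up coordinates:

    import numpy as np

    def focal_from_vanishing_points(v1, v2, principal_point):
        # Focal length from two orthogonal vanishing points, assuming a known
        # principal point and zero skew: (v1 - p) . (v2 - p) = -f**2
        p = np.asarray(principal_point, float)
        d = np.dot(np.asarray(v1, float) - p, np.asarray(v2, float) - p)
        if d >= 0:
            raise ValueError("vanishing points inconsistent with orthogonality")
        return np.sqrt(-d)

    # made-up vanishing points for a 1280 x 960 image, centre (640, 480)
    f = focal_from_vanishing_points((1450.0, 520.0), (-310.0, 480.0), (640.0, 480.0))
    print(f)   # about 877 pixels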

  16. Irrigation scheduling: advantages and pitfalls of plant-based methods.

    PubMed

    Jones, Hamlyn G

    2004-11-01

    This paper reviews the various methods available for irrigation scheduling, contrasting traditional water-balance and soil moisture-based approaches with those based on sensing of the plant response to water deficits. The main plant-based methods for irrigation scheduling, including those based on direct or indirect measurement of plant water status and those based on plant physiological responses to drought, are outlined and evaluated. Specific plant-based methods include the use of dendrometry, fruit gauges, and other tissue water content sensors, while measurements of growth, sap flow, and stomatal conductance are also outlined. Recent advances, especially in the use of infrared thermometry and thermography for the study of stomatal conductance changes, are highlighted. The relative suitabilities of different approaches for specific crop and climatic situations are discussed, with the aim of indicating the strengths and weaknesses of different approaches, and highlighting their suitability over different spatial and temporal scales. The potential of soil- and plant-based systems for automated irrigation control using various scheduling techniques is also discussed. PMID:15286143

  17. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly introduced geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  18. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool for solving the reentry trajectory optimization problem for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to the large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with unexpected situations in reentry flight. The strategy typically includes two subproblems: trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly introduced geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of the multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  19. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict protein function and cancer category, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets in multispectral remote-sensing images. The method first uses quaternion numbers to denote the pixels of a color image and a quaternion vector to represent the image as a whole. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method obtains very high accuracy for color face recognition. PMID:22937054

  20. Two DL-based Methods for Auditing Medical Terminological Systems

    PubMed Central

    Cornet, Ronald; Abu-Hanna, Ameen

    2005-01-01

    Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. In this paper we describe two methods based on description logics (DLs) for the audit of TSs. One method uses non-primitive definitions to detect concepts with equivalent definitions. The other method is characterized by stringent assumptions that are made about concept definitions, in order to detect inconsistent definitions. We discuss the possibility of applying these methods to the Foundational Model of Anatomy (FMA) to demonstrate the potentials and pitfalls of these methods. We show that the methods are complementary, and can indeed improve the contents of medical TSs. PMID:16779023

  1. A differential augmentation method based on aerostat reference stations

    NASA Astrophysics Data System (ADS)

    Shi, Zhengfa; Gong, Yingkui; Chen, Xiao

    2016-01-01

    Ground-based regional augmentation systems are unable to cover regions such as oceans, mountains, and deserts, their signals are vulnerable to blocking by buildings, and their positioning precision for high-altitude objects is limited. To address these problems, a differential augmentation method based on troposphere error corrections using aerostat reference stations is proposed. This method uses the altitudes of the mobile station and the aerostat station to estimate the tropospheric delay errors, yielding the tropospheric delay difference between the mobile station and the aerostat reference station. With the aid of the satellite navigation information of the mobile and aerostat stations and these tropospheric delay differences, the mobile station's positioning precision is enhanced by eliminating the differential measurement errors (satellite clock error, ephemeris error, ionospheric delay error, and tropospheric delay error). Simulation tests show that the proposed method improves the 3D positioning precision of the mobile station to within 2 m.

  2. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  3. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGESBeta

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  4. Image mosaic method based on SIFT features of line segment.

    PubMed

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, lighting changes, and similar variations between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results of experiments on four pairs of images show that our method is strongly robust to changes in resolution, lighting, rotation, and scaling. PMID:24511326
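
    The match-then-RANSAC structure of such a pipeline can be sketched with OpenCV's standard point SIFT, as below; note that the paper describes SIFT descriptors on directed line segments, which this ordinary point-feature version does not reproduce, and the file names are placeholders:

    import cv2
    import numpy as np

    img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # ratio-test matching, then RANSAC to discard wrong pairs
    matcher = cv2.BFMatcher()
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # warp img1 into img2's frame to form the mosaic canvas
    h, w = img2.shape
    mosaic = cv2.warpPerspective(img1, H, (w * 2, h))
    mosaic[0:h, 0:w] = np.maximum(mosaic[0:h, 0:w], img2)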

  5. Altazimuth mount based dynamic calibration method for GNSS attitude measurement

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; He, Tao; Sun, Shaohua; Gu, Qing

    2015-02-01

    As the key process for ensuring test accuracy and quality, the dynamic calibration of GNSS attitude measuring instruments is often hampered by the lack of a sufficiently rigid test platform and a sufficiently accurate calibration reference. To solve these problems, a novel dynamic calibration method for GNSS attitude measurement based on an altazimuth mount is put forward in this paper. The principle and implementation of this method are presented, and its feasibility and usability are then analyzed in detail, covering the applicability of the mount, calibration precision, calibration range, baseline rigidity, and satellite-signal-related factors. Furthermore, to verify and test the method, a confirmatory experiment was carried out with a survey ship's GPS attitude measuring instrument, and the experimental results prove that it is a feasible approach to the dynamic calibration of GNSS attitude measurement.

  6. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order vertices are fitted to a cubic curved surface by using the least-squares method. Then, with the locally fitted surface as the search region of PSO and the best average quality of the local triangles as the goal, the vertex positions of the mesh are adjusted. Finally, a threshold on the normal angle between the original vertex and the adjusted vertex is used to determine whether the vertex needs to be adjusted, so as to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129
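
    A generic PSO kernel of the kind such a method builds on, shown maximizing a toy quality function over a 3-D box; the swarm size, coefficients, and function names are illustrative assumptions:

    import numpy as np

    def pso_maximize(quality, lo, hi, n_particles=30, iters=100,
                     w=0.7, c1=1.5, c2=1.5):
        # Generic particle swarm maximization over a box [lo, hi]
        rng = np.random.default_rng(0)
        dim = len(lo)
        x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions
        v = np.zeros_like(x)                               # velocities
        pbest = x.copy()
        pbest_val = np.array([quality(p) for p in x])
        g = pbest[np.argmax(pbest_val)].copy()             # global best
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            vals = np.array([quality(p) for p in x])
            better = vals > pbest_val
            pbest[better], pbest_val[better] = x[better], vals[better]
            g = pbest[np.argmax(pbest_val)].copy()
        return g, pbest_val.max()

    # toy quality: negative distance to a target vertex position
    best, val = pso_maximize(
        lambda p: -np.linalg.norm(p - np.array([0.2, 0.5, 0.1])),
        lo=np.zeros(3), hi=np.ones(3))
    print(best)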

  7. An XQDD-Based Verification Method for Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Wang, Shiou-An; Lu, Chin-Yung; Tsai, I.-Ming; Kuo, Sy-Yen

    Synthesis of quantum circuits is essential for building quantum computers. It is important to verify that the circuits designed perform the correct functions. In this paper, we propose an algorithm which can be used to verify the quantum circuits synthesized by any method. The proposed algorithm is based on BDD (Binary Decision Diagram) and is called X-decomposition Quantum Decision Diagram (XQDD). In this method, quantum operations are modeled using a graphic method and the verification process is based on comparing these graphic diagrams. We also develop an algorithm to verify reversible circuits even if they have a different number of garbage qubits. In most cases, the number of nodes used in XQDD is less than that in other representations. In general, the proposed method is more efficient in terms of space and time and can be used to verify many quantum circuits in polynomial time.

  8. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substance is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be phosphoryl or a thiophosphoryl.

  9. Method of removing and detoxifying a phosphorus-based substance

    SciTech Connect

    Vandegrift, G.F.; Steindler, M.J.

    1989-07-25

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substances is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be phosphoryl or a thiophosphoryl.

  10. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982

  11. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2012-01-01

    Design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role of mechatronic product design knowledge and the features of its information management, a unified model of an XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based models are proposed for expressing product function elements, product structure elements, and the mapping relationship between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is clearly helpful for knowledge-based design systems and product innovation.

  12. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2011-12-01

    Design knowledge of modern mechatronic products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role of mechatronic product design knowledge and the features of its information management, a unified model of an XML-based product information processing method is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based models are proposed for expressing product function elements, product structure elements, and the mapping relationship between function and structure. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is clearly helpful for knowledge-based design systems and product innovation.

  13. LINEAR SCANNING METHOD BASED ON THE SAFT COARRAY

    SciTech Connect

    Martin, C. J.; Martinez-Graullera, O.; Romero, D.; Ullate, L. G.; Higuti, R. T.

    2010-02-22

    This work presents a method to obtain B-scan images based on linear array scanning and 2R-SAFT. This technique offers several advantages: the ultrasonic system is very simple; it avoids the grating lobe formation characteristic of conventional SAFT; and the subaperture size and focusing lens (to compensate for emission-reception) can be adapted dynamically to every image point. The proposed method has been experimentally tested in the inspection of CFRP samples.

  14. Multispectral face liveness detection method based on gradient features

    NASA Astrophysics Data System (ADS)

    Hou, Ya-Li; Hao, Xiaoli; Wang, Yueyang; Guo, Changqing

    2013-11-01

    Face liveness detection aims to distinguish genuine faces from disguised faces. Most previous works under visible light focus on classification of genuine faces and planar photos or videos. To handle the three-dimensional (3-D) disguised faces, liveness detection based on multispectral images has been shown to be an effective choice. In this paper, a gradient-based multispectral method has been proposed for face liveness detection. Three feature vectors are developed to reduce the influence of varying illuminations. The reflectance-based feature achieves the best performance, which has a true positive rate of 98.3% and a true negative rate of 98.7%. The developed methods are also tested on individual bands to provide a clue for band selection in the imaging system. Preliminary results on different face orientations are also shown. The contributions of this paper are threefold. First, a gradient-based multispectral method has been proposed for liveness detection, which considers the reflectance properties of all the distinctive regions in a face. Second, three illumination-robust features are studied based on a dataset with two-dimensional planar photos, 3-D mannequins, and masks. Finally, the performance of the method on different spectral bands and face orientations is also shown in the evaluations.

  15. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main ways of product development are adaptive design and variant design based on existing products. In this paper, a conceptual design framework and its flow model for product innovation are put forward by combining conceptual design methods with TRIZ theory. A process model of innovative design is constructed, comprising requirement analysis, total function analysis and decomposition, engineering problem analysis, solution finding, and preliminary design; this establishes the basis for the innovative redesign of existing products.

  16. An analysis method for evaluating gradient-index fibers based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.

    2011-05-01

    We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fibers using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, and optical sensors. An important factor that determines the performance of a GRIN optical fiber is the modulation transfer function (MTF), which is affected by manufacturing conditions such as temperature. Experimentally, the MTF is measured using a square-wave chart and calculated from the distribution of output intensity on the chart. In contrast, the usual computational method evaluates the MTF from a spot diagram produced by an incident point light source, and its results differ greatly from experiment. In this paper, we describe the manufacturing process factors that affect the performance of GRIN optical fibers and a new evaluation method, based on the Monte Carlo method, that mimics the experimental system. We verified that the MTFs it produces match experimental measurements more closely than those of the conventional method.

  17. Improving merge methods for grid-based digital elevation models

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; Prodanović, D.; Maksimović, Č.

    2016-03-01

    Digital Elevation Models (DEMs) are used to represent the terrain in applications such as overland flow modelling or viewshed analysis. DEMs generated by digitising contour lines or obtained from LiDAR or satellite data are now widely available. However, in some cases the area of study is covered by more than one of the available elevation data sets, and the relevant DEMs may need to be merged. The merged DEM must retain the most accurate elevation information available while generating consistent slopes and aspects. In this paper we present a thorough analysis of three conventional grid-based DEM merging methods that are available in commercial GIS software. These methods are evaluated for their applicability to merging DEMs and, based on the evaluation results, a method for improving the merging of grid-based DEMs is proposed. DEMs generated by the proposed method, called MBlend, showed significant improvements over DEMs produced by the three conventional methods in terms of elevation, slope and aspect accuracy, while also ensuring smooth elevation transitions between the original DEMs. The results produced by the improved method are highly relevant to different applications in terrain analysis, e.g., visibility analysis or spotting irregularities in landforms, and to modelling terrain phenomena such as overland flow.
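
    Generic feathered blending (explicitly not the MBlend algorithm) illustrates the smooth-transition requirement: within the overlap, each DEM is weighted by the distance to its own no-data edge, so elevations cross-fade instead of jumping. The sketch assumes NaN marks no-data and that each grid has no-data cells along its borders:

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def feathered_merge(dem_a, dem_b):
        # Blend two co-registered DEM grids (NaN = no data); weight each
        # DEM by the distance to its own no-data edge, so the overlap
        # transitions smoothly from one data set to the other
        valid_a, valid_b = ~np.isnan(dem_a), ~np.isnan(dem_b)
        w_a = distance_transform_edt(valid_a)   # 0 outside valid_a
        w_b = distance_transform_edt(valid_b)   # 0 outside valid_b
        out = (np.where(valid_a, dem_a, 0.0) * w_a
               + np.where(valid_b, dem_b, 0.0) * w_b)
        norm = w_a + w_b
        with np.errstate(invalid="ignore", divide="ignore"):
            out = out / norm
        out[norm == 0] = np.nan                 # no data in either grid
        return out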

  18. Sonoclot(®)-based method to detect iron enhanced coagulation.

    PubMed

    Nielsen, Vance G; Henderson, Jon

    2016-07-01

    Thrombelastographic methods have been recently introduced to detect iron mediated hypercoagulability in settings such as sickle cell disease, hemodialysis, mechanical circulatory support, and neuroinflammation. However, these inflammatory situations may have heme oxygenase-derived, coexistent carbon monoxide present, which also enhances coagulation as assessed by the same thrombelastographic variables that are affected by iron. This brief report presents a novel, Sonoclot-based method to detect iron enhanced coagulation that is independent of carbon monoxide influence. Future investigation will be required to assess the sensitivity of this new method to detect iron mediated hypercoagulability in clinical settings compared to results obtained with thrombelastographic techniques. PMID:26497986

  19. A new image fusion method based on curvelet transform

    NASA Astrophysics Data System (ADS)

    Chu, Binbin; Yang, Xiushun; Qi, Dening; Li, Congli; Lu, Wei

    2010-02-01

    A new image fusion method based on Multiscale Geometric Analysis (MGA), using improved fusion rules, is put forward in this paper. First, the input low-light-level image and infrared image are decomposed by the Curvelet transform, realized via Unequally-Spaced Fast Fourier Transforms. Second, the decomposed coefficients at different scales and directions are fused by the corresponding fusion rules. Finally, the fused image is obtained by recomposing the fused coefficients. The simulation results show that this method performs better than the conventional wavelet method in both subjective visual quality and objective evaluation indices.

  20. A Star Pattern Recognition Method Based on Decreasing Redundancy Matching

    NASA Astrophysics Data System (ADS)

    Yao, Lu; Xiao-xiang, Zhang; Rong-yu, Sun

    2016-04-01

    During optical observation of space objects, it is difficult to match the background stars when the telescope's pointing and tracking errors are significant. Based on the idea of decreasing redundancy matching, an effective recognition method for background stars is proposed in this paper. Simulated images under different conditions, as well as observed images, are used to verify the proposed method. The experimental results show that the proposed method raises the recognition rate and reduces the time consumption; it can match star patterns accurately and rapidly.

  1. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.

  2. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for establishing trust and managing risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It addresses some of the drawbacks of classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. Finally, a case analysis of China Garment Network is provided for illustrative purposes.
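
    The AHP half of such a scheme typically derives criterion weights from the principal eigenvector of a pairwise comparison matrix and screens the judgments with a consistency ratio; a minimal sketch with hypothetical judgments (the SPA step is not shown):

    import numpy as np

    # hypothetical pairwise judgments on three credit criteria
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                        # normalized criterion weights
    lam_max = vals.real[k]
    CI = (lam_max - len(A)) / (len(A) - 1)
    CR = CI / 0.58                      # Saaty random index RI = 0.58 for n = 3
    print(w, CR)                        # judgments acceptable if CR < 0.1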

  3. Network motif-based method for identifying coronary artery disease

    PubMed Central

    LI, YIN; CONG, YAN; ZHAO, YUN

    2016-01-01

    The present study aimed to develop a more efficient method for identifying coronary artery disease (CAD) than the conventional method using individual differentially expressed genes (DEGs). GSE42148 gene microarray data were downloaded, preprocessed, and screened for DEGs. Additionally, transcriptional regulation data obtained from the ENCODE database and protein-protein interaction data from the HPRD were compared with the genes annotated from the gene microarrays to screen common genes, which were used to construct an integrated regulation network. FANMOD was then used to detect significant three-gene network motifs. Subsequently, GlobalAncova was used to screen differential three-gene network motifs between the CAD group and the normal control data from GSE42148. Genes involved in the differential network motifs were then subjected to functional annotation and pathway enrichment analysis. Finally, clustering analysis of the CAD and control samples was performed based on individual DEGs and the top 20 network motifs identified. In total, 9,008 significant three-gene network motifs were detected in the integrated regulation network; these were categorized into 22 interaction modes, each containing at least one transcription factor. Subsequently, 1,132 differential network motifs involving 697 genes were screened between the CAD and control groups. The 697 genes were enriched in 154 gene ontology terms, including 119 biological processes, and 14 KEGG pathways. Identifying patients with CAD based on the top 20 network motifs provided increased accuracy compared with the conventional method based on individual DEGs. The results of the present study indicate that the network motif-based method is more efficient and accurate for identifying CAD patients than the conventional method based on individual DEGs. PMID:27347046

  4. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target location, but the DCE error greatly degrades the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analysed. First, using the information of each transmitter and receiver (T/R) pair and of the target in the SAR image, the model functions of the T/R pairs are constructed; each model function's maximum lies on the iso-range ellipse of its T/R pair. Second, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Third, the target function is optimized by a gradient descent method to obtain the position of the target; during the iterations, principal component analysis is employed to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031
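
    A toy sketch of the construction described above, assuming Gaussian-shaped model functions of the bistatic range-sum residual; a coarse grid plus BFGS stands in for the paper's gradient-descent-with-PCA iteration, and the geometry and width sigma are made up:

    import numpy as np
    from scipy.optimize import minimize

    tx = np.array([[0.0, 0.0], [5000.0, 0.0], [0.0, 5000.0]])           # transmitters
    rx = np.array([[2000.0, 0.0], [5000.0, 2000.0], [2000.0, 5000.0]])  # receivers
    target_true = np.array([1800.0, 2600.0])
    brs = (np.linalg.norm(tx - target_true, axis=1)
           + np.linalg.norm(rx - target_true, axis=1))   # measured range sums

    def target_function(p, sigma=200.0):
        # Each term peaks on its T/R pair's iso-range ellipse; the sum
        # peaks where the ellipses intersect, i.e. at the target
        r = np.linalg.norm(tx - p, axis=1) + np.linalg.norm(rx - p, axis=1) - brs
        return np.exp(-(r / sigma) ** 2).sum()

    # coarse grid for initialization, then gradient-based refinement
    xs = np.linspace(0.0, 5000.0, 51)
    x0 = max(((target_function(np.array([x, y])), x, y) for x in xs for y in xs))[1:]
    res = minimize(lambda p: -target_function(p), np.array(x0), method="BFGS")
    print(res.x)   # close to target_true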

  5. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed to calculate the target location, but the DCE error greatly degrades the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated, and the influence of the BRS estimation error on localization accuracy is analysed. First, using the information of each transmitter and receiver (T/R) pair and of the target in the SAR image, the model functions of the T/R pairs are constructed; each model function's maximum lies on the iso-range ellipse of its T/R pair. Second, the target function, whose maximum is located at the position of the target, is obtained by summing all model functions. Third, the target function is optimized by a gradient descent method to obtain the position of the target; during the iterations, principal component analysis is employed to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes the BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectiveness of the localization approach is validated by simulation experiments. PMID:26566031

  6. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation that introduces moment competition and weakly supervised information into the energy functional construction. Different from region-based level set methods that use force competition, moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (the weakly supervised information) on the image. The intensity differences between the three points and the unlabeled pixels are then used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour toward the object boundary. In our method, the force arm takes full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust to initial contour placement and parameter setting than traditional methods. Experimental results with performance analysis also show the superiority of the proposed method in segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  7. Study on UPF Harmonic Current Detection Method Based on DSP

    NASA Astrophysics Data System (ADS)

    Zhao, H. J.; Pang, Y. F.; Qiu, Z. M.; Chen, M.

    2006-10-01

    A unity power factor (UPF) harmonic current detection method applied to active power filters (APF) is presented in this paper. The intention of this method is to make the nonlinear load and the active power filter connected in parallel behave as an equivalent resistance. After compensation, the source current is therefore sinusoidal and has the same shape as the source voltage; it contains no harmonics, and the power factor becomes unity. The mathematical model of the proposed method and the optimal design of the equivalent low-pass filter used in the measurement are presented. Finally, the proposed detection method is applied to a shunt active power filter experimental prototype based on the DSP TMS320F2812. Simulation and experimental results indicate that the method is simple and easy to implement, and can calculate the harmonic current accurately in real time.
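
    A minimal numpy sketch of the equivalent-resistance idea follows: the reference source current is the voltage waveform scaled by the equivalent conductance G = P_avg / V_rms^2, and the compensation current for the APF is the difference between the load current and this reference. The synthetic waveforms and the ideal averaging (in place of the paper's low-pass filter design) are assumptions.

        # Minimal sketch: compute the UPF reference current and the harmonic
        # compensation current for a distorted load. Waveforms are synthetic.
        import numpy as np

        fs, f0 = 10000, 50
        t = np.arange(0, 0.1, 1 / fs)
        v = 311 * np.sin(2 * np.pi * f0 * t)                 # source voltage
        i_load = (10 * np.sin(2 * np.pi * f0 * t - 0.3)
                  + 3 * np.sin(2 * np.pi * 3 * f0 * t))      # distorted load current

        p_avg = np.mean(v * i_load)        # average active power (ideal low-pass)
        g_eq = p_avg / np.mean(v ** 2)     # equivalent conductance G
        i_ref = g_eq * v                   # sinusoidal, in phase with the voltage
        i_comp = i_load - i_ref            # harmonic + reactive part for the APF

        print("equivalent conductance:", g_eq)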

  8. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194

  9. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.

  10. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method.

    PubMed

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient technique to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal with the root-MUSIC method, and good results were obtained in several experiments. However, this method is fragile in noise, because the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194
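
    A minimal sketch of matrix pencil pole estimation for a single subband signal is given below: Hankel matrices are built from the samples and the poles appear among the eigenvalues of the shifted pencil. The toy signal, pencil parameter and pole-selection rule are assumptions; the ICP prediction and iterative fusion steps of the paper are not shown.

        # Minimal sketch: estimate signal poles with the matrix pencil method.
        import numpy as np

        n = np.arange(64)
        true_poles = [np.exp(1j * 0.4), 0.98 * np.exp(1j * 1.1)]
        y = sum(a * p ** n for a, p in zip([1.0, 0.7], true_poles))

        L = 20                                     # pencil parameter
        Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])  # Hankel rows
        Y1, Y2 = Y[:, :-1], Y[:, 1:]               # shifted pencil pair
        vals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)

        # keep the eigenvalues closest to the unit circle as the signal poles
        est = sorted(vals, key=lambda z: abs(abs(z) - 1))[:2]
        print("estimated poles:", np.round(est, 3))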

  11. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.

    1995-04-11

    A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.

  12. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.

    1995-01-01

    Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.

  13. Acoustic radiation force-based elasticity imaging methods

    PubMed Central

    Palmeri, Mark L.; Nightingale, Kathryn R.

    2011-01-01

    Conventional diagnostic ultrasound images portray differences in the acoustic properties of soft tissues, whereas ultrasound-based elasticity images portray differences in the elastic properties of soft tissues (i.e. stiffness, viscosity). The benefit of elasticity imaging lies in the fact that many soft tissues can share similar ultrasonic echogenicities, but may have different mechanical properties that can be used to clearly visualize normal anatomy and delineate pathological lesions. Acoustic radiation force-based elasticity imaging methods use acoustic radiation force to transiently deform soft tissues, and the dynamic displacement response of those tissues is measured ultrasonically and is used to estimate the tissue's mechanical properties. Both qualitative images and quantitative elasticity metrics can be reconstructed from these measured data, providing complementary information to both diagnose and longitudinally monitor disease progression. Recently, acoustic radiation force-based elasticity imaging techniques have moved from the laboratory to the clinical setting, where clinicians are beginning to characterize tissue stiffness as a diagnostic metric, and implementations of radiation force-based ultrasonic elasticity imaging are beginning to appear on the commercial market. This article provides an overview of acoustic radiation force-based elasticity imaging, including a review of the relevant soft tissue material properties, a review of radiation force-based methods that have been proposed for elasticity imaging, and a discussion of current research and commercial realizations of radiation force-based elasticity imaging technologies. PMID:22419986

  14. Effective Teaching Methods--Project-based Learning in Physics

    ERIC Educational Resources Information Center

    Holubova, Renata

    2008-01-01

    The paper presents results of the research of new effective teaching methods in physics and science. It is found out that it is necessary to educate pre-service teachers in approaches stressing the importance of the own activity of students, in competences how to create an interdisciplinary project. Project-based physics teaching and learning…

  15. Explorations in Using Arts-Based Self-Study Methods

    ERIC Educational Resources Information Center

    Samaras, Anastasia P.

    2010-01-01

    Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…

  16. Bioanalytical method transfer considerations of chromatographic-based assays.

    PubMed

    Williard, Clark V

    2016-07-01

    Bioanalysis is an important part of the modern drug development process. The business practice of outsourcing and transferring bioanalytical methods from laboratory to laboratory has increasingly become a crucial strategy for successful and efficient delivery of therapies to the market. This chapter discusses important considerations when transferring various types of chromatographic-based assays in today's pharmaceutical research and development environment. PMID:27277876

  17. Metaphoric Investigation of the Phonic-Based Sentence Method

    ERIC Educational Resources Information Center

    Dogan, Birsen

    2012-01-01

    This study aimed to understand the views of prospective teachers with "phonic-based sentence method" through metaphoric images. In this descriptive study, the participants involve the prospective teachers who take reading-writing instruction courses in Primary School Classroom Teaching Program of the Education Faculty of Pamukkale University. The…

  18. Preparing Students for Flipped or Team-Based Learning Methods

    ERIC Educational Resources Information Center

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse to "traditional" teaching when course materials are presented during a lecture, and students are…

  19. Bead Collage: An Arts-Based Research Method

    ERIC Educational Resources Information Center

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  20. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.

  1. A Natural Teaching Method Based on Learning Theory.

    ERIC Educational Resources Information Center

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  2. NIM: A Node Influence Based Method for Cancer Classification

    PubMed Central

    Wang, Yiwen; Yang, Jianhua

    2014-01-01

    The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of the training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART. PMID:25180045
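
    A minimal sketch of the four steps follows. The paper's exact node influence model is not reproduced here; influence is approximated by a training sample's summed similarity to same-class samples, and the data are random placeholders.

        # Minimal sketch of the four NIM steps with a toy influence model.
        import numpy as np

        rng = np.random.default_rng(1)
        X_train = rng.normal(size=(40, 100)); y_train = np.repeat([0, 1], 20)
        X_test = rng.normal(size=(10, 100))

        def rbf(a, b, gamma=0.01):
            d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d)

        S_tt = rbf(X_train, X_train)                  # step 1: similarity matrix
        influence = np.array([S_tt[i, y_train == y_train[i]].sum()
                              for i in range(len(y_train))])  # step 2 (toy model)

        S_te = rbf(X_test, X_train)                   # test-vs-train similarities
        scores = np.stack([(S_te[:, y_train == c] * influence[y_train == c]).sum(1)
                           for c in (0, 1)], axis=1)  # step 3: weighted sums
        pred = scores.argmax(1)                       # step 4: best-scoring class
        print(pred)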

  3. Transformer winding defects identification based on a high frequency method

    NASA Astrophysics Data System (ADS)

    Florkowski, Marek; Furgał, Jakub

    2007-09-01

    Transformer diagnostic methods are systematically being improved and extended due to growing requirements for the reliability of power systems, in terms of uninterrupted power supply and avoidance of blackouts. These methods are also driven by the longer lifetimes of transformers and the demand for reduced transmission and distribution costs. Hence, the detection of winding faults in transformers, whether in operation or during transportation, is an important aspect of power transformer failure prevention. The frequency response analysis (FRA) method, increasingly used in electric power engineering, has been applied for investigations and signature analysis based on the admittance and transfer function. The paper presents a novel approach to the identification of typical transformer winding problems such as axial or radial movements or turn-to-turn faults. The proposed transfer function discrimination (TFD) criteria are based on derived transfer function ratios, which exhibit higher sensitivity.

  4. Matrix-based image reconstruction methods for tomography

    SciTech Connect

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
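
    As an illustration of the matrix-MLE idea, the sketch below runs the standard MLEM multiplicative update with an explicit system matrix and no matrix inversion. The random system matrix and Poisson counts are placeholders for a real ring-geometry tomograph.

        # Minimal MLEM sketch: iterative maximum-likelihood reconstruction
        # with an explicit system matrix A (rays x pixels).
        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.random((200, 64))            # system matrix: 200 rays, 64 pixels
        x_true = rng.random(64)
        y = rng.poisson(A @ x_true * 50)     # noisy projection counts

        x = np.ones(64)                      # flat initial image
        sens = A.T @ np.ones(len(y))         # sensitivity (column sums of A)
        for _ in range(50):
            ratio = y / np.maximum(A @ x, 1e-12)
            x *= (A.T @ ratio) / sens        # multiplicative MLEM update

        print("correlation with truth:", np.corrcoef(x, x_true)[0, 1])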

  5. Method of plasma etching Ga-based compound semiconductors

    SciTech Connect

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl₄ gas into the chamber, flowing Ar gas into the chamber, and flowing H₂ gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  6. Screw thread parameter measurement system based on image processing method

    NASA Astrophysics Data System (ADS)

    Rao, Zhimin; Huang, Kanggao; Mao, Jiandong; Zhang, Yaya; Zhang, Fan

    2013-08-01

    In industrial production, as an important transmission part, the screw thread is applied extensively in much automation equipment. The traditional measurement methods for screw thread parameters, including integrated multi-parameter test methods and single-parameter measurement methods, are contact measurement methods. In practice, contact measurement has several disadvantages, such as relatively high time cost, susceptibility to human error, and possible thread damage. In this paper, as a new kind of real-time, non-contact measurement method, a screw thread parameter measurement system based on image processing is developed to accurately measure the outside diameter, inside diameter, pitch diameter, pitch, thread height and other parameters of a screw thread. In the system an industrial camera is employed to acquire the image of the screw thread, image processing methods are used to obtain the image profile of the screw thread, and a mathematical model is established to compute the parameters. C++ Builder 6.0 is employed as the software development platform to realize the image processing and the computation of the screw thread parameters. To verify the feasibility of the measurement system, experiments were carried out and the measurement errors were analyzed. The experimental results show that the image measurement system satisfies the measurement requirements and is suitable for real-time detection of the screw thread parameters mentioned above. Compared with traditional methods, the system based on image processing has advantages such as non-contact operation, ease of use, high measuring accuracy, no workpiece damage, and fast error analysis. In industrial production, this measurement system can provide an important reference for the development of similar parameter measurement systems.
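
    A minimal sketch of the measurement idea, under stated assumptions: a backlit thread silhouette is binarized, the upper and lower profiles are extracted column by column, and the outside diameter and pitch are read off the profiles. The synthetic silhouette, the triangular thread form and the pixel-to-millimeter calibration are placeholders; the system described above computes further parameters from a full mathematical model.

        # Minimal sketch: outside diameter and pitch from a thread silhouette.
        import numpy as np

        h, w, cal = 200, 400, 0.02                   # image size; mm per pixel
        x = np.arange(w)
        crest = np.abs(((x / 40) % 1) - 0.5) * 2     # triangular thread profile
        upper = (60 + 12 * crest).astype(int)
        lower = (140 - 12 * crest).astype(int)
        img = np.zeros((h, w))
        for c in range(w):
            img[upper[c]:lower[c], c] = 1            # filled silhouette

        top = img.argmax(axis=0)                     # first thread pixel per column
        bot = h - img[::-1, :].argmax(axis=0)        # one past the last pixel
        outside_d = (bot - top).max() * cal          # widest span across the thread

        spec = np.abs(np.fft.rfft(top - top.mean())) # dominant period of the crests
        k = spec[1:].argmax() + 1
        pitch = w / k * cal
        print(f"outside diameter: {outside_d:.2f} mm, pitch: {pitch:.2f} mm")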

  7. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces, however the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.

  8. An efficient frequency recognition method based on likelihood ratio test for SSVEP-based BCI.

    PubMed

    Zhang, Yangsong; Dong, Li; Zhang, Rui; Yao, Dezhong; Zhang, Yu; Xu, Peng

    2014-01-01

    An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this aspect, for the first time, the likelihood ratio test (LRT) was utilized to propose a novel multichannel frequency recognition method for SSVEP data. The essence of this new method is to use the LRT to calculate the association between multichannel EEG signals and reference signals constructed according to the stimulus frequency. For both simulated and real SSVEP data, the proposed method yielded higher recognition accuracy with shorter time window lengths and was more robust against noise in comparison with the popular canonical correlation analysis- (CCA-) based method and the least absolute shrinkage and selection operator- (LASSO-) based method. The recognition accuracy and information transfer rate (ITR) obtained by the proposed method were higher than those of the CCA-based and LASSO-based methods. These superior results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-BCI. PMID:25250058
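
    Since the abstract does not specify the LRT statistic itself, the sketch below shows the shared setup: sine-cosine reference signals at each candidate stimulus frequency, scored against multichannel EEG using the CCA baseline that the proposed method is compared against. The synthetic EEG, sampling rate and harmonic count are assumptions.

        # Minimal sketch: SSVEP frequency recognition with reference signals
        # and CCA scoring (the baseline method named in the abstract).
        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(0)
        fs, n_ch = 250, 8
        t = np.arange(0, 2.0, 1 / fs)
        # synthetic EEG: a 10 Hz SSVEP buried in noise on all channels
        eeg = (np.sin(2 * np.pi * 10 * t)[:, None]
               + 0.5 * rng.normal(size=(len(t), n_ch)))

        def reference(f, harmonics=2):
            cols = []
            for h in range(1, harmonics + 1):
                cols += [np.sin(2 * np.pi * h * f * t),
                         np.cos(2 * np.pi * h * f * t)]
            return np.column_stack(cols)

        def cca_score(X, Y):
            u, v = CCA(n_components=1).fit(X, Y).transform(X, Y)
            return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

        freqs = [8, 10, 12, 15]
        scores = [cca_score(eeg, reference(f)) for f in freqs]
        print("detected frequency:", freqs[int(np.argmax(scores))], "Hz")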

  9. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    NASA Astrophysics Data System (ADS)

    Tanaka, Takashi

    2014-06-01

    Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.

  10. Gravity base, jack-up platform - method and apparatus

    SciTech Connect

    Herrmann, R.P.; Pease, F.T.; Ray, D.R.

    1981-05-05

    The invention relates to an offshore, gravity base, jack-up platform comprising a deck, a gravity base and one or more legs interconnecting the deck and base. The gravity base comprises a generally polygonal shaped, monolithic hull structure with reaction members extending downwardly from the hull to penetrate the waterbed and react to vertical and lateral loads imposed upon the platform while maintaining the gravity hull in a posture elevated above the surface of the waterbed. A method aspect of the invention includes the steps of towing a gravity base, jack-up platform, as a unit, to a preselected offshore site floating upon the gravity hull. During the towing operation, the deck is mounted adjacent the gravity base with a leg or legs projecting through the deck. At a preselected offshore station ballast is added to the gravity base and the platform descends slightly to a posture where the platform is buoyantly supported by the deck. The base is then jacked down toward the seabed and the platform is laterally brought onto station. Ballast is then added to the deck and the reaction members are penetrated into the waterbed to operational soil refusal. Ballast is then ejected from the deck and the deck is jacked to an operational elevation above a predetermined statistical wave crest height.

  11. Spindle extraction method for ISAR image based on Radon transform

    NASA Astrophysics Data System (ADS)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle of a target in an inverse synthetic aperture radar (ISAR) image is proposed based on the Radon Transform. Firstly, the Radon Transform is used to detect all straight lines that are collinear with the line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; the two intersections with the maximum distance between them are the two ends of a line segment, and the longest of all line segments is the spindle of the target. Using the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees respectively, were used in experiments, and the detection results are closer to the real spindle of the target than those of the method based on the Hough Transform.
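
    A minimal sketch of the line-detection core follows: the Radon transform of a segmented image peaks at the (angle, offset) of the dominant straight line, which the method takes as the candidate spindle direction. The toy diagonal-line image is a placeholder for a segmented ISAR target, and the Sobel-contour intersection step is omitted.

        # Minimal sketch: locate the dominant straight line via the peak of
        # the Radon transform.
        import numpy as np
        from skimage.transform import radon

        img = np.zeros((128, 128))
        rr = np.arange(20, 108)
        img[rr, rr] = 1.0                          # a diagonal line target

        theta = np.arange(180.0)
        sino = radon(img, theta=theta)             # rows: offsets, cols: angles
        off_idx, ang_idx = np.unravel_index(sino.argmax(), sino.shape)
        print("line angle (deg):", theta[ang_idx])
        print("offset from centre (px):", off_idx - sino.shape[0] // 2)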

  12. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
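
    A minimal OpenCV sketch of the landmark step is given below: SIFT keypoints are matched between the stored home image and the current view, with Lowe's ratio test as a basic mismatch filter. The file names are placeholders, and the paper's panoramic-image-specific mismatch elimination rule is not reproduced.

        # Minimal sketch: SIFT landmark correspondences with a ratio test.
        import cv2

        home = cv2.imread("home.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
        view = cv2.imread("view.png", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(home, None)
        k2, d2 = sift.detectAndCompute(view, None)

        matcher = cv2.BFMatcher(cv2.NORM_L2)
        pairs = matcher.knnMatch(d1, d2, k=2)
        good = [m for m, n in pairs if m.distance < 0.75 * n.distance]  # ratio test

        landmarks = [(k1[m.queryIdx].pt, k2[m.trainIdx].pt) for m in good]
        print(len(landmarks), "landmark correspondences")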

  13. A history-based method to estimate animal preference.

    PubMed

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213
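
    The sketch below illustrates one plausible reading of the history-based index: choices from successive tests are combined with recency weights, so that recent tests count more than older ones. The exponential weighting, the 0.5 threshold and the toy choice matrix are assumptions, not the paper's exact formulation.

        # Minimal sketch: recency-weighted preference index over repeated tests.
        import numpy as np

        # rows: successive tests (oldest first); cols: items (1 = chosen)
        choices = np.array([[1, 0, 0],
                            [1, 0, 0],
                            [0, 1, 0],
                            [1, 0, 0],
                            [1, 0, 0]])

        n_tests = choices.shape[0]
        weights = 0.7 ** np.arange(n_tests - 1, -1, -1)   # newest test weighs most
        index = weights @ choices / weights.sum()         # per-item index in [0, 1]

        for item, v in zip(["red", "green", "blue"], index):
            print(f"{item}: {v:.2f}", "(preferred)" if v > 0.5 else "")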

  14. Diabatization based on the dipole and quadrupole: The DQ method

    SciTech Connect

    Hoyer, Chad E.; Xu, Xuefei; Ma, Dongxia; Gagliardi, Laura E-mail: truhlar@umn.edu; Truhlar, Donald G. E-mail: truhlar@umn.edu

    2014-09-21

    In this work, we present a method, called the DQ scheme (where D and Q stand for dipole and quadrupole, respectively), for transforming a set of adiabatic electronic states to diabatic states by using the dipole and quadrupole moments to determine the transformation coefficients. It is more broadly applicable than methods based only on the dipole moment; for example, it is not restricted to electron transfer reactions, it works with any electronic structure method and for molecules with and without symmetry, and it is convenient in not requiring orbital transformations. We illustrate this method with prototype applications to two cases, LiH and phenol, for which we compare the results to those obtained by the fourfold-way diabatization scheme.

  15. A Novel Method for Pulsometry Based on Traditional Iranian Medicine

    PubMed Central

    Yousefipoor, Farzane; Nafisi, Vahidreza

    2015-01-01

    Arterial pulse measurement is one of the most important methods for evaluating health conditions. In traditional Iranian medicine (TIM), the physician detects the radial pulse by holding four fingers on the patient's wrist. Under standard conditions, the pulses detected by this method are subjective and error-prone, and in the case of weak and/or abnormal pulses the ambiguity of the diagnosis may rise. In this paper, we present a device designed and implemented to automate the traditional pulse detection method. With this novel system, the developed noninvasive diagnostic method and database based on TIM are a way forward in applying traditional medicine to diagnose patients with present-day technology. The accuracy is 76% for period measurement and 72% for the systolic peak. PMID:26955566

  16. [Fast Implementation Method of Protein Spots Detection Based on CUDA].

    PubMed

    Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong

    2016-02-01

    In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, following the single-instruction multiple-thread execution model of CUDA, a data-space strategy of separating two-dimensional (2D) images into blocks was adopted, and various optimization measures such as shared memory and 2D texture memory were employed. The results show that the efficiency of this method is markedly improved compared with CPU computation. As the image size increases, the method yields greater improvements in efficiency; for example, for an image of size 2,048 x 2,048, the CPU method needs 52,641 ms, but the GPU needs only 4,384 ms. PMID:27382745

  17. A history-based method to estimate animal preference

    PubMed Central

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  18. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  19. Weaving a Formal Methods Education with Problem-Based Learning

    NASA Astrophysics Data System (ADS)

    Gibson, J. Paul

    The idea of weaving formal methods through computing (or software engineering) degrees is not a new one. However, there has been little success in developing and implementing such a curriculum. Formal methods continue to be taught as stand-alone modules and students, in general, fail to see how fundamental these methods are to the engineering of software. A major problem is one of motivation: how can the students be expected to enthusiastically embrace a challenging subject when the learning benefits, beyond passing an exam and achieving curriculum credits, are not clear? Problem-based learning has gradually moved from being an innovative pedagogic technique, commonly used to better motivate students, to being widely adopted in the teaching of many different disciplines, including computer science and software engineering. Our experience shows that a good problem can be re-used throughout a student's academic life. In fact, the best computing problems can be used with children (young and old), undergraduates and postgraduates. In this paper we present a process for weaving formal methods through a University curriculum that is founded on the application of problem-based learning and a library of good software engineering problems, where students learn about formal methods without sitting a traditional formal methods module. The process of constructing good problems and integrating them into the curriculum is shown to be analogous to the process of engineering software. This approach is not intended to replace more traditional formal methods modules: it will better prepare students for such specialised modules and ensure that all students have an understanding and appreciation for formal methods even if they do not go on to specialise in them.

  20. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  1. Springback Compensation Based on FDM-DTF Method

    SciTech Connect

    Liu Qiang; Kang Lan

    2010-06-15

    Stamping part error caused by springback is usually considered to be a tooling defect in the sheet metal forming process. This problem can be corrected by adjusting the tooling shape to an appropriate shape. In this paper, springback compensation based on the FDM-DTF method is proposed for the design and modification of the tooling shape. Firstly, based on the FDM method, the tooling shape is designed by reversing the direction of the inner forces at the end of the forming simulation; the required tooling shape can be obtained after a few iterations. Secondly, the actual tooling is produced based on the results obtained in the first step. When the discrete surface data of the tooling and the part are investigated, the transfer function between the numerical springback error and the real springback error can be calculated from wavelet transform results, and this can be used to predict the tooling shape for the desired product. Finally, the FDM-DTF method is proven to control springback effectively after being applied to springback control of a 2D irregular product.

  2. Evaluation of base widening methods on flexible pavements in Wyoming

    NASA Astrophysics Data System (ADS)

    Offei, Edward

    The surface transportation system forms the biggest infrastructure investment in the United States of which the roadway pavement is an integral part. Maintaining the roadways can involve rehabilitation in the form of widening, which requires a longitudinal joint between the existing and new pavement sections to accommodate wider travel lanes, additional travel lanes or modification to shoulder widths. Several methods are utilized for the joint construction between the existing and new pavement sections including vertical, tapered and stepped joints. The objective of this research is to develop a formal recommendation for the preferred joint construction method that provides the best base layer support for the state of Wyoming. Field collection of Dynamic Cone Penetrometer (DCP) data, Falling Weight Deflectometer (FWD) data, base samples for gradation and moisture content were conducted on 28 existing and 4 newly constructed pavement widening projects. A survey of constructability issues on widening projects as experienced by WYDOT engineers was undertaken. Costs of each joint type were compared as well. Results of the analyses indicate that the tapered joint type showed relatively better pavement strength compared to the vertical joint type and could be the preferred joint construction method. The tapered joint type also showed significant base material savings than the vertical joint type. The vertical joint has an 18% increase in cost compared to the tapered joint. This research is intended to provide information and/or recommendation to state policy makers as to which of the base widening joint techniques (vertical, tapered, stepped) for flexible pavement provides better pavement performance.

  3. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification for Chinese landform types using the factor importance analysis of random forest and the gray-level co-occurrence matrix (GLCM). In this research, based on the 1 km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classifier is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. The GLCM is then computed to form the knowledge base for classification. The classification result was checked using the 1:4,000,000 Chinese Geomorphological Map as reference, and the overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.
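
    The sketch below illustrates two building blocks of the pipeline, factor importance ranking with a random forest and GLCM texture features, under stated assumptions: the terrain-factor table, class labels and image patch are random placeholders for the real DEM-derived data.

        # Minimal sketch: random-forest importance of terrain factors, plus
        # one GLCM texture property from an image patch.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from skimage.feature import graycomatrix, graycoprops

        rng = np.random.default_rng(0)
        factors = rng.random((500, 6))           # slope, relief, curvature, ...
        landform = rng.integers(0, 4, 500)       # placeholder landform classes

        rf = RandomForestClassifier(n_estimators=200, random_state=0)
        rf.fit(factors, landform)
        print("factor importances:", np.round(rf.feature_importances_, 3))

        patch = (rng.random((64, 64)) * 255).astype(np.uint8)  # DEM-derived patch
        glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                            symmetric=True, normed=True)
        print("contrast:", graycoprops(glcm, "contrast")[0, 0])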

  4. Characteristic-based time domain method for antenna analysis

    NASA Astrophysics Data System (ADS)

    Jiao, Dan; Jin, Jian-Ming; Shang, J. S.

    2001-01-01

    The characteristic-based time domain method, developed in the computational fluid dynamics community for solving the Euler equations, is applied to the antenna radiation problem. Based on the principle of the characteristic-based algorithm, a governing equation in the cylindrical coordinate system is formulated directly to facilitate the analysis of body-of-revolution antennas and also to achieve the exact Riemann problem. A finite difference scheme with second-order accuracy in both time and space is constructed from the eigenvalue and eigenvector analysis of the derived governing equation. Rigorous boundary conditions for all the field components are formulated to improve the accuracy of the characteristic-based finite difference scheme. Numerical results demonstrate the validity and accuracy of the proposed technique.

  5. Spectral radiative property control method based on filling solution

    NASA Astrophysics Data System (ADS)

    Jiao, Y.; Liu, L. H.; Hsu, P.-f.

    2014-01-01

    Controlling thermal radiation by tailoring the spectral properties of microstructures is a promising method that can be applied in many industrial systems and has been widely researched recently. Among the various property tailoring schemes, geometry design of microstructures is a commonly used method. However, existing radiation property tailoring is limited by the adjustability of processed microstructures; in other words, the spectral radiative properties of microscale structures cannot be changed after the gratings are fabricated. In this paper, we propose a method that adjusts the grating spectral properties by injecting a filling solution, which can modify the thermal radiation of a fabricated microstructure and thus overcomes the limitation mentioned above. Both mercury and water are adopted as the filling solution in this study. Aluminum and silver are selected as the grating materials to investigate the generality and limitations of this control method. Rigorous coupled-wave analysis is used to investigate the spectral radiative properties of these filling-solution grating structures. A magnetic polariton identification method based on the LC circuit model is proposed. It is found that this control method can be used with different grating materials, and different filling solutions shift the high absorption peak toward longer or shorter wavelength bands. The results show that filling-solution grating structures are promising for active control of spectral radiative properties.

  6. Efficient variational Bayesian approximation method based on subspace optimization.

    PubMed

    Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas

    2015-02-01

    Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large-dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. The variational Bayesian problem can in fact be seen as a functional optimization problem. The proposed method is based on the adaptation of subspace optimization methods in Hilbert spaces to the involved function space, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to get a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time. PMID:25532179

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
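
    A worked example of the DEB idea, using the cantilever tip displacement d(I) = P L^3 / (3 E I) as the response and the bending inertia I as the design variable: the sensitivity relation dd/dI = -d/I, interpreted as a differential equation, integrates to d = d0 I0 / I (exact for this response), while the linear Taylor approximation degrades with the perturbation size. The numerical values are placeholders.

        # Minimal comparison: DEB (closed-form solution of the sensitivity
        # ODE) versus the linear Taylor series for a cantilever tip deflection.
        P, L, E, I0 = 100.0, 1.0, 2.0e11, 1.0e-6
        d0 = P * L**3 / (3 * E * I0)

        for dI_frac in (0.1, 0.3, 0.5):
            I = I0 * (1 + dI_frac)
            exact = P * L**3 / (3 * E * I)
            deb = d0 * I0 / I              # solution of dd/dI = -d/I
            taylor = d0 * (1 - dI_frac)    # linear Taylor approximation
            print(f"dI={dI_frac:+.0%}: exact={exact:.3e} "
                  f"DEB={deb:.3e} Taylor={taylor:.3e}")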

  8. CEMS using hot wet extractive method based on DOAS

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Chi; Sun, Changku

    2011-11-01

    A continuous emission monitoring system (CEMS) using a hot, wet extractive method based on differential optical absorption spectroscopy (DOAS) is designed. The developed system is applied to retrieving the concentrations of SO2 and NOx in flue gas on-site. The flue gas is carried along a heated sample line into the sample pool at a constant temperature above the dew point. In this way, the adverse impact of water vapor on measurement accuracy is greatly reduced, and on-line calibration is implemented. The flue gas is then discharged from the sample pool after the measuring process is complete. The on-site applicability of the system is enhanced by using a Programmable Logic Controller (PLC) to control each valve in the system during the measuring and on-line calibration processes. The concentration retrieval method used in the system is based on nonlinear partial least squares (PLS) regression. The relationship between the known concentration and the differential absorption features gathered by the PLS nonlinear method can be determined after the on-line calibration process. The concentrations of SO2 and NOx can then be easily measured according to this relationship. The concentration retrieval method can identify the information and noise effectively, which improves the measuring accuracy of the system. SO2 at four different concentrations was measured by the system under laboratory conditions. The results proved that the full-scale error of this system is less than 2% FS.
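
    A minimal sketch of the concentration-retrieval step follows: PLS regression maps absorption spectra to gas concentration. The toy cross-section, wavelength grid and noise level are placeholders for the real DOAS data, and sklearn's linear PLSRegression stands in for the nonlinear PLS variant described above.

        # Minimal sketch: calibrate PLS on known concentrations, then retrieve
        # an unknown concentration from its spectrum.
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        wl = np.linspace(280, 320, 200)                  # wavelength grid (nm)
        sigma = np.exp(-((wl - 300) / 6) ** 2)           # toy SO2 cross-section
        conc = rng.uniform(0, 100, 40)                   # calibration concentrations
        spectra = conc[:, None] * sigma + 0.5 * rng.normal(size=(40, 200))

        pls = PLSRegression(n_components=3).fit(spectra, conc)
        unknown = 42.0 * sigma + 0.5 * rng.normal(size=200)
        print("retrieved concentration:",
              round(float(pls.predict(unknown[None, :])[0, 0]), 2))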

  9. Misalignment-robust, edge-based image fusion method

    NASA Astrophysics Data System (ADS)

    Xi, Cai; Wei, Zhao

    2012-07-01

    We propose an image fusion method robust to misaligned source images based on their multiscale edge representations. Significant long edge curves at the second scale are selected to decide edge locations at each scale for the multiscale edge representations of source images. Then, processes are only executed on the representations that contain the main spatial structures of the images and also help suppress noise interference. A registration process is embedded in our fusion method. Edge correlation, calculated at the second scale, is involved as a match measure determining the fusion rules and also as a similarity measure quantifying the matching extent between source images, which makes the registration and fusion processes share the same data and hence lessens the computation of our method. Experimental results prove that, no matter whether in a noiseless or noisy condition, the proposed method provides satisfying treatment to misregistered source images and behaves well in terms of visual and objective evaluations on the fusion results, which further verifies the robustness of our edge-based method to misregistration and noise.

  10. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct-forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method is considered a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without considering the immersed boundary, the solenoidal velocity is used to determine the volume force on the Lagrangian points, and then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. Supported by the China Scholarship Council.

  11. An error embedded method based on generalized Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Kim, Philsu; Kim, Junghan; Jung, WonKyu; Bu, Sunyoung

    2016-02-01

    In this paper, we develop an error-embedded method based on generalized Chebyshev polynomials for solving stiff initial value problems. The solution and the error at each integration step are calculated by generalized Chebyshev polynomials of two consecutive degrees having overlapping zeros, which enables us to minimize overall computational costs. Furthermore, the errors at each integration step are embedded in the algorithm itself. In terms of concrete convergence and stability analysis, the constructed algorithm turns out to have sixth-order convergence and almost L-stability. We assess the proposed method with several numerical results, showing that it uses larger time step sizes and is numerically more efficient.

  12. Swelling-based method for preparing stable, functionalized polymer colloids.

    PubMed

    Kim, Anthony J; Manoharan, Vinothan N; Crocker, John C

    2005-02-16

    We describe a swelling-based method to prepare sterically stabilized polymer colloids with different functional groups or biomolecules attached to their surface. It should be applicable to a variety of polymeric colloids, including magnetic particles, fluorescent particles, polystyrene particles, PMMA particles, and so forth. The resulting particles are more stable in the presence of monovalent and divalent salt than existing functionalized colloids, even in the absence of any surfactant or protein blocker. While we use a PEG polymer brush here, the method should enable the use of a variety of polymer chemistries and molecular weights. PMID:15700965

  13. A backtranslation method based on codon usage strategy.

    PubMed Central

    Pesole, G; Attimonelli, M; Liuni, S

    1988-01-01

    This study describes a method for the backtranslation of an amino acid sequence, an extremely useful tool for various experimental approaches. It involves two computer programs, CLUSTER and BACKTR, written in Fortran 77 and running on a VAX/VMS computer. CLUSTER generates a reliable codon usage table through a cluster analysis based on a chi-square-like distance between the sequences. BACKTR produces backtranslated sequences according to different options using the codon usage table thus obtained, and can also select the least ambiguous potential oligonucleotide probes within an amino acid sequence. The method was tested by applying it to 158 yeast genes. PMID:3281142
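
    A minimal sketch of the BACKTR idea in Python: for each residue, emit the most frequent codon from a codon usage table. The tiny table below covers only the residues in the example, with approximate yeast-like frequencies, and stands in for a CLUSTER-derived table.

        # Minimal sketch: backtranslation by most-frequent codon per residue.
        codon_usage = {
            "M": {"ATG": 1.00},
            "K": {"AAA": 0.42, "AAG": 0.58},
            "F": {"TTT": 0.59, "TTC": 0.41},
            "L": {"TTG": 0.29, "TTA": 0.28, "CTA": 0.14, "CTT": 0.13,
                  "CTG": 0.10, "CTC": 0.06},
        }

        def backtranslate(protein):
            return "".join(max(codon_usage[aa], key=codon_usage[aa].get)
                           for aa in protein)

        print(backtranslate("MKFL"))   # ATGAAGTTTTTG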

  14. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
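
    The sketch below illustrates the feature pipeline under stated assumptions: a plain 2D cepstrum (standing in for the paper's mel and Mellin warpings) is computed per kernel image, a low-quefrency block is used as the feature vector, and an SVM is trained. The images are random placeholders.

        # Minimal sketch: 2D cepstral features fed to an SVM classifier.
        import numpy as np
        from sklearn.svm import SVC

        def cepstrum_2d(img):
            spec = np.abs(np.fft.fft2(img)) + 1e-9
            return np.real(np.fft.ifft2(np.log(spec)))

        rng = np.random.default_rng(0)
        healthy = rng.random((20, 32, 32))
        damaged = rng.random((20, 32, 32)) ** 2      # toy contrast difference

        X = np.array([cepstrum_2d(im)[:8, :8].ravel()   # low-quefrency block
                      for im in np.concatenate([healthy, damaged])])
        y = np.array([0] * 20 + [1] * 20)

        clf = SVC(kernel="rbf").fit(X, y)
        print("training accuracy:", clf.score(X, y))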

  15. Methodical Base of Experimental Studies of Collinear Multibody Decays

    NASA Astrophysics Data System (ADS)

    Kamanin, D. V.; Zhuchko, V. E.; Kondtatyev, N. A.; Alexandrov, A. A.; Alexandrova, I. A.; Kuznetsova, E. A.; Strekalovsky, A. O.; Strekalovsky, O. V.; Pyatkov, Yu. V.; Jacobs, N.; Malaza, V.; Mulgin, S. I.

    2013-06-01

    Our recent experiments dedicated to the study of the CCT of 252Cf (sf) were carried out at the COMETA setup, based on mosaics of PIN diodes and a special array of 3He-filled neutron counters. The principal peculiarity of the experiment consists in measuring the heavy-ion masses in the frame of the TOF-E (time-of-flight vs. energy) method over a wide range of masses and energies, with almost collinear recession of the decay partners. The methodical questions of such an experiment are discussed here.

  16. Methods for preparing colloidal nanocrystal-based thin films

    DOEpatents

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  17. Design of a Password-Based EAP Method

    NASA Astrophysics Data System (ADS)

    Manganaro, Andrea; Koblensky, Mingyur; Loreti, Michele

    In recent years, amendments to IEEE standards for wireless networks added support for authentication algorithms based on the Extensible Authentication Protocol (EAP). Available solutions generally use digital certificates or pre-shared keys but the management of the resulting implementations is complex or unlikely to be scalable. In this paper we present EAP-SRP-256, an authentication method proposal that relies on the SRP-6 protocol and provides a strong password-based authentication mechanism. It is intended to meet the IETF security and key management requirements for wireless networks.

  18. A Flow SPR Immunosensor Based on a Sandwich Direct Method

    PubMed Central

    Tomassetti, Mauro; Conta, Giorgia; Campanella, Luigi; Favero, Gabriele; Sanzò, Gabriella; Mazzei, Franco; Antiochia, Riccarda

    2016-01-01

    In this study, we report the development of an SPR (Surface Plasmon Resonance) immunosensor for the detection of ampicillin, operating under flow conditions. SPR sensors based on both direct (with the immobilization of the antibody) and competitive (with the immobilization of the antigen) methods did not allow the detection of ampicillin. Therefore, a sandwich-based sensor was developed which showed a good linear response towards ampicillin between 10−3 and 10−1 M, a measurement time of ≤20 min and a high selectivity both towards β-lactam antibiotics and antibiotics of different classes. PMID:27187486

  19. A Flow SPR Immunosensor Based on a Sandwich Direct Method.

    PubMed

    Tomassetti, Mauro; Conta, Giorgia; Campanella, Luigi; Favero, Gabriele; Sanzò, Gabriella; Mazzei, Franco; Antiochia, Riccarda

    2016-01-01

    In this study, we report the development of an SPR (Surface Plasmon Resonance) immunosensor for the detection of ampicillin, operating under flow conditions. SPR sensors based on both direct (with the immobilization of the antibody) and competitive (with the immobilization of the antigen) methods did not allow the detection of ampicillin. Therefore, a sandwich-based sensor was developed which showed a good linear response towards ampicillin between 10−3 and 10−1 M, a measurement time of ≤20 min and a high selectivity both towards β-lactam antibiotics and antibiotics of different classes. PMID:27187486

  20. Real reproduction and evaluation of color based on BRDF method

    NASA Astrophysics Data System (ADS)

    Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli

    2013-12-01

    It is difficult to faithfully reproduce the original color of a target under different illumination conditions using traditional methods. A function that can reconstruct the reflection characteristics of every point on the target surface is therefore urgently needed to improve the authenticity of color reproduction; this function is known as the Bidirectional Reflectance Distribution Function (BRDF). A color reproduction method based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of the GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer (PHOTO RESEARCH, Inc., USA). The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch relative to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 ColorChecker patches are thereby reconstructed. The results reconstructed by the BRDF method are compared with the values calculated from the reflectance measured with the PR-715, and the chromaticity coordinates in color space and the color differences between the two are analyzed. The experiments show an average color difference of 2.567 and a sample standard deviation of 1.3049 between the method proposed in this paper and the traditional reconstruction method based on reflectance. The theoretical and experimental analysis indicates that color reproduction based on the BRDF describes the color information of an object in hemispherical space more completely than reflectance alone, and that the proposed method is effective and feasible for chromaticity reproduction.

  1. [An Effective Wavelength Detection Method Based on Echelle Spectra Reduction].

    PubMed

    Yin, Lu; Bayanheshig; Cui, Ji-cheng; Yang, Jin; Zhu, Ji-wei; Yao, Xue-feng

    2015-03-01

    The echelle spectrometer, with its high dispersion, high resolution, wide spectral coverage, and full-spectrum transient direct reading, among other advantages, is one representative of advanced spectrometer designs. As echelle spectrometers become commercialized, methods for processing their two-dimensional spectral images grow increasingly important. Currently, a centroid extraction algorithm is typically used first to detect the centroid positions of the effective spots, which are then combined with an echelle spectrum reduction method to detect the effective wavelengths; this approach has difficulty meeting the desired requirements. To improve the speed and accuracy of effective wavelength detection and the ability to correct imaging errors, an effective wavelength detection method based on spectral reduction is proposed. Instead of finding the centroids of the effective spots, the two-dimensional spectrum is first converted into a one-dimensional image by the echelle spectrum reduction method. By setting an appropriate threshold, the one-dimensional image is easier to process than the two-dimensional spectral image, and all pixel points corresponding to effective wavelengths can be detected at once. Based on this new idea, the speed and accuracy of image processing are improved, and at the same time a range of imaging errors can be compensated. The algorithm was tested on an echelle spectrograph to check its suitability for spectral image processing. A standard mercury lamp was chosen as the light source because it has a number of known characteristic lines against which the accuracy of the wavelength detection can be examined. According to the experimental results, the method not only increases processing speed but also improves wavelength detection accuracy: imaging errors below 0.05 mm (two pixels) can be corrected, and the wavelength accuracy reaches 0.02 nm.

  2. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time at which hydrogen was detected (the lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, the length of the lag period increased by 60 to 70 min. Based on this linear relationship between inoculum and lag period, the results indicate the potential of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
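
    The reported calibration is a straight line in lag time versus log10(inoculum), which makes inversion straightforward. A sketch with illustrative values consistent with the reported 60 to 70 min shift per decade:

    ```python
    import numpy as np

    # Illustrative calibration points: ~1 h lag at 1e6 cells/ml, ~7 h at 1 cell/ml.
    inoculum = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 1e1, 1e0])
    lag_h = np.array([1.0, 2.1, 3.2, 4.2, 5.4, 6.0, 7.0])

    slope, intercept = np.polyfit(np.log10(inoculum), lag_h, 1)
    print(f"{abs(slope) * 60:.0f} min extra lag per 10-fold dilution")

    def estimate_inoculum(lag_hours):
        """Invert the calibration line to estimate cells/ml from a lag time."""
        return 10 ** ((lag_hours - intercept) / slope)

    print(f"{estimate_inoculum(4.5):.0f} cells/ml")   # sample with a 4.5 h lag
    ```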

  3. A Human Gait Classification Method Based on Radar Doppler Spectrograms

    NASA Astrophysics Data System (ADS)

    Tivive, Fok Hing Chi; Bouzerdoum, Abdesselam; Amin, Moeness G.

    2010-12-01

    An image classification technique, which has recently been introduced for visual pattern recognition, is successfully applied for human gait classification based on radar Doppler signatures depicted in the time-frequency domain. The proposed method has three processing stages. The first two stages are designed to extract Doppler features that can effectively characterize human motion based on the nature of arm swings, and the third stage performs classification. Three types of arm motion are considered: free-arm swings, one-arm confined swings, and no-arm swings. The last two arm motions can be indicative of a human carrying objects or a person in stressed situations. The paper discusses the different steps of the proposed method for extracting distinctive Doppler features and demonstrates their contributions to the final and desirable classification rates.

  4. The conditional risk probability-based seawall height design method

    NASA Astrophysics Data System (ADS)

    Yang, Xing; Hu, Xiaodong; Li, Zhiqing

    2015-11-01

    The required seawall height is usually determined from a combination of wind speed (or wave height) and still water level for a specified return period, e.g., the 50-year return period wind speed together with the 50-year return period still water level. In reality, the two variables can be partially correlated, which may lead to over-design (excess cost) of seawall structures. The return period adopted for seawall design depends on the economy, society and natural environment of the region, meaning that a specified risk level of overtopping or damage of the seawall structure is usually allowed. The aim of this paper is to present a conditional risk probability-based seawall height design method which incorporates the correlation of the two variables. For purposes of demonstration, wind speeds and water levels collected from Jiangsu, China are analyzed. The results show that this method can improve the accuracy of seawall height design.
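
    The core point, that pairing two marginal 50-year values misstates the joint rarity when the variables are treated as independent, can be checked empirically. A sketch on synthetic, partially correlated wind/level data; all distributions and coefficients are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical paired annual maxima: wind speed (m/s) and still water level (m).
    wind = rng.gumbel(25, 3, 1000)
    level = 2.0 + 0.04 * (wind - 25) + rng.gumbel(0, 0.2, 1000)  # partially correlated

    w50 = np.quantile(wind, 1 - 1 / 50)    # 50-year wind (2% annual exceedance)
    z50 = np.quantile(level, 1 - 1 / 50)   # 50-year still water level

    p_joint = np.mean((wind >= w50) & (level >= z50))  # observed joint exceedance
    p_indep = (1 / 50) * (1 / 50)                      # if independence were assumed
    p_cond = p_joint / (1 / 50)                        # P(level >= z50 | wind >= w50)
    print(p_joint, p_indep, p_cond)
    ```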

  5. A Model Based Security Testing Method for Protocol Implementation

    PubMed Central

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases to verify the security of a protocol implementation. PMID:25105163

  6. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the earth's gravity vector caused by the orbits of the sun and moon. Because the local gravity field is highly irregular on a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  7. Geometrical MTF computation method based on the irradiance model

    NASA Astrophysics Data System (ADS)

    Lin, P.-D.; Liu, C.-S.

    2011-01-01

    The Modulation Transfer Function (MTF) is a measure of an optical system's ability to transfer contrast from the specimen to the image plane at a specific resolution. It can be computed either numerically by geometrical optics or measured experimentally by imaging a knife edge or a bar-target pattern of varying spatial frequency. Previously, MTF accuracy was generally limited by the size of the mesh on the image plane. This paper presents a new MTF computation method based on the irradiance model, without counting the number of rays hitting each grid cell. To verify the method, the MTF in the sagittal and meridional directions of an axis-symmetrical optical system is computed by both the ray-counting and the proposed methods. It is found that the grid size meshed on the image plane significantly affects the MTF of the ray-counting method, sometimes with significantly negative results. The proposed irradiance method is immune to issues of grid size. The CPU computation time for the two methods is approximately the same.
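
    Independently of how the irradiance is obtained, the MTF is the magnitude of the Fourier transform of the normalized line spread function. The sketch below verifies this numerically for a Gaussian LSF, whose MTF is known in closed form; it illustrates the quantity being computed, not the paper's irradiance model itself.

    ```python
    import numpy as np

    # Hypothetical Gaussian line spread function sampled on the image plane.
    x = np.linspace(-0.5, 0.5, 1024)           # mm
    sigma = 0.01                                # mm
    lsf = np.exp(-x**2 / (2 * sigma**2))

    lsf /= lsf.sum()                            # normalize so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freq = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles/mm

    # Analytic check: the MTF of a Gaussian LSF is exp(-2 (pi sigma f)^2).
    print(np.allclose(mtf[:50], np.exp(-2 * (np.pi * sigma * freq[:50])**2),
                      atol=1e-3))
    ```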

  8. A PDE-Based Fast Local Level Set Method

    NASA Astrophysics Data System (ADS)

    Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo

    1999-11-01

    We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys. 79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.

  9. CT Scanning Imaging Method Based on a Spherical Trajectory

    PubMed Central

    2016-01-01

    In industrial computed tomography (CT), the mismatch between the X-ray energy and the effective thickness makes it difficult to ensure the integrity of projection data using the traditional scanning model, because of the limitations of the object’s complex structure. So, we have developed a CT imaging method that is based on a spherical trajectory. Considering an unrestrained trajectory for iterative reconstruction, an iterative algorithm can be used to realise the CT reconstruction of a spherical trajectory for complete projection data only. Also, an inclined circle trajectory is used as an example of a spherical trajectory to illustrate the accuracy and feasibility of this new scanning method. The simulation results indicate that the new method produces superior results for a larger cone-beam angle, a limited angle and tabular objects compared with traditional circle trajectory scanning. PMID:26934744

  10. Traffic Speed Data Imputation Method Based on Tensor Completion

    PubMed Central

    Ran, Bin; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data would affect the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue by a novel tensor-based imputation approach. Specifically, tensor pattern is adopted for modeling traffic speed data and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. This proposed method is able to recover missing entries from given entries, which may be noisy, considering severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches. PMID:25866501
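
    As a simplified stand-in for HaLRTC, which soft-thresholds the singular values of tensor unfoldings, the sketch below imputes a day-by-time-slot speed matrix by iterative singular-value soft-thresholding; the data, threshold and missing rate are invented for illustration.

    ```python
    import numpy as np

    def svt_complete(M, mask, tau=5.0, n_iter=200):
        """Impute missing entries of M (mask == True where observed) by
        iterative singular-value soft-thresholding, a simplified matrix
        stand-in for the low-rank tensor completion (HaLRTC) in the paper."""
        X = np.where(mask, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            X = (U * np.maximum(s - tau, 0.0)) @ Vt   # low-rank shrinkage
            X[mask] = M[mask]                          # keep observed speeds
        return X

    # Hypothetical day x time-of-day speed matrix with 30% missing readings.
    rng = np.random.default_rng(1)
    days, slots = 30, 96
    profile = 60 + 20 * np.sin(np.linspace(0, 2 * np.pi, slots))
    speeds = profile + rng.normal(0, 2, (days, slots))
    mask = rng.random((days, slots)) > 0.3
    imputed = svt_complete(speeds, mask)
    print(np.abs(imputed[~mask] - speeds[~mask]).mean())  # mean absolute error
    ```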

  11. Grid-based Methods in Relativistic Hydrodynamics and Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Martí, José María; Müller, Ewald

    2015-12-01

    An overview of grid-based numerical methods used in relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) is presented. Special emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods. Results of a set of demanding test bench simulations obtained with different numerical methods are compared in an attempt to assess the present capabilities and limits of the various numerical strategies. Applications to three astrophysical phenomena are briefly discussed to motivate the need for and to demonstrate the success of RHD and RMHD simulations in their understanding. The review further provides FORTRAN programs to compute the exact solution of the Riemann problem in RMHD, and to simulate 1D RMHD flows in Cartesian coordinates.

  12. A protein structural class prediction method based on novel features.

    PubMed

    Zhang, Lichao; Zhao, Xiqiang; Kong, Liang

    2013-09-01

    In this study, a 12-dimensional feature vector is constructed to reflect the general contents and spatial arrangements of the secondary structural elements of a given protein sequence. Among the 12 features, 6 novel features are specially designed to improve the prediction accuracies for the α/β and α + β classes, based on the distributions of α-helices and β-strands and the characteristics of parallel and anti-parallel β-sheets. To evaluate our method, the jackknife cross-validation test is employed on two widely used datasets, the 25PDB and 1189 datasets, with sequence similarity lower than 40% and 25%, respectively. The performance of our method exceeds that of recently reported methods in most cases, and the 6 newly designed features have a significant positive effect on the prediction accuracies, especially for the α/β and α + β classes. PMID:23770446

  13. A Micromechanics-Based Method for Multiscale Fatigue Prediction

    NASA Astrophysics Data System (ADS)

    Moore, John Allan

    An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also offering a tool to aid geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.

  14. CT Scanning Imaging Method Based on a Spherical Trajectory.

    PubMed

    Chen, Ping; Han, Yan; Gui, Zhiguo

    2016-01-01

    In industrial computed tomography (CT), the mismatch between the X-ray energy and the effective thickness makes it difficult to ensure the integrity of projection data using the traditional scanning model, because of the limitations of the object's complex structure. So, we have developed a CT imaging method that is based on a spherical trajectory. Considering an unrestrained trajectory for iterative reconstruction, an iterative algorithm can be used to realise the CT reconstruction of a spherical trajectory for complete projection data only. Also, an inclined circle trajectory is used as an example of a spherical trajectory to illustrate the accuracy and feasibility of this new scanning method. The simulation results indicate that the new method produces superior results for a larger cone-beam angle, a limited angle and tabular objects compared with traditional circle trajectory scanning. PMID:26934744

  15. A Swarm-Based Learning Method Inspired by Social Insects

    NASA Astrophysics Data System (ADS)

    He, Xiaoxian; Zhu, Yunlong; Hu, Kunyuan; Niu, Ben

    Inspired by the cooperative transport behaviors of ants and building on Q-learning, a new learning method, Neighbor-Information-Reference (NIR) learning, is presented in this paper. It is a swarm-based learning method that strictly complies with the principles of swarm intelligence. In NIR learning, the i-interval neighbor's information, namely its discounted reward, is referenced when an individual selects the next state, so that it can make the best decision within a computable local neighborhood. In application, different NIR learning policies are recommended by controlling the parameters according to the time sensitivity of the concrete task. NIR learning can remarkably improve individual efficiency and make the swarm more "intelligent".

  16. Footstep Planning Based on Univector Field Method for Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Hong, Youngdae; Kim, Jong-Hwan

    This paper proposes a footstep planning algorithm, based on a univector field method optimized by evolutionary programming, that allows a humanoid robot to reach a target point in a dynamic environment. The univector field method determines the moving direction of the humanoid robot at every footstep. A modifiable walking pattern generator, which extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase, is utilized to generate the joint trajectories of the robot satisfying the planned footsteps. The proposed algorithm enables the humanoid robot not only to avoid static or moving obstacles but also to step over static obstacles. The performance of the proposed algorithm is demonstrated by computer simulations using a model of the small-sized humanoid robot HanSaRam (HSR)-VIII.

  17. Method to find community structures based on information centrality

    NASA Astrophysics Data System (ADS)

    Fortunato, Santo; Latora, Vito; Marchiori, Massimo

    2004-11-01

    Community structures are an important feature of many social, biological, and technological networks. Here we study a variation on the method for detecting such communities proposed by Girvan and Newman and based on the idea of using centrality measures to define the community boundaries [M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)]. We develop a hierarchical clustering algorithm that iteratively finds and removes the edge with the highest information centrality. We test the algorithm on computer-generated and real-world networks whose community structure is already known or has been studied by means of other methods. We show that our algorithm, although it runs to completion in a time O(n⁴), is very effective, especially when the communities are very mixed and hardly detectable by the other methods.
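
    The algorithm follows the Girvan-Newman template with a different edge score. The sketch below implements the removal loop in networkx, substituting edge betweenness (readily available in networkx) for the information centrality used in the actual method.

    ```python
    import networkx as nx

    def communities_by_edge_removal(G, target_components=2):
        """Hierarchical community detection in the Girvan-Newman style:
        iteratively remove the most central edge until the graph splits.
        The paper ranks edges by information centrality; edge betweenness
        is used here as a readily available stand-in."""
        H = G.copy()
        while nx.number_connected_components(H) < target_components:
            centrality = nx.edge_betweenness_centrality(H)
            worst = max(centrality, key=centrality.get)
            H.remove_edge(*worst)
        return list(nx.connected_components(H))

    G = nx.karate_club_graph()
    for community in communities_by_edge_removal(G):
        print(sorted(community))
    ```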

  18. Sparse Reconstruction for Bioluminescence Tomography Based on the Semigreedy Method

    PubMed Central

    Guo, Wei; Jia, Kebin; Zhang, Qian; Liu, Xueyan; Feng, Jinchao; Qin, Chenghu; Ma, Xibo; Yang, Xin; Tian, Jie

    2012-01-01

    Bioluminescence tomography (BLT) is a molecular imaging modality which can three-dimensionally resolve molecular processes in small animals in vivo. The ill-posed nature of the BLT problem means that its reconstruction admits nonunique solutions and is sensitive to noise. In this paper, we propose a sparse BLT reconstruction algorithm based on a semigreedy method. To reduce the ill-posedness and the computational cost, the optimal permissible source region is chosen automatically using an iterative search tree. The proposed method obtains fast and stable source reconstructions from the whole body and imposes constraints without using a regularization penalty term. Numerical simulations on a mouse atlas and in vivo mouse experiments were conducted to validate the effectiveness and potential of the method. PMID:22927887

  19. Decision tree based transient stability method -- A case study

    SciTech Connect

    Wehenkel, L.; Pavella, M. . Inst. Montefiore); Euxibie, E.; Heilbronn, B. . Direction des Etudes et Recherches)

    1994-02-01

    The decision tree transient stability method is revisited via a case study carried out on the French EHV power system. In short, the method consists of building off-line decision trees able to subsequently assess the system's transient behavior in terms of its precontingency parameters (or "attributes") likely to drive the stability phenomena. This case study aims at investigating practical feasibility aspects and features of the trees, at enhancing their reliability to the extent possible, and at generalizing them. Feasibility aspects encompass data base generation, candidate attributes, and stability classes; tree features concern in particular their complexity, in terms of size and interpretability, and their robustness with respect to both building and use. Reliability is enhanced by defining and exploiting pragmatic quality measures. Generalization concerns multicontingency, instead of single-contingency, trees. The results obtained show real promise for the method to meet the practical needs of electric power utilities.

  20. Amplification-based method for microRNA detection.

    PubMed

    Shen, Yanting; Tian, Fei; Chen, Zhenzhu; Li, Rui; Ge, Qinyu; Lu, Zuhong

    2015-09-15

    Over the last two decades, the study of miRNAs has attracted tremendous attention, since they regulate gene expression post-transcriptionally and have been shown to be dysregulated in many diseases. Detection methods with higher sensitivity, specificity and selectivity between precursor and mature microRNAs are urgently needed and widely studied. This review gives an overview of amplification-based technologies, including traditional methods, current modified methods, and cross-platform approaches that combine them with other techniques. Much progress has been made in modified amplification-based microRNA detection methods, although traditional platforms have not yet been replaced. Several sample-specific normalizers have been validated, suggesting that different normalizers should be established for different sample types and that a combination of several normalizers may be more appropriate than a single universal one. This systematic overview provides comprehensive information for subsequent related studies and could reduce unnecessary repetition in future work. PMID:25930002

  1. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs offer disabled people a way to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC)-based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results showed that the proposed method attains good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proves the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications. PMID:26831487
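
    A minimal temporal-MUSIC sketch conveys the idea: build a lag-embedded covariance, split off the noise subspace, and score candidate stimulus frequencies by the pseudospectrum. The epoch length, embedding dimension and subspace size below are illustrative, not the paper's settings.

    ```python
    import numpy as np

    def music_pseudospectrum(x, freqs, fs, m=64, n_sources=6):
        """Temporal MUSIC pseudospectrum (minimal sketch). x: 1-D signal,
        freqs: candidate frequencies (Hz), m: embedding dimension,
        n_sources: assumed signal-subspace dimension."""
        N = len(x) - m + 1
        Y = np.stack([x[i:i + m] for i in range(N)], axis=1)   # m x N
        R = Y @ Y.T / N                                        # covariance
        w, V = np.linalg.eigh(R)                               # ascending order
        En = V[:, : m - n_sources]                             # noise subspace
        k = np.arange(m)
        p = []
        for f in freqs:
            a = np.exp(2j * np.pi * f * k / fs)                # steering vector
            p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
        return np.array(p)

    # Hypothetical SSVEP-like channel: 10 Hz target plus harmonic and noise.
    fs, T = 250, 2.0
    t = np.arange(int(fs * T)) / fs
    x = np.sin(2 * np.pi * 10 * t) + 0.4 * np.sin(2 * np.pi * 20 * t) \
        + 0.5 * np.random.default_rng(2).normal(size=t.size)
    cands = np.array([8.0, 10.0, 12.0, 15.0])
    print(cands[np.argmax(music_pseudospectrum(x, cands, fs))])  # expect 10.0
    ```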

  2. FBG interrogation method based on wavelength-swept laser

    NASA Astrophysics Data System (ADS)

    Qin, Chuan; Zhao, Jianlin; Jiang, Biqiang; Rauf, Abdul; Wang, Donghui; Yang, Dexing

    2013-06-01

    The wavelength-swept laser technique is an active demodulation method that integrates the laser source and the detection circuitry to achieve a compact size. The method also offers a large demodulation range, high accuracy, and comparatively high speed. In this paper, we present an FBG interrogation method based on a wavelength-swept laser, in which an erbium-doped fiber serves as the gain medium and is connected through a WDM coupler to form a ring cavity; a tunable fiber Fabry-Perot (FP) filter inserted in the loop selects the lasing frequency, and a gas absorption cell serves as the frequency reference. The laser wavelength is swept by driving the FP filter. When the laser wavelength matches that of an FBG sensor, strong reflection peaks appear. By synchronously detecting these reflection signals together with the transmission signal of the gas absorption cell and analyzing them, the center wavelengths of the FBG sensors are calculated. Here, we discuss the data processing method based on the frequency reference and experimentally study the characteristics of the swept laser. Finally, we use this interrogator to demodulate FBG stress sensors. The results show that the demodulation range almost covers the C+L band, and that the resolution and accuracy reach about 1 pm (or less) and 5 pm, respectively, making the interrogator suitable for most FBG measurements.

  3. Gradient-based image recovery methods from incomplete Fourier measurements.

    PubMed

    Patel, Vishal M; Maleh, Ray; Gilbert, Anna C; Chellappa, Rama

    2012-01-01

    A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-square optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods. PMID:21690011

  4. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are arriving like torrential rain. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra using the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far from Earth, and their spectra are usually contaminated by various kinds of noise, so recognizing these two types of spectra is a typical problem in automatic spectral classification. Furthermore, the nearest neighbor method is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark when developing novel algorithms. Regarding applicability in practice, it is shown that the recognition ratio of the nearest neighbor (NN) method is comparable to the best results reported in the literature based on more complicated methods, and the advantage of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation when processing massive spectral data. In conclusion, the results in this work are helpful for the study of galaxy and quasar spectral classification. PMID:22097877
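
    Because NN requires no training, applying it to spectra is nearly a one-liner. The sketch below uses scikit-learn's KNeighborsClassifier on fabricated galaxy/quasar-like spectra; real work would use SDSS fluxes with appropriate preprocessing.

    ```python
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical stand-in spectra: galaxies as smooth continua, quasars
    # with an added broad emission bump, both noisy.
    rng = np.random.default_rng(3)
    wave = np.linspace(0, 1, 200)

    def make_spectra(n, quasar):
        base = 1.0 - 0.5 * wave + rng.normal(0, 0.05, (n, wave.size))
        if quasar:
            centers = rng.uniform(0.2, 0.8, (n, 1))
            base += 0.8 * np.exp(-(wave - centers) ** 2 / 0.002)
        return base

    X = np.vstack([make_spectra(300, False), make_spectra(300, True)])
    y = np.array([0] * 300 + [1] * 300)          # 0 = galaxy, 1 = quasar
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(Xtr, ytr)
    print(clf.score(Xte, yte))
    ```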

  5. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

    Catchment regionalization is an important step in estimating hydrologic parameters of ungaged basins. This paper proposes a multiscale entropy method using the wavelet transform and a k-means based hybrid approach for clustering hydrologic catchments. Multi-resolution wavelet transform of a time series reveals structure, which is often obscured in streamflow records, by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of the randomness of the given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that approximates the entropy pattern of the streamflow data in that cluster. The tests for homogeneity reveal that the proposed approach works very well in regionalization.
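
    A compact sketch of the WME-plus-k-means pipeline, using PyWavelets for the multiresolution decomposition and the Shannon entropy of each detail level as the scale-wise signature; the streamflow series, wavelet and parameter choices are invented for illustration.

    ```python
    import numpy as np
    import pywt
    from sklearn.cluster import KMeans

    def wavelet_multiscale_entropy(flow, wavelet="db4", levels=5):
        """Wavelet-based multiscale entropy signature (minimal sketch):
        Shannon entropy of the normalized energy distribution of the
        detail coefficients at each decomposition level."""
        coeffs = pywt.wavedec(flow, wavelet, level=levels)
        signature = []
        for d in coeffs[1:]:                      # detail coefficients per scale
            e = d ** 2
            p = e / e.sum()
            signature.append(-np.sum(p * np.log2(p + 1e-12)))
        return signature

    # Hypothetical daily streamflow for a few catchments (the paper used 530).
    rng = np.random.default_rng(4)
    t = np.arange(1024)
    catchments = [np.abs(a * np.sin(2 * np.pi * t / 365) + rng.normal(0, s, t.size))
                  for a, s in [(10, 1), (10, 1.2), (2, 5), (2, 4.5)]]
    sigs = np.array([wavelet_multiscale_entropy(c) for c in catchments])
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sigs))
    ```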

  6. Current trends in virtual high throughput screening using ligand-based and structure-based methods.

    PubMed

    Sukumar, Nagamani; Das, Sourav

    2011-12-01

    High throughput in silico methods have offered the tantalizing potential to drastically accelerate the drug discovery process. Yet despite significant efforts expended by academia, national labs and industry over the years, many of these methods have not lived up to their initial promise of reducing the time and costs associated with the drug discovery enterprise, a process that can typically take over a decade and cost hundreds of millions of dollars from conception to final approval and marketing of a drug. Nevertheless structure-based modeling has become a mainstay of computational biology and medicinal chemistry, helping to leverage our knowledge of the biological target and the chemistry of protein-ligand interactions. While ligand-based methods utilize the chemistry of molecules that are known to bind to the biological target, structure-based drug design methods rely on knowledge of the three-dimensional structure of the target, as obtained through crystallographic, spectroscopic or bioinformatics techniques. Here we review recent developments in the methodology and applications of structure-based and ligand-based methods and target-based chemogenomics in Virtual High Throughput Screening (VHTS), highlighting some case studies of recent applications, as well as current research in further development of these methods. The limitations of these approaches will also be discussed, to give the reader an indication of what might be expected in years to come. PMID:21843144

  7. Effect of changing journal clubs from traditional method to evidence-based method on psychiatry residents

    PubMed Central

    Faridhosseini, Farhad; Saghebi, Ali; Khadem-Rezaiyan, Majid; Moharari, Fatemeh; Dadgarmoghaddam, Maliheh

    2016-01-01

    Introduction: Journal club is a valuable educational tool in the medical field that serves several goals. This study aims to investigate the effect on psychiatry residents of changing journal clubs from the traditional method to the evidence-based method. Method: This study was conducted using a before-after design. First- and second-year residents of psychiatry were included in the study. First, the status quo was evaluated with a standardized questionnaire on the effect of the journal club. Then, ten sessions were held to familiarize the residents with the concept of the journal club, after which evidence-based journal club sessions were held. The questionnaire was given to the residents again after the final session. Data were analyzed with descriptive statistics (frequency and percentage frequency, mean and standard deviation) and analytic statistics (paired t-test) using SPSS 22. Results: Of a total of 20 first- and second-year residents of psychiatry, the data of 18 residents were finally analyzed. Most of the subjects (17 [93.7%]) were female. The mean overall score before and after the intervention was 1.83±0.45 and 2.85±0.57, respectively, a significant increase (P<0.001). Conclusion: Moving toward evidence-based journal clubs seems an appropriate measure for reaching the goals set for this educational tool. PMID:27570469

  8. Global seismic waveform tomography based on the spectral element method.

    NASA Astrophysics Data System (ADS)

    Capdeville, Y.; Romanowicz, B.; Gung, Y.

    2003-04-01

    Because seismogram waveforms contain much more information on the earth's structure than body wave arrival times or surface wave phase velocities, inversion of complete time-domain seismograms should allow much better resolution in global tomography. In order to achieve this, accurate methods for the calculation of forward propagation of waves in a 3D earth need to be utilized, which presents theoretical as well as computational challenges. In the past 8 years, we have developed several global 3D S velocity models based on long period waveform data and a normal mode asymptotic perturbation formalism (NACT, Li and Romanowicz, 1996). While this approach is relatively accessible from the computational point of view, it relies on the assumption of smooth heterogeneity in a single scattering framework. Recently, the introduction of the spectral element method (SEM) has been a major step forward in the computation of seismic waveforms in a global 3D earth with no restrictions on the size of heterogeneities (Chaljub, 2000). While this method is computationally heavy when the goal is to compute large numbers of seismograms down to typical body wave periods (1-10 s), it is much more accessible when restricted to low frequencies (T > 150 s). When coupled with normal modes (e.g. Capdeville et al., 2000), the numerical computation can be restricted to a spherical shell within which heterogeneity is considered, further reducing the computational time. Here, we present a tomographic method based on the non-linear least squares inversion of time domain seismograms using the coupled method of spectral elements and modal solution. SEM/modes are used both for the forward modeling and to compute partial derivatives. The parametrization of the model is also based on the spectral element mesh, the "cubed sphere" (Sadourny, 1972), which leads to a 3D local polynomial parametrization. This parametrization, combined with the excellent earth coverage resulting from the full 3D theory used

  9. An analytical method for Mathieu oscillator based on method of variation of parameter

    NASA Astrophysics Data System (ADS)

    Li, Xianghong; Hou, Jingyu; Chen, Jufeng

    2016-08-01

    A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the idea of the method of variation of parameters. Assuming the time-varying parameter in the Mathieu oscillator to be constant, one can easily obtain an accurate analytical solution. An approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter back for the constant one in this solution. To verify the correctness and precision of the proposed analytical method, the first-order and ninth-order approximate solutions by the harmonic balance method (HBM) are also presented. Comparisons show that the results of the proposed analytical method agree very well with those of numerical simulation. Moreover, the precision of the proposed method is not only higher than that of the first-order HBM approximation, but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.
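
    The frozen-parameter idea is easy to probe numerically: with the time-varying term switched off, the forced equation has a classical closed-form response, and the deviation of the full Mathieu solution from it is what the variation-of-parameter construction accounts for. A sketch with illustrative parameter values (not the paper's):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Forced Mathieu equation: x'' + (delta + eps*cos(2 t)) x = F cos(omega t).
    # Parameter names and values are illustrative only.
    delta, eps, F, omega = 1.0, 0.2, 0.5, 1.8

    def rhs(t, y):
        x, v = y
        return [v, -(delta + eps * np.cos(2 * t)) * x + F * np.cos(omega * t)]

    t_eval = np.linspace(0, 100, 5000)
    sol = solve_ivp(rhs, (0, 100), [0.0, 0.0], t_eval=t_eval,
                    rtol=1e-9, atol=1e-9)

    # Frozen-parameter check: with eps = 0 the equation is a driven harmonic
    # oscillator whose zero-initial-condition response is known in closed form.
    x_const = F / (delta - omega**2) * (np.cos(omega * t_eval)
                                        - np.cos(np.sqrt(delta) * t_eval))
    print(np.abs(sol.y[0] - x_const).max())  # deviation due to the cos(2t) term
    ```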

  10. TRUST-TECH based Methods for Optimization and Learning

    NASA Astrophysics Data System (ADS)

    Reddy, Chandan K.

    2007-12-01

    Many problems that arise in the machine learning domain involve nonlinearity and quite often demand global optimal solutions rather than local ones. Optimization problems are inherent in machine learning algorithms, and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the quality of the solution obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are repeated alternately in the solution space to improve the quality of the solutions. Our methods were tested on both synthetic and real datasets, and the advantages of using this novel framework are clearly manifested. This framework not only reduces the sensitivity to initialization, but also allows practitioners the flexibility to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms, such as evolutionary algorithms and smoothing algorithms, are also studied, and frameworks for combining them with TRUST-TECH have been proposed and evaluated on several test systems.

  11. Scene-based nonuniformity correction method using multiscale constant statistics

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao; Qian, Weixian

    2011-08-01

    In scene-based nonuniformity correction (NUC) methods for infrared focal plane array cameras, statistical approaches have been well studied because of their lower computational complexity. However, when the assumptions imposed by statistical algorithms are violated, their performance is poor. Moreover, many of these techniques, like the global constant statistics method, usually need tens of thousands of image frames to obtain a good NUC result. In this paper, we introduce a new statistical NUC method called multiscale constant statistics (MSCS), in which the spatial scale of the temporally constant distribution expands over time. Under the assumption that the nonuniformity is distributed in the higher spatial frequency domain, the spatial range for the gain and offset estimates gradually expands to guarantee fast compensation for nonuniformity. Furthermore, an exponential window and a tolerance interval for the acquired data are introduced to capture the drift in nonuniformity and eliminate ghosting artifacts. The strength of the proposed method lies in its simplicity, low computational complexity, and good trade-off between convergence rate and correction precision. The NUC ability of the proposed method is demonstrated using infrared video sequences with both synthetic and real nonuniformity.
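
    A single-scale sketch of the constant-statistics core: per-pixel temporal mean and standard deviation, tracked with an exponential window, serve as the offset and gain estimates. The multiscale spatial expansion and the deghosting tolerance interval of MSCS are omitted, and all data are synthetic.

    ```python
    import numpy as np

    def constant_statistics_nuc(frames, alpha=0.05):
        """Simplified constant-statistics NUC: per-pixel temporal mean/std
        tracked with an exponential window give offset/gain estimates."""
        mean = frames[0].astype(float)
        var = np.ones_like(mean)
        corrected = []
        for f in frames:
            mean = (1 - alpha) * mean + alpha * f            # exponential window
            var = (1 - alpha) * var + alpha * (f - mean) ** 2
            corrected.append((f - mean) / (np.sqrt(var) + 1e-6))
        return np.array(corrected)

    # Hypothetical sequence: a moving scene plus fixed-pattern gain/offset noise.
    rng = np.random.default_rng(5)
    H, W, T = 64, 64, 500
    gain_fpn = 1 + 0.1 * rng.normal(size=(H, W))
    offset_fpn = 20 * rng.normal(size=(H, W))
    scene = [100 + 50 * np.roll(np.outer(np.hanning(H), np.hanning(W)), t, axis=1)
             for t in range(T)]
    frames = np.array([gain_fpn * s + offset_fpn for s in scene])
    print(constant_statistics_nuc(frames).shape)
    ```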

  12. Mode separation of Lamb waves based on dispersion compensation method.

    PubMed

    Xu, Kailiang; Ta, Dean; Moilanen, Petro; Wang, Weiqi

    2012-04-01

    Ultrasonic Lamb modes typically propagate as a combination of multiple dispersive wave packets. The frequency components of each mode are spread widely in the time domain due to dispersion, and it is very challenging to separate individual modes by traditional signal processing methods. In the present study, a dispersion compensation method is proposed for the purpose of mode separation. This numerical method compensates, i.e., compresses, the individual dispersive waveforms into temporal pulses, which thereby become nearly non-overlapping in time and frequency and can thus be extracted individually with rectangular time windows. It is further shown that dispersion compensation also provides a method for predicting the plate thickness. Finally, based on the reversibility of the numerical compensation, an artificial dispersion technique is used to restore the original waveform of each mode from the separated compensated pulse. The performance of the compensation separation techniques was evaluated by processing synthetic and experimental signals consisting of multiple Lamb modes with high dispersion. Individual modes were extracted in good accordance with the original waveforms and theoretical predictions. PMID:22501050
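
    Numerically, compensation amounts to multiplying the spectrum by the conjugate propagation phase exp(+jk(f)d). The sketch below disperses and then re-compresses a pulse, using an invented quadratic k(f) in place of a real Lamb-mode dispersion curve.

    ```python
    import numpy as np

    def dispersion_compensate(signal, fs, distance, k_of_f):
        """Compress a dispersive wave packet by removing the propagation
        phase: X(f) * exp(+1j * k(f) * d). k_of_f maps frequency (Hz) to
        wavenumber (rad/m)."""
        n = len(signal)
        f = np.fft.rfftfreq(n, 1 / fs)
        X = np.fft.rfft(signal)
        return np.fft.irfft(X * np.exp(1j * k_of_f(f) * distance), n)

    fs, d = 1e6, 0.5                               # 1 MHz sampling, 0.5 m path
    k = lambda f: 2 * np.pi * f / 3000 + 1e-9 * (2 * np.pi * f) ** 2  # toy k(f)

    # Synthesize a pulse that has propagated the distance d under this k(f).
    t = np.arange(2048) / fs
    pulse = np.exp(-((t - 2e-4) ** 2) / (2 * (1e-5) ** 2)) * np.sin(2 * np.pi * 2e5 * t)
    F = np.fft.rfftfreq(len(pulse), 1 / fs)
    dispersed = np.fft.irfft(np.fft.rfft(pulse) * np.exp(-1j * k(F) * d), len(pulse))

    recovered = dispersion_compensate(dispersed, fs, d, k)
    print(np.abs(recovered - pulse).max())         # near zero: pulse re-compressed
    ```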

  13. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and systematic approach it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue and reducing both the reliance on individual knowledge and the subjectivity of the decision-making process. The benefits provided by the system favor better performance of construction projects. PMID:24453925

  14. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  15. DGGE-based detection method for Quahog Parasite Unknown (QPX).

    PubMed

    Gast, R J; Cushman, E; Moran, D M; Uhlinger, K R; Leavitt, D; Smolowitz, R

    2006-06-12

    Quahog Parasite Unknown (QPX) is a significant cause of hard clam Mercenaria mercenaria mortality along the northeast coast of the United States. It infects both wild and cultured clams, often annually in plots that are heavily farmed. Subclinically infected clams can be identified by histological examination of the mantle tissue, but there is currently no method available to monitor the presence of QPX in the environment. Here, we report on a polymerase chain reaction (PCR)-based method that will facilitate the detection of QPX in natural samples and seed clams. With our method, between 10 and 100 QPX cells can be detected in 1 l of water, 1 g of sediment and 100 mg of clam tissue. Denaturing gradient gel electrophoresis (DGGE) is used to establish whether the PCR products are the same as those in the control QPX culture. We used the method to screen 100 seed clams of 15 mm, and found that 10 to 12% of the clams were positive for the presence of the QPX organism. This method represents a reliable and sensitive procedure for screening both environmental samples and potentially contaminated small clams. PMID:16875398

  16. Variation block-based genomics method for crop plants

    PubMed Central

    2014-01-01

    Background: In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which arose through crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for screening target loci for agricultural traits. Results: We propose the variation block method, a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. Conclusions: We suggest that the variation block method is an efficient genomics method for recombination block-level comparison of crop genomes. We expect this method to facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding. PMID:24929792

  17. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in achieving semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date for validating archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method, for which we implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis also reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. PMID:23246613

  18. A beam hardening correction method based on HL consistency

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Tang, Shaojie; Yu, Hengyong

    2006-08-01

    X-ray CT with a polychromatic tube spectrum produces artifacts known as the beam hardening effect. The correction currently implemented in CT devices relies on an a priori polynomial obtained from water phantom experiments. This paper proposes a new beam hardening correction algorithm in which the correction polynomial is derived from the interrelation of the projection data across angles, as expressed by Helgason-Ludwig consistency (HL consistency). Firstly, a bi-polynomial is constructed to characterize the beam hardening effect based on the physical model of medical x-ray imaging. In this bi-polynomial, a factor r(γ,β) represents the ratio of the attenuation contributions of high density material (bone, etc.) to low density material (muscle, vessel, blood, soft tissue, fat, etc.) at projection angle β and fan angle γ. Secondly, letting r(γ,β)=0, the bi-polynomial degenerates to a single polynomial whose coefficients can be calculated from HL consistency; this yields the primary correction, which is already more efficient in theory than the correction used in current CT devices. Thirdly, from a normal CT reconstruction of the corrected projection data, r(γ,β) can be estimated. Fourthly, the coefficients of the bi-polynomial are likewise calculated from HL consistency, giving the final correction. Experiments with circular cone-beam CT demonstrate the excellent performance of this method. Correcting the beam hardening effect based on HL consistency not only achieves a self-adaptive and more precise correction, but also eliminates the routine and inconvenient water phantom experiments, and could renovate the correction technique of current CT devices.

  19. A supervoxel-based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature. The geometric relationship between two neighboring supervoxels is used to construct a smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate. A 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to the manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9%+/-3.2%. The segmentation method can be used not only for the prostate but also for other organs.

  20. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the application requirements of digitized document images, using digital cameras to digitize documents has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies that mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaiced, and the best mosaiced image is selected according to the OCR recognition results. As a result, for the best mosaiced image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the warped document image correction problem effectively.

  1. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment mounted on the tunnel walls, cause laser point cloud data to include many non-tunnel-section points (hereinafter referred to as non-points), which degrade the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data are first projected onto a horizontal plane, and a search algorithm is applied to extract the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented regionally and then iteratively fitted as a smooth elliptic cylindrical surface, which enables automatic filtering of the inner-wall non-points. Two groups of experiments showed consistent results: the elliptic cylindrical model based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.
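
    A minimal sketch of the per-segment step, assuming each segment's points have already been projected onto a plane normal to the fitted axis: fit a general conic by linear least squares and iteratively discard points with large algebraic residuals. The function names and the (scale-dependent) residual threshold are hypothetical, and a production implementation would constrain the conic to an ellipse.

    ```python
    import numpy as np

    def fit_conic(x, y):
        """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

        Returns the unit-norm coefficient vector minimizing the algebraic residual.
        """
        D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
        # The smallest right-singular vector minimizes ||D a|| subject to ||a|| = 1.
        _, _, vt = np.linalg.svd(D, full_matrices=False)
        return vt[-1]

    def filter_section(points_2d, threshold=0.01, n_iter=5):
        """Iteratively fit the elliptic section and drop large-residual non-points."""
        pts = np.asarray(points_2d, dtype=float)
        for _ in range(n_iter):
            a = fit_conic(pts[:, 0], pts[:, 1])
            x, y = pts[:, 0], pts[:, 1]
            r = np.abs(a[0]*x**2 + a[1]*x*y + a[2]*y**2 + a[3]*x + a[4]*y + a[5])
            keep = r < threshold          # threshold is scale-dependent (assumption)
            if keep.all():
                break
            pts = pts[keep]
        return pts
    ```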

  2. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators formed by the combination of three different orders of approximation of an analytical theory and a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three considered analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
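
    A minimal sketch of the hybrid idea, assuming `reference` holds precise positions over a training span and `analytical` the cheap low-order propagation over a longer span (both hypothetical arrays sampled at a fixed step): model the error series with an additive Holt-Winters method (statsmodels) and add its forecast to the analytical prediction.

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    def hybrid_forecast(reference, analytical, horizon, period):
        """Hybrid propagation = analytical theory + forecast of its error.

        reference: precise positions over the training span;
        analytical: low-order propagation over training span + horizon;
        period: samples per orbital revolution, for the seasonal component.
        """
        n = len(reference)
        eps = reference - analytical[:n]          # dynamics the theory misses
        model = ExponentialSmoothing(eps, trend="add", seasonal="add",
                                     seasonal_periods=period).fit()
        correction = model.forecast(horizon)      # additive Holt-Winters forecast
        return analytical[n:n + horizon] + correction
    ```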

  3. Integrated method for the measurement of trace atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-09-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  4. Integrated method for the measurement of trace nitrogenous atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-12-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  5. A Robust PCT Method Based on the Complex Least Squares Adjustment Method

    NASA Astrophysics Data System (ADS)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving the vegetation vertical structure. However, errors caused by temporal decorrelation and by the vegetation height and ground phase estimates always propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, more observations can be used to obtain more robust estimations of temporal decorrelation and vegetation height, which are then introduced into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method greatly improves the estimation of the vegetation vertical structure.

  6. A new smartphone-based method for wound area measurement.

    PubMed

    Foltynski, Piotr; Ladyzynski, Piotr; Wojcicki, Jan M

    2014-04-01

    Proper wound healing can be assessed by monitoring the wound surface area. Its reduction by 10 or 50% should be achieved after 1 or 4 weeks, respectively, from the start of the applied therapy. There are various methods of wound area measurement, which differ in terms of the cost of the devices and their accuracy. This article presents an originally developed method for wound area measurement. It is based on the automatic recognition of the wound contour with a software application running on a smartphone. The wound boundaries have to be traced manually on transparent foil placed over the wound. After taking a picture of the wound outline over a grid of 1 × 1 cm, the AreaMe software calculates the wound area, sends the data to a clinical database using an Internet connection, and creates a graph of the wound area change over time. The accuracy and precision of the new method was assessed and compared with the accuracy and precision of commercial devices: Visitrak and SilhouetteMobile. The comparison was performed using 108 wound shapes that were measured five times with each device, using an optical scanner as a reference device. The accuracy of the new method was evaluated by calculating relative errors and comparing them with relative errors for the Visitrak and the SilhouetteMobile devices. The precision of the new method was determined by calculating the coefficients of variation and comparing them with the coefficients of variation for the Visitrak and the SilhouetteMobile devices. A statistical analysis revealed that the new method was more accurate and more precise than the Visitrak device but less accurate and less precise than the SilhouetteMobile device. Thus, the AreaMe application is a superior alternative to the Visitrak device because it provides not only a more accurate measurement of the wound area but also stores the data for future use by the physician. PMID:24102380
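
    The area computation itself reduces to scaling a traced outline by the 1 × 1 cm calibration grid; a minimal sketch using the shoelace formula (names are illustrative, and the AreaMe contour-recognition step is not reproduced):

    ```python
    import numpy as np

    def wound_area_cm2(contour_px, px_per_cm):
        """Area (cm^2) of a traced wound outline via the shoelace formula.

        contour_px: (N, 2) array of outline vertices in pixels; px_per_cm is
        the scale recovered from the 1 cm x 1 cm grid in the photograph.
        """
        x, y = contour_px[:, 0], contour_px[:, 1]
        area_px = 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
        return area_px / px_per_cm**2

    def relative_error_percent(measured, reference):
        """Relative error used to compare devices against the reference scanner."""
        return 100.0 * (measured - reference) / reference
    ```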

  7. Density functional theory based generalized effective fragment potential method

    SciTech Connect

    Nguyen, Kiet A. E-mail: ruth.pachter@wpafb.af.mil; Pachter, Ruth E-mail: ruth.pachter@wpafb.af.mil; Day, Paul N.

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  8. Density functional theory based generalized effective fragment potential method.

    PubMed

    Nguyen, Kiet A; Pachter, Ruth; Day, Paul N

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAMB3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAMB3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes. PMID:24985612

  9. Star-Based Methods for Pleiades HR Commissioning

    NASA Astrophysics Data System (ADS)

    Fourest, S.; Kubik, P.; Lebègue, L.; Déchoz, C.; Lacherade, S.; Blanchet, G.

    2012-07-01

    PLEIADES is the highest-resolution civilian Earth observing system ever developed in Europe. This imagery program is conducted by the French National Space Agency, CNES. The first satellite, PLEIADES-HR, launched on December 17th, 2011, has been operating since 2012, and a second one should be launched by the end of the year. Each satellite is designed to provide optical 70 cm resolution colored images to civilian and defense users. Thanks to the extreme agility of the satellite, new calibration methods have been tested, based on the observation of celestial bodies, and stars in particular. It has thus been made possible to perform MTF measurement, re-focusing, geometrical bias and focal plane assessment, absolute calibration, ghost image localization, micro-vibration measurement, etc. Starting from an overview of the star acquisition process, this paper discusses the methods and presents the results obtained during the first four months of the commissioning phase.

  10. Vision-based method for tracking meat cuts in slaughterhouses.

    PubMed

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Jørgensen, Mikkel Engbo; Larsen, Rasmus; Dahl, Anders Lindbjerg

    2014-01-01

    Meat traceability is important for linking process and quality parameters from the individual meat cuts back to the production data from the farmer that produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse environment. In this article, we demonstrate a computer vision system for recognizing meat cuts at different points along a slaughterhouse production line. More specifically, we show that 211 pig loins can be identified correctly between two photo sessions. The pig loins undergo various perturbation scenarios (hanging, rough treatment and incorrect trimming) and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available. PMID:23962525

  11. Phase retrieval-based distribution detecting method for transparent objects

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Tao, Shaohua; Xiao, Si

    2015-11-01

    A distribution detecting method to recover the distribution of transparent objects from their diffraction intensities is proposed. First, on the basis of the Gerchberg-Saxton algorithm, a wavefront function involving the phase change of the object is retrieved from the incident light intensity and the diffraction intensity. Then the phase change of the object is calculated from the retrieved wavefront function by using a gradient field-based phase estimation algorithm, which circumvents the common phase wrapping problem. Finally, a linear model between the distribution of the object and the phase change is set up, and the distribution of the object can be calculated from the obtained phase change. The effectiveness of the proposed method is verified with simulations and experiments.
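
    The first step builds on the Gerchberg-Saxton loop, which alternately enforces the measured amplitudes in the object and diffraction planes. Below is a minimal NumPy sketch under the standard far-field (Fourier) assumption; array names and the iteration count are illustrative.

    ```python
    import numpy as np

    def gerchberg_saxton(incident_amp, diffract_amp, n_iter=200):
        """Recover the object-plane phase from two intensity measurements.

        incident_amp: amplitude (sqrt of intensity) in the object plane;
        diffract_amp: amplitude measured in the diffraction (Fourier) plane.
        """
        phase = np.zeros_like(incident_amp)
        for _ in range(n_iter):
            field = incident_amp * np.exp(1j * phase)         # impose object amplitude
            far = np.fft.fft2(field)
            far = diffract_amp * np.exp(1j * np.angle(far))   # impose far-field amplitude
            field = np.fft.ifft2(far)
            phase = np.angle(field)                           # keep the retrieved phase
        return phase
    ```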

  12. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  13. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of a timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented by 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which serves as the controller of this generator. Some test results are presented at the end.

  14. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-01-01

    The gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and the results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019
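
    For the screening step, the modified Hausdorff distance of Dubuisson and Jain replaces the max-of-min of the classical Hausdorff distance with a mean-of-min, which is less sensitive to outliers. A minimal sketch, with point sets given as (N, d) arrays:

    ```python
    import numpy as np
    from scipy.spatial.distance import cdist

    def modified_hausdorff(A, B):
        """Modified Hausdorff distance (Dubuisson & Jain) between point sets."""
        D = cdist(A, B)                       # pairwise Euclidean distances
        return max(D.min(axis=1).mean(),      # mean nearest-neighbour distance A -> B
                   D.min(axis=0).mean())      # and B -> A
    ```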

  15. Network-Based Inference Methods for Drug Repositioning

    PubMed Central

    Zhang, Heng; Cao, Yiqin; Tang, Wenliang

    2015-01-01

    Mining potential drug-disease associations can speed up drug repositioning for pharmaceutical companies. Previous computational strategies focused on prior biological information for association inference. However, such information may not be comprehensively available and may contain errors. Different from previous research, two inference methods, ProbS and HeatS, were introduced in this paper to predict direct drug-disease associations based only on basic network topology measures. The bipartite network topology was used to prioritize the potentially indicated diseases for a drug. Experimental results showed that both methods achieve reliable prediction performance, with AUC values of 0.9192 and 0.9079, respectively. Case studies on real drugs indicated that some of the strongly predicted associations were confirmed by results in the Comparative Toxicogenomics Database (CTD). Finally, a comprehensive prediction of drug-disease associations enables us to suggest many new drug indications for further studies. PMID:25969690
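
    ProbS is a two-step mass diffusion on the bipartite graph: each disease linked to the query drug first spreads a unit of resource equally among its drugs, and each drug then spreads what it received equally back to its diseases. A minimal sketch, assuming a binary association matrix:

    ```python
    import numpy as np

    def probs_scores(A, drug):
        """ProbS (two-step mass diffusion) scores on a drug-disease bipartite network.

        A: binary matrix with A[i, j] = 1 if drug i is associated with disease j.
        Returns one score per disease for the given drug index.
        """
        k_drug = A.sum(axis=1)                       # drug degrees
        k_dis = A.sum(axis=0)                        # disease degrees
        f = A[drug].astype(float)                    # unit resource on the drug's diseases
        g = A @ (f / np.maximum(k_dis, 1))           # diseases spread resource to drugs
        return A.T @ (g / np.maximum(k_drug, 1))     # drugs spread back to diseases
    ```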

  16. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as the image-capturing devices in order to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms such as gray-scale weighted averaging, maximum selection and minimum selection methods are analyzed and compared. VHDL and a synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that good image quality can be obtained from heterogeneous image fusion on the proposed system. The applicable range of the different fusion algorithms is also discussed.
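
    The three pixel-level fusion rules compared above are straightforward; a NumPy sketch for co-registered, equally sized grayscale frames (a software simplification of the RTL implementation):

    ```python
    import numpy as np

    def fuse(visible, infrared, mode="weighted", alpha=0.5):
        """Pixel-level fusion of co-registered grayscale images (floats in [0, 1])."""
        if mode == "weighted":             # gray-scale weighted averaging
            return alpha * visible + (1 - alpha) * infrared
        if mode == "max":                  # maximum selection
            return np.maximum(visible, infrared)
        if mode == "min":                  # minimum selection
            return np.minimum(visible, infrared)
        raise ValueError(f"unknown fusion mode: {mode}")
    ```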

  17. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGES Beta

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  18. Neural cell image segmentation method based on support vector machine

    NASA Astrophysics Data System (ADS)

    Niu, Shiwei; Ren, Kan

    2015-10-01

    In the analysis of neural cell images acquired by optical microscope, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the Support Vector Machine (SVM) is proposed to reduce the adverse impact caused by the low contrast ratio between objects and background, interference from adherent and clustered cells, etc. Firstly, morphological filtering and the OTSU method are applied to preprocess the images and roughly extract the neural cells. Secondly, Stellate Vector, circularity and Histogram of Oriented Gradient (HOG) features are computed to train the SVM model. Finally, the incremental learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas identified by the SVM classifier are added to the library as positive samples for training the SVM model. Experimental results show that the proposed algorithm achieves much better segmentation results than classic segmentation algorithms.
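
    A minimal sketch of the training step, substituting scikit-learn and scikit-image for the paper's implementation: HOG descriptors (one of the three features named above; the Stellate Vector and circularity features are omitted) are computed per fixed-size patch, here assumed to be 32×32, and fed to an SVC. All names are illustrative.

    ```python
    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import SVC

    def patch_features(patch):
        """HOG descriptor for one 32x32 grayscale patch (patch size assumed)."""
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2))

    def train_cell_classifier(patches, labels):
        """SVM separating cell patches (label 1) from background patches (label 0)."""
        X = np.array([patch_features(p) for p in patches])
        return SVC(kernel="rbf", C=1.0).fit(X, labels)
    ```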

  19. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  20. A novel non-uniformity correction method based on ROIC

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoming; Li, Yujue; Di, Chao; Wang, Xinxing; Cao, Yi

    2011-11-01

    Infrared focal plane arrays (IRFPA) suffer from inherent low-frequency and fixed pattern noise (FPN). They are thus limited by their inability to calibrate out individual detector variations, including detector dark current (offset) and responsivity (gain). To achieve high quality infrared images by mitigating the FPN of IRFPAs, we have developed a novel non-uniformity correction (NUC) method based on the read-out integrated circuit (ROIC). The offset and gain correction coefficients can be calculated by fitting the linear relationship between the detector's output and a reference voltage in the ROIC. We tested the proposed method on an infrared imaging system using the ULIS 03 19 1 detector with real non-uniformity. A set of 384*288 12-bit infrared images was collected to evaluate the performance. In the experiments, the non-uniformity was greatly reduced. We also used the universal non-uniformity (NU) parameter to estimate the performance. The NU parameters calculated for two-point calibration (TPC) and for the proposed method imply that the proposed method performs almost as well as TPC.
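
    A minimal sketch of the coefficient estimation, assuming a stack of frames captured at several known reference-voltage levels; the names and the mean-normalization convention are hypothetical.

    ```python
    import numpy as np

    def calibrate(frames, v_ref):
        """Per-pixel gain/offset from frames captured at several reference voltages.

        frames: array of shape (n_levels, H, W); v_ref: array of shape (n_levels,).
        Fits output = gain * v_ref + offset independently for every pixel.
        """
        n, h, w = frames.shape
        A = np.column_stack([v_ref, np.ones(n)])     # design matrix of the line fit
        coef, *_ = np.linalg.lstsq(A, frames.reshape(n, -1), rcond=None)
        return coef[0].reshape(h, w), coef[1].reshape(h, w)

    def correct(raw, gain, offset):
        """Apply the NUC, normalizing to the mean response (gain assumed nonzero)."""
        return (raw - offset) * (gain.mean() / gain)
    ```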

  1. A novel virtual viewpoint merging method based on machine learning

    NASA Astrophysics Data System (ADS)

    Zheng, Di; Peng, Zongju; Wang, Hui; Jiang, Gangyi; Chen, Fen

    2014-11-01

    In multi-view video systems, multiple video plus depth is the main data format for 3D scene representation. Continuous virtual views can be generated by using the depth image based rendering (DIBR) technique. The DIBR process includes geometric mapping, hole filling and merging. Merging weights that are inversely proportional to the distance between the virtual and real cameras are commonly used, but these weights might not be optimal in terms of virtual view quality. In this paper, a novel virtual view merging algorithm is proposed in which a machine learning method is utilized to establish an optimal weight model. The model takes color, depth, color gradient and sequence parameters into consideration. Firstly, we render the same virtual view from the left and right views, and select the training samples by using a threshold. Then, the feature values of the samples are extracted and the optimal merging weights are calculated as training labels. Finally, a support vector classifier (SVC) is adopted to establish the model, which is used to guide virtual view rendering. Experimental results show that the proposed method can improve the quality of virtual views for most sequences. In particular, it is effective in the case of a large distance between the virtual and real cameras. Compared to the original virtual view synthesis method, the proposed method obtains more than 0.1 dB gain for some sequences.

  2. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has an advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures compared to the conventional FFT-based methods.

  3. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
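
    A minimal sketch of the amplitude-statistics half of the feature extraction (the time-frequency analysis is omitted), assuming the signal arrives as a 1-D array; the feature choice and the control-comparison score are illustrative.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    def amplitude_features(signal):
        """Amplitude-statistics feature vector for one time-dependent biosensor trace."""
        return np.array([
            signal.mean(),        # central tendency
            signal.std(),         # spread
            skew(signal),         # asymmetry of the amplitude distribution
            kurtosis(signal),     # tailedness
            np.ptp(signal),       # peak-to-peak range
        ])

    def toxicity_score(sample, control):
        """Deviation of a monitored trace from the control trace in feature space."""
        return float(np.linalg.norm(amplitude_features(sample)
                                    - amplitude_features(control)))
    ```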

  4. Method of plasma etching GA-based compound semiconductors

    SciTech Connect

    Qiu, Weibin; Goddard, Lynford L.

    2013-01-01

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent thereto. The chamber contains a Ga-based compound semiconductor sample in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. SiCl.sub.4 and Ar gases are flowed into the chamber. RF power is supplied to the platen at a first power level, and RF power is supplied to the source electrode. A plasma is generated. Then, RF power is supplied to the platen at a second power level lower than the first power level and no greater than about 30 W. Regions of a surface of the sample adjacent to one or more masked portions of the surface are etched at a rate of no more than about 25 nm/min to create a substantially smooth etched surface.

  5. 'Fertility Awareness-Based Methods' and subfertility: a systematic review.

    PubMed

    Thijssen, A; Meier, A; Panis, K; Ombelet, W

    2014-01-01

    Fertility awareness-based methods (FABMs) can be used to improve the likelihood of conceiving. A literature search was performed to evaluate the relationship between cervical mucus monitoring (CMM) and the day-specific pregnancy rate in cases of subfertility. A MEDLINE search revealed a total of 3331 articles; after excluding articles based on their relevance, 10 studies were selected. The observed studies demonstrated that CMM can identify the days with the highest pregnancy rate. According to the literature, the quality of the vaginal discharge correlates well with the cycle-specific probability of pregnancy in normally fertile couples, but less so in subfertile couples. The results indicate an urgent need for more prospective randomised trials and prospective cohort studies on CMM in a subfertile population to evaluate the effectiveness of CMM for the subfertile couple. PMID:25374654

  6. Transistor-based particle detection systems and methods

    DOEpatents

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  7. Method for fabricating beryllium-based multilayer structures

    DOEpatents

    Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.

    2003-02-18

    Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å, or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but also the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).

  8. Hybrid Modeling Method for a DEP Based Particle Manipulation

    PubMed Central

    Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad

    2013-01-01

    In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between multiphysics simulation and biological behavior. This technique is among the first steps toward developing a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation, while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results. PMID:23364197

  9. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Smith, Timothy A. (Inventor); Urnes, James M., Sr. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
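
    A minimal sketch of the detection logic described above, with hypothetical thresholds: the modeling error is normalized by its nominal standard deviation (significance), and an anomaly is declared only when significance persists over consecutive samples.

    ```python
    import numpy as np

    def detect_anomaly(measured, expected, sigma, z_thresh=3.0, persist_thresh=10):
        """Flag a structural anomaly when the modeling error stays significant.

        measured/expected: time series for one sensor location; sigma: nominal
        standard deviation of the modeling error under healthy conditions.
        """
        z = np.abs(measured - expected) / sigma        # error significance (z-score)
        persistence = 0
        for zi in z:
            persistence = persistence + 1 if zi > z_thresh else 0
            if persistence >= persist_thresh:          # sustained significance
                return True
        return False
    ```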

  10. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of the positron emission from radiolabelled metabolically active molecules. A great number of radiopharmaceuticals labelled with 11C, 13N, 15O, or 18F positron emitters have been applied both for research and clinical purposes in neurology, cardiology and oncology. The recent success of PET with rapidly increasing installations is mainly based on the use of [ 18F]fluorodeoxyglucose (FDG) in oncology where it is most useful to localize primary tumours and their metastases.

  11. Supersampling method for efficient grid-based electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression allowing for large grid spacing is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the use of the sinc filtering function performs best because, as an ideal low pass filter, it clearly cuts out the high frequency region beyond that allowed by a given grid spacing.
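
    A one-dimensional sketch of the idea, assuming the target function f can be evaluated at arbitrary (supersampled) points: sample f on a fine sub-grid around a coarse grid point and apply a truncated, normalized sinc kernel whose cutoff matches the coarse spacing h. The finite window and the normalization are simplifications of this sketch.

    ```python
    import numpy as np

    def supersampled_value(f, x0, h, n_ss=5, half_window=4):
        """Band-limit f at grid point x0 by sinc filtering over a fine sub-grid.

        h: coarse grid spacing (filter cutoff pi/h); n_ss: supersampling factor;
        f is assumed to accept a NumPy array of sample positions.
        """
        hs = h / n_ss                                  # supersampling spacing
        k = np.arange(-half_window * n_ss, half_window * n_ss + 1)
        xs = x0 + k * hs                               # fine samples centred on x0
        w = np.sinc(k / n_ss)                          # ideal low-pass kernel
        w /= w.sum()                                   # normalize truncated kernel
        return float(np.sum(w * f(xs)))
    ```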

  12. Simultaneous least squares fitter based on the Lagrange multiplier method

    NASA Astrophysics Data System (ADS)

    Guan, Ying-Hui; Lü, Xiao-Rui; Zheng, Yang-Heng; Zhu, Yong-Sheng

    2013-10-01

    We developed a least squares fitter used for extracting expected physics parameters from the correlated experimental data in high energy physics. This fitter considers the correlations among the observables and handles the nonlinearity using linearization during the χ2 minimization. This method can naturally be extended to the analysis with external inputs. By incorporating with Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the D0-D¯0 mixing parameters as the test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and the approach is credible.
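
    The core computation can be illustrated by one linearized iteration: minimize the weighted χ² about the measurements subject to linear(ized) constraints by solving the KKT system induced by the Lagrange multipliers. A minimal sketch with hypothetical names:

    ```python
    import numpy as np

    def constrained_lsq(m, W, A, b):
        """Minimize (x - m)^T W (x - m) subject to A x = b via the KKT system.

        m: measured observables; W: weight (inverse covariance) matrix;
        A, b: linear(ized) constraints. Returns the adjusted estimate x.
        """
        n, c = len(m), len(b)
        K = np.block([[2 * W, A.T],
                      [A, np.zeros((c, c))]])          # KKT matrix
        rhs = np.concatenate([2 * W @ m, b])
        sol = np.linalg.solve(K, rhs)                  # [x; lambda]
        return sol[:n]
    ```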

  13. Supersampling method for efficient grid-based electronic structure calculations.

    PubMed

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression allowing for large grid spacing is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the use of the sinc filtering function performs best because, as an ideal low pass filter, it clearly cuts out the high frequency region beyond that allowed by a given grid spacing. PMID:26957151

  14. [Other physical methods in psychiatric treatment based on electromagnetic stimulation].

    PubMed

    Zyss, Tomasz; Rachel, Wojciech; Datka, Wojciech; Hese, Robert T; Gorczyca, Piotr; Zięba, Andrzej; Piekoszewski, Wojciech

    2016-01-01

    In recent decades, several new physical methods based on electromagnetic head stimulation have been subjected to clinical research. Among them are: vagus nerve stimulation (VNS), magnetic seizure therapy/magnetoconvulsive therapy (MST/MCT), deep brain stimulation (DBS), and transcranial direct current stimulation (tDCS). The paper presents a description of the mentioned techniques (their nature, advantages, defects and restrictions), which are compared with electroconvulsive treatment (ECT), the earlier described transcranial magnetic stimulation (TMS), and pharmacotherapy (the basis of psychiatric treatment). PMID:27197431

  15. Rapid Mapping Method Based on Free Blocks of Surveys

    NASA Astrophysics Data System (ADS)

    Yu, Xianwen; Wang, Huiqing; Wang, Jinling

    2016-06-01

    When producing large-scale maps (larger than 1:2000) in cities or towns, the obstruction from buildings makes measuring mapping control points a difficult and heavy task. In order to avoid measuring mapping control points and shorten the time of fieldwork, a quick mapping method is proposed in this paper. This method adjusts many free blocks of surveys together and transforms the points from all free blocks of surveys into the same coordinate system. The entire surveying area is divided into many free blocks, and connection points are set on the boundaries between free blocks. An independent coordinate system for every free block is established via completely free station technology, and the coordinates of the connection points, detail points and control points of every free block in the corresponding independent coordinate systems are obtained based on poly-directional open traverses. Error equations are established based on the connection points, which are adjusted together to obtain the transformation parameters. All points are transformed from the independent coordinate systems to a transitional coordinate system via the transformation parameters. Several control points are then measured by GPS in a geodetic coordinate system, so that all the points can be transformed from the transitional coordinate system to the geodetic coordinate system. In this paper, the implementation process and mathematical formulas of the new method are presented in detail, and the formula to estimate the precision of the surveys is given. An example demonstrates that the precision of the new method can meet large-scale mapping needs.
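
    The block-to-block transformation can be illustrated with a four-parameter (similarity/Helmert) transform estimated from shared connection points; a minimal least-squares sketch in which the names are hypothetical and the paper's joint adjustment of all blocks is reduced to a single block pair:

    ```python
    import numpy as np

    def fit_helmert(src, dst):
        """Four-parameter 2D similarity transform (scale/rotation + two shifts).

        Solves dst = [[a, -b], [b, a]] @ src + [tx, ty] by linear least squares
        from connection points (src, dst: (n, 2) arrays in the two systems).
        """
        n = len(src)
        A = np.zeros((2 * n, 4))
        A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
        A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
        p, *_ = np.linalg.lstsq(A, dst.reshape(-1), rcond=None)
        return p                                     # (a, b, tx, ty)

    def apply_helmert(p, pts):
        """Transform points from the free-block system into the target system."""
        a, b, tx, ty = p
        x, y = pts[:, 0], pts[:, 1]
        return np.column_stack([a * x - b * y + tx, b * x + a * y + ty])
    ```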

  16. Human Temporal Bone Removal: The Skull Base Block Method.

    PubMed

    Dinh, Christine; Szczupak, Mikhaylo; Moon, Seo; Angeli, Simon; Eshraghi, Adrien; Telischi, Fred F

    2015-08-01

    Objectives To describe a technique for harvesting larger temporal bone specimens from human cadavers for the training of otolaryngology residents and fellows on the various approaches to the lateral and posterolateral skull base. Design Human cadaveric anatomical study. The calvarium was excised 6 cm above the superior aspect of the ear canal. The brain and cerebellum were carefully removed, and the cranial nerves were cut sharply. Two bony cuts were performed, one in the midsagittal plane and the other in the coronal plane at the level of the optic foramen. Setting Medical school anatomy laboratory. Participants Human cadavers. Main Outcome Measures Anatomical contents of specimens and technical effort required. Results Larger temporal bone specimens containing portions of the parietal, occipital, and sphenoidal bones were consistently obtained using this technique of two bone cuts. All specimens were inspected and contained pertinent surface and skull base landmarks. Conclusions The skull base block method allows for larger temporal bone specimens using a two bone cut technique that is efficient and reproducible. These specimens have the necessary anatomical bony landmarks for studying the complexity, utility, and limitations of lateral and posterolateral approaches to the skull base, important for the education of otolaryngology residents and fellows. PMID:26225316

  17. Iterative support detection-based split Bregman method for wavelet frame-based image inpainting.

    PubMed

    He, Liangtian; Wang, Yilun

    2014-12-01

    The wavelet frame systems have been extensively studied due to their capability of sparsely approximating piecewise smooth functions, such as images, and the corresponding wavelet frame-based image restoration models are mostly based on penalizing the l1 norm of the wavelet frame coefficients for sparsity enforcement. In this paper, we focus on the image inpainting problem based on the wavelet frame, propose a weighted sparse restoration model, and develop a corresponding efficient algorithm. The new algorithm combines the idea of the iterative support detection method, first proposed by Wang and Yin for sparse signal reconstruction, with the split Bregman method for the wavelet frame l1 image inpainting model and, more importantly, naturally makes use of the specific multilevel structure of the wavelet frame coefficients to enhance the recovery quality. This new algorithm can be considered as the incorporation of prior structural information of the wavelet frame coefficients into the traditional l1 model. Our numerical experiments show that the proposed method is superior to the original split Bregman method for the wavelet frame-based l1 norm image inpainting model, as well as to some typical l(p) (0 ≤ p < 1) norm-based nonconvex algorithms such as the mean doubly augmented Lagrangian method, in terms of better preservation of sharp edges, since those methods fail to make use of the structure of the wavelet frame coefficients. PMID:25312924

  18. An acoustic intensity-based method and its aeroacoustic applications

    NASA Astrophysics Data System (ADS)

    Yu, Chao

    Aircraft noise prediction and control is one of the most urgent and challenging tasks worldwide. A hybrid approach is usually considered for predicting aerodynamic noise. The approach separates the field into an aerodynamic source region and an acoustic propagation region. Conventional CFD solvers are typically used to evaluate the flow field in the source region. Once the sound source is predicted, the linearized Euler equations (LEE) can be used to extend the near-field CFD solution to the mid-field acoustic radiation. However, the far-field extension is very time consuming and is often prohibited by excessive computer memory requirements. The FW-H method, instead, predicts the far-field radiation using the flow-field quantities on a closed control surface (which encloses the entire aerodynamic source region), assuming the wave equation holds outside. The surface integration, however, has to be carried out for each far-field location, which remains computationally intensive for a practical 3D problem even though the CPU time required is much lower than that of the LEE methods. For an accurate far-field prediction, the other difficulty of the FW-H method is that a complete control surface may be infeasible for most practical applications. Motivated by the need for accurate and efficient far-field prediction techniques, an Acoustic Intensity-Based Method (AIBM) has been developed based on an acoustic input from an OPEN control surface. The AIBM assumes that the sound propagation is governed by the modified Helmholtz equation on and outside a control surface that encloses all the nonlinear effects and noise sources. The prediction of the acoustic radiation field is carried out by the inverse method with an input of the acoustic pressure derivative and its simultaneous, co-located acoustic pressure. The reconstructed acoustic radiation field using the AIBM is unique due to the unique continuation theory.

  19. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to those being diagnosed. Optical colonoscopy is a method of direct observation of the colon and rectum to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of the colonic mucosa within inflammations with UC. In order to solve this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images in a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist the UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
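
    A minimal sketch of the color half of the feature pipeline (the HLAC texture features and the mucosa enhancement step are omitted): a normalized joint RGB histogram per image, with retrieval ranked by histogram intersection. The bin count and similarity choice are illustrative.

    ```python
    import numpy as np

    def color_histogram(image, bins=8):
        """Normalized joint RGB histogram feature for one 8-bit color frame."""
        h, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                              range=((0, 256),) * 3)
        return (h / h.sum()).ravel()

    def retrieve(query_feat, db_feats, top_k=5):
        """Rank database images by histogram intersection similarity to the query."""
        sims = np.minimum(db_feats, query_feat).sum(axis=1)
        return np.argsort(sims)[::-1][:top_k]
    ```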

  20. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely, Shapiro's EZW (Embedded Zerotree Wavelets) [1], Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees) [2], and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding) [3]. The EZW algorithm is based on five key concepts: (1) a DWT (Discrete Wavelet Transform) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high-scale subbands to low-scale subbands. In this paper, we improve on the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.

  1. Scanning-fiber-based imaging method for tissue engineering

    NASA Astrophysics Data System (ADS)

    Hofmann, Matthias C.; Whited, Bryce M.; Mitchell, Josh; Vogt, William C.; Criswell, Tracy; Rylander, Christopher; Rylander, Marissa Nichole; Soker, Shay; Wang, Ge; Xu, Yong

    2012-06-01

    A scanning-fiber-based method developed for imaging bioengineered tissue constructs such as synthetic carotid arteries is reported. Our approach is based on directly embedding one or more hollow-core silica fibers within the tissue scaffold to function as micro-imaging channels (MIC). The imaging process is carried out by translating and rotating an angle-polished fiber micro-mirror within the MIC to scan excitation light across the tissue scaffold. The locally emitted fluorescent signals are captured using an electron multiplying CCD camera and then mapped into fluorophore distributions according to fiber micro-mirror positions. Using an optical phantom composed of fluorescent microspheres, tissue scaffolds, and porcine skin, we demonstrated single-cell-level imaging resolution (20 to 30 μm) at an imaging depth that exceeds the photon transport mean free path by one order of magnitude. This result suggests that the imaging depth is no longer constrained by photon scattering, but rather by the requirement that the fluorophore signal overcomes the background ``noise'' generated by processes such as scaffold autofluorescence. Finally, we demonstrated the compatibility of our imaging method with tissue engineering by visualizing endothelial cells labeled with green fluorescent protein through a ~500 μm thick and highly scattering electrospun scaffold.

  2. Scanning-fiber-based imaging method for tissue engineering

    PubMed Central

    Hofmann, Matthias C.; Whited, Bryce M.; Mitchell, Josh; Vogt, William C.; Criswell, Tracy; Rylander, Christopher; Rylander, Marissa Nichole; Soker, Shay; Wang, Ge

    2012-01-01

    A scanning-fiber-based method developed for imaging bioengineered tissue constructs such as synthetic carotid arteries is reported. Our approach is based on directly embedding one or more hollow-core silica fibers within the tissue scaffold to function as micro-imaging channels (MIC). The imaging process is carried out by translating and rotating an angle-polished fiber micro-mirror within the MIC to scan excitation light across the tissue scaffold. The locally emitted fluorescent signals are captured using an electron multiplying CCD camera and then mapped into fluorophore distributions according to fiber micro-mirror positions. Using an optical phantom composed of fluorescent microspheres, tissue scaffolds, and porcine skin, we demonstrated single-cell-level imaging resolution (20 to 30 μm) at an imaging depth that exceeds the photon transport mean free path by one order of magnitude. This result suggests that the imaging depth is no longer constrained by photon scattering, but rather by the requirement that the fluorophore signal overcomes the background “noise” generated by processes such as scaffold autofluorescence. Finally, we demonstrated the compatibility of our imaging method with tissue engineering by visualizing endothelial cells labeled with green fluorescent protein through a ∼500 μm thick and highly scattering electrospun scaffold. PMID:22734766

  3. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
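
    Since the abstract leans on First-Order Second Moment propagation, a minimal numpy sketch may help: the output covariance is the sensitivity (Jacobian) sandwiched around the input covariance. The sensitivity values and covariances below are hypothetical placeholders, not values from the study:

    ```python
    import numpy as np

    def fosm_output_covariance(jacobian, input_cov):
        """First-Order Second Moment propagation: C_out = J @ C_in @ J.T,
        where J holds model sensitivities (d head / d parameter)."""
        J = np.atleast_2d(jacobian)
        return J @ input_cov @ J.T

    # Toy example: two hydraulic-conductivity zones, one head observation.
    sensitivity = np.array([0.8, -0.3])          # hypothetical dh/dK values
    input_cov = np.array([[0.04, 0.01],
                          [0.01, 0.09]])         # hypothetical K covariance
    head_var = fosm_output_covariance(sensitivity, input_cov)
    print(head_var)   # 1x1 matrix: variance of the simulated head
    ```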

  4. Quality control and analytical methods for baculovirus-based products.

    PubMed

    Roldão, António; Vicente, Tiago; Peixoto, Cristina; Carrondo, Manuel J T; Alves, Paula M

    2011-07-01

    Recombinant baculoviruses (rBac) are used for many different applications, ranging from bio-insecticides to the production of heterologous proteins, high-throughput screening of gene functions, drug delivery, in vitro assembly studies, design of antiviral drugs, bio-weapons, building blocks for electronics, biosensors and chemistry, and recently as a delivery system in gene therapy. Independent of the application, the quality, quantity and purity of rBac-based products are pre-requisites demanded by regulatory authorities for product licensing. To guarantee maximum utility, it is necessary to delineate optimized production schemes, either through trial-and-error experimental setups (the "brute force" approach) or through rational design of experiments aided by in silico mathematical models (the Systems Biology approach). For that, one must define all of the main steps in the overall process, identify the main bioengineering issues affecting each individual step and implement, if required, accurate analytical methods for product characterization. In this review, current challenges for quality control (QC) technologies for up- and downstream processing of rBac-based products are addressed. In addition, a collection of QC methods for monitoring/control of the production of rBac-derived products is presented, as well as innovative technologies for faster process optimization and more detailed product characterization. PMID:21784235

  5. Application of rule based methods to predicting storm surge

    NASA Astrophysics Data System (ADS)

    Royston, S. J.; Horsburgh, K. J.; Lawry, J.

    2012-04-01

    The accurate forecast of storm surge, the long-wavelength sea level response to meteorological forcing, is imperative for flood warning purposes. There remain regions of the world where operational forecast systems have not been developed, and in these locations it is worthwhile considering numerically simpler, data-driven techniques to provide operational services. In this paper, we investigate the applicability of a class of data-driven methods referred to as rule based models to the problem of forecasting storm surge. The accuracy of the rule based model is found to be comparable to several alternative data-driven techniques, all of which give marginally worse but acceptable forecasts compared with the UK's operational hydrodynamic forecast model, given the reduction in computational effort. Promisingly, the rule based model is skillful in forecasting total water levels above a given flood warning threshold, with a Brier Skill Score of 0.58 against a climatological forecast (the operational storm surge system has a Brier Skill Score of up to 0.75 for the same data set). The structure of the model can be interrogated as IF-THEN rules, and we find that the model structure in this case is consistent with our understanding of the physical system. Furthermore, the rule based approach provides probabilistic forecasts of storm surge, which are much more informative to flood warning managers than alternative approaches. The rule based model therefore provides reasonably skillful forecasts, for a significant reduction in development and run time, and is an appropriate data-driven approach for forecasting storm surge in regions of the world where a fully fledged hydrodynamic forecast system does not exist, provided good observational data and a meteorological forecast are available.
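
    For reference, the Brier Skill Score quoted above compares the model's Brier score to that of a climatological forecast; 1 is perfect and 0 means no skill over climatology. A minimal sketch (the probabilities and outcomes are made-up toy data):

    ```python
    import numpy as np

    def brier_skill_score(forecast_prob, observed, climatology):
        """BSS = 1 - BS_model / BS_climatology for a binary exceedance event
        (e.g. total water level above a flood-warning threshold)."""
        bs_model = np.mean((forecast_prob - observed) ** 2)
        bs_clim = np.mean((climatology - observed) ** 2)
        return 1.0 - bs_model / bs_clim

    # Hypothetical probabilities of exceeding the warning threshold:
    p = np.array([0.9, 0.2, 0.7, 0.1])
    obs = np.array([1, 0, 1, 0])          # threshold exceeded or not
    clim = np.full_like(p, obs.mean())    # climatological base rate
    print(brier_skill_score(p, obs, clim))
    ```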

  6. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. The Depth Image Based Rendering (DIBR) system has been developed to improve Multiview Video Coding (MVC); in this system, depth images are introduced to synthesize virtual views on the decoder side. A depth image is piecewise smooth: it is filled with sharp contours and smooth interiors, and the contours are more important than the interior in the view synthesis process. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour based coding strategy is proposed. First, the depth image is divided into layers by different depth value intervals. Then regions, which are defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into its contour and its interior, and two different procedures are employed to code them respectively. A vector-based strategy is applied to code the contour lines: straight-line segments cost few bits since they are regarded as vectors, and pixels outside the straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block based coding methods, the residue between the original frame and the reconstructed frame (by contour rebuilding and interior painting) is not sent to the decoder. In this proposal, the contour is coded losslessly whereas the interior is coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.
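
    The "interior painting" step lends itself to a short sketch: fit a low-order model to the depth values inside a region mask by least squares and rebuild the interior from the coefficients. The planar (linear) model here is an assumed simplification; the paper also allows nonlinear formulas:

    ```python
    import numpy as np

    def paint_interior(depth, mask):
        """Fit a plane d = a + b*x + c*y to the interior pixels of one region
        (least-squares 'interior painting'); returns the fitted coefficients
        and the reconstructed interior."""
        ys, xs = np.nonzero(mask)                       # interior pixel coordinates
        A = np.column_stack([np.ones_like(xs), xs, ys])
        coeffs, *_ = np.linalg.lstsq(A, depth[ys, xs], rcond=None)
        recon = np.zeros_like(depth, dtype=float)
        recon[ys, xs] = A @ coeffs                      # painted interior
        return coeffs, recon
    ```

    Only the few coefficients need to be transmitted per region, which is where the bitrate saving over block-based residual coding comes from.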

  7. 3D face recognition based on a modified ICP method

    NASA Astrophysics Data System (ADS)

    Zhao, Kankan; Xi, Jiangtao; Yu, Yanguang; Chicharo, Joe F.

    2011-11-01

    3D face recognition has gained much attention recently, and it is widely used in security systems, identification systems, access control systems, etc. The core technique in 3D face recognition is to find the corresponding points in different 3D face images. The classic partial Iterative Closest Point (ICP) method iteratively aligns the two point sets by repeatedly taking the closest points as the corresponding points in each iteration; after several iterations, the corresponding points can be obtained accurately. However, if two 3D face images of the same person have different scales, the classic partial ICP does not work. In this paper we propose a modified partial ICP method in which the scaling effect is considered to achieve 3D face recognition. We design a 3x3 diagonal matrix as the scale matrix in each iteration of the classic partial ICP; multiplying the probe face image by the scale matrix keeps it at a scale similar to that of the reference face image. Therefore, we can accurately determine the corresponding points even when the scales of the probe image and the reference image differ. The 3D face images in our experiments are acquired by a 3D data acquisition system based on Digital Fringe Projection Profilometry (DFPP). Our 3D database consists of 30 groups of images; each group contains three images of the same person, at the same scale, from different views, and the scale may differ between groups. The experimental results show that our proposed method achieves 3D face recognition, especially in the case where the scales of the probe image and the reference image are different.
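
    A single iteration of a scale-aware ICP can be sketched as follows: find closest-point correspondences, then solve for rotation, scale, and translation in closed form. This sketch uses the Umeyama solution with one uniform scale factor for brevity, whereas the paper estimates a 3x3 diagonal scale matrix, so treat it as an illustrative stand-in:

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def scaled_icp_step(probe, reference):
        """One iteration of a scale-aware ICP sketch on (n, 3) point arrays:
        match closest points, then solve for rotation R, uniform scale s and
        translation t in closed form (Umeyama)."""
        # Correspondences: closest reference point for every probe point
        _, idx = cKDTree(reference).query(probe)
        ref = reference[idx]
        mu_p, mu_r = probe.mean(0), ref.mean(0)
        P, Q = probe - mu_p, ref - mu_r
        U, S, Vt = np.linalg.svd(Q.T @ P / len(P))      # cross-covariance SVD
        D = np.eye(3)
        D[2, 2] = np.sign(np.linalg.det(U @ Vt))        # keep a proper rotation
        R = U @ D @ Vt
        s = np.trace(np.diag(S) @ D) * len(P) / (P ** 2).sum()
        t = mu_r - s * R @ mu_p
        return s, R, t    # apply as: probe @ (s * R).T + t, then iterate
    ```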

  8. A vocal-based analytical method for goose behaviour recognition.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable, due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment, using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified based on this approach, and the method achieves good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The Support Vector Machine has proven to be a robust classifier for this kind of classification, as generality and non-linear capabilities are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species, and as such may be used as an integrated part of a wildlife management system. PMID:22737037

  9. A method for MREIT-based source imaging: simulation studies.

    PubMed

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data. PMID:27401235

  10. Simulation Method for Wind Tunnel Based Virtual Flight Testing

    NASA Astrophysics Data System (ADS)

    Li, Hao; Zhao, Zhong-Liang; Fan, Zhao-Lin

    The Wind Tunnel Based Virtual Flight Testing (WTBVFT) could replicate actual free flight and explore the aerodynamics/flight dynamics nonlinear coupling mechanism during maneuvers in the wind tunnel. The basic WTBVFT concept is to mount the test model on a specialized support system which allows free rotational motion of the model, with the aerodynamic loading and motion parameters measured simultaneously during the model motion. Simulations of the 3-DOF pitching motion of a typical missile in the vertical plane are performed with open-loop and closed-loop control methods. The objective is to analyze the effect of the main differences between the WTBVFT and actual free flight, and to study the simulation method for the WTBVFT. Preliminary simulation analyses have been conducted with positive results. These results indicate that WTBVFT using a closed-loop autopilot control method with pitch angular rate feedback is able to replicate actual free-flight behavior within acceptable differences.

  11. A detection method of moving object based on hybrid difference

    NASA Astrophysics Data System (ADS)

    Wang, Chuncai; Yan, Lei; Li, Yingtao

    2014-11-01

    The detection method combines background subtraction with inter-frame differencing. A statistical model of RGB color histograms is used to extract the background, so the initial background image is largely free of noise. The difference image of the moving object is obtained from the results of background subtraction and a three-frame difference: binary Image A is the difference between Frame k-1 and Frame k, and binary Image B is the difference between Frame k and Frame k+1. A logical OR of Image A and Image B yields Image C, which retains more information about the moving object. Finally, a logical AND of the background-subtraction binary image and Image C gives the outline of the moving object. A self-adaptive scheme updates the background image to keep it current: if a pixel of the current frame is judged to belong to the moving target, the corresponding pixel of the current background image is retained in the background; otherwise the corresponding pixel of the current frame updates the background, with an updating factor α controlling the update rate. Mathematical morphology then allows the moving object to be detected more accurately. This hybrid method mitigates the shortcomings of background subtraction and inter-frame differencing used alone.
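
    A compact OpenCV rendering of this pipeline is sketched below for grayscale frames; the threshold value, kernel size, and the exact blending form of the α update are assumptions for illustration:

    ```python
    import cv2
    import numpy as np

    def hybrid_difference(prev, curr, nxt, background, thresh=25, alpha=0.05):
        """OR the two inter-frame differences, AND with the background
        difference, then update the background with factor alpha outside
        the detected foreground (all inputs are uint8 grayscale frames)."""
        d1 = cv2.threshold(cv2.absdiff(prev, curr), thresh, 255, cv2.THRESH_BINARY)[1]
        d2 = cv2.threshold(cv2.absdiff(curr, nxt), thresh, 255, cv2.THRESH_BINARY)[1]
        frame_diff = cv2.bitwise_or(d1, d2)                    # Image C
        bg_diff = cv2.threshold(cv2.absdiff(curr, background), thresh, 255,
                                cv2.THRESH_BINARY)[1]
        moving = cv2.bitwise_and(frame_diff, bg_diff)          # moving-object mask
        # Morphological clean-up, as suggested in the abstract
        moving = cv2.morphologyEx(moving, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        # Background update: blend with factor alpha only where no motion was found
        still = moving == 0
        background[still] = ((1 - alpha) * background[still]
                             + alpha * curr[still]).astype(background.dtype)
        return moving, background
    ```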

  12. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-07-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, starting from the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation by offline programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it was possible to assign the amount of robot motion to each axis during the process in a reasonable way, thereby achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  13. A conductivity-based interface tracking method for microfluidic application

    NASA Astrophysics Data System (ADS)

    Salgado, Juan David; Horiuchi, Keisuke; Dutta, Prashanta

    2006-05-01

    A novel conductivity-based interface tracking method is developed for 'lab-on-a-chip' applications to measure the velocity of the liquid-gas boundary during the filling process. This interface tracking system consists of two basic components: a fluidic circuit and an electronic circuit. The fluidic circuit is composed of a microchannel network where a number of very thin electrodes are placed in the flow path to detect the location of the liquid-gas interface in order to quantify the speed of a traveling liquid front. The electronic circuit is placed on a microelectronic chip that works as a logical switch. This interface tracking method is used to evaluate the performance of planar electrokinetic micropumps formed on a hybrid poly-di-methyl-siloxane (PDMS)-glass platform. In this study, the thickness of the planar micropump is set to be 10 µm, while the externally applied electric field ranges from 100 V/mm to 200 V/mm. For a particular geometric and electrokinetic condition, repeatable flow results are obtained from the speed of the liquid-gas interface. Flow results obtained from this interface tracking method are compared to those of other existing flow measuring techniques. The maximum error of this interface tracking sensor is less than 5%, even at ultra-low flow velocities.

  14. Nozzle Mounting Method Optimization Based on Robot Kinematic Analysis

    NASA Astrophysics Data System (ADS)

    Chen, Chaoyue; Liao, Hanlin; Montavon, Ghislain; Deng, Sihao

    2016-08-01

    Nowadays, the application of industrial robots in thermal spray is gaining more and more importance. A desired coating quality depends on factors such as a balanced robot performance, a uniform scanning trajectory and stable parameters (e.g. nozzle speed, scanning step, spray angle, standoff distance). These factors also affect the mass and heat transfer as well as the coating formation. Thus, the kinematic optimization of all these aspects plays a key role in obtaining an optimal coating quality. In this study, the robot performance was optimized from the aspect of nozzle mounting on the robot. An optimized nozzle mounting for a type F4 nozzle was designed, starting from the conventional mounting method, from the point of view of robot kinematics, and validated on a virtual robot. Robot kinematic parameters were obtained from simulation by offline programming software and analyzed by statistical methods. The energy consumption of different nozzle mounting methods was also compared. The results showed that it was possible to assign the amount of robot motion to each axis during the process in a reasonable way, thereby achieving a constant nozzle speed. Thus, it is possible to optimize robot performance and to economize robot energy.

  15. A method for MREIT-based source imaging: simulation studies

    NASA Astrophysics Data System (ADS)

    Song, Yizhuang; Jeong, Woo Chul; Woo, Eung Je; Seo, Jin Keun

    2016-08-01

    This paper aims to provide a method for using magnetic resonance electrical impedance tomography (MREIT) to visualize local conductivity changes associated with evoked neuronal activities in the brain. MREIT is an MRI-based technique for conductivity mapping by probing the magnetic flux density induced by an externally injected current through surface electrodes. Since local conductivity changes resulting from evoked neural activities are very small (less than a few %), a major challenge is to acquire exogenous magnetic flux density data exceeding a certain noise level. Noting that the signal-to-noise ratio is proportional to the square root of the number of averages, it is important to reduce the data acquisition time to get more averages within a given total data collection time. The proposed method uses a sub-sampled k-space data set in the phase-encoding direction to significantly reduce the data acquisition time. Since the sub-sampled data violates the Nyquist criterion, we only get a nonlinearly wrapped version of the exogenous magnetic flux density data, which is insufficient for conductivity imaging. Taking advantage of the sparseness of the conductivity change, the proposed method detects local conductivity changes by estimating the time-change of the Laplacian of the nonlinearly wrapped data.

  16. Cardiac rate detection method based on the beam splitter prism

    NASA Astrophysics Data System (ADS)

    Yang, Lei; Liu, Xiaohua; Liu, Ming; Zhao, Yuejin; Dong, Liquan; Zhao, Ruirui; Jin, Xiaoli; Zhao, Jingsheng

    2013-09-01

    A new cardiac rate measurement method is proposed. Through the beam splitter prism, a common-path optical system for transmitting and receiving signals is achieved. The focusing effect of the lens suppresses small-amplitude motion artifacts and improves the signal-to-noise ratio. The cardiac rate is obtained based on PhotoPlethysmoGraphy (PPG). We use an LED as the light source and a photoelectric diode as the receiving tube; the LED and the photoelectric diode are on different sides of the beam splitter prism and together form the optical system. The signal processing and display unit is composed of the signal processing circuit, a data acquisition device and a computer. The light emitted by the modulated LED is collimated by a lens and irradiates the measurement target through the beam splitter prism. The light reflected by the target is focused on the receiving tube through the beam splitter prism and another lens. The signal received by the photoelectric diode is processed by the analog circuit and captured by the data acquisition device. Through filtering and the Fast Fourier Transform, the cardiac rate is obtained, and the real-time cardiac rate is tracked with a moving average. We experimented with 30 volunteers of different genders and ages, and compared the signals captured by this method to a conventional PPG signal captured concurrently from a finger. The results of the experiments agree well, with a maximum deviation of about 2 bpm.
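
    The final rate-extraction step (de-trending, FFT, spectral peak pick) compresses to a few lines; the 0.7-3.5 Hz search band is an assumed physiological range, not a value from the paper:

    ```python
    import numpy as np

    def cardiac_rate_from_ppg(signal, fs):
        """Estimate heart rate from a PPG trace: remove the DC component,
        limit the spectrum to a physiological band (0.7-3.5 Hz ~ 42-210 bpm),
        and take the dominant FFT peak."""
        signal = signal - np.mean(signal)          # de-trend / remove DC
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        spectrum = np.abs(np.fft.rfft(signal))
        band = (freqs >= 0.7) & (freqs <= 3.5)
        f_peak = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * f_peak                       # beats per minute
    ```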

  17. Wavelet packet-based insufficiency murmurs analysis method

    NASA Astrophysics Data System (ADS)

    Choi, Samjin; Jiang, Zhongwei

    2007-12-01

    In this paper, an analysis method for aortic and mitral insufficiency murmurs using the wavelet packet technique is proposed for classifying valvular heart defects. Considering the different frequency distributions of normal sounds and insufficiency murmurs in the frequency domain, we used two properties: the relative wavelet energy and the Shannon wavelet entropy, which describe the energy information and the entropy information at the selected frequency band, respectively. Then the signal-to-murmur ratio (SMR), the ratio between the frequency bands for normal heart sounds (15.62-187.50 Hz) and for aortic and mitral insufficiency murmurs (187.50-703.12 Hz), was employed as the classification measure to identify insufficiency murmurs. The proposed measures were validated by case studies. 194 heart sound signals, comprising 48 normal and 146 abnormal sound cases acquired from 6 healthy volunteers and 30 patients, were tested. The normal sound signals were recorded by applying a self-produced wireless electronic stethoscope system to subjects with no history of other heart complications. Insufficiency murmurs were grouped into two valvular heart defects, aortic insufficiency and mitral insufficiency; these murmur subjects included no other coexistent valvular defects. As a result, the proposed insufficiency murmur detection method showed very high classification efficiency. Therefore, the proposed wavelet packet-based heart sound classification method was validated for the classification of valvular heart defects, especially insufficiency murmurs.
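
    A sketch of the two feature computations with the PyWavelets package is shown below; the wavelet, decomposition level and band indices are assumptions (the paper's 15.62-187.50 Hz and 187.50-703.12 Hz bands map to particular terminal nodes only for a particular sampling rate):

    ```python
    import numpy as np
    import pywt

    def band_energy_entropy(x, wavelet='db4', level=5):
        """Relative wavelet-packet energy per terminal node plus the Shannon
        wavelet entropy; x must be at least 2**level samples long."""
        wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
        nodes = wp.get_level(level, order='freq')        # low to high frequency
        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        rel_energy = energies / energies.sum()
        entropy = -np.sum(rel_energy * np.log(rel_energy + 1e-12))
        return rel_energy, entropy

    # SMR sketch, with hypothetical band index sets chosen from the band edges:
    # smr = rel_energy[normal_bands].sum() / rel_energy[murmur_bands].sum()
    ```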

  18. Improved reliability analysis method based on the failure assessment diagram

    NASA Astrophysics Data System (ADS)

    Zhou, Yu; Zhang, Zheng; Zhong, Qunpeng

    2012-07-01

    With the uncertainties related to operating conditions, in-service non-destructive testing (NDT) measurements and material properties considered in the structural integrity assessment, probabilistic analysis based on the failure assessment diagram (FAD) approach has recently become an important concern. However, the point density revealing the probabilistic distribution characteristics of the assessment points is usually ignored. To obtain more detailed and direct knowledge from the reliability analysis, an improved probabilistic fracture mechanics (PFM) assessment method is proposed. By integrating 2D kernel density estimation (KDE) technology into the traditional probabilistic assessment, the probabilistic density of the randomly distributed assessment points is visualized in the assessment diagram. Moreover, a modified interval sensitivity analysis is implemented and compared with probabilistic sensitivity analysis. The improved reliability analysis method is applied to the assessment of a high pressure pipe containing an axial internal semi-elliptical surface crack. The results indicate that these two methods can give consistent sensitivities of input parameters, but the interval sensitivity analysis is computationally more efficient. Meanwhile, the point density distribution and its contour are plotted in the FAD, thereby better revealing the characteristics of PFM assessment. This study provides a powerful tool for the reliability analysis of critical structures.
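
    The 2D kernel density overlay on the failure assessment diagram can be reproduced with scipy's Gaussian KDE; the assessment points below are hypothetical Monte Carlo samples, not data from the study:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    def fad_point_density(Lr, Kr):
        """2D kernel density estimate of Monte Carlo assessment points in the
        failure assessment diagram (Lr = load ratio, Kr = toughness ratio);
        the returned density can be contoured on top of the FAD curve."""
        points = np.vstack([Lr, Kr])
        kde = gaussian_kde(points)        # Gaussian kernels, default bandwidth
        return kde(points)                # density evaluated at each sample

    # Hypothetical Monte Carlo sample of assessment points:
    rng = np.random.default_rng(0)
    Lr = rng.normal(0.6, 0.08, 2000)
    Kr = rng.normal(0.5, 0.10, 2000)
    density = fad_point_density(Lr, Kr)
    ```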

  19. [A Standing Balance Evaluation Method Based on Largest Lyapunov Exponent].

    PubMed

    Liu, Kun; Wang, Hongrui; Xiao, Jinzhuang; Zhao, Qing

    2015-12-01

    In order to evaluate the ability of human standing balance scientifically, in this study we propose a new evaluation method based on chaos nonlinear analysis theory. In this method, a sinusoidal acceleration stimulus in the forward/backward direction, supplied by a motion platform, was applied under the subjects' feet. In addition, three acceleration sensors, fixed to the shoulder, hip and knee of each subject, captured the dynamic data of balance adjustment. By reconstructing the system phase space, we calculated the largest Lyapunov exponent (LLE) of the dynamic data of the subjects' different segments, and then used the sum of the squares of the differences between the LLEs (SSDLLE) as the balance-capability evaluation index. Finally, the indexes of 20 subjects were calculated and compared with the evaluation results of existing methods. The results showed that the SSDLLE was more in line with the subjects' performance during the experiment and could measure the body's balance ability to some extent. Moreover, the results also illustrated that balance level is determined by the coordination ability of the various joints, and that more than one balance control strategy may be used in the process of maintaining balance. PMID:27079089

  20. Histogram-Based Calibration Method for Pipeline ADCs.

    PubMed

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are influenced directly by the operation of the prior stage, so the histogram results provide information about the errors in the prior stage. The composed histograms reduce the required samples, and thus the calibration time, and can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively. PMID:26070196

  1. Luminex-Based Methods in High-Resolution HLA Typing.

    PubMed

    Testi, Manuela; Andreani, Marco

    2015-01-01

    Luminex-based technology has been applied to discriminate between the different Human Leukocyte Antigen (HLA) alleles. The typing method consists of a reverse-SSO assay: target DNA is PCR-amplified using biotinylated group-specific primers, with a single PCR reaction used for each HLA locus. The biotinylated PCR product is chemically denatured using a pH change and allowed to rehybridize to complementary DNA probes conjugated to microspheres. These beads contain two internal fluorescent dyes that create a unique color combination, making them identifiable. Washes are performed to eliminate any additional PCR product that does not exactly match the sequence detected by the probe. The biotinylated PCR product bound to the microsphere is labelled with streptavidin conjugated with R-phycoerythrin (SAPE). A flow analyzer measures the fluorescent intensity of SAPE on each microsphere, and software assigns positive or negative reactions based on the strength of the fluorescent signal. The assignment of the HLA typing is based on positive and negative probe reactions compared with published HLA gene sequences. Recently, kits with an extensive number of probes/beads, designed to reduce the number of ambiguities or to lead directly to allele-level typing, have been made available. PMID:26024639

  2. An improved image sharpness assessment method based on contrast sensitivity

    NASA Astrophysics Data System (ADS)

    Zhang, Li; Tian, Yan; Yin, Yili

    2015-10-01

    An image sharpness assessment method based on the Contrast Sensitivity Function (CSF) is proposed to assess the sharpness of unfocused images. First, the image undergoes a two-dimensional Discrete Fourier Transform (DFT), and the intermediate-frequency and high-frequency coefficients are each divided into two parts. Second, the inverse Discrete Fourier Transform (IDFT) of each of the four parts yields four sub-images. Third, a range function evaluates the sharpness of each sub-image. Finally, the image sharpness is obtained as the weighted sum of the sub-image sharpness values; to comply with CSF characteristics, the weighting factors are set based on the Contrast Sensitivity Function. The new algorithm and four typical evaluation algorithms (Fourier, Range, Variance and Wavelet) are assessed with six quantitative evaluation indices: the width of the steep part of the focusing curve, the sharpness ratio, the steepness, the variance of the flat part of the focusing curve, the local-extremum factor and the sensitivity. The effect of noise and image content on the algorithms is also analyzed. The experimental results show that the new algorithm has better sensitivity and anti-noise performance than the four typical evaluation algorithms, and its evaluation results are consistent with human visual characteristics.
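
    One possible reading of the four-band scheme is sketched below: split the centered spectrum into four radial bands (two intermediate, two high), invert each band, apply a range-based measure to each sub-image, and combine with CSF-inspired weights. The band edges and weights are hypothetical placeholders, not the paper's values:

    ```python
    import numpy as np

    def csf_weighted_sharpness(img, weights=(0.3, 0.35, 0.25, 0.1)):
        """Four-band sharpness score for a 2D grayscale array: band-pass the
        spectrum radially, reconstruct each sub-image, score it with a range
        measure, and sum with CSF-inspired weights."""
        F = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # normalized radius
        edges = [0.125, 0.25, 0.5, 0.75, 1.0]                   # 4 radial bands
        score = 0.0
        for k in range(4):
            band = (r >= edges[k]) & (r < edges[k + 1])
            sub = np.real(np.fft.ifft2(np.fft.ifftshift(F * band)))
            score += weights[k] * (sub.max() - sub.min())       # range measure
        return score
    ```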

  3. Distance-Based Phylogenetic Methods Around a Polytomy.

    PubMed

    Davidson, Ruth; Sullivant, Seth

    2014-01-01

    Distance-based phylogenetic algorithms attempt to solve the NP-hard least-squares phylogeny problem by mapping an arbitrary dissimilarity map representing biological data to a tree metric. The set of all dissimilarity maps is a Euclidean space properly containing the space of all tree metrics as a polyhedral fan. Outputs of distance-based tree reconstruction algorithms such as UPGMA and neighbor-joining are points in the maximal cones in the fan. Tree metrics with polytomies lie at the intersections of maximal cones. A phylogenetic algorithm divides the space of all dissimilarity maps into regions based upon which combinatorial tree is reconstructed by the algorithm. Comparison of phylogenetic methods can be done by comparing the geometry of these regions. We use polyhedral geometry to compare the local nature of the subdivisions induced by least-squares phylogeny, UPGMA, and neighbor-joining when the true tree has a single polytomy with exactly four neighbors. Our results suggest that in some circumstances, UPGMA and neighbor-joining poorly match least-squares phylogeny. PMID:26355780

  4. Updating National Topographic Data Base Using Change Detection Methods

    NASA Astrophysics Data System (ADS)

    Keinan, E.; Felus, Y. A.; Tal, Y.; Zilberstien, O.; Elihai, Y.

    2016-06-01

    The traditional method for updating a topographic database on a national scale is a complex process that requires human resources, time and the development of specialized procedures. In many National Mapping and Cadaster Agencies (NMCA), the updating cycle takes a few years. Today, reality is dynamic and changes occur every day; therefore, users expect the existing database to portray the current reality. Global mapping projects based on community volunteers, such as OSM, update their databases every day through crowdsourcing. In order to fulfil users' requirements for rapid updating, a new methodology that maps major interest areas while preserving associated decoding information should be developed. Until recently, automated processes did not yield satisfactory results, and a typical process included comparing images from different periods. The success rates in identifying objects were low, and most results were accompanied by a high percentage of false alarms; as a result, the automatic process required significant editorial work that made it uneconomical. In recent years, developments in mapping technologies, advances in image processing algorithms and computer vision, together with digital aerial cameras with a NIR band and Very High Resolution satellites, have allowed the implementation of a cost-effective automated process. The automatic process is based on high-resolution Digital Surface Model analysis, Multi Spectral (MS) classification, MS segmentation, object analysis and shape forming algorithms. This article reviews the results of a novel change detection methodology as a first step toward updating the NTDB in the Survey of Israel.

  5. Artificial Boundary Conditions Based on the Difference Potentials Method

    NASA Technical Reports Server (NTRS)

    Tsynkov, Semyon V.

    1996-01-01

    While numerically solving a problem initially formulated on an unbounded domain, one typically truncates this domain, which necessitates setting the artificial boundary conditions (ABC's) at the newly formed external boundary. The issue of setting the ABC's appears to be most significant in many areas of scientific computing, for example, in problems originating from acoustics, electrodynamics, solid mechanics, and fluid dynamics. In particular, in computational fluid dynamics (where external problems present a wide class of practically important formulations) the proper treatment of external boundaries may have a profound impact on the overall quality and performance of numerical algorithms. Most of the currently used techniques for setting the ABC's can basically be classified into two groups. The methods from the first group (global ABC's) usually provide high accuracy and robustness of the numerical procedure but often appear to be fairly cumbersome and (computationally) expensive. The methods from the second group (local ABC's) are, as a rule, algorithmically simple, numerically cheap, and geometrically universal; however, they usually lack accuracy of computations. In this paper we first present a survey and provide a comparative assessment of different existing methods for constructing the ABC's. Then, we describe a relatively new ABC's technique of ours and review the corresponding results. This new technique, in our opinion, is currently one of the most promising in the field. It enables one to construct such ABC's that combine the advantages relevant to the two aforementioned classes of existing methods. Our approach is based on application of the difference potentials method attributable to V. S. Ryaben'kii. This approach allows us to obtain highly accurate ABC's in the form of certain (nonlocal) boundary operator equations. The operators involved are analogous to the pseudodifferential boundary projections first introduced by A. P. Calderon and then

  6. ADVANCED SEISMIC BASE ISOLATION METHODS FOR MODULAR REACTORS

    SciTech Connect

    E. Blanford; E. Keldrauk; M. Laufer; M. Mieler; J. Wei; B. Stojadinovic; P.F. Peterson

    2010-09-20

    Advanced technologies for structural design and construction have the potential for major impact not only on nuclear power plant construction time and cost, but also on the design process and on the safety, security and reliability of next generation of nuclear power plants. In future Generation IV (Gen IV) reactors, structural and seismic design should be much more closely integrated with the design of nuclear and industrial safety systems, physical security systems, and international safeguards systems. Overall reliability will be increased, through the use of replaceable and modular equipment, and through design to facilitate on-line monitoring, in-service inspection, maintenance, replacement, and decommissioning. Economics will also receive high design priority, through integrated engineering efforts to optimize building arrangements to minimize building heights and footprints. Finally, the licensing approach will be transformed by becoming increasingly performance based and technology neutral, using best-estimate simulation methods with uncertainty and margin quantification. In this context, two structural engineering technologies, seismic base isolation and modular steel-plate/concrete composite structural walls, are investigated. These technologies have major potential to (1) enable standardized reactor designs to be deployed across a wider range of sites, (2) reduce the impact of uncertainties related to site-specific seismic conditions, and (3) alleviate reactor equipment qualification requirements. For Gen IV reactors the potential for deliberate crashes of large aircraft must also be considered in design. This report concludes that base-isolated structures should be decoupled from the reactor external event exclusion system. As an example, a scoping analysis is performed for a rectangular, decoupled external event shell designed as a grillage. This report also reviews modular construction technology, particularly steel-plate/concrete construction using

  7. Image processing methods for visual prostheses based on DSP

    NASA Astrophysics Data System (ADS)

    Liu, Huwei; Zhao, Ying; Tian, Yukun; Ren, Qiushi; Chai, Xinyu

    2008-12-01

    Visual prostheses for extreme vision impairment have come closer to reality during the last few years. The task of this research has been to design external devices and to study image processing algorithms and methods for images of different complexity. We have developed a real-time system, based on a DSP (Digital Signal Processor), capable of image capture and processing to obtain the most useful and important image features for recognition and simulation experiments. Beyond developing the hardware system, we introduce algorithms such as resolution reduction, information extraction, dilation and erosion, square (circular) pixelization and Gaussian pixelization, and we classify images into stages according to complexity: simple images, moderately complex images, and complex images. As a result, this system produces the signals needed for transmission to the electrode array, as well as images for simulation experiments.

  8. An RBF-based reparameterization method for constrained texture mapping.

    PubMed

    Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J

    2012-07-01

    Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the 3D model feature points with the corresponding pixels in a texture image, surface parameterization must satisfy specific positional constraints. However, despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing a radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize the 2D embedding into a foldover-free 2D mesh satisfying a set of user-specified constraint points. In addition, this approach is mesh-free; therefore, generating smooth texture mapping results is possible without extra smoothing optimization. PMID:21690643

  9. [Landscape pattern in Northeast China based on moving window method].

    PubMed

    Liu, Xin; Guo, Qing-Xi

    2009-06-01

    Based on GIS technology and using the moving window method, the characteristics of the landscape pattern in Northeast China in 2006 and the relationships between these characteristics and environmental factors such as precipitation, air temperature, altitude and human activities were studied. In the study area in 2006, forest land had the largest proportion, followed by cultivated land, occupying 61.69% and 25.11% of the total respectively, and the landscape diversity showed a circle-zoning structure, which provided a buffer for fragmented and sensitive regions so that adverse ecological consequences were reduced or restricted to a definite scale. The correlation coefficients of the landscape indices with precipitation and air temperature were less than 0.4, and those with altitude were less than 0.07, illustrating that the heterogeneity of the landscape pattern in the study area did not depend on any single natural factor. PMID:19795653

  10. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, Arthur J.; Menchhofer, Paul A.

    1995-01-01

    A method and apparatus for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product.

  11. Method and apparatus for making articles from particle based materials

    DOEpatents

    Moorhead, A.J.; Menchhofer, P.A.

    1995-12-19

    A method and apparatus are disclosed for the production of articles made of a particle-based material; e.g., ceramics and sintered metals. In accordance with the invention, a thermally settable slurry containing a relatively high concentration of the particles is conveyed through an elongate flow area having a desired cross-sectional configuration. The slurry is heated as it is advanced through the flow area causing the slurry to set or harden in a shape which conforms to the cross-sectional configuration of the flow area. The material discharges from the flow area as a self-supporting solid of near net final dimensions. The article may then be sintered to consolidate the particles and provide a high density product. 10 figs.

  12. Inversion method based on stochastic optimization for particle sizing.

    PubMed

    Sánchez-Escobar, Juan Jaime; Barbosa-Santillán, Liliana Ibeth; Vargas-Ubera, Javier; Aguilar-Valdés, Félix

    2016-08-01

    A stochastic inverse method is presented based on a hybrid evolutionary optimization algorithm (HEOA) to retrieve a monomodal particle-size distribution (PSD) from the angular distribution of scattered light. By solving an optimization problem, the HEOA (with the Fraunhofer approximation) retrieves the PSD from an intensity pattern generated by Mie theory. The analyzed light-scattering pattern can be attributed to unimodal normal, gamma, or lognormal distribution of spherical particles covering the interval of modal size parameters 46≤α≤150. The HEOA ensures convergence to the near-optimal solution during the optimization of a real-valued objective function by combining the advantages of a multimember evolution strategy and locally weighted linear regression. The numerical results show that our HEOA can be satisfactorily applied to solve the inverse light-scattering problem. PMID:27505357

  13. An image fusion method based region segmentation and complex wavelets

    NASA Astrophysics Data System (ADS)

    Zhang, Junju; Yuan, Yihui; Chang, Benkang; Han, Yiyong; Liu, Lei; Qiu, Yafeng

    2009-07-01

    A fusion algorithm for infrared and visible light images based on region segmentation and the dual-tree complex wavelet transform (DTCWT) is presented. Before image segmentation, morphological top-hat filtering is first performed on the IR and visible images respectively, eliminating the details of luminous areas; morphological bottom-hat filtering is then performed on the two images respectively, eliminating the details of dark areas. Subtracting the bottom-hat filtered image from the top-hat filtered image yields the enhanced images, which are then segmented by thresholding. After image segmentation, the DTCWT coefficients from different regions are merged separately. Finally, the fused image is obtained by performing the inverse DTCWT. The evaluation results show the validity of the presented algorithm.
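
    The pre-segmentation enhancement step maps directly onto OpenCV's morphology operators; the structuring-element size is an assumption:

    ```python
    import cv2
    import numpy as np

    def morphological_enhance(img, ksize=15):
        """Enhancement described in the abstract: subtract the bottom-hat
        filtered image from the top-hat filtered image, suppressing details
        of both bright and dark areas before thresholding (uint8 grayscale;
        cv2.subtract saturates at zero)."""
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (ksize, ksize))
        tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, se)
        bottomhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, se)
        return cv2.subtract(tophat, bottomhat)

    # Segmentation of the enhanced image, e.g. with Otsu's threshold:
    # _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    ```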

  14. Action Capture: A VR-Based Method for Character Animation

    NASA Astrophysics Data System (ADS)

    Jung, Bernhard; Amor, Heni Ben; Heumer, Guido; Vitzthum, Arnd

    This contribution describes a Virtual Reality (VR) based method for character animation that extends conventional motion capture by not only tracking an actor's movements but also his or her interactions with the objects of a virtual environment. Rather than merely replaying the actor's movements, the idea is that virtual characters learn to imitate the actor's goal-directed behavior while interacting with the virtual scene. Following Arbib's equation action = movement + goal we call this approach Action Capture. For this, the VR user's body movements are analyzed and transformed into a multi-layered action representation. Behavioral animation techniques are then applied to synthesize animations which closely resemble the demonstrated action sequences. As an advantage, captured actions can often be naturally applied to virtual characters of different sizes and body proportions, thus avoiding retargeting problems of motion capture.

  15. Study of Flapping Flight Using Discrete Vortex Method Based Simulations

    NASA Astrophysics Data System (ADS)

    Devranjan, S.; Jalikop, Shreyas V.; Sreenivas, K. R.

    2013-12-01

    In recent times, research in the area of flapping flight has attracted renewed interest, with an endeavor to use this mechanism in Micro Air Vehicles (MAVs). For sustained, high-endurance flight with larger payload-carrying capacity, we need to identify simple and efficient flapping kinematics. In this paper, we have used flow visualizations and Discrete Vortex Method (DVM) based simulations to study flapping flight. Our results highlight that simple flapping kinematics with a down-stroke period (tD) shorter than the upstroke period (tU) would produce sustained lift. We have identified the optimal asymmetry ratio (Ar = tD/tU) for which flapping wings produce maximum lift, and find that introducing optimal wing flexibility further enhances the lift.

  16. Measurement matrix optimization method based on matrix orthogonal similarity transformation

    NASA Astrophysics Data System (ADS)

    Pan, Jinfeng

    2016-05-01

    Optimization of the measurement matrix is one of the important research aspects of compressive sensing theory. A measurement matrix optimization method is presented based on the orthogonal similarity transformation of the information operator's Gram matrix. Since the information operator's Gram matrix is a singular symmetric matrix, a simplified orthogonal similarity transformation is deduced, and thus the simplified diagonal matrix that is orthogonally similar to it is obtained. An approximation of the Gram matrix is then obtained by setting all the nonzero diagonal entries of the simplified diagonal matrix to their average value, and an optimized measurement matrix is acquired from its relationship with the information operator. Experimental results show that the optimized measurement matrix is less coherent with dictionaries than a random measurement matrix, and the relative signal recovery error also declines when the proposed measurement matrix is used.
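
    Under the assumption of an orthonormal dictionary, the eigenvalue-averaging step can be sketched in a few numpy lines; the factorization used to map the approximated Gram matrix back to a measurement matrix is one reasonable reading of the relationship described, not the paper's exact derivation:

    ```python
    import numpy as np

    def optimize_measurement_matrix(phi, psi):
        """Diagonalize the information operator's Gram matrix (symmetric, so
        orthogonally similar to a diagonal matrix), replace its nonzero
        eigenvalues by their mean, and map the approximated Gram matrix back
        to a measurement matrix. phi: (M, N) measurement matrix; psi: (N, N)
        orthonormal dictionary (assumed)."""
        M = phi.shape[0]
        D = phi @ psi                               # information operator
        G = D.T @ D                                 # singular symmetric Gram matrix
        eigvals, Q = np.linalg.eigh(G)              # ascending eigenvalues
        nonzero = eigvals > 1e-10
        eigvals[~nonzero] = 0.0
        eigvals[nonzero] = eigvals[nonzero].mean()  # equalize nonzero eigenvalues
        D_new = (Q * np.sqrt(eigvals)).T[-M:]       # rank-M factor of the new Gram
        return D_new @ psi.T                        # back to the measurement matrix
    ```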

  17. A quantitative dimming method for LED based on PWM

    NASA Astrophysics Data System (ADS)

    Wang, Jiyong; Mou, Tongsheng; Wang, Jianping; Tian, Xiaoqing

    2012-10-01

    Traditional light sources were required to provide stable and uniform illumination for a living or working environment, considering the visual function of human beings. That requirement was entirely reasonable until the non-visual functions of the ganglion cells in the retinal photosensitive layer were discovered. A new generation of lighting technology is now emerging, based on novel lighting materials such as LEDs and on photobiological effects on human physiology and behavior. To realize dynamic LED lighting whose intensity and color are adjustable to the needs of photobiological effects, a quantitative dimming method based on Pulse Width Modulation (PWM) and light-mixing technology is presented. Beginning with two-channel PWM, this paper demonstrates the determinacy and limitation of PWM dimming for realizing Expected Photometric and Colorimetric Quantities (EPCQ), in accordance with an analysis of geometrical, photometric, colorimetric and electrodynamic constraints. A quantitative model which maps the EPCQ into duty cycles is finally established. The deduced model suggests that determinacy holds only for two-channel and three-channel PWM, whereas the limitation is inevitable for any larger number of channels. To examine the model, a light-mixing experiment with two kinds of white LED simulated variations of illuminance and Correlated Color Temperature (CCT) from dawn to midday. Mean deviations between theoretical and measured values were 15 lx and 23 K, respectively. The results show that this method can effectively realize a light spectrum with specific EPCQ requirements, and it provides a theoretical basis and a practical way for dynamic LED lighting.
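
    For the two-channel case, the claimed determinacy follows from the linearity of time-averaged mixing in the duty cycles: two target quantities give a 2x2 linear system. The sketch below illustrates this with hypothetical channel contributions (a rigorous treatment of CCT is nonlinear, so the second quantity is a linearized surrogate):

    ```python
    import numpy as np

    def solve_duty_cycles(ch1, ch2, target):
        """Each channel vector holds its full-on contribution to the two target
        quantities (e.g. illuminance and a linearized CCT surrogate); with
        time-averaged linear mixing the duty cycles solve a 2x2 system."""
        A = np.column_stack([ch1, ch2])        # 2x2 mixing matrix
        d = np.linalg.solve(A, target)         # duty cycles d1, d2
        if np.any((d < 0) | (d > 1)):
            raise ValueError("target outside the gamut reachable by PWM")
        return d

    # Hypothetical channel contributions (quantity_1, quantity_2) at 100% duty:
    d1, d2 = solve_duty_cycles(np.array([500.0, 0.35]), np.array([300.0, 0.52]),
                               target=np.array([400.0, 0.45]))
    print(d1, d2)    # duty cycles in [0, 1]
    ```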

  18. Iron-based amorphous alloys and methods of synthesizing iron-based amorphous alloys

    DOEpatents

    Saw, Cheng Kiong; Bauer, William A.; Choi, Jor-Shan; Day, Dan; Farmer, Joseph C.

    2016-05-03

    A method according to one embodiment includes combining an amorphous iron-based alloy and at least one metal selected from a group consisting of molybdenum, chromium, tungsten, boron, gadolinium, nickel phosphorous, yttrium, and alloys thereof to form a mixture, wherein the at least one metal is present in the mixture from about 5 atomic percent (at %) to about 55 at %; and ball milling the mixture at least until an amorphous alloy of the iron-based alloy and the at least one metal is formed. Several amorphous iron-based metal alloys are also presented, including corrosion-resistant amorphous iron-based metal alloys and radiation-shielding amorphous iron-based metal alloys.

  19. A Robust Deconvolution Method based on Transdimensional Hierarchical Bayesian Inference

    NASA Astrophysics Data System (ADS)

    Kolb, J.; Lekic, V.

    2012-12-01

    Analysis of P-S and S-P conversions allows us to map receiver side crustal and lithospheric structure. This analysis often involves deconvolution of the parent wave field from the scattered wave field as a means of suppressing source-side complexity. A variety of deconvolution techniques exist, including damped spectral division, Wiener filtering, iterative time-domain deconvolution, and the multitaper method. All of these techniques require estimates of noise characteristics as input parameters. We present a deconvolution method based on transdimensional Hierarchical Bayesian inference in which both noise magnitude and noise correlation are used as parameters in calculating the likelihood probability distribution. Because the noise for P-S and S-P conversion analysis in terms of receiver functions is a combination of both background noise - which is relatively easy to characterize - and signal-generated noise - which is much more difficult to quantify - we treat measurement errors as an unknown quantity, characterized by a probability density function whose mean and variance are model parameters. This transdimensional Hierarchical Bayesian approach has been successfully used previously in the inversion of receiver functions in terms of shear and compressional wave speeds of an unknown number of layers [1]. In our method we used a Markov chain Monte Carlo (MCMC) algorithm to find the receiver function that best fits the data while accurately assessing the noise parameters. In order to parameterize the receiver function we model the receiver function as an unknown number of Gaussians of unknown amplitude and width. The algorithm takes multiple steps before calculating the acceptance probability of a new model, in order to avoid getting trapped in local misfit minima. Using both observed and synthetic data, we show that the MCMC deconvolution method can accurately obtain a receiver function as well as an estimate of the noise parameters given the parent and daughter

  20. Data Bases in Writing: Method, Practice, and Metaphor.

    ERIC Educational Resources Information Center

    Schwartz, Helen J.

    1985-01-01

    Points out the need for informed and experienced users of data bases. Discusses the definition of a data base, creating a data base for research, comparison use, and checking written text as a data base. (EL)

  1. Bacteria counting method based on polyaniline/bacteria thin film.

    PubMed

    Zhihua, Li; Xuetao, Hu; Jiyong, Shi; Xiaobo, Zou; Xiaowei, Huang; Xucheng, Zhou; Tahir, Haroon Elrasheid; Holmes, Mel; Povey, Malcolm

    2016-07-15

    A simple and rapid bacteria counting method based on polyaniline (PANI)/bacteria thin films is proposed. Because immobilized bacteria hinder the deposition of PANI on a glassy carbon electrode (GCE), PANI/bacteria thin films containing less PANI are obtained as the bacteria concentration increases. The prepared PANI/bacteria film was characterized with the cyclic voltammetry (CV) technique to provide a quantitative index for determining the bacteria count, and electrochemical impedance spectroscopy (EIS) was also performed to further investigate the differences between the PANI/bacteria films. A good linear relationship between the peak currents of the CVs and the log total count of bacteria (Bacillus subtilis) could be established, Y = -30.413X + 272.560 (R^2 = 0.982), over the range of 5.3x10^4 to 5.3x10^8 CFU mL^-1, and the method showed acceptable stability, reproducibility and switchability. The proposed method is feasible for simple and rapid counting of bacteria. PMID:26921555
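
    Reading a count back from the reported calibration is a one-line inversion of the published regression, valid only inside the calibrated range:

    ```python
    def log_count_from_peak_current(Y):
        """Invert the reported calibration Y = -30.413*X + 272.560 to read the
        log10 total count X (CFU/mL) from a CV peak current Y; only meaningful
        within the calibrated 5.3e4 - 5.3e8 CFU/mL range."""
        return (272.560 - Y) / 30.413

    print(log_count_from_peak_current(100.0))   # log10 CFU/mL for a peak current of 100
    ```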

  2. Distance Metric Based Oversampling Method for Bioinformatics and Performance Evaluation.

    PubMed

    Tsai, Meng-Fong; Yu, Shyr-Shen

    2016-07-01

    An imbalanced classification means that a dataset has an unequal class distribution among its population. For any given dataset, regardless of any balancing issue, the predictions made by most classification methods are highly accurate for the majority class but significantly less accurate for the minority class. To overcome this problem, this study took several imbalanced datasets from the famed UCI datasets and designed and implemented an efficient algorithm which couples Top-N Reverse k-Nearest Neighbor (TRkNN) with the Synthetic Minority Oversampling TEchnique (SMOTE). The proposed algorithm was investigated by applying it to classification methods such as logistic regression (LR), C4.5, Support Vector Machine (SVM), and Back Propagation Neural Network (BPNN). This research also adopted different distance metrics to classify the same UCI datasets. The empirical results illustrate that the Euclidean and Manhattan distances are not only more accurate, but also show greater computational efficiency when compared to the Chebyshev and Cosine distances. Therefore, the proposed algorithm based on TRkNN and SMOTE can be widely used to handle imbalanced datasets. Our recommendations on choosing suitable distance metrics can also serve as a reference for future studies. PMID:27185255
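
    A sketch of the SMOTE interpolation step with a selectable distance metric (Python/SciPy); the TRkNN screening used by the authors is omitted, the parameters are illustrative, and the minority set is assumed to have more than k points:

        import numpy as np
        from scipy.spatial.distance import cdist

        def smote_sample(minority, k=5, n_new=100, metric="euclidean", seed=0):
            # Interpolate between a minority point and one of its k nearest
            # minority neighbors under the chosen metric ("euclidean",
            # "cityblock" for Manhattan, "chebyshev", "cosine").
            rng = np.random.default_rng(seed)
            d = cdist(minority, minority, metric=metric)
            np.fill_diagonal(d, np.inf)
            neighbors = np.argsort(d, axis=1)[:, :k]
            new = []
            for _ in range(n_new):
                i = rng.integers(len(minority))
                j = neighbors[i, rng.integers(k)]
                new.append(minority[i] + rng.random() * (minority[j] - minority[i]))
            return np.asarray(new)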

  3. Super pixel density based clustering automatic image classification method

    NASA Astrophysics Data System (ADS)

    Xu, Mingxing; Zhang, Chuan; Zhang, Tianxu

    2015-12-01

    Image classification is an important means of image segmentation and data mining, and achieving rapid automated image classification has been a focus of research. This paper proposes an automatic image classification and outlier identification method based on superpixel density clustering. Pixel location coordinates and gray values are used to compute density and distance, enabling automatic classification and outlier extraction. Because a large number of pixels dramatically increases the computational complexity, the image is preprocessed into a small number of superpixel sub-blocks before the density and distance calculations. A normalized density-distance discrimination rule is then designed to select cluster centers automatically, whereby the image is classified and outliers are identified. Extensive experiments show that our method requires no human intervention, categorizes images faster than the density clustering algorithm, and performs automated classification and outlier extraction effectively.
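
    The density/distance computation described above resembles density-peaks clustering (Rodriguez and Laio); a minimal sketch (Python), assuming each superpixel is summarized by a feature vector of its centroid coordinates and mean gray value:

        import numpy as np

        def density_distance(features, dc):
            # rho: Gaussian-kernel local density; delta: distance to the nearest
            # point of higher density. Cluster centers have large rho*delta;
            # outliers have low rho but large delta.
            d = np.linalg.norm(features[:, None] - features[None, :], axis=2)
            rho = np.sum(np.exp(-(d / dc) ** 2), axis=1) - 1.0
            delta = np.empty(len(features))
            for i in range(len(features)):
                denser = d[i, rho > rho[i]]
                delta[i] = denser.min() if denser.size else d[i].max()
            return rho, delta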

  4. Digital image registration method based upon binary boundary maps

    NASA Technical Reports Server (NTRS)

    Jayroe, R. R., Jr.; Andrus, J. F.; Campbell, C. W.

    1974-01-01

    A relatively fast method is presented for matching or registering the digital data of imagery from the same ground scene acquired at different times, or from different multispectral images, sensors, or both. It is assumed that the digital images can be registered by using translations and rotations only, that the images are of the same scale, and that little or no distortion exists between images. It is further assumed that by working with several local areas of the image, the rotational effects in the local areas can be neglected. Thus, by treating the misalignments of local areas as translations, it is possible to determine rotational and translational misalignments for a larger portion of the image containing the local areas. This procedure of determining the misalignment and then registering the data according to the misalignment can be repeated until the desired degree of registration is achieved. The method presented is based upon the use of binary boundary maps produced from the raw digital imagery rather than the raw digital data.
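
    A sketch of the per-area translation estimate via FFT cross-correlation of two binary boundary maps (Python); rotation would then be recovered from the translations of several local areas, as described above:

        import numpy as np

        def estimate_translation(map_a, map_b):
            # Peak of the circular cross-correlation gives the (dy, dx) shift.
            corr = np.fft.ifft2(np.fft.fft2(map_a) * np.conj(np.fft.fft2(map_b))).real
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # Map large positive shifts back to negative offsets.
            if dy > map_a.shape[0] // 2:
                dy -= map_a.shape[0]
            if dx > map_a.shape[1] // 2:
                dx -= map_a.shape[1]
            return dy, dx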

  5. Molecular Dynamics and Energy Minimization Based on Embedded Atom Method

    Energy Science and Technology Software Center (ESTSC)

    1995-03-01

    This program performs atomic-scale computer simulations of the structure and dynamics of metallic systems using energetics based on the Embedded Atom Method. The program performs two types of calculations. First, it performs local energy minimization of all atomic positions to determine ground state and saddle point energies and structures. Second, it performs molecular dynamics simulations to determine thermodynamics or microscopic dynamics of the system. In both cases, various constraints can be applied to the system. The volume of the system can be varied automatically to achieve any desired external pressure. The temperature in molecular dynamics simulations can be controlled by a variety of methods. Further, the temperature control can be applied either to the entire system or just a subset of the atoms that would act as a thermal source/sink. The motion of one or more of the atoms can be constrained to either simulate the effects of bulk boundary conditions or to facilitate the determination of saddle point configurations. The simulations are performed with periodic boundary conditions.

  6. A Molecular Selection Index Method Based on Eigenanalysis

    PubMed Central

    Cerón-Rojas, J. Jesús; Castillo-González, Fernando; Sahagún-Castellanos, Jaime; Santacruz-Varela, Amalio; Benítez-Riquelme, Ignacio; Crossa, José

    2008-01-01

    The traditional molecular selection index (MSI) employed in marker-assisted selection maximizes the selection response by combining information on molecular markers linked to quantitative trait loci (QTL) and phenotypic values of the traits of the individuals of interest. This study proposes an MSI based on an eigenanalysis method (molecular eigen selection index method, MESIM), where the first eigenvector is used as a selection index criterion, and its elements determine the proportion of the trait's contribution to the selection index. This article develops the theoretical framework of MESIM. Simulation results show that the genotypic means and the expected selection response from MESIM for each trait are equal to or greater than those from the traditional MSI. When several traits are simultaneously selected, MESIM performs well for traits with relatively low heritability. The main advantages of MESIM over the traditional molecular selection index are that its statistical sampling properties are known and that it does not require economic weights and thus can be used in practical applications when all or some of the traits need to be improved simultaneously. PMID:18716338
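
    A sketch of the eigenanalysis idea (Python/NumPy): the first eigenvector of the covariance of trait and marker scores supplies the index weights, so no economic weights are needed. This illustrates the principle only, not the authors' exact estimator:

        import numpy as np

        def eigen_selection_index(scores):
            # scores: (individuals x traits/markers). Index weights are the
            # eigenvector of the largest eigenvalue of the covariance matrix
            # (sign is conventional and may be flipped for interpretability).
            cov = np.cov(scores, rowvar=False)
            _, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
            w = eigvecs[:, -1]
            return scores @ w  # selection-index value per individual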

  7. Novel kind of DSP design method based on IP core

    NASA Astrophysics Data System (ADS)

    Yu, Qiaoyan; Liu, Peng; Wang, Weidong; Hong, Xiang; Chen, Jicheng; Yuan, Jianzhong; Chen, Keming

    2004-04-01

    Under pressure from design-productivity demands and various special applications, the original DSP design method can no longer keep up with the required pace; a new design method is urgently needed. Intellectual Property (IP) reuse is a trend in DSP design, but simple plug-and-play IP core approaches almost never work. Therefore, appropriate control strategies are needed to connect all the IP cores used and to coordinate the whole DSP. This paper presents a new DSP design procedure, which draws on System-on-a-Chip practice, and then introduces a novel control strategy named DWC to implement a DSP based on IP cores. The most important part of this control strategy, the pipeline control unit (PCU), is described in detail. Because a great number of data hazards occur in most computation-intensive scientific applications, a new effective algorithm for checking data hazards is employed in the PCU. Following this strategy, the design of a general- or special-purpose DSP can be finished in a shorter time, and the DSP has the potential to improve performance with little modification of the basic function units. The DWC strategy has been implemented successfully in a 16-bit fixed-point DSP.
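
    For illustration, a read-after-write hazard check of the kind a PCU must perform; a toy sketch (Python) assuming a two-slot hazard window, since the paper does not give its algorithm:

        def raw_hazards(instructions):
            # Each instruction is (dest_reg, [src_regs]); flag reads of a register
            # written by an instruction still in flight (previous two slots).
            hazards = []
            for i, (_, srcs) in enumerate(instructions):
                for j in range(max(0, i - 2), i):
                    if instructions[j][0] in srcs:
                        hazards.append((j, i, instructions[j][0]))
            return hazards

        # r3 = r1 + r2 ; r4 = r3 * r1  ->  hazard on r3 between slots 0 and 1
        print(raw_hazards([("r3", ["r1", "r2"]), ("r4", ["r3", "r1"])]))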

  8. An analytic reconstruction method for PET based on cubic splines

    NASA Astrophysics Data System (ADS)

    Kastis, George A.; Kyriakopoulou, Dimitra; Fokas, Athanasios S.

    2014-03-01

    PET imaging is an important nuclear medicine modality that measures in vivo distribution of imaging agents labeled with positron-emitting radionuclides. Image reconstruction is an essential component in tomographic medical imaging. In this study, we present the mathematical formulation and an improved numerical implementation of an analytic, 2D, reconstruction method called SRT, the Spline Reconstruction Technique. This technique is based on the numerical evaluation of the Hilbert transform of the sinogram via an approximation in terms of 'custom made' cubic splines. It also imposes sinogram thresholding, which restricts reconstruction to object pixels only. Furthermore, by utilizing certain symmetries it achieves a reconstruction time similar to that of FBP. We have implemented SRT in the software library STIR and have evaluated the method using simulated PET data. We present reconstructed images from several phantoms. Sinograms have been generated at various Poisson noise levels, and 20 noise realizations have been created at each level. In addition to visual comparisons of the reconstructed images, the contrast has been determined as a function of noise level. Further analysis includes the creation of line profiles, when necessary, to determine resolution. Numerical simulations suggest that the SRT algorithm produces fast and accurate reconstructions at realistic noise levels. The contrast is over 95% in all phantoms examined and is independent of noise level.

  9. Trinocular stereo vision method based on mesh candidates

    NASA Astrophysics Data System (ADS)

    Liu, Bin; Xu, Gang; Li, Haibin

    2010-10-01

    One of the most interesting goals of machine vision is 3D structure recovery of scenes. This recovery has many applications, such as object recognition, reverse engineering, automatic cartography, and autonomous robot navigation. To meet the demand of measuring complex prototypes in reverse engineering, a trinocular stereo vision method based on mesh candidates is proposed. After calibration of the cameras, the joint field of view can be defined in the world coordinate system. A mesh grid is established along the coordinate axes, and the mesh nodes are considered potential depth data of the object surface. By similarity measure of the correspondence pairs projected from a given group of candidates, the depth data can be obtained readily. With mesh node optimization, the interval between neighboring nodes in the depth direction can be designed reasonably. The potential ambiguity in correspondence matching can be eliminated efficiently with the constraint of a third camera. The cameras can be treated as two independent pairs, left-right and left-centre. Because the correlation values have multiple peaks, the binocular method alone may not achieve the required measurement accuracy; the second image pair is involved if the confidence coefficient is less than a preset threshold. The depth is determined by the highest sum of correlations of both camera pairs. The measurement system was simulated using 3DS MAX and Matlab software for reconstructing the surface of the object. The experimental results showed that the trinocular vision system performs well in depth measurement.

  10. lytA-based identification methods can misidentify Streptococcus pneumoniae.

    PubMed

    Simões, Alexandra S; Tavares, Débora A; Rolo, Dora; Ardanuy, Carmen; Goossens, Herman; Henriques-Normark, Birgitta; Linares, Josefina; de Lencastre, Hermínia; Sá-Leão, Raquel

    2016-06-01

    During surveillance studies we detected, among over 1500 presumptive pneumococci, 11 isolates displaying conflicting or novel results when characterized by widely accepted phenotypic (optochin susceptibility and bile solubility) and genotypic (lytA-BsaAI-RFLP and MLST) identification methods. We aimed to determine the genetic basis for the unexpected results given by lytA-BsaAI-RFLP and investigate the accuracy of the WHO recommended lytA real-time PCR assay to classify these 11 isolates. Three novel lytA-BsaAI-RFLP signatures were found (one in pneumococcus and two in S. mitis). In addition, one pneumococcus displayed the atypical lytA-BsaAI-RFLP signature characteristic of non-pneumococci and two S. pseudopneumoniae displayed the typical lytA-BsaAI-RFLP pattern characteristic of pneumococci. lytA real-time PCR misidentified these three isolates. In conclusion, identification of pneumococci by lytA real-time PCR, and other lytA-based methodologies, may lead to false results. This is of particular relevance in the increasingly frequent colonization studies relying solely on culture-independent methods. PMID:27107535

  11. A GIS-based method for flood risk assessment

    NASA Astrophysics Data System (ADS)

    Kalogeropoulos, Kleomenis; Stathopoulos, Nikos; Psarogiannis, Athanasios; Penteris, Dimitris; Tsiakos, Chrisovalantis; Karagiannopoulou, Aikaterini; Krikigianni, Eleni; Karymbalis, Efthimios; Chalkias, Christos

    2016-04-01

    Floods are global physical hazards with negative environmental and socio-economic impacts on local and regional scales. The technological evolution of recent decades, especially in the field of geoinformatics, offers new advantages in hydrological modelling. This study uses this technology to quantify flood risk assessment. The study area is an ungauged catchment, and a series of outcomes were obtained using mainly GIS-based hydrological and geomorphological analysis together with a GIS-based distributed Unit Hydrograph model. More specifically, this paper examines the behaviour of the Kladeos basin (Peloponnese, Greece) using real rainfall data as well as hypothetical storms. The hydrological analysis was performed using a Digital Elevation Model of 5x5 m pixel size, while the quantitative drainage basin characteristics were calculated and studied in terms of stream order and its contribution to flooding. Unit Hydrographs are known to be useful when there is a lack of data, and in this work, based on the time-area method, a sequence of flood risk assessments was made using GIS technology. Essentially, the proposed methodology estimates parameters such as discharge and flow velocity in order to quantify flood risk. Keywords: flood risk assessment quantification; GIS; hydrological analysis; geomorphological analysis.
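
    A minimal sketch of the time-area unit hydrograph step (Python), assuming rainfall excess in mm per time step and a GIS-derived histogram of contributing area (km²) per travel-time step; the values shown are illustrative:

        import numpy as np

        def time_area_discharge(rainfall_excess_mm, area_km2, dt_hours):
            # 1 mm of excess over 1 km^2 = 1000 m^3; convolve, convert to m^3/s.
            volume = np.convolve(rainfall_excess_mm, area_km2) * 1000.0
            return volume / (dt_hours * 3600.0)

        q = time_area_discharge([2.0, 5.0, 1.0], [0.5, 1.2, 0.8, 0.3], dt_hours=1.0)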

  12. Graph-Based Methods for Discovery Browsing with Semantic Predications

    PubMed Central

    Wilkowski, Bartłomiej; Fiszman, Marcelo; Miller, Christopher M.; Hristovski, Dimitar; Arabandi, Sivaram; Rosemblat, Graciela; Rindflesch, Thomas C.

    2011-01-01

    We present an extension to literature-based discovery that goes beyond making discoveries to a principled way of navigating through selected aspects of some biomedical domain. The method is a type of “discovery browsing” that guides the user through the research literature on a specified phenomenon. Poorly understood relationships may be explored through novel points of view, and potentially interesting relationships need not be known ahead of time. In a process of “cooperative reciprocity” the user iteratively focuses system output, thus controlling the large number of relationships often generated in literature-based discovery systems. The underlying technology exploits SemRep semantic predications represented as a graph of interconnected nodes (predication arguments) and edges (predicates). The system suggests paths in this graph, which represent chains of relationships. The methodology is illustrated with depressive disorder and focuses on the interaction of inflammation, circadian phenomena, and the neurotransmitter norepinephrine. Insight provided may contribute to enhanced understanding of the pathophysiology, treatment, and prevention of this disorder. PMID:22195216
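
    A toy sketch of the underlying graph traversal (Python/networkx): nodes are concepts, edges carry SemRep-style predicates, and candidate "discovery browsing" chains are simple paths between a start and an end concept. The predications shown are illustrative, not extracted output:

        import networkx as nx

        g = nx.DiGraph()
        g.add_edge("inflammation", "norepinephrine", predicate="INTERACTS_WITH")
        g.add_edge("norepinephrine", "circadian rhythm", predicate="AFFECTS")
        g.add_edge("circadian rhythm", "depressive disorder", predicate="ASSOCIATED_WITH")

        for path in nx.all_simple_paths(g, "inflammation", "depressive disorder"):
            print(" -> ".join(path))  # one candidate chain of relationships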

  13. A DNA-based method for detecting homologous blood doping.

    PubMed

    Manokhina, Irina; Rupert, James L

    2013-12-01

    Homologous (or allogeneic) blood doping, in which blood is transferred from a donor into a recipient athlete, is the easiest, cheapest, and fastest way to increase red cell mass (hematocrit) and therefore the oxygen-carrying capacity of the blood. Although thought to have been rendered obsolete as a doping strategy by the increased use of rhEPO to raise hematocrit, there is evidence that athletes are still using this potentially dangerous method to improve endurance performance. Current testing for homologous blood doping is based on identification of mixed populations of red blood cells by flow cytometry. This paper proposes that homologous blood doping could also be detected by high-resolution qPCR-based genotyping and demonstrates that assays could be developed that would detect second populations of cells even if the "donor" blood was depleted of 99% of the DNA-containing leukocytes. Issues of test specificity and sensitivity are discussed, as well as some of the ethical considerations that would have to be addressed if athletes' genotypes were to be used by anti-doping authorities to prevent, or detect, the use of prohibited ergogenic practices. PMID:23842898

  14. IR-based method for copper electrolysis short circuit detection

    NASA Astrophysics Data System (ADS)

    Makipaa, Esa; Tanttu, Juha T.; Virtanen, Henri

    1997-04-01

    In the copper electrorefining process, short-circuits between the anodes and cathodes are harmful: they reduce the production rate and degrade cathode copper quality. Short-circuits should be detected and eliminated as soon as possible. Manual inspection methods take a lot of time, and excessive walking on the electrodes cannot be avoided. For these reasons there is considerable interest in developing short-circuit detection and quality control. In this paper an IR-based method for short-circuit detection is presented. For a short-circuited anode and cathode pair, the cathode bar in particular becomes significantly warmer than a bar in normal condition. Using an IR camera mounted on a moving crane, these hot spots among the electrodes were easily detected. IR imaging was tested in the harsh conditions of the refinery hall at various crane speeds. Image processing is the tool used to interpret the obtained IR images. This paper proposes an algorithm for locating short-circuits in the electrolytic cell, using the imaging results as test material. The basic idea of the algorithm is first to find and calculate the necessary edges and initial lines of the electrolytic cell, and then to determine the exact position of each cathode plate in the cell so that, using thresholding, the location of the short-circuited cathode can be determined. IR imaging combined with image processing has proven superior to manual methods for predictive maintenance and process control in the copper electrorefining process. It also makes it possible to collect valuable information for quality control purposes.
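
    A minimal sketch of the final thresholding step (Python), assuming the cell edges and per-cathode column ranges have already been located in the IR frame:

        import numpy as np

        def short_circuited_cathodes(ir_image, cathode_columns, temp_threshold):
            # cathode_columns: list of (col_start, col_end) per cathode bar.
            return [k for k, (c0, c1) in enumerate(cathode_columns)
                    if ir_image[:, c0:c1].max() > temp_threshold]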

  15. High accuracy operon prediction method based on STRING database scores.

    PubMed

    Taboada, Blanca; Verde, Cristina; Merino, Enrique

    2010-07-01

    We present a simple and highly accurate computational method for operon prediction, based on intergenic distances and functional relationships between the protein products of contiguous genes, as defined by the STRING database (Jensen,L.J., Kuhn,M., Stark,M., Chaffron,S., Creevey,C., Muller,J., Doerks,T., Julien,P., Roth,A., Simonovic,M. et al. (2009) STRING 8-a global view on proteins and their functional interactions in 630 organisms. Nucleic Acids Res., 37, D412-D416). These two parameters were used to train a neural network on a subset of experimentally characterized Escherichia coli and Bacillus subtilis operons. Our predictive model was successfully tested on the set of experimentally defined operons in E. coli and B. subtilis, with accuracies of 94.6% and 93.3%, respectively. As far as we know, these are the highest accuracies ever obtained for predicting bacterial operons. Furthermore, in order to evaluate the predictive accuracy of our model when using one organism's data set for the training procedure and a different organism's data set for testing, we repeated the E. coli operon prediction analysis using a neural network trained with B. subtilis data, and a B. subtilis analysis using a neural network trained with E. coli data. Even in these cases, the accuracies reached with our method were outstandingly high, 91.5% and 93%, respectively. These results show the potential use of our method for accurately predicting the operons of any other organism. Our operon predictions for fully-sequenced genomes are available at http://operons.ibt.unam.mx/OperonPredictor/. PMID:20385580
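
    The two-feature classifier can be sketched as below (Python/scikit-learn); the training data here are made up, and the authors' actual network architecture and training sets differ:

        import numpy as np
        from sklearn.neural_network import MLPClassifier

        # Features per adjacent gene pair: intergenic distance (bp) and STRING
        # functional-association score; label 1 = same operon. Toy values only.
        X = np.array([[12, 850], [45, 700], [230, 150],
                      [400, 90], [5, 920], [310, 60]])
        y = np.array([1, 1, 0, 0, 1, 0])

        clf = MLPClassifier(hidden_layer_sizes=(4,), max_iter=2000, random_state=0)
        clf.fit(X, y)
        print(clf.predict([[30, 800]]))  # predicted membership for a new pair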

  16. Texture based feature extraction methods for content based medical image retrieval systems.

    PubMed

    Ergen, Burhan; Baykara, Muhammet

    2014-01-01

    The development of content-based image retrieval (CBIR) systems for image archiving is ongoing and remains an important research topic. Although some studies have addressed general image archiving, the CBIR systems proposed for archiving medical images are not very efficient. The present study examines the retrieval efficiency of spatial methods used for feature extraction in medical image retrieval systems. The investigated algorithms are based on the gray level co-occurrence matrix (GLCM), the gray level run length matrix (GLRLM), and Gabor wavelets, all accepted as spatial methods. In the experiments, a database was built containing hundreds of medical images, such as brain, lung, sinus, and bone. The results show that queries based on statistics obtained from the GLCM are satisfactory; however, the Gabor wavelet was observed to be the most effective and accurate method. PMID:25227014
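
    A sketch of GLCM-based feature extraction (Python/scikit-image, assuming the skimage >= 0.19 function names); these are the kinds of statistics the retrieval queries are built on:

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops

        def glcm_features(image_u8):
            # image_u8: 2-D uint8 grayscale image.
            glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            return {p: graycoprops(glcm, p).mean()
                    for p in ("contrast", "homogeneity", "energy", "correlation")}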

  17. An image fusion method based on biorthogonal wavelet

    NASA Astrophysics Data System (ADS)

    Li, Jianlin; Yu, Jiancheng; Sun, Shengli

    2008-03-01

    Image fusion processes and utilizes the source images, complementing different image information, to achieve a more objective and essential understanding of the identical object. Recently, image fusion has been extensively applied in many fields such as medical imaging, micro photographic imaging, remote sensing, and computer vision, as well as robotics. Various methods have been proposed in past years, such as pyramid decomposition and wavelet transform algorithms. Owing to its multi-resolution nature, the wavelet transform has been applied successfully in image processing. Another advantage of the wavelet transform is that it can be realized much more easily in hardware, because its data format is very simple; it can therefore save a lot of resources and, to some extent, address the real-time problem of fusing large volumes of image data. However, because the orthogonal wavelet filter does not have linear phase, phase distortion leads to distortion of image edges. To make up for this shortcoming, the biorthogonal wavelet is introduced here, and a novel image fusion scheme based on biorthogonal wavelet decomposition is presented. For the low-frequency and high-frequency wavelet decomposition coefficients, a local-area-energy-weighted-coefficient fusion rule is adopted, with different thresholds set for the low-frequency and high-frequency bands. Based on the biorthogonal wavelet transform and the traditional pyramid decomposition algorithm, an MMW image and a visible image are fused in the experiment. Compared with traditional pyramid decomposition, the biorthogonal-wavelet-based fusion scheme is more capable of retaining and extracting image information and compensating for edge distortion, so it has wide application potential.
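
    A minimal sketch of biorthogonal-wavelet fusion (Python/PyWavelets) on two registered, same-size images; for brevity it averages the low-frequency band and keeps the larger-magnitude detail coefficient, a simple stand-in for the local-area-energy-weighted rule described above:

        import numpy as np
        import pywt

        def fuse(img_a, img_b, wavelet="bior2.2", level=2):
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]  # low-frequency band: average
            for da, db in zip(ca[1:], cb[1:]):  # detail bands: max magnitude
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)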

  18. Post-Fragmentation Whole Genome Amplification-Based Method

    NASA Technical Reports Server (NTRS)

    Benardini, James; LaDuc, Myron T.; Langmore, John

    2011-01-01

    This innovation is derived from a proprietary amplification scheme that is based upon random fragmentation of the genome into a series of short, overlapping templates. The resulting shorter DNA strands (<400 bp) constitute a library of DNA fragments with defined 3′ and 5′ termini. Specific primers to these termini are then used to isothermally amplify this library into potentially unlimited quantities that can be used immediately for multiple downstream applications including gel electrophoresis, quantitative polymerase chain reaction (QPCR), comparative genomic hybridization microarray, SNP analysis, and sequencing. The standard reaction can be performed with minimal hands-on time and can produce amplified DNA in as little as three hours. Post-fragmentation whole genome amplification-based technology provides a robust and accurate method of amplifying femtogram levels of starting material into microgram yields with no detectable allele bias. The amplified DNA also facilitates the preservation of samples (spacecraft samples) by amplifying scarce amounts of template DNA into microgram concentrations in just a few hours. Based on further optimization, this could be a feasible technology for sample preservation in potential future sample return missions. The research and technology development described here can be pivotal in dealing with backward/forward biological contamination from planetary missions. Such efforts rely heavily on an increasing understanding of the burden and diversity of microorganisms present on spacecraft surfaces throughout assembly and testing. The development and implementation of these technologies could significantly improve the comprehensiveness and resolving power of spacecraft-associated microbial population censuses, and are important to the continued evolution and advancement of planetary protection capabilities. Current molecular procedures for assaying spacecraft-associated microbial burden and diversity have

  19. Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?

    ERIC Educational Resources Information Center

    Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.

    2011-01-01

    This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…

  20. Geomorphometry-based method of landform assessment for geodiversity

    NASA Astrophysics Data System (ADS)

    Najwer, Alicja; Zwoliński, Zbigniew

    2015-04-01

    Climate variability primarily induces variations in the intensity and frequency of surface processes and, consequently, principal changes in the landscape. As a result, abiotic heterogeneity may be threatened and key elements of natural diversity may even decay. The concept of geodiversity was created recently and has rapidly gained the approval of scientists around the world. However, problem recognition is still at an early stage, and little progress has been made concerning its assessment and geovisualisation. Geographical Information System (GIS) tools currently provide wide possibilities for studies of the Earth's surface. Very often, the main limitation in such analysis is the acquisition of geodata at an appropriate resolution. The main objective of this study was to develop a processing algorithm for landform geodiversity assessment using geomorphometric parameters, and to compare the final maps with those resulting from the thematic layers method. The study area consists of two distinctive valleys, characterized by diverse landscape units and complex geological settings: Sucha Woda in the Polish part of the Tatra Mts. and Wrzosowka in the Sudetes Mts. Both valleys are located in national park areas. The basis for the assessment is a proper selection of geomorphometric parameters with reference to the definition of geodiversity. Seven factor maps were prepared for each valley: General Curvature, Topographic Openness, Potential Incoming Solar Radiation, Topographic Position Index, Topographic Wetness Index, Convergence Index and Relative Heights. After data integration and the necessary geoinformation analysis, the next step, which carries a certain degree of subjectivity, is score classification of the input maps using an expert system and geostatistical analysis. The crucial point in generating the final maps of geodiversity by multi-criteria evaluation (MCE) with the GIS-based Weighted Sum technique is to assign appropriate weights for each factor map by
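
    The weighted-sum combination step can be sketched over normalized factor rasters (Python/NumPy); the weights are the expert-assigned values discussed above and are assumed to sum to 1:

        import numpy as np

        def geodiversity_map(factor_maps, weights):
            # factor_maps: list of equally sized score rasters; output in [0, 1].
            acc = np.zeros_like(factor_maps[0], dtype=float)
            for layer, w in zip(factor_maps, weights):
                lo, hi = np.nanmin(layer), np.nanmax(layer)
                acc += w * (layer - lo) / (hi - lo + 1e-12)
            return acc  # higher value = higher landform geodiversity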

  1. Efficient ray tracing algorithms based on wavefront construction and model based interpolation method

    NASA Astrophysics Data System (ADS)

    Lee, Kyoung Jin

    Understanding and modeling seismic wave propagation is important in regional and exploration seismology, and ray tracing is a powerful and popular method for this purpose. The wavefront construction (WFC) method handles wavefronts instead of individual rays, thereby maintaining proper ray density on the wavefront; by adaptively controlling rays over a wavefront, it models wave propagation efficiently. Algorithms for a quasi-P wave wavefront construction method and a new coordinate system used to generate the wavefront construction mesh are proposed and tested for numerical properties and modeling capabilities. Traveltimes, amplitudes, and other parameters, which can be used for seismic imaging such as migration and synthetic seismograms, are computed from the wavefront construction method. Modeling with the wavefront construction code is applied to anisotropic as well as isotropic media, and synthetic seismograms are computed using the wavefront construction method as a new way of generating synthetics. To incorporate layered velocity models, the model based interpolation (MBI) ray tracing method, designed to take advantage of the wavefront construction method as well as conventional ray tracing methods, is proposed, and experimental codes are developed for it. Many wavefront construction codes are limited to smoothed velocity models when handling complicated problems in layered velocity models, and conventional ray tracing methods suffer from the inability to control ray density during wave propagation. By interpolating the wavefront near model boundaries, it is possible to handle layered velocity models as well as overcome the ray density control problems of conventional methods. The test results reveal that this new method can be an effective modeling tool for accurate and efficient computation.

  2. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    NASA Astrophysics Data System (ADS)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.

  3. Prediction Method of Speech Recognition Performance Based on HMM-based Speech Synthesis Technique

    NASA Astrophysics Data System (ADS)

    Terashima, Ryuta; Yoshimura, Takayoshi; Wakita, Toshihiro; Tokuda, Keiichi; Kitamura, Tadashi

    We describe an efficient method that uses an HMM-based speech synthesis technique as a test pattern generator for evaluating the word recognition rate. The recognition rates of each word and speaker can be evaluated from the synthesized speech using this method. The parameter generation technique can be formulated as an algorithm that determines the speech parameter vector sequence O by maximizing P(O|Q,λ) given the model parameter λ and the state sequence Q, under a dynamic acoustic feature constraint. We conducted recognition experiments to illustrate the validity of the method. Approximately 100 speakers were used to train the speaker-dependent models for the speech synthesis used in these experiments, and the synthetic speech was generated as test patterns for the target speech recognizer. As a result, the recognition rate of the HMM-based synthesized speech shows a good correlation with the recognition rate of the actual speech. Furthermore, we find that our method can predict the speaker recognition rate with approximately 2% error on average. Therefore, evaluation of the speaker recognition rate can be performed automatically using the proposed method.

  4. Content based Image Retrieval based on Different Global and Local Color Histogram Methods: A Survey

    NASA Astrophysics Data System (ADS)

    Suhasini, Pallikonda Sarah; Sri Rama Krishna, K.; Murali Krishna, I. V.

    2016-06-01

    Different global and local color histogram methods for content-based image retrieval (CBIR) are investigated in this paper. The color histogram is a widely used descriptor for CBIR. The conventional method of extracting a color histogram is global: it misses spatial content, is less invariant to deformation and viewpoint changes, and results in a very large three-dimensional histogram corresponding to the color space used. To address these deficiencies, different global and local histogram methods have been proposed in recent research: ways of extracting local histograms to capture spatial correspondence, invariant color histograms to add deformation and viewpoint invariance, and fuzzy linking methods to reduce the size of the histogram. The color space and the distance metric used are vital in obtaining the color histogram. In this paper, the performance of CBIR based on different global and local color histograms is surveyed in three color spaces, namely RGB, HSV, and L*a*b*, and with three distance measures, Euclidean, quadratic, and histogram intersection, in order to choose an appropriate method for future research.
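
    For illustration, a global HSV histogram and the histogram-intersection similarity can be sketched as follows (Python); the bin counts and the [0, 1] RGB input range are assumptions:

        import colorsys
        import numpy as np

        def hsv_histogram(img_rgb, bins=(8, 4, 4)):
            # img_rgb: (H, W, 3) floats in [0, 1]; returns a normalized histogram.
            hsv = np.array([colorsys.rgb_to_hsv(*px)
                            for px in img_rgb.reshape(-1, 3)])
            hist, _ = np.histogramdd(hsv, bins=bins, range=((0, 1),) * 3)
            return hist.ravel() / hist.sum()

        def histogram_intersection(h1, h2):
            return np.minimum(h1, h2).sum()  # 1.0 means identical histograms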

  5. Knowledge Discovery from Climate Data using Graph-Based Methods

    NASA Astrophysics Data System (ADS)

    Steinhaeuser, K.

    2012-04-01

    Climate and Earth sciences have recently experienced a rapid transformation from a historically data-poor to a data-rich environment, thus bringing them into the realm of the Fourth Paradigm of scientific discovery - a term coined by the late Jim Gray (Hey et al. 2009), the other three being theory, experimentation and computer simulation. In particular, climate-related observations from remote sensors on satellites and weather radars, in situ sensors and sensor networks, as well as outputs of climate or Earth system models from large-scale simulations, provide terabytes of spatio-temporal data. These massive and information-rich datasets offer a significant opportunity for advancing climate science and our understanding of the global climate system, yet current analysis techniques are not able to fully realize their potential benefits. We describe a class of computational approaches, specifically from the data mining and machine learning domains, which may be novel to the climate science domain and can assist in the analysis process. Computer scientists have developed spatial and spatio-temporal analysis techniques for a number of years now, and many of them may be applicable and/or adaptable to problems in climate science. We describe a large-scale, NSF-funded project aimed at addressing climate science question using computational analysis methods; team members include computer scientists, statisticians, and climate scientists from various backgrounds. One of the major thrusts is in the development of graph-based methods, and several illustrative examples of recent work in this area will be presented.

  6. Ensemble-based methods for forecasting census in hospital units

    PubMed Central

    2013-01-01

    Background The ability to accurately forecast census counts in hospital departments has considerable implications for hospital resource allocation. In recent years several different methods have been proposed for forecasting census counts; however, many of these approaches do not use available patient-specific information. Methods In this paper we present an ensemble-based methodology for forecasting the census under a framework that simultaneously incorporates both (i) arrival trends over time and (ii) patient-specific baseline and time-varying information. The proposed model for predicting census has three components, namely: current census count, number of daily arrivals and number of daily departures. To model the number of daily arrivals, we use a seasonality-adjusted Poisson Autoregressive (PAR) model where the parameter estimates are obtained via conditional maximum likelihood. The number of daily departures is predicted by modeling the probability of departure from the census using logistic regression models that are adjusted for the amount of time spent in the census and incorporate both patient-specific baseline and time-varying covariate information. We illustrate our approach using neonatal intensive care unit (NICU) data collected at Women & Infants Hospital, Providence RI, which consists of 1001 consecutive NICU admissions between April 1st 2008 and March 31st 2009. Results Our results demonstrate statistically significant improvements in prediction accuracy for 3-, 5-, and 7-day census forecasts and increased precision of our forecasting model compared to a forecasting approach that ignores patient-specific information. Conclusions Forecasting models that utilize patient-specific baseline and time-varying information make the most of data typically available and have the capacity to substantially improve census forecasts. PMID:23721123
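
    The census recursion at the heart of the model can be sketched in one line (Python): the expected census tomorrow is today's census plus the PAR arrival mean minus the summed patient-specific departure probabilities. The inputs here are illustrative placeholders for the fitted models:

        import numpy as np

        def forecast_census(census_today, arrival_mean, departure_probs):
            # departure_probs: one logistic-regression probability per patient.
            return census_today + arrival_mean - np.sum(departure_probs)

        print(forecast_census(30, arrival_mean=3.2,
                              departure_probs=np.full(30, 0.08)))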

  7. Method of Heating a Foam-Based Catalyst Bed

    NASA Technical Reports Server (NTRS)

    Fortini, Arthur J.; Williams, Brian E.; McNeal, Shawn R.

    2009-01-01

    A method of heating a foam-based catalyst bed has been developed using silicon carbide as the catalyst support, owing to its readily accessible high surface area and the fact that it is oxidation-resistant and electrically conductive. The foam support may be resistively heated by passing an electric current through it. This allows the catalyst bed to be heated directly, requiring less power and reaching the desired temperature more quickly. Designed for heterogeneous catalysis, the method can be used by the petrochemical, chemical processing, and power-generating industries, as well as in automotive catalytic converters. Catalyst beds must be heated to a light-off temperature before they catalyze the desired reactions. This is typically done by heating the assembly that contains the catalyst bed, so the bed is heated indirectly; much of the power is wasted and/or lost to the surrounding environment, and excessive power is required. With the electrically heated catalyst bed, virtually all of the power is used to heat the support, and only a small fraction is lost to the surroundings. Although the light-off temperature of most catalysts is only a few hundred degrees Celsius, the electrically heated foam is able to achieve temperatures of 1,200 °C. Lower temperatures are achievable by supplying less electrical power to the foam. Furthermore, because of the foam's open-cell structure, the catalyst can be applied either directly to the foam ligaments or in the form of a catalyst-containing washcoat. This innovation would be very useful for heterogeneous catalysis where elevated temperatures are needed to drive the reaction.

  8. Using Corporate-Based Methods To Assess Technical Communication Programs.

    ERIC Educational Resources Information Center

    Faber, Brenton; Bekins, Linn; Karis, Bill

    2002-01-01

    Investigates methods of program assessment used by corporate learning sites and profiles value added methods as a way to both construct and evaluate academic programs in technical communication. Examines and critiques assessment methods from corporate training environments including methods employed by corporate universities and value added…

  9. Riding comfort optimization of railway trains based on pseudo-excitation method and symplectic method

    NASA Astrophysics Data System (ADS)

    Zhang, You-Wei; Zhao, Yan; Zhang, Ya-Hui; Lin, Jia-Hao; He, Xing-Wen

    2013-10-01

    This research develops an FEM-based riding comfort optimization approach for railway trains that considers the coupling effect of the vehicle-track system. To obtain its accurate dynamic response, the car body is modeled with finite elements, while the bogie frames and wheel-sets are idealized as rigid bodies. The differential equations of motion of the dynamic vehicle-track system are derived considering wheel-track interaction, in which the pseudo-excitation method and the symplectic mathematical method are effectively applied to simplify the calculation. Then, a min-max optimization approach is utilized to improve the train riding comfort, with the relevant parameters of the suspension structure adopted as design variables and 54 design points on the car floor chosen as estimation locations. The K-S function is applied to fit the objective function so that it is smooth and differentiable with good integrity. Analytical sensitivities of the K-S function are then derived to solve the optimization problem. Finally, the effectiveness of the proposed approach is demonstrated through numerical examples and some useful discussions are made.
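
    For reference, the standard Kreisselmeier-Steinhauser envelope has the form below (stated in common notation; the paper's exact constants are not given here). As the aggregation parameter ρ grows, KS approaches the worst-case comfort index over the m = 54 estimation points while remaining smooth and differentiable:

        \mathrm{KS}(f_1,\dots,f_m) \;=\; f_{\max} \;+\; \frac{1}{\rho}\,
        \ln\!\Big[\sum_{i=1}^{m} e^{\rho\,(f_i - f_{\max})}\Big],
        \qquad f_{\max} = \max_i f_i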

  10. 3D modeling method for computer animation based on modified weak structured light method

    NASA Astrophysics Data System (ADS)

    Xiong, Hanwei; Pan, Ming; Zhang, Xiangwei

    2010-11-01

    A simple and affordable 3D scanner is designed in this paper. Three-dimensional digital models play an increasingly important role in many fields, such as computer animation, industrial design, artistic design and heritage conservation. For many complex shapes, optical measurement systems are indispensable for acquiring 3D information. In the field of computer animation, such optical measurement devices are too expensive to be widely adopted, while precision is not as critical a factor in that setting. In this paper, a new cheap 3D measurement system is implemented based on modified weak structured light, using only a video camera, a light source and a straight stick rotating on a fixed axis. An ordinary weak structured light configuration requires one or two reference planes, and the shadows on these planes must be tracked in the scanning process, which destroys the convenience of this method. In the modified system, reference planes are unnecessary, and the size range of the scanned objects is expanded widely. A new calibration procedure is also realized for the proposed method, and a point cloud is obtained by analyzing the shadow strips on the object. A two-stage ICP algorithm is used to merge the point clouds from different viewpoints to get a full description of the object, and after a series of operations, a NURBS surface model is generated in the end. A complex toy bear is used to verify the efficiency of the method; errors range from 0.7783 mm to 1.4326 mm compared with the ground-truth measurement.

  11. SPRi-based adenovirus detection using a surrogate antibody method.

    PubMed

    Abadian, Pegah N; Yildirim, Nimet; Gu, April Z; Goluch, Edgar D

    2015-12-15

    Adenovirus infection, a waterborne viral disease, is one of the most prevalent causes of human morbidity in the world. Thus, methods for rapid detection of this infectious virus in the environment are urgently needed for public health protection. In this study, we developed a real-time, label-free SPRi-based biosensor for rapid, sensitive, and highly selective detection of adenoviruses. The sensing protocol consists of mixing the sample containing adenovirus with a predetermined concentration of adenovirus antibody. The mixture was filtered to remove the free antibodies from the sample. A secondary antibody, specific to the adenovirus antibody, was immobilized covalently onto the SPRi chip surface, and the filtrate was flowed over the sensor surface. When the free adenovirus antibodies bound to the surface-immobilized secondary antibodies, we observed this binding via changes in reflectivity. In this approach, a higher amount of adenovirus results in fewer free adenovirus antibodies and thus smaller reflectivity changes. A dose-response curve was generated, and the linear detection range was determined to be from 10 PFU/mL to 5000 PFU/mL with an R² value greater than 0.9. The results also showed that the developed biosensing system had high specificity towards adenovirus (less than 20% signal change when tested in a sample matrix containing rotavirus and lentivirus). PMID:26232675

  12. Method of predicting Splice Sites based on signal interactions

    PubMed Central

    Churbanov, Alexander; Rogozin, Igor B; Deogun, Jitender S; Ali, Hesham

    2006-01-01

    Background Predicting and properly ranking canonical splice sites (SSs) is a challenging problem in the bioinformatics and machine learning communities, and any progress in SS recognition will lead to better understanding of the splicing mechanism. We introduce several new approaches for combining a priori knowledge for improved SS detection. First, we design a new Bayesian SS sensor based on oligonucleotide counting. To further enhance prediction quality, we applied our new de novo motif detection tool MHMMotif to intronic ends and exons. We combine the elements found with the sensor information using a Naive Bayesian Network, as implemented in our new tool SpliceScan. Results According to our tests, the Bayesian sensor outperforms the contemporary Maximum Entropy sensor for 5' SS detection. We report a number of putative Exonic (ESE) and Intronic (ISE) Splicing Enhancers found by the MHMMotif tool. T-test statistics on mouse/rat intronic alignments indicate that the detected elements are on average more conserved than other oligos, which supports our assumption of their functional importance. The tool has been shown to outperform the SpliceView, GeneSplicer, NNSplice, Genio and NetUTR tools for the test set of human genes. SpliceScan outperforms all contemporary ab initio gene structural prediction tools on the set of 5' UTR gene fragments. Conclusion The designed methods have many attractive properties compared to existing approaches. The Bayesian sensor, MHMMotif program and SpliceScan tools are freely available on our web site. Reviewers This article was reviewed by Manyuan Long, Arcady Mushegian and Mikhail Gelfand. PMID:16584568
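
    A toy sketch of a Bayesian oligonucleotide-counting sensor (Python): a candidate site is scored by the summed log-likelihood ratio of its k-mers under Laplace-smoothed count tables from true sites versus decoys; the actual SpliceScan sensor is more elaborate:

        import numpy as np

        def log_odds_splice(seq, counts_true, counts_decoy, k=3):
            # counts_true/counts_decoy: dicts of k-mer counts from training sets.
            total_t = sum(counts_true.values())
            total_d = sum(counts_decoy.values())
            score = 0.0
            for i in range(len(seq) - k + 1):
                kmer = seq[i:i + k]
                p_t = (counts_true.get(kmer, 0) + 1) / (total_t + 4 ** k)
                p_d = (counts_decoy.get(kmer, 0) + 1) / (total_d + 4 ** k)
                score += np.log(p_t / p_d)
            return score  # positive favors a genuine splice site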

  13. New method of peptide cleavage based on Edman degradation.

    PubMed

    Bąchor, Remigiusz; Kluczyk, Alicja; Stefanowicz, Piotr; Szewczuk, Zbigniew

    2013-08-01

    A straightforward cleavage method for N-acylated peptides based on phenylthiohydantoin (PTH) formation is presented. The procedure can be applied to acid-stable resins, such as TentaGel HL-NH₂. We designed a cleavable linker that consists of a lysine residue with the α-amino group blocked by Boc, while the ε-amino group is used for peptide synthesis. After the peptide assembly is completed, the protecting groups on the peptide side chains are removed using trifluoroacetic acid, thus also liberating the α-amino group of the lysine in the linker. Then the reaction with phenyl isothiocyanate followed by acidolysis causes an efficient peptide release from the resin as a stable PTH derivative. Furthermore, the application of a fixed-charge tag in the form of a 2-(4-aza-1-azoniabicyclo[2.2.2]octylammonium)acetyl group increases ionization efficiency and reduces the detection limit, allowing ESI-MS/MS sequencing of peptides in the subfemtomolar range. The proposed strategy is compatible with standard conditions during one-bead-one-compound peptide library synthesis. The applicability of the developed strategy in combinatorial chemistry was confirmed using a small training library of α-chymotrypsin substrates. PMID:23690169

  14. A Monitoring Method Based on FBG for Concrete Corrosion Cracking

    PubMed Central

    Mao, Jianghong; Xu, Fangyuan; Gao, Qian; Liu, Shenglin; Jin, Weiliang; Xu, Yidong

    2016-01-01

    Corrosion cracking of reinforced concrete caused by chloride salt is one of the main determinants of structure durability. Monitoring the entire process of concrete corrosion cracking is critical for assessing the remaining life of the structure and determining whether maintenance is needed. Fiber Bragg Grating (FBG) sensing is a well-developed photoelectric monitoring technology that has been used on many projects. FBG can detect the quasi-distribution of strain and temperature under corrosive environments, and it is thus suitable for monitoring reinforced concrete cracking. Based on the mechanical principle that corrosion expansion is responsible for reinforced concrete cracking, a package design of FBG-based reinforced concrete cracking sensors was proposed and investigated in this study. The corresponding relationship between the grating wavelength and strain was calibrated by an equal strength beam test. The effectiveness of the proposed method was verified by an electrically accelerated corrosion experiment. The fiber grating sensing technology was able to track the corrosion expansion and corrosion cracking in real time and provided data to inform decision-making for the maintenance and management of the engineering structure. PMID:27428972
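
    The wavelength-strain calibration rests on the standard FBG sensitivity relation, stated here for reference (the actual calibration constants come from the equal strength beam test):

        \frac{\Delta\lambda_B}{\lambda_B} \;=\; (1 - p_e)\,\varepsilon \;+\; (\alpha + \xi)\,\Delta T

    where λ_B is the Bragg wavelength, p_e the effective photo-elastic coefficient (about 0.22 for silica fiber), ε the axial strain, and α and ξ the thermal-expansion and thermo-optic coefficients that govern the temperature cross-sensitivity.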

  15. Hyperspectral image-based methods for spectral diversity

    NASA Astrophysics Data System (ADS)

    Sotomayor, Alejandro; Medina, Ollantay; Chinea, J. D.; Manian, Vidya

    2015-05-01

    Hyperspectral images are an important tool to assess ecosystem biodiversity. To obtain more precise analyses of biodiversity indicators that agree with indicators obtained from field data, spectral diversity calculated from images has to be validated against field-based diversity estimates. Plant species richness is one of the most important indicators of biodiversity. This indicator can be measured in hyperspectral images under the Spectral Variation Hypothesis (SVH), which states that spectral heterogeneity is related to spatial heterogeneity and thus to species richness. The goal of this research is to capture spectral heterogeneity from hyperspectral images for a terrestrial neotropical forest site using the Vector Quantization (VQ) method and then use the result for prediction of plant species richness. The results are compared with those of Hierarchical Agglomerative Clustering (HAC). The process index is validated by calculating the Pearson correlation coefficient between the Shannon entropy from actual field data and the Shannon entropy computed from the images. One advantage of developing more accurate analysis tools would be the extension of the analysis to larger zones; multispectral imagery with lower spatial resolution has been evaluated as a prospective tool for spectral diversity.
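
    A sketch of the validation computation (Python/NumPy): Shannon entropy over VQ codeword assignments (or field species counts), then the Pearson correlation between image-based and field-based entropies across sites; the inputs are assumed per-site label arrays:

        import numpy as np

        def shannon_entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log(p))

        def validate(image_entropies, field_entropies):
            # Pearson correlation between per-site entropy estimates.
            return np.corrcoef(image_entropies, field_entropies)[0, 1]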

  16. Inter-Domain Redundancy Path Computation Methods Based on PCE

    NASA Astrophysics Data System (ADS)

    Hayashi, Rie; Oki, Eiji; Shiomoto, Kohei

    This paper evaluates three inter-domain redundancy path computation methods based on PCE (Path Computation Element). Some inter-domain paths carry traffic that must be assured of high-quality, high-reliability transfer, such as telephony over IP and premium virtual private networks (VPNs). It is therefore important to set up inter-domain redundancy paths, i.e., primary and secondary paths. The first scheme utilizes an existing protocol and the basic PCE implementation; it needs no extension or modification. In the second scheme, PCEs make a virtual shortest path tree (VSPT) considering the candidate primary paths that have corresponding secondary paths. The goal is to reduce blocking probability: corresponding secondary paths may be found more often after a primary path is decided, and no protocol extension is necessary. In the third scheme, PCEs make a VSPT considering all candidates of primary and secondary paths. Blocking probability is further decreased since all possible candidates are located, and the sum of primary and secondary path cost is reduced by choosing the pair with minimum cost among all path pairs. Numerical evaluations show that the second and third schemes offer only a few percent reduction in blocking probability and total path pair cost, while the overheads they impose (protocol revision and increases in the amount of calculation and information to be exchanged) are large. This suggests that the first scheme, the most basic and simple one, is the best choice.

  17. Agent-based method for distributed clustering of textual information

    DOEpatents

    Potok, Thomas E [Oak Ridge, TN; Reed, Joel W [Knoxville, TN; Elmore, Mark T [Oak Ridge, TN; Treadwell, Jim N [Louisville, TN

    2010-09-28

    A computer method and system for storing, retrieving and displaying information has a multiplexing agent (20) that calculates a new document vector (25) for a new document (21) to be added to the system and transmits the new document vector (25) to master cluster agents (22) and cluster agents (23) for evaluation. These agents (22, 23) perform the evaluation and return values upstream to the multiplexing agent (20) based on the similarity of the document to documents stored under their control. The multiplexing agent (20) then sends the document (21) and the document vector (25) to the master cluster agent (22), which then forwards it to a cluster agent (23) or creates a new cluster agent (23) to manage the document (21). The system also searches for stored documents according to a search query having at least one term and identifying the documents found in the search, and displays the documents in a clustering display (80) of similarity so as to indicate similarity of the documents to each other.
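
    The routing decision made by the agents can be sketched as cosine-similarity matching of the new document vector against cluster centroids (Python); the threshold and the centroid representation are assumptions for illustration, not the patent's specification:

        import numpy as np

        def route_document(doc_vec, centroids, threshold=0.8):
            sims = [np.dot(doc_vec, c) /
                    (np.linalg.norm(doc_vec) * np.linalg.norm(c))
                    for c in centroids]
            best = int(np.argmax(sims)) if sims else -1
            if best >= 0 and sims[best] >= threshold:
                return best  # hand the document to the existing cluster agent
            return len(centroids)  # signal: create a new cluster agent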

  19. A Novel SNPs Detection Method Based on Gold Magnetic Nanoparticles Array and Single Base Extension

    PubMed Central

    Li, Song; Liu, Hongna; Jia, Yingying; Deng, Yan; Zhang, Liming; Lu, Zhuoxuan; He, Nongyue

    2012-01-01

    To fulfill the increasing need for large-scale genetic research, a high-throughput and automated SNP genotyping method based on a gold magnetic nanoparticles (GMNPs) array and dual-color single base extension has been designed. After amplification of DNA templates, biotinylated extension primers were captured by streptavidin-coated gold magnetic nanoparticles (SA-GMNPs). Next, a solid-phase, dual-color single base extension (SBE) reaction with the specific biotinylated primer was performed directly on the surface of the GMNPs. Finally, a “bead array” was fabricated by spotting GMNPs with fluorophore on a clean glass slide, and the genotype of each sample was discriminated by scanning the “bead array”. The MTHFR gene C677T polymorphism of 320 individual samples was interrogated using this method; the signal/noise ratio for homozygous samples was over 12.33, while the signal/noise ratio for heterozygous samples was near 1. Compared with other dual-color hybridization-based genotyping methods, the method described here gives a higher signal/noise ratio, and SNP loci can be identified with a high level of confidence. This assay has the advantage of eliminating the need for background subtraction and allows direct analysis of the fluorescence values of the GMNPs to determine their genotypes without the usual procedures for purification and complexity reduction of PCR products. The application of this strategy to large-scale SNP studies simplifies the process and reduces the labor required to produce highly sensitive results while improving the potential for automation. PMID:23139724

  20. Kinetic theory based new upwind methods for inviscid compressible flows

    NASA Technical Reports Server (NTRS)

    Deshpande, S. M.

    1986-01-01

    Two new upwind methods, called the Kinetic Numerical Method (KNM) and the Kinetic Flux Vector Splitting (KFVS) method, for the solution of the Euler equations are presented. Both of these methods can be regarded as suitable moments of an upwind scheme for the solution of the Boltzmann equation, provided the distribution function is Maxwellian. This moment-method strategy leads to a unification of the Riemann approach and the pseudo-particle approach used earlier in the development of upwind methods for the Euler equations. A very important aspect of the moment-method strategy is that the new upwind methods satisfy the entropy condition because of the Boltzmann H-Theorem, and it suggests a possible way of extending the Total Variation Diminishing (TVD) principle within the framework of the H-Theorem. The ability of these methods to obtain accurate, wiggle-free solutions is demonstrated by applying them to two test problems.
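
    Schematically, the flux splitting amounts to taking half-range velocity moments of the Maxwellian (a sketch of the idea, not the paper's full derivation). For the 1D Euler equations,

        F^{\pm} = \int_{0}^{\infty}\!\!\int_{v \gtrless 0} v\,\Psi\, f_M(v, I)\; dv\, dI,
        \qquad \Psi = \left(1,\; v,\; \tfrac{1}{2}v^{2} + I\right)^{\!\top},

    where f_M is the Maxwellian distribution (with internal-energy variable I), so that F = F+ + F- recovers the full Euler flux and each half-flux is differenced in its own upwind direction.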

  1. "YFlag"--a single-base extension primer based method for gender determination.

    PubMed

    Allwood, Julia S; Harbison, Sally Ann

    2015-01-01

    Assigning the gender of a DNA contributor in forensic analysis is typically achieved using the amelogenin test. Occasionally, this test produces false-positive results due to deletions occurring on the Y chromosome. Here, a four-marker "YFlag" method is presented to infer gender using single-base extension primers to flag the presence (or absence) of Y-chromosome DNA within a sample to supplement forensic STR profiling. This method offers built-in redundancy, with a single marker being sufficient to detect the presence of male DNA. In a study using 30 male and 30 female individuals, detection of male DNA was achieved with c. 0.03 ng of male DNA. All four markers were present in male/female mixture samples despite the presence of excessive female DNA. In summary, the YFlag system offers a method that is reproducible, specific, and sensitive, making it suitable for forensic use to detect male DNA. PMID:25354446

  2. A fracture enhancement method based on the histogram equalization of eigenstructure-based coherence

    NASA Astrophysics Data System (ADS)

    Dou, Xi-Ying; Han, Li-Guo; Wang, En-Li; Dong, Xue-Hua; Yang, Qing; Yan, Gao-Han

    2014-06-01

    Eigenstructure-based coherence attributes are efficient and mature techniques for large-scale fracture detection. However, in horizontally bedded and continuous strata, buried fractures in high grayscale-value zones are difficult to detect. Furthermore, middle- and small-scale fractures in fractured zones, where migration image energies are usually not concentrated perfectly, are also hard to detect because of the fuzzy, clouded shadows owing to low grayscale values. A new fracture enhancement method combined with histogram equalization is proposed to solve these problems. With this method, the contrast between discontinuities and background in coherence images is increased, linear structures are highlighted by stepwise adjustment of the threshold of the coherence image, and fractures are detected at different scales. Application of the method shows that it can improve fracture recognition and detection accuracy.
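
    The histogram-equalization step at the core of the method can be sketched as follows — a minimal NumPy version operating on a coherence image with values in [0, 1]; the synthetic data are illustrative.

        # Histogram equalization of a coherence image: map values through
        # their empirical CDF so that contrast between discontinuities and
        # background is stretched (illustrative sketch).
        import numpy as np

        def equalize(coherence, bins=256):
            hist, edges = np.histogram(coherence.ravel(), bins=bins, range=(0.0, 1.0))
            cdf = hist.cumsum().astype(float)
            cdf /= cdf[-1]                    # normalized cumulative distribution
            idx = np.clip(np.digitize(coherence, edges[:-1]) - 1, 0, bins - 1)
            return cdf[idx]                   # equalized image, still in [0, 1]

        img = np.random.beta(8, 2, size=(64, 64))  # synthetic high-coherence field
        print(img.std(), equalize(img).std())      # contrast is stretched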

  3. Calibration-based NUC Method in Real-time Based on IRFPA

    NASA Astrophysics Data System (ADS)

    Sheng, Meng; Xie, Juntang; Fu, Ziyuan

    The non-uniformity of Infrared Focal Plane Arrays (IRFPA), which results from the limits of the detector materials and the manufacturing process, degrades the performance of staring IR imaging systems. To address this problem, non-uniformity correction (NUC) applied in real time is an important issue in IR imaging information processing systems. This paper introduces a method of non-uniformity correction. Considering the nonlinear character of the IRFPA response, a calibration-based polynomial NUC method is implemented in the hardware system. Compared with conventional NUC schemes, the polynomial method achieves better NUC performance and can be implemented in real time. The algorithm is designed for an FPGA hardware architecture, the Xilinx ML402 platform dedicated to video processing, which consists of A/D and D/A converters and a Virtex-4 FPGA on the motherboard. The polynomial method greatly reduces the non-uniformity in the infrared image, is implemented in real time, and offers the advantage of a wide dynamic range.
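
    A hedged sketch of the calibration-based polynomial NUC idea: fit a per-pixel polynomial from blackbody calibration frames, then evaluate it on each raw frame. Array shapes, the quadratic order, and all names are assumptions for illustration; the paper's FPGA implementation is not reproduced here.

        # Per-pixel polynomial NUC from blackbody calibration (sketch).
        import numpy as np

        def fit_nuc(cal_frames, cal_targets, order=2):
            """cal_frames: (K, H, W) raw responses at K blackbody levels;
            cal_targets: (K,) desired uniform output per level (K >= order+1).
            Returns per-pixel polynomial coefficients, shape (order+1, H, W)."""
            K, H, W = cal_frames.shape
            x = cal_frames.reshape(K, -1)
            coeffs = np.empty((order + 1, H * W))
            for p in range(H * W):                 # per-pixel least squares
                coeffs[:, p] = np.polyfit(x[:, p], cal_targets, order)
            return coeffs.reshape(order + 1, H, W)

        def apply_nuc(frame, coeffs):
            """Evaluate each pixel's polynomial on the raw frame (Horner's scheme)."""
            out = np.zeros_like(frame, dtype=float)
            for c in coeffs:                       # highest degree first
                out = out * frame + c
            return out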

  4. CHAPTER 7. BERYLLIUM ANALYSIS BY NON-PLASMA BASED METHODS

    SciTech Connect

    Ekechukwu, A

    2009-04-20

    The most common method of analysis for beryllium is inductively coupled plasma atomic emission spectrometry (ICP-AES). This method, along with inductively coupled plasma mass spectrometry (ICP-MS), is discussed in Chapter 6. However, other methods exist and have been used for different applications. These methods include spectroscopic, chromatographic, colorimetric, and electrochemical. This chapter provides an overview of beryllium analysis methods other than plasma spectrometry (inductively coupled plasma atomic emission spectrometry or mass spectrometry). The basic methods, detection limits and interferences are described. Specific applications from the literature are also presented.

  5. DO TIE LABORATORY BASED METHODS REALLY REFLECT FIELD CONDITIONS

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both interstitial waters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question ...

  6. DO TIE LABORATORY BASED ASSESSMENT METHODS REALLY PREDICT FIELD EFFECTS?

    EPA Science Inventory

    Sediment Toxicity Identification and Evaluation (TIE) methods have been developed for both porewaters and whole sediments. These relatively simple laboratory methods are designed to identify specific toxicants or classes of toxicants in sediments; however, the question of whethe...

  7. Alternative modeling methods for plasma-based Rf ion sources

    NASA Astrophysics Data System (ADS)

    Veitzer, Seth A.; Kundrapu, Madhusudhan; Stoltz, Peter H.; Beckwith, Kristian R. C.

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H- source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended MHD (gas dynamic and Hall MHD), and two-fluid MHD models. We show recent results on modeling the internal antenna H- ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim, and we demonstrate plasma temperature equilibration in two-temperature MHD models.

  8. Alternative modeling methods for plasma-based Rf ion sources.

    PubMed

    Veitzer, Seth A; Kundrapu, Madhusudhan; Stoltz, Peter H; Beckwith, Kristian R C

    2016-02-01

    Rf-driven ion sources for accelerators and many industrial applications benefit from detailed numerical modeling and simulation of plasma characteristics. For instance, modeling of the Spallation Neutron Source (SNS) internal antenna H(-) source has indicated that a large plasma velocity is induced near bends in the antenna where structural failures are often observed. This could lead to improved designs and ion source performance based on simulation and modeling. However, there are significant separations of time and spatial scales inherent to Rf-driven plasma ion sources, which makes it difficult to model ion sources with explicit, kinetic Particle-In-Cell (PIC) simulation codes. In particular, if both electron and ion motions are to be explicitly modeled, then the simulation time step must be very small, and total simulation times must be large enough to capture the evolution of the plasma ions, as well as extending over many Rf periods. Additional physics processes such as plasma chemistry and surface effects such as secondary electron emission increase the computational requirements in such a way that even fully parallel explicit PIC models cannot be used. One alternative method is to develop fluid-based codes coupled with electromagnetics in order to model ion sources. Time-domain fluid models can simulate plasma evolution, plasma chemistry, and surface physics models with reasonable computational resources by not explicitly resolving electron motions, which thereby leads to an increase in the time step. This is achieved by solving fluid motions coupled with electromagnetics using reduced-physics models, such as single-temperature magnetohydrodynamics (MHD), extended MHD (gas dynamic and Hall MHD), and two-fluid MHD models. We show recent results on modeling the internal antenna H(-) ion source for the SNS at Oak Ridge National Laboratory using the fluid plasma modeling code USim, and we demonstrate plasma temperature equilibration in two-temperature MHD models.

  9. An MRI-based parcellation method for the temporal lobe.

    PubMed

    Kim, J J; Crespo-Facorro, B; Andreasen, N C; O'Leary, D S; Zhang, B; Harris, G; Magnotta, V A

    2000-04-01

    The temporal lobe has long been a focus of attention with regard to the underlying pathology of several major psychiatric illnesses. Previous postmortem and imaging studies describing regional volume reductions or perfusion defects in temporal subregions have shown inconsistent findings, which are in part due to differences in the definition of the subregions and the methodology of measurement. The development of precise, reproducible parcellation systems on magnetic resonance images may help improve uniformity of results in volumetric MR studies and unravel the complex activation patterns seen in functional neuroimaging studies. The present study describes detailed guidelines for the parcellation of the temporal neocortex. It parcels the entire temporal neocortex into 16 subregions: temporal pole, Heschl's gyrus, planum temporale, planum polare, superior temporal gyrus (rostral and caudal), middle temporal gyrus (rostral, intermediate, and caudal), inferior temporal gyrus (rostral, intermediate, and caudal), occipitotemporal gyrus (rostral and caudal), and parahippocampal gyrus (rostral and caudal). Based upon topographic landmarks of individual sulci, every subregion was consecutively traced on a set of serial coronal slices. In spite of the huge variability of sulcal topography, the sulcal landmarks could be identified reliably due to the simultaneous display of three orthogonal (transaxial, coronal, and sagittal) planes, a triangulated gray matter isosurface, and a 3-D-rendered image. The reliability study showed that the temporal neocortex could be parceled successfully and reliably; the intraclass correlation coefficient for each subregion ranged from 0.62 to 0.99. Ultimately, this method will permit us to detect subtle morphometric impairments or to find abnormal patterns of functional activation in the temporal subregions that might reflect underlying neuropathological processes in psychiatric illnesses such as schizophrenia. PMID:10725184

  10. Usefulness and limits of biological dosimetry based on cytogenetic methods.

    PubMed

    Léonard, A; Rueff, J; Gerber, G B; Léonard, E D

    2005-01-01

    Damage from occupational or accidental exposure to ionising radiation is often assessed by monitoring chromosome aberrations in peripheral blood lymphocytes, and these procedures have, in several cases, assisted physicians in the management of irradiated persons. Thereby, circulating lymphocytes, which are in the G0 stage of the cell cycle, are stimulated with a mitogenic agent, usually phytohaemagglutinin, to replicate their DNA in vitro and enter cell division, and are then observed for abnormalities. Comparison with dose-response relationships obtained in vitro allows an estimate of exposure based on scoring: unstable aberrations by the conventional, well-established analysis of metaphases for chromosome abnormalities or for micronuclei; so-called stable aberrations by the classical G-banding (Giemsa-stain banding) technique or by the more recently developed fluorescent in situ hybridisation (FISH) method using fluorescent-labelled probes for centromeres and chromosomes. Three factors need to be considered in applying such biological dosimetry: (1) Radiation doses in the body are often inhomogeneous. A comparison of the distribution of the observed aberrations among cells with that expected from a normal Poisson distribution can allow conclusions to be made with regard to the inhomogeneity of exposure by means of the so-called contaminated Poisson distribution method; however, its application requires a sufficiently large number of aberrations, i.e. an exposure to a rather large dose at a high dose rate. (2) Exposure can occur at a low dose rate (e.g. from spread or lost radioactive sources), rendering a comparison with in vitro exposure hazardous. Dose-effect relationships of most aberrations that are scored, such as translocations, follow a square law. Repair intervening during exposure reduces the quadratic component with decreasing dose rate as exposure is spread over a longer period of time. No valid solution for this problem has yet been developed, although
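
    As an illustration of point (1), a quick screen for inhomogeneous exposure is the index of dispersion (variance-to-mean ratio) of aberration counts per cell, which is close to 1 under a homogeneous whole-body (Poisson) exposure; the counts below are made up.

        # Index of dispersion of aberration counts per cell: ~1 for Poisson,
        # >1 (overdispersion) suggests partial-body exposure. Made-up data.
        import numpy as np

        counts = np.array([0]*180 + [1]*12 + [2]*5 + [3]*3)  # dicentrics per cell
        mean = counts.mean()
        dispersion = counts.var(ddof=1) / mean
        print(f"mean = {mean:.3f}, variance/mean = {dispersion:.2f}")
        # dispersion >> 1 here, consistent with an inhomogeneous dose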

  11. A constrained optimization algorithm based on the simplex search method

    NASA Astrophysics Data System (ADS)

    Mehta, Vivek Kumar; Dasgupta, Bhaskar

    2012-05-01

    In this article, a robust method is presented for handling constraints with the Nelder and Mead simplex search method, which is a direct search algorithm for multidimensional unconstrained optimization. The proposed method is free from the limitations of previous attempts, which demand that the initial simplex be feasible or require a projection of infeasible points onto the nonlinear constraint boundaries. The method is tested on several benchmark problems and the results are compared with various evolutionary algorithms available in the literature. The proposed method is found to be competitive with the existing algorithms in terms of effectiveness and efficiency.
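
    The article's own constraint-handling scheme is not reproduced in this record; for contrast, the naive penalty-function baseline it improves upon can be run with SciPy's standard Nelder-Mead implementation. The objective, constraint, and penalty weight below are illustrative.

        # Baseline only: Nelder-Mead on a penalty-augmented objective.
        import numpy as np
        from scipy.optimize import minimize

        def objective(x):
            return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

        def constraint(x):                     # feasible when g(x) >= 0
            return x[0] - 2.0 * x[1] + 2.0

        def penalized(x, rho=1e3):
            g = constraint(x)
            return objective(x) + rho * min(g, 0.0) ** 2  # quadratic penalty

        res = minimize(penalized, x0=np.array([2.0, 0.0]), method="Nelder-Mead")
        print(res.x, constraint(res.x))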

  12. Computer based methods for thermodynamic analysis of materials processing

    NASA Astrophysics Data System (ADS)

    Kaufman, L.; Agren, J.

    1982-11-01

    A database is being developed for calculating binary, ternary and multicomponent phase diagrams for systems of interest in processing novel materials. Current applications cover zirconium-fluoride-based glasses for tunable-gap electro-optical applications, iron-aluminum-based alloys for high-temperature applications, and titanium carbonitride compounds for hard-metal coatings.

  13. Recent processing methods for preparing starch-based bioproducts

    Technology Transfer Automated Retrieval System (TEKTRAN)

    There is currently an intense interest in starch-based materials because of the low cost of starch, the replacement of dwindling petroleum-based resources with annually-renewable feedstocks, the biodegradability of starch-based products, and the creation of new markets for farm commodities. Non-trad...

  14. GPU-based parallel algorithm for blind image restoration using midfrequency-based methods

    NASA Astrophysics Data System (ADS)

    Xie, Lang; Luo, Yi-han; Bao, Qi-liang

    2013-08-01

    GPU-based general-purpose computing is a new branch of modern parallel computing, so the study of parallel algorithms specially designed for GPU hardware architecture is of great significance. In order to solve the problem of high computational complexity and poor real-time performance in blind image restoration, the midfrequency-based algorithm for blind image restoration was analyzed and improved in this paper. Furthermore, a midfrequency-based filtering method is also used to restore the image with hardly any recursion or iteration. Combining the algorithm with data intensiveness, data-parallel computing, and the GPU execution model of single instruction and multiple threads, a new parallel midfrequency-based algorithm for blind image restoration is proposed in this paper, which is suitable for stream computing on GPUs. In this algorithm, the GPU is utilized to accelerate the estimation of class-G point spread functions and midfrequency-based filtering. Aiming at better management of the GPU threads, the threads in a grid are scheduled according to the decomposition of the filtering data in the frequency domain, after the optimization of data access and the communication between the host and the device. The kernel parallelism structure is determined by the decomposition of the filtering data to ensure the transmission rate gets around the memory bandwidth limitation. The results show that, with the new algorithm, the operational speed is significantly increased and the real-time performance of image restoration is effectively improved, especially for high-resolution images.

  15. Comparing the Principle-Based SBH Maieutic Method to Traditional Case Study Methods of Teaching Media Ethics

    ERIC Educational Resources Information Center

    Grant, Thomas A.

    2012-01-01

    This quasi-experimental study at a Northwest university compared two methods of teaching media ethics, a class taught with the principle-based SBH Maieutic Method (n = 25) and a class taught with a traditional case study method (n = 27), with a control group (n = 21) that received no ethics training. Following a 16-week intervention, a one-way…

  16. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT

    PubMed Central

    Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automated ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been done between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT has the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965
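
    One common PCA + DWT fusion rule consistent with this description is sketched below for two registered images: PCA-derived weights fuse the approximation band, and a max-absolute rule fuses the detail bands. This shows the ingredients, not the authors' exact weighting.

        # PCA-weighted DWT fusion of two registered images (sketch).
        import numpy as np
        import pywt

        def pca_weights(a, b):
            """Weights from the leading eigenvector of the 2x2 covariance."""
            cov = np.cov(np.vstack([a.ravel(), b.ravel()]))
            w = np.abs(np.linalg.eigh(cov)[1][:, -1])  # largest-eigenvalue vector
            return w / w.sum()

        def fuse(img1, img2, wavelet="db2"):
            cA1, (cH1, cV1, cD1) = pywt.dwt2(img1, wavelet)
            cA2, (cH2, cV2, cD2) = pywt.dwt2(img2, wavelet)
            w1, w2 = pca_weights(cA1, cA2)
            cA = w1 * cA1 + w2 * cA2                   # PCA-weighted approximation
            details = tuple(np.where(np.abs(d1) >= np.abs(d2), d1, d2)
                            for d1, d2 in zip((cH1, cV1, cD1), (cH2, cV2, cD2)))
            return pywt.idwt2((cA, details), wavelet)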

  17. Fluence-based and microdosimetric event-based methods for radiation protection in space

    NASA Technical Reports Server (NTRS)

    Curtis, Stanley B.; Meinhold, C. B. (Principal Investigator)

    2002-01-01

    The National Council on Radiation Protection and Measurements (NCRP) has recently published a report (Report #137) that discusses various aspects of the concepts used in radiation protection and the difficulties in measuring the radiation environment in spacecraft for the estimation of radiation risk to space travelers. Two novel dosimetric methodologies, fluence-based and microdosimetric event-based methods, are discussed and evaluated, along with the more conventional quality factor/LET method. It was concluded that for the present, any reason to switch to a new methodology is not compelling. It is suggested that because of certain drawbacks in the presently-used conventional method, these alternative methodologies should be kept in mind. As new data become available and dosimetric techniques become more refined, the question should be revisited and that in the future, significant improvement might be realized. In addition, such concepts as equivalent dose and organ dose equivalent are discussed and various problems regarding the measurement/estimation of these quantities are presented.

  18. Hybrid Pixel-Based Method for Cardiac Ultrasound Fusion Based on Integration of PCA and DWT.

    PubMed

    Mazaheri, Samaneh; Sulaiman, Puteri Suhaiza; Wirza, Rahmita; Dimon, Mohd Zamrin; Khalid, Fatimah; Moosavi Tayebi, Rohollah

    2015-01-01

    Medical image fusion is the procedure of combining several images from one or multiple imaging modalities. In spite of numerous attempts at automated ventricle segmentation and tracking in echocardiography, the problem remains challenging due to low-quality images with missing anatomical details, speckle noise, and a restricted field of view. This paper presents a fusion method which particularly intends to increase the segment-ability of echocardiography features such as the endocardium and to improve the image contrast. In addition, it tries to expand the field of view, decrease the impact of noise and artifacts, and enhance the signal-to-noise ratio of the echo images. The proposed algorithm weights the image information according to an integration feature across all the overlapping images, using a combination of principal component analysis and discrete wavelet transform. For evaluation, a comparison has been done between the results of some well-known techniques and the proposed method, and different metrics are implemented to evaluate the performance of the proposed algorithm. It has been concluded that the presented pixel-based method based on the integration of PCA and DWT has the best result for the segment-ability of cardiac ultrasound images and better performance in all metrics. PMID:26089965

  19. Novel method of manufacturing hydrogen storage materials combining with numerical analysis based on discrete element method

    NASA Astrophysics Data System (ADS)

    Zhao, Xuzhe

    High-efficiency hydrogen storage methods are significant in the development of fuel cell vehicles. Finding a high-energy-density material for the fuel is key to the wide spread of fuel cell vehicles. The LiBH4 + MgH2 system is a strong candidate due to its high hydrogen storage density, and the reaction between the two is reversible. However, the LiBH4 + MgH2 system usually requires high temperature and hydrogen pressure for the hydrogen release and uptake reactions. In order to reduce these requirements, nanoengineering is a simple and efficient route to improve the thermodynamic properties and reduce the kinetic barrier of the reaction between LiBH4 and MgH2. Based on ab initio density functional theory (DFT) calculations, a previous study indicated that the reaction between LiBH4 and MgH2 can take place at temperatures near 200°C or below. However, the predictions have been shown to be inconsistent with many experiments. Our experiments are therefore the first to use ball milling with aerosol spraying (BMAS) to show that the reaction between LiBH4 and MgH2 can happen during high-energy ball milling at room temperature. Through this BMAS process we have clearly observed the formation of MgB2 and LiH during ball milling of MgH2 with simultaneous aerosol spraying of the LiBH4/THF solution. Aerosol nanoparticles from the LiBH4/THF solution lead to the formation of Li2B12H12 during the BMAS process. The Li2B12H12 formed then reacts with MgH2 in situ during ball milling to form MgB2 and LiH. Discrete element modeling (DEM) is a useful tool to describe the operation of various ball milling processes. EDEM is software based on DEM that predicts power consumption, liner and media wear, and mill output. In order to further improve the milling efficiency of the BMAS process, EDEM analyses are conducted for the complicated ball milling process. Milling speed and the balls' filling ratio inside the canister are considered as the variables that determine the milling efficiency. The average and maximum
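
    For context, the overall destabilization reaction commonly cited for this system is

        2\,\mathrm{LiBH_4} + \mathrm{MgH_2} \;\rightleftharpoons\; 2\,\mathrm{LiH} + \mathrm{MgB_2} + 4\,\mathrm{H_2},

    which is reversible and releases hydrogen at a high gravimetric density.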

  20. A genetic algorithm based method for docking flexible molecules

    SciTech Connect

    Judson, R.S.; Jaeger, E.P.; Treasurywala, A.M.

    1993-11-01

    The authors describe a computational method for docking flexible molecules into protein binding sites. The method uses a genetic algorithm (GA) to search the combined conformation/orientation space of the molecule to find low-energy conformations. Several techniques are described that increase the efficiency of the basic search method. These include the use of several interacting GA subpopulations or niches; the use of a growing algorithm that initially docks only a small part of the molecule; and the use of gradient minimization during the search. To illustrate the method, the authors dock Cbz-GlyP-Leu-Leu (ZGLL) into thermolysin. This system was chosen because a well-refined crystal structure is available and because another docking method had previously been tested on it. Their method is able to find conformations that lie physically close to, and in some cases lower in energy than, the crystal conformation in reasonable periods of time on readily available hardware.
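
    A toy sketch of the GA search loop over torsion angles is given below, with a placeholder energy function standing in for the docking score; the niching, growing, and gradient-minimization refinements described above are omitted.

        # Minimal GA over torsion angles with a placeholder energy (sketch).
        import random

        N_TORSIONS, POP, GENS = 6, 40, 100

        def energy(angles):                 # placeholder for the docking score
            return sum((a - 60.0) ** 2 for a in angles)

        def mutate(ind, rate=0.1):
            return [a + random.gauss(0, 15) if random.random() < rate else a
                    for a in ind]

        def crossover(p1, p2):
            cut = random.randrange(1, N_TORSIONS)
            return p1[:cut] + p2[cut:]

        pop = [[random.uniform(-180, 180) for _ in range(N_TORSIONS)]
               for _ in range(POP)]
        for _ in range(GENS):
            pop.sort(key=energy)
            elite = pop[: POP // 2]         # truncation selection
            pop = elite + [mutate(crossover(random.choice(elite),
                                            random.choice(elite)))
                           for _ in range(POP - len(elite))]
        print(min(map(energy, pop)))        # best energy found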

  1. A literature based method for identifying gene-disease connections.

    PubMed

    Adamic, Lada A; Wilkinson, Dennis; Huberman, Bernardo A; Adar, Eytan

    2002-01-01

    We present a statistical method that can swiftly identify, from the literature, sets of genes known to be associated with given diseases. It offers a comprehensive way to treat alias symbols, a statistical method for computing the relevance of the gene to the query, and a novel way to disambiguate gene symbols from other abbreviations. The method is illustrated by finding genes related to breast cancer. PMID:15838128

  2. Methods for Data-based Delineation of Spatial Regions

    SciTech Connect

    Wilson, John E.

    2012-10-01

    In data analysis, it is often useful to delineate or segregate areas of interest from the general population of data in order to concentrate further analysis efforts on smaller areas. Three methods are presented here for automatically generating polygons around spatial data of interest. Each method addresses a distinct data type. These methods were developed for and implemented in the sample planning tool called Visual Sample Plan (VSP). Method A is used to delineate areas of elevated values in a rectangular grid of data (raster). The data used for this method are spatially related. Although VSP uses data from a kriging process for this method, it will work for any type of data that is spatially coherent and appears on a regular grid. Method B is used to surround areas of interest characterized by individual data points that are congregated within a certain distance of each other; areas where data are “clumped” together spatially will be delineated (see the sketch below). Method C is used to recreate the original boundary in a raster of data that separated data values from non-values. This is useful when a rectangular raster of data contains non-values (missing data) indicating that those cells fell outside of some original boundary. If the original boundary is not delivered with the raster, this method will approximate the original boundary.
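
    A minimal sketch of Method B's buffer-and-union idea, using the shapely package; the points, radius, and package choice are illustrative assumptions, not VSP's implementation.

        # Surround "clumped" points: buffer each point and union overlapping
        # buffers into polygons (sketch; requires the shapely package).
        from shapely.geometry import Point
        from shapely.ops import unary_union

        points = [(0, 0), (1, 0.5), (0.8, 1.2), (10, 10)]  # illustrative data
        radius = 1.5                                       # congregation distance

        merged = unary_union([Point(x, y).buffer(radius) for x, y in points])
        # merged is a Polygon or MultiPolygon; each part delineates one clump
        for poly in getattr(merged, "geoms", [merged]):
            print(poly.bounds)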

  3. Disordered Speech Assessment Using Automatic Methods Based on Quantitative Measures

    NASA Astrophysics Data System (ADS)

    Gu, Lingyun; Harris, John G.; Shrivastav, Rahul; Sapienza, Christine

    2005-12-01

    Speech quality assessment methods are necessary for evaluating and documenting treatment outcomes of patients suffering from degraded speech due to Parkinson's disease, stroke, or other disease processes. Subjective methods of speech quality assessment are more accurate and more robust than objective methods but are time-consuming and costly. We propose a novel objective measure of speech quality assessment that builds on traditional speech processing techniques such as dynamic time warping (DTW) and the Itakura-Saito (IS) distortion measure. Initial results show that our objective measure correlates well with the more expensive subjective methods.
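
    The DTW alignment underlying the proposed measure can be sketched in a few lines — a textbook implementation for 1-D feature sequences; in practice the Itakura-Saito distortion would replace the absolute-difference cost used here.

        # Textbook dynamic time warping (DTW) distance between two sequences.
        import numpy as np

        def dtw(x, y):
            n, m = len(x), len(y)
            D = np.full((n + 1, m + 1), np.inf)
            D[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(x[i - 1] - y[j - 1])      # local distortion
                    D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
            return D[n, m]

        print(dtw([1, 2, 3, 4], [1, 1, 2, 3, 4]))  # 0.0: sequences align perfectly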

  4. Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling

    SciTech Connect

    Randall S. Seright

    2007-09-30

    This final technical progress report describes work performed from October 1, 2004, through May 16, 2007, for the project, 'Aperture-Tolerant, Chemical-Based Methods to Reduce Channeling'. We explored the potential of pore-filling gels for reducing excess water production from both fractured and unfractured production wells. Several gel formulations were identified that met the requirements--i.e., providing water residual resistance factors greater than 2,000 and ultimate oil residual resistance factors (F_rro) of 2 or less. Significant oil throughput was required to achieve low F_rro values, suggesting that gelant penetration into porous rock must be small (a few feet or less) for existing pore-filling gels to provide effective disproportionate permeability reduction. Compared with adsorbed polymers and weak gels, strong pore-filling gels can provide greater reliability and behavior that is insensitive to the initial rock permeability. Guidance is provided on where relative-permeability-modification/disproportionate-permeability-reduction treatments can be successfully applied for use in either oil or gas production wells. When properly designed and executed, these treatments can be successfully applied to a limited range of oilfield excessive-water-production problems. We examined whether gel rheology can explain behavior during extrusion through fractures. The rheology behavior of the gels tested showed a strong parallel to the results obtained from previous gel extrusion experiments. However, for a given aperture (fracture width or plate-plate separation), the pressure gradients measured during the gel extrusion experiments were much higher than anticipated from rheology measurements. Extensive experiments established that wall slip and first normal stress difference were not responsible for the pressure gradient discrepancy. To explain the discrepancy, we noted that the aperture for gel flow (for mobile gel wormholing through concentrated immobile

  5. Two novel pathway analysis methods based on a hierarchical model

    PubMed Central

    Evangelou, Marina; Dudbridge, Frank; Wernisch, Lorenz

    2014-01-01

    Motivation: Over the past few years several pathway analysis methods have been proposed for exploring and enhancing the analysis of genome-wide association data. Hierarchical models have been advocated as a way to integrate SNP and pathway effects in the same model, but their computational complexity has prevented them being applied on a genome-wide scale to date. Methods: We present two novel methods for identifying associated pathways. In the proposed hierarchical model, the SNP effects are analytically integrated out of the analysis, allowing computationally tractable model fitting to genome-wide data. The first method uses Bayes factors for calculating the effect of the pathways, whereas the second method uses a machine learning algorithm and adaptive lasso for finding a sparse solution of associated pathways. Results: The performance of the proposed methods was explored on both simulated and real data. The results of the simulation study showed that the methods outperformed some well-established association methods: the commonly used Fisher’s method for combining P-values and also the recently published BGSA. The methods were applied to two genome-wide association study datasets that aimed to find the genetic structure of platelet function and body mass index, respectively. The results of the analyses replicated the results of previously published pathway analysis of these phenotypes but also identified novel pathways that are potentially involved. Availability: An R package is under preparation. In the meantime, the scripts of the methods are available on request from the authors. Contact: marina.evangelou@cimr.cam.ac.uk Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:24123673
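
    The Fisher baseline mentioned in the results can be reproduced in a few lines with SciPy (the P-values below are illustrative).

        # Fisher's method for combining per-gene P-values into one pathway test.
        import numpy as np
        from scipy.stats import combine_pvalues, chi2

        pvals = np.array([0.02, 0.20, 0.01, 0.40])
        stat, p_combined = combine_pvalues(pvals, method="fisher")
        # Equivalent by hand: X^2 = -2 * sum(ln p) ~ chi2 with 2k degrees of freedom
        x2 = -2.0 * np.log(pvals).sum()
        print(p_combined, chi2.sf(x2, df=2 * len(pvals)))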

  6. Artificial Intelligence Methods: Challenge in Computer Based Polymer Design

    NASA Astrophysics Data System (ADS)

    Rusu, Teodora; Pinteala, Mariana; Cartwright, Hugh

    2009-08-01

    This paper deals with the use of Artificial Intelligence Methods (AI) in the design of new molecules possessing desired physical, chemical and biological properties. This is an important and difficult problem in the chemical, material and pharmaceutical industries. Traditional methods involve a laborious and expensive trial-and-error procedure, but computer-assisted approaches offer many advantages in the automation of molecular design.

  7. Method of characteristics - Based sensitivity calculations for international PWR benchmark

    SciTech Connect

    Suslov, I. R.; Tormyshev, I. V.; Komlev, O. G.

    2013-07-01

    A method to calculate the sensitivity of fractional-linear neutron flux functionals to transport equation coefficients is proposed. An implementation of the method on the basis of the MOC code MCCG3D is developed. Sensitivity calculations for fission intensity for the international PWR benchmark are performed. (authors)

  8. Hyperspectral anomaly detection method based on auto-encoder

    NASA Astrophysics Data System (ADS)

    Bati, Emrecan; Çalışkan, Akın; Koz, Alper; Alatan, A. A.

    2015-10-01

    A major drawback of most existing hyperspectral anomaly detection methods is the lack of an efficient background representation that can successfully adapt to the varying complexity of hyperspectral images. In this paper, we propose a novel anomaly detection method which represents hyperspectral scenes of different complexity with a state-of-the-art representation learning method, namely the auto-encoder. The proposed method first encodes the spectral image into a sparse code, then decodes the coded image, and finally assesses the coding error at each pixel as a measure of anomaly. A Predictive Sparse Decomposition auto-encoder is utilized in the proposed method due to its efficient joint learning of the encoding and decoding functions. The performance of the proposed anomaly detection method is tested on both visible-near infrared (VNIR) and long-wave infrared (LWIR) hyperspectral images and compared with a conventional anomaly detection method, namely the Reed-Xiaoli (RX) detector. The experiments have verified the superiority of the proposed anomaly detection method in terms of receiver operating characteristics (ROC) performance.
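
    A hedged sketch of the anomaly rule: train an auto-encoder on the scene's spectra and score each pixel by its reconstruction error. A generic MLP auto-encoder stands in for Predictive Sparse Decomposition here, and the data are synthetic.

        # Reconstruction-error anomaly scoring with a stand-in auto-encoder.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        X = np.random.rand(5000, 120)        # pixels x spectral bands (synthetic)
        X[:10] += 2.0                        # a few anomalous spectra

        ae = MLPRegressor(hidden_layer_sizes=(16,), max_iter=300,
                          random_state=0)    # narrow bottleneck layer
        ae.fit(X, X)                         # learn to reconstruct the background
        errors = np.linalg.norm(X - ae.predict(X), axis=1)  # per-pixel score
        print(np.argsort(errors)[-10:])      # highest-error pixels flagged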

  9. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  10. Segment-based vs. element-based integration for mortar methods in computational contact mechanics

    NASA Astrophysics Data System (ADS)

    Farah, Philipp; Popp, Alexander; Wall, Wolfgang A.

    2015-01-01

    Mortar finite element methods provide a very convenient and powerful discretization framework for geometrically nonlinear applications in computational contact mechanics, because they allow for a variationally consistent treatment of contact conditions (mesh tying, non-penetration, frictionless or frictional sliding) despite the fact that the underlying contact surface meshes are non-matching and possibly also geometrically non-conforming. However, one of the major issues with regard to mortar methods is the design of adequate numerical integration schemes for the resulting interface coupling terms, i.e. curve integrals for 2D contact problems and surface integrals for 3D contact problems. The way mortar integration is performed crucially influences the accuracy of the overall numerical procedure as well as the computational efficiency of contact evaluation. Basically, two different types of mortar integration schemes, which will be termed segment-based integration and element-based integration here, can be found predominantly in the literature. While almost the entire existing literature focuses on either of the two mentioned mortar integration schemes without questioning this choice, the intention of this paper is to provide a comprehensive and unbiased comparison. The theoretical aspects covered here include the choice of integration rule, the treatment of boundaries of the contact zone, higher-order interpolation and frictional sliding. Moreover, a new hybrid scheme is proposed, which beneficially combines the advantages of segment-based and element-based mortar integration. Several numerical examples are presented for a detailed and critical evaluation of the overall performance of the different schemes within several well-known benchmark problems of computational contact mechanics.

  11. Analysis of an Assessment Method for Problem-Based Learning

    ERIC Educational Resources Information Center

    Acar, B. Serpil

    2004-01-01

    The paper commences by briefly introducing the systems engineering programme, then focuses on the "systems" module, which requires the first-year students to undertake a number of "open-ended" projects. During the problem-based learning (PBL) based projects the students are expected to combine creativity and the knowledge they acquire during the…

  12. The research of positioning methods based on Internet of Things

    NASA Astrophysics Data System (ADS)

    Zou, Dongyao; Liu, Jia; Sun, Hui; Li, Nana; Han, Xueqin

    2013-03-01

    With the advent of the Internet of Things era, more and more applications require location-based services. This article describes the concepts and basic principles of several Internet of Things positioning technologies, such as GPS positioning, base-station positioning, and ZigBee positioning. The advantages and disadvantages of these positioning technologies are then compared.

  13. Research-Based Methods of Reading Instruction, Grades K-3

    ERIC Educational Resources Information Center

    Vaughn, Sharon; Linan-Thompson, Sylvia

    2004-01-01

    Get your reading program on the right track using research-based teaching strategies from this helpful guide. Learn what you need to know about five essential elements of reading, why you should teach them, and how. A treasure chest of research-based instructional activities helps you: (1) Build students' phonemic awareness; (2) Teach phonics and…

  14. WormBase: methods for data mining and comparative genomics.

    PubMed

    Harris, Todd W; Stein, Lincoln D

    2006-01-01

    WormBase is a comprehensive repository for information on Caenorhabditis elegans and related nematodes. Although the primary web-based interface of WormBase (http://www.wormbase.org/) is familiar to most C. elegans researchers, WormBase also offers powerful data-mining features for addressing questions of comparative genomics, genome structure, and evolution. In this chapter, we focus on data mining at WormBase through the use of flexible web interfaces, custom queries, and scripts. The intended audience includes users wishing to query the database beyond the confines of the web interface or fetch data en masse. No knowledge of programming is necessary or assumed, although users with intermediate skills in the Perl scripting language will be able to utilize additional data-mining approaches. PMID:16988424

  15. Financial time series analysis based on information categorization method

    NASA Astrophysics Data System (ADS)

    Tian, Qiang; Shang, Pengjian; Feng, Guochen

    2014-12-01

    The paper applies the information categorization method to analyze financial time series. The method is used to examine the similarity of different sequences by calculating the distances between them. We apply this method to quantify the similarity of different stock markets, and we report the results of similarity between US and Chinese stock markets in the periods 1991-1998 (before the Asian currency crisis), 1999-2006 (after the Asian currency crisis and before the global financial crisis), and 2007-2013 (during and after the global financial crisis). The results show the difference in similarity between the stock markets in different time periods, and that the similarity of the two stock markets became larger after these two crises. We also obtain similarity results for 10 stock indices in three areas; the method can distinguish different areas' markets from the phylogenetic trees. The results show that we can obtain satisfactory information from financial markets by this method. The information categorization method can be used not only on physiologic time series, but also on financial time series.

  16. A comparison between boat-based and diver-based methods for quantifying coral bleaching

    USGS Publications Warehouse

    Zawada, David G.; Ruzicka, Rob; Colella, Michael A.

    2015-01-01

    Recent increases in both the frequency and severity of coral bleaching events have spurred numerous surveys to quantify the immediate impacts and monitor the subsequent community response. Most of these efforts utilize conventional diver-based methods, which are inherently time-consuming, expensive, and limited in spatial scope unless they deploy large teams of scientifically-trained divers. In this study, we evaluated the effectiveness of the Along-Track Reef Imaging System (ATRIS), an automated image-acquisition technology, for assessing a moderate bleaching event that occurred in the summer of 2011 in the Florida Keys. More than 100,000 images were collected over 2.7 km of transects spanning four patch reefs in a 3-h period. In contrast, divers completed 18, 10-m long transects at nine patch reefs over a 5-day period. Corals were assigned to one of four categories: not bleached, pale, partially bleached, and bleached. The prevalence of bleaching estimated by ATRIS was comparable to the results obtained by divers, but only for corals > 41 cm in size. The coral size-threshold computed for ATRIS in this study was constrained by prevailing environmental conditions (turbidity and sea state) and, consequently, needs to be determined on a study-by-study basis. Both ATRIS and diver-based methods have innate strengths and weaknesses that must be weighed with respect to project goals.

  17. Method for producing iron-based acid catalysts

    SciTech Connect

    Farcasiu, M.; Kathrein, H.; Kaufman, P.B.; Diehl, J.R.

    1998-04-01

    A method for preparing an acid catalyst with a long shelf-life is described. Crystalline iron oxides are doped with lattice compatible metals which are heated with halogen compounds at elevated temperatures.

  18. Research iris serial images quality assessment method based on HVS

    NASA Astrophysics Data System (ADS)

    Li, Zhi-hui; Zhang, Chang-hai; Ming, Xing; Zhao, Yong-hua

    2006-01-01

    Iris recognition can be widely used in security and customs applications, and it provides superior security to other human-feature recognition approaches such as fingerprint or face recognition. Iris image quality is crucial to recognition performance, so reliable image quality assessment is necessary for evaluating iris images. However, there is no uniform criterion for image quality assessment. Image quality assessment comprises objective and subjective evaluation methods; in practice, however, subjective evaluation is cumbersome and not effective for iris recognition, so objective evaluation should be used instead. Based on the multi-scale and selectivity characteristics of the human visual system (HVS) model, a new iris image quality assessment method is presented. In this paper, the ROI is found, wavelet transform zero-crossings are used to find multi-scale edges, and a multi-scale fusion measure is used to assess iris image quality. In the experiments, objective and subjective evaluation methods are both used to assess iris images. The results show that the method is effective for iris image quality assessment.

  19. Visual tracking method based on cuckoo search algorithm

    NASA Astrophysics Data System (ADS)

    Gao, Ming-Liang; Yin, Li-Ju; Zou, Guo-Feng; Li, Hai-Tao; Liu, Wei

    2015-07-01

    Cuckoo search (CS) is a new meta-heuristic optimization algorithm based on the obligate brood-parasitic behavior of some cuckoo species in combination with the Lévy flight behavior of some birds and fruit flies. It has been found to be efficient in solving global optimization problems. An application of CS to the visual tracking problem is presented. The relationship between optimization and visual tracking is comparatively studied, and the sensitivity and adjustment of the CS parameters in the tracking system are experimentally studied. To demonstrate the tracking ability of a CS-based tracker, a comparative study of the tracking accuracy and speed of the CS-based tracker against six state-of-the-art trackers, namely, particle filter, meanshift, PSO, ensemble tracker, fragments tracker, and compressive tracker, is presented. Comparative results show that the CS-based tracker outperforms the other trackers.
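
    The Lévy flight step that drives CS is commonly generated with Mantegna's algorithm, sketched below. This is a standard formulation; the step scale 0.01 and beta = 1.5 are typical choices, not parameters taken from this paper.

        # Mantegna's algorithm for the Lévy-flight step in cuckoo search.
        import numpy as np
        from math import gamma, sin, pi

        def levy_step(dim, beta=1.5):
            sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
                       (gamma((1 + beta) / 2) * beta *
                        2 ** ((beta - 1) / 2))) ** (1 / beta)
            u = np.random.normal(0, sigma_u, dim)
            v = np.random.normal(0, 1, dim)
            return u / np.abs(v) ** (1 / beta)   # heavy-tailed step vector

        current = np.array([0.5, 0.5])           # illustrative nest position
        best = np.array([0.2, 0.8])              # illustrative best nest
        new_position = current + 0.01 * levy_step(2) * (current - best)
        print(new_position)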

  20. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    SciTech Connect

    Paganelli, Chiara; Peroni, Marta

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast.Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets.Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  1. New approaches to fertility awareness-based methods: incorporating the Standard Days and TwoDay Methods into practice.

    PubMed

    Germano, Elaine; Jennings, Victoria

    2006-01-01

    Helping clients select and use appropriate family planning methods is a basic component of midwifery care. Many women prefer nonhormonal, nondevice methods, and may be interested in methods that involve understanding their natural fertility. Two new fertility awareness-based methods, the Standard Days Method and the TwoDay Method, meet the need for effective, easy-to-provide, easy-to-use approaches. The Standard Days Method is appropriate for women with most menstrual cycles between 26 and 32 days long. Women using this method are taught to avoid unprotected intercourse on potentially fertile days 8 through 19 of their cycles to prevent pregnancy. They use CycleBeads, a color-coded string of beads representing the menstrual cycle, to monitor their cycle days and cycle lengths. The Standard Days Method is more than 95% effective with correct use. The TwoDay Method is based on the presence or absence of cervical secretions to identify fertile days. To use this method, women are taught to note every day whether they have secretions. If they had secretions on the current day or the previous day, they consider themselves fertile. The TwoDay Method is 96% effective with correct use. Both methods fit well into midwifery practice. PMID:17081938

  2. A multi-method review of home-based chemotherapy.

    PubMed

    Evans, J M; Qiu, M; MacKinnon, M; Green, E; Peterson, K; Kaizer, L

    2016-09-01

    This study summarises research- and practice-based evidence on home-based chemotherapy, and explores existing delivery models. A three-pronged investigation was conducted consisting of a literature review and synthesis of 54 papers, a review of seven home-based chemotherapy programmes spanning four countries, and two case studies within the Canadian province of Ontario. The results support the provision of home-based chemotherapy as a safe and patient-centred alternative to hospital- and outpatient-based service. This paper consolidates information on home-based chemotherapy programmes including services and drugs offered, patient eligibility criteria, patient views and experiences, delivery structures and processes, and common challenges. Fourteen recommendations are also provided for improving the delivery of chemotherapy in patients' homes by prioritising patient-centredness, provider training and teamwork, safety and quality of care, and programme management. The results of this study can be used to inform the development of an evidence-informed model for the delivery of chemotherapy and related care, such as symptom management, in patients' homes. PMID:26545409

  3. [Problem-based learning, description of a pedagogical method leading to evidence-based medicine].

    PubMed

    Chalon, P; Delvenne, C; Pasleau, F

    2000-04-01

    Problem-Based Learning is an educational method which uses health care scenarios to provide a context for learning and to elaborate knowledge through discussion. Additional expectations are to stimulate critical thinking and problem-solving skills, and to develop clinical reasoning taking into account the patient's psychosocial environment and preferences, the economic requirements, as well as the best evidence from biomedical research. Appearing at the end of the 1960s, it has been adopted by 10% of medical schools worldwide. PBL follows the same rules as Evidence-Based Medicine but is student-centered and provides the information-seeking skills necessary for self-directed lifelong learning. In this short article, we review the theoretical basis and process of PBL, emphasizing the teacher-student relationship and discussing the suggested advantages and disadvantages of this curriculum. Students in PBL programs make greater use of self-selected references and online searching. From this point of view, PBL strengthens the role of health libraries in medical education and prepares the future physician for Evidence-Based Medicine. PMID:10909306

  4. Comparison of sequencing-based methods to profile DNA methylation and identification of monoallelic epigenetic modifications

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analysis of DNA methylation patterns relies increasingly on sequencing-based profiling methods. The four most frequently used sequencing-based technologies are the bisulfite-based methods MethylC-seq and reduced representation bisulfite sequencing (RRBS), and the enrichment-based techniques methylat...

  5. Krylov subspace iterative methods for boundary element method based near-field acoustic holography.

    PubMed

    Valdivia, Nicolas; Williams, Earl G

    2005-02-01

    The reconstruction of the acoustic field for general surfaces is obtained from the solution of a matrix system that results from a boundary integral equation discretized using boundary element methods. The solution to the resultant matrix system is obtained using iterative regularization methods that counteract the effect of noise on the measurements. These methods do not require the calculation of the singular value decomposition, which can be expensive when the matrix system is considerably large. Krylov subspace methods are iterative methods that exhibit the phenomenon known as "semi-convergence," i.e., the optimal regularization solution is obtained after a few iterations. If the iteration is not stopped, the method converges to a solution that generally is totally corrupted by errors in the measurements. For these methods the number of iterations plays the role of the regularization parameter. We focus our attention on the study of the regularizing properties of Krylov subspace methods such as conjugate gradients, least squares QR, and the recently proposed Hybrid method. A discussion and comparison of the available stopping rules are included. A vibrating plate is considered as an example to validate our results. PMID:15759691
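
    The "iterations as regularization parameter" behavior can be demonstrated with SciPy's LSQR on an ill-conditioned stand-in for the BEM transfer matrix; all data below are synthetic.

        # Semi-convergence of LSQR: cap the iteration count instead of an SVD.
        import numpy as np
        from scipy.sparse.linalg import lsqr

        rng = np.random.default_rng(0)
        U, _ = np.linalg.qr(rng.standard_normal((200, 100)))
        V, _ = np.linalg.qr(rng.standard_normal((100, 100)))
        s = 10.0 ** np.linspace(0, -6, 100)   # rapidly decaying singular values
        A = (U * s) @ V.T                     # ill-conditioned stand-in matrix
        x_true = rng.standard_normal(100)
        b = A @ x_true + 1e-4 * rng.standard_normal(200)  # noisy measurements

        for k in (3, 10, 100):                # iteration cap = regularization knob
            x_k = lsqr(A, b, iter_lim=k)[0]
            # error typically dips, then grows again as noise is fitted
            print(k, np.linalg.norm(x_k - x_true))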

  6. A speaker change detection method based on coarse searching

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-yuan; He, Qian-hua; Li, Yan-xiong; He, Jun

    2013-03-01

    The conventional speaker change detection (SCD) method using the Bayesian Information Criterion (BIC) has been widely used. However, its performance relies on the choice of penalty factor, and it suffers from heavy computation. The two-step SCD is less time consuming but generates more detection errors. The performance limitation of the conventional method originates from its use of two directly adjacent data windows. We propose a strategy that inserts an interval between the two adjacent fixed-size data windows in each analysis window. The dissimilarity value between the data windows is regarded as the probability of a speaker identity change within the interval area. This analysis window is then slid along the audio in large steps to locate the areas where speaker change points may appear. Afterwards we focus only on these areas and locate precisely where the change points are. Other areas, where a speaker change point is unlikely to appear, are abandoned. The proposed method is computationally efficient and more robust to noise and to the penalty factor than the conventional method. Evaluated on the corpus of China Central Television (CCTV) news, the proposed method obtains a 74.18% reduction in calculation time and a 22.24% improvement in F1-measure compared with the conventional approach.
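
    The following sketch illustrates the coarse-search idea under stated assumptions: a ΔBIC-style Gaussian dissimilarity (a common choice; the paper's exact measure may differ) is evaluated between two fixed-size windows separated by an interval, and the analysis window slides in large steps to flag candidate interval areas. The feature dimension, window sizes, and threshold are all illustrative.

```python
import numpy as np

def window_dissimilarity(X, Y):
    """dBIC-style dissimilarity between two feature windows, each modeled
    as a single diagonal-covariance Gaussian: the log-likelihood gain of
    modeling the windows separately rather than jointly."""
    def ll(W):  # maximized log-likelihood of a diagonal Gaussian fit
        var = W.var(axis=0) + 1e-8
        return -0.5 * len(W) * np.sum(np.log(2 * np.pi * var) + 1.0)
    return ll(X) + ll(Y) - ll(np.vstack([X, Y]))

def coarse_scan(features, win=100, gap=50, step=60, thresh=200.0):
    """Slide an analysis window [win | gap | win] along the feature stream
    in large steps; windows whose dissimilarity exceeds the threshold mark
    interval areas that may contain a speaker change point."""
    hits = []
    for s in range(0, len(features) - (2 * win + gap), step):
        X = features[s:s + win]
        Y = features[s + win + gap:s + 2 * win + gap]
        if window_dissimilarity(X, Y) > thresh:
            hits.append((s + win, s + win + gap))  # candidate interval area
    return hits

# Toy demo: two synthetic "speakers" with shifted 12-dim feature means.
rng = np.random.default_rng(1)
feats = np.vstack([rng.normal(0.0, 1, (500, 12)),
                   rng.normal(1.5, 1, (500, 12))])
print(coarse_scan(feats))  # flagged areas cluster around frame 500
```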

  7. Simple method to verify OPC data based on exposure condition

    NASA Astrophysics Data System (ADS)

    Moon, James; Ahn, Young-Bae; Oh, Sey-Young; Nam, Byung-Ho; Yim, Dong Gyu

    2006-03-01

    In a world where sub-100nm lithography tools are everyday equipment for device makers, devices are shrinking at a rate few ever imagined, and the demands placed on Optical Proximity Correction (OPC) are greater than ever before. To keep pace with this shrink rate, more aggressive OPC tactics are required. Aggressive OPC tactics are a must for sub-100nm lithography, but they in turn leave greater room for OPC error and increase the complexity of the OPC data. Until now, Optical Rule Check (ORC) or Design Rule Check (DRC) was used to verify this complex OPC data, and each of these methods has its pros and cons. ORC verification of OPC data is accurate with respect to the process, but inspection of a full-chip device demands a large investment (computers, software, ...) and patience (run time). DRC has no such disadvantage, but its verification accuracy with respect to the process is poor. In this study, we created a new method for OPC data verification that combines the best of both the ORC and DRC verification methods: it inspects the biasing of the OPC data with respect to the illumination condition of the process involved. This new verification method was applied to the 80nm-technology ISOLATION and GATE layers of a 512M DRAM device and showed accuracy equivalent to ORC inspection with the run time of DRC verification.

  8. Seamless Method- and Model-based Software and Systems Engineering

    NASA Astrophysics Data System (ADS)

    Broy, Manfred

    Today, engineering software-intensive systems is still more or less a handicraft or, at best, at the level of manufacturing. Many steps are done ad hoc and not in a fully systematic way. Applied methods, if any, are not scientifically justified or supported by empirical data, and as a result carrying out large software projects is still an adventure. However, there is no reason why the development of software-intensive systems cannot in the future be done with the same precision and scientific rigor as in established engineering disciplines. To achieve that, however, a number of scientific and engineering challenges have to be mastered. The first aims at a deep understanding of the essentials of carrying out such projects, which includes appropriate models and effective management methods. What is needed is a portfolio of models and methods, together with comprehensive tool support, deep insight into the obstacles of developing software-intensive systems, and a portfolio of established and proven techniques and methods with clear profiles and rules that indicate when which method is ready for application. In the following we argue that there is enough scientific evidence and enough research results so far to be confident that solid engineering of software-intensive systems can be achieved in the future. However, quite a number of scientific research problems still have to be solved.

  9. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    PubMed Central

    Liu, Miao; Sun, Changku; Huang, Shujun; Zhang, Zonghua

    2015-01-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision performance and cannot be ignored. Existing methods to calibrate the projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method avoids most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems. PMID:26492247
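
    To make the polynomial distortion representation concrete, here is a minimal least-squares fit of a 2D polynomial distortion field to detected pixel coordinates. The polynomial order, the synthetic barrel distortion, and all names are assumptions for illustration, not the paper's calibration pipeline.

```python
import numpy as np

def poly_terms(x, y, order=3):
    """All monomials x^i * y^j with i + j <= order (a polynomial
    distortion representation; order 3 is an assumption here)."""
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def fit_distortion(ideal, measured, order=3):
    """Least-squares fit mapping ideal coordinates to the measured
    (distorted) coordinates detected, e.g., by photodiodes."""
    A = poly_terms(ideal[:, 0], ideal[:, 1], order)
    cx, *_ = np.linalg.lstsq(A, measured[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, measured[:, 1], rcond=None)
    return cx, cy

# Toy demo: synthetic barrel-like distortion on a projector pixel grid.
grid = np.stack(np.meshgrid(np.linspace(-1, 1, 20),
                            np.linspace(-1, 1, 20)), -1).reshape(-1, 2)
r2 = (grid ** 2).sum(axis=1)
measured = grid * (1 + 0.05 * r2)[:, None]        # distorted observations
cx, cy = fit_distortion(grid, measured)
A = poly_terms(grid[:, 0], grid[:, 1])
resid = np.hypot(A @ cx - measured[:, 0], A @ cy - measured[:, 1])
print(f"max fit residual: {resid.max():.2e}")     # near machine precision
```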

  10. Areal Feature Matching Based on Similarity Using Critic Method

    NASA Astrophysics Data System (ADS)

    Kim, J.; Yu, K.

    2015-10-01

    In this paper, we propose an areal feature matching method that can be applied to many-to-many matching, i.e., matching a simple entity with an aggregate of several polygons, or two aggregates of several polygons, with less user intervention. To this end, an affine transformation is applied to two datasets by using polygon pairs for which the building name is the same. The two datasets are then overlaid, and intersecting polygon pairs are selected as candidate matching pairs. If many polygons intersect, we calculate the inclusion function between such polygons; when its value is more than 0.4, the polygons are aggregated into a single polygon by using a convex hull. Finally, the shape similarity between the candidate pairs is calculated as the linear sum of the position similarity, shape ratio similarity, and overlap similarity, with weights computed by the CRITIC method. Candidate pairs for which the shape similarity is more than 0.7 are determined to be matching pairs. We applied the method to two geospatial datasets: the digital topographic map and the KAIS map of South Korea. Visual evaluation showed that polygon pairs were well detected by the proposed method, and statistical evaluation indicates that the method is accurate on our test dataset, with a high F-measure of 0.91.
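
    A minimal sketch of the CRITIC weighting step named above, assuming min-max-normalized similarity criteria: each criterion is weighted by its contrast (standard deviation) times its conflict (one minus correlation) with the other criteria. The sample data and the use of the 0.7 threshold are illustrative.

```python
import numpy as np

def critic_weights(criteria):
    """CRITIC weighting: a criterion's weight grows with its contrast
    (standard deviation) and its conflict (1 - correlation) with the
    others. `criteria` is an (n_pairs, n_criteria) matrix of similarity
    scores, assumed normalized to [0, 1]."""
    sigma = criteria.std(axis=0, ddof=1)
    r = np.corrcoef(criteria, rowvar=False)
    info = sigma * (1.0 - r).sum(axis=0)          # information content C_j
    return info / info.sum()

# Toy demo with the three similarities named in the abstract:
# position, shape ratio, and overlap (values are illustrative).
rng = np.random.default_rng(2)
sims = rng.uniform(0, 1, size=(50, 3))
w = critic_weights(sims)
shape_similarity = sims @ w                       # weighted linear sum
matches = shape_similarity > 0.7                  # threshold from abstract
print(f"weights: {w}, matched pairs: {matches.sum()}")
```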

  11. Risk-based methods applicable to ranking conceptual designs

    SciTech Connect

    Breeding, R.J.; Ortiz, K.; Ringland, J.T.; Lim, J.J.

    1993-11-01

    In Genichi Taguchi's latest book on quality engineering, an emphasis is placed on robust design processes in which quality engineering techniques are brought "upstream," that is, utilized as early as possible, preferably in the conceptual design stage. This approach was used in a study of possible future safety system designs for weapons. As an experiment, a method was developed for using probabilistic risk analysis (PRA) techniques to rank conceptual designs for performance against a safety metric, for ultimate incorporation into a Pugh matrix evaluation. This represents a high-level UW application of PRA methods to weapons. As with most conceptual designs, details of the implementation were not yet developed; many of the components had never been built, let alone tested. Therefore, our application of risk assessment methods was forced to be at such a high level that the entire evaluation could be performed on a spreadsheet. Nonetheless, the method produced numerical estimates of safety in a manner that was consistent, reproducible, and scrutable. The results enabled us to rank designs to identify areas where returns on research efforts would be the greatest. The numerical estimates were calibrated against what is achievable by current weapon safety systems. The use of expert judgement is inescapable, but these judgements are explicit and the method is easily implemented in a spreadsheet computer program.

  12. Automatic camera calibration method based on dashed lines

    NASA Astrophysics Data System (ADS)

    Li, Xiuhua; Wang, Guoyou; Liu, Jianguo

    2013-10-01

    We present a new method for fully automatic calibration of traffic cameras using the end points of dashed lane lines. Our approach uses an improved RANSAC method, aided by a transverse projection of pixels, to detect the dashed lines and the end points on them. Then, by analyzing the geometric relationship between the camera and road coordinate systems, we construct a road model to fit the end points. Finally, using a two-dimensional calibration method, we can convert pixels in the image to meters along the ground-truth lane. In a large number of experiments covering a variety of conditions, our approach performs well, achieving less than 5% error in measuring test lengths in all cases.

  13. Sunspot drawings handwritten character recognition method based on deep learning

    NASA Astrophysics Data System (ADS)

    Zheng, Sheng; Zeng, Xiangyun; Lin, Ganghua; Zhao, Cui; Feng, Yongli; Tao, Jinping; Zhu, Daoyuan; Xiong, Li

    2016-05-01

    Highly accurate recognition of the handwritten characters on scanned sunspot drawings is critically important for analyzing sunspot movement and storing the drawings in a database. This paper presents a robust deep learning method for recognizing the handwritten characters on scanned sunspot drawings. The convolutional neural network (CNN) is a deep learning algorithm that has been truly successful in training multi-layer network structures. A CNN is used to train a recognition model on handwritten character images extracted from the original sunspot drawings. We demonstrate the advantages of the proposed method on sunspot drawings provided by the Yunnan Observatory of the Chinese Academy of Sciences, and obtain the daily full-disc sunspot numbers and sunspot areas from the drawings. The experimental results show that the proposed method achieves a high recognition accuracy.

  14. Damage detection in turbine wind blades by vibration based methods

    NASA Astrophysics Data System (ADS)

    Doliński, L.; Krawczuk, M.

    2009-08-01

    The paper describes results of numerical simulation for damage localization in the composite coat of a wind turbine blade using modal parameters and a modern damage detection method (wavelet transform). The presented results were obtained in the first period of research on the diagnostic method, which is aimed at detecting damage in the blades of large wind turbines during normal operation. A blade-modelling process including the geometry, loads and failures has been introduced in the paper. A series of simulations has been carried out for different localizations and size of damage for finding the method's limits. To verify the results of numeric simulations a subscale blade has been built which has geometric features and mechanical properties similar to the computer model.

  15. An Adaptive Derivative-based Method for Function Approximation

    SciTech Connect

    Tong, C

    2008-10-22

    To alleviate the high computational cost of large-scale multi-physics simulations to study the relationships between the model parameters and the outputs of interest, response surfaces are often used in place of the exact functional relationships. This report explores a method for response surface construction using adaptive sampling guided by derivative information at each selected sample point. This method is especially suitable for applications that can readily provide added information such as gradients and Hessian with respect to the input parameters under study. When higher order terms (third and above) in the Taylor series are negligible, the approximation error for this method can be controlled. We present details of the adaptive algorithm and numerical results on a few test problems.

  16. General method for quantifying base adducts in specific mammalian genes

    SciTech Connect

    Thomas, D.C.; Morton, A.G.; Bohr, V.A.; Sancar, A.

    1988-06-01

    A general method has been developed to measure the formation and removal of DNA adducts in defined sequences of mammalian genomes. Adducted genomic DNA is digested with an appropriate restriction enzyme, treated with Escherichia coli UvrABC excision nuclease (ABC excinuclease), subjected to alkaline gel electrophoresis, and probed for specific sequences by Southern hybridization. The ABC excinuclease incises DNA containing bulky adducts and thus reduces the intensity of the full-length fragments in Southern hybridization in proportion to the number of adducts present in the probed sequence. This method is similar to that developed by Bohr et al. for quantifying pyrimidine dimers by using T4 endonuclease V. Because of the wide substrate range of ABC excinuclease, however, our method can be used to quantify a large variety of DNA adducts in specific genomic sequences.

  17. Calibration of AVHRR sensors using the reflectance-based method

    NASA Astrophysics Data System (ADS)

    Czapla-Myers, Jeffrey S.; Thome, Kurtis J.; Leisso, Nathan P.

    2007-09-01

    The Remote Sensing Group at the University of Arizona has been active in the vicarious calibration of numerous sensors through the use of ground-based test sites. Recent efforts have included work to develop cross-calibration information between these sensors using the results of the reflectance-based approach. The current work extends the cross-calibration to the AVHRR series of sensors, specifically NOAA-17 and NOAA-18. The results include work based on data collected by ground-based personnel nearly coincident with the sensor overpasses. The available number of calibrations for the AVHRR series is increased through a set of ground-based radiometers that are deployed without the need for on-site personnel and that have been operating for more than three years at Railroad Valley Playa. The spectral, spatial, and temporal characteristics of the 1-km² large-footprint site at Railroad Valley are well understood, so it is well suited for the radiometric calibration of AVHRR, which has a nadir-viewing footprint of 1.1 x 1.1 km. The at-sensor radiance is predicted via a radiative transfer code using atmospheric data from a fully automated solar radiometer. The results show that errors are currently larger for the automated data sets, but they indicate that the AVHRR sensors studied in this work are consistent with the Aqua and Terra MODIS sensors to within the uncertainties of each sensor.

  18. Space Object Tracking Method Based on a Snake Model

    NASA Astrophysics Data System (ADS)

    Zhan-wei, Xu; Xin, Wang

    2016-04-01

    In this paper, aiming at the problem of unstable tracking of low-orbit space objects of variable brightness, an improved GVF (Gradient Vector Flow) Snake algorithm based on an active contour model is proposed to search, in real time, for the true object contour in the CCD image. Combined with Kalman filter prediction, a new adaptive tracking method for space objects is obtained. Experiments show that this method can overcome the tracking error caused by a fixed window and improves tracking robustness.

  19. [Image processing method based on prime number factor layer].

    PubMed

    Fan, Yifang; Yuan, Zhirun

    2004-10-01

    In sports games, human body movement data are mainly drawn from the sports field, with the hues, and even interruptions, of the surrounding commercial environment, so some difficulties must be surmounted in order to analyze the images. It is clearly not enough to use grey-image processing alone. We have applied the characteristics of the prime number function to human body movement images and thus introduce a new image processing method in this article. When dealing with certain moving images, it yields a better result. PMID:15553856

  20. Gender-based violence: concepts, methods, and findings.

    PubMed

    Russo, Nancy Felipe; Pirlott, Angela

    2006-11-01

    The United Nations has identified gender-based violence against women as a global health and development issue, and a host of policies, public education, and action programs aimed at reducing gender-based violence have been undertaken around the world. This article highlights new conceptualizations, methodological issues, and selected research findings that can inform such activities. In addition to describing recent research findings that document relationships between gender, power, sexuality, and intimate violence cross-nationally, it identifies cultural factors, including linkages between sex and violence through media images that may increase women's risk for violence, and profiles a host of negative physical, mental, and behavioral health outcomes associated with victimization including unwanted pregnancy and abortion. More research is needed to identify the causes, dynamics, and outcomes of gender-based violence, including media effects, and to articulate how different forms of such violence vary in outcomes depending on cultural context. PMID:17189506

  1. A New Activity-Based Financial Cost Management Method

    NASA Astrophysics Data System (ADS)

    Qingge, Zhang

    The standard activity-based financial cost management model is a new model of financial cost management that builds on the standard cost system and activity-based costing, integrating the advantages of the two. By taking R&D expenses as the accounting starting point and after-sale service expenses as the end point, it covers the whole producing and operating process, the whole activity chain, and the whole value chain, yielding more accurate and more adequate cost information for internal management and decision making.

  2. CRISPR-Based Methods for Caenorhabditis elegans Genome Engineering.

    PubMed

    Dickinson, Daniel J; Goldstein, Bob

    2016-03-01

    The advent of genome editing techniques based on the clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 system has revolutionized research in the biological sciences. CRISPR is quickly becoming an indispensable experimental tool for researchers using genetic model organisms, including the nematode Caenorhabditis elegans. Here, we provide an overview of CRISPR-based strategies for genome editing in C. elegans. We focus on practical considerations for successful genome editing, including a discussion of which strategies are best suited to producing different kinds of targeted genome modifications. PMID:26953268

  3. CRISPR-Based Methods for Caenorhabditis elegans Genome Engineering

    PubMed Central

    Dickinson, Daniel J.; Goldstein, Bob

    2016-01-01

    The advent of genome editing techniques based on the clustered regularly interspaced short palindromic repeats (CRISPR)–Cas9 system has revolutionized research in the biological sciences. CRISPR is quickly becoming an indispensable experimental tool for researchers using genetic model organisms, including the nematode Caenorhabditis elegans. Here, we provide an overview of CRISPR-based strategies for genome editing in C. elegans. We focus on practical considerations for successful genome editing, including a discussion of which strategies are best suited to producing different kinds of targeted genome modifications. PMID:26953268

  4. Methods and Approaches to Mass Spectroscopy Based Protein Identification

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This book chapter is a review of current mass spectrometers and the role in the field of proteomics. Various instruments are discussed and their strengths and weaknesses are highlighted. In addition, the methods of protein identification using a mass spectrometer are explained as well as data vali...

  5. Objective, Way and Method of Faculty Management Based on Ergonomics

    ERIC Educational Resources Information Center

    WANG, Hong-bin; Liu, Yu-hua

    2008-01-01

    The core problem that influences educational quality of talents in colleges and universities is the faculty management. Without advanced faculty, it is difficult to cultivate excellent talents. With regard to some problems in present faculty construction of colleges and universities, this paper puts forward the new objectives, ways and methods of…

  6. Assessment of gene set analysis methods based on microarray data.

    PubMed

    Alavi-Majd, Hamid; Khodakarim, Soheila; Zayeri, Farid; Rezaei-Tavirani, Mostafa; Tabatabaei, Seyyed Mohammad; Heydarpour-Meymeh, Maryam

    2014-01-25

    Gene set analysis (GSA) incorporates biological information into statistical knowledge to identify gene sets differentially expressed between two or more phenotypes. It allows us to gain insight into the functional working mechanism of cells beyond the detection of differentially expressed genes. In order to evaluate the competence of GSA approaches, three self-contained GSA approaches with different statistical methods were chosen (Category, Globaltest, and Hotelling's T²), and their power to identify differentially expressed gene sets was assayed via simulation and real microarray data. Category does not account for the correlation structure, while the other two deal with correlations. R and Bioconductor were used to perform these methods, applied to venous thromboembolism and acute lymphoblastic leukemia microarray data. The results of the three GSAs showed that the competence of these methods depends on the distribution of gene expression in a dataset. It is very important to assess the distribution of gene expression data before choosing a GSA method to identify gene sets differentially expressed between phenotypes. On the other hand, assessment of common genes among significant gene sets indicated significant agreement between the results of GSA and the findings of biologists. PMID:24012817

  7. A special purpose knowledge-based face localization method

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad; Jassim, Sabah

    2008-04-01

    This paper is concerned with face localization for a visual speech recognition (VSR) system. Face detection and localization have received a great deal of attention in the last few years, because they are an essential pre-processing step in many techniques that handle or deal with faces (e.g. age, face, gender, race, and visual speech recognition). We present an efficient method for localizing human faces in video images captured on constrained mobile devices, under wide variation in lighting conditions. We use a multiphase method that may include all or some of the following steps, starting with image pre-processing, followed by special-purpose edge detection, and then an image refinement step. The output image is passed through a discrete wavelet decomposition procedure, and the computed LL sub-band at a certain level is transformed into a binary image that is scanned using a special template to select a number of possible candidate locations. Finally, we fuse the scores from the wavelet step with scores determined by color information for the candidate locations and employ a form of fuzzy logic to distinguish face from non-face locations. We present the results of a large number of experiments to demonstrate that the proposed face localization method is efficient and achieves a level of accuracy that outperforms existing general-purpose face detection methods.

  8. Methods of use for sensor based fluid detection devices

    NASA Technical Reports Server (NTRS)

    Lewis, Nathan S. (Inventor)

    2001-01-01

    Methods of use and devices for detecting analyte in fluid. A system for detecting an analyte in a fluid is described comprising a substrate having a sensor comprising a first organic material and a second organic material where the sensor has a response to permeation by an analyte. A detector is operatively associated with the sensor. Further, a fluid delivery appliance is operatively associated with the sensor. The sensor device has information storage and processing equipment, which is operably connected with the device. This device compares a response from the detector with a stored ideal response to detect the presence of analyte. An integrated system for detecting an analyte in a fluid is also described where the sensing device, detector, information storage and processing device, and fluid delivery device are incorporated in a substrate. Methods for use for the above system are also described where the first organic material and a second organic material are sensed and the analyte is detected with a detector operatively associated with the sensor. The method provides for a device, which delivers fluid to the sensor and measures the response of the sensor with the detector. Further, the response is compared to a stored ideal response for the analyte to determine the presence of the analyte. In different embodiments, the fluid measured may be a gaseous fluid, a liquid, or a fluid extracted from a solid. Methods of fluid delivery for each embodiment are accordingly provided.

  9. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    PubMed Central

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape presents the relationship between an individual and its reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient or accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even with a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification for enhancing IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established on this principle. In feature space, we design a linear classifier as a human model to obtain user preference knowledge, which cannot be captured linearly in the original discrete search space. The human model established by this method predicts the potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluation results with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  10. Analysis of Methods for Collecting Test-based Judgments.

    ERIC Educational Resources Information Center

    Reckase, Mark D.

    Standard setting is a fairly widespread activity in educational and psychological measurement, but there is no formal psychometric theory to guide the development of standard setting methodology. This paper presents a conceptual framework for such a psychometric theory and uses the conceptual framework to analyze a number of methods for setting…

  11. New method of contour-based mask-shape compiler

    NASA Astrophysics Data System (ADS)

    Matsuoka, Ryoichi; Sugiyama, Akiyuki; Onizawa, Akira; Sato, Hidetoshi; Toyoda, Yasutaka

    2007-10-01

    We have developed a new method for accurately profiling a mask shape by utilizing a mask CD-SEM. The method is intended to achieve high accuracy, stability, and reproducibility by adopting, as its key technology, the edge detection algorithm used in CD-SEM for high-accuracy CD measurement. In comparison with a conventional image processing method for contour profiling, it creates profiles with much higher accuracy, comparable with CD-SEM measurement of semiconductor device CDs. In this report, we introduce the algorithm in general, the experimental results, and the application in practice. As the design rules for semiconductor devices have continued to shrink, aggressive OPC (Optical Proximity Correction) has become indispensable in RET (Resolution Enhancement Technology). From the viewpoint of DFM (Design for Manufacturability), the dramatic increase in data processing cost for advanced MDP (Mask Data Preparation), for instance, and the surge in mask-making cost have become big concerns for device manufacturers. In a sense, there is a trade-off between high-accuracy RET and mask production cost, and it has a significant impact on the semiconductor market centered around the mask business. To cope with this problem, we propose a DFM solution in which two-dimensional data are extracted for error-free practical simulation by precisely reproducing the real mask shape in addition to the mask data simulation. The flow, centered on the design data, is fully automated and provides an environment for optimization and verification in which fully automated model calibration is available with much less error. It also allows complete consolidation of input and output functions with an EDA system by constructing a design-data-oriented system structure. This method can therefore be regarded as a strategic DFM approach in semiconductor metrology.

  12. A Methods-Based Biotechnology Course for Undergraduates

    ERIC Educational Resources Information Center

    Chakrabarti, Debopam

    2009-01-01

    This new course in biotechnology for upper division undergraduates provides a comprehensive overview of the process of drug discovery that is relevant to biopharmaceutical industry. The laboratory exercises train students in both cell-free and cell-based assays. Oral presentations by the students delve into recent progress in drug discovery.…

  13. The Teaching of Protein Synthesis--A Microcomputer Based Method.

    ERIC Educational Resources Information Center

    Goodridge, Frank

    1983-01-01

    Describes two computer programs (BASIC for 32K Commodore PET) for teaching protein synthesis. The first is an interactive test of base-pairing knowledge, and the second generates random DNA nucleotide sequences, with instructions for substitution, insertion, and deletion printed out for each student. (JN)

  14. Methods and Strategies: Modeling Problem-Based Instruction

    ERIC Educational Resources Information Center

    Sterling, Donna R.

    2007-01-01

    Students get excited about science when they investigate real scientific problems in the classroom, especially when the investigation extends over several weeks. This article describes a health-science problem-based learning (PBL) investigation that a group of teachers and teacher educators devised together for a group of fourth- to sixth-grade…

  15. 3D Wavelet-Based Filter and Method

    DOEpatents

    Moss, William C.; Haase, Sebastian; Sedat, John W.

    2008-08-12

    A 3D wavelet-based filter for visualizing and locating structural features of a user-specified linear size in 2D or 3D image data. The only input parameter is a characteristic linear size of the feature of interest, and the filter output contains only those regions that are correlated with the characteristic size, thus denoising the image.

  16. Metric Similarity in Vegetation-Based Wetland Assessment Methods

    EPA Science Inventory

    Wetland vegetation is a recognized indicator group for wetland assessments, but until recently few published protocols used plant-based indicators. To examine the proliferation of such protocols since 1999, this report reviewed 20 published index of biotic integrity (IBI) type p...

  17. Study of an image restoration method based on Poisson-maximum likelihood estimation method for earthquake ruin scene

    NASA Astrophysics Data System (ADS)

    Song, Yanxing; Yang, Jingsong; Cheng, Lina; Liu, Shucong

    2014-09-01

    An image restoration method based on the Poisson maximum likelihood estimation (PMLE) method for earthquake ruin scenes is proposed in this paper. The PMLE algorithm is introduced first, with an automatic acceleration method used to speed up the iterative process, and an image of an earthquake ruin scene is then processed with this restoration method. The spectral correlation method and PSNR (peak signal-to-noise ratio) are used to validate the restoration effect. The simulation results show that the number of iterations affects the PSNR of the processed image and the operation time, and that this method can restore images of earthquake ruin scenes effectively and is practical.
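
    For a concrete sense of Poisson maximum-likelihood restoration, the sketch below implements the classical Richardson-Lucy multiplicative update, which maximizes the Poisson likelihood for a known point-spread function; the PSF, test scene, and iteration count are illustrative, and the paper's automatic acceleration step is not reproduced.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Poisson maximum-likelihood (Richardson-Lucy) restoration.
    Each multiplicative update increases the Poisson likelihood."""
    est = np.full_like(blurred, blurred.mean())
    psf_flip = psf[::-1, ::-1]                    # adjoint of the blur
    for _ in range(n_iter):
        conv = fftconvolve(est, psf, mode="same") + 1e-12
        est *= fftconvolve(blurred / conv, psf_flip, mode="same")
    return est

# Toy demo: blur a synthetic scene with a Gaussian PSF, add Poisson noise.
rng = np.random.default_rng(3)
scene = np.zeros((64, 64)); scene[20:40, 25:35] = 50.0
ax = np.arange(-7, 8)
psf = np.exp(-(ax[:, None]**2 + ax[None, :]**2) / 8.0); psf /= psf.sum()
observed = rng.poisson(fftconvolve(scene, psf, mode="same").clip(0)).astype(float)
restored = richardson_lucy(observed, psf)
print(f"mean abs error before: {np.abs(observed - scene).mean():.2f}, "
      f"after: {np.abs(restored - scene).mean():.2f}")
```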

  18. A multivariate based event detection method and performance comparison with two baseline methods.

    PubMed

    Liu, Shuming; Smith, Kate; Che, Han

    2015-09-01

    Early warning systems have been widely deployed to protect water systems from accidental and intentional contamination events. Conventional detection algorithms are often criticized for having high false positive rates and low true positive rates. This mainly stems from the inability of these methods to determine whether variation in sensor measurements is caused by equipment noise or by the presence of contamination. This paper presents a new detection method that identifies the existence of contamination by comparing Euclidean distances of correlation indicators, which are derived from the correlation coefficients of multiple water quality sensors. The performance of the proposed method was evaluated using data from a contaminant injection experiment and compared with two baseline detection methods. The results show that the proposed method can differentiate between fluctuations caused by equipment noise and those due to the presence of contamination. It yielded a higher probability of detection and a lower false alarm rate than the two baseline methods. With optimized parameter values, the proposed method can correctly detect 95% of all contamination events with a 2% false alarm rate. PMID:25996758
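
    A minimal sketch of the idea, under stated assumptions: the correlation indicators are taken to be the pairwise correlation coefficients of the sensors over a sliding window, and a window is flagged when the Euclidean distance of its indicator vector from a baseline exceeds a threshold. The window length, threshold, and toy data are illustrative, not the paper's calibrated values.

```python
import numpy as np

def correlation_indicator(window):
    """Upper-triangle correlation coefficients of a (time, sensors) window."""
    r = np.corrcoef(window, rowvar=False)
    return r[np.triu_indices_from(r, k=1)]

def detect_events(data, win=60, thresh=0.8):
    """Flag windows whose correlation indicators drift (Euclidean
    distance) from the baseline indicator of the first window."""
    baseline = correlation_indicator(data[:win])
    flags = []
    for t in range(win, len(data) - win, win):
        d = np.linalg.norm(correlation_indicator(data[t:t + win]) - baseline)
        flags.append((t, bool(d > thresh)))
    return flags

# Toy demo: four correlated "water quality" channels; halfway through,
# a contamination event decouples one sensor from the others.
rng = np.random.default_rng(4)
base = rng.standard_normal((600, 1))
data = base + 0.2 * rng.standard_normal((600, 4))
data[300:, 2] = rng.standard_normal(300)          # correlation structure breaks
print(detect_events(data))                        # flags fire after t = 300
```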

  19. [A hyperspectral subpixel target detection method based on inverse least squares method].

    PubMed

    Li, Qing-Bo; Nie, Xin; Zhang, Guang-Jun

    2009-01-01

    In the present paper, an inverse least squares (ILS) method combined with Mahalanobis distance outlier detection is discussed for detecting subpixel targets in hyperspectral images. First, an inverse model relating the target spectrum, which is obtained in advance, to all the pixel spectra is established; the SNV algorithm is employed to preprocess each original pixel spectrum separately. After the pretreatment, the regression coefficients of the ILS model are calculated with the partial least squares (PLS) algorithm. Each point in the regression coefficient vector corresponds to a pixel in the image. The Mahalanobis distance is then calculated for each point in the regression coefficient vector; because the Mahalanobis distance measures the extent to which samples deviate from the total population, points with a Mahalanobis distance larger than 3σ are regarded as subpixel targets. In this algorithm, no prior information other than the target spectrum is required, such as a representative background spectrum or a model of the background, and the detection result is insensitive to the complexity of the background. The method was applied to AVIRIS remote sensing data, freely downloaded from the official NASA website; the spectrum of a ground object in the AVIRIS hyperspectral image was picked as the target spectrum, and the subpixel target was simulated through a linear mixing method. The subpixel detection results of the method described above were compared with those of the orthogonal subspace projection (OSP) method. The results show that the ILS method performs better than the traditional OSP method. The ROC (receiver operating characteristic) curves and SNR were calculated, indicating that the ILS method offers higher detection accuracy and less computing time than the OSP algorithm. PMID:19385196
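
    The sketch below follows the same detection logic under simplifying assumptions: SNV preprocessing, an inverse model whose coefficient vector has one entry per pixel, and a 3σ rule (the one-dimensional Mahalanobis distance). Plain least squares stands in for the paper's PLS step, and the simulated scene is illustrative.

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: per-spectrum centering and scaling
    (each column of `spectra` is one pixel spectrum)."""
    return (spectra - spectra.mean(axis=0)) / spectra.std(axis=0)

# Toy scene: 120 pixels x 50 bands; pixel 37 holds a 20% subpixel target
# mixed linearly into the background, as in the simulation described above.
rng = np.random.default_rng(5)
bands = np.linspace(0, 1, 50)
background = np.exp(-(bands - 0.3) ** 2 / 0.02)
target = np.exp(-(bands - 0.7) ** 2 / 0.01)
X = background[:, None] * rng.uniform(0.8, 1.2, 120) \
    + 0.01 * rng.standard_normal((50, 120))
X[:, 37] = 0.8 * X[:, 37] + 0.2 * target

# Inverse model: target spectrum ~ X @ beta, one coefficient per pixel.
# (The paper computes beta with PLS; plain least squares stands in here.)
y = (target - target.mean()) / target.std()
beta, *_ = np.linalg.lstsq(snv(X), y, rcond=None)

# One-dimensional Mahalanobis distance: flag coefficients beyond 3 sigma.
z = np.abs(beta - beta.mean()) / beta.std()
print("detected pixels:", np.where(z > 3)[0])     # expected: [37]
```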

  20. Image-based rendering method for mapping endoscopic video onto CT-based endoluminal views

    NASA Astrophysics Data System (ADS)

    Rai, Lav; Higgins, William E.

    2006-03-01

    One of the indicators of early lung cancer is a color change in airway mucosa. Bronchoscopy of the major airways can provide high-resolution color video of the airway tree's mucosal surfaces. In addition, 3D MDCT chest images provide 3D structural information of the airways. Unfortunately, the bronchoscopic video contains no explicit 3D structural and position information, and the 3D MDCT data captures no color or textural information of the mucosa. A fusion of the topographical information from the 3D CT data and the color information from the bronchoscopic video, however, enables realistic 3D visualization, navigation, localization, and quantitative color-topographic analysis of the airways. This paper presents a method for topographic airway-mucosal surface mapping from bronchoscopic video onto 3D MDCT endoluminal views. The method uses registered video images and CT-based virtual endoscopic renderings of the airways. The visibility and depth data are also generated by the renderings. Uniform sampling and over-scanning of the visible triangles are done before they are packed into a texture space. The texels are then re-projected onto video images and assigned color values based on depth and illumination data obtained from renderings. The texture map is loaded into the rendering engine to enable real-time navigation through the combined 3D CT surface and bronchoscopic video data. Tests were performed on pre-recorded bronchoscopy patient video and associated 3D MDCT scans. Results show that we can effectively accomplish mapping over a continuous sequence of airway images spanning several generations of airways.

  1. New displacement-based methods for optimal truss topology design

    NASA Technical Reports Server (NTRS)

    Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.

    1991-01-01

    Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.

  2. A rapid demodulation method for optical carrier based microwave interferometer

    NASA Astrophysics Data System (ADS)

    Chen, Zhen; Hefferman, Gerald; Wei, Tao

    2016-05-01

    This paper presents a rapid signal processing approach for the OCMI (optical carrier based microwave interferometer) system, which significantly reduces computational complexity while maintaining good performance. A direct phase demodulator can be pre-calibrated and applied to extract the absolute phase change at target reflectors at different locations, from which the strain change can be recovered in a distributed manner. A theoretical framework was developed and, to demonstrate the concept, a strain test was performed with ultra-weak reflectors (-70 dB) under the OCMI system. The proposed method was applied to extract the distributed strain change along the fiber under test. Compared with the previously proposed method, no FIR filters or Fourier transforms are involved. This algorithm is potentially suitable for dynamic OCMI distributed sensing systems.

  3. A Stirling engine analysis method based upon moving gas nodes

    NASA Technical Reports Server (NTRS)

    Martini, W. R.

    1986-01-01

    A Lagrangian nodal analysis method for Stirling engines (SEs) is described, validated, and applied to a conventional SE and an isothermalized SE (with fins in the hot and cold spaces). The analysis employs a constant-mass gas node (which moves with respect to the solid nodes during each time step) instead of the fixed gas nodes of Eulerian analysis. The isothermalized SE is found to have efficiency only slightly greater than that of a conventional SE.

  4. Evolute-based Hough transform method for characterization of ellipsoids.

    PubMed

    Kaytanli, B; Valentine, M T

    2013-03-01

    We propose a novel and algorithmically simple Hough transform method that exploits the geometric properties of ellipses to enable the robust determination of the ellipse position and properties. We make use of the unique features of the evolute created by Hough voting along the gradient vectors of a two-dimensional image to determine the ellipse centre, orientation and aspect ratio. A second one-dimensional voting is performed on the minor axis to uniquely determine the ellipse size. This reduction of search space substantially simplifies the algorithmic complexity. To demonstrate the accuracy of our method, we present analysis of single and multiple ellipsoidal particles, including polydisperse and imperfect ellipsoids, in both simulated images and electron micrographs. Given its mathematical simplicity, ease of implementation and reasonable algorithmic completion time, we anticipate that the proposed method will be broadly useful for image processing of ellipsoidal particles, including their detection and tracking for studies of colloidal suspensions, and for applications to drug delivery and microrheology. PMID:23301634

  5. Polyvinylidene fluoride sensor-based method for unconstrained snoring detection.

    PubMed

    Hwang, Su Hwan; Han, Chung Min; Yoon, Hee Nam; Jung, Da Woon; Lee, Yu Jin; Jeong, Do-Un; Park, Kwang Suk

    2015-07-01

    We established and tested a snoring detection method using a polyvinylidene fluoride (PVDF) sensor for accurate, fast, and motion-artifact-robust monitoring of snoring events during sleep. Twenty patients with obstructive sleep apnea participated in this study. The PVDF sensor was located between a mattress cover and mattress, and the patients' snoring signals were unconstrainedly measured with the sensor during polysomnography. The power ratio and peak frequency from the short-time Fourier transform were used to extract spectral features from the PVDF data. A support vector machine was applied to the spectral features to classify the data into either the snore or non-snore class. The performance of the method was assessed using manual labelling by three human observers as a reference. For event-by-event snoring detection, PVDF data that contained 'snoring' (SN), 'snoring with movement' (SM), and 'normal breathing' epochs were selected for each subject. As a result, the overall sensitivity and the positive predictive values were 94.6% and 97.5%, respectively, and there was no significant difference between the SN and SM results. The proposed method can be applied in both residential and ambulatory snoring monitoring systems. PMID:26012381
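
    A hedged sketch of the classification stage: the power ratio and peak frequency are computed from the short-time Fourier transform of each frame and fed to a support vector machine, as the abstract describes. The sampling rate, band edges, and toy training signals are assumptions, not values from the study.

```python
import numpy as np
from scipy.signal import stft
from sklearn.svm import SVC

FS = 1000  # assumed PVDF sampling rate (Hz)

def spectral_features(frame):
    """Power ratio and peak frequency from the short-time Fourier
    transform, the two spectral features named in the abstract."""
    f, _, Z = stft(frame, fs=FS, nperseg=256)
    p = (np.abs(Z) ** 2).mean(axis=1)             # mean power spectrum
    low = p[(f >= 70) & (f < 300)].sum()          # band edges are assumptions
    return np.array([low / (p.sum() + 1e-12), f[np.argmax(p)]])

# Toy training data: "snore" frames carry a low-frequency harmonic burst,
# "normal breathing" frames are near-white noise.
rng = np.random.default_rng(6)
t = np.arange(FS) / FS
snores = [np.sin(2 * np.pi * 120 * t) * rng.uniform(0.5, 1)
          + 0.3 * rng.standard_normal(FS) for _ in range(40)]
normal = [0.3 * rng.standard_normal(FS) for _ in range(40)]
Xf = np.array([spectral_features(x) for x in snores + normal])
y = np.array([1] * 40 + [0] * 40)

clf = SVC(kernel="rbf", gamma="scale").fit(Xf, y)
test = np.sin(2 * np.pi * 120 * t) + 0.3 * rng.standard_normal(FS)
print("snore" if clf.predict([spectral_features(test)])[0] else "non-snore")
```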

  6. Accurate optical CD profiler based on specialized finite element method

    NASA Astrophysics Data System (ADS)

    Carrero, Jesus; Perçin, Gökhan

    2012-03-01

    As the semiconductor industry is moving to very low-k1 patterning solutions, the metrology problems facing process engineers are becoming much more complex. Choosing the right optical critical dimension (OCD) metrology technique is essential for bridging the metrology gap and achieving the required manufacturing volume throughput. The critical dimension scanning electron microscope (CD-SEM) measurement is usually distorted by the high aspect ratio of the photoresist and hard mask layers. CD-SEM measurements cease to correlate with complex three-dimensional profiles, such as the cases for double patterning and FinFETs, thus necessitating sophisticated, accurate and fast computational methods to bridge the gap. In this work, a suite of computational methods that complement advanced OCD equipment, and enabling them to operate at higher accuracies, are developed. In this article, a novel method for accurately modeling OCD profiles is presented. A finite element formulation in primal form is used to discretize the equations. The implementation uses specialized finite element spaces to solve Maxwell equations in two dimensions.

  7. The discrete variational derivative method based on discrete differential forms

    NASA Astrophysics Data System (ADS)

    Yaguchi, Takaharu; Matsuo, Takayasu; Sugihara, Masaaki

    2012-05-01

    As is well known, for PDEs that enjoy a conservation or dissipation property, numerical schemes that inherit this property are often advantageous in that the schemes are fairly stable and give qualitatively better numerical solutions in practice. Lately, Furihata and Matsuo have developed the so-called “discrete variational derivative method” that automatically constructs energy preserving or dissipative finite difference schemes. Although this method was originally developed on uniform meshes, the use of non-uniform meshes is of importance for multi-dimensional problems. On the other hand, the theories of discrete differential forms have received much attention recently. These theories provide a discrete analogue of the vector calculus on general meshes. In this paper, we show that the discrete variational derivative method and the discrete differential forms by Bochev and Hyman can be combined. Applications to the Cahn-Hilliard equation and the Klein-Gordon equation on triangular meshes are provided as demonstrations. We also show that the schemes for these equations are H1-stable under some assumptions. In particular, one for the nonlinear Klein-Gordon equation is obtained by combination of the energy conservation property and the discrete Poincaré inequality, which are the temporal and spacial structures that are preserved by the above methods.
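
    To make the core idea concrete, the identity below states the discrete variational derivative method in a standard one-dimensional form (after Furihata); this notation is a common textbook presentation given here as background, since the paper's contribution is the extension to non-uniform triangular meshes via discrete differential forms.

```latex
% Dissipative PDE: u_t = -\delta G/\delta u, with energy J(u) = \int G(u, u_x)\,dx.
% Discretize the energy as J_d(U) = \sum_k G_{d,k}(U)\,\Delta x and define the
% discrete variational derivative so that the chain rule holds exactly in time:
\[
\frac{J_d(U^{(n+1)}) - J_d(U^{(n)})}{\Delta t}
  \;=\; \sum_k
  \frac{\delta G_d}{\delta (U^{(n+1)}, U^{(n)})_k}\,
  \frac{U_k^{(n+1)} - U_k^{(n)}}{\Delta t}\,\Delta x .
\]
% The scheme  (U_k^{(n+1)} - U_k^{(n)})/\Delta t
%   = -\,\delta G_d / \delta (U^{(n+1)}, U^{(n)})_k
% makes the right-hand side a negative sum of squares, so
% J_d(U^{(n+1)}) \le J_d(U^{(n)}): discrete energy dissipation by construction.
```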

  8. Vital sign sensing method based on EMD in terahertz band

    NASA Astrophysics Data System (ADS)

    Xu, Zhengwu; Liu, Tong

    2014-12-01

    Non-contact detection of respiration and heartbeat rates could be applied to finding survivors trapped in disasters or to the remote monitoring of a patient's respiration and heartbeat. This study presents an improved algorithm that extracts human respiration and heartbeat rates using terahertz radar, further lessening the effects of noise, suppressing cross-terms, and enhancing detection accuracy. A human target echo model for terahertz radar is first presented. Combining an over-sampling method, a low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion, and a down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the individual's respiration and heartbeat rates. Simulation results show that, compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for detecting respiration and heartbeat signals in a complicated environment.

  9. Optimizing methods for PCR-based analysis of predation

    PubMed Central

    Sint, Daniela; Raso, Lorna; Kaufmann, Rüdiger; Traugott, Michael

    2011-01-01

    Molecular methods have become an important tool for studying feeding interactions under natural conditions. Despite their growing importance, many methodological aspects have not yet been evaluated but need to be considered to fully exploit the potential of this approach. Using feeding experiments with high alpine carabid beetles and lycosid spiders, we investigated how PCR annealing temperature affects prey DNA detection success and how post-PCR visualization methods differ in their sensitivity. Moreover, the replicability of prey DNA detection among individual PCR assays was tested using beetles and spiders that had digested their prey for extended times postfeeding. By screening all predators for three differently sized prey DNA fragments (range 116–612 bp), we found that only in the longest PCR product, a marked decrease in prey detection success occurred. Lowering maximum annealing temperatures by 4 °C resulted in significantly increased prey DNA detection rates in both predator taxa. Among the three post-PCR visualization methods, an eightfold difference in sensitivity was observed. Repeated screening of predators increased the total number of samples scoring positive, although the proportion of samples testing positive did not vary significantly between different PCRs. The present findings demonstrate that assay sensitivity, in combination with other methodological factors, plays a crucial role to obtain robust trophic interaction data. Future work employing molecular prey detection should thus consider and minimize the methodologically induced variation that would also allow for better cross-study comparisons. PMID:21507208

  10. An adaptive unsupervised hyperspectral classification method based on Gaussian distribution

    NASA Astrophysics Data System (ADS)

    Yue, Jiang; Wu, Jing-wei; Zhang, Yi; Bai, Lian-fa

    2014-11-01

    In order to achieve adaptive unsupervised clustering with high precision, a method is proposed in this paper that uses Gaussian distributions to fit the inter-class similarity and the noise distribution, with the automatic segmentation threshold determined from the fitting result. First, based on the similarity measure of the spectral curves, the method assumes that the target and the background are both Gaussian-distributed; the distribution characteristics are obtained by fitting the similarity measures between minimum related windows and center pixels with a Gaussian function, and the adaptive threshold is then derived. Second, the pixel minimum related windows are used to merge adjacent similar pixels into picture-blocks, completing the dimensionality reduction and realizing the unsupervised classification. AVIRIS data and a set of hyperspectral data we captured are used to evaluate the performance of the proposed method. Experimental results show that the proposed algorithm is not only adaptive but also outperforms K-MEANS and ISODATA in classification accuracy, edge recognition, and robustness.
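
    A minimal sketch of Gaussian-fit adaptive thresholding, assuming the inter-class and noise similarities form two Gaussian modes: a two-component mixture is fitted to the similarity scores and the threshold placed between the fitted means. The mixture stand-in and the toy scores are assumptions, not the paper's exact fitting procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def adaptive_threshold(similarities):
    """Fit a two-component 1-D Gaussian mixture to similarity scores
    (inter-class vs. noise, per the Gaussian assumption above) and place
    the segmentation threshold midway between the fitted means."""
    gm = GaussianMixture(n_components=2, random_state=0)
    gm.fit(similarities.reshape(-1, 1))
    m = np.sort(gm.means_.ravel())
    return 0.5 * (m[0] + m[1])

# Toy demo: background-pair similarities cluster high; target-vs-
# background similarities cluster low.
rng = np.random.default_rng(7)
sims = np.concatenate([rng.normal(0.9, 0.03, 500),
                       rng.normal(0.5, 0.05, 60)])
print(f"adaptive threshold: {adaptive_threshold(sims):.3f}")  # ~0.7, no
# hand-tuned constant required
```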

  11. A multithread based new sparse matrix method in bioluminescence tomography

    NASA Astrophysics Data System (ADS)

    Zhang, Bo; Tian, Jie; Liu, Dan; Sun, Li; Yang, Xin; Han, Dong

    2010-03-01

    Among many molecular imaging modalities, bioluminescence tomography (BLT) stands out as an effective approach for in vivo imaging because of its noninvasive molecular- and cellular-level detection ability, high sensitivity, and low cost in comparison with other imaging technologies. However, large-scale problems arise when the mesh representing the small animal or phantom contains a large number of points and elements: the system matrix generated from the diffusion approximation (DA) model using the finite element method (FEM) becomes so large that there is not enough random access memory (RAM) for the program, and the related inverse problem cannot be solved. Considering the sparsity of the BLT system matrix, we have developed a new sparse matrix format (ZSM) to overcome this problem, and the related algorithms have all been sped up by multi-threading. The inverse problem is then solved by the Tikhonov regularization method in an adaptive finite element (AFE) framework. Finally, the performance of this method is tested on a heterogeneous phantom, with the boundary data obtained through Monte Carlo simulation. When solving the forward model, the ZSM saves more processing time and memory space than the usual alternatives, such as formats not exploiting sparsity or the Triples or Cross-Linked sparse matrix formats. Numerical experiments show that the processing speed increases as more CPU cores are used. By incorporating the ZSM, BLT can be applied to large-scale problems with large system matrices.
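
    To illustrate why sparse storage matters here, the sketch below assembles a sparse FEM-style system in triplet form, converts it to compressed (CSR) storage, and solves a Tikhonov-regularized inverse problem through the sparse normal equations. The 1-D Laplacian stands in for the diffusion-approximation system matrix; this is not the paper's ZSM format or phantom.

```python
import numpy as np
from scipy.sparse import coo_matrix, identity
from scipy.sparse.linalg import spsolve

# Assemble a sparse FEM-style system in triplet (COO) form, then convert
# to compressed (CSR) storage -- the role a custom sparse format like ZSM
# plays. A 1-D Laplacian stands in for the DA system matrix.
n = 2000
rows = np.concatenate([np.arange(n), np.arange(n - 1), np.arange(1, n)])
cols = np.concatenate([np.arange(n), np.arange(1, n), np.arange(n - 1)])
vals = np.concatenate([2.0 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)])
A = coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
print(f"CSR storage: {A.data.nbytes / 1e6:.2f} MB "
      f"(dense would need {n * n * 8 / 1e6:.0f} MB)")

# Tikhonov-regularized inverse problem, min ||Ax - b||^2 + lam ||x||^2,
# via the sparse normal equations (A^T A + lam I) x = A^T b.
rng = np.random.default_rng(8)
x_true = np.zeros(n); x_true[900:1100] = 1.0      # internal "source"
b = A @ x_true + 1e-3 * rng.standard_normal(n)
lam = 1e-3
x = spsolve((A.T @ A + lam * identity(n)).tocsc(), A.T @ b)
print(f"relative error: "
      f"{np.linalg.norm(x - x_true) / np.linalg.norm(x_true):.3f}")
```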

  12. Application of age estimation methods based on teeth eruption: how easy is Olze method to use?

    PubMed

    De Angelis, D; Gibelli, D; Merelli, V; Botto, M; Ventura, F; Cattaneo, C

    2014-09-01

    The development of new methods for age estimation has become an urgent issue over time because of increasing immigration and the need to accurately estimate the age of subjects who lack valid identity documents. Methods of age estimation are divided into skeletal and dental ones; among the latter, Olze's method is one of the most recent, introduced in 2010 with the aim of identifying the legal ages of 18 and 21 years by evaluating the stages of development of the periodontal ligament of third molars with closed root apices. The present study aims at verifying the applicability of the method in daily forensic practice, with special focus on interobserver repeatability. Olze's method was applied by three different observers (two physicians and one dentist, none with specific training in Olze's method) to 61 orthopantomograms from subjects of mixed ethnicity aged between 16 and 51 years. The analysis considered the lower third molars. The results provided by the different observers were then compared in order to quantify the interobserver error. Results showed that the interobserver error varies between 43 and 57 % for the right lower third molar (M48) and between 23 and 49 % for the left lower third molar (M38). A chi-square test did not show significant differences according to the side of the tooth or the type of professional figure. The results prove that Olze's method is not easy to apply by personnel without adequate training, because of an intrinsic interobserver error. Since it is nonetheless a crucial method in age determination, it should be used only by experienced observers after intensive and specific training. PMID:24781787

  13. Novel multilevel inverter carrier-based PWM method

    SciTech Connect

    Tolbert, L.M.; Habetler, T.G.

    1999-10-01

    The advent of the transformerless multilevel inverter topology has brought forth various pulsewidth modulation (PWM) schemes as a means to control the switching of the active devices in each of the multiple voltage levels in the inverter. An analysis of how existing multilevel carrier-based PWM affects switch utilization for the different levels of a diode-clamped inverter is conducted. Two novel carrier-based multilevel PWM schemes are presented which help to optimize or balance the switch utilization in multilevel inverters. A 10-kW prototype six-level diode-clamped inverter has been built and controlled with the novel PWM strategies proposed in this paper to act as a voltage-source inverter for a motor drive.
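
    As background for carrier-based multilevel PWM, here is a minimal sketch of the standard level-shifted (phase disposition) scheme for a six-level inverter: the output level at each instant is the number of triangular carriers the sinusoidal reference exceeds. This is the conventional baseline the paper's novel schemes improve upon; all parameters are illustrative.

```python
import numpy as np

def level_shifted_pwm(ref, n_levels=6, carrier_freq=21, samples=2000):
    """Carrier-based PWM for a diode-clamped inverter: (n_levels - 1)
    level-shifted triangular carriers tile [-1, 1]; the output level is
    the count of carriers the reference exceeds at each instant."""
    t = np.linspace(0, 1, samples)
    tri = 2 * np.abs(carrier_freq * t % 1 - 0.5)      # triangle in [0, 1]
    bands = n_levels - 1
    carriers = (np.arange(bands)[:, None] + tri[None, :]) * 2 / bands - 1
    r = ref(t)
    level = (r[None, :] > carriers).sum(axis=0)        # 0 .. n_levels - 1
    return t, level * 2 / bands - 1                    # normalized output

t, v = level_shifted_pwm(lambda t: 0.9 * np.sin(2 * np.pi * t))
print("distinct output levels:", np.unique(v).size)    # -> 6
```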

  14. Ensemble method: Community detection based on game theory

    NASA Astrophysics Data System (ADS)

    Zhang, Xia; Xia, Zhengyou; Xu, Shengwu; Wang, J. D.

    2014-08-01

    Timely and cost-effective analytics over social networks has emerged as a key ingredient for success in many businesses and government endeavors. Community detection is an active research area relevant to the analysis of online social networks. The choice of a particular community detection algorithm is crucial if the aim is to unveil the community structure of a network, since different algorithms have different advantages and depend on tuning specific parameters, and a given methodology can therefore affect the outcome of the experiments. In this paper, we propose a community division model based on game theory, which can effectively combine the advantages of previous algorithms to obtain a better community classification result. Experiments on standard datasets verify that our game-theoretic community detection model is valid and performs better.

  15. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2013-07-02

    A system for biosensor-based detection of toxins includes providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.
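
    As a minimal sketch of the two feature families the patent names, the following code (window sizes, thresholds, and the distance-based toxicity score are illustrative assumptions, not the patented procedure) builds a feature vector from amplitude statistics plus a coarse time-frequency summary and compares it against a control channel:

```python
# Feature-vector sketch: amplitude statistics plus a coarse time-frequency
# (spectrogram) summary, compared against a control signal.
import numpy as np
from scipy.signal import spectrogram
from scipy.stats import kurtosis, skew

def feature_vector(signal, fs):
    amp = np.array([signal.mean(), signal.std(),
                    skew(signal), kurtosis(signal)])   # amplitude statistics
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=256)
    tf = np.log(Sxx.mean(axis=1) + 1e-12)              # band-averaged log power
    return np.concatenate([amp, tf])

fs = 1000.0
control = np.random.randn(10_000)              # control-channel signal
monitored = 1.4 * np.random.randn(10_000)      # medium being analyzed

# One possible toxicity-related parameter: deviation from the control
score = np.linalg.norm(feature_vector(monitored, fs)
                       - feature_vector(control, fs))
```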

  16. An algebra-based method for inferring gene regulatory networks

    PubMed Central

    2014-01-01

    Background The inference of gene regulatory networks (GRNs) from experimental observations is at the heart of systems biology. This includes the inference of both the network topology and its dynamics. While there are many algorithms available to infer the network topology from experimental data, less emphasis has been placed on methods that infer network dynamics. Furthermore, since the network inference problem is typically underdetermined, it is essential to have the option of incorporating prior knowledge about the network into the inference process, along with an effective description of the search space of dynamic models. Finally, it is also important to have an understanding of how a given inference method is affected by experimental and other noise in the data used. Results This paper contains a novel inference algorithm using the algebraic framework of Boolean polynomial dynamical systems (BPDS), meeting all these requirements. The algorithm takes as input time series data, including those from network perturbations, such as knock-out mutant strains and RNAi experiments. It allows for the incorporation of prior biological knowledge while being robust to significant levels of noise in the data used for inference. It uses an evolutionary algorithm for local optimization with an encoding of the mathematical models as BPDS. The BPDS framework allows an effective representation of the search space for algebraic dynamic models that improves computational performance. The algorithm is validated with both simulated and experimental microarray expression profile data. Robustness to noise is tested using a published mathematical model of the segment polarity gene network in Drosophila melanogaster. Benchmarking of the algorithm is done by comparison with a spectrum of state-of-the-art network inference methods on data from the synthetic IRMA network to demonstrate that our method has good precision and recall for the network reconstruction task, while also

  17. A new initial alignment method of MIMU based on CCD

    NASA Astrophysics Data System (ADS)

    Ma, Kai; Ding, Quanxin; Zhang, Qiuzhi; Chen, Shaodong; Wang, Yongsheng

    2014-09-01

    A new initial alignment method for a MIMU based on a CCD is proposed in this article to overcome the time consumption and low accuracy of the traditional HMS. By utilizing the attitude information provided by the CCD, the MIMU can complete the initial alignment process and output real-time head attitude for computing the line of sight. Simulation results indicate that the CCD measurement system can effectively reduce the time spent on initial alignment and improve attitude measurement precision.

  18. Method of polishing nickel-base alloys and stainless steels

    DOEpatents

    Steeves, Arthur F.; Buono, Donald P.

    1981-01-01

    A chemical attack polish and polishing procedure for use on metal surfaces such as nickel base alloys and stainless steels. The chemical attack polish comprises Fe(NO₃)₃, concentrated CH₃COOH, concentrated H₂SO₄ and H₂O. The polishing procedure includes saturating a polishing cloth with the chemical attack polish and submicron abrasive particles and buffing the metal surface.

  19. THE APPLICATION OF CONTINUOUS WAVELET TRANSFORM BASED FOREGROUND SUBTRACTION METHOD IN 21 cm SKY SURVEYS

    SciTech Connect

    Gu Junhua; Xu Haiguang; Wang Jingying; Chen Wen; An Tao

    2013-08-10

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore easily distinguish them in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response error, our method also works significantly better than the spectral fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.
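
    As a toy illustration of the scale-separation idea (using a discrete wavelet decomposition from PyWavelets as a simplified stand-in for the paper's continuous transform, with a synthetic power-law foreground and an artificial oscillatory signal), the smooth large-scale coefficients can be zeroed and the small-scale detail kept as the signal estimate:

```python
# Toy scale-separation demo: a discrete wavelet decomposition (PyWavelets)
# stands in for the paper's continuous transform.  The power-law foreground
# and oscillatory "signal" are synthetic assumptions.
import numpy as np
import pywt

nu = np.linspace(100.0, 200.0, 1024)            # frequency channels [MHz]
foreground = 1e3 * (nu / 150.0) ** -2.6         # smooth power-law foreground
signal = 0.01 * np.sin(2 * np.pi * nu / 1.5)    # small-scale toy 21 cm term
spectrum = foreground + signal

coeffs = pywt.wavedec(spectrum, "db8", level=6)
coeffs[0][:] = 0.0      # drop the smooth approximation (foreground scales)
coeffs[1][:] = 0.0      # and the coarsest detail band, also foreground-like
recovered = pywt.waverec(coeffs, "db8")[:nu.size]   # small-scale estimate
```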

  20. The Application of Continuous Wavelet Transform Based Foreground Subtraction Method in 21 cm Sky Surveys

    NASA Astrophysics Data System (ADS)

    Gu, Junhua; Xu, Haiguang; Wang, Jingying; An, Tao; Chen, Wen

    2013-08-01

    We propose a continuous wavelet transform based non-parametric foreground subtraction method for the detection of the redshifted 21 cm signal from the epoch of reionization. This method is based on the assumption that the foreground spectra are smooth in the frequency domain, while the 21 cm signal spectrum is full of saw-tooth-like structures, so their characteristic scales are significantly different. We can therefore easily distinguish them in the wavelet coefficient space and perform the foreground subtraction. Compared with the traditional spectral fitting based method, our method is more tolerant of complex foregrounds. Furthermore, we find that when the instrument has uncorrected response error, our method also works significantly better than the spectral fitting based method. Our method obtains results similar to those of the Wp smoothing method, which is also non-parametric, but consumes much less computing time.

  1. Wavelet-based acoustic emission detection method with adaptive thresholding

    NASA Astrophysics Data System (ADS)

    Menon, Sunil; Schoess, Jeffrey N.; Hamza, Rida; Busch, Darryl

    2000-06-01

    Reductions in Navy maintenance budgets and available personnel have dictated the need to transition from time-based to 'condition-based' maintenance. Achieving this will require new enabling diagnostic technologies. One such technology, the use of acoustic emission for the early detection of helicopter rotor head dynamic component faults, has been investigated by Honeywell Technology Center for its rotor acoustic monitoring system (RAMS). This ambitious, 38-month, proof-of-concept effort, which was a part of the Naval Surface Warfare Center Air Vehicle Diagnostics System program, culminated in a successful three-week flight test of the RAMS system at Patuxent River Flight Test Center in September 1997. The flight test results demonstrated that stress-wave acoustic emission technology can detect signals equivalent to small fatigue cracks in rotor head components and can do so across the rotating articulated rotor head joints and in the presence of other background acoustic noise generated during flight operation. This paper presents the results of stress wave data analysis of the flight-test dataset using wavelet-based techniques to assess background operational noise vs. machinery failure detection results.

  2. Distributed Cooperation Solution Method of Complex System Based on MAS

    NASA Astrophysics Data System (ADS)

    Weijin, Jiang; Yuhui, Xu

    To adapt fault diagnosis models to dynamic environments and fully meet the needs of solving the tasks of a complex system, this paper introduces multi-agent and related technology to complicated fault diagnosis, and an integrated intelligent control system is studied. Based on the ideas of diagnostic decision structure and hierarchical modeling, and on a multi-layer decomposition strategy for the diagnosis task, a multi-agent synchronous diagnosis federation integrating different knowledge representation modes and inference mechanisms is presented. The functions of the management agent, diagnosis agent and decision agent are analyzed, the organization and evolution of agents in the system are proposed, and the corresponding conflict resolution algorithm is given. A layered structure of abstract agents with public attributes is built, and the system architecture is realized on a MAS distributed layered blackboard. A real-world application shows that the proposed control structure successfully solves the fault diagnosis problem of a complex plant and has particular advantages in the distributed domain.

  3. Bearing diagnosis based on Mahalanobis-Taguchi-Gram-Schmidt method

    NASA Astrophysics Data System (ADS)

    Shakya, Piyush; Kulkarni, Makarand S.; Darpe, Ashish K.

    2015-02-01

    A methodology is developed for defect type identification in rolling element bearings using the integrated Mahalanobis-Taguchi-Gram-Schmidt (MTGS) method. Vibration data recorded from bearings with seeded defects on outer race, inner race and balls are processed in time, frequency, and time-frequency domains. Eleven damage identification parameters (RMS, Peak, Crest Factor, and Kurtosis in time domain, amplitude of outer race, inner race, and ball defect frequencies in FFT spectrum and HFRT spectrum in frequency domain and peak of HHT spectrum in time-frequency domain) are computed. Using MTGS, these damage identification parameters (DIPs) are fused into a single DIP, Mahalanobis distance (MD), and gain values for the presence of all DIPs are calculated. The gain value is used to identify the usefulness of DIP and the DIPs with positive gain are again fused into MD by using Gram-Schmidt Orthogonalization process (GSP) in order to calculate Gram-Schmidt Vectors (GSVs). Among the remaining DIPs, sign of GSVs of frequency domain DIPs is checked to classify the probable defect. The approach uses MTGS method for combining the damage parameters and in conjunction with the GSV classifies the defect. A Defect Occurrence Index (DOI) is proposed to rank the probability of existence of a type of bearing damage (ball defect/inner race defect/outer race defect/other anomalies). The methodology is successfully validated on vibration data from a different machine, bearing type and shape/configuration of the defect. The proposed methodology is also applied on the vibration data acquired from the accelerated life test on the bearings, which established the applicability of the method on naturally induced and naturally progressed defect. It is observed that the methodology successfully identifies the correct type of bearing defect. The proposed methodology is also useful in identifying the time of initiation of a defect and has potential for implementation in a real time environment.
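
    The first step of the methodology, fusing several damage identification parameters into a single Mahalanobis distance relative to a healthy reference set, can be sketched as follows (the healthy-bearing data and the test DIP values are synthetic placeholders; the Gram-Schmidt classification stage is omitted):

```python
# Fusing damage identification parameters (DIPs) into one Mahalanobis
# distance, the first MTGS step.  Reference data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
# rows: healthy-bearing samples; columns: RMS, Peak, Crest Factor, Kurtosis
healthy = rng.normal(size=(200, 4))
mu = healthy.mean(axis=0)
S_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis_md(x):
    d = x - mu
    # scaled by the number of DIPs, as is usual in Mahalanobis-Taguchi work
    return float(d @ S_inv @ d) / healthy.shape[1]

test_dips = np.array([3.5, 9.0, 2.6, 7.8])   # DIPs from a suspect bearing
print(mahalanobis_md(test_dips))             # values >> 1 indicate damage
```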

  4. Delta wing flutter based on doublet lattice method in NASTRAN

    NASA Technical Reports Server (NTRS)

    Jew, H.

    1975-01-01

    The subsonic doublet-lattice method (DLM) aeroelastic analysis in NASTRAN was successfully applied to produce subsonic flutter boundary data in parameter space for a large delta wing configuration. Computed flow velocity and flutter frequency values as functions of air density ratio, flow Mach number, and reduced frequency are tabulated. The relevance and the meaning of the calculated results are discussed. Several input-deck problems encountered and overcome are cited with the hope that they may be helpful to NASTRAN Rigid Format 45 users.

  5. Based on the method of subaperture splicing detection on spherical

    NASA Astrophysics Data System (ADS)

    Zhao, Weirui; Liang, Zhengnan; Pan, Guangyu

    2012-11-01

    This paper studies stitching interferometry for large-aperture aspheric optical surfaces. By analyzing the overlap data of adjacent subapertures with the SIFT algorithm, we obtain the stitching parameters between subapertures and the overall surface information of the inspected mirror. We wrote programs for interferogram phase retrieval, Zernike fitting, and stitching, and completed a principle experiment. A comparison between the stitched result and a full-aperture test shows that the RMS deviation between the two methods is less than 2 nm.

  6. Significance of norms and completeness in variational based methods

    NASA Technical Reports Server (NTRS)

    Storch, Joel A.

    1989-01-01

    By means of a simple structural problem, an important requirement often overlooked in practice on the basis functions used in Rayleigh-Ritz-Galerkin type methods is brought into focus. The problem of the static deformation of a uniformly loaded beam is solved variationally by expanding the beam displacement in a Fourier Cosine series. The potential energy functional is rendered stationary subject to the geometric boundary conditions. It is demonstrated that the variational approach does not converge to the true solution. The object is to resolve this paradox, and in so doing, indicate the practical implications of norms and completeness in an appropriate inner product space.

  7. System and method for attitude determination based on optical imaging

    NASA Technical Reports Server (NTRS)

    Junkins, John L. (Inventor); Pollock, Thomas C. (Inventor); Mortari, Daniele (Inventor)

    2003-01-01

    A method and apparatus is provided for receiving a first set of optical data from a first field of view and receiving a second set of optical data from a second field of view. A portion of the first set of optical data is communicated and a portion of the second set of optical data is reflected, both toward an optical combiner. The optical combiner then focuses the portions onto the image plane such that information at the image plane that is associated with the first and second fields of view is received by an optical detector and used to determine an attitude characteristic.

  8. The decoding method based on wavelet image En vector quantization

    NASA Astrophysics Data System (ADS)

    Liu, Chun-yang; Li, Hui; Wang, Tao

    2013-12-01

    With the rapid progress of internet technology, large scale integrated circuits and computer technology, digital image processing technology has been greatly developed. Vector quantization plays a very important role in digital image compression. It has advantages over scalar quantization, such as a higher compression ratio and a simple decoding algorithm, and has therefore been widely used in many practical fields. This paper combines the wavelet analysis method and the vector quantization En encoder efficiently and tests the approach on standard images. The experimental results show a great improvement in PSNR compared with the LBG algorithm.

  9. Optical tissue phantoms based on spin coating method

    NASA Astrophysics Data System (ADS)

    Park, Jihoon; Ha, Myungjin; Yu, Sung Kon; Radfar, Edalat; Jun, Eunkwon; Lee, Nara; Jung, Byungjo

    2015-03-01

    Fabrication of optical tissue phantoms (OTPs) simulating the whole skin structure has been regarded as laborious and time-consuming work. This study fabricated a multilayer OTP optically and structurally simulating the epidermis-dermis structure, including blood vessels. The spin coating method was used to produce a thin layer mimicking the epidermal layer and was then optimized for reference epoxy and silicone matrices. The adequacy of both materials for phantom fabrication was assessed by comparing the fabrication results. In addition, the similarity between the OTP and biological tissue in optical properties and thickness was measured to evaluate the fabrication process.

  10. 3D range scan enhancement using image-based methods

    NASA Astrophysics Data System (ADS)

    Herbort, Steffen; Gerken, Britta; Schugk, Daniel; Wöhler, Christian

    2013-10-01

    This paper addresses the problem of 3D surface scan refinement, which is desirable due to noise, outliers, and missing measurements being present in the 3D surfaces obtained with a laser scanner. We present a novel algorithm for the fusion of absolute laser scanner depth profiles and photometrically estimated surface normal data, which yields a noise-reduced and highly detailed depth profile with large scale shape robustness. In contrast to other approaches published in the literature, the presented algorithm (1) regards non-Lambertian surfaces, (2) simultaneously computes surface reflectance (i.e. BRDF) parameters required for 3D reconstruction, (3) models pixelwise incident light and viewing directions, and (4) accounts for interreflections. The algorithm as such relies on the minimization of a three-component error term, which penalizes intensity deviations, integrability deviations, and deviations from the known large-scale surface shape. The solution of the error minimization is obtained iteratively based on a calculus of variations. BRDF parameters are estimated by initially reducing and then iteratively refining the optical resolution, which provides the required robust data basis. The 3D reconstruction of concave surface regions affected by interreflections is improved by compensating global illumination in the image data. The algorithm is evaluated based on eight objects with varying albedos and reflectance behaviors (diffuse, specular, metallic). The qualitative evaluation shows a removal of outliers and a strong reduction of noise, while the large scale shape is preserved. Fine surface details, which were previously not contained in the surface scans, are incorporated using the image data. The algorithm is evaluated with respect to its absolute accuracy using two caliper objects of known shape, and based on synthetically generated data. The beneficial effect of interreflection compensation on the reconstruction accuracy is evaluated quantitatively in a

  11. Method for Stereo Mapping Based on Objectarx and Pipeline Technology

    NASA Astrophysics Data System (ADS)

    Liu, F.; Chen, T.; Lin, Z.; Yang, Y.

    2012-07-01

    Stereo mapping is an important way to acquire 4D products. Based on developments in stereo mapping and the characteristics of ObjectARX and pipeline technology, a new stereo mapping scheme that realizes interaction between AutoCAD and a digital photogrammetry system is offered. An experiment was performed to verify its feasibility using the software MAP-AT (Modern Aerial Photogrammetry Automatic Triangulation) as an example. The experimental results show that this scheme is feasible and is of great significance for integrating data acquisition and editing.

  12. METHOD FOR ANNEALING AND ROLLING ZIRCONIUM-BASE ALLOYS

    DOEpatents

    Picklesimer, M.L.

    1959-07-14

    A fabrication procedure is presented for alpha-stabilized zirconium-base alloys, and in particular Zircaloy-2. The alloy is initially worked at a temperature outside the alpha-plus-beta range (810 to 970°C), held at a temperature above 970°C for 30 minutes, and cooled rapidly. The alloy is then cold-worked to reduce its size by at least 20% and annealed at a temperature from 700 to 810°C. This procedure serves both to prevent the formation of stringers and to provide a randomly oriented crystal structure.

  13. A novel gene detection method based on period-3 property.

    PubMed

    Huang, Lun; Bataineh, Mohammad Al; Atkin, G E; Wang, Siyun; Zhang, Wei

    2009-01-01

    Processing of biomolecular sequences using communication theory techniques provides powerful approaches for solving highly relevant problems in bioinformatics by properly mapping character strings into numerical sequences. We provide an optimized procedure for predicting protein-coding regions in DNA sequences based on the period-3 property of coding regions. We present a digital correlating and filtering approach for predicting these regions and locate them using the magnitude of the output sequence. These approaches result in improved computational techniques for the solution of useful problems in genomic information science and technology. PMID:19963599
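
    A common baseline form of period-3 detection, which the correlating-and-filtering procedure above refines, maps the sequence to Voss binary indicator sequences and reads the DFT power at frequency N/3; a minimal sketch (the window length and the toy input are arbitrary choices):

```python
# Baseline period-3 detector: Voss binary indicator sequences and the DFT
# power at k = N/3.  Window length and the toy input are arbitrary.
import numpy as np

def period3_score(dna, window=351):
    """Sliding period-3 spectral content; peaks suggest coding regions."""
    dna = dna.upper()
    scores = []
    for start in range(0, len(dna) - window + 1, 3):
        seq = dna[start:start + window]
        s = 0.0
        for base in "ACGT":
            x = np.array([1.0 if c == base else 0.0 for c in seq])
            s += np.abs(np.fft.fft(x)[len(seq) // 3]) ** 2
        scores.append(s)
    return np.array(scores)

scores = period3_score("ATGGCC" * 200)   # periodic toy sequence scores high
```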

  14. Modeling of a PVDF based gesture controller using energy methods

    NASA Astrophysics Data System (ADS)

    Van Volkinburg, Kyle R.; Washington, Gregory N.

    2014-03-01

    In this paper the concept of a PVDF based gesture controller is introduced and accompanied by a supporting model derived using Hamilton's principle. The model incorporates strain contributions from two loading situations: beam subject to transverse loading and axial loading. The prototype gesture controller is comprised of a compression sleeve with a spatially shaded PVDF element situated above the extensor muscles of the right forearm. The goal of the gesture controller, at this stage, is to be able to measure and discern forearm muscle activity for three distinct hand gestures. In this study the system was modeled and simulated. Test data was then collected for each hand gesture and compared to simulations.

  15. Genetic-evolution-based optimization methods for engineering design

    NASA Technical Reports Server (NTRS)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
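
    A minimal sketch of the reproduction-crossover-mutation loop the paper describes, applied to a toy continuous minimization problem rather than the structural objectives used in the study (population size, rates, and the test function are illustrative assumptions):

```python
# Toy genetic algorithm: fitness-proportionate reproduction, single-point
# crossover, and Gaussian mutation on a simple test function.
import numpy as np

rng = np.random.default_rng(1)

def fitness(x):                      # maximize => minimize sphere function
    return -np.sum(x ** 2)

pop = rng.uniform(-5, 5, size=(40, 3))
for gen in range(100):
    f = np.array([fitness(x) for x in pop])
    # reproduction: fitness-proportionate selection of parent indices
    p = f - f.min() + 1e-9
    parents = rng.choice(len(pop), size=len(pop), p=p / p.sum())
    # crossover: single-point mix of consecutive parent pairs
    children = pop[parents].copy()
    for i in range(0, len(children) - 1, 2):
        cut = rng.integers(1, children.shape[1])
        children[i, cut:], children[i + 1, cut:] = \
            children[i + 1, cut:].copy(), children[i, cut:].copy()
    # mutation: small Gaussian perturbation on a few genes
    mask = rng.random(children.shape) < 0.05
    children[mask] += rng.normal(0.0, 0.3, size=mask.sum())
    pop = children

print(pop[np.argmax([fitness(x) for x in pop])])   # near the optimum [0, 0, 0]
```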

  16. Determining the base resistance of InP HBTs: An evaluation of methods and structures

    NASA Astrophysics Data System (ADS)

    Nardmann, Tobias; Krause, Julia; Pawlak, Andreas; Schroter, Michael

    2016-09-01

    Many different methods can be found in the literature for determining both the internal and external base series resistance based on single transistor terminal characteristics. Those methods are not equally reliable or applicable for all technologies, device sizes and speeds. In this review, the most common methods are evaluated regarding their suitability for InP heterojunction bipolar transistors (HBTs) based on both measured and simulated data. Using data generated by a sophisticated physics-based compact model allows an evaluation of the extraction method precision by comparing the extracted parameter value to its known value. Based on these simulations, this study provides insight into the limitations of the applied methods, causes for errors and possible error mitigation. In addition to extraction methods based on just transistor terminal characteristics, test structures for separately determining the components of the base resistance from sheet and specific contact resistances are discussed and applied to serve as reference for the experimental evaluation.

  17. Input space versus feature space in kernel-based methods.

    PubMed

    Schölkopf, B; Mika, S; Burges, C C; Knirsch, P; Müller, K R; Rätsch, G; Smola, A J

    1999-01-01

    This paper collects some ideas targeted at advancing our understanding of the feature spaces associated with support vector (SV) kernel functions. We first discuss the geometry of feature space. In particular, we review what is known about the shape of the image of input space under the feature space map, and how this influences the capacity of SV methods. Following this, we describe how the metric governing the intrinsic geometry of the mapped surface can be computed in terms of the kernel, using the example of the class of inhomogeneous polynomial kernels, which are often used in SV pattern recognition. We then discuss the connection between feature space and input space by dealing with the question of how one can, given some vector in feature space, find a preimage (exact or approximate) in input space. We describe algorithms to tackle this issue, and show their utility in two applications of kernel methods. First, we use it to reduce the computational complexity of SV decision functions; second, we combine it with the Kernel PCA algorithm, thereby constructing a nonlinear statistical denoising technique which is shown to perform well on real-world data. PMID:18252603
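
    For the Kernel PCA denoising application mentioned above, an approximate preimage computation is available off the shelf; the sketch below uses scikit-learn's KernelPCA with fit_inverse_transform=True on a noisy synthetic circle (a generic substitute for illustration, not the authors' own preimage algorithms):

```python
# Kernel PCA denoising: project into feature space, keep a few components,
# and map back to approximate preimages in input space.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 300)
X = np.column_stack([np.cos(theta), np.sin(theta)])   # clean circle
X_noisy = X + rng.normal(0, 0.15, X.shape)

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True)
Z = kpca.fit_transform(X_noisy)
X_denoised = kpca.inverse_transform(Z)    # approximate preimages
```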

  18. Speckle reduction methods in laser-based picture projectors

    NASA Astrophysics Data System (ADS)

    Akram, M. Nadeem; Chen, Xuyuan

    2016-02-01

    Laser sources have for many years promised to be better light sources than traditional lamps or light-emitting diodes (LEDs) for projectors, enabling projectors with a wide colour gamut for vivid images, super brightness and high contrast for the best picture quality, a long lifetime for maintenance-free operation, mercury-free construction, and low power consumption for a green environment. A major technological obstacle to using lasers for projection has been the speckle noise caused by the coherent nature of lasers. For speckle reduction, current state-of-the-art solutions rely on moving parts with large physical space demands. Solutions beyond the state of the art need to be developed, such as integrated optical components, hybrid MOEMS devices, and active phase modulators for compact speckle reduction. In this article, the major methods reported in the literature for speckle reduction in laser projectors are presented and explained. With the advancement of semiconductor lasers, with greatly reduced cost for the red, green and blue primary colours, and the methods developed for their speckle reduction, it is hoped that lasers will be widely utilized in different projector applications in the near future.

  19. Method for breast cancer classification based solely on morphological descriptors

    NASA Astrophysics Data System (ADS)

    Todd, Catherine A.; Naghdy, Golshah

    2004-05-01

    A decision support system has been developed to assist the radiologist during mammogram classification. In this paper, mass identification and segmentation methods are discussed in brief. Fuzzy region-growing techniques are applied to effectively segment the tumour candidate from surrounding breast tissue. Boundary extraction is implemented using a unit vector rotating about the mass core. The focus of this work is on the feature extraction and classification processes. Important information relating to the malignancy of a mass may be derived from its morphological properties. Mass shape and boundary roughness are the primary features used in this research to discriminate between the two types of lesions. A subset of thirteen shape descriptors is input to a binary decision tree classifier that provides a final diagnosis of tumour malignancy. Features that combine to produce the most accurate result in distinguishing between malignant and benign lesions include: spiculation index, zero crossings, boundary roughness index and area-to-perimeter ratio. Using this method, a classification result of high sensitivity and specificity is achieved, with false-positive and false-negative rates of 9.3% and 0% respectively.
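
    As a schematic of the final classification stage, the sketch below feeds the four named descriptors into a binary decision tree (the feature distributions and labels are synthetic placeholders, not the study's mammographic data):

```python
# Binary decision tree over morphological shape descriptors.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
# columns: spiculation index, zero crossings, boundary roughness index,
# area-to-perimeter ratio (synthetic placeholder distributions)
benign = np.column_stack([rng.normal(0.2, 0.05, 50), rng.normal(8, 2, 50),
                          rng.normal(0.1, 0.03, 50), rng.normal(12, 2, 50)])
malignant = np.column_stack([rng.normal(0.6, 0.10, 50), rng.normal(20, 4, 50),
                             rng.normal(0.4, 0.08, 50), rng.normal(6, 2, 50)])
X = np.vstack([benign, malignant])
y = np.array([0] * 50 + [1] * 50)          # 0 = benign, 1 = malignant

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(clf.predict([[0.55, 18, 0.35, 7]]))  # likely classified as malignant
```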

  20. Extraction of sea ice concentration based on spectral unmixing method

    NASA Astrophysics Data System (ADS)

    Zhang, Dong; Ke, Changqing; Sun, Bo; Lei, Ruibo; Tang, Xueyuan

    2011-01-01

    The traditional methods of deriving sea ice concentration rely mainly on low resolution microwave data, which makes it difficult to meet the grid size requirements of high resolution climate models. In this paper, moderate resolution imaging spectroradiometer (MODIS)/Terra calibrated radiances Level-1B (MOD02HKM) data with 500 m resolution in the vicinity of the Abbot Ice Shelf, Antarctica, are unmixed by two neural networks to extract the sea ice concentration. After the two neural network models and the MODIS potential open water algorithm (MPA) are introduced, a MOD02HKM image is unmixed using these neural networks and sea ice concentration maps are derived. At the same time, sea ice concentration for the same area is extracted by MPA from MODIS/Terra sea ice extent (MOD29) data with 1 km resolution. Comparisons among the sea ice concentration results of the three algorithms show that the spectral unmixing method is suitable for the extraction of sea ice concentration at high resolution and that the accuracy of the radial basis function neural network is better than that of backpropagation.

  1. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1994-01-01

    A system and a method for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  2. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, James L.; Wiczer, James J.

    1995-01-01

    A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device.

  3. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1994-01-25

    A system and a method for imaging desired surfaces of a workpiece are described. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  4. Non-contact capacitance based image sensing method and system

    DOEpatents

    Novak, J.L.; Wiczer, J.J.

    1995-01-03

    A system and a method are provided for imaging desired surfaces of a workpiece. A sensor having first and second sensing electrodes which are electrically isolated from the workpiece is positioned above and in proximity to the desired surfaces of the workpiece. An electric field is developed between the first and second sensing electrodes of the sensor in response to input signals being applied thereto and capacitance signals are developed which are indicative of any disturbances in the electric field as a result of the workpiece. An image signal of the workpiece may be developed by processing the capacitance signals. The image signals may provide necessary control information to a machining device for machining the desired surfaces of the workpiece in processes such as deburring or chamfering. Also, the method and system may be used to image dimensions of weld pools on a workpiece and surfaces of glass vials. The sensor may include first and second preview sensors used to determine the feed rate of a workpiece with respect to the machining device. 18 figures.

  5. Method for forming bismuth-based superconducting ceramics

    DOEpatents

    Maroni, Victor A.; Merchant, Nazarali N.; Parrella, Ronald D.

    2005-05-17

    A method for reducing the concentration of non-superconducting phases during the heat treatment of Pb doped Ag/Bi-2223 composites having Bi-2223 and Bi-2212 superconducting phases is disclosed. A Pb doped Ag/Bi-2223 composite having Bi-2223 and Bi-2212 superconducting phases is heated in an atmosphere having an oxygen partial pressure not less than about 0.04 atmospheres and the temperature is maintained at the lower of a non-superconducting phase take-off temperature and the Bi-2223 superconducting phase grain growth take-off temperature. The oxygen partial pressure is varied and the temperature is varied between about 815°C and about 835°C to produce not less than 80 percent conversion to the Pb doped Bi-2223 superconducting phase and not greater than about 20 volume percent non-superconducting phases. The oxygen partial pressure is preferably varied between about 0.04 and about 0.21 atmospheres. A product made by the method is also disclosed.

  6. A comparison of vibration damping methods for ground based telescopes

    NASA Astrophysics Data System (ADS)

    Anderson, Eric H.; Glaese, Roger M.; Neill, Douglas

    2008-07-01

    Vibration is becoming a more important element in design of telescope structures as these structures become larger and more compliant and include higher bandwidth actuation systems. This paper describes vibration damping methods available for current and future implementation and compares their effectiveness for a model of the Large Synoptic Survey Telescope (LSST), a structure that is actually stiffer than most large telescopes. Although facility and mount design, structural stiffening and occasionally vibration isolation have been adequate in telescopes built to date, vibration damping offers a mass-efficient means of reducing vibration response, whether the vibration results from external wind disturbances, telescope slewing, or other internal disturbances from translating or rotating components. The paper presents several damping techniques including constrained layer viscoelastics, viscous and magnetorheological (MR) fluid devices, passive and active piezoelectric dampers, tuned mass dampers (vibration absorbers) and active resonant dampers. Basic architectures and practical implementation considerations are discussed and expected performance is assessed using a finite element model of the LSST. With a goal of reducing settling time during the telescope's surveys, and considering practicalities of integration with the telescope structure, two damping methods were identified as most appropriate: passive tuned mass dampers and active electromagnetic resonant dampers.

  7. A Markov Chain Monte Carlo Based Method for System Identification

    SciTech Connect

    Glaser, R E; Lee, C L; Nitao, J J; Hanley, W G

    2002-10-22

    This paper describes a novel methodology for the identification of mechanical systems and structures from vibration response measurements. It combines prior information, observational data and predictive finite element models to produce configurations and system parameter values that are most consistent with the available data and model. Bayesian inference and a Metropolis simulation algorithm form the basis for this approach. The resulting process enables the estimation of distributions of both individual parameters and system-wide states. Attractive features of this approach include its ability to: (1) provide quantitative measures of the uncertainty of a generated estimate; (2) function effectively when exposed to degraded conditions including: noisy data, incomplete data sets and model misspecification; (3) allow alternative estimates to be produced and compared, and (4) incrementally update initial estimates and analysis as more data becomes available. A series of test cases based on a simple fixed-free cantilever beam is presented. These results demonstrate that the algorithm is able to identify the system, based on the stiffness matrix, given applied force and resultant nodal displacements. Moreover, it effectively identifies locations on the beam where damage (represented by a change in elastic modulus) was specified.
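
    A stripped-down version of the Metropolis step at the heart of such an approach might look as follows, here estimating a single stiffness parameter of a cantilever from noisy tip-displacement data (the measurement model, proposal width, and priors are illustrative assumptions):

```python
# Metropolis sampler for a stiffness parameter k given noisy measurements
# of tip displacement d = F / k.  All model details are illustrative.
import numpy as np

rng = np.random.default_rng(3)
k_true, sigma = 2.0, 0.05
forces = np.linspace(1.0, 5.0, 10)
disp = forces / k_true + rng.normal(0, sigma, forces.size)   # synthetic data

def log_post(k):
    if k <= 0:
        return -np.inf                        # positivity prior
    resid = disp - forces / k
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

k, samples = 1.0, []
for _ in range(20_000):
    k_prop = k + rng.normal(0, 0.1)           # random-walk proposal
    if np.log(rng.random()) < log_post(k_prop) - log_post(k):
        k = k_prop                            # accept
    samples.append(k)

post = np.array(samples[5000:])               # discard burn-in
print(post.mean(), post.std())                # estimate and its uncertainty
```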

  8. A New Quaternion-Based Encryption Method for DICOM Images.

    PubMed

    Dzwonkowski, Mariusz; Papaj, Michal; Rykaczewski, Roman

    2015-11-01

    In this paper, a new quaternion-based lossless encryption technique for digital imaging and communications in medicine (DICOM) images is proposed. We have scrutinized and slightly modified the concept of the DICOM network to point out the best location for the proposed encryption scheme, which significantly improves the speed of DICOM image encryption in comparison with the advanced encryption standard and triple data encryption standard algorithms originally embedded in DICOM. The proposed algorithm decomposes a DICOM image into two 8-bit gray-tone images in order to perform encryption. The algorithm implements a Feistel network like the scheme proposed by Sastry and Kumar, and uses special properties of quaternions to perform rotations of data sequences in 3D space for each of the cipher rounds. The images are written as Lipschitz quaternions, and modular arithmetic is implemented for operations with the quaternions. A computer-based analysis has been carried out, and the obtained results are shown at the end of this paper. PMID:26276993

  9. Energetics-Based Methods for Protein Folding and Stability Measurements

    NASA Astrophysics Data System (ADS)

    Geer, M. Ariel; Fitzgerald, Michael C.

    2014-06-01

    Over the past 15 years, a series of energetics-based techniques have been developed for the thermodynamic analysis of protein folding and stability. These techniques include Stability of Unpurified Proteins from Rates of amide H/D Exchange (SUPREX), pulse proteolysis, Stability of Proteins from Rates of Oxidation (SPROX), slow histidine H/D exchange, lysine amidination, and quantitative cysteine reactivity (QCR). The above techniques, which are the subject of this review, all utilize chemical or enzymatic modification reactions to probe the chemical denaturant- or temperature-induced equilibrium unfolding properties of proteins and protein-ligand complexes. They employ various mass spectrometry-, sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE)-, and optical spectroscopy-based readouts that are particularly advantageous for high-throughput and in some cases multiplexed analyses. This has created the opportunity to use protein folding and stability measurements in new applications such as in high-throughput screening projects to identify novel protein ligands and in mode-of-action studies to identify protein targets of a particular ligand.

  10. Well casing-based geophysical sensor apparatus, system and method

    DOEpatents

    Daily, William D.

    2010-03-09

    A geophysical sensor apparatus, system, and method for use in, for example, oil well operations, and in particular using a network of sensors emplaced along and outside oil well casings to monitor critical parameters in an oil reservoir and provide geophysical data remote from the wells. Centralizers are affixed to the well casings and the sensors are located in the protective spheres afforded by the centralizers to keep from being damaged during casing emplacement. In this manner, geophysical data may be detected of a sub-surface volume, e.g. an oil reservoir, and transmitted for analysis. Preferably, data from multiple sensor types, such as ERT and seismic data are combined to provide real time knowledge of the reservoir and processes such as primary and secondary oil recovery.

  11. Minimum dominating set-based methods for analyzing biological networks.

    PubMed

    Nacher, Jose C; Akutsu, Tatsuya

    2016-06-01

    The fast increase of 'multi-omics' data does not only pose a computational challenge for its analysis but also requires novel algorithmic methodologies to identify complex biological patterns and decipher the ultimate roots of human disorders. To that end, the massive integration of omics data with disease phenotypes is offering a new window into the cell functionality. The minimum dominating set (MDS) approach has rapidly emerged as a promising algorithmic method to analyze complex biological networks integrated with human disorders, which can be composed of a variety of omics data, from proteomics and transcriptomics to metabolomics. Here we review the main theoretical foundations of the methodology and the key algorithms, and examine the recent applications in which biological systems are analyzed by using the MDS approach. PMID:26773457
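
    The combinatorial core of these analyses, finding a small dominating set, is NP-hard, so a standard greedy approximation is commonly used in practice; a minimal sketch on a toy graph standing in for a protein-protein interaction network:

```python
# Greedy approximation of a minimum dominating set on a toy graph.
def greedy_mds(adj):
    """adj: dict node -> set of neighbours. Returns an approximate MDS."""
    undominated = set(adj)
    mds = set()
    while undominated:
        # pick the node covering the most still-undominated nodes
        best = max(adj, key=lambda v: len(({v} | adj[v]) & undominated))
        mds.add(best)
        undominated -= {best} | adj[best]
    return mds

ppi = {"A": {"B", "C"}, "B": {"A"}, "C": {"A", "D"},
       "D": {"C", "E"}, "E": {"D"}}
print(greedy_mds(ppi))   # e.g. {'A', 'D'} dominates every node
```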

  12. Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition

    NASA Technical Reports Server (NTRS)

    Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd

    2015-01-01

    Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.

  13. Research on Palmprint Identification Method Based on Quantum Algorithms

    PubMed Central

    Zhang, Zhanzhan

    2014-01-01

    Quantum image recognition is a technology that uses quantum algorithms to process image information and can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filtering algorithm achieves a better result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, owing to quantum parallelism. The proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%. PMID:25105165

  14. Formal Methods for Autonomic and Swarm-based Systems

    NASA Technical Reports Server (NTRS)

    Rouff, Christopher; Vanderbilt, Amy; Hinchey, Mike; Truszkowski, Walt; Rash, James

    2004-01-01

    Swarms of intelligent rovers and spacecraft are being considered for a number of future NASA missions. These missions will provide NASA scientists and explorers greater flexibility and the chance to gather more science than traditional single-spacecraft missions. These swarms of spacecraft are intended to operate for long periods of time without contact with the Earth. To do this, they must be highly autonomous, have autonomic properties, and utilize sophisticated artificial intelligence. The Autonomous Nano Technology Swarm (ANTS) mission is an example of one of the swarm-type missions NASA is considering. This mission will explore the asteroid belt using an insect colony analogy, cataloging the mass, density, morphology, and chemical composition of the asteroids, including any anomalous concentrations of specific minerals. Verifying such a system would be a huge task. This paper discusses ongoing work to develop a formal method for verifying swarm and autonomic systems.

  15. Nanotunneling Junction-based Hyperspectal Polarimetric Photodetector and Detection Method

    NASA Technical Reports Server (NTRS)

    Son, Kyung-ah (Inventor); Moon, Jeongsun J. (Inventor); Chattopadhyay, Goutam (Inventor); Liao, Anna (Inventor); Ting, David (Inventor)

    2009-01-01

    A photodetector, detector array, and method of operation thereof in which nanojunctions are formed by crossing layers of nanowires. The crossing nanowires are separated by a few nm thick electrical barrier layer which allows tunneling. Each nanojunction is coupled to a slot antenna for efficient and frequency-selective coupling to photo signals. The nanojunctions formed at the intersection of the crossing wires defines a vertical tunneling diode that rectifies the AC signal from a coupled antenna and generates a DC signal suitable for reforming a video image. The nanojunction sensor allows multi/hyper spectral imaging of radiation within a spectral band ranging from terahertz to visible light, and including infrared (IR) radiation. This new detection approach also offers unprecedented speed, sensitivity and fidelity at room temperature.

  16. Entropy-based method to evaluate the data integrity

    NASA Astrophysics Data System (ADS)

    Peng, Xu; Tianyu, Ma; Yongjie, Jin

    2006-12-01

    The projection stage of single photon emission computed tomography (SPECT) is discussed to analyze the characteristics of information transmission and evaluate data integrity. Information is transferred from the source to the detector in the photon emitting process. In the projection stage, the integrity of the projection data can be assessed by an information entropy measure: the conditional entropy, which represents the average uncertainty of the source object given the projection data. Simulations were performed to study projection data of emission computed tomography with a pinhole collimator, and several types of collimators were treated. Results demonstrate that the conditional entropy reflects the data integrity and indicates how well the algorithms are matched or mismatched to the geometry. This new method for assessing data integrity is devised to help decision makers improve the quality of image reconstruction.
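
    The integrity measure itself is the conditional entropy H(S|P) of the source given the projections; a minimal sketch computing it from a joint distribution (the joint probabilities are a toy stand-in for the simulated SPECT system):

```python
# Conditional entropy H(S|P) from a joint source/projection distribution.
import numpy as np

def conditional_entropy(p_joint):
    """H(S|P) for a joint distribution p(s, p); rows index source states."""
    p_proj = p_joint.sum(axis=0)                 # marginal over projections
    h = 0.0
    for j, pp in enumerate(p_proj):
        if pp > 0:
            cond = p_joint[:, j] / pp            # p(s | p_j)
            nz = cond > 0
            h -= pp * np.sum(cond[nz] * np.log2(cond[nz]))
    return h

# perfectly informative projections leave no source uncertainty: 0 bits
print(conditional_entropy(np.eye(4) / 4))
# uninformative projections: H(S|P) equals the source entropy, 2 bits
print(conditional_entropy(np.full((4, 4), 1 / 16)))
```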

  17. Library-based methods for identification of soluble expression constructs.

    PubMed

    Yumerefendi, Hayretin; Desravines, Danielle C; Hart, Darren J

    2011-09-01

    When expression or crystallisation of a protein target in its wild-type full-length form proves problematic, a common strategy is to divide it into subconstructs comprising one or more domains. Rational construct design is not always successful, especially with targets for which there are few similar sequences to generate multiple sequence alignments. Even when this is possible, expression constructs may still fail to yield soluble protein, commonly expressing insolubly or at unusable yields. To address this, several new methods have been described that borrow concepts from the field of directed evolution whereby a random library is generated encompassing construct diversity; this is then screened to identify soluble constructs empirically. Here, we review progress in this area. PMID:21723393

  18. Spectral analysis comparisons of Fourier-theory-based methods and minimum variance (Capon) methods

    NASA Astrophysics Data System (ADS)

    Garbanzo-Salas, Marcial; Hocking, Wayne. K.

    2015-09-01

    In recent years, adaptive (data dependent) methods have been introduced into many areas where Fourier spectral analysis has traditionally been used. Although the data-dependent methods are often advanced as being superior to Fourier methods, they do require some finesse in choosing the order of the relevant filters. In performing comparisons, we have found some concerns about the mappings, particularly in cases involving many spectral lines or even continuous spectral signals. Using numerical simulations, several comparisons between Fourier transform procedures and the minimum variance method (MVM) have been performed. For multiple frequency signals, the MVM resolves most of the frequency content only for filters that have more degrees of freedom than the number of distinct spectral lines in the signal. In the case of Gaussian spectral approximation, the MVM always underestimates the width and can misplace the spectral line in some circumstances. Large filters can be used to improve results with multiple frequency signals, but are computationally inefficient. Significant biases can occur when using the MVM to study spectral information or echo power from the atmosphere; artifacts and artificial narrowing of turbulent layers are examples of such impacts.
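
    For reference, a bare-bones MVM (Capon) estimator next to a periodogram illustrates the filter-order dependence discussed above (the filter order, diagonal loading, and the two-tone test signal are arbitrary choices):

```python
# Bare-bones Capon (MVM) spectrum versus a periodogram.
import numpy as np

rng = np.random.default_rng(4)
n, p = 512, 24                      # samples, MVM filter order
t = np.arange(n)
x = (np.sin(2 * np.pi * 0.10 * t) + np.sin(2 * np.pi * 0.20 * t)
     + 0.5 * rng.standard_normal(n))

# Toeplitz autocorrelation matrix R (p x p) from biased lag estimates
r = np.correlate(x, x, mode="full")[n - 1:n + p] / n
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
R_inv = np.linalg.inv(R + 1e-6 * np.eye(p))     # light diagonal loading

freqs = np.linspace(0.0, 0.5, 256)              # cycles per sample
steer = np.exp(2j * np.pi * np.outer(np.arange(p), freqs))
mvm = np.array([1.0 / np.real(a.conj() @ R_inv @ a) for a in steer.T])
periodogram = np.abs(np.fft.rfft(x)) ** 2 / n

# With p = 24 degrees of freedom against two spectral lines, the MVM
# resolves both tones, consistent with the condition discussed above.
```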

  19. A T Matrix Method Based upon Scalar Basis Functions

    NASA Technical Reports Server (NTRS)

    Mackowski, D.W.; Kahnert, F. M.; Mishchenko, Michael I.

    2013-01-01

    A surface integral formulation is developed for the T matrix of a homogenous and isotropic particle of arbitrary shape, which employs scalar basis functions represented by the translation matrix elements of the vector spherical wave functions. The formulation begins with the volume integral equation for scattering by the particle, which is transformed so that the vector and dyadic components in the equation are replaced with associated dipole and multipole level scalar harmonic wave functions. The approach leads to a volume integral formulation for the T matrix, which can be extended, by use of Green's identities, to the surface integral formulation. The result is shown to be equivalent to the traditional surface integral formulas based on the VSWF basis.

  20. Tetraethyl orthosilicate-based glass composition and method

    DOEpatents

    Wicks, George G.; Livingston, Ronald R.; Baylor, Lewis C.; Whitaker, Michael J.; O'Rourke, Patrick E.

    1997-01-01

    A tetraethyl orthosilicate-based, sol-gel glass composition with additives selected for various applications. The composition is made by mixing ethanol, water, and tetraethyl orthosilicate, adjusting the pH into the acid range, and aging the mixture at room temperature. The additives, such as an optical indicator, filler, or catalyst, are then added to the mixture to form the composition which can be applied to a substrate before curing. If the additive is an indicator, the light-absorbing characteristics of which vary upon contact with a particular analyte, the indicator can be applied to a lens, optical fiber, reagent strip, or flow cell for use in chemical analysis. Alternatively, an additive such as alumina particles is blended into the mixture to form a filler composition for patching cracks in metal, glass, or ceramic piping.

  1. Method of narcissus analysis in infrared system based on ASAP

    NASA Astrophysics Data System (ADS)

    Ren, Guodong; Zhang, Liang; Lan, Weihua; Pan, Xiaodong

    2015-11-01

    The narcissus effect in cooled infrared systems must be strictly controlled, so accurate and rapid narcissus analysis is very important. We derive the SNR of narcissus from the definition of noise equivalent power and analyze narcissus using the simulation software CodeV and ASAP. The optical surfaces with the most serious narcissus are screened out in CodeV; the system is then modeled in ASAP with appropriate surface properties, and the size and average irradiance of the narcissus spot on the image are obtained by real ray tracing. The SNR of narcissus is calculated by substituting the average irradiance into the preceding formulation. On this basis, simulation analysis and experimental tests of the narcissus of an infrared lens were performed; the experimental result is consistent with the simulation analysis.

  2. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    The development of video and computing technologies and of computer vision makes automatic fire detection from video possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. This article describes a colour based fire detection algorithm. Colour information alone, however, is not enough to detect fire properly, mainly because the scene may contain many objects with colours similar to fire. The temporal intensity variation of pixels, averaged over a series of several frames, is therefore used to separate such objects from fire. The algorithm works robustly and was implemented as a computer program using the OpenCV library.
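
    A minimal sketch of this two-stage idea, with a rule-based fire-colour mask followed by a temporal intensity-variation filter averaged over a frame series (the colour thresholds are common heuristics, not necessarily the authors' values; pure NumPy is used in place of OpenCV for brevity):

```python
# Fire-colour mask plus temporal flicker filter over a frame series.
import numpy as np

def fire_mask(frames, r_thresh=180, var_thresh=40.0):
    """frames: (T, H, W, 3) uint8 series. Returns a boolean fire mask."""
    last = frames[-1].astype(np.float32)
    r, g, b = last[..., 0], last[..., 1], last[..., 2]
    colour = (r > r_thresh) & (r > g) & (g > b)       # fire-like colours
    intensity = frames.astype(np.float32).mean(axis=3)
    variation = intensity.std(axis=0)                 # temporal flicker
    return colour & (variation > var_thresh)          # static look-alikes drop out

frames = (np.random.rand(8, 120, 160, 3) * 255).astype(np.uint8)
mask = fire_mask(frames)
```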

  3. Dictionary-Learning-Based Reconstruction Method for Electron Tomography

    PubMed Central

    LIU, BAODONG; YU, HENGYONG; VERBRIDGE, SCOTT S.; SUN, LIZHI; WANG, GE

    2014-01-01

    Electron tomography usually suffers from so-called “missing wedge” artifacts caused by limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate the EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the ES and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and the ADSIR outperforms EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context. PMID:25104167

  4. Tetraethyl orthosilicate-based glass composition and method

    DOEpatents

    Wicks, G.G.; Livingston, R.R.; Baylor, L.C.; Whitaker, M.J.; O`Rourke, P.E.

    1997-06-10

    A tetraethyl orthosilicate-based, sol-gel glass composition with additives selected for various applications is described. The composition is made by mixing ethanol, water, and tetraethyl orthosilicate, adjusting the pH into the acid range, and aging the mixture at room temperature. The additives, such as an optical indicator, filler, or catalyst, are then added to the mixture to form the composition which can be applied to a substrate before curing. If the additive is an indicator, the light-absorbing characteristics of which vary upon contact with a particular analyte, the indicator can be applied to a lens, optical fiber, reagent strip, or flow cell for use in chemical analysis. Alternatively, an additive such as alumina particles is blended into the mixture to form a filler composition for patching cracks in metal, glass, or ceramic piping. 12 figs.

  5. An efficient liposome based method for antioxidants encapsulation.

    PubMed

    Paini, Marco; Daly, Sean Ryan; Aliakbarian, Bahar; Fathi, Ali; Tehrany, Elmira Arab; Perego, Patrizia; Dehghani, Fariba; Valtchev, Peter

    2015-12-01

    Apigenin is an antioxidant that has shown preventive activity against different cancers and cardiovascular disorders. In this study, we encapsulate apigenin in liposomes to tackle its poor bioavailability and low stability. Apigenin-loaded liposomes are fabricated with food-grade rapeseed lecithin in an aqueous medium in the absence of any organic solvent. The liposome particle characteristics, such as particle size and polydispersity, are optimised by tuning the ultrasonic processing parameters. In addition, to measure the liposome encapsulation efficiency accurately, we establish a unique high-performance liquid chromatography technique in which an alkaline buffer mobile phase is used to prevent apigenin precipitation in the column and salt is added to separate lipid particles from the aqueous phase. Our results demonstrate an apigenin encapsulation efficiency of nearly 98%, which is remarkably higher than any other value reported for encapsulation of this compound. In addition, the average particle size of these liposomes is 158.9 ± 6.1 nm, which is suitable for the formulation of many food products, such as fortified fruit juice. The encapsulation method developed in this study therefore has high potential for the production of innovative functional foods or nutraceutical products. PMID:26590900

  6. Nuclear-based methods for the study of selenium

    SciTech Connect

    Spyrou, N.M.; Akanle, O.A.; Dhani, A. )

    1988-01-01

    The essentiality of selenium to the human being and in particular its deficiency state, associated with prolonged inadequate dietary intake, have received considerable attention. In addition, the possible relationship between selenium and cancer and the claim that selenium may possess cancer-prevention properties have focused research effort. It has been observed in a number of studies on laboratory animals that selenium supplementation protects the animals against carcinogen-induced neoplastic growth in various organ sites, reduces the incidence of spontaneous mammary tumors, and suppresses the growth of transplanted tumor cells. In these research programs on the relationship between trace element levels and senile dementia and depression and the elemental changes in blood associated with selenium supplementation in a normal group of volunteers, it became obvious that in addition to establishing normal levels of elements in the population of interest, there was a more fundamental requirement for methods to be developed that would allow the study of the distribution of selenium in the body and its binding sites. The authors propose emission tomography and perturbed angular correlation as techniques worth exploring.

  7. CMOS low data rate imaging method based on compressed sensing

    NASA Astrophysics Data System (ADS)

    Xiao, Long-long; Liu, Kun; Han, Da-peng

    2012-07-01

    Complementary metal-oxide semiconductor (CMOS) technology enables the integration of image sensing and image compression processing, making improvements in overall system performance possible. We present a CMOS low-data-rate imaging approach implementing compressed sensing (CS). On the basis of the CS framework, the image sensor projects the image onto a separable two-dimensional (2D) basis set and measures the corresponding coefficients. First, the electrical currents output from the pixels in a column are combined, with weights specified by voltages, in accordance with Kirchhoff's law. The second computation is performed in an analog vector-matrix multiplier (VMM). Each element of the VMM takes the total value of a column as its input and multiplies it by a unique coefficient. Both weights and coefficients are reprogrammable through analog floating-gate (FG) transistors. The image can be recovered from a percentage of these measurements using an optimization algorithm. This percentage, which can be altered flexibly by programming the hardware circuit, determines the image compression ratio. These novel designs facilitate image compression during the image-capture phase before storage and have the potential to reduce power consumption. Experimental results demonstrate that the proposed method achieves a large image compression ratio and ensures imaging quality.
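
    The measurement/recovery loop can be sketched in a few lines: simulate y = Φx with a random projection (standing in for the on-chip VMM) and recover the image from m < n measurements with an l1 solver. The sizes, the Gaussian Φ, and the DCT sparsity basis below are assumptions for illustration, not the paper's hardware parameters.

```python
import numpy as np
from scipy.fftpack import idct
from sklearn.linear_model import Lasso

n = 16 * 16                  # flattened 16x16 image
m = n // 4                   # 4:1 compression ratio (programmable on-chip)
rng = np.random.default_rng(0)

Psi = idct(np.eye(n), norm='ortho', axis=0)     # sparsifying basis (inverse DCT)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # random projection, i.e. the VMM

s = np.where(rng.random(n) < 0.05, rng.standard_normal(n), 0.0)  # sparse coefficients
x = Psi @ s                                     # synthetic scene
y = Phi @ x                                     # on-sensor measurements

# Solve min ||y - Phi Psi s||^2 + alpha * ||s||_1, then map back to pixels.
lasso = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50_000)
lasso.fit(Phi @ Psi, y)
x_hat = Psi @ lasso.coef_
print('relative error:', np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```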

  8. Printer resolution measurement based on slanted edge method

    NASA Astrophysics Data System (ADS)

    Bang, Yousun; Kim, Sang Ho; Choi, Don Chul

    2008-01-01

    Printer resolution is an important attribute for determining print quality, and it has frequently been identified with the hardware's optical resolution. However, the spatial addressability of hardcopy is not directly related to optical resolution, because it is affected by the printing mechanism, the media, and software data processing such as resolution enhancement techniques (RET). The international organization ISO/IEC SC28 addresses this issue and is working to develop a new metric to measure this effective resolution. As part of that development process, this paper proposes a candidate metric for measuring printer resolution. The slanted edge method has been used to evaluate image sharpness for scanners and digital still cameras; in this paper, it is applied to monochrome laser printers. A test chart is modified to reduce the effect of halftone patterns. Using a flatbed scanner, the spatial frequency response (SFR) is measured and modeled with a spline function. The frequency corresponding to an SFR of 0.1 is used as the metric for printer resolution. The stability of the metric is investigated in five separate experiments: (1) page-to-page variations, (2) different ROI locations, (3) different ROI sizes, (4) variations of toner density, and (5) correlation with visual quality. The 0.1 SFR frequencies of ten printers are analyzed. Experimental results show a strong correlation between the proposed metric and perceptual quality.
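
    A compact sketch of the slanted-edge computation follows: locate the edge in each row, build an oversampled edge-spread function (ESF), differentiate to the line-spread function (LSF), FFT to the SFR, and interpolate the frequency where the SFR falls to 0.1. The 4x oversampling and Hann window are common ISO 12233-style choices assumed here, as are a roughly vertical edge, a monotonically decreasing SFR, and at least one pixel per oversampled bin.

```python
import numpy as np

def sfr10(roi, oversample=4):
    """Frequency (cycles/pixel) at which the SFR of a slanted-edge ROI hits 0.1."""
    rows, cols = roi.shape
    # Locate the edge in each row as the centroid of the row gradient.
    grad = np.abs(np.diff(roi, axis=1))
    xs = (grad * np.arange(cols - 1)).sum(axis=1) / grad.sum(axis=1)
    slope, intercept = np.polyfit(np.arange(rows), xs, 1)
    # Project pixels onto the edge normal and bin into an oversampled ESF.
    dist = np.arange(cols)[None, :] - (slope * np.arange(rows) + intercept)[:, None]
    bins = np.round(dist * oversample).astype(int)
    bins -= bins.min()
    esf = np.bincount(bins.ravel(), roi.ravel()) / np.bincount(bins.ravel())
    lsf = np.diff(esf) * np.hanning(len(esf) - 1)   # windowed derivative
    sfr = np.abs(np.fft.rfft(lsf))
    sfr /= sfr[0]                                   # normalize at DC
    freq = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)
    return np.interp(0.1, sfr[::-1], freq[::-1])    # assumes decreasing SFR
```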

  9. Alternative processing methods for tungsten-base composite materials

    SciTech Connect

    Ohriner, E.K.; Sikka, V.K.

    1995-12-31

    Tungsten composite materials contain large amounts of tungsten distributed in a continuous matrix phase. Current commercial materials include tungsten-nickel-iron, with cobalt replacing some or all of the iron, and tungsten-copper. Typically, these are fabricated by liquid-phase sintering of blended powders. Liquid-phase sintering offers the advantages of low processing costs, established technology, and generally attractive mechanical properties. However, it is restricted to a very limited number of matrix alloying elements and a limited range of tungsten and alloying compositions. In the past few years, there has been interest in a wider range of matrix materials that offer the potential for superior composite properties. These must be processed by solid-state processes and at sufficiently low temperatures to avoid undesired reactions between the tungsten and the matrix phase. These processes, in order of decreasing process temperature requirements, include hot isostatic pressing (HIPing), hot extrusion, and dynamic compaction. The HIPing and hot extrusion processes have also been used to improve the mechanical properties of conventional liquid-phase-sintered materials. Results of laboratory-scale investigations of solid-state consolidation of a variety of matrix materials, including titanium, hafnium, nickel aluminide, and steels, are reviewed. The potential advantages and disadvantages of each of the possible alternative consolidation processes are identified. Postconsolidation processing to control microstructure and macrostructure is discussed, including novel methods of controlling microstructure alignment.

  10. Control method for mixed refrigerant based natural gas liquefier

    DOEpatents

    Kountz, Kenneth J.; Bishop, Patrick M.

    2003-01-01

    In a natural gas liquefaction system having a refrigerant storage circuit, a refrigerant circulation circuit in fluid communication with the refrigerant storage circuit, and a natural gas liquefaction circuit in thermal communication with the refrigerant circulation circuit, a method for liquefaction of natural gas in which pressure in the refrigerant circulation circuit is adjusted to below about 175 psig by exchange of refrigerant with the refrigerant storage circuit. A variable speed motor is started whereby operation of a compressor is initiated. The compressor is operated at full discharge capacity. Operation of an expansion valve is initiated whereby suction pressure at the suction pressure port of the compressor is maintained below about 30 psig and discharge pressure at the discharge pressure port of the compressor is maintained below about 350 psig. Refrigerant vapor is introduced from the refrigerant holding tank into the refrigerant circulation circuit until the suction pressure is reduced to below about 15 psig, after which flow of the refrigerant vapor from the refrigerant holding tank is terminated. Natural gas is then introduced into a natural gas liquefier, resulting in liquefaction of the natural gas.
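
    Read as a control sequence, the claim can be sketched as below; the `plant` interface and its method names are hypothetical stand-ins, while the psig setpoints are the ones recited in the abstract.

```python
def start_liquefier(plant):
    """Illustrative start-up sequence for the claimed liquefaction method."""
    # 1. Exchange refrigerant with storage until circulation pressure < ~175 psig.
    while plant.circulation_pressure() >= 175:
        plant.transfer_refrigerant_to_storage()
    # 2. Start the variable-speed motor; run the compressor at full discharge capacity.
    plant.start_compressor(full_capacity=True)
    # 3. Modulate the expansion valve to hold suction < ~30 psig, discharge < ~350 psig.
    plant.enable_expansion_valve(suction_max=30, discharge_max=350)
    # 4. Feed refrigerant vapor from the holding tank until suction < ~15 psig.
    while plant.suction_pressure() >= 15:
        plant.open_holding_tank_valve()
    plant.close_holding_tank_valve()
    # 5. Admit natural gas into the liquefier.
    plant.start_natural_gas_feed()
```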

  11. Module Based Differential Coexpression Analysis Method for Type 2 Diabetes

    PubMed Central

    Yuan, Lin; Zheng, Chun-Hou; Xia, Jun-Feng; Huang, De-Shuang

    2015-01-01

    More and more studies have shown that many complex diseases are contributed to jointly by alterations of numerous genes. Genes often coordinate as a functional biological pathway or network and are highly correlated. Differential coexpression analysis, a more comprehensive technique than differential expression analysis, was developed to investigate the gene regulatory networks and biological pathways behind phenotypic changes by measuring changes in gene correlation between disease and normal conditions. In this paper, we propose a gene differential coexpression analysis algorithm at the level of gene sets and apply it to a publicly available type 2 diabetes (T2D) expression dataset. First, we calculate the biweight midcorrelation coefficients between all gene pairs. Then, we select informative correlation pairs using the “differential coexpression threshold” strategy. Finally, we identify the differential coexpression gene modules using the maximum clique concept and the k-clique algorithm. We apply the proposed differential coexpression analysis method to simulated data and to the T2D data. Two differential coexpression gene modules related to T2D were detected, which should be useful for exploring the biological function of the related genes. PMID:26339648
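
    A minimal sketch of this pipeline follows, assuming astropy's biweight midcorrelation and networkx's clique search as stand-ins for the authors' implementations; the differential threshold and minimum module size are illustrative values.

```python
import itertools
import networkx as nx
import numpy as np
from astropy.stats import biweight_midcorrelation

def differential_modules(expr_disease, expr_normal, genes, threshold=0.7, k=3):
    """expr_*: arrays of shape (n_genes, n_samples); genes: list of gene names."""
    g = nx.Graph()
    for i, j in itertools.combinations(range(len(genes)), 2):
        r_d = biweight_midcorrelation(expr_disease[i], expr_disease[j])
        r_n = biweight_midcorrelation(expr_normal[i], expr_normal[j])
        if abs(r_d - r_n) > threshold:       # informative correlation pair
            g.add_edge(genes[i], genes[j])
    # Clique-style modules: keep maximal cliques with at least k genes.
    return [c for c in nx.find_cliques(g) if len(c) >= k]
```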

  12. Base flow separation: A comparison of analytical and mass balance methods

    NASA Astrophysics Data System (ADS)

    Lott, Darline A.; Stewart, Mark T.

    2016-04-01

    Base flow is the ground water contribution to stream flow. Many activities, such as water resource management, calibrating hydrological and climate models, and studies of basin hydrology, require good estimates of base flow. The base flow component of stream flow is usually determined by separating a stream hydrograph into two components, base flow and runoff. Analytical methods, mathematical functions or algorithms used to calculate base flow directly from discharge, are the most widely used base flow separation methods and are often used without calibration to basin or gage-specific parameters other than basin area. In this study, six analytical methods are compared to a mass balance method, the conductivity mass-balance (CMB) method. The base flow index (BFI) values for 35 stream gages are obtained from each of the seven methods with each gage having at least two consecutive years of specific conductance data and 30 years of continuous discharge data. BFI is cumulative base flow divided by cumulative total discharge over the period of record of analysis. The BFI value is dimensionless, and always varies from 0 to 1. Areas of basins used in this study range from 27 km2 to 68,117 km2. BFI was first determined for the uncalibrated analytical methods. The parameters of each analytical method were then calibrated to produce BFI values as close to the CMB derived BFI values as possible. One of the methods, the power function (aQb + cQ) method, is inherently calibrated and was not recalibrated. The uncalibrated analytical methods have an average correlation coefficient of 0.43 when compared to CMB-derived values, and an average correlation coefficient of 0.93 when calibrated with the CMB method. Once calibrated, the analytical methods can closely reproduce the base flow values of a mass balance method. Therefore, it is recommended that analytical methods be calibrated against tracer or mass balance methods.
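
    For reference, the conductivity mass-balance separation used as the benchmark can be written in a few lines. The two-component mixing form below, with base flow and runoff end-member conductances SC_bf and SC_ro, is the standard CMB formulation; the clipping of Qb to [0, Q] is a common practical safeguard rather than a detail from the paper.

```python
import numpy as np

def cmb_bfi(q, sc, sc_bf, sc_ro):
    """BFI from the conductivity mass balance.

    q: daily discharge series; sc: daily specific conductance (same length);
    sc_bf, sc_ro: base flow and runoff end-member conductances.
    """
    qb = q * (sc - sc_ro) / (sc_bf - sc_ro)   # CMB base flow: Qb = Q*(SC-SCro)/(SCbf-SCro)
    qb = np.clip(qb, 0.0, q)                  # keep 0 <= Qb <= Q
    return qb.sum() / q.sum()                 # base flow index, dimensionless in [0, 1]
```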

  13. A Gas Dynamics Method Based on The Spectral Deferred Corrections (SDC) Time Integration Technique and The Piecewise Parabolic Method (PPM)

    SciTech Connect

    Samet Y. Kadioglu

    2011-12-01

    We present a computational gas dynamics method based on the Spectral Deferred Corrections (SDC) time integration technique and the Piecewise Parabolic Method (PPM) finite volume method. The PPM framework is used to define edge-averaged quantities, which are then used to evaluate numerical flux functions. The SDC technique is used to integrate the solution in time. This kind of approach was first taken by Anita et al. in [17]. However, the method of [17] is problematic when applied to certain shock problems. Here we propose significant improvements to [17]. The method is fourth order (both in space and time) for smooth flows and provides highly resolved discontinuous solutions. We tested the method by solving a variety of problems. Results indicate that fourth-order accuracy in both space and time is achieved when the flow is smooth. Results also demonstrate the shock-capturing ability of the method.
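
    To make the time integrator concrete, here is a minimal SDC sketch for a scalar ODE u' = f(u): a provisional forward-Euler pass over substep nodes, followed by correction sweeps that add a spectral quadrature of the previous iterate. Uniform nodes and the scalar setting are simplifying assumptions; the paper couples SDC to the PPM flux evaluation instead.

```python
import numpy as np

def sdc_step(f, u0, t0, dt, nodes=4, sweeps=3):
    """One SDC step for u' = f(u); each sweep raises the formal order by one."""
    t = t0 + dt * np.linspace(0.0, 1.0, nodes)          # substep nodes
    # S[m, j] = integral over [t_m, t_{m+1}] of the j-th Lagrange basis poly.
    S = np.zeros((nodes - 1, nodes))
    for j in range(nodes):
        pts = np.delete(np.arange(nodes), j)
        basis = np.poly1d(np.poly(t[pts])) / np.prod(t[j] - t[pts])
        antider = basis.integ()
        S[:, j] = antider(t[1:]) - antider(t[:-1])
    u = np.full(nodes, u0, dtype=float)
    for m in range(nodes - 1):                          # provisional Euler pass
        u[m + 1] = u[m] + (t[m + 1] - t[m]) * f(u[m])
    for _ in range(sweeps):                             # correction sweeps
        fk = np.array([f(v) for v in u])
        unew = u.copy()
        for m in range(nodes - 1):
            h = t[m + 1] - t[m]
            unew[m + 1] = unew[m] + h * (f(unew[m]) - fk[m]) + S[m] @ fk
        u = unew
    return u[-1]

# Example: u' = -u over one step of size 0.5 from u(0) = 1; compare exp(-0.5).
print(sdc_step(lambda u: -u, 1.0, 0.0, 0.5))
```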

  14. A rapid wire-based sampling method for DNA profiling.

    PubMed

    Chen, Tong; Catcheside, David E A; Stephenson, Alice; Hefford, Chris; Kirkbride, K Paul; Burgoyne, Leigh A

    2012-03-01

    This paper reports the results of a commission to develop a field deployable rapid short tandem repeat (STR)-based DNA profiling system to enable discrimination between tissues derived from a small number of individuals. Speed was achieved by truncation of sample preparation and field deployability by use of an Agilent 2100 Bioanalyser(TM). Human blood and tissues were stabbed with heated stainless steel wire and the resulting sample dehydrated with isopropanol prior to direct addition to a PCR. Choice of a polymerase tolerant of tissue residues and cycles of amplification appropriate for the amount of template expected yielded useful profiles with a custom-designed quintuplex primer set suitable for use with the Bioanalyser(TM). Samples stored on wires remained amplifiable for months, allowing their transportation unrefrigerated from remote locations to a laboratory for analysis using AmpFlSTR(®) Profiler Plus(®) without further processing. The field system meets the requirements for discrimination of samples from small sets and retains access to full STR profiling when required. PMID:22211864

  15. A local damage detection approach based on restoring force method

    NASA Astrophysics Data System (ADS)

    Zhan, Chao; Li, Dongsheng; Li, Hongnan

    2014-09-01

    Chain-like systems have been studied by many researchers for their simple structure and wide range of application. Previously, damage in a chain-like system was detected through the reduction of the mass-normalized stiffness coefficients of certain elements, as reported by Nayeri et al. (2008 [16]). However, that approach has some shortcomings, and to overcome them an improved approach is derived and presented in this paper. In the improved approach, the mass-normalized stiffness coefficients under two states (the baseline state and the potentially damaged state) are first estimated by a least squares method; these mass-stiffness coupled coefficients are then decoupled to derive stiffness and mass relative change ratios for individual elements. These ratios are assembled in a vector, defined as the damage indication vector (DIV). Each component in the DIV is normalized individually to one to get multiple solutions. These solutions are averaged to estimate relative system changes, while abnormal solutions are discarded; judging a solution as normal or abnormal is done by a cluster analysis algorithm. The most intriguing merit of this improved approach is that the relative stiffness and mass changes, which are coupled in the previous approach, can be identified separately. With this approach, the damage extent and location (single or multiple) can be correctly detected under operational conditions, and the proposed damage index has a clear physical meaning, being directly related to the stiffness reduction of the corresponding structural elements. To illustrate the effectiveness and robustness of the improved approach, a numerical simulation of a four-floor building was carried out and experimental data from a structure tested at the Los Alamos National Laboratory were employed. The structural changes identified from both the simulation and the experimental data properly indicated the location and extent of the actual structural damage, which validated the proposed approach.
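
    The core estimation step can be sketched generically: fit the mass-normalized coefficients by least squares in the baseline and potentially damaged states, then take their relative changes as the damage indication vector. Building the state-dependent regressor matrices is structure-specific and omitted; the function below is a schematic under those assumptions, not the authors' code.

```python
import numpy as np

def damage_indication_vector(A_base, b_base, A_dam, b_dam):
    """A_*: regressor matrices built from measured responses; b_*: restoring-force targets."""
    theta_b, *_ = np.linalg.lstsq(A_base, b_base, rcond=None)  # baseline coefficients
    theta_d, *_ = np.linalg.lstsq(A_dam, b_dam, rcond=None)    # damaged-state coefficients
    return (theta_d - theta_b) / theta_b   # relative change per coefficient (the DIV)
```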

  16. Wurfelspiel-based training data methods for ATR

    NASA Astrophysics Data System (ADS)

    Peterson, James K.

    2004-09-01

    A data object is constructed from a P by M Wurfelspiel matrix W by choosing an entry from each column to construct a sequence A0A1…AM-1. Each of the P^M possibilities is designed to correspond to the same category according to some chosen measure. This matrix could encode many types of data. (1) Musical fragments, all of which evoke sadness; each column entry is a 4-beat sequence with a chosen A0A1A2 thus 16 beats long (W is P by 3). (2) Paintings, all of which evoke happiness; each column entry is a layer, and a given A0A1A2 is a painting constructed using these layers (W is P by 3). (3) Abstract feature vectors corresponding to action potentials evoked by a biological cell's exposure to a toxin. The action potential is divided into four relevant regions and each column entry represents the feature vector of a region. A given A0A1A2 is then an abstraction of the excitable cell's output (W is P by 4). (4) Abstract feature vectors corresponding to an object such as a face or vehicle. The object is divided into four categories, each assigned an abstract feature vector, with the resulting concatenation an abstract representation of the object (W is P by 4). All of the examples above correspond to one particular measure (sad music, happy paintings, an introduced toxin, an object to recognize) and hence, when a Wurfelspiel matrix is constructed, relevant training information for recognition is encoded that can be used in many algorithms. The focus of this paper is on the application of these ideas to automatic target recognition (ATR). In addition, we discuss a larger biologically based model of temporal cortex polymodal sensor fusion which can use the feature vectors extracted from the ATR Wurfelspiel data.

  17. Methods of noninvasive electrophysiological heart examination based on the solution of the inverse problem of electrocardiography

    NASA Astrophysics Data System (ADS)

    Grigoriev, M.; Babich, L.

    2015-09-01

    The article presents the main noninvasive methods of examining the electrical activity of the heart, the theoretical basis of the solution of the inverse problem of electrocardiography, the application of different heart examination methods in clinical practice, and a summary of global achievements in this field.

  18. A 2D/1D coupling neutron transport method based on the matrix MOC and NEM methods

    SciTech Connect

    Zhang, H.; Zheng, Y.; Wu, H.; Cao, L.

    2013-07-01

    A new 2D/1D coupling method based on the matrix MOC method (MMOC) and the nodal expansion method (NEM) is proposed for solving three-dimensional heterogeneous neutron transport problems. The MMOC method, used for the radial two-dimensional calculation, constructs a response matrix between source and flux with only one sweep and then solves the linear system using the restarted GMRES algorithm instead of the traditional trajectory sweeping process during the within-group iteration for the angular flux update. Long characteristics are generated using the customization of the commercial software AutoCAD. A one-dimensional diffusion calculation is carried out in the axial direction by employing the NEM method. The 2D and 1D solutions are coupled through the transverse leakage terms. The 3D CMFD method is used to ensure the global neutron balance and to reconcile the different convergence properties of the radial and axial solvers. A computational code is developed based on these theories. Two benchmarks are calculated to verify the coupling method and the code. The corresponding numerical results agree well with references, which indicates that the new method is capable of solving the 3D heterogeneous neutron transport problem directly. (authors)

  19. Polymerase Mechanism-Based Method of Viral Attenuation

    PubMed Central

    Lee, Cheri A.; August, Avery; Arnold, Jamie J.; Cameron, Craig E.

    2016-01-01

    from a lysine to an arginine results in a high-fidelity polymerase that replicates slowly, thus creating an attenuated virus that is genetically stable and less likely to revert to a wild-type phenotype. This chapter provides detailed methods with which to identify the conserved lysine residue and to evaluate fidelity and attenuation in cell culture (in vitro) and in the PV transgenic murine model (in vivo). PMID:26458831

  20. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods are also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented that compare the success of this method to alternative methods using Euclidean distance between coefficient vectors and to methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers, since they rely mainly on classical multivariate statistical methods. © 2008 Springer Science+Business Media, LLC.
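
    A hedged sketch of the grouping pipeline follows: per-species habitat models are fit by logistic regression, a generalized Mahalanobis distance between coefficient vectors (using their estimated covariances) forms a distance matrix, and hierarchical clustering groups the species. The covariance handling and clustering choices below follow common practice and are assumptions, not necessarily the paper's exact procedure.

```python
import numpy as np
import statsmodels.api as sm
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def species_groups(X, ys, n_groups):
    """X: shared habitat design matrix; ys: list of 0/1 presence vectors, one per species."""
    fits = [sm.Logit(y, sm.add_constant(X)).fit(disp=0) for y in ys]
    betas = [f.params for f in fits]
    covs = [f.cov_params() for f in fits]
    n = len(ys)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = betas[i] - betas[j]
            V = covs[i] + covs[j]            # covariance of the coefficient difference
            D[i, j] = D[j, i] = np.sqrt(d @ np.linalg.solve(V, d))
    Z = linkage(squareform(D), method='average')    # cluster the distance matrix
    return fcluster(Z, t=n_groups, criterion='maxclust')
```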

  1. A novel conformational B-cell epitope prediction method based on mimotope and patch analysis.

    PubMed

    Sun, Pingping; Qi, Jialiang; Zhao, Yizhu; Huang, Yanxin; Yang, Guifu; Ma, Zhiqiang; Li, Yuxin

    2016-04-01

    A B-cell epitope is a group of residues on the surface of an antigen that stimulates humoral immune responses. Identifying B-cell epitopes is important for effective vaccine design. Predicting epitopes by experimental methods is expensive in terms of time, cost and effort; therefore, computational methods with low cost and high speed are widely used to predict B-cell epitopes. Recently, epitope prediction based on random peptide library screening has been viewed as a promising approach, and some novel software and web-based servers have been proposed that have succeeded on some test cases. Herein, we propose a novel epitope prediction method based on amino acid pairs and patch analysis. The method first divides the antigen surface into overlapping patches based on both a radius (R) and a number (N), and then predicts epitopes based on amino acid pairs (AAPs) from mimotopes and the surface patches. The proposed method yields a mean sensitivity of 0.53, specificity of 0.77, ACC of 0.75 and F-measure of 0.45 on 39 test cases. Compared with mimotope-based methods, patch-based methods and two other prediction methods, the sensitivity of the new method offers a certain improvement. Our findings demonstrate that the proposed method was successful for patch and AAP analysis and allowed for conformational B-cell epitope prediction. PMID:26804644

  2. Spatial content-based scene matching using a relaxation method

    NASA Astrophysics Data System (ADS)

    Wang, Caixia

    Scene matching is a fundamental task for a variety of geospatial analysis applications. As we move towards multi-source data analysis, the constantly increasing amount of generated geospatial datasets and the diversification of data sources are the two major forces driving the need for novel and more efficient matching solutions. Despite great effort within the geospatial and computer science communities, automated scene matching remains crucial and challenging when vector data are involved, such as in image-to-map registration for change detection. In this context, features extracted from vector data contain no intensity information, which is typically the significant component in current promising approaches to registration. The problem becomes increasingly complicated as the two or more datasets usually present differences in coverage, scale, or orientation, and accordingly corresponding objects in the datasets may also differ to a certain extent. This dissertation developed a novel methodology for automatic image-to-vector matching based on contextual information among salient spatial features (e.g. road networks and buildings) in a scene. In this work, we model the road networks extracted from the two to-be-matched datasets as attributed graphs. The developed attribute metric measures the geometric and topological properties of the road network, which are invariant to differences between the two datasets in scale, orientation, area of coverage, physical changes and extraction errors. Road networks comprise line segments (or curves), intersections and loops. Such complex structure suggests versatile attributes derivable from the components of the road networks themselves as well as from the relations between these components. It is important to develop attributes that need less computational effort while having sufficient descriptive power. We extend the entropy concept to statistically measure the descriptive quality of the attributes under

  3. a Range Based Method for Complex Facade Modeling

    NASA Astrophysics Data System (ADS)

    Adami, A.; Fregonese, L.; Taffurelli, L.

    2011-09-01

    the complex architecture. From the point cloud we can extract a false colour map depending on the distance of each point from the average plane. In this way each point of the facade can be represented by a grayscale height map. In this operation it is important to define the scale of the final result in order to set the correct pixel size in the map. The following step concerns the use of a modifier that is well known in computer graphics: the Displacement modifier, which simulates on a planar surface the original roughness of the object according to a grayscale map. The gray value is read by the modifier as the distance from the reference plane, and it represents the displacement of the corresponding element of the virtual plane. Unlike the bump map, the displacement modifier does not merely simulate the effect but actually deforms the planar surface, so the 3D model can be used not only in static representations but also in dynamic animations or interactive applications. Setting up the plane to be deformed is the most important step in this process. In 3ds Max the planar surface has to be characterized by the real dimensions of the facade and by a correct number of quadrangular faces, which are the smallest parts of the whole surface. In this way the modified surface can be considered a 3D raster representation in which each quadrangular face (corresponding to a traditional pixel) is displaced according to its gray value (the distance from the plane). This method can be applied in different contexts, above all when the object to be represented can be considered 2.5-dimensional, such as building facades in city models or large-scale representations, and also to represent particular effects such as the deformation of walls in a fully 3D way.
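
    In code, the displacement step amounts to lifting a regular grid by the height map. The plain-numpy sketch below, with an operator-chosen metric scale per gray level, is a schematic of what the Displacement modifier does internally, not the 3ds Max implementation.

```python
import numpy as np

def displace_plane(height_map, width_m, height_m, depth_per_gray_m):
    """Turn a grayscale height map into displaced vertices of a planar grid.

    height_map: 2D array of gray values (distance from the facade's average plane);
    depth_per_gray_m: metric displacement per gray level, set from the point cloud.
    """
    rows, cols = height_map.shape
    x, y = np.meshgrid(np.linspace(0, width_m, cols),
                       np.linspace(0, height_m, rows))
    z = height_map.astype(float) * depth_per_gray_m   # gray value -> displacement
    return np.stack([x, y, z], axis=-1)               # (rows, cols, 3) vertex grid
```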

  4. Approach-Method Interaction: The Role of Teaching Method on the Effect of Context-Based Approach in Physics Instruction

    ERIC Educational Resources Information Center

    Pesman, Haki; Ozdemir, Omer Faruk

    2012-01-01

    The purpose of this study is to explore not only the effect of context-based physics instruction on students' achievement and motivation in physics, but also how the use of different teaching methods influences it (interaction effect). Therefore, two two-level independent variables were defined, teaching approach (contextual and non-contextual…

  5. WebMail versus WebApp: Comparing Problem-Based Learning Methods in a Business Research Methods Course

    ERIC Educational Resources Information Center

    Williams van Rooij, Shahron

    2007-01-01

    This study examined the impact of two Problem-Based Learning (PBL) approaches on knowledge transfer, problem-solving self-efficacy, and perceived learning gains among four intact classes of adult learners engaged in a group project in an online undergraduate business research methods course. With two of the classes using a text-only PBL workbook…

  6. A Generalized Functional Model Based Method for Vibration-Based Damage Precise Localization in 3D Structures

    NASA Astrophysics Data System (ADS)

    Sakaris, Christos S.; Sakellariou, John S.; Fassois, Spilios D.

    2015-07-01

    A Generalized Functional Model Based Method for vibration-based damage precise localization on structures consisting of 1D, 2D, or 3D elements is introduced. The method generalizes previous versions applicable to structures consisting of 1D elements, thus allowing for 2D and 3D elements as well. It is based on scalar (single sensor) or vector (multiple sensor) Functional Models which - in the inspection phase - incorporate the mathematical form of the specific structural topology. Precise localization is then based on coordinate estimation within this model structure, and confidence bounds are also obtained. The effectiveness of the method is demonstrated through experiments on a 3D truss structure where damage corresponds to single bolt loosening. Both the scalar and vector versions of the method are shown to be effective even within a very limited, low frequency, bandwidth of 3-59 Hz. The improvement achieved through the use of multiple sensors is also demonstrated.

  7. A Simulation-Based Comparison of Covariate Adjustment Methods for the Analysis of Randomized Controlled Trials

    PubMed Central

    Chaussé, Pierre; Liu, Jin; Luta, George

    2016-01-01

    Covariate adjustment methods are frequently used when baseline covariate information is available for randomized controlled trials. Using a simulation study, we compared the analysis of covariance (ANCOVA) with three nonparametric covariate adjustment methods with respect to point and interval estimation for the difference between means. The three alternative methods were based on important members of the generalized empirical likelihood (GEL) family, specifically on the empirical likelihood (EL) method, the exponential tilting (ET) method, and the continuous updated estimator (CUE) method. Two criteria were considered for the comparison of the four statistical methods: the root mean squared error and the empirical coverage of the nominal 95% confidence intervals for the difference between means. Based on the results of the simulation study, for sensitivity analysis purposes, we recommend the use of ANCOVA (with robust standard errors when heteroscedasticity is present) together with the CUE-based covariate adjustment method. PMID:27077870

  8. Method for PE Pipes Fusion Jointing Based on TRIZ Contradictions Theory

    NASA Astrophysics Data System (ADS)

    Sun, Jianguang; Tan, Runhua; Gao, Jinyong; Wei, Zihui

    The core of the TRIZ theories is contradiction detection and solution. TRIZ provides various methods for solving contradictions, but they are not systematized. Combining them with the conception of a technique system, this paper summarizes an integrated solution method for contradictions based on the TRIZ contradiction theory. According to the method, a flowchart of the integrated solution process is given. As a case study, a method for fusion jointing PE pipes is analysed.

  9. Exploring Methods of Analysing Talk in Problem-Based Learning Tutorials

    ERIC Educational Resources Information Center

    Clouston, Teena J.

    2007-01-01

    This article explores the use of discourse analysis and conversation analysis as an evaluation tool in problem-based learning. The basic principles of the methods are discussed and their application in analysing talk in problem-based learning considered. Findings suggest that these methods could enable an understanding of how effective…

  10. The Integration of Environmental Education into Two Elementary Preservice Science Methods Courses: A Content-Based and a Method-Based Approach

    NASA Astrophysics Data System (ADS)

    Weiland, Ingrid S.; Morrison, Judith A.

    2013-10-01

    To examine the notion of environmental education (EE) as context for integrating the elementary curricula, we engaged in a multi-case study analysis (Yin 2009) of two preservice elementary science methods courses that utilized an experiential reflective approach—case one (University A) through a science content focus (i.e., sustainability) and case two (University B) through a method focus (i.e., problem-based learning). We examined preservice teachers’ understandings of EE, their ideas to incorporate EE into their future teaching, and their conceptions of EE as a context for integration. Results indicate that both foci (content and method) were successful in building EE content, helping preservice teachers to envision EE in their future classrooms, and promoting EE as a context for integrating their instruction. Based on these results, we offer recommendations for the incorporation of EE as a context for integration into the elementary science methods course.

  11. Comparative Analysis of a Principal Component Analysis-Based and an Artificial Neural Network-Based Method for Baseline Removal.

    PubMed

    Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G

    2016-04-01

    This work presents a non-parametric method based on principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) for removing continuous baseline features from spectra. The non-parametric method estimates the baseline from a set of sampled basis vectors obtained by applying PCA to a previously composed learning matrix of continuous spectra. The parametric method instead uses an ANN to filter out the baseline; previous studies have demonstrated that this is one of the most effective approaches for baseline removal. Both methods were evaluated using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratios (SBR), signal-to-noise ratios (SNR), and baseline slopes. In addition, to demonstrate the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics, such as the correlation coefficient, chi-square value, and goodness-of-fit coefficient, were calculated to quantify and compare both algorithms. The results demonstrate that the PCA-based method outperforms the ANN-based one in terms of both performance and simplicity. PMID:26917856
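
    A sketch of the non-parametric variant as described: fit PCA on a learning matrix of continuous baseline spectra, estimate each measured spectrum's baseline as its projection onto the leading components, and subtract. The component count is an assumption, and scikit-learn stands in for whatever implementation the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA

def remove_baseline(spectra, baseline_library, n_components=5):
    """spectra, baseline_library: arrays of shape (n_samples, n_wavelengths)."""
    pca = PCA(n_components=n_components).fit(baseline_library)
    scores = pca.transform(spectra)            # project onto the baseline subspace
    baseline = pca.inverse_transform(scores)   # per-spectrum baseline estimate
    return spectra - baseline                  # baseline-corrected spectra
```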

  12. Synthesis of Enterprise and Value-Based Methods for Multiattribute Risk Analysis

    SciTech Connect

    C. Robert Kenley; John W. Collins; John M. Beck; Harold J. Heydt; Chad B. Garcia

    2001-10-01

    This paper describes a method for performing multiattribute decision analysis to prioritize approaches to handling risks during the development and operation of complex socio-technical systems. The method combines risk categorization based on enterprise views, risk prioritization of the categories based on the Analytic Hierarchy Process (AHP), and more standard probability-consequence rating schemes. We also apply value-based testing methods used in software development to prioritize risk-handling approaches. We describe a tool that synthesizes the methods and performs a multiattribute analysis of the technical and programmatic risks on the Next Generation Nuclear Plant (NGNP) enterprise.

  13. System and method for integrating hazard-based decision making tools and processes

    DOEpatents

    Hodgin, C. Reed

    2012-03-20

    A system and method for inputting, analyzing, and disseminating information necessary for identified decision-makers to respond to emergency situations. This system and method provides consistency and integration among multiple groups, and may be used for both initial consequence-based decisions and follow-on consequence-based decisions. The system and method in a preferred embodiment also provides tools for accessing and manipulating information that are appropriate for each decision-maker, in order to achieve more reasoned and timely consequence-based decisions. The invention includes processes for designing and implementing a system or method for responding to emergency situations.

  14. Comparing Team-Based and Mixed Active-Learning Methods in an Ambulatory Care Elective Course

    PubMed Central

    Franks, Andrea S.; Guirguis, Alexander B.; George, Christa M.; Howard-Thompson, Amanda; Heidel, Robert E.

    2010-01-01

    Objectives To assess students' performance and perceptions of team-based and mixed active-learning methods in 2 ambulatory care elective courses, and to describe faculty members' perceptions of team-based learning. Methods Using the 2 teaching methods, students' grades were compared. Students' perceptions were assessed through 2 anonymous course evaluation instruments. Faculty members who taught courses using the team-based learning method were surveyed regarding their impressions of team-based learning. Results The ambulatory care course was offered to 64 students using team-based learning (n = 37) and mixed active learning (n = 27) formats. The mean quality points earned were 3.7 (team-based learning) and 3.3 (mixed active learning), p < 0.001. Course evaluations for both courses were favorable. All faculty members who used the team-based learning method reported that they would consider using team-based learning in another course. Conclusions Students were satisfied with both teaching methods; however, student grades were significantly higher in the team-based learning course. Faculty members recognized team-based learning as an effective teaching strategy for small-group active learning. PMID:21301594

  15. Foam-based adsorbents having high adsorption capacities for recovering dissolved metals and methods thereof

    DOEpatents

    Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra

    2015-06-02

    Foam-based adsorbents and a related method of manufacture are provided. The foam-based adsorbents include polymer foam with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the foam-based adsorbents includes irradiating polymer foam, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Foam-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.

  16. Powder-based adsorbents having high adsorption capacities for recovering dissolved metals and methods thereof

    DOEpatents

    Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra

    2016-05-03

    A powder-based adsorbent and a related method of manufacture are provided. The powder-based adsorbent includes polymer powder with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the powder-based adsorbent includes irradiating polymer powder, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Powder-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.

  17. A new image segmentation method based on multifractal detrended moving average analysis

    NASA Astrophysics Data System (ADS)

    Shi, Wen; Zou, Rui-biao; Wang, Fang; Su, Le

    2015-08-01

    In order to segment and delineate regions of interest in an image, we propose a novel algorithm based on multifractal detrended moving average analysis (MF-DMA). In this method, the generalized Hurst exponent h(q) is first calculated for every pixel and considered as the local feature of a surface. A multifractal detrended moving average spectrum (MF-DMS) D(h(q)) is then defined following the idea of the box-counting dimension method; we therefore call the new image segmentation method the MF-DMS-based algorithm. The performance of the MF-DMS-based method is tested in two image segmentation experiments on rapeseed leaf images of potassium deficiency and magnesium deficiency under three cases, namely backward (θ = 0), centered (θ = 0.5) and forward (θ = 1), with different q values. Comparison experiments are conducted between the MF-DMS method and two other multifractal segmentation methods, namely the popular MFS-based and the latest MF-DFS-based methods. The results show that our MF-DMS-based method is superior to the latter two. The best segmentation result for the rapeseed leaf images of potassium deficiency and magnesium deficiency comes from the same parameter combination of θ = 0.5 and D(h(-10)) when using the MF-DMS-based method. An interesting finding is that D(h(-10)) outperforms other parameters for both the MF-DMS-based method with the centered case and the MF-DFS-based algorithm. By comparing the multifractal nature of nutrient-deficient and non-deficient areas determined by the segmentation results, an important finding is that the fluctuation of gray values in nutrient-deficient areas is much more severe than in non-deficient areas.

  18. A method for data base management and analysis for wind tunnel data

    NASA Technical Reports Server (NTRS)

    Biser, Aileen O.

    1987-01-01

    To respond to the need for improved data base management and analysis capabilities for wind-tunnel data at the Langley 16-Foot Transonic Tunnel, research was conducted into current methods of managing wind-tunnel data and a method was developed as a solution to this need. This paper describes the development of the data base management and analysis method for wind-tunnel data. The design and implementation of the software system are discussed and examples of its use are shown.

  19. Fast calculation with point-based method to make CGHs of the polygon model

    NASA Astrophysics Data System (ADS)

    Ogihara, Yuki; Ichikawa, Tsubasa; Sakamoto, Yuji

    2014-02-01

    Holography is a three-dimensional display technology in which light waves from an object are recorded and reconstructed using a hologram. Computer-generated holograms (CGHs), which are made by simulating light propagation on a computer, can represent virtual objects. However, an enormous amount of computation time is required to make CGHs. There are two primary methods of calculating CGHs: the polygon-based method and the point-based method. In the polygon-based method with Fourier transforms, CGHs are calculated using the fast Fourier transform (FFT); the calculation of complex objects composed of multiple polygons requires as many FFTs, so the calculation time becomes enormous. In the point-based method, complex objects are easy to express, but an enormous calculation time is still required. Graphics processing units (GPUs) have been used to speed up point-based calculations, because a GPU is specialized for parallel computation and the CGH can be calculated independently for each pixel. However, expressing a planar object with the point-based method requires a significant increase in the density of points and consequently in the number of point light sources. In this paper, we propose a fast calculation algorithm for expressing planar objects with the point-based method on a GPU. The proposed method accelerates the calculation by obtaining the distance between a pixel and a point light source from that of the adjacent point by a difference method: under certain specified conditions the difference between adjacent object points becomes constant, so the distance is obtained using only additions. Experimental results showed that the proposed method is more effective than the polygon-based method with FFT when the number of polygons composing an object is high.
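
    The difference-method idea can be illustrated under a Fresnel (paraxial) assumption, where the propagation phase along a pixel row has a constant second difference, so each pixel costs two additions instead of a square root. The sketch below is one common realization of this recurrence, not necessarily the authors' exact scheme.

```python
import numpy as np

def fresnel_phase_row(x0, z, n_pixels, pitch, wavelength):
    """Phase of one point source along one CGH row, by forward differences.

    Under the Fresnel approximation the path term is (x - x0)^2 / (2z), whose
    per-pixel increment grows by the constant second difference pitch^2 / z.
    """
    k = 2 * np.pi / wavelength
    phase = np.empty(n_pixels)
    d = (0.0 - x0) ** 2 / (2 * z)                        # path term at pixel 0
    d1 = (2 * (0.0 - x0) * pitch + pitch**2) / (2 * z)   # first difference
    d2 = pitch**2 / z                                    # constant second difference
    for i in range(n_pixels):
        phase[i] = k * d
        d += d1          # two additions per pixel instead of a square root
        d1 += d2
    return phase
```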

  20. Optimized sparse presentation-based classification method with weighted block and maximum likelihood model

    NASA Astrophysics Data System (ADS)

    He, Jun; Zuo, Tian; Sun, Bo; Wu, Xuewen; Chen, Chao

    2014-06-01

    This paper aims at applying sparse representation based classification (SRC) to face recognition with disguise or illumination variation. Having analyzed the characteristics of general object recognition and the principle of the SRC classifier, the authors focus on evaluating blocks of a probe sample and propose an optimized SRC method based on position-preserving weighted blocks and a maximum likelihood model. The principle and implementation of the proposed method are introduced, and experiments on the Yale and AR face databases are reported. The experimental results show that the proposed optimized SRC method performs better than existing methods.
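
    For context, the baseline SRC classifier that the optimized method builds on expresses a probe as a sparse combination of all training samples and assigns the class whose training columns give the smallest reconstruction residual. The sketch below uses scikit-learn's Lasso as the l1 solver; the solver choice and alpha are assumptions, and the paper's block weighting is not shown.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(train, labels, probe, alpha=0.01):
    """train: (n_features, n_samples) dictionary of training faces; labels: per-column class."""
    cols = train / np.linalg.norm(train, axis=0)    # l2-normalized dictionary atoms
    coef = Lasso(alpha=alpha, fit_intercept=False,
                 max_iter=10_000).fit(cols, probe).coef_   # sparse coding of the probe
    residuals = {}
    for c in set(labels):
        mask = np.array(labels) == c
        residuals[c] = np.linalg.norm(probe - cols[:, mask] @ coef[mask])
    return min(residuals, key=residuals.get)        # class with smallest residual
```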