Science.gov

Sample records for agri-food method based

  1. Displacement Based Seismic Design Methods.

    SciTech Connect

    Hofmayer, C.; Miller, C.; Wang, Y.; Costello, J.

    2003-07-15

    A research effort was undertaken to determine the need for any changes to USNRC's seismic regulatory practice to reflect the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The research explored the extent to which displacement based seismic design methods, such as given in FEMA 273, could be useful for reviewing nuclear power stations. Two structures common to nuclear power plants were chosen to compare the results of the analysis models used. The first structure is a four-story frame structure with shear walls providing the primary lateral load system, referred to herein as the shear wall model. The second structure is the turbine building of the Diablo Canyon nuclear power plant. The models were analyzed using both displacement based (pushover) analysis and nonlinear dynamic analysis. In addition, for the shear wall model an elastic analysis with ductility factors applied was also performed. The objectives of the work were to compare the results between the analyses, and to develop insights regarding the work that would be needed before the displacement based analysis methodology could be considered applicable to facilities licensed by the NRC. A summary of the research results, which were published in NUREG/CR-6719 in July 2001, is presented in this paper.

  2. DOM Based XSS Detecting Method Based on Phantomjs

    NASA Astrophysics Data System (ADS)

    Dong, Ri-Zhan; Ling, Jie; Liu, Yi

    Because malicious code does not appear in the HTML source code, DOM-based XSS cannot be detected by traditional methods. By analyzing the causes of DOM-based XSS, this paper proposes a PhantomJS-based method for detecting it. The method uses function hijacking to detect dangerous operations, and a prototype system has been implemented. Comparison with existing tools shows that the system improves the detection rate and that the method is effective in detecting DOM-based XSS.

  3. Method of recovering oil-based fluid

    SciTech Connect

    Brinkley, H.E.

    1993-07-13

    A method is described of recovering oil-based fluid, said method comprising the steps of: applying an oil-based fluid absorbent cloth of man-made fiber to an oil-based fluid, the cloth having at least a portion thereof that is napped so as to raise ends and loops of the man-made fibers and define voids; and absorbing the oil-based fluid into the napped portion of the cloth.

  4. Design for validation, based on formal methods

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.

    1990-01-01

    Validation of ultra-reliable systems decomposes into two subproblems: (1) quantification of the probability of system failure due to physical failure; (2) establishing that design errors are not present. Methods of design, testing, and analysis of ultra-reliable software are discussed. It is concluded that a design-for-validation approach based on formal methods is needed for the digital flight control systems problem, and that formal methods will play a major role in the development of future high-reliability digital systems.

  5. Model-Based Method for Sensor Validation

    NASA Technical Reports Server (NTRS)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. However, these methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). It is also better suited to systems for which it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. It builds on the concept of analytical redundancy relations (ARRs).
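
    As a concrete illustration of the ARR idea, the toy sketch below (Python) checks a set of hand-written redundancy relations against sensor readings and logically infers which sensor must be faulty. The three sensors, the model relations, and the tolerance are invented for illustration; the paper's contribution is deriving such relations systematically from a system model.

    ```python
    # Toy analytical-redundancy-relation (ARR) consistency check.
    # Relations and tolerance are illustrative, not from the paper.
    def check_arrs(readings, tol=1e-3):
        v1, v2, v3 = readings["s1"], readings["s2"], readings["s3"]
        # Each ARR evaluates to ~0 when every sensor it involves is healthy.
        arrs = {
            ("s1", "s2"): v1 - 2.0 * v2,         # model: s1 = 2*s2
            ("s2", "s3"): v2 + v3 - 10.0,        # model: s2 + s3 = 10
            ("s1", "s3"): v1 + 2.0 * v3 - 20.0,  # implied by the two above
        }
        violated = [set(s) for s, r in arrs.items() if abs(r) > tol]
        satisfied = [set(s) for s, r in arrs.items() if abs(r) <= tol]
        if not violated:
            return set()          # all relations hold: no fault inferred
        # A sensor implicated in every violated ARR and in no satisfied
        # one is a logically inferred fault candidate.
        candidates = set.intersection(*violated)
        return {c for c in candidates if not any(c in s for s in satisfied)}

    print(check_arrs({"s1": 4.0, "s2": 2.0, "s3": 9.0}))  # -> {'s3'}
    ```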

  6. [Family planning methods based on fertility awareness].

    PubMed

    Haghenbeck-Altamirano, Francisco Javier; Ayala-Yáñez, Rodrigo; Herrera-Meillón, Héctor

    2012-04-01

    The desire to limit fertility is recognized both by individuals and by nations. The concept of family planning is based on the right of individuals and couples to regulate their fertility, and it sits at the intersection of health, human rights, and population policy. Despite changes in family planning policies and programs worldwide, large geographic areas have not yet met the minimum requirements in this regard; the reasons are multiple, including economic as well as ideological or religious ones. Knowledge of the physiology of the menstrual cycle, specifically the ovulation process, has been greatly enhanced by advances in reproductive medicine research. The series of events around ovulation is used to detect the "fertile window", so that women can either postpone a pregnancy or actively seek one. The aim of this article is to review the current methods of family planning based on fertility awareness, from historical methods such as core temperature determination and the rhythm method, through the most popular ones such as the Billings ovulation method and the symptothermal method, to current methods such as the Two Day and Standard Days methods. Methods that require electronic or purpose-built computerized devices to detect this "window of fertility" are also discussed. The spread and popularity of these methods is low, and knowledge of them among physicians, including gynecologists, is also quite scarce. The effectiveness of these methods has been difficult to quantify because of the lack of well-designed randomized studies and the small populations of patients using the methods. Published reports describe high effectiveness with correct use but not with typical use, which indicates the need for increased awareness among medical practitioners and trainers so as to achieve better use and understanding of the methods and reduce these discrepancies. PMID:22808858

  7. A Property Restriction Based Knowledge Merging Method

    NASA Astrophysics Data System (ADS)

    Che, Haiyan; Chen, Wei; Feng, Tie; Zhang, Jiachen

    Merging new instance knowledge extracted from the Web according to a domain ontology into the knowledge base (KB for short) is essential for knowledge management and must be done carefully, since it may introduce redundant or contradictory knowledge, and the quality of the knowledge in the KB, which is very important for a knowledge-based system to provide users with high-quality services, suffers from such "bad" knowledge. This paper advocates a property restriction based knowledge merging method that can identify equivalent instances and redundant or contradictory knowledge according to the property restrictions defined in the domain ontology, consolidate the knowledge about equivalent instances, and discard the redundancies and conflicts to keep the KB compact and consistent. This knowledge merging method has been used in a semantic-based search engine project, CRAB, with satisfactory results.

  8. Recommendation advertising method based on behavior retargeting

    NASA Astrophysics Data System (ADS)

    Zhao, Yao; Yin, Xin-Chun; Chen, Zhi-Min

    2011-10-01

    Online advertising has become an important business in e-commerce. Ad recommendation algorithms are the most critical part of recommendation systems. We propose a recommendation advertising method based on behavior retargeting that avoids the loss of ad clicks due to objective reasons and tracks changes in the user's interests over time. Experiments show that the new method is effective and can be further applied in online systems.

  9. Bayesian individualization via sampling-based methods.

    PubMed

    Wakefield, J

    1996-02-01

    We consider the situation where we wish to adjust the dosage regimen of a patient based on (in general) sparse concentration measurements taken on-line. A Bayesian decision theory approach is taken, which requires the specification of an appropriate prior distribution and loss function. A simple method for obtaining samples from the posterior distribution of the pharmacokinetic parameters of the patient is described. In general, these samples are used to obtain a Monte Carlo estimate of the expected loss, which is then minimized with respect to the dosage regimen. Some special cases which yield analytic solutions are described. When the prior distribution is based on a population analysis, a method of accounting for the uncertainty in the population parameters is described. Two simulation studies showing how the methods work in practice are presented. PMID:8827585
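
    A minimal sketch (Python) of the sampling-based approach described above: draw samples from the posterior of the pharmacokinetic parameters, form a Monte Carlo estimate of the expected loss for each candidate dosage, and minimize. The one-compartment steady-state model, quadratic loss, and stand-in posterior samples are illustrative assumptions, not the paper's specification.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    target = 10.0                  # desired steady-state concentration (mg/L)
    # Stand-in "posterior" samples of clearance CL (L/h); in practice these
    # would come from the patient's sparse concentration data.
    cl_samples = rng.lognormal(mean=np.log(5.0), sigma=0.3, size=2000)

    def expected_loss(dose_rate):
        conc = dose_rate / cl_samples          # steady state: Css = R / CL
        return np.mean((conc - target) ** 2)   # Monte Carlo quadratic loss

    doses = np.linspace(10.0, 100.0, 181)      # candidate dose rates (mg/h)
    best = doses[np.argmin([expected_loss(d) for d in doses])]
    print(f"dose rate minimizing expected loss: {best:.1f} mg/h")
    ```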

  10. A T-EOF Based Prediction Method.

    NASA Astrophysics Data System (ADS)

    Lee, Yung-An

    2002-01-01

    A new statistical time series prediction method based on temporal empirical orthogonal functions (T-EOFs) is introduced in this study. The method first applies singular spectrum analysis (SSA) to extract dominant T-EOFs from historical data. Then, the most recent data are projected onto an optimal subset of the T-EOFs to estimate the corresponding temporal principal components (T-PCs). Finally, a forecast is constructed from these T-EOFs and T-PCs. Results from forecast experiments on the El Niño sea surface temperature (SST) indices from 1993 to 2000 showed that this method consistently yielded better correlation skill than autoregressive models for lead times longer than 6 months. Furthermore, the correlation skill of this method in predicting the Niño-3 index remained above 0.5 for lead times up to 36 months during this period. However, the method still encountered the "spring barrier" problem. Because the 1990s exhibited a relatively weak spring barrier, these results indicate that the T-EOF based prediction method has a certain extended forecasting capability in periods when the spring barrier is weak. They also suggest that the potential predictability of ENSO in a given period may be longer than previously thought.
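
    The sketch below (Python) illustrates the idea under simplifying assumptions: SSA extracts temporal EOFs from the trajectory matrix of a training series, the most recent values are projected onto the leading T-EOFs to estimate T-PC amplitudes, and the EOFs' final row extends the series one step. Window length, truncation rank, and the toy series are illustrative choices.

    ```python
    import numpy as np

    def ssa_forecast(x, window=12, rank=3):
        n = len(x)
        # Trajectory (Hankel) matrix: lagged windows of the series as columns.
        X = np.column_stack([x[i:i + window] for i in range(n - window + 1)])
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        E = U[:, :rank]                    # leading temporal EOFs
        head, last = E[:-1, :], E[-1, :]   # split off the final lag row
        # Project the last (window-1) observations onto the EOFs to get the
        # T-PC amplitudes, then extend one step with the EOFs' last row.
        coef, *_ = np.linalg.lstsq(head, x[-(window - 1):], rcond=None)
        return last @ coef

    t = np.arange(240)
    rng = np.random.default_rng(1)
    series = np.sin(2 * np.pi * t / 24) + 0.1 * rng.normal(size=t.size)
    print(ssa_forecast(series))            # one-step-ahead forecast
    ```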

  11. Bare PCB test method based on AI

    NASA Astrophysics Data System (ADS)

    Li, Aihua; Zhou, Huiyang; Wan, Nianhong; Qu, Liangsheng

    1995-08-01

    The shortcomings of conventional methods for developing test sets on current automated printed circuit board (PCB) test machines stem from overlooking information from CAD, historical test data, and expert knowledge. As a result, the generated test sets and proposed test sequences may be suboptimal and inefficient. This paper presents a weighted bare-PCB test method based on the analysis and utilization of CAD information. AI techniques are applied for fault statistics and fault identification, and the generation of test sets and the planning of the test procedure are discussed. A faster and more efficient test system is achieved.

  12. An image mosaic method based on corner

    NASA Astrophysics Data System (ADS)

    Jiang, Zetao; Nie, Heting

    2015-08-01

    In view of the shortcomings of traditional image mosaicking, this paper describes a new image mosaic algorithm based on Harris corners. First, a Harris operator combined with a low-pass smoothing filter constructed from spline functions and a circular-window search is applied to detect image corners, which gives better localization performance and effectively avoids corner clustering. Second, correlation-based feature registration is used to find registration pairs, and false registrations are removed using random sample consensus (RANSAC). Finally, a weighted trigonometric method combined with an interpolation function is used for image fusion. Experiments show that this method can effectively remove splicing ghosts and improve the accuracy of the image mosaic.
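
    The first stage of such a pipeline, the Harris corner response, can be sketched as follows (Python). A plain Gaussian filter stands in for the paper's spline-based low-pass filter, and the circular-window search, registration, RANSAC, and fusion stages are omitted.

    ```python
    import numpy as np
    from scipy import ndimage

    def harris_response(img, sigma=1.0, k=0.04):
        """Harris response R = det(M) - k*trace(M)^2 per pixel."""
        Ix = ndimage.sobel(img, axis=1, output=float)   # x gradient
        Iy = ndimage.sobel(img, axis=0, output=float)   # y gradient
        # Structure-tensor entries, smoothed over a local window.
        Sxx = ndimage.gaussian_filter(Ix * Ix, sigma)
        Syy = ndimage.gaussian_filter(Iy * Iy, sigma)
        Sxy = ndimage.gaussian_filter(Ix * Iy, sigma)
        det = Sxx * Syy - Sxy ** 2
        trace = Sxx + Syy
        return det - k * trace ** 2

    img = np.random.default_rng(2).random((64, 64))  # stand-in grayscale image
    R = harris_response(img)
    corners = np.argwhere(R > 0.01 * R.max())        # simple threshold, no NMS
    print(len(corners))
    ```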

  13. Treecode-based generalized Born method

    NASA Astrophysics Data System (ADS)

    Xu, Zhenli; Cheng, Xiaolin; Yang, Haizhao

    2011-02-01

    We have developed a treecode-based O(N log N) algorithm for the generalized Born (GB) implicit solvation model. Our treecode-based GB (tGB) is based on GBr6 [J. Phys. Chem. B 111, 3055 (2007)], an analytical GB method with a pairwise descreening approximation for the R6 volume integral expression. The algorithm is composed of a cutoff scheme for the effective Born radii calculation and a treecode implementation of the GB charge-charge pair interactions. Test results demonstrate that the tGB algorithm can reproduce the vdW-surface-based Poisson solvation energy with an average relative error of less than 0.6% while providing almost linear-scaling calculation for a representative set of 25 proteins of different sizes (from 2815 to 65456 atoms). For a typical system of 10k atoms, the tGB calculation is three times faster than the direct summation as implemented in the original GBr6 model. Thus, our tGB method provides an efficient way to perform implicit-solvent GB simulations of larger biomolecular systems at longer time scales.
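
    For orientation, the sketch below (Python) computes the O(N^2) direct sum that the treecode accelerates, using Still's widely used interpolation f_GB = sqrt(r^2 + Ri*Rj*exp(-r^2/(4*Ri*Rj))) for the pair term. The charges, Born radii, and coordinates are random stand-ins, and the cutoff scheme for the effective radii is omitted.

    ```python
    import numpy as np

    def gb_energy(pos, q, born_r, eps_in=1.0, eps_out=80.0):
        """Direct-sum GB polarization energy (kcal/mol, Angstrom, e units)."""
        pref = -0.5 * 332.06371 * (1.0 / eps_in - 1.0 / eps_out)
        r2 = np.sum((pos[:, None, :] - pos[None, :, :]) ** 2, axis=-1)
        RiRj = born_r[:, None] * born_r[None, :]
        f_gb = np.sqrt(r2 + RiRj * np.exp(-r2 / (4.0 * RiRj)))
        # The i == j terms reduce to the Born self energies q_i^2 / R_i.
        return pref * np.sum(np.outer(q, q) / f_gb)

    rng = np.random.default_rng(3)
    pos = 10.0 * rng.normal(size=(100, 3))          # toy coordinates
    q = 0.2 * rng.choice([-1.0, 1.0], size=100)     # toy partial charges
    born_r = rng.uniform(1.5, 3.0, size=100)        # toy effective Born radii
    print(gb_energy(pos, q, born_r))
    ```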

  14. A multicore based parallel image registration method.

    PubMed

    Yang, Lin; Gong, Leiguang; Zhang, Hong; Nosher, John L; Foran, David J

    2009-01-01

    Image registration is a crucial step for many image-assisted clinical applications such as surgery planning and treatment evaluation. In this paper we propose a landmark-based nonlinear image registration algorithm for matching 2D image pairs. The algorithm is shown to be effective and robust under conditions of large deformations. In landmark-based registration, the most important step is establishing the correspondence among the selected landmark points, which usually requires an extensive search that is often computationally expensive. We introduce a nonregular data-partition algorithm that uses K-means clustering to group the landmarks based on the number of available processing cores. This step optimizes memory usage and data transfer. We have tested our method on the IBM Cell Broadband Engine (Cell/B.E.) platform. PMID:19964921
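
    A minimal sketch (Python) of the nonregular data-partition step, with scikit-learn's KMeans and the multiprocessing module standing in for the paper's Cell/B.E. implementation: landmarks are grouped into as many spatially coherent clusters as there are cores, and the clusters are processed in parallel. The per-cluster correspondence search is reduced to a placeholder.

    ```python
    import numpy as np
    from multiprocessing import Pool
    from sklearn.cluster import KMeans

    def match_cluster(landmarks):
        # Placeholder for the expensive per-cluster correspondence search.
        return len(landmarks)

    if __name__ == "__main__":
        rng = np.random.default_rng(4)
        landmarks = rng.uniform(0, 512, size=(1000, 2))  # 2D landmark points
        n_cores = 4
        labels = KMeans(n_clusters=n_cores, n_init=10).fit_predict(landmarks)
        chunks = [landmarks[labels == k] for k in range(n_cores)]
        with Pool(n_cores) as pool:
            print(pool.map(match_cluster, chunks))       # one cluster per core
    ```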

  15. Lagrangian based methods for coherent structure detection.

    PubMed

    Allshouse, Michael R; Peacock, Thomas

    2015-09-01

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows. PMID:26428570

  16. Lagrangian based methods for coherent structure detection

    SciTech Connect

    Allshouse, Michael R.; Peacock, Thomas

    2015-09-15

    There has been a proliferation in the development of Lagrangian analytical methods for detecting coherent structures in fluid flow transport, yielding a variety of qualitatively different approaches. We present a review of four approaches and demonstrate the utility of these methods via their application to the same sample analytic model, the canonical double-gyre flow, highlighting the pros and cons of each approach. Two of the methods, the geometric and probabilistic approaches, are well established and require velocity field data over the time interval of interest to identify particularly important material lines and surfaces, and influential regions, respectively. The other two approaches, implementing tools from cluster and braid theory, seek coherent structures based on limited trajectory data, attempting to partition the flow transport into distinct regions. All four of these approaches share the common trait that they are objective methods, meaning that their results do not depend on the frame of reference used. For each method, we also present a number of example applications ranging from blood flow and chemical reactions to ocean and atmospheric flows.

  17. Chapter 11. Community analysis-based methods

    SciTech Connect

    Cao, Y.; Wu, C.H.; Andersen, G.L.; Holden, P.A.

    2010-05-01

    Microbial communities are each a composite of populations whose presence and relative abundance in water or other environmental samples are a direct manifestation of environmental conditions, including the introduction of microbe-rich fecal material and factors promoting persistence of the microbes therein. As shown by culture-independent methods, different animal-host fecal microbial communities appear distinctive, suggesting that their community profiles can be used to differentiate fecal samples and to potentially reveal the presence of host fecal material in environmental waters. Cross-comparisons of microbial communities from different hosts also reveal relative abundances of genetic groups that can be used to distinguish sources. In increasing order of their information richness, several community analysis methods hold promise for MST applications: phospholipid fatty acid (PLFA) analysis, denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (TRFLP), cloning/sequencing, and PhyloChip. Specific case studies involving TRFLP and PhyloChip approaches demonstrate the ability of community-based analyses of contaminated waters to confirm a diagnosis of water quality based on host-specific marker(s). The success of community-based MST for comprehensively confirming fecal sources relies extensively upon using appropriate multivariate statistical approaches. While community-based MST is still under evaluation and development as a primary diagnostic tool, results presented herein demonstrate its promise. Coupled with its inherently comprehensive ability to capture an unprecedented amount of microbiological data that is relevant to water quality, the tools for microbial community analysis are increasingly accessible, and community-based approaches have unparalleled potential for translation into rapid, perhaps real-time, monitoring platforms.

  18. Method for extruding pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A method and apparatus for extruding pitch-based foam is disclosed. The method includes the steps of: forming a viscous pitch foam; passing the precursor through an extrusion tube; and subjecting the precursor in the extrusion tube to a temperature gradient which varies along the length of the tube, to form an extruded carbon foam. The apparatus includes an extrusion tube having a passageway communicatively connected to a chamber, so that a viscous pitch foam formed in the chamber passes through the extrusion tube, and a heating mechanism in thermal communication with the tube for heating the viscous pitch foam along the length of the tube in accordance with a predetermined temperature gradient.

  19. Homogenization method based on the inverse problem

    SciTech Connect

    Tota, A.; Makai, M.

    2013-07-01

    We present a method for deriving homogeneous multi-group cross sections to replace a heterogeneous region's multi-group cross sections, provided that the fluxes and the currents on the external boundary, and the region-averaged fluxes, are preserved. The method is developed using the diffusion approximation to the neutron transport equation in a symmetrical slab geometry. Assuming that the boundary fluxes are given, two response matrices (RMs) can be defined: the first derives the boundary current from the boundary flux, and the second derives the flux integral over the region from the boundary flux. Assuming that these RMs are known, we present a formula which reconstructs the multi-group cross-section matrix and the diffusion coefficients from the RMs of a homogeneous slab. Applying this formula to the RMs of a slab with multiple homogeneous regions yields a homogenization method which produces homogenized multi-group cross sections and homogenized diffusion coefficients such that the fluxes and the currents on the external boundary, and the region-averaged fluxes, are preserved. The method is based on the determination of the eigenvalues and the eigenvectors of the RMs. We reproduce the four-group cross-section matrix and the diffusion constants from the RMs in numerical examples. We give conditions for replacing a heterogeneous region by a homogeneous one so that the boundary current and the region-averaged flux are preserved for a given boundary flux. (authors)

  20. An Implicit Characteristic Based Method for Electromagnetics

    NASA Technical Reports Server (NTRS)

    Beggs, John H.; Briley, W. Roger

    2001-01-01

    An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.

  1. Graph-based Methods for Orbit Classification

    SciTech Connect

    Bagherjeiran, A; Kamath, C

    2005-09-29

    An important step in the quest for low-cost fusion power is the ability to perform and analyze experiments in prototype fusion reactors. One of the tasks in the analysis of experimental data is the classification of orbits in Poincare plots. These plots are generated by the particles in a fusion reactor as they move within the toroidal device. In this paper, we describe the use of graph-based methods to extract features from orbits. These features are then used to classify the orbits into several categories. Our results show that existing machine learning algorithms are successful in classifying orbits with few points, a situation which can arise in data from experiments.

  2. Subjective evidence based ethnography: method and applications.

    PubMed

    Lahlou, Saadi; Le Bellu, Sophie; Boesen-Mariani, Sabine

    2015-06-01

    Subjective Evidence Based Ethnography (SEBE) is a method designed to access subjective experience. It uses First Person Perspective (FPP) digital recordings as a basis for analytic Replay Interviews (RIW) with the participants. This triggers their memory and enables a detailed, step-by-step understanding of activity: goals, subgoals, determinants of actions, decision-making processes, etc. This paper describes the technique and two applications. First, the analysis of professional practices for know-how transfer in industry is illustrated with the analysis of nuclear power-plant operators' gestures. This shows how SEBE enables modelling activity and describing good and bad practices, risky situations, and expert tacit knowledge. Second, the analysis of full days lived by Polish mothers taking care of their children is described, with a specific focus on how they manage their eating and drinking. This research was done on a sub-sample of a large-scale intervention designed to increase plain water drinking versus sweet beverages. It illustrates the value of SEBE as an exploratory technique complementing more classic approaches such as questionnaires and behavioural diaries: it provides the detailed "how" of the effects that are measured at an aggregate level by other techniques. PMID:25579747

  3. DNA-based methods of geochemical prospecting

    DOEpatents

    Ashby, Matthew

    2011-12-06

    The present invention relates to methods for performing surveys of the genetic diversity of a population. The invention also relates to methods for performing genetic analyses of a population. The invention further relates to methods for the creation of databases comprising the survey information and the databases created by these methods. The invention also relates to methods for analyzing the information to correlate the presence of nucleic acid markers with desired parameters in a sample. These methods have application in the fields of geochemical exploration, agriculture, bioremediation, environmental analysis, clinical microbiology, forensic science and medicine.

  4. Multifractal Framework Based on Blanket Method

    PubMed Central

    Paskaš, Milorad P.; Reljin, Irini S.; Reljin, Branimir D.

    2014-01-01

    This paper proposes two local multifractal measures motivated by the blanket method for the calculation of fractal dimension. They cover both fractal approaches familiar in image processing: one measure supports a model of the image with embedded dimension three (proposed Methods 1 and 3), while the other supports a model of the image embedded in a space of dimension three (proposed Method 2). While the classical blanket method provides only one value for an image (the fractal dimension), the multifractal spectrum obtained by any of the proposed measures gives a whole range of dimensional values. This means that the proposed multifractal blanket model generalizes the classical (monofractal) blanket method and other locally implemented versions of this monofractal approach. The proposed measures are validated on the Brodatz image database through texture classification. All proposed methods give similar classification results, while the average computation time of Method 3 is substantially longer. PMID:24578664
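
    For reference, the classical (monofractal) blanket method that these measures generalize can be sketched as follows (Python): upper and lower blankets are grown around the image surface, the blanket area is measured at each scale, and the fractal dimension is estimated from the log-log slope. The test image and number of scales are illustrative.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def blanket_dimension(img, scales=8):
        u = img.astype(float)              # upper blanket
        b = img.astype(float)              # lower blanket
        areas = []
        for scale in range(1, scales + 1):
            u = np.maximum(u + 1.0, maximum_filter(u, size=3))
            b = np.minimum(b - 1.0, minimum_filter(b, size=3))
            areas.append(np.sum(u - b) / (2.0 * scale))  # area at this scale
        k = np.arange(1, scales + 1)
        slope = np.polyfit(np.log(k), np.log(areas), 1)[0]  # A(k) ~ k^(2-D)
        return 2.0 - slope

    texture = np.random.default_rng(5).integers(0, 256, size=(128, 128))
    print(blanket_dimension(texture))      # near 3 for very rough surfaces
    ```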

  5. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, Andrew M.; Dawson, John

    1993-01-01

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source.

  6. Method for sequencing DNA base pairs

    DOEpatents

    Sessler, A.M.; Dawson, J.

    1993-12-14

    The base pairs of a DNA structure are sequenced with the use of a scanning tunneling microscope (STM). The DNA structure is scanned by the STM probe tip, and, as it is being scanned, the DNA structure is separately subjected to a sequence of infrared radiation from four different sources, each source being selected to preferentially excite one of the four different bases in the DNA structure. Each particular base being scanned is subjected to such sequence of infrared radiation from the four different sources as that particular base is being scanned. The DNA structure as a whole is separately imaged for each subjection thereof to radiation from one only of each source. 6 figures.

  7. New ITF measure method based on fringes

    NASA Astrophysics Data System (ADS)

    Fang, Qiaoran; Liu, Shijie; Gao, Wanrong; Zhou, You; Liu, HuanHuan

    2016-01-01

    With the rapid development of intense-laser and aerospace projects, interferometers are widely used to measure mid-spatial-frequency indicators of optical elements, which places very high demands on the interferometer system transfer function (ITF). Conventionally, the ITF is measured by comparing the power spectra of known phase objects such as a high-quality phase step. However, the fabrication of a phase step is complex and costly, especially for the measurement of large-aperture interferometers. In this paper, a new fringe-based method is proposed to measure the ITF without additional objects. The frequency is changed by adjusting the number of fringes, and the normalized transfer function is measured at different frequencies. The ITF values measured by the fringe method are consistent with those of the traditional phase-step method, which confirms the feasibility of the proposed method. Moreover, the measurement error caused by defocus is analyzed. The proposed method does not require the preparation of a step artifact, which greatly reduces the test cost and is of great significance for the ITF measurement of large-aperture interferometers.

  8. HMM-Based Gene Annotation Methods

    SciTech Connect

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  9. Immunoassay control method based on light scattering

    NASA Astrophysics Data System (ADS)

    Bilyi, Olexander I.; Kiselyov, Eugene M.; Petrina, R. O.; Ferensovich, Yaroslav P.; Yaremyk, Roman Y.

    1999-11-01

    The physical principles of registering immune reactions by light scattering methods are discussed. The operation of laser nephelometry for measuring antigen-antibody reactions is described, as is the technique of obtaining diagnostic latex agglutination reactions for diphtheria determination.

  10. Roadside-based communication system and method

    NASA Technical Reports Server (NTRS)

    Bachelder, Aaron D. (Inventor)

    2007-01-01

    A roadside-based communication system providing backup communication between emergency mobile units and emergency command centers. In the event of failure of the primary communication system, the mobile units transmit wireless messages to nearby roadside controllers, which may take the form of intersection controllers. The intersection controllers receive the wireless messages, convert them into standard digital streams, and transmit the digital streams along a citywide network to a destination intersection or command center.

  11. Method of casting pitch based foam

    DOEpatents

    Klett, James W.

    2002-01-01

    A process for producing molded pitch-based foam is disclosed which minimizes cracking. The process includes forming a viscous pitch foam in a container, and then transferring the viscous pitch foam from the container into a mold. The viscous pitch foam in the mold is hardened to provide a carbon foam having a relatively uniform distribution of pore sizes and a highly aligned graphitic structure in the struts.

  12. Method for producing iron-based catalysts

    DOEpatents

    Farcasiu, Malvina; Kaufman, Phillip B.; Diehl, J. Rodney; Kathrein, Hendrik

    1999-01-01

    A method for preparing an acid catalyst having a long shelf-life is provided comprising doping crystalline iron oxides with lattice-compatible metals and heating the now-doped oxide with halogen compounds at elevated temperatures. The invention also provides for a catalyst comprising an iron oxide particle having a predetermined lattice structure, one or more metal dopants for said iron oxide, said dopants having an ionic radius compatible with said lattice structure; and a halogen bound with the iron and the metal dopants on the surface of the particle.

  13. PCLC flake-based apparatus and method

    DOEpatents

    Cox, Gerald P; Fromen, Cathy A; Marshall, Kenneth L; Jacobs, Stephen D

    2012-10-23

    A PCLC flake/fluid host suspension that enables dual-frequency, reverse-drive reorientation and relaxation of the PCLC flakes is composed of a fluid host that is a mixture of: 94 to 99.5 wt% of a non-aqueous fluid medium that has a dielectric constant ε with 1 < ε < 7, a conductivity σ with 10^-9 S/m < σ < 10^-7 S/m, and a resistivity r with 10^7 Ω·m < r < 10^10 Ω·m, and that is optically transparent in a selected wavelength range Δλ; 0.0025 to 0.25 wt% of an inorganic chloride salt; 0.0475 to 4.75 wt% water; and 0.25 to 2 wt% of an anionic surfactant; with 1 to 5 wt% of PCLC flakes suspended in the fluid host mixture. Various encapsulation forms and methods are disclosed, including a basic test cell, a microwell, a microcube, direct encapsulation (I), direct encapsulation (II), and coacervation encapsulation. Applications to display devices are disclosed.

  14. The Consistency and Ranking Method Based on Comparison Linguistic Variable

    NASA Astrophysics Data System (ADS)

    Zhao, Qisheng; Wei, Fajie; Zhou, Shenghan

    This study develops a consistency approximation and ranking method based on comparison linguistic variables. The method constructs a consistent fuzzy complementary judgment matrix from the judgment matrix of linguistic variables, where the judgment matrix is defined by the fuzzy set or vague set of the comparison linguistic variables. Following the TOPSIS method, the VPIS and VNIS are obtained, and relative similarity degrees are defined from the distances between the alternatives and the VPIS or VNIS. The study then analyzes how the evaluation method, the index weights, and the appraisers affect the quality of the evaluation. Finally, improvements are discussed, and an example is presented to illustrate the proposed method.

  15. Brain Based Teaching: Fad or Promising Teaching Method.

    ERIC Educational Resources Information Center

    Winters, Clyde A.

    This paper discusses brain-based teaching and examines its relevance as a teaching method and knowledge base. Brain-based teaching is very popular among early childhood educators. Positive attributes of brain-based education include student engagement and active involvement in their own learning, teachers teaching for meaning and understanding,…

  16. Shrinkage regression-based methods for microarray missing value imputation

    PubMed Central

    2013-01-01

    Background Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. Results To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Conclusions Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods. PMID:24565159
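
    A minimal sketch (Python) of the regression-with-shrinkage idea, using an illustrative constant shrinkage factor in place of the paper's derived estimator: select the genes most correlated with the target gene, fit least squares on the observed samples, shrink the coefficients, and predict the missing entry.

    ```python
    import numpy as np

    def impute(expr, gene, sample, k=5, shrink=0.9):
        obs = ~np.isnan(expr[gene])               # samples observed for target
        obs[sample] = False
        corr = np.array([
            abs(np.corrcoef(expr[g, obs], expr[gene, obs])[0, 1])
            if g != gene else -1.0
            for g in range(expr.shape[0])
        ])
        nbrs = np.argsort(corr)[-k:]              # k most-correlated genes
        A = expr[nbrs][:, obs].T                  # design matrix: neighbors
        beta, *_ = np.linalg.lstsq(A, expr[gene, obs], rcond=None)
        return expr[nbrs, sample] @ (shrink * beta)   # shrunken prediction

    rng = np.random.default_rng(6)
    expr = rng.normal(size=(50, 20))              # 50 genes x 20 samples
    expr[0, 3] = np.nan                           # one missing value
    print(impute(expr, gene=0, sample=3))
    ```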

  17. Decision Making Method Based on Paraconsistent Annotated Logic and Statistical Method: a Comparison

    NASA Astrophysics Data System (ADS)

    de Carvalho, Fábio Romeu; Brunstein, Israel; Abe, Jair Minoro

    2008-10-01

    Presently, there are new kinds of logic capable of handling uncertain and contradictory data without becoming trivial. Decision making theories based on these logics have proven powerful in many respects compared with more traditional methods based on statistics. In this paper we outline a first study comparing a decision making theory based on Paraconsistent Annotated Evidential Logic Eτ (the Paraconsistent Decision Method, PDM) with the classical Statistical Decision Method (SDM). Some discussion is presented below.

  18. Pyrolyzed-parylene based sensors and method of manufacture

    NASA Technical Reports Server (NTRS)

    Tai, Yu-Chong (Inventor); Liger, Matthieu (Inventor); Miserendino, Scott (Inventor); Konishi, Satoshi (Inventor)

    2007-01-01

    A method (and resulting structure) for fabricating a sensing device. The method includes providing a substrate comprising a surface region and forming an insulating material overlying the surface region. The method also includes forming a film of carbon-based material overlying the insulating material and treating the film to pyrolyze the carbon-based material, causing formation of a film of substantially carbon-based material having a resistivity within a predetermined range. The method also provides at least a portion of the pyrolyzed carbon-based material in a sensor application and uses that portion in the sensing application. In a specific embodiment, the sensing application is selected from chemical, humidity, piezoelectric, radiation, mechanical strain, or temperature sensing.

  19. Correlation theory-based signal processing method for CMF signals

    NASA Astrophysics Data System (ADS)

    Shen, Yan-lin; Tu, Ya-qing

    2016-06-01

    The precision of Coriolis mass flowmeter (CMF) signal processing directly affects the measurement accuracy of Coriolis mass flowmeters. To improve this accuracy, a correlation theory-based signal processing method for CMF signals is proposed, comprising a correlation theory-based frequency estimation method and a phase difference estimation method. Theoretical analysis shows that the proposed method eliminates the effect of non-integral-period sampling on frequency and phase difference estimation. The results of simulations and field experiments demonstrate that the proposed method improves the anti-interference performance of frequency and phase difference estimation. For frequency estimation it performs better than the adaptive notch filter, discrete Fourier transform, and autocorrelation methods, and for phase difference estimation it performs better than the data-extension-based correlation, Hilbert transform, quadrature delay estimator, and discrete Fourier transform methods, which contributes to improving the measurement accuracy of Coriolis mass flowmeters.
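
    A minimal sketch (Python) of correlation-based estimation for two noisy sinusoids of equal frequency, the situation for a CMF's two pickup signals; correlation suppresses the uncorrelated noise. The paper's exact estimators differ, and the sample rate, frequency, and noise level below are illustrative.

    ```python
    import numpy as np

    fs, f0, dphi = 4000.0, 99.7, 0.31  # sample rate (Hz), frequency, phase diff
    t = np.arange(8192) / fs
    rng = np.random.default_rng(7)
    x1 = np.sin(2 * np.pi * f0 * t) + 0.05 * rng.normal(size=t.size)
    x2 = np.sin(2 * np.pi * f0 * t + dphi) + 0.05 * rng.normal(size=t.size)

    # Frequency: the autocorrelation of a noisy sinusoid is a noise-suppressed
    # cosine at the same frequency; take its dominant FFT bin.
    r11 = np.correlate(x1, x1, mode="full")[x1.size - 1:]
    spec = np.abs(np.fft.rfft(r11 * np.hanning(r11.size)))
    freq = np.fft.rfftfreq(r11.size, 1 / fs)[np.argmax(spec[1:]) + 1]

    # Phase difference: for equal-frequency sinusoids the normalized zero-lag
    # cross-correlation approaches cos(dphi).
    rho = np.dot(x1, x2) / np.sqrt(np.dot(x1, x1) * np.dot(x2, x2))
    print(freq, np.arccos(np.clip(rho, -1.0, 1.0)))   # ~99.7 Hz, ~0.31 rad
    ```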

  20. EPA (ENVIRONMENTAL PROTECTION AGENCY) METHOD STUDY 30, METHOD 625 - BASE/NEUTRALS, ACIDS AND PESTICIDES

    EPA Science Inventory

    The work which is described in this report was performed for the purpose of validating, through an interlaboratory study, Method 625 for the analysis of the base/neutral, acid, and pesticide priority pollutants. This method is based on the extraction and concentration of the vari...

  1. A New Adaptive Image Denoising Method Based on Neighboring Coefficients

    NASA Astrophysics Data System (ADS)

    Biswas, Mantosh; Om, Hari

    2016-03-01

    Many techniques have been proposed for image denoising, including NeighShrink, the improved adaptive wavelet denoising method based on neighboring coefficients (IAWDMBNC), the improved wavelet shrinkage technique for image denoising (IWST), the local adaptive Wiener filter (LAWF), wavelet packet thresholding using median and Wiener filters (WPTMWF), and the adaptive image denoising method based on thresholding (AIDMT). These techniques are based on a local statistical description of the neighboring coefficients in a window. However, they do not give good image quality because, owing to their thresholds, they cannot modify and remove too many small wavelet coefficients simultaneously. In this paper, a new image denoising method is proposed that shrinks the noisy coefficients using an adaptive threshold. Our method overcomes these drawbacks and performs better than the NeighShrink, IAWDMBNC, IWST, LAWF, WPTMWF, and AIDMT denoising methods.
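
    For context, the sketch below (Python) implements shrinkage in the NeighShrink family that the new method improves on: each detail coefficient is scaled by max(0, 1 - λ²/S²), where S² sums the squared coefficients in its 3x3 neighborhood. The fixed universal threshold and toy image are illustrative; the paper's contribution is precisely a better, adaptive threshold.

    ```python
    import numpy as np
    import pywt
    from scipy.ndimage import uniform_filter

    def neigh_shrink(img, wavelet="db8", level=2):
        coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
        # Noise scale from the finest diagonal subband (robust MAD estimate).
        sigma = np.median(np.abs(coeffs[-1][2])) / 0.6745
        lam2 = 2.0 * sigma ** 2 * np.log(img.size)    # universal threshold^2
        out = [coeffs[0]]                             # keep approximation
        for detail in coeffs[1:]:
            shrunk = []
            for d in detail:
                S2 = uniform_filter(d * d, size=3) * 9.0  # 3x3 sum of squares
                gain = np.maximum(0.0, 1.0 - lam2 / np.maximum(S2, 1e-12))
                shrunk.append(d * gain)
            out.append(tuple(shrunk))
        return pywt.waverec2(out, wavelet)

    rng = np.random.default_rng(8)
    clean = np.zeros((128, 128)); clean[32:96, 32:96] = 100.0
    noisy = clean + rng.normal(0.0, 10.0, clean.shape)
    print(np.std(noisy - clean), np.std(neigh_shrink(noisy) - clean))
    ```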

  2. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we propose a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) is used, and the selection process based on the decision tree is performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier, the support vector machine (SVM), is chosen. To test the validity of the proposed method, we apply the EEG feature selection method based on a decision tree to BCI Competition II dataset Ia, and the experiment shows encouraging results. PMID:26405856
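
    A minimal sketch (Python) of the described pipeline, assuming scikit-learn and synthetic data in place of the BCI Competition II Ia recordings: PCA extracts features, a decision tree selects the informative ones, and a linear SVM classifies.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(9)
    X = rng.normal(size=(200, 64))        # 200 trials x 64 raw EEG features
    y = rng.integers(0, 2, size=200)      # two mental-task labels

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    pca = PCA(n_components=20).fit(X_tr)
    Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

    # The decision tree ranks PCA features; keep those it actually used.
    tree = DecisionTreeClassifier(random_state=0).fit(Z_tr, y_tr)
    keep = tree.feature_importances_ > 0
    svm = SVC(kernel="linear").fit(Z_tr[:, keep], y_tr)
    print(svm.score(Z_te[:, keep], y_te))  # chance level on random data
    ```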

  3. Propensity Score–Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application*

    PubMed Central

    ZHOU, XIANG; XIE, YU

    2012-01-01

    Since the seminal introduction of the propensity score by Rosenbaum and Rubin, propensity-score-based (PS-based) methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the propensity score approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For situations where this assumption may be violated, Heckman and his associates have recently developed a novel approach based on marginal treatment effects (MTE). In this paper, we (1) explicate consequences for PS-based methods when aspects of the ignorability assumption are violated; (2) compare PS-based methods and MTE-based methods by making a close examination of their identification assumptions and estimation performances; (3) apply these two approaches in estimating the economic return to college using data from NLSY 1979 and discuss their discrepancies in results. When there is a sorting gain but no systematic baseline difference between treated and untreated units given observed covariates, PS-based methods can identify the treatment effect of the treated (TT). The MTE approach performs best when there is a valid and strong instrumental variable (IV). In addition, this paper introduces the “smoothing-difference PS-based method,” which enables us to uncover heterogeneity across people of different propensity scores in both counterfactual outcomes and treatment effects. PMID:26877562
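
    A minimal sketch (Python) of one standard PS-based estimator of the treatment effect of the treated (TT), the quantity the paper says PS-based methods identify under sorting gain: fit a logistic propensity model and reweight the controls by the propensity odds p/(1-p). The synthetic data and the true effect of 2 are illustrative.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(10)
    n = 5000
    x = rng.normal(size=(n, 2))                    # observed covariates
    p_true = 1.0 / (1.0 + np.exp(-(x[:, 0] - 0.5 * x[:, 1])))
    d = rng.random(n) < p_true                     # treatment assignment
    y = x[:, 0] + 2.0 * d + rng.normal(size=n)     # true effect = 2

    p = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
    w = p[~d] / (1.0 - p[~d])      # odds weights map controls to the treated
    tt = y[d].mean() - np.average(y[~d], weights=w)
    print(tt)                      # close to 2.0
    ```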

  4. Modeling of Tumor Growth Based on Adomian Decomposition Method

    NASA Astrophysics Data System (ADS)

    Mahiddin, Norhasimah; Ali, Siti Aishah Hashim

    2008-01-01

    Modeling a growing tumor over time is extremely difficult because of the complex biological phenomena underlying cancer growth. Existing models, mostly based on numerical methods, can describe spherically shaped avascular tumors but cannot match the highly heterogeneous and complex-shaped tumors seen in cancer patients. We propose a new technique based on the Adomian decomposition method to solve a cancer model analytically.

  5. Optimal assignment methods for ligand-based virtual screening

    PubMed Central

    2009-01-01

    Background Ligand-based virtual screening experiments are an important task in the early drug discovery stage. An ambitious aim in each experiment is to disclose active structures based on new scaffolds. To perform such "scaffold-hopping" for individual problems and targets, a plethora of different similarity methods based on diverse techniques have been published in recent years. The optimal assignment approach on molecular graphs, a successful method in the field of quantitative structure-activity relationships, had not previously been tested as a ligand-based virtual screening method. Results We evaluated two already published and two new optimal assignment methods on various data sets. To emphasize the "scaffold-hopping" ability, we used the information from chemotype clustering analyses in our evaluation metrics. Comparisons with literature results show improved early recognition performance and comparable results over the complete data set. A new method based on two different assignment steps shows increased "scaffold-hopping" behavior together with good early recognition performance. Conclusion The presented methods show a good combination of chemotype discovery and enrichment of active structures. Additionally, the optimal assignment on molecular graphs has the advantage that the mappings can be investigated and interpreted, allowing precise modifications of internal parameters of the similarity measure for specific targets. All methods have low computation times, which makes them applicable to screening large data sets. PMID:20150995
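
    A minimal sketch (Python) of the optimal assignment idea on toy data: build a pairwise atom-atom similarity matrix for two molecules and find the assignment maximizing total similarity with the Hungarian algorithm. The Gaussian similarity on random descriptors is illustrative; the published methods use richer chemical features on molecular graphs.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    rng = np.random.default_rng(11)
    mol_a = rng.normal(size=(12, 4))      # 12 atoms x 4 descriptors
    mol_b = rng.normal(size=(15, 4))      # 15 atoms x 4 descriptors

    d2 = np.sum((mol_a[:, None, :] - mol_b[None, :, :]) ** 2, axis=-1)
    sim = np.exp(-d2 / 4.0)               # pairwise atom similarities
    rows, cols = linear_sum_assignment(-sim)   # maximize total similarity
    score = sim[rows, cols].sum() / max(len(mol_a), len(mol_b))
    print(score)                          # normalized molecular similarity
    ```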

  6. Spectrum reconstruction based on the constrained optimal linear inverse methods.

    PubMed

    Ren, Wenyi; Zhang, Chunmin; Mu, Tingkui; Dai, Haishan

    2012-07-01

    The dispersion of birefringent material results in a spectrally varying Nyquist frequency for a Fourier transform spectrometer based on a birefringent prism. Correct spectral information cannot be retrieved from the observed interferogram if the dispersion effect is not appropriately compensated. Some methods, such as nonuniform fast Fourier transforms and a compensation method, have been proposed to reconstruct the spectrum. In this Letter, an alternative constrained spectrum reconstruction method is suggested for the stationary polarization interference imaging spectrometer (SPIIS) based on the Savart polariscope. The noise and the total measurement error are included in the theoretical model of the interferogram, and the spectrum reconstruction is performed using constrained optimal linear inverse methods. Numerical simulation shows that the proposed method is much more effective and robust than the unconstrained spectrum reconstruction method proposed by Jian, and it provides a useful spectrum reconstruction approach for the SPIIS. PMID:22743461
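
    As a generic stand-in for the constrained optimal linear inverse, the sketch below (Python) reconstructs a spectrum from a simulated measurement by Tikhonov-regularized least squares with a smoothness constraint. The kernel, spectrum, and regularization weight are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(12)
    m, n = 120, 80
    A = rng.normal(size=(m, n)) / np.sqrt(m)   # toy instrument kernel
    s_true = np.exp(-0.5 * ((np.arange(n) - 40) / 6.0) ** 2)  # true spectrum
    I = A @ s_true + 0.01 * rng.normal(size=m)  # noisy interferogram model

    D = np.diff(np.eye(n), axis=0)             # first-difference operator
    lam = 0.1
    # Closed-form solution of argmin ||A s - I||^2 + lam * ||D s||^2.
    s_hat = np.linalg.solve(A.T @ A + lam * D.T @ D, A.T @ I)
    print(np.linalg.norm(s_hat - s_true) / np.linalg.norm(s_true))
    ```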

  7. A Channelization-Based DOA Estimation Method for Wideband Signals.

    PubMed

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566
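
    A minimal sketch (Python) of the incoherent, channelized idea behind Channelization-ISM: treat each frequency sub-channel as a narrowband problem, run MUSIC per sub-channel, and combine the pseudo-spectra before picking the DOA. Sub-channel snapshots are simulated directly in the frequency domain; the array geometry, band, and SNR are illustrative, and the paper uses a digital channelization receiver rather than this plain per-frequency model.

    ```python
    import numpy as np

    M, d, c = 8, 0.3, 3e8                      # sensors, spacing (m), speed
    freqs = np.linspace(100e6, 400e6, 16)      # sub-channel center frequencies
    theta_true = np.deg2rad(20.0)
    grid = np.deg2rad(np.linspace(-60, 60, 241))
    rng = np.random.default_rng(13)

    def steer(theta, f):                       # uniform linear array
        return np.exp(-2j * np.pi * f * d * np.arange(M) * np.sin(theta) / c)

    spectrum = np.zeros(grid.size)
    for f in freqs:
        s = rng.normal(size=64) + 1j * rng.normal(size=64)   # source snapshots
        noise = 0.1 * (rng.normal(size=(M, 64)) + 1j * rng.normal(size=(M, 64)))
        X = np.outer(steer(theta_true, f), s) + noise        # sub-channel data
        R = X @ X.conj().T / X.shape[1]                      # sample covariance
        _, V = np.linalg.eigh(R)               # eigenvalues ascending
        En = V[:, :-1]                         # noise subspace (one source)
        A = np.stack([steer(t, f) for t in grid])
        spectrum += 1.0 / np.sum(np.abs(A.conj() @ En) ** 2, axis=1)

    print(np.rad2deg(grid[np.argmax(spectrum)]))   # close to 20 degrees
    ```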

  8. Learning to Teach within Practice-Based Methods Courses

    ERIC Educational Resources Information Center

    Kazemi, Elham; Waege, Kjersti

    2015-01-01

    Supporting prospective teachers to enact high quality instruction requires transforming their methods preparation. This study follows three teachers through a practice-based elementary methods course. Weekly class sessions took place in an elementary school. The setting afforded opportunities for prospective teachers to engage in cycles of…

  9. Qualitative Assessment of Inquiry-Based Teaching Methods

    ERIC Educational Resources Information Center

    Briggs, Michael; Long, George; Owens, Katrina

    2011-01-01

    A new approach to teaching method assessment using student focused qualitative studies and the theoretical framework of mental models is proposed. The methodology is considered specifically for the advantages it offers when applied to the assessment of inquiry-based teaching methods. The theoretical foundation of mental models is discussed, and…

  10. A Novel Method for Learner Assessment Based on Learner Annotations

    ERIC Educational Resources Information Center

    Noorbehbahani, Fakhroddin; Samani, Elaheh Biglar Beigi; Jazi, Hossein Hadian

    2013-01-01

    Assessment is one of the most essential parts of any instructive learning process which aims to evaluate a learner's knowledge about learning concepts. In this work, a new method for learner assessment based on learner annotations is presented. The proposed method exploits the M-BLEU algorithm to find the most similar reference annotations…

  11. A Channelization-Based DOA Estimation Method for Wideband Signals

    PubMed Central

    Guo, Rui; Zhang, Yue; Lin, Qianqiang; Chen, Zengping

    2016-01-01

    In this paper, we propose a novel direction of arrival (DOA) estimation method for wideband signals with sensor arrays. The proposed method splits the wideband array output into multiple frequency sub-channels and estimates the signal parameters using a digital channelization receiver. Based on the output sub-channels, a channelization-based incoherent signal subspace method (Channelization-ISM) and a channelization-based test of orthogonality of projected subspaces method (Channelization-TOPS) are proposed. Channelization-ISM applies narrowband signal subspace methods on each sub-channel independently. Then the arithmetic mean or geometric mean of the estimated DOAs from each sub-channel gives the final result. Channelization-TOPS measures the orthogonality between the signal and the noise subspaces of the output sub-channels to estimate DOAs. The proposed channelization-based method isolates signals in different bandwidths reasonably and improves the output SNR. It outperforms the conventional ISM and TOPS methods on estimation accuracy and dynamic range, especially in real environments. Besides, the parallel processing architecture makes it easy to implement on hardware. A wideband digital array radar (DAR) using direct wideband radio frequency (RF) digitization is presented. Experiments carried out in a microwave anechoic chamber with the wideband DAR are presented to demonstrate the performance. The results verify the effectiveness of the proposed method. PMID:27384566

  12. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, G.F.; Steindler, M.J.

    1985-05-21

    A method of removing a phosphorus-based poisonous substance from contaminated water is presented, in which the toxicity of the phosphorus-based substance is subsequently destroyed as well. A water-immiscible organic solvent is first immobilized on a supported liquid membrane, and the contaminated water is then contacted with one side of the supported liquid membrane to absorb the phosphorus-based substance into the organic solvent. The other side of the supported liquid membrane is contacted with a hydroxy-affording strong base, which reacts with the solvated phosphorus-based species to form a non-toxic product.

  13. [Synchrotron-based characterization methods applied to ancient materials (I)].

    PubMed

    Anheim, Étienne; Thoury, Mathieu; Bertrand, Loïc

    2015-12-01

    This article presents the first results of a transdisciplinary research programme in heritage sciences. Building on the growing use and potential of synchrotron-based micro- and nano-characterization methods for studying ancient materials (archaeology, palaeontology, cultural heritage, past environments), this contribution identifies and tests conceptual and methodological points of convergence between the physicochemical and historical sciences. PMID:25200450

  14. Isolate Speech Recognition Based on Time-Frequency Analysis Methods

    NASA Astrophysics Data System (ADS)

    Mantilla-Caeiros, Alfredo; Nakano Miyatake, Mariko; Perez-Meana, Hector

    A feature extraction method for isolated speech recognition is proposed, based on time-frequency analysis using a critical-band concept similar to that of the inner ear model; it emulates the inner ear by performing a signal decomposition similar to that carried out by the basilar membrane. Evaluation results show that the proposed method performs better than previously proposed feature extraction methods when used to characterize normal as well as esophageal speech.

  15. Fertility awareness-based methods: another option for family planning.

    PubMed

    Pallone, Stephen R; Bergus, George R

    2009-01-01

    Modern fertility awareness-based methods (FABMs) have been offered as alternative methods of family planning. The Billings Ovulation Method, the Creighton Model, and the Symptothermal Method are the more widely used FABMs and can be more narrowly defined as natural family planning. The first two methods are based on the examination of cervical secretions to assess fertility. The Symptothermal Method combines characteristics of cervical secretions, basal body temperature, and historical cycle data to determine fertility. FABMs also include the more recently developed Standard Days Method and TwoDay Method. All are distinct from the more traditional rhythm and basal body temperature methods alone. Although these older methods are not highly effective, modern FABMs have typical-use unintended pregnancy rates of 1% to 3% in both industrialized and nonindustrialized nations. Studies suggest that in the United States physician knowledge of FABMs is frequently incomplete. We review the available evidence on effectiveness for preventing unintended pregnancy, the social demographics of users of the methods, and social outcomes related to FABMs, all of which suggest that family physicians can offer modern FABMs as effective means of family planning. We also provide suggestions about useful educational and instructional resources for family physicians and their patients. PMID:19264938

  16. A method for selecting training samples based on camera response

    NASA Astrophysics Data System (ADS)

    Zhang, Leihong; Li, Bei; Pan, Zilan; Liang, Dong; Kang, Yi; Zhang, Dawei; Ma, Xiuhua

    2016-09-01

    In the process of spectral reflectance reconstruction, sample selection plays an important role in the accuracy of the constructed model and in reconstruction effects. In this paper, a method for training sample selection based on camera response is proposed. It has been proved that the camera response value has a close correlation with the spectral reflectance. Consequently, in this paper we adopt the technique of drawing a sphere in camera response value space to select the training samples which have a higher correlation with the test samples. In addition, the Wiener estimation method is used to reconstruct the spectral reflectance. Finally, we find that the method of sample selection based on camera response value has the smallest color difference and root mean square error after reconstruction compared to the method using the full set of Munsell color charts, the Mohammadi training sample selection method, and the stratified sampling method. Moreover, the goodness of fit coefficient of this method is also the highest among the four sample selection methods. Taking all the factors mentioned above into consideration, the method of training sample selection based on camera response value enhances the reconstruction accuracy from both the colorimetric and spectral perspectives.
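
    As an illustration of the selection-plus-reconstruction pipeline described above, the following Python sketch selects training samples inside a sphere around a test sample in camera response space and then applies Wiener estimation; the radius, array names, and data shapes are illustrative assumptions, not values from the paper.

        import numpy as np

        def select_by_sphere(train_rgb, test_rgb, radius):
            # Keep training samples whose camera response lies within a
            # sphere of the given radius around the test response.
            d = np.linalg.norm(train_rgb - test_rgb, axis=1)
            return np.where(d <= radius)[0]

        def wiener_reconstruct(train_rgb, train_refl, test_rgb):
            # Wiener estimation: W = K_rc K_cc^-1, reflectance = W c.
            K_rc = train_refl.T @ train_rgb           # (bands x 3)
            K_cc = train_rgb.T @ train_rgb            # (3 x 3)
            return K_rc @ np.linalg.pinv(K_cc) @ test_rgb

        # Hypothetical data: 500 training samples, 31 spectral bands.
        rng = np.random.default_rng(0)
        train_rgb = rng.random((500, 3))
        train_refl = rng.random((500, 31))
        test_rgb = rng.random(3)
        idx = select_by_sphere(train_rgb, test_rgb, radius=0.2)
        refl = wiener_reconstruct(train_rgb[idx], train_refl[idx], test_rgb)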

  17. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1987-10-07

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  18. Method of recovering oil-based fluid and apparatus

    SciTech Connect

    Brinkley, H.E.

    1993-07-20

    A method is described for recovering oil-based fluid from a surface having oil-based fluid thereon comprising the steps of: applying to the oil-based fluid on the surface an oil-based fluid absorbent cloth of man-made fibers, the cloth having at least one napped surface that defines voids therein, the nap being formed of raised ends or loops of the fibers; absorbing, with the cloth, oil-based fluid; feeding the cloth having absorbed oil-based fluid to a means for applying a force to the cloth to recover oil-based fluid; and applying force to the cloth to recover oil-based fluid therefrom using the force applying means.

  19. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Moyzis, Robert K.; Ratliff, Robert L.; Shera, E. Brooks; Stewart, Carleton C.

    1990-01-01

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed.

  20. Method for rapid base sequencing in DNA and RNA

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Moyzis, R.K.; Ratliff, R.L.; Shera, E.B.; Stewart, C.C.

    1990-10-09

    A method is provided for the rapid base sequencing of DNA or RNA fragments wherein a single fragment of DNA or RNA is provided with identifiable bases and suspended in a moving flow stream. An exonuclease sequentially cleaves individual bases from the end of the suspended fragment. The moving flow stream maintains the cleaved bases in an orderly train for subsequent detection and identification. In a particular embodiment, individual bases forming the DNA or RNA fragments are individually tagged with a characteristic fluorescent dye. The train of bases is then excited to fluorescence with an output spectrum characteristic of the individual bases. Accordingly, the base sequence of the original DNA or RNA fragment can be reconstructed. 2 figs.

  1. Comparing Methods for UAV-Based Autonomous Surveillance

    NASA Technical Reports Server (NTRS)

    Freed, Michael; Harris, Robert; Shafto, Michael

    2004-01-01

    We describe an approach to evaluating algorithmic and human performance in directing UAV-based surveillance. Its key elements are a decision-theoretic framework for measuring the utility of a surveillance schedule and an evaluation testbed consisting of 243 scenarios covering a well-defined space of possible missions. We apply this approach to two example UAV-based surveillance methods, a TSP-based algorithm and a human-directed approach, then compare them to identify general strengths and weaknesses of each method.

  2. Leaf image segmentation method based on multifractal detrended fluctuation analysis

    NASA Astrophysics Data System (ADS)

    Wang, Fang; Li, Jin-Wei; Shi, Wen; Liao, Gui-Ping

    2013-12-01

    To identify singular regions of crop leaves affected by diseases, an image segmentation method based on multifractal detrended fluctuation analysis (MF-DFA) is proposed. In the proposed method, we first define a new texture descriptor: the local generalized Hurst exponent, denoted LHq, based on MF-DFA. Then, the box-counting dimension f(LHq) is calculated for sub-images formed from the LHq values of pixels in a specific region, so that a series of f(LHq) values for the different regions can be obtained. Finally, the singular regions are segmented according to the corresponding f(LHq). Images of six kinds of diseased corn leaves are tested in our experiments. The proposed method is compared with two other segmentation methods, one based on the multifractal spectrum and one on fuzzy C-means clustering. The comparison results demonstrate that the proposed method recognizes the lesion regions more effectively and provides more robust segmentations.
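
    For readers unfamiliar with MF-DFA, the sketch below computes a generalized Hurst exponent h(q) for a one-dimensional profile; applying this to the pixel series of a local neighborhood would yield an LHq-style descriptor. The window sizes and q value are illustrative assumptions, not the paper's settings.

        import numpy as np

        def generalized_hurst(x, q=2.0, scales=(8, 16, 32, 64)):
            # MF-DFA: integrate the series, detrend it in windows of size s,
            # and fit log F_q(s) against log s; the slope is h(q).
            y = np.cumsum(x - np.mean(x))
            Fq = []
            for s in scales:
                f2 = []
                for v in range(len(y) // s):
                    seg = y[v * s:(v + 1) * s]
                    t = np.arange(s)
                    trend = np.polyval(np.polyfit(t, seg, 1), t)
                    f2.append(np.mean((seg - trend) ** 2))
                Fq.append(np.mean(np.asarray(f2) ** (q / 2.0)) ** (1.0 / q))
            slope, _ = np.polyfit(np.log(scales), np.log(Fq), 1)
            return slope

        rng = np.random.default_rng(1)
        print(generalized_hurst(rng.standard_normal(1024)))  # ~0.5 for white noise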

  3. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have become increasingly popular in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
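
    The memory bottleneck the authors address can be seen in a simple form: rather than materializing the full n-by-n distance matrix, pairwise distances can be streamed into a fixed-size histogram. The sketch below is a generic illustration of that idea, not the authors' algorithm, which additionally reduces running time.

        import numpy as np

        def distance_histogram(points, bins=50, max_dist=1.0):
            # Stream pairwise distances into a fixed-size histogram, so
            # memory stays O(bins) instead of O(n^2).
            hist = np.zeros(bins, dtype=np.int64)
            for i in range(len(points)):
                d = np.linalg.norm(points[i + 1:] - points[i], axis=1)
                idx = np.minimum((d / max_dist * bins).astype(int), bins - 1)
                np.add.at(hist, idx, 1)
            return hist

        pts = np.random.default_rng(2).random((5000, 2))
        h = distance_histogram(pts, bins=50, max_dist=np.sqrt(2.0))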

  4. Correction of Misclassifications Using a Proximity-Based Estimation Method

    NASA Astrophysics Data System (ADS)

    Niemistö, Antti; Shmulevich, Ilya; Lukin, Vladimir V.; Dolia, Alexander N.; Yli-Harja, Olli

    2004-12-01

    An estimation method for correcting misclassifications in signal and image processing is presented. The method is based on the use of context-based (temporal or spatial) information in a sliding-window fashion. The classes can be purely nominal, that is, an ordering of the classes is not required. The method employs nonlinear operations based on class proximities defined by a proximity matrix. Two case studies are presented. In the first, the proposed method is applied to one-dimensional signals for processing data that are obtained by a musical key-finding algorithm. In the second, the estimation method is applied to two-dimensional signals for correction of misclassifications in images. In the first case study, the proximity matrix employed by the estimation method follows directly from music perception studies, whereas in the second case study, the optimal proximity matrix is obtained with genetic algorithms as the learning rule in a training-based optimization framework. Simulation results are presented in both case studies and the degree of improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
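
    A minimal sketch of the sliding-window idea, assuming a given proximity matrix P in which P[a, b] is the proximity between classes a and b: each sample is relabeled with the class that maximizes the summed proximity to the labels in its window. This illustrates the general mechanism rather than the paper's exact nonlinear operations.

        import numpy as np

        def proximity_correct(labels, P, half_window=2):
            # Replace each label by the class with the largest summed
            # proximity to all labels inside the sliding window.
            out = labels.copy()
            for i in range(len(labels)):
                lo = max(0, i - half_window)
                window = labels[lo:i + half_window + 1]
                scores = [P[c, window].sum() for c in range(P.shape[0])]
                out[i] = int(np.argmax(scores))
            return out

        P = np.array([[1.0, 0.6, 0.1],      # hypothetical class proximities
                      [0.6, 1.0, 0.5],
                      [0.1, 0.5, 1.0]])
        noisy = np.array([0, 0, 2, 0, 0, 1, 1, 1])
        print(proximity_correct(noisy, P))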

  5. An overview of modal-based damage identification methods

    SciTech Connect

    Farrar, C.R.; Doebling, S.W.

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness); therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria, such as the level of damage detection provided, model-based vs. non-model-based methods, and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that are not dependent on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms, including difficulties associated with their implementation and their fidelity. Past, current, and planned future applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.

  6. Integrated navigation method based on inertial navigation system and Lidar

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoyue; Shi, Haitao; Pan, Jianye; Zhang, Chunxi

    2016-04-01

    An integrated navigation method based on the inertial navigation system (INS) and Lidar is proposed for land navigation. Compared with the traditional integrated navigation method and the dead reckoning (DR) method, the influence of the inertial measurement unit (IMU) scale factor and misalignment is considered in the new method. First, the influence of the IMU scale factor and misalignment on navigation accuracy is analyzed. Based on the analysis, the integrated system error model of INS and Lidar is established, in which the IMU scale factor and misalignment error states are included. Then the observability of the IMU error states is analyzed. According to the results of the observability analysis, the integrated system is optimized. Finally, numerical simulation and a vehicle test are carried out to validate the availability and utility of the proposed INS/Lidar integrated navigation method. Compared with the test results of the traditional integrated navigation method and the DR method, the proposed method achieves higher navigation precision. Consequently, the IMU scale factor and misalignment error are effectively compensated by the proposed method, and the new integrated navigation method is valid.

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.

  8. Consistency-based ellipse detection method for complicated images

    NASA Astrophysics Data System (ADS)

    Zhang, Lijun; Huang, Xuexiang; Feng, Weichun; Liang, Shuli; Hu, Tianjian

    2016-05-01

    Accurate ellipse detection in complicated images is a challenging problem due to corruption from image clutter, noise, or occlusion by other objects. To cope with this problem, an edge-following-based ellipse detection method is proposed which improves the performance of its subprocesses by enforcing consistency. The ellipse detector models edge connectivity by line segments and exploits inconsistent endpoints of the line segments to split the edge contours into smooth arcs. The smooth arcs are further refined with a novel arc refinement method which iteratively improves the consistency degree of each smooth arc. A two-phase arc integration method is developed to group disconnected elliptical arcs belonging to the same ellipse, and two consistency-based constraints are defined to increase the effectiveness and speed of the merging process. Finally, an efficient ellipse validation method is proposed to evaluate the saliency of the elliptic hypotheses. Detailed evaluation on synthetic images shows that our method outperforms other state-of-the-art ellipse detection methods in terms of effectiveness and speed. Additionally, we test our detector on three challenging real-world datasets. The F-measure scores and execution times of the results demonstrate that our method is effective and fast on complicated images. Therefore, the proposed method is suitable for practical applications.

  9. Unstructured road segmentation based on Otsu-entropy method

    NASA Astrophysics Data System (ADS)

    Shi, Chaoxia; Wang, Yanqing; Liu, Hanxiang; Yang, Jingyu

    2011-10-01

    Unstructured road segmentation plays an important role in vision-guided navigation for intelligent vehicles. A novel vision-based road segmentation method that combines the Otsu double-threshold method with the maximum-entropy double-threshold method is proposed to handle problems caused by illumination variations and road surface dilapidation. Spatial correlation, obtained by analyzing the grey-level histogram of the original image, and temporal correlation, obtained by matching the selected reference region, are used to estimate the coarse range of the road region. Road segmentation experiments executed in different road scenes demonstrate that the proposed method is robust against illumination variations and surface dilapidation.
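
    As background for the combination described above, a compact single-threshold Otsu implementation is sketched below; the paper's method extends this to double thresholds and fuses it with a maximum-entropy criterion, which is not reproduced here.

        import numpy as np

        def otsu_threshold(gray):
            # Classic Otsu: choose the threshold that maximizes the
            # between-class variance of the gray-level histogram.
            hist = np.bincount(gray.ravel(), minlength=256).astype(float)
            p = hist / hist.sum()
            omega = np.cumsum(p)                 # class-0 probability
            mu = np.cumsum(p * np.arange(256))   # cumulative mean
            with np.errstate(divide="ignore", invalid="ignore"):
                sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
            return int(np.nanargmax(sigma_b))

        img = np.random.default_rng(3).integers(0, 256, (64, 64), dtype=np.uint8)
        road_mask = img > otsu_threshold(img)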

  10. Multi-Point Combinatorial Optimization Method with Distance Based Interaction

    NASA Astrophysics Data System (ADS)

    Yasuda, Keiichiro; Jinnai, Hiroyuki; Ishigame, Atsushi

    This paper proposes a multi-point combinatorial optimization method based on the Proximate Optimality Principle (POP), which has several advantages for solving large-scale combinatorial optimization problems. The proposed algorithm uses not only the distance between search points but also the interaction among search points in order to exploit POP in several types of combinatorial optimization problems. The proposed algorithm is applied to several typical combinatorial optimization problems, namely a knapsack problem, a traveling salesman problem, and a flow shop scheduling problem, in order to verify its performance. The simulation results indicate that the proposed method achieves higher optimality than conventional combinatorial optimization methods.

  11. A threshold selection method based on edge preserving

    NASA Astrophysics Data System (ADS)

    Lou, Liantang; Dan, Wei; Chen, Jiaqi

    2015-12-01

    A method of automatic threshold selection for image segmentation is presented. An optimal threshold is selected so as to preserve the edges of the image in segmentation. The shortcoming of Otsu's method based on gray-level histograms is analyzed. The edge energy function of a bivariate continuous function is expressed as a line integral, while the edge energy function of an image is obtained by discretizing that integral. An optimal threshold method that maximizes the edge energy function is given. Several experimental results are also presented and compared with Otsu's method.
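
    One way to read the edge-preserving criterion is sketched below: for every candidate threshold, sum the gradient magnitude over the boundary pixels of the thresholded mask and keep the maximizer. The discretization is a plain illustration under assumed definitions, not the paper's exact line-integral formulation.

        import numpy as np

        def edge_energy_threshold(gray):
            gy, gx = np.gradient(gray.astype(float))
            mag = np.hypot(gx, gy)
            best_t, best_e = 0, -1.0
            for t in range(1, 255):
                mask = gray > t
                # Boundary pixels: mask value differs from a 4-neighbor.
                b = np.zeros_like(mask)
                b[:-1, :] |= mask[:-1, :] != mask[1:, :]
                b[:, :-1] |= mask[:, :-1] != mask[:, 1:]
                e = mag[b].sum()
                if e > best_e:
                    best_t, best_e = t, e
            return best_t

        img = np.random.default_rng(4).integers(0, 256, (32, 32), dtype=np.uint8)
        print(edge_energy_threshold(img))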

  12. System and method for deriving a process-based specification

    NASA Technical Reports Server (NTRS)

    Hinchey, Michael Gerard (Inventor); Rash, James Larry (Inventor); Rouff, Christopher A. (Inventor)

    2009-01-01

    A system and method for deriving a process-based specification for a system is disclosed. The process-based specification is mathematically inferred from a trace-based specification. The trace-based specification is derived from a non-empty set of traces or natural language scenarios. The process-based specification is mathematically equivalent to the trace-based specification. Code is generated, if applicable, from the process-based specification. A process, or phases of a process, using the features disclosed can be reversed and repeated to allow for an interactive development and modification of legacy systems. The process is applicable to any class of system, including, but not limited to, biological and physical systems, electrical and electro-mechanical systems in addition to software, hardware and hybrid hardware-software systems.

  13. a Minimum Spanning Tree Based Method for Uav Image Segmentation

    NASA Astrophysics Data System (ADS)

    Wang, Ping; Wei, Zheng; Cui, Weihong; Lin, Zhiyong

    2016-06-01

    This paper proposes a Minimum Spanning Tree (MST) based image segmentation method for UAV images of coastal areas. An edge-weight-based optimal criterion (merging predicate) is defined, based on statistical learning theory (SLT), and a scale parameter is used to control the segmentation scale. Experiments on high-resolution UAV images of coastal areas show that the proposed merging predicate keeps the integrity of objects and prevents over-segmentation. The segmentation results prove its efficiency in segmenting richly textured images while preserving good object boundaries.
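
    A minimal sketch in the spirit of MST-based merging (close to the classic Felzenszwalb-Huttenlocher scheme): sort the graph edges by weight and merge components whenever the weight passes a scale-controlled predicate. The k/|C| predicate below stands in for the paper's SLT-derived criterion, and the edge weights are simple intensity differences.

        import numpy as np

        def mst_segment(h, w, edges, k=0.5):
            # edges: (weight, u, v) over a 4-connected pixel grid.
            parent = list(range(h * w))
            size = [1] * (h * w)
            bound = [k] * (h * w)   # allowed internal difference per component

            def find(x):
                while parent[x] != x:
                    parent[x] = parent[parent[x]]
                    x = parent[x]
                return x

            for wgt, u, v in sorted(edges):
                ru, rv = find(u), find(v)
                if ru != rv and wgt <= min(bound[ru], bound[rv]):
                    parent[rv] = ru
                    size[ru] += size[rv]
                    bound[ru] = wgt + k / size[ru]   # k controls segment scale
            return [find(i) for i in range(h * w)]

        img = np.random.default_rng(5).random((8, 8))
        edges = []
        for y in range(8):
            for x in range(8):
                u = 8 * y + x
                if x < 7:
                    edges.append((abs(img[y, x] - img[y, x + 1]), u, u + 1))
                if y < 7:
                    edges.append((abs(img[y, x] - img[y + 1, x]), u, u + 8))
        labels = mst_segment(8, 8, edges)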

  14. The Reality-Based Learning Method: A Simple Method for Keeping Teaching Activities Relevant and Effective

    ERIC Educational Resources Information Center

    Smith, Louise W.; Van Doren, Doris C.

    2004-01-01

    Active and experiential learning theories have not dramatically changed collegiate classroom teaching methods, although they have long been included in the pedagogical literature. This article presents an evolved method, reality-based learning, that aids professors in including active learning activities with clarity and confidence. The…

  15. Camera self-calibration method based on two vanishing points

    NASA Astrophysics Data System (ADS)

    Duan, Shaoli; Zang, Huaping; Xu, Mengmeng; Zhang, Xiaofang; Gong, Qiaoxia; Tian, Yongzhi; Liang, Erjun; Liu, Xiaomin

    2015-10-01

    Camera calibration is one of the indispensable processes for obtaining 3D depth information from 2D images in the field of computer vision. Camera self-calibration is more convenient and flexible, especially in applications with large depths of field, wide fields of view, and scene conversion, as well as on other occasions such as zooming. In this paper, a self-calibration method based on two vanishing points is proposed; the geometric characteristics of vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. By using the orthogonality of the vectors connecting the optical center with the vanishing points, constraint equations on the camera intrinsic parameters are established. With this method, four internal parameters of the camera can be solved from only four images taken from different viewpoints in a scene. Compared with two other self-calibration methods, based on the absolute quadric and on a calibration plate, the method based on two vanishing points requires no calibration objects, no camera movement, and no information on the size and location of the parallel lines, needs no strict experimental equipment, and has a convenient calibration process and a simple algorithm. Comparison with experimental results from the calibration-plate method and from self-calibration using the machine vision software Halcon verifies the practicability and effectiveness of the proposed method.
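
    To make the constraint concrete: if v1 and v2 are the vanishing points (in homogeneous pixel coordinates) of two orthogonal direction groups, each image yields v1' ω v2 = 0 with ω = (K K')^-1. Assuming zero skew, ω has five homogeneous unknowns, so four images giving one constraint each determine the four intrinsics. The sketch below solves the linear system by SVD and recovers K by a Cholesky factorization; it is a generic reconstruction of this standard constraint, not the authors' code.

        import numpy as np

        def calibrate_from_vps(vp_pairs):
            # vp_pairs: (v1, v2) homogeneous vanishing points of orthogonal
            # directions; omega = [[w1,0,w2],[0,w3,w4],[w2,w4,w5]].
            rows = []
            for (a, b, c), (d, e, f) in vp_pairs:
                rows.append([a*d, a*f + c*d, b*e, b*f + c*e, c*f])
            _, _, Vt = np.linalg.svd(np.asarray(rows))
            w1, w2, w3, w4, w5 = Vt[-1]
            omega = np.array([[w1, 0, w2], [0, w3, w4], [w2, w4, w5]])
            if omega[0, 0] < 0:            # fix the arbitrary sign from SVD
                omega = -omega
            L = np.linalg.cholesky(omega)  # omega = K^-T K^-1
            K = np.linalg.inv(L).T
            return K / K[2, 2]

        # Synthetic check: vanishing points of rotated orthogonal directions.
        K_true = np.array([[800.0, 0.0, 320.0],
                           [0.0, 780.0, 240.0],
                           [0.0, 0.0, 1.0]])
        rng = np.random.default_rng(6)
        pairs = []
        for _ in range(4):
            Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
            pairs.append((K_true @ Q[:, 0], K_true @ Q[:, 1]))
        print(np.round(calibrate_from_vps(pairs), 1))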

  16. Irrigation scheduling: advantages and pitfalls of plant-based methods.

    PubMed

    Jones, Hamlyn G

    2004-11-01

    This paper reviews the various methods available for irrigation scheduling, contrasting traditional water-balance and soil moisture-based approaches with those based on sensing of the plant response to water deficits. The main plant-based methods for irrigation scheduling, including those based on direct or indirect measurement of plant water status and those based on plant physiological responses to drought, are outlined and evaluated. Specific plant-based methods include the use of dendrometry, fruit gauges, and other tissue water content sensors, while measurements of growth, sap flow, and stomatal conductance are also outlined. Recent advances, especially in the use of infrared thermometry and thermography for the study of stomatal conductance changes, are highlighted. The relative suitabilities of different approaches for specific crop and climatic situations are discussed, with the aim of indicating the strengths and weaknesses of different approaches, and highlighting their suitability over different spatial and temporal scales. The potential of soil- and plant-based systems for automated irrigation control using various scheduling techniques is also discussed. PMID:15286143

  17. Reentry trajectory optimization based on a multistage pseudospectral method.

    PubMed

    Zhao, Jiang; Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  18. Reentry Trajectory Optimization Based on a Multistage Pseudospectral Method

    PubMed Central

    Zhou, Rui; Jin, Xuelian

    2014-01-01

    Of the many direct numerical methods, the pseudospectral method serves as an effective tool to solve the reentry trajectory optimization for hypersonic vehicles. However, the traditional pseudospectral method is time-consuming due to large number of discretization points. For the purpose of autonomous and adaptive reentry guidance, the research herein presents a multistage trajectory control strategy based on the pseudospectral method, capable of dealing with the unexpected situations in reentry flight. The strategy typically includes two subproblems: the trajectory estimation and trajectory refining. In each processing stage, the proposed method generates a specified range of trajectory with the transition of the flight state. The full glide trajectory consists of several optimal trajectory sequences. The newly focused geographic constraints in actual flight are discussed thereafter. Numerical examples of free-space flight, target transition flight, and threat avoidance flight are used to show the feasible application of multistage pseudospectral method in reentry trajectory optimization. PMID:24574929

  19. Quaternion-based discriminant analysis method for color face recognition.

    PubMed

    Xu, Yong

    2012-01-01

    Pattern recognition techniques have been used to automatically recognize objects and personal identities, predict the function of a protein or the category of a cancer, identify lesions, perform product inspection, and so on. In this paper we propose a novel quaternion-based discriminant method. This method represents and classifies color images in a simple and mathematically tractable way. The proposed method is suitable for a large variety of real-world applications, such as color face recognition and classification of ground targets shown in multispectrum remote images. The method first uses quaternion numbers to denote the pixels of a color image and exploits a quaternion vector to represent the color image. It then uses the linear discriminant analysis algorithm to transform the quaternion vector into a lower-dimensional quaternion vector and classifies it in this space. The experimental results show that the proposed method can obtain a very high accuracy for color face recognition. PMID:22937054

  20. Two DL-based Methods for Auditing Medical Terminological Systems

    PubMed Central

    Cornet, Ronald; Abu-Hanna, Ameen

    2005-01-01

    Medical terminological systems (TSs) play an increasingly important role in health care by supporting recording, retrieval and analysis of patient information. As the size and complexity of TSs are growing, the need arises for means to audit them, i.e. verify and maintain (logical) consistency and (semantic) correctness of their contents. In this paper we describe two methods based on description logics (DLs) for the audit of TSs. One method uses non-primitive definitions to detect concepts with equivalent definitions. The other method is characterized by stringent assumptions that are made about concept definitions, in order to detect inconsistent definitions. We discuss the possibility of applying these methods to the Foundational Model of Anatomy (FMA) to demonstrate the potentials and pitfalls of these methods. We show that the methods are complementary, and can indeed improve the contents of medical TSs. PMID:16779023

  1. A differential augmentation method based on aerostat reference stations

    NASA Astrophysics Data System (ADS)

    Shi, Zhengfa; Gong, Yingkui; Chen, Xiao

    2016-01-01

    Ground-based regional augmentation systems are unable to cover regions such as oceans, mountains, and deserts; their signals are vulnerable to blocking by buildings; and their positioning precision for high-altitude objects is limited. To address these problems, a differential augmentation method based on troposphere error corrections using aerostat reference stations is proposed. This method uses the altitudes of the mobile station and the aerostat station to estimate troposphere delay errors, yielding the troposphere delay difference between mobile stations and aerostat reference stations. With the aid of the satellite navigation information of the mobile stations and the aerostat station, together with these troposphere delay differences, the positioning precision of mobile stations is enhanced by eliminating measurement errors (satellite clock error, ephemeris error, ionospheric delay error, tropospheric delay error) after differencing. Simulation tests show that the aerostat reference station differential augmentation method based on tropospheric error corrections improves the 3D positioning precision of mobile stations to within 2 m.
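
    A toy illustration under an assumed simple exponential zenith-delay model: the tropospheric delay difference between the mobile user and the aerostat station can be approximated from their altitudes, which is the quantity the method exploits. The sea-level delay and scale height below are rough textbook-style values, not the paper's model.

        import numpy as np

        def zenith_tropo_delay(h_m, d0=2.4, h_scale=8000.0):
            # Rough model: ~2.4 m zenith delay at sea level, decaying
            # exponentially with altitude (scale height ~8 km).
            return d0 * np.exp(-h_m / h_scale)

        def tropo_delay_difference(h_mobile, h_aerostat):
            # Correction term between mobile user and aerostat station.
            return zenith_tropo_delay(h_mobile) - zenith_tropo_delay(h_aerostat)

        print(tropo_delay_difference(500.0, 20000.0))  # metres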

  2. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    SciTech Connect

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  3. Adaptive reconnection-based arbitrary Lagrangian Eulerian method

    DOE PAGESBeta

    Bo, Wurigen; Shashkov, Mikhail

    2015-07-21

    We present a new adaptive Arbitrary Lagrangian Eulerian (ALE) method. This method is based on the reconnection-based ALE (ReALE) methodology of Refs. [35], [34] and [6]. The main elements in a standard ReALE method are: an explicit Lagrangian phase on an arbitrary polygonal (in 2D) mesh in which the solution and positions of grid nodes are updated; a rezoning phase in which a new grid is defined by changing the connectivity (using Voronoi tessellation) but not the number of cells; and a remapping phase in which the Lagrangian solution is transferred onto the new grid. Furthermore, in the standard ReALE method, the rezoned mesh is smoothed by using one or several steps toward centroidal Voronoi tessellation, but it is not adapted to the solution in any way.

  4. An XQDD-Based Verification Method for Quantum Circuits

    NASA Astrophysics Data System (ADS)

    Wang, Shiou-An; Lu, Chin-Yung; Tsai, I.-Ming; Kuo, Sy-Yen

    Synthesis of quantum circuits is essential for building quantum computers. It is important to verify that the circuits designed perform the correct functions. In this paper, we propose an algorithm which can be used to verify the quantum circuits synthesized by any method. The proposed algorithm is based on BDD (Binary Decision Diagram) and is called X-decomposition Quantum Decision Diagram (XQDD). In this method, quantum operations are modeled using a graphic method and the verification process is based on comparing these graphic diagrams. We also develop an algorithm to verify reversible circuits even if they have a different number of garbage qubits. In most cases, the number of nodes used in XQDD is less than that in other representations. In general, the proposed method is more efficient in terms of space and time and can be used to verify many quantum circuits in polynomial time.

  5. A Triangle Mesh Standardization Method Based on Particle Swarm Optimization

    PubMed Central

    Duan, Liming; Bai, Yang; Wang, Haoyu; Shao, Hui; Zhong, Siyang

    2016-01-01

    To enhance the triangle quality of a reconstructed triangle mesh, a novel triangle mesh standardization method based on particle swarm optimization (PSO) is proposed. First, each vertex of the mesh and its first-order vertices are fitted to a cubic curved surface by using the least squares method. Then, taking the locally fitted surface as the search region of PSO and the best average quality of the local triangles as the goal, the vertex positions of the mesh are adjusted. Finally, a threshold on the normal angle between the original vertex and the adjusted vertex is used to determine whether the vertex needs to be adjusted, so as to preserve the detailed features of the mesh. Compared with existing methods, experimental results show that the proposed method can effectively improve the triangle quality of the mesh while preserving the geometric features and details of the original mesh. PMID:27509129

  6. Image mosaic method based on SIFT features of line segment.

    PubMed

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on the SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between the two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. The results of experiments based on four pairs of images show that our method has strong robustness to resolution, lighting, rotation, and scaling. PMID:24511326
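
    For orientation, a generic OpenCV version of the match-then-RANSAC stage is sketched below with plain keypoints; the paper's contribution, describing directed line segments with SIFT features, is not reproduced here.

        import cv2
        import numpy as np

        def mosaic_homography(img1, img2):
            # Detect and describe keypoints (grayscale inputs assumed).
            sift = cv2.SIFT_create()
            k1, d1 = sift.detectAndCompute(img1, None)
            k2, d2 = sift.detectAndCompute(img2, None)
            # Ratio-test matching, then RANSAC rejects wrong pairs.
            matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
            good = [m for m, n in matches if m.distance < 0.75 * n.distance]
            src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
            return H   # warp img1 with H to stitch the mosaic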

  7. Altazimuth mount based dynamic calibration method for GNSS attitude measurement

    NASA Astrophysics Data System (ADS)

    Jiang, Nan; He, Tao; Sun, Shaohua; Gu, Qing

    2015-02-01

    As the key process for ensuring test accuracy and quality, the dynamic calibration of GNSS attitude measuring instruments is often hampered by the lack of a sufficiently rigid test platform and a sufficiently accurate calibration reference. To solve these problems, a novel dynamic calibration method for GNSS attitude measurement based on an altazimuth mount is put forward in this paper. The principle and implementation of this method are presented, and the feasibility and usability of the method are analyzed in detail, covering the applicability of the mount, the calibration precision, the calibration range, the baseline rigidity, and factors involving the satellite signal. Furthermore, to verify and test the method, a confirmatory experiment is carried out with a survey ship's GPS attitude measuring instrument, and the experimental results prove that it is a feasible approach to the dynamic calibration of GNSS attitude measurement.

  8. Method of removing and detoxifying a phosphorus-based substance

    DOEpatents

    Vandegrift, George F.; Steindler, Martin J.

    1989-01-01

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substance is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be phosphoryl or a thiophosphoryl.

  9. Method of removing and detoxifying a phosphorus-based substance

    SciTech Connect

    Vandegrift, G.F.; Steindler, M.J.

    1989-07-25

    A method of removing organic phosphorus-based poisonous substances from water contaminated therewith and of subsequently destroying the toxicity of the substances is disclosed. Initially, a water-immiscible organic is immobilized on a supported liquid membrane. Thereafter, the contaminated water is contacted with one side of the supported liquid membrane to selectively dissolve the phosphorus-based substance in the organic extractant. At the same time, the other side of the supported liquid membrane is contacted with a hydroxy-affording strong base to react the phosphorus-based substance dissolved by the organic extractant with a hydroxy ion. This forms a non-toxic reaction product in the base. The organic extractant can be a water-insoluble trialkyl amine, such as trilauryl amine. The phosphorus-based substance can be phosphoryl or a thiophosphoryl.

  10. Local coding based matching kernel method for image classification.

    PubMed

    Song, Yan; McLoughlin, Ian Vince; Dai, Li-Rong

    2014-01-01

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method. PMID:25119982

  11. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2012-01-01

    The design knowledge of modern mechatronics products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role and management features of mechatronics product design knowledge and information, a unified XML-based product information processing model is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based expressions of product function elements, product structure elements, and the mapping relationships between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.

  12. XML-based product information processing method for product design

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen Yu

    2011-12-01

    The design knowledge of modern mechatronics products centers on information processing in knowledge-intensive engineering; product design innovation is therefore essentially innovation in knowledge and information processing. After analyzing the role and management features of mechatronics product design knowledge and information, a unified XML-based product information processing model is proposed. The information processing model of product design includes functional knowledge, structural knowledge, and their relationships. XML-based expressions of product function elements, product structure elements, and the mapping relationships between function and structure are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is helpful for knowledge-based design systems and product innovation.
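
    To make the representation concrete, a hypothetical fragment in the spirit of the model is parsed below: function elements, structure elements, and the function-to-structure mapping. The element and attribute names are invented for illustration.

        import xml.etree.ElementTree as ET

        doc = """
        <product name="parallel_friction_roller">
          <functions>
            <function id="F1" verb="transmit" object="torque"/>
          </functions>
          <structures>
            <structure id="S1" part="roller" material="steel"/>
          </structures>
          <mappings>
            <map function="F1" structure="S1"/>
          </mappings>
        </product>
        """

        root = ET.fromstring(doc)
        for m in root.find("mappings"):
            print(m.get("function"), "->", m.get("structure"))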

  13. LINEAR SCANNING METHOD BASED ON THE SAFT COARRAY

    SciTech Connect

    Martin, C. J.; Martinez-Graullera, O.; Romero, D.; Ullate, L. G.; Higuti, R. T.

    2010-02-22

    This work presents a method to obtain B-scan images based on linear array scanning and 2R-SAFT. This technique offers several advantages: the ultrasonic system is very simple; it avoids the grating lobe formation characteristic of conventional SAFT; and the subaperture size and focusing lens (to compensate emission-reception) can be adapted dynamically to every image point. The proposed method has been experimentally tested in the inspection of CFRP samples.

  14. Multispectral face liveness detection method based on gradient features

    NASA Astrophysics Data System (ADS)

    Hou, Ya-Li; Hao, Xiaoli; Wang, Yueyang; Guo, Changqing

    2013-11-01

    Face liveness detection aims to distinguish genuine faces from disguised faces. Most previous works under visible light focus on classification of genuine faces and planar photos or videos. To handle the three-dimensional (3-D) disguised faces, liveness detection based on multispectral images has been shown to be an effective choice. In this paper, a gradient-based multispectral method has been proposed for face liveness detection. Three feature vectors are developed to reduce the influence of varying illuminations. The reflectance-based feature achieves the best performance, which has a true positive rate of 98.3% and a true negative rate of 98.7%. The developed methods are also tested on individual bands to provide a clue for band selection in the imaging system. Preliminary results on different face orientations are also shown. The contributions of this paper are threefold. First, a gradient-based multispectral method has been proposed for liveness detection, which considers the reflectance properties of all the distinctive regions in a face. Second, three illumination-robust features are studied based on a dataset with two-dimensional planar photos, 3-D mannequins, and masks. Finally, the performance of the method on different spectral bands and face orientations is also shown in the evaluations.

  15. Innovating Method of Existing Mechanical Product Based on TRIZ Theory

    NASA Astrophysics Data System (ADS)

    Zhao, Cunyou; Shi, Dongyan; Wu, Han

    The main approaches to product development are adaptive design and variant design based on an existing product. In this paper, a conceptual design framework and its flow model for product innovation are put forward by combining conceptual design methods with TRIZ theory. A process system model of innovative design is constructed, which includes requirement analysis, total function analysis and decomposition, engineering problem analysis, engineering problem solving, and preliminary design; this establishes the basis for the innovative design of existing products.

  16. An analysis method for evaluating gradient-index fibers based on Monte Carlo method

    NASA Astrophysics Data System (ADS)

    Yoshida, S.; Horiuchi, S.; Ushiyama, Z.; Yamamoto, M.

    2011-05-01

    We propose a numerical analysis method for evaluating gradient-index (GRIN) optical fiber using the Monte Carlo method. GRIN optical fibers are widely used in optical information processing and communication applications, such as image scanners, fax machines, and optical sensors. An important factor that determines the performance of a GRIN optical fiber is the modulation transfer function (MTF). The MTF of a fiber is affected by manufacturing process conditions such as temperature. MTF values calculated for a GRIN optical fiber by the proposed method closely match actual measurements. Experimentally, the MTF is measured using a square-wave chart and is then calculated from the distribution of output intensity on the chart. In contrast, the conventional computational method evaluates the MTF from a spot diagram produced by an incident point light source, and its results differ greatly from experiment. In this paper, we explain the manufacturing process factors that affect the performance of GRIN optical fibers and present a new evaluation method, similar to the experimental system, based on the Monte Carlo method. We verified that it matches the experimental results more closely than the conventional method.

  17. Improving merge methods for grid-based digital elevation models

    NASA Astrophysics Data System (ADS)

    Leitão, J. P.; Prodanović, D.; Maksimović, Č.

    2016-03-01

    Digital Elevation Models (DEMs) are used to represent the terrain in applications such as, for example, overland flow modelling or viewshed analysis. DEMs generated from digitising contour lines or obtained by LiDAR or satellite data are now widely available. However, in some cases, the area of study is covered by more than one of the available elevation data sets. In these cases the relevant DEMs may need to be merged. The merged DEM must retain the most accurate elevation information available while generating consistent slopes and aspects. In this paper we present a thorough analysis of three conventional grid-based DEM merging methods that are available in commercial GIS software. These methods are evaluated for their applicability in merging DEMs and, based on the evaluation results, a method for improving the merging of grid-based DEMs is proposed. DEMs generated by the proposed method, called MBlend, showed significant improvements when compared to DEMs produced by the three conventional methods in terms of elevation, slope and aspect accuracy, ensuring also smooth elevation transitions between the original DEMs. The results produced by the improved method are highly relevant to different applications in terrain analysis, e.g., visibility analysis, spotting irregularities in landforms, and modelling terrain phenomena such as overland flow.
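
    As a baseline for what merge methods must handle, the sketch below blends two overlapping DEM rasters with distance-based weights inside the overlap, suppressing the elevation step at the seam. It is a generic linear blend under an assumed NaN no-data coding, far simpler than the MBlend method proposed in the paper.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def blend_dems(dem_a, dem_b):
            # NaN marks missing cells; a weight grows with distance into
            # each DEM's own valid region, smoothing the seam transition.
            va, vb = ~np.isnan(dem_a), ~np.isnan(dem_b)
            wa, wb = distance_transform_edt(va), distance_transform_edt(vb)
            out = np.where(va, dem_a, dem_b)      # cells with a single source
            both = va & vb
            w = wa[both] / (wa[both] + wb[both])
            out[both] = w * dem_a[both] + (1.0 - w) * dem_b[both]
            return out

        a = np.full((10, 10), 100.0); a[:, 6:] = np.nan   # left DEM
        b = np.full((10, 10), 103.0); b[:, :4] = np.nan   # right DEM, offset
        merged = blend_dems(a, b)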

  18. A Star Pattern Recognition Method Based on Decreasing Redundancy Matching

    NASA Astrophysics Data System (ADS)

    Yao, Lu; Xiao-xiang, Zhang; Rong-yu, Sun

    2016-04-01

    During optical observation of space objects, it is difficult to match the background stars when the telescope pointing and tracking errors are significant. Based on the idea of decreasing redundancy matching, an effective recognition method for background stars is proposed in this paper. Simulated images under different conditions, as well as observed images, are used to verify the proposed method. The experimental results show that the proposed method raises the recognition rate and reduces time consumption; it can be used to match star patterns accurately and rapidly.

  19. An AIS-Based E-mail Classification Method

    NASA Astrophysics Data System (ADS)

    Qing, Jinjian; Mao, Ruilong; Bie, Rongfang; Gao, Xiao-Zhi

    This paper proposes a new e-mail classification method based on the Artificial Immune System (AIS), which is endowed with good diversity and self-adaptive ability through immune learning, immune memory, and immune recognition. In our method, the features of spam and non-spam extracted from the training sets are combined, and the number of false positives (non-spam messages that are incorrectly classified as spam) can be reduced. The experimental results demonstrate that this method is effective in reducing the false positive rate.

  20. An online credit evaluation method based on AHP and SPA

    NASA Astrophysics Data System (ADS)

    Xu, Yingtao; Zhang, Ying

    2009-07-01

    Online credit evaluation is the foundation for establishing trust and managing risk between buyers and sellers in e-commerce. In this paper, a new credit evaluation method based on the analytic hierarchy process (AHP) and set pair analysis (SPA) is presented to determine the credibility of electronic commerce participants. It resolves some of the drawbacks found in classical credit evaluation methods and broadens the scope of current approaches. Both qualitative and quantitative indicators are considered in the proposed method, and an overall credit score is then obtained from the optimal perspective. Finally, a case analysis of China Garment Network is provided for illustrative purposes.
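
    The AHP half of such a scheme is easy to sketch: derive indicator weights from a pairwise-comparison matrix via its principal eigenvector and report a consistency index. The comparison values below are invented for illustration, and the SPA step is not reproduced.

        import numpy as np

        def ahp_weights(A):
            # Principal eigenvector of the reciprocal comparison matrix,
            # normalized to sum to one, plus the consistency index.
            vals, vecs = np.linalg.eig(A)
            k = int(np.argmax(vals.real))
            w = np.abs(vecs[:, k].real)
            w /= w.sum()
            ci = (vals[k].real - A.shape[0]) / (A.shape[0] - 1)
            return w, ci

        # Hypothetical comparisons of three credit indicators.
        A = np.array([[1.0, 3.0, 5.0],
                      [1.0 / 3.0, 1.0, 2.0],
                      [1.0 / 5.0, 1.0 / 2.0, 1.0]])
        w, ci = ahp_weights(A)
        print(w, ci)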

  1. Sonoclot(®)-based method to detect iron enhanced coagulation.

    PubMed

    Nielsen, Vance G; Henderson, Jon

    2016-07-01

    Thrombelastographic methods have been recently introduced to detect iron mediated hypercoagulability in settings such as sickle cell disease, hemodialysis, mechanical circulatory support, and neuroinflammation. However, these inflammatory situations may have heme oxygenase-derived, coexistent carbon monoxide present, which also enhances coagulation as assessed by the same thrombelastographic variables that are affected by iron. This brief report presents a novel, Sonoclot-based method to detect iron enhanced coagulation that is independent of carbon monoxide influence. Future investigation will be required to assess the sensitivity of this new method to detect iron mediated hypercoagulability in clinical settings compared to results obtained with thrombelastographic techniques. PMID:26497986

  2. A new image fusion method based on curvelet transform

    NASA Astrophysics Data System (ADS)

    Chu, Binbin; Yang, Xiushun; Qi, Dening; Li, Congli; Lu, Wei

    2010-02-01

    A new image fusion method based on Multiscale Geometric Analysis (MGA), using improved fusion rules, is put forward in this paper. First, the input low-light-level image and infrared image are decomposed by the Curvelet transform, which is realized by Unequally-Spaced Fast Fourier Transforms. Second, the decomposed coefficients at different scales and in different directions are fused by the corresponding fusion rules. Finally, the fused image is acquired by recomposing the fused coefficients. The simulation results show that this method performs better than the conventional wavelet method in both subjective visual quality and objective evaluation indices.
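
    The decompose-fuse-reconstruct pattern can be shown compactly with a wavelet transform standing in for the Curvelet transform (a USFFT curvelet implementation is much longer); PyWavelets is assumed. The rules below are the common ones: average the approximation coefficients, keep the larger-magnitude detail coefficients.

        import numpy as np
        import pywt

        def fuse_images(img_a, img_b, wavelet="db2", level=2):
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            fused = [(ca[0] + cb[0]) / 2.0]          # average approximation
            for da, db in zip(ca[1:], cb[1:]):       # detail subbands
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(7)
        fused = fuse_images(rng.random((128, 128)), rng.random((128, 128)))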

  3. Network motif-based method for identifying coronary artery disease

    PubMed Central

    LI, YIN; CONG, YAN; ZHAO, YUN

    2016-01-01

    The present study aimed to develop a more efficient method for identifying coronary artery disease (CAD) than the conventional method using individual differentially expressed genes (DEGs). GSE42148 gene microarray data were downloaded, preprocessed and screened for DEGs. Additionally, based on transcriptional regulation data obtained from the ENCODE database and protein-protein interaction data from the HPRD, common genes were downloaded and compared with the genes annotated from the gene microarrays to screen additional common genes, in order to construct an integrated regulation network. FANMOD was then used to detect significant three-gene network motifs. Subsequently, GlobalAncova was used to screen differential three-gene network motifs between the CAD group and the normal control data from GSE42148. Genes involved in the differential network motifs were then subjected to functional annotation and pathway enrichment analysis. Finally, clustering analysis of the CAD and control samples was performed based on individual DEGs and the top 20 network motifs identified. In total, 9,008 significant three-node network motifs were detected from the integrated regulation network; these were categorized into 22 interaction modes, each containing a minimum of one transcription factor. Subsequently, 1,132 differential network motifs involving 697 genes were screened between the CAD and control groups. The 697 genes were enriched in 154 gene ontology terms, including 119 biological processes, and 14 KEGG pathways. Identifying patients with CAD based on the top 20 network motifs provided increased accuracy compared with the conventional method based on individual DEGs. The results of the present study indicate that the network motif-based method is more efficient and accurate for identifying CAD patients than the conventional method based on individual DEGs. PMID:27347046

  4. A Localization Method for Multistatic SAR Based on Convex Optimization

    PubMed Central

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function’s maximum is on the circumference of the ellipse which is the iso-range for its model function’s T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectivity of the localization approach is validated by simulation experiment. PMID:26566031

  5. A Localization Method for Multistatic SAR Based on Convex Optimization.

    PubMed

    Zhong, Xuqi; Wu, Junjie; Yang, Jianyu; Sun, Zhichao; Huang, Yuling; Li, Zhongyu

    2015-01-01

    In traditional localization methods for Synthetic Aperture Radar (SAR), the bistatic range sum (BRS) estimation and Doppler centroid estimation (DCE) are needed for the calculation of target localization. However, the DCE error greatly influences the localization accuracy. In this paper, a localization method for multistatic SAR based on convex optimization without DCE is investigated and the influence of BRS estimation error on localization accuracy is analysed. Firstly, by using the information of each transmitter and receiver (T/R) pair and the target in SAR image, the model functions of T/R pairs are constructed. Each model function's maximum is on the circumference of the ellipse which is the iso-range for its model function's T/R pair. Secondly, the target function whose maximum is located at the position of the target is obtained by adding all model functions. Thirdly, the target function is optimized based on gradient descent method to obtain the position of the target. During the iteration process, principal component analysis is implemented to guarantee the accuracy of the method and improve the computational efficiency. The proposed method only utilizes BRSs of a target in several focused images from multistatic SAR. Therefore, compared with traditional localization methods for SAR, the proposed method greatly improves the localization accuracy. The effectivity of the localization approach is validated by simulation experiment. PMID:26566031
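
    A stripped-down version of the optimization step, under simplifying assumptions: given bistatic range sums brs_i for known transmitter/receiver pairs, minimize the sum of squared residuals by gradient descent. The paper's model-function construction and PCA step are omitted.

        import numpy as np

        def locate(tx, rx, brs, x0, step=0.01, iters=5000):
            # Minimize sum_i (|x - t_i| + |x - r_i| - brs_i)^2.
            x = np.array(x0, dtype=float)
            for _ in range(iters):
                g = np.zeros_like(x)
                for t, r, s in zip(tx, rx, brs):
                    dt, dr = x - t, x - r
                    nt, nr = np.linalg.norm(dt), np.linalg.norm(dr)
                    g += 2.0 * (nt + nr - s) * (dt / nt + dr / nr)
                x -= step * g
            return x

        target = np.array([3.0, -2.0])
        tx = [np.array([0.0, 0.0]), np.array([10.0, 0.0]), np.array([0.0, 10.0])]
        rx = [np.array([5.0, 5.0])] * 3
        brs = [np.linalg.norm(target - t) + np.linalg.norm(target - r)
               for t, r in zip(tx, rx)]
        print(locate(tx, rx, brs, x0=[1.0, 1.0]))   # should approach (3, -2)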

  6. Level set method for image segmentation based on moment competition

    NASA Astrophysics Data System (ADS)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  7. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in the presence of noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.
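
    A minimal sketch of matrix pencil pole estimation on a toy two-pole signal (the pencil parameter and the pole-selection rule here are illustrative assumptions, not the paper's exact procedure):

```python
# Toy matrix pencil sketch: build two shifted Hankel matrices from the
# samples and take eigenvalues of pinv(Y1) @ Y2 as the signal poles.
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(64)
true_poles = np.exp(1j * 2 * np.pi * np.array([0.11, 0.23]))
y = (0.9 * true_poles[0] ** n + 0.7 * true_poles[1] ** n
     + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64)))

L = 20                                     # pencil parameter (~N/3)
Y = np.array([y[i:i + L + 1] for i in range(len(y) - L)])  # Hankel matrix
Y1, Y2 = Y[:, :-1], Y[:, 1:]
eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)

# Keep the two eigenvalues closest to the unit circle as pole estimates.
poles = sorted(eigvals, key=lambda z: abs(abs(z) - 1))[:2]
print(np.angle(poles) / (2 * np.pi))       # approx 0.11 and 0.23 (any order)
```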

  8. Study on UPF Harmonic Current Detection Method Based on DSP

    NASA Astrophysics Data System (ADS)

    Zhao, H. J.; Pang, Y. F.; Qiu, Z. M.; Chen, M.

    2006-10-01

    A unity power factor (UPF) harmonic current detection method applied to active power filters (APF) is presented in this paper. The intention of this method is to make the nonlinear load and the active power filter connected in parallel behave as an equivalent resistance. After compensation, the source current is sinusoidal and has the same shape as the source voltage; there are no harmonics in the source current, and the power factor becomes one. The mathematical model of the proposed method and the optimal design of the equivalent low-pass filter used in the measurement are presented. Finally, the proposed detection method is implemented in a shunt active power filter experimental prototype based on the DSP TMS320F2812. Simulation and experimental results indicate that the method is simple and easy to implement, and can compute the harmonic current accurately in real time.
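
    The equivalent-resistance idea can be sketched in a few lines (toy waveforms assumed): the desired source current is the voltage scaled by G = P_avg / Vrms^2, and the remainder of the load current is the harmonic content for the APF to inject:

```python
# Hedged sketch of the UPF reference-current idea with assumed waveforms.
import numpy as np

t = np.linspace(0, 0.02, 2000, endpoint=False)          # one 50 Hz cycle
v = 311 * np.sin(2 * np.pi * 50 * t)                    # source voltage
i_load = (10 * np.sin(2 * np.pi * 50 * t - 0.3)
          + 3 * np.sin(2 * np.pi * 150 * t))            # distorted load current

p_avg = np.mean(v * i_load)                             # active power
g_eq = p_avg / np.mean(v ** 2)                          # equivalent conductance
i_ref = g_eq * v                                        # sinusoidal, UPF
i_harm = i_load - i_ref                                 # APF compensation target
print(g_eq, np.max(np.abs(i_harm)))
```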

  9. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    PubMed Central

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in the presence of noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194

  10. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method.

    PubMed

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in the presence of noise, because the proper poles are difficult to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherent parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm that aims to minimize the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR. PMID:26781194

  11. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, J.H.; Keller, R.A.; Martin, J.C.; Posner, R.G.; Marrone, B.L.; Hammond, M.L.; Simpson, D.J.

    1995-04-11

    A method is described for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand. 4 figures.

  12. Method for rapid base sequencing in DNA and RNA with two base labeling

    DOEpatents

    Jett, James H.; Keller, Richard A.; Martin, John C.; Posner, Richard G.; Marrone, Babetta L.; Hammond, Mark L.; Simpson, Daniel J.

    1995-01-01

    Method for rapid-base sequencing in DNA and RNA with two-base labeling and employing fluorescent detection of single molecules at two wavelengths. Bases modified to accept fluorescent labels are used to replicate a single DNA or RNA strand to be sequenced. The bases are then sequentially cleaved from the replicated strand, excited with a chosen spectrum of electromagnetic radiation, and the fluorescence from individual, tagged bases detected in the order of cleavage from the strand.

  13. Acoustic radiation force-based elasticity imaging methods

    PubMed Central

    Palmeri, Mark L.; Nightingale, Kathryn R.

    2011-01-01

    Conventional diagnostic ultrasound images portray differences in the acoustic properties of soft tissues, whereas ultrasound-based elasticity images portray differences in the elastic properties of soft tissues (i.e. stiffness, viscosity). The benefit of elasticity imaging lies in the fact that many soft tissues can share similar ultrasonic echogenicities but may have different mechanical properties that can be used to clearly visualize normal anatomy and delineate pathological lesions. Acoustic radiation force-based elasticity imaging methods use acoustic radiation force to transiently deform soft tissues, and the dynamic displacement response of those tissues is measured ultrasonically and used to estimate the tissue's mechanical properties. Both qualitative images and quantitative elasticity metrics can be reconstructed from these measured data, providing complementary information to both diagnose and longitudinally monitor disease progression. Recently, acoustic radiation force-based elasticity imaging techniques have moved from the laboratory to the clinical setting, where clinicians are beginning to characterize tissue stiffness as a diagnostic metric, and commercial implementations of radiation force-based ultrasonic elasticity imaging are beginning to appear on the market. This article provides an overview of acoustic radiation force-based elasticity imaging, including a review of the relevant soft tissue material properties, a review of radiation force-based methods that have been proposed for elasticity imaging, and a discussion of current research and commercial realizations of radiation force-based elasticity imaging technologies. PMID:22419986

  14. Metaphoric Investigation of the Phonic-Based Sentence Method

    ERIC Educational Resources Information Center

    Dogan, Birsen

    2012-01-01

    This study aimed to understand the views of prospective teachers on the "phonic-based sentence method" through metaphoric images. In this descriptive study, the participants were prospective teachers taking reading-writing instruction courses in the Primary School Classroom Teaching Program of the Education Faculty of Pamukkale University. The…

  15. Preparing Students for Flipped or Team-Based Learning Methods

    ERIC Educational Resources Information Center

    Balan, Peter; Clark, Michele; Restall, Gregory

    2015-01-01

    Purpose: Teaching methods such as Flipped Learning and Team-Based Learning require students to pre-learn course materials before a teaching session, because classroom exercises rely on students using self-gained knowledge. This is the reverse of "traditional" teaching, in which course materials are presented during a lecture, and students are…

  16. Bead Collage: An Arts-Based Research Method

    ERIC Educational Resources Information Center

    Kay, Lisa

    2013-01-01

    In this paper, "bead collage," an arts-based research method that invites participants to reflect, communicate and construct their experience through the manipulation of beads and found objects is explained. Emphasizing the significance of one's personal biography and experiences as a researcher, I discuss how my background as an…

  17. Effective Teaching Methods--Project-based Learning in Physics

    ERIC Educational Resources Information Center

    Holubova, Renata

    2008-01-01

    The paper presents results of research into new effective teaching methods in physics and science. It was found that pre-service teachers need to be educated in approaches stressing the importance of students' own activity, and in competences for creating an interdisciplinary project. Project-based physics teaching and learning…

  18. Explorations in Using Arts-Based Self-Study Methods

    ERIC Educational Resources Information Center

    Samaras, Anastasia P.

    2010-01-01

    Research methods courses typically require students to conceptualize, describe, and present their research ideas in writing. In this article, the author describes her exploration in using arts-based techniques for teaching research to support the development of students' self-study research projects. The pedagogical approach emerged from the…

  19. Bioanalytical method transfer considerations of chromatographic-based assays.

    PubMed

    Williard, Clark V

    2016-07-01

    Bioanalysis is an important part of the modern drug development process. The business practice of outsourcing and transferring bioanalytical methods from laboratory to laboratory has increasingly become a crucial strategy for successful and efficient delivery of therapies to the market. This chapter discusses important considerations when transferring various types of chromatographic-based assays in today's pharmaceutical research and development environment. PMID:27277876

  20. A Natural Teaching Method Based on Learning Theory.

    ERIC Educational Resources Information Center

    Smilkstein, Rita

    1991-01-01

    The natural teaching method is active and student-centered, based on schema and constructivist theories, and informed by research in neuroplasticity. A schema is a mental picture or understanding of something we have learned. Humans can have knowledge only to the degree to which they have constructed schemas from learning experiences and practice.…

  1. pyro: Python-based tutorial for computational methods for hydrodynamics

    NASA Astrophysics Data System (ADS)

    Zingale, Michael

    2015-07-01

    pyro is a simple python-based tutorial on computational methods for hydrodynamics. It includes 2-d solvers for advection, compressible, incompressible, and low Mach number hydrodynamics, diffusion, and multigrid. It is written with ease of understanding in mind. An extensive set of notes that is part of the Open Astrophysics Bookshelf project provides details of the algorithms.
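
    For flavor, here is a first-order upwind step for 2-d linear advection, the simplest of the solver families the tutorial covers (this is a generic sketch, not pyro's actual API):

```python
# Generic sketch, not pyro's interface: one upwind update of
# dq/dt + u dq/dx + v dq/dy = 0 on a periodic grid (u, v > 0 assumed).
import numpy as np

def upwind_step(q, u, v, dx, dy, dt):
    qx = (q - np.roll(q, 1, axis=0)) / dx   # backward difference in x
    qy = (q - np.roll(q, 1, axis=1)) / dy   # backward difference in y
    return q - dt * (u * qx + v * qy)

n = 64
dx = 1.0 / n
x = (np.arange(n) + 0.5) * dx
q = np.exp(-100 * ((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2))
for _ in range(100):
    q = upwind_step(q, u=1.0, v=1.0, dx=dx, dy=dx, dt=0.4 * dx)  # CFL-safe
print(q.max())   # peak decays from 1.0 due to numerical diffusion
```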

  2. NIM: A Node Influence Based Method for Cancer Classification

    PubMed Central

    Wang, Yiwen; Yang, Jianhua

    2014-01-01

    The classification of different cancer types is of great significance in the medical field. However, the great majority of existing cancer classification methods are clinical-based and have relatively weak diagnostic ability. With the rapid development of gene expression technology, it has become possible to classify different kinds of cancers using DNA microarrays. Our main idea is to approach the problem of cancer classification using gene expression data from a graph-based view. Based on a new node influence model we propose, this paper presents a novel high-accuracy method for cancer classification, which is composed of four parts: the first is to calculate the similarity matrix of all samples, the second is to compute the node influence of training samples, the third is to obtain the similarity between every test sample and each class using a weighted sum of node influence and the similarity matrix, and the last is to classify each test sample based on its similarity to every class. The data sets used in our experiments are breast cancer, central nervous system, colon tumor, prostate cancer, acute lymphoblastic leukemia, and lung cancer. Experimental results showed that our node influence based method (NIM) is more efficient and robust than the support vector machine, K-nearest neighbor, C4.5, naive Bayes, and CART. PMID:25180045
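
    A hedged sketch of the node-influence idea on toy data (the cosine similarity and the exact weighting are assumptions here, not necessarily NIM's definitions):

```python
# Toy sketch: influence of each training sample = total similarity to the
# other training samples; a test sample's class score = similarity to each
# training sample weighted by that sample's influence.
import numpy as np

rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 50)), rng.normal(2, 1, (20, 50))])
y_train = np.array([0] * 20 + [1] * 20)
x_test = rng.normal(2, 1, 50)                       # should score as class 1

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

S = np.array([[cosine(a, b) for b in X_train] for a in X_train])
influence = S.sum(axis=1)                           # node influence per sample

scores = {}
for c in (0, 1):
    idx = y_train == c
    sims = np.array([cosine(x_test, x) for x in X_train[idx]])
    scores[c] = np.sum(influence[idx] * sims)       # influence-weighted sum
print(max(scores, key=scores.get))                  # predicted class
```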

  3. Transformer winding defects identification based on a high frequency method

    NASA Astrophysics Data System (ADS)

    Florkowski, Marek; Furgał, Jakub

    2007-09-01

    Transformer diagnostic methods are systematically being improved and extended owing to growing requirements for the reliability of power systems in terms of uninterrupted power supply and the avoidance of blackouts. These methods are also driven by the longer lifetimes of transformers and the demand for reduced transmission and distribution costs. Hence, the detection of winding faults in transformers, whether in service or during transportation, is an important aspect of power transformer failure prevention. The frequency response analysis (FRA) method, used more and more frequently in electric power engineering, has been applied for investigations and signature analysis based on the admittance and transfer function. The paper presents a novel approach to the identification of typical transformer winding problems such as axial or radial movements or turn-to-turn faults. The proposed transfer function discrimination (TFD) criteria are based on derived transfer function ratios, which exhibit higher sensitivity.

  4. Matrix-based image reconstruction methods for tomography

    SciTech Connect

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel-processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures.
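
    The inversion-free maximum likelihood reconstruction referred to above is commonly realized as the multiplicative MLEM update x <- x * A^T(y / (A x)) / (A^T 1); a toy sketch with an assumed random system matrix:

```python
# Minimal MLEM sketch for a tiny, made-up system matrix (toy sizes only).
import numpy as np

rng = np.random.default_rng(1)
A = rng.uniform(0, 1, (40, 16))        # system matrix: detector x pixel
x_true = rng.uniform(0.5, 2.0, 16)     # unknown image
y = rng.poisson(A @ x_true * 50) / 50  # noisy projection data

x = np.ones(16)                        # flat initial image
sens = A.T @ np.ones(40)               # sensitivity term (A^T 1)
for _ in range(200):
    x *= (A.T @ (y / (A @ x))) / sens  # MLEM multiplicative update
print(np.round(x - x_true, 2))         # residual error per pixel
```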

  5. Method of plasma etching Ga-based compound semiconductors

    SciTech Connect

    Qiu, Weibin; Goddard, Lynford L.

    2012-12-25

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent to the process chamber. The process chamber contains a sample comprising a Ga-based compound semiconductor. The sample is in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. The method includes flowing SiCl4 gas into the chamber, flowing Ar gas into the chamber, and flowing H2 gas into the chamber. RF power is supplied independently to the source electrode and the platen. A plasma is generated based on the gases in the process chamber, and regions of a surface of the sample adjacent to one or more masked portions of the surface are etched to create a substantially smooth etched surface including features having substantially vertical walls beneath the masked portions.

  6. Screw thread parameter measurement system based on image processing method

    NASA Astrophysics Data System (ADS)

    Rao, Zhimin; Huang, Kanggao; Mao, Jiandong; Zhang, Yaya; Zhang, Fan

    2013-08-01

    In industrial production, the screw thread, as an important transmission part, is applied extensively in much automation equipment. Traditional measurement methods for screw thread parameters, including integrated multi-parameter test methods and single-parameter measurement methods, are contact measurement methods. In practice, contact measurement has some disadvantages, such as relatively high time cost, easy introduction of human error, and thread damage. In this paper, as a new kind of real-time, non-contact measurement method, a screw thread parameter measurement system based on image processing is developed to accurately measure the outside diameter, inside diameter, pitch diameter, pitch, thread height and other parameters of a screw thread. In the system, an industrial camera is employed to acquire the image of the screw thread, image processing methods are used to obtain the image profile of the thread, and a mathematical model is established to compute the parameters. C++Builder 6.0 is employed as the software development platform to realize the image processing and the computation of the screw thread parameters. To verify the feasibility of the measurement system, experiments were carried out and the measurement errors were analyzed. The experimental results show that the image measurement system satisfies the measurement requirements and is suitable for real-time detection of the screw thread parameters mentioned above. Compared with traditional methods, the system based on image processing has advantages such as non-contact operation, ease of use, high measuring accuracy, no workpiece damage, and fast error analysis. In industrial production, this measurement system can provide an important reference for the development of similar parameter measurement systems.

  7. PDEs on moving surfaces via the closest point method and a modified grid based particle method

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ruuth, S. J.

    2016-05-01

    Partial differential equations (PDEs) on surfaces arise in a wide range of applications. The closest point method (Ruuth and Merriman (2008) [20]) is a recent embedding method that has been used to solve a variety of PDEs on smooth surfaces using a closest point representation of the surface and standard Cartesian grid methods in the embedding space. The original closest point method (CPM) was designed for problems posed on static surfaces; however, the solution of PDEs on moving surfaces is of considerable interest as well. Here we propose solving PDEs on moving surfaces using a combination of the CPM and a modification of the grid based particle method (Leung and Zhao (2009) [12]). The grid based particle method (GBPM) represents and tracks surfaces using meshless particles and an Eulerian reference grid. Our modification of the GBPM introduces a reconstruction step into the original method to ensure that all the grid points within a computational tube surrounding the surface are active. We present a number of examples to illustrate the numerical convergence properties of our combined method. Experiments for advection-diffusion equations that are strongly coupled to the velocity of the surface are also presented.

  8. An efficient frequency recognition method based on likelihood ratio test for SSVEP-based BCI.

    PubMed

    Zhang, Yangsong; Dong, Li; Zhang, Rui; Yao, Dezhong; Zhang, Yu; Xu, Peng

    2014-01-01

    An efficient frequency recognition method is very important for SSVEP-based BCI systems to improve the information transfer rate (ITR). To address this, the likelihood ratio test (LRT) was utilized, for the first time, to propose a novel multichannel frequency recognition method for SSVEP data. The essence of this new method is to calculate, with the LRT, the association between multichannel EEG signals and reference signals constructed according to the stimulus frequency. For both simulated and real SSVEP data, the proposed method yielded higher recognition accuracy with shorter time window lengths and was more robust against noise in comparison with the popular canonical correlation analysis (CCA)-based method and the least absolute shrinkage and selection operator (LASSO)-based method. The recognition accuracy and information transfer rate (ITR) obtained by the proposed method were higher than those of the CCA-based and LASSO-based methods. These superior results indicate that the LRT method is a promising candidate for reliable frequency recognition in future SSVEP-BCI. PMID:25250058
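
    For reference, the standard CCA baseline that the paper compares against (not the LRT method itself) can be sketched as follows, with assumed sampling details and a toy 10 Hz SSVEP:

```python
# Sketch of CCA-based SSVEP frequency recognition: correlate multichannel
# EEG with sin/cos references at each candidate frequency and pick the
# frequency with the largest canonical correlation. Toy signal assumed.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, T = 250, 2.0
t = np.arange(int(fs * T)) / fs
freqs = [8.0, 10.0, 12.0]
rng = np.random.default_rng(0)
eeg = (np.column_stack([np.sin(2 * np.pi * 10 * t + ph) for ph in (0, 1, 2)])
       + 0.5 * rng.standard_normal((len(t), 3)))   # 3-channel toy SSVEP at 10 Hz

def canon_corr(X, f, harmonics=2):
    # Reference set: sin/cos at the stimulus frequency and its harmonics.
    Y = np.column_stack([fn(2 * np.pi * f * k * t)
                         for k in range(1, harmonics + 1)
                         for fn in (np.sin, np.cos)])
    Xc, Yc = CCA(n_components=1).fit_transform(X, Y)
    return np.corrcoef(Xc[:, 0], Yc[:, 0])[0, 1]

print(max(freqs, key=lambda f: canon_corr(eeg, f)))  # should print 10.0
```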

  9. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    NASA Astrophysics Data System (ADS)

    Tanaka, Takashi

    2014-06-01

    Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate the light source performance. A number of numerical methods to compute the Wigner functions for typical synchrotron radiation sources such as bending magnets, undulators and wigglers, are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of betatron functions to maximize the brilliance of undulator radiation is discussed.

  10. Gravity base, jack-up platform - method and apparatus

    SciTech Connect

    Herrmann, R.P.; Pease, F.T.; Ray, D.R.

    1981-05-05

    The invention relates to an offshore, gravity base, jack-up platform comprising a deck, a gravity base and one or more legs interconnecting the deck and base. The gravity base comprises a generally polygonal shaped, monolithic hull structure with reaction members extending downwardly from the hull to penetrate the waterbed and react to vertical and lateral loads imposed upon the platform while maintaining the gravity hull in a posture elevated above the surface of the waterbed. A method aspect of the invention includes the steps of towing a gravity base, jack-up platform, as a unit, to a preselected offshore site floating upon the gravity hull. During the towing operation, the deck is mounted adjacent the gravity base with a leg or legs projecting through the deck. At a preselected offshore station ballast is added to the gravity base and the platform descends slightly to a posture where the platform is buoyantly supported by the deck. The base is then jacked down toward the seabed and the platform is laterally brought onto station. Ballast is then added to the deck and the reaction members are penetrated into the waterbed to operational soil refusal. Ballast is then ejected from the deck and the deck is jacked to an operational elevation above a predetermined statistical wave crest height.

  11. [Fast Implementation Method of Protein Spots Detection Based on CUDA].

    PubMed

    Xiong, Bangshu; Ye, Yijia; Ou, Qiaofeng; Zhang, Haodong

    2016-02-01

    In order to improve the efficiency of protein spot detection, a fast detection method based on CUDA was proposed. Firstly, parallel algorithms for the three most time-consuming parts of the protein spot detection algorithm were studied: image preprocessing, coarse protein spot detection and overlapping spot segmentation. Then, according to the single-instruction multiple-thread execution model of CUDA, a data-space strategy of separating two-dimensional (2D) images into blocks was adopted, together with various optimization measures such as shared memory and 2D texture memory. The results show that the efficiency of this method is markedly improved compared with CPU computation, and the gains grow with image size; for example, for an image of size 2,048 x 2,048, the CPU implementation needs 52,641 ms, whereas the GPU needs only 4,384 ms. PMID:27382745

  12. Diabatization based on the dipole and quadrupole: The DQ method

    SciTech Connect

    Hoyer, Chad E.; Xu, Xuefei; Ma, Dongxia; Gagliardi, Laura E-mail: truhlar@umn.edu; Truhlar, Donald G. E-mail: truhlar@umn.edu

    2014-09-21

    In this work, we present a method, called the DQ scheme (where D and Q stand for dipole and quadrupole, respectively), for transforming a set of adiabatic electronic states to diabatic states by using the dipole and quadrupole moments to determine the transformation coefficients. It is more broadly applicable than methods based only on the dipole moment: for example, it is not restricted to electron transfer reactions, it works with any electronic structure method and for molecules with and without symmetry, and it is convenient in not requiring orbital transformations. We illustrate this method by prototype applications to two cases, LiH and phenol, for which we compare the results to those obtained by the fourfold-way diabatization scheme.

  13. A Novel Method for Pulsometry Based on Traditional Iranian Medicine

    PubMed Central

    Yousefipoor, Farzane; Nafisi, Vahidreza

    2015-01-01

    Arterial pulse measurement is one of the most important methods for evaluating health conditions. In traditional Iranian medicine (TIM), the physician detects the radial pulse by placing four fingers on the patient's wrist. Even under standard conditions, the pulses detected with this method are subjective and error-prone, and in the case of weak and/or abnormal pulses the ambiguity of diagnosis increases. In this paper, we present a device that was designed and implemented to automate the traditional pulse detection method. With this novel system, the noninvasive diagnostic method and database based on the TIM are a way forward for applying traditional medicine and diagnosing patients with present-day technology. The accuracy is 76% for period measurement and 72% for systolic peak detection. PMID:26955566

  14. Spindle extraction method for ISAR image based on Radon transform

    NASA Astrophysics Data System (ADS)

    Wei, Xia; Zheng, Sheng; Zeng, Xiangyun; Zhu, Daoyuan; Xu, Gaogui

    2015-12-01

    In this paper, a method for extracting the spindle of a target in an inverse synthetic aperture radar (ISAR) image is proposed which relies on the Radon transform. Firstly, the Radon transform is used to detect all straight lines that are collinear with the line segments in the image. Then, the Sobel operator is used to detect the image contour. Finally, all intersections of each straight line with the image contour are found; for each line, the two intersections with the maximum distance between them are the two ends of the corresponding line segment, and the longest of all line segments is the spindle of the target. Using the proposed spindle extraction method, one hundred simulated ISAR images, rotated counterclockwise by 0, 10, 20, 30 and 40 degrees, were used in experiments; the detection results are closer to the real spindle of the target than those of the Hough-transform-based method.
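
    The line-detection step can be sketched with scikit-image's Radon transform on a synthetic segment (the contour-intersection step is omitted, and the peak-angle convention depends on the library):

```python
# Toy sketch: the sinogram peak of the Radon transform gives the angle and
# offset of the dominant straight line in the image.
import numpy as np
from skimage.transform import radon

img = np.zeros((101, 101))
for i in range(20, 81):                     # synthetic diagonal line segment
    img[i, i] = 1.0

theta = np.arange(180.0)
sino = radon(img, theta=theta)              # rows: offsets, cols: angles
offset_idx, angle_idx = np.unravel_index(np.argmax(sino), sino.shape)
print(theta[angle_idx])                     # 45 or 135, depending on convention
```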

  15. A Novel Robot Visual Homing Method Based on SIFT Features

    PubMed Central

    Zhu, Qidan; Liu, Chuanjia; Cai, Chengtao

    2015-01-01

    Warping is an effective visual homing method for robot local navigation. However, the performance of the warping method can be greatly influenced by the changes of the environment in a real scene, thus resulting in lower accuracy. In order to solve the above problem and to get higher homing precision, a novel robot visual homing algorithm is proposed by combining SIFT (scale-invariant feature transform) features with the warping method. The algorithm is novel in using SIFT features as landmarks instead of the pixels in the horizon region of the panoramic image. In addition, to further improve the matching accuracy of landmarks in the homing algorithm, a novel mismatching elimination algorithm, based on the distribution characteristics of landmarks in the catadioptric panoramic image, is proposed. Experiments on image databases and on a real scene confirm the effectiveness of the proposed method. PMID:26473880
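
    The landmark-matching step can be sketched with OpenCV's SIFT and Lowe's ratio test (file names are placeholders; the warping method's homing-vector computation is not reproduced):

```python
# Sketch of SIFT landmark matching between a stored home view and the
# current view; matched keypoint pairs would serve as the landmarks.
import cv2

img1 = cv2.imread("snapshot.png", cv2.IMREAD_GRAYSCALE)   # stored home view
img2 = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)    # current view

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio
landmarks = [(kp1[m.queryIdx].pt, kp2[m.trainIdx].pt) for m in good]
print(len(landmarks), "matched landmarks")
```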

  16. A history-based method to estimate animal preference.

    PubMed

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213
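
    A hedged sketch of one possible history-weighted preference index (the exponential decay weighting is an illustrative assumption, not the authors' exact formula):

```python
# Toy sketch: choices from recent tests count more than older choices.
import numpy as np

# choices[t] = item chosen in test t (oldest first), e.g. colour labels
choices = ["blue", "blue", "red", "blue", "red", "red", "red"]
items = sorted(set(choices))
decay = 0.8                                        # older tests weigh less

weights = decay ** np.arange(len(choices))[::-1]   # most recent weight = 1
index = {it: sum(w for c, w in zip(choices, weights) if c == it) / weights.sum()
         for it in items}
print(index)   # higher value = stronger, more recent preference
```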

  17. A history-based method to estimate animal preference

    PubMed Central

    Maia, Caroline Marques; Volpato, Gilson Luiz

    2016-01-01

    Giving animals their preferred items (e.g., environmental enrichment) has been suggested as a method to improve animal welfare, thus raising the question of how to determine what animals want. Most studies have employed choice tests for detecting animal preferences. However, whether choice tests represent animal preferences remains a matter of controversy. Here, we present a history-based method to analyse data from individual choice tests to discriminate between preferred and non-preferred items. This method differentially weighs choices from older and recent tests performed over time. Accordingly, we provide both a preference index that identifies preferred items contrasted with non-preferred items in successive multiple-choice tests and methods to detect the strength of animal preferences for each item. We achieved this goal by investigating colour choices in the Nile tilapia fish species. PMID:27350213

  18. Object Recognition using Feature- and Color-Based Methods

    NASA Technical Reports Server (NTRS)

    Duong, Tuan; Duong, Vu; Stubberud, Allen

    2008-01-01

    An improved adaptive method of processing image data in an artificial neural network has been developed to enable automated, real-time recognition of possibly moving objects under changing (including suddenly changing) conditions of illumination and perspective. The method involves a combination of two prior object-recognition methods, one based on adaptive detection of shape features and one based on adaptive color segmentation, to enable recognition in situations in which either prior method by itself may be inadequate. The chosen prior feature-based method is known as adaptive principal-component analysis (APCA); the chosen prior color-based method is known as adaptive color segmentation (ACOSE). These methods are made to interact with each other in a closed-loop system to obtain an optimal solution of the object-recognition problem in a dynamic environment. One of the results of the interaction is to increase, beyond what would otherwise be possible, the accuracy of the determination of a region of interest (containing an object that one seeks to recognize) within an image. Another result is to provide a minimized adaptive step that can be used to update the results obtained by the two component methods when changes of color and apparent shape occur. The net effect is to enable the neural network to update its recognition output and improve its recognition capability via an adaptive learning sequence. In principle, the improved method could readily be implemented in integrated circuitry to make a compact, low-power, real-time object-recognition system. It has been proposed to demonstrate the feasibility of such a system by integrating a 256-by-256 active-pixel sensor with APCA, ACOSE, and neural processing circuitry on a single chip. It has been estimated that such a system on a chip would have a volume no larger than a few cubic centimeters, could operate at a rate as high as 1,000 frames per second, and would consume on the order of milliwatts of power.

  19. Weaving a Formal Methods Education with Problem-Based Learning

    NASA Astrophysics Data System (ADS)

    Gibson, J. Paul

    The idea of weaving formal methods through computing (or software engineering) degrees is not a new one. However, there has been little success in developing and implementing such a curriculum. Formal methods continue to be taught as stand-alone modules and students, in general, fail to see how fundamental these methods are to the engineering of software. A major problem is one of motivation: how can the students be expected to enthusiastically embrace a challenging subject when the learning benefits, beyond passing an exam and achieving curriculum credits, are not clear? Problem-based learning has gradually moved from being an innovative pedagogical technique, commonly used to better motivate students, to being widely adopted in the teaching of many different disciplines, including computer science and software engineering. Our experience shows that a good problem can be re-used throughout a student's academic life. In fact, the best computing problems can be used with children (young and old), undergraduates and postgraduates. In this paper we present a process for weaving formal methods through a University curriculum that is founded on the application of problem-based learning and a library of good software engineering problems, where students learn about formal methods without sitting a traditional formal methods module. The process of constructing good problems and integrating them into the curriculum is shown to be analogous to the process of engineering software. This approach is not intended to replace more traditional formal methods modules: it will better prepare students for such specialised modules and ensure that all students have an understanding and appreciation for formal methods even if they do not go on to specialise in them.

  20. Lunar-base construction equipment and methods evaluation

    NASA Technical Reports Server (NTRS)

    Boles, Walter W.; Ashley, David B.; Tucker, Richard L.

    1993-01-01

    A process for evaluating lunar-base construction equipment and methods concepts is presented. The process is driven by the need for more quantitative, systematic, and logical methods for assessing further research and development requirements in an area where uncertainties are high, dependence upon terrestrial heuristics is questionable, and quantitative methods are seldom applied. Decision theory concepts are used in determining the value of accurate information and the process is structured as a construction-equipment-and-methods selection methodology. Total construction-related, earth-launch mass is the measure of merit chosen for mathematical modeling purposes. The work is based upon the scope of the lunar base as described in the National Aeronautics and Space Administration's Office of Exploration's 'Exploration Studies Technical Report, FY 1989 Status'. Nine sets of conceptually designed construction equipment are selected as alternative concepts. It is concluded that the evaluation process is well suited for assisting in the establishment of research agendas in an approach that is first broad, with a low level of detail, followed by more-detailed investigations into areas that are identified as critical due to high degrees of uncertainty and sensitivity.

  1. Evaluation of base widening methods on flexible pavements in Wyoming

    NASA Astrophysics Data System (ADS)

    Offei, Edward

    The surface transportation system forms the biggest infrastructure investment in the United States of which the roadway pavement is an integral part. Maintaining the roadways can involve rehabilitation in the form of widening, which requires a longitudinal joint between the existing and new pavement sections to accommodate wider travel lanes, additional travel lanes or modification to shoulder widths. Several methods are utilized for the joint construction between the existing and new pavement sections including vertical, tapered and stepped joints. The objective of this research is to develop a formal recommendation for the preferred joint construction method that provides the best base layer support for the state of Wyoming. Field collection of Dynamic Cone Penetrometer (DCP) data, Falling Weight Deflectometer (FWD) data, base samples for gradation and moisture content were conducted on 28 existing and 4 newly constructed pavement widening projects. A survey of constructability issues on widening projects as experienced by WYDOT engineers was undertaken. Costs of each joint type were compared as well. Results of the analyses indicate that the tapered joint type showed relatively better pavement strength compared to the vertical joint type and could be the preferred joint construction method. The tapered joint type also showed significant base material savings than the vertical joint type. The vertical joint has an 18% increase in cost compared to the tapered joint. This research is intended to provide information and/or recommendation to state policy makers as to which of the base widening joint techniques (vertical, tapered, stepped) for flexible pavement provides better pavement performance.

  2. Springback Compensation Based on FDM-DTF Method

    SciTech Connect

    Liu Qiang; Kang Lan

    2010-06-15

    Stamping part error caused by springback is usually considered to be a tooling defect in the sheet metal forming process. This problem can be corrected by adjusting the tooling shape to an appropriate shape. In this paper, springback compensation based on the FDM-DTF method is proposed for the design and modification of the tooling shape. Firstly, based on the FDM method, the tooling shape is designed by reversing the direction of the internal forces at the end of the forming simulation; the required tooling shape can be obtained after a few iterations. Secondly, the actual tooling is produced based on the results of the first step. When the discrete data of the tooling and part surfaces are investigated, the transfer function between the numerical springback error and the real springback error can be calculated from wavelet transform results and used to predict the tooling shape for the desired product. Finally, the FDM-DTF method is shown to control springback effectively after being applied to springback control of a 2D irregular product.

  3. An Object-Based Method for Chinese Landform Types Classification

    NASA Astrophysics Data System (ADS)

    Ding, Hu; Tao, Fei; Zhao, Wufan; Na, Jiaming; Tang, Guo'an

    2016-06-01

    Landform classification is a necessary task for various fields of landscape and regional planning, for example landscape evaluation, erosion studies and hazard prediction. This study proposes an improved object-based classification of Chinese landform types using factor importance analysis from random forests and the gray-level co-occurrence matrix (GLCM). In this research, based on a 1-km DEM of China, the combination of terrain factors extracted from the DEM is selected by correlation analysis and Sheffield's entropy method. A random forest classifier is applied to evaluate the importance of the terrain factors, which are used as multi-scale segmentation thresholds. Then the GLCM is computed to build the knowledge base for classification. The classification result was checked against the 1:4,000,000 Chinese Geomorphological Map as a reference. The overall classification accuracy of the proposed method is 5.7% higher than ISODATA unsupervised classification and 15.7% higher than the traditional object-based classification method.

  4. Characteristic-based time domain method for antenna analysis

    NASA Astrophysics Data System (ADS)

    Jiao, Dan; Jin, Jian-Ming; Shang, J. S.

    2001-01-01

    The characteristic-based time domain method, developed in the computational fluid dynamics community for solving the Euler equations, is applied to the antenna radiation problem. Based on the principle of the characteristic-based algorithm, a governing equation in the cylindrical coordinate system is formulated directly to facilitate the analysis of body-of-revolution antennas and also to achieve the exact Riemann problem. A finite difference scheme with second-order accuracy in both time and space is constructed from the eigenvalue and eigenvector analysis of the derived governing equation. Rigorous boundary conditions for all the field components are formulated to improve the accuracy of the characteristic-based finite difference scheme. Numerical results demonstrate the validity and accuracy of the proposed technique.

  5. Spectral radiative property control method based on filling solution

    NASA Astrophysics Data System (ADS)

    Jiao, Y.; Liu, L. H.; Hsu, P.-f.

    2014-01-01

    Controlling thermal radiation by tailoring the spectral properties of microstructures is a promising approach that can be applied in many industrial systems and has been widely researched recently. Among the various property-tailoring schemes, geometric design of microstructures is a commonly used method. However, existing radiation property tailoring is limited by the adjustability of processed microstructures; in other words, the spectral radiative properties of microscale structures cannot be changed after the gratings are fabricated. In this paper, we propose a method that adjusts the grating spectral properties by injecting a filling solution, which can modify the thermal radiation of a fabricated microstructure and thus overcomes the limitation mentioned above. Both mercury and water are adopted as filling solutions in this study. Aluminum and silver are selected as the grating materials to investigate the generality and limitations of this control method. Rigorous coupled-wave analysis is used to investigate the spectral radiative properties of these filling-solution grating structures. A magnetic polariton mechanism identification method is proposed based on the LC circuit model. It is found that this control method can be used with different grating materials, and different filling solutions shift the high absorption peak toward longer or shorter wavelengths. The results show that filling-solution grating structures are promising for active control of spectral radiative properties.

  6. Efficient variational Bayesian approximation method based on subspace optimization.

    PubMed

    Zheng, Yuling; Fraysse, Aurélia; Rodet, Thomas

    2015-02-01

    Variational Bayesian approximations have been widely used in fully Bayesian inference for approximating an intractable posterior distribution by a separable one. Nevertheless, the classical variational Bayesian approximation (VBA) method suffers from slow convergence to the approximate solution when tackling large-dimensional problems. To address this problem, we propose in this paper a more efficient VBA method. In fact, the variational Bayesian problem can be seen as a functional optimization problem. The proposed method is based on the adaptation of subspace optimization methods in Hilbert spaces to the involved function space, in order to solve this optimization problem in an iterative way. The aim is to determine an optimal direction at each iteration in order to obtain a more efficient method. We highlight the efficiency of our new VBA method and demonstrate its application to image processing by considering an ill-posed linear inverse problem using a total variation prior. Comparisons with state-of-the-art variational Bayesian methods through a numerical example show a notable improvement in computation time. PMID:25532179

  7. Differential equation based method for accurate approximations in optimization

    NASA Technical Reports Server (NTRS)

    Pritchard, Jocelyn I.; Adelman, Howard M.

    1990-01-01

    This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed-form approximations; hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and the results compared with the commonly used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate and, in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacement are used to approximate bending stresses.
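
    The idea can be illustrated on a simple spring-mass analogy (not the paper's beam model): integrating the sensitivity equation dw/dm = -w/(2m) as an ODE yields the closed form w(m) = w0*sqrt(m0/m), which in this simple case coincides with the exact answer, whereas the linear Taylor expansion degrades for large perturbations:

```python
# Illustrative comparison for a spring-mass natural frequency w = sqrt(k/m).
import numpy as np

k, m0 = 100.0, 1.0
w0 = np.sqrt(k / m0)                     # frequency at the baseline mass
for m in (1.1, 1.5, 2.0):                # increasingly large perturbations
    exact = np.sqrt(k / m)
    taylor = w0 * (1 - (m - m0) / (2 * m0))   # linear Taylor approximation
    deb = w0 * np.sqrt(m0 / m)                # closed form from the sensitivity ODE
    print(m, round(exact, 3), round(taylor, 3), round(deb, 3))
```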

  8. Misalignment-robust, edge-based image fusion method

    NASA Astrophysics Data System (ADS)

    Xi, Cai; Wei, Zhao

    2012-07-01

    We propose an image fusion method robust to misaligned source images based on their multiscale edge representations. Significant long edge curves at the second scale are selected to decide edge locations at each scale for the multiscale edge representations of source images. Then, processes are only executed on the representations that contain the main spatial structures of the images and also help suppress noise interference. A registration process is embedded in our fusion method. Edge correlation, calculated at the second scale, is involved as a match measure determining the fusion rules and also as a similarity measure quantifying the matching extent between source images, which makes the registration and fusion processes share the same data and hence lessens the computation of our method. Experimental results prove that, no matter whether in a noiseless or noisy condition, the proposed method provides satisfying treatment to misregistered source images and behaves well in terms of visual and objective evaluations on the fusion results, which further verifies the robustness of our edge-based method to misregistration and noise.

  9. CEMS using hot wet extractive method based on DOAS

    NASA Astrophysics Data System (ADS)

    Sun, Bo; Zhang, Chi; Sun, Changku

    2011-11-01

    A continuous emission monitoring system (CEMS) using a hot, wet extractive method based on differential optical absorption spectroscopy (DOAS) is designed. The developed system is applied to retrieving the concentrations of SO2 and NOx in flue gas on-site. The flue gas is carried along a heated sample line into the sample pool at a constant temperature above the dew point. In this way, the adverse impact of water vapor on measurement accuracy is greatly reduced, and on-line calibration is implemented. The flue gas is then discharged from the sample pool after the measuring process is complete. The on-site applicability of the system is enhanced by using a Programmable Logic Controller (PLC) to control each valve in the system during the measuring and on-line calibration processes. The concentration retrieval method used in the system is based on nonlinear partial least squares (PLS) regression. The relationship between the known concentrations and the differential absorption features gathered by the nonlinear PLS method can be determined after the on-line calibration process, and the concentrations of SO2 and NOx can then be easily obtained from this relationship. The concentration retrieval method can separate the information from the noise effectively, which improves the measuring accuracy of the system. SO2 at four different concentrations was measured by the system under laboratory conditions. The results proved that the full-scale error of this system is less than 2% FS.
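
    The calibration idea can be sketched with scikit-learn's PLS regression on synthetic differential-absorption spectra (toy data; the instrument's actual spectral model is not reproduced):

```python
# Hedged sketch: fit PLS on calibration spectra at known concentrations,
# then predict the concentration of an unknown sample.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
wl = np.linspace(0, 1, 200)                          # wavelength axis
band = np.exp(-((wl - 0.4) ** 2) / 0.002)            # SO2-like absorption feature

conc_train = np.array([[50.0], [100.0], [200.0], [400.0]])  # calibration gases
X_train = conc_train * band + rng.normal(0, 0.5, (4, 200))  # spectra + noise

pls = PLSRegression(n_components=2).fit(X_train, conc_train)
x_new = 150.0 * band + rng.normal(0, 0.5, 200)       # unknown flue-gas sample
print(pls.predict(x_new.reshape(1, -1)))             # approx 150
```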

  10. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct-forcing immersed boundary method based on the velocity-correction projection method of Guermond and Shen [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method is considered a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without consideration of the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. Supported by the China Scholarship Council.

  11. Methodical Base of Experimental Studies of Collinear Multibody Decays

    NASA Astrophysics Data System (ADS)

    Kamanin, D. V.; Zhuchko, V. E.; Kondtatyev, N. A.; Alexandrov, A. A.; Alexandrova, I. A.; Kuznetsova, E. A.; Strekalovsky, A. O.; Strekalovsky, O. V.; Pyatkov, Yu. V.; Jacobs, N.; Malaza, V.; Mulgin, S. I.

    2013-06-01

    Our recent experiments dedicated to the study of the CCT of 252Cf (sf) were carried out at the COMETA setup, based on mosaics of PIN diodes and a special array of 3He-filled neutron counters. The principal peculiarity of the experiment consists in measuring the masses of the heavy ions within the TOF-E (time-of-flight vs. energy) method over a wide range of masses and energies, with almost collinear recession of the decay partners. The methodological questions of such an experiment are discussed here.

  12. Cepstrum based feature extraction method for fungus detection

    NASA Astrophysics Data System (ADS)

    Yorulmaz, Onur; Pearson, Tom C.; Çetin, A. Enis

    2011-06-01

    In this paper, a method for detection of popcorn kernels infected by a fungus is developed using image processing. The method is based on two dimensional (2D) mel and Mellin-cepstrum computation from popcorn kernel images. Cepstral features that were extracted from popcorn images are classified using Support Vector Machines (SVM). Experimental results show that high recognition rates of up to 93.93% can be achieved for both damaged and healthy popcorn kernels using 2D mel-cepstrum. The success rate for healthy popcorn kernels was found to be 97.41% and the recognition rate for damaged kernels was found to be 89.43%.
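
    A minimal 2-d cepstrum feature sketch (plain log-spectrum route only; the paper's mel/Mellin frequency warping and SVM training are omitted):

```python
# Toy sketch: 2-d cepstrum = inverse FFT of the log magnitude spectrum;
# low-order coefficients serve as texture features for a classifier.
import numpy as np

def cepstrum_2d(img):
    spec = np.abs(np.fft.fft2(img)) + 1e-9     # magnitude spectrum
    return np.real(np.fft.ifft2(np.log(spec))) # inverse FFT of log-spectrum

img = np.random.rand(64, 64)                   # stand-in for a kernel image
feat = cepstrum_2d(img)[:8, :8].ravel()        # low-order coefficients as features
print(feat.shape)
```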

  13. Swelling-based method for preparing stable, functionalized polymer colloids.

    PubMed

    Kim, Anthony J; Manoharan, Vinothan N; Crocker, John C

    2005-02-16

    We describe a swelling-based method to prepare sterically stabilized polymer colloids with different functional groups or biomolecules attached to their surface. It should be applicable to a variety of polymeric colloids, including magnetic particles, fluorescent particles, polystyrene particles, PMMA particles, and so forth. The resulting particles are more stable in the presence of monovalent and divalent salt than existing functionalized colloids, even in the absence of any surfactant or protein blocker. While we use a PEG polymer brush here, the method should enable the use of a variety of polymer chemistries and molecular weights. PMID:15700965

  14. An error embedded method based on generalized Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Kim, Philsu; Kim, Junghan; Jung, WonKyu; Bu, Sunyoung

    2016-02-01

    In this paper, we develop an error embedded method based on generalized Chebyshev polynomials for solving stiff initial value problems. The solution and the error at each integration step are calculated by generalized Chebyshev polynomials of two consecutive degrees having overlapping zeros, which enables us to minimize the overall computational cost. Furthermore, the errors at each integration step are embedded in the algorithm itself. In terms of concrete convergence and stability analysis, the constructed algorithm turns out to have 6th-order convergence and almost L-stability. We assess the proposed method with several numerical results, showing that it uses larger time step sizes and is numerically more efficient.

  15. A backtranslation method based on codon usage strategy.

    PubMed Central

    Pesole, G; Attimonelli, M; Liuni, S

    1988-01-01

    This study describes a method for the backtranslation of an amino acid sequence, an extremely useful tool for various experimental approaches. It involves two computer programs, CLUSTER and BACKTR, written in Fortran 77 and running on a VAX/VMS computer. CLUSTER generates a reliable codon usage table through a cluster analysis based on a chi-square-like distance between the sequences. BACKTR produces backtranslated sequences according to different options, using the codon usage table obtained, and also selects the least ambiguous potential oligonucleotide probes within an amino acid sequence. The method was tested by applying it to 158 yeast genes. PMID:3281142
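
    A toy sketch in the spirit of BACKTR (the tiny codon usage table is hypothetical; a real one would come from a CLUSTER-style analysis of the target organism's genes), choosing each residue's most frequent codon:

```python
# Toy backtranslation: pick the most frequent codon per amino acid from an
# assumed codon usage table (three residues only, made-up frequencies).
codon_usage = {
    "M": {"ATG": 1.00},
    "K": {"AAA": 0.42, "AAG": 0.58},
    "F": {"TTT": 0.59, "TTC": 0.41},
}

def backtranslate(protein):
    return "".join(max(codon_usage[aa], key=codon_usage[aa].get)
                   for aa in protein)

print(backtranslate("MKF"))           # ATGAAGTTT
```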

  16. Methods for preparing colloidal nanocrystal-based thin films

    DOEpatents

    Kagan, Cherie R.; Fafarman, Aaron T.; Choi, Ji-Hyuk; Koh, Weon-kyu; Kim, David K.; Oh, Soong Ju; Lai, Yuming; Hong, Sung-Hoon; Saudari, Sangameshwar Rao; Murray, Christopher B.

    2016-05-10

    Methods of exchanging ligands to form colloidal nanocrystals (NCs) with chalcogenocyanate (xCN)-based ligands and apparatuses using the same are disclosed. The ligands may be exchanged by assembling NCs into a thin film and immersing the thin film in a solution containing xCN-based ligands. The ligands may also be exchanged by mixing a xCN-based solution with a dispersion of NCs, flocculating the mixture, centrifuging the mixture, discarding the supernatant, adding a solvent to the pellet, and dispersing the solvent and pellet to form dispersed NCs with exchanged xCN-ligands. The NCs with xCN-based ligands may be used to form thin film devices and/or other electronic, optoelectronic, and photonic devices. Devices comprising nanocrystal-based thin films and methods for forming such devices are also disclosed. These devices may be constructed by depositing NCs on to a substrate to form an NC thin film and then doping the thin film by evaporation and thermal diffusion.

  17. A Flow SPR Immunosensor Based on a Sandwich Direct Method

    PubMed Central

    Tomassetti, Mauro; Conta, Giorgia; Campanella, Luigi; Favero, Gabriele; Sanzò, Gabriella; Mazzei, Franco; Antiochia, Riccarda

    2016-01-01

    In this study, we report the development of an SPR (Surface Plasmon Resonance) immunosensor for the detection of ampicillin, operating under flow conditions. SPR sensors based on both direct (with the immobilization of the antibody) and competitive (with the immobilization of the antigen) methods did not allow the detection of ampicillin. Therefore, a sandwich-based sensor was developed which showed a good linear response towards ampicillin between 10−3 and 10−1 M, a measurement time of ≤20 min and a high selectivity both towards β-lactam antibiotics and antibiotics of different classes. PMID:27187486

  18. A Flow SPR Immunosensor Based on a Sandwich Direct Method.

    PubMed

    Tomassetti, Mauro; Conta, Giorgia; Campanella, Luigi; Favero, Gabriele; Sanzò, Gabriella; Mazzei, Franco; Antiochia, Riccarda

    2016-01-01

    In this study, we report the development of an SPR (Surface Plasmon Resonance) immunosensor for the detection of ampicillin, operating under flow conditions. SPR sensors based on both direct (with the immobilization of the antibody) and competitive (with the immobilization of the antigen) methods did not allow the detection of ampicillin. Therefore, a sandwich-based sensor was developed which showed a good linear response towards ampicillin between 10−3 and 10−1 M, a measurement time of ≤20 min and a high selectivity both towards β-lactam antibiotics and antibiotics of different classes. PMID:27187486

  19. Design of a Password-Based EAP Method

    NASA Astrophysics Data System (ADS)

    Manganaro, Andrea; Koblensky, Mingyur; Loreti, Michele

    In recent years, amendments to IEEE standards for wireless networks added support for authentication algorithms based on the Extensible Authentication Protocol (EAP). Available solutions generally use digital certificates or pre-shared keys but the management of the resulting implementations is complex or unlikely to be scalable. In this paper we present EAP-SRP-256, an authentication method proposal that relies on the SRP-6 protocol and provides a strong password-based authentication mechanism. It is intended to meet the IETF security and key management requirements for wireless networks.

  20. Real reproduction and evaluation of color based on BRDF method

    NASA Astrophysics Data System (ADS)

    Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli

    2013-12-01

    Traditional methods make it difficult to faithfully reproduce the original color of targets under different illumination environments. A function that can reconstruct the reflection characteristics of every point on the target surface, known as the Bidirectional Reflectance Distribution Function (BRDF), is therefore urgently needed to improve the authenticity of color reproduction. A color reproduction method based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of a GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer (Photo Research, Inc., USA). The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch relative to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 ColorChecker patches are thus reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance measured with the PR-715, and finally the chromaticity coordinates in color space and the color differences between the two are analyzed. The experiments show that the average color difference and its sample standard deviation between the method proposed in this paper and the traditional reflectance-based reconstruction are 2.567 and 1.3049, respectively. Theoretical and experimental analysis indicates that color reproduction based on BRDF describes the color information of an object in hemispherical space more fully than reflectance alone, and that the proposed method is effective and feasible for chromaticity reproduction.

  1. [An Effective Wavelength Detection Method Based on Echelle Spectra Reduction].

    PubMed

    Yin, Lu; Bayanheshig; Cui, Ji-cheng; Yang, Jin; Zhu, Ji-wei; Yao, Xue-feng

    2015-03-01

    The echelle spectrometer, with its high dispersion, high resolution, wide spectral coverage, full-spectrum transient direct reading and many other advantages, is representative of advanced spectrometers. As echelle spectrometers become commercialized, methods for processing their two-dimensional spectral images are becoming more and more important. Currently, a centroid extraction algorithm is often used first to detect the centroid positions of the effective facula, which are then combined with an echelle spectrum reduction method to detect the effective wavelengths; this approach struggles to achieve the desired requirements. To improve the speed, accuracy and imaging-error correction capability of effective wavelength detection, an effective wavelength detection method based on spectrum reduction is proposed. First, the two-dimensional spectrum is converted to a one-dimensional image using the echelle spectrum reduction method, instead of locating the centroids of the effective facula. By setting an appropriate threshold, the one-dimensional image is easier to process than the two-dimensional spectral image, and all pixel points corresponding to effective wavelengths can be detected at once. Based on this idea, the speed and accuracy of the image processing are improved, and a range of imaging errors can be compensated at the same time. The algorithm was tested on an echelle spectrograph to check its suitability for spectral image processing. A standard mercury lamp was chosen as the light source because its many known characteristic lines can be used to examine the accuracy of the wavelength detection. According to the experimental results, this method not only increases the processing speed but also improves the accuracy of wavelength detection; imaging errors below 0.05 mm (two pixels) can be corrected, and the wavelength accuracy reaches 0.02 nm.

  2. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    A simple method for detecting bacteria, based on the time of hydrogen evolution, was developed and tested against various members of the Enterobacteriaceae group. The test system consisted of (1) two electrodes, platinum and a reference electrode, (2) a buffer amplifier, and (3) a strip-chart recorder. Hydrogen evolution was measured by an increase in voltage in the negative (cathodic) direction. A linear relationship was established between inoculum size and the time hydrogen was detected (lag period). Lag times ranged from 1 h for 1 million cells/ml to 7 h for 1 cell/ml. For each 10-fold decrease in inoculum, length of the lag period increased 60 to 70 min. Based on the linear relationship between inoculum and lag period, these results indicate the potential application of the hydrogen-sensing method for rapidly detecting coliforms and other gas-producing microorganisms in a variety of clinical, food, and other samples.
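
    The reported linearity suggests a simple calibration: regress lag time on log10 of the inoculum and invert the fit to estimate cell counts from a measured lag. A sketch (the data points below are read off the figures quoted in the abstract, not taken from the paper's tables):

      import numpy as np

      log_n = np.arange(7)                        # log10 cells/ml: 0 .. 6
      lag_h = np.array([7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0])  # approx. lag, hours
      slope, intercept = np.polyfit(log_n, lag_h, 1)          # about -1 h/decade

      def estimate_inoculum(lag_hours):
          # invert the calibration line: log10(N) = (lag - intercept) / slope
          return 10 ** ((lag_hours - intercept) / slope)

      print(estimate_inoculum(4.0))               # ~1e3 cells/ml for a 4 h lag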

  3. The conditional risk probability-based seawall height design method

    NASA Astrophysics Data System (ADS)

    Yang, Xing; Hu, Xiaodong; Li, Zhiqing

    2015-11-01

    The determination of the required seawall height is usually based on the combination of wind speed (or wave height) and still water level for a specified return period, e.g., the 50-year return period wind speed and the 50-year return period still water level. In reality, the two variables may be partially correlated, which can lead to over-design (excess cost) of seawall structures. The return period used for the design of a seawall depends on the economy, society and natural environment of the region; this means a specified risk level of overtopping or damage of a seawall structure is usually allowed. The aim of this paper is to present a conditional risk probability-based seawall height design method which incorporates the correlation of the two variables. For purposes of demonstration, wind speeds and water levels collected from Jiangsu, China are analyzed. The results show this method can improve the accuracy of seawall height design.
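
    A minimal sketch of the kind of computation involved, assuming a Gaussian copula for the wind speed-water level dependence (the joint model actually fitted to the Jiangsu data may differ):

      import numpy as np
      from scipy.stats import norm, multivariate_normal

      def joint_exceedance(p_wind, p_level, rho):
          # P(wind > w AND level > l) from the marginal annual exceedance
          # probabilities of the two design values and their correlation rho
          z = norm.ppf([1 - p_wind, 1 - p_level])
          biv = multivariate_normal(mean=[0.0, 0.0],
                                    cov=[[1.0, rho], [rho, 1.0]])
          # survival probability via inclusion-exclusion on the bivariate CDF
          return 1 - (1 - p_wind) - (1 - p_level) + biv.cdf(z)

      # two 50-year design values (p = 0.02): independence would give 4e-4,
      # while a positive correlation raises the joint exceedance markedly
      print(joint_exceedance(0.02, 0.02, rho=0.5))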

  4. A Human Gait Classification Method Based on Radar Doppler Spectrograms

    NASA Astrophysics Data System (ADS)

    Tivive, Fok Hing Chi; Bouzerdoum, Abdesselam; Amin, Moeness G.

    2010-12-01

    An image classification technique, which has recently been introduced for visual pattern recognition, is successfully applied for human gait classification based on radar Doppler signatures depicted in the time-frequency domain. The proposed method has three processing stages. The first two stages are designed to extract Doppler features that can effectively characterize human motion based on the nature of arm swings, and the third stage performs classification. Three types of arm motion are considered: free-arm swings, one-arm confined swings, and no-arm swings. The last two arm motions can be indicative of a human carrying objects or a person in stressed situations. The paper discusses the different steps of the proposed method for extracting distinctive Doppler features and demonstrates their contributions to the final and desirable classification rates.

  5. A Model Based Security Testing Method for Protocol Implementation

    PubMed Central

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementations is important and hard to verify. Since penetration testing is usually based on the experience of the security tester and the specific protocol specification, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them to generate suitable test cases for verifying the security of a protocol implementation. PMID:25105163

  6. Geophysics-based method of locating a stationary earth object

    DOEpatents

    Daily, Michael R.; Rohde, Steven B.; Novak, James L.

    2008-05-20

    A geophysics-based method for determining the position of a stationary earth object uses the periodic changes in the gravity vector of the earth caused by the sun- and moon-orbits. Because the local gravity field is highly irregular over a global scale, a model of local tidal accelerations can be compared to actual accelerometer measurements to determine the latitude and longitude of the stationary object.

  7. Geometrical MTF computation method based on the irradiance model

    NASA Astrophysics Data System (ADS)

    Lin, P.-D.; Liu, C.-S.

    2011-01-01

    The Modulation Transfer Function (MTF) is a measure of an optical system's ability to transfer contrast from the specimen to the image plane at a specific resolution. It can be computed either numerically by geometrical optics or measured experimentally by imaging a knife edge or a bar-target pattern of varying spatial frequency. Previously, MTF accuracy was generally affected by the size of the mesh on the image plane. This paper presents a new MTF computation method based on the irradiance model, without counting the number of rays hitting each grid. To verify the method, the MTF in the sagittal and meridional directions of an axis-symmetrical optical system is computed by both the ray-counting and the proposed methods. It is found that the grid size meshed on the image plane significantly affects the MTF of the ray-counting method, sometimes with significantly negative results. The proposed irradiance method is immune to issues of grid size. The CPU computation time for the two methods is approximately the same.
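
    For context, both approaches approximate the textbook relation that the MTF is the normalized Fourier magnitude of the system's line-spread function; a short sketch of that definition (not of the paper's irradiance model itself):

      import numpy as np

      def mtf_from_lsf(lsf, dx):
          # MTF = |FT of line-spread function|, normalized to 1 at zero frequency
          H = np.abs(np.fft.rfft(lsf))
          return np.fft.rfftfreq(len(lsf), d=dx), H / H[0]

      # sanity check with a Gaussian blur: analytic MTF is exp(-2 pi^2 sigma^2 f^2)
      x = np.linspace(-1.0, 1.0, 4096)
      sigma = 0.05
      freqs, H = mtf_from_lsf(np.exp(-x ** 2 / (2 * sigma ** 2)), x[1] - x[0])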

  8. A PDE-Based Fast Local Level Set Method

    NASA Astrophysics Data System (ADS)

    Peng, Danping; Merriman, Barry; Osher, Stanley; Zhao, Hongkai; Kang, Myungjoo

    1999-11-01

    We develop a fast method to localize the level set method of Osher and Sethian (1988, J. Comput. Phys.79, 12) and address two important issues that are intrinsic to the level set method: (a) how to extend a quantity that is given only on the interface to a neighborhood of the interface; (b) how to reset the level set function to be a signed distance function to the interface efficiently without appreciably moving the interface. This fast local level set method reduces the computational effort by one order of magnitude, works in as much generality as the original one, and is conceptually simple and easy to implement. Our approach differs from previous related works in that we extract all the information needed from the level set function (or functions in multiphase flow) and do not need to find explicitly the location of the interface in the space domain. The complexity of our method to do tasks such as extension and distance reinitialization is O(N), where N is the number of points in space, not O(N log N) as in works by Sethian (1996, Proc. Nat. Acad. Sci. 93, 1591) and Helmsen and co-workers (1996, SPIE Microlithography IX, p. 253). This complexity estimation is also valid for quite general geometrically based front motion for our localized method.

  9. Sparse Reconstruction for Bioluminescence Tomography Based on the Semigreedy Method

    PubMed Central

    Guo, Wei; Jia, Kebin; Zhang, Qian; Liu, Xueyan; Feng, Jinchao; Qin, Chenghu; Ma, Xibo; Yang, Xin; Tian, Jie

    2012-01-01

    Bioluminescence tomography (BLT) is a molecular imaging modality which can three-dimensionally resolve molecular processes in small animals in vivo. The ill-posed nature of the BLT problem makes its reconstruction suffer from nonunique solutions and sensitivity to noise. In this paper, we propose a sparse BLT reconstruction algorithm based on a semigreedy method. To reduce the ill-posedness and computational cost, the optimal permissible source region is chosen automatically by using an iterative search tree. The proposed method obtains fast and stable source reconstruction from the whole body and imposes constraints without using a regularization penalty term. Numerical simulations on a mouse atlas and in vivo mouse experiments were conducted to validate the effectiveness and potential of the method. PMID:22927887

  10. A Micromechanics-Based Method for Multiscale Fatigue Prediction

    NASA Astrophysics Data System (ADS)

    Moore, John Allan

    An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue, while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.

  11. CT Scanning Imaging Method Based on a Spherical Trajectory.

    PubMed

    Chen, Ping; Han, Yan; Gui, Zhiguo

    2016-01-01

    In industrial computed tomography (CT), the mismatch between the X-ray energy and the effective thickness makes it difficult to ensure the integrity of projection data using the traditional scanning model, because of the limitations of the object's complex structure. So, we have developed a CT imaging method that is based on a spherical trajectory. Considering an unrestrained trajectory for iterative reconstruction, an iterative algorithm can be used to realise the CT reconstruction of a spherical trajectory for complete projection data only. Also, an inclined circle trajectory is used as an example of a spherical trajectory to illustrate the accuracy and feasibility of this new scanning method. The simulation results indicate that the new method produces superior results for a larger cone-beam angle, a limited angle and tabular objects compared with traditional circle trajectory scanning. PMID:26934744

  12. A Swarm-Based Learning Method Inspired by Social Insects

    NASA Astrophysics Data System (ADS)

    He, Xiaoxian; Zhu, Yunlong; Hu, Kunyuan; Niu, Ben

    Inspired by the cooperative transport behavior of ants and building on Q-learning, a new learning method, the Neighbor-Information-Reference (NIR) learning method, is presented in this paper. This is a swarm-based learning method that strictly complies with the principles of swarm intelligence. In NIR learning, the i-interval neighbor's information, namely its discounted reward, is referenced when an individual selects the next state, so that it can make the best decision within a computable local neighborhood. In applications, different NIR learning policies are recommended by controlling the parameters according to the time-relativity of the concrete task. NIR learning can remarkably improve individual efficiency and make the swarm more "intelligent".

  13. Footstep Planning Based on Univector Field Method for Humanoid Robot

    NASA Astrophysics Data System (ADS)

    Hong, Youngdae; Kim, Jong-Hwan

    This paper proposes a footstep planning algorithm, based on a univector field method optimized by evolutionary programming, for a humanoid robot to arrive at a target point in a dynamic environment. The univector field method is employed to determine the moving direction of the humanoid robot at every footstep. A modifiable walking pattern generator, which extends the conventional 3D-LIPM method by allowing ZMP variation during the single support phase, is utilized to generate the joint trajectories of the robot satisfying the planned footsteps. The proposed algorithm enables the humanoid robot not only to avoid static and moving obstacles but also to step over static obstacles. The performance of the proposed algorithm is demonstrated by computer simulations using a model of the small-sized humanoid robot HanSaRam (HSR)-VIII.

  14. Decision tree based transient stability method -- A case study

    SciTech Connect

    Wehenkel, L.; Pavella, M. . Inst. Montefiore); Euxibie, E.; Heilbronn, B. . Direction des Etudes et Recherches)

    1994-02-01

    The decision tree transient stability method is revisited via a case study carried out on the French EHV power system. In short, the method consists of building off-line decision trees able to subsequently assess the system's transient behavior in terms of precontingency parameters (or "attributes") likely to drive the stability phenomena. This case study aims at investigating practical feasibility aspects and features of the trees, at enhancing their reliability to the extent possible, and at generalizing them. Feasibility aspects encompass data base generation, candidate attributes and stability classes; tree features concern in particular complexity in terms of size and interpretability, and robustness with respect to both their building and their use. Reliability is enhanced by defining and exploiting pragmatic quality measures. Generalization concerns multicontingency, instead of single-contingency, trees. The results obtained show real promise for the method to meet practical needs of electric power utilities.
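
    A toy sketch of the off-line tree-building stage using scikit-learn; the attributes, labels and hyperparameters are placeholders, not those of the French EHV study:

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(0)
      X = rng.random((5000, 8))           # precontingency attributes per scenario
      y = (X[:, 0] + 0.5 * X[:, 3] > 1).astype(int)   # stand-in stability label

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50)
      tree.fit(X_tr, y_tr)
      print("holdout accuracy:", tree.score(X_te, y_te))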

  15. CT Scanning Imaging Method Based on a Spherical Trajectory

    PubMed Central

    2016-01-01

    In industrial computed tomography (CT), the mismatch between the X-ray energy and the effective thickness makes it difficult to ensure the integrity of projection data using the traditional scanning model, because of the limitations of the object’s complex structure. So, we have developed a CT imaging method that is based on a spherical trajectory. Considering an unrestrained trajectory for iterative reconstruction, an iterative algorithm can be used to realise the CT reconstruction of a spherical trajectory for complete projection data only. Also, an inclined circle trajectory is used as an example of a spherical trajectory to illustrate the accuracy and feasibility of this new scanning method. The simulation results indicate that the new method produces superior results for a larger cone-beam angle, a limited angle and tabular objects compared with traditional circle trajectory scanning. PMID:26934744

  16. Traffic Speed Data Imputation Method Based on Tensor Completion

    PubMed Central

    Ran, Bin; Feng, Jianshuai; Liu, Ying; Wang, Wuhong

    2015-01-01

    Traffic speed data plays a key role in Intelligent Transportation Systems (ITS); however, missing traffic data affects the performance of ITS as well as Advanced Traveler Information Systems (ATIS). In this paper, we handle this issue with a novel tensor-based imputation approach. Specifically, a tensor pattern is adopted for modeling traffic speed data, and then High accurate Low Rank Tensor Completion (HaLRTC), an efficient tensor completion method, is employed to estimate the missing traffic speed data. The proposed method is able to recover missing entries from the given entries, which may be noisy given the severe fluctuation of traffic speed data compared with traffic volume. The proposed method is evaluated on the Performance Measurement System (PeMS) database, and the experimental results show the superiority of the proposed approach over state-of-the-art baseline approaches. PMID:25866501
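
    HaLRTC soft-thresholds the singular values of every unfolding of the traffic tensor; as a simplified stand-in, the same shrink-and-reimpose iteration on a single (e.g. location x time) matrix unfolding looks like this (the threshold tau and iteration count are arbitrary choices):

      import numpy as np

      def complete_matrix(M, observed, tau=5.0, n_iter=200):
          # M: speed matrix, arbitrary values at missing entries
          # observed: boolean mask of known entries
          X = np.where(observed, M, 0.0)
          for _ in range(n_iter):
              U, s, Vt = np.linalg.svd(X, full_matrices=False)
              X = (U * np.maximum(s - tau, 0.0)) @ Vt   # singular-value shrinkage
              X[observed] = M[observed]                 # re-impose known data
          return X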

  17. Grid-based Methods in Relativistic Hydrodynamics and Magnetohydrodynamics

    NASA Astrophysics Data System (ADS)

    Martí, José María; Müller, Ewald

    2015-12-01

    An overview of grid-based numerical methods used in relativistic hydrodynamics (RHD) and magnetohydrodynamics (RMHD) is presented. Special emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods. Results of a set of demanding test bench simulations obtained with different numerical methods are compared in an attempt to assess the present capabilities and limits of the various numerical strategies. Applications to three astrophysical phenomena are briefly discussed to motivate the need for and to demonstrate the success of RHD and RMHD simulations in their understanding. The review further provides FORTRAN programs to compute the exact solution of the Riemann problem in RMHD, and to simulate 1D RMHD flows in Cartesian coordinates.

  18. A protein structural class prediction method based on novel features.

    PubMed

    Zhang, Lichao; Zhao, Xiqiang; Kong, Liang

    2013-09-01

    In this study, a 12-dimensional feature vector is constructed to reflect the general contents and spatial arrangements of the secondary structural elements of a given protein sequence. Among the 12 features, 6 novel features are specially designed to improve the prediction accuracies for the α/β and α + β classes, based on the distributions of α-helices and β-strands and the characteristics of parallel and anti-parallel β-sheets. To evaluate our method, the jackknife cross-validation test is employed on two widely used datasets, the 25PDB and 1189 datasets, with sequence similarity lower than 40% and 25%, respectively. The performance of our method surpasses the recently reported methods in most cases, and the 6 newly designed features have a significant positive effect on the prediction accuracies, especially for the α/β and α + β classes. PMID:23770446

  19. Method to find community structures based on information centrality

    NASA Astrophysics Data System (ADS)

    Fortunato, Santo; Latora, Vito; Marchiori, Massimo

    2004-11-01

    Community structures are an important feature of many social, biological, and technological networks. Here we study a variation on the method for detecting such communities proposed by Girvan and Newman and based on the idea of using centrality measures to define the community boundaries [M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002)]. We develop a hierarchical clustering algorithm that consists of iteratively finding and removing the edge with the highest information centrality. We test the algorithm on computer-generated and real-world networks whose community structure is already known or has been studied by means of other methods. We show that our algorithm, although it runs to completion in a time O(n⁴), is very effective, especially when the communities are very mixed and hardly detectable by the other methods.
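
    The edge information centrality used here is the relative drop in network efficiency caused by removing the edge; a compact networkx sketch of the resulting divisive procedure (stopping at the first split, for brevity):

      import networkx as nx

      def efficiency_drop(G, base, edge):
          H = G.copy()
          H.remove_edge(*edge)
          return base - nx.global_efficiency(H)

      G = nx.karate_club_graph()
      while nx.is_connected(G):
          base = nx.global_efficiency(G)
          worst = max(G.edges(), key=lambda e: efficiency_drop(G, base, e))
          G.remove_edge(*worst)        # remove the most central edge

      print([sorted(c) for c in nx.connected_components(G)])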

  20. Gradient-based image recovery methods from incomplete Fourier measurements.

    PubMed

    Patel, Vishal M; Maleh, Ray; Gilbert, Anna C; Chellappa, Rama

    2012-01-01

    A major problem in imaging applications such as magnetic resonance imaging and synthetic aperture radar is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of compressive sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation (TV) minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach. Since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding TV image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem, i.e., one based on least-square optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods. PMID:21690011

  1. FBG interrogation method based on wavelength-swept laser

    NASA Astrophysics Data System (ADS)

    Qin, Chuan; Zhao, Jianlin; Jiang, Biqiang; Rauf, Abdul; Wang, Donghui; Yang, Dexing

    2013-06-01

    The wavelength-swept laser technique is an active demodulation method which integrates the laser source and the detection circuit to achieve a compact size. The method also has advantages such as a large demodulation range, high accuracy, and comparatively high speed. In this paper, we present an FBG interrogation method based on a wavelength-swept laser, in which an erbium-doped fiber is used as the gain medium and connected by a WDM to form a ring cavity; a fiber FP tunable filter is inserted in the loop to select the laser frequency, and a gas absorption cell is adopted as a frequency reference. The laser wavelength is swept by driving the FP filter. When the laser wavelength matches that of an FBG sensor, a strong reflection peak signal appears. By detecting such signals synchronously with the transmission signal after the gas absorption cell and analyzing them, the center wavelengths of the FBG sensors are calculated. Here, we discuss the data processing method based on the frequency reference and experimentally study the characteristics of the swept laser. Finally, we use this interrogator to demodulate FBG stress sensors. The results show that the demodulation range almost covers the C+L band, and the resolution and accuracy reach about 1 pm or less and 5 pm, respectively, making it very suitable for most FBG measurements.
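
    The final peak-location step of such an interrogator can be sketched with scipy's peak finder; the sweep range, grating linewidths and threshold below are invented for illustration:

      import numpy as np
      from scipy.signal import find_peaks

      wl = np.linspace(1530.0, 1560.0, 30000)       # nm, assumed sweep range
      # synthetic reflection trace with FBGs at 1542.00 nm and 1551.30 nm
      trace = (np.exp(-((wl - 1542.00) / 0.05) ** 2)
               + np.exp(-((wl - 1551.30) / 0.05) ** 2))

      peaks, _ = find_peaks(trace, height=0.5)
      print(wl[peaks])   # center wavelengths; pm accuracy needs centroid fitting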

  2. Amplification-based method for microRNA detection.

    PubMed

    Shen, Yanting; Tian, Fei; Chen, Zhenzhu; Li, Rui; Ge, Qinyu; Lu, Zuhong

    2015-09-15

    Over the last two decades, the study of miRNAs has attracted tremendous attention, since they regulate gene expression post-transcriptionally and have been shown to be dysregulated in many diseases. Detection methods with higher sensitivity, specificity and selectivity between precursor and mature microRNAs are urgently needed and widely studied. This review gives an overview of amplification-based technologies, including traditional methods, current modified methods, and cross-platform approaches that combine them with other techniques. Much progress has been made in modified amplification-based microRNA detection methods, although traditional platforms have not yet been replaced. Several sample-specific normalizers have been validated, suggesting that different normalizers should be established for different sample types and that a combination of several normalizers might be more appropriate than a single universal normalizer. This systematic overview provides comprehensive information for subsequent related studies and could reduce unnecessary repetition in the future. PMID:25930002

  3. A MUSIC-based method for SSVEP signal processing.

    PubMed

    Chen, Kun; Liu, Quan; Ai, Qingsong; Zhou, Zude; Xie, Sheng Quan; Meng, Wei

    2016-03-01

    Research on brain computer interfaces (BCIs) has become a hotspot in recent years because BCIs allow disabled people to communicate with the outside world. Steady state visual evoked potential (SSVEP)-based BCIs are widely used because of their higher signal-to-noise ratio and greater information transfer rate compared with other BCI techniques. In this paper, a multiple signal classification (MUSIC)-based method is proposed for multi-dimensional SSVEP feature extraction. Two-second data epochs from four electrodes achieved excellent accuracy rates, including idle state detection. In some asynchronous-mode experiments, the recognition accuracy reached up to 100%. The experimental results show that the proposed method attains good frequency resolution. In most situations, the recognition accuracy was higher than that of canonical correlation analysis, a typical method for multi-channel SSVEP signal processing. Also, a virtual keyboard was successfully controlled by different subjects in an unshielded environment, which proves the feasibility of the proposed method for multi-dimensional SSVEP signal processing in practical applications. PMID:26831487
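
    A minimal sketch of scoring candidate SSVEP frequencies with classic temporal MUSIC (time-delay embedding plus noise-subspace projection); the embedding size and subspace dimension are assumptions, and the paper's multi-channel processing is omitted:

      import numpy as np

      def music_scores(x, fs, freqs, m=40, p=6):
          # x: one EEG channel; m: embedding dimension; p: signal-subspace size
          X = np.array([x[i:i + m] for i in range(len(x) - m)])
          R = X.T @ X / len(X)              # autocorrelation matrix estimate
          _, V = np.linalg.eigh(R)          # eigenvectors, eigenvalues ascending
          En = V[:, :m - p]                 # noise subspace
          n = np.arange(m)
          scores = []
          for f in freqs:
              a = np.exp(2j * np.pi * f / fs * n)    # steering vector at f
              scores.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
          return np.asarray(scores)         # argmax over the stimulus frequencies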

  4. [Galaxy/quasar classification based on nearest neighbor method].

    PubMed

    Li, Xiang-Ru; Lu, Yu; Zhou, Jian-Ming; Wang, Yong-Jun

    2011-09-01

    With the wide application of high-quality CCDs in celestial spectral imaging and the implementation of many large sky survey programs (e.g., the Sloan Digital Sky Survey (SDSS), the Two-degree-Field Galaxy Redshift Survey (2dF), the Spectroscopic Survey Telescope (SST), the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) program and the Large Synoptic Survey Telescope (LSST) program, etc.), celestial observational data are pouring in like torrential rain. To utilize them effectively and fully, research on automated processing methods for celestial data is imperative. In the present work, we investigated how to recognize galaxies and quasars from spectra based on the nearest neighbor method. Galaxies and quasars are extragalactic objects; they are far away from Earth, and their spectra are usually contaminated by various kinds of noise. Recognizing these two types of spectra is therefore a typical problem in automatic spectral classification. Furthermore, the nearest neighbor method is one of the most typical, classic and mature algorithms in pattern recognition and data mining, and is often used as a benchmark when developing novel algorithms. Regarding applicability in practice, it is shown that the recognition rate of the nearest neighbor method (NN) is comparable to the best results reported in the literature based on more complicated methods, and the superiority of NN is that it does not need to be trained, which is useful for incremental learning and parallel computation in processing mass spectral data. In conclusion, the results of this work are helpful for the study of galaxy and quasar spectral classification. PMID:22097877
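
    The training-free character noted above is visible in a minimal scikit-learn baseline (the spectra and labels below are stand-ins):

      import numpy as np
      from sklearn.neighbors import KNeighborsClassifier

      rng = np.random.default_rng(0)
      X = rng.random((200, 500))       # 200 spectra x 500 flux bins (stand-in)
      y = rng.integers(0, 2, 200)      # 0 = galaxy, 1 = quasar

      # "fitting" 1-NN just stores the labelled spectra; no training phase
      clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
      print(clf.predict(X[:5]))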

  5. Hydrologic regionalization using wavelet-based multiscale entropy method

    NASA Astrophysics Data System (ADS)

    Agarwal, A.; Maheswaran, R.; Sehgal, V.; Khosa, R.; Sivakumar, B.; Bernhofer, C.

    2016-07-01

    Catchment regionalization is an important step in estimating the hydrologic parameters of ungaged basins. This paper proposes a multiscale entropy method using a hybrid approach based on the wavelet transform and k-means for clustering hydrologic catchments. The multi-resolution wavelet transform of a time series reveals structure that is often obscured in streamflow records by permitting gross and fine features of a signal to be separated. Wavelet-based Multiscale Entropy (WME) is a measure of the randomness of a given time series at different timescales. In this study, streamflow records observed during 1951-2002 at 530 selected catchments throughout the United States are used to test the proposed regionalization framework. Further, based on the pattern of entropy across multiple scales, each cluster is given an entropy signature that approximates the entropy pattern of the streamflow data in that cluster. Tests for homogeneity reveal that the proposed approach works very well for regionalization.
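
    One plausible reading of the WME-plus-k-means pipeline as code (the wavelet family, decomposition depth and cluster count are assumptions, and the streamflow series are stand-ins):

      import numpy as np
      import pywt
      from sklearn.cluster import KMeans

      def wme_signature(flow, wavelet="db4", levels=6):
          # Shannon entropy of normalized detail-coefficient energy, per scale
          coeffs = pywt.wavedec(flow, wavelet, level=levels)
          sig = []
          for detail in coeffs[1:]:
              e = detail ** 2
              e = e / e.sum()
              sig.append(-np.sum(e * np.log(e + 1e-12)))
          return np.array(sig)

      rng = np.random.default_rng(0)
      streamflow_records = [rng.random(2048) for _ in range(50)]  # stand-ins
      signatures = np.array([wme_signature(q) for q in streamflow_records])
      regions = KMeans(n_clusters=8, n_init=10).fit_predict(signatures)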

  6. Current trends in virtual high throughput screening using ligand-based and structure-based methods.

    PubMed

    Sukumar, Nagamani; Das, Sourav

    2011-12-01

    High throughput in silico methods have offered the tantalizing potential to drastically accelerate the drug discovery process. Yet despite significant efforts expended by academia, national labs and industry over the years, many of these methods have not lived up to their initial promise of reducing the time and costs associated with the drug discovery enterprise, a process that can typically take over a decade and cost hundreds of millions of dollars from conception to final approval and marketing of a drug. Nevertheless structure-based modeling has become a mainstay of computational biology and medicinal chemistry, helping to leverage our knowledge of the biological target and the chemistry of protein-ligand interactions. While ligand-based methods utilize the chemistry of molecules that are known to bind to the biological target, structure-based drug design methods rely on knowledge of the three-dimensional structure of the target, as obtained through crystallographic, spectroscopic or bioinformatics techniques. Here we review recent developments in the methodology and applications of structure-based and ligand-based methods and target-based chemogenomics in Virtual High Throughput Screening (VHTS), highlighting some case studies of recent applications, as well as current research in further development of these methods. The limitations of these approaches will also be discussed, to give the reader an indication of what might be expected in years to come. PMID:21843144

  7. Effect of changing journal clubs from traditional method to evidence-based method on psychiatry residents

    PubMed Central

    Faridhosseini, Farhad; Saghebi, Ali; Khadem-Rezaiyan, Majid; Moharari, Fatemeh; Dadgarmoghaddam, Maliheh

    2016-01-01

    Introduction Journal club is a valuable educational tool in the medical field. This method follows different goals. This study aims to investigate the effect on psychiatry residents of changing journal clubs from the traditional method to the evidence-based method. Method This study was conducted using a before–after design. First- and second-year residents of psychiatry were included in the study. First, the status quo was evaluated by standardized questionnaire regarding the effect of journal club. Then, ten sessions were held to familiarize the residents with the concept of journal club. After that, evidence-based journal club sessions were held. The questionnaire was given to the residents again after the final session. Data were analyzed through descriptive statistics (frequency and percentage frequency, mean and standard deviation), and analytic statistics (paired t-test) using SPSS 22. Results Of a total of 20 first- and second-year residents of psychiatry, the data of 18 residents were finally analyzed. Most of the subjects (17 [93.7%]) were females. The mean overall score before and after the intervention was 1.83±0.45 and 2.85±0.57, respectively, which showed a significant increase (P<0.001). Conclusion Moving toward evidence-based journal clubs seems like an appropriate measure to reach the goals set by this educational tool. PMID:27570469

  8. Global seismic waveform tomography based on the spectral element method.

    NASA Astrophysics Data System (ADS)

    Capdeville, Y.; Romanowicz, B.; Gung, Y.

    2003-04-01

    Because seismogram waveforms contain much more information on the earth structure than body wave time arrivals or surface wave phase velocities, inversion of complete time-domain seismograms should allow much better resolution in global tomography. In order to achieve this, accurate methods for the calculation of forward propagation of waves in a 3D earth need to be utilized, which presents theoretical as well as computational challenges. In the past 8 years, we have developed several global 3D S velocity models based on long period waveform data, and a normal mode asymptotic perturbation formalism (NACT, Li and Romanowicz, 1996). While this approach is relatively accessible from the computational point of view, it relies on the assumption of smooth heterogeneity in a single scattering framework. Recently, the introduction of the spectral element method (SEM) has been a major step forward in the computation of seismic waveforms in a global 3D earth with no restrictions on the size of heterogeneities (Chaljub, 2000). While this method is computationally heavy when the goal is to compute large numbers of seismograms down to typical body wave periods (1-10 sec), it is much more accessible when restricted to low frequencies (T>150sec). When coupled with normal modes (e.g. Capdeville et al., 2000), the numerical computation can be restricted to a spherical shell within which heterogeneity is considered, further reducing the computational time. Here, we present a tomographic method based on the non linear least square inversion of time domain seismograms using the coupled method of spectral elements and modal solution. SEM/modes are used for both the forward modeling and to compute partial derivatives. The parametrisation of the model is also based on the spectral element mesh, the "cubed sphere" (Sadourny, 1972), which leads to a 3D local polynomial parametrization. This parametrization, combined with the excellent earth coverage resulting from the full 3D theory used

  9. An analytical method for Mathieu oscillator based on method of variation of parameter

    NASA Astrophysics Data System (ADS)

    Li, Xianghong; Hou, Jingyu; Chen, Jufeng

    2016-08-01

    A simple but very accurate analytical method for the forced Mathieu oscillator is proposed, based on the method of variation of parameters. If the time-varying parameter in the Mathieu oscillator is assumed constant, its exact analytical solution is easily obtained. An approximate analytical solution for the Mathieu oscillator is then established by substituting the periodic time-varying parameter for the constant one in this exact solution. To verify the correctness and precision of the proposed method, first-order and ninth-order approximations by the harmonic balance method (HBM) are also presented. Comparisons of the proposed method with numerical simulation and with the HBM show that its results agree very well with the numerical simulation. Moreover, the precision of the new analytical method is not only higher than that of the first-order HBM approximation, but also better than that of the ninth-order HBM approximation over large ranges of the system parameters.

  10. Mode separation of Lamb waves based on dispersion compensation method.

    PubMed

    Xu, Kailiang; Ta, Dean; Moilanen, Petro; Wang, Weiqi

    2012-04-01

    Ultrasonic Lamb modes typically propagate as a combination of multiple dispersive wave packets. Frequency components of each mode distribute widely in time domain due to dispersion and it is very challenging to separate individual modes by traditional signal processing methods. In the present study, a method of dispersion compensation is proposed for the purpose of mode separation. This numerical method compensates, i.e., compresses, the individual dispersive waveforms into temporal pulses, which thereby become nearly un-overlapped in time and frequency and can thus be extracted individually by rectangular time windows. It was further illustrated that the dispersion compensation also provided a method for predicting the plate thickness. Finally, based on reversibility of the numerical compensation method, an artificial dispersion technique was used to restore the original waveform of each mode from the separated compensated pulse. Performances of the compensation separation techniques were evaluated by processing synthetic and experimental signals which consisted of multiple Lamb modes with high dispersion. Individual modes were extracted with good accordance with the original waveforms and theoretical predictions. PMID:22501050

  11. TRUST-TECH based Methods for Optimization and Learning

    NASA Astrophysics Data System (ADS)

    Reddy, Chandan K.

    2007-12-01

    Many problems that arise in the machine learning domain deal with nonlinearity and quite often demand global optimal solutions rather than local ones. Optimization problems are inherent in machine learning algorithms, and hence many methods in machine learning were inherited from the optimization literature. In what is popularly known as the initialization problem, the quality of the parameters obtained depends significantly on the given initialization values. The recently developed TRUST-TECH (TRansformation Under STability-reTaining Equilibria CHaracterization) methodology systematically explores the subspace of the parameters to obtain a complete set of local optimal solutions. In this thesis work, we propose TRUST-TECH based methods for solving several optimization and machine learning problems. Two stages, namely the local stage and the neighborhood-search stage, are repeated alternately in the solution space to improve the quality of the solutions. Our methods were tested on both synthetic and real datasets, and the advantages of using this novel framework are clearly manifested. The framework not only reduces sensitivity to initialization, but also gives practitioners the flexibility to use various global and local methods that work well for a particular problem of interest. Other hierarchical stochastic algorithms, such as evolutionary algorithms and smoothing algorithms, are also studied, and frameworks for combining these methods with TRUST-TECH have been proposed and evaluated on several test systems.

  12. Scene-based nonuniformity correction method using multiscale constant statistics

    NASA Astrophysics Data System (ADS)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Sui, Xiubao; Qian, Weixian

    2011-08-01

    In scene-based nonuniformity correction (NUC) methods for infrared focal plane array cameras, the statistical approaches have been well studied because of their lower computational complexity. However, when the assumptions imposed by statistical algorithms are violated, their performance is poor. Moreover, many of these techniques, like the global constant statistics method, usually need tens of thousands of image frames to obtain a good NUC result. In this paper, we introduce a new statistical NUC method called the multiscale constant statistics (MSCS). The MSCS statically considers that the spatial scale of the temporal constant distribution expands over time. Under the assumption that the nonuniformity is distributed in a higher spatial frequency domain, the spatial range for gain and offset estimates gradually expands to guarantee fast compensation for nonuniformity. Furthermore, an exponential window and a tolerance interval for the acquired data are introduced to capture the drift in nonuniformity and eliminate the ghosting artifacts. The strength of the proposed method lies in its simplicity, low computational complexity, and its good trade-off between convergence rate and correction precision. The NUC ability of the proposed method is demonstrated by using infrared video sequences with both synthetic and real nonuniformity.
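
    A stripped-down sketch of the constant-statistics core with an exponential window (the multiscale expansion and the ghosting-suppression tolerance interval are omitted, and the window weight is an assumption):

      import numpy as np

      class ConstantStatisticsNUC:
          def __init__(self, shape, alpha=0.01):
              self.alpha = alpha              # exponential-window weight
              self.mean = np.zeros(shape)     # per-pixel offset estimate
              self.var = np.ones(shape)       # per-pixel gain estimate (squared)

          def correct(self, frame):
              a = self.alpha
              self.mean = (1 - a) * self.mean + a * frame
              self.var = (1 - a) * self.var + a * (frame - self.mean) ** 2
              std = np.sqrt(self.var) + 1e-6
              # normalize each pixel, then map to the array-wide statistics
              return (frame - self.mean) / std * std.mean() + self.mean.mean()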

  13. Assessment of mesoscopic particle-based methods in microfluidic geometries

    NASA Astrophysics Data System (ADS)

    Zhao, Tongyang; Wang, Xiaogong; Jiang, Lei; Larson, Ronald G.

    2013-08-01

    We assess the accuracy and efficiency of two particle-based mesoscopic simulation methods, namely, Dissipative Particle Dynamics (DPD) and Stochastic Rotation Dynamics (SRD) for predicting a complex flow in a microfluidic geometry. Since both DPD and SRD use soft or weakly interacting particles to carry momentum, both methods contain unavoidable inertial effects and unphysically high fluid compressibility. To assess these effects, we compare the predictions of DPD and SRD for both an exact Stokes-flow solution and nearly exact solutions at finite Reynolds numbers from the finite element method for flow in a straight channel with periodic slip boundary conditions. This flow represents a periodic electro-osmotic flow, which is a complex flow with an analytical solution for zero Reynolds number. We find that SRD is roughly ten-fold faster than DPD in predicting the flow field, with better accuracy at low Reynolds numbers. However, SRD has more severe problems with compressibility effects than does DPD, which limits the Reynolds numbers attainable in SRD to around 25-50, while DPD can achieve Re higher than this before compressibility effects become too large. However, since the SRD method runs much faster than DPD does, we can afford to enlarge the number of grid cells in SRD to reduce the fluid compressibility at high Reynolds number. Our simulations provide a method to estimate the range of conditions for which SRD or DPD is preferable for mesoscopic simulations.

  14. Selection of Construction Methods: A Knowledge-Based Approach

    PubMed Central

    Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects. PMID:24453925

  15. DGGE-based detection method for Quahog Parasite Unknown (QPX).

    PubMed

    Gast, R J; Cushman, E; Moran, D M; Uhlinger, K R; Leavitt, D; Smolowitz, R

    2006-06-12

    Quahog Parasite Unknown (QPX) is a significant cause of hard clam Mercenaria mercenaria mortality along the northeast coast of the United States. It infects both wild and cultured clams, often annually in plots that are heavily farmed. Subclinically infected clams can be identified by histological examination of the mantle tissue, but there is currently no method available to monitor the presence of QPX in the environment. Here, we report on a polymerase chain reaction (PCR)-based method that will facilitate the detection of QPX in natural samples and seed clams. With our method, between 10 and 100 QPX cells can be detected in 1 l of water, 1 g of sediment and 100 mg of clam tissue. Denaturing gradient gel electrophoresis (DGGE) is used to establish whether the PCR products are the same as those in the control QPX culture. We used the method to screen 100 seed clams of 15 mm, and found that 10 to 12% of the clams were positive for the presence of the QPX organism. This method represents a reliable and sensitive procedure for screening both environmental samples and potentially contaminated small clams. PMID:16875398

  16. Variation block-based genomics method for crop plants

    PubMed Central

    2014-01-01

    Background In contrast with wild species, cultivated crop genomes consist of reshuffled recombination blocks, which occurred by crossing and selection processes. Accordingly, recombination block-based genomics analysis can be an effective approach for the screening of target loci for agricultural traits. Results We propose the variation block method, which is a three-step process for recombination block detection and comparison. The first step is to detect variations by comparing the short-read DNA sequences of the cultivar to the reference genome of the target crop. Next, sequence blocks with variation patterns are examined and defined. The boundaries between the variation-containing sequence blocks are regarded as recombination sites. All the assumed recombination sites in the cultivar set are used to split the genomes, and the resulting sequence regions are termed variation blocks. Finally, the genomes are compared using the variation blocks. The variation block method identified recurring recombination blocks accurately and successfully represented block-level diversities in the publicly available genomes of 31 soybean and 23 rice accessions. The practicality of this approach was demonstrated by the identification of a putative locus determining soybean hilum color. Conclusions We suggest that the variation block method is an efficient genomics method for the recombination block-level comparison of crop genomes. We expect that this method will facilitate the development of crop genomics by bringing genomics technologies to the field of crop breeding. PMID:24929792

  17. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies had been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This makes it possible to combine the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. PMID:23246613

  18. A beam hardening correction method based on HL consistency

    NASA Astrophysics Data System (ADS)

    Mou, Xuanqin; Tang, Shaojie; Yu, Hengyong

    2006-08-01

    X-ray CT with a polychromatic tube spectrum suffers from artifacts known as the beam hardening effect. In current CT devices the correction is carried out with an a priori polynomial obtained from water phantom experiments. This paper proposes a new beam hardening correction algorithm in which the correction polynomial depends on the mutual consistency of the projection data across angles, as expressed by the Helgason-Ludwig Consistency (HL Consistency). Firstly, a bi-polynomial is constructed to characterize the beam hardening effect based on the physical model of medical x-ray imaging. In this bi-polynomial, a factor r(γ,β) represents the ratio of the attenuation contributions of high-density material (bone, etc.) to low-density material (muscle, vessel, blood, soft tissue, fat, etc.) at projection angle β and fan angle γ. Secondly, setting r(γ,β)=0 degrades the bi-polynomial to a single polynomial whose coefficients can be calculated based on HL Consistency. This yields the primary correction, which is also theoretically more efficient than the correction method in current CT devices. Thirdly, r(γ,β) can be estimated from a normal CT reconstruction of the corrected projection data. Fourthly, the coefficients of the bi-polynomial can likewise be calculated based on HL Consistency, and the final correction is achieved. Experiments with circular cone-beam CT show that the method has excellent properties. Correcting the beam hardening effect based on HL Consistency not only achieves a self-adaptive and more precise correction, but also eliminates the inconvenience of regular water phantom experiments, and could renew the correction technique of current CT devices.
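
    A toy version of the consistency-based fitting step, under the simplifying assumptions of parallel-beam projections (whose zeroth Helgason-Ludwig moment is angle-invariant) and a linear term fixed at 1 to exclude the trivial zero solution; this is not the paper's exact bi-polynomial procedure:

      import numpy as np
      from scipy.optimize import minimize

      def fit_correction_poly(proj, order=3):
          # proj: (n_angles, n_bins) polychromatic parallel-beam sinogram
          powers = np.stack([proj ** k for k in range(2, order + 1)], axis=-1)

          def moment_spread(c_high):
              corrected = proj + powers @ c_high   # p + c2*p^2 + c3*p^3 + ...
              m0 = corrected.sum(axis=1)           # zeroth moment per angle
              return np.var(m0)                    # HL: should not vary with angle

          res = minimize(moment_spread, np.zeros(order - 1),
                         method="Nelder-Mead")
          return res.x                             # higher-order coefficients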

  19. A supervoxel-based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a supervoxel-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature. The geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate. A 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to the manual segmentation ground truth. The experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.

  20. Tunnel Point Cloud Filtering Method Based on Elliptic Cylindrical Model

    NASA Astrophysics Data System (ADS)

    Zhu, Ningning; Jia, Yonghong; Luo, Lun

    2016-06-01

    The large number of bolts and screws attached to the subway shield ring plates, along with the many metal stents and electrical equipment accessories mounted on the tunnel walls, cause the laser point cloud data to include many points that do not belong to the tunnel section (hereinafter referred to as non-points), which affects the accuracy of modeling and deformation monitoring. This paper proposes a filtering method for the point cloud based on an elliptic cylindrical model. The original laser point cloud data is first projected onto a horizontal plane, and a search algorithm extracts the edge points of both sides, which are then used to fit the tunnel central axis. Along the axis the point cloud is segmented into regions and then iteratively fitted to a smooth elliptic cylindrical surface. This processing enables the automatic filtering of the non-points on the inner wall. Two groups of experiments showed consistent results: the elliptic cylindrical model based method effectively filters out the non-points and meets the accuracy requirements for subway deformation monitoring. The method provides a new mode for the periodic monitoring of all-around tunnel section deformation in routine subway operation and maintenance.
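
    As an illustration of the fit-and-filter idea, the following minimal Python sketch works on a single projected 2D cross-section rather than the full elliptic cylinder: a conic is fitted to the points by linear least squares, and points whose algebraic residual exceeds a threshold are discarded as non-points. The helper names and the threshold are illustrative, and the paper iterates this fit along the segmented axis.

        import numpy as np

        def fit_conic(x, y):
            # Algebraic least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1,
            # a simplified stand-in for the paper's iterative elliptic cylinder fit.
            A = np.column_stack([x*x, x*y, y*y, x, y])
            coef, *_ = np.linalg.lstsq(A, np.ones_like(x), rcond=None)
            return coef

        def filter_section(x, y, threshold=0.05):
            coef = fit_conic(x, y)
            resid = np.abs(np.column_stack([x*x, x*y, y*y, x, y]) @ coef - 1.0)
            keep = resid < threshold          # small residual = on the ellipse
            return x[keep], y[keep], keep

        # Toy usage: a noisy ellipse plus a few protruding "non-points".
        t = np.linspace(0, 2*np.pi, 400)
        x = 3.0*np.cos(t) + 0.01*np.random.randn(t.size)
        y = 2.0*np.sin(t) + 0.01*np.random.randn(t.size)
        x[:5] += 0.5                          # simulate bolts/brackets
        xf, yf, mask = filter_section(x, y)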

  1. Warped document image correction method based on heterogeneous registration strategies

    NASA Astrophysics Data System (ADS)

    Tong, Lijing; Zhan, Guoliang; Peng, Quanyao; Li, Yang; Li, Yifan

    2013-03-01

    With the popularity of digital cameras and the growing demand for digitized document images, using digital cameras to digitize documents has become an irresistible trend. However, warping of the document surface seriously degrades the quality of Optical Character Recognition (OCR). To improve the visual quality and the OCR rate of warped document images, this paper proposes a correction method based on heterogeneous registration strategies. The method mosaics two warped images of the same document taken from different viewpoints. Firstly, two feature points are selected from one image. Then the two feature points are registered in the other image based on heterogeneous registration strategies. Finally, the two images are mosaiced, and the best mosaiced image is selected according to OCR recognition results. In the best mosaiced image, the distortions are mostly removed and the OCR results are improved markedly. Experimental results show that the proposed method resolves the issue of warped document image correction effectively.

  2. Hybrid perturbation methods based on statistical time series models

    NASA Astrophysics Data System (ADS)

    San-Juan, Juan Félix; San-Martín, Montserrat; Pérez, Iván; López, Rosario

    2016-04-01

    In this work we present a new methodology for orbit propagation, the hybrid perturbation theory, based on the combination of an integration method and a prediction technique. The former, which can be a numerical, analytical or semianalytical theory, generates an initial approximation that contains some inaccuracies: in order to simplify the expressions and subsequent computations, not all the involved forces are taken into account and only low-order terms are considered, and mathematical models of perturbations do not always reproduce physical phenomena with absolute precision. The prediction technique, which can be based on either statistical time series models or computational intelligence methods, is aimed at modelling and reproducing the dynamics missing from the previously integrated approximation. This combination improves the precision of conventional numerical, analytical and semianalytical theories for determining the position and velocity of any artificial satellite or space debris object. In order to validate this methodology, we present a family of three hybrid orbit propagators, each combining a different order of approximation of an analytical theory with a statistical time series model, and analyse their capability to process the effect produced by the flattening of the Earth. The three analytical components are the integration of the Kepler problem, a first-order analytical theory and a second-order analytical theory, whereas the prediction technique is the same in all three cases, namely an additive Holt-Winters method.
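
    A minimal sketch of the hybrid idea on synthetic data, assuming the statsmodels implementation of additive Holt-Winters: residuals between a "truth" signal and an incomplete analytical approximation are modelled on a fitting window and forecast ahead, and the hybrid output adds the forecast back to the analytical solution. All signals, window lengths and periods here are illustrative.

        import numpy as np
        from statsmodels.tsa.holtwinters import ExponentialSmoothing

        # Synthetic stand-in: "truth" = secular drift + periodic perturbation,
        # "analytic" = a low-order theory that misses part of the dynamics.
        t = np.arange(0, 200)                       # epochs
        truth = 0.05*t + 0.8*np.sin(2*np.pi*t/14.0)
        analytic = 0.045*t                          # incomplete integration
        resid = truth - analytic                    # missing dynamics

        # Fit additive Holt-Winters on the residuals of a fitting window,
        # then forecast them over the prediction horizon.
        window = 150
        model = ExponentialSmoothing(resid[:window], trend="add",
                                     seasonal="add", seasonal_periods=14).fit()
        forecast = model.forecast(len(t) - window)

        hybrid = analytic[window:] + forecast       # hybrid propagator output
        print(np.abs(hybrid - truth[window:]).max())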

  3. Integrated method for the measurement of trace atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-09-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace atmospheric nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications, as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  4. Integrated method for the measurement of trace nitrogenous atmospheric bases

    NASA Astrophysics Data System (ADS)

    Key, D.; Stihle, J.; Petit, J.-E.; Bonnet, C.; Depernon, L.; Liu, O.; Kennedy, S.; Latimer, R.; Burgoyne, M.; Wanger, D.; Webster, A.; Casunuran, S.; Hidalgo, S.; Thomas, M.; Moss, J. A.; Baum, M. M.

    2011-12-01

    Nitrogenous atmospheric bases are thought to play a key role in the global nitrogen cycle, but their sources, transport, and sinks remain poorly understood. Of the many methods available to measure such compounds in ambient air, few meet the current need of being applicable to the complete range of potential analytes and fewer still are convenient to implement using instrumentation that is standard to most laboratories. In this work, an integrated approach to measuring trace, atmospheric, gaseous nitrogenous bases has been developed and validated. The method uses a simple acid scrubbing step to capture and concentrate the bases as their phosphite salts, which then are derivatized and analyzed using GC/MS and/or LC/MS. The advantages of both techniques in the context of the present measurements are discussed. The approach is sensitive, selective, reproducible, as well as convenient to implement and has been validated for different sampling strategies. The limits of detection for the families of tested compounds are suitable for ambient measurement applications (e.g., methylamine, 1 pptv; ethylamine, 2 pptv; morpholine, 1 pptv; aniline, 1 pptv; hydrazine, 0.1 pptv; methylhydrazine, 2 pptv), as supported by field measurements in an urban park and in the exhaust of on-road vehicles.

  5. A Robust PCT Method Based on Complex Least Squares Adjustment Method

    NASA Astrophysics Data System (ADS)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving vegetation vertical structure. However, errors in temporal decorrelation, vegetation height and ground phase always propagate into the data analysis and contaminate the results. In order to overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimations of temporal decorrelation and vegetation height, which we then introduce into PCT to acquire a more accurate vegetation vertical structure. Finally the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the acquisition of vegetation vertical structure.

  6. A new smartphone-based method for wound area measurement.

    PubMed

    Foltynski, Piotr; Ladyzynski, Piotr; Wojcicki, Jan M

    2014-04-01

    Proper wound healing can be assessed by monitoring the wound surface area. Its reduction by 10 or 50% should be achieved after 1 or 4 weeks, respectively, from the start of the applied therapy. There are various methods of wound area measurement, which differ in terms of the cost of the devices and their accuracy. This article presents an originally developed method for wound area measurement. It is based on the automatic recognition of the wound contour with a software application running on a smartphone. The wound boundaries have to be traced manually on transparent foil placed over the wound. After taking a picture of the wound outline over a grid of 1 × 1 cm, the AreaMe software calculates the wound area, sends the data to a clinical database using an Internet connection, and creates a graph of the wound area change over time. The accuracy and precision of the new method was assessed and compared with the accuracy and precision of commercial devices: Visitrak and SilhouetteMobile. The comparison was performed using 108 wound shapes that were measured five times with each device, using an optical scanner as a reference device. The accuracy of the new method was evaluated by calculating relative errors and comparing them with relative errors for the Visitrak and the SilhouetteMobile devices. The precision of the new method was determined by calculating the coefficients of variation and comparing them with the coefficients of variation for the Visitrak and the SilhouetteMobile devices. A statistical analysis revealed that the new method was more accurate and more precise than the Visitrak device but less accurate and less precise than the SilhouetteMobile device. Thus, the AreaMe application is a superior alternative to the Visitrak device because it provides not only a more accurate measurement of the wound area but also stores the data for future use by the physician. PMID:24102380
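
    The article does not disclose AreaMe's contour-recognition algorithm, but the underlying area computation for a traced outline on a calibrated 1 × 1 cm grid can be illustrated with the classical shoelace formula; the helper below is hypothetical.

        import numpy as np

        def polygon_area(xy):
            # Shoelace formula for the area enclosed by a traced wound contour;
            # xy is an (n, 2) array of vertices in grid units (1 unit = 1 cm).
            x, y = xy[:, 0], xy[:, 1]
            return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

        # Toy usage: a 3 cm x 2 cm rectangle traced on the 1 x 1 cm grid.
        contour = np.array([[0, 0], [3, 0], [3, 2], [0, 2]], dtype=float)
        print(polygon_area(contour))   # 6.0 cm^2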

  7. Density functional theory based generalized effective fragment potential method

    SciTech Connect

    Nguyen, Kiet A.; Pachter, Ruth; Day, Paul N.

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAM-B3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAM-B3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes.

  8. Density functional theory based generalized effective fragment potential method.

    PubMed

    Nguyen, Kiet A; Pachter, Ruth; Day, Paul N

    2014-06-28

    We present a generalized Kohn-Sham (KS) density functional theory (DFT) based effective fragment potential (EFP2-DFT) method for the treatment of solvent effects. Similar to the original Hartree-Fock (HF) based potential with fitted parameters for water (EFP1) and the generalized HF based potential (EFP2-HF), EFP2-DFT includes electrostatic, exchange-repulsion, polarization, and dispersion potentials, which are generated for a chosen DFT functional for a given isolated molecule. The method does not have fitted parameters, except for implicit parameters within a chosen functional and the dispersion correction to the potential. The electrostatic potential is modeled with a multipolar expansion at each atomic center and bond midpoint using Stone's distributed multipolar analysis. The exchange-repulsion potential between two fragments is composed of the overlap and kinetic energy integrals and the nondiagonal KS matrices in the localized molecular orbital basis. The polarization potential is derived from the static molecular polarizability. The dispersion potential includes the intermolecular D3 dispersion correction of Grimme et al. [J. Chem. Phys. 132, 154104 (2010)]. The potential generated from the CAM-B3LYP functional has mean unsigned errors (MUEs) with respect to results from coupled cluster singles, doubles, and perturbative triples with a complete basis set limit (CCSD(T)/CBS) extrapolation, of 1.7, 2.2, 2.0, and 0.5 kcal/mol, for the S22, water-benzene clusters, water clusters, and n-alkane dimers benchmark sets, respectively. The corresponding EFP2-HF errors for the respective benchmarks are 2.41, 3.1, 1.8, and 2.5 kcal/mol. Thus, the new EFP2-DFT-D3 method with the CAM-B3LYP functional provides comparable or improved results at lower computational cost and, therefore, extends the range of applicability of EFP2 to larger system sizes. PMID:24985612

  9. Neural cell image segmentation method based on support vector machine

    NASA Astrophysics Data System (ADS)

    Niu, Shiwei; Ren, Kan

    2015-10-01

    In the analysis of neural cell images acquired by optical microscopy, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the Support Vector Machine (SVM) is proposed to reduce the adverse impact of low contrast between objects and background, interference from adherent and clustered cells, etc. Firstly, morphological filtering and the Otsu method are applied to preprocess images and extract the neural cells roughly. Secondly, the Stellate Vector, Circularity and Histogram of Oriented Gradients (HOG) features are computed to train the SVM model. Finally, the incremental learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas identified by the classifier are added to the library as positive samples for training the SVM model. Experiment results show that the proposed algorithm achieves much better segmentation results than classic segmentation algorithms.
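
    A minimal sketch of the HOG-plus-SVM classification stage, assuming scikit-image and scikit-learn and random stand-in patches; the Stellate Vector and Circularity features used in the paper are omitted, and all parameters are illustrative.

        import numpy as np
        from skimage.feature import hog
        from sklearn.svm import SVC

        def window_features(patch):
            # HOG descriptor for one candidate region.
            return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2))

        # Toy training set: random patches standing in for preprocessed candidates.
        rng = np.random.default_rng(0)
        patches = rng.random((40, 32, 32))
        labels = rng.integers(0, 2, 40)            # 1 = cell, 0 = background

        X = np.array([window_features(p) for p in patches])
        clf = SVC(kernel="rbf").fit(X, labels)

        # Classify a new candidate region.
        pred = clf.predict(window_features(rng.random((32, 32)))[None, :])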

  10. An Optimization-based Atomistic-to-Continuum Coupling Method

    DOE PAGES Beta

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  11. Novel parameter-based flexure bearing design method

    NASA Astrophysics Data System (ADS)

    Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David

    2016-06-01

    A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.

  12. Star-Based Methods for Pleiades HR Commissioning

    NASA Astrophysics Data System (ADS)

    Fourest, S.; Kubik, P.; Lebègue, L.; Déchoz, C.; Lacherade, S.; Blanchet, G.

    2012-07-01

    PLEIADES is the highest resolution civilian earth observing system ever developed in Europe. This imagery program is conducted by the French National Space Agency, CNES. The first satellite, PLEIADES-HR, launched on 17 December 2011, has been operating since 2012, and a second one should be launched by the end of the year. Each satellite is designed to provide optical 70 cm resolution color images to civilian and defense users. Thanks to the extreme agility of the satellite, new calibration methods have been tested, based on the observation of celestial bodies, and stars in particular. It has thus been possible to perform MTF measurement, re-focusing, geometrical bias and focal plane assessment, absolute calibration, ghost image localization, micro-vibration measurement, etc. Starting from an overview of the star acquisition process, this paper discusses the methods and presents the results obtained during the first four months of the commissioning phase.

  13. Vision-based method for tracking meat cuts in slaughterhouses.

    PubMed

    Larsen, Anders Boesen Lindbo; Hviid, Marchen Sonja; Jørgensen, Mikkel Engbo; Larsen, Rasmus; Dahl, Anders Lindbjerg

    2014-01-01

    Meat traceability is important for linking process and quality parameters from the individual meat cuts back to the production data from the farmer that produced the animal. Current tracking systems rely on physical tagging, which is too intrusive for individual meat cuts in a slaughterhouse environment. In this article, we demonstrate a computer vision system for recognizing meat cuts at different points along a slaughterhouse production line. More specifically, we show that 211 pig loins can be identified correctly between two photo sessions. The pig loins undergo various perturbation scenarios (hanging, rough treatment and incorrect trimming) and our method is able to handle these perturbations gracefully. This study shows that the suggested vision-based approach to tracking is a promising alternative to the more intrusive methods currently available. PMID:23962525

  14. Phase retrieval-based distribution detecting method for transparent objects

    NASA Astrophysics Data System (ADS)

    Wu, Liang; Tao, Shaohua; Xiao, Si

    2015-11-01

    A distribution detecting method to recover the distribution of transparent objects from their diffraction intensities is proposed. First, on the basis of the Gerchberg-Saxton algorithm, a wavefront function involving the phase change of the object is retrieved from the incident light intensity and the diffraction intensity. The phase change of the object is then calculated from the retrieved wavefront function using a gradient field-based phase estimation algorithm, which circumvents the common phase wrapping problem. Finally, a linear model between the distribution of the object and the phase change is set up, and the distribution of the object is calculated from the obtained phase change. The effectiveness of the proposed method is verified with simulations and experiments.
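
    A minimal numpy sketch of the Gerchberg-Saxton stage described above, assuming a far-field (Fourier) propagation model: the loop alternates between the object and diffraction planes, enforcing the measured amplitude in each while keeping the evolving phase. The function name and iteration count are illustrative; the paper then applies its gradient field-based phase estimation rather than conventional unwrapping.

        import numpy as np

        def gerchberg_saxton(incident_amp, diffract_amp, iters=200):
            """Recover the object-plane phase from two intensity measurements.

            incident_amp : sqrt of incident light intensity (object plane)
            diffract_amp : sqrt of measured diffraction intensity (far field)
            Returns the retrieved object-plane phase, wrapped to [-pi, pi].
            """
            phase = np.zeros_like(incident_amp)
            for _ in range(iters):
                field = incident_amp * np.exp(1j * phase)    # object amplitude
                F = np.fft.fft2(field)
                F = diffract_amp * np.exp(1j * np.angle(F))  # measured amplitude
                phase = np.angle(np.fft.ifft2(F))            # keep only the phase
            return phase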

  15. An FPGA-based heterogeneous image fusion system design method

    NASA Astrophysics Data System (ADS)

    Song, Le; Lin, Yu-chi; Chen, Yan-hua; Zhao, Mei-rong

    2011-08-01

    Taking advantage of the FPGA's low cost and compact structure, an FPGA-based heterogeneous image fusion platform is established in this study. Altera's Cyclone IV series FPGA is adopted as the core processor of the platform, and a visible light CCD camera and an infrared thermal imager are used as image-capturing devices to obtain dual-channel heterogeneous video images. Tailor-made image fusion algorithms, such as gray-scale weighted averaging, maximum selection and minimum selection, are analyzed and compared. VHDL and the synchronous design method are utilized to produce a reliable RTL-level description. Altera's Quartus II 9.0 software is applied to simulate and implement the algorithm modules. Contrast experiments with the various fusion algorithms show that preferable heterogeneous image fusion quality can be obtained with the proposed system. The applicable range of the different fusion algorithms is also discussed.
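
    For reference, the three pixel-level fusion rules named above can be stated in a few lines. The sketch below expresses them in Python/numpy for clarity, whereas the platform itself implements them in VHDL on the FPGA; array sizes and the weight are illustrative.

        import numpy as np

        def fuse(visible, infrared, mode="weighted", w=0.6):
            # Pixel-level fusion rules analogous to those compared in the paper;
            # inputs are co-registered single-channel images scaled to [0, 1].
            if mode == "weighted":        # gray-scale weighted averaging
                return w*visible + (1.0 - w)*infrared
            if mode == "max":             # maximum selection
                return np.maximum(visible, infrared)
            if mode == "min":             # minimum selection
                return np.minimum(visible, infrared)
            raise ValueError(mode)

        vis = np.random.rand(288, 384)    # stand-ins for CCD and thermal frames
        ir = np.random.rand(288, 384)
        fused = fuse(vis, ir, mode="max")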

  16. An Optimization-based Atomistic-to-Continuum Coupling Method

    SciTech Connect

    Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.

    2014-08-21

    In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.

  17. Improved artificial bee colony algorithm based gravity matching navigation method.

    PubMed

    Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang

    2014-01-01

    The gravity matching navigation algorithm is one of the key technologies for gravity aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it applicable to the gravity matching navigation field. However, the search mechanisms of existing basic ABC algorithms cannot meet the need for high accuracy in gravity aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented which is based on an improved ABC algorithm using external speed information. Finally, the modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position. PMID:25046019

  18. Network-Based Inference Methods for Drug Repositioning

    PubMed Central

    Zhang, Heng; Cao, Yiqin; Tang, Wenliang

    2015-01-01

    Mining potential drug-disease associations can speed up drug repositioning for pharmaceutical companies. Previous computational strategies focused on prior biological information for association inference. However, such information may not be comprehensively available and may contain errors. Different from previous research, two inference methods, ProbS and HeatS, were introduced in this paper to predict direct drug-disease associations based only on basic network topology measures. Bipartite network topology was used to prioritize the potentially indicated diseases for a drug. Experimental results showed that both methods achieve reliable prediction performance, with AUC values of 0.9192 and 0.9079, respectively. Case studies on real drugs indicated that some of the strongly predicted associations were confirmed by results in the Comparative Toxicogenomics Database (CTD). Finally, a comprehensive prediction of drug-disease associations enables us to suggest many new drug indications for further study. PMID:25969690
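
    A minimal sketch of ProbS-style probabilistic spreading on a drug-disease bipartite network, assuming a binary association matrix: each disease first distributes its resource equally among its linked drugs, and each drug then redistributes equally among its linked diseases. The data and helper name are illustrative.

        import numpy as np

        def probs_scores(A, drug):
            """Two-step ProbS resource spreading.

            A    : binary matrix, rows = drugs, cols = diseases
            drug : index of the drug whose candidate diseases are ranked
            """
            k_drug = A.sum(axis=1)                     # drug degrees
            k_dis = A.sum(axis=0)                      # disease degrees
            f0 = A[drug].astype(float)                 # initial resource
            # Step 1: diseases split their resource equally among linked drugs.
            on_drugs = A @ (f0 / np.clip(k_dis, 1, None))
            # Step 2: drugs split their resource equally among linked diseases.
            scores = A.T @ (on_drugs / np.clip(k_drug, 1, None))
            scores[A[drug] == 1] = -np.inf             # mask known associations
            return scores

        A = (np.random.rand(30, 50) < 0.1).astype(int)  # toy association matrix
        ranking = np.argsort(-probs_scores(A, drug=0))  # top candidate diseases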

  19. Method to implement the CCD timing generator based on FPGA

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin

    2010-07-01

    With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on FPGA and VHDL. This paper presents the principles and implementation skills of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of the timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of four sub-modules which correspond to four different operation modes. The modules are implemented by five VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which controls the generator. Some test results are presented at the end.

  20. A novel virtual viewpoint merging method based on machine learning

    NASA Astrophysics Data System (ADS)

    Zheng, Di; Peng, Zongju; Wang, Hui; Jiang, Gangyi; Chen, Fen

    2014-11-01

    In a multi-view video system, multiple video plus depth is the main data format for 3D scene representation. Continuous virtual views can be generated by using the depth image based rendering (DIBR) technique. The DIBR process includes geometric mapping, hole filling and merging. Unique weights, inversely proportional to the distance between the virtual and real cameras, are commonly used to merge the virtual views. However, these weights might not be optimal in terms of virtual view quality. In this paper, a novel virtual view merging algorithm is proposed. In the proposed algorithm, a machine learning method is utilized to establish an optimal weight model that takes color, depth, color gradient and sequence parameters into consideration. Firstly, we render the same virtual view from the left and right views, and select training samples by using a threshold. Then, the eigenvalues of the samples are extracted and the optimal merging weights are calculated as training labels. Finally, a support vector classifier (SVC) is adopted to establish the model, which is used to guide virtual view rendering. Experimental results show that the proposed method can improve the quality of virtual views for most sequences. In particular, it is effective in the case of large distances between the virtual and real cameras, and compared to the original virtual view synthesis method it can obtain a gain of more than 0.1 dB for some sequences.

  1. A novel non-uniformity correction method based on ROIC

    NASA Astrophysics Data System (ADS)

    Chen, Xiaoming; Li, Yujue; Di, Chao; Wang, Xinxing; Cao, Yi

    2011-11-01

    Infrared focal plane arrays (IRFPA) suffer from inherent low-frequency and fixed pattern noise (FPN). They are thus limited by their inability to calibrate out individual detector variations, including detector dark current (offset) and responsivity (gain). To achieve high quality infrared images by mitigating the FPN of IRFPAs, we have developed a novel non-uniformity correction (NUC) method based on the read-out integrated circuit (ROIC). The offset and gain correction coefficients can be calculated by fitting the linear relationship between the detector's output and a reference voltage in the ROIC. We tested the proposed method on an infrared imaging system using the ULIS 03 19 1 detector with real non-uniformity. A set of 384 × 288 infrared images with 12-bit depth was collected to evaluate the performance. In the experiments, the non-uniformity was largely eliminated. We also used the universal non-uniformity (NU) parameter to estimate the performance. The NU parameters calculated for two-point calibration (TPC) and for the proposed method imply that the proposed method performs almost as well as TPC.
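
    For context, the two-point calibration (TPC) baseline that the method is compared against can be sketched directly: per-pixel gain and offset are derived from uniform blackbody frames at two flux levels so that every pixel maps onto the mean response. The simulated frames and values below are illustrative; the ROIC-based method in the paper instead fits the detector output against a reference voltage.

        import numpy as np

        def two_point_coefficients(frames_cold, frames_hot):
            # Per-pixel gain/offset from uniform frames at two temperatures.
            cold = frames_cold.mean(axis=0)            # mean low-flux response
            hot = frames_hot.mean(axis=0)              # mean high-flux response
            gain = (hot.mean() - cold.mean()) / (hot - cold)
            offset = cold.mean() - gain * cold
            return gain, offset

        def correct(frame, gain, offset):
            return gain * frame + offset

        # Toy usage with simulated 288 x 384 frames.
        rng = np.random.default_rng(1)
        g_true = 1 + 0.05*rng.standard_normal((288, 384))   # pixel gains
        o_true = 20*rng.standard_normal((288, 384))         # pixel offsets
        cold = g_true*500 + o_true + rng.standard_normal((10, 288, 384))
        hot = g_true*3000 + o_true + rng.standard_normal((10, 288, 384))
        gain, offset = two_point_coefficients(cold, hot)
        corrected = correct(g_true*1800 + o_true, gain, offset)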

  2. Optimal grid-based methods for thin film micromagnetics simulations

    NASA Astrophysics Data System (ADS)

    Muratov, C. B.; Osipov, V. V.

    2006-08-01

    Thin film micromagnetics are a broad class of materials with many technological applications, primarily in magnetic memory. The dynamics of the magnetization distribution in these materials is traditionally modeled by the Landau-Lifshitz-Gilbert (LLG) equation. Numerical simulations of the LLG equation are complicated by the need to compute the stray field due to the inhomogeneities in the magnetization which presents the chief bottleneck for the simulation speed. Here, we introduce a new method for computing the stray field in a sample for a reduced model of ultra-thin film micromagnetics. The method uses a recently proposed idea of optimal finite difference grids for approximating Neumann-to-Dirichlet maps and has an advantage of being able to use non-uniform discretization in the film plane, as well as an efficient way of dealing with the boundary conditions at infinity for the stray field. We present several examples of the method's implementation and give a detailed comparison of its performance for studying domain wall structures compared to the conventional FFT-based methods.

  3. Method for fabricating beryllium-based multilayer structures

    DOEpatents

    Skulina, Kenneth M.; Bionta, Richard M.; Makowiecki, Daniel M.; Alford, Craig S.

    2003-02-18

    Beryllium-based multilayer structures and a process for fabricating beryllium-based multilayer mirrors, useful in the wavelength region greater than the beryllium K-edge (111 Å or 11.1 nm). The process includes alternating sputter deposition of beryllium and a metal, typically from the fifth row of the periodic table, such as niobium (Nb), molybdenum (Mo), ruthenium (Ru), and rhodium (Rh). The process includes not only the method of sputtering the materials, but the industrial hygiene controls for safe handling of beryllium. The mirrors made in accordance with the process may be utilized in soft x-ray and extreme-ultraviolet projection lithography, which requires mirrors of high reflectivity (>60%) for x-rays in the range of 60-140 Å (6.0-14.0 nm).

  4. Biosensor method and system based on feature vector extraction

    DOEpatents

    Greenbaum, Elias; Rodriguez, Jr., Miguel; Qi, Hairong; Wang, Xiaoling

    2012-04-17

    A method of biosensor-based detection of toxins comprises the steps of providing at least one time-dependent control signal generated by a biosensor in a gas or liquid medium, and obtaining a time-dependent biosensor signal from the biosensor in the gas or liquid medium to be monitored or analyzed for the presence of one or more toxins selected from chemical, biological or radiological agents. The time-dependent biosensor signal is processed to obtain a plurality of feature vectors using at least one of amplitude statistics and a time-frequency analysis. At least one parameter relating to toxicity of the gas or liquid medium is then determined from the feature vectors based on reference to the control signal.

  5. Hybrid Modeling Method for a DEP Based Particle Manipulation

    PubMed Central

    Miled, Mohamed Amine; Gagne, Antoine; Sawan, Mohamad

    2013-01-01

    In this paper, a new modeling approach for Dielectrophoresis (DEP) based particle manipulation is presented. The proposed method fills missing links in finite element modeling between multiphysics simulation and biological behavior. This technique is among the first steps toward a more complex platform covering several types of manipulation, such as magnetophoresis and optics. The modeling approach is based on a hybrid interface using both ANSYS and MATLAB to link the propagation of the electrical field in the micro-channel to the particle motion. ANSYS is used to simulate the electrical propagation while MATLAB interprets the results to calculate cell displacement and sends the new information to ANSYS for the next iteration. The beta version of the proposed technique takes into account particle shape, weight and electrical properties. The first results obtained are consistent with experimental results. PMID:23364197

  6. Transistor-based particle detection systems and methods

    DOEpatents

    Jain, Ankit; Nair, Pradeep R.; Alam, Muhammad Ashraful

    2015-06-09

    Transistor-based particle detection systems and methods may be configured to detect charged and non-charged particles. Such systems may include a supporting structure contacting a gate of a transistor and separating the gate from a dielectric of the transistor, and the transistor may have a near pull-in bias and a sub-threshold region bias to facilitate particle detection. The transistor may be configured to change current flow through the transistor in response to a change in stiffness of the gate caused by securing of a particle to the gate, and the transistor-based particle detection system may be configured to detect the non-charged particle at least from the change in current flow.

  7. Method of plasma etching Ga-based compound semiconductors

    SciTech Connect

    Qiu, Weibin; Goddard, Lynford L.

    2013-01-01

    A method of plasma etching Ga-based compound semiconductors includes providing a process chamber and a source electrode adjacent thereto. The chamber contains a Ga-based compound semiconductor sample in contact with a platen which is electrically connected to a first power supply, and the source electrode is electrically connected to a second power supply. SiCl4 and Ar gases are flowed into the chamber. RF power is supplied to the platen at a first power level, and RF power is supplied to the source electrode. A plasma is generated. Then, RF power is supplied to the platen at a second power level lower than the first power level and no greater than about 30 W. Regions of a surface of the sample adjacent to one or more masked portions of the surface are etched at a rate of no more than about 25 nm/min to create a substantially smooth etched surface.

  8. 'Fertility Awareness-Based Methods' and subfertility: a systematic review.

    PubMed

    Thijssen, A; Meier, A; Panis, K; Ombelet, W

    2014-01-01

    Fertility awareness based methods (FABMs) can be used to improve the likelihood of conception. A literature search was performed to evaluate the relationship between cervical mucus monitoring (CMM) and the day-specific pregnancy rate in cases of subfertility. A MEDLINE search revealed a total of 3331 articles. After excluding articles based on their relevance, 10 studies were selected. The observed studies demonstrated that CMM can identify the days with the highest pregnancy rate. According to the literature, the quality of the vaginal discharge correlates well with the cycle-specific probability of pregnancy in normally fertile couples, but less so in subfertile couples. The results indicate an urgent need for more prospective randomised trials and prospective cohort studies on CMM in a subfertile population to evaluate the effectiveness of CMM in the subfertile couple. PMID:25374654

  9. Simultaneous least squares fitter based on the Lagrange multiplier method

    NASA Astrophysics Data System (ADS)

    Guan, Ying-Hui; Lü, Xiao-Rui; Zheng, Yang-Heng; Zhu, Yong-Sheng

    2013-10-01

    We developed a least squares fitter for extracting expected physics parameters from correlated experimental data in high energy physics. The fitter considers the correlations among the observables and handles nonlinearity using linearization during the χ2 minimization. The method can naturally be extended to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the D0-D̄0 mixing parameters as a test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and that the approach is credible.
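
    A minimal numpy sketch of one linearized iteration of such a fitter, assuming Gaussian measurements with covariance V and linear constraints C x = d: the Lagrange multiplier conditions lead to a single KKT linear system. All names and numbers are illustrative.

        import numpy as np

        def constrained_lsq(m, V, C, d):
            """Minimize (x - m)^T V^-1 (x - m) subject to C x = d via the
            Lagrange multiplier (KKT) system; a toy stand-in for one
            linearized iteration of the fitter described above."""
            Vinv = np.linalg.inv(V)
            n, k = len(m), len(d)
            KKT = np.block([[Vinv, C.T], [C, np.zeros((k, k))]])
            rhs = np.concatenate([Vinv @ m, d])
            sol = np.linalg.solve(KKT, rhs)
            return sol[:n]                  # fitted observables

        # Toy usage: two correlated measurements constrained to sum to 1.
        m = np.array([0.45, 0.65])
        V = np.array([[0.04, 0.01], [0.01, 0.09]])
        C = np.array([[1.0, 1.0]])
        d = np.array([1.0])
        x_fit = constrained_lsq(m, V, C, d)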

  10. Method for Real-Time Model Based Structural Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Smith, Timothy A. (Inventor); Urnes, James M., Sr. (Inventor); Reichenbach, Eric Y. (Inventor)

    2015-01-01

    A system and methods for real-time model based vehicle structural anomaly detection are disclosed. A real-time measurement corresponding to a location on a vehicle structure during operation of the vehicle is received, and the real-time measurement is compared to expected operation data for the location to provide a modeling error signal. The statistical significance of the modeling error signal is calculated to provide an error significance, and the persistence of the error significance is determined. A structural anomaly is indicated if the persistence exceeds a persistence threshold value.
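
    A toy stand-in for the detection logic, assuming a known error standard deviation: the modeling error is converted to a z-score, and an anomaly is flagged only when the significance persists over enough consecutive samples. Thresholds and names are illustrative.

        def detect_anomaly(measured, expected, sigma, z_thresh=3.0, persist_thresh=5):
            # Flag an anomaly when the modeling error stays statistically
            # significant for persist_thresh consecutive samples.
            persistence = 0
            for m, e in zip(measured, expected):
                z = abs(m - e) / sigma              # error significance
                persistence = persistence + 1 if z > z_thresh else 0
                if persistence >= persist_thresh:
                    return True                     # structural anomaly indicated
            return False

        # Usage: the sensor drifts away from the model after sample 10.
        model = [0.0] * 20
        sensor = [0.0] * 10 + [0.5] * 10
        print(detect_anomaly(sensor, model, sigma=0.1))   # True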

  11. Methods and applications of positron-based medical imaging

    NASA Astrophysics Data System (ADS)

    Herzog, H.

    2007-02-01

    Positron emission tomography (PET) is a diagnostic imaging method used to examine metabolic functions and their disorders. Dedicated ring systems of scintillation detectors measure the 511 keV γ-radiation produced in the course of positron emission from radiolabelled, metabolically active molecules. A great number of radiopharmaceuticals labelled with the positron emitters 11C, 13N, 15O, or 18F have been applied for both research and clinical purposes in neurology, cardiology and oncology. The recent success of PET, with rapidly increasing installations, is mainly based on the use of [18F]fluorodeoxyglucose (FDG) in oncology, where it is most useful for localizing primary tumours and their metastases.

  12. Supersampling method for efficient grid-based electronic structure calculations

    NASA Astrophysics Data System (ADS)

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression allowing for large grid spacing is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the use of the sinc filtering function performs best because, as an ideal low pass filter, it clearly cuts out the high frequency region beyond that allowed by a given grid spacing.

  13. [Other physical methods in psychiatric treatment based on electromagnetic stimulation].

    PubMed

    Zyss, Tomasz; Rachel, Wojciech; Datka, Wojciech; Hese, Robert T; Gorczyca, Piotr; Zięba, Andrzej; Piekoszewski, Wojciech

    2016-01-01

    In recent decades, several new physical methods based on electromagnetic head stimulation have been subjected to clinical research. These include: vagus nerve stimulation (VNS), magnetic seizure therapy/magnetoconvulsive therapy (MST/MCT), deep brain stimulation (DBS), and transcranial direct current stimulation (tDCS). The paper presents a description of the mentioned techniques (nature, advantages, defects, restrictions), which are compared to electroconvulsive treatment (ECT), the earlier described transcranial magnetic stimulation (TMS), and pharmacotherapy (the basis of psychiatric treatment). PMID:27197431

  14. Supersampling method for efficient grid-based electronic structure calculations.

    PubMed

    Ryu, Seongok; Choi, Sunghwan; Hong, Kwangwoo; Kim, Woo Youn

    2016-03-01

    The egg-box effect, the spurious variation of energy and force due to the discretization of continuous space, is an inherent vexing problem in grid-based electronic structure calculations. Its effective suppression allowing for large grid spacing is thus crucial for accurate and efficient computations. We here report that the supersampling method drastically alleviates it by eliminating the rapidly varying part of a target function along both radial and angular directions. In particular, the use of the sinc filtering function performs best because, as an ideal low pass filter, it clearly cuts out the high frequency region beyond that allowed by a given grid spacing. PMID:26957151

  15. Rapid Mapping Method Based on Free Blocks of Surveys

    NASA Astrophysics Data System (ADS)

    Yu, Xianwen; Wang, Huiqing; Wang, Jinling

    2016-06-01

    When producing large-scale maps (larger than 1:2000) in cities or towns, obstruction by buildings makes measuring the mapping control points a difficult and heavy task. In order to avoid measuring the mapping control points and shorten the time of fieldwork, a quick mapping method is proposed in this paper. This method adjusts many free blocks of surveys together and transforms the points from all free blocks into the same coordinate system. The entire surveying area is divided into many free blocks, and connection points are set on the boundaries between free blocks. An independent coordinate system for every free block is established via completely free station technology, and the coordinates of the connection points, detail points and control points in every free block are obtained in the corresponding independent coordinate systems based on poly-directional open traverses. Error equations are established based on the connection points, which are adjusted together to obtain the transformation parameters. All points are transformed from the independent coordinate systems to a transitional coordinate system via the transformation parameters. Several control points are then measured by GPS in a geodetic coordinate system, so that all the points can be transformed from the transitional coordinate system to the geodetic coordinate system. In this paper, the implementation process and mathematical formulas of the new method are presented in detail, and a formula to estimate the precision of the surveys is given. An example demonstrates that the precision of the new method can meet large-scale mapping needs.
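
    In the planar case, the block-to-common-frame transformation estimated from the connection points is a 2D similarity (Helmert) transformation, which can be solved by linear least squares. A minimal sketch under that assumption follows; the paper adjusts all blocks jointly, whereas a single block pair is shown here, and all names and values are illustrative.

        import numpy as np

        def helmert_2d(src, dst):
            """Estimate a 2D similarity transform (a, b, tx, ty) with
            x' = a*x - b*y + tx and y' = b*x + a*y + ty.
            src, dst : (n, 2) arrays of connection-point coordinates."""
            n = src.shape[0]
            A = np.zeros((2*n, 4))
            A[0::2] = np.column_stack([src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)])
            A[1::2] = np.column_stack([src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)])
            params, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)
            return params

        def apply_helmert(params, pts):
            a, b, tx, ty = params
            x, y = pts[:, 0], pts[:, 1]
            return np.column_stack([a*x - b*y + tx, b*x + a*y + ty])

        # Toy usage: recover a 30-degree rotation, 1.001 scale and a shift.
        rng = np.random.default_rng(2)
        src = rng.random((6, 2)) * 100
        th, s = np.deg2rad(30), 1.001
        true = np.array([s*np.cos(th), s*np.sin(th), 12.3, -4.5])
        dst = apply_helmert(true, src)
        print(helmert_2d(src, dst))     # ~ true parameters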

  16. Human Temporal Bone Removal: The Skull Base Block Method.

    PubMed

    Dinh, Christine; Szczupak, Mikhaylo; Moon, Seo; Angeli, Simon; Eshraghi, Adrien; Telischi, Fred F

    2015-08-01

    Objectives To describe a technique for harvesting larger temporal bone specimens from human cadavers for the training of otolaryngology residents and fellows on the various approaches to the lateral and posterolateral skull base. Design Human cadaveric anatomical study. The calvarium was excised 6 cm above the superior aspect of the ear canal. The brain and cerebellum were carefully removed, and the cranial nerves were cut sharply. Two bony cuts were performed, one in the midsagittal plane and the other in the coronal plane at the level of the optic foramen. Setting Medical school anatomy laboratory. Participants Human cadavers. Main Outcome Measures Anatomical contents of specimens and technical effort required. Results Larger temporal bone specimens containing portions of the parietal, occipital, and sphenoidal bones were consistently obtained using this technique of two bone cuts. All specimens were inspected and contained pertinent surface and skull base landmarks. Conclusions The skull base block method allows for larger temporal bone specimens using a two bone cut technique that is efficient and reproducible. These specimens have the necessary anatomical bony landmarks for studying the complexity, utility, and limitations of lateral and posterolateral approaches to the skull base, important for the education of otolaryngology residents and fellows. PMID:26225316

  17. Iterative support detection-based split Bregman method for wavelet frame-based image inpainting.

    PubMed

    He, Liangtian; Wang, Yilun

    2014-12-01

    Wavelet frame systems have been extensively studied due to their capability of sparsely approximating piecewise smooth functions, such as images, and the corresponding wavelet frame-based image restoration models are mostly based on penalizing the l1 norm of the wavelet frame coefficients to enforce sparsity. In this paper, we focus on the image inpainting problem based on the wavelet frame, propose a weighted sparse restoration model, and develop a corresponding efficient algorithm. The new algorithm combines the idea of the iterative support detection method, first proposed by Wang and Yin for sparse signal reconstruction, with the split Bregman method for the wavelet frame l1 model of image inpainting and, more importantly, naturally makes use of the specific multilevel structure of the wavelet frame coefficients to enhance the recovery quality. This new algorithm can be considered as the incorporation of prior structural information about the wavelet frame coefficients into the traditional l1 model. Our numerical experiments show that the proposed method is superior to the original split Bregman method for the wavelet frame-based l1 norm image inpainting model, as well as to some typical lp (0 ≤ p < 1) norm-based nonconvex algorithms such as the mean doubly augmented Lagrangian method, in terms of better preservation of sharp edges, because those methods fail to make use of the structure of the wavelet frame coefficients. PMID:25312924
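
    A toy numpy sketch of the support-detection idea combined with the shrinkage operator used in split Bregman iterations: coefficients whose magnitude exceeds a decreasing threshold are treated as detected support and left unpenalized, while the rest are soft-thresholded. This is only a stand-in for the full wavelet-frame inpainting solver, and every name and value is illustrative.

        import numpy as np

        def soft_threshold(c, t):
            # Shrinkage operator used inside each split Bregman iteration.
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

        def isd_shrink(coeffs, t, rounds=3, factor=0.5):
            # Iterative-support-detection flavoured shrinkage: detected
            # support (large coefficients) is kept; the rest is shrunk.
            c = coeffs.copy()
            thr = t
            for _ in range(rounds):
                support = np.abs(c) > thr          # detected coefficients
                c = np.where(support, c, soft_threshold(c, t))
                thr *= factor                      # refine support each round
            return c

        c = np.random.laplace(scale=0.3, size=256)  # sparse-ish coefficients
        c_hat = isd_shrink(c, t=0.2)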

  18. An acoustic intensity-based method and its aeroacoustic applications

    NASA Astrophysics Data System (ADS)

    Yu, Chao

    Aircraft noise prediction and control is one of the most urgent and challenging tasks worldwide. A hybrid approach is usually considered for predicting aerodynamic noise. The approach separates the field into an aerodynamic source region and an acoustic propagation region. Conventional CFD solvers are typically used to evaluate the flow field in the source region. Once the sound source is predicted, the linearized Euler equations (LEE) can be used to extend the near-field CFD solution to the mid-field acoustic radiation. However, the far-field extension is very time consuming and is often prohibited by excessive computer memory requirements. The FW-H method, instead, predicts the far-field radiation using the flow-field quantities on a closed control surface (enclosing the entire aerodynamic source region), assuming the wave equation holds outside. The surface integration, however, has to be carried out for each far-field location, which is still computationally intensive for a practical 3D problem, even though the CPU time required is much lower than for the LEE methods. For an accurate far-field prediction, a further difficulty of the FW-H method is that a complete control surface may be infeasible for most practical applications. Motivated by the need for accurate and efficient far-field prediction techniques, an Acoustic Intensity-Based Method (AIBM) has been developed based on an acoustic input from an open control surface. The AIBM assumes that the sound propagation is governed by the modified Helmholtz equation on and outside a control surface that encloses all the nonlinear effects and noise sources. The prediction of the acoustic radiation field is carried out by the inverse method with an input of the acoustic pressure derivative and its simultaneous, co-located acoustic pressure. The acoustic radiation field reconstructed by the AIBM is unique due to the unique continuation theory.

  19. A content-based image retrieval method for optical colonoscopy images based on image recognition techniques

    NASA Astrophysics Data System (ADS)

    Nosato, Hirokazu; Sakanashi, Hidenori; Takahashi, Eiichi; Murakawa, Masahiro

    2015-03-01

    This paper proposes a content-based image retrieval method for optical colonoscopy images that can find images similar to the ones being diagnosed. Optical colonoscopy is a method of direct observation of the colon and rectum to diagnose bowel diseases. It is the most common procedure for screening, surveillance and treatment. However, diagnostic accuracy for intractable inflammatory bowel diseases, such as ulcerative colitis (UC), is highly dependent on the experience and knowledge of the medical doctor, because there is considerable variety in the appearance of the colonic mucosa within UC inflammations. In order to address this issue, this paper proposes a content-based image retrieval method based on image recognition techniques. The proposed retrieval method can find similar images in a database of images diagnosed as UC, and can potentially furnish the medical records associated with the retrieved images to assist UC diagnosis. Within the proposed method, color histogram features and higher order local auto-correlation (HLAC) features are adopted to represent the color information and geometrical information of optical colonoscopy images, respectively. Moreover, considering various characteristics of UC colonoscopy images, such as vascular patterns and the roughness of the colonic mucosa, we also propose an image enhancement method to highlight the appearance of the colonic mucosa in UC. In an experiment using 161 UC images from 32 patients, we demonstrate that our method improves the accuracy of retrieving similar UC images.
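
    A minimal sketch of color-histogram-based retrieval, one of the two feature types mentioned above, assuming small RGB images and histogram-intersection similarity; the HLAC features and the enhancement step are omitted, and all names and sizes are illustrative.

        import numpy as np

        def color_histogram(img, bins=8):
            # Joint RGB histogram, L1-normalized; img is (H, W, 3), values 0-255.
            h, _ = np.histogramdd(img.reshape(-1, 3), bins=(bins,)*3,
                                  range=((0, 256),)*3)
            h = h.ravel()
            return h / h.sum()

        def retrieve(query, database, top=5):
            # Rank database images by histogram intersection with the query.
            q = color_histogram(query)
            sims = [np.minimum(q, color_histogram(img)).sum() for img in database]
            return np.argsort(sims)[::-1][:top]

        rng = np.random.default_rng(3)
        db = [rng.integers(0, 256, (64, 64, 3)) for _ in range(20)]
        print(retrieve(db[7], db))       # index 7 should rank first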

  20. Scanning-fiber-based imaging method for tissue engineering

    NASA Astrophysics Data System (ADS)

    Hofmann, Matthias C.; Whited, Bryce M.; Mitchell, Josh; Vogt, William C.; Criswell, Tracy; Rylander, Christopher; Rylander, Marissa Nichole; Soker, Shay; Wang, Ge; Xu, Yong

    2012-06-01

    A scanning-fiber-based method developed for imaging bioengineered tissue constructs such as synthetic carotid arteries is reported. Our approach is based on directly embedding one or more hollow-core silica fibers within the tissue scaffold to function as micro-imaging channels (MIC). The imaging process is carried out by translating and rotating an angle-polished fiber micro-mirror within the MIC to scan excitation light across the tissue scaffold. The locally emitted fluorescent signals are captured using an electron multiplying CCD camera and then mapped into fluorophore distributions according to fiber micro-mirror positions. Using an optical phantom composed of fluorescent microspheres, tissue scaffolds, and porcine skin, we demonstrated single-cell-level imaging resolution (20 to 30 μm) at an imaging depth that exceeds the photon transport mean free path by one order of magnitude. This result suggests that the imaging depth is no longer constrained by photon scattering, but rather by the requirement that the fluorophore signal overcomes the background "noise" generated by processes such as scaffold autofluorescence. Finally, we demonstrated the compatibility of our imaging method with tissue engineering by visualizing endothelial cells labeled with green fluorescent protein through a ~500 μm thick and highly scattering electrospun scaffold.

  1. Scanning-fiber-based imaging method for tissue engineering

    PubMed Central

    Hofmann, Matthias C.; Whited, Bryce M.; Mitchell, Josh; Vogt, William C.; Criswell, Tracy; Rylander, Christopher; Rylander, Marissa Nichole; Soker, Shay; Wang, Ge

    2012-01-01

    A scanning-fiber-based method developed for imaging bioengineered tissue constructs such as synthetic carotid arteries is reported. Our approach is based on directly embedding one or more hollow-core silica fibers within the tissue scaffold to function as micro-imaging channels (MIC). The imaging process is carried out by translating and rotating an angle-polished fiber micro-mirror within the MIC to scan excitation light across the tissue scaffold. The locally emitted fluorescent signals are captured using an electron multiplying CCD camera and then mapped into fluorophore distributions according to fiber micro-mirror positions. Using an optical phantom composed of fluorescent microspheres, tissue scaffolds, and porcine skin, we demonstrated single-cell-level imaging resolution (20 to 30 μm) at an imaging depth that exceeds the photon transport mean free path by one order of magnitude. This result suggests that the imaging depth is no longer constrained by photon scattering, but rather by the requirement that the fluorophore signal overcomes the background "noise" generated by processes such as scaffold autofluorescence. Finally, we demonstrated the compatibility of our imaging method with tissue engineering by visualizing endothelial cells labeled with green fluorescent protein through a ~500 μm thick and highly scattering electrospun scaffold. PMID:22734766

  2. A Progressive Image Compression Method Based on EZW Algorithm

    NASA Astrophysics Data System (ADS)

    Du, Ke; Lu, Jianming; Yahagi, Takashi

    A simple method based on the EZW algorithm is presented for improving image compression performance. Recent success in wavelet image coding is mainly attributed to the recognition of the importance of data organization and representation. Several very competitive wavelet coders have been developed, namely Shapiro's EZW (Embedded Zerotree Wavelets) [1], Said and Pearlman's SPIHT (Set Partitioning In Hierarchical Trees) [2], and Bing-Bing Chai's SLCCA (Significance-Linked Connected Component Analysis for Wavelet Image Coding) [3]. The EZW algorithm is based on five key concepts: (1) a discrete wavelet transform (DWT) or hierarchical subband decomposition, (2) prediction of the absence of significant information across scales by exploiting the self-similarity inherent in images, (3) entropy-coded successive-approximation quantization, (4) universal lossless data compression achieved via adaptive arithmetic coding, and (5) degeneration of DWT coefficients from high scale subbands to low scale subbands. In this paper, we improve the self-similarity statistical characteristic in concept (5) and present a progressive image compression method.
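
    Concept (3), successive-approximation quantization, can be illustrated in isolation: coefficients are refined one bit-plane at a time from the largest power-of-two threshold downward. The numpy sketch below omits the zerotree symbol coding and entropy coding entirely, and its refinement rule is a simplified illustration rather than the exact EZW update.

        import numpy as np

        def successive_approximation(coeffs, passes=4):
            # Refine a reconstruction of the coefficients bit-plane by
            # bit-plane, starting at the largest power-of-two threshold.
            T = 2.0 ** np.floor(np.log2(np.abs(coeffs).max()))
            recon = np.zeros_like(coeffs)
            for _ in range(passes):
                sig = np.abs(coeffs - recon) >= T      # newly significant info
                recon += np.where(sig, np.sign(coeffs - recon) * 1.5 * T, 0.0)
                T /= 2.0                               # next bit plane
            return recon

        c = np.random.randn(8, 8) * 16                 # stand-in DWT coefficients
        approx = successive_approximation(c)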

  3. Quality control and analytical methods for baculovirus-based products.

    PubMed

    Roldão, António; Vicente, Tiago; Peixoto, Cristina; Carrondo, Manuel J T; Alves, Paula M

    2011-07-01

    Recombinant baculoviruses (rBac) are used for many different applications, ranging from bio-insecticides to the production of heterologous proteins, high-throughput screening of gene functions, drug delivery, in vitro assembly studies, design of antiviral drugs, bio-weapons, building blocks for electronics, biosensors and chemistry, and recently as a delivery system in gene therapy. Independent of the application, the quality, quantity and purity of rBac-based products are pre-requisites demanded by regulatory authorities for product licensing. To guarantee maximum utility, it is necessary to delineate optimized production schemes, either using trial-and-error experimental setups (the "brute force" approach) or rational design of experiments aided by in silico mathematical models (the Systems Biology approach). For that, one must define all of the main steps in the overall process, identify the main bioengineering issues affecting each individual step and implement, if required, accurate analytical methods for product characterization. In this review, current challenges for quality control (QC) technologies for up- and down-stream processing of rBac-based products are addressed. In addition, a collection of QC methods for monitoring/control of the production of rBac-derived products is presented, as well as innovative technologies for faster process optimization and more detailed product characterization. PMID:21784235

  4. Quantitative methods to direct exploration based on hydrogeologic information

    USGS Publications Warehouse

    Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.

    2006-01-01

    Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
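
    The FOSM step admits a compact illustration: the output covariance is the input covariance propagated through the model Jacobian, and its diagonal yields the head-variance criterion used to pick the next sample. The Python/NumPy sketch below uses placeholder matrices, not values from the study.

        import numpy as np

        # Model sensitivity (Jacobian): d(head_i)/d(parameter_j), which
        # MODFLOW-2000 can return alongside the piezometric head.
        J = np.array([[0.8, 0.1],
                      [0.2, 0.9]])

        # Covariance of the input information (e.g. hydraulic conductivity).
        C_in = np.array([[0.50, 0.05],
                         [0.05, 0.30]])

        # First-Order Second Moment propagation of uncertainty.
        C_out = J @ C_in @ J.T

        # QDE-style criterion: sample next where head variance is largest.
        next_location = int(np.argmax(np.diag(C_out)))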

  5. Application of rule based methods to predicting storm surge

    NASA Astrophysics Data System (ADS)

    Royston, S. J.; Horsburgh, K. J.; Lawry, J.

    2012-04-01

    The accurate forecast of storm surge, the long-wavelength sea level response to meteorological forcing, is imperative for flood warning purposes. There remain regions of the world where operational forecast systems have not been developed, and in these locations it is worthwhile considering numerically simpler, data-driven techniques to provide operational services. In this paper, we investigate the applicability of a class of data-driven methods referred to as rule-based models to the problem of forecasting storm surge. The accuracy of the rule-based model is found to be comparable to several alternative data-driven techniques, all of which produce marginally worse but acceptable forecasts compared with the UK's operational hydrodynamic forecast model, given the reduction in computational effort. Promisingly, the rule-based model is skillful in forecasting total water levels above a given flood warning threshold, with a Brier Skill Score of 0.58 against a climatological forecast (the operational storm surge system has a Brier Skill Score of up to 0.75 for the same data set). The structure of the model can be interrogated as IF-THEN rules, and we find that the model structure in this case is consistent with our understanding of the physical system. Furthermore, the rule-based approach provides probabilistic forecasts of storm surge, which are much more informative to flood warning managers than deterministic alternatives. The rule-based model therefore provides reasonably skillful forecasts in comparison with the operational forecast model, for a significant reduction in development and run time, and is an appropriate data-driven approach for forecasting storm surge in regions of the world where a fully fledged hydrodynamic forecast system does not exist, provided good observations and a good meteorological forecast are available.
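
    For reference, the Brier Skill Score quoted above compares the model's Brier score with that of a climatological forecast. The Python/NumPy sketch below uses invented forecast probabilities and observations purely to show the computation.

        import numpy as np

        # Forecast probabilities of exceeding a flood-warning threshold,
        # and the observed outcomes (1 = threshold exceeded).
        p_model = np.array([0.9, 0.2, 0.7, 0.1])
        obs     = np.array([1.0, 0.0, 1.0, 0.0])

        # Climatological reference: the long-run exceedance frequency.
        p_clim = np.full_like(p_model, obs.mean())

        bs_model = np.mean((p_model - obs) ** 2)
        bs_clim  = np.mean((p_clim  - obs) ** 2)

        # 1 = perfect forecast, 0 = no better than climatology.
        bss = 1.0 - bs_model / bs_clim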

  6. Framework of a Contour Based Depth Map Coding Method

    NASA Astrophysics Data System (ADS)

    Wang, Minghui; He, Xun; Jin, Xin; Goto, Satoshi

    Stereo-view and multi-view video formats are heavily investigated topics given their vast application potential. Depth Image Based Rendering (DIBR) systems have been developed to improve Multiview Video Coding (MVC); in such systems, a depth image is introduced to synthesize virtual views on the decoder side. A depth image is piecewise smooth, consisting of sharp contours and smooth interiors, and in the view synthesis process the contours matter more than the interiors. In order to improve the quality of the synthesized views and reduce the bitrate of the depth image, a contour-based coding strategy is proposed. First, the depth image is divided into layers by depth value intervals. Then regions, defined as the basic coding unit in this work, are segmented from each layer. Each region is further divided into its contour and its interior, and two different procedures code them respectively. A vector-based strategy is applied to code the contour lines: straight-line segments cost few bits since they are coded as vectors, while pixels that fall outside straight lines are coded one by one. Depth values in the interior of a region are modeled by a linear or nonlinear formula whose coefficients are retrieved by regression; this process is called interior painting. Unlike conventional block-based coding methods, the residue between the original frame and the reconstructed frame (rebuilt from contours and interior painting) is not sent to the decoder. In this proposal, contours are coded losslessly whereas interiors are coded lossily. Experimental results show that the proposed Contour Based Depth map Coding (CBDC) achieves better performance than JMVC (the reference software of MVC) in high-quality scenarios.
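
    The interior-painting step is essentially a least-squares fit: the encoder transmits a handful of model coefficients instead of per-pixel depths. The Python/NumPy sketch below fits a linear (planar) model to an invented toy region; the paper also allows nonlinear formulas.

        import numpy as np

        # Pixel coordinates and depth values of one region's interior
        # (toy data; a real region comes from the layer segmentation).
        xs = np.array([0, 1, 2, 0, 1, 2])
        ys = np.array([0, 0, 0, 1, 1, 1])
        depth = np.array([10.0, 11.0, 12.0, 10.5, 11.5, 12.5])

        # Fit depth ~ a*x + b*y + c by least squares; (a, b, c) are the
        # only values that need to be coded for this interior.
        A = np.column_stack([xs, ys, np.ones_like(xs)])
        (a, b, c), *_ = np.linalg.lstsq(A, depth, rcond=None)

        # Decoder-side repaint of the interior from the coefficients.
        reconstructed = a * xs + b * ys + c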

  7. 3D face recognition based on a modified ICP method

    NASA Astrophysics Data System (ADS)

    Zhao, Kankan; Xi, Jiangtao; Yu, Yanguang; Chicharo, Joe F.

    2011-11-01

    3D face recognition has gained much attention recently and is widely used in security, identification, and access control systems. The core technique in 3D face recognition is finding the corresponding points in different 3D face images. The classic partial Iterative Closest Point (ICP) method iteratively aligns two point sets, taking the closest points computed in each iteration as the corresponding points; after several iterations, the corresponding points can be obtained accurately. However, if two 3D face images of the same person have different scales, the classic partial ICP does not work. In this paper we propose a modified partial ICP method that accounts for the scaling effect to achieve 3D face recognition. We design a 3x3 diagonal matrix as the scale matrix in each iteration of the classic partial ICP; multiplying the probing face image by this scale matrix keeps it at a scale similar to that of the reference face image. Therefore, we can accurately determine the corresponding points even when the scales of the probing and reference images differ. The 3D face images in our experiments were acquired by a 3D data acquisition system based on Digital Fringe Projection Profilometry (DFPP). The 3D database consists of 30 groups of images; each group contains three same-scale images of one person taken from different views, and the scale may differ between groups. The experimental results show that our proposed method achieves 3D face recognition even when the scales of the probing and reference images are different.
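
    One way to realize such a scale-aware iteration is the closed-form similarity estimate sketched below (Python with NumPy and SciPy). This is a standard Umeyama-style construction under our own assumptions, not necessarily the authors' exact formulation; in particular it estimates a single scalar scale, whereas the paper uses a 3x3 diagonal scale matrix.

        import numpy as np
        from scipy.spatial import cKDTree

        def scaled_icp_step(probe, reference):
            """One scale-aware ICP iteration; returns the updated probe."""
            # Correspondences: each probe point pairs with its closest
            # reference point, as in classic partial ICP.
            _, idx = cKDTree(reference).query(probe)
            target = reference[idx]

            mu_p, mu_t = probe.mean(axis=0), target.mean(axis=0)
            P, T = probe - mu_p, target - mu_t

            # Rotation from the SVD of the cross-covariance, with a
            # guard against reflections.
            U, S, Vt = np.linalg.svd(T.T @ P)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
            R = U @ D @ Vt

            # Scale absorbs the size mismatch between probe and reference.
            s = np.trace(np.diag(S) @ D) / (P ** 2).sum()
            return s * (R @ P.T).T + mu_t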

  8. A vocal-based analytical method for goose behaviour recognition.

    PubMed

    Steen, Kim Arild; Therkildsen, Ole Roland; Karstoft, Henrik; Green, Ole

    2012-01-01

    Since human-wildlife conflicts are increasing, the development of cost-effective methods for reducing damage or conflict levels is important in wildlife management. A wide range of devices to detect and deter animals causing conflict are used for this purpose, although their effectiveness is often highly variable due to habituation to disruptive or disturbing stimuli. Automated recognition of behaviours could form a critical component of a system capable of altering the disruptive stimuli to avoid this. In this paper we present a novel method to automatically recognise goose behaviour based on vocalisations from flocks of free-living barnacle geese (Branta leucopsis). The geese were observed and recorded in a natural environment using a shielded shotgun microphone. The classification used Support Vector Machines (SVMs), which had been trained with labeled data. Greenwood Function Cepstral Coefficients (GFCC) were used as features for the pattern recognition algorithm, as they can be adjusted to the hearing capabilities of different species. Three behaviours are classified with this approach, and the method achieves good recognition of foraging behaviour (86-97% sensitivity, 89-98% precision) and reasonable recognition of flushing (79-86%, 66-80%) and landing behaviour (73-91%, 79-92%). The Support Vector Machine has proven to be a robust classifier for this task, where generality and non-linear capability are important. We conclude that vocalisations can be used to automatically detect the behaviour of conflict wildlife species and, as such, may be used as an integrated part of a wildlife management system. PMID:22737037
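
    To make the classification step concrete, the Python sketch below trains an RBF-kernel SVM on a placeholder feature matrix standing in for GFCCs; the feature values, labels, and train/test split are invented for illustration, and scikit-learn is assumed available.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Placeholder GFCC features: one row per vocalisation segment,
        # one column per cepstral coefficient.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 13))
        y = rng.choice(["foraging", "flushing", "landing"], size=60)

        # RBF kernel gives the non-linear decision boundary the task needs.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X[:45], y[:45])
        print(clf.score(X[45:], y[45:]))  # held-out accuracy (toy data)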

  9. Histogram-Based Calibration Method for Pipeline ADCs.

    PubMed

    Son, Hyeonuk; Jang, Jaewon; Kim, Heetae; Kang, Sungho

    2015-01-01

    Measurement and calibration of an analog-to-digital converter (ADC) using a histogram-based method requires a large volume of data and a long test duration, especially for a high-resolution ADC. A fast and accurate calibration method for pipelined ADCs is proposed in this research. The proposed calibration method composes histograms from the outputs of each stage and calculates the error sources. The digitized outputs of a stage are directly influenced by the operation of the prior stage, so the histograms provide information about errors in the prior stage. The composed histograms reduce the number of required samples, and thus the calibration time, and the method can be implemented with simple modules. For a 14-bit pipelined ADC, the measured maximum integral non-linearity (INL) is improved from 6.78 to 0.52 LSB, and the spurious-free dynamic range (SFDR) and signal-to-noise-and-distortion ratio (SNDR) are improved from 67.0 to 106.2 dB and from 65.6 to 84.8 dB, respectively. PMID:26070196
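
    The classic histogram (code-density) test underlying such methods can be sketched as follows in Python/NumPy. This illustrates the general ramp-input variant, not the paper's per-stage histograms; the ideal 14-bit ADC below is synthetic.

        import numpy as np

        def inl_from_histogram(codes, n_bits=14):
            """Estimate INL (in LSB) from a code-density histogram,
            assuming a full-scale ramp input (uniform code density)."""
            hist = np.bincount(codes, minlength=2 ** n_bits).astype(float)
            ideal = hist.sum() / 2 ** n_bits      # expected count per code
            dnl = hist / ideal - 1.0              # differential non-linearity
            inl = np.cumsum(dnl)                  # integral non-linearity
            return inl - inl.mean()               # remove constant offset

        # Synthetic ramp quantised by an ideal 14-bit ADC -> INL ~ 0 LSB.
        ramp = np.linspace(0.0, 1.0, 2 ** 18, endpoint=False)
        codes = np.floor(ramp * 2 ** 14).astype(int)
        print(np.abs(inl_from_histogram(codes)).max())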

  10. A conductivity-based interface tracking method for microfluidic application

    NASA Astrophysics Data System (ADS)

    Salgado, Juan David; Horiuchi, Keisuke; Dutta, Prashanta

    2006-05-01

    A novel conductivity-based interface tracking method is developed for 'lab-on-a-chip' applications to measure the velocity of the liquid-gas boundary during the filling process. The interface tracking system consists of two basic components: a fluidic circuit and an electronic circuit. The fluidic circuit is composed of a microchannel network in which a number of very thin electrodes are placed in the flow path to detect the location of the liquid-gas interface and thereby quantify the speed of the traveling liquid front. The electronic circuit is placed on a microelectronic chip that works as a logical switch. This interface tracking method is used to evaluate the performance of planar electrokinetic micropumps formed on a hybrid polydimethylsiloxane (PDMS)-glass platform. In this study, the thickness of the planar micropump is set to 10 µm, while the externally applied electric field ranges from 100 V mm⁻¹ to 200 V mm⁻¹. For a given geometric and electrokinetic condition, repeatable flow results are obtained from the speed of the liquid-gas interface. Flow results obtained from this interface tracking method are compared to those of other existing flow measurement techniques. The maximum error of this interface tracking sensor is less than 5%, even at ultra-low flow velocities.
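
    Since the electrodes act as logical switches that fire when the liquid-gas interface arrives, the front speed reduces to distance over time between firings. The Python/NumPy sketch below uses invented electrode positions and timestamps, not measurements from the paper.

        import numpy as np

        # Electrode positions along the microchannel (mm) and the times (s)
        # at which the switch fired as the interface reached each electrode.
        positions_mm = np.array([0.0, 0.5, 1.0, 1.5])
        arrival_s    = np.array([0.00, 2.10, 4.05, 6.20])

        # Front speed between consecutive electrodes.
        segment_speed = np.diff(positions_mm) / np.diff(arrival_s)

        # Least-squares average speed, damping single-electrode jitter.
        mean_speed_mm_s = np.polyfit(arrival_s, positions_mm, 1)[0]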