Sample records for sinc method based

  1. Interlaminar Stresses by Refined Beam Theories and the Sinc Method Based on Interpolation of Highest Derivative

    NASA Technical Reports Server (NTRS)

    Slemp, Wesley C. H.; Kapania, Rakesh K.; Tessler, Alexander

    2010-01-01

    Computation of interlaminar stresses from the higher-order shear and normal deformable beam theory and the refined zigzag theory was performed using the Sinc method based on Interpolation of Highest Derivative. The Sinc method based on Interpolation of Highest Derivative was proposed as an efficient method for determining through-the-thickness variations of interlaminar stresses from one- and two-dimensional analysis by integration of the equilibrium equations of three-dimensional elasticity. However, the use of traditional equivalent single layer theories often results in inaccuracies near the boundaries and when the lamina have extremely large differences in material properties. Interlaminar stresses in symmetric cross-ply laminated beams were obtained by solving the higher-order shear and normal deformable beam theory and the refined zigzag theory with the Sinc method based on Interpolation of Highest Derivative. Interlaminar stresses and bending stresses from the present approach were compared with a detailed finite element solution obtained by ABAQUS/Standard. The results illustrate the ease with which the Sinc method based on Interpolation of Highest Derivative can be used to obtain the through-the-thickness distributions of interlaminar stresses from the beam theories. Moreover, the results indicate that the refined zigzag theory is a substantial improvement over the Timoshenko beam theory due to the piecewise continuous displacement field which more accurately represents interlaminar discontinuities in the strain field. The higher-order shear and normal deformable beam theory more accurately captures the interlaminar stresses at the ends of the beam because it allows transverse normal strain. However, the continuous nature of the displacement field requires a large number of monomial terms before the interlaminar stresses are computed as accurately as the refined zigzag theory.
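    The abstract does not reproduce the formulas of the Sinc method based on Interpolation of Highest Derivative, but the building block of all sinc methods is Whittaker cardinal (sinc) interpolation on a uniform grid. The sketch below illustrates only that generic primitive; the function name and the test function are illustrative and not taken from the paper.

```python
import numpy as np

def sinc_interpolate(f_nodes, h, x):
    """Whittaker cardinal (sinc) interpolation on a uniform grid.

    f_nodes : samples f(k*h) for k = 0..N-1
    h       : uniform node spacing
    x       : points at which to evaluate the interpolant
    """
    k = np.arange(len(f_nodes))
    # np.sinc is the normalized sinc, sin(pi*t)/(pi*t)
    basis = np.sinc((x[:, None] - k[None, :] * h) / h)
    return basis @ f_nodes

# Example: interpolate a Gaussian bump sampled on a coarse grid
h = 0.5
nodes = np.arange(0.0, 6.0, h)
vals = np.exp(-(nodes - 3.0) ** 2)
x = np.linspace(0.0, 5.5, 200)
approx = sinc_interpolate(vals, h, x)
```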

  2. Propensity Score-Based Methods versus MTE-Based Methods in Causal Inference: Identification, Estimation, and Application

    ERIC Educational Resources Information Center

    Zhou, Xiang; Xie, Yu

    2016-01-01

    Since the seminal introduction of the propensity score (PS) by Rosenbaum and Rubin, PS-based methods have been widely used for drawing causal inferences in the behavioral and social sciences. However, the PS approach depends on the ignorability assumption: there are no unobserved confounders once observed covariates are taken into account. For…

  3. Methods for determining time of death.

    PubMed

    Madea, Burkhard

    2016-12-01

    Medicolegal death time estimation must determine the time since death reliably. Reliability can only be provided empirically by statistical analysis of errors in field studies. Determining the time since death requires the calculation of measurable data along a time-dependent curve back to the starting point. Various methods are used to estimate the time since death. The current gold standard for death time estimation is a previously established nomogram method based on the two-exponential model of body cooling. Great experimental and practical achievements have been realized using this nomogram method. To reduce the margin of error of the nomogram method, a compound method was developed based on electrical and mechanical excitability of skeletal muscle, pharmacological excitability of the iris, rigor mortis, and postmortem lividity. Further increasing the accuracy of death time estimation involves the development of conditional probability distributions for death time estimation based on the compound method. Although many studies have evaluated chemical methods of death time estimation, such methods play a marginal role in daily forensic practice. However, increased precision of death time estimation has recently been achieved by considering various influencing factors (i.e., preexisting diseases, duration of terminal episode, and ambient temperature). Putrefactive changes may be used for death time estimation in water-immersed bodies. Furthermore, recently developed technologies, such as ¹H magnetic resonance spectroscopy, can be used to quantitatively study decompositional changes. This review addresses the gold standard method of death time estimation in forensic practice and promising technological and scientific developments in the field.
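    The nomogram method mentioned above rests on a two-exponential model of body cooling. As an illustration only, the sketch below evaluates the commonly cited Marshall-Hoare form of such a model; the constants A and B are placeholder assumptions, not the calibrated values used by the forensic nomogram.

```python
import numpy as np

def cooling_ratio(t_hours, A=1.25, B=-0.08):
    """Two-exponential (Marshall-Hoare type) cooling model.

    Returns Q(t) = (T_rectal - T_ambient) / (T_0 - T_ambient).
    A and B are illustrative constants only; the forensic nomogram
    calibrates them to body weight and ambient conditions.
    """
    return A * np.exp(B * t_hours) + (1.0 - A) * np.exp(A * B * t_hours / (A - 1.0))

# Example: standardized temperature ratio over the first 24 h post mortem
t = np.linspace(0.0, 24.0, 25)
Q = cooling_ratio(t)   # Q(0) = 1, then a plateau followed by exponential decay
```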

  4. A general method for the quantitative assessment of mineral pigments.

    PubMed

    Ares, M C Zurita; Fernández, J M

    2016-01-01

    A general method for the estimation of mineral pigment contents in different bases has been proposed using a single set of calibration curves (one for each pigment) calculated for a white standard base, so that elaborating calibration patterns for each base used is not necessary. The method can be used in different bases and its validity has even been proved in strongly tinted bases. The method consists of a novel procedure that combines diffuse reflectance spectroscopy, second derivatives and the Kubelka-Munk function. This technique has proved to be at least one order of magnitude more sensitive than X-ray diffraction for colored compounds, since it allowed the determination of the pigment amount in colored samples containing 0.5 wt% of pigment that was not detected by X-ray diffraction. The method can be used to estimate the concentration of mineral pigments in a wide variety of either natural or artificial materials, since it does not require the calculation of a pattern for each pigment in every base. This fact could have important industrial consequences, as the proposed method would be more convenient, faster and cheaper. Copyright © 2015 Elsevier B.V. All rights reserved.
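    The abstract names the Kubelka-Munk function and second derivatives of diffuse reflectance spectra as ingredients of the method. The sketch below shows only those two standard operations on a synthetic reflectance spectrum; the calibration against a white standard base, which is the core of the proposed method, is not reproduced.

```python
import numpy as np

def kubelka_munk(R):
    """Standard Kubelka-Munk remission function F(R) = (1 - R)^2 / (2 R),
    with R the diffuse reflectance expressed as a fraction (0 < R <= 1)."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

def second_derivative(y, x):
    """Numerical second derivative of a spectrum y(x), e.g. versus wavelength."""
    return np.gradient(np.gradient(y, x), x)

# Example with a synthetic reflectance spectrum (absorption band near 550 nm)
wavelength = np.linspace(400, 700, 301)          # nm
R = 0.8 - 0.3 * np.exp(-((wavelength - 550) / 40) ** 2)
FR = kubelka_munk(R)
d2 = second_derivative(FR, wavelength)
```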

  5. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

  6. Precision and accuracy in smFRET based structural studies—A benchmark study of the Fast-Nano-Positioning System

    NASA Astrophysics Data System (ADS)

    Nagy, Julia; Eilert, Tobias; Michaelis, Jens

    2018-03-01

    Modern hybrid structural analysis methods have opened new possibilities to analyze and resolve flexible protein complexes where conventional crystallographic methods have reached their limits. Here, the Fast-Nano-Positioning System (Fast-NPS), a Bayesian parameter estimation-based analysis method and software, is an interesting method since it allows for the localization of unknown fluorescent dye molecules attached to macromolecular complexes based on single-molecule Förster resonance energy transfer (smFRET) measurements. However, the precision, accuracy, and reliability of structural models derived from results based on such complex calculation schemes are oftentimes difficult to evaluate. Therefore, we present two proof-of-principle benchmark studies where we use smFRET data to localize supposedly unknown positions on a DNA as well as on a protein-nucleic acid complex. Since we use complexes where structural information is available, we can compare Fast-NPS localization to the existing structural data. In particular, we compare different dye models and discuss how both accuracy and precision can be optimized.

  7. Criminal investigations: pupil pharmacological reactivity as method for assessing time since death is fallacious.

    PubMed

    Orrico, Marco; Melotti, Roberto; Mantovani, Anna; Avesani, Barbara; De Marco, Roberto; De Leo, Domenico

    2008-12-01

    Determination of the time since death in the early postmortem period is one of the most critical issues faced by criminal investigators. One of the techniques is the evaluation of pupil pharmacological reactivity. In the present work, we aim to identify whether a single, objective method based on pharmacological pupil reaction is feasible. Between the 2002 and 2003 calendar years, we observed 309 bodies; within 26 hours of death, each eye was instilled separately with either a miotic substance or a mydriatic solution. Our results show that the real effectiveness of pupil pharmacological reactivity as a method for assessing the time since death in the early postmortem period is not only questionable but even highly misleading if not replaced by alternative objective physiological tests and appropriate professional judgment by the investigators.

  8. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost for collecting flow information in Software Defined Networks (SDN). Since the flow-based information collecting method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, jointly optimizing flow routing and polling switch selection is proposed to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable in large networks, we also design an optimal algorithm for the multi-rooted tree topology and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

  9. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves a high computational burden and the other suffers from a low convergence rate (CR). In this paper, a novel high-speed moment method based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. Then, the orthogonal properties of their products dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure to be solved using the fast Fourier transform. This allows us to reduce the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
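    The key computational claim above is that the sinc-basis expansion turns the D-bar integral operator into a discrete convolution that can be evaluated with fast Fourier transforms, giving the O(N^2 log N) cost. The sketch below illustrates only that primitive, a 2D circular convolution via FFT with a stand-in kernel; it is not an implementation of the D-bar solver itself.

```python
import numpy as np

def fft_convolve2d(kernel, field):
    """Apply a 2D discrete (circular) convolution with FFTs.

    For an N x N grid this costs O(N^2 log N), which is the primitive
    behind the reported complexity of the sinc-basis moment method.
    """
    K = np.fft.fft2(kernel)
    F = np.fft.fft2(field)
    return np.real(np.fft.ifft2(K * F))

# Example on a 128 x 128 grid with an illustrative smooth kernel
n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
kernel = np.exp(-(X**2 + Y**2) / 0.05)   # stand-in for the discretized kernel
field = np.random.rand(n, n)
out = fft_convolve2d(kernel, field)
```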

  10. Anisotropic elastic moduli reconstruction in transversely isotropic model using MRE

    NASA Astrophysics Data System (ADS)

    Song, Jiah; In Kwon, Oh; Seo, Jin Keun

    2012-11-01

    Magnetic resonance elastography (MRE) is an elastic tissue property imaging modality in which the phase-contrast based MRI imaging technique is used to measure internal displacement induced by a harmonically oscillating mechanical vibration. MRE has made rapid technological progress in the past decade and has now reached the stage of clinical use. Most of the research outcomes are based on the assumption of isotropy. Since soft tissues like skeletal muscles show anisotropic behavior, the MRE technique should be extended to anisotropic elastic property imaging. This paper considers reconstruction in a transversely isotropic model, which is the simplest case of anisotropy, and develops a new non-iterative reconstruction method for visualizing the elastic moduli distribution. This new method is based on an explicit representation formula using the Newtonian potential of measured displacement. Hence, the proposed method does not require iterations since it directly recovers the anisotropic elastic moduli. We perform numerical simulations in order to demonstrate the feasibility of the proposed method in recovering a two-dimensional anisotropic tensor.

  11. Calculating Exclusive Breastfeeding Rates: Comparing Dietary "24-Hour Recall" with Recall "Since Birth" Methods.

    PubMed

    Abdel-Hady, Doaa M; El-Gilany, Abdel-Hady

    2016-12-01

    Calculating exclusive breastfeeding (EBF) rates based on previous-day recall has been recommended by the World Health Organization to avoid recall bias, but it also may not accurately reflect the feeding pattern since birth and leads to overestimation of the proportion of exclusively breastfed infants. The objective of this study was to compare the EBF rates calculated by 24-hour recall and since-birth recall and their association with different sociodemographic and maternal data. This was a prospective descriptive study in Mansoura District including 1,102 mother-infant dyads attending primary healthcare centers for vaccination. One thousand ninety-one and 1,029 were followed up at 4 and 6 months, respectively, during a period from January to October 2015. Sociodemographic, maternal, antenatal, birth, and some infant-related data were collected through interview. Questions about EBF using the 24-hour recall and since-birth recall definitions were asked. This study shows a consistent difference between the breastfeeding patterns reported by 24-hour recall and by recall since birth at all age intervals. At the age of 6 months, 13.6% of infants were exclusively breastfed as reported by the 24-hour recall method versus 5.2% for the recall-since-birth method. Different factors were associated with EBF practice reported using these different methods. The two recall methods describe the reality in different and incomplete ways. It is better to measure and report EBF rates using both methods so as to give a full picture of breastfeeding practice, and it is important to distinguish between the two methods and not use them interchangeably.

  12. MR Imaging-based Semi-quantitative Methods for Knee Osteoarthritis

    PubMed Central

    JARRAYA, Mohamed; HAYASHI, Daichi; ROEMER, Frank Wolfgang; GUERMAZI, Ali

    2016-01-01

    Magnetic resonance imaging (MRI)-based semi-quantitative (SQ) methods applied to knee osteoarthritis (OA) have been introduced during the last decade and have since fundamentally changed our understanding of knee OA pathology. Several epidemiological studies and clinical trials have used MRI-based SQ methods to evaluate different outcome measures. Interest in MRI-based SQ scoring systems has led to continuous updates and refinement. This article reviews the different SQ approaches for MRI-based whole-organ assessment of knee OA and also discusses practical aspects of whole-joint assessment. PMID:26632537

  13. Diversified models for portfolio selection based on uncertain semivariance

    NASA Astrophysics Data System (ADS)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, future security returns are sometimes represented mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given subject to experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. Finally, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.

  14. A Compound LAMS-MOODLE Environment to Support Collaborative Project-Based Learning: A Case Study with the Group Investigation Method

    ERIC Educational Resources Information Center

    Paschalis, Giorgos

    2017-01-01

    Collaborative project-based learning is well established as a component of several courses in higher education, since it seems to motivate students and make them active in the learning process. Collaborative project-based learning methods are needed so that tutors are able to intervene and guide the students in flexible ways: by encouraging…

  15. A validation study of a rapid field-based rating system for discriminating among flow permanence classes of headwater streams in South Carolina

    EPA Science Inventory

    Rapid field-based protocols for classifying flow permanence of headwater streams are needed to inform timely regulatory decisions. Such an existing method was developed for and has been used in North Carolina since 1997. The method uses ordinal scoring of 26 geomorphology, hydr...

  16. Collaborative voxel-based surgical virtual environments.

    PubMed

    Acosta, Eric; Muniz, Gilbert; Armonda, Rocco; Bowyer, Mark; Liu, Alan

    2008-01-01

    Virtual Reality-based surgical simulators can utilize Collaborative Virtual Environments (C-VEs) to provide team-based training. To support real-time interactions, C-VEs are typically replicated on each user's local computer and a synchronization method helps keep all local copies consistent. This approach does not work well for voxel-based C-VEs since large and frequent volumetric updates make synchronization difficult. This paper describes a method that allows multiple users to interact within a voxel-based C-VE for a craniotomy simulator being developed. Our C-VE method requires smaller update sizes and provides faster synchronization update rates than volumetric-based methods. Additionally, we address network bandwidth/latency issues to simulate networked haptic and bone drilling tool interactions with a voxel-based skull C-VE.

  17. Design Tool Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, such methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of path integrals used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to a hang glider design. In this problem, both the hang glider design and its flight trajectory were optimized. The numerical results show that the performance of the method is sufficient for practical use.
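    The abstract describes obtaining the optimum as an expected value (stochastic average) over a stochastic process rather than as a deterministic iterate. As a loose illustration of that idea only, the sketch below estimates a minimizer as a Boltzmann-weighted average of uniformly sampled candidates; this simple weighting is an assumption for illustration, not the authors' path-integral formulation.

```python
import numpy as np

def stochastic_average_minimizer(f, lower, upper, n_samples=20000, beta=50.0, seed=0):
    """Estimate a minimizer of f as a stochastic (Boltzmann-weighted) average
    of uniformly sampled candidates. Illustrative only; the cited paper uses
    a path-integral formulation rather than this simple weighting."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    fx = np.array([f(xi) for xi in x])
    w = np.exp(-beta * (fx - fx.min()))      # shift for numerical stability
    return (w[:, None] * x).sum(axis=0) / w.sum()

# Example: a multimodal test function whose global minimum lies near (1, -1)
f = lambda p: (p[0] - 1.0) ** 2 + (p[1] + 1.0) ** 2 + 0.3 * np.sin(5 * p[0])
x_star = stochastic_average_minimizer(f, np.array([-3.0, -3.0]), np.array([3.0, 3.0]))
```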

  18. Mapping Robinia pseudoacacia forest health in the Yellow River delta by using high-resolution IKONOS imagery and object-based image analysis

    NASA Astrophysics Data System (ADS)

    Wang, Hong; Lu, Kaiyu; Pu, Ruiliang

    2016-10-01

    The Robinia pseudoacacia forest in the Yellow River delta of China has been planted since the 1970s, and a large area of dieback of the forest has occurred since the 1990s. To assess the condition of the R. pseudoacacia forest in three forest areas (i.e., Gudao, Machang, and Abandoned Yellow River) in the delta, we combined an estimation of scale parameters tool and geometry/topology assessment criteria to determine the optimal scale parameters, selected optimal predictive variables determined by stepwise discriminant analysis, and compared object-based image analysis (OBIA) and pixel-based approaches using IKONOS data. The experimental results showed that the optimal segmentation scale is 5 for both the Gudao and Machang forest areas, and 12 for the Abandoned Yellow River forest area. The results produced by the OBIA method were much better than those created by the pixel-based method. The overall accuracy of the OBIA method was 93.7% (versus 85.4% by the pixel-based) for Gudao, 89.0% (versus 72.7%) for Abandoned Yellow River, and 91.7% (versus 84.4%) for Machang. Our analysis results demonstrated that the OBIA method was an effective tool for rapidly mapping and assessing the health levels of forest.

  19. Spatial dynamics of the invasive defoliator amber-marked birch leafminer across the Anchorage landscape

    Treesearch

    J.E. Lundquist; R.M. Reich; M. Tuffly

    2012-01-01

    The amber-marked birch leafminer has caused severe infestations of birch species in Anchorage, AK, since 2002. Its spatial distribution has been monitored since 2006 and summarized using interpolated surfaces based on simple kriging. In this study, we developed methods of assessing and describing spatial distribution of the leafminer as they vary from year to year, and...

  20. Metric Similarity in Vegetation-Based Wetland Assessment Methods

    EPA Science Inventory

    Wetland vegetation is a recognized indicator group for wetland assessments, but until recently few published protocols used plant-based indicators. To examine the proliferation of such protocols since 1999, this report reviewed 20 published index of biotic integrity (IBI) type p...

  1. The Tool for Designing Engineering Systems Using a New Optimization Method Based on a Stochastic Process

    NASA Astrophysics Data System (ADS)

    Yoshida, Hiroaki; Yamaguchi, Katsuhito; Ishikawa, Yoshio

    Conventional optimization methods are based on a deterministic approach, since their purpose is to find an exact solution. However, these methods depend on initial conditions and risk falling into local solutions. In this paper, we propose a new optimization method based on the concept of the path integral method used in quantum mechanics. The method obtains a solution as an expected value (stochastic average) using a stochastic process. The advantages of this method are that it is not affected by initial conditions and does not require techniques based on experience. We applied the new optimization method to the design of a hang glider. In this problem, not only the hang glider design but also its flight trajectory was optimized. The numerical results showed that the method has sufficient performance.

  2. A Direct Cell Quenching Method for Cell-Culture Based Metabolomics

    EPA Science Inventory

    A crucial step in metabolomic analysis of cellular extracts is the cell quenching process. The conventional method first uses trypsin to detach cells from their growth surface. This inevitably changes the profile of cellular metabolites since the detachment of cells from the extr...

  3. In singulo biochemistry: when less is more.

    PubMed

    Bustamante, Carlos

    2008-01-01

    It has been over one-and-a-half decades since methods of single-molecule detection and manipulation were first introduced in biochemical research. Since then, the application of these methods to an expanding variety of problems has grown at a vertiginous pace. While initially many of these experiments led more to confirmatory results than to new discoveries, today single-molecule methods are often the methods of choice to establish new mechanism-based results in biochemical research. Throughout this process, improvements in the sensitivity, versatility, and both spatial and temporal resolution of these techniques has occurred hand in hand with their applications. We discuss here some of the advantages of single-molecule methods over their bulk counterparts and argue that these advantages should help establish them as essential tools in the technical arsenal of the modern biochemist.

  4. Characterizing Task-Based OpenMP Programs

    PubMed Central

    Muddukrishna, Ananya; Jonsson, Peter A.; Brorsson, Mats

    2015-01-01

    Programmers struggle to understand performance of task-based OpenMP programs since profiling tools only report thread-based performance. Performance tuning also requires task-based performance in order to balance per-task memory hierarchy utilization against exposed task parallelism. We provide a cost-effective method to extract detailed task-based performance information from OpenMP programs. We demonstrate the utility of our method by quickly diagnosing performance problems and characterizing exposed task parallelism and per-task instruction profiles of benchmarks in the widely-used Barcelona OpenMP Tasks Suite. Programmers can tune performance faster and understand performance tradeoffs more effectively than existing tools by using our method to characterize task-based performance. PMID:25860023

  5. A scale space feature based registration technique for fusion of satellite imagery

    NASA Technical Reports Server (NTRS)

    Raghavan, Srini; Cromp, Robert F.; Campbell, William C.

    1997-01-01

    Feature-based registration is one of the most reliable methods for registering multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textured, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature-based registration technique, a modified version of a feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity in parameter selection experienced in the earlier version, as explained later.

  6. Computational methods for aerodynamic design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Peeters, M. F.

    1983-01-01

    Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.

  7. Multivariate Optimization for Extraction of Pyrethroids in Milk and Validation for GC-ECD and GC-MS/MS Analysis

    PubMed Central

    Zanchetti Meneghini, Leonardo; Rübensam, Gabriel; Claudino Bica, Vinicius; Ceccon, Amanda; Barreto, Fabiano; Flores Ferrão, Marco; Bergold, Ana Maria

    2014-01-01

    A simple and inexpensive method based on solvent extraction followed by low-temperature clean-up was applied for the determination of seven pyrethroid residues in bovine raw milk using gas chromatography coupled to tandem mass spectrometry (GC-MS/MS) and gas chromatography with an electron-capture detector (GC-ECD). The sample extraction procedure was established through the evaluation of seven different extraction protocols, assessed in terms of analyte recovery and cleanup efficiency. Sample preparation optimization was based on a Doehlert design using fifteen runs with three different variables. Response surface methodologies and polynomial analysis were used to define the best extraction conditions. Method validation was carried out based on SANCO guide parameters and assessed by multivariate analysis. Method performance was considered satisfactory since mean recoveries were between 87% and 101% for three distinct concentrations. Accuracy and precision were lower than ±20%, and led to no significant differences (p < 0.05) between results obtained by the GC-ECD and GC-MS/MS techniques. The method has been applied to routine analysis for determination of pyrethroid residues in bovine raw milk in the Brazilian National Residue Control Plan since 2013, in which a total of 50 samples were analyzed. PMID:25380457

  8. Collaborative Information Retrieval Method among Personal Repositories

    NASA Astrophysics Data System (ADS)

    Kamei, Koji; Yukawa, Takashi; Yoshida, Sen; Kuwabara, Kazuhiro

    In this paper, we describe a collaborative information retrieval method among personal repositories and an implementation of the method on a personal agent framework. We propose a framework for personal agents that aims to enable the sharing and exchange of information resources that are distributed unevenly among individuals. The kernel of the personal agent framework is an RDF (Resource Description Framework)-based information repository for storing, retrieving and manipulating privately collected information, such as documents the user read and/or wrote, email he/she exchanged, web pages he/she browsed, etc. The repository also collects annotations to information resources that describe relationships among them, and records of interaction between the user and those resources. Since the information resources in a personal repository and their structure are personalized, information retrieval from other users' repositories is an important application of the personal agent. A vector space model with a personalized concept-base is employed as the information retrieval mechanism within a personal repository. Since a personalized concept-base is constructed from the information resources in a personal repository, it reflects its user's knowledge and interests. On the other hand, this leads to another problem when querying other users' personal repositories: simply transferring query requests does not provide desirable results. To solve this problem, we propose a query equalization scheme based on a relevance feedback method for collaborative information retrieval between personalized concept-bases. In this paper, we describe an implementation of the collaborative information retrieval method and its user interface on the personal agent framework.
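    The retrieval mechanism described above is a vector space model over a personalized concept-base. The sketch below shows only the generic vector-space step, ranking repository items by cosine similarity to a query expressed in a shared concept space; the construction of the personalized concept-base and the query equalization scheme are the paper's own contributions and are not reproduced.

```python
import numpy as np

def cosine_rank(query_vec, doc_matrix):
    """Rank documents by cosine similarity to a query in a shared concept space.

    query_vec  : 1-D array of concept weights for the query
    doc_matrix : 2-D array, one row of concept weights per repository item
    """
    q = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    D = doc_matrix / (np.linalg.norm(doc_matrix, axis=1, keepdims=True) + 1e-12)
    scores = D @ q
    return np.argsort(-scores), scores

# Example: three repository items described by five concept weights each
docs = np.array([[0.2, 0.0, 0.9, 0.1, 0.0],
                 [0.0, 0.8, 0.1, 0.0, 0.3],
                 [0.5, 0.1, 0.4, 0.7, 0.0]])
query = np.array([0.3, 0.0, 0.8, 0.2, 0.0])
order, scores = cosine_rank(query, docs)   # order[0] is the best-matching item
```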

  9. A probabilistic neural network based approach for predicting the output power of wind turbines

    NASA Astrophysics Data System (ADS)

    Tabatabaei, Sajad

    2017-03-01

    Reliable forecasting tools that reduce the uncertainty of wind speed forecasts are increasingly needed as wind power penetration grows, and traditional models that generate point forecasts can no longer be trusted on their own. Thus, the present paper utilises the concept of prediction intervals (PIs) to assess the uncertainty of wind power generation in power systems. The paper applies a recently introduced non-parametric approach called lower upper bound estimation (LUBE) to build the PIs, since the forecasting errors cannot be modelled properly by probability distribution functions. In the proposed LUBE method, a PI combination-based fuzzy framework is used to overcome the performance instability of the neural networks (NNs) used in LUBE. In comparison to other methods, this formulation better satisfies the PI coverage and PI normalised average width (PINAW). Since this non-linear problem is highly complex, a new heuristic optimisation algorithm comprising a novel modification is introduced to solve it. Based on data sets taken from a wind farm in Australia, the feasibility and satisfactory performance of the suggested method have been demonstrated.

  10. Multidisciplinary Graduate Training in Social Research Methodology and Computer-Assisted Qualitative Data Analysis: A Hands-On/Hands-Off Course Design

    ERIC Educational Resources Information Center

    Bourque, Claude Julie; Bourdon, Sylvain

    2017-01-01

    Drawing on the experience of training graduate students and researchers in qualitative and mixed-methods analysis since the mid-1990s, the authors reflect on the evolution of a multidisciplinary graduate course developed in a Canadian university since 2007. The hands-on/hands-off course design based on the use of NVivo was developed in parallel…

  11. A voxel-based approach to gray matter asymmetries.

    PubMed

    Luders, E; Gaser, C; Jancke, L; Schlaug, G

    2004-06-01

    Voxel-based morphometry (VBM) was used to analyze gray matter (GM) asymmetries in a large sample (n = 60) of male and female professional musicians with and without absolute pitch (AP). We chose to examine these particular groups because previous studies using traditional region-of-interest (ROI) analyses have shown differences in hemispheric asymmetry related to AP and gender. Voxel-based methods may have advantages over traditional ROI-based methods since the analysis can be performed across the whole brain with minimal user bias. After determining that the VBM method was sufficiently sensitive for the detection of differences in GM asymmetries between groups, we found that male AP musicians were more leftward lateralized in the anterior region of the planum temporale (PT) than male non-AP musicians. This confirmed the results of previous studies using ROI-based methods that showed an association between PT asymmetry and the AP phenotype. We further observed that male non-AP musicians revealed an increased leftward GM asymmetry in the postcentral gyrus compared to female non-AP musicians, again corroborating results of a previously published study using ROI-based methods. By analyzing hemispheric GM differences across our entire sample, we were able to partially confirm findings of previous studies using traditional morphometric techniques, as well as more recent, voxel-based analyses. In addition, we found some unusually pronounced GM asymmetries in our musician sample not previously detected in subjects unselected for musical training. Since we were able to validate gender- and AP-related brain asymmetries previously described using traditional ROI-based morphometric techniques, the results of our analyses support the use of VBM for examinations of GM asymmetries.

  12. Effect of rapid thermal annealing temperature on the dispersion of Si nanocrystals in SiO2 matrix

    NASA Astrophysics Data System (ADS)

    Saxena, Nupur; Kumar, Pragati; Gupta, Vinay

    2015-05-01

    The effect of rapid thermal annealing temperature on the dispersion of silicon nanocrystals (Si-NCs) embedded in a SiO2 matrix grown by the atom beam sputtering (ABS) method is reported. The dispersion of Si-NCs in SiO2 is an important issue for fabricating high-efficiency devices based on Si-NCs. Transmission electron microscopy studies reveal that the precipitation of excess silicon is almost uniform and the particles grow to an almost uniform size up to 850 °C. The size distribution of the particles broadens and becomes bimodal as the temperature is increased to 950 °C. This suggests that by controlling the annealing temperature, the dispersion of Si-NCs can be controlled. The results are supported by selected area electron diffraction (SAED) studies and micro-photoluminescence (PL) spectroscopy. The effect of the particle size distribution on the PL spectrum is discussed based on the tight-binding approximation (TBA) method using Gaussian and log-normal particle size distributions. The study suggests that the dispersion, and consequently the emission energy, varies as a function of the particle size distribution and can be controlled by the annealing parameters.

  13. Superpave binder implementation : final report.

    DOT National Transportation Integrated Search

    1999-01-01

    Oregon Department of Transportation (ODOT) has specified performance-based asphalts (PBAs) since 1991. Developed by the Pacific Coast Conference on Asphalt Specifications (PCCAS) in 1990, the PBA concept uses conventional test methods for classificat...

  14. Evaluation of the Laplace Integral. Classroom Notes

    ERIC Educational Resources Information Center

    Chen, Hongwei

    2004-01-01

    Based on the dominated convergence theorem and parametric differentiation, two different evaluations of the Laplace integral are displayed. This article presents two different proofs of (1) which may be of interest since they are based on principles within the realm of real analysis. The first method applies the dominated convergence theorem to…

  15. A novel point cloud registration using 2D image features

    NASA Astrophysics Data System (ADS)

    Lin, Chien-Chou; Tai, Yen-Chou; Lee, Jhong-Jin; Chen, Yong-Sheng

    2017-01-01

    Since a 3D scanner captures only one scene of a 3D object at a time, registration of multiple scenes is the key issue in 3D modeling. This paper presents a novel and efficient 3D registration method based on 2D local feature matching. The proposed method transforms the point clouds into 2D bearing angle images and then uses the 2D feature-based matching method SURF to find matching pixel pairs between two images. The corresponding points of the 3D point clouds can be obtained from those pixel pairs. Since the corresponding pairs are sorted by the distance between matching features, only the top half of the corresponding pairs are used to find the optimal rotation matrix by least squares approximation. In this paper, the optimal rotation matrix is derived by the orthogonal Procrustes method (an SVD-based approach). Therefore, the 3D model of an object can be reconstructed by aligning those point clouds with the optimal transformation matrix. Experimental results show that the accuracy of the proposed method is close to that of ICP, but the computation cost is reduced significantly; the method is six times faster than the generalized-ICP algorithm. Furthermore, while ICP requires high alignment similarity between two scenes, the proposed method is robust to larger differences in viewing angle.
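    The alignment step above derives the optimal rotation from matched point pairs with the orthogonal Procrustes (SVD-based) method. The sketch below shows that standard step for already-matched 3D correspondences (the Kabsch/Procrustes solution); the bearing-angle image construction and SURF matching are not reproduced, and the example data are synthetic.

```python
import numpy as np

def procrustes_rotation(P, Q):
    """Optimal rotation R (and translation t) aligning points P onto Q
    in the least-squares sense, via SVD (orthogonal Procrustes / Kabsch).

    P, Q : (n, 3) arrays of matched 3-D correspondences.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Example: recover a known rotation from noisy synthetic correspondences
rng = np.random.default_rng(1)
P = rng.uniform(-1, 1, size=(50, 3))
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + 0.01 * rng.normal(size=P.shape)
R_est, t_est = procrustes_rotation(P, Q)
```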

  16. A noninvasive, direct real-time PCR method for sex determination in multiple avian species

    USGS Publications Warehouse

    Brubaker, Jessica L.; Karouna-Renier, Natalie K.; Chen, Yu; Jenko, Kathryn; Sprague, Daniel T.; Henry, Paula F.P.

    2011-01-01

    Polymerase chain reaction (PCR)-based methods to determine the sex of birds are well established and have seen few modifications since they were first introduced in the 1990s. Although these methods allowed for sex determination in species that were previously difficult to analyse, they were not conducive to high-throughput analysis because of the laboriousness of DNA extraction and gel electrophoresis. We developed a high-throughput real-time PCR-based method for analysis of sex in birds, which uses noninvasive sample collection and avoids DNA extraction and gel electrophoresis.

  17. Measurement of edge residual stresses in glass by the phase-shifting method

    NASA Astrophysics Data System (ADS)

    Ajovalasit, A.; Petrucci, G.; Scafidi, M.

    2011-05-01

    Control and measurement of residual stress in glass is of great importance in the industrial field. Since glass is a birefringent material, the residual stress analysis is based mainly on the photoelastic method. This paper considers two methods of automated analysis of membrane residual stress in glass sheets, based on the phase-shifting concept in monochromatic light. In particular these methods are the automated versions of goniometric compensation methods of Tardy and Sénarmont. The proposed methods can effectively replace manual methods of compensation (goniometric compensation of Tardy and Sénarmont, Babinet and Babinet-Soleil compensators) provided by current standards on the analysis of residual stresses in glasses.

  18. Crack image segmentation based on improved DBC method

    NASA Astrophysics Data System (ADS)

    Cao, Ting; Yang, Nan; Wang, Fengping; Gao, Ting; Wang, Weixing

    2017-11-01

    With the development of computer vision technology, crack detection based on digital image segmentation has attracted global attention among researchers and transportation ministries. Since cracks always exhibit random shapes and complex textures, it is still a challenge to achieve reliable crack detection results. Therefore, a novel crack image segmentation method based on fractal DBC (differential box counting) is introduced in this paper. The proposed method estimates a fractal feature for every pixel based on neighborhood information, which accounts for contributions from all possible directions in the related block. The block moves by just one pixel each time so that it covers all the pixels in the crack image. Unlike the classic DBC method, which only describes a fractal feature for the related region, this novel method can effectively achieve crack image segmentation according to the fractal feature of every pixel. Experiments show that the proposed method achieves satisfactory results in crack detection.
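    The method above builds on the classic differential box counting (DBC) estimator of fractal dimension. The sketch below implements only the classic block-wise DBC dimension for a grayscale patch; the paper's per-pixel, sliding-block refinement is not reproduced, and the box sizes are illustrative choices.

```python
import numpy as np

def dbc_fractal_dimension(img, box_sizes=(2, 4, 8, 16)):
    """Classic differential box-counting estimate of an image's fractal dimension.

    img : 2-D grayscale array with values in [0, 255].
    For each box size s, the image is tiled into s x s blocks; a block contributes
    ceil(max/h) - ceil(min/h) + 1 boxes, with box height h = s * 256 / min(img.shape).
    The dimension is the slope of log(N_s) versus log(1/s).
    """
    M = min(img.shape)
    logs_inv_s, logs_N = [], []
    for s in box_sizes:
        h = s * 256.0 / M                       # box height in gray levels
        rows, cols = img.shape[0] // s, img.shape[1] // s
        N = 0
        for i in range(rows):
            for j in range(cols):
                block = img[i * s:(i + 1) * s, j * s:(j + 1) * s]
                N += int(np.ceil(block.max() / h) - np.ceil(block.min() / h) + 1)
        logs_inv_s.append(np.log(1.0 / s))
        logs_N.append(np.log(N))
    slope, _ = np.polyfit(logs_inv_s, logs_N, 1)
    return slope

# Example on a synthetic textured patch
rng = np.random.default_rng(0)
patch = (rng.random((64, 64)) * 255).astype(float)
fd = dbc_fractal_dimension(patch)
```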

  19. Predicting missing links via correlation between nodes

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhang, Yi-Cheng

    2015-10-01

    As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of an existing link between two nodes based on the observed information. Since this problem is related to many applications ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently and many methods have been proposed so far. The essential challenge of link prediction is to estimate the similarity between nodes. Most of the existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to calculate similarity based on high order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform the existing methods, especially in sparse networks.
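    The proposal above scores node pairs by the Pearson correlation coefficient applied to high-order path information, and then fuses that score with the resource allocation index. The sketch below illustrates only the correlation part, using rows of a power of the adjacency matrix as a stand-in for high-order path counts; the fusion with resource allocation is not reproduced and the graph is a toy example.

```python
import numpy as np

def pearson_link_scores(A, order=2):
    """Score non-adjacent node pairs by the Pearson correlation of their
    rows in A^order (entries of A^order count paths of that length).

    A : (n, n) binary symmetric adjacency matrix.
    Returns an (n, n) matrix of correlation-based similarity scores.
    """
    P = np.linalg.matrix_power(A.astype(float), order)
    S = np.corrcoef(P)          # each row of P is treated as one node's profile
    np.fill_diagonal(S, 0.0)
    S[A > 0] = 0.0              # only pairs without an observed link are candidates
    return S

# Example: the highest-scoring pair is the most likely missing link
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
scores = pearson_link_scores(A)
i, j = np.unravel_index(np.argmax(scores), scores.shape)
```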

  20. Colorimetric micro-assay for accelerated screening of mould inhibitors

    Treesearch

    Carol A. Clausen; Vina W. Yang

    2013-01-01

    Since current standard laboratory methods are time-consuming macro-assays that rely on subjective visual ratings of mould growth, rapid and quantitative laboratory methods are needed to screen potential mould inhibitors for use in and on cellulose-based products. A colorimetric micro-assay has been developed that uses XTT tetrazolium salt to enzymatically assess...

  1. Method for phosphorothioate antisense DNA sequencing by capillary electrophoresis with UV detection.

    PubMed

    Froim, D; Hopkins, C E; Belenky, A; Cohen, A S

    1997-11-01

    The progress of antisense DNA therapy demands development of reliable and convenient methods for sequencing short single-stranded oligonucleotides. A method of phosphorothioate antisense DNA sequencing analysis using UV detection coupled to capillary electrophoresis (CE) has been developed based on a modified chain termination sequencing method. The proposed method reduces the sequencing cost since it uses affordable CE-UV instrumentation and requires no labeling with minimal sample processing before analysis. Cycle sequencing with ThermoSequenase generates quantities of sequencing products that are readily detectable by UV. Discrimination of undesired components from sequencing products in the reaction mixture, previously accomplished by fluorescent or radioactive labeling, is now achieved by bringing concentrations of undesired components below the UV detection range which yields a 'clean', well defined sequence. UV detection coupled with CE offers additional conveniences for sequencing since it can be accomplished with commercially available CE-UV equipment and is readily amenable to automation.

  2. Method for phosphorothioate antisense DNA sequencing by capillary electrophoresis with UV detection.

    PubMed Central

    Froim, D; Hopkins, C E; Belenky, A; Cohen, A S

    1997-01-01

    The progress of antisense DNA therapy demands development of reliable and convenient methods for sequencing short single-stranded oligonucleotides. A method of phosphorothioate antisense DNA sequencing analysis using UV detection coupled to capillary electrophoresis (CE) has been developed based on a modified chain termination sequencing method. The proposed method reduces the sequencing cost since it uses affordable CE-UV instrumentation and requires no labeling with minimal sample processing before analysis. Cycle sequencing with ThermoSequenase generates quantities of sequencing products that are readily detectable by UV. Discrimination of undesired components from sequencing products in the reaction mixture, previously accomplished by fluorescent or radioactive labeling, is now achieved by bringing concentrations of undesired components below the UV detection range which yields a 'clean', well defined sequence. UV detection coupled with CE offers additional conveniences for sequencing since it can be accomplished with commercially available CE-UV equipment and is readily amenable to automation. PMID:9336449

  3. Onomatopoeia characters extraction from comic images using constrained Delaunay triangulation

    NASA Astrophysics Data System (ADS)

    Liu, Xiangping; Shoji, Kenji; Mori, Hiroshi; Toyama, Fubito

    2014-02-01

    A method for extracting onomatopoeia characters from comic images was developed based on the stroke-width feature of the characters, since in many cases they have a nearly constant stroke width. An image was segmented with a constrained Delaunay triangulation. Connected component grouping was performed based on the triangles generated by the constrained Delaunay triangulation. The stroke width of the connected components was calculated based on the altitude of the triangles generated by the constrained Delaunay triangulation. The experimental results proved the effectiveness of the proposed method.

  4. An XML-based method for astronomy software designing

    NASA Astrophysics Data System (ADS)

    Liao, Mingxue; Aili, Yusupu; Zhang, Jin

    An XML-based method for standardizing software design is introduced and analyzed, and successfully applied to renovating the hardware and software of the digital clock at Urumqi Astronomical Station. The basic strategy for eliciting time information from the new FT206 digital clock in the antenna control program is introduced. With FT206 there is no longer any need to compute, with sophisticated formulas, how many centuries have passed since a certain day, nor to set the correct UT time on the computer controlling the antenna, because the year, month and day are all deduced from the Julian day held in FT206 rather than from the computer time. With an XML-based method and standard for software design, various existing design methods are unified and communication and collaboration between developers are facilitated, making an Internet-based mode of software development possible. The trend of development of XML-based design methods is predicted.

  5. High Frequency Vibration Based Fatigue Testing of Developmental Alloys

    NASA Astrophysics Data System (ADS)

    Holycross, Casey M.; Srinivasan, Raghavan; George, Tommy J.; Tamirisakandala, Seshacharyulu; Russ, Stephan M.

    Many fatigue test methods have been previously developed to rapidly evaluate fatigue behavior. This increased test speed can come at some expense, since these methods may require non-standard specimen geometry or increased facility and equipment capability. One such method, developed by George et al., involves a base-excited plate specimen driven into a high frequency bending resonant mode. This resonant mode is of sufficient frequency (typically 1200 to 1700 Hertz) to accumulate 10⁷ cycles in a few hours. One of the main limitations of this test method is that fatigue cracking is almost certainly guaranteed to be surface initiated at regions of high stress. This brings into question the validity of the fatigue test results, as compared to more traditional uniaxial, smooth-bar testing, since the high stresses subject only a small volume to fatigue damage. This limitation also brings into question the suitability of this method to screen developmental alloys, should their initiation life be governed by subsurface flaws. However, if applicable, the rapid generation of fatigue data using this method would facilitate faster design iterations, more quickly identifying material and manufacturing process deficiencies. The developmental alloy used in this study was a powder metallurgy boron-modified Ti-6Al-4V, a new alloy currently being considered for gas turbine engine fan blades. Plate specimens were subjected to fully reversed bending fatigue. Results are compared with existing data from commercially available Ti-6Al-4V using both vibration-based and more traditional fatigue test methods.

  6. The extraction of spot signal in Shack-Hartmann wavefront sensor based on sparse representation

    NASA Astrophysics Data System (ADS)

    Zhang, Yanyan; Xu, Wentao; Chen, Suting; Ge, Junxiang; Wan, Fayu

    2016-07-01

    Several techniques have been used with Shack-Hartmann wavefront sensors to determine the local wavefront gradient across each lenslet. However, the centroid error of a Shack-Hartmann wavefront sensor is relatively large because of the skylight background and the detector noise. In this paper, we introduce a new method based on sparse representation to extract the target signal from the background and the noise. First, an overcomplete dictionary of the spot signal is constructed based on a two-dimensional Gaussian model. Then the Shack-Hartmann image is divided into sub-blocks, and the corresponding coefficients of each block are computed in the overcomplete dictionary. Since the coefficients of the noise and of the target differ greatly, the target is extracted by applying a threshold to the coefficients. Experimental results show that the target can be extracted well and that the deviation, RMS and PV of the centroid are all smaller than those of the threshold-subtraction method.
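    The abstract expands the Shack-Hartmann sub-images on an overcomplete dictionary of two-dimensional Gaussian atoms and separates the spot from background and noise by thresholding the coefficients. In the sketch below the coefficients are taken as plain inner products with normalized atoms, which is an assumption for illustration; the abstract does not specify the actual sparse coding step.

```python
import numpy as np

def gaussian_atom(size, cx, cy, sigma):
    """A normalized 2-D Gaussian atom on a size x size patch, returned flattened."""
    y, x = np.mgrid[0:size, 0:size]
    g = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))
    return (g / np.linalg.norm(g)).ravel()

def extract_spot(patch, sigma=1.5, coeff_threshold=0.6):
    """Keep only the dictionary atoms whose coefficients exceed a threshold and
    reconstruct the spot from them. Coefficients here are simple inner products
    with normalized atoms (an illustrative assumption, not the paper's scheme)."""
    size = patch.shape[0]
    atoms = np.stack([gaussian_atom(size, cx, cy, sigma)
                      for cx in range(size) for cy in range(size)])
    coeffs = atoms @ patch.ravel()
    keep = np.abs(coeffs) >= coeff_threshold * np.abs(coeffs).max()
    recon = (coeffs * keep) @ atoms
    return recon.reshape(size, size)

# Example: a noisy spot on a uniform sky background
rng = np.random.default_rng(2)
size = 16
spot = 5.0 * gaussian_atom(size, 9, 7, 1.5).reshape(size, size)
patch = spot + 0.4 + 0.05 * rng.normal(size=(size, size))
clean = extract_spot(patch)
```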

  7. Past, Present and Future of Surgical Meshes: A Review.

    PubMed

    Baylón, Karen; Rodríguez-Camarillo, Perla; Elías-Zúñiga, Alex; Díaz-Elizondo, Jose Antonio; Gilkerson, Robert; Lozano, Karen

    2017-08-22

    Surgical meshes, in particular those used to repair hernias, have been in use since 1891. Since then, research in the area has expanded, given the vast number of post-surgery complications such as infection, fibrosis, adhesions, mesh rejection, and hernia recurrence. Researchers have focused on the analysis and implementation of a wide range of materials: meshes with different fiber sizes and porosities, a variety of manufacturing methods, and a variety of surgical and implantation procedures. Currently, surface modification methods and the development of nanofiber-based systems are actively being explored as areas of opportunity to retain material strength and increase the biocompatibility of available meshes. This review summarizes the history of surgical meshes and presents an overview of commercial surgical meshes, their properties, manufacturing methods, and observed biological response, as well as the requirements for an ideal surgical mesh and potential manufacturing methods.

  8. Estimating Body Composition in Adolescent Sprint Athletes: Comparison of Different Methods in a 3 Years Longitudinal Design

    PubMed Central

    Aerenhouts, Dirk

    2015-01-01

    A recommended field method to assess body composition in adolescent sprint athletes is currently lacking. Existing methods developed for non-athletic adolescents were not longitudinally validated and do not take maturation status into account. This longitudinal study compared two field methods, i.e., a Bio Impedance Analysis (BIA) and a skinfold based equation, with underwater densitometry to track body fat percentage relative to years from age at peak height velocity in adolescent sprint athletes. In this study, adolescent sprint athletes (34 girls, 35 boys) were measured every 6 months during 3 years (age at start = 14.8 ± 1.5yrs in girls and 14.7 ± 1.9yrs in boys). Body fat percentage was estimated in 3 different ways: 1) using BIA with the TANITA TBF 410; 2) using a skinfold based equation; 3) using underwater densitometry which was considered as the reference method. Height for age since birth was used to estimate age at peak height velocity. Cross-sectional analyses were performed using repeated measures ANOVA and Pearson correlations between measurement methods at each occasion. Data were analyzed longitudinally using a multilevel cross-classified model with the PROC Mixed procedure. In boys, compared to underwater densitometry, the skinfold based formula revealed comparable values for body fatness during the study period whereas BIA showed a different pattern leading to an overestimation of body fatness starting from 4 years after age at peak height velocity. In girls, both the skinfold based formula and BIA overestimated body fatness across the whole range of years from peak height velocity. The skinfold based method appears to give an acceptable estimation of body composition during growth as compared to underwater densitometry in male adolescent sprinters. In girls, caution is warranted when interpreting estimations of body fatness by both BIA and a skinfold based formula since both methods tend to give an overestimation. PMID:26317426

  9. Real-Time Occlusion Handling in Augmented Reality Based on an Object Tracking Approach

    PubMed Central

    Tian, Yuan; Guan, Tao; Wang, Cheng

    2010-01-01

    To produce a realistic augmentation in Augmented Reality, the correct relative positions of real objects and virtual objects are very important. In this paper, we propose a novel real-time occlusion handling method based on an object tracking approach. Our method is divided into three steps: selection of the occluding object, object tracking and occlusion handling. The user selects the occluding object using an interactive segmentation method. The contour of the selected object is then tracked in the subsequent frames in real-time. In the occlusion handling step, all the pixels on the tracked object are redrawn on the unprocessed augmented image to produce a new synthesized image in which the relative position between the real and virtual object is correct. The proposed method has several advantages. First, it is robust and stable, since it remains effective when the camera is moved through large changes of viewing angles and volumes or when the object and the background have similar colors. Second, it is fast, since the real object can be tracked in real-time. Last, a smoothing technique provides seamless merging between the augmented and virtual object. Several experiments are provided to validate the performance of the proposed method. PMID:22319278

  10. Three-dimensional compound comparison methods and their application in drug discovery.

    PubMed

    Shin, Woong-Hee; Zhu, Xiaolei; Bures, Mark Gregory; Kihara, Daisuke

    2015-07-16

    Virtual screening has been widely used in the drug discovery process. Ligand-based virtual screening (LBVS) methods compare a library of compounds with a known active ligand. Two notable advantages of LBVS methods are that they do not require structural information of a target receptor and that they are faster than structure-based methods. LBVS methods can be classified based on the complexity of ligand structure information utilized: one-dimensional (1D), two-dimensional (2D), and three-dimensional (3D). Unlike 1D and 2D methods, 3D methods can have enhanced performance since they treat the conformational flexibility of compounds. In this paper, a number of 3D methods will be reviewed. In addition, four representative 3D methods were benchmarked to understand their performance in virtual screening. Specifically, we tested overall performance in key aspects including the ability to find dissimilar active compounds, and computational speed.

  11. Using Microanalytical Simulation Methods in Educational Evaluation: An Exploratory Study

    ERIC Educational Resources Information Center

    Sondergeld, Toni A.; Beltyukova, Svetlana A.; Fox, Christine M.; Stone, Gregory E.

    2012-01-01

    Scientifically based research used to inform evidence based school reform efforts has been required by the federal government in order to receive grant funding since the reenactment of No Child Left Behind (2002). Educational evaluators are thus faced with the challenge to use rigorous research designs to establish causal relationships. However,…

  12. a Universal De-Noising Algorithm for Ground-Based LIDAR Signal

    NASA Astrophysics Data System (ADS)

    Ma, Xin; Xiang, Chengzhi; Gong, Wei

    2016-06-01

    Ground-based lidar, an effective remote sensing tool, plays an irreplaceable role in the study of the atmosphere, since it can provide vertical profiles of atmospheric properties. However, noise in a lidar signal is unavoidable, which makes it difficult to extract further information. Every de-noising method has its own characteristics and limitations, since the lidar signal varies as the atmosphere changes. In this paper, a universal de-noising algorithm based on signal segmentation and reconstruction is proposed to enhance the SNR of a ground-based lidar signal. The signal segmentation, which serves as the keystone of the algorithm, splits the lidar signal into three parts, each processed by a different de-noising method according to its own characteristics. The signal reconstruction is a relatively simple procedure that splices the signal sections end to end. Finally, a series of simulated signal tests and a real dual field-of-view lidar signal demonstrate the feasibility of the universal de-noising algorithm.
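
    To make the segment-and-reconstruct idea concrete, the following minimal Python sketch splits a simulated lidar return into near-, mid- and far-range sections, applies a different smoother to each, and splices the results back together. The segment boundaries, filter choices and synthetic signal are illustrative assumptions, not the algorithm of the paper.

```python
import numpy as np
from scipy.signal import savgol_filter

def segmented_denoise(signal, bounds=(500, 2000)):
    """Illustrative segment-and-reconstruct de-noising for a 1-D lidar return.

    The signal is split into near-, mid- and far-range sections at the
    (hypothetical) bin indices in `bounds`; each section is smoothed with a
    filter matched to its typical SNR, then the pieces are spliced end to end.
    """
    near, mid, far = np.split(signal, bounds)

    near_dn = near                                                # high SNR: keep as-is
    mid_dn = savgol_filter(mid, window_length=21, polyorder=3)    # moderate smoothing
    far_dn = np.convolve(far, np.ones(51) / 51, mode="same")      # heavy smoothing for weak far-range returns

    return np.concatenate([near_dn, mid_dn, far_dn])

# toy usage: synthetic range-squared-corrected return with additive noise
rng = np.random.default_rng(0)
r = np.arange(1, 4001)
clean = 1e4 * np.exp(-r / 800.0) / r**2
noisy = clean + rng.normal(0, clean.max() * 1e-6, r.size)
denoised = segmented_denoise(noisy)
```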

  13. Multi-Atlas Based Segmentation of Brainstem Nuclei from MR Images by Deep Hyper-Graph Learning.

    PubMed

    Dong, Pei; Guo, Yangrong; Gao, Yue; Liang, Peipeng; Shi, Yonghong; Wang, Qian; Shen, Dinggang; Wu, Guorong

    2016-10-01

    Accurate segmentation of brainstem nuclei (red nucleus and substantia nigra) is very important in various neuroimaging applications such as deep brain stimulation and the investigation of imaging biomarkers for Parkinson's disease (PD). Due to iron deposition during aging, image contrast in the brainstem is very low in Magnetic Resonance (MR) images. Hence, the ambiguity of patch-wise similarity makes it difficult for the recently successful multi-atlas patch-based label fusion methods to perform as competitively as they do when segmenting cortical and sub-cortical regions from MR images. To address this challenge, we propose a novel multi-atlas brainstem nuclei segmentation method using deep hyper-graph learning. Specifically, we achieve this goal in three ways. First, we employ a hyper-graph to combine the advantage of maintaining spatial coherence from graph-based segmentation approaches and the benefit of harnessing population priors from the multi-atlas framework. Second, besides using low-level image appearance, we also extract high-level context features to measure the complex patch-wise relationships. Since the context features are calculated on a tentatively estimated label probability map, we eventually turn our hyper-graph-learning-based label propagation into a deep and self-refining model. Third, since anatomical labels on some voxels (usually located in uniform regions) can be identified much more reliably than on other voxels (usually located at the boundary between two regions), we allow these reliable voxels to propagate their labels to the nearby difficult-to-label voxels. Such a hierarchical strategy makes our proposed label fusion method deep and dynamic. We evaluate our proposed label fusion method in segmenting the substantia nigra (SN) and red nucleus (RN) from 3.0 T MR images, where it achieves significant improvement over the state-of-the-art label fusion methods.

  14. Optimal network alignment with graphlet degree vectors.

    PubMed

    Milenković, Tijana; Ng, Weng Leong; Hayes, Wayne; Przulj, Natasa

    2010-06-30

    Important biological information is encoded in the topology of biological networks. Comparative analyses of biological networks are proving to be valuable, as they can lead to transfer of knowledge between species and give deeper insights into biological function, disease, and evolution. We introduce a new method that uses the Hungarian algorithm to produce optimal global alignment between two networks using any cost function. We design a cost function based solely on network topology and use it in our network alignment. Our method can be applied to any two networks, not just biological ones, since it is based only on network topology. We use our new method to align protein-protein interaction networks of two eukaryotic species and demonstrate that our alignment exposes large and topologically complex regions of network similarity. At the same time, our alignment is biologically valid, since many of the aligned protein pairs perform the same biological function. From the alignment, we predict function of yet unannotated proteins, many of which we validate in the literature. Also, we apply our method to find topological similarities between metabolic networks of different species and build phylogenetic trees based on our network alignment score. The phylogenetic trees obtained in this way bear a striking resemblance to the ones obtained by sequence alignments. Our method detects topologically similar regions in large networks that are statistically significant. It does this independent of protein sequence or any other information external to network topology.
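
    A minimal sketch of the optimal-assignment step is shown below: given per-node topological signatures (stand-ins for graphlet degree vectors), the Hungarian algorithm returns the minimum-cost one-to-one node mapping. The Euclidean cost and the random signatures are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def align_networks(sig_a, sig_b):
    """Globally align two networks from per-node topological signatures.

    `sig_a` (n_a x k) and `sig_b` (n_b x k) stand in for graphlet degree
    vectors; any topology-only node descriptor works. The Hungarian
    algorithm returns the minimum-cost one-to-one node mapping.
    """
    cost = cdist(sig_a, sig_b, metric="euclidean")   # pairwise signature distance
    rows, cols = linear_sum_assignment(cost)         # optimal assignment
    return list(zip(rows, cols)), cost[rows, cols].sum()

# toy usage with random 4-dimensional signatures for a 5-node vs. 6-node network
rng = np.random.default_rng(1)
pairs, total_cost = align_networks(rng.random((5, 4)), rng.random((6, 4)))
print(pairs, total_cost)
```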

  15. Rectification of curved document images based on single view three-dimensional reconstruction.

    PubMed

    Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang

    2016-10-01

    Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.

  16. Private and Public Sector Enterprise Resource Planning System Post-Implementation Practices: A Comparative Mixed Method Investigation

    ERIC Educational Resources Information Center

    Bachman, Charles A.

    2010-01-01

    While private sector organizations have implemented enterprise resource planning (ERP) systems since the mid-1990s, ERP implementations within the public sector lagged by several years. This research conducted a mixed method, comparative assessment of post "go-live" ERP implementations between public and private sector organizations. Based on a…

  17. A Novel Approach to the Design of Passive Filters in Electric Grids

    NASA Astrophysics Data System (ADS)

    Filho da Costa Castro, José; Lima, Lucas Ramalho; Belchior, Fernando Nunes; Ribeiro, Paulo Fernando

    2016-12-01

    The design of shunt passive filters has been a topic of constant research since the 1970s. Due to their lower cost, passive shunt filters are still considered a preferred option. This paper presents a novel approach for the placement and sizing of passive filters through ranking solutions based on the minimization of the total harmonic distortion (THDv) of the supply system rather than of one specific bus, without neglecting the individual harmonic distortions. The developed method was implemented using Matlab/Simulink and applied to a test system. The results show that it is possible to minimize the total voltage harmonic distortion using a system-wide approach during filter selection. Additionally, since the method is mainly based on a heuristic approach, it avoids the complexity associated with the use of advanced mathematical tools such as artificial intelligence techniques. The analyses consider both a sinusoidal utility voltage and the condition with background distortion from the utility.
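
    For reference, the total harmonic distortion that such a ranking would minimize can be computed directly from the harmonic voltage magnitudes, as in the hedged Python sketch below; the per-unit harmonic values and the choice of the mean as the system-wide aggregate are hypothetical.

```python
import numpy as np

def thd(harmonic_rms):
    """Total harmonic distortion of one bus voltage.

    `harmonic_rms[0]` is the fundamental (h = 1); the remaining entries are
    the RMS magnitudes of the higher harmonics (h = 2, 3, ...).
    """
    v1, higher = harmonic_rms[0], np.asarray(harmonic_rms[1:])
    return np.sqrt(np.sum(higher**2)) / v1

def system_thd(bus_harmonics):
    """Aggregate index over all buses (here: the mean THD), the kind of
    system-wide figure a filter-placement ranking could minimize."""
    return np.mean([thd(h) for h in bus_harmonics])

# hypothetical 3-bus system, per-unit voltages for h = 1, 5, 7, 11
buses = [[1.00, 0.04, 0.03, 0.01],
         [1.00, 0.06, 0.02, 0.02],
         [1.00, 0.05, 0.04, 0.01]]
print(system_thd(buses))   # candidate filter designs would be ranked by this value
```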

  18. Investigation of the charging characteristics of micrometer sized droplets based on parallel plate capacitor model.

    PubMed

    Zhang, Yanzhen; Liu, Yonghong; Wang, Xiaolong; Shen, Yang; Ji, Renjie; Cai, Baoping

    2013-02-05

    The charging characteristics of micrometer-sized aqueous droplets have attracted more and more attention due to the development of microfluidics technology, since the electrophoretic motion of a charged droplet can be used as a droplet actuation method. This work proposes a novel method for investigating the charging characteristics of micrometer-sized aqueous droplets based on a parallel plate capacitor model. With this method, the effects of the electric field strength, electrolyte concentration, and ion species on the charging characteristics of the aqueous droplets were investigated. Experimental results showed that the charging characteristics of micrometer-sized droplets can be investigated by this method.

  19. Variable Density Effects in Stochastic Lagrangian Models for Turbulent Combustion

    DTIC Science & Technology

    2016-07-20

    The advantages of PDF methods in dealing with chemical reaction and convection are preserved irrespective of density variation. Since the density variation in a typical combustion process may be as large as a factor of seven, including variable-density effects in PDF methods is of significance. Conventionally, the strategy for modelling variable-density flows in PDF methods is similar to that used for second-moment closure models (SMCM): models are developed based on…

  20. A Model Based Security Testing Method for Protocol Implementation

    PubMed Central

    Fu, Yu Long; Xin, Xiao Long

    2014-01-01

    The security of protocol implementation is important and hard to be verified. Since the penetration testing is usually based on the experience of the security tester and the specific protocol specifications, a formal and automatic verification method is always required. In this paper, we propose an extended model of IOLTS to describe the legal roles and intruders of security protocol implementations, and then combine them together to generate the suitable test cases to verify the security of protocol implementation. PMID:25105163

  2. European validation of a real-time PCR-based method for detection of Listeria monocytogenes in soft cheese.

    PubMed

    Gianfranceschi, Monica Virginia; Rodriguez-Lazaro, David; Hernandez, Marta; González-García, Patricia; Comin, Damiano; Gattuso, Antonietta; Delibato, Elisabetta; Sonnessa, Michele; Pasquali, Frederique; Prencipe, Vincenza; Sreter-Lancz, Zuzsanna; Saiz-Abajo, María-José; Pérez-De-Juan, Javier; Butrón, Javier; Kozačinski, Lidija; Tomic, Danijela Horvatek; Zdolec, Nevijo; Johannessen, Gro S; Jakočiūnė, Džiuginta; Olsen, John Elmerdahl; De Santis, Paola; Lovari, Sarah; Bertasi, Barbara; Pavoni, Enrico; Paiusco, Antonella; De Cesare, Alessandra; Manfreda, Gerardo; De Medici, Dario

    2014-08-01

    The classical microbiological method for detection of Listeria monocytogenes requires around 7 days for final confirmation, and due to the perishable nature of RTE food products, there is a clear need for an alternative methodology for detection of this pathogen. This study presents an international (European-level) ISO 16140-based validation trial of a non-proprietary real-time PCR-based methodology that can generate final results by the day following the analysis. This methodology is based on an ISO-compatible enrichment coupled to a bacterial DNA extraction and a consolidated real-time PCR assay. Twelve laboratories from six European countries participated in this trial, and soft cheese was selected as the food model since it can represent a difficult matrix for bacterial DNA extraction and real-time PCR amplification. The limit of detection observed was down to 10 CFU per 25 g of sample, showing excellent concordance and accordance values between samples and laboratories (>75%). In addition, excellent values were obtained for relative accuracy, specificity and sensitivity (82.75%, 96.70% and 97.62%, respectively) when the results obtained with the real-time PCR-based method were compared to those of the ISO 11290-1 standard method. An interesting observation was that L. monocytogenes detection by the real-time PCR method was less affected by the presence of Listeria innocua in the contaminated samples, proving therefore to be more reliable than the reference method. The results of this international trial demonstrate that the evaluated real-time PCR-based method represents an excellent alternative to the ISO standard, since it shows higher performance and reduces the duration of the analytical process, and it can easily be implemented routinely by competent authorities and food industry laboratories. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. Digital barcodes of suspension array using laser induced breakdown spectroscopy

    PubMed Central

    He, Qinghua; Liu, Yixi; He, Yonghong; Zhu, Liang; Zhang, Yilong; Shen, Zhiyuan

    2016-01-01

    We present a coding method for suspension arrays based on laser induced breakdown spectroscopy (LIBS), which promotes the barcodes from analog to digital. As the foundation of the digital optical barcodes, nanocrystal-encoded microspheres are prepared with a self-assembly encapsulation method. We confirm that digital multiplexing of the LIBS-based coding method is feasible, since the microspheres can be coded with directly read-out wavelength data, and the method avoids fluorescence signal crosstalk between barcodes and analyte tags, which leads to overall advantages in accuracy and stability over the current fluorescent multicolor coding method. This demonstration increases the capability of multiplexed detection and accurate screening, enabling more extensive applications of suspension arrays in the life sciences. PMID:27808270

  4. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  5. Transfer Efficiency and Cooling Cost by Thermal Loss based on Nitrogen Evaporation Method for Superconducting MAGLEV System

    NASA Astrophysics Data System (ADS)

    Chung, Y. D.; Kim, D. W.; Lee, C. Y.

    2017-07-01

    This paper presents the feasibility of a technical fusion between wireless power transfer (WPT) and superconducting technology to improve the transfer efficiency and to evaluate operating costs such as refrigerant consumption. Generally, various copper wires have been adopted in WPT technology. For this reason, the transfer efficiency is limited, since the Q values of copper wires are intrinsically limited. On the other hand, since superconducting wires sustain a larger current density and a relatively higher Q value, a superconducting resonance coil can be expected to be a reasonable option to deliver large transfer power as well as to improve the transfer ratio, since it exchanges energy at a much higher rate and keeps stronger magnetic fields out. However, since superconducting wires must be cooled, the cooling cost of the refrigerant consumed for the resonance HTS wires should be estimated. In this study, the transmission ratios using an HTS resonance receiver (Rx) coil and various cooled and non-cooled copper resonance Rx coils were measured under a non-cooled copper antenna with an input power of 200 W at 370 kHz. In addition, the authors evaluated the cooling cost of liquid nitrogen for the HTS resonance coil and the various cooled copper resonance coils based on the nitrogen evaporation method.

  6. High resolution melt curve analysis based on methylation status for human semen identification.

    PubMed

    Fachet, Caitlyn; Quarino, Lawrence; Karnas, K Joy

    2017-03-01

    A high resolution melt curve assay to differentiate semen from blood, saliva, urine, and vaginal fluid based on methylation status at the Dapper Isoform 1 (DACT1) gene was developed. Stains made from blood, saliva, urine, semen, and vaginal fluid were obtained from volunteers, and DNA was isolated using either organic extraction (saliva, urine, and vaginal fluid) or Chelex® 100 extraction (blood and semen). Extracts were then subjected to bisulfite modification in order to convert unmethylated cytosines to uracil, consequently creating sequences whose amplicons have melt curves that vary depending on their initial methylation status. When primers designed to amplify the promoter region of the DACT1 gene were used, DNA from semen samples was distinguishable from the other fluids by having a statistically significantly lower melting temperature. The assay was found to be sperm-specific, since semen from a vasectomized man produced a melting temperature similar to the non-semen body fluids. Blood and semen stains stored up to 5 months and tested at various intervals showed little variation in melt temperature, indicating the methylation status was stable during the course of the study. The assay is a more viable method for forensic science practice than most molecular-based methods for body fluid stain identification, since it is time-efficient and utilizes instrumentation common to forensic biology laboratories. In addition, the assay is advantageous over traditional presumptive chemical methods for body fluid identification, since results are confirmatory and the assay offers the possibility of multiplexing, which may allow testing for multiple body fluids simultaneously.

  7. Optimizing distance-based methods for large data sets

    NASA Astrophysics Data System (ADS)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n²). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.

  8. A progressive data compression scheme based upon adaptive transform coding: Mixture block coding of natural images

    NASA Technical Reports Server (NTRS)

    Rost, Martin C.; Sayood, Khalid

    1991-01-01

    A method for efficiently coding natural images using a vector-quantized, variable-blocksize transform source coder is presented. The method, mixture block coding (MBC), incorporates variable-rate coding by using a mixture of discrete cosine transform (DCT) source coders. The selection of which coder is used to code any given image region is made through a threshold-driven distortion criterion. In this paper, MBC is used in two different applications. The base method is concerned with single-pass low-rate image data compression. The second is a natural extension of the base method which allows for low-rate progressive transmission (PT). Since the base method adapts easily to progressive coding, it offers the aesthetic advantage of progressive coding without incorporating extensive channel overhead. Image compression rates of approximately 0.5 bit/pel are demonstrated for both monochrome and color images.

  9. Estimation of salient regions related to chronic gastritis using gastric X-ray images.

    PubMed

    Togo, Ren; Ishihara, Kenta; Ogawa, Takahiro; Haseyama, Miki

    2016-10-01

    Since technical knowledge and a high degree of experience are necessary for diagnosis of chronic gastritis, computer-aided diagnosis (CAD) systems that analyze gastric X-ray images are desirable in the field of medicine. Therefore, a new method that estimates salient regions related to chronic gastritis/non-gastritis for supporting diagnosis is presented in this paper. In order to estimate salient regions related to chronic gastritis/non-gastritis, the proposed method monitors the distance between a target image feature and Support Vector Machine (SVM)-based hyperplane for its classification. Furthermore, our method realizes removal of the influence of regions outside the stomach by using positional relationships between the stomach and other organs. Consequently, since the proposed method successfully estimates salient regions of gastric X-ray images for which chronic gastritis and non-gastritis are unknown, visual support for inexperienced clinicians becomes feasible. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Past, Present and Future of Surgical Meshes: A Review

    PubMed Central

    Baylón, Karen; Rodríguez-Camarillo, Perla; Elías-Zúñiga, Alex; Díaz-Elizondo, Jose Antonio; Gilkerson, Robert; Lozano, Karen

    2017-01-01

    Surgical meshes, in particular those used to repair hernias, have been in use since 1891. Since then, research in the area has expanded, given the vast number of post-surgery complications such as infection, fibrosis, adhesions, mesh rejection, and hernia recurrence. Researchers have focused on the analysis and implementation of a wide range of materials: meshes with different fiber size and porosity, a variety of manufacturing methods, and certainly a variety of surgical and implantation procedures. Currently, surface modification methods and development of nanofiber based systems are actively being explored as areas of opportunity to retain material strength and increase biocompatibility of available meshes. This review summarizes the history of surgical meshes and presents an overview of commercial surgical meshes, their properties, manufacturing methods, and observed biological response, as well as the requirements for an ideal surgical mesh and potential manufacturing methods. PMID:28829367

  11. An efficient direct method for image registration of flat objects

    NASA Astrophysics Data System (ADS)

    Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei

    2017-09-01

    Image alignment of rigid surfaces is a rapidly developing area of research and has many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit the pixel intensities without resorting to image features; image-based deformation is a general direct approach for aligning images of deformable objects in 3D space. Nevertheless, it is not well suited to the registration of images of 3D rigid objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness consistency assumption is used for reconstruction of the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing the correspondence between pixels of two images.

  12. Ontology-Based Adaptive Dynamic e-Learning Map Planning Method for Conceptual Knowledge Learning

    ERIC Educational Resources Information Center

    Chen, Tsung-Yi; Chu, Hui-Chuan; Chen, Yuh-Min; Su, Kuan-Chun

    2016-01-01

    E-learning improves the shareability and reusability of knowledge, and surpasses the constraints of time and space to achieve remote asynchronous learning. Since the depth of learning content often varies, it is thus often difficult to adjust materials based on the individual levels of learners. Therefore, this study develops an ontology-based…

  13. School-Based BMI and Body Composition Screening and Parent Notification in California: Methods and Messages

    ERIC Educational Resources Information Center

    Madsen, Kristine A.; Linchey, Jennifer

    2012-01-01

    Background: School-based body mass index (BMI) or body composition screening is increasing, but little is known about the process of parent notification. Since 2001, California has required annual screening of body composition via the FITNESSGRAM, with optional notification. This study sought to identify the prevalence of parental notification…

  14. The Impact of the Document-Based Question on the Teaching of United States History.

    ERIC Educational Resources Information Center

    Rothschild, Eric

    2000-01-01

    Provides historical information on the Document-Based Question (DBQ) that has been a part of the Advanced Placement (AP) U.S. history examination since 1973. Focuses on the effects that DBQ had on course content and teaching methods. Addresses the new changes made with the redesign of the DBQ in 1982. (CMK)

  15. James Webb Space Telescope segment phasing using differential optical transfer functions

    PubMed Central

    Codona, Johanan L.; Doble, Nathan

    2015-01-01

    Differential optical transfer function (dOTF) is an image-based, noniterative wavefront sensing method that uses two star images with a single small change in the pupil. We describe two possible methods for introducing the required pupil modification to the James Webb Space Telescope, one using a small (<λ/4) displacement of a single segment's actuator and another that uses small misalignments of the NIRCam's filter wheel. While both methods should work with NIRCam, the actuator method will allow both MIRI and NIRISS to be used for segment phasing, which is a new functionality. Since the actuator method requires only small displacements, it should provide a fast and safe phasing alternative that reduces the mission risk and can be performed frequently for alignment monitoring and maintenance. Since a single actuator modification can be seen by all three cameras, it should be possible to calibrate the non-common-path aberrations between them. Large segment discontinuities can be measured using dOTFs in two filter bands. Using two images of a star field, aberrations along multiple lines of sight through the telescope can be measured simultaneously. Also, since dOTF gives the pupil field amplitude as well as the phase, it could provide a first approximation or constraint to the planned iterative phase retrieval algorithms. PMID:27042684
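
    The core dOTF computation is simple enough to sketch: the OTF is the Fourier transform of the PSF, and subtracting the OTFs of the two star images isolates the pupil field near the modification. The sketch below assumes two pre-registered image arrays; the synthetic Poisson images are placeholders, not JWST data.

```python
import numpy as np

def dotf(psf_ref, psf_mod):
    """Differential OTF from two star images (PSFs), one taken with a small
    pupil modification (e.g. a sub-wavelength actuator poke).

    The OTF is the Fourier transform of the PSF; their difference isolates
    the complex pupil field (amplitude and phase), apart from an overlapping
    conjugate copy near the modification point.
    """
    otf_ref = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf_ref)))
    otf_mod = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(psf_mod)))
    return otf_mod - otf_ref

# toy usage: two noisy 64x64 "star images" (stand-ins for real frames)
rng = np.random.default_rng(6)
img_before = rng.poisson(100, (64, 64)).astype(float)
img_after = rng.poisson(100, (64, 64)).astype(float)
pupil_estimate = dotf(img_before, img_after)
pupil_phase = np.angle(pupil_estimate)   # pupil phase map up to the conjugate overlap
```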

  16. Advances in high-resolution mass spectrometry based on metabolomics studies for food--a review.

    PubMed

    Rubert, Josep; Zachariasova, Milena; Hajslova, Jana

    2015-01-01

    Food authenticity has become a necessity for global food policies, since food placed on the market must without fail be authentic. It has always been a challenge since, in the past, minor components, also called markers, were mainly monitored by chromatographic methods in order to authenticate food. Nowadays, however, advanced analytical methods have made food fingerprints achievable. At the same time, they have been combined with chemometrics, which uses statistical methods to verify food and to provide maximum information from chemical data. These sophisticated methods, based on different separation techniques or used stand-alone, have recently been coupled to high-resolution mass spectrometry (HRMS) in order to verify the authenticity of food. The new generation of HRMS detectors has seen significant advances in resolving power, sensitivity, robustness, extended dynamic range, easier mass calibration and tandem mass capabilities, making HRMS more attractive and useful to the food metabolomics community and therefore a reliable tool for food authenticity. The purpose of this review is to summarise and describe the most recent metabolomics approaches in the area of food metabolomics, and to discuss the strengths and drawbacks of the HRMS analytical platforms combined with chemometrics.

  17. Developing collaborative classifiers using an expert-based model

    USGS Publications Warehouse

    Mountrakis, G.; Watts, R.; Luo, L.; Wang, Jingyuan

    2009-01-01

    This paper presents a hierarchical, multi-stage adaptive strategy for image classification. We iteratively apply various classification methods (e.g., decision trees, neural networks), identify regions of parametric and geographic space where accuracy is low, and in these regions, test and apply alternate methods repeating the process until the entire image is classified. Currently, classifiers are evaluated through human input using an expert-based system; therefore, this paper acts as the proof of concept for collaborative classifiers. Because we decompose the problem into smaller, more manageable sub-tasks, our classification exhibits increased flexibility compared to existing methods since classification methods are tailored to the idiosyncrasies of specific regions. A major benefit of our approach is its scalability and collaborative support since selected low-accuracy classifiers can be easily replaced with others without affecting classification accuracy in high accuracy areas. At each stage, we develop spatially explicit accuracy metrics that provide straightforward assessment of results by non-experts and point to areas that need algorithmic improvement or ancillary data. Our approach is demonstrated in the task of detecting impervious surface areas, an important indicator for human-induced alterations to the environment, using a 2001 Landsat scene from Las Vegas, Nevada. © 2009 American Society for Photogrammetry and Remote Sensing.

  18. Rapid extraction of image texture by co-occurrence using a hybrid data structure

    NASA Astrophysics Data System (ADS)

    Clausi, David A.; Zhao, Yongping

    2002-07-01

    Calculation of co-occurrence probabilities is a popular method for determining texture features within remotely sensed digital imagery. Typically, the co-occurrence features are calculated by using a grey level co-occurrence matrix (GLCM) to store the co-occurring probabilities. Statistics are applied to the probabilities in the GLCM to generate the texture features. This method is computationally intensive since the matrix is usually sparse, leading to many unnecessary calculations involving zero probabilities when applying the statistics. An improvement on the GLCM method is to utilize a grey level co-occurrence linked list (GLCLL) to store only the non-zero co-occurring probabilities. The GLCLL suffers because, to achieve acceptable computational speeds, the list must be kept sorted. A further improvement is to utilize a grey level co-occurrence hybrid structure (GLCHS) based on an integrated hash table and linked list approach. Texture features obtained using this technique are identical to those obtained using the GLCM and GLCLL. The GLCHS method is implemented using the C language in a Unix environment. Based on a Brodatz test image, the GLCHS method is demonstrated to be a superior technique when compared across various window sizes and grey level quantizations. The GLCHS method required, on average, 33.4% (σ = 3.08%) of the computational time required by the GLCLL. Significant computational gains are made using the GLCHS method.
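
    The key idea of storing only non-zero co-occurring pairs in a hash table can be illustrated in a few lines; the Python sketch below (a stand-in for the paper's C implementation) uses a dictionary keyed by grey-level pairs and computes two standard texture statistics from the sparse counts. The displacement, quantization and test patch are assumptions.

```python
import numpy as np

def cooccurrence_features(img, dx=1, dy=0):
    """Texture features from sparse co-occurrence counts.

    A Python dict (a hash table) keyed by grey-level pairs stores only the
    non-zero co-occurring counts, so the statistics below iterate over the
    observed pairs instead of a mostly empty co-occurrence matrix.
    """
    counts = {}
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            pair = (int(img[y, x]), int(img[y + dy, x + dx]))
            counts[pair] = counts.get(pair, 0) + 1

    total = sum(counts.values())
    contrast = sum(((i - j) ** 2) * c / total for (i, j), c in counts.items())
    energy = sum((c / total) ** 2 for c in counts.values())
    return contrast, energy

# toy usage on an 8-level quantized image patch
rng = np.random.default_rng(2)
patch = rng.integers(0, 8, size=(32, 32))
print(cooccurrence_features(patch))
```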

  19. Accuracy of Lagrange-sinc functions as a basis set for electronic structure calculations of atoms and molecules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Sunghwan; Hong, Kwangwoo; Kim, Jaewook

    2015-03-07

    We developed a self-consistent field program based on Kohn-Sham density functional theory using Lagrange-sinc functions as a basis set and examined its numerical accuracy for atoms and molecules through comparison with the results of Gaussian basis sets. The result of the Kohn-Sham inversion formula from the Lagrange-sinc basis set manifests that the pseudopotential method is essential for cost-effective calculations. The Lagrange-sinc basis set shows faster convergence of the kinetic and correlation energies of benzene as its size increases than the finite difference method does, though both share the same uniform grid. Using a scaling factor smaller than or equal to 0.226 bohr and pseudopotentials with nonlinear core correction, its accuracy for the atomization energies of the G2-1 set is comparable to all-electron complete basis set limits (mean absolute deviation ≤1 kcal/mol). The same basis set also shows small mean absolute deviations in the ionization energies, electron affinities, and static polarizabilities of atoms in the G2-1 set. In particular, the Lagrange-sinc basis set shows high accuracy with rapid convergence in describing density or orbital changes by an external electric field. Moreover, the Lagrange-sinc basis set can readily improve its accuracy toward a complete basis set limit by simply decreasing the scaling factor regardless of systems.
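
    As a reminder of what a Lagrange-sinc (cardinal sinc) basis function looks like, the hedged sketch below interpolates a Gaussian on a uniform grid with spacing equal to the scaling factor h; the test function and grid extent are illustrative, and the sketch does not reproduce any of the Kohn-Sham machinery of the paper.

```python
import numpy as np

def lagrange_sinc(x, k, h):
    """Cardinal sinc basis function centred on grid point x_k = k*h.

    np.sinc(t) = sin(pi*t)/(pi*t), so this equals 1 at x_k and 0 at every
    other grid point, the cardinality property Lagrange-sinc bases rely on.
    """
    return np.sinc((x - k * h) / h)

# expand a smooth test function on a uniform grid and evaluate it off-grid
h = 0.226                        # scaling factor (grid spacing), as quoted in the abstract
k = np.arange(-40, 41)           # grid indices
coeffs = np.exp(-(k * h) ** 2)   # interpolation coefficients = function values on the grid
x = np.linspace(-3, 3, 601)
approx = sum(c * lagrange_sinc(x, ki, h) for ki, c in zip(k, coeffs))
exact = np.exp(-x ** 2)
print(np.max(np.abs(approx - exact)))   # interpolation error shrinks as h decreases
```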

  20. Generating Sudoku puzzles and its applications in teaching mathematics

    NASA Astrophysics Data System (ADS)

    Evans, Ryan; Lindner, Brett; Shi, Yixun

    2011-07-01

    This article presents a few methods for generating Sudoku puzzles. These methods are developed based on the concepts of matrices, permutations, and modular functions, and can therefore be used to form application examples or student projects when teaching various mathematics courses. Mathematical properties of these methods are studied, connections between the methods are investigated, and student projects are suggested. Since most students tend to enjoy games, studies like this may help raise students' interest and enhance their problem-solving skills.
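
    A minimal example of such a construction is sketched below: a canonical solution grid defined by a modular-arithmetic pattern is randomized by permuting rows within bands, bands, columns within stacks, stacks, and the nine symbols. This is one standard permutation-based construction, offered as an illustration rather than the specific methods of the article.

```python
import random

def generate_solution(seed=None):
    """Generate a completed 9x9 Sudoku grid from a base pattern plus
    row/column/band/stack and symbol permutations."""
    rng = random.Random(seed)
    base = 3

    def pattern(r, c):                      # canonical valid grid (modular construction)
        return (base * (r % base) + r // base + c) % 9

    def shuffled(seq):
        seq = list(seq)
        rng.shuffle(seq)
        return seq

    # permute rows within each band, the bands, columns within each stack, the stacks
    rows = [b * base + r for b in shuffled(range(base)) for r in shuffled(range(base))]
    cols = [s * base + c for s in shuffled(range(base)) for c in shuffled(range(base))]
    nums = shuffled(range(1, 10))           # relabel the nine symbols

    return [[nums[pattern(r, c)] for c in cols] for r in rows]

# usage: print one random valid solution; clues can then be removed to form a puzzle
for row in generate_solution(seed=42):
    print(row)
```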

  1. Subcellular location prediction of proteins using support vector machines with alignment of block sequences utilizing amino acid composition.

    PubMed

    Tamura, Takeyuki; Akutsu, Tatsuya

    2007-11-30

    Subcellular location prediction of proteins is an important and well-studied problem in bioinformatics. This is a problem of predicting which part in a cell a given protein is transported to, where an amino acid sequence of the protein is given as an input. This problem is becoming more important since information on subcellular location is helpful for annotation of proteins and genes and the number of complete genomes is rapidly increasing. Since existing predictors are based on various heuristics, it is important to develop a simple method with high prediction accuracies. In this paper, we propose a novel and general predicting method by combining techniques for sequence alignment and feature vectors based on amino acid composition. We implemented this method with support vector machines on plant data sets extracted from the TargetP database. Through fivefold cross validation tests, the obtained overall accuracies and average MCC were 0.9096 and 0.8655 respectively. We also applied our method to other datasets including that of WoLF PSORT. Although there is a predictor which uses the information of gene ontology and yields higher accuracy than ours, our accuracies are higher than existing predictors which use only sequence information. Since such information as gene ontology can be obtained only for known proteins, our predictor is considered to be useful for subcellular location prediction of newly-discovered proteins. Furthermore, the idea of combination of alignment and amino acid frequency is novel and general so that it may be applied to other problems in bioinformatics. Our method for plant is also implemented as a web-system and available on http://sunflower.kuicr.kyoto-u.ac.jp/~tamura/slpfa.html.
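
    The amino-acid-composition part of such a feature vector is straightforward to sketch; the Python snippet below computes the 20-dimensional relative-frequency vector for a sequence (or, by slicing, for a block of it). The example sequence is arbitrary and the SVM step is only indicated in a comment.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """20-dimensional amino acid composition vector: the relative frequency of
    each residue in the sequence. Block-wise variants (e.g. the composition of
    an N-terminal segment) can be built by slicing the sequence first."""
    sequence = sequence.upper()
    n = sum(sequence.count(a) for a in AMINO_ACIDS) or 1
    return [sequence.count(a) / n for a in AMINO_ACIDS]

# usage sketch: feature vectors like these (one per sequence block) could be fed
# to an SVM classifier, e.g. sklearn.svm.SVC, for subcellular location prediction
features = aa_composition("MKTLLLTLVVVTIVCLDLGYT")
print(features)
```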

  2. An adjoint method of sensitivity analysis for residual vibrations of structures subject to impacts

    NASA Astrophysics Data System (ADS)

    Yan, Kun; Cheng, Gengdong

    2018-03-01

    For structures subject to impact loads, residual vibration reduction is more and more important as machines become faster and lighter. An efficient sensitivity analysis of residual vibration with respect to structural or operational parameters is indispensable for using a gradient-based optimization algorithm, which reduces the residual vibration in either an active or a passive way. In this paper, an integrated quadratic performance index is used as the measure of the residual vibration, since it globally measures the residual vibration response and its calculation can be greatly simplified with the Lyapunov equation. Several sensitivity analysis approaches for this performance index were previously developed based on the assumption that the initial excitations of the residual vibration were given and independent of the structural design. Since the excitations resulting from the impact load often depend on the structural design, this paper proposes a new efficient sensitivity analysis method for the residual vibration of structures subject to impacts that accounts for this dependence. The new method is developed by combining two existing methods and using the adjoint variable approach. Three numerical examples are carried out and demonstrate the accuracy of the proposed method. The numerical results show that the dependence of the initial excitations on the structural design variables may strongly affect the accuracy of the sensitivities.
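
    The Lyapunov shortcut mentioned above can be sketched directly: for a stable free response x' = A x, the integrated quadratic index equals x0' P x0, where P solves A'P + P A + Q = 0, so no time integration is needed. The single-degree-of-freedom values below are hypothetical.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def residual_vibration_index(A, Q, x0):
    """Integrated quadratic performance index J = ∫ x'Qx dt for the free
    response x' = A x starting from x0 (the post-impact state).

    With A stable, J = x0' P x0 where P solves A'P + P A + Q = 0.
    """
    P = solve_continuous_lyapunov(A.T, -Q)
    return float(x0 @ P @ x0)

# toy example: a damped single-DOF oscillator (stiffness k, damping c, unit mass)
k, c = 4.0, 0.4
A = np.array([[0.0, 1.0],
              [-k, -c]])
Q = np.eye(2)
x0 = np.array([0.0, 1.0])        # unit velocity imparted by the impact
print(residual_vibration_index(A, Q, x0))
```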

  3. Implied alignment: a synapomorphy-based multiple-sequence alignment method and its use in cladogram search

    NASA Technical Reports Server (NTRS)

    Wheeler, Ward C.

    2003-01-01

    A method to align sequence data based on parsimonious synapomorphy schemes generated by direct optimization (DO; earlier termed optimization alignment) is proposed. DO directly diagnoses sequence data on cladograms without an intervening multiple-alignment step, thereby creating topology-specific, dynamic homology statements. Hence, no multiple-alignment is required to generate cladograms. Unlike general and globally optimal multiple-alignment procedures, the method described here, implied alignment (IA), takes these dynamic homologies and traces them back through a single cladogram, linking the unaligned sequence positions in the terminal taxa via DO transformation series. These "lines of correspondence" link ancestor-descendent states and, when displayed as linearly arrayed columns without hypothetical ancestors, are largely indistinguishable from standard multiple alignment. Since this method is based on synapomorphy, the treatment of certain classes of insertion-deletion (indel) events may be different from that of other alignment procedures. As with all alignment methods, results are dependent on parameter assumptions such as indel cost and transversion:transition ratios. Such an IA could be used as a basis for phylogenetic search, but this would be questionable since the homologies derived from the implied alignment depend on its natal cladogram and any variance, between DO and IA + Search, due to heuristic approach. The utility of this procedure in heuristic cladogram searches using DO and the improvement of heuristic cladogram cost calculations are discussed. © 2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.

  4. Predicting microRNA-disease associations using label propagation based on linear neighborhood similarity.

    PubMed

    Li, Guanghui; Luo, Jiawei; Xiao, Qiu; Liang, Cheng; Ding, Pingjian

    2018-05-12

    Interactions between microRNAs (miRNAs) and diseases can yield important information for uncovering novel prognostic markers. Since experimental determination of disease-miRNA associations is time-consuming and costly, attention has been given to designing efficient and robust computational techniques for identifying undiscovered interactions. In this study, we present a label propagation model with linear neighborhood similarity, called LPLNS, to predict unobserved miRNA-disease associations. Additionally, a preprocessing step is performed to derive new interaction likelihood profiles that will contribute to the prediction since new miRNAs and diseases lack known associations. Our results demonstrate that the LPLNS model based on the known disease-miRNA associations could achieve impressive performance with an AUC of 0.9034. Furthermore, we observed that the LPLNS model based on new interaction likelihood profiles could improve the performance to an AUC of 0.9127. This was better than other comparable methods. In addition, case studies also demonstrated our method's outstanding performance for inferring undiscovered interactions between miRNAs and diseases, especially for novel diseases. Copyright © 2018. Published by Elsevier Inc.
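
    A minimal sketch of closed-form label propagation on a similarity graph is given below; it is generic, not the LPLNS model itself, and the small similarity matrix, known-association matrix and restart parameter are illustrative assumptions.

```python
import numpy as np

def label_propagation(S, Y, alpha=0.5):
    """Closed-form label propagation on a similarity graph.

    S : (n x n) row-normalized similarity matrix (e.g. a neighborhood
        similarity among miRNAs); Y : (n x m) known association labels.
    Returns F = (1 - alpha) * (I - alpha * S)^-1 * Y, the propagated scores.
    """
    n = S.shape[0]
    return (1.0 - alpha) * np.linalg.solve(np.eye(n) - alpha * S, Y)

# toy usage: 4 miRNAs, 2 diseases, one known association per disease
S = np.array([[0.0, 0.7, 0.3, 0.0],
              [0.6, 0.0, 0.4, 0.0],
              [0.2, 0.3, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
Y = np.array([[1, 0], [0, 0], [0, 1], [0, 0]], dtype=float)
print(label_propagation(S, Y))   # higher scores suggest candidate associations
```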

  5. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team 1998

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enabled comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) will describe the theory of our new method and finally (3) summarize a few of the results.

  6. Summary of Work for Joint Research Interchanges with DARWIN Integrated Product Team

    NASA Technical Reports Server (NTRS)

    Hesselink, Lambertus

    1999-01-01

    The intent of Stanford University's SciVis group is to develop technologies that enabled comparative analysis and visualization techniques for simulated and experimental flow fields. These techniques would then be made available under the Joint Research Interchange for potential injection into the DARWIN Workspace Environment (DWE). In the past, we have focused on techniques that exploited feature based comparisons such as shock and vortex extractions. Our current research effort focuses on finding a quantitative comparison of general vector fields based on topological features. Since the method relies on topological information, grid matching and vector alignment is not needed in the comparison. This is often a problem with many data comparison techniques. In addition, since only topology based information is stored and compared for each field, there is a significant compression of information that enables large databases to be quickly searched. This report will briefly (1) describe current technologies in the area of comparison techniques, (2) will describe the theory of our new method and finally (3) summarize a few of the results.

  7. A Lesson Based on Student-Generated Ideas: A Practical Example Highlighting the Role of a Teacher

    ERIC Educational Resources Information Center

    Fuentes, Sarah Quebec

    2011-01-01

    The role of a teacher is different from that in traditional mathematics instruction when the implementation of a lesson is based on students' ideas. The author's experience teaching the same lesson (of the latter format) to two different classes of pre-service teachers in an elementary mathematics methods course is described. Since whole-class…

  8. Credit Where Credit Is Due: An Approach to Education Returns Based on Shapley Values

    ERIC Educational Resources Information Center

    Barakat, Bilal; Crespo Cuaresma, Jesus

    2017-01-01

    We propose the use of methods based on the Shapley value to account for the fact that private returns to lower levels of educational attainment should be credited with part of the returns from higher attainment levels, since achieving primary education is a necessary condition for entering the secondary and tertiary educational levels. We apply the proposed…
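
    The Shapley-value bookkeeping behind such an attribution can be sketched in a few lines: each attainment level is credited with its marginal contribution averaged over all orders in which the levels could be "assembled". The wage-return numbers below are hypothetical and serve only to show the mechanics.

```python
from itertools import permutations
from math import factorial

def shapley_values(players, value):
    """Shapley value of each player: its marginal contribution to `value`,
    averaged over every order in which the coalition can be assembled."""
    shapley = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition, so_far = frozenset(), 0.0
        for p in order:
            new = value(coalition | {p})
            shapley[p] += new - so_far
            coalition, so_far = coalition | {p}, new
    n_orders = factorial(len(players))
    return {p: v / n_orders for p, v in shapley.items()}

# hypothetical total wage returns (in %) for sets of attainment levels; a level
# only pays off if its prerequisites are also present in the coalition
def wage_return(levels):
    if "primary" not in levels:
        return 0.0
    if "secondary" not in levels:
        return 10.0                      # primary only (tertiary inaccessible)
    return 25.0 if "tertiary" not in levels else 45.0

print(shapley_values(["primary", "secondary", "tertiary"], wage_return))
```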

  9. Mapping a Strategic Plan for Health: Community-Based Participatory Research with Underserved, Low-Income, Urban Neighborhoods

    ERIC Educational Resources Information Center

    Zandee, Gail

    2012-01-01

    Since 2002, community-based participatory research methods have been used by the Calvin College Nursing Department to map out a strategic health plan for three urban, low-income, underserved neighborhoods. Nine focus groups and 449 door-to-door health surveys were completed across the three urban neighborhoods between 2002 and 2004. Neighborhood…

  10. The dynamic micro computed tomography at SSRF

    NASA Astrophysics Data System (ADS)

    Chen, R.; Xu, L.; Du, G.; Deng, B.; Xie, H.; Xiao, T.

    2018-05-01

    Synchrotron radiation micro-computed tomography (SR-μCT) is a critical technique for quantitatively characterizing the 3D internal structure of samples; recently, dynamic SR-μCT has attracted vast attention since it can follow the evolution of a sample's three-dimensional structure. A dynamic μCT method based on a monochromatic beam was developed at the X-ray Imaging and Biomedical Application Beamline at the Shanghai Synchrotron Radiation Facility by combining a compressed-sensing-based CT reconstruction algorithm with a hardware upgrade. The monochromatic-beam-based method can provide quantitative information and a lower dose than the white-beam-based method, in which the lower-energy part of the beam is absorbed by the sample rather than contributing to the final imaging signal. The developed method was successfully used to investigate the compression of the air sac during respiration in a bell cricket, providing new knowledge for further research on the insect respiratory system.

  11. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in the microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation results in reducing the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performances of the regression-based methods, we propose shrinkage regression-based methods. Our methods take the advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. Besides, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
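
    A hedged sketch of the general recipe, i.e. select correlated complete genes, fit a least-squares regression on the observed entries, shrink the coefficients, and predict the missing ones, is given below. The fixed shrinkage factor and the number of neighbor genes are illustrative choices, not the estimator proposed in the paper.

```python
import numpy as np

def impute_missing(X, k=5, shrink=0.8):
    """Impute NaNs gene-by-gene with shrunken regression on similar genes.

    For each gene (row) with missing values, the k complete genes with the
    highest |Pearson correlation| over the observed columns are used as
    regressors; the least-squares coefficients are shrunk toward zero by the
    factor `shrink` before predicting the missing entries.
    """
    X = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]               # candidate regressor genes
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any() or miss.all():
            continue
        obs = ~miss
        corr = np.array([abs(np.corrcoef(row[obs], g[obs])[0, 1]) for g in complete])
        top = complete[np.argsort(corr)[-k:]]            # k most similar complete genes
        A = np.column_stack([top[:, obs].T, np.ones(obs.sum())])
        beta, *_ = np.linalg.lstsq(A, row[obs], rcond=None)
        beta[:-1] *= shrink                              # shrink the slopes, keep the intercept
        X[i, miss] = np.column_stack([top[:, miss].T, np.ones(miss.sum())]) @ beta
    return X

# toy usage: one missing entry in a 50-gene x 10-condition matrix
rng = np.random.default_rng(3)
data = rng.normal(size=(50, 10))
data[0, 2] = np.nan
filled = impute_missing(data)
```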

  12. Rotational stellar structures based on the Lagrangian variational principle

    NASA Astrophysics Data System (ADS)

    Yasutake, Nobutoshi; Fujisawa, Kotaro; Yamada, Shoichi

    2017-06-01

    A new method for multi-dimensional stellar structures is proposed in this study. As for stellar evolution calculations, the Heney method is the defacto standard now, but basically assumed to be spherical symmetric. It is one of the difficulties for deformed stellar-evolution calculations to trace the potentially complex movements of each fluid element. On the other hand, our new method is very suitable to follow such movements, since it is based on the Lagrange coordinate. This scheme is also based on the variational principle, which is adopted to the studies for the pasta structures inside of neutron stars. Our scheme could be a major break through for evolution calculations of any types of deformed stars: proto-planets, proto-stars, and proto-neutron stars, etc.

  13. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded from a given pixel by a low-resolution hyperspectral remote sensor, even leaving aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is a frontier research area in remote sensing. Unmixing algorithms based on geometry have become popular since hyperspectral images possess abundant spectral information and the mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the non-linear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the convex minimum-volume formulation with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions jointly, and the final unmixing result is closer to reality and more accurate. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method correctly recovers the distinct signatures without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
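
    For context, the abundance-estimation step of the linear mixing model (non-negativity plus a soft sum-to-one constraint) can be sketched as below; this is a generic fully-constrained least-squares device, not the MVSAC algorithm, and the endmember matrix, weights and noise level are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def fcls_abundances(E, y, delta=1e3):
    """Abundance fractions for one pixel under the linear mixing model.

    E : (bands x p) endmember matrix, y : (bands,) pixel spectrum.
    Non-negativity comes from NNLS; the sum-to-one constraint is enforced
    softly by appending a heavily weighted row of ones (weight `delta`).
    """
    E_aug = np.vstack([E, delta * np.ones(E.shape[1])])
    y_aug = np.append(y, delta)
    a, _ = nnls(E_aug, y_aug)
    return a

# toy usage: 3 endmembers, a pixel mixed as 0.6/0.3/0.1 plus noise
rng = np.random.default_rng(4)
E = rng.random((20, 3))
true_a = np.array([0.6, 0.3, 0.1])
pixel = E @ true_a + rng.normal(0, 0.01, 20)
print(fcls_abundances(E, pixel))
```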

  14. Manpower studies for the United States. Part II. Demand for eye care. A public opinion poll based upon a Gallup poll survey.

    PubMed

    Reinecke, R D; Steinberg, T

    1981-04-01

    This is the second in the series of Ophthalmology Manpower Studies. Part I presented estimates of disease prevalence and incidence, the average amount of time required to care for such conditions, and based on that information, the total hours of ophthalmological services required to care for all the projected need in the population. Using different estimates of the average number of hours worked per year per ophthalmologist (based on a 35, 40 and 48 hours/week in patient care), estimates of the total number of ophthalmologists required were calculated. This method is basically similar to the method later adopted by the Graduate Medical Education National Advisory Committee (GMENAC) to arrive at estimates of hours of ophthalmological services required for 1990. However, instead of using all the need present in the population, the GMENAC panel chose to use an "adjusted-needs based" model as a compromise between total need and actual utilization, the former being an overestimation and the latter being an underestimation since it is in part a function of the barriers to medical care. Since some of these barriers to medical care include informational factors, as well as availability and accessibility, this study was undertaken to assess the utilization of these services and the adequacy of present ophthalmological manpower in the opinion of the consumer. Also, since the consumer's choice or behavior depends on being informed about the differences between optometrists and ophthalmologists, such knowledge was assessed and the responses further evaluated after explanatory statements were made to the responders.

  15. A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data were acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
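
    A deliberately simplified sketch of blind block-size estimation from artifact periodicity is shown below; it replaces the paper's 2×2 cross-differential filter, morphological cleanup and MLE with a plain first-difference profile and a periodicity score, and the synthetic blocking image is an assumption.

```python
import numpy as np

def estimate_block_size(img, candidates=range(2, 33)):
    """Blind block-size estimate from the periodicity of blocking artifacts.

    The absolute horizontal first difference is summed over rows, and each
    candidate period is scored by the mean of that profile on its grid
    positions; among near-best scores (multiples of the true period score
    similarly), the smallest period is kept.
    """
    f = img.astype(float)
    profile = np.abs(np.diff(f, axis=1)).sum(axis=0)
    profile -= profile.mean()

    scores = {b: profile[np.arange(b - 1, profile.size, b)].mean() for b in candidates}
    best = max(scores.values())
    return min(b for b, s in scores.items() if s >= 0.9 * best)

# toy usage: blocking artifacts simulated by a per-block DC shift on an 8x8 grid
rng = np.random.default_rng(5)
img = rng.normal(128, 5, (256, 256))
img += np.kron(rng.normal(0, 8, (32, 32)), np.ones((8, 8)))
print(estimate_block_size(img))        # expected to report 8
```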

  16. High-pressure torsion for new hydrogen storage materials.

    PubMed

    Edalati, Kaveh; Akiba, Etsuo; Horita, Zenji

    2018-01-01

    High-pressure torsion (HPT) is widely used as a severe plastic deformation technique to create ultrafine-grained structures with promising mechanical and functional properties. Since 2007, the method has been employed to enhance the hydrogenation kinetics in different Mg-based hydrogen storage materials. Recent studies showed that the method is effective not only for increasing the hydrogenation kinetics but also for improving the hydrogenation activity, enhancing the air resistivity and, more importantly, synthesizing new nanostructured hydrogen storage materials with high densities of lattice defects. This manuscript reviews some major findings on the impact of the HPT process on the hydrogen storage performance of different titanium-based and magnesium-based materials.

  17. Scrum-Based Learning Environment: Fostering Self-Regulated Learning

    ERIC Educational Resources Information Center

    Linden, Tanya

    2018-01-01

    Academics teaching software development courses are experimenting with teaching methods aiming to improve students' learning experience and learning outcomes. Since Agile software development is gaining popularity in industry due to positive effects on managing projects, academics implement similar Agile approaches in student-centered learning…

  18. Relating the 2010 signalized intersection methodology to alternate approaches in the context of NYC conditions.

    DOT National Transportation Integrated Search

    2013-11-01

    The Highway Capacity Manual (HCM) has had a delay-based level of service methodology for signalized intersections since 1985. : The 2010 HCM has revised the method for calculating delay. This happened concurrent with such jurisdictions as NYC reviewi...

  19. Bivariate drought frequency analysis using the copula method

    NASA Astrophysics Data System (ADS)

    Mirabbasi, Rasoul; Fakheri-Fard, Ahmad; Dinpashoh, Yagob

    2012-04-01

    Droughts are major natural hazards with significant environmental and economic impacts. In this study, two-dimensional copulas were applied to the analysis of the meteorological drought characteristics of the Sharafkhaneh gauge station, located in the northwest of Iran. Two major drought characteristics, duration and severity, as defined by the standardized precipitation index, were extracted from observed drought events. Since drought duration and severity exhibited a significant correlation and since they were modeled using different distributions, copulas were used to construct the joint distribution function of the drought characteristics. The copula parameters were estimated using the Inference Function for Margins method. Several copulas were tested in order to determine the best data fit. According to the error analysis and the tail dependence coefficient, the Galambos copula provided the best fit for the observed drought data. Some bivariate probabilistic properties of droughts, based on the derived copula-based joint distribution, were also investigated. These probabilistic properties can provide useful information for water resource planning and management.
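
    As a concrete illustration of how a fitted copula couples the two marginal distributions, the sketch below computes a joint exceedance probability with a Galambos copula. The marginal distributions and the copula parameter theta are illustrative assumptions, not the values fitted to the Sharafkhaneh data.

```python
# Sketch of joint drought-probability computation with a Galambos copula.
# Marginals and theta below are illustrative assumptions, not fitted values.
import numpy as np
from scipy import stats

def galambos_cdf(u, v, theta):
    """Galambos copula C(u, v) for theta > 0."""
    g = ((-np.log(u)) ** (-theta) + (-np.log(v)) ** (-theta)) ** (-1.0 / theta)
    return u * v * np.exp(g)

# Hypothetical fitted marginals: duration ~ Gamma, severity ~ Lognormal.
F_D = stats.gamma(a=2.0, scale=3.0).cdf      # drought duration (months)
F_S = stats.lognorm(s=0.8, scale=4.0).cdf    # drought severity

def joint_exceedance(d, s, theta=1.5):
    """P(duration > d AND severity > s) via the survival-copula identity."""
    u, v = F_D(d), F_S(s)
    return 1.0 - u - v + galambos_cdf(u, v, theta)

print(joint_exceedance(6.0, 8.0))   # probability both thresholds are exceeded
```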

  20. Patch-based frame interpolation for old films via the guidance of motion paths

    NASA Astrophysics Data System (ADS)

    Xia, Tianran; Ding, Youdong; Yu, Bing; Huang, Xi

    2018-04-01

    Due to improper preservation, traditional films often exhibit frame loss after digitization. To deal with this problem, this paper presents a new adaptive patch-based method of frame interpolation guided by motion paths. Our method is divided into three steps. First, we compute motion paths between two reference frames using optical flow estimation. Then, adaptive bidirectional interpolation with hole filling is applied to generate pre-intermediate frames. Finally, patch matching is used to interpolate the intermediate frames from the most similar patches. Since the patch matching is based on the pre-intermediate frames, which embody the motion-path constraint, the resulting frame interpolation looks natural and free of artifacts. We tested different types of old film sequences and compared with other methods; the results show that our method achieves the desired performance without hole or ghost effects.

  1. Fault Diagnosis for Micro-Gas Turbine Engine Sensors via Wavelet Entropy

    PubMed Central

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, existing methods require more resources than can be provided in some situations. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is used to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Experiments on this method are then carried out on a real micro gas turbine engine, in which four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient. PMID:22163734

  2. Fault diagnosis for micro-gas turbine engine sensors via wavelet entropy.

    PubMed

    Yu, Bing; Liu, Dongdong; Zhang, Tianhong

    2011-01-01

    Sensor fault diagnosis is necessary to ensure the normal operation of a gas turbine system. However, existing methods require more resources than can be provided in some situations. Since the sensor readings are directly affected by sensor state, sensor fault diagnosis can be performed by extracting features of the measured signals. This paper proposes a novel fault diagnosis method for sensors based on wavelet entropy. Based on wavelet theory, wavelet decomposition is used to decompose the signal at different scales. Then the instantaneous wavelet energy entropy (IWEE) and instantaneous wavelet singular entropy (IWSE) are defined based on previous wavelet entropy theory. Subsequently, a fault diagnosis method for gas turbine sensors is proposed based on the results of a numerically simulated example. Experiments on this method are then carried out on a real micro gas turbine engine, in which four types of faults with different magnitudes are presented. The experimental results show that the proposed method for sensor fault diagnosis is efficient.
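
    To make the wavelet entropy idea in the two records above concrete, the sketch below computes a simple wavelet energy entropy feature: decompose the signal, compute the relative energy per scale, and take the Shannon entropy of that distribution. It does not reproduce the paper's instantaneous IWEE/IWSE definitions; the wavelet family, decomposition level, and simulated fault are assumptions.

```python
# Minimal sketch of a wavelet energy entropy feature in the spirit of IWEE
# (not the paper's exact instantaneous definitions).
import numpy as np
import pywt

def wavelet_energy_entropy(signal, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    p = energies / energies.sum()            # relative energy per scale
    return -np.sum(p * np.log(p + 1e-12))    # Shannon entropy over scales

# A healthy sensor reading vs. one with an added spike fault.
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 50 * t)
faulty = clean.copy()
faulty[500:510] += 3.0                       # simulated spike fault
print(wavelet_energy_entropy(clean), wavelet_energy_entropy(faulty))
```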

  3. On isocentre adjustment and quality control in linear accelerator based radiosurgery with circular collimators and room lasers.

    PubMed

    Treuer, H; Hoevels, M; Luyken, K; Gierich, A; Kocher, M; Müller, R P; Sturm, V

    2000-08-01

    We have developed a densitometric method for measuring the isocentric accuracy and the accuracy of marking the isocentre position for linear accelerator based radiosurgery with circular collimators and room lasers. Isocentric shots are used to determine the accuracy of marking the isocentre position with room lasers and star shots are used to determine the wobble of the gantry and table rotation movement, the effect of gantry sag, the stereotactic collimator alignment, and the minimal distance between gantry and table rotation axes. Since the method is based on densitometric measurements, beam spot stability is implicitly tested. The method developed is also suitable for quality assurance and has proved to be useful in optimizing isocentric accuracy. The method is simple to perform and only requires a film box and film scanner for instrumentation. Thus, the method has the potential to become widely available and may therefore be useful in standardizing the description of linear accelerator based radiosurgical systems.

  4. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
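
    The toy sketch below illustrates the underlying idea of law-constrained fusion: each parameter is measured with noise, and the fused values are chosen to stay close to the measurements while satisfying a known physical law exactly. The law (Ohm's law, V = I·R), the noise levels, and the plain least-squares formulation are assumptions for illustration; this is not the paper's specific fuser construction or its performance bounds.

```python
# Toy sketch of "least violation of physical laws" fusion for three
# parameters (V, I, R), each measured with noise, subject to V = I * R.
import numpy as np
from scipy.optimize import minimize

meas = np.array([11.5, 2.2, 4.8])          # measured V, I, R (hypothetical)
sigma = np.array([0.5, 0.1, 0.2])          # assumed measurement std devs

def cost(x):
    # Weighted squared deviation from the raw measurements.
    return np.sum(((x - meas) / sigma) ** 2)

law = {"type": "eq", "fun": lambda x: x[0] - x[1] * x[2]}   # V - I*R = 0
fused = minimize(cost, meas, constraints=[law]).x
print(fused, fused[0] - fused[1] * fused[2])   # law residual ~ 0
```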

  5. Detection of no-model input-output pairs in closed-loop systems.

    PubMed

    Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio

    2017-11-01

    The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model and thus the performance of model-based controllers. The methods available in the literature address only the open-loop case, in which there is no controller forcing the main diagonal of the transfer matrix towards one and all the other terms towards zero. In this paper, a modification of a previous method able to detect no-model IO pairs in open-loop systems is presented, adapted to perform this task in closed-loop systems. Tests performed with the traditional methods and with the proposed one show its effectiveness.

  6. Levitation force of small clearance superconductor-magnet system under non-coaxial condition

    NASA Astrophysics Data System (ADS)

    Xu, Jimin; Jin, Yingze; Yuan, Xiaoyang; Miao, Xusheng

    2017-03-01

    A novel superconducting tilting-pad bearing was proposed for advanced research on the reusable liquid hydrogen turbopump of a liquid rocket. The bearing is a combination of a superconducting magnetic bearing and a hydrodynamic fluid-film bearing. Since the cryogenic fuel that activates the superconducting state and forms the hydrodynamic fluid film has very low viscosity, the bearing clearance must be very small. This study focuses on the superconducting levitation force in this kind of small-clearance superconductor-magnet system. Based on the Bean critical state model and a three-dimensional finite element method, an analysis method is presented to obtain the levitation force under such conditions. Because of the complicated operating conditions and structural arrangement in a liquid rocket, the center lines of the bulk superconductor and the magnet rotor will usually be non-coaxial. Superconducting levitation forces in the axial and radial directions under this non-coaxial situation are also analyzed by the presented method.

  7. Photodynamic therapy in dermatology: past, present, and future

    NASA Astrophysics Data System (ADS)

    Darlenski, Razvigor; Fluhr, Joachim W.

    2013-06-01

    Photodynamic therapy (PDT) is a noninvasive therapeutic method first introduced in the field of dermatology. It is mainly used for the treatment of precancerous and superficial malignant skin tumors. Today PDT finds new applications not only for nononcologic dermatoses but also in other medical specialties such as otorhinolaryngology, ophthalmology, neurology, gastroenterology, and urology. We are witnessing a broadening of the spectrum of skin diseases that are treated by PDT. Since its introduction, the PDT protocol has evolved significantly in terms of increasing method efficacy and patient safety. In this era of evidence-based medicine, it is expected that much effort will be put into creating a worldwide-accepted consensus on PDT. A review of the current knowledge of PDT is given, and the historical basis of the method's evolution since its introduction in the 1900s is presented. Finally, future challenges of PDT are discussed, focusing on gaps that remain for research in the field.

  8. Magnetically-refreshable receptor platform structures for reusable nano-biosensor chips

    NASA Astrophysics Data System (ADS)

    Yoo, Haneul; Lee, Dong Jun; Cho, Dong-guk; Park, Juhun; Nam, Ki Wan; Tak Cho, Young; Park, Jae Yeol; Chen, Xing; Hong, Seunghun

    2016-01-01

    We developed a magnetically-refreshable receptor platform structure which can be integrated with a wide variety of nano-biosensor structures to build reusable nano-biosensor chips. This structure allows one to easily remove used receptor molecules from a biosensor surface and reuse the biosensor for repeated sensing operations. Using this structure, we demonstrated reusable immunofluorescence biosensors. Significantly, since our method allows one to place receptor molecules very close to a nano-biosensor surface, it can be utilized to build reusable carbon nanotube transistor-based biosensors, which require receptor molecules within a Debye length of the sensor surface. Furthermore, we show that a single sensor chip can be utilized to detect two different target molecules simply by replacing the receptor molecules using our method. Since this method does not rely on any chemical reaction to refresh sensor chips, it can be utilized for versatile biosensor structures and virtually any receptor molecular species.

  9. National Audubon society's technology initiatives for bird conservation: a summary of application development for the Christmas bird count

    Treesearch

    Kathy Dale

    2005-01-01

    Since 1998, Audubon's Christmas Bird Count (CBC) has been supported by an Internet-based data entry application that was initially designed to accommodate the traditional paper-based methods of this long-running bird monitoring program. The first efforts to computerize the data and the entry procedures have informed a planned strategy to revise the current...

  10. Single-channel EEG-based mental fatigue detection based on deep belief network.

    PubMed

    Pinyi Li; Wenhui Jiang; Fei Su

    2016-08-01

    Mental fatigue has a pernicious influence on road and workplace safety and is also a symptom of many acute and chronic illnesses, since the ability to concentrate, respond and judge quickly decreases during fatigue or drowsiness. Electroencephalography (EEG) has been proven to be a robust physiological indicator of human cognitive state over the last few decades, but most existing EEG-based fatigue detection methods have limited accuracy. This paper proposes a single-channel EEG-based mental fatigue detection method based on a Deep Belief Network (DBN). Nonlinear features fused from specified sub-bands and dynamic analysis, 21 features in total, are extracted as the input of the DBN to discriminate three classes of mental state: alert, slight fatigue and severe fatigue. Experimental results show the good performance of the proposed model compared with state-of-the-art methods.

  11. Shaping low-thrust trajectories with thrust-handling feature

    NASA Astrophysics Data System (ADS)

    Taheri, Ehsan; Kolmanovsky, Ilya; Atkins, Ella

    2018-02-01

    Shape-based methods are becoming popular in low-thrust trajectory optimization due to their fast computation speeds. In existing shape-based methods constraints are treated at the acceleration level but not at the thrust level. These two constraint types are not equivalent since spacecraft mass decreases over time as fuel is expended. This paper develops a shape-based method based on a Fourier series approximation that is capable of representing trajectories defined in spherical coordinates and that enforces thrust constraints. An objective function can be incorporated to minimize overall mission cost, i.e., achieve minimum ΔV . A representative mission from Earth to Mars is studied. The proposed Fourier series technique is demonstrated capable of generating feasible and near-optimal trajectories. These attributes can facilitate future low-thrust mission designs where different trajectory alternatives must be rapidly constructed and evaluated.

  12. The relation between periods’ identification and noises in hydrologic series data

    NASA Astrophysics Data System (ADS)

    Sang, Yan-Fang; Wang, Dong; Wu, Ji-Chun; Zhu, Qing-Ping; Wang, Ling

    2009-04-01

    Identification of dominant periods is a typical and important issue in hydrologic series analysis, since it is the basis of building effective stochastic models, understanding complex hydrologic processes, etc. However, it remains a difficult task due to the influence of many interrelated factors, such as noise in hydrologic series data. In this paper, the strong influence of noise on period identification is analyzed first. Then, based on two conventional methods of hydrologic series analysis, wavelet analysis (WA) and maximum entropy spectral analysis (MESA), a new period-identification method for hydrologic series, main series spectral analysis (MSSA), is put forward; its main idea is to identify the periods of the main series obtained after reducing hydrologic noise. Various methods (including fast Fourier transform (FFT), MESA and MSSA) were applied to both synthetic and observed hydrologic series. Results show that the conventional methods (FFT and MESA) are not as good as expected due to the strong influence of noise, whereas this influence is much weaker for the new method MSSA. In addition, by using the new de-noising method proposed in this paper, which is suitable for both normal and skew noise, the results are more reasonable, since the noise separated from hydrologic series generally follows skew probability distributions. In conclusion, based on comprehensive analyses, the proposed method MSSA can improve period identification by effectively reducing the influence of hydrologic noise.
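
    The sketch below illustrates the general idea only: de-noise the series by wavelet thresholding, then look for the dominant period in the spectrum of the de-noised ("main") series. It is not the authors' MSSA procedure or their specific de-noising rule; the wavelet, threshold rule, and synthetic 12-month cycle are assumptions.

```python
# Illustration of "de-noise first, then identify periods" (not the authors'
# MSSA method or their de-noising rule).
import numpy as np
import pywt

def denoise(series, wavelet="db4", level=4):
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Universal threshold estimated from the finest-scale coefficients.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(series)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

def dominant_period(series):
    x = series - series.mean()
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0)
    return 1.0 / freqs[np.argmax(spectrum[1:]) + 1]   # skip the zero frequency

# Synthetic monthly series: a 12-month cycle buried in noise.
rng = np.random.default_rng(1)
t = np.arange(600)
series = np.sin(2 * np.pi * t / 12) + 1.5 * rng.normal(size=t.size)
print(dominant_period(series), dominant_period(denoise(series)))
```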

  13. Sequential and simultaneous SLAR block adjustment. [spline function analysis for mapping

    NASA Technical Reports Server (NTRS)

    Leberl, F.

    1975-01-01

    Two sequential methods of planimetric SLAR (Side Looking Airborne Radar) block adjustment, with and without splines, and three simultaneous methods based on the principles of least squares are evaluated. A limited experiment with simulated SLAR images indicates that sequential block formation with splines followed by external interpolative adjustment is superior to the simultaneous methods such as planimetric block adjustment with similarity transformations. The use of the sequential block formation is recommended, since it represents an inexpensive tool for satisfactory point determination from SLAR images.

  14. Controlling the joint local false discovery rate is more powerful than meta-analysis methods in joint analysis of summary statistics from multiple genome-wide association studies.

    PubMed

    Jiang, Wei; Yu, Weichuan

    2017-02-15

    In genome-wide association studies (GWASs) of common diseases/traits, we often analyze multiple GWASs with the same phenotype together to discover associated genetic variants with higher power. Since it is difficult to access data with detailed individual measurements, summary-statistics-based meta-analysis methods have become popular for jointly analyzing datasets from multiple GWASs. In this paper, we propose a novel summary-statistics-based joint analysis method based on controlling the joint local false discovery rate (Jlfdr). We prove that our method is the most powerful summary-statistics-based joint analysis method when controlling the false discovery rate at a certain level. In particular, the Jlfdr-based method achieves higher power than commonly used meta-analysis methods when analyzing heterogeneous datasets from multiple GWASs. Simulation experiments demonstrate the superior power of our method over meta-analysis methods. Also, our method discovers more associations than meta-analysis methods on empirical datasets of four phenotypes. The R package is available at http://bioinformatics.ust.hk/Jlfdr.html.

  15. Software system safety

    NASA Technical Reports Server (NTRS)

    Uber, James G.

    1988-01-01

    Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.

  16. Joint image and motion reconstruction for PET using a B-spline motion model.

    PubMed

    Blume, Moritz; Navab, Nassir; Rafecas, Magdalena

    2012-12-21

    We present a novel joint image and motion reconstruction method for PET. The method is based on gated data and reconstructs an image together with a motion function. The motion function can be used to transform the reconstructed image to any of the input gates. All available events (from all gates) are used in the reconstruction. The presented method uses a B-spline motion model, together with a novel motion regularization procedure that does not need a regularization parameter (which is usually extremely difficult to adjust). Several image and motion grid levels are used in order to reduce the reconstruction time. In a simulation study, the presented method is compared to a recently proposed joint reconstruction method. While the presented method provides comparable reconstruction quality, it is much easier to use since no regularization parameter has to be chosen. Furthermore, since the B-spline discretization of the motion function depends on fewer parameters than a displacement field, the presented method is considerably faster and consumes less memory than its counterpart. The method is also applied to clinical data, for which a novel purely data-driven gating approach is presented.

  17. Variational Bayes method for estimating transit route OD flows using APC data.

    DOT National Transportation Integrated Search

    2017-01-31

    The focus of this study is on the use of large quantities of APC data to estimate OD flows : for transit bus routes. Since most OD flow estimation methodologies based on boarding and : alighting counts were developed before the prevalence of APC tech...

  18. Label-free screening of foodborne Salmonella using surface plasmon resonance imaging

    USDA-ARS?s Scientific Manuscript database

    Since 15 pathogens cause approximately 95% of the foodborne infections, it is desirable to develop rapid and simultaneous screening methods for these major pathogens. In this study, we developed an immunoassay for Salmonella based on surface plasmon resonance imaging (SPRi). The sensor surface modif...

  19. Detecting overpressure using the Eaton and Equivalent Depth methods in Offshore Nova Scotia, Canada

    NASA Astrophysics Data System (ADS)

    Ernanda; Primasty, A. Q. T.; Akbar, K. A.

    2018-03-01

    Overpressure is an abnormally high subsurface pressure of any fluid that exceeds the hydrostatic pressure of a column of water or formation brine. In Offshore Nova Scotia, Canada, the values and depth of the overpressure zone are determined using the Eaton and equivalent depth methods, based on well data and normal compaction trend analysis: the equivalent depth method uses the effective vertical stress principle, while the Eaton method considers a physical property ratio (velocity). In this research, pressure evaluation was applicable only to the Penobscot L-30 well. An abnormal pressure is detected at a depth of 11,804 feet as a possible overpressure zone, based on the pressure gradient curve and calculations with the Eaton method (7241.3 psi) and the equivalent depth method (6619.4 psi). Shales within the Abenaki formation, especially the Baccaro Member, are estimated to be a possible overpressure zone due to a hydrocarbon generation mechanism.
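
    The sketch below shows textbook-style forms of the two estimators named above. The overburden and hydrostatic gradients, the Eaton exponent, the normal compaction trend, and the example velocities/equivalent depth are generic assumptions, not the values calibrated for the Penobscot L-30 well.

```python
# Hedged sketch of the Eaton and equivalent depth pore-pressure estimators.
OBG = 1.0      # overburden (vertical stress) gradient, psi/ft (assumed)
HYD = 0.465    # hydrostatic gradient, psi/ft (assumed)

def eaton_pressure(depth_ft, v_obs, v_normal, exponent=3.0):
    """Eaton: Pp = Sv - (Sv - Phyd) * (Vobs / Vnormal)^n."""
    sv = OBG * depth_ft
    p_hyd = HYD * depth_ft
    return sv - (sv - p_hyd) * (v_obs / v_normal) ** exponent

def equivalent_depth_pressure(depth_ft, equiv_depth_ft):
    """Equivalent depth: the effective stress at the target depth equals the
    effective stress at the shallower, normally compacted depth that shows
    the same velocity on the normal compaction trend."""
    sigma_eff = (OBG - HYD) * equiv_depth_ft
    return OBG * depth_ft - sigma_eff

# Example with hypothetical velocities and equivalent depth at 11,804 ft.
print(eaton_pressure(11804, v_obs=9000, v_normal=11000))
print(equivalent_depth_pressure(11804, equiv_depth_ft=8500))
```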

  20. Palmprint Recognition Across Different Devices.

    PubMed

    Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming

    2012-01-01

    In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smartphones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD.

  1. Palmprint Recognition across Different Devices

    PubMed Central

    Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming

    2012-01-01

    In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smartphones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD. PMID:22969380

  2. A New Hybrid-Multiscale SSA Prediction of Non-Stationary Time Series

    NASA Astrophysics Data System (ADS)

    Ghanbarzadeh, Mitra; Aminghafari, Mina

    2016-02-01

    Singular spectrum analysis (SSA) is a non-parametric method used in the prediction of non-stationary time series. It has two parameters that are difficult to determine, and the results are very sensitive to their values. Moreover, since SSA is a deterministic method, it does not give good results when the time series is contaminated with a high noise level or correlated noise. Therefore, we introduce a novel method to handle these problems. It is based on predicting the non-decimated wavelet (NDW) signals by SSA and then predicting the residuals by wavelet regression. The advantages of our method are the automatic determination of the parameters and the fact that it takes account of the stochastic structure of the time series. As shown on simulated and real data, we obtain better results than SSA, a non-parametric wavelet regression method, and the Holt-Winters method.

  3. Determination of excipient based solubility increases using the CheqSol method.

    PubMed

    Etherson, Kelly; Halbert, Gavin; Elliott, Moira

    2014-04-25

    Aqueous solubility is an essential characteristic assessed during drug development to determine a compound's drug-likeness, since solubility plays an important pharmaceutical role. However, nearly half of the drug candidates discovered today display poor water solubility, so methods have to be applied to increase solubility. The CheqSol method is a novel, rapid solubility-screening technique for ionisable compounds. The aim of this study is to determine whether the CheqSol method can be employed to determine the solubility increases of four test drugs (ibuprofen, gliclazide, atenolol and propranolol) induced by non-ionising excipients such as hydroxypropyl-β-cyclodextrin and poloxamers 407 and 188. CheqSol assays were performed for the drugs alone or in combination with varying solubiliser concentrations. The measured intrinsic solubility of all four drugs increased with all the excipients tested, in an excipient-concentration-dependent manner, providing results consistent with previous literature. The results demonstrate that it may be possible to use this method to determine the solubility increases induced by non-ionic solubilising excipients, with results comparable to standard equilibrium-based solubility techniques. Since the technique is automated and requires only small drug quantities, it may serve as a useful solubility or formulation screening tool, providing more detailed physicochemical information than multiwell-plate or similar visual systems.

  4. Recent trends related to the use of formal methods in software engineering

    NASA Technical Reports Server (NTRS)

    Prehn, Soren

    1986-01-01

    An account is given of some recent developments and trends related to the development and use of formal methods in software engineering. Ongoing activities in Europe are focussed on, since there seems to be a notable difference in attitude towards industrial usage of formal methods in Europe and in the U.S. A more detailed account is given of the currently most widespread formal method in Europe: the Vienna Development Method. Finally, the use of Ada is discussed in relation to the application of formal methods, and the potential for constructing Ada-specific tools based on that method is considered.

  5. Proof of concept of a "greener" protein purification/enrichment method based on carboxylate-terminated carbosilane dendrimer-protein interactions.

    PubMed

    González-García, Estefanía; Maly, Marek; de la Mata, Francisco Javier; Gómez, Rafael; Marina, María Luisa; García, María Concepción

    2016-11-01

    Protein sample preparation is a critical and unsustainable step, since it involves tedious methods that usually require large amounts of solvents. The development of new materials offers additional opportunities in protein sample preparation. This work explores, for the first time, the potential application of carboxylate-terminated carbosilane dendrimers to the purification/enrichment of proteins. Studies of dendrimer binding to proteins, based on measurements of protein fluorescence intensity and emission wavelengths, demonstrated the interaction between carboxylate-terminated carbosilane dendrimers and proteins at all tested pH levels. Interactions were greatly affected by the protein itself, the pH, and the dendrimer concentration and generation. The interaction at acidic pH was especially interesting, since it resulted in significant protein precipitation. Dendrimer-protein interactions were modeled, and stable complexes were observed for all proteins. Carboxylate-terminated carbosilane dendrimers at acidic pH were successfully used in the purification/enrichment of proteins extracted from a complex sample. Graphical abstract: images showing the growing turbidity of solutions containing a mixture of proteins (lysozyme, myoglobin, and BSA) at different protein:dendrimer ratios (1:0, 1:1, 1:8, and 1:20) at acidic pH, the SDS-PAGE profiles of the corresponding supernatants, and a comparison of SDS-PAGE profiles for the pellets obtained during purification of proteins from a complex sample using a conventional acetone-precipitation method and the proposed "greener" method using a carboxylate-terminated carbosilane dendrimer at a 1:20 protein:dendrimer ratio.

  6. A reactive, scalable, and transferable model for molecular energies from a neural network approach based on local information

    NASA Astrophysics Data System (ADS)

    Unke, Oliver T.; Meuwly, Markus

    2018-06-01

    Despite the ever-increasing computer power, accurate ab initio calculations for large systems (thousands to millions of atoms) remain infeasible. Instead, approximate empirical energy functions are used. Most current approaches are either transferable between different chemical systems, but not particularly accurate, or they are fine-tuned to a specific application. In this work, a data-driven method to construct a potential energy surface based on neural networks is presented. Since the total energy is decomposed into local atomic contributions, the evaluation is easily parallelizable and scales linearly with system size. With prediction errors below 0.5 kcal mol-1 for both unknown molecules and configurations, the method is accurate across chemical and configurational space, which is demonstrated by applying it to datasets from nonreactive and reactive molecular dynamics simulations and a diverse database of equilibrium structures. The possibility to use small molecules as reference data to predict larger structures is also explored. Since the descriptor only uses local information, high-level ab initio methods, which are computationally too expensive for large molecules, become feasible for generating the necessary reference data used to train the neural network.
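
    The decisive structural idea above is the decomposition of the total energy into per-atom contributions predicted from local descriptors, which is what gives linear scaling and easy parallelization. The minimal sketch below shows only that decomposition; the two-layer network, random "trained" weights, and toy descriptors are placeholders, not the descriptor or architecture used in the paper.

```python
# Minimal sketch of a local energy decomposition: E_total = sum_i NN(descriptor_i).
# Weights and descriptors are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), np.zeros(16)   # hypothetical trained weights
W2, b2 = rng.normal(size=16), 0.0

def atomic_energy(descriptor):
    h = np.tanh(W1 @ descriptor + b1)
    return W2 @ h + b2

def total_energy(descriptors):
    # Linear scaling: one independent (parallelizable) evaluation per atom.
    return sum(atomic_energy(d) for d in descriptors)

molecule = rng.normal(size=(20, 8))   # 20 atoms, 8-dimensional local descriptors
print(total_energy(molecule))
```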

  7. Nonlinear vibration absorption for a flexible arm via a virtual vibration absorber

    NASA Astrophysics Data System (ADS)

    Bian, Yushu; Gao, Zhihui

    2017-07-01

    A semi-active vibration absorption method is put forward to attenuate nonlinear vibration of a flexible arm based on the internal resonance. To maintain the 2:1 internal resonance condition and the desirable damping characteristic, a virtual vibration absorber is suggested. It is mathematically equivalent to a vibration absorber but its frequency and damping coefficients can be readily adjusted by simple control algorithms, thereby replacing those hard-to-implement mechanical designs. Through theoretical analyses and numerical simulations, it is proven that the internal resonance can be successfully established for the flexible arm, and the vibrational energy of flexible arm can be transferred to and dissipated by the virtual vibration absorber. Finally, experimental results are presented to validate the theoretical predictions. Since the proposed method absorbs rather than suppresses vibrational energy of the primary system, it is more convenient to reduce strong vibration than conventional active vibration suppression methods based on smart material actuators with limited energy output. Furthermore, since it aims to establish an internal vibrational energy transfer channel from the primary system to the vibration absorber rather than directly respond to external excitations, it is especially applicable for attenuating nonlinear vibration excited by unpredictable excitations.

  8. Morphological observation and analysis using automated image cytometry for the comparison of trypan blue and fluorescence-based viability detection method.

    PubMed

    Chan, Leo Li-Ying; Kuksin, Dmitry; Laverty, Daniel J; Saldi, Stephanie; Qiu, Jean

    2015-05-01

    The ability to accurately determine cell viability is essential to performing a well-controlled biological experiment. Typical experiments range from standard cell culturing to advanced cell-based assays that may require cell viability measurement for downstream experiments. The traditional cell viability measurement method has been the trypan blue (TB) exclusion assay. However, since the introduction of fluorescence-based dyes for cell viability measurement using flow or image-based cytometry systems, there have been numerous publications comparing the two detection methods. Although previous studies have shown discrepancies between TB exclusion and fluorescence-based viability measurements, image-based morphological analysis was not performed to examine these discrepancies. In this work, we compared TB exclusion and fluorescence-based viability detection methods using image cytometry to observe morphological changes due to the effect of TB on dead cells. Imaging results showed that as the viability of a naturally-dying Jurkat cell sample decreased below 70%, many TB-stained cells began to exhibit non-uniform morphological characteristics. Dead cells with these characteristics may be difficult to count under light microscopy, thus generating an artificially higher viability measurement compared to the fluorescence-based method.

  9. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    PubMed

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for predicting complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel, and combine the Min kernel or its normalized form with one of the pairwise kernels by plugging the former into the latter. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers, and evaluated our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. In summary, we propose new methods to predict heterodimers using a machine-learning-based approach: we train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
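
    For orientation, the sketch below writes out the standard forms of the kernels named above: the Min kernel as a base similarity between two proteins, plugged into the TPPK and MLPK constructions that turn it into a similarity between protein pairs. The feature vectors are random placeholders standing in for the real PPI/domain/phylogenetic/localization features, and the sketch is not the authors' full pipeline (no normalization variants, no SVM training).

```python
# Sketch of the Min kernel plugged into two standard pairwise kernels.
import numpy as np

def min_kernel(x, y):
    """Histogram-intersection style Min kernel between two feature vectors."""
    return np.minimum(x, y).sum()

def tppk(a, b, c, d, k=min_kernel):
    """Tensor Product Pairwise Kernel between pairs (a, b) and (c, d)."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def mlpk(a, b, c, d, k=min_kernel):
    """Metric Learning Pairwise Kernel between pairs (a, b) and (c, d)."""
    return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

rng = np.random.default_rng(0)
p1, p2, p3, p4 = rng.random((4, 30))   # placeholder per-protein feature vectors
print(tppk(p1, p2, p3, p4), mlpk(p1, p2, p3, p4))
```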

  10. Missing RRI interpolation for HRV analysis using locally-weighted partial least squares regression.

    PubMed

    Kamata, Keisuke; Fujiwara, Koichi; Yamakawa, Toshiki; Kano, Manabu

    2016-08-01

    The R-R interval (RRI) fluctuation in the electrocardiogram (ECG) is called heart rate variability (HRV). Since HRV reflects autonomic nervous function, HRV-based health monitoring services, such as stress estimation, drowsy-driving detection, and epileptic seizure prediction, have been proposed. These HRV-based health monitoring services require precise R-wave detection from the ECG; however, R waves cannot always be detected due to ECG artifacts, so missing RRI data should be interpolated appropriately for HRV analysis. The present work proposes a missing-RRI interpolation method that utilizes just-in-time (JIT) modeling. The proposed method adopts locally weighted partial least squares (LW-PLS) regression for RRI interpolation, a well-known JIT modeling method used in the field of process control. The usefulness of the proposed method was demonstrated through a case study of real RRI data collected from healthy persons: the proposed JIT-based interpolation method improved the interpolation accuracy in comparison with a static interpolation method.

  11. Multi-criteria decision making development of ion chromatographic method for determination of inorganic anions in oilfield waters based on artificial neural networks retention model.

    PubMed

    Stefanović, Stefica Cerjan; Bolanča, Tomislav; Luša, Melita; Ukić, Sime; Rogošić, Marko

    2012-02-24

    This paper describes the development of an ad hoc methodology for the determination of inorganic anions in oilfield waters, since their composition often differs significantly from the average (in component concentrations and/or matrix). Fast and reliable method development therefore has to be performed in order to ensure the monitoring of the desired properties under new conditions. The method development was based on a computer-assisted multi-criteria decision-making strategy. The criteria used were: maximal value of the objective functions used, maximal robustness of the separation method, minimal analysis time, and maximal retention distance between the two nearest components. Artificial neural networks were used for modeling anion retention. The reliability of the developed method was extensively tested by validating its performance characteristics. Based on the validation results, the developed method shows satisfactory performance characteristics, proving the successful application of the computer-assisted methodology in the described case study.

  12. Comparative analysis of ROS-based monocular SLAM methods for indoor navigation

    NASA Astrophysics Data System (ADS)

    Buyval, Alexander; Afanasyev, Ilya; Magid, Evgeni

    2017-03-01

    This paper presents a comparison of four recent ROS-based monocular SLAM-related methods: ORB-SLAM, REMODE, LSD-SLAM, and DPPTAM, and analyzes their feasibility for a mobile robot application in an indoor environment. We tested these methods using video data recorded from a conventional wide-angle full HD webcam with a rolling shutter. The camera was mounted on a human-operated prototype of an unmanned ground vehicle, which followed a closed-loop trajectory. Both the feature-based methods (ORB-SLAM, REMODE) and the direct SLAM-related algorithms (LSD-SLAM, DPPTAM) demonstrated reasonably good results in detecting volumetric objects, corners, obstacles and other local features. However, we met difficulties recovering the homogeneously colored walls typical of offices, since all of these methods left empty spaces in the reconstructed sparse 3D scene. This may cause an autonomously guided robot to collide with featureless walls, and it thus limits the applicability for indoor robot navigation of maps obtained by the considered monocular SLAM-related methods.

  13. Jaccard distance based weighted sparse representation for coarse-to-fine plant species recognition.

    PubMed

    Zhang, Shanwen; Wu, Xiaowei; You, Zhuhong

    2017-01-01

    Leaf-based plant species recognition plays an important role in ecological protection, but its application to large, modern leaf databases has long been hampered by computational cost and feasibility. Recognizing these limitations, we propose a Jaccard distance based sparse representation (JDSR) method, which adopts a two-stage, coarse-to-fine strategy for plant species recognition. In the first stage, we use the Jaccard distance between the test sample and each training sample to coarsely determine the candidate classes of the test sample. The second stage applies a Jaccard distance based weighted sparse representation classification (WSRC), which approximately represents the test sample in the training space and classifies it by the approximation residuals. Since the training model of our JDSR method involves far fewer but more informative representatives, the method is expected to overcome the high computational and memory costs of traditional sparse representation based classification. Comparative experimental results on a public leaf image database demonstrate that the proposed method outperforms other existing feature-extraction and SRC-based plant recognition methods in terms of both accuracy and computational speed.
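
    The coarse first stage can be illustrated with a few lines of code: rank species by the Jaccard distance between binarized feature vectors and keep the closest classes as candidates for the fine stage. The fine (weighted sparse representation) stage is not reproduced here, and the binarized features, class layout, and candidate count are illustrative assumptions.

```python
# Sketch of the coarse Jaccard-distance stage (fine WSRC stage omitted).
import numpy as np

def jaccard_distance(a, b):
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return 1.0 - np.logical_and(a, b).sum() / union

def candidate_classes(test_feat, train_feats, train_labels, n_candidates=5):
    dists = np.array([jaccard_distance(test_feat, f) for f in train_feats])
    classes = np.unique(train_labels)
    # Smallest distance per class, then keep the closest classes.
    per_class = [dists[train_labels == c].min() for c in classes]
    order = np.argsort(per_class)
    return classes[order[:n_candidates]]

rng = np.random.default_rng(0)
train = rng.random((60, 100)) > 0.5           # placeholder binarized leaf features
labels = np.repeat(np.arange(12), 5)          # 12 species, 5 samples each
test = rng.random(100) > 0.5
print(candidate_classes(test, train, labels))
```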

  14. Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones.

    PubMed

    Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan

    2015-07-20

    This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only short-term accuracy and degrade with time, we restrict the time length of the available indoor navigation trajectories and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors do not increase linearly with the database (DB) errors in smartphone-based WiFi fingerprinting applications.

  15. Collaborative WiFi Fingerprinting Using Sensor-Based Navigation on Smartphones

    PubMed Central

    Zhang, Peng; Zhao, Qile; Li, You; Niu, Xiaoji; Zhuang, Yuan; Liu, Jingnan

    2015-01-01

    This paper presents a method that trains the WiFi fingerprint database using sensor-based navigation solutions. Since micro-electromechanical systems (MEMS) sensors provide only short-term accuracy and degrade with time, we restrict the time length of the available indoor navigation trajectories and conduct post-processing to improve the sensor-based navigation solution. Different middle-term navigation trajectories that move in and out of an indoor area are combined to make up the database. Furthermore, we evaluate the effect of WiFi database shifts on WiFi fingerprinting using the database generated by the proposed method. Results show that the fingerprinting errors do not increase linearly with the database (DB) errors in smartphone-based WiFi fingerprinting applications. PMID:26205269
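
    To show how a fingerprint database built this way is consumed at run time, the sketch below performs a weighted k-nearest-neighbour match of an observed RSS vector against stored (position, RSS) fingerprints. The database here is a random placeholder; constructing it from sensor-based navigation trajectories is the contribution of the records above and is not reproduced.

```python
# Sketch of weighted-kNN WiFi fingerprint matching against a placeholder DB.
import numpy as np

def wknn_position(rss, db_rss, db_pos, k=3):
    """rss: (n_ap,), db_rss: (n_fp, n_ap), db_pos: (n_fp, 2). Returns (x, y)."""
    d = np.linalg.norm(db_rss - rss, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-6)
    return (w[:, None] * db_pos[idx]).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
db_pos = rng.uniform(0, 50, size=(200, 2))   # fingerprint positions (m), placeholder
db_rss = -40 - 20 * rng.random((200, 6))     # placeholder RSS from 6 access points
print(wknn_position(db_rss[10] + rng.normal(0, 1, 6), db_rss, db_pos))
```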

  16. Applying temporal abstraction and case-based reasoning to predict approaching influenza waves.

    PubMed

    Schmidt, Rainer; Gierl, Lothar

    2002-01-01

    The goal of the TeCoMed project is to send early warnings against forthcoming waves or even epidemics of infectious diseases, especially of influenza, to interested practitioners, pharmacists etc. in the German federal state Mecklenburg-Western Pomerania. The forecast of these waves is based on written confirmations of unfitness for work of the main German health insurance company. Since influenza waves are difficult to predict because of their cyclic but not regular behaviour, statistical methods based on the computation of mean values are not helpful. Instead, we have developed a prognostic model that makes use of similar former courses. Our method combines Case-based Reasoning with Temporal Abstraction to decide whether early warning is appropriate.

  17. Motion analysis report

    NASA Technical Reports Server (NTRS)

    Badler, N. I.

    1985-01-01

    Human motion analysis is the task of converting actual human movements into computer readable data. Such movement information may be obtained though active or passive sensing methods. Active methods include physical measuring devices such as goniometers on joints of the body, force plates, and manually operated sensors such as a Cybex dynamometer. Passive sensing de-couples the position measuring device from actual human contact. Passive sensors include Selspot scanning systems (since there is no mechanical connection between the subject's attached LEDs and the infrared sensing cameras), sonic (spark-based) three-dimensional digitizers, Polhemus six-dimensional tracking systems, and image processing systems based on multiple views and photogrammetric calculations.

  18. Cross spectral, active and passive approach to face recognition for improved performance

    NASA Astrophysics Data System (ADS)

    Grudzien, A.; Kowalski, M.; Szustakowski, M.

    2017-08-01

    Biometrics is a technique for automatically recognizing a person based on physiological or behavioral characteristics. Since the characteristics used are unique, biometrics can create a direct link between a person and an identity based on a variety of characteristics. The human face is one of the most important biometric modalities for automatic authentication. The most popular face recognition methods, which rely on processing visible-range information, remain imperfect. Thermal infrared imagery may be a promising alternative or complement to visible-range imaging for several reasons. This paper presents an approach that combines both modalities.

  19. A method for assigning species into groups based on generalized Mahalanobis distance between habitat model coefficients

    USGS Publications Warehouse

    Williams, C.J.; Heglund, P.J.

    2009-01-01

    Habitat association models are commonly developed for individual animal species using generalized linear modeling methods such as logistic regression. We considered the issue of grouping species based on their habitat use so that management decisions can be based on sets of species rather than individual species. This research was motivated by a study of western landbirds in northern Idaho forests. The method we examined was to fit models to each species separately and to use a generalized Mahalanobis distance between coefficient vectors to create a distance matrix among species. Clustering methods were used to group species from the distance matrix, and multidimensional scaling methods were used to visualize the relations among species groups. Methods are also discussed for evaluating the sensitivity of the conclusions to outliers or influential data points. We illustrate these methods with data from the landbird study conducted in northern Idaho. Simulation results are presented to compare the success of this method with alternative methods using Euclidean distance between coefficient vectors and with methods that do not use habitat association models. These simulations demonstrate that our Mahalanobis-distance-based method was nearly always better than Euclidean-distance-based methods or methods not based on habitat association models. The methods used to develop candidate species groups are easily explained to other scientists and resource managers since they mainly rely on classical multivariate statistical methods.
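
    A compact sketch of the grouping procedure described above: compute a generalized Mahalanobis distance between per-species coefficient vectors (here using the sum of their covariance matrices), then hierarchically cluster the resulting distance matrix. The coefficients, covariances, and number of clusters are random placeholders, not the landbird model fits.

```python
# Sketch: generalized Mahalanobis distances between habitat-model coefficient
# vectors, followed by hierarchical clustering into candidate species groups.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def mahalanobis2(b1, V1, b2, V2):
    diff = b1 - b2
    return float(diff @ np.linalg.solve(V1 + V2, diff))

rng = np.random.default_rng(0)
n_species, n_coef = 8, 4
betas = rng.normal(size=(n_species, n_coef))                   # placeholder coefficients
covs = np.stack([np.eye(n_coef) * 0.1 for _ in range(n_species)])

D = np.zeros((n_species, n_species))
for i in range(n_species):
    for j in range(i + 1, n_species):
        D[i, j] = D[j, i] = np.sqrt(mahalanobis2(betas[i], covs[i], betas[j], covs[j]))

groups = fcluster(linkage(squareform(D), method="average"), t=3, criterion="maxclust")
print(groups)   # candidate species groups
```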

  20. ENVIRONMENTAL GOODS AND SERIVCES FROM RESTORATION ALTERNATIVES: EMERGY-BASED METHOD OF BENEFIT VALUE

    EPA Science Inventory

    Although economic benefit-cost analyses of environmental regulations has been conducted since the 1970s, an effective methodology has yet to be developed for the integrated assessment of regulatory impacts on the larger system as a whole, including its social and environmental as...

  1. Unsupervised Ontology Generation from Unstructured Text. CRESST Report 827

    ERIC Educational Resources Information Center

    Mousavi, Hamid; Kerr, Deirdre; Iseli, Markus R.

    2013-01-01

    Ontologies are a vital component of most knowledge acquisition systems, and recently there has been a huge demand for generating ontologies automatically since manual or supervised techniques are not scalable. In this paper, we introduce "OntoMiner", a rule-based, iterative method to extract and populate ontologies from unstructured or…

  2. Revegetation practices on the Santa Rita Experimental Range

    Treesearch

    Bruce D. Munda; Mark J. Pater

    2003-01-01

    This paper discusses the revegetation activites on the Santa Rita Experimental Range since 1903. Revegetation research includes experiments to evaluate adaptation, seedbed preparation, and sowing methods. We also discuss criteria used to determine if a site has the potential for a successful revegetation. Successful revegetation was initially based on plant emergence...

  3. Genetic improvement through selective breeding: Part of an integrated strategy to reduce disease loss and Antibiotic use

    USDA-ARS?s Scientific Manuscript database

    Bacterial cold water disease (BCWD) is a frequent cause of elevated mortality in rainbow trout, and outbreaks often require the use of antibiotic treatment. Since antimicrobial resistance is of concern, additional control methods are desirable. Family-based selective breeding offers new opportuniti...

  4. SMOS soil moisture validation with U.S. in situ newworks

    USDA-ARS?s Scientific Manuscript database

    Estimation of soil moisture at large scale has been performed using several satellite-based passive microwave sensors using a variety of retrieval methods. The most recent source of soil moisture is the European Space Agency Soil Moisture and Ocean Salinity (SMOS) mission. Since it is a new sensor u...

  5. Inside and outside: Teacher-Researcher Collaboration

    ERIC Educational Resources Information Center

    Herrenkohl, Leslie Rupert; Kawasaki, Keiko; DeWater, Lezlie Salvatore

    2010-01-01

    In this paper, we discuss our approach to teacher-researcher collaboration and how it is similar and different from other models of teacher collaboration. Our approach to collaboration employed design experimentation (Brown, 1992; Design Based Research Collective, 2003) as a central method since it yields important findings for teachers'…

  6. A modified reverse one-hybrid screen identifies transcriptional activation in Phyochrome-Interacting Factor 3

    USDA-ARS?s Scientific Manuscript database

    Transcriptional activation domains (TAD) are difficult to predict and identify, since they are not conserved and have little consensus. Here, we describe a yeast-based screening method that is able to identify individual amino acid residues involved in transcriptional activation in a high throughput...

  7. The Analysis of Spontaneous Processes Using Equilibrium Thermodynamics

    ERIC Educational Resources Information Center

    Honig, J. M.; Ben-Amotz, Dor

    2006-01-01

    Derivations based on the use of deficit functions provide a simple means of demonstrating the extremum conditions applicable to various thermodynamic functions. The method shows that the maximum quantity of work is available from a system only when the processes are carried out reversibly since irreversible (spontaneous)…

  8. Detroit's Fight for Equal Educational Opportunity.

    ERIC Educational Resources Information Center

    Zwerdling, A. L.

    To meet the challenge of equal educational opportunity, current methods of public school finance must be revised. The present financial system, based on State equalization of local property tax valuation, is inequitable since it results in many school districts, particularly those in large cities, having inadequate resources to meet extraordinary…

  9. Clustering self-organizing maps (SOM) method for human papillomavirus (HPV) DNA as the main cause of cervical cancer disease

    NASA Astrophysics Data System (ADS)

    Bustamam, A.; Aldila, D.; Fatimah, Arimbi, M. D.

    2017-07-01

    One of the most widely used clustering methods, owing to its robustness, is the Self-Organizing Maps (SOM) method. This paper discusses the application of the SOM method to Human Papillomavirus (HPV) DNA, the main cause of cervical cancer, the most dangerous cancer in developing countries. We use 18 types of HPV DNA based on the newest complete genomes. Using the open-source program R, the clustering process separates the 18 HPV types into two different clusters: two HPV types fall in the first cluster, while the other 16 fall in the second. The 18 HPV types are then analyzed according to the malignancy of the virus (how difficult it is to cure): the two HPV types in the first cluster can be classified as tame HPV, while the 16 types in the second cluster are classified as vicious HPV.
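
    The record above only describes the clustering at a high level. As a purely illustrative, hedged sketch of the same idea (the paper itself used R; this sketch uses Python with the third-party MiniSom package, and the k-mer profiling, grid size and parameter values are assumptions, not the authors' code):

        # Hypothetical sketch: cluster HPV genome sequences with a self-organizing map.
        # A 1x2 SOM grid yields two clusters, mirroring the two-cluster result above.
        from itertools import product
        import numpy as np
        from minisom import MiniSom

        def kmer_profile(seq, k=3):
            """Normalized k-mer frequency vector for one genome sequence (assumed feature)."""
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            index = {km: i for i, km in enumerate(kmers)}
            counts = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                j = index.get(seq[i:i + k])
                if j is not None:
                    counts[j] += 1
            return counts / max(counts.sum(), 1.0)

        def som_clusters(genomes, grid=(1, 2), k=3, n_iter=5000, seed=0):
            """Assign each genome to its winning SOM node; (0, 0) vs (0, 1) are the two clusters."""
            data = np.array([kmer_profile(s, k) for s in genomes])
            som = MiniSom(grid[0], grid[1], data.shape[1], sigma=0.5,
                          learning_rate=0.5, random_seed=seed)
            som.train_random(data, n_iter)
            return [som.winner(x) for x in data]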

  10. Efficient operation scheduling for adsorption chillers using predictive optimization-based control methods

    NASA Astrophysics Data System (ADS)

    Bürger, Adrian; Sawant, Parantapa; Bohlayer, Markus; Altmann-Dieses, Angelika; Braun, Marco; Diehl, Moritz

    2017-10-01

    Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on optimized scheduling of an ACM considering its internal functioning as well as forecasts for load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use within gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to the results of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown and future actions for applying these methods to system control are addressed.

  11. Basis set construction for molecular electronic structure theory: natural orbital and Gauss-Slater basis for smooth pseudopotentials.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2011-02-14

    A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.

  12. Performance evaluation of infrared imaging system in field test

    NASA Astrophysics Data System (ADS)

    Wang, Chensheng; Guo, Xiaodong; Ren, Tingting; Zhang, Zhi-jie

    2014-11-01

    Infrared imaging systems have been applied widely in both military and civilian fields. Since infrared imagers come in various types with different parameters, system manufacturers and customers have a great demand for evaluating the performance of IR imaging systems with a standard tool or platform. Since the first-generation IR imager was developed, the standard method for assessing performance has been the MRTD or related improved methods, which are not perfectly suited to current linear scanning imagers or 2D staring imagers based on FPA detectors. To address this problem, this paper describes an evaluation method based on the triangular orientation discrimination (TOD) metric, which is considered an effective and emerging method for evaluating the overall performance of EO systems. To realize the evaluation in field tests, an experimental instrument is developed. Considering the importance of the operational environment, the field test is carried out in a practical atmospheric environment. The tested imagers include a panoramic imaging system and staring imaging systems with different optics and detector parameters (both cooled and uncooled). After describing the instrument and experimental setup, the experimental results are presented, and the target range performance is analyzed and discussed. The data analysis gives the range prediction values obtained from the TOD method, the MRTD method and the practical experiment, together with a discussion of the results. The experimental results prove the effectiveness of this evaluation tool, and it can serve as a platform providing a uniform performance prediction reference.

  13. A fuzzy MCDM approach for evaluating school performance based on linguistic information

    NASA Astrophysics Data System (ADS)

    Musani, Suhaina; Jemain, Abdul Aziz

    2013-11-01

    Decision making is the process of finding the best option among feasible alternatives. This process should consider a variety of criteria, but this study focuses only on academic achievement. The data used are the percentages of candidates who obtained the Malaysian Certificate of Education (SPM) in Melaka, based on school academic achievement for each subject. The 57 secondary schools in Melaka listed by the Ministry of Education are involved in this study, and the school ranking can therefore be done using MCDM (Multi Criteria Decision Making) methods. The objective of this study is to develop a rational method for evaluating school performance based on linguistic information. Since the information on academic achievement is provided in a linguistic manner, the data may be incomplete or uncertain. To overcome this, the information can be represented as fuzzy numbers, since fuzzy sets capture the uncertainty in human perceptions. In this research, VIKOR (Multi Criteria Optimization and Compromise Solution) is used as the MCDM tool for the school ranking process in a fuzzy environment. Results showed that fuzzy set theory can address the limitations of MCDM when uncertainty exists in the data.
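
    Since the record above does not show the ranking computation, the following is a minimal sketch of crisp VIKOR compromise ranking; the fuzzy extension used in the paper (linguistic grades mapped to fuzzy numbers and defuzzified before ranking) is omitted, and the example scores and weights are invented for illustration only:

        import numpy as np

        def vikor_rank(scores, weights, v=0.5):
            """scores: (alternatives x criteria), larger is better; returns Q (smaller = better)."""
            f_best, f_worst = scores.max(axis=0), scores.min(axis=0)
            span = np.where(f_best > f_worst, f_best - f_worst, 1.0)   # avoid divide-by-zero
            d = weights * (f_best - scores) / span      # weighted regret per criterion
            S, R = d.sum(axis=1), d.max(axis=1)         # group utility and individual regret
            SQ = (S - S.min()) / max(S.max() - S.min(), 1e-12)
            RQ = (R - R.min()) / max(R.max() - R.min(), 1e-12)
            return v * SQ + (1 - v) * RQ

        # Invented example: four schools scored on three subjects, equal weights.
        Q = vikor_rank(np.array([[80., 70., 90.], [60., 85., 75.],
                                 [90., 65., 70.], [70., 75., 80.]]),
                       np.array([1 / 3, 1 / 3, 1 / 3]))
        ranking = np.argsort(Q)                         # best-ranked school first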

  14. Allele-specific HLA-DR typing by mass spectrometry: an alternative to hybridization-based typing methods.

    PubMed

    Worrall, T A; Schmeckpeper, B J; Corvera, J S; Cotter, R J

    2000-11-01

    The primer oligomer base extension (PROBE) reaction, combined with matrix-assisted laser desorption/ionization time-of-flight mass spectrometry, is used to characterize HLA-DR2 polymorphism. Alleles are distinguished rapidly and accurately by measuring the mass of primer extension products at every known variable region of HLA-DR2 alleles. Since differentiation of alleles by PROBE relies on measuring differences in extension product mass rather than differences in hybridization properties, mistyped alleles resulting from nonspecific hybridization are absent. The method shows considerable potential for high-throughput screening of HLA-DR polymorphism in a chip-based format, including rapid tissue typing of unrelated volunteer donors.

  15. High-pressure torsion for new hydrogen storage materials

    PubMed Central

    Edalati, Kaveh; Akiba, Etsuo; Horita, Zenji

    2018-01-01

    High-pressure torsion (HPT) is widely used as a severe plastic deformation technique to create ultrafine-grained structures with promising mechanical and functional properties. Since 2007, the method has been employed to enhance the hydrogenation kinetics of different Mg-based hydrogen storage materials. Recent studies showed that the method is effective not only for increasing the hydrogenation kinetics but also for improving the hydrogenation activity, for enhancing the air resistivity and, more importantly, for synthesizing new nanostructured hydrogen storage materials with high densities of lattice defects. This manuscript reviews some major findings on the impact of the HPT process on the hydrogen storage performance of different titanium-based and magnesium-based materials. PMID:29511396

  16. An ICA-based method for the segmentation of pigmented skin lesions in macroscopic images.

    PubMed

    Cavalcanti, Pablo G; Scharcanski, Jacob; Di Persia, Leandro E; Milone, Diego H

    2011-01-01

    Segmentation is an important step in computer-aided diagnostic systems for pigmented skin lesions, since a good definition of the lesion area and its boundary in the image is very important for distinguishing benign from malignant cases. In this paper a new skin lesion segmentation method is proposed. This method uses Independent Component Analysis to locate skin lesions in the image, and this location information is further refined by a level-set segmentation method. Our method was evaluated on 141 images and achieved an average segmentation error of 16.55%, lower than the results of comparable state-of-the-art methods proposed in the literature.

  17. Multi-objects recognition for distributed intelligent sensor networks

    NASA Astrophysics Data System (ADS)

    He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.

    2008-04-01

    This paper proposes an innovative approach to multi-object recognition for homeland security and defense oriented intelligent sensor networks. Unlike conventional information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches that fuse different information resources to understand dynamic environments, to support decision-making processes, and finally to achieve the goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and this knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.

  18. Method for optimizing channelized quadratic observers for binary classification of large-dimensional image datasets

    PubMed Central

    Kupinski, M. K.; Clarkson, E.

    2015-01-01

    We present a new method for computing optimized channels for channelized quadratic observers (CQO) that is feasible for high-dimensional image data. The method for calculating channels is applicable in general and optimal for Gaussian distributed image data. Gradient-based algorithms for determining the channels are presented for five different information-based figures of merit (FOMs). Analytic solutions for the optimum channels for each of the five FOMs are derived for the case of equal mean data for both classes. The optimum channels for three of the FOMs under the equal mean condition are shown to be the same. This result is critical since some of the FOMs are much easier to compute. Implementing the CQO requires a set of channels and the first- and second-order statistics of channelized image data from both classes. The dimensionality reduction from M measurements to L channels is a critical advantage of CQO since estimating image statistics from channelized data requires smaller sample sizes and inverting a smaller covariance matrix is easier. In a simulation study we compare the performance of ideal and Hotelling observers to CQO. The optimal CQO channels are calculated using both eigenanalysis and a new gradient-based algorithm for maximizing Jeffrey's divergence (J). Optimal channel selection without eigenanalysis makes the J-CQO on large-dimensional image data feasible. PMID:26366764

  19. The Mercedes-Benz approach to γ-ray astronomy

    NASA Astrophysics Data System (ADS)

    Akerlof, Carl W.

    1988-02-01

    The sensitivity requirements for ground-based γ-ray astronomy are reviewed in the light of the most reliable estimates of stellar fluxes above 100 GeV. Current data strongly favor the construction of detectors with the lowest energy thresholds. Since improvements in angular resolution are limited by shower fluctuations, better methods of rejecting hadronic showers must be found to reliably observe the known astrophysical sources. Several possible methods for reducing this hadronic background are discussed.

  20. Efficient Online Learning Algorithms Based on LSTM Neural Networks.

    PubMed

    Ergen, Tolga; Kozat, Suleyman Serdar

    2017-09-13

    We investigate online nonlinear regression and introduce novel regression structures based on long short-term memory (LSTM) networks. For the introduced structures, we also provide highly efficient and effective online training methods. To train these novel LSTM-based structures, we put the underlying architecture in a state space form and introduce highly efficient and effective particle filtering (PF)-based updates. We also provide stochastic gradient descent and extended Kalman filter-based updates. Our PF-based training method guarantees convergence to the optimal parameter estimation in the mean square error sense provided that we have a sufficient number of particles and satisfy certain technical conditions. More importantly, we achieve this performance with a computational complexity on the order of first-order gradient-based methods by controlling the number of particles. Since our approach is generic, we also introduce a gated recurrent unit (GRU)-based approach by directly replacing the LSTM architecture with the GRU architecture, and we demonstrate the superiority of our LSTM-based approach in the sequential prediction task on different real-life data sets. In addition, the experimental results illustrate significant performance improvements achieved by the introduced algorithms with respect to conventional methods over several different benchmark real-life data sets.

  1. Filtering and left ventricle segmentation of the fetal heart in ultrasound images

    NASA Astrophysics Data System (ADS)

    Vargas-Quintero, Lorena; Escalante-Ramírez, Boris

    2013-11-01

    In this paper, we propose to use filtering methods and a segmentation algorithm for the analysis of the fetal heart in ultrasound images. Since speckle noise makes the analysis of ultrasound images difficult, the filtering process becomes a useful task in these types of applications. The filtering techniques considered in this work assume that the speckle noise is a random variable with a Rayleigh distribution. We use two multiresolution methods: one based on wavelet decomposition and another based on the Hermite transform. The filtering process is used as a way to strengthen the performance of the segmentation tasks. For the wavelet-based approach, a Bayesian estimator at the subband level is employed for pixel classification. The Hermite method computes a mask to find those pixels that are corrupted by speckle. Finally, we selected a method based on a deformable model, or "snake", to evaluate the influence of the filtering techniques on the segmentation of the left ventricle in fetal echocardiographic images.

  2. Explanation-based generalization of partially ordered plans

    NASA Technical Reports Server (NTRS)

    Kambhampati, Subbarao; Kedar, Smadar

    1991-01-01

    Most previous work in analytic generalization of plans dealt with totally ordered plans. These methods cannot be directly applied to generalizing partially ordered plans, since they do not capture all interactions among plan operators for all total orders of such plans. We introduce a new method for generalizing partially ordered plans. This method is based on providing explanation-based generalization (EBG) with explanations which systematically capture the interactions among plan operators for all the total orders of a partially-ordered plan. The explanations are based on the Modal Truth Criterion which states the necessary and sufficient conditions for ensuring the truth of a proposition at any point in a plan, for a class of partially ordered plans. The generalizations obtained by this method guarantee successful and interaction-free execution of any total order of the generalized plan. In addition, the systematic derivation of the generalization algorithms from the Modal Truth Criterion obviates the need for carrying out a separate formal proof of correctness of the EBG algorithms.

  3. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between context information that takes word co-occurrences and phrase chunks around the features into account. We then introduce the similarity of context information into the importance measure of the features, substituting for document and term frequency, and thereby propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  4. NMR screening in fragment-based drug design: a practical guide.

    PubMed

    Kim, Hai-Young; Wyss, Daniel F

    2015-01-01

    Fragment-based drug design (FBDD) comprises both fragment-based screening (FBS) to find hits and elaboration of these hits to lead compounds. Typical fragment hits have lower molecular weight (<300-350 Da) and lower initial potency but higher ligand efficiency when compared to those from high-throughput screening. NMR spectroscopy has been widely used for FBDD since it identifies and localizes the binding site of weakly interacting hits on the target protein. Here we describe ligand-based NMR methods for hit identification from fragment libraries and for functional cross-validation of primary hits.

  5. An energy efficient multiple mobile sinks based routing algorithm for wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Zhong, Peijun; Ruan, Feng

    2018-03-01

    With the fast development of wireless sensor networks (WSNs), more and more energy-efficient routing algorithms have been proposed. However, one of the research challenges is how to alleviate the hot spot problem, since nodes close to a static sink (or base station) tend to die earlier than other sensors. The introduction of mobile sink nodes can effectively alleviate this problem, since a sink node can move along certain trajectories, causing hot spot nodes to be more evenly distributed. In this paper, we mainly study an energy-efficient routing method with multiple mobile sinks. We divide the whole network into several clusters and study the influence of the number of mobile sinks on network lifetime. Simulation results show that the best network performance appears when the number of mobile sinks is about 3 under our simulation environment.

  6. Two-way coupled SPH and particle level set fluid simulation.

    PubMed

    Losasso, Frank; Talton, Jerry; Kwatra, Nipun; Fedkiw, Ronald

    2008-01-01

    Grid-based methods have difficulty resolving features on or below the scale of the underlying grid. Although adaptive methods (e.g. RLE, octrees) can alleviate this to some degree, separate techniques are still required for simulating small-scale phenomena such as spray and foam, especially since these more diffuse materials typically behave quite differently than their denser counterparts. In this paper, we propose a two-way coupled simulation framework that uses the particle level set method to efficiently model dense liquid volumes and a smoothed particle hydrodynamics (SPH) method to simulate diffuse regions such as sprays. Our novel SPH method allows us to simulate both dense and diffuse water volumes, fully incorporates the particles that are automatically generated by the particle level set method in under-resolved regions, and allows for two way mixing between dense SPH volumes and grid-based liquid representations.

  7. The Indian Summer Monsoon onset revisited: new approach based on the analysis of historical wind observations

    NASA Astrophysics Data System (ADS)

    Ordoñez, Paulina; Gallego, David; Ribera, Pedro; Peña-Ortiz, Cristina; Garcia-Herrera, Ricardo; Vega, Inmaculada; Gómez, Francisco de Paula

    2016-04-01

    The Indian Summer Monsoon onset is one of the most anticipated meteorological events in the world. Because of its relevance for the population, the India Meteorological Department has dated the onset over the southern tip of the Indian Peninsula (Kerala) since 1901. The traditional method to date the onset was based on the judgment of skilled meteorologists and, because of this, was considered subjective and not adequate for the study of long-term changes in the onset. A new method for determining the monsoon onset based solely on objective criteria has been in use since 2006. Unfortunately, the new method relies, among other variables, on OLR measurements. This requirement impedes the construction of an objective onset series before the satellite era. An alternative approach to establish the onset by objective methods is the use of the wind field. During the last decade, some works have demonstrated that changes in the wind direction in some areas of the Indian Ocean can be used to determine the monsoon onset rather precisely. However, this method requires precise wind observations over a large oceanic area, which has limited the periods covered by such indices to those of the reanalysis products. In this work we present a new approach to track the Indian monsoon onset based solely on historical wind direction measurements taken onboard ships. Our new series provides an objective record of the onset since the last decade of the 19th century and, perhaps more importantly, it can incorporate any new historical wind record not yet known in order to extend the series length. The new series captures quite precisely the rapid precipitation increase associated with the monsoon onset, correlates well with previous approaches, and is robust against anomalous (bogus) onsets. Although no significant trends in the onset date were detected, a tendency toward later-than-average onsets during the 1900-1925 and 1970-1990 periods and earlier-than-average onsets between 1940 and 1965 has been found. Our results show a relatively stable link between the ENSO cycle and the onset date; however, this relationship is weaker in decades characterized by prevalent La Niña conditions. Furthermore, it was found that the link between the Pacific Decadal Oscillation (PDO) and the onset date is limited to phases characterized by a shift from negative to positive PDO phases. This research was funded by the Spanish Ministerio de Economía y Competitividad through the projects CGL2013-44530-P and CGL2014-51721-REDT.

  8. Tensor-Train Split-Operator Fourier Transform (TT-SOFT) Method: Multidimensional Nonadiabatic Quantum Dynamics.

    PubMed

    Greene, Samuel M; Batista, Victor S

    2017-09-12

    We introduce the "tensor-train split-operator Fourier transform" (TT-SOFT) method for simulations of multidimensional nonadiabatic quantum dynamics. TT-SOFT is essentially the grid-based SOFT method implemented in dynamically adaptive tensor-train representations. In the same spirit of all matrix product states, the tensor-train format enables the representation, propagation, and computation of observables of multidimensional wave functions in terms of the grid-based wavepacket tensor components, bypassing the need of actually computing the wave function in its full-rank tensor product grid space. We demonstrate the accuracy and efficiency of the TT-SOFT method as applied to propagation of 24-dimensional wave packets, describing the S1/S2 interconversion dynamics of pyrazine after UV photoexcitation to the S2 state. Our results show that the TT-SOFT method is a powerful computational approach for simulations of quantum dynamics of polyatomic systems since it avoids the exponential scaling problem of full-rank grid-based representations.

  9. Study on photon transport problem based on the platform of molecular optical simulation environment.

    PubMed

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method on both accuracy and efficiency.

  10. Study on Photon Transport Problem Based on the Platform of Molecular Optical Simulation Environment

    PubMed Central

    Peng, Kuan; Gao, Xinbo; Liang, Jimin; Qu, Xiaochao; Ren, Nunu; Chen, Xueli; Ma, Bin; Tian, Jie

    2010-01-01

    As an important molecular imaging modality, optical imaging has attracted increasing attention in recent years. Since the physical experiment is usually complicated and expensive, research methods based on simulation platforms have obtained extensive attention. We developed a simulation platform named Molecular Optical Simulation Environment (MOSE) to simulate photon transport in both biological tissues and free space for optical imaging based on noncontact measurement. In this platform, the Monte Carlo (MC) method and the hybrid radiosity-radiance theorem are used to simulate photon transport in biological tissues and free space, respectively, so both contact and noncontact measurement modes of optical imaging can be simulated properly. In addition, a parallelization strategy for the MC method is employed to improve the computational efficiency. In this paper, we study the photon transport problems in both biological tissues and free space using MOSE. The results are compared with Tracepro, the simplified spherical harmonics method (SP(n)), and physical measurement to verify the performance of our study method on both accuracy and efficiency. PMID:20445737

  11. Generation of structural topologies using efficient technique based on sorted compliances

    NASA Astrophysics Data System (ADS)

    Mazur, Monika; Tajs-Zielińska, Katarzyna; Bochenek, Bogdan

    2018-01-01

    Topology optimization, although well recognized, is still being widely developed. It has recently gained more attention since large computational capability has become available to designers. This process is stimulated simultaneously by a variety of emerging, innovative optimization methods. It is observed that traditional gradient-based mathematical programming algorithms are, in many cases, replaced by novel and efficient heuristic methods inspired by biological, chemical or physical phenomena. These methods have become useful tools for structural optimization because of their versatility and easy numerical implementation. In this paper the engineering implementation of a novel heuristic algorithm for minimum compliance topology optimization is discussed. The performance of the topology generator is based on the implementation of a special function utilizing information on the compliance distribution within the design space. To cope with engineering problems, the algorithm has been combined with the structural analysis system Ansys.

  12. Using the Ridge Regression Procedures to Estimate the Multiple Linear Regression Coefficients

    NASA Astrophysics Data System (ADS)

    Gorgees, Hazim Mansoor; Mahdi, Fatimah Assim

    2018-05-01

    This article is concerned with comparing the performance of different types of ordinary ridge regression estimators that have been proposed to estimate the regression parameters when near exact linear relationships among the explanatory variables are present. For this situation we employ data obtained from the Tagi gas filling company during the period 2008-2010. The main result is that the method based on the condition number performs better than the other stated methods since it has a smaller mean square error (MSE).
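
    As a brief, hedged illustration of the estimator being compared (the company data and the paper's exact condition-number rule are not available, so the target condition number below is an assumption):

        import numpy as np

        def ridge_coefficients(X, y, k):
            """Ordinary ridge estimator beta = (X'X + kI)^(-1) X'y for centred, scaled X."""
            p = X.shape[1]
            return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

        def k_from_condition_number(X, target_cond=100.0):
            """Smallest k bringing cond(X'X + kI) down to target_cond, one possible reading
            of a condition-number-based choice of the ridge parameter."""
            eig = np.linalg.eigvalsh(X.T @ X)
            lmax, lmin = eig.max(), eig.min()
            if lmax / max(lmin, 1e-12) <= target_cond:
                return 0.0
            return (lmax - target_cond * lmin) / (target_cond - 1.0)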

  13. [Recurrence plot analysis of HRV for brain ischemia and asphyxia].

    PubMed

    Chen, Xiaoming; Qiu, Yihong; Zhu, Yisheng

    2008-02-01

    Heart rate variability (HRV) is the small variability present in the cycles of the heart beats, which reflects the balance between the sympathetic and vagus nerves. Since the nonlinear characteristic of HRV has been confirmed, the recurrence plot method, a nonlinear dynamic analysis method based on complexity, can be used to analyze HRV. The results showed that the recurrence plot structures and some quantitative indices (L-Mean, L-Entr) during asphyxia insult vary significantly compared with those in normal conditions, offering a new method to monitor brain asphyxia injury.
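
    The diagonal-line indices mentioned above (L-Mean, L-Entr) can be computed from a recurrence matrix; the sketch below is a generic recurrence-plot implementation with assumed embedding and threshold parameters, not the authors' code:

        import numpy as np

        def recurrence_matrix(x, dim=3, delay=1, eps=0.1):
            """Binary recurrence matrix of a time-delay embedded scalar series (e.g. RR intervals)."""
            n = len(x) - (dim - 1) * delay
            emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
            dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
            return (dist <= eps).astype(int)

        def diagonal_line_stats(R, lmin=2):
            """Mean length (cf. L-Mean) and Shannon entropy (cf. L-Entr) of diagonal line segments."""
            lengths, n = [], R.shape[0]
            for k in range(-(n - 1), n):
                run = 0
                for v in list(np.diagonal(R, offset=k)) + [0]:   # trailing 0 flushes the last run
                    if v:
                        run += 1
                    else:
                        if run >= lmin:
                            lengths.append(run)
                        run = 0
            if not lengths:
                return 0.0, 0.0
            lengths = np.array(lengths)
            _, counts = np.unique(lengths, return_counts=True)
            p = counts / counts.sum()
            return float(lengths.mean()), float(-(p * np.log(p)).sum())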

  14. Partial Variance of Increments Method in Solar Wind Observations and Plasma Simulations

    NASA Astrophysics Data System (ADS)

    Greco, A.; Matthaeus, W. H.; Perri, S.; Osman, K. T.; Servidio, S.; Wan, M.; Dmitruk, P.

    2018-02-01

    The method called "PVI" (Partial Variance of Increments) has been increasingly used in the analysis of spacecraft and numerical simulation data since its inception in 2008. The purpose of the method is to study the kinematics and formation of coherent structures in space plasmas, a topic that has gained considerable attention, leading to the development of identification methods, observations, and associated theoretical research based on numerical simulations. This review paper summarizes key features of the method and provides a synopsis of the main results obtained by various groups using the method. This will enable new users, or those considering methods of this type, to find details and background collected in one place.
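
    For readers new to the method, a minimal sketch of the PVI statistic in its usual form, PVI(t, tau) = |dB(t, tau)| / sqrt(<|dB(t, tau)|^2>), applied to a vector magnetic-field series (the threshold value in the comment is illustrative):

        import numpy as np

        def pvi(B, lag=1):
            """B: (N, 3) magnetic-field samples; returns the PVI series for the given lag."""
            dB = B[lag:] - B[:-lag]                      # vector increments at lag tau
            mag = np.linalg.norm(dB, axis=1)             # |dB(t, tau)|
            return mag / np.sqrt(np.mean(mag ** 2))      # normalize by the rms increment

        # Coherent-structure candidates are often flagged where the series exceeds a
        # threshold, e.g. events = np.where(pvi(B) > 3.0)[0].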

  15. Scene-based nonuniformity correction for airborne point target detection systems.

    PubMed

    Zhou, Dabiao; Wang, Dejiang; Huo, Lijun; Liu, Rang; Jia, Ping

    2017-06-26

    Images acquired by airborne infrared search and track (IRST) systems are often characterized by nonuniform noise. In this paper, a scene-based nonuniformity correction method for infrared focal-plane arrays (FPAs) is proposed based on the constant statistics of the received radiation ratios of adjacent pixels. The gain of each pixel is computed recursively based on the ratios between adjacent pixels, which are estimated through a median operation. Then, an elaborate mathematical model describing the error propagation, derived from random noise and the recursive calculation procedure, is established. The proposed method maintains the characteristics of traditional methods in calibrating the whole electro-optics chain, in compensating for temporal drifts, and in not preserving the radiometric accuracy of the system. Moreover, the proposed method is robust since the frame number is the only variant, and is suitable for real-time applications owing to its low computational complexity and simplicity of implementation. The experimental results, on different scenes from a proof-of-concept point target detection system with a long-wave Sofradir FPA, demonstrate the compelling performance of the proposed method.
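
    A loose, illustrative sketch of the adjacent-pixel-ratio idea described above (a simplification under the stated constant-statistics assumption; it is not the authors' exact recursion or error model):

        import numpy as np

        def estimate_gains(frames):
            """frames: raw image stack (T, H, W). Assumes neighbouring pixels see statistically
            similar radiance over many frames, so their temporal median ratio estimates the gain ratio."""
            frames = frames.astype(float)
            ratio = np.median(frames[:, :, 1:] / np.maximum(frames[:, :, :-1], 1e-6), axis=0)
            gains = np.ones(frames.shape[1:])
            for j in range(1, gains.shape[1]):
                gains[:, j] = gains[:, j - 1] * ratio[:, j - 1]   # recursive propagation along each row
            return gains / np.median(gains)                       # normalize to unit median gain

        def correct(frame, gains):
            return frame / gains                                  # gain-corrected image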

  16. Retina lesion and microaneurysm segmentation using morphological reconstruction methods with ground-truth data.

    PubMed

    Karnowski, Thomas P; Govindasamy, V; Tobin, Kenneth W; Chaum, Edward; Abramoff, M D

    2008-01-01

    In this work we report on a method for lesion segmentation based on the morphological reconstruction methods of Sbeh et al. We adapt the method to include segmentation of dark lesions with a given vasculature segmentation. The segmentation is performed at a variety of scales determined using ground-truth data. Since the method tends to over-segment imagery, ground-truth data were used to create post-processing filters to separate nuisance blobs from true lesions. A sensitivity and specificity of 90% for the classification of blobs into nuisance and actual lesions were achieved on two data sets of 86 and 1296 images.

  17. Dropout Prediction in E-Learning Courses through the Combination of Machine Learning Techniques

    ERIC Educational Resources Information Center

    Lykourentzou, Ioanna; Giannoukos, Ioannis; Nikolopoulos, Vassilis; Mpardis, George; Loumos, Vassili

    2009-01-01

    In this paper, a dropout prediction method for e-learning courses, based on three popular machine learning techniques and detailed student data, is proposed. The machine learning techniques used are feed-forward neural networks, support vector machines and probabilistic ensemble simplified fuzzy ARTMAP. Since a single technique may fail to…

  18. In-situ soil carbon analysis using inelastic neutron scattering

    USDA-ARS's Scientific Manuscript database

    In situ soil carbon analysis using inelastic neutron scattering (INS) is based on the emission of 4.43 MeV gamma rays from carbon nuclei excited by fast neutrons. This in-situ method has excellent potential for easily measuring soil carbon since it does not require soil core sampling and processing ...

  19. Early Career Researcher Challenges: Substantive and Methods-Based Insights

    ERIC Educational Resources Information Center

    McAlpine, Lynn; Amundsen, Cheryl

    2015-01-01

    Navigating academic work as well as career possibilities during and post-Ph.D. is challenging. To better understand these challenges, since 2010, we have investigated the experiences of early career scientists longitudinally using a range of qualitative data collection formats. For this study, we examined the experiences of four students and four…

  20. Using Web-Based Interactive Multimedia to Supplement Traditional Teaching Methods: A Pilot Program for Medical Training of Non-Medical Personnel

    DTIC Science & Technology

    2005-03-01

    …in relation to the technology and skill set. Since we know our audience, and are not, say, launching a new product into a market area, we can take a…

  1. The Languages of Communication. A Logical and Psychological Examination.

    ERIC Educational Resources Information Center

    Gordon, George N.

    Two methods of analysis, logical and psychological (or, loosely, aesthetic and functional) are used to investigate the many kinds of languages man uses to communicate, the ways in which these languages operate, and the reasons for communication failures. Based on a discussion of the nature of symbols, since most languages of communication draw…

  2. Comparison of different detection methods for citrus greening disease based on airborne multispectral and hyperspectral imagery

    USDA-ARS's Scientific Manuscript database

    Citrus greening, or Huanglongbing (HLB), is a devastating disease that has spread through many citrus groves since it was first found in Florida in 2005. Multispectral (MS) and hyperspectral (HS) airborne images of citrus groves in Florida were taken to detect citrus greening infected trees in 2007 and 2010. Ground truthi...

  3. Using Computer Simulations for Promoting Model-Based Reasoning: Epistemological and Educational Dimensions

    ERIC Educational Resources Information Center

    Develaki, Maria

    2017-01-01

    Scientific reasoning is particularly pertinent to science education since it is closely related to the content and methodologies of science and contributes to scientific literacy. Much of the research in science education investigates the appropriate framework and teaching methods and tools needed to promote students' ability to reason and…

  4. The Use of Conceptual Relations in Content Analysis and Data Base Storage.

    ERIC Educational Resources Information Center

    Schank, Roger C.

    Since natural language may be assumed to have an underlying conceptual structure, it is desirable to have the machine structure its own experience, both linguistic and nonlinguistic, in a manner concomitant with the human method for doing so. This paper presents some attempts at organizing the machine's information conceptually. The different…

  5. A Blue Spectral Shift of the Hemoglobin Soret Band Correlates with the Age (Time Since Deposition) of Dried Bloodstains

    PubMed Central

    Hanson, Erin K.; Ballantyne, Jack

    2010-01-01

    The ability to determine the time since deposition of a bloodstain found at a crime scene could prove invaluable to law enforcement investigators, defining the time frame in which the individual depositing the evidence was present. Although various methods of accomplishing this have been proposed, none has gained widespread use due to poor time resolution and weak age correlation. We have developed a method for the estimation of the time since deposition (TSD) of dried bloodstains using UV-VIS spectrophotometric analysis of hemoglobin (Hb) that is based upon its characteristic oxidation chemistry. A detailed study of the Hb Soret band (λmax = 412 nm) in aged bloodstains revealed a blue shift (shift to shorter wavelength) as the age of the stain increases. The extent of this shift permits, for the first time, a distinction to be made between bloodstains that were deposited minutes, hours, days and weeks prior to recovery and analysis. The extent of the blue shift was found to be a function of ambient relative humidity and temperature. The method is extremely sensitive, requiring as little as a 1 µl dried bloodstain for analysis. We demonstrate that it might be possible to perform TSD measurements at the crime scene using a portable low-sample-volume spectrophotometer. PMID:20877468

  6. Medical image security using modified chaos-based cryptography approach

    NASA Astrophysics Data System (ADS)

    Talib Gatta, Methaq; Al-latief, Shahad Thamear Abd

    2018-05-01

    The progressive development of telecommunication and networking technologies has led to the increased popularity of telemedicine, which involves the storage and transfer of medical images and related information, so security concerns have emerged. This paper presents a method to secure medical images, since they play a major role in healthcare organizations. The main idea in this work is based on a chaotic sequence, providing an efficient encryption method that allows the original image to be reconstructed from the encrypted image with high quality and minimal distortion of its content, so that treatment and diagnosis are not affected. Experimental results demonstrate the efficiency of the proposed method using statistical measures and the robust correlation between the original image and the decrypted image.
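
    The record does not disclose the modified scheme itself; the following generic sketch only illustrates the common chaotic-sequence principle (a logistic-map keystream XORed with the pixel bytes), with invented key parameters, and should not be read as the paper's algorithm:

        import numpy as np

        def logistic_keystream(n, x0=0.613, r=3.99, burn_in=1000):
            """Byte keystream from the logistic map x -> r*x*(1-x); (x0, r) act as the key."""
            x = x0
            out = np.empty(n, dtype=np.uint8)
            for _ in range(burn_in):                    # discard the transient
                x = r * x * (1 - x)
            for i in range(n):
                x = r * x * (1 - x)
                out[i] = int(x * 256) % 256
            return out

        def encrypt(image, key=(0.613, 3.99)):
            """XOR an 8-bit image with the chaotic keystream; applying it twice restores the image."""
            ks = logistic_keystream(image.size, *key)
            return (image.reshape(-1).astype(np.uint8) ^ ks).reshape(image.shape)

        decrypt = encrypt    # XOR with the same keystream is self-inverse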

  7. A PC-based inverse design method for radial and mixed flow turbomachinery

    NASA Technical Reports Server (NTRS)

    Skoe, Ivar Helge

    1991-01-01

    An Inverse Design Method suitable for radial and mixed flow turbomachinery is presented. The codes are based on the streamline curvature concept; therefore, it is applicable for current personal computers from the 286/287 range. In addition to the imposed aerodynamic constraints, mechanical constraints are imposed during the design process to ensure that the resulting geometry satisfies production consideration and that structural considerations are taken into account. By the use of Bezier Curves in the geometric modeling, the same subroutine is used to prepare input for both aero and structural files since it is important to ensure that the geometric data is identical to both structural analysis and production. To illustrate the method, a mixed flow turbine design is shown.

  8. Adaptive compressed sensing of remote-sensing imaging based on the sparsity prediction

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-10-01

    Conventional compressive sensing is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of image reconstruction is always affected. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of an image is proposed based on the two-dimensional discrete cosine transform (2D DCT). With an energy threshold given beforehand, the DCT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the image is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of an image effectively and provides an active basis for the selection of the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the provided energy threshold, the proposed method can ensure the quality of image reconstruction.
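
    A compact sketch of the sparsity estimate described above (2D DCT, energy normalization, descending sort, fraction of coefficients reaching the energy threshold); the threshold value and the final measurement-allocation rule are assumptions:

        import numpy as np
        from scipy.fft import dctn

        def estimate_sparsity(block, energy_threshold=0.99):
            """Fraction of 2D-DCT coefficients needed to carry `energy_threshold` of the block energy."""
            c = dctn(block.astype(float), norm='ortho')
            e = np.sort((c ** 2).ravel())[::-1]          # coefficient energies, descending
            e /= e.sum()                                 # energy normalization
            k = min(int(np.searchsorted(np.cumsum(e), energy_threshold)) + 1, e.size)
            return k / e.size                            # estimated sparsity ratio

        # The number of BCS measurements per block can then scale with this ratio, e.g.
        # m = int(np.ceil(c_factor * estimate_sparsity(block) * block.size)) for some factor.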

  9. Adaptive compressed sensing of multi-view videos based on the sparsity estimation

    NASA Astrophysics Data System (ADS)

    Yang, Senlin; Li, Xilong; Chong, Xin

    2017-11-01

    Conventional compressive sensing for videos is based on non-adaptive linear projections, and the number of measurements is usually set empirically. As a result, the quality of video reconstruction is always affected. First, block-based compressed sensing (BCS) with the conventional selection of compressive measurements is described. Then an estimation method for the sparsity of multi-view videos is proposed based on the two-dimensional discrete wavelet transform (2D DWT). With an energy threshold given beforehand, the DWT coefficients are processed with both energy normalization and sorting in descending order, and the sparsity of the multi-view video is obtained from the proportion of dominant coefficients. Finally, the simulation results show that the method can estimate the sparsity of a video frame effectively and provides an active basis for the selection of the number of compressive observations. The results also show that, since the selection of the number of observations is based on the sparsity estimated with the provided energy threshold, the proposed method can ensure the reconstruction quality of multi-view videos.

  10. Watershed-based segmentation of the corpus callosum in diffusion MRI

    NASA Astrophysics Data System (ADS)

    Freitas, Pedro; Rittner, Leticia; Appenzeller, Simone; Lapa, Aline; Lotufo, Roberto

    2012-02-01

    The corpus callosum (CC) is one of the most important white matter structures of the brain, interconnecting the two cerebral hemispheres, and is related to several neurodegenerative diseases. Since segmentation is usually the first step for studies in this structure, and manual volumetric segmentation is a very time-consuming task, it is important to have a robust automatic method for CC segmentation. We propose here an approach for fully automatic 3D segmentation of the CC in the magnetic resonance diffusion tensor images. The method uses the watershed transform and is performed on the fractional anisotropy (FA) map weighted by the projection of the principal eigenvector in the left-right direction. The section of the CC in the midsagittal slice is used as seed for the volumetric segmentation. Experiments with real diffusion MRI data showed that the proposed method is able to quickly segment the CC without any user intervention, with great results when compared to manual segmentation. Since it is simple, fast and does not require parameter settings, the proposed method is well suited for clinical applications.
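
    A hedged sketch of the core watershed step as described above, using scikit-image; the seed masks, weighting and post-processing are simplified assumptions rather than the published pipeline:

        import numpy as np
        from skimage.segmentation import watershed

        def segment_cc(fa, e1_lr, cc_seed_mask, bg_seed_mask):
            """fa: fractional anisotropy volume; e1_lr: left-right component of the principal
            eigenvector; seed masks: CC seed from the midsagittal slice and a background seed."""
            weighted = fa * np.abs(e1_lr)                 # CC voxels score high on this map
            markers = np.zeros(fa.shape, dtype=int)
            markers[cc_seed_mask] = 1                     # corpus callosum marker
            markers[bg_seed_mask] = 2                     # background marker
            labels = watershed(-weighted, markers)        # flood the inverted map from the seeds
            return labels == 1                            # boolean CC segmentation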

  11. Tissue viability imaging for quantification of skin erythema and blanching

    NASA Astrophysics Data System (ADS)

    Nilsson, Gert E.; Leahy, Martin J.

    2010-02-01

    Naked eye observation has until recently been the main method of determining skin erythema (vasodilatation) and blanching (vasoconstriction) in skin testing. Since naked eye observation is a highly subjective and investigator-dependent method, it is difficult to attain reproducibility and to compare results reported by different researchers performing their studies at different laboratories. Consequently there is a need for more objective, quantitative and versatile methods for assessing alterations in skin erythema and blanching caused by internal and external factors such as the intake of vasoactive drugs, application of agents on the skin surface and constituents in the environment. Since skin microcirculation is sensitive to applied pressure and heat, such methods should preferably be noninvasive and designed for remote use without touching the skin. As skin microcirculation further possesses substantial spatial variability, imaging techniques are preferable to single point measurements. An emerging technology based on polarization digital camera spectroscopy - Tissue Viability Imaging (TiVi) - fulfills these requirements. The principles of TiVi (1) and some of its early applications (2-5) are addressed in this paper.

  12. Ultrasensitive low noise voltage amplifier for spectral analysis.

    PubMed

    Giusi, G; Crupi, F; Pace, C

    2008-08-01

    Recently we have proposed several voltage noise measurement methods that allow, at least in principle, the complete elimination of the noise introduced by the measurement amplifier. The most severe drawback of these methods is that they require a multistep measurement procedure. Since environmental conditions may change in the different measurement steps, the final result could be affected by these changes. This problem is solved by the one-step voltage noise measurement methodology based on a novel amplifier topology proposed in this paper. Circuit implementations for the amplifier building blocks based on operational amplifiers are critically discussed. The proposed approach is validated through measurements performed on a prototype circuit.

  13. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) Program: A government overview

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    1992-01-01

    LaRC, under the Design Analysis Methods for Vibrations (DAMVIBS) Program, set out in 1984 to establish the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. Considerable work was performed by the industry participants in the program since that time. Because the DAMVIBS Program is being phased out, a government/industry assessment of the program was made to identify those accomplishments and contributions which may be ascribed to the program. The purpose is to provide an overview of the program and its accomplishments and contributions from the perspective of the government sponsoring organization.

  14. Bell-Curve Based Evolutionary Strategies for Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    2001-01-01

    Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity. However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold. One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back, and Dasgupta and Michalewicz. We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.

  15. Energy Savings in Cellular Networks Based on Space-Time Structure of Traffic Loads

    NASA Astrophysics Data System (ADS)

    Sun, Jingbo; Wang, Yue; Yuan, Jian; Shan, Xiuming

    Since most of the energy consumed by the telecommunication infrastructure is due to Base Transceiver Stations (BTSs), switching off BTSs when the traffic load is low has been recognized as an effective way of saving energy. In this letter, an energy saving scheme is proposed to minimize the number of active BTSs based on the space-time structure of traffic loads as determined by principal component analysis. Compared to existing methods, our approach models traffic loads more accurately and has a much smaller input size. As it is implemented in an off-line manner, our scheme also avoids excessive communication and computing overheads. Simulation results show that the proposed method has a comparable performance in energy savings.

  16. A Critical Plane-energy Model for Multiaxial Fatigue Life Prediction of Homogeneous and Heterogeneous Materials

    NASA Astrophysics Data System (ADS)

    Wei, Haoyang

    A new critical plane-energy model is proposed in this thesis for multiaxial fatigue life prediction of homogeneous and heterogeneous materials. A brief review of existing methods, especially the critical plane-based and energy-based methods, is given first. Special focus is on one critical plane approach which has been shown to work for both brittle and ductile metals. The key idea is to automatically change the critical plane orientation with respect to different materials and stress states. One potential drawback of the developed model is that it needs an empirical calibration parameter for non-proportional multiaxial loadings, since only the strain terms are used and the out-of-phase hardening cannot be considered. The energy-based model using the critical plane concept is proposed with the help of the Mroz-Garud hardening rule to explicitly include the effect of non-proportional hardening under cyclic fatigue loadings. Thus, the empirical calibration for non-proportional loading is not needed since the out-of-phase hardening is naturally included in the stress calculation. The model predictions are compared with experimental data from the open literature, and it is shown that the proposed model can work for both proportional and non-proportional loadings without the empirical calibration. Next, the model is extended to the fatigue analysis of heterogeneous materials by integrating it with the finite element method. Fatigue crack initiation of representative volumes of heterogeneous materials is analyzed using the developed critical plane-energy model, with special focus on the microstructure effect on the multiaxial fatigue life predictions. Several conclusions are drawn and future work is outlined based on the proposed study.

  17. Sika deer (Cervus nippon)-specific real-time PCR method to detect fraudulent labelling of meat and meat products.

    PubMed

    Kaltenbrunner, Maria; Hochegger, Rupert; Cichna-Markl, Margit

    2018-05-08

    Since game meat is more valuable and expensive than meat from domesticated animal species it is a potential target for adulteration. Analytical methods must allow the identification and quantification of meat species to be applicable for the detection of fraudulent labelling. We developed a real-time PCR assay for the authentication of sika deer (Cervus nippon) and products thereof. The primer/probe system amplifies a 71 bp fragment of the kappa-casein precursor gene. Since the target sequence contained only one sika deer-specific base, we introduced a deliberate base mismatch in the forward primer. The real-time PCR assay did not show cross-reactivity with 19 animal and 49 plant species tested. Low cross-reactivity was observed with red deer, fallow deer, reindeer and moose. However, with a ΔCt value of ≥11.79 between sika deer and the cross-reacting species, cross-reactivity will not affect the accuracy of the method. LOD and LOQ, determined by analysing serial dilutions of a DNA extract containing 1% (w/w) sika deer DNA in pig DNA, were 0.3% and 0.5%, respectively. The accuracy was evaluated by analysing DNA mixtures and DNA isolates from meat extract mixtures and meat mixtures. In general, recoveries were in the range from 70 to 130%.

  18. Advances in explosives analysis—part II: photon and neutron methods

    DOE PAGES

    Brown, Kathryn E.; Greenfield, Margo T.; McGrane, Shawn D.; ...

    2015-10-07

    The number and capability of explosives detection and analysis methods have increased dramatically since publication of the Analytical and Bioanalytical Chemistry special issue devoted to Explosives Analysis [Moore DS, Goodpaster JV, Anal Bioanal Chem 395:245–246, 2009]. Here we review and critically evaluate the latest (the past five years) important advances in explosives detection, with details of the improvements over previous methods, and suggest possible avenues towards further advances in, e.g., stand-off distance, detection limit, selectivity, and penetration through camouflage or packaging. Our review consists of two parts. Part I discussed methods based on animals, chemicals (including colorimetry, molecularly imprinted polymers, electrochemistry, and immunochemistry), ions (both ion-mobility spectrometry and mass spectrometry), and mechanical devices. In Part II, we review methods based on photons, from very energetic photons including X-rays and gamma rays down to the terahertz range, and neutrons.

  19. Advances in explosives analysis—part I. animal, chemical, ion, and mechanical methods

    DOE PAGES

    Brown, Kathryn E.; Greenfield, Margo T.; McGrane, Shawn D.; ...

    2015-10-13

    The number and capability of explosives detection and analysis methods have increased substantially since the publication of the Analytical and Bioanalytical Chemistry special issue devoted to Explosives Analysis (Moore and Goodpaster, Anal Bioanal Chem 395(2):245–246, 2009). We review and critically evaluate the latest (the past five years) important advances in explosives detection, with details of the improvements over previous methods, and suggest possible avenues towards further advances in, e.g., stand-off distance, detection limit, selectivity, and penetration through camouflage or packaging. The review consists of two parts. Part I reviews methods based on animals, chemicals (including colorimetry, molecularly imprinted polymers, electrochemistry, and immunochemistry), ions (both ion-mobility spectrometry and mass spectrometry), and mechanical devices. Part II will review methods based on photons, from very energetic photons including X-rays and gamma rays down to the terahertz range, and neutrons.

  20. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    NASA Astrophysics Data System (ADS)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) was proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons with VMD, EMD, and EWT were also conducted to evaluate its performance. It is indicated that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without modulation, even when the signal frequencies are relatively close.
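
    A rough sketch of the mode-number selection idea, under stated assumptions: `vmd(signal, k)` is a placeholder for any VMD implementation that returns the modes and their center frequencies, and the stopping rule (two center frequencies nearly coinciding indicates over-decomposition) is a simplification of the mode-characteristic criterion described above, not the paper's exact algorithm.

```python
import numpy as np


def adaptive_mode_number(signal, vmd, k_max=10, min_freq_ratio=1.1):
    """Increase the preset mode number until adjacent center frequencies collide."""
    for k in range(2, k_max + 1):
        modes, center_freqs = vmd(signal, k)       # placeholder VMD call
        f = np.sort(np.asarray(center_freqs))
        # If two center frequencies nearly coincide, the extra mode is likely a
        # duplicate, so keep the previous mode number.
        if np.any(f[1:] / np.maximum(f[:-1], 1e-12) < min_freq_ratio):
            return k - 1
    return k_max
```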

  1. Acoustic Radiation Force Elasticity Imaging in Diagnostic Ultrasound

    PubMed Central

    Doherty, Joshua R.; Trahey, Gregg E.; Nightingale, Kathryn R.; Palmeri, Mark L.

    2013-01-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological condition and with the ability to image these features in vivo; elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed non-invasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods. PMID:23549529

  2. Acoustic radiation force elasticity imaging in diagnostic ultrasound.

    PubMed

    Doherty, Joshua R; Trahey, Gregg E; Nightingale, Kathryn R; Palmeri, Mark L

    2013-04-01

    The development of ultrasound-based elasticity imaging methods has been the focus of intense research activity since the mid-1990s. In characterizing the mechanical properties of soft tissues, these techniques image an entirely new subset of tissue properties that cannot be derived with conventional ultrasound techniques. Clinically, tissue elasticity is known to be associated with pathological condition and with the ability to image these features in vivo; elasticity imaging methods may prove to be invaluable tools for the diagnosis and/or monitoring of disease. This review focuses on ultrasound-based elasticity imaging methods that generate an acoustic radiation force to induce tissue displacements. These methods can be performed noninvasively during routine exams to provide either qualitative or quantitative metrics of tissue elasticity. A brief overview of soft tissue mechanics relevant to elasticity imaging is provided, including a derivation of acoustic radiation force, and an overview of the various acoustic radiation force elasticity imaging methods.
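
    For orientation, a minimal sketch of the plane-wave radiation force expression commonly cited in this literature, F = 2·alpha·I/c (force per unit volume in an attenuating medium); the numerical values below are illustrative assumptions, not taken from the review.

```python
def radiation_force_density(alpha_np_per_m: float, intensity_w_per_m2: float,
                            c_m_per_s: float = 1540.0) -> float:
    """Plane-wave body force per unit volume, F = 2 * alpha * I / c  [N/m^3]."""
    return 2.0 * alpha_np_per_m * intensity_w_per_m2 / c_m_per_s


# Example: alpha ~ 5.8 Np/m (about 0.5 dB/cm), focused intensity ~ 1000 W/cm^2
print(radiation_force_density(5.8, 1000 * 1e4))    # ~7.5e4 N/m^3
```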

  3. Measurement of the Microwave Refractive Index of Materials Based on Parallel Plate Waveguides

    NASA Astrophysics Data System (ADS)

    Zhao, F.; Pei, J.; Kan, J. S.; Zhao, Q.

    2017-12-01

    An electrical field scanning apparatus based on a parallel plate waveguide method is constructed, which collects the amplitude and phase matrices as a function of the relative position. On the basis of such data, a method for calculating the refractive index of the measured wedge samples is proposed in this paper. The measurement and calculation results for different PTFE samples reveal that the refractive index measured by the apparatus is substantially consistent with the refractive index inferred from the permittivity of the sample. The refractive index calculation method proposed in this paper is a competitive method for the characterization of materials with a positive refractive index. Since the apparatus and method can measure and calculate the microwave propagation in an arbitrary direction, it is believed that both can be applied to negative refractive index materials, such as metamaterials or "left-handed" materials.

  4. Identification and characterization of earthquake clusters: a comparative analysis for selected sequences in Italy

    NASA Astrophysics Data System (ADS)

    Peresan, Antonella; Gentili, Stefania

    2017-04-01

    Identification and statistical characterization of seismic clusters may provide useful insights about the features of seismic energy release and their relation to physical properties of the crust within a given region. Moreover, a number of studies based on spatio-temporal analysis of main-shock occurrence require preliminary declustering of the earthquake catalogs. Since various methods, relying on different physical/statistical assumptions, may lead to diverse classifications of earthquakes into main events and related events, we aim to investigate the classification differences among different declustering techniques. Accordingly, a formal selection and comparative analysis of earthquake clusters is carried out for the most relevant earthquakes in North-Eastern Italy, as reported in the local OGS-CRS bulletins, compiled at the National Institute of Oceanography and Experimental Geophysics since 1977. The comparison is then extended to selected earthquake sequences associated with a different seismotectonic setting, namely to events that occurred in the region struck by the recent Central Italy destructive earthquakes, making use of INGV data. Various techniques, ranging from classical space-time window methods to ad hoc manual identification of aftershocks, are applied for the detection of earthquake clusters. In particular, a statistical method based on nearest-neighbor distances of events in the space-time-energy domain is considered. Results from cluster identification by the nearest-neighbor method turn out to be quite robust with respect to the time span of the input catalogue, as well as to the minimum magnitude cutoff. The identified clusters for the largest events reported in North-Eastern Italy since 1977 are well consistent with those reported in earlier studies, which were aimed at detailed manual aftershock identification. The study shows that the data-driven approach, based on the nearest-neighbor distances, can be satisfactorily applied to decompose the seismic catalog into background seismicity and individual sequences of earthquake clusters, also in areas characterized by moderate seismic activity, where the standard declustering techniques may turn out to be rather gross approximations. Based on these results, the main statistical features of seismic clusters are explored, including the complex interdependence of related events, with the aim of characterizing the space-time patterns of earthquake occurrence in North-Eastern Italy and capturing their basic differences with the Central Italy sequences.
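
    A hedged sketch (not the authors' implementation) of the nearest-neighbor space-time-energy distance on which such cluster identification is commonly based, eta_ij = t_ij * r_ij**d_f * 10**(-b * m_i), with the parent taken as the earlier event minimizing eta; the parameter values (b, d_f) and the thresholding step are illustrative assumptions.

```python
import numpy as np


def nearest_neighbor_distances(t, x, y, m, b=1.0, d_f=1.6):
    """Return, for each event, the minimal eta over all earlier events and its parent."""
    t, x, y, m = map(np.asarray, (t, x, y, m))
    n = len(t)
    eta = np.full(n, np.inf)
    parent = np.full(n, -1)
    for j in range(1, n):
        dt = t[j] - t[:j]                                # inter-event times
        r = np.hypot(x[j] - x[:j], y[j] - y[:j])         # epicentral distances
        e = dt * np.maximum(r, 1e-6) ** d_f * 10.0 ** (-b * m[:j])
        k = int(np.argmin(e))
        eta[j], parent[j] = e[k], k
    return eta, parent

# A small eta links an event to its parent's cluster; a threshold on log10(eta)
# separates clustered events from background seismicity.
```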

  5. Advances in biosensor development for the screening of antibiotic residues in food products of animal origin - A comprehensive review.

    PubMed

    Gaudin, Valérie

    2017-04-15

    Antibiotic residues may be found in food of animal origin, since veterinary drugs are used for preventive and curative purposes to treat animals. The control of veterinary drug residues in food is necessary to ensure consumer safety. Screening methods are the first step in the control of antibiotic residues in food of animal origin. Conventional screening methods are based on different technologies, microbiological methods, immunological methods or physico-chemical methods (e.g. thin-layer chromatography, HPLC, LC-MS/MS). Screening methods should be simple, quick, inexpensive and specific, with low detection limits and high sample throughput. Biosensors can meet some of these requirements. Therefore, the development of biosensors for the screening of antibiotic residues has been increasing since the 1980s. The present review provides extensive and up-to-date findings on biosensors for the screening of antibiotic residues in food products of animal origin. Biosensors are constituted of a bioreceptor and a transducer. In the detection of antibiotic residues, even though antibodies were the first bioreceptors to be used, new kinds of bioreceptors are being developed more and more (enzymes, aptamers, MIPs); their advantages and drawbacks are discussed in this review. The different categories of transducers (electrochemical, mass-based biosensors, optical and thermal) and their potential applications for the screening of antibiotic residues in food are presented. Moreover, the advantages and drawbacks of the different types of transducers are discussed. Lastly, outlook and the future development of biosensors for the control of antibiotic residues in food are highlighted. Copyright © 2016. Published by Elsevier B.V.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Toro, Javier, E-mail: jjtoroca@unal.edu.co; Duarte, Oscar, E-mail: ogduartev@unal.edu.co; Requena, Ignacio, E-mail: requena@decsai.ugr.es

    The concept of vulnerability has been used to describe the susceptibility of physical, biotic, and social systems to harm or hazard. In this sense, it is a tool that reduces the uncertainties of Environmental Impact Assessment (EIA) since it does not depend exclusively on the value assessments of the evaluator, but rather is based on the environmental state indicators of the site where the projects or activities are being carried out. The concept of vulnerability thus reduces the possibility that evaluators will subjectively interpret results, and be influenced by outside interests and pressures during projects. However, up until now, EIA has been hindered by a lack of effective methods. This research study analyzes the concept of vulnerability, defines Vulnerability Importance and proposes its inclusion in qualitative EIA methodology. The method used to quantify Vulnerability Importance is based on a set of environmental factors and indicators that provide a comprehensive overview of the environmental state. The results obtained in Colombia highlight the usefulness and objectivity of this method since there is a direct relation between this value and the environmental state of the departments analyzed. - Research Highlights: The concept of vulnerability can be considered by defining Vulnerability Importance and including it in qualitative EIA methodology. The use of the concept of environmental vulnerability can reduce the subjectivity of qualitative EIA methods. The proposed method to quantify Vulnerability Importance provides a comprehensive overview of the environmental state. Results in Colombia highlight the usefulness and objectivity of this method.

  7. Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    2004-01-01

    A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on the least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must repeat the least-squares inversion for every point or cell at each time step, or to store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell. As the order of accuracy increases, the partitioning for 3D requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly more difficult. Also, the number of interior facets required to subdivide non-planar faces, and the additional increase in the number of quadrature points for each facet, increases the computational cost greatly.

  8. Analysis of energy-based algorithms for RNA secondary structure prediction

    PubMed Central

    2012-01-01

    Background RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. Results We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Conclusions Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. 
with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets. PMID:22296803
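
    A minimal sketch of the bootstrap percentile step used for the reliability statements above: resample the per-RNA F-measures with replacement and take percentiles of the resampled means. The input array is assumed to hold one F-measure per reference structure.

```python
import numpy as np


def bootstrap_percentile_ci(f_measures, n_boot=10000, alpha=0.05, seed=0):
    """Percentile confidence interval for the mean F-measure of a dataset."""
    rng = np.random.default_rng(seed)
    f = np.asarray(f_measures)
    means = np.array([rng.choice(f, size=f.size, replace=True).mean()
                      for _ in range(n_boot)])
    return np.percentile(means, [100 * alpha / 2, 100 * (1 - alpha / 2)])

# A wide interval on a small class (e.g., 89 Group I introns) signals that its
# average accuracy is not reliable enough for comparing algorithms.
```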

  9. Analysis of energy-based algorithms for RNA secondary structure prediction.

    PubMed

    Hajiaghayi, Monir; Condon, Anne; Hoos, Holger H

    2012-02-01

    RNA molecules play critical roles in the cells of organisms, including roles in gene regulation, catalysis, and synthesis of proteins. Since RNA function depends in large part on its folded structures, much effort has been invested in developing accurate methods for prediction of RNA secondary structure from the base sequence. Minimum free energy (MFE) predictions are widely used, based on nearest neighbor thermodynamic parameters of Mathews, Turner et al. or those of Andronescu et al. Some recently proposed alternatives that leverage partition function calculations find the structure with maximum expected accuracy (MEA) or pseudo-expected accuracy (pseudo-MEA) methods. Advances in prediction methods are typically benchmarked using sensitivity, positive predictive value and their harmonic mean, namely F-measure, on datasets of known reference structures. Since such benchmarks document progress in improving accuracy of computational prediction methods, it is important to understand how measures of accuracy vary as a function of the reference datasets and whether advances in algorithms or thermodynamic parameters yield statistically significant improvements. Our work advances such understanding for the MFE and (pseudo-)MEA-based methods, with respect to the latest datasets and energy parameters. We present three main findings. First, using the bootstrap percentile method, we show that the average F-measure accuracy of the MFE and (pseudo-)MEA-based algorithms, as measured on our largest datasets with over 2000 RNAs from diverse families, is a reliable estimate (within a 2% range with high confidence) of the accuracy of a population of RNA molecules represented by this set. However, average accuracy on smaller classes of RNAs such as a class of 89 Group I introns used previously in benchmarking algorithm accuracy is not reliable enough to draw meaningful conclusions about the relative merits of the MFE and MEA-based algorithms. Second, on our large datasets, the algorithm with best overall accuracy is a pseudo MEA-based algorithm of Hamada et al. that uses a generalized centroid estimator of base pairs. However, between MFE and other MEA-based methods, there is no clear winner in the sense that the relative accuracy of the MFE versus MEA-based algorithms changes depending on the underlying energy parameters. Third, of the four parameter sets we considered, the best accuracy for the MFE-, MEA-based, and pseudo-MEA-based methods is 0.686, 0.680, and 0.711, respectively (on a scale from 0 to 1 with 1 meaning perfect structure predictions) and is obtained with a thermodynamic parameter set obtained by Andronescu et al. called BL* (named after the Boltzmann likelihood method by which the parameters were derived). Large datasets should be used to obtain reliable measures of the accuracy of RNA structure prediction algorithms, and average accuracies on specific classes (such as Group I introns and Transfer RNAs) should be interpreted with caution, considering the relatively small size of currently available datasets for such classes. The accuracy of the MEA-based methods is significantly higher when using the BL* parameter set of Andronescu et al. than when using the parameters of Mathews and Turner, and there is no significant difference between the accuracy of MEA-based methods and MFE when using the BL* parameters. The pseudo-MEA-based method of Hamada et al. with the BL* parameter set significantly outperforms all other MFE and MEA-based algorithms on our large data sets.

  10. Advanced Methods for Determining Prediction Uncertainty in Model-Based Prognostics with Application to Planetary Rovers

    NASA Technical Reports Server (NTRS)

    Daigle, Matthew J.; Sankararaman, Shankar

    2013-01-01

    Prognostics is centered on predicting the time of and time until adverse events in components, subsystems, and systems. It typically involves both a state estimation phase, in which the current health state of a system is identified, and a prediction phase, in which the state is projected forward in time. Since prognostics is mainly a prediction problem, prognostic approaches cannot avoid uncertainty, which arises due to several sources. Prognostics algorithms must both characterize this uncertainty and incorporate it into the predictions so that informed decisions can be made about the system. In this paper, we describe three methods to solve these problems, including Monte Carlo-, unscented transform-, and first-order reliability-based methods. Using a planetary rover as a case study, we demonstrate and compare the different methods in simulation for battery end-of-discharge prediction.
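
    A simplified sketch of the Monte Carlo approach to prediction uncertainty: sample the uncertain state and future load, propagate a battery model to the end-of-discharge (EOD) threshold, and summarize the resulting EOD-time distribution. The linear coulomb-counting model and all numbers are stand-in assumptions, not the rover battery model used in the paper.

```python
import numpy as np


def predict_eod_mc(soc_mean=0.8, soc_std=0.02, load_mean=2.0, load_std=0.2,
                   capacity_ah=2.2, dt_h=0.01, n_samples=1000, seed=0):
    """Monte Carlo distribution of time until state of charge drops to 10%."""
    rng = np.random.default_rng(seed)
    eod_times = []
    for _ in range(n_samples):
        soc = rng.normal(soc_mean, soc_std)        # uncertain current state
        load = rng.normal(load_mean, load_std)     # uncertain future load (A)
        t = 0.0
        while soc > 0.1:                           # EOD threshold
            soc -= load * dt_h / capacity_ah       # simple coulomb counting
            t += dt_h
        eod_times.append(t)
    return np.mean(eod_times), np.percentile(eod_times, [5, 95])
```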

  11. Alkaloid profiling of the traditional Chinese medicine Rhizoma corydalis using high performance liquid chromatography-tandem quadrupole time-of-flight mass spectrometry

    PubMed Central

    Sun, Mingqian; Liu, Jianxun; Lin, Chengren; Miao, Lan; Lin, Li

    2014-01-01

    Since alkaloids are the major active constituents of Rhizoma corydalis (RC), a convenient and accurate analytical method is needed for their identification and characterization. Here we report a method to profile the alkaloids in RC based on liquid chromatography-tandem quadrupole time-of-flight mass spectrometry (LC–Q-TOF-MS/MS). A total of 16 alkaloids belonging to four different classes were identified by comparison with authentic standards. The fragmentation pathway of each class of alkaloid was clarified and their differences were elucidated. Furthermore, based on an analysis of fragmentation pathways and alkaloid profiling, a rapid and accurate method for the identification of unknown alkaloids in RC is proposed. The method could also be useful for the quality control of RC. PMID:26579385

  12. Autonomous rock detection on mars through region contrast

    NASA Astrophysics Data System (ADS)

    Xiao, Xueming; Cui, Hutao; Yao, Meibao; Tian, Yang

    2017-08-01

    In this paper, we present a new autonomous rock detection approach through region contrast. Unlike current state-of-the-art pixel-level rock segmentation methods, the new method addresses the problem at the region level, which significantly reduces the computational cost. The image is first split into homogeneous regions based on intensity information and spatial layout. Considering the memory constraints of the onboard flight processor, only low-level features, the average intensity and variation of each superpixel, are measured. Region contrast is derived as the integration of intensity contrast and a smoothness measurement. Rocks are then segmented from the resulting contrast map by an adaptive threshold. Since a purely intensity-based method may produce false detections in background areas whose illumination differs from their surroundings, a more reliable method is further proposed by introducing a spatial factor and background similarity into the region contrast. The spatial factor captures the locality of contrast, while the background similarity gives the probability that each subregion belongs to the background. Our method is efficient in dealing with large images, and only a few parameters are needed. Preliminary experimental results show that our algorithm outperforms edge-based methods on various grayscale rover images.
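
    A hedged sketch of a region-contrast map built from superpixel statistics, in the spirit of the approach described above (not the authors' exact formulation): each region keeps only its mean intensity and variation, contrast is measured against all other regions, and rocks are thresholded adaptively. scikit-image's SLIC (recent versions, with `channel_axis`) is used as a stand-in segmenter, and the variance damping term is an assumption.

```python
import numpy as np
from skimage.segmentation import slic


def region_contrast_mask(gray, n_segments=200):
    """Return a boolean rock mask from a grayscale image via region contrast."""
    labels = slic(gray, n_segments=n_segments, channel_axis=None)
    ids = np.unique(labels)
    means = np.array([gray[labels == i].mean() for i in ids])
    stds = np.array([gray[labels == i].std() for i in ids])
    # Contrast of each region: mean absolute intensity difference to the other
    # regions, down-weighted for regions with large internal variation.
    contrast = np.abs(means[:, None] - means[None, :]).mean(axis=1) / (1.0 + stds)
    cmap = contrast[np.searchsorted(ids, labels)]
    return cmap > cmap.mean() + cmap.std()         # adaptive threshold
```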

  13. Accurate estimation of human body orientation from RGB-D sensors.

    PubMed

    Liu, Wu; Zhang, Yongdong; Tang, Sheng; Tang, Jinhui; Hong, Richang; Li, Jintao

    2013-10-01

    Accurate estimation of human body orientation can significantly enhance the analysis of human behavior, which is a fundamental task in the field of computer vision. However, existing orientation estimation methods cannot handle the variety of body poses and appearances. In this paper, we propose an innovative RGB-D-based orientation estimation method to address these challenges. By utilizing the RGB-D information, which can be acquired in real time by RGB-D sensors, our method is robust to cluttered environments, illumination changes, and partial occlusions. Specifically, efficient static and motion cue extraction methods are proposed based on the RGB-D superpixels to reduce the noise of depth data. Since it is hard to discriminate the full 360° of orientation using static cues or motion cues alone, we propose to utilize a dynamic Bayesian network system (DBNS) to effectively employ the complementary nature of both static and motion cues. In order to verify our proposed method, we build an RGB-D-based human body orientation dataset that covers a wide diversity of poses and appearances. Our intensive experimental evaluations on this dataset demonstrate the effectiveness and efficiency of the proposed method.

  14. Effects of global signal regression and subtraction methods on resting-state functional connectivity using arterial spin labeling data.

    PubMed

    Silva, João Paulo Santos; Mônaco, Luciana da Mata; Paschoal, André Monteiro; Oliveira, Ícaro Agenor Ferreira de; Leoni, Renata Ferranti

    2018-05-16

    Arterial spin labeling (ASL) is an established magnetic resonance imaging (MRI) technique that is finding broader applications in functional studies of the healthy and diseased brain. To improve cerebral blood flow (CBF) signal specificity, many algorithms and imaging procedures, such as subtraction methods, have been proposed to eliminate or at least minimize noise sources. Therefore, this study addressed how CBF functional connectivity (FC) is changed by different subtraction methods and by the removal of residual motion artifacts and global signal fluctuations (RMAGSF), with regard to resting brain network (RBN) identification and correlations between regions of interest (ROI). Twenty young healthy participants (13 M/7F, mean age = 25 ± 3 years) underwent an MRI protocol with a pseudo-continuous ASL (pCASL) sequence. Perfusion-based images were obtained using simple, sinc, and running subtraction. RMAGSF removal was applied to all CBF time series. Independent Component Analysis (ICA) was used for RBN identification, while Pearson's correlation was used for ROI-based FC analysis. Temporal signal-to-noise ratio (tSNR) was higher in CBF maps obtained by sinc subtraction, although RMAGSF removal had a significant effect on maps obtained with simple and running subtractions. Neither the subtraction method nor the RMAGSF removal directly affected the identification of RBNs. However, the number of correlated and anti-correlated voxels varied for different subtraction and filtering methods. At the ROI-to-ROI level, changes were prominent in FC values and their statistical significance. Our study showed that both RMAGSF filtering and the subtraction method might influence resting-state FC results, especially at the ROI level, consequently affecting FC analysis and its interpretation. Taking our results and the discussion together, for an exploratory assessment of the brain one could avoid removing RMAGSF so as not to bias FC measures, but use sinc subtraction to minimize low-frequency contamination. However, CBF signal specificity and the frequency range for filtering purposes still need to be assessed in future studies. Copyright © 2018 Elsevier Inc. All rights reserved.
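
    For readers unfamiliar with the three schemes, here is a generic sketch (not the study's exact pipeline) of simple, running, and sinc subtraction applied to an interleaved control/label ASL series; the sinc variant evaluates the label series half a sample earlier via band-limited interpolation so that both series refer to approximately the same acquisition time. The interleaving order and shift direction are assumptions.

```python
import numpy as np


def simple_subtraction(ts):
    ctrl, lab = ts[0::2], ts[1::2]                 # assumes control acquired first
    return ctrl - lab                              # one perfusion point per pair


def running_subtraction(ts):
    diffs = []
    for i in range(1, len(ts) - 1):
        sign = 1.0 if i % 2 == 0 else -1.0         # keep control-minus-label sign
        diffs.append(sign * (ts[i] - 0.5 * (ts[i - 1] + ts[i + 1])))
    return np.array(diffs)


def sinc_interp(x, t):
    """Band-limited evaluation of samples x[n] at fractional times t."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.sinc(ti - n)) for ti in np.atleast_1d(t)])


def sinc_subtraction(ts):
    ctrl, lab = ts[0::2], ts[1::2]
    lab_shifted = sinc_interp(lab, np.arange(len(lab)) - 0.5)  # half-sample shift
    return ctrl - lab_shifted
```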

  15. flowVS: channel-specific variance stabilization in flow cytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  16. flowVS: channel-specific variance stabilization in flow cytometry

    DOE PAGES

    Azad, Ariful; Rajwa, Bartek; Pothen, Alex

    2016-07-28

    Comparing phenotypes of heterogeneous cell populations from multiple biological conditions is at the heart of scientific discovery based on flow cytometry (FC). When the biological signal is measured by the average expression of a biomarker, standard statistical methods require that variance be approximately stabilized in populations to be compared. Since the mean and variance of a cell population are often correlated in fluorescence-based FC measurements, a preprocessing step is needed to stabilize the within-population variances.

  17. Introducing undergraduate students to global health challenges through web-based learning.

    PubMed

    White, Jerry L

    2005-01-01

    Since many students cannot afford the expense of international travel, creative and active learning methods are needed to help students experience the increased awareness that results from exposure to global health concepts. The global health course described in this article uses a variety of web-based learning experiences and other interactive strategies to equip future nurses for leadership roles in global health. An emphasis on written communication is an important component of the course.

  18. Experimental and Computational Investigations of Vertical Axis Wind Turbine Enclosed with Flanged Diffuser

    NASA Astrophysics Data System (ADS)

    Surya Raj, G.; Sangeetha, N.; Prince, M.

    2018-02-01

    Generation of wind energy is essential to meet additional energy demand. Several long-term plans are being taken up to generate energy for fast-developing industries. Detailed research has since been undertaken to improve the efficiency of such vertical axis wind turbines (VAWT). In this work, VAWTs with and without a diffuser arrangement are considered for experiments and analysis. Five diffusers were provided around the blades of the VAWT, which is placed inside a pentagon-shaped fabricated structure. The power output of the diffuser-based VAWT arrangement was studied by both numerical and experimental methods and compared with that of a bare VAWT. Finally, it was found that the diffuser-based VAWT generates approximately twice the output power of the bare VAWT.
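
    A back-of-the-envelope sketch (not taken from the paper) of why a flanged diffuser helps: rotor power scales with the cube of the through-rotor wind speed, P = 0.5 * rho * A * Cp * U^3, so a modest diffuser-induced speed-up yields a large power gain. All numbers are illustrative assumptions.

```python
def rotor_power(wind_speed, rotor_area, cp=0.3, rho=1.225):
    """Power extracted by a rotor of area A at wind speed U (SI units)."""
    return 0.5 * rho * rotor_area * cp * wind_speed ** 3


bare = rotor_power(wind_speed=5.0, rotor_area=1.0)
augmented = rotor_power(wind_speed=5.0 * 1.26, rotor_area=1.0)   # ~26% speed-up
print(augmented / bare)   # ~2.0, consistent with the roughly twofold gain reported
```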

  19. A method to determine agro-climatic zones based on correlation and cluster analyses

    NASA Astrophysics Data System (ADS)

    Borges Valeriano, Taynara Tuany; de Souza Rolim, Glauco; de Oliveira Aparecido, Lucas Eduardo

    2017-12-01

    Determining agro-climatic zones (ACZs) is traditionally done by cross-comparing meteorological elements such as air temperature, rainfall, and water deficit (DEF). This study proposes a new method based on correlations between monthly DEFs during the crop cycle and annual yield, and performs a multivariate cluster analysis on these correlations. This 'correlation method' was applied to all municipalities in the state of São Paulo to determine ACZs for coffee plantations. A traditional ACZ method for coffee, which is based on temperature and DEF ranges (Evangelista et al.; RBEAA, 6:445-452, 2002), was applied to the study area for comparison with the correlation method. The traditional ACZ method classified the "Alta Mogiana," "Média Mogiana," and "Garça and Marília" regions, which are traditional coffee regions, as only suitable or even restricted for coffee plantations. These traditional regions have produced coffee since 1800 and should not be classified as restricted. The correlation method classified those areas as high-producing regions and expanded them into other areas. The proposed method is innovative because it is more detailed than common ACZ methods. Each developmental crop phase was analyzed based on correlations between the monthly DEF and yield, emphasizing the importance of crop physiology in relation to climate.
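
    A hedged sketch of the correlation-then-cluster idea described above: for each municipality, correlate the monthly DEF during the crop cycle with annual yield, then cluster the municipalities on their 12-month correlation profiles. Array shapes, the clustering algorithm (k-means), and the number of zones are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans


def acz_from_correlations(def_monthly, yield_annual, n_zones=4, seed=0):
    """
    def_monthly : (n_municipalities, n_years, 12) monthly water deficit
    yield_annual: (n_municipalities, n_years) annual yield
    Returns one agro-climatic zone label per municipality.
    """
    n_mun = def_monthly.shape[0]
    profiles = np.zeros((n_mun, 12))
    for m in range(n_mun):
        for month in range(12):
            profiles[m, month] = np.corrcoef(def_monthly[m, :, month],
                                             yield_annual[m])[0, 1]
    km = KMeans(n_clusters=n_zones, random_state=seed, n_init=10)
    return km.fit_predict(np.nan_to_num(profiles))
```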

  20. Local-in-Time Adjoint-Based Method for Optimal Control/Design Optimization of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Yamaleev, N. K.; Diskin, B.; Nielsen, E. J.

    2009-01-01

    We study local-in-time adjoint-based methods for the minimization of flow matching functionals subject to the 2-D unsteady compressible Euler equations. The key idea of the local-in-time method is to construct a very accurate approximation of the global-in-time adjoint equations and the corresponding sensitivity derivative by using only local information available on each time subinterval. In contrast to conventional time-dependent adjoint-based optimization methods, which require backward-in-time integration of the adjoint equations over the entire time interval, the local-in-time method solves local adjoint equations sequentially over each time subinterval. Since each subinterval contains relatively few time steps, the storage cost of the local-in-time method is much lower than that of the global adjoint formulation, thus making time-dependent optimization feasible for practical applications. The paper presents a detailed comparison of the local- and global-in-time adjoint-based methods for minimization of a tracking functional governed by the Euler equations describing the flow around a circular bump. Our numerical results show that the local-in-time method converges to the same optimal solution obtained with the global counterpart, while drastically reducing the memory cost as compared to the global-in-time adjoint formulation.

  1. Developing an Optical Lunar Occultation Measurement Reduction System for Observations at Kaau Observatory

    NASA Astrophysics Data System (ADS)

    Malawi, Abdulrahman A.

    2013-06-01

    We present here a detailed explanation of the reduction method that we use to determine the angular diameters of the stars occulted by the dark limb of the moon. This is a main part of the lunar occultation observation program running at King Abdul Aziz University observatory since late 1993. The process is based on the least square model fitting method of analyzing occultation data, first introduced by Nather et al. (Astron. J. 75:963, 1970).

  2. A novel three-stage distance-based consensus ranking method

    NASA Astrophysics Data System (ADS)

    Aghayi, Nazila; Tavana, Madjid

    2018-05-01

    In this study, we propose a three-stage weighted sum method for identifying the group ranks of alternatives. In the first stage, a rank matrix, similar to the cross-efficiency matrix, is obtained by computing the individual rank position of each alternative based on importance weights. In the second stage, a secondary goal is defined to limit the vector of weights since the vector of weights obtained in the first stage is not unique. Finally, in the third stage, the group rank position of alternatives is obtained based on a distance of individual rank positions. The third stage determines a consensus solution for the group so that the ranks obtained have a minimum distance from the ranks acquired by each alternative in the previous stage. A numerical example is presented to demonstrate the applicability and exhibit the efficacy of the proposed method and algorithms.

  3. A novel murmur-based heart sound feature extraction technique using envelope-morphological analysis

    NASA Astrophysics Data System (ADS)

    Yao, Hao-Dong; Ma, Jia-Li; Fu, Bin-Bin; Wang, Hai-Yang; Dong, Ming-Chui

    2015-07-01

    Auscultation of heart sound (HS) signals has served as an important primary approach to diagnosing cardiovascular diseases (CVDs) for centuries. Confronting the intrinsic drawbacks of traditional HS auscultation, computer-aided automatic HS auscultation based on feature extraction techniques has witnessed explosive development. Yet, most existing HS feature extraction methods adopt acoustic or time-frequency features which exhibit a poor relationship with diagnostic information, thus restricting the performance of further interpretation and analysis. Tackling this bottleneck problem, this paper proposes a novel murmur-based HS feature extraction method, since murmurs contain massive pathological information and are regarded as the first indications of pathological changes of the heart valves. Adopting the discrete wavelet transform (DWT) and the Shannon envelope, the envelope-morphological characteristics of murmurs are obtained and three features are extracted accordingly. Validated by discriminating normal HS from five kinds of abnormal HS signals with the extracted features, the proposed method provides an attractive candidate for automatic HS auscultation.
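
    A hedged sketch of the envelope-morphological idea: reconstruct an assumed murmur-related band with the discrete wavelet transform (PyWavelets) and compute a smoothed, normalized Shannon energy envelope from it. The wavelet, decomposition level, retained sub-bands, and smoothing window are all illustrative assumptions, not the paper's settings.

```python
import numpy as np
import pywt


def shannon_envelope(hs, wavelet="db6", level=5, keep_levels=(3, 4)):
    """Shannon energy envelope of an (assumed) murmur band of a heart sound."""
    coeffs = pywt.wavedec(hs, wavelet, level=level)
    kept = [c if i in keep_levels else np.zeros_like(c)
            for i, c in enumerate(coeffs)]          # zero all other sub-bands
    band = pywt.waverec(kept, wavelet)[: len(hs)]
    x = band / (np.max(np.abs(band)) + 1e-12)       # normalize to [-1, 1]
    se = -(x ** 2) * np.log(x ** 2 + 1e-12)         # Shannon energy
    win = np.ones(64) / 64.0                        # short moving-average smoother
    return np.convolve(se, win, mode="same")
```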

  4. Benchmarking of HPCC: A novel 3D molecular representation combining shape and pharmacophoric descriptors for efficient molecular similarity assessments.

    PubMed

    Karaboga, Arnaud S; Petronin, Florent; Marchetti, Gino; Souchet, Michel; Maigret, Bernard

    2013-04-01

    Since 3D molecular shape is an important determinant of biological activity, designing accurate 3D molecular representations is still of high interest. Several chemoinformatic approaches have been developed to try to describe accurate molecular shapes. Here, we present a novel 3D molecular description, namely harmonic pharma chemistry coefficient (HPCC), combining a ligand-centric pharmacophoric description projected onto a spherical harmonic based shape of a ligand. The performance of HPCC was evaluated by comparison to the standard ROCS software in a ligand-based virtual screening (VS) approach using the publicly available directory of useful decoys (DUD) data set comprising over 100,000 compounds distributed across 40 protein targets. Our results were analyzed using commonly reported statistics such as the area under the curve (AUC) and normalized sum of logarithms of ranks (NSLR) metrics. Overall, our HPCC 3D method is globally as efficient as the state-of-the-art ROCS software in terms of enrichment and slightly better for more than half of the DUD targets. Since it is largely admitted that VS results depend strongly on the nature of the protein families, we believe that the present HPCC solution is of interest over the current ligand-based VS methods. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM

    NASA Astrophysics Data System (ADS)

    Ahn, Chang-Jun

    In MIMO systems, the deployment of a multiple antenna technique can enhance the system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than the number of RF transmitters. These methods rely on selecting the optimal transmitter antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmitter antenna elements. Since FBI is control overhead, the rate of the feedback is limited. This motivates the study of limited feedback techniques where only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing due to the numerous subchannels. Moreover, MIMO/OFDM systems require antenna separation of 5 ∼ 10 wavelengths to keep the correlation coefficient below 0.7 to achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To reduce these problems, in this paper we propose link correlation-based transmit sector antenna selection for Alamouti-coded OFDM without FBI.

  6. Transient Stability Output Margin Estimation Based on Energy Function Method

    NASA Astrophysics Data System (ADS)

    Miwa, Natsuki; Tanaka, Kazuyuki

    In this paper, a new method of estimating the critical generation margin (CGM) in power systems is proposed from the viewpoint of transient stability diagnostics. The proposed method has the capability to directly compute the stability limit output for a given contingency based on the transient energy function (TEF) method. Since the CGM can be obtained directly from the limit output using estimated P-θ curves and is easy to understand, it is more useful than the conventional critical clearing time (CCT) of the energy function method. The proposed method can also estimate a negative CGM, which indicates instability under the present load profile; such a negative CGM can be directly utilized as a generator output restriction. The accuracy and fast solution capability of the proposed method are verified by applying it to a simple 3-machine model and the IEEJ EAST 10-machine standard model. Furthermore, the useful application of the CGM to severity ranking of transient stability for a large number of contingency cases is discussed.

  7. Scalable High Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning

    PubMed Central

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C.

    2015-01-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of a deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked auto-encoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-tesla brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to the state of the art. PMID:26552069

  8. Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning.

    PubMed

    Wu, Guorong; Kim, Minjeong; Wang, Qian; Munsell, Brent C; Shen, Dinggang

    2016-07-01

    Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, the development of deformable image registration method that scales well to new image modalities or new image applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features upon observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Since deep learning is an unsupervised learning method, no ground truth label knowledge is required. This makes the proposed feature selection method more flexible to new imaging modalities since feature representations can be directly learned from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared to two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed image registration framework, image registration experiments were conducted on 7.0-T brain MR images. In all experiments, the results showed that the new image registration framework consistently demonstrated more accurate registration results when compared to state of the art.

  9. Revenue Bond Financing Auxiliary Service Facilities Construction at the State Colleges.

    ERIC Educational Resources Information Center

    Maryland Board of Trustees of the State Colleges, Baltimore.

    Since the State of Maryland does not provide funds for the construction of dormitories, dining halls, student activities, buildings, and similar ancillary services, an outline of cost responsibilities for such facilities in the state college system is presented. Based on a discussion of the financing methods for ancillary projects, the role of the…

  10. Toward a New Method of Decoding Algebraic Codes Using Groebner Bases

    DTIC Science & Technology

    1993-10-01

    variables over GF(2^m). A celebrated algorithm by Buchberger produces a reduced Groebner basis of that ideal. It turns out that, since the common roots of...all the polynomials in the ideal are a set of isolated points, this reduced Groebner basis is in triangular form, and the univariate polynomial in that
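
    A small illustration (not from the report) of the triangular-form property referred to above: when the ideal's common roots form a finite set of points, a reduced Groebner basis in lexicographic order contains a univariate polynomial in the last variable. SymPy over the rationals is used here purely for readability; the report works over GF(2^m).

```python
from sympy import symbols, groebner

x, y = symbols("x y")
# Two curves meeting in finitely many points
G = groebner([x**2 + y**2 - 1, x - y], x, y, order="lex")
print(list(G))   # [x - y, 2*y**2 - 1]  -- the last element is univariate in y
```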

  11. Student Satisfaction Process in Virtual Learning System: Considerations Based in Information and Service Quality from Brazil's Experience

    ERIC Educational Resources Information Center

    Machado-Da-Silva, Fábio Nazareno; Meirelles, Fernando de Souza; Filenga, Douglas; Filho, Marino Brugnolo

    2014-01-01

    Distance learning has undergone great changes, especially since the advent of the Internet and communication and information technology. Questions have been asked following the growth of this mode of instructional activity. Researchers have investigated methods to assess the benefits of e-learning from a number of perspectives. This survey…

  12. Additive nonlinear biomass equations: A likelihood-based approach

    Treesearch

    David L. R. Affleck; Ulises Dieguez-Aranda

    2016-01-01

    Since Parresol’s (Can. J. For. Res. 31:865-878, 2001) seminal article on the topic, it has become standard to develop nonlinear tree biomass equations to ensure compatibility among total and component predictions and to fit these equations using multistep generalized least-squares methods. In particular, many studies have specified equations for total tree...

  13. Findings from a Pre-Kindergarten Classroom: Making the Case for STEM in Early Childhood Education

    ERIC Educational Resources Information Center

    Tippett, Christine D.; Milford, Todd M.

    2017-01-01

    Science, technology, engineering, and mathematics (STEM) in early childhood education is an area currently given little attention in the literature, which is unfortunate since young children are natural scientists and engineers. Here, we outline our mixed-methods design-based research investigation of a pre-kindergarten (Pre-K) classroom where two…

  14. Halogen free benzoxazine based curable compositions for high Tg applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tietze, Roger; Nguyen, Yen-Loan

    A method for forming a halogen-free curable composition containing a benzoxazine monomer, at least one epoxy resin, a catalyst, a toughening agent and a solvent. The halogen-free curable composition is especially suited for use in automobile and aerospace applications since the composition, upon curing, produces a composite having a high glass transition temperature.

  15. The root iron reductase assay: an examination of key factors that must be respected to generate meaningful assay results

    USDA-ARS?s Scientific Manuscript database

    Plant iron researchers have been quantifying root iron reductase activity since the 1970's, using a simple spectrophotometric method based on the color change of a ferrous iron chromophore. The technique was used by Chaney, Brown, and Tiffin (1972) to demonstrate the obligatory reduction of ferric i...

  16. High-speed autofocusing of a cell using diffraction pattern

    NASA Astrophysics Data System (ADS)

    Oku, Hiromasa; Ishikawa, Masatoshi; Theodorus; Hashimoto, Koichi

    2006-05-01

    This paper proposes a new autofocusing method for observing cells under a transmission illumination. The focusing method uses a quick and simple focus estimation technique termed “depth from diffraction,” which is based on a diffraction pattern in a defocused image of a biological specimen. Since this method can estimate the focal position of the specimen from only a single defocused image, it can easily realize high-speed autofocusing. To demonstrate the method, it was applied to continuous focus tracking of a swimming paramecium, in combination with two-dimensional position tracking. Three-dimensional tracking of the paramecium for 70 s was successfully demonstrated.

  17. Create and Publish a Hierarchical Progressive Survey (HiPS)

    NASA Astrophysics Data System (ADS)

    Fernique, P.; Boch, T.; Pineau, F.; Oberto, A.

    2014-05-01

    Since 2009, the CDS has promoted a visualization method based on the HEALPix sky tessellation. This method, called "Hierarchical Progressive Survey" or HiPS, allows one to display a survey progressively. It is particularly suited for all-sky surveys or deep fields. This visualization method is now integrated in several applications, notably Aladin, the SiTools/MIZAR CNES framework, and the recent HTML5 "Aladin Lite". Also, more than one hundred surveys are already available in this view mode. In this article, we will present the progress concerning this method and its recent adaptation to astronomical catalogs such as the GAIA simulation.
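
    A hedged sketch of the hierarchy that HiPS relies on: with NESTED HEALPix indexing, tile p at order k splits into children 4p..4p+3 at order k+1, so a client can progressively refine only the tiles in view. The example below uses the healpy package and an arbitrary tile number.

```python
import healpy as hp

order = 3                                    # HiPS order k  ->  nside = 2**k
nside = 2 ** order
theta, phi = hp.pix2ang(nside, 42, nest=True)          # centre of tile 42 at order 3
children = [4 * 42 + i for i in range(4)]              # its four tiles at order 4
parent = hp.ang2pix(2 ** (order + 1), theta, phi, nest=True) // 4
print(children, parent)                      # parent == 42
```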

  18. Global antioxidant response of meat.

    PubMed

    Carrillo, Celia; Barrio, Ángela; Del Mar Cavia, María; Alonso-Torre, Sara

    2017-06-01

    The global antioxidant response (GAR) method uses an enzymatic digestion to release antioxidants from foods. Owing to the importance of digestion for protein breakdown and subsequent release of bioactive compounds, the aim of the present study was to compare the GAR method for meat with the existing methodologies: the extraction-based method and QUENCHER. Seven fresh meats were analyzed using ABTS and FRAP assays. Our results indicated that the GAR of meat was higher than the total antioxidant capacity (TAC) assessed with the traditional extraction-based method. When evaluated with GAR, the thermal treatment led to an increase in the TAC of the soluble fraction, contrasting with a decreased TAC after cooking measured using the extraction-based method. The effect of thermal treatment on the TAC assessed by the QUENCHER method seemed to be dependent on the assay applied, since results from ABTS differed from FRAP. Our results allow us to hypothesize that the activation of latent bioactive peptides along the gastrointestinal tract should be taken into consideration when evaluating the TAC of meat. Therefore, we conclude that the GAR method may be more appropriate for assessing the TAC of meat than the existing, most commonly used methods. © 2016 Society of Chemical Industry.

  19. Accuracy improvement of multimodal measurement of speed of sound based on image processing

    NASA Astrophysics Data System (ADS)

    Nitta, Naotaka; Kaya, Akio; Misawa, Masaki; Hyodo, Koji; Numano, Tomokazu

    2017-07-01

    Since the speed of sound (SOS) reflects tissue characteristics and is expected to serve as an evaluation index of elasticity and water content, noninvasive measurement of the SOS is eagerly anticipated. However, it is difficult to measure the SOS by using an ultrasound device alone. Therefore, we have presented a noninvasive measurement method for the SOS using ultrasound (US) and magnetic resonance (MR) images. In this method, we determine the longitudinal SOS based on a thickness measurement from the MR image and a time-of-flight (TOF) measurement from the US image. The accuracy of the SOS measurement is affected by the accuracy of image registration and the accuracy of the thickness measurements in the MR and US images. In this study, we address the latter and present an image-processing-based method for improving the accuracy of thickness measurement. The method was investigated by using in vivo data obtained from a tissue-engineered cartilage with an unclear boundary implanted in the back of a rat.
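
    As a minimal sketch of the quantity being estimated: the tissue thickness d measured on the MR image and the time of flight t measured on the US image give the longitudinal speed of sound as c = d / t for one-way transmission (or c = 2d / t for a pulse-echo round trip); which geometry applies depends on the setup and is assumed here, as are the numbers.

```python
def speed_of_sound(thickness_m: float, tof_s: float, pulse_echo: bool = False) -> float:
    """Longitudinal SOS from MR-derived thickness and US-derived time of flight."""
    path = 2.0 * thickness_m if pulse_echo else thickness_m
    return path / tof_s


print(speed_of_sound(3.0e-3, 1.95e-6))   # ~1538 m/s for a 3 mm layer (illustrative)
```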

  20. Optical encryption of multiple three-dimensional objects based on multiple interferences and single-pixel digital holography

    NASA Astrophysics Data System (ADS)

    Wang, Ying; Liu, Qi; Wang, Jun; Wang, Qiong-Hua

    2018-03-01

    We present an optical encryption method for multiple three-dimensional objects based on multiple interferences and single-pixel digital holography. By modifying the Mach–Zehnder interferometer, the interference of the multiple object beams and the one reference beam is used to simultaneously encrypt multiple objects into a ciphertext. During decryption, each three-dimensional object can be decrypted independently without having to decrypt the other objects. Since single-pixel digital holography based on compressive sensing theory is introduced, the amount of encrypted data in this method is effectively reduced. In addition, recording less encrypted data can greatly reduce the bandwidth of network transmission. Moreover, the compressive sensing essentially serves as a secret key that makes an intruder attack invalid, which means that the system is more secure than the conventional encryption method. Simulation results demonstrate the feasibility of the proposed method and show that the system has good security performance. Project supported by the National Natural Science Foundation of China (Grant Nos. 61405130 and 61320106015).

  1. A general method to determine the stability of compressible flows

    NASA Technical Reports Server (NTRS)

    Guenther, R. A.; Chang, I. D.

    1982-01-01

    Several problems were studied using two completely different approaches. The initial method was the standard linearized perturbation theory, finding the values of the individual small-disturbance quantities from the equations of motion. These were serially eliminated from the equations of motion to derive a single equation that governs the stability of the fluid dynamic system. These equations could not be reduced unless the steady-state variable depends on only one coordinate. The stability equation based on one dependent variable was found and was examined to determine the stability of a compressible swirling jet. The second method applied a Lagrangian approach to the problem. Since the equations developed were based on different assumptions, the condition of stability was compared only for the Rayleigh problem of a swirling flow; both approaches reduce to the Rayleigh criterion. This technique allows the viscous shear terms to be included, which is not possible with the first method. The same problem was then examined to see what effect shear has on stability.

  2. Defining Tsunami Magnitude as Measure of Potential Impact

    NASA Astrophysics Data System (ADS)

    Titov, V. V.; Tang, L.

    2016-12-01

    The goal of tsunami forecasting, as a system for predicting the potential impact of a tsunami at coastlines, requires a quick estimate of tsunami magnitude. This goal has been recognized since the beginning of tsunami research. The work of Kajiura, Soloviev, Abe, Murty, and many others discussed several scales for tsunami magnitude based on estimates of tsunami energy. However, the difficulty of estimating tsunami energy from available tsunami measurements at coastal sea-level stations has carried significant uncertainties, and such estimates have been virtually impossible to obtain in real time, before a tsunami impacts coastlines. The slow process of tsunami magnitude estimation, including collection of the vast amount of available coastal sea-level data from affected coastlines, made it impractical to use any tsunami magnitude scale in tsunami warning operations. Uncertainties of the estimates made tsunami magnitudes difficult to use as a universal scale for tsunami analysis. Historically, the earthquake magnitude has been used as a proxy for tsunami impact estimates, since real-time seismic data are available for real-time processing and an ample amount of seismic data is available for elaborate post-event analysis. This measure of tsunami impact carries significant uncertainties in quantitative tsunami impact estimates, since the relation between the earthquake and the generated tsunami energy varies from case to case. In this work, we argue that current tsunami measurement capabilities and real-time modeling tools allow for establishing a robust tsunami magnitude that will be useful for tsunami warning as a quick estimate of tsunami impact and for post-event analysis as a universal scale for tsunami inter-comparison. We present a method for estimating the tsunami magnitude based on tsunami energy and present an application of the magnitude analysis to several historical events for inter-comparison with existing methods.

  3. Computer-Aided Evaluation of Blood Vessel Geometry From Acoustic Images.

    PubMed

    Lindström, Stefan B; Uhlin, Fredrik; Bjarnegård, Niclas; Gylling, Micael; Nilsson, Kamilla; Svensson, Christina; Yngman-Uhlin, Pia; Länne, Toste

    2018-04-01

    A method for computer-aided assessment of blood vessel geometries based on shape-fitting algorithms from metric vision was evaluated. Acoustic images of cross sections of the radial artery and cephalic vein were acquired, and medical practitioners used a computer application to measure the wall thickness and nominal diameter of these blood vessels with a caliper method and the shape-fitting method. The methods performed equally well for wall thickness measurements. The shape-fitting method was preferable for measuring the diameter, since it reduced systematic errors by up to 63% in the case of the cephalic vein because of its eccentricity. © 2017 by the American Institute of Ultrasound in Medicine.

  4. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, K. D.

    1985-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flowfield about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  5. Mobile/android application for QRS detection using zero cross method

    NASA Astrophysics Data System (ADS)

    Rizqyawan, M. I.; Simbolon, A. I.; Suhendra, M. A.; Amri, M. F.; Kusumandari, D. E.

    2018-03-01

    In automatic ECG signal processing, one of the main topics of research is QRS complex detection. Detecting the QRS complex or R peak correctly is important since it is used to derive several other ECG metrics. One of the robust methods for QRS detection is the Zero Cross method. This method adds a high-frequency sequence to the signal and counts zero crossings to detect the QRS complex, which has a low-frequency oscillation. This paper presents an application of QRS detection using the Zero Cross algorithm in an Android-based system. The performance of the algorithm in the mobile environment is measured. The results show that this method is suitable for real-time QRS detection in a mobile application.
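
    To make the zero-crossing idea above concrete, the sketch below illustrates the main steps on a synthetic signal: a nonlinear feature signal is formed, an alternating high-frequency sequence is added, and windows in which the zero-crossing count drops are flagged as QRS candidates. The filter lengths, thresholds, and synthetic ECG are assumptions for illustration, not the parameters used in the paper.

```python
# Illustrative sketch of a zero-crossing-style QRS detector (all filter lengths,
# the synthetic ECG, and the threshold are assumptions for illustration; this is
# not the exact pipeline or parameter set used in the paper).
import numpy as np

def moving_avg(x, n):
    n = max(int(n), 1)
    return np.convolve(x, np.ones(n) / n, mode="same")

def detect_qrs_zero_cross(ecg, fs, win=0.15, k=4.0):
    """Return sample indices where candidate QRS regions begin."""
    # 1. Rough band-pass via a difference of moving averages, then a nonlinear
    #    transform that emphasizes QRS energy.
    band = moving_avg(ecg, 0.025 * fs) - moving_avg(ecg, 0.25 * fs)
    feat = band ** 2
    # 2. Add an alternating high-frequency sequence whose amplitude tracks the
    #    local signal level.
    hf = ((-1) ** np.arange(len(feat))) * k * moving_avg(np.abs(feat), 0.2 * fs)
    x = feat + hf
    # 3. Count zero crossings in a sliding window; the count drops inside a QRS
    #    complex because the large low-frequency deflection dominates the added
    #    high-frequency sequence.
    zc = np.abs(np.diff(np.signbit(x).astype(int)))
    zc_rate = moving_avg(np.append(zc, 0), win * fs)
    # 4. Candidate QRS regions are windows whose zero-crossing rate falls below
    #    a data-driven threshold.
    mask = zc_rate < 0.6 * np.median(zc_rate)
    return np.flatnonzero(np.diff(mask.astype(int)) == 1)

if __name__ == "__main__":
    fs = 250
    ecg = np.zeros(10 * fs)
    ecg[::fs] = 1.0                      # crude impulse train standing in for R peaks
    ecg = np.convolve(ecg, np.hanning(20), "same") + 0.05 * np.random.randn(len(ecg))
    print(detect_qrs_zero_cross(ecg, fs))
```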

  6. A direct-inverse method for transonic and separated flows about airfoils

    NASA Technical Reports Server (NTRS)

    Carlson, Leland A.

    1990-01-01

    A direct-inverse technique and computer program called TAMSEP that can be used for the analysis of the flow about airfoils at subsonic and low transonic freestream velocities is presented. The method is based upon a direct-inverse nonconservative full potential inviscid method, a Thwaites laminar boundary layer technique, and the Barnwell turbulent momentum integral scheme; and it is formulated using Cartesian coordinates. Since the method utilizes inverse boundary conditions in regions of separated flow, it is suitable for predicting the flow field about airfoils having trailing edge separated flow under high lift conditions. Comparisons with experimental data indicate that the method should be a useful tool for applied aerodynamic analyses.

  7. A Chaotic Ordered Hierarchies Consistency Analysis Performance Evaluation Model

    NASA Astrophysics Data System (ADS)

    Yeh, Wei-Chang

    2013-02-01

    The Hierarchies Consistency Analysis (HCA) was proposed by Guh, together with a case study on a resort, to address a weakness of the Analytic Hierarchy Process (AHP). Although the results obtained help the Decision Maker reach more reasonable and rational verdicts, the HCA itself is flawed. In this paper, our objective is to point out the problems of HCA and then propose a revised method, called chaotic ordered HCA (COH for short), which avoids these problems. Since the COH is based upon Guh's method, the Decision Maker establishes decisions in a way similar to that of the original method.

  8. Final report on cermet high-level waste forms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kobisk, E.H.; Quinby, T.C.; Aaron, W.S.

    1981-08-01

    Cermets are being developed as an alternate method for the fixation of defense and commercial high level radioactive waste in a terminal disposal form. Following initial feasibility assessments of this waste form, consisting of ceramic particles dispersed in an iron-nickel base alloy, significantly improved processing methods were developed. The characterization of cermets has continued through property determinations on samples prepared by various methods from a variety of simulated and actual high-level wastes. This report describes the status of development of the cermet waste form as it has evolved since 1977. 6 tables, 18 figures.

  9. Physical, Chemical and Biochemical Modifications of Protein-Based Films and Coatings: An Extensive Review

    PubMed Central

    Zink, Joël; Wyrobnik, Tom; Prinz, Tobias; Schmid, Markus

    2016-01-01

    Protein-based films and coatings are an interesting alternative to traditional petroleum-based materials. However, their mechanical and barrier properties need to be enhanced in order to match those of the latter. Physical, chemical, and biochemical methods can be used for this purpose. The aim of this article is to provide an overview of the effects of various treatments on whey, soy, and wheat gluten protein-based films and coatings. These three protein sources have been chosen since they are among the most abundantly used and are well described in the literature. Similar behavior might be expected for other protein sources. Most of the modifications are still not fully understood at a fundamental level, but all the methods discussed change the properties of the proteins and resulting products. Mastering these modifications is an important step towards the industrial implementation of protein-based films. PMID:27563881

  10. Post processing of protein-compound docking for fragment-based drug discovery (FBDD): in-silico structure-based drug screening and ligand-binding pose prediction.

    PubMed

    Fukunishi, Yoshifumi

    2010-01-01

    For fragment-based drug development, both hit (active) compound prediction and docking-pose (protein-ligand complex structure) prediction of the hit compound are important, since chemical modification (fragment linking, fragment evolution) subsequent to the hit discovery must be performed based on the protein-ligand complex structure. However, the naïve protein-compound docking calculation shows poor accuracy in terms of docking-pose prediction. Thus, post-processing of the protein-compound docking is necessary. Recently, several methods for the post-processing of protein-compound docking have been proposed. In FBDD, the compounds are smaller than those for conventional drug screening. This makes it difficult to perform the protein-compound docking calculation. A method to avoid this problem has been reported. Protein-ligand binding free energy estimation is useful to reduce the procedures involved in the chemical modification of the hit fragment. Several prediction methods have been proposed for high-accuracy estimation of protein-ligand binding free energy. This paper summarizes the various computational methods proposed for docking-pose prediction and their usefulness in FBDD.

  11. Range estimation of passive infrared targets through the atmosphere

    NASA Astrophysics Data System (ADS)

    Cho, Hoonkyung; Chun, Joohwan; Seo, Doochun; Choi, Seokweon

    2013-04-01

    Target range estimation in modern combat systems is traditionally based on radar and active sonar. However, jamming signals tremendously degrade the performance of such active sensor devices. We introduce a simple target range estimation method and derive the fundamental limits of the proposed method based on an atmospheric propagation model. Since passive infrared (IR) sensors measure IR signals radiating from objects at different wavelengths, this method is robust against electromagnetic jamming. The measured target radiance at each wavelength at the IR sensor depends on the emissive properties of the target material and various attenuation factors (i.e., the distance between sensor and target and atmospheric environment parameters). MODTRAN is a tool that models atmospheric propagation of electromagnetic radiation. Based on the results from MODTRAN and atmospheric propagation-based modeling, the target range can be estimated. To analyze the proposed method's performance statistically, we use maximum likelihood estimation (MLE) and evaluate the Cramer-Rao lower bound (CRLB) via the probability density function of the measured radiance. We also compare the CRLB and the variance of the MLE using Monte-Carlo simulation.
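
    The comparison of the MLE variance with the CRLB can be illustrated with a much simpler stand-in for MODTRAN: a single-band Beer-Lambert attenuation model with Gaussian measurement noise. All numerical values and the attenuation model below are assumptions for illustration; they are not the atmospheric model or parameters used in the paper.

```python
# Toy Monte-Carlo comparison of the MLE variance with the CRLB for range
# estimation under a single-band Beer-Lambert attenuation model (assumed here
# in place of MODTRAN; all numerical values are illustrative).
import numpy as np

rng = np.random.default_rng(0)
L0, alpha, sigma = 10.0, 0.05, 0.2    # source radiance, extinction [1/km], noise std
R_true = 12.0                         # true range [km]
grid = np.linspace(1.0, 30.0, 3000)   # search grid for the MLE

def mean_radiance(R):
    return L0 * np.exp(-alpha * R)    # radiance attenuated over range R

# CRLB for a single Gaussian measurement: sigma^2 / (d mu / dR)^2
dmu_dR = -alpha * mean_radiance(R_true)
crlb = sigma ** 2 / dmu_dR ** 2

trials = 5000
estimates = np.empty(trials)
for i in range(trials):
    y = mean_radiance(R_true) + sigma * rng.standard_normal()
    # With Gaussian noise the MLE minimizes the squared residual over the grid.
    estimates[i] = grid[np.argmin((y - mean_radiance(grid)) ** 2)]

print(f"CRLB        : {crlb:.3f} km^2")
print(f"MLE variance: {estimates.var():.3f} km^2")
```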

  12. Multi-objective based spectral unmixing for hyperspectral images

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei

    2017-02-01

    Sparse hyperspectral unmixing assumes that each observed pixel can be expressed as a linear combination of several pure spectra from an a priori library. Sparse unmixing is challenging, since it is usually transformed into an NP-hard l0-norm-based optimization problem. Existing methods usually utilize a relaxation of the original l0 norm. However, the relaxation may bring in sensitive weighting parameters and additional calculation error. In this paper, we propose a novel multi-objective based algorithm to solve the sparse unmixing problem without any relaxation. We transform sparse unmixing into a multi-objective optimization problem, which contains two correlated objectives: minimizing the reconstruction error and controlling the endmember sparsity. To improve the efficiency of the multi-objective optimization, a population-based random flipping strategy is designed. Moreover, we theoretically prove that the proposed method is able to recover a guaranteed approximate solution from the spectral library within limited iterations. The proposed method can directly deal with the l0 norm via binary coding of the spectral signatures in the library. Experiments on both synthetic and real hyperspectral datasets demonstrate the effectiveness of the proposed method.
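
    The two-objective view of sparse unmixing (reconstruction error versus number of selected endmembers) can be illustrated with the toy sketch below, which keeps a Pareto archive while flipping single endmembers in and out of the selection. The synthetic library, the single-bit random-flip walk, and the nonnegative-least-squares abundance step are assumptions for illustration; the paper's population-based flipping strategy and its theoretical guarantees are not reproduced.

```python
# Toy sketch: sparse unmixing as a two-objective problem over binary endmember
# selections (reconstruction error vs. sparsity), explored by single-bit flips
# with a Pareto archive. Library A and pixel y are synthetic; this is not the
# authors' population-based algorithm.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, m = 50, 20                          # spectral bands, library size (assumed)
A = rng.random((bands, m))                 # spectral library
x_true = np.zeros(m)
x_true[[2, 7, 11]] = [0.5, 0.3, 0.2]       # three active endmembers
y = A @ x_true + 0.01 * rng.standard_normal(bands)

def objectives(s):
    """Return (reconstruction error, number of selected endmembers)."""
    idx = np.flatnonzero(s)
    if idx.size == 0:
        return float(np.linalg.norm(y)), 0
    x, _ = nnls(A[:, idx], y)              # abundances for the selected endmembers
    return float(np.linalg.norm(y - A[:, idx] @ x)), int(idx.size)

def dominates(a, b):
    return a[0] <= b[0] and a[1] <= b[1] and a != b

archive = {}                               # selection tuple -> objective pair
s = rng.integers(0, 2, m)
for _ in range(5000):
    cand = s.copy()
    cand[rng.integers(m)] ^= 1             # flip one endmember in or out
    f = objectives(cand)
    if not any(dominates(g, f) for g in archive.values()):
        archive = {k: g for k, g in archive.items() if not dominates(f, g)}
        archive[tuple(cand)] = f
        s = cand                           # walk along non-dominated moves

for err, k in sorted(archive.values(), key=lambda v: v[1]):
    print(f"{k} endmembers -> reconstruction error {err:.4f}")
```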

  13. Edge Detection Method Based on Neural Networks for COMS MI Images

    NASA Astrophysics Data System (ADS)

    Lee, Jin-Ho; Park, Eun-Bin; Woo, Sun-Hee

    2016-12-01

    Communication, Ocean And Meteorological Satellite (COMS) Meteorological Imager (MI) images are processed for radiometric and geometric correction from raw image data. When intermediate image data are matched and compared with reference landmark images in the geometrical correction process, various techniques for edge detection can be applied. It is essential to have a precise and correct edged image in this process, since its matching with the reference is directly related to the accuracy of the ground station output images. An edge detection method based on neural networks is applied for the ground processing of MI images for obtaining sharp edges in the correct positions. The simulation results are analyzed and characterized by comparing them with the results of conventional methods, such as Sobel and Canny filters.

  14. Cluster-based query expansion using external collections in medical information retrieval.

    PubMed

    Oh, Heung-Seon; Jung, Yuchul

    2015-12-01

    Utilizing external collections to improve retrieval performance is challenging research because various test collections are created for different purposes. Improving medical information retrieval has also gained much attention as various types of medical documents have become available to researchers ever since they started storing them in machine processable formats. In this paper, we propose an effective method of utilizing external collections based on the pseudo relevance feedback approach. Our method incorporates the structure of external collections in estimating individual components in the final feedback model. Extensive experiments on three medical collections (TREC CDS, CLEF eHealth, and OHSUMED) were performed, and the results were compared with a representative expansion approach utilizing the external collections to show the superiority of our method. Copyright © 2015 Elsevier Inc. All rights reserved.

  15. A survey of the sociodemographic and educational characteristics of oral health technicians in public primary health care teams in Minas Gerais, Brazil

    PubMed Central

    2013-01-01

    Background To describe some sociodemographic and educational characteristics of oral health technicians (OHTs) in public primary health care teams in the state of Minas Gerais, Brazil. Methods A cross-sectional descriptive study was performed based on a telephone survey of a representative sample comprising 231 individuals. A pre-tested instrument was used for the data collection, including questions on gender, age in years, years of work as an OHT, years since graduation as an OHT, formal schooling, individual monthly income, and participation in continuing educational programmes. Descriptive statistics were computed, and clusters were formed using the agglomerative hierarchical technique with furthest-neighbour linkage, based on age, years of work as an OHT, time since graduation as an OHT, formal schooling, individual monthly income, and participation in continuing educational programmes. Results Most interviewees (97.1%) were female. A monthly income of USD 300.00 to 600.00 was reported by 77.5% of the sample. Having educational qualifications in excess of their role was reported by approximately 20% of the participants. The median time since graduation was six years, and half of the sample had worked for four years as an OHT. Most interviewees (67.6%) reported having participated in professional continuing educational programmes. Two different clusters were identified based on the sociodemographic and educational characteristics of the sample. Conclusions The Brazilian OHTs in public primary health care teams in the state of Minas Gerais are mostly female, with little time since graduation and limited working experience, and have formal schooling sufficient for professional practice. PMID:24365451

  16. Observing system simulation experiments with multiple methods

    NASA Astrophysics Data System (ADS)

    Ishibashi, Toshiyuki

    2014-11-01

    An Observing System Simulation Experiment (OSSE) is a method to evaluate the impacts of hypothetical observing systems on analysis and forecast accuracy in numerical weather prediction (NWP) systems. Since an OSSE requires simulations of hypothetical observations, the uncertainty of OSSE results is generally larger than that of observing system experiments (OSEs). To reduce such uncertainty, OSSEs for existing observing systems are often carried out as calibration of the OSSE system. The purpose of this study is to achieve reliable OSSE results based on results of OSSEs with multiple methods. There are three types of OSSE methods. The first one is the sensitivity observing system experiment (SOSE) based OSSE (SOSE-OSSE). The second one is the ensemble of data assimilation cycles (ENDA) based OSSE (ENDA-OSSE). The third one is the nature-run (NR) based OSSE (NR-OSSE). These three OSSE methods have very different properties. The NR-OSSE evaluates hypothetical observations in a virtual (hypothetical) world, the NR. The ENDA-OSSE is a very simple method but has a sampling error problem due to the small ensemble size. The SOSE-OSSE requires a very highly accurate analysis field as a pseudo truth of the real atmosphere. We construct these three types of OSSE methods in the Japan Meteorological Agency (JMA) global 4D-Var experimental system. At the conference, we will present initial results of these OSSE systems and their comparisons.

  17. Patch-based generation of a pseudo CT from conventional MRI sequences for MRI-only radiotherapy of the brain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andreasen, Daniel, E-mail: dana@dtu.dk; Van Leemput, Koen; Hansen, Rasmus H.

    Purpose: In radiotherapy (RT) based on magnetic resonance imaging (MRI) as the only modality, the information on electron density must be derived from the MRI scan by creating a so-called pseudo computed tomography (pCT). This is a nontrivial task, since the voxel-intensities in an MRI scan are not uniquely related to electron density. To solve the task, voxel-based or atlas-based models have typically been used. The voxel-based models require a specialized dual ultrashort echo time MRI sequence for bone visualization and the atlas-based models require deformable registrations of conventional MRI scans. In this study, we investigate the potential of a patch-based method for creating a pCT based on conventional T1-weighted MRI scans without using deformable registrations. We compare this method against two state-of-the-art methods within the voxel-based and atlas-based categories. Methods: The data consisted of CT and MRI scans of five cranial RT patients. To compare the performance of the different methods, a nested cross validation was done to find optimal model parameters for all the methods. Voxel-wise and geometric evaluations of the pCTs were done. Furthermore, a radiologic evaluation based on water equivalent path lengths was carried out, comparing the upper hemisphere of the head in the pCT and the real CT. Finally, the dosimetric accuracy was tested and compared for a photon treatment plan. Results: The pCTs produced with the patch-based method had the best voxel-wise, geometric, and radiologic agreement with the real CT, closely followed by the atlas-based method. In terms of the dosimetric accuracy, the patch-based method had average deviations of less than 0.5% in measures related to target coverage. Conclusions: We showed that a patch-based method could generate an accurate pCT based on conventional T1-weighted MRI sequences and without deformable registrations. In our evaluations, the method performed better than existing voxel-based and atlas-based methods and showed a promising potential for RT of the brain based only on MRI.

  18. Advantages and applicability of commonly used homogenisation methods for climate data

    NASA Astrophysics Data System (ADS)

    Ribeiro, Sara; Caineta, Júlio; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina

    2014-05-01

    Homogenisation of climate data is a very relevant subject since these data are required as an input in a wide range of studies, such as atmospheric modelling, weather forecasting, climate change monitoring, or hydrological and environmental projects. Often, climate data series include non-natural irregularities which have to be detected and removed prior to their use, otherwise they would generate biased and erroneous results. Relocation of weather stations or changes in the measuring instruments are amongst the most relevant causes for these inhomogeneities. Depending on the climate variable, its temporal resolution and spatial continuity, homogenisation methods can be more or less effective. For example, due to its natural variability, precipitation is identified as a very challenging variable to be homogenised. During the last two decades, numerous methods have been proposed to homogenise climate data. In order to compare, evaluate and develop those methods, the European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), was launched in 2008. Existing homogenisation methods were improved based on the benchmark exercise issued by this project. A recent approach based on Direct Sequential Simulation (DSS), not yet evaluated by the benchmark exercise, is also presented as an innovative methodology for homogenising climate data series. DSS already proved to be a successful geostatistical method in environmental and hydrological studies, and it provides promising results for the homogenisation of climate data. Since DSS is a geostatistical stochastic approach, it accounts for the joint spatial and temporal dependence between observations, as well as the relative importance of stations both in terms of distance and correlation. This work presents a chronological review of the most commonly used homogenisation methods for climate data and available software packages. A short description and classification is provided for each method. Their advantages and applicability are discussed based on a literature review and on the results of the HOME project. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").

  19. Robust Programming Problems Based on the Mean-Variance Model Including Uncertainty Factors

    NASA Astrophysics Data System (ADS)

    Hasuike, Takashi; Ishii, Hiroaki

    2009-01-01

    This paper considers robust programming problems based on the mean-variance model including uncertainty sets and fuzzy factors. Since these problems are not well-defined due to the fuzzy factors, it is hard to solve them directly. Therefore, by introducing chance constraints, fuzzy goals and possibility measures, the proposed models are transformed into deterministic equivalent problems. Furthermore, in order to solve these equivalent problems efficiently, a solution method is constructed by introducing the mean-absolute deviation and performing equivalent transformations.

  20. 5-Hydroxymethylcytosine Profiling in Human DNA.

    PubMed

    Thomson, John P; Nestor, Colm E; Meehan, Richard R

    2017-01-01

    Since its "re-discovery" in 2009, there has been significant interest in defining the genome-wide distribution of DNA marked by 5-hydroxymethylation at cytosine bases (5hmC). In recent years, technological advances have resulted in a multitude of unique strategies to map 5hmC across the human genome. Here we discuss the wide range of approaches available to map this modification and describe in detail the affinity based methods which result in the enrichment of 5hmC marked DNA for downstream analysis.

  1. A scheme for parameterizing cirrus cloud ice water content in general circulation models

    NASA Technical Reports Server (NTRS)

    Heymsfield, Andrew J.; Donner, Leo J.

    1990-01-01

    Clouds strongly influence the earth's energy budget. They control the amount of solar radiative energy absorbed by the climate system, partitioning the energy between the atmosphere and the earth's surface. They also control the loss of energy to space by their effect on thermal emission. Cirrus and altostratus are the most frequent cloud types, having an annual average global coverage of 35 and 40 percent, respectively. Cirrus is composed almost entirely of ice crystals, and the same is frequently true of the upper portions of altostratus, since they are often formed by the thickening of cirrostratus and by the spreading of the middle or upper portions of thunderstorms. Thus, since ice clouds cover such a large portion of the earth's surface, they almost certainly have an important effect on climate. With this recognition, researchers developing climate models are seeking largely unavailable methods for specifying the conditions for ice cloud formation and for quantifying the spatial distribution of ice water content, IWC, a necessary step in deriving their radiative characteristics since radiative properties are apparently related to IWC. A method is developed for specifying IWC in climate models, based on theory and measurements in cirrus during FIRE and other experiments.

  2. Green synthesis of Ag nanoparticles using plant metabolites

    NASA Astrophysics Data System (ADS)

    Filippi, Antonio; Mattiello, Alessandro; Musetti, Rita; Petrussa, Elisa; Braidot, Enrico; Marchiol, Luca

    2017-08-01

    Nano-biotechnology is one of the most promising areas in modern nanoscience and technology. In this emerging area of research, nanoparticles (NPs) play an important role owing to their large-scale production and wide range of uses. Gold and silver nanoparticles are among the most extensively studied nanomaterials, since they show high stability and low chemical reactivity in comparison with other metals. They are commonly synthesized using toxic chemical reducing agents, which reduce metal ions into uncharged NPs, and/or using procedures that require high energy input. The most commonly used methods for the synthesis of NPs require toxic chemicals like N,N-dimethyl formamide (DMF) or trisodium citrate, but recently a green technique, based on natural reducing agents, has been suggested to replace the nature-unfriendly chemical methods. Many scientific works have demonstrated the efficacy of plant extracts in reducing metal salts into the respective NPs, but this process lacks clear control of NP shapes and dimensions, since many different metabolites present in the extracts could participate in the process. This paper aims to clarify the reducing action of single pure natural compounds usually present in plant tissues and to obtain a stable and reproducible protocol for NP synthesis.

  3. Semantic data association for planar features in outdoor 6D-SLAM using lidar

    NASA Astrophysics Data System (ADS)

    Ulas, C.; Temeltas, H.

    2013-05-01

    Simultaneous Localization and Mapping (SLAM) is a fundamental problem for autonomous systems in GPS (Global Positioning System) denied environments. Traditional probabilistic SLAM methods use point features as landmarks and hold all the feature positions in their state vector in addition to the robot pose. The bottleneck of point-feature based SLAM methods is the data association problem, which is mostly based on a statistical measure. The data association performance is very critical for a robust SLAM method, since all the filtering strategies are applied after a known correspondence. For point features, two different but very close landmarks in the same scene might be confused when making the correspondence decision if only their positions and error covariance matrices are taken into account. Instead of using point features, planar features can be considered as an alternative landmark model in the SLAM problem to provide a more consistent data association. Planes contain rich information for the solution of the data association problem and can be distinguished easily with respect to point features. In addition, planar maps are very compact, since an environment has only a very limited number of planar structures. The planar features do not have to be large structures like building walls or roofs; small plane segments, such as billboards, traffic posts, and some parts of bridges in urban areas, can also be used as landmarks. In this paper, a probabilistic plane-feature extraction method from 3D LiDAR data and a data association method based on the extracted semantic information of the planar features are introduced. The experimental results show that the semantic data association provides very satisfactory results in outdoor 6D-SLAM.

  4. Bell-Curve Based Evolutionary Strategies for Structural Optimization

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    2000-01-01

    Evolutionary methods are exceedingly popular with practitioners of many fields; more so than perhaps any optimization tool in existence. Historically, Genetic Algorithms (GAs) led the way in practitioner popularity (Reeves 1997). However, in the last ten years Evolutionary Strategies (ESs) and Evolutionary Programs (EPs) have gained a significant foothold (Glover 1998). One partial explanation for this shift is the interest in using GAs to solve continuous optimization problems. The typical GA relies upon a cumbersome binary representation of the design variables. An ES or EP, however, works directly with the real-valued design variables. For detailed references on evolutionary methods in general, and ES or EP in particular, see Back (1996) and Dasgupta and Michalesicz (1997). We call our evolutionary algorithm BCB (bell curve based) since it is based upon two normal distributions.
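
    The record does not spell out the BCB operator itself, so the sketch below is only a generic illustration of an evolutionary step built from two normal distributions over real-valued design variables: one along the line joining two parents and one perpendicular spread scaled by their separation. All names, parameters, and the test function are hypothetical.

```python
# Generic illustration only: an ES-style step that draws real-valued offspring
# from two normal distributions around a pair of parents. The exact BCB operator
# of the paper is not specified in this record, so everything here is assumed.
import numpy as np

rng = np.random.default_rng(1)

def bell_curve_offspring(p1, p2, spread=0.3):
    """Sample an offspring near the segment joining two parents."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    mid, diff = 0.5 * (p1 + p2), p2 - p1
    along = rng.normal(0.0, 0.5) * diff                   # first normal: along the parent axis
    perp = rng.normal(0.0, spread * np.linalg.norm(diff), size=p1.shape)
    return mid + along + perp                             # second normal: isotropic spread

def evolve(f, lower, upper, pop=30, gens=100):
    """Minimise f over a box using (mu + lambda)-style truncation selection."""
    lo, hi = np.asarray(lower, float), np.asarray(upper, float)
    X = rng.uniform(lo, hi, size=(pop, len(lo)))
    for _ in range(gens):
        i, j = rng.integers(0, pop, size=(2, pop))
        kids = np.clip([bell_curve_offspring(X[a], X[b]) for a, b in zip(i, j)], lo, hi)
        pool = np.vstack([X, kids])
        X = pool[np.argsort([f(x) for x in pool])[:pop]]
    return X[0]

# usage: minimise a simple quadratic surrogate of a structural objective
best = evolve(lambda x: float(np.sum((x - 1.5) ** 2)), [-5, -5], [5, 5])
print(best)
```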

  5. Path Searching Based Crease Detection for Large Scale Scanned Document Images

    NASA Astrophysics Data System (ADS)

    Zhang, Jifu; Li, Yi; Li, Shutao; Sun, Bin; Sun, Jun

    2017-12-01

    Since large documents are usually folded for preservation, creases occur in the scanned images. In this paper, a crease detection method is proposed to locate the crease pixels for further processing. According to the imaging process of contactless scanners, the shading on the two sides of a crease usually differs considerably. Based on this observation, a convex-hull-based algorithm is adopted to extract the shading information of the scanned image. Then, the possible crease path can be obtained by applying a vertical filter and morphological operations to the shading image. Finally, the accurate crease is detected via Dijkstra path searching. Experimental results on a dataset of real scanned newspapers demonstrate that the proposed method can obtain accurate locations of the creases in large document images.
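
    A rough sketch of the pipeline described above is given below: a smooth shading estimate is removed, vertically elongated residual structure is emphasised, and one crease is traced by a shortest-path search. The use of grey-scale closing in place of the convex-hull shading model, the structuring-element sizes, and the cost function are assumptions made for illustration.

```python
# Rough sketch of a crease-detection pipeline: shading estimation, vertical
# emphasis, then a Dijkstra shortest path from the top row to the bottom row.
# Filter sizes and the closing-based shading model are assumptions; input is a
# 2-D grayscale array of a scanned page.
import heapq
import numpy as np
from scipy import ndimage

def detect_crease(gray):
    """Return one crease path as a list of (row, col) pixels, top to bottom."""
    gray = gray.astype(float)
    # 1. Approximate the smooth page shading (grey-scale closing with a wide
    #    horizontal window stands in for the convex-hull model here).
    shading = ndimage.grey_closing(gray, size=(1, 51))
    residual = shading - gray                      # dark, shaded crease stands out
    # 2. Emphasise vertically elongated structures and clean up with morphology.
    vertical = ndimage.uniform_filter(residual, size=(25, 3))
    vertical = ndimage.grey_opening(vertical, size=(9, 1))
    # 3. Dijkstra shortest path through a cost map that is cheap where the
    #    crease evidence is strong; each step moves one row down.
    cost = vertical.max() - vertical + 1e-3
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[0, :] = cost[0, :]
    pq = [(cost[0, c], 0, c) for c in range(w)]
    heapq.heapify(pq)
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r, c]:
            continue
        if r == h - 1:
            break
        for dr, dc in [(1, -1), (1, 0), (1, 1)]:
            rr, cc = r + dr, c + dc
            if 0 <= cc < w and d + cost[rr, cc] < dist[rr, cc]:
                dist[rr, cc] = d + cost[rr, cc]
                prev[(rr, cc)] = (r, c)
                heapq.heappush(pq, (dist[rr, cc], rr, cc))
    # 4. Backtrack from the cheapest bottom-row pixel.
    node = (h - 1, int(np.argmin(dist[h - 1])))
    path = [node]
    while node in prev:
        node = prev[node]
        path.append(node)
    return path[::-1]
```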

  6. Multi-sensor image registration based on algebraic projective invariants.

    PubMed

    Li, Bin; Wang, Wei; Ye, Hao

    2013-04-22

    A new automatic feature-based registration algorithm is presented for multi-sensor images with projective deformation. Contours are firstly extracted from both reference and sensed images as basic features in the proposed method. Since it is difficult to design a projective-invariant descriptor from the contour information directly, a new feature named Five Sequential Corners (FSC) is constructed based on the corners detected from the extracted contours. By introducing algebraic projective invariants, we design a descriptor for each FSC that is ensured to be robust against projective deformation. Further, no gray scale related information is required in calculating the descriptor, thus it is also robust against the gray scale discrepancy between the multi-sensor image pairs. Experimental results utilizing real image pairs are presented to show the merits of the proposed registration method.

  7. Non-stationary signal analysis based on general parameterized time-frequency transform and its application in the feature extraction of a rotary machine

    NASA Astrophysics Data System (ADS)

    Zhou, Peng; Peng, Zhike; Chen, Shiqian; Yang, Yang; Zhang, Wenming

    2018-06-01

    With the development of large rotary machines for faster and more integrated performance, condition monitoring and fault diagnosis for them are becoming more challenging. Since the time-frequency (TF) pattern of the vibration signal from a rotary machine often contains condition information and fault features, methods based on TF analysis have been widely used to solve these two problems in the industrial community. This article introduces an effective non-stationary signal analysis method based on the general parameterized time-frequency transform (GPTFT). The GPTFT is achieved by inserting a rotation operator and a shift operator into the short-time Fourier transform. This method can produce a highly concentrated TF pattern with a general kernel. A multi-component instantaneous frequency (IF) extraction method is proposed based on it. The IF of every component is estimated by defining a spectrum concentration index (SCI). Moreover, this IF estimation process is iterated until all the components are extracted. Tests on three simulation examples and a real vibration signal demonstrate the effectiveness and superiority of our method.

  8. Identification of an EMS-induced causal mutation in a gene required for boron-mediated root development by low-coverage genome re-sequencing in Arabidopsis

    PubMed Central

    Tabata, Ryo; Kamiya, Takehiro; Shigenobu, Shuji; Yamaguchi, Katsushi; Yamada, Masashi; Hasebe, Mitsuyasu; Fujiwara, Toru; Sawa, Shinichiro

    2013-01-01

    Next-generation sequencing (NGS) technologies enable the rapid production of an enormous quantity of sequence data. These powerful new technologies allow the identification of mutations by whole-genome sequencing. However, most reported NGS-based mapping methods, which are based on bulked segregant analysis, are costly and laborious. To address these limitations, we designed a versatile NGS-based mapping method that consists of a combination of low- to medium-coverage multiplex SOLiD (Sequencing by Oligonucleotide Ligation and Detection) and classical genetic rough mapping. Using only low to medium coverage reduces the SOLiD sequencing costs and, since just 10 to 20 mutant F2 plants are required for rough mapping, the operation is simple enough to handle in a laboratory with limited space and funding. As a proof of principle, we successfully applied this method to identify CTR1, which is involved in boron-mediated root development, from among a population of high-boron-requiring Arabidopsis thaliana mutants. Our work demonstrates that this NGS-based mapping method is a moderately priced and versatile method that can readily be applied to other model organisms. PMID:23104114

  9. Comparison of two DSC-based methods to predict drug-polymer solubility.

    PubMed

    Rask, Malte Bille; Knopp, Matthias Manne; Olesen, Niels Erik; Holm, René; Rades, Thomas

    2018-04-05

    The aim of the present study was to compare two DSC-based methods to predict drug-polymer solubility (the melting point depression method and the recrystallization method) and to propose a guideline for selecting the most suitable method based on the physicochemical properties of both the drug and the polymer. Using the two methods, the solubilities of celecoxib, indomethacin, carbamazepine, and ritonavir in polyvinylpyrrolidone, hydroxypropyl methylcellulose, and Soluplus® were determined at elevated temperatures and extrapolated to room temperature using the Flory-Huggins model. For the melting point depression method, it was observed that a well-defined drug melting point was required in order to predict drug-polymer solubility, since the method is based on the depression of the melting point as a function of polymer content. In contrast to previous findings, it was possible to measure melting point depression up to 20 °C below the glass transition temperature (Tg) of the polymer for some systems. Nevertheless, in general it was possible to obtain solubility measurements at lower temperatures using polymers with a low Tg. Finally, for the recrystallization method it was found that the experimental composition dependence of the Tg must be differentiable for compositions ranging from 50 to 90% drug (w/w) so that one Tg corresponds to only one composition. Based on these findings, a guideline for selecting the most suitable thermal method to predict drug-polymer solubility based on the physicochemical properties of the drug and polymer is suggested in the form of a decision tree. Copyright © 2018 Elsevier B.V. All rights reserved.
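
    For context, the Flory-Huggins extrapolation mentioned above is usually based on the melting-point-depression relation sketched below; the symbols are defined in the comments, and the exact form and parameter estimation used by the authors may differ.

```latex
% Standard Flory-Huggins melting-point-depression relation commonly used for
% this type of extrapolation (symbols assumed here; the authors' exact
% formulation and parameter estimation may differ):
%   T_m^mix, T_m^pure : drug melting temperature with and without polymer
%   \Delta H_fus      : heat of fusion of the crystalline drug
%   \phi_d, \phi_p    : volume fractions of drug and polymer
%   \lambda           : ratio of polymer to drug molar volume
%   \chi              : Flory-Huggins interaction parameter
\frac{1}{T_m^{\mathrm{mix}}} - \frac{1}{T_m^{\mathrm{pure}}}
  = -\frac{R}{\Delta H_{\mathrm{fus}}}
    \left[ \ln\phi_d + \Bigl(1 - \tfrac{1}{\lambda}\Bigr)\phi_p + \chi\,\phi_p^{2} \right]
```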

  10. A linear parameter-varying multiobjective control law design based on youla parametrization for a flexible blended wing body aircraft

    NASA Astrophysics Data System (ADS)

    Demourant, F.; Ferreres, G.

    2013-12-01

    This article presents a methodology for linear parameter-varying (LPV) multiobjective flight control law design for a blended wing body (BWB) aircraft, together with results. The method directly designs a control law parametrized by some measured flight parameters through a multimodel convex design that optimizes a set of specifications over the full flight domain and different mass cases. The methodology is based on the Youla parameterization, which is very useful since closed-loop specifications are affine with respect to the Youla parameter. The LPV multiobjective design method is detailed and applied to the flexible BWB aircraft example.
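
    The remark about affine closed-loop specifications refers to the standard property of the Youla parameterization sketched below; the notation is assumed here and is not taken from the paper.

```latex
% Standard form of the Youla parameterization (notation assumed): with Q the
% stable Youla parameter and T_1, T_2, T_3 fixed stable transfer matrices set by
% the plant and a nominal stabilizing controller, every closed-loop map from
% exogenous inputs w to performance outputs z is affine in Q:
T_{zw}(Q) \;=\; T_1 + T_2\, Q\, T_3 ,
% so specifications that are convex in T_{zw} remain convex in Q, which is what
% makes the multimodel convex design over the flight domain tractable.
```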

  11. Reliability optimization design of the gear modification coefficient based on the meshing stiffness

    NASA Astrophysics Data System (ADS)

    Wang, Qianqian; Wang, Hui

    2018-04-01

    Since the time-varying meshing stiffness of a gear system is the key factor affecting gear vibration, it is important to design the meshing stiffness so as to reduce vibration. Based on the effect of the gear modification coefficient on the meshing stiffness, and considering random parameters, a reliability optimization design of the gear modification is investigated. The dimension reduction and point estimation method is used to estimate the moments of the limit state function, and the reliability is obtained by the fourth-moment method. The comparison of the dynamic amplitude results before and after optimization indicates that the research is useful for the reduction of vibration and noise and for the improvement of reliability.

  12. Dead pixel replacement in LWIR microgrid polarimeters.

    PubMed

    Ratliff, Bradley M; Tyo, J Scott; Boger, James K; Black, Wiley T; Bowers, David L; Fetrow, Matthew P

    2007-06-11

    LWIR imaging arrays are often affected by nonresponsive pixels, or "dead pixels." These dead pixels can severely degrade the quality of imagery and often have to be replaced before subsequent image processing and display of the imagery data. For LWIR arrays that are integrated with arrays of micropolarizers, the problem of dead pixels is amplified. Conventional dead pixel replacement (DPR) strategies cannot be employed since neighboring pixels are of different polarizations. In this paper we present two DPR schemes. The first is a modified nearest-neighbor replacement method. The second is a method based on redundancy in the polarization measurements. We find that the redundancy-based DPR scheme provides an order-of-magnitude better performance for typical LWIR polarimetric data.
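
    The two ideas can be sketched as follows for a conventional 2x2 microgrid. The layout (0°, 45°, 135°, 90° within each superpixel) and the ideal identity I0 + I90 = I45 + I135 are assumptions made for illustration; the paper's actual schemes and sensor layout may differ.

```python
# Illustrative sketch of the two replacement ideas for a 2x2 microgrid
# polarimeter. The layout [[0, 45], [135, 90]] degrees within each superpixel
# and the ideal identity I0 + I90 = I45 + I135 are assumptions; the paper's
# actual schemes and sensor layout may differ.
import numpy as np

def same_orientation_replacement(img, r, c):
    """Modified nearest-neighbor DPR: average the nearest pixels that share the
    dead pixel's polarizer orientation, i.e. pixels two rows/columns away."""
    vals = []
    for dr, dc in [(-2, 0), (2, 0), (0, -2), (0, 2)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
            vals.append(img[rr, cc])
    return float(np.mean(vals))

def redundancy_replacement(img, r, c):
    """Redundancy-based DPR: within the 2x2 superpixel, the missing intensity is
    inferred from I0 + I90 = I45 + I135 using the three live pixels."""
    r0, c0 = (r // 2) * 2, (c // 2) * 2          # top-left corner of the superpixel
    b = img[r0:r0 + 2, c0:c0 + 2].astype(float)
    pos = (r - r0, c - c0)
    partner = {(0, 0): b[1, 1], (1, 1): b[0, 0],  # 0 deg  <-> 90 deg
               (0, 1): b[1, 0], (1, 0): b[0, 1]}  # 45 deg <-> 135 deg
    other_pair = b[0, 1] + b[1, 0] if pos in [(0, 0), (1, 1)] else b[0, 0] + b[1, 1]
    return other_pair - partner[pos]

# usage on a synthetic flat-field frame with one dead pixel at (4, 5)
frame = np.full((8, 8), 100.0)
frame[4, 5] = 0.0
print(same_orientation_replacement(frame, 4, 5), redundancy_replacement(frame, 4, 5))
```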

  13. Functional Magnetic Resonance Imaging Methods

    PubMed Central

    Chen, Jingyuan E.; Glover, Gary H.

    2015-01-01

    Since its inception in 1992, Functional Magnetic Resonance Imaging (fMRI) has become an indispensable tool for studying cognition in both the healthy and dysfunctional brain. FMRI monitors changes in the oxygenation of brain tissue resulting from altered metabolism consequent to a task-based evoked neural response or from spontaneous fluctuations in neural activity in the absence of conscious mentation (the “resting state”). Task-based studies have revealed neural correlates of a large number of important cognitive processes, while fMRI studies performed in the resting state have demonstrated brain-wide networks that result from brain regions with synchronized, apparently spontaneous activity. In this article, we review the methods used to acquire and analyze fMRI signals. PMID:26248581

  14. An optical liquid level sensor based on core-offset fusion splicing method using polarization-maintaining fiber

    NASA Astrophysics Data System (ADS)

    Lou, Weimin; Chen, Debao; Shen, Changyu; Lu, Yanfang; Liu, Huanan; Wei, Jian

    2016-01-01

    A simple liquid level sensor using a small piece of hydrofluoric acid (HF) etched polarization maintaining fiber (PMF), in an SMF-PMF-SMF fiber structure based on the Mach-Zehnder interference (MZI) mechanism, is proposed. The core-offset fusion splicing method causes the cladding modes to interfere with the core mode. Moreover, a changing liquid level influences the optical path difference of the MZI, since the effective refractive indices of the air and the liquid are different. Both the wavelength shift and the power intensity attenuation corresponding to the liquid level can be obtained, with sensitivities of 0.4956 nm/mm and 0.2204 dB/mm, respectively.

  15. The NASA/industry Design Analysis Methods for Vibrations (DAMVIBS) program: A government overview

    NASA Technical Reports Server (NTRS)

    Kvaternik, Raymond G.

    1993-01-01

    NASA-Langley, under the Design Analysis Methods for Vibrations (DAMVIBS) Program, set out in 1984 to establish the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. Considerable work has been done by the industry participants in the program since that time. Because the DAMVIBS Program is being phased out, a government/industry assessment of the program has been made to identify those accomplishments and contributions which may be ascribed to the program. The purpose of this paper is to provide an overview of the program and its accomplishments and contributions from the perspective of the government sponsoring organization.

  16. Detection of Mycoplasma hyopneumoniae by polymerase chain reaction in swine presenting respiratory problems

    PubMed Central

    Yamaguti, M.; Muller, E.E.; Piffer, A.I.; Kich, J.D.; Klein, C.S.; Kuchiishi, S.S.

    2008-01-01

    Since Mycoplasma hyopneumoniae isolation in appropriate media is a difficult task and impractical for daily routine diagnostics, Nested-PCR (N-PCR) techniques are currently used to improve the direct diagnostic sensitivity of Swine Enzootic Pneumonia. In a first experiment, this paper describes an N-PCR technique optimization based on three variables: different sampling sites, sample transport media, and DNA extraction methods, using eight pigs. Based on the optimization results, a second experiment was conducted to test validity using 40 animals. In conclusion, the results of the N-PCR optimization and validation allow us to recommend this test as a routine monitoring diagnostic method for Mycoplasma hyopneumoniae infection in swine herds. PMID:24031248

  17. Link Prediction in Evolving Networks Based on Popularity of Nodes.

    PubMed

    Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian

    2017-08-02

    Link prediction aims to uncover the underlying relationships behind networks, which can be utilized to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in networks. Most classical static-structure based methods ignore the temporal aspects of networks; limited by such time-varying features, these approaches perform poorly in evolving networks. In this paper, we propose the hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes have a much higher probability of attracting future links. A novel approach named the popularity based structural perturbation method (PBSPM) and its fast algorithm are then proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.

  18. Application of a Mobile Platform-based System for the Management of Fundus Disease in Outpatient Settings.

    PubMed

    Dend, Xun; Li, Hong-Yan; Yin, Hong; Liang, Jian-Hong; Chen, Yi; Li, Xiao-Xin; Zhao, Ming-Wei

    2016-08-01

    Objective To evaluate the application of a mobile platform-based system in the management of fundus disease in outpatient settings. Methods In the outpatient departments of fundus disease, premature babies requiring eye examination under general anesthesia and adults requiring intraocular surgery were enrolled as the subjects. According to the existing clinical practices, we developed a system that met the requirements of clinical practice and optimized clinical management. Based on the FileMaker database, tablet computers were used as the mobile platform, and the system could also be run on iPad and PC terminals. Results Since 2013, the system has recorded 7500 cases of special examinations. Since July 2015, 4100 cases of intravitreal drug injection have also been recorded in the system. The multiple-point and real-time reservation pattern increased efficiency and optimized clinical management. All the clinical data were digitized. Conclusion The mobile platform-based system can increase the efficiency of examinations and other clinical processes and standardize data collection; thus, it is feasible for clinical practice in outpatient departments of ophthalmology.

  19. Erythropoietin abuse and erythropoietin gene doping: detection strategies in the genomic era.

    PubMed

    Diamanti-Kandarakis, Evanthia; Konstantinopoulos, Panagiotis A; Papailiou, Joanna; Kandarakis, Stylianos A; Andreopoulos, Anastasios; Sykiotis, Gerasimos P

    2005-01-01

    The administration of recombinant human erythropoietin (rhEPO) increases the maximum oxygen consumption capacity, and is therefore abused as a doping method in endurance sports. The detection of erythropoietin (EPO) abuse is based on direct pharmacological and indirect haematological approaches, both of which have several limitations. In addition, current detection methods cannot cope with the emerging doping strategies of EPO mimicry, analogues and gene doping, and thus novel detection strategies are urgently needed. Direct detection methods for EPO misuse can be either pharmacological approaches that identify exogenous substances based on their physicochemical properties, or molecular methods that recognise EPO transgenes or gene transfer vectors. Since direct detection with molecular methods requires invasive procedures, it is not appropriate for routine screening of large numbers of athletes. In contrast, novel indirect methods based on haematological and/or molecular profiling could be better suited as screening tools, and athletes who are suspect of doping would then be submitted to direct pharmacological and molecular tests. This article reviews the current state of the EPO doping field, discusses available detection methods and their shortcomings, outlines emerging pharmaceutical and genetic technologies in EPO misuse, and proposes potential directions for the development of novel detection strategies.

  20. To evaluate the safety and efficiency of low level laser therapy (LLLT) in treating decubitus ulcers: a review

    NASA Astrophysics Data System (ADS)

    Ahmed, Ambereen

    2015-03-01

    Introduction: Pressure sores (decubitus ulcers) are a serious problem in health care management, especially for middle-aged to older people who are bed-ridden. Although preventative measures are used, the condition remains common, and development of novel, improved treatment methods is desirable. This article reviews the application of laser-based methods, previously shown to be effective in accelerating wound healing in animal models, to the treatment of decubitus ulcers in humans. Methods: About 23 scientific articles on the effect of low level laser therapy (LLLT) on wound healing in animals and humans from 2000-2014 were reviewed. Additionally, results of several randomized controlled trials (RCTs) were reviewed and compared with other available treatment methods. Results: Whilst carefully controlled, laboratory-based animal studies indicated that LLLT can reduce healing time for several types of injuries, similar studies in humans failed to demonstrate consistent beneficial effects in the clinical setting. An acceleration of decubitus ulcer healing has occasionally been found, although limited to certain wavelengths and sometimes only in combination with other types of therapies. Indeed, some of the clinical articles indicated that certain laser wavelengths can have detrimental effects on time of healing. Conclusions: To date, there remains no convincing evidence that LLLT has consistent medical benefit in treating decubitus ulcers. Caution should be applied when considering LLLT, since only certain wavelengths have shown beneficial effects. It is concluded that more RCTs are needed, since at present there is no clinical justification for LLLT, alone or in combination with other methods, in treating decubitus ulcers.

  1. Hybrid matrix method for stable numerical analysis of the propagation of Dirac electrons in gapless bilayer graphene superlattices

    NASA Astrophysics Data System (ADS)

    Briones-Torres, J. A.; Pernas-Salomón, R.; Pérez-Álvarez, R.; Rodríguez-Vargas, I.

    2016-05-01

    Gapless bilayer graphene (GBG), like monolayer graphene, is a material system with unique properties, such as anti-Klein tunneling and intrinsic Fano resonances. These properties rely on the gapless parabolic dispersion relation and the chiral nature of bilayer graphene electrons. In addition, propagating and evanescent electron states coexist inherently in this material, giving rise to these exotic properties. In this sense, bilayer graphene is unique, since in most material systems in which Fano resonance phenomena are manifested an external source that provides extended states is required. However, from a numerical standpoint, the presence of evanescent-divergent states in the linear superposition of eigenfunctions representing the Dirac spinors leads to numerical degradation (the so-called Ωd problem) in practical applications of the standard Coefficient Transfer Matrix (K) method used to study charge transport properties in bilayer graphene based multi-barrier systems. We present here a straightforward procedure based on the hybrid compliance-stiffness matrix method (H) that can overcome this numerical degradation. Our results show that, in contrast to the standard matrix method, the proposed H method is suitable for studying the transmission and transport properties of electrons in GBG superlattices, since it remains numerically stable regardless of the size of the superlattice and the range of values taken by the input parameters: the energy and angle of the incident electrons, the barrier height, and the thickness and number of barriers. We show that the matrix determinant can be used as a test of the numerical accuracy in real calculations.

  2. Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.

    PubMed

    Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S

    2010-03-01

    This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, the images which are nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and Hermite expansion and Laguerre-Fourier expansion retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.

  3. Self-optimizing Pitch Control for Large Scale Wind Turbine Based on ADRC

    NASA Astrophysics Data System (ADS)

    Xia, Anjun; Hu, Guoqing; Li, Zheng; Huang, Dongxiao; Wang, Fengxiang

    2018-01-01

    Since a wind turbine is a complex nonlinear and strongly coupled system, the traditional PI control method can hardly achieve good control performance. A self-optimizing pitch control method based on active-disturbance-rejection control theory is proposed in this paper. A linear model of the wind turbine is derived by linearizing the aerodynamic torque equation, and the dynamic response of the wind turbine is transformed into a first-order linear system. An expert system is designed to optimize the amplification coefficient according to the pitch rate and the speed deviation. The purpose of the proposed control method is to regulate the amplification coefficient automatically and keep the variations of pitch rate and rotor speed within proper ranges. Simulation results show that the proposed pitch control method is able to modify the amplification coefficient effectively when it is not suitable, and to keep the variations of pitch rate and rotor speed within proper ranges.

  4. Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.

    PubMed

    Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M

    2017-05-15

    We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the Shack-Hartmann (SH) sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Since it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can deliver the same accuracy for SH arrays with a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for application in low-light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for higher-order aberrations, and that SABRE-M can match the performance of SABRE on an SH grid with half the sampling.

  5. Forecasting Construction Cost Index based on visibility graph: A network approach

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong

    2018-03-01

    Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties of CCI occasionally lead to unreliable estimates. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast based on link prediction. According to the experimental results, the proposed method shows satisfactory performance, with acceptable error measures. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The results suggest that the proposed method efficiently provides considerably accurate CCI predictions, which can contribute to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
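
    The first step of the approach, converting a time series into a natural visibility graph, can be sketched directly; the link-prediction forecasting stage is omitted here. The CCI-like values are made up for illustration, not real ENR data.

```python
import networkx as nx

def natural_visibility_graph(series):
    """Build the natural visibility graph of a time series.

    Nodes are time indices; nodes a < b are connected if every sample c
    between them lies strictly below the line joining (a, y_a) and (b, y_b).
    """
    g = nx.Graph()
    n = len(series)
    g.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            ya, yb = series[a], series[b]
            visible = all(
                series[c] < yb + (ya - yb) * (b - c) / (b - a)
                for c in range(a + 1, b)
            )
            if visible:
                g.add_edge(a, b)
    return g

# Illustrative CCI-like values (not real ENR data)
cci = [8000, 8050, 8010, 8120, 8090, 8150, 8200, 8170, 8260]
g = natural_visibility_graph(cci)
print(sorted(g.edges()))
```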

  6. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
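
    One ingredient of the scheme, high-order Gauss-Jacobi quadrature for an integrand with a weak algebraic singularity, can be sketched with scipy as below. The integrand and quadrature order are illustrative; this is not the paper's full multi-domain spectral method or its three-term-recurrence machinery.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.integrate import quad

# Integrate f(x) * (1 - x)^(-1/2) on [-1, 1]; the weight absorbs the singularity.
f = np.cos
alpha, beta = -0.5, 0.0                     # Jacobi weight (1-x)^alpha * (1+x)^beta

nodes, weights = roots_jacobi(8, alpha, beta)
gj = np.dot(weights, f(nodes))              # 8-point Gauss-Jacobi result

# Reference: adaptive quadrature with the same algebraic weight (x-a)^0 * (b-x)^(-1/2)
ref, _ = quad(f, -1.0, 1.0, weight='alg', wvar=(0.0, -0.5))

print(f"Gauss-Jacobi (8 pts): {gj:.12f}")
print(f"Reference:            {ref:.12f}")
```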

  7. Meshfree truncated hierarchical refinement for isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Atri, H. R.; Shojaee, S.

    2018-05-01

    In this paper the truncated hierarchical B-spline (THB-spline) is coupled with the reproducing kernel particle method (RKPM) to blend the advantages of isogeometric analysis and meshfree methods. Since, under certain conditions, isogeometric B-spline and NURBS basis functions are exactly represented by reproducing kernel meshfree shape functions, the recursive process of producing isogeometric bases can be omitted. More importantly, a seamless link between meshfree methods and isogeometric analysis can be easily defined, which provides a genuinely meshfree approach to refining the model locally in isogeometric analysis. This procedure can be accomplished using truncated hierarchical B-splines to construct new bases and adaptively refine them. It is also shown that the THB-RKPM method can provide efficient approximation schemes for numerical simulations and shows promising performance in adaptive refinement of partial differential equations via isogeometric analysis. The proposed approach for adaptive local refinement is presented in detail and its effectiveness is investigated through well-known benchmark examples.

  8. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
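
    For scale, a brute-force computation of all-terminal reliability on a toy network is sketched below; it enumerates every link-failure state, which is exactly the combinatorial explosion the patented minimal-cut-set approach is designed to avoid. The graph and failure probability are illustrative.

```python
from itertools import product
import networkx as nx

def all_terminal_reliability(g, p_fail):
    """Brute-force all-terminal reliability assuming independent link failures."""
    edges = list(g.edges())
    reliability = 0.0
    for state in product([True, False], repeat=len(edges)):   # True = link up
        prob = 1.0
        h = nx.Graph()
        h.add_nodes_from(g.nodes())
        for up, e in zip(state, edges):
            prob *= (1.0 - p_fail) if up else p_fail
            if up:
                h.add_edge(*e)
        if nx.is_connected(h):
            reliability += prob
    return reliability

# Small illustrative network (a 4-node ring with one chord)
g = nx.Graph([(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
print(f"all-terminal reliability: {all_terminal_reliability(g, p_fail=0.05):.6f}")
```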

  9. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    NASA Astrophysics Data System (ADS)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (Unmanned Aerial Vehicle) imagery. The whole workflow includes four steps: the initial seam-line network is first generated by a standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (Digital Surface Model) data; the vertices (junction nodes) of the initial network are relocated, since some of them lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm based on the edge diagram and the relocated vertices. The method was tested with two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas and outperforms state-of-the-art methods on these datasets.
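
    A minimal weighted A* sketch (f = g + w·h) over a 2D cost grid is shown below. The grid stands in for an edge-diagram-derived cost surface; the actual seam-line refinement works on DSM-based edge costs between relocated Voronoi vertices, so everything here is illustrative.

```python
import heapq

def weighted_a_star(cost, start, goal, w=2.0):
    """Weighted A* on a 2D cost grid: f = g + w * h with a Manhattan-distance heuristic."""
    rows, cols = len(cost), len(cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(w * h(start), 0.0, start)]
    best_g = {start: 0.0}
    parent = {}
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if g > best_g.get(node, float("inf")):
            continue                      # stale heap entry
        if node == goal:
            path = [node]
            while node in parent:
                node = parent[node]
                path.append(node)
            return path[::-1], g
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < rows and 0 <= c < cols:
                ng = g + cost[r][c]       # cost of stepping onto the neighbour cell
                if ng < best_g.get((r, c), float("inf")):
                    best_g[(r, c)] = ng
                    parent[(r, c)] = node
                    heapq.heappush(open_heap, (ng + w * h((r, c)), ng, (r, c)))
    return None, float("inf")

# Illustrative "edge" cost grid: low values where a seam-line should prefer to run
grid = [[1, 1, 9, 1],
        [1, 9, 9, 1],
        [1, 1, 1, 1],
        [9, 9, 1, 1]]
path, total = weighted_a_star(grid, (0, 0), (3, 3))
print(path, total)
```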

  10. A single 24 h recall overestimates exclusive breastfeeding practices among infants aged less than six months in rural Ethiopia.

    PubMed

    Fenta, Esete Habtemariam; Yirgu, Robel; Shikur, Bilal; Gebreyesus, Seifu Hagos

    2017-01-01

    Exclusive breastfeeding (EBF) to six months is one of the World Health Organization's (WHO's) infant and young child feeding (IYCF) core indicators. The single 24 h recall method is currently used to measure exclusive breastfeeding practice among children aged less than six months. This approach overestimates the prevalence of EBF, especially among small population groups, which justifies the need to look for alternative measurement techniques that provide a valid estimate regardless of population characteristics. The study involved 422 infants aged less than six months, living in Gurage zone, Southern Ethiopia, and was conducted from January to February 2016. Child feeding practices were measured for seven consecutive days using the 24 h recall method. Recall since birth was used to measure breastfeeding practices from birth to the day of data collection. Data on EBF obtained using a single 24 h recall were compared with the seven-day repeated 24 h recall method. McNemar's test was done to assess whether a significant difference existed in rates of EBF between the measurement methods. The mean age of infants was 3 months (SD 1.43). Exclusive breastfeeding prevalence was highest (76.7%; 95% CI 72.6, 80.8) when EBF was estimated using a single 24 h recall. The prevalence of EBF based on seven repeated 24 h recalls was 53.2% (95% CI 48.3, 58.0). The estimated prevalence of EBF since birth based on retrospective data (recall since birth) was 50.2% (95% CI 45.4, 55.1). Compared to the EBF estimates obtained from seven repeated 24 h recalls, the single 24 h recall overestimated EBF by 23 percentage points (95% CI 19.2, 27.8). As the number of days of 24 h recall increased, a significant decrease in the overestimation of EBF was observed. A significant overestimation was observed when a single 24 h recall was used to estimate the prevalence of EBF compared to seven days of 24 h recall. By increasing the observation days we can significantly decrease the degree of overestimation. Recall since birth gave estimates of EBF close to those of seven repeated 24 h recalls. This suggests that a one-week recall could be an alternative to the single 24 h recall indicator.
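
    The McNemar comparison of the paired EBF classifications can be sketched as below; the discordant-pair counts are illustrative placeholders, not the study's data.

```python
from scipy.stats import chi2

# Paired classification of the same infants by the two methods (illustrative counts):
# b = EBF by single 24 h recall but not by the 7-day repeated recall
# c = EBF by the 7-day repeated recall but not by the single 24 h recall
b, c = 99, 2

# McNemar's chi-square statistic with continuity correction
stat = (abs(b - c) - 1) ** 2 / (b + c)
p_value = chi2.sf(stat, df=1)
print(f"McNemar chi2 = {stat:.2f}, p = {p_value:.2g}")
```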

  11. Ratio-based vs. model-based methods to correct for urinary creatinine concentrations.

    PubMed

    Jain, Ram B

    2016-08-01

    The creatinine-corrected urinary analyte concentration is usually computed as the ratio of the observed analyte concentration to the observed urinary creatinine concentration (UCR). This ratio-based method is flawed since it implicitly assumes that hydration is the only factor that affects urinary creatinine concentrations. On the contrary, it has been shown in the literature that age, gender, race/ethnicity, and other factors also affect UCR. Consequently, an optimal method to correct for UCR should correct for hydration as well as other factors, like age, gender, and race/ethnicity, that affect UCR. Model-based creatinine correction, in which observed UCRs are used as an independent variable in regression models, has been proposed. This study was conducted to evaluate the performance of the ratio-based and model-based creatinine correction methods when the effects of gender, age, and race/ethnicity are evaluated one factor at a time for selected urinary analytes and metabolites. It was observed that the ratio-based method leads to statistically significant pairwise differences, for example between males and females or between non-Hispanic whites (NHW) and non-Hispanic blacks (NHB), more often than the model-based method, although, depending upon the analyte of interest, the reverse is also possible. The estimated ratios of geometric means (GM), for example male to female or NHW to NHB, were also compared for the two methods. When estimated UCRs were higher for the group in the numerator of this ratio (for example, males), these ratios were higher for the model-based method, for example the male to female ratio of GMs. When estimated UCRs were lower for the group in the numerator of this ratio (for example, NHW), these ratios were higher for the ratio-based method, for example the NHW to NHB ratio of GMs. The model-based method is the method of choice if all factors that affect UCR are to be accounted for.
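
    A toy contrast between the two corrections can be sketched with statsmodels: the ratio-based approach divides by creatinine (subtracts on the log scale), while the model-based approach enters log creatinine as a regression covariate. The simulated data and effect sizes are illustrative, not survey data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500

# Simulated data: creatinine depends on sex (and hydration); the analyte depends on both.
sex = rng.integers(0, 2, n)                       # 0 = female, 1 = male (illustrative)
log_ucr = 0.2 * sex + rng.normal(0.0, 0.4, n)     # log urinary creatinine
log_analyte = 0.1 * sex + 0.8 * log_ucr + rng.normal(0.0, 0.3, n)

df = pd.DataFrame({"sex": sex, "log_ucr": log_ucr, "log_analyte": log_analyte})

# Ratio-based correction: analyte divided by creatinine, then compared across sexes
df["log_ratio"] = df["log_analyte"] - df["log_ucr"]
ratio_fit = smf.ols("log_ratio ~ sex", data=df).fit()

# Model-based correction: creatinine enters the regression as a covariate
model_fit = smf.ols("log_analyte ~ sex + log_ucr", data=df).fit()

print("ratio-based sex effect: ", ratio_fit.params["sex"], ratio_fit.pvalues["sex"])
print("model-based sex effect:", model_fit.params["sex"], model_fit.pvalues["sex"])
```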

  12. SIFT Meets CNN: A Decade Survey of Instance Retrieval.

    PubMed

    Zheng, Liang; Yang, Yi; Tian, Qi

    2018-05-01

    In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.

  13. Remote Sensing Extraction of Stopes and Tailings Ponds in an Ultra-Low Iron Mining Area

    NASA Astrophysics Data System (ADS)

    Ma, B.; Chen, Y.; Li, X.; Wu, L.

    2018-04-01

    With the development of the economy, global demand for steel has accelerated since 2000, and mining activities for iron ore have intensified accordingly. An ultra-low-grade iron ore has been extracted by open-pit mining and processed on a massive scale since 2001 in Kuancheng County, Hebei Province. There are large-scale stopes and tailings ponds in this area, and extracting their spatial distribution is important for environmental protection and disaster prevention. A remote sensing method for extracting stopes and tailings ponds based on spectral characteristics is studied using Landsat 8 OLI imagery and ground spectral data. The overall extraction accuracy is 95.06%. In addition, tailings ponds are distinguished from stopes based on thermal characteristics using a temperature image. The results can provide decision support for environmental protection, disaster prevention, and ecological restoration in the ultra-low-grade iron ore mining area.

  14. New readout integrated circuit using continuous time fixed pattern noise correction

    NASA Astrophysics Data System (ADS)

    Dupont, Bertrand; Chammings, G.; Rapellin, G.; Mandier, C.; Tchagaspanian, M.; Dupont, Benoit; Peizerat, A.; Yon, J. J.

    2008-04-01

    LETI has been involved in IRFPA development since 1978; the design department (LETI/DCIS) has focused its work on new ROIC architectures for many years. The trend is to integrate advanced functions into the CMOS design to achieve cost-efficient sensor production. The thermal imaging market increasingly demands systems with instant-on capability and low power consumption. The purpose of this paper is to present the latest developments in continuous-time fixed pattern noise correction. Several architectures are proposed, some based on hardwired digital processing and some purely analog; both use scene-based algorithms. Moreover, a new method is proposed for simultaneous correction of pixel offsets and sensitivities. In this scope, a new readout integrated circuit architecture has been implemented in 0.18 μm CMOS technology. The specification and application of the ROIC are discussed in detail.

  15. Combination of CD157 and FLAER to Detect Peripheral Blood Eosinophils by Multiparameter Flow Cytometry.

    PubMed

    Carulli, Giovanni; Marini, Alessandra; Sammuri, Paola; Domenichini, Cristiana; Ottaviano, Virginia; Pacini, Simone; Petrini, Mario

    2015-01-01

    The identification of eosinophils by flow cytometry is difficult because most of the surface antigens expressed by eosinophils are shared with neutrophils. Some methods have been proposed, generally based on differential light scatter properties, enhanced autofluorescence, lack of CD16, or selective positivity for CD52. Such methods, however, show several limitations. In the present study we report a novel method based on the analysis of glycosylphosphatidylinositol (GPI)-linked molecules. The combination of CD157 and FLAER was used, since FLAER recognizes all GPI-linked molecules, while CD157 is absent from the membrane of eosinophils and expressed by neutrophils. Peripheral blood samples from normal subjects and patients with variable percentages of eosinophils (n = 31), and without any evidence of circulating immature myeloid cells, were stained with the combination of FLAER-Alexa Fluor and CD157-PE. A FACSCanto II cytometer was used. Granulocytes were gated after CD33 staining, and eosinophils were identified as CD157(-)/FLAER(+) events; neutrophils were identified as CD157(+)/FLAER(+) events. The percentages of eosinophils detected by this method showed a highly significant correlation with both automated and manual counting (r = 0.981 and 0.989, respectively). Sorting assays were carried out on an S3 cell sorter: cytospins obtained from CD157(-)/FLAER(+) events consisted of 100% eosinophils, while samples from CD157(+)/FLAER(+) events contained only neutrophils. In conclusion, this method shows high sensitivity and specificity for distinguishing eosinophils from neutrophils by flow cytometry. However, since CD157 is gradually up-regulated throughout bone marrow myeloid maturation, our method cannot be applied to cases characterized by circulating immature myeloid cells.

  16. Strategies to Integrate Web Videoconferencing Software into an Online Counselor Education Course

    ERIC Educational Resources Information Center

    McBride, Dawn Lorraine; Muhlbach, Paul M.

    2008-01-01

    This article outlines how a web based video conferencing system (Marratech) was used in a graduate online counselor education course as part of a blended online graduate degree in Counseling. Since the course is open to students from around North America, a variety of e-delivery methods of instruction is significant to the program's success. A…

  17. A method of determining surface runoff by

    Treesearch

    Donald E. Whelan; Lemuel E. Miller; John B. Cavallero

    1952-01-01

    To determine the effects of watershed management on flood runoff, one must make a reliable estimate of how much the surface runoff can be reduced by a land-use program. Since surface runoff is the difference between precipitation and the amount of water that soaks into the soil, such an estimate must be based on the infiltration capacity of the soil.

  18. The Impact of Project Work and the Writing Process Method on Writing Production

    ERIC Educational Resources Information Center

    Díaz Ramírez, Marcela

    2014-01-01

    This article presents the outcomes of an investigation whose main goal was to implement the methodology of project work and a process approach in order to improve writing production in an English class of Colombian university students since their diagnostic tests showed that their written production had the lowest score. Based on data collected,…

  19. The Clinical Presentation of Mitochondrial Diseases in Children with Progressive Intellectual and Neurological Deterioration: A National, Prospective, Population-Based Study

    ERIC Educational Resources Information Center

    Verity, Christopher M.; Winstone, Anne Marie; Stellitano, Lesley; Krishnakumar, Deepa; Will, Robert; McFarland, Robert

    2010-01-01

    Aim: Our aim was to study the clinical presentation, mode of diagnosis, and epidemiology of mitochondrial disorders in children from the UK who have progressive intellectual and neurological deterioration (PIND). Method: Since April 1997, we have identified patients aged 16 years or younger with suspected PIND through the monthly notification card…

  20. 29 CFR Appendix C to Subpart M of... - Personal Fall Arrest Systems

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... appendix D of this subpart, the test methods listed here in appendix C can also be used to assist employers... about the system based on its performance during testing so that the employer can know if the system... deceleration device of the self-retracting type since this can result in additional free fall for which the...

  1. 29 CFR Appendix C to Subpart M of... - Personal Fall Arrest Systems

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... appendix D of this subpart, the test methods listed here in appendix C can also be used to assist employers... about the system based on its performance during testing so that the employer can know if the system... deceleration device of the self-retracting type since this can result in additional free fall for which the...

  2. 29 CFR Appendix C to Subpart M of... - Personal Fall Arrest Systems

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... appendix D of this subpart, the test methods listed here in appendix C can also be used to assist employers... about the system based on its performance during testing so that the employer can know if the system... deceleration device of the self-retracting type since this can result in additional free fall for which the...

  3. A Qualitative Phenomenological Analysis Exploring Digital Immigrants' Use of Church-Based Computer-Supported Collaborative Learning

    ERIC Educational Resources Information Center

    Brinkman-Kealey, Renee

    2012-01-01

    Individuals and societies have traditionally sought answers to important questions in life through religion. In the 21st century, physical churches with clergy are no longer the sole source of spiritual answers or knowledge. Since the late 1960s, church attendance has been declining. Church leaders have begun to implement new methods such as using…

  4. An Empirical Study of the Distributional Changes in Higher Education among East, Middle and West China

    ERIC Educational Resources Information Center

    Jiang, Chunjiao; Li, Song

    2008-01-01

    Based on the quantitative research and comparative study method, this paper attempts to make a systematic study and analysis of regional differences which have existed since 1949 in higher education among East, Middle and West China. The study is intended to explore the causes, regional differences, social changes, and their co-related…

  5. Debugging classification and anti-debugging strategies

    NASA Astrophysics Data System (ADS)

    Gao, Shang; Lin, Qian; Xia, Mingyuan; Yu, Miao; Qi, Zhengwei; Guan, Haibing

    2011-12-01

    Debugging, albeit useful for software development, is also a double-edged sword, since it can be exploited by malicious attackers. This paper analyzes the prevailing debuggers and classifies them into four categories based on the debugging mechanism. Conversely, we list 13 typical anti-debugging strategies adopted in Windows. These methods intercept specific execution points which expose the diagnostic behavior of debuggers.

  6. QUANTITATION OF PERCHLORATE ION BY ELECTROSPRAY IONIZATION MASS SPECTROMETRY (ESI-MS) USING STABLE ASSOCIATION COMPLEXES WITH ORGANIC CATIONS AND BASES TO ENHANCE SELECTIVITY

    EPA Science Inventory

    Quantitation of trace levels of perchlorate ion in water has become a key issue since this species was discovered in water supplies around the United States. Although ion chromatographic methods presently offer the lowest limit of detection, ≈40 nM (4 ng mL-1), chromatographic ret...

  7. Infection with Pathogens Transmitted Commonly Through Food and the Effect of Increasing Use of Culture-Independent Diagnostic Tests on Surveillance--Foodborne Diseases Active Surveillance Network, 10 U.S. Sites, 2012-2015.

    PubMed

    Huang, Jennifer Y; Henao, Olga L; Griffin, Patricia M; Vugia, Duc J; Cronquist, Alicia B; Hurd, Sharon; Tobin-D'Angelo, Melissa; Ryan, Patricia; Smith, Kirk; Lathrop, Sarah; Zansky, Shelley; Cieslak, Paul R; Dunn, John; Holt, Kristin G; Wolpert, Beverly J; Patrick, Mary E

    2016-04-15

    To evaluate progress toward prevention of enteric and foodborne illnesses in the United States, the Foodborne Diseases Active Surveillance Network (FoodNet) monitors the incidence of laboratory-confirmed infections caused by nine pathogens transmitted commonly through food in 10 U.S. sites. This report summarizes preliminary 2015 data and describes trends since 2012. In 2015, FoodNet reported 20,107 confirmed cases (defined as culture-confirmed bacterial infections and laboratory-confirmed parasitic infections), 4,531 hospitalizations, and 77 deaths. FoodNet also received reports of 3,112 positive culture-independent diagnostic tests (CIDTs) without culture-confirmation, a number that has markedly increased since 2012. Diagnostic testing practices for enteric pathogens are rapidly moving away from culture-based methods. The continued shift from culture-based methods to CIDTs that do not produce the isolates needed to distinguish between strains and subtypes affects the interpretation of public health surveillance data and ability to monitor progress toward prevention efforts. Expanded case definitions and strategies for obtaining bacterial isolates are crucial during this transition period.

  8. A Model and Satellite-Based Analysis of the Tropospheric Ozone Distribution in Clear Versus Convectively Cloudy Conditions

    NASA Technical Reports Server (NTRS)

    Strode, Sarah A.; Douglass, Anne R.; Ziemke, Jerald R.; Manyin, Michael; Nielsen, J. Eric; Oman, Luke D.

    2017-01-01

    Satellite observations of in-cloud ozone concentrations from the Ozone Monitoring Instrument and Microwave Limb Sounder instruments show substantial differences from background ozone concentrations. We develop a method for comparing a free-running chemistry-climate model (CCM) to in-cloud and background ozone observations using a simple criterion based on cloud fraction to separate cloudy and clear-sky days. We demonstrate that the CCM simulates key features of the in-cloud versus background ozone differences and of the geographic distribution of in-cloud ozone. Since the agreement is not dependent on matching the meteorological conditions of a specific day, this is a promising method for diagnosing how accurately CCMs represent the relationships between ozone and clouds, including the lower ozone concentrations shown by in-cloud satellite observations. Since clouds are associated with convection as well as changes in chemistry, we diagnose the tendency of tropical ozone at 400 hPa due to chemistry, convection and turbulence, and large-scale dynamics. While convection acts to reduce ozone concentrations at 400 hPa throughout much of the tropics, it has the opposite effect over highly polluted regions of South and East Asia.

  9. Discovering the Unknown: Improving Detection of Novel Species and Genera from Short Reads

    DOE PAGES

    Rosen, Gail L.; Polikar, Robi; Caseiro, Diamantino A.; ...

    2011-01-01

    High-throughput sequencing technologies enable metagenome profiling, the simultaneous sequencing of multiple microbial species present within an environmental sample. Since metagenomic data include sequence fragments (“reads”) from organisms that are absent from any database, new algorithms must be developed for the identification and annotation of novel sequence fragments. Homology-based techniques have been modified to detect novel species and genera, but composition-based methods have not been adapted. We develop a detection technique that can discriminate between “known” and “unknown” taxa, which can be used with composition-based methods as well as a hybrid method. Unlike previous studies, we rigorously evaluate all algorithms for their ability to detect novel taxa. First, we show that the integration of a detector with a composition-based method performs significantly better than homology-based methods for the detection of novel species and genera, with best performance at finer taxonomic resolutions. Most importantly, we evaluate all the algorithms by introducing an “unknown” class and show that the modified version of PhymmBL has similar or better overall classification performance than the other modified algorithms, especially for the species level and ultrashort reads. Finally, we evaluate the performance of several algorithms on a real acid mine drainage dataset.

  10. A variable structure fuzzy neural network model of squamous dysplasia and esophageal squamous cell carcinoma based on a global chaotic optimization algorithm.

    PubMed

    Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Malekzadeh, Reza

    2013-02-07

    Identification of squamous dysplasia and esophageal squamous cell carcinoma (ESCC) is of great importance in the prevention of cancer incidence. Computer-aided algorithms can be very useful for identifying people at higher risk of squamous dysplasia and ESCC, and such methods can limit clinical screenings to people at higher risk. Different regression methods have been used to predict ESCC and dysplasia. In this paper, a Fuzzy Neural Network (FNN) model is selected for ESCC and dysplasia prediction, with the risk factors as inputs to the classifier. Since the relation between risk factors in the tumor system has complex nonlinear behavior, the cost function of its model can have more local optima than for most ordinary data, which highlights the need for global optimization methods. The method proposed in this paper is a Chaotic Optimization Algorithm (COA) combined with the common Error Back-Propagation (EBP) local method. Since the model has many parameters, we use a strategy to reduce the dependency among parameters caused by the chaotic series generator; this dependency was not considered in previous COA methods. The algorithm is compared with the logistic regression model as the most recent successful method for ESCC and dysplasia prediction. The results show a more precise prediction with a smaller mean and variance of error.

  11. Discriminative detection of deposited radon daughters on CR-39 track detectors using TRIAC II code

    NASA Astrophysics Data System (ADS)

    Patiris, D. L.; Ioannides, K. G.

    2009-07-01

    A method for detecting deposited 218Po and 214Po by a spectrometric study of CR-39 solid state nuclear track detectors is described. The method is based on the application of software-imposed selection criteria concerning the geometrical and optical properties of the tracks, which correspond to tracks created by alpha particles of specific energy falling on the detector at given angles of incidence. The selection criteria were based on a preliminary study of the track parameters (major and minor axes and mean brightness) using the TRIAC II code. Since no linear relation was found between the energy and the geometric characteristics of the tracks (major and minor axes), we resorted to an additional parameter in order to classify the tracks according to the particles' energy. Since the brightness of a track is associated with its depth, the mean brightness was chosen as the parameter of choice. To reduce the energy of the particles emitted by deposited 218Po and 214Po into a quantifiable range, the detectors were covered with an aluminum absorber. In this way, the discrimination of radon daughters was finally accomplished by properly selecting among all registered tracks. This method could be applied as a low-cost tool for studying the behavior of radon daughters in air.

  12. A novel method for efficient archiving and retrieval of biomedical images using MPEG-7

    NASA Astrophysics Data System (ADS)

    Meyer, Joerg; Pahwa, Ash

    2004-10-01

    Digital archiving and efficient retrieval of radiological scans have become critical steps in contemporary medical diagnostics. Since more and more images and image sequences (single scans or video) from various modalities (CT/MRI/PET/digital X-ray) are now available in digital formats (e.g., DICOM-3), hospitals and radiology clinics need to implement efficient protocols capable of managing the enormous amounts of data generated daily in a typical clinical routine. We present a method that appears to be a viable way to eliminate the tedious step of manually annotating image and video material for database indexing. MPEG-7 is a new framework that standardizes the way images are characterized in terms of color, shape, and other abstract, content-related criteria. A set of standardized descriptors that are automatically generated from an image is used to compare an image to other images in a database, and to compute the distance between two images for a given application domain. Text-based database queries can be replaced with image-based queries using MPEG-7. Consequently, image queries can be conducted without any prior knowledge of the keys that were used as indices in the database. Since the decoding and matching steps are not part of the MPEG-7 standard, this method also enables searches that were not planned by the time the keys were generated.

  13. Time Recovery for a Complex Process Using Accelerated Dynamics.

    PubMed

    Paz, S Alexis; Leiva, Ezequiel P M

    2015-04-14

    The hyperdynamics method (HD) developed by Voter (J. Chem. Phys. 1996, 106, 4665) sets the theoretical basis for constructing an accelerated simulation scheme that retains the time scale information. Since HD is based on transition state theory, pseudoequilibrium conditions (PEC) must be satisfied before any system in a trapped state may be accelerated. As the system evolves, many trapped states may appear, and the PEC must be assumed in each one to accelerate the escape. However, since the system evolution is a priori unknown, the PEC cannot be permanently assumed to be true. Furthermore, the different parameters of the bias function used may need drastic recalibration during this evolution. To overcome these problems, we present a general scheme to switch between HD and conventional molecular dynamics (MD) automatically during the simulation. To decide when HD should start and finish, criteria based on the energetic properties of the system are introduced. On the other hand, a very simple bias function is proposed, leading to a straightforward on-the-fly setup of the required parameters. A way to measure the quality of the simulation is suggested. The efficiency of the present hybrid HD-MD method is tested on a two-dimensional model potential and on the coalescence of two nanoparticles. Despite the considerable complexity of the latter system (165 degrees of freedom), some relevant mechanistic properties were recovered with the present method.

  14. Quantum Mechanical Calculations of Cytosine, Thiocytosine and Their Radical Ions

    NASA Astrophysics Data System (ADS)

    Singh, Rashmi

    2010-08-01

    RNA and DNA are polymers that share some interesting similarities; for instance, it is well known that cytosine is one of the common nucleic acid bases. Sulfur is characterized as a very reactive element and has been used in chemical warfare agents. Since genetic information is encoded in the sequence of the nucleic acid bases, quantum mechanical calculations of the energies, geometries, charges and vibrational characteristics of cytosine, thiocytosine and their corresponding radicals were carried out using the DFT method with the B3LYP/6-311++G** basis set.

  15. An evolutionary algorithm that constructs recurrent neural networks.

    PubMed

    Angeline, P J; Saunders, G M; Pollack, J B

    1994-01-01

    Standard methods for simultaneously inducing the structure and weights of recurrent neural networks limit every task to an assumed class of architectures. Such a simplification is necessary since the interactions between network structure and function are not well understood. Evolutionary computations, which include genetic algorithms and evolutionary programming, are population-based search methods that have shown promise in many similarly complex tasks. This paper argues that genetic algorithms are inappropriate for network acquisition and describes an evolutionary program, called GNARL, that simultaneously acquires both the structure and weights for recurrent networks. GNARL's empirical acquisition method allows for the emergence of complex behaviors and topologies that are potentially excluded by the artificial architectural constraints imposed in standard network induction methods.

  16. Support vector machines-based fault diagnosis for turbo-pump rotor

    NASA Astrophysics Data System (ADS)

    Yuan, Sheng-Fa; Chu, Fu-Lei

    2006-05-01

    Most artificial intelligence methods used in fault diagnosis are based on the empirical risk minimisation principle and generalise poorly when fault samples are few. Support vector machines (SVM) are a general machine-learning tool based on the structural risk minimisation principle that exhibits good generalisation even when fault samples are few. Fault diagnosis based on SVM is discussed. Since the basic SVM is originally designed for two-class classification, while most fault diagnosis problems are multi-class cases, a new multi-class SVM classification algorithm named 'one to others' is presented to solve multi-class recognition problems. It is a binary tree classifier composed of several two-class classifiers organised by fault priority; it is simple, involves little repeated training, and speeds up training and recognition. The effectiveness of the method is verified by application to fault diagnosis for a turbo-pump rotor.
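
    A hedged sketch of a 'one to others' cascade with scikit-learn: binary SVMs are trained in fault-priority order, each separating one class from all remaining classes. The fault labels, priorities and 2-D features below are illustrative, not the turbo-pump data.

```python
import numpy as np
from sklearn.svm import SVC

class OneToOthersSVM:
    """Cascade of binary SVMs: class i (by priority) vs. all remaining classes."""

    def __init__(self, priority):
        self.priority = list(priority)          # fault classes, highest priority first
        self.stages = []

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        for cls in self.priority[:-1]:
            # keep only samples not yet separated by higher-priority stages
            mask = np.isin(y, self.priority[self.priority.index(cls):])
            clf = SVC(kernel='rbf', gamma='scale').fit(X[mask], (y[mask] == cls).astype(int))
            self.stages.append((cls, clf))
        return self

    def predict(self, X):
        X = np.asarray(X)
        out = np.full(len(X), self.priority[-1], dtype=object)
        undecided = np.ones(len(X), dtype=bool)
        for cls, clf in self.stages:
            hit = undecided & (clf.predict(X) == 1)
            out[hit] = cls
            undecided &= ~hit
        return out

# Illustrative 2-D "fault feature" data for three fault classes
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in ([0, 0], [2, 0], [0, 2])])
y = np.array(['unbalance'] * 40 + ['misalignment'] * 40 + ['rub'] * 40)
model = OneToOthersSVM(priority=['unbalance', 'misalignment', 'rub']).fit(X, y)
print(model.predict([[0.1, 0.1], [2.1, -0.1], [0.0, 2.2]]))
```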

  17. The attitude inversion method of geostationary satellites based on unscented particle filter

    NASA Astrophysics Data System (ADS)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain since they appear as unresolved images in ground-based observations for space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm, which combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF), is proposed to address the strongly nonlinear character of photometric attitude inversion. The update method improves particle selection by using the idea of the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the applicability limits of UKF-based attitude inversion and mitigates the particle degradation and depletion of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method can effectively solve the problem of particle degradation and depletion in PF-based attitude inversion, as well as the unsuitability of the UKF for strongly nonlinear attitude inversion. Its inversion accuracy is clearly superior to that of the UKF and PF; in addition, in cases with large attitude errors it can invert the attitude with few particles and high precision.

  18. Cis-regulatory element based targeted gene finding: genome-wide identification of abscisic acid- and abiotic stress-responsive genes in Arabidopsis thaliana.

    PubMed

    Zhang, Weixiong; Ruan, Jianhua; Ho, Tuan-Hua David; You, Youngsook; Yu, Taotao; Quatrano, Ralph S

    2005-07-15

    A fundamental problem of computational genomics is identifying the genes that respond to certain endogenous cues and environmental stimuli. This problem can be referred to as targeted gene finding. Since gene regulation is mainly determined by the binding of transcription factors and cis-regulatory DNA sequences, most existing gene annotation methods, which exploit the conservation of open reading frames, are not effective in finding target genes. A viable approach to targeted gene finding is to exploit the cis-regulatory elements that are known to be responsible for the transcription of target genes. Given such cis-elements, putative target genes whose promoters contain the elements can be identified. As a case study, we apply the above approach to predict the genes in model plant Arabidopsis thaliana which are inducible by a phytohormone, abscisic acid (ABA), and abiotic stress, such as drought, cold and salinity. We first construct and analyze two ABA specific cis-elements, ABA-responsive element (ABRE) and its coupling element (CE), in A.thaliana, based on their conservation in rice and other cereal plants. We then use the ABRE-CE module to identify putative ABA-responsive genes in A.thaliana. Based on RT-PCR verification and the results from literature, this method has an accuracy rate of 67.5% for the top 40 predictions. The cis-element based targeted gene finding approach is expected to be widely applicable since a large number of cis-elements in many species are available.
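
    The promoter-scanning step can be sketched with a simple regular-expression search for an ABRE-like core plus a nearby coupling-element-like motif. The two patterns, the spacing limit, and the toy sequences below are simplified placeholders, not the exact elements constructed in the paper.

```python
import re

# Simplified, illustrative patterns (placeholders, not the paper's exact ABRE/CE definitions)
ABRE_CORE = re.compile(r"ACGTG[GT]C")
CE_LIKE   = re.compile(r"[CT]GCGT[GT]")
MAX_SPACING = 100   # max distance (bp) allowed between the two elements

def is_candidate_target(promoter_seq):
    """Flag a promoter if an ABRE-like core and a CE-like element co-occur nearby."""
    seq = promoter_seq.upper()
    abre_hits = [m.start() for m in ABRE_CORE.finditer(seq)]
    ce_hits = [m.start() for m in CE_LIKE.finditer(seq)]
    return any(abs(a - c) <= MAX_SPACING for a in abre_hits for c in ce_hits)

promoters = {
    "geneA": "TTGACACGTGGCAAATTTCGCGTGCCATG",        # toy sequence containing both elements
    "geneB": "ATATATATATGGCCGGCCATATATATATAT",       # toy sequence with neither
}
for gene, seq in promoters.items():
    print(gene, "candidate ABA-responsive" if is_candidate_target(seq) else "no module found")
```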

  19. [Trends of doctoral dissertations in nursing science: focused on studies submitted since 2000].

    PubMed

    Shin, Hyunsook; Sung, Kyung-Mi; Jeong, Seok Hee; Kim, Dae-Ran

    2008-02-01

    The purpose of this study was to identify the characteristics of doctoral dissertations in nursing science submitted since 2000. Three-hundred and five dissertations of six schools of nursing published from 2000 to 2006 in Korea were analyzed with the categories of philosophy, method, body of knowledge, research design, and nursing domain. In philosophy, 82% of all dissertations were identified as scientific realism, 15% were relativism, and 3% were practicism. Two-hundred and fifty dissertations (82%) were divided into a quantitative methodology and 55 dissertations (18%) were qualitative methodology. Specifically, 45% were experimental, 23% methodological, 13% survey and 17% qualitative designed researches. Prescriptive knowledge was created in 47% of dissertations, explanatory knowledge in 29%, and descriptive knowledge in 24%. Over 50% of all research was studied with a community-based population. In the nursing domain, dissertations of the practice domain were highest (48.2%). Dissertations since 2000 were markedly different from the characteristics of the previous studies (1982-1999) in the increase of situation-related, prescriptive and community-based population studies. A picture of current nursing science identified in this study may provide a future guideline for the doctoral education for nursing.

  20. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    PubMed

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US-based tracking system for precise localisation of prostate biopsy samples. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D-to-3D and 3D-to-orthogonal-2D-slices registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D-to-slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max).

  1. Equivalent modulus method for finite element simulation of the sound absorption of anechoic coating backed with orthogonally rib-stiffened plate

    NASA Astrophysics Data System (ADS)

    Jin, Zhongkun; Yin, Yao; Liu, Bilong

    2016-03-01

    The finite element method is often used to investigate the sound absorption of an anechoic coating backed by an orthogonally rib-stiffened plate. Since the anechoic coating contains cavities, the number of grid nodes in a periodic unit cell is usually large. An equivalent modulus method is proposed to reduce the large number of nodes by calculating an equivalent homogeneous layer. Applications of this method to several models show that it can predict the sound absorption coefficient of such a structure well over a wide frequency range. Based on the simulation results, the sound absorption performance of the structure and the influence of different backings on the first absorption peak are also discussed.

  2. FFT multislice method--the silver anniversary.

    PubMed

    Ishizuka, Kazuo

    2004-02-01

    The first paper on the FFT multislice method was published in 1977, a quarter of a century ago. The formula was extended in 1982 to include a large tilt of the incident beam relative to the specimen surface. Since then, with advances in computing power, the FFT multislice method has been successfully applied to coherent CBED and HAADF-STEM simulations. However, because the multislice formula is built on some physical approximations as well as approximations in the numerical procedure, there appear to be controversial conclusions in the literature on the multislice method. In this report, the physical implications of the multislice method are reviewed based on the formula for tilted illumination. Then, some results on coherent CBED and HAADF-STEM simulations are presented.
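
    A bare-bones numpy sketch of the multislice recursion, transmit through a slice then propagate with an FFT-based Fresnel propagator, is given below. The sampling, wavelength, slice thickness, and toy phase object are illustrative, and the tilted-illumination form discussed in the report is not included; for a pure phase object the total intensity should be conserved.

```python
import numpy as np

# Toy sampling (illustrative; lengths in angstroms)
n, dx = 256, 0.2                     # grid size and pixel size
wavelength = 0.0251                  # roughly 200 kV electrons
dz = 2.0                             # slice thickness

x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
kx = np.fft.fftfreq(n, d=dx)
KX, KY = np.meshgrid(kx, kx)
k2 = KX ** 2 + KY ** 2

# Fresnel propagator for one slice and a toy weak-phase transmission function
propagator = np.exp(-1j * np.pi * wavelength * dz * k2)
phase = 0.05 * np.exp(-(X ** 2 + Y ** 2) / 2.0)        # toy projected potential
transmission = np.exp(1j * phase)

# Multislice recursion: psi_{n+1} = IFFT( P(k) * FFT( t(r) * psi_n ) )
psi = np.ones((n, n), dtype=complex)                   # plane-wave illumination
for _ in range(10):                                    # ten identical slices
    psi = np.fft.ifft2(propagator * np.fft.fft2(transmission * psi))

# For a pure phase object the total intensity should stay close to n*n
print("total intensity:", np.sum(np.abs(psi) ** 2), "expected:", n * n)
```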

  3. Current perspectives in fragment-based lead discovery (FBLD)

    PubMed Central

    Lamoree, Bas; Hubbard, Roderick E.

    2017-01-01

    It is over 20 years since the first fragment-based discovery projects were disclosed. The methods are now mature for most ‘conventional’ targets in drug discovery such as enzymes (kinases and proteases) but there has also been growing success on more challenging targets, such as disruption of protein–protein interactions. The main application is to identify tractable chemical startpoints that non-covalently modulate the activity of a biological molecule. In this essay, we overview current practice in the methods and discuss how they have had an impact in lead discovery – generating a large number of fragment-derived compounds that are in clinical trials and two medicines treating patients. In addition, we discuss some of the more recent applications of the methods in chemical biology – providing chemical tools to investigate biological molecules, mechanisms and systems. PMID:29118093

  4. Total-energy Assisted Tight-binding Method Based on Local Density Approximation of Density Functional Theory

    NASA Astrophysics Data System (ADS)

    Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki

    2018-06-01

    A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with reference to the band structure and the total energy from the local density approximation of density functional theory. The parameters are adjusted numerically so that the result reproduces the band structure and the total energy, and an algorithm for determining the parameters is established. The resulting set of parameters is applicable to a variety of crystalline compounds and to changes of the lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals in several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.

  5. Study of vesicle size distribution dependence on pH value based on nanopore resistive pulse method

    NASA Astrophysics Data System (ADS)

    Lin, Yuqing; Rudzevich, Yauheni; Wearne, Adam; Lumpkin, Daniel; Morales, Joselyn; Nemec, Kathleen; Tatulian, Suren; Lupan, Oleg; Chow, Lee

    2013-03-01

    Vesicles are low-micron to sub-micron spheres formed by a lipid bilayer shell and serve as potential vehicles for drug delivery. Vesicle size is considered one of the key variables affecting delivery efficiency, since it is correlated with factors such as circulation and residence time in blood, the rate of cell endocytosis, and efficiency in cell targeting. In this work, we demonstrate accessible and reliable detection and size-distribution measurement using a glass nanopore device based on the resistive pulse method. This method enables us to investigate the dependence of the size distribution on the pH difference across the vesicle membrane with very small sample volumes and at rapid speed, providing useful information for optimizing the efficiency of drug delivery in a pH-sensitive environment.

  6. Detecting communities in large networks

    NASA Astrophysics Data System (ADS)

    Capocci, A.; Servedio, V. D. P.; Caldarelli, G.; Colaiori, F.

    2005-07-01

    We develop an algorithm to detect community structure in complex networks. The algorithm is based on spectral methods and takes into account weights and link orientation. Since the method efficiently detects clustered nodes in large networks even when these are not sharply partitioned, it turns out to be especially suitable for the analysis of social and information networks. We test the algorithm on a large-scale data set from a psychological experiment on word association. In this case, it proves to be successful both in clustering words and in uncovering mental association patterns.

  7. Conjugate gradient heat bath for ill-conditioned actions.

    PubMed

    Ceriotti, Michele; Bussi, Giovanni; Parrinello, Michele

    2007-08-01

    We present a method for performing sampling from a Boltzmann distribution of an ill-conditioned quadratic action. This method is based on heat-bath thermalization along a set of conjugate directions, generated via a conjugate-gradient procedure. The resulting scheme outperforms local updates for matrices with very high condition number, since it avoids the slowing down of modes with lower eigenvalue, and has some advantages over the global heat-bath approach, compared to which it is more stable and allows for more freedom in devising case-specific optimizations.

  8. Advances in graphene-related technologies: synthesis, devices and outlook.

    PubMed

    Frazier, R M; Hough, W L; Chopra, N; Hathcock, K W

    2012-06-01

    Graphene has been the subject of many scientific investigations since exfoliation methods facilitated isolation of the two-dimensional material. During this time, new synthesis methods have been developed which have opened technological opportunities previously hindered by synthetic constraints. An update on the recent advances in graphene-based technologies, including synthesis and applications into electrical, mechanical and thermal uses will be covered. A special focus on the patent space and commercial landscape will be given in an effort to identify current trends and future commercialization of graphene-related technologies.

  9. A review of available analytical technologies for qualitative and quantitative determination of nitramines.

    PubMed

    Lindahl, Sofia; Gundersen, Cathrine Brecke; Lundanes, Elsa

    2014-08-01

    This review aims to summarize the analytical methods available in the open literature for the determination of some aliphatic and cyclic nitramines. The nitramines covered in this review are those that can be formed from the use of amines in post-combustion CO2 capture (PCC) plants and end up in the environment. Since the literature is quite scarce regarding the determination of nitramines in aqueous and soil samples, methods for their determination in other matrices have also been included. Since the nitramines are found in complex matrices and/or at very low concentrations, an extraction step is often necessary before their determination. Liquid-liquid extraction (LLE) using dichloromethane and solid phase extraction (SPE) with an activated-carbon-based material have been the two most common extraction methods. Gas chromatography (GC) or reversed-phase liquid chromatography (RPLC) has been used, often combined with mass spectrometry (MS), in the final determination step. At present there is no comprehensive method available that can be used for the determination of all nitramines included in this review. The lowest reported concentration limit of quantification (cLOQ) is in the ng L-1 range; however, most methods appear to have a cLOQ in the μg L-1 range, where a cLOQ has been given.

  10. Adaptive photoacoustic imaging quality optimization with EMD and reconstruction

    NASA Astrophysics Data System (ADS)

    Guo, Chengwen; Ding, Yao; Yuan, Jie; Xu, Guan; Wang, Xueding; Carson, Paul L.

    2016-10-01

    Biomedical photoacoustic (PA) signals are characterized by an extremely low signal-to-noise ratio, which yields significant artifacts in photoacoustic tomography (PAT) images. Since PA signals acquired by ultrasound transducers are non-linear and non-stationary, traditional data analysis methods such as Fourier and wavelet methods cannot give useful information for further research. In this paper, we introduce an adaptive method to improve the quality of PA imaging based on empirical mode decomposition (EMD) and reconstruction. Data acquired by ultrasound transducers are adaptively decomposed into several intrinsic mode functions (IMFs) after a sifting pre-process. Since noise is randomly distributed among the different IMFs, suppressing IMFs with more noise while enhancing IMFs with less noise can effectively enhance the quality of reconstructed PAT images. However, searching for optimal parameters with brute-force algorithms would cost too much time, which prevents this approach from practical use. To find parameters within a reasonable time, heuristic algorithms, which are designed to find good solutions more efficiently when traditional methods are too slow, are adopted in our method. Two heuristic algorithms, the Simulated Annealing Algorithm, a probabilistic method to approximate the global optimal solution, and the Artificial Bee Colony Algorithm, an optimization method inspired by the foraging behavior of bee swarms, are selected to search for the optimal parameters of the IMFs in this paper. The effectiveness of the proposed method is demonstrated both on simulated data and on PA signals from real biomedical tissue, which suggests its potential for future clinical PA imaging de-noising.
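
    A minimal sketch of EMD-based reconstruction is given below, assuming the third-party PyEMD package (installed as EMD-signal). The per-IMF weights here are a crude fixed choice (drop the noisiest IMF); the paper instead searches these weights with simulated annealing or an artificial bee colony. The toy signal and noise level are illustrative.

```python
import numpy as np
from PyEMD import EMD     # assumed third-party package, installed as "EMD-signal"

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2000)

# Toy "PA signal": a low-frequency pulse buried in broadband noise (not real data)
clean = np.exp(-((t - 0.5) / 0.05) ** 2) * np.sin(2 * np.pi * 60 * t)
noisy = clean + 0.4 * rng.standard_normal(t.size)

# Adaptive decomposition into intrinsic mode functions (IMFs)
imfs = EMD().emd(noisy)

# Crude fixed weighting: suppress the first (noisiest) IMF and keep the rest.
# The paper instead searches these per-IMF weights with simulated annealing or
# an artificial bee colony before reconstructing the PAT image.
weights = np.ones(len(imfs))
weights[0] = 0.0
denoised = np.sum(weights[:, None] * imfs, axis=0)

snr = lambda est: 10 * np.log10(np.sum(clean ** 2) / np.sum((est - clean) ** 2))
print(f"SNR before: {snr(noisy):.1f} dB, after: {snr(denoised):.1f} dB")
```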

  11. A novel method for detecting light source for digital images forensic

    NASA Astrophysics Data System (ADS)

    Roy, A. K.; Mitra, S. K.; Agrawal, R.

    2011-06-01

    Image manipulation has been practiced for centuries. Manipulated images are intended to alter facts: facts of ethics, morality, politics, sex, celebrity or chaos. Image forensic science is used to detect these manipulations in a digital image. There are several standard ways to analyze an image for manipulation, each with some limitations. Moreover, very few methods try to exploit the way the image was captured by the camera. We propose a new method based on light and shade, since light and shade are the fundamental inputs that carry much of the information in an image. The proposed method measures the direction of the light source and uses this light-based technique to identify intentional partial manipulation in the digital image. The method is tested on known manipulated images and correctly identifies the light sources. The light source of an image is measured in terms of an angle. The experimental results show the robustness of the methodology.

  12. A new approach for automatic matching of ground control points in urban areas from heterogeneous images

    NASA Astrophysics Data System (ADS)

    Cong, Chao; Liu, Dingsheng; Zhao, Lingjun

    2008-12-01

    This paper discusses a new method for the automatic matching of ground control points (GCPs) between satellite remote sensing images and digital raster graphics (DRGs) in urban areas. The key to this method is to automatically extract tie-point pairs from such heterogeneous images according to geographic characteristics. Since there are large differences between these heterogeneous images with respect to texture and corner features, a more detailed analysis is performed to find similarities and differences between high-resolution remote sensing images and DRGs. Furthermore, a new algorithm based on the fuzzy c-means (FCM) method is proposed to extract linear features in remote sensing images. Based on these linear features, crossings and corners are chosen as GCPs. A similar method is used to find the same features in DRGs. Finally, the Hausdorff distance is adopted to pick matching GCPs from the two GCP groups. Experiments show that the method can extract GCPs from such images with a reasonable RMS error.
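
    The final matching step described above relies on the Hausdorff distance between the two candidate GCP sets. A minimal sketch of the symmetric Hausdorff distance, using SciPy and hypothetical point coordinates, is given below; it is an illustration of the distance measure only, not of the full matching pipeline.

        import numpy as np
        from scipy.spatial.distance import directed_hausdorff

        # hypothetical GCP candidates (row, col) extracted from the two sources
        gcps_image = np.array([[10.0, 12.0], [55.0, 40.0], [80.0, 95.0]])
        gcps_drg   = np.array([[11.0, 13.0], [54.0, 41.0], [82.0, 93.0]])

        # symmetric Hausdorff distance: max of the two directed distances
        d_forward, _, _ = directed_hausdorff(gcps_image, gcps_drg)
        d_backward, _, _ = directed_hausdorff(gcps_drg, gcps_image)
        hausdorff = max(d_forward, d_backward)
        print(f"Hausdorff distance between GCP sets: {hausdorff:.2f} pixels")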

  13. spMC: an R-package for 3D lithological reconstructions based on spatial Markov chains

    NASA Astrophysics Data System (ADS)

    Sartore, Luca; Fabbri, Paolo; Gaetan, Carlo

    2016-09-01

    The paper presents the spatial Markov Chains (spMC) R-package and a case study of subsoil simulation/prediction at a plain site in Northeastern Italy. spMC is a fairly complete collection of advanced methods for data inspection; in addition, spMC implements Markov chain models to estimate experimental transition probabilities of categorical lithological data. Furthermore, simulation methods based on well-known prediction methods (such as indicator kriging and co-kriging) are implemented in the spMC package. Other more advanced methods are also available for simulation, e.g. path methods and Bayesian procedures that exploit maximum entropy. Since the spMC package was developed for intensive geostatistical computations, part of the code is implemented for parallel computation via OpenMP constructs. A final analysis of computational efficiency compares the simulation/prediction algorithms using different numbers of CPU cores, considering the example data set of the case study included in the package.

  14. Advertisement recognition using mode voting acoustic fingerprint

    NASA Astrophysics Data System (ADS)

    Fahmi, Reza; Abedi Firouzjaee, Hosein; Janalizadeh Choobbasti, Ali; Mortazavi Najafabadi, S. H. E.; Safavi, Saeid

    2017-12-01

    The emergence of media outlets and public relations tools such as TV, radio and the Internet since the 20th century has provided companies with a good platform for advertising their goods and services. Advertisement recognition is an important task that can help companies measure the efficiency of their advertising campaigns in the market and compare their performance with competitors in order to get better business insights. Advertisement recognition is usually performed manually with the help of human labor or through automated methods that are mainly based on heuristic features; such methods usually lack scalability and the ability to generalize to different situations. In this paper, we present an automated method for advertisement recognition based on audio processing that makes the process fairly simple and takes the human factor out of the equation. This method has ultimately been used at Miras Information Technology to monitor 56 TV channels and detect all ad video clips broadcast over some networks.
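
    The "mode voting" idea in the title — matching fingerprint hashes between a reference ad and a broadcast stream and taking the most common time-offset difference as the detection vote — can be illustrated with a short sketch. The hash values, timestamps and the voting rule below are assumptions for illustration, not the paper's actual fingerprinting scheme.

        from collections import Counter

        # hypothetical fingerprints: hash value -> list of frame times where it occurs
        ad_prints     = {0xA1: [0], 0xB2: [3], 0xC3: [7], 0xD4: [9]}
        stream_prints = {0xA1: [120], 0xB2: [123], 0xC3: [127], 0xFF: [40]}

        # every hash shared by ad and stream votes for an offset (stream time - ad time)
        votes = Counter()
        for h, ad_times in ad_prints.items():
            for s_time in stream_prints.get(h, []):
                for a_time in ad_times:
                    votes[s_time - a_time] += 1

        offset, count = votes.most_common(1)[0]
        print(f"most likely offset: {offset} frames ({count} matching hashes)")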

  15. Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency

    NASA Astrophysics Data System (ADS)

    Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu

    2018-03-01

    Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAF) in order to increase the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between development time, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model that is sensitive to the pattern placement on the native simulation grid and can be impacted by the related grid dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAFs on the full chip. This differs from our existing full-chip MB-SRAF approach, which utilizes an SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine learning assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine learning assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.

  16. Localization Algorithm Based on a Spring Model (LASM) for Large Scale Wireless Sensor Networks.

    PubMed

    Chen, Wanming; Mei, Tao; Meng, Max Q-H; Liang, Huawei; Liu, Yumei; Li, Yangming; Li, Shuai

    2008-03-15

    A navigation method for a lunar rover based on large scale wireless sensor networks is proposed. To obtain high navigation accuracy and a large exploration area, high node localization accuracy and a large network scale are required. However, the computational and communication complexity and time consumption increase greatly with the network scale. A localization algorithm based on a spring model (LASM) is proposed to reduce the computational complexity while maintaining the localization accuracy in large scale sensor networks. The algorithm simulates the dynamics of a physical spring system to estimate the positions of nodes. The sensor nodes are set as particles with masses and connected to neighbor nodes by virtual springs. The virtual springs force the particles to move from their randomly set positions toward the original positions, i.e. the node positions. Therefore, a blind node position can be determined with the LASM algorithm by calculating the related forces with the neighbor nodes. The computational and communication complexity are O(1) for each node, since the number of neighbor nodes does not increase proportionally with the network scale. Three patches are proposed to avoid local optimization, kick out bad nodes and deal with node variation. Simulation results show that the computational and communication complexity remain almost constant despite the increase of the network scale. The time consumption has also been shown to remain almost constant, since the calculation steps are almost unrelated to the network scale.
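
    The spring-model update for a single blind node can be sketched as follows: each neighbor exerts a virtual spring force proportional to the difference between the measured range and the currently estimated distance, and the node is moved a small step along the net force. This is a minimal sketch of the general spring-relaxation idea; the step size, ranges and anchor layout are hypothetical and the three "patches" mentioned in the abstract are omitted.

        import numpy as np

        def spring_step(pos, neighbor_pos, measured_dist, step=0.1):
            """One spring-model update of a blind node position estimate.

            pos           : current (x, y) estimate of the blind node
            neighbor_pos  : (k, 2) array of neighbor positions
            measured_dist : (k,) array of measured ranges to the neighbors
            """
            pos = np.asarray(pos, dtype=float)
            force = np.zeros(2)
            for p, d in zip(np.asarray(neighbor_pos, dtype=float), measured_dist):
                vec = pos - p
                est = np.linalg.norm(vec)
                if est > 1e-9:
                    # stretched spring (est > d) pulls inward, compressed pushes outward
                    force += (d - est) * vec / est
            return pos + step * force

        node = np.array([0.0, 0.0])                    # random initial guess
        anchors = np.array([[0.0, 10.0], [10.0, 0.0], [10.0, 10.0]])
        ranges = np.array([7.07, 7.07, 7.07])          # node truly near (5, 5)
        for _ in range(200):
            node = spring_step(node, anchors, ranges)
        print(node)                                    # converges towards roughly (5, 5)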

  17. Enzymes in removal of pharmaceuticals from wastewater: A critical review of challenges, applications and screening methods for their selection.

    PubMed

    Stadlmair, Lara F; Letzel, Thomas; Drewes, Jörg E; Grassmann, Johanna

    2018-08-01

    At present, the removal of trace organic chemicals such as pharmaceuticals in wastewater treatment plants is often incomplete resulting in a continuous discharge into the aqueous environment. To overcome this issue, bioremediation approaches gained significant importance in recent times, since they might have a lower carbon footprint than chemical or physical treatment methods. In this context, enzyme-based technologies represent a promising alternative since they are able to specifically target certain chemicals. For this purpose, versatile monitoring of enzymatic reactions is of great importance in order to understand underlying transformation mechanisms and estimate the suitability of various enzymes exhibiting different specificities for bioremediation purposes. This study provides a comprehensive review, summarizing research on enzymatic transformation of pharmaceuticals in water treatment applications using traditional and state-of-the-art enzyme screening approaches with a special focus on mass spectrometry (MS)-based and high-throughput tools. MS-based enzyme screening represents an approach that allows a comprehensive mechanistic understanding of enzymatic reactions and, in particular, the identification of transformation products. A critical discussion of these approaches for implementation in wastewater treatment processes is also presented. So far, there are still major gaps between laboratory- and field-scale research that need to be overcome in order to assess the viability for real applications. Copyright © 2018 Elsevier Ltd. All rights reserved.

  18. Mobile indoor localization using Kalman filter and trilateration technique

    NASA Astrophysics Data System (ADS)

    Wahid, Abdul; Kim, Su Mi; Choi, Jaeho

    2015-12-01

    In this paper, an indoor localization method based on Kalman-filtered RSSI is presented. The indoor communications environment, however, is rather harsh for mobiles, since a substantial number of objects distort the RSSI signals; fading and interference are the main sources of distortion. In this paper, a Kalman filter is adopted to filter the RSSI signals, and the trilateration method is applied to obtain robust and accurate coordinates of the mobile station. From indoor experiments using WiFi stations, we have found that the proposed algorithm provides higher accuracy with relatively lower power consumption than a conventional method.
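
    A minimal sketch of the two stages — a scalar Kalman filter that smooths the raw RSSI from each access point, and a linearized least-squares trilateration on the ranges derived from the filtered RSSI — is given below. The path-loss parameters, noise settings and anchor coordinates are assumptions for illustration, not the values used in the paper.

        import numpy as np

        def kalman_smooth(rssi_series, q=0.01, r=4.0):
            """Scalar Kalman filter over a noisy RSSI time series (dBm)."""
            x, p = rssi_series[0], 1.0
            out = []
            for z in rssi_series:
                p += q                       # predict (RSSI modeled as nearly constant)
                k = p / (p + r)              # Kalman gain
                x += k * (z - x)             # update with the new measurement
                p *= (1.0 - k)
                out.append(x)
            return np.array(out)

        def rssi_to_distance(rssi, tx_power=-40.0, n=2.5):
            """Log-distance path-loss model (parameters are assumptions)."""
            return 10 ** ((tx_power - rssi) / (10.0 * n))

        def trilaterate(anchors, dists):
            """Linearized least-squares position fix from >= 3 anchors."""
            x0, y0 = anchors[0]
            A, b = [], []
            for (xi, yi), di in zip(anchors[1:], dists[1:]):
                A.append([2 * (xi - x0), 2 * (yi - y0)])
                b.append(dists[0] ** 2 - di ** 2
                         + xi ** 2 - x0 ** 2 + yi ** 2 - y0 ** 2)
            sol, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
            return sol

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
        raw_rssi = [np.full(20, -60.0) + np.random.randn(20) * 2 for _ in anchors]
        dists = [rssi_to_distance(kalman_smooth(s)[-1]) for s in raw_rssi]
        print(trilaterate(anchors, dists))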

  19. Antipsychotic drug poisoning monitoring of clozapine in urine by using coffee ring effect based surface-enhanced Raman spectroscopy.

    PubMed

    Zhu, Qingxia; Yu, Xiaoyan; Wu, Zebing; Lu, Feng; Yuan, Yongfang

    2018-07-19

    Antipsychotics are the drugs most often involved in drug poisoning cases, and therefore therapeutic drug monitoring (TDM) is necessary for the safe and effective administration of these drugs. In this study, a coffee ring effect-based surface-enhanced Raman spectroscopy (CRE-SERS) method was developed and successfully used to monitor antipsychotic poisoning in urine samples for the first time. The established method exhibited excellent SERS performance since more hot spots were obtained in the "coffee ring". Using the optimized CRE-SERS method, the sensitivity was improved by one order of magnitude over that of the conventional method, with reasonable reproducibility. The antipsychotic drug clozapine (CLO) spiked into urine samples at 0.5-50 μg mL(-1) was quantitatively detected at concentrations above the thresholds for toxicity. The CRE-SERS method ultimately allowed CLO and its metabolites to be distinguished in real poisoning urine samples. The coffee-ring effect should provide more opportunities for practical applications of the SERS-based method, and the frequent occurrence of drug poisoning may open a new area of application for CRE-SERS. It is anticipated that the developed method will also have great potential for monitoring other drug poisonings. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Solving large sparse eigenvalue problems on supercomputers

    NASA Technical Reports Server (NTRS)

    Philippe, Bernard; Saad, Youcef

    1988-01-01

    An important problem in scientific computing consists of finding a few eigenvalues and corresponding eigenvectors of a very large and sparse matrix. The most popular methods to solve these problems are based on projection techniques onto appropriate subspaces. The main attraction of these methods is that they only require the use of the matrix in the form of matrix-by-vector multiplications. The implementations on supercomputers of two such methods for symmetric matrices, namely Lanczos' method and Davidson's method, are compared. Since one of the most important operations in these two methods is the multiplication of vectors by the sparse matrix, methods for performing this operation efficiently are discussed. The advantages and disadvantages of each method are compared and implementation aspects are discussed. Numerical experiments on a one-processor CRAY-2 and CRAY X-MP are reported. Possible parallel implementations are also discussed.
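
    Since the dominant cost in both methods is the sparse matrix-vector product, the basic Lanczos iteration can be written so that the matrix enters only through that product. The sketch below (basic iteration, no reorthogonalization, toy 1D Laplacian test matrix) illustrates this; it is not the supercomputer implementation discussed in the report.

        import numpy as np
        import scipy.sparse as sp

        def lanczos(A, m, seed=0):
            """m steps of the Lanczos iteration for a symmetric (sparse) matrix A.

            Returns the tridiagonal coefficients (alphas, betas); eigenvalues of the
            tridiagonal matrix approximate extremal eigenvalues of A.
            """
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            q = rng.standard_normal(n)
            q /= np.linalg.norm(q)
            q_prev = np.zeros(n)
            alphas, betas = [], []
            beta = 0.0
            for _ in range(m):
                w = A @ q - beta * q_prev      # only matrix-vector products with A
                alpha = q @ w
                w -= alpha * q
                beta = np.linalg.norm(w)
                alphas.append(alpha)
                betas.append(beta)
                q_prev, q = q, w / beta
            return np.array(alphas), np.array(betas[:-1])

        # toy symmetric sparse matrix: 1D Laplacian
        n = 200
        A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
        alphas, betas = lanczos(A, m=30)
        T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
        print(np.sort(np.linalg.eigvalsh(T))[-3:])   # approximations to the largest eigenvalues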

  1. A velocity-correction projection method based immersed boundary method for incompressible flows

    NASA Astrophysics Data System (ADS)

    Cai, Shanggui

    2014-11-01

    In the present work we propose a novel direct forcing immersed boundary method based on the velocity-correction projection method of [J.L. Guermond, J. Shen, Velocity-correction projection methods for incompressible flows, SIAM J. Numer. Anal., 41 (1) (2003) 112]. The principal idea of the immersed boundary method is to correct the velocity in the vicinity of the immersed object by using an artificial force to mimic the presence of the physical boundaries. Therefore, the velocity-correction projection method is preferred to its pressure-correction counterpart in the present work. Since the velocity-correction projection method can be regarded as a dual of the pressure-correction method, the proposed method can also be interpreted as follows: first the pressure is predicted by treating the viscous term explicitly without considering the immersed boundary, and the solenoidal velocity is used to determine the volume force on the Lagrangian points; then the no-slip boundary condition is enforced by correcting the velocity with the implicit viscous term. To demonstrate the efficiency and accuracy of the proposed method, several numerical simulations are performed and compared with results in the literature. China Scholarship Council.

  2. Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.

    PubMed

    Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi

    2018-06-01

    A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image into a probabilistic map indicating where virus particles exist in the image. The proposed approach automatically and simultaneously learns both discriminative features and the classifier for virus particle detection by machine learning, in contrast to existing methods that are based on handcrafted features, which yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed on a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, demonstrating state-of-the-art performance for virus particle detection. Since our method is based on supervised learning, which requires both input images and their corresponding annotations, it is primarily applicable to the detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt to any virus particles by learning automatically from an annotated dataset.
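
    A minimal sketch, assuming PyTorch is available, of a fully convolutional network that maps a grayscale TEM image to a per-pixel probability map is shown below. The layer sizes and architecture are illustrative assumptions, not those of the published network.

        import torch
        import torch.nn as nn

        class ParticleMapNet(nn.Module):
            """Tiny fully convolutional net: grayscale image -> probability map."""
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, kernel_size=1),   # per-pixel logit
                )

            def forward(self, x):
                # probability that a virus particle is present at each pixel
                return torch.sigmoid(self.features(x))

        net = ParticleMapNet()
        tem_image = torch.rand(1, 1, 128, 128)   # hypothetical normalized TEM patch
        prob_map = net(tem_image)                # same spatial size as the input
        print(prob_map.shape)                    # torch.Size([1, 1, 128, 128])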

  3. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method

    PubMed Central

    2011-01-01

    Background Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. Methods We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. Results The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both the filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. Conclusions The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients. PMID:21961846

  4. A hybrid robust fault tolerant control based on adaptive joint unscented Kalman filter.

    PubMed

    Shabbouei Hagh, Yashar; Mohammadi Asl, Reza; Cocquempot, Vincent

    2017-01-01

    In this paper, a new hybrid robust fault tolerant control scheme is proposed. A robust H ∞ control law is used in non-faulty situation, while a Non-Singular Terminal Sliding Mode (NTSM) controller is activated as soon as an actuator fault is detected. Since a linear robust controller is designed, the system is first linearized through the feedback linearization method. To switch from one controller to the other, a fuzzy based switching system is used. An Adaptive Joint Unscented Kalman Filter (AJUKF) is used for fault detection and diagnosis. The proposed method is based on the simultaneous estimation of the system states and parameters. In order to show the efficiency of the proposed scheme, a simulated 3-DOF robotic manipulator is used. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Financial Exploitation and Psychological Mistreatment among Older Adults: Differences between African Americans and Non-African Americans in a Population-Based Survey

    ERIC Educational Resources Information Center

    Beach, Scott R.; Schulz, Richard; Castle, Nicholas G.; Rosen, Jules

    2010-01-01

    Purpose: To examine racial differences in (a) the prevalence of financial exploitation and psychological mistreatment since turning 60 and in the past 6 months and (b) the experience--perpetrator, frequency, and degree of upset--of psychological mistreatment in the past 6 months. Design and methods: Random digit dial telephone recruitment and…

  6. Toward Applications for DNA Nanotechnology-More Bricks To Build With.

    PubMed

    Dietz, Hendrik

    2016-06-16

    Another brick in the wall: DNA nanotechnology has come a long way since its initial beginnings. This would not be possible without the continued development of methods for DNA assembly and new uses for DNA as a material. This Special Issue highlights some of the newest building blocks for nanodevices based on DNA. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. The Reinforcement to the Citizen Participation in Taking Care of the Environmental Protection Towards a Green Moral Concept-Based Sustainable Development

    ERIC Educational Resources Information Center

    Syahri, Mohamad

    2016-01-01

    The research sites were "Blitar, Malang" and "Batu" in East Java Province, Indonesia since those areas are regarded to have problems of environmental crises. In the data collection, this study made use of four methods, namely: a) observation, b) In-depth Interviews, c) documentation, and d) focus group discussion. The results…

  8. Recognition of Prior Learning at the Centre of a National Strategy: Tensions between Professional Gains and Personal Development

    ERIC Educational Resources Information Center

    Lima, Licínio C.; Guimarães, Paula

    2016-01-01

    This paper focuses on recognition of prior learning as part of a national policy based on European Union guidelines for lifelong learning, and it explains how recognition of prior learning has been perceived since it was implemented in Portugal in 2000. Data discussed are the result of a mixed method research project that surveyed adult learners,…

  9. A Mixed Approaches Method Used to Investigate Teacher Cognition of English Language Teaching

    ERIC Educational Resources Information Center

    Hung, Nguyen Viet

    2012-01-01

    This paper is part of a larger research project on ELT quality in secondary schools in Vietnam since the new series of English textbooks was officially approved in 2006 and the reform was directed toward task-based language teaching (TBLT). The purpose of this paper is to make an argument on why, what and…

  10. Water mass changes inferred by gravity field variations with GRACE

    NASA Astrophysics Data System (ADS)

    Fagiolini, Elisa; Gruber, Christian; Apel, Heiko; Viet Dung, Nguyen; Güntner, Andreas

    2013-04-01

    Since 2002 the Gravity Recovery And Climate Experiment (GRACE) mission has been measuring temporal variations of Earth's gravity field, depicting with great accuracy how mass is distributed and varies around the globe. Advanced signal separation techniques make it possible to isolate different sources of mass, such as atmospheric and oceanic circulation or land hydrology. Thanks to GRACE, floods, droughts, and water resources monitoring are now possible on a global scale. Scientists at GFZ Potsdam have been involved since 2000 in the initiation and launch of the GRACE precursor CHAMP satellite mission, since 2002 in the GRACE Science Data System, and since 2009 in ESA's GOCE High-level Processing Facility as well as in the planned GRACE Follow-On mission for the continuation of time-variable gravity field determination. Recently GFZ has reprocessed the complete GRACE time series of monthly gravity field spherical harmonic solutions with improved standards and background models. This new release (RL05) already shows significantly less noise and fewer spurious artifacts. In order to monitor water mass redistribution and fast-moving water, we still need a higher resolution in both time and space. Moreover, in view of disaster management applications we need to act with a shorter latency (the current standard latency is 2 months). For this purpose, we developed a regional method based on radial basis functions that can compute models in regional and global representation. This new method localizes the gravity observations to the closest regions and omits spatial correlations with farther regions. Additionally, we have increased the temporal resolution to sub-monthly time scales. Innovative concepts such as Kalman filtering and regularization, along with sophisticated regional modeling, have shifted the temporal and spatial resolution towards new frontiers. We expect global hydrological models such as WHGM to profit from such accurate results. First results comparing the mass changes over the Mekong Delta observed with GRACE with spatially explicit hydraulic simulations of the large-scale annual inundation volume during the flood season are presented and discussed.

  11. Planetary protection - assaying new methods

    NASA Astrophysics Data System (ADS)

    Nellen, J.; Rettberg, P.; Horneck, G.

    The space age began in 1957 when the USSR launched the first satellite into Earth orbit. In response to this new challenge the International Council for Science, formerly known as the International Council of Scientific Unions (ICSU), established the Committee on Space Research (COSPAR) in 1958. The role of COSPAR was to channel international scientific research in space and establish an international forum. Through COSPAR the scientific community agreed on the need to screen interplanetary probes for forward contamination (contamination of foreign planets) and backward contamination (contamination of Earth by returned samples/probes). To prevent both forms of contamination, a set of rules was established as a guideline. Nowadays the standard implementation of the planetary protection rules is based on the experience gained during NASA's Viking project in 1975/76. Since then the evaluation methods for microbial contamination of spacecraft have been changed or updated only slowly. In this study the standard method of sample taking will be evaluated. New methods for examining those samples, based on the identification of life at the molecular level, will be reviewed and checked for their feasibility as microbial detection systems. The methods will be examined for their qualitative (detection and verification of different organisms) and quantitative (detection limit and concentration verification) qualities. Among the methods analyzed are, for example, real-time PCR (polymerase chain reaction) using specific primer sets for the amplification of highly conserved rRNA or DNA regions; measurement of intrinsic fluorescence, e.g. of ATP using luciferin-luciferase reagents; and the use of FAME (fatty acid methyl esters) and microchips for microbial identification purposes. The methods will be chosen to give good overall coverage of different possible molecular markers and approaches. The most promising methods shall then be lab-tested and evaluated for their use under spacecraft assembly conditions. Since Mars has become one of the most sought-after planets in our solar system and will be visited by man-made probes quite often in the near future, planetary protection is as important as ever.

  12. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data.

    PubMed

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A; Peddada, Shyamal D

    2018-01-01

    Motivation: Gene-expression data obtained from high throughput technologies are subject to various sources of noise and accordingly the raw data are pre-processed before formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems, such as cell-cycle, circadian clock, etc., the choice of the normalization method may substantially impact the determination of a gene to be rhythmic. Thus rhythmicity of a gene can purely be an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of a normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate the proposed methodology using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. Thus it suggests that the proposed measure is robust to the choice of a normalization method. Consequently, the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset. Availability: A user friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html.
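
    A hedged sketch of the general idea — compute a rhythmicity statistic for a gene's expression profile and compare it against a resampling null obtained by destroying the temporal ordering — is given below. The statistic used here (amplitude of the first Fourier harmonic) and the permutation null are illustrative assumptions, not the measure or bootstrap scheme defined in the paper.

        import numpy as np

        def first_harmonic_amplitude(profile):
            """Toy rhythmicity statistic: amplitude of the first Fourier harmonic."""
            spectrum = np.fft.rfft(profile - profile.mean())
            return np.abs(spectrum[1]) / len(profile)

        def resampling_p_value(profile, n_boot=2000, seed=1):
            """Resampling null: shuffle time points to destroy any rhythm."""
            rng = np.random.default_rng(seed)
            observed = first_harmonic_amplitude(profile)
            null = [first_harmonic_amplitude(rng.permutation(profile))
                    for _ in range(n_boot)]
            return (np.sum(np.array(null) >= observed) + 1) / (n_boot + 1)

        time = np.arange(24)                               # hypothetical hourly samples
        rhythmic = np.sin(2 * np.pi * time / 24) + 0.3 * np.random.randn(24)
        flat = 0.3 * np.random.randn(24)
        print(resampling_p_value(rhythmic), resampling_p_value(flat))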

  13. A Bootstrap Based Measure Robust to the Choice of Normalization Methods for Detecting Rhythmic Features in High Dimensional Data

    PubMed Central

    Larriba, Yolanda; Rueda, Cristina; Fernández, Miguel A.; Peddada, Shyamal D.

    2018-01-01

    Motivation: Gene-expression data obtained from high throughput technologies are subject to various sources of noise and accordingly the raw data are pre-processed before formally analyzed. Normalization of the data is a key pre-processing step, since it removes systematic variations across arrays. There are numerous normalization methods available in the literature. Based on our experience, in the context of oscillatory systems, such as cell-cycle, circadian clock, etc., the choice of the normalization method may substantially impact the determination of a gene to be rhythmic. Thus rhythmicity of a gene can purely be an artifact of how the data were normalized. Since the determination of rhythmic genes is an important component of modern toxicological and pharmacological studies, it is important to determine truly rhythmic genes that are robust to the choice of a normalization method. Results: In this paper we introduce a rhythmicity measure and a bootstrap methodology to detect rhythmic genes in an oscillatory system. Although the proposed methodology can be used for any high-throughput gene expression data, in this paper we illustrate the proposed methodology using several publicly available circadian clock microarray gene-expression datasets. We demonstrate that the choice of normalization method has very little effect on the proposed methodology. Specifically, for any pair of normalization methods considered in this paper, the resulting values of the rhythmicity measure are highly correlated. Thus it suggests that the proposed measure is robust to the choice of a normalization method. Consequently, the rhythmicity of a gene is potentially not a mere artifact of the normalization method used. Lastly, as demonstrated in the paper, the proposed bootstrap methodology can also be used for simulating data for genes participating in an oscillatory system using a reference dataset. Availability: A user friendly code implemented in R language can be downloaded from http://www.eio.uva.es/~miguel/robustdetectionprocedure.html PMID:29456555

  14. Retrieval Algorithms for Road Surface Modelling Using Laser-Based Mobile Mapping.

    PubMed

    Jaakkola, Anttoni; Hyyppä, Juha; Hyyppä, Hannu; Kukko, Antero

    2008-09-01

    Automated processing of the data provided by a laser-based mobile mapping system will be a necessity due to the huge amount of data produced. In the future, vehicle-based laser scanning, here called mobile mapping, should see considerable use for road environment modelling. Since the scanning geometry and point density differ from airborne laser scanning, new algorithms are needed for information extraction. In this paper, we propose automatic methods for classifying road marking and kerbstone points and for modelling the road surface as a triangulated irregular network. On the basis of experimental tests, the mean classification accuracies obtained using the automatic method for lines, zebra crossings and kerbstones were 80.6%, 92.3% and 79.7%, respectively.

  15. An automatic and efficient pipeline for disease gene identification through utilizing family-based sequencing data.

    PubMed

    Song, Dandan; Li, Ning; Liao, Lejian

    2015-01-01

    Due to the generation of enormous amounts of data at lower costs and in shorter times, whole-exome sequencing technologies provide dramatic opportunities for identifying disease genes implicated in Mendelian disorders. Since thousands of genomic variants can be sequenced in each exome, it is challenging to filter pathogenic variants in protein coding regions while reducing the number of missed true variants. Therefore, an automatic and efficient pipeline for finding disease variants in Mendelian disorders is designed by exploiting a combination of variant filtering steps to analyze family-based exome sequencing data. Recent studies on Freeman-Sheldon disease are revisited, showing that the proposed method outperforms other existing candidate gene identification methods.

  16. A technique based on droplet evaporation to recognize alcoholic drinks

    NASA Astrophysics Data System (ADS)

    González-Gutiérrez, Jorge; Pérez-Isidoro, Rosendo; Ruiz-Suárez, J. C.

    2017-07-01

    Chromatography is, at present, the technique most used to determine the purity of alcoholic drinks. It involves a careful separation of the components of the liquid. However, since this technique requires sophisticated instrumentation, alternative techniques such as conductivity measurements and UV-Vis and infrared spectrometry are also used. We report here a method based on salt-induced crystallization patterns formed during the evaporation of alcoholic drops. We found that droplets of different samples form different structures upon drying, which we characterize by their radial density profiles. We show that, using the dried deposit of a spirit as a control sample, our method allows us to differentiate between pure and adulterated drinks. As a proof of concept, we study tequila.

  17. TMA Vessel Segmentation Based on Color and Morphological Features: Application to Angiogenesis Research

    PubMed Central

    Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial

    2013-01-01

    Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies, and that the many existing methods provide different results, we aim to construct a morphometric tool that measures different aspects of the shape and size of vascular vessels in a complete and accurate way. The tool presented is based on vessel closing, which is an essential property for properly characterizing the size and shape of vascular and lymphatic vessels. The method is fast and accurate, improving on existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered as a single object. PMID:24489494

  18. Observer-based adaptive backstepping control for fractional order systems with input saturation.

    PubMed

    Sheng, Dian; Wei, Yiheng; Cheng, Songsong; Wang, Yong

    2017-07-03

    An observer-based fractional order anti-saturation adaptive backstepping control scheme is proposed in this paper for incommensurate fractional order systems with input saturation and partially measurable states. On the basis of stability analysis, a novel state observer is established first, since the only information we can acquire is the system output. In order to compensate for the saturation, a series of virtual signals is generated via the construction of a fractional order auxiliary system. Afterwards, the controller design is carried out in accordance with the adaptive backstepping control method by introducing the indirect Lyapunov method. To highlight the effectiveness of the proposed control scheme, simulation examples are demonstrated at the end. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  19. Plexiform neurofibroma tissue classification

    NASA Astrophysics Data System (ADS)

    Weizman, L.; Hoch, L.; Ben Sira, L.; Joskowicz, L.; Pratt, L.; Constantini, S.; Ben Bashat, D.

    2011-03-01

    Plexiform Neurofibroma (PN) is a major complication of Neurofibromatosis-1 (NF1), a common genetic disease involving the nervous system. PNs are peripheral nerve sheath tumors extending along the length of the nerve in various parts of the body. Treatment decisions are based on tumor volume assessment using MRI, which is currently time consuming and error prone, with limited semi-automatic segmentation support. We present in this paper a new method for the segmentation and tumor mass quantification of PN from STIR MRI scans. The method starts with a user-based delineation of the tumor area in a single slice and automatically detects the PN lesions in the entire image based on tumor connectivity. Experimental results on seven datasets yield a mean volume overlap difference of 25% compared to manual segmentation by an expert radiologist, with a mean computation and interaction time of 12 minutes versus over an hour for manual annotation. Since the user interaction in the segmentation process is minimal, our method has the potential to become part of the clinical workflow.

  20. High resolution melting (HRM) analysis of DNA--its role and potential in food analysis.

    PubMed

    Druml, Barbara; Cichna-Markl, Margit

    2014-09-01

    DNA based methods play an increasing role in food safety control and food adulteration detection. Recent papers show that high resolution melting (HRM) analysis is an interesting approach. It involves amplification of the target of interest in the presence of a saturation dye by the polymerase chain reaction (PCR) and subsequent melting of the amplicons by gradually increasing the temperature. Since the melting profile depends on the GC content, length, sequence and strand complementarity of the product, HRM analysis is highly suitable for the detection of single-base variants and small insertions or deletions. The review gives an introduction into HRM analysis, covers important aspects in the development of an HRM analysis method and describes how HRM data are analysed and interpreted. Then we discuss the potential of HRM analysis based methods in food analysis, i.e. for the identification of closely related species and cultivars and the identification of pathogenic microorganisms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Molecular diagnostic methods for invasive fungal disease: the horizon draws nearer?

    PubMed

    Halliday, C L; Kidd, S E; Sorrell, T C; Chen, S C-A

    2015-04-01

    Rapid, accurate diagnostic laboratory tests are needed to improve clinical outcomes of invasive fungal disease (IFD). Traditional direct microscopy, culture and histological techniques constitute the 'gold standard' against which newer tests are judged. Molecular diagnostic methods, whether broad-range or fungal-specific, have great potential to enhance the sensitivity and speed of IFD diagnosis, but have varying specificities. The use of PCR-based assays, DNA sequencing, and other molecular methods, including those incorporating proteomic approaches such as matrix-assisted laser desorption ionisation-time of flight mass spectrometry (MALDI-TOF MS), has shown promising results. These are used mainly to complement conventional methods, since they require standardisation before widespread implementation can be recommended. None are incorporated into diagnostic criteria for defining IFD. Commercial assays may assist standardisation. This review provides an update of molecular-based diagnostic approaches applicable to biological specimens and fungal cultures in microbiology laboratories. We focus on the most common pathogens, Candida and Aspergillus, and the mucormycetes. The position of molecular-based approaches in the detection of azole and echinocandin antifungal resistance is also discussed.

  2. Seismic noise attenuation using an online subspace tracking algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yatong; Li, Shuhua; Zhang, Dong; Chen, Yangkang

    2018-02-01

    We propose a new low-rank-based noise attenuation method using an efficient algorithm for tracking subspaces from highly corrupted seismic observations. The subspace tracking algorithm requires only basic linear algebraic manipulations and is derived by analysing incremental gradient descent on the Grassmannian manifold of subspaces. When the multidimensional seismic data are mapped to a low-rank space, the subspace tracking algorithm can be applied directly to the input low-rank matrix to estimate the useful signals. Since the subspace tracking algorithm is an online algorithm, it is more robust to random noise than the traditional truncated singular value decomposition (TSVD)-based subspace tracking algorithm. Compared with state-of-the-art algorithms, the proposed denoising method obtains better performance. More specifically, the proposed method outperforms the TSVD-based singular spectrum analysis method, leaving less residual noise while halving the computational cost. Several synthetic and field data examples with different levels of complexity demonstrate the effectiveness and robustness of the presented algorithm in rejecting different types of noise, including random noise, spiky noise, blending noise, and coherent noise.
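
    For reference, the TSVD baseline that the subspace-tracking method is compared against amounts to a rank-k truncation of the (low-rank mapped) data matrix. A minimal sketch with a hypothetical rank and synthetic data is shown below; it illustrates the baseline only, not the proposed online algorithm.

        import numpy as np

        def tsvd_denoise(D, rank):
            """Truncated SVD: keep the top `rank` singular components of D."""
            U, s, Vt = np.linalg.svd(D, full_matrices=False)
            return (U[:, :rank] * s[:rank]) @ Vt[:rank]

        # hypothetical low-rank "signal" matrix plus random noise
        rng = np.random.default_rng(0)
        signal = np.outer(np.sin(np.linspace(0, 6, 100)), np.ones(40))
        noisy = signal + 0.5 * rng.standard_normal(signal.shape)
        clean = tsvd_denoise(noisy, rank=1)
        print(np.linalg.norm(clean - signal) < np.linalg.norm(noisy - signal))  # True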

  3. Inferring drug-disease associations based on known protein complexes.

    PubMed

    Yu, Liang; Huang, Jianbin; Ma, Zhixin; Zhang, Jing; Zou, Yapeng; Gao, Lin

    2015-01-01

    Inferring drug-disease associations is critical for unveiling disease mechanisms, as well as for discovering novel functions of available drugs, i.e. drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discard important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where we assign weights to the drug-disease associations using probabilities. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases; the larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that the proposed method achieves higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html.
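
    The indirect drug-disease weights obtained through the tripartite network can be illustrated as a product of two weighted bipartite incidence matrices (drug-complex and complex-disease). The numbers below are hypothetical and the simple matrix product is only a sketch of the idea of aggregating weights over shared protein complexes, not the paper's exact scoring scheme.

        import numpy as np

        # hypothetical weighted bipartite relations
        # rows: drugs, columns: protein complexes
        drug_complex = np.array([[0.8, 0.0, 0.3],
                                 [0.0, 0.6, 0.5]])
        # rows: protein complexes, columns: diseases
        complex_disease = np.array([[0.9, 0.1],
                                    [0.2, 0.7],
                                    [0.4, 0.4]])

        # indirect drug-disease weights through shared protein complexes;
        # a larger entry suggests a more reliable association
        drug_disease = drug_complex @ complex_disease
        print(drug_disease)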

  4. Modeling of frequency agile devices: development of PKI neuromodeling library based on hierarchical network structure

    NASA Astrophysics Data System (ADS)

    Sanchez, P.; Hinojosa, J.; Ruiz, R.

    2005-06-01

    Recently, neuromodeling methods for microwave devices have been developed. These methods are suitable for generating models of novel devices and allow fast and accurate simulations and optimizations. However, the development of model libraries makes these methods a formidable task, since they require massive input-output data provided by an electromagnetic simulator or by measurements, as well as repeated artificial neural network (ANN) training. This paper presents a strategy that reduces the cost of library development while keeping the advantages of neuromodeling methods: high accuracy, a large range of geometrical and material parameters, and reduced CPU time. The library models are developed from a set of base prior knowledge input (PKI) models, which capture the characteristics common to all the models in the library, and high-level ANNs that produce the library model outputs from the base PKI models. The technique is illustrated for a microwave multiconductor tunable phase shifter using anisotropic substrates. Closed-form relationships have been developed and are presented in this paper. The results show good agreement with the expected ones.

  5. PHYLOViZ: phylogenetic inference and data visualization for sequence based typing methods

    PubMed Central

    2012-01-01

    Background With the decrease of DNA sequencing costs, sequence-based typing methods are rapidly becoming the gold standard for epidemiological surveillance. These methods provide the reproducible and comparable results needed for global-scale bacterial population analysis, while retaining their usefulness for local epidemiological surveys. Online databases that collect the generated allelic profiles and associated epidemiological data are available, but this wealth of data remains underused and is frequently poorly annotated, since no user-friendly tool exists to analyze and explore it. Results PHYLOViZ is platform-independent Java software that allows the integrated analysis of sequence-based typing methods, including SNP data generated from whole genome sequencing approaches, and associated epidemiological data. goeBURST and its Minimum Spanning Tree expansion are used for visualizing the possible evolutionary relationships between isolates. The results can be displayed as an annotated graph overlaying the query results of any other epidemiological data available. Conclusions PHYLOViZ is user-friendly software that allows the combined analysis of multiple data sources for microbial epidemiological and population studies. It is freely available at http://www.phyloviz.net. PMID:22568821

  6. Inferring drug-disease associations based on known protein complexes

    PubMed Central

    2015-01-01

    Inferring drug-disease associations is critical for unveiling disease mechanisms, as well as for discovering novel functions of available drugs, i.e. drug repositioning. Previous work is primarily based on drug-gene-disease relationships, which discard important information, since genes execute their functions by interacting with other genes. To overcome this issue, we propose a novel methodology that discovers drug-disease associations based on protein complexes. First, an integrated heterogeneous network consisting of drugs, protein complexes, and diseases is constructed, where we assign weights to the drug-disease associations using probabilities. Then, from the tripartite network, we obtain the indirect weighted relationships between drugs and diseases; the larger the weight, the higher the reliability of the correlation. We apply our method to mental disorders and hypertension, and validate the results using the Comparative Toxicogenomics Database. Our ranked results can be directly reinforced by existing biomedical literature, suggesting that the proposed method achieves higher specificity and sensitivity. The proposed method offers new insight into drug-disease discovery. Our method is publicly available at http://1.complexdrug.sinaapp.com/Drug_Complex_Disease/Data_Download.html. PMID:26044949

  7. Speckle noise reduction for optical coherence tomography based on adaptive 2D dictionary

    NASA Astrophysics Data System (ADS)

    Lv, Hongli; Fu, Shujun; Zhang, Caiming; Zhai, Lin

    2018-05-01

    As a high-resolution biomedical imaging modality, optical coherence tomography (OCT) is widely used in the medical sciences. However, OCT images often suffer from speckle noise, which can mask important image information and thus reduce the accuracy of clinical diagnosis. Taking full advantage of nonlocal self-similarity and adaptive 2D-dictionary-based sparse representation, in this work a speckle noise reduction algorithm is proposed for despeckling OCT images. To reduce speckle noise while preserving local image features, similar nonlocal patches are first extracted from the noisy image and put into groups using a gamma-distribution-based block matching method. An adaptive 2D dictionary is then learned for each patch group. Unlike traditional vector-based sparse coding, we express each image patch as a linear combination of a few matrices. This image-to-matrix approach can exploit the local correlation between pixels. Since each image patch might belong to several groups, the despeckled OCT image is finally obtained by aggregating all filtered image patches. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods, in terms of both objective metrics and visual inspection.

  8. Evaluation of passenger health risk assessment of sustainable indoor air quality monitoring in metro systems based on a non-Gaussian dynamic sensor validation method.

    PubMed

    Kim, MinJeong; Liu, Hongbin; Kim, Jeong Tai; Yoo, ChangKyoo

    2014-08-15

    Sensor faults in metro systems provide incorrect information to indoor air quality (IAQ) ventilation systems, resulting in the mis-operation of the ventilation systems and adverse effects on passenger health. In this study, a new sensor validation method is proposed to (1) detect, identify and repair sensor faults and (2) evaluate the influence of sensor reliability on passenger health risk. To address the dynamic non-Gaussianity of IAQ data, dynamic independent component analysis (DICA) is used. To detect and identify sensor faults, the DICA-based squared prediction error and sensor validity index are used, respectively. To restore the faults to normal measurements, a DICA-based iterative reconstruction algorithm is proposed. The comprehensive indoor air quality index (CIAI), which evaluates the influence of the current IAQ on passenger health, is then compared for the faulty and reconstructed IAQ data sets. Experimental results from a metro station showed that the DICA-based method can produce an improved IAQ level in the metro station and reduce passenger health risk, since it validates sensor faults more accurately than conventional methods. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation

    PubMed Central

    Li, Hong; Lu, Mingquan

    2017-01-01

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks. PMID:28665318

  10. GNSS Spoofing Detection and Mitigation Based on Maximum Likelihood Estimation.

    PubMed

    Wang, Fei; Li, Hong; Lu, Mingquan

    2017-06-30

    Spoofing attacks are threatening the global navigation satellite system (GNSS). The maximum likelihood estimation (MLE)-based positioning technique is a direct positioning method originally developed for multipath rejection and weak signal processing. We find this method also has a potential ability for GNSS anti-spoofing since a spoofing attack that misleads the positioning and timing result will cause distortion to the MLE cost function. Based on the method, an estimation-cancellation approach is presented to detect spoofing attacks and recover the navigation solution. A statistic is derived for spoofing detection with the principle of the generalized likelihood ratio test (GLRT). Then, the MLE cost function is decomposed to further validate whether the navigation solution obtained by MLE-based positioning is formed by consistent signals. Both formulae and simulations are provided to evaluate the anti-spoofing performance. Experiments with recordings in real GNSS spoofing scenarios are also performed to validate the practicability of the approach. Results show that the method works even when the code phase differences between the spoofing and authentic signals are much less than one code chip, which can improve the availability of GNSS service greatly under spoofing attacks.

  11. Deviation-based spam-filtering method via stochastic approach

    NASA Astrophysics Data System (ADS)

    Lee, Daekyung; Lee, Mi Jin; Kim, Beom Jun

    2018-03-01

    In the presence of a huge number of possible purchase choices, ranks or ratings of items by others often play a very important role for a buyer making a final purchase decision. Perfectly objective rating is impossible to achieve, and we often use an average rating built on how previous buyers estimated the quality of the product. The problem with a simple average rating is that it can easily be polluted by careless users whose evaluations cannot be trusted, and by malicious spammers who deliberately try to bias the rating result. In this letter we suggest how the trustworthiness of individual users can be systematically and quantitatively reflected to build a more reliable rating system. We compute a suitably defined reliability of each user based on the user's rating pattern over all products she evaluated. We call the proposed method deviation-based ranking, since the statistical significance of each user's rating pattern with respect to the average rating pattern is the key ingredient. We find that our deviation-based ranking method outperforms existing methods in filtering out careless random evaluators as well as malicious spammers.
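
    A minimal sketch of the idea — score each user by how strongly their rating pattern deviates from the item-wise average and down-weight the deviant users when recomputing the ratings — is shown below. The concrete reliability formula is an assumption for illustration, not the statistical-significance measure defined in the letter.

        import numpy as np

        # hypothetical dense ratings matrix: rows = users, columns = items
        R = np.array([[5.0, 4.0, 4.0],
                      [4.0, 5.0, 4.0],
                      [1.0, 1.0, 5.0]])   # last user rates against the crowd (possible spammer)

        item_mean = R.mean(axis=0)

        # deviation of each user's rating pattern from the average pattern
        deviation = np.abs(R - item_mean).mean(axis=1)
        reliability = 1.0 / (1.0 + deviation)     # assumed form: more deviant -> less trusted
        reliability /= reliability.sum()

        weighted_rating = reliability @ R         # reliability-weighted item scores
        print("plain average :", item_mean)
        print("weighted score:", weighted_rating)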

  12. The Use of Resistivity Methods in Terrestrial Forensic Searches

    NASA Astrophysics Data System (ADS)

    Wolf, R. C.; Raisuddin, I.; Bank, C.

    2013-12-01

    The increasing use of near-surface geophysical methods in forensic searches has demonstrated the need for further studies to identify the ideal physical, environmental and temporal settings for each geophysical method. Previous studies using resistivity methods have shown promising results, but additional work is required to more accurately interpret and analyze survey findings. The Ontario Provincial Police's UCRT (Urban Search and Rescue; Chemical, Biological, Radiological, Nuclear and Explosives Response Team) is collaborating with the University of Toronto and two additional universities in a multi-year study investigating the applications of near-surface geophysical methods to terrestrial forensic searches. In the summer of 2012, on a test site near Bolton, Ontario, the OPP buried weapons, drums and pigs (naked, tarped, and clothed) to simulate clandestine graves and caches. Our study aims to conduct repeat surveys using an IRIS Syscal Junior resistivity meter with a 48-electrode switching system. These surveys will monitor changes in resistivity reflecting decomposition of the objects since burial, and identify the strengths and weaknesses of resistivity when used in a rural, clandestine burial setting. Our initial findings indicate the usefulness of this method, as prominent resistivity changes have been observed. We anticipate that our results will help law enforcement agencies determine the type of resistivity results to expect based on time since burial, depth of burial and state of dress of the body.

  13. Effect of costing methods on unit cost of hospital medical services.

    PubMed

    Riewpaiboon, Arthorn; Malaroje, Saranya; Kongsawatt, Sukalaya

    2007-04-01

    To explore the variance of unit costs of hospital medical services due to different costing methods employed in the analysis. Retrospective and descriptive study at Kaengkhoi District Hospital, Saraburi Province, Thailand, in the fiscal year 2002. The process started with a calculation of unit costs of medical services as a base case. After that, the unit costs were re-calculated based on various methods. Finally, the variations of the results obtained from the various methods and the base case were computed and compared. The total annualized capital cost of buildings and capital items calculated by the accounting-based approach (averaging the capital purchase prices throughout their useful life) was 13.02% lower than that calculated by the economic-based approach (combination of depreciation cost and interest on the undepreciated portion over the useful life). A change of discount rate from 3% to 6% resulted in a 4.76% increase of the hospital's total annualized capital cost. When the useful life of durable goods was changed from 5 to 10 years, the total annualized capital cost of the hospital decreased by 17.28% from that of the base case. Regarding alternative criteria of indirect cost allocation, the unit cost of medical services changed by a range of -6.99% to +4.05%. We also explored the effect of various costing methods, including departmental allocation methods, on the unit cost of medical services in one department; the results varied between -85% and +32% relative to the base case. Based on the variation analysis, the economic-based approach was suitable for capital cost calculation. For the useful life of capital items, an appropriate duration should be studied and standardized. Regarding allocation criteria, single-output criteria might be more efficient than combined-output and complicated ones. For the departmental allocation methods, the micro-costing method was the most suitable method at the time of the study. These different costing methods should be standardized and developed as guidelines, since they could affect implementation of the national health insurance scheme and health financing management.
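
    The two capital-annualization approaches contrasted here can be sketched as follows. The accounting-based approach is straight-line averaging of the purchase price; the economic-based approach (depreciation plus interest on the undepreciated portion) is folded here into a standard annuity factor. The function names and figures are illustrative, not from the study.

```python
# Hedged sketch of accounting-based vs economic-based capital annualization.
def accounting_annual_cost(price, useful_life_years):
    return price / useful_life_years

def economic_annual_cost(price, useful_life_years, discount_rate):
    r, n = discount_rate, useful_life_years
    annuity_factor = r / (1.0 - (1.0 + r) ** -n)   # depreciation + interest, annualized
    return price * annuity_factor

price = 1_000_000
print(accounting_annual_cost(price, 5))        # 200000.0
print(economic_annual_cost(price, 5, 0.03))    # ~218355 at a 3% discount rate
print(economic_annual_cost(price, 5, 0.06))    # ~237396 at a 6% discount rate
```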

  14. Solving the problem of negative populations in approximate accelerated stochastic simulations using the representative reaction approach.

    PubMed

    Kadam, Shantanu; Vanka, Kumar

    2013-02-15

    Methods based on the stochastic formulation of chemical kinetics have the potential to accurately reproduce the dynamical behavior of various biochemical systems of interest. However, the computational expense makes them impractical for the study of real systems. Attempts to render these methods practical have led to the development of accelerated methods, where the reaction numbers are modeled by Poisson random numbers. However, for certain systems, such methods give rise to physically unrealistic negative numbers for species populations. The methods which make use of binomial variables, in place of Poisson random numbers, have since become popular, and have been partially successful in addressing this problem. In this manuscript, the development of two new computational methods, based on the representative reaction approach (RRA), has been discussed. The new methods endeavor to solve the problem of negative numbers, by making use of tools like the stochastic simulation algorithm and the binomial method, in conjunction with the RRA. It is found that these newly developed methods perform better than other binomial methods used for stochastic simulations, in resolving the problem of negative populations. Copyright © 2012 Wiley Periodicals, Inc.
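
    The following sketch illustrates why binomial leaping avoids negative populations: the number of firings of a reaction that consumes a species during a leap is capped by the molecules actually available. This is a generic binomial tau-leap step for a single first-order consumption reaction, not the authors' representative reaction approach; the rate constant and step size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def binomial_leap_step(x, rate_constant, tau):
    """One binomial leap for a reaction consuming species X."""
    if x == 0:
        return 0
    expected = rate_constant * x * tau      # mean number of firings in the leap
    p = min(1.0, expected / x)              # per-molecule firing probability
    fired = rng.binomial(x, p)              # never exceeds the available x
    return x - fired

x = 10
for _ in range(20):
    x = binomial_leap_step(x, rate_constant=0.5, tau=0.4)
print(x)  # always >= 0, unlike a Poisson leap, which could overshoot
```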

  15. Experimental comparison and validation of hot-ball method with guarded hot plate method on polyurethane foams

    NASA Astrophysics Data System (ADS)

    Hudec, Ján; Glorieux, Christ; Dieška, Peter; Kubičár, Ľudovít

    2016-07-01

    The Hot-ball method is an innovative transient method for measuring thermophysical properties. The principle is based on heating a small ball, incorporated in the measured medium, with constant heating power while simultaneously measuring the ball's temperature response from the moment heating is initiated. The shape of the temperature response depends on the thermophysical properties of the medium in which the sensor is placed. This method is patented by the Institute of Physics, SAS, where the method and sensors based on it are being developed. At the beginning of the development of sensors for this method we were oriented toward monitoring applications, where relative precision is much more important than accuracy. Meanwhile, the quality of the sensors has improved enough to be used for a new application - absolute measurement of the thermophysical parameters of low-thermal-conductivity materials. This paper describes the experimental verification and validation of measurement by the hot-ball method. Thanks to cooperation with the Laboratory of Soft Matter and Biophysics of the Catholic University of Leuven in Belgium, the established Guarded Hot Plate method was used as a reference. Details about the measuring setups, a description of the experiments and the results of the comparison are presented.
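
    As a rough orientation only: assuming an idealized model in which a ball of radius r_b delivers constant power q into an infinite medium, the stabilized temperature rise is approximately ΔT = q / (4π λ r_b), so thermal conductivity can be read off the plateau of the response. This idealization, the function name, and the numbers below are assumptions for illustration; real hot-ball sensors require calibration.

```python
import math

def thermal_conductivity(power_w, radius_m, delta_t_plateau_k):
    # lambda = q / (4 * pi * r_b * dT_plateau) under the idealized model above
    return power_w / (4.0 * math.pi * radius_m * delta_t_plateau_k)

print(thermal_conductivity(power_w=0.003, radius_m=2.0e-3, delta_t_plateau_k=4.0))
# ~0.03 W m^-1 K^-1, the order of magnitude expected for a polyurethane foam
```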

  16. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition is being adopted in defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method with more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and large head poses; (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.

  17. Extracting rate changes in transcriptional regulation from MEDLINE abstracts.

    PubMed

    Liu, Wenting; Miao, Kui; Li, Guangxia; Chang, Kuiyu; Zheng, Jie; Rajapakse, Jagath C

    2014-01-01

    Time delays are important factors that are often neglected in gene regulatory network (GRN) inference models. Validating time delays from knowledge bases is a challenge since the vast majority of biological databases do not record temporal information of gene regulations. Biological knowledge and facts on gene regulations are typically extracted from bio-literature with specialized methods that depend on the regulation task. In this paper, we mine evidences for time delays related to the transcriptional regulation of yeast from the PubMed abstracts. Since the vast majority of abstracts lack quantitative time information, we can only collect qualitative evidences of time delays. Specifically, the speed-up or delay in transcriptional regulation rate can provide evidences for time delays (shorter or longer) in GRN. Thus, we focus on deriving events related to rate changes in transcriptional regulation. A corpus of yeast regulation related abstracts was manually labeled with such events. In order to capture these events automatically, we create an ontology of sub-processes that are likely to result in transcription rate changes by combining textual patterns and biological knowledge. We also propose effective feature extraction methods based on the created ontology to identify the direct evidences with specific details of these events. Our ontologies outperform existing state-of-the-art gene regulation ontologies in the automatic rule learning method applied to our corpus. The proposed deterministic ontology rule-based method can achieve comparable performance to the automatic rule learning method based on decision trees. This demonstrates the effectiveness of our ontology in identifying rate-changing events. We also tested the effectiveness of the proposed feature mining methods on detecting direct evidence of events. Experimental results show that the machine learning method on these features achieves an F1-score of 71.43%. The manually labeled corpus of events relating to rate changes in transcriptional regulation for yeast is available in https://sites.google.com/site/wentingntu/data. The created ontologies summarized both biological causes of rate changes in transcriptional regulation and corresponding positive and negative textual patterns from the corpus. They are demonstrated to be effective in identifying rate-changing events, which shows the benefits of combining textual patterns and biological knowledge on extracting complex biological events.

  18. Assessing differential expression in two-color microarrays: a resampling-based empirical Bayes approach.

    PubMed

    Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D

    2013-01-01

    Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the Significance Analysis of Microarrays method fold change criteria are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across the Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rates controls between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed. The Resampling-based empirical Bayes Methods also offers higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
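
    A simplified stand-in for the combination of variance shrinkage and resampling follows. The prior variance s0² and prior degrees of freedom d0 are fixed here for illustration, whereas the actual Resampling-based empirical Bayes Methods estimate such quantities from the data; the synthetic two-group data and the permutation scheme are likewise assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def moderated_t(x, y, s0_sq=0.05, d0=4.0):
    """Gene-wise t-statistics with the per-gene variance shrunk toward a prior."""
    nx, ny = x.shape[1], y.shape[1]
    diff = x.mean(axis=1) - y.mean(axis=1)
    pooled = (x.var(axis=1, ddof=1) * (nx - 1) + y.var(axis=1, ddof=1) * (ny - 1)) / (nx + ny - 2)
    d = nx + ny - 2
    s_mod = np.sqrt((d0 * s0_sq + d * pooled) / (d0 + d))   # shrunken standard deviation
    return diff / (s_mod * np.sqrt(1.0 / nx + 1.0 / ny))

# Null distribution by resampling (permuting) group labels.
genes, n = 500, 6
x, y = rng.normal(size=(genes, n)), rng.normal(size=(genes, n))
observed = moderated_t(x, y)
pooled_cols = np.hstack([x, y])
null = []
for _ in range(200):
    idx = rng.permutation(2 * n)
    null.append(moderated_t(pooled_cols[:, idx[:n]], pooled_cols[:, idx[n:]]))
null = np.abs(np.concatenate(null))
p_values = np.array([(null >= abs(t)).mean() for t in observed])
```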

  19. Health care facility-based decontamination of victims exposed to chemical, biological, and radiological materials.

    PubMed

    Koenig, Kristi L; Boatright, Connie J; Hancock, John A; Denny, Frank J; Teeter, David S; Kahn, Christopher A; Schultz, Carl H

    2008-01-01

    Since the US terrorist attacks of September 11, 2001, concern regarding use of chemical, biological, or radiological weapons is heightened. Many victims of such an attack would present directly to health care facilities without first undergoing field decontamination. This article reviews basic tenets and recommendations for health care facility-based decontamination, including regulatory concerns, types of contaminants, comprehensive decontamination procedures (including crowd control, triage, removal of contaminated garments, cleaning of body contaminants, and management of contaminated materials and equipment), and a discussion of methods to achieve preparedness.

  20. Site selection model for new metro stations based on land use

    NASA Astrophysics Data System (ADS)

    Zhang, Nan; Chen, Xuewu

    2015-12-01

    Since the construction of a metro system generally lags behind the development of urban land use, the sites of metro stations should adapt to their surroundings, an issue rarely discussed in previous research on station layout. This paper proposes a new site selection model to find the best location for a metro station, establishing an indicator system based on land use and combining AHP with the entropy weight method to obtain a ranking of candidate schemes. The feasibility and efficiency of this model have been validated by evaluating Nanjing Shengtai Road station and other potential sites.
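
    The entropy weight part of such a scheme can be sketched compactly: indicators whose values vary more across the candidate sites receive larger weights. The indicator matrix below is a made-up example, not data from the study.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for an (alternatives x indicators) matrix of non-negative scores."""
    P = X / X.sum(axis=0)                              # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    entropy = -(P * logP).sum(axis=0) / np.log(n)      # entropy of each indicator
    divergence = 1.0 - entropy                         # degree of differentiation
    return divergence / divergence.sum()

X = np.array([[0.8, 0.3, 0.5],
              [0.6, 0.9, 0.5],
              [0.4, 0.7, 0.5]])
print(entropy_weights(X))   # the constant third indicator gets ~zero weight
```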

  1. [Smart therapeutics based on synthetic gene circuits].

    PubMed

    Peng, Shuguang; Xie, Zhen

    2017-03-25

    Synthetic biology has had an important impact on biological research since its birth. Applying ideas and methods borrowed from electrical engineering, synthetic biology has uncovered many regulatory mechanisms of living systems and has transformed and expanded a series of biological components. It therefore enables a wide range of biomedical applications, including new ideas for disease diagnosis and treatment. This review describes the latest advances in disease diagnosis and therapy based on mammalian-cell or bacterial synthetic gene circuits, and provides new ideas for the design of future smart therapies.

  2. Moments of inclination error distribution computer program

    NASA Technical Reports Server (NTRS)

    Myler, T. R.

    1981-01-01

    A FORTRAN coded computer program is described which calculates orbital inclination error statistics using a closed-form solution. This solution uses a data base of trajectory errors from actual flights to predict the orbital inclination error statistics. The Scott flight history data base consists of orbit insertion errors in the trajectory parameters - altitude, velocity, flight path angle, flight azimuth, latitude and longitude. The methods used to generate the error statistics are of general interest since they have other applications. Program theory, user instructions, output definitions, subroutine descriptions and detailed FORTRAN coding information are included.

  3. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    PubMed

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problem of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no reliable motion vector is available, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for video sequences that contain multiple and fast motions.

  4. A pilot study of river flow prediction in urban area based on phase space reconstruction

    NASA Astrophysics Data System (ADS)

    Adenan, Nur Hamiza; Hamid, Nor Zila Abd; Mohamed, Zulkifley; Noorani, Mohd Salmi Md

    2017-08-01

    River flow prediction is closely related to urban hydrology and can provide information for solving problems such as flooding in urban areas. The daily river flow of the Klang River, Malaysia, was chosen to be forecast in this pilot study, which is based on phase space reconstruction. The reconstruction of phase space maps a single variable of river flow data into an m-dimensional phase space, in which the dimension (m) is chosen from the optimal values of the Cao method. The results of the phase space reconstruction were then used in the forecasting process using the local linear approximation method. From our investigation, the river flow of the Klang River is chaotic according to the analysis with the Cao method. The overall results provide good values of the correlation coefficient. The value of the correlation coefficient is acceptable, since the area of the case study is influenced by many factors. Therefore, this pilot study may be proposed to forecast daily river flow data with the purpose of providing information about the flow of the river system in an urban area.
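
    A compact sketch of delay embedding followed by a simple nearest-neighbour predictor is given below. The embedding dimension m and delay tau would come from the Cao method in the study; here they are simply assumed, and the nearest-neighbour mean is a simpler stand-in for the local linear approximation used by the authors.

```python
import numpy as np

def delay_embed(series, m, tau):
    """Map a scalar series into an m-dimensional delay-coordinate phase space."""
    n = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])

def predict_next(series, m=3, tau=1, k=5):
    X = delay_embed(series, m, tau)
    last = X[-1]
    dist = np.linalg.norm(X[:-1] - last, axis=1)
    neighbours = np.argsort(dist)[:k]
    # average of the values to which the k nearest states evolved one step later
    return series[neighbours + (m - 1) * tau + 1].mean()

flow = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.default_rng(2).normal(size=400)
print(predict_next(flow))
```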

  5. On the wavelet optimized finite difference method

    NASA Technical Reports Server (NTRS)

    Jameson, Leland

    1994-01-01

    When one considers the effect in the physical space, Daubechies-based wavelet methods are equivalent to finite difference methods with grid refinement in regions of the domain where small scale structure exists. Adding a wavelet basis function at a given scale and location where one has a correspondingly large wavelet coefficient is, essentially, equivalent to adding a grid point, or two, at the same location and at a grid density which corresponds to the wavelet scale. This paper introduces a wavelet optimized finite difference method which is equivalent to a wavelet method in its multiresolution approach but which does not suffer from difficulties with nonlinear terms and boundary conditions, since all calculations are done in the physical space. With this method one can obtain an arbitrarily good approximation to a conservative difference method for solving nonlinear conservation laws.
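
    The refinement idea can be sketched as follows, assuming PyWavelets is available: grid locations whose Daubechies detail coefficients are large flag regions of small-scale structure where the finite-difference grid should be refined. The mapping from coefficient index to grid interval and the threshold are rough illustrative choices, not the paper's procedure.

```python
import numpy as np
import pywt

def refinement_mask(u, wavelet="db4", level=3, keep=0.05):
    """Mark grid points lying near large wavelet detail coefficients."""
    coeffs = pywt.wavedec(u, wavelet, level=level)
    mask = np.zeros(len(u), dtype=bool)
    for detail in coeffs[1:]:                           # detail coefficients, coarse to fine
        threshold = keep * np.abs(detail).max()
        scale = len(u) // len(detail)
        for j in np.nonzero(np.abs(detail) > threshold)[0]:
            mask[j * scale : (j + 1) * scale] = True    # refine near this coefficient
    return mask

x = np.linspace(-1, 1, 512)
u = np.tanh(50 * x)                 # a sharp front near x = 0
print(refinement_mask(u).sum())     # only a small fraction of points is flagged
```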

  6. Incorporating information on predicted solvent accessibility to the co-evolution-based study of protein interactions.

    PubMed

    Ochoa, David; García-Gutiérrez, Ponciano; Juan, David; Valencia, Alfonso; Pazos, Florencio

    2013-01-27

    A widespread family of methods for studying and predicting protein interactions using sequence information is based on co-evolution, quantified as similarity of phylogenetic trees. Part of the co-evolution observed between interacting proteins could be due to co-adaptation caused by inter-protein contacts. In this case, the co-evolution is expected to be more evident when evaluated on the surface of the proteins or the internal layers close to it. In this work we study the effect of incorporating information on predicted solvent accessibility to three methods for predicting protein interactions based on similarity of phylogenetic trees. We evaluate the performance of these methods in predicting different types of protein associations when trees based on positions with different characteristics of predicted accessibility are used as input. We found that predicted accessibility improves the results of two recent versions of the mirrortree methodology in predicting direct binary physical interactions, while it neither improves these methods, nor the original mirrortree method, in predicting other types of interactions. That improvement comes at no cost in terms of applicability since accessibility can be predicted for any sequence. We also found that predictions of protein-protein interactions are improved when multiple sequence alignments with a richer representation of sequences (including paralogs) are incorporated in the accessibility prediction.

  7. A novel sulfate-reducing bacteria detection method based on inhibition of cysteine protease activity.

    PubMed

    Qi, Peng; Zhang, Dun; Wan, Yi

    2014-11-01

    Sulfate-reducing bacteria (SRB) have been extensively studied in corrosion and environmental science. However, fast enumeration of an SRB population is still a difficult task. This work presents a novel specific SRB detection method based on inhibition of cysteine protease activity. The hydrolytic activity of cysteine protease was inhibited by taking advantage of sulfide, the characteristic metabolic product of SRB, to attack the active cysteine thiol group in the cysteine protease catalytic site. The active thiol S-sulfhydration process could be used for SRB detection, since the amount of sulfide accumulated in the culture medium was highly related to the initial bacterial concentration. The working conditions of the cysteine protease have been optimized to obtain better detection capability, and the SRB detection performances have been evaluated in this work. The proposed SRB detection method based on inhibition of cysteine protease activity avoided the use of biological recognition elements. In addition, compared with the widely used most probable number (MPN) method, which takes at least 15 days to complete the whole detection process, the method based on inhibition of papain activity could detect SRB within 2 days, with a detection limit of 5.21 × 10² cfu mL⁻¹. The detection time for SRB population quantitative analysis was greatly shortened. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion.

    PubMed

    Li, Hui; Jing, Linhai; Tang, Yunwei

    2017-01-05

    Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. With respect to the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performances of eight state-of-the-art pansharpening methods were assessed in this study on six datasets from three WV-2 scenes, using both quality indices and information indices, along with visual inspection. The normalized difference vegetation index, normalized difference water index, and morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based, adaptive Gram-Schmidt, and Generalized Laplacian pyramid (GLP) methods using the enhanced spectral distortion minimal model and the enhanced context-based decision model are good choices for producing fused WV-2 images used for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the other methods if the fused images will be used for applications related to vegetation and water bodies.

  9. New spatial upscaling methods for multi-point measurements: From normal to p-normal

    NASA Astrophysics Data System (ADS)

    Liu, Feng; Li, Xin

    2017-12-01

    Careful attention must be given to determining whether the geophysical variables of interest are normally distributed, since the assumption of a normal distribution may not accurately reflect the probability distribution of some variables. As a generalization of the normal distribution, the p-normal distribution and its corresponding maximum likelihood estimation (the least power estimation, LPE) were introduced in upscaling methods for multi-point measurements. Six methods, including three normal-based methods, i.e., arithmetic average, least square estimation, block kriging, and three p-normal-based methods, i.e., LPE, geostatistics LPE and inverse distance weighted LPE are compared in two types of experiments: a synthetic experiment to evaluate the performance of the upscaling methods in terms of accuracy, stability and robustness, and a real-world experiment to produce real-world upscaling estimates using soil moisture data obtained from multi-scale observations. The results show that the p-normal-based methods produced lower mean absolute errors and outperformed the other techniques due to their universality and robustness. We conclude that introducing appropriate statistical parameters into an upscaling strategy can substantially improve the estimation, especially if the raw measurements are disorganized; however, further investigation is required to determine which parameter is the most effective among variance, spatial correlation information and parameter p.
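
    The least power estimation at the core of the p-normal-based methods can be sketched in a few lines: the block estimate mu minimizes the sum of |x_i - mu|^p. For p = 2 this reduces to the arithmetic mean, while smaller p down-weights outlying points. The sample values below are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lpe(samples, p):
    """Least power estimate: argmin_mu sum |x_i - mu|^p."""
    objective = lambda mu: np.sum(np.abs(samples - mu) ** p)
    bounds = (samples.min(), samples.max())
    return minimize_scalar(objective, bounds=bounds, method="bounded").x

soil_moisture = np.array([0.21, 0.22, 0.20, 0.23, 0.45])   # one disorganized point
print(lpe(soil_moisture, p=2.0))   # ~0.262, the mean, pulled up by the outlier
print(lpe(soil_moisture, p=1.2))   # closer to the bulk of the measurements
```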

  10. Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion

    PubMed Central

    Li, Hui; Jing, Linhai; Tang, Yunwei

    2017-01-01

    Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. With respect to the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performances of eight state-of-the-art pansharpening methods were assessed in this study on six datasets from three WV-2 scenes, using both quality indices and information indices, along with visual inspection. The normalized difference vegetation index, normalized difference water index, and morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based, adaptive Gram-Schmidt, and Generalized Laplacian pyramid (GLP) methods using the enhanced spectral distortion minimal model and the enhanced context-based decision model are good choices for producing fused WV-2 images used for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the other methods if the fused images will be used for applications related to vegetation and water bodies. PMID:28067770

  11. Modified-Fibonacci-Dual-Lucas method for earthquake prediction

    NASA Astrophysics Data System (ADS)

    Boucouvalas, A. C.; Gkasios, M.; Tselikas, N. T.; Drakatos, G.

    2015-06-01

    The FDL method makes use of Fibonacci, Dual and Lucas numbers and has shown considerable success in predicting earthquake events locally as well as globally. Predicting the location of the epicenter of an earthquake is one difficult challenge, the others being the timing and magnitude. One technique for predicting the onset of earthquakes is the use of cycles and the discovery of periodicity; the reported FDL method belongs to this category. The basis of the reported FDL method is the creation of FDL future dates based on the onset dates of significant earthquakes, the assumption being that each occurred earthquake discontinuity can be thought of as a generating source of an FDL time series. The connection between past earthquakes and future earthquakes based on FDL numbers has also been reported with sample earthquakes since 1900. Using clustering methods it has been shown that significant earthquakes (>6.5R) can be predicted within a very good accuracy window (±1 day). In this contribution we present an improved modification of the FDL method, the MFDL method, which performs better than the FDL. We use the FDL numbers to develop possible earthquake dates, but with the important difference that the starting seed date is a trigger planetary aspect prior to the earthquake. Typical planetary aspects are Moon conjunct Sun, Moon opposite Sun, and Moon conjunct or opposite the North or South Nodes. In order to test the improvement of the method we used all >8R earthquakes recorded since 1900 (86 earthquakes from USGS data). We developed the FDL numbers for each of those seeds and examined the earthquake hit rates (for a window of 3 days, i.e. ±1 day of the target date) for >6.5R events. The successes are counted for each one of the 86 earthquake seeds and we compare the MFDL method with the FDL method. In every case we find improvement when the starting seed date is on the planetary trigger date prior to the earthquake. We observe no improvement only when a planetary trigger coincided with the earthquake date, in which case the FDL method coincides with the MFDL. Based on the MFDL method we present a prediction method capable of predicting global events or localized earthquakes, and we discuss the accuracy of the method as far as the prediction and location parts are concerned. We show example calendar-style predictions for global events as well as for the Greek region using planetary alignment seeds.

  12. Object-color-signal prediction using wraparound Gaussian metamers.

    PubMed

    Mirzaei, Hamidreza; Funt, Brian

    2014-07-01

    In 2009, Alexander Logvinenko introduced an object-color atlas based on idealized reflectances called rectangular metamers. For a given color signal, the atlas specifies a unique reflectance that is metameric to it under the given illuminant. The atlas is complete and illuminant invariant, but not possible to implement in practice. He later introduced a parametric representation of the object-color atlas based on smoother "wraparound Gaussian" functions. In this paper, these wraparound Gaussians are used to predict illuminant-induced color signal changes. The method proposed in this paper is based on computationally "relighting" the metameric reflectance to determine what its color signal would be under any other illuminant. Since that reflectance is in the metamer set, the prediction is also physically realizable, which cannot be guaranteed for predictions obtained via von Kries scaling. Testing on Munsell spectra and a multispectral image shows that the proposed method outperforms predictions based on both von Kries scaling and the Bradford transform.

  13. Population dynamics among Asmat hunter-gatherers of New Guinea: data, methods, comparisons.

    PubMed

    Van Arsdale, P W

    1978-12-01

    Since 1953 the Asmat hunter-gatherers of Irian Jaya have been experiencing rapid cultural change, yet demographically they still can be classified as "living primitives." Methods of nonstandard data analysis are used in an effort to provide specific information on age-sex structure, fertility, birthrates, death rates, population growth, internal migration, and life expectancy and to aid in the development of a 2-part model of population growth encompassing the immediate precontact and contact eras. The population data upon which the discussion is based were obtained in 1973 and 1974 as part of a broader field study that aimed at assessing the impact of externally induced culture change. Special attention is given to the continuing although reduced impact of infanticide. Brief comparisons with other Melanesian and 3rd world societies are presented; the Asmat average annual growth rate of 1.5% since 1st permanent contact in 1953 contrasts with the generally higher rates reported for most of these other groups.

  14. A Horizontal Tilt Correction Method for Ship License Numbers Recognition

    NASA Astrophysics Data System (ADS)

    Liu, Baolong; Zhang, Sanyuan; Hong, Zhenjie; Ye, Xiuzi

    2018-02-01

    An automatic ship license number (SLN) recognition system plays a significant role in intelligent waterway transportation systems, since it can be used to identify ships by recognizing the characters in SLNs. Tilt occurs frequently in many SLNs because the monitors and the ships usually have large vertical or horizontal angles, which decreases the accuracy and robustness of an SLN recognition system significantly. In this paper, we present a horizontal tilt correction method for SLNs. For an input tilted SLN image, the proposed method accomplishes the correction task through three main steps. First, an MSER-based character center-point computation algorithm is designed to compute the accurate center-points of the characters contained in the input SLN image. Second, an L1-L2 distance-based straight line is fitted to the computed center-points using an M-estimator algorithm; the tilt angle is estimated at this stage. Finally, based on the computed tilt angle, an affine rotation is applied to correct the input SLN horizontally. The proposed method is tested on 200 tilted SLN images and proves to be effective, with a tilt correction rate of 80.5%.
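
    The correction step can be sketched as follows, with a plain least-squares fit standing in for the paper's M-estimator and the L1-L2 distance: fit a straight line to the character centre points, convert its slope to a tilt angle, and rotate the plate image back toward horizontal. The synthetic image and centre points are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def correct_horizontal_tilt(image, centre_points):
    xs, ys = centre_points[:, 0], centre_points[:, 1]
    slope, _ = np.polyfit(xs, ys, deg=1)           # least-squares line through centre points
    angle_deg = np.degrees(np.arctan(slope))       # tilt angle of the character baseline
    # rotate by the estimated angle so the baseline becomes (roughly) horizontal
    return ndimage.rotate(image, angle_deg, reshape=False, order=1), angle_deg

image = np.zeros((100, 300))
centres = np.array([[20, 40], [60, 44], [100, 48], [140, 52], [180, 56]])
_, angle = correct_horizontal_tilt(image, centres)
print(round(angle, 1))   # ~5.7 degrees for this synthetic tilt
```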

  15. Prediction of Heterodimeric Protein Complexes from Weighted Protein-Protein Interaction Networks Using Novel Features and Kernel Functions

    PubMed Central

    Ruan, Peiying; Hayashida, Morihiro; Maruyama, Osamu; Akutsu, Tatsuya

    2013-01-01

    Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify sets of proteins that form complexes. For that purpose, many methods for predicting protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size greater than three, because they are often based on some density measure of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, account for a large fraction of the known complexes recorded in several comprehensive databases. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on reliability. Furthermore, we make use of prior knowledge on protein domains to develop feature space mappings, a domain composition kernel, and its combination kernel with our proposed features. We perform ten-fold cross-validation computational experiments. These results suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes. PMID:23776458

  16. Intelligent Diagnosis Method for Rotating Machinery Using Dictionary Learning and Singular Value Decomposition.

    PubMed

    Han, Te; Jiang, Dongxiang; Zhang, Xiaochen; Sun, Yankui

    2017-03-27

    Rotating machinery is widely used in industrial applications. With the trend towards more precise and more critical operating conditions, mechanical failures may easily occur. Condition monitoring and fault diagnosis (CMFD) technology is an effective tool to enhance the reliability and security of rotating machinery. In this paper, an intelligent fault diagnosis method based on dictionary learning and singular value decomposition (SVD) is proposed. First, the dictionary learning scheme generates an adaptive dictionary whose atoms reveal the underlying structure of the raw signals; essentially, dictionary learning is employed as an adaptive feature extraction method that requires no prior knowledge. Second, the singular value sequence of the learned dictionary matrix is used to form the feature vector. Since this vector is of high dimensionality, a simple and practical principal component analysis (PCA) is applied to reduce its dimensionality. Finally, the K-nearest neighbor (KNN) algorithm is adopted to identify and classify fault patterns automatically. Two experimental case studies are investigated to corroborate the effectiveness of the proposed method in intelligent diagnosis of rotating machinery faults. The comparison analysis validates that the dictionary learning-based matrix construction approach outperforms the mode decomposition-based methods in terms of capacity and adaptability for feature extraction.
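
    A pipeline of this shape can be sketched with scikit-learn pieces standing in for the paper's implementation: learn a dictionary from the segments of each vibration signal, take the singular values of the learned dictionary as the feature vector, reduce with PCA and classify with KNN. The synthetic signals, segment length and atom count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(3)

def signal_features(signal, segment_len=64, n_atoms=16):
    """Singular value sequence of a dictionary learned from one signal."""
    segments = signal[: len(signal) // segment_len * segment_len].reshape(-1, segment_len)
    dictionary = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0).fit(segments)
    return np.linalg.svd(dictionary.components_, compute_uv=False)

signals = [rng.normal(scale=1 + (i % 2), size=4096) for i in range(20)]   # two synthetic "conditions"
labels = [i % 2 for i in range(20)]
X = np.array([signal_features(s) for s in signals])
X = PCA(n_components=4).fit_transform(X)
knn = KNeighborsClassifier(n_neighbors=3).fit(X[:16], labels[:16])
print(knn.score(X[16:], labels[16:]))
```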

  17. Revenge versus rapport: Interrogation, terrorism, and torture.

    PubMed

    Alison, Laurence; Alison, Emily

    2017-04-01

    This review begins with the historical context of harsh interrogation methods that have been used repeatedly since the Second World War. This is despite the legal, ethical and moral sanctions against them and the lack of evidence for their efficacy. Revenge-motivated interrogations (Carlsmith & Sood, 2009) regularly occur in high conflict, high uncertainty situations and where there is dehumanization of the enemy. These methods are diametrically opposed to the humanization process required for adopting rapport-based methods-for which there is an increasing corpus of studies evidencing their efficacy. We review this emerging field of study and show how rapport-based methods rely on building alliances and involve a specific set of interpersonal skills on the part of the interrogator. We conclude with 2 key propositions: (a) for psychologists to firmly maintain the Hippocratic Oath of "first do no harm," irrespective of perceived threat and uncertainty, and (b) for wider recognition of the empirical evidence that rapport-based approaches work and revenge tactics do not. Proposition (a) is directly in line with fundamental ethical principles of practice for anyone in a caring profession. Proposition (b) is based on the requirement for psychology to protect and promote human welfare and to base conclusions on objective evidence. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Video-based respiration monitoring with automatic region of interest detection.

    PubMed

    Janssen, Rik; Wang, Wenjin; Moço, Andreia; de Haan, Gerard

    2016-01-01

    Vital signs monitoring is ubiquitous in clinical environments and emerging in home-based healthcare applications. Still, since current monitoring methods require uncomfortable sensors, respiration rate remains the least measured vital sign. In this paper, we propose a video-based respiration monitoring method that automatically detects a respiratory region of interest (RoI) and signal using a camera. Based on the observation that respiration-induced chest/abdomen motion is an independent motion system in a video, our basic idea is to exploit the intrinsic properties of respiration to find the respiratory RoI and extract the respiratory signal via motion factorization. We created a benchmark dataset containing 148 video sequences obtained on adults under challenging conditions and also neonates in the neonatal intensive care unit (NICU). The measurements obtained by the proposed video respiration monitoring (VRM) method are not significantly different from the reference methods (guided breathing or contact-based ECG; p-value = 0.6), and explain more than 99% of the variance of the reference values with low limits of agreement (-2.67 to 2.81 bpm). VRM seems to provide a valid alternative to ECG in confined motion scenarios, though precision may be reduced for neonates. More studies are needed to validate VRM under challenging recording conditions, including upper-body motion types.

  19. Reanalysis of a 15-year Archive of IMPROVE Samples

    NASA Astrophysics Data System (ADS)

    Hyslop, N. P.; White, W. H.; Trzepla, K.

    2013-12-01

    The IMPROVE (Interagency Monitoring of PROtected Visual Environments) network monitors aerosol concentrations at 170 remote sites throughout the United States. Twenty-four-hour filter samples of particulate matter are collected every third day and analyzed for chemical composition. About 30 of the sites have operated continuously since 1988, and the sustained data record (http://views.cira.colostate.edu/web/) offers a unique window on regional aerosol trends. All elemental analyses have been performed by Crocker Nuclear Laboratory at the University of California in Davis, and sample filters collected since 1995 are archived on campus. The suite of reported elements has remained constant, but the analytical methods employed for their determination have evolved. For example, the elements Na - Mn were determined by PIXE until November 2001, then by XRF analysis in a He-flushed atmosphere through 2004, and by XRF analysis in vacuum since January 2005. In addition to these fundamental changes, incompletely documented operational factors such as detector performance and calibration details have introduced variations in the measurements. Because the past analytical methods were non-destructive, the archived filters can be re-analyzed with the current analytical systems and protocols. The 15-year sample archives from Great Smoky Mountains, Mount Rainier, and Point Reyes National Parks were selected for reanalysis. The agreement between the new analyses and the original determinations varies with element and analytical era (Figure 1). Temporal trends for some elements are affected by these changes in measurement technique while others are not (Figure 2). Figure 1. Repeatability of analyses for sulfur and vanadium at Great Smoky Mountains National Park. Each point shows the ratio of mass loadings determined by the original analysis and the recent reanalysis; major method distinctions are indicated at the top. Figure 2. Trends, based on Theil-Sen regression, in lead concentrations based on the original and reanalysis data.

  20. Modeling Sensor Reliability in Fault Diagnosis Based on Evidence Theory

    PubMed Central

    Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong

    2016-01-01

    Sensor data fusion plays an important role in fault diagnosis. Dempster–Shafer (D-S) evidence theory is widely used in fault diagnosis, since it is efficient at combining evidence from different sensors. However, when the evidence is highly conflicting, it may produce a counterintuitive result. To address this issue, a new method is proposed in this paper. Not only the static sensor reliability, but also the dynamic sensor reliability is taken into consideration. The evidence distance function and the belief entropy are combined to obtain the dynamic reliability of each sensor report. A weighted averaging method is adopted to modify the conflicting evidence by assigning different weights to evidence according to sensor reliability. The proposed method has better performance in conflict management and fault diagnosis because the information volume of each sensor report is taken into consideration. An application in fault diagnosis based on sensor fusion is illustrated to show the efficiency of the proposed method. The results show that the proposed method improves the accuracy of fault diagnosis from 81.19% to 89.48% compared to the existing methods. PMID:26797611
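
    For context, the standard combination step of Dempster–Shafer theory is sketched below; the reliability-based weighted averaging proposed above would be applied to the sensor masses before a combination like this one. The fault labels and mass values are made up for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2                      # mass assigned to the empty set
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

F1, F2 = frozenset({"F1"}), frozenset({"F2"})
sensor_a = {F1: 0.8, F1 | F2: 0.2}
sensor_b = {F1: 0.6, F2: 0.3, F1 | F2: 0.1}
print(dempster_combine(sensor_a, sensor_b))          # belief concentrates on fault F1
```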

  1. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  2. A rotation-translation invariant molecular descriptor of partial charges and its use in ligand-based virtual screening

    PubMed Central

    2014-01-01

    Background: Measures of similarity for chemical molecules have been developed since the dawn of chemoinformatics. Molecular similarity has been measured by a variety of methods, including molecular descriptor based similarity, common molecular fragments, graph matching, and 3D methods such as shape matching. Similarity measures are widespread in practice and have proven to be useful in drug discovery. Because of our interest in electrostatics and high throughput ligand-based virtual screening, we sought to exploit the information contained in the atomic coordinates and partial charges of a molecule. Results: A new molecular descriptor based on partial charges is proposed. It uses the autocorrelation function and linear binning to encode all atoms of a molecule into two rotation-translation invariant vectors. Combined with a scoring function, the descriptor allows a database of compounds to be rank-ordered versus a query molecule. The proposed implementation is called ACPC (AutoCorrelation of Partial Charges) and is released in open source. Extensive retrospective ligand-based virtual screening experiments were performed and compared with other methods in order to validate the method and the associated protocol. Conclusions: While it is a simple method, it performed remarkably well in experiments. At an average speed of 1649 molecules per second, it reached an average median area under the curve of 0.81 on 40 different targets, hence validating the proposed protocol and implementation. PMID:24887178
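
    The descriptor idea can be sketched as an autocorrelation of partial charges over distance bins: for all atom pairs, the product of partial charges is accumulated into a bin indexed by interatomic distance, giving a rotation- and translation-invariant vector. The bin width, the nearest-bin assignment (ACPC itself uses linear binning across adjacent bins) and the single output vector (rather than two) are simplifications of the published method.

```python
import numpy as np

def charge_autocorrelation(coords, charges, bin_width=0.5, max_dist=12.0):
    """Distance-binned autocorrelation of partial charges (simplified)."""
    n_bins = int(max_dist / bin_width)
    descriptor = np.zeros(n_bins)
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(coords[i] - coords[j])
            b = int(d / bin_width)
            if b < n_bins:
                descriptor[b] += charges[i] * charges[j]
    return descriptor

coords = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0], [2.4, 0.8, 0.0]])
charges = np.array([-0.4, 0.3, 0.1])
print(charge_autocorrelation(coords, charges)[:6])
```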

  3. Pedestrian visual recommendation in Kertanegara - Semeru corridor in Malang City

    NASA Astrophysics Data System (ADS)

    Cosalia, V. B.

    2017-06-01

    The streetscape often forms the first impression of an urban area. One streetscape that deserves attention is the corridor of Jl. Kertanegara - Semeru, since this road corridor has a strong character and is one of the main axes of Malang city. This research aims to assess the visual quality of Jl. Kertanegara - Semeru and to produce appropriate design recommendations based on the pedestrian's visual perception. The methods used in this research are Scenic Beauty Estimation (SBE) and a historical study. Several variables are used: spatial scale, visual flexibility, beauty, emphasis, balance and dominance. Based on these variables, pedestrians acting as respondents carried out the assessment. The SBE results show that the visual quality of the Kertanegara - Semeru corridor is fairly good: 10 photos of low visual quality were found in Jl. Semeru, and 14 photos of high visual quality in Jl. Kertanegara, Jl. Tugu and Jl. Kahuripan. Design recommendations for the parts of the landscape with low visual quality were then made using the historical study and the high-visual-quality references.

  4. Track circuit diagnosis for railway lines equipped with an automatic block signalling system

    NASA Astrophysics Data System (ADS)

    Spunei, E.; Piroi, I.; Muscai, C.; Răduca, E.; Piroi, F.

    2018-01-01

    This work presents a diagnosis method for detecting track circuit failures on a railway line equipped with an Automatic Block Signalling installation. The diagnosis method uses the installation's electrical schematics, based on which a series of diagnosis charts have been created. Further, the diagnosis charts were used to develop a software package, CDCBla, which substantially reduces the diagnosis time and human error during failure remedies. The proposed method can also be used as a training package for the maintenance staff. Since the diagnosis method does not need signal or measurement inputs, using it requires no additional IT knowledge, and it can be deployed on a mobile computing device (tablet, smart phone).

  5. An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs

    DOE PAGES

    Zhu, Zizhong; Wu, Ping; Wu, Shunqing; ...

    2017-05-15

    An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs could always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found, compared to those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.

  6. Content Sharing Based on Personal Information in Virtually Secured Space

    NASA Astrophysics Data System (ADS)

    Sohn, Hosik; Ro, Yong Man; Plataniotis, Kostantinos N.

    User generated contents (UGC) are shared in an open space like social media where users can upload and consume contents freely. Since the access of contents is not restricted, the contents could be delivered to unwanted users or misused sometimes. In this paper, we propose a method for sharing UGCs securely based on the personal information of users. With the proposed method, virtual secure space is created for contents delivery. The virtual secure space allows UGC creator to deliver contents to users who have similar personal information and they can consume the contents without any leakage of personal information. In order to verify the usefulness of the proposed method, the experiment was performed where the content was encrypted with personal information of creator, and users with similar personal information have decrypted and consumed the contents. The results showed that UGCs were securely shared among users who have similar personal information.

  7. Egg-Independent Influenza Vaccines and Vaccine Candidates

    PubMed Central

    Manini, Ilaria; Pozzi, Teresa; Rossi, Stefania; Montomoli, Emanuele

    2017-01-01

    Vaccination remains the principal way to control seasonal infections and is the most effective method of reducing influenza-associated morbidity and mortality. Since the 1940s, the main method of producing influenza vaccines has been an egg-based production process. However, in the event of a pandemic, this method has a significant limitation, as the time lag from strain isolation to final dose formulation and validation is six months. Indeed, production in eggs is a relatively slow process and production yields are both unpredictable and highly variable from strain to strain. In particular, if the next influenza pandemic were to arise from an avian influenza virus, and thus reduce the egg-laying hen population, there would be a shortage of embryonated eggs available for vaccine manufacturing. Although the production of egg-derived vaccines will continue, new technological developments have generated a cell-culture-based influenza vaccine and other more recent platforms, such as synthetic influenza vaccines. PMID:28718786

  8. Error Estimation for the Linearized Auto-Localization Algorithm

    PubMed Central

    Guevara, Jorge; Jiménez, Antonio R.; Prieto, Jose Carlos; Seco, Fernando

    2012-01-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons’ positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first order Taylor approximation of the equations. Since the method depends on such approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method. PMID:22736965
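
    The first-order (Taylor) error-propagation idea used here can be sketched generically: if a derived quantity d = f(x) depends on measured quantities x with covariance C, its variance is approximately J C J^T, with J the Jacobian of f. The example function, the numerical Jacobian and the covariance values below are illustrative assumptions, not the LAL equations themselves.

```python
import numpy as np

def propagated_variance(f, x, cov, eps=1e-6):
    """First-order propagated variance of scalar f(x), with a numerical Jacobian."""
    x = np.asarray(x, dtype=float)
    jac = np.zeros_like(x)
    for i in range(len(x)):
        dx = np.zeros_like(x)
        dx[i] = eps
        jac[i] = (f(x + dx) - f(x - dx)) / (2 * eps)   # central difference
    return jac @ cov @ jac

# Example: uncertainty of the distance between two estimated beacon positions.
f = lambda p: np.hypot(p[0] - p[2], p[1] - p[3])
cov = np.diag([0.01, 0.01, 0.02, 0.02])                # assumed position variances (m^2)
print(np.sqrt(propagated_variance(f, [0, 0, 4, 3], cov)))   # ~0.17 m
```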

  9. Handwriting individualization using distance and rarity

    NASA Astrophysics Data System (ADS)

    Tang, Yi; Srihari, Sargur; Srinivasan, Harish

    2012-01-01

    Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than the distance method.

  10. Discovery of pyridine-based agrochemicals by using Intermediate Derivatization Methods.

    PubMed

    Guan, Ai-Ying; Liu, Chang-Ling; Sun, Xu-Feng; Xie, Yong; Wang, Ming-An

    2016-02-01

    Pyridine-based compounds have been playing a crucial role as agrochemicals or pesticides including fungicides, insecticides/acaricides and herbicides, etc. Since most of the agrochemicals listed in the Pesticide Manual were discovered through screening programs that relied on trial-and-error testing and new agrochemical discovery is not benefiting as much from the in silico new chemical compound identification/discovery techniques used in pharmaceutical research, it has become more important to find new methods to enhance the efficiency of discovering novel lead compounds in the agrochemical field to shorten the time of research phases in order to meet changing market requirements. In this review, we selected 18 representative known agrochemicals containing a pyridine moiety and extrapolate their discovery from the perspective of Intermediate Derivatization Methods in the hope that this approach will have greater appeal to researchers engaged in the discovery of agrochemicals and/or pharmaceuticals. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. An extended Lagrangian method for subsonic flows

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Loh, Ching Y.

    1992-01-01

    It is well known that fluid motion can be specified by either the Eulerian or the Lagrangian description. Most Computational Fluid Dynamics (CFD) developments over the last three decades have been based on the Eulerian description, and considerable progress has been made. In particular, the upwind methods, inspired and guided by the work of Godunov, have met with many successes in dealing with complex flows, especially where discontinuities exist. However, this shock-capturing property has proven to be accurate only when the discontinuity is aligned with one of the grid lines, since most upwind methods are strictly formulated in a 1-D framework and only formally extended to multi-dimensions. Consequently, the attractive property of crisp resolution of these discontinuities is lost, and research on genuinely multi-dimensional approaches has only recently been undertaken by several leading researchers. Nevertheless, these approaches are still based on the Eulerian description.

  12. Surface Profile and Stress Field Evaluation using Digital Gradient Sensing Method

    DOE PAGES

    Miao, C.; Sundaram, B. M.; Huang, L.; ...

    2016-08-09

    Shape and surface topography evaluation from measured orthogonal slope/gradient data is of considerable engineering significance, since many full-field optical sensors and interferometers readily output accurate data of that kind. This has applications ranging from metrology of optical and electronic elements (lenses, silicon wafers, thin film coatings) to surface profile estimation and wave front and shape reconstruction, to name a few. In this context, a new methodology for surface profile and stress field determination is advanced here, based on a recently introduced non-contact, full-field optical method called digital gradient sensing (DGS), capable of measuring small angular deflections of light rays, coupled with a robust finite-difference-based least-squares integration (HFLI) scheme in the Southwell configuration. The method is demonstrated by evaluating (a) surface profiles of mechanically warped silicon wafers and (b) stress gradients near growing cracks in planar phase objects.
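
    The sketch below illustrates the finite-difference least-squares integration step in the Southwell configuration on a small grid, reconstructing a surface (up to a constant) from its two orthogonal slope maps; it is a plain dense least-squares stand-in, not the HFLI implementation used in the paper.

```python
import numpy as np

def southwell_integrate(sx, sy, dx=1.0, dy=1.0):
    """Least-squares reconstruction of z from slope maps sx = dz/dx, sy = dz/dy.

    Builds the Southwell finite-difference equations
        z[i, j+1] - z[i, j] = dx * (sx[i, j] + sx[i, j+1]) / 2
        z[i+1, j] - z[i, j] = dy * (sy[i, j] + sy[i+1, j]) / 2
    and solves them in the least-squares sense (z is recovered up to a constant).
    Dense solve, suitable for small grids only.
    """
    m, n = sx.shape
    idx = lambda i, j: i * n + j
    rows, cols, data, rhs = [], [], [], []
    eq = 0
    for i in range(m):
        for j in range(n - 1):
            rows += [eq, eq]; cols += [idx(i, j + 1), idx(i, j)]; data += [1.0, -1.0]
            rhs.append(dx * 0.5 * (sx[i, j] + sx[i, j + 1])); eq += 1
    for i in range(m - 1):
        for j in range(n):
            rows += [eq, eq]; cols += [idx(i + 1, j), idx(i, j)]; data += [1.0, -1.0]
            rhs.append(dy * 0.5 * (sy[i, j] + sy[i + 1, j])); eq += 1
    A = np.zeros((eq, m * n))
    A[rows, cols] = data
    z, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return z.reshape(m, n) - z.mean()

# Quick check on a synthetic paraboloid with analytically known slopes.
y, x = np.mgrid[0:20, 0:20].astype(float)
z_true = 0.01 * (x**2 + y**2)
z_rec = southwell_integrate(0.02 * x, 0.02 * y)
print(np.abs(z_rec - (z_true - z_true.mean())).max())   # ~ machine precision
```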

  13. Native conflict awared layout decomposition in triple patterning lithography using bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography becomes a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm. In TP, a layout is decomposed into three masks, each followed by exposure and etch/freezing processes. Previous works mostly focus on layout decomposition with minimal conflicts and stitches simultaneously. However, since any native conflict forces layout re-design/modification and re-running the time-consuming decomposition, an effective method that can detect native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, we match the conflict graph against a prebuilt colored library, so that the NCs can be located and highlighted quickly.
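
    As a rough illustration of what a native conflict is, the sketch below brute-forces a 3-coloring of a small per-bin conflict graph; if no legal coloring exists, the bin holds a native conflict. This is only a toy check, not the bin-based library matching method of the paper.

```python
from itertools import product

def has_native_conflict(n_nodes, conflict_edges):
    """Return True if the conflict graph admits no 3-coloring (native conflict).

    Brute force over 3**n colorings; only suitable for the small graphs that
    arise inside a single bin.
    """
    for coloring in product(range(3), repeat=n_nodes):
        if all(coloring[u] != coloring[v] for u, v in conflict_edges):
            return False  # a legal triple-patterning mask assignment exists
    return True

# K4 (four mutually conflicting features) cannot be 3-colored -> native conflict.
edges_k4 = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print(has_native_conflict(4, edges_k4))          # True
print(has_native_conflict(3, [(0, 1), (1, 2)]))  # False
```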

  14. An Efficient Scheme for Crystal Structure Prediction Based on Structural Motifs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Zizhong; Wu, Ping; Wu, Shunqing

    An efficient scheme based on structural motifs is proposed for the crystal structure prediction of materials. The key advantage of the present method is twofold: first, the degrees of freedom of the system are greatly reduced, since each structural motif, regardless of its size, can always be described by a set of parameters (R, θ, φ) with five degrees of freedom; second, the motifs could always appear in the predicted structures when the energies of the structures are relatively low. Both features make the present scheme a very efficient method for predicting desired materials. The method has been applied to the case of LiFePO4, an important cathode material for lithium-ion batteries. Numerous new structures of LiFePO4 have been found, compared to those currently available, demonstrating the reliability of the present methodology and illustrating the promise of the concept of structural motifs.

  15. Molecular beacon sequence design algorithm.

    PubMed

    Monroe, W Todd; Haselton, Frederick R

    2003-01-01

    A method based on Web-based tools is presented to design optimally functioning molecular beacons. Molecular beacons, fluorogenic hybridization probes, are a powerful tool for the rapid and specific detection of a particular nucleic acid sequence. However, their synthesis costs can be considerable. Since molecular beacon performance is based on its sequence, it is imperative to rationally design an optimal sequence before synthesis. The algorithm presented here uses simple Microsoft Excel formulas and macros to rank candidate sequences. This analysis is carried out using mfold structural predictions along with other free Web-based tools. For smaller laboratories where molecular beacons are not the focus of research, the public domain algorithm described here may be usefully employed to aid in molecular beacon design.

  16. Galactofuranose antigens, a target for diagnosis of fungal infections in humans

    PubMed Central

    Marino, Carla; Rinflerch, Adriana; de Lederkremer, Rosa M

    2017-01-01

    The use of biomarkers for the detection of fungal infections is of interest to complement histopathological and culture methods. Since the production of antibodies in immunocompromised patients is scarce, detection of a specific antigen could be effective for early diagnosis. D-Galactofuranose (Galf) is the antigenic epitope in glycoconjugates of several pathogenic fungi. Since Galf is not biosynthesized by mammals, it is an attractive candidate for diagnosis of infection. A monoclonal antibody that recognizes Galf is commercialized for detection of aspergillosis. The linkage of Galf in the natural glycans and the chemical structures of the synthesized Galf-containing oligosaccharides are described in this paper. The oligosaccharides could be used for the synthesis of artificial carbohydrate-based antigens, which have not yet been sufficiently exploited for diagnosis. PMID:28883999

  17. Method to estimate center of rigidity using vibration recordings

    USGS Publications Warehouse

    Safak, Erdal; Çelebi, Mehmet

    1990-01-01

    A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples are presented for the method: a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
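
    A minimal sketch of the coherence-minimization criterion is given below for one horizontal axis, assuming a recorded translational motion at a known sensor location, a rotational (torsional) record, and rigid-diaphragm kinematics; the synthetic records and the 1-D scan over candidate locations are illustrative only.

```python
import numpy as np
from scipy.signal import coherence

def coherence_integral(u_rec, theta, x_rec, x_cand, fs):
    """Integral over frequency of the coherence between the translational
    motion transferred to the candidate point and the rotational motion."""
    u_cand = u_rec - theta * (x_rec - x_cand)    # rigid-diaphragm kinematics
    f, coh = coherence(u_cand, theta, fs=fs, nperseg=256)
    return np.trapz(coh, f)

def center_of_rigidity(u_rec, theta, x_rec, candidates, fs):
    # The center of rigidity minimizes the frequency-integrated coherence.
    scores = [coherence_integral(u_rec, theta, x_rec, x, fs) for x in candidates]
    return candidates[int(np.argmin(scores))]

# Synthetic example: true center at x = 3.0 m, record taken at x = 10.0 m.
rng = np.random.default_rng(0)
fs, n = 100.0, 8192
theta = rng.standard_normal(n)                   # rotational record
u_cr = rng.standard_normal(n)                    # translation at the true center
u_rec = u_cr + theta * (10.0 - 3.0)              # translation at the sensor
print(center_of_rigidity(u_rec, theta, 10.0, np.linspace(0, 10, 101), fs))
```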

  18. Red ball ranging optimization based on dual camera ranging method

    NASA Astrophysics Data System (ADS)

    Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung

    2018-05-01

    In this paper, the process of positioning and moving to a target red ball by a NAO robot through its camera system is analyzed and improved using the dual camera ranging method. The single camera ranging method, which is adopted by the NAO robot, was first studied and experimented with. Since the existing error of the current NAO robot is not governed by a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments, which adopted the Hough circle method to identify the ball, while the HSV color space model was used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of the error in ball tracking from 0.68 to 0.20.
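
    The sketch below shows the standard stereo-triangulation relation that underlies dual camera ranging, Z = f·B/disparity; the focal length, baseline and pixel coordinates are hypothetical values, not NAO calibration data.

```python
def stereo_range(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth from two horizontally separated cameras: Z = f * B / disparity."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("target must project with positive disparity")
    return focal_px * baseline_m / disparity

# Example: 700 px focal length, 6 cm baseline, ball centers found by the
# Hough circle detector at 420 px and 385 px in the left and right images.
print(stereo_range(700.0, 0.06, 420.0, 385.0))   # 1.2 m
```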

  19. A method to analyze molecular tagging velocimetry data using the Hough transform.

    PubMed

    Sanchez-Gonzalez, R; McManamen, B; Bowersox, R D W; North, S W

    2015-10-01

    The development of a method to analyze molecular tagging velocimetry data based on the Hough transform is presented. This method, based on line fitting, parameterizes the grid lines "written" into a flowfield. Initial proof-of-principle illustration of this method was performed to obtain two-component velocity measurements in the wake of a cylinder in a Mach 4.6 flow, using a data set derived from computational fluid dynamics simulations. The Hough transform is attractive for molecular tagging velocimetry applications since it is capable of discriminating spurious features that can have a biasing effect in the fitting process. Assessment of the precision and accuracy of the method were also performed to show the dependence on analysis window size and signal-to-noise levels. The accuracy of this Hough transform-based method to quantify intersection displacements was determined to be comparable to cross-correlation methods. The employed line parameterization avoids the assumption of linearity in the vicinity of each intersection, which is important in the limit of drastic grid deformations resulting from large velocity gradients common in high-speed flow applications. This Hough transform method has the potential to enable the direct and spatially accurate measurement of local vorticity, which is important in applications involving turbulent flowfields. Finally, two-component velocity determinations using the Hough transform from experimentally obtained images are presented, demonstrating the feasibility of the proposed analysis method.
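
    The following sketch implements a plain rho-theta Hough accumulator for a set of tagged-line pixels, returning the strongest line parameters; it illustrates the parameterization idea only and is not the authors' analysis code.

```python
import numpy as np

def hough_lines(points, img_shape, n_theta=180, n_rho=200, n_peaks=1):
    """Accumulate votes in (rho, theta) space for (x, y) pixels and return the
    strongest line parameters, with rho = x*cos(theta) + y*sin(theta)."""
    h, w = img_shape
    diag = np.hypot(h, w)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rhos = np.linspace(-diag, diag, n_rho)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)
        r_idx = np.round((r + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[r_idx, np.arange(n_theta)] += 1
    peaks = np.argsort(acc.ravel())[::-1][:n_peaks]
    return [(rhos[p // n_theta], thetas[p % n_theta]) for p in peaks]

# Points on the diagonal y = x: the dominant peak is near rho = 0, theta = 3*pi/4.
pts = [(i, i) for i in range(100)]
print(hough_lines(pts, (100, 100)))
```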

  20. The problem of estimating recent genetic connectivity in a changing world.

    PubMed

    Samarasin, Pasan; Shuter, Brian J; Wright, Stephen I; Rodd, F Helen

    2017-02-01

    Accurate understanding of population connectivity is important to conservation because dispersal can play an important role in population dynamics, microevolution, and assessments of extirpation risk and population rescue. Genetic methods are increasingly used to infer population connectivity because advances in technology have made them more advantageous (e.g., cost effective) relative to ecological methods. Given the reductions in wildlife population connectivity since the Industrial Revolution and more recent drastic reductions from habitat loss, it is important to know the accuracy of and biases in genetic connectivity estimators when connectivity has declined recently. Using simulated data, we investigated the accuracy and bias of 2 common estimators of migration (movement of individuals among populations) rate. We focused on the timing of the connectivity change and the magnitude of that change on the estimates of migration by using a coalescent-based method (Migrate-n) and a disequilibrium-based method (BayesAss). Contrary to expectations, when historically high connectivity had declined recently: (i) both methods over-estimated recent migration rates; (ii) the coalescent-based method (Migrate-n) provided better estimates of recent migration rate than the disequilibrium-based method (BayesAss); (iii) the coalescent-based method did not accurately reflect long-term genetic connectivity. Overall, our results highlight the problems with comparing coalescent and disequilibrium estimates to make inferences about the effects of recent landscape change on genetic connectivity among populations. We found that contrasting these 2 estimates to make inferences about genetic-connectivity changes over time could lead to inaccurate conclusions. © 2016 Society for Conservation Biology.

  1. Iris segmentation using an edge detector based on fuzzy sets theory and cellular learning automata.

    PubMed

    Ghanizadeh, Afshin; Abarghouei, Amir Atapour; Sinaie, Saman; Saad, Puteh; Shamsuddin, Siti Mariyam

    2011-07-01

    Iris-based biometric systems identify individuals based on the characteristics of their iris, since they are proven to remain unique for a long time. An iris recognition system includes four phases, the most important of which is preprocessing in which the iris segmentation is performed. The accuracy of an iris biometric system critically depends on the segmentation system. In this paper, an iris segmentation system using edge detection techniques and Hough transforms is presented. The newly proposed edge detection system enhances the performance of the segmentation in a way that it performs much more efficiently than the other conventional iris segmentation methods.

  2. Construction of IT Based Learning System at University Level

    NASA Astrophysics Data System (ADS)

    Akiyama, Hidenori; Kozono, Kazutake

    Rapid progress of information and communication technologies has been changing education methods. In Japan, online lectures have been recognized as credits for graduation since a change in the law in 2001. One trial to construct an IT-based learning system has been carried out for the development of IT-based higher education and training. The educational effect of online lectures that can be taken anytime and anywhere is evaluated, and an authoring software package for online lectures is developed for educators who are not familiar with IT. A learning management system has begun operating for all lectures, and a wireless LAN system is installed throughout the campus of Kumamoto University.

  3. An enhanced structure tensor method for sea ice ridge detection from GF-3 SAR imagery

    NASA Astrophysics Data System (ADS)

    Zhu, T.; Li, F.; Zhang, Y.; Zhang, S.; Spreen, G.; Dierking, W.; Heygster, G.

    2017-12-01

    In SAR imagery, ridges and leads appear as curvilinear features, and the proposed ridge detection method is facilitated by these curvilinear shapes: bright curvilinear features are recognized as ridges while dark curvilinear features are classified as leads. In the HH or HV channel of dual-polarization C-band SAR imagery, a bright curvilinear feature may be a false alarm, because frost flowers on young leads may appear as bright pixels associated with changes in surface salinity under calm conditions. Wind-roughened leads also raise the backscatter and can be misclassified as ridges [1]. A width limitation is therefore included in the proposed structure tensor method [2], since a shape-based criterion alone is not sufficient for detecting ridges. The ridge detection algorithm is based on the hypothesis that bright pixels with curvilinear shapes and a width of less than 30 meters are ridges. Benefiting from the high spatial resolution of GF-3 (3 meters), we provide an enhanced structure tensor method for detecting significant ridges. The preprocessing procedures, including calibration and incidence angle normalization, are also investigated. Bright pixels respond strongly to bandpass filtering. Ridge training samples are delineated from the SAR imagery and filtered with Log-Gabor filters to construct the structure tensor. From the tensor, the dominant orientation of a pixel representing the ridge is determined by the dominant eigenvector. For the post-processing of the structure tensor, an elongated kernel is used to enhance the curvilinear shape of ridges. Since a ridge extends along a certain direction, the ratio of the eigenvalues is used to measure the intensity of local anisotropy. A convolution filter applied to the constructed structure tensor is used to model spatial contextual information. Ridge detection results from GF-3 show that the proposed method performs better than a direct threshold method.
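
    A minimal sketch of the structure tensor and its eigenvalue-ratio anisotropy measure is shown below, using simple Sobel gradients and Gaussian smoothing; the Log-Gabor filter bank, the width test and the elongated post-processing kernel of the proposed method are omitted, and the filter scales are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def structure_tensor_anisotropy(img, sigma=2.0):
    """Return a per-pixel anisotropy map in [0, 1]; values near 1 indicate a
    strongly oriented (curvilinear) local structure such as a ridge."""
    gx = sobel(img.astype(float), axis=1)
    gy = sobel(img.astype(float), axis=0)
    jxx = gaussian_filter(gx * gx, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    # Eigenvalues of the 2x2 tensor [[jxx, jxy], [jxy, jyy]] in closed form.
    tr = jxx + jyy
    disc = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
    return (lam1 - lam2) / (lam1 + lam2 + 1e-12)

# A single bright horizontal line gives high anisotropy along the line.
img = np.zeros((64, 64)); img[32, 10:54] = 1.0
print(structure_tensor_anisotropy(img)[32, 32])
```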

  4. Lightning Charge Retrievals: Dimensional Reduction, LDAR Constraints, and a First Comparison with LIS Satellite Data

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Krider, E. P.; Murray, N.; Boccippio, D. J.

    2007-01-01

    A "dimensional reduction" (DR) method is introduced for analyzing lightning field changes (DELTAEs) whereby the number of unknowns in a discrete two-charge model is reduced from the standard eight (x, y, z, Q, x', y', z', Q') to just four (x, y, z, Q). The four unknowns (x, y, z, Q) are found by performing a numerical minimization of a chi-square function. At each step of the minimization, an Overdetermined Fixed Matrix (OFM) method is used to immediately retrieve the best "residual source" (x', y', z', Q'), given the values of (x, y, z, Q). In this way, all 8 parameters (x, y, z, Q, x', y', z', Q') are found, yet a numerical search of only 4 parameters (x, y, z, Q) is required. The DR method has been used to analyze lightning-caused DeltaEs derived from multiple ground-based electric field measurements at the NASA Kennedy Space Center (KSC) and USAF Eastern Range (ER). The accuracy of the DR method has been assessed by comparing retrievals with data provided by the Lightning Detection And Ranging (LDAR) system at the KSC-ER, and from least squares error estimation theory, and the method is shown to be a useful "stand-alone" charge retrieval tool. Since more than one charge distribution describes a finite set of DELTAEs (i.e., solutions are non-unique), and since there can exist appreciable differences in the physical characteristics of these solutions, not all DR solutions are physically acceptable. Hence, an alternative and more accurate method of analysis is introduced that uses LDAR data to constrain the geometry of the charge solutions, thereby removing physically unacceptable retrievals. The charge solutions derived from this method are shown to compare well with independent satellite- and ground-based observations of lightning in several Florida storms.

  5. An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.

    PubMed

    Kidney, Darren; Rawson, Benjamin M; Borchers, David L; Stevenson, Ben C; Marques, Tiago A; Thomas, Len

    2016-01-01

    Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will make this method an attractive option in many situations where populations can be surveyed acoustically by humans.

  6. STRATEGIES FOR QUANTIFYING PET IMAGING DATA FROM TRACER STUDIES OF BRAIN RECEPTORS AND ENZYMES.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Logan, J.

    2001-04-02

    A description of some of the methods used in neuroreceptor imaging to distinguish changes in receptor availability has been presented in this chapter. It is necessary to look beyond regional uptake of the tracer, since uptake generally is affected by factors other than the number of receptors for which the tracer has affinity. An exception is the infusion method producing an equilibrium state. The techniques vary in complexity, some requiring arterial blood measurements of unmetabolized tracer and multiple time uptake data. Others require only a few plasma and uptake measurements, and those based on a reference region require no plasma measurements. We have outlined some of the limitations of the different methods. Laruelle (1999) has pointed out that test/retest studies, to which various methods can be applied, are crucial in determining the optimal method for a particular study. The choice of method will also depend upon the application. In a clinical setting, methods not involving arterial blood sampling are generally preferred. In the future, techniques for externally measuring arterial plasma radioactivity with only a few blood samples for metabolite correction will extend the modeling options of clinical PET. Also, since parametric images can provide information beyond that of ROI analysis, improved techniques for generating such images will be important, particularly for ligands requiring more than a one-compartment model. Techniques such as the wavelet transform proposed by Turkheimer et al. (2000) may prove to be important in reducing noise and improving quantitation.

  7. On the computation and updating of the modified Cholesky decomposition of a covariance matrix

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Methods for obtaining and updating the modified Cholesky decomposition (MCD) for the particular case of a covariance matrix, when one is given only the original data, are described. These methods are the standard method of forming the covariance matrix K and then solving for the MCD factors L and D (where K = LDL^T); a method based on Householder reflections; and, lastly, a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite being the slowest, since (1) the relative amount of time spent computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed and FORTRAN programs implementing these algorithms are listed.
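
    A minimal sketch of the basic factorization K = LDL^T (unit lower triangular L, diagonal D) is given below; the Householder-based and composite-t variants and the updating procedure discussed in the paper are not shown.

```python
import numpy as np

def ldl_decomposition(K):
    """Modified Cholesky factorization K = L D L^T with unit-diagonal L."""
    n = K.shape[0]
    L = np.eye(n)
    d = np.zeros(n)
    for j in range(n):
        d[j] = K[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
        for i in range(j + 1, n):
            L[i, j] = (K[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
    return L, np.diag(d)

# Check on the sample covariance of random data.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))
K = np.cov(X, rowvar=False)
L, D = ldl_decomposition(K)
print(np.allclose(L @ D @ L.T, K))
```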

  8. Survey of adaptive control using Liapunov design

    NASA Technical Reports Server (NTRS)

    Lindorff, D. P.; Carroll, R. L.

    1972-01-01

    A survey was made of the literature devoted to the synthesis of model-tracking adaptive systems based on application of Liapunov's second method. The basic synthesis procedure is introduced and a critical review of extensions made to the theory since 1966 is made. The extensions relate to design for relative stability, reduction of order techniques, design with disturbance, design with time variable parameters, multivariable systems, identification, and an adaptive observer.

  9. Valuing national effects of digital health investments: an applied method.

    PubMed

    Hagens, Simon; Zelmer, Jennifer; Frazer, Cassandra; Gheorghiu, Bobby; Leaver, Chad

    2015-01-01

    This paper describes an approach which has been applied to value national outcomes of investments by federal, provincial and territorial governments, clinicians and healthcare organizations in digital health. Hypotheses are used to develop a model, which is revised and populated based upon the available evidence. Quantitative national estimates and qualitative findings are produced and validated through structured peer review processes. This methodology has been applied in four studies since 2008.

  10. Inventory Funding Methods on Navy Ships: NWCF vs. End-use

    DTIC Science & Technology

    2013-06-01

    Abbreviations: OPTAR, Operating Target; OSO, Other Supply Officer; POM, Pre/Post-Overseas Movement; R-Supply, Relational Supply; RoR, Reorder Review; SAC, Service... A process called other supply officer (OSO) transfer is used. Since end-use ships own their inventory, the supply officer can choose to transfer a part being requested by another ship at their discretion, based on their ship's anticipated requirements and their own goodwill. OSO transfers among end-use...

  11. A TaqMan-based multiplex qPCR assay and DNA extraction method for phylotype IIB sequevars 1&2 (select agent) strains of Ralstonia solanacearum

    USDA-ARS?s Scientific Manuscript database

    Ralstonia solanacearum race 3 biovar 2 strains have the ability to cause brown rot of potato in temperate climates. Since these strains are not established in the U.S. and because of the potential risk they pose to the potato industry, the U.S. government has listed them as select agents. Cultivated...

  12. A new method for enhancer prediction based on deep belief network.

    PubMed

    Bu, Hongda; Gan, Yanglan; Wang, Yang; Zhou, Shuigeng; Guan, Jihong

    2017-10-16

    Studies have shown that enhancers are significant regulatory elements that play crucial roles in gene expression regulation. Since enhancers are unrelated to the orientation of, and distance to, their target genes, accurately predicting distal enhancers is a challenging task. In past years, with the development of high-throughput ChIP-seq technologies, several computational techniques have emerged to predict enhancers using epigenetic or genomic features. Nevertheless, the inconsistency of computational models across different cell lines and the unsatisfactory prediction performance call for further research in this area. Here, we propose a new Deep Belief Network (DBN) based computational method for enhancer prediction, called EnhancerDBN. This method combines diverse features, composed of DNA sequence compositional features, DNA methylation and histone modifications. Our computational results indicate that (1) EnhancerDBN outperforms 13 existing methods in prediction, and (2) GC content and DNA methylation can serve as relevant features for enhancer prediction. Deep learning is effective in boosting the performance of enhancer prediction.

  13. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model

    NASA Astrophysics Data System (ADS)

    Ma, Ling; Lu, Guolan; Wang, Dongsheng; Wang, Xu; Chen, Zhuo Georgia; Muller, Susan; Chen, Amy; Fei, Baowei

    2017-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality that can provide a noninvasive tool for cancer detection and image-guided surgery. HSI acquires high-resolution images at hundreds of spectral bands, providing big data for differentiating different types of tissue. We proposed a deep learning based method for the detection of head and neck cancer with hyperspectral images. Since a deep learning algorithm learns features hierarchically, the learned features are more discriminative and concise than handcrafted features. In this study, we adopt convolutional neural networks (CNN) to learn deep features of pixels for classifying each pixel as tumor or normal tissue. We evaluated our proposed classification method on a dataset containing hyperspectral images from 12 tumor-bearing mice. Experimental results show that our method achieved an average accuracy of 91.36%. This preliminary study demonstrated that our deep learning method can be applied to hyperspectral images for detecting head and neck tumors in animal models.

  14. Investigation of the equality constraint effect on the reduction of the rotational ambiguity in three-component system using a novel grid search method.

    PubMed

    Beyramysoltan, Samira; Rajkó, Róbert; Abdollahi, Hamid

    2013-08-12

    The results obtained by soft-modeling multivariate curve resolution methods are often not unique and are questionable because of rotational ambiguity: a range of feasible solutions fit the experimental data equally well and fulfill the constraints. In the chemometric literature, surveying constraints that are useful for reducing the rotational ambiguity remains a major challenge for chemometricians. It is worthwhile to study the effects of applying constraints on the reduction of rotational ambiguity, since this can help in choosing useful constraints to impose in multivariate curve resolution methods when analyzing data sets. In this work, we have investigated the effect of the equality constraint on decreasing the rotational ambiguity. For the calculation of all feasible solutions corresponding to a known spectrum, a novel systematic grid search method based on Species-based Particle Swarm Optimization is proposed for a three-component system. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Recent patents for detecting the species of origin in animal feedstuff, and raw and processed meat products.

    PubMed

    Rogberg-Muñoz, Andrés; Posik, Diego M; Rípoli, María V; Falomir Lockhart, Agustín H; Peral-García, Pilar; Giovambattista, Guillermo

    2013-04-01

    The value of the traceability and labeling of food is attributable to two main aspects: health safety and/or product or process certification. The identification of the species used in meat production is still a major concern for economic, religious and health reasons. Many approaches and technologies have been used for species identification in animal feedstuff and food. Early methods for identifying meat products included physical, anatomical, histological and chemical techniques. Since 1970, a variety of methods have been developed, including electrophoresis (i.e., isoelectric focusing), chromatography (i.e., HPLC), immunological techniques (i.e., ELISA), Nuclear Magnetic Resonance, Mass Spectrometry and PCR (DNA- and RNA-based methods). The recent patents on species detection in animal feedstuffs, raw meat and processed meat products listed in this work are mainly based on monoclonal antibodies and PCR, especially RT-PCR. The new developments under research aim for more sensitive, specific, less time-consuming and quantitative detection methods, which can be used in highly processed or heat-treated meat food.

  16. Carotenoid coloration is related to fat digestion efficiency in a wild bird

    NASA Astrophysics Data System (ADS)

    Madonia, Christina; Hutton, Pierce; Giraudeau, Mathieu; Sepp, Tuul

    2017-12-01

    Some of the most spectacular visual signals found in the animal kingdom are based on dietarily derived carotenoid pigments (which cannot be produced de novo), with a general assumption that carotenoids are limited resources for wild organisms, causing trade-offs in allocation of carotenoids to different physiological functions and ornamentation. This resource trade-off view has been recently questioned, since the efficiency of carotenoid processing may relax the trade-off between allocation toward condition or ornamentation. This hypothesis has so far received little exploratory support, since studies of digestive efficiency of wild animals are limited due to methodological difficulties. Recently, a method for quantifying the percentage of fat in fecal samples to measure digestive efficiency has been developed in birds. Here, we use this method to test if the intensity of the carotenoid-based coloration predicts digestive efficiency in a wild bird, the house finch (Haemorhous mexicanus). The redness of carotenoid feather coloration (hue) positively predicted digestion efficiency, with redder birds being more efficient at absorbing fats from seeds. We show for the first time in a wild species that digestive efficiency predicts ornamental coloration. Though not conclusive due to the correlative nature of our study, these results strongly suggest that fat extraction might be a crucial but overlooked process behind many ornamental traits.

  17. Towards European urinalysis guidelines. Introduction of a project under European Confederation of Laboratory Medicine.

    PubMed

    Kouri, T T; Gant, V A; Fogazzi, G B; Hofmann, W; Hallander, H O; Guder, W G

    2000-07-01

    Improved standardized performance is needed because urinalysis continues to be one of the most frequently requested laboratory tests. Since 1997, the European Confederation of Laboratory Medicine (ECLM) has been supporting an interdisciplinary project aiming to produce European urinalysis guidelines. More than seventy clinical chemists, microbiologists and ward-based clinicians, as well as representatives of manufacturers are taking part. These guidelines aim to improve the quality and consistency of chemical urinalysis, particle counting and bacterial culture by suggesting optimal investigative processes that could be applied in Europe. The approach is based on medical needs for urinalysis. The importance of the pre-analytical stage for total quality is stressed by detailed illustrative advice for specimen collection. Attention is also given to emerging automated technology. For cost containment reasons, both optimum (ideal) procedures and minimum analytical approaches are suggested. Since urinalysis mostly lacks genuine reference methods (primary reference measurement procedures; Level 4), a novel classification of the methods is proposed: comparison measurement procedures (Level 3), quantitative routine procedures (Level 2), and ordinal scale examinations (Level 1). Stepwise strategies are suggested to save costs, applying different rules for general and specific patient populations. New analytical quality specifications have been created. After a consultation period, the final written text will be published in full as a separate document.

  18. Porous media for catalytic renewable energy conversion

    NASA Astrophysics Data System (ADS)

    Hotz, Nico

    2012-05-01

    A novel flow-based method is presented to place catalytic nanoparticles into a reactor by sol-gelation of a porous ceramic consisting of copper-based nanoparticles, silica sand, ceramic binder, and a gelation agent. This method allows for the placement of a liquid precursor containing the catalyst into the final reactor geometry without the need to impregnate or coat a substrate with the catalytic material. The resulting foam-like porous ceramic shows properties highly appropriate for use as a catalytic reactor material, e.g., reasonable pressure drop due to its porosity, high thermal and catalytic stability, and excellent catalytic behavior. The catalytic activity of micro-reactors containing this foam-like ceramic is tested in terms of their ability to convert alcoholic biofuel (e.g., methanol) to a hydrogen-rich gas mixture with low concentrations of carbon monoxide (up to 75% hydrogen content and less than 0.2% CO in the case of methanol). This gas mixture is subsequently used in a low-temperature fuel cell, converting the hydrogen directly to electricity. A low concentration of CO is crucial to avoid poisoning of the fuel cell catalyst. Since conventional Polymer Electrolyte Membrane (PEM) fuel cells require CO concentrations far below 100 ppm, and since most methods to reduce the mole fraction of CO (such as Preferential Oxidation or PROX) have CO conversions of up to 99%, the alcohol fuel reformer has to achieve initial CO mole fractions significantly below 1%. The catalyst and the porous ceramic reactor of the present study successfully fulfill this requirement.

  19. Segmentation of DTI based on tensorial morphological gradient

    NASA Astrophysics Data System (ADS)

    Rittner, Leticia; de Alencar Lotufo, Roberto

    2009-02-01

    This paper presents a segmentation technique for diffusion tensor imaging (DTI). The technique is based on a tensorial morphological gradient (TMG), defined as the maximum dissimilarity over the neighborhood. Once this gradient is computed, the tensorial segmentation problem becomes a scalar one, which can be solved by conventional techniques such as the watershed transform and thresholding. Similarity functions, namely the dot product, the tensorial dot product, the J-divergence and the Frobenius norm, were compared in order to understand their differences regarding the measurement of tensor dissimilarities. The study showed that the dot product and the tensorial dot product are inappropriate for computation of the TMG, while the Frobenius norm and the J-divergence are both capable of measuring tensor dissimilarities, despite the distortion of the Frobenius norm, since it is not an affine-invariant measure. In order to validate the TMG as a solution for DTI segmentation, it was computed using distinct similarity measures and structuring elements. TMG results were also compared to fractional anisotropy. Finally, synthetic and real DTI data were used in the method validation. Experiments showed that the TMG enables the segmentation of DTI by the watershed transform or by a simple choice of a threshold. The strength of the proposed segmentation method is its simplicity and robustness, consequences of the TMG computation. It enables the use not only of well-known algorithms and tools from mathematical morphology but also of any other segmentation method to segment DTI, since the TMG computation transforms tensorial images into scalar ones.
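
    The sketch below computes a tensorial morphological gradient as the maximum pairwise Frobenius-norm dissimilarity inside a square structuring element; it is a toy illustration of the TMG definition, not the watershed segmentation pipeline, and the Frobenius norm is only one of the dissimilarities compared in the paper.

```python
import numpy as np
from itertools import combinations

def tmg(tensor_img, radius=1):
    """Tensorial morphological gradient with a (2r+1)x(2r+1) square structuring
    element: max Frobenius-norm distance between any two tensors in the window."""
    h, w = tensor_img.shape[:2]
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            window = tensor_img[i0:i1, j0:j1].reshape(-1, 3, 3)
            out[i, j] = max(
                (np.linalg.norm(a - b) for a, b in combinations(window, 2)),
                default=0.0,
            )
    return out

# Two homogeneous tensor regions: the gradient is large near the boundary
# columns and zero in the homogeneous interior.
img = np.tile(np.eye(3), (16, 16, 1, 1))
img[:, 8:] *= 3.0
g = tmg(img)
print(g[5, 7], g[5, 8], g[5, 2])
```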

  20. Anisotropic norm-oriented mesh adaptation for a Poisson problem

    NASA Astrophysics Data System (ADS)

    Brèthes, Gautier; Dervieux, Alain

    2016-10-01

    We present a novel formulation for the mesh adaptation of the approximation of a Partial Differential Equation (PDE). The discussion is restricted to a Poisson problem. The proposed norm-oriented formulation extends the goal-oriented formulation since it is equation-based and uses an adjoint. At the same time, the norm-oriented formulation somewhat supersedes the goal-oriented one since it is basically a solution-convergent method. Indeed, goal-oriented methods rely on the reduction of the error in evaluating a chosen scalar output, with the consequence that, as mesh size is increased (more degrees of freedom), only this output is proven to tend to its continuous analog while the solution field itself may not converge. A remarkable quality of goal-oriented metric-based adaptation is the mathematical formulation of the mesh adaptation problem in the form of an optimization, over the well-identified set of metrics, of a well-defined functional. In the new proposed formulation, we amplify this advantage. We search, in the same well-identified set of metrics, for the minimum of a norm of the approximation error. The norm is prescribed by the user, and the method allows addressing the case of multi-objective adaptation, for example adapting the mesh for drag, lift and moment in one shot in aerodynamics. In this work, we consider the basic linear finite-element approximation and restrict our study to the L2 norm in order to enjoy second-order convergence. Numerical examples for the Poisson problem are computed.

  1. Paying for hospital-based care of Kala-azar in Nepal: assessing catastrophic, impoverishment and economic consequences.

    PubMed

    Adhikari, Shiva R; Maskay, Nephil M; Sharma, Bishnu P

    2009-03-01

    Households obtaining health care services in developing countries incur substantial costs, despite services generally being provided free of charge by public health institutions. This constitutes an economic burden on low-income households, and contributes to deepening their level of poverty. In addition to the economic burden of obtaining health care, the method of financing these payments has implications for the distribution of household assets. This effect on resource-poor households is amplified since they have decreased access to health insurance. Recent literature, however, ignores the importance of the method of financing health care payments. This paper looks at the case of Nepal and highlights the impact on households of paying for hospital-based care of Kala-azar (KA) by analysing the catastrophic, impoverishment and economic consequences of their coping strategies. The paper utilizes micro-data on a random selection of 50% of the KA-affected households of Siraha and Saptari districts of Nepal. The empirical results suggest that direct costs of hospital-based treatment of KA are catastrophic since they consume 17% of annual household income. This expenditure causes more than 20% of KA-affected households to fall below the poverty line, with the remaining households being pushed into the category of marginal poor; the poverty gap ratio is more than 90%. Further, KA incidence can have prolonged and severe economic consequences for the household economy due to the mechanisms of informal sector financing to which households resort. A heavy burden of loan repayments can lead households on a downward spiral that eventually becomes a poverty trap. In other words, the method of financing health care payments is an important ingredient in understanding the economic burden of disease.

  2. Evaluation of non-rigid registration parameters for atlas-based segmentation of CT images of human cochlea

    NASA Astrophysics Data System (ADS)

    Elfarnawany, Mai; Alam, S. Riyahi; Agrawal, Sumit K.; Ladak, Hanif M.

    2017-02-01

    Cochlear implant surgery is a hearing restoration procedure for patients with profound hearing loss. In this surgery, an electrode is inserted into the cochlea to stimulate the auditory nerve and restore the patient's hearing. Clinical computed tomography (CT) images are used for planning and evaluation of electrode placement, but their low resolution limits the visualization of internal cochlear structures. Therefore, high-resolution micro-CT images are used to develop atlas-based segmentation methods to extract these non-visible anatomical features in clinical CT images. Accurate registration of the high- and low-resolution CT images is a prerequisite for reliable atlas-based segmentation. In this study, we evaluate and compare different non-rigid B-spline registration parameters using micro-CT and clinical CT images of five cadaveric human cochleae. The registration parameters varied are the cost function (normalized correlation (NC), mutual information (MI) and mean square error (MSE)), the interpolation method (linear, windowed-sinc and B-spline) and the sampling percentage (1%, 10% and 100%). We compare the registration results visually and quantitatively using the Dice similarity coefficient (DSC), Hausdorff distance (HD) and absolute percentage error in cochlear volume. Using the MI or MSE cost functions and linear or windowed-sinc interpolation resulted in visually undesirable deformation of internal cochlear structures. Quantitatively, the transforms using a 100% sampling percentage yielded the highest DSC and smallest HD (0.828+/-0.021 and 0.25+/-0.09 mm, respectively). Therefore, B-spline registration with cost function NC, interpolation B-spline and sampling percentage 100% can be the foundation for developing an optimized atlas-based segmentation algorithm of intracochlear structures in clinical CT images.
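
    The two quantitative measures used here are easy to reproduce; the sketch below computes a Dice similarity coefficient and a symmetric Hausdorff distance from binary masks, with synthetic discs standing in for segmented cochlear labels and an assumed isotropic voxel spacing.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff(mask_a, mask_b, spacing=1.0):
    """Symmetric Hausdorff distance between foreground voxel coordinates."""
    pa = np.argwhere(mask_a) * spacing
    pb = np.argwhere(mask_b) * spacing
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

# Two overlapping discs as stand-ins for segmented labels.
yy, xx = np.mgrid[:64, :64]
a = (xx - 30) ** 2 + (yy - 32) ** 2 < 15 ** 2
b = (xx - 34) ** 2 + (yy - 32) ** 2 < 15 ** 2
print(dice(a, b), hausdorff(a, b, spacing=0.1))
```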

  3. Information Gain Based Dimensionality Selection for Classifying Text Documents

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
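
    A minimal sketch of the information-gain ranking and one plausible mapping to per-dimension mutation probabilities follows; the direction of the mapping (perturbing low-gain dimensions more aggressively) is an assumption for illustration, not necessarily the rule used in the report.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(feature, labels):
    """IG(class; feature) for a binary (present/absent) term feature."""
    h = entropy(labels)
    for value in (0, 1):
        subset = labels[feature == value]
        if subset.size:
            h -= subset.size / labels.size * entropy(subset)
    return h

def mutation_probabilities(X, y, p_min=0.01, p_max=0.2):
    """Assumed mapping: high-IG dimensions mutate less, low-IG dimensions more."""
    ig = np.array([information_gain(X[:, j], y) for j in range(X.shape[1])])
    scaled = (ig - ig.min()) / (np.ptp(ig) + 1e-12)
    return p_max - scaled * (p_max - p_min)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y ^ (rng.random(200) < 0.1),    # informative term
                     rng.integers(0, 2, 200)])        # noise term
print(mutation_probabilities(X, y))
```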

  4. Effect of atmospheric scattering and surface reflection on upwelling solar radiation

    NASA Technical Reports Server (NTRS)

    Suttles, J. T.; Barkstrom, B. R.; Tiwari, S. N.

    1981-01-01

    A study is presented of the solar radiation transfer in the complete earth-atmosphere system, and numerical results are compared with satellite data obtained during the Earth Radiation Budget Experiment on Nimbus 6, in August, 1975. Emphasis is placed on the upwelling radiance distribution at the top of the atmosphere, assumed to be at 50 km. The numerical technique is based on the finite difference method, which includes azimuth and spectral variations for the entire solar wavelength range. Detailed solar properties, atmospheric physical properties, and optical properties are used. However, since the property descriptions are based on a trade-off between accuracy and computational realities, aerosol and cloud optical properties are treated with simple approximations. The radiative transfer model is in good agreement with the satellite radiance observations. The method provides a valuable tool in analyzing satellite- and ground-based radiation budget measurements and in designing instrumentation.

  5. Ultrasound Image Despeckling Using Stochastic Distance-Based BM3D.

    PubMed

    Santos, Cid A N; Martins, Diego L N; Mascarenhas, Nelson D A

    2017-06-01

    Ultrasound image despeckling is an important research field, since it can improve the interpretability of one of the main categories of medical imaging. Many techniques have been tried over the years for ultrasound despeckling, and more recently, a great deal of attention has been focused on patch-based methods, such as non-local means and block-matching and 3D filtering (BM3D). A common idea in these recent methods is the measure of distance between patches, originally proposed as the Euclidean distance, for filtering additive white Gaussian noise. In this paper, we derive new stochastic distances for the Fisher-Tippett distribution, based on well-known statistical divergences, and use them as patch distance measures in a modified version of the BM3D algorithm for despeckling log-compressed ultrasound images. State-of-the-art results in filtering simulated, synthetic, and real ultrasound images confirm the potential of the proposed approach.

  6. Novel Algorithm for Classification of Medical Images

    NASA Astrophysics Data System (ADS)

    Bhushan, Bharat; Juneja, Monika

    2010-11-01

    Content-based image retrieval (CBIR) methods in medical image databases have been designed to support specific tasks, such as retrieval of medical images. These methods cannot be transferred to other medical applications since different imaging modalities require different types of processing. To enable content-based queries in diverse collections of medical images, the retrieval system must be familiar with the current image class prior to query processing. Further, almost all such systems deal with the DICOM imaging format. In this paper a novel algorithm based on energy information obtained from the wavelet transform is described for the classification of medical images according to their modalities. Two types of wavelets have been used, and it is shown that the energy obtained in either case is quite distinct for each body part. This technique can be successfully applied to different image formats. The results are shown for the JPEG imaging format.

  7. Control-based continuation: Bifurcation and stability analysis for physical experiments

    NASA Astrophysics Data System (ADS)

    Barton, David A. W.

    2017-02-01

    Control-based continuation is a technique for tracking the solutions and bifurcations of nonlinear experiments. The idea is to apply the method of numerical continuation to a feedback-controlled physical experiment such that the control becomes non-invasive. Since in an experiment it is not (generally) possible to set the state of the system directly, the control target becomes a proxy for the state. Control-based continuation enables the systematic investigation of the bifurcation structure of a physical system, much as if it were a numerical model. However, stability information (and hence bifurcation detection and classification) is not readily available due to the presence of stabilising feedback control. This paper uses a periodic auto-regressive model with exogenous inputs (ARX) to approximate the time-varying linearisation of the experiment around a particular periodic orbit, thus providing the missing stability information. This method is demonstrated using a physical nonlinear tuned mass damper.

  8. Fully automatic bone age estimation from left hand MR images.

    PubMed

    Stern, Darko; Ebner, Thomas; Bischof, Horst; Grassegger, Sabine; Ehammer, Thomas; Urschler, Martin

    2014-01-01

    There has recently been an increased demand for bone age estimation (BAE) of living individuals and human remains in legal medicine applications. A severe drawback of established BAE techniques based on X-ray images is radiation exposure, since many countries prohibit scanning involving ionizing radiation without diagnostic reasons. We propose a completely automated method for BAE based on volumetric hand MRI images. On our database of 56 male Caucasian subjects between 13 and 19 years, we are able to estimate the subject's age with a mean difference of 0.85 ± 0.58 years compared to the chronological age, which is in line with radiologist results using established radiographic methods. We see this work as a promising first step towards a novel MRI-based bone age estimation system, with the key benefits of lacking exposure to ionizing radiation and higher accuracy due to exploitation of volumetric data.

  9. A method for real-time implementation of HOG feature extraction

    NASA Astrophysics Data System (ADS)

    Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai

    2011-08-01

    Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation involved in HOG feature extraction is unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be completed within one pixel period by these computing units.
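
    The core of the HOG computation (gradients, orientation binning, per-cell histograms) can be sketched in a few lines, as below; block normalization and the FPGA-oriented simplifications of the arctangent and square root are omitted.

```python
import numpy as np

def hog_cells(img, cell=8, n_bins=9):
    """Per-cell histograms of oriented gradients (unsigned, 0-180 degrees)."""
    img = img.astype(float)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]        # central differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    h, w = img.shape
    hist = np.zeros((h // cell, w // cell, n_bins))
    for i in range(h // cell):
        for j in range(w // cell):
            m = mag[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            b = bins[i*cell:(i+1)*cell, j*cell:(j+1)*cell]
            hist[i, j] = np.bincount(b.ravel(), weights=m.ravel(),
                                     minlength=n_bins)
    return hist

# A pure horizontal intensity ramp puts all gradient energy in one bin.
img = np.tile(np.arange(64.0), (64, 1))
print(hog_cells(img)[0, 0])
```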

  10. Recent advances in synchrotron-based hard x-ray phase contrast imaging

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Nelson, J.; Holzner, C.; Andrews, J. C.; Pianetta, P.

    2013-12-01

    Ever since the first demonstration of phase contrast imaging (PCI) in the 1930s by Frits Zernike, people have realized the significant advantage of phase contrast over conventional absorption-based imaging in terms of sensitivity to ‘transparent’ features within specimens. Thus, x-ray phase contrast imaging (XPCI) holds great potential in studies of soft biological tissues, typically containing low Z elements such as C, H, O and N. Particularly when synchrotron hard x-rays are employed, the favourable brightness, energy tunability, monochromatic characteristics and penetration depth have dramatically enhanced the quality and variety of XPCI methods, which permit detection of the phase shift associated with 3D geometry of relatively large samples in a non-destructive manner. In this paper, we review recent advances in several synchrotron-based hard x-ray XPCI methods. Challenges and key factors in methodological development are discussed, and biological and medical applications are presented.

  11. Extrinsic Calibration of Camera Networks Based on Pedestrians

    PubMed Central

    Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried

    2016-01-01

    In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
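
    The pairwise step can be sketched as an SVD-based orthogonal Procrustes (Kabsch) fit between matched 3D head/feet positions expressed in two camera frames, as below; the RANSAC loop and the reprojection-error refinement are omitted and the point data are synthetic.

```python
import numpy as np

def procrustes_rt(p_cam1, p_cam2):
    """Rigid transform (R, t) with p_cam2 ~= R @ p_cam1 + t, via SVD."""
    c1, c2 = p_cam1.mean(axis=0), p_cam2.mean(axis=0)
    H = (p_cam1 - c1).T @ (p_cam2 - c2)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c2 - R @ c1

# Synthetic matched head/feet points seen from two camera coordinate systems.
rng = np.random.default_rng(0)
pts1 = rng.uniform(-2, 2, (20, 3))
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -1.0, 2.0])
pts2 = pts1 @ R_true.T + t_true
R, t = procrustes_rt(pts1, pts2)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```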

  12. Application of ECT inspection to the first wall of a fusion reactor with wavelet analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, G.; Yoshida, Y.; Miya, K.

    1994-12-31

    The first wall of a fusion reactor will be subjected to intensive loads during fusion operations. Since these loads may cause defects in the first wall, nondestructive evaluation techniques for the first wall should be developed. In this paper, we apply the eddy current testing (ECT) technique to the inspection of the first wall. A method based on the current vector potential and wavelet analysis is proposed. Owing to the use of wavelet analysis, a recently developed theory, the accuracy of the present method is shown to be better than that of a conventional one.

  13. Data structures supporting multi-region adaptive isogeometric analysis

    NASA Astrophysics Data System (ADS)

    Perduta, Anna; Putanowicz, Roman

    2018-01-01

    Since the first paper was published in 2005, Isogeometric Analysis (IGA) has gained strong interest and found applications in many engineering problems. Despite the advancement of the method, there are still far fewer software implementations compared to the Finite Element Method. The paper presents an approach to the development of data structures that can support multi-region IGA with local, patch-based mesh refinement and possible application in IGA-FEM models. The purpose of this paper is to share original design concepts that the authors have created while developing an IGA package, which other researchers may find beneficial for their own simulation codes.

  14. Filmless versus film-based systems in radiographic examination costs: an activity-based costing method.

    PubMed

    Muto, Hiroshi; Tani, Yuji; Suzuki, Shigemasa; Yokooka, Yuki; Abe, Tamotsu; Sase, Yuji; Terashita, Takayoshi; Ogasawara, Katsuhiko

    2011-09-30

    Since the shift from a radiographic film-based system to a filmless system, the change in radiographic examination costs and cost structure has remained undetermined. The activity-based costing (ABC) method measures the cost and performance of activities, resources, and cost objects. The purpose of this study is to identify the cost structure of a radiographic examination, comparing a filmless system to a film-based system using the ABC method. We calculated the costs of radiographic examinations for both a filmless and a film-based system, and assessed the costs or cost components by simulating radiographic examinations in a health clinic. The cost objects of the radiographic examinations included lumbar (six views), knee (three views), wrist (two views), and other. Indirect costs were allocated to cost objects using the ABC method. The costs of a radiographic examination using a filmless system are as follows: lumbar 2,085 yen; knee 1,599 yen; wrist 1,165 yen; and other 1,641 yen. The costs for a film-based system are: lumbar 3,407 yen; knee 2,257 yen; wrist 1,602 yen; and other 2,521 yen. The primary activities were "calling patient," "explanation of scan," "take photographs," and "aftercare" for both the filmless and film-based systems. The cost of these activities represented 36.0% of the total cost for the filmless system and 23.6% for the film-based system. The costs of radiographic examinations using a filmless system and a film-based system were calculated using the ABC method. Our results provide clear evidence that the filmless system is more effective than the film-based system in providing greater-value services directly to patients.
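
    The allocation step of the ABC method is simple arithmetic: each activity cost pool is assigned to the examination types in proportion to their consumption of that activity's driver. The sketch below uses entirely hypothetical pools and driver quantities, not the figures of the study.

```python
# Hypothetical activity cost pools (yen per month) and driver quantities per
# examination type; the ABC allocation assigns each pool's cost in proportion
# to the driver consumption of each cost object.
activity_pools = {"calling patient": 120000, "explanation of scan": 90000,
                  "take photographs": 300000, "aftercare": 60000}
# driver = minutes of the activity consumed per examination type, per month
drivers = {
    "lumbar (6 views)": {"calling patient": 200, "explanation of scan": 300,
                         "take photographs": 900, "aftercare": 150},
    "knee (3 views)":   {"calling patient": 150, "explanation of scan": 180,
                         "take photographs": 450, "aftercare": 100},
    "wrist (2 views)":  {"calling patient": 100, "explanation of scan": 120,
                         "take photographs": 250, "aftercare":  80},
}

def abc_allocation(pools, drivers):
    totals = {a: sum(d[a] for d in drivers.values()) for a in pools}
    return {exam: sum(pools[a] * d[a] / totals[a] for a in pools)
            for exam, d in drivers.items()}

for exam, cost in abc_allocation(activity_pools, drivers).items():
    print(f"{exam}: {cost:,.0f} yen of indirect cost per month")
```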

  15. Sequence Based Prediction of DNA-Binding Proteins Based on Hybrid Feature Selection Using Random Forest and Gaussian Naïve Bayes

    PubMed Central

    Lou, Wangchao; Wang, Xiaoqing; Chen, Fan; Chen, Yixiao; Jiang, Bo; Zhang, Hua

    2014-01-01

    Developing an efficient method for determination of DNA-binding proteins, due to their vital roles in gene regulation, is highly desired, since it would be invaluable for advancing our understanding of protein functions. In this study, we proposed a new method for the prediction of DNA-binding proteins, by performing feature ranking using random forest and wrapper-based feature selection using a forward best-first search strategy. The features comprise information from the primary sequence, predicted secondary structure, predicted relative solvent accessibility, and position-specific scoring matrix. The proposed method, called DBPPred, used Gaussian naïve Bayes as the underlying classifier since it outperformed five other classifiers, including decision tree, logistic regression, k-nearest neighbor, support vector machine with polynomial kernel, and support vector machine with radial basis function. As a result, the proposed DBPPred yields the highest average accuracy of 0.791 and average MCC of 0.583 according to five-fold cross validation with ten runs on the training benchmark dataset PDB594. Subsequently, blind tests on the independent dataset PDB186 were performed by the proposed model trained on the entire PDB594 dataset and by five other existing methods (including iDNA-Prot, DNA-Prot, DNAbinder, DNABIND and DBD-Threader); the proposed DBPPred yielded the highest accuracy of 0.769, MCC of 0.538, and AUC of 0.790. Independent tests performed by the proposed DBPPred on a large non-DNA-binding protein dataset and two RNA-binding protein datasets also showed improved or comparable quality when compared with the relevant prediction methods. Moreover, we observed that the majority of the selected features are statistically significantly different between the mean feature values of the DNA-binding and the non-DNA-binding proteins. All of the experimental results indicate that the proposed DBPPred can serve as an alternative predictor for large-scale determination of DNA-binding proteins. PMID:24475169
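
    A minimal sketch of the two-stage idea described above, assuming scikit-learn and a synthetic dataset: features are first ranked by random-forest importance, then a greedy forward wrapper (a simplification of best-first search) keeps a feature only if it improves the cross-validated accuracy of Gaussian naive Bayes. This is not the authors' DBPPred code.

      # Stage 1: rank features with a random forest.
      # Stage 2: greedy forward wrapper selection around Gaussian naive Bayes.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.naive_bayes import GaussianNB
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

      # Feature ranking by random-forest importance (descending)
      rank = np.argsort(RandomForestClassifier(n_estimators=200, random_state=0)
                        .fit(X, y).feature_importances_)[::-1]

      # Forward wrapper: keep a feature only if CV accuracy of GaussianNB improves
      selected, best = [], 0.0
      for f in rank:
          trial = selected + [f]
          score = cross_val_score(GaussianNB(), X[:, trial], y, cv=5).mean()
          if score > best:
              selected, best = trial, score

      print("selected features:", selected, "CV accuracy: %.3f" % best)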

  16. Simple Ultraviolet Short-Pulse Intensity Diagnostic Method Using Atmosphere

    NASA Astrophysics Data System (ADS)

    Aota, Tatsuya; Takahashi, Eiichi; Losev, Leonid L.; Tabuchi, Takeyuki; Kato, Susumu; Matsumoto, Yuji; Okuda, Isao; Owadano, Yoshiro

    2005-05-01

    An ultraviolet (UV) short-pulse intensity diagnostic method using the atmosphere as a nonlinear medium was developed. This diagnostic method is based on evaluating the ion charge of the two-photon ionization of atmospheric oxygen upon irradiation with a UV (238-299 nm) short-pulse laser. The observed ion signal increased in proportion to the input intensity to the power of ~2.2 during the two-photon ionization of atmospheric oxygen. An autocorrelator was constructed and used to successfully measure a UV laser pulse of ~400 fs duration. Since this diagnostic system is used in open air under windowless conditions, it can be set along the beam path and used as a UV intensity monitor.

  17. Generalization of the Mulliken-Hush treatment for the calculation of electron transfer matrix elements

    NASA Astrophysics Data System (ADS)

    Cave, Robert J.; Newton, Marshall D.

    1996-01-01

    A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.
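
    For orientation, the familiar two-state form of the generalized Mulliken-Hush coupling, built solely from adiabatic-state quantities (the vertical energy gap, the transition dipole moment, and the difference of state dipole moments), is usually written as below; the paper's multistate treatment reduces to this kind of expression in the two-state limit. This is the standard textbook form, quoted here for orientation rather than taken from the abstract.

      H_{ab} = \frac{|\mu_{12}|\, \Delta E_{12}}{\sqrt{(\Delta \mu_{12})^{2} + 4\,\mu_{12}^{2}}}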

  18. Mapping implicit spectral methods to distributed memory architectures

    NASA Technical Reports Server (NTRS)

    Overman, Andrea L.; Vanrosendale, John

    1991-01-01

    Spectral methods have proven invaluable in the numerical simulation of PDEs (partial differential equations), but the frequent global communication they require raises a fundamental barrier to their use on highly parallel architectures. To explore this issue, a 3-D implicit spectral method was implemented on an Intel hypercube. Utilization of about 50 percent was achieved on a 32-node iPSC/860 hypercube for a 64 x 64 x 64 Fourier-spectral grid; finer grids yield higher utilizations. Chebyshev-spectral grids are more problematic, since plane-relaxation based multigrid is required. However, by using a semicoarsening multigrid algorithm, and by relaxing all multigrid levels concurrently, relatively high utilizations were also achieved in this harder case.

  19. Qualitative Versus Quantitative Mammographic Breast Density Assessment: Applications for the US and Abroad

    PubMed Central

    Destounis, Stamatia; Arieno, Andrea; Morgan, Renee; Roberts, Christina; Chan, Ariane

    2017-01-01

    Mammographic breast density (MBD) has been proven to be an important risk factor for breast cancer and an important determinant of mammographic screening performance. The measurement of density has changed dramatically since its inception. Initial qualitative measurement methods have been found to have limited consistency between readers and in their association with breast cancer risk. Following the introduction of full-field digital mammography, more sophisticated measurement methodology is now possible. Automated computer-based density measurements can provide consistent, reproducible, and objective results. In this review paper, we describe various methods currently available to assess MBD, and provide a discussion of the clinical utility of such methods for breast cancer screening. PMID:28561776

  20. A two-phase copula entropy-based multiobjective optimization approach to hydrometeorological gauge network design

    NASA Astrophysics Data System (ADS)

    Xu, Pengcheng; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Chen, Yuanfang; Chen, Xi; Liu, Jiufu; Zou, Ying; He, Ruimin

    2017-12-01

    Hydrometeorological data are needed for obtaining point and areal means, quantifying the spatial variability of hydrometeorological variables, and calibration and verification of hydrometeorological models. Hydrometeorological networks are utilized to collect such data. Since data collection is expensive, it is essential to design an optimal network based on the minimal number of hydrometeorological stations in order to reduce costs. This study proposes a two-phase copula entropy-based multiobjective optimization approach that includes: (1) copula entropy-based directional information transfer (CDIT) for clustering the potential hydrometeorological gauges into several groups, and (2) a multiobjective method for selecting the optimal combination of gauges for regionalized groups. Although entropy theory has been employed for network design before, the joint histogram method used for mutual information estimation has several limitations. The copula entropy-based mutual information (MI) estimation method is shown to be more effective for quantifying the uncertainty of redundant information than the joint histogram (JH) method. The effectiveness of this approach is verified by applying it to one type of hydrometeorological gauge network, with the use of three model evaluation measures, including the Nash-Sutcliffe Coefficient (NSC), the arithmetic mean of the negative copula entropy (MNCE), and MNCE/NSC. Results indicate that the two-phase copula entropy-based multiobjective technique is capable of evaluating the performance of regional hydrometeorological networks and can enable decision makers to develop strategies for water resources management.

  1. Research of Hubs Location Method for Weighted Brain Network Based on NoS-FA.

    PubMed

    Weng, Zhengkui; Wang, Bin; Xue, Jie; Yang, Baojie; Liu, Hui; Xiong, Xin

    2017-01-01

    The structural human brain network, constructed from T1 and diffusion tensor imaging (DTI) data, is a complex network of many interlinked brain regions in which certain central hub regions play key roles. Since most studies on hub location in the whole human brain network are mainly concerned with the local properties of each single node, and not with the global properties of all directly connected nodes, a novel hub location method based on a global importance contribution evaluation index is proposed in this study. The number of streamlines (NoS) is fused with normalized fractional anisotropy (FA) to capture more comprehensive brain bioinformation. The brain region importance contribution matrix and the information transfer efficiency value are constructed, respectively, and by combining these two factors we calculate the importance value of each node and locate the hubs. Benefiting from both the local and global features of the nodes and the multi-information fusion of human brain biosignals, the experimental results show that this method can detect brain hubs more accurately and reasonably than other methods. Furthermore, the proposed location method is used in impaired brain hub connectivity analysis of schizophrenia patients, and the results are in agreement with previous studies.

  2. Encircling the dark, a simple method to decipher the cosmos

    NASA Astrophysics Data System (ADS)

    Quirico, Eric

    2017-09-01

    Asteroids are relics of Solar System formation and host insightful information on the physical, chemical, chronological and dynamical conditions that operated from the formation of the first solids until the Late Heavy Bombardment. Since 2000, our view of these small objects has been deeply transformed by several space missions and advances in ground-based observations. NEAR, Dawn (NASA) and Hayabusa 1 (JAXA) have provided extensive characterizations of the surface and interior of asteroids 433 Eros, Itokawa, Vesta and Ceres, and revealed a complex morphology driven by collisions and/or internal activity. The samples returned to Earth by Hayabusa 1 provided firm evidence of the genetic link between S-type asteroids and ordinary chondrites, and valuable clues on the first stage of space weathering. Meanwhile, ground-based observations, dynamical theory and meteoritics have drawn a big picture pointing to a continuum between asteroids and comets. The forthcoming missions Hayabusa2 and OSIRIS-REx are expected to explore two C-type asteroids for the first time in the coming years.

  3. DNA rendering of polyhedral meshes at the nanoscale

    NASA Astrophysics Data System (ADS)

    Benson, Erik; Mohammed, Abdulmelik; Gardell, Johan; Masich, Sergej; Czeizler, Eugen; Orponen, Pekka; Högberg, Björn

    2015-07-01

    It was suggested more than thirty years ago that Watson-Crick base pairing might be used for the rational design of nanometre-scale structures from nucleic acids. Since then, and especially since the introduction of the origami technique, DNA nanotechnology has enabled increasingly more complex structures. But although general approaches for creating DNA origami polygonal meshes and design software are available, there are still important constraints arising from DNA geometry and sense/antisense pairing, necessitating some manual adjustment during the design process. Here we present a general method of folding arbitrary polygonal digital meshes in DNA that readily produces structures that would be very difficult to realize using previous approaches. The design process is highly automated, using a routeing algorithm based on graph theory and a relaxation simulation that traces scaffold strands through the target structures. Moreover, unlike conventional origami designs built from close-packed helices, our structures have a more open conformation with one helix per edge and are therefore stable under the ionic conditions usually used in biological assays.

  4. DNA rendering of polyhedral meshes at the nanoscale.

    PubMed

    Benson, Erik; Mohammed, Abdulmelik; Gardell, Johan; Masich, Sergej; Czeizler, Eugen; Orponen, Pekka; Högberg, Björn

    2015-07-23

    It was suggested more than thirty years ago that Watson-Crick base pairing might be used for the rational design of nanometre-scale structures from nucleic acids. Since then, and especially since the introduction of the origami technique, DNA nanotechnology has enabled increasingly more complex structures. But although general approaches for creating DNA origami polygonal meshes and design software are available, there are still important constraints arising from DNA geometry and sense/antisense pairing, necessitating some manual adjustment during the design process. Here we present a general method of folding arbitrary polygonal digital meshes in DNA that readily produces structures that would be very difficult to realize using previous approaches. The design process is highly automated, using a routeing algorithm based on graph theory and a relaxation simulation that traces scaffold strands through the target structures. Moreover, unlike conventional origami designs built from close-packed helices, our structures have a more open conformation with one helix per edge and are therefore stable under the ionic conditions usually used in biological assays.

  5. Fast ancestral gene order reconstruction of genomes with unequal gene content.

    PubMed

    Feijão, Pedro; Araujo, Eloi

    2016-11-11

    During evolution, genomes are modified by large-scale structural events, such as rearrangements, deletions or insertions of large blocks of DNA. Of particular interest, in order to better understand how this type of genomic evolution happens, is the reconstruction of ancestral genomes, given a phylogenetic tree with extant genomes at its leaves. One way of solving this problem is to assume a rearrangement model, such as Double Cut and Join (DCJ), and find a set of ancestral genomes that minimizes the number of events on the input tree. Since this problem is NP-hard for most rearrangement models, exact solutions are practical only for small instances, and heuristics have to be used for larger datasets. This type of approach can be called event-based. Another common approach is based on finding conserved structures between the input genomes, such as adjacencies between genes, possibly also assigning weights that indicate a measure of confidence or probability that a particular structure is present in each ancestral genome, and then finding a set of non-conflicting adjacencies that optimize some given function, usually trying to maximize total weight and minimize character changes in the tree. We call this type of method homology-based. In previous work, we proposed an ancestral reconstruction method that combines homology- and event-based ideas, using the concept of intermediate genomes, which arise in DCJ rearrangement scenarios. This method showed a better rate of correctly reconstructed adjacencies than other methods, while also being faster, since the use of intermediate genomes greatly reduces the search space. Here, we generalize the intermediate genome concept to genomes with unequal gene content, extending our method to account for gene insertions and deletions of any length. In many of the simulated datasets, our proposed method had better results than MLGO and MGRA, two state-of-the-art algorithms for ancestral reconstruction with unequal gene content, while running much faster, making it more scalable to larger datasets. Studying ancestral reconstruction problems in a new light, using the concept of intermediate genomes, allows the design of very fast algorithms by greatly reducing the solution search space, while also giving very good results. The algorithms introduced in this paper were implemented in an open-source software package called RINGO (ancestral Reconstruction with INtermediate GenOmes), available at https://github.com/pedrofeijao/RINGO .

  6. Challenges in NMR-based structural genomics

    NASA Astrophysics Data System (ADS)

    Sue, Shih-Che; Chang, Chi-Fon; Huang, Yao-Te; Chou, Ching-Yu; Huang, Tai-huang

    2005-05-01

    Understanding the functions of the vast number of proteins encoded in the many genomes that have recently been completely sequenced is the main challenge for biologists in the post-genomics era. Since the function of a protein is determined by its exact three-dimensional structure, it is paramount to determine the 3D structures of all proteins. This need has driven structural biologists to undertake the structural genomics project, aimed at determining the structures of all known proteins. Several centers for structural genomics studies have been established throughout the world. Nuclear magnetic resonance (NMR) spectroscopy has played a major role in determining protein structures in atomic detail and in a physiologically relevant solution state. Since the number of new genes being discovered daily far exceeds the number of structures determined by both NMR and X-ray crystallography, a high-throughput method for speeding up the process of protein structure determination is essential for the success of the structural genomics effort. In this article we describe NMR methods currently employed for protein structure determination. We also describe methods under development which may drastically increase throughput, as well as point out areas where opportunities exist for biophysicists to make significant contributions in this important field.

  7. Reusable data in public health data-bases-problems encountered in Danish Children's Database.

    PubMed

    Høstgaard, Anna Marie; Pape-Haugaard, Louise

    2012-01-01

    Denmark has unique health informatics databases, e.g. "The Children's Database", which since 2009 has held data on all Danish children from birth until 17 years of age. In the current set-up a number of potential sources of errors exist - both technical and human - which means that the data are flawed. This gives rise to erroneous statistics and makes the data unsuitable for research purposes. In order to make the data usable, it is necessary to develop new methods for validating the data generation process at the municipal/regional/national level. In the present ongoing research project, two research areas are combined, Public Health Informatics and Computer Science, and both ethnographic and system engineering research methods are used. The project is expected to generate new generic methods and knowledge about electronic data collection and transmission in different social contexts and by different social groups, and thus to be of international importance, since this is sparsely documented from the Public Health Informatics perspective. This paper presents the preliminary results, which indicate that the health information technology used ought to be subject to redesign, with a thorough insight into the work practices as the point of departure.

  8. Using comparative genome analysis to identify problems in annotated microbial genomes.

    PubMed

    Poptsova, Maria S; Gogarten, J Peter

    2010-07-01

    Genome annotation is a tedious task that is mostly done by automated methods; however, the accuracy of these approaches has been questioned since the beginning of the sequencing era. Genome annotation is a multilevel process, and errors can emerge at different stages: during sequencing, as a result of gene-calling procedures, and in the process of assigning gene functions. Missed or wrongly annotated genes differentially impact different types of analyses. Here we discuss and demonstrate how the methods of comparative genome analysis can refine annotations by locating missing orthologues. We also discuss possible reasons for errors and show that the second-generation annotation systems, which combine multiple gene-calling programs with similarity-based methods, perform much better than the first annotation tools. Since old errors may propagate to the newly sequenced genomes, we emphasize that the problem of continuously updating popular public databases is an urgent and unresolved one. Due to the progress in genome-sequencing technologies, automated annotation techniques will remain the main approach in the future. Researchers need to be aware of the existing errors in the annotation of even well-studied genomes, such as Escherichia coli, and consider additional quality control for their results.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bonneville, Alain H.; Kouzes, Richard T.

    Imaging subsurface geological formations, oil and gas reservoirs, mineral deposits, cavities or magma chambers under active volcanoes has been for many years a major quest of geophysicists and geologists. Since these objects cannot be observed directly, different indirect geophysical methods have been developed. They are all based on variations of certain physical properties of the subsurface that can be detected from the ground surface or from boreholes. Electrical resistivity, seismic wave velocities and density are certainly the most used properties. If we look at density, indirect estimates of density distributions are currently obtained by seismic reflection methods - since the velocity of seismic waves also depends on density - but they are expensive and discontinuous in time. Direct estimates of density are obtained using gravimetric data, looking at variations of the gravity field induced by density variations at depth, but this is not sufficiently accurate. A new imaging technique using cosmic-ray muon detectors has emerged during the last decade, and muon tomography - or muography - promises to provide, for the first time, a complete and precise image of the density distribution in the subsurface. Further, this novel approach has the potential to become a direct, real-time, and low-cost method for monitoring fluid displacement in subsurface reservoirs.

  10. Curvelet-domain multiple matching method combined with cubic B-spline function

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

    Since the large amount of surface-related multiples present in marine data would seriously influence the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a small number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function to these basis points to reconstruct the matching array; third, build the constrained solving equation based on the relationships between the predicted multiples, the matching coefficients, and the actual data; finally, use the BFGS algorithm to iterate and efficiently solve the sparsely constrained multiple matching problem. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. Applications to synthetic and field data both validate the practicability and validity of the method.
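
    A minimal one-trace sketch of the basis-point idea, assuming NumPy/SciPy: the matching coefficient is parameterized at a few knots, expanded with a cubic spline, and fitted with a quasi-Newton solver (L-BFGS-B standing in for BFGS). The curvelet transform, the L1 soft-thresholding and all field-data details are omitted, and the least-squares misfit is a simplification of the paper's sparse-constrained objective.

      # Sketch: cubic-spline-parameterized matching filter fitted with a quasi-Newton solver.
      import numpy as np
      from scipy.interpolate import CubicSpline
      from scipy.optimize import minimize

      n = 500
      t = np.arange(n)
      rng = np.random.default_rng(0)
      true_scale = 1.0 + 0.3 * np.sin(2 * np.pi * t / n)        # slowly varying amplitude error
      predicted_multiple = rng.standard_normal(n)                # stand-in predicted multiples
      data = true_scale * predicted_multiple + 0.05 * rng.standard_normal(n)

      knots = np.linspace(0, n - 1, 8)                           # a small number of basis points

      def expand(coeffs):
          """Reconstruct the full-length matching coefficient from the knot values."""
          return CubicSpline(knots, coeffs)(t)

      def misfit(coeffs):
          residual = data - expand(coeffs) * predicted_multiple
          return np.sum(residual ** 2)                           # least-squares stand-in for the L1 objective

      result = minimize(misfit, x0=np.ones(len(knots)), method="L-BFGS-B")
      matched = expand(result.x) * predicted_multiple            # multiples to be subtracted from the data
      print("residual energy:", np.sum((data - matched) ** 2))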

  11. Generation of Human Induced Pluripotent Stem Cells Using RNA-Based Sendai Virus System and Pluripotency Validation of the Resulting Cell Population.

    PubMed

    Chichagova, Valeria; Sanchez-Vera, Irene; Armstrong, Lyle; Steel, David; Lako, Majlinda

    2016-01-01

    Human induced pluripotent stem cells (hiPSCs) provide a platform for studying human disease in vitro, increase our understanding of human embryonic development, and provide clinically relevant cell types for transplantation, drug testing, and toxicology studies. Since their discovery, numerous advances have been made in order to eliminate issues such as vector integration into the host genome, low reprogramming efficiency, incomplete reprogramming and acquisition of genomic instabilities. One of the ways to achieve integration-free reprogramming is by using RNA-based Sendai virus. Here we describe a method to generate hiPSCs with Sendai virus in both feeder-free and feeder-dependent culture systems. Additionally, we illustrate methods by which to validate pluripotency of the resulting stem cell population.

  12. Unexpected Listeria monocytogenes detection with a dithiothreitol-based device during an aseptic hip revision.

    PubMed

    Banche, Giuliana; Bistolfi, Alessandro; Allizond, Valeria; Galletta, Claudia; Iannantuoni, Maria Rita; Marra, Elisa Simona; Merlino, Chiara; Massè, Alessandro; Cuffini, Anna Maria

    2018-06-18

    Prosthetic joint infection diagnosis is often difficult since biofilm-embedded microorganisms attach well to prosthetic surfaces and resist detection by conventional methods. DL-dithiothreitol has been described as a valid method for biofilm detachment from orthopedic devices. We report the case of an occasional detection of Listeria monocytogenes in a non-immunocompromised patient with a preoperative diagnosis of aseptic loosening. The infection diagnosis due to such rare bacteria was made postoperatively, thanks to a DL-dithiothreitol-based device. This may be considered a feasible approach for the microbiological analysis of prosthetic joint infection, considering that a prompt diagnosis of such biofilm-associated infections could bring advantages, such as early and appropriate antibiotic therapy administration and a reduction in undiagnosed infections.

  13. Ultra-high speed digital micro-mirror device based ptychographic iterative engine method

    PubMed Central

    Sun, Aihui; He, Xiaoliang; Kong, Yan; Cui, Haoyang; Song, Xiaojun; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2017-01-01

    To reduce the long data acquisition time of the common mechanical-scanning-based Ptychographic Iterative Engine (PIE) technique, a digital micro-mirror device (DMD) is used to form fast scanning illumination on the sample. Since the transverse mechanical scanning in common PIE is replaced by the on/off switching of the micro-mirrors, the data acquisition time can be reduced from more than 15 minutes to less than 20 seconds for recording 12 × 10 diffraction patterns covering the same field of 147.08 mm2. Furthermore, since the fabrication precision of a DMD made with optical lithography is always better than 10 nm (compared with about 1 μm for a mechanical translation stage), the time-consuming position-error-correction procedure is not required in the iterative reconstruction. These two improvements fundamentally speed up both the data acquisition and the reconstruction procedures in PIE and relax its requirements on the stability of the imaging system, thereby remarkably improving its applicability in practice. It is demonstrated experimentally with both a USAF resolution target and a biological sample that a spatial resolution of 5.52 μm and a field of view of 147.08 mm2 can be reached with the DMD-based PIE method. In short, by using the DMD to replace the translation stage, the main shortcomings of common PIE related to mechanical scanning can be effectively overcome, while keeping its advantages of both high resolution and large field of view. PMID:28717560
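
    For orientation, the object update at one illumination position in a PIE/ePIE-style reconstruction can be sketched as below (NumPy assumed). This is a generic textbook-style update, independent of whether the illumination is scanned mechanically or switched with a DMD, and is not the authors' implementation.

      # Sketch of one PIE/ePIE-style update at a single illumination position.
      import numpy as np

      def pie_update(obj, probe, diffraction_amplitude, top, left, alpha=1.0):
          """Update a patch of the complex object estimate using one measured pattern.

          obj                   : complex 2-D object estimate (modified in place)
          probe                 : complex 2-D illumination (same shape as the pattern)
          diffraction_amplitude : measured far-field amplitude (sqrt of intensity)
          top, left             : position of the illuminated patch within the object
          """
          h, w = probe.shape
          patch = obj[top:top + h, left:left + w]
          exit_wave = probe * patch                          # exit wave at this position
          farfield = np.fft.fft2(exit_wave)
          # replace the modulus with the measurement, keep the phase
          farfield = diffraction_amplitude * np.exp(1j * np.angle(farfield))
          corrected = np.fft.ifft2(farfield)
          # ePIE-type object update weighted by the conjugate probe
          patch += alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * (corrected - exit_wave)
          return obj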

  14. On iterative algorithms for quantitative photoacoustic tomography in the radiative transport regime

    NASA Astrophysics Data System (ADS)

    Wang, Chao; Zhou, Tie

    2017-11-01

    In this paper, we present a numerical reconstruction method for quantitative photoacoustic tomography (QPAT) based on the radiative transfer equation (RTE), which models light propagation more accurately than the diffusion approximation (DA). We investigate the reconstruction of the absorption and scattering coefficients of biological tissues. An improved fixed-point iterative method to retrieve the absorption coefficient, given the scattering coefficient, is proposed for its low computational cost; the convergence of this method is also proved. The Barzilai-Borwein (BB) method is applied to retrieve the two coefficients simultaneously. Since the reconstruction of optical coefficients involves the solutions of the original and adjoint RTEs in an optimization framework, an efficient solver with high accuracy is developed from Gao and Zhao (2009 Transp. Theory Stat. Phys. 38 149-92). Simulation experiments illustrate that the improved fixed-point iterative method and the BB method are competitive methods for QPAT in the relevant cases.
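
    A minimal sketch of the basic fixed-point idea for the absorption coefficient, which rests on the photoacoustic relation H = Γ μ_a Φ(μ_a): given the absorbed-energy data H, iterate μ_a ← H / (Γ Φ(μ_a)). The forward fluence solver is a user-supplied placeholder here (solve_fluence is a hypothetical name); in the paper it would be the RTE solver, and the refinements of the improved scheme are omitted.

      # Sketch of the plain fixed-point iteration for the absorption coefficient in QPAT,
      # using H = Gamma * mu_a * Phi(mu_a).  `solve_fluence` is a placeholder for a
      # forward light-transport solver (RTE or diffusion) with scattering fixed.
      import numpy as np

      def fixed_point_absorption(H, solve_fluence, gamma=1.0, mu0=0.01,
                                 n_iter=50, tol=1e-8):
          """Recover mu_a from absorbed-energy data H (scattering assumed known)."""
          mu = np.full_like(H, mu0, dtype=float)
          for _ in range(n_iter):
              phi = solve_fluence(mu)                      # fluence for the current estimate
              mu_new = H / (gamma * np.maximum(phi, 1e-12))
              if np.max(np.abs(mu_new - mu)) < tol:
                  return mu_new
              mu = mu_new
          return mu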

  15. Simplified welding distortion analysis for fillet welding using composite shell elements

    NASA Astrophysics Data System (ADS)

    Kim, Mingyu; Kang, Minseok; Chung, Hyun

    2015-09-01

    This paper presents a simplified welding distortion analysis method to predict the welding deformation of both the plate and the stiffener in fillet welds. Currently, methods based on equivalent thermal strain, such as Strain as Direct Boundary (SDB), are widely used due to their effective prediction of welding deformation. For fillet welding, however, those methods cannot represent the deformation of both members at once, since the temperature degree of freedom is shared at the intersection nodes of the two members. In this paper, we propose a new approach to simulate the deformation of both members. The method can simulate fillet weld deformations by employing composite shell elements and using different thermal expansion coefficients through the thickness direction, with a fixed temperature at the intersection nodes. For verification purposes, we compare results from experiments, 3D thermo-elastic-plastic analysis, the SDB method and the proposed method. Compared with the experimental results, the proposed method can effectively predict welding deformation for fillet welds.

  16. Supercomputing Aspects for Simulating Incompressible Flow

    NASA Technical Reports Server (NTRS)

    Kwak, Dochan; Kris, Cetin C.

    2000-01-01

    The primary objective of this research is to support the design of liquid rocket systems for the Advanced Space Transportation System. Since space launch systems in the near future are likely to rely on liquid rocket engines, increasing the efficiency and reliability of the engine components is an important task. One of the major problems in the liquid rocket engine is to understand the fluid dynamics of fuel and oxidizer flows from the fuel tank to the plume. Understanding the flow through the entire turbo-pump geometry through numerical simulation will be of significant value toward design. One of the milestones of this effort is to develop, apply and demonstrate the capability and accuracy of 3D CFD methods as efficient design analysis tools on high performance computer platforms. The development of the Message Passing Interface (MPI) and Multi Level Parallel (MLP) versions of the INS3D code is currently underway. The serial version of the INS3D code is a multidimensional incompressible Navier-Stokes solver based on overset grid technology. INS3D-MPI is based on explicit message passing across processors and is primarily suited for distributed-memory systems. INS3D-MLP is based on the multi-level parallel method and is suitable for distributed-shared memory systems. For the entire turbo-pump simulations, moving boundary capability and efficient time-accurate integration methods are built into the flow solver. To handle the geometric complexity and moving boundary problems, an overset grid scheme is incorporated with the solver so that new connectivity data are obtained at each time step. The Chimera overlapped grid scheme allows subdomains to move relative to each other and provides great flexibility when the boundary movement creates large displacements. Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow, which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in the present computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  17. Convergence of methods for coupling of microscopic and mesoscopic reaction-diffusion simulations

    NASA Astrophysics Data System (ADS)

    Flegg, Mark B.; Hellander, Stefan; Erban, Radek

    2015-05-01

    In this paper, three multiscale methods for coupling of mesoscopic (compartment-based) and microscopic (molecular-based) stochastic reaction-diffusion simulations are investigated. Two of the three methods that will be discussed in detail have been previously reported in the literature: the two-regime method (TRM) and the compartment-placement method (CPM). The third method that is introduced and analysed in this paper is called the ghost cell method (GCM), since it works by constructing a "ghost cell" in which molecules can disappear and jump into the compartment-based simulation. A comparison of sources of error is presented. The convergence properties of this error are studied as the time step Δt (for updating the molecular-based part of the model) approaches zero. It is found that the error behaviour depends on another fundamental computational parameter h, the compartment size in the mesoscopic part of the model. Two important limiting cases, which appear in applications, are considered: (i) Δt → 0 with h fixed; (ii) Δt → 0 and h → 0 such that √(Δt)/h is fixed. The error for previously developed approaches (the TRM and CPM) converges to zero only in the limiting case (ii), but not in case (i). It is shown that the error of the GCM converges in the limiting case (i). Thus the GCM is superior to previous coupling techniques if the mesoscopic description is much coarser than the microscopic part of the model.

  18. Chemometrics-assisted Spectrofluorimetric Determination of Two Co-administered Drugs of Major Interaction, Methotrexate and Aspirin, in Human Urine Following Acid-induced Hydrolysis.

    PubMed

    Maher, Hadir M; Ragab, Marwa A A; El-Kimary, Eman I

    2015-01-01

    Methotrexate (MTX) is widely used to treat rheumatoid arthritis (RA), mostly along with non-steroidal anti-inflammatory drugs (NSAIDs), the most common of which is aspirin, or acetyl salicylic acid (ASA). Since NSAIDs impair MTX clearance and increase its toxicity, it was necessary to develop a simple and reliable method for monitoring MTX levels in urine samples when co-administered with ASA. The method was based on the spectrofluorimetric measurement of the acid-induced hydrolysis product of MTX, 4-amino-4-deoxy-10-methylpteroic acid (AMP), along with the strongly fluorescent salicylic acid (SA), a product of acid-induced hydrolysis of aspirin and its metabolites in urine. The overlapping emission spectra were resolved using the derivative method (D method). In addition, the corresponding derivative emission spectra were convoluted using discrete Fourier functions, 8-point sin(xi) polynomials (D/FF method), for better elimination of interferences. Validation of the developed methods was carried out according to the ICH guidelines. Moreover, the data obtained using derivative and convoluted derivative spectra were treated using the non-parametric Theil's method (NP) and compared with the least-squares parametric regression method (LSP). The results treated with Theil's method were more accurate and precise compared with LSP, since the former is less affected by outliers. This work offers the potential of both derivative and convolution using discrete Fourier functions, in addition to the effectiveness of using NP regression analysis of the data. The high sensitivity obtained by the proposed methods is promising for measuring low concentration levels of the two drugs in urine samples. These methods were efficiently used to measure the drugs in human urine samples following their co-administration.
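
    The non-parametric Theil (Theil-Sen) regression referred to above takes the slope as the median of all pairwise slopes, which is why outliers influence it far less than a least-squares fit. A minimal sketch, assuming NumPy (not the authors' code):

      # Theil's non-parametric regression: slope = median of all pairwise slopes,
      # intercept = median of (y - slope * x).  Outliers shift a median far less
      # than they shift a least-squares fit.
      import numpy as np

      def theil_fit(x, y):
          x, y = np.asarray(x, float), np.asarray(y, float)
          slopes = [(y[j] - y[i]) / (x[j] - x[i])
                    for i in range(len(x)) for j in range(i + 1, len(x))
                    if x[j] != x[i]]
          slope = np.median(slopes)
          intercept = np.median(y - slope * x)
          return slope, intercept

      # Example: one gross outlier barely moves the Theil fit.
      x = np.arange(10.0)
      y = 2.0 * x + 1.0
      y[7] += 30.0                      # outlier
      print(theil_fit(x, y))            # close to (2.0, 1.0)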

  19. The BMPix and PEAK Tools: New Methods for Automated Laminae Recognition and Counting - Application to Glacial Varves From Antarctic Marine Sediment

    NASA Astrophysics Data System (ADS)

    Weber, M. E.; Reichelt, L.; Kuhn, G.; Thurow, J. W.; Ricken, W.

    2009-12-01

    We present software-based tools for rapid and quantitative detection of sediment lamination. The BMPix tool extracts color and gray-scale curves from images at ultrahigh (pixel) resolution. The PEAK tool uses the gray-scale curve and performs, for the first time, fully automated counting of laminae based on three methods. The maximum count algorithm counts every bright peak of a couplet of two laminae (annual resolution) in a Gaussian-smoothed gray-scale curve. The zero-crossing algorithm counts every positive and negative halfway passage of the gray-scale curve through a wide moving average, so that the record is separated into bright and dark intervals (seasonal resolution). The same is true for the frequency truncation method, which uses Fourier transformation to decompose the gray-scale curve into its frequency components before positive and negative passages are counted. We applied the new methods successfully to tree rings and to well-dated, manually counted marine varves from Saanich Inlet before adapting the tools to the rather complex marine laminae from the Antarctic continental margin. In combination with AMS 14C dating, we found convincing evidence that the laminations from three Weddell Sea sites represent true varves that were deposited on sediment ridges over several millennia during the last glacial maximum (LGM). There are apparently two seasonal layers of terrigenous composition: a coarser-grained bright layer and a finer-grained dark layer. The new tools offer several advantages over previous approaches. The counting procedures are based on a moving average generated from gray-scale curves instead of manual counting; hence, results are highly objective and rely on reproducible mathematical criteria. Since PEAK associates counts with a specific depth, the thickness of each year or each season is also measured, which is an important prerequisite for later spectral analysis. Since all information required to conduct the analysis is displayed graphically, interactive optimization of the counting algorithms can be achieved quickly and conveniently.
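
    A minimal sketch of the first two counting ideas on a synthetic gray-scale curve, assuming NumPy/SciPy: peak counting after Gaussian smoothing (maximum count) and crossings of a wide moving average (zero crossing). The synthetic signal and all parameter values are illustrative only, not the PEAK tool itself.

      # Sketch of the "maximum count" and "zero-crossing" ideas on a synthetic
      # gray-scale curve: count bright peaks after Gaussian smoothing, and count
      # crossings of the curve through a wide moving average.
      import numpy as np
      from scipy.ndimage import gaussian_filter1d, uniform_filter1d
      from scipy.signal import find_peaks

      rng = np.random.default_rng(1)
      depth = np.arange(3000)
      gray = np.sin(2 * np.pi * depth / 30) + 0.3 * rng.standard_normal(depth.size)  # ~100 couplets

      smoothed = gaussian_filter1d(gray, sigma=3)
      peaks, _ = find_peaks(smoothed, prominence=0.5)      # one bright peak per couplet (annual count)
      print("maximum-count estimate:", len(peaks))

      baseline = uniform_filter1d(gray, size=301)          # wide moving average
      signed = np.sign(smoothed - baseline)
      crossings = np.sum(signed[1:] != signed[:-1])        # each couplet gives ~2 crossings (seasonal count)
      print("zero-crossing estimate (couplets):", crossings // 2)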

  20. Model-based coefficient method for calculation of N leaching from agricultural fields applied to small catchments and the effects of leaching reducing measures

    NASA Astrophysics Data System (ADS)

    Kyllmar, K.; Mårtensson, K.; Johnsson, H.

    2005-03-01

    A method to calculate N leaching from arable fields using model-calculated N leaching coefficients (NLCs) was developed. Using the process-based modelling system SOILNDB, leaching of N was simulated for four leaching regions in southern Sweden with 20-year climate series and a large number of randomised crop sequences based on regional agricultural statistics. To obtain N leaching coefficients, mean values of annual N leaching were calculated for each combination of main crop, following crop and fertilisation regime for each leaching region and soil type. The field-NLC method developed could be useful for following up water quality goals in, for example, small monitoring catchments, since it allows normal leaching from actual crop rotations and fertilisation to be determined regardless of the weather. The method was tested using field data from nine small, intensively monitored agricultural catchments. The agreement between calculated field N leaching and measured N transport in catchment stream outlets, 19-47 and 8-38 kg ha-1 yr-1, respectively, was satisfactory in most catchments when contributions from land uses other than arable land and uncertainties in groundwater flows were considered. The possibility of calculating effects of crop combinations (crop and following crop) is of considerable value, since changes in crop rotation constitute a large potential for reducing N leaching. When the effect of a number of potential measures to reduce N leaching (i.e. applying manure in spring instead of autumn; postponing ploughing-in of ley and green fallow in autumn; undersowing a catch crop in cereals and oilseeds; and increasing the area of catch crops by substituting winter cereals and winter oilseeds with corresponding spring crops) was calculated for the arable fields in the catchments using field-NLCs, N leaching was reduced by between 34 and 54% for the separate catchments when the best possible effect on the entire potential area was assumed.
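
    Schematically, the field-NLC calculation reduces to an area-weighted sum of pre-computed leaching coefficients over the arable fields of a catchment. The sketch below uses entirely made-up coefficient values, crop combinations and field areas, purely to show the bookkeeping; it is not the SOILNDB coefficient set.

      # Sketch: catchment-scale N leaching as an area-weighted sum of pre-computed
      # leaching coefficients (NLCs).  The coefficient table is illustrative only.
      nlc = {  # (crop, following_crop, fertilisation, soil) -> kg N per ha per year
          ("spring barley", "winter wheat", "mineral", "sand"): 35.0,
          ("winter wheat", "ley", "manure autumn", "sand"): 28.0,
          ("ley", "spring barley", "mineral", "clay"): 12.0,
      }

      fields = [  # (area in ha, key into the NLC table)
          (12.0, ("spring barley", "winter wheat", "mineral", "sand")),
          (8.5, ("winter wheat", "ley", "manure autumn", "sand")),
          (20.0, ("ley", "spring barley", "mineral", "clay")),
      ]

      total_load = sum(area * nlc[key] for area, key in fields)   # kg N per year
      total_area = sum(area for area, _ in fields)
      print("catchment leaching: %.1f kg N/yr (%.1f kg/ha/yr)" % (total_load, total_load / total_area))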

  1. Recent developments of nano-structured materials as the catalysts for oxygen reduction reaction

    NASA Astrophysics Data System (ADS)

    Kang, SungYeon; Kim, HuiJung; Chung, Yong-Ho

    2018-04-01

    The development of highly efficient electrocatalyst materials has been a significant research topic for decades. Recent global interest in energy conversion and storage has led to expanded efforts to find cost-effective catalysts that can substitute for conventional catalytic materials. In the field of fuel cells in particular, novel materials for the oxygen reduction reaction (ORR) have attracted attention as a way to overcome the disadvantages of conventional platinum-based catalysts. Various approaches have been attempted to achieve low cost and high electrochemical activity comparable with Pt-based catalysts, including reducing Pt consumption through the formation of hybrid materials, Pt-based alloys, and non-Pt metal or carbon-based materials. To enhance catalytic performance and stability, numerous methods such as structural modifications and the formation of complexes with other functional materials have been proposed; they are basically based on well-defined and well-ordered catalytic active sites obtained by exquisite control at the nanoscale. In this review, we highlight the development of nano-structured catalytic materials for the ORR based on recent findings and discuss an outlook for the direction of future research.

  2. Semi-regular remeshing based trust region spherical geometry image for 3D deformed mesh used MLWNN

    NASA Astrophysics Data System (ADS)

    Dhibi, Naziha; Elkefi, Akram; Bellil, Wajdi; Ben Amar, Chokri

    2017-03-01

    Triangular surfaces are now widely used for modeling three-dimensional objects. Since these models have very high resolution and the geometry of the mesh is often very dense, it is necessary to remesh such objects to reduce their complexity, and the mesh quality (connectivity regularity) must be improved. In this paper, we review the main semi-regular remeshing methods of the state of the art, given that semi-regular remeshing is mainly relevant for wavelet-based compression. We then present our remeshing method based on a trust-region spherical geometry image, yielding a good 3D mesh compression scheme used to deform 3D meshes based on a Multi-library Wavelet Neural Network (MLWNN) structure. Experimental results show that the progressive remeshing algorithm is capable of obtaining more compact representations and semi-regular objects, and yields efficient compression capabilities with a minimal set of features, providing a good 3D deformation scheme.

  3. Interface traps contribution on transport mechanisms under illumination in metal-oxide-semiconductor structures based on silicon nanocrystals

    NASA Astrophysics Data System (ADS)

    Chatbouri, S.; Troudi, M.; Kalboussi, A.; Souifi, A.

    2018-02-01

    The transport phenomena in metal-oxide-semiconductor (MOS) structures having silicon nanocrystals (Si-NCs) inside the dielectric layer have been investigated, in dark conditions and under visible illumination. First, using deep-level transient spectroscopy (DLTS), we find the presence of a series of electron traps having very close energy levels (between 0.28 and 0.45 eV) for our devices (with and without Si-NCs), and a single peak, related to the Si-NC DLTS response, appears at low temperature only for the MOS with Si-NCs. In dark conditions, the conduction mechanism for the MOS without Si-NCs is dominated by fast thermionic emission/capture of charge carriers from the highly doped polysilicon layer to the Si substrate through interface trap states. For the MOS with Si-NCs, tunneling of charge carriers from the highly doped poly-Si to the Si substrate through a trapping/detrapping mechanism in the Si-NCs contributes to the conduction mechanism at low temperature. The light effect on the transport mechanisms has been investigated using current-voltage (I-V) and high-frequency capacitance-voltage (C-V) methods. We observed the photoactive trap effect in the inversion zone at room temperature in the I-V characteristics, which confirms the contribution of photo-generated charge to the transport mechanisms from the highly doped poly-Si to the Si substrate through the photo-trapping/detrapping mechanism in the Si-NCs and interface trap levels. These results were confirmed by an increase of about 10 pF in the capacitance values of the C-V characteristics of the MOS with Si-NCs, in the inversion region, at high reverse voltage under photoexcitation at low temperature. These results are helpful for understanding the principle of charge transport, in dark conditions and under illumination, of MOS structures having Si-NCs in the SiOx (x = 1.5) oxide matrix.

  4. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.

  5. On Inertial Body Tracking in the Presence of Model Calibration Errors

    PubMed Central

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-01-01

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments—the IMU-to-segment calibrations, subsequently called I2S calibrations—to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and segment length errors in the tested ranges. Errors in the I2S orientations were, however, linearly propagated into the estimated segment orientations. In the absence of magnetic disturbances, severe model calibration errors and fast motion changes, the newly developed IMU centered EKF-based method yielded comparable results with lower computational complexity. PMID:27455266

  6. A comparison of viscoelastic damping models

    NASA Technical Reports Server (NTRS)

    Slater, Joseph C.; Belvin, W. Keith; Inman, Daniel J.

    1993-01-01

    Modern finite element methods (FEMs) enable the precise modeling of mass and stiffness properties in what were in the past overwhelmingly large and complex structures. These models allow the accurate determination of natural frequencies and mode shapes. However, adequate methods for modeling highly damped and highly frequency-dependent structures did not exist until recently. The most commonly used method, Modal Strain Energy, does not correctly predict complex mode shapes since it is based on the assumption that the mode shapes of a structure are real. Recently, many techniques have been developed which allow the modeling of frequency-dependent damping properties of materials in a finite element compatible form. Two of these methods, the Golla-Hughes-McTavish method and the Lesieutre-Mingori method, model the frequency-dependent effects by adding coordinates to the existing system, thus maintaining the linearity of the model. The third model, proposed by Bagley and Torvik, is based on the Fractional Calculus method and requires fewer empirical parameters to model the frequency dependence at the expense of linearity of the governing equations. This work examines the Modal Strain Energy, Golla-Hughes-McTavish and Bagley and Torvik models and compares them to determine the plausibility of using them for modeling viscoelastic damping in large structures.

  7. Estimating uncertainty in respondent-driven sampling using a tree bootstrap method.

    PubMed

    Baraff, Aaron J; McCormick, Tyler H; Raftery, Adrian E

    2016-12-20

    Respondent-driven sampling (RDS) is a network-based form of chain-referral sampling used to estimate attributes of populations that are difficult to access using standard survey tools. Although it has grown quickly in popularity since its introduction, the statistical properties of RDS estimates remain elusive. In particular, the sampling variability of these estimates has been shown to be much higher than previously acknowledged, and even methods designed to account for RDS result in misleadingly narrow confidence intervals. In this paper, we introduce a tree bootstrap method for estimating uncertainty in RDS estimates based on resampling recruitment trees. We use simulations from known social networks to show that the tree bootstrap method not only outperforms existing methods but also captures the high variability of RDS, even in extreme cases with high design effects. We also apply the method to data from injecting drug users in Ukraine. Unlike other methods, the tree bootstrap depends only on the structure of the sampled recruitment trees, not on the attributes being measured on the respondents, so correlations between attributes can be estimated as well as variability. Our results suggest that it is possible to accurately assess the high level of uncertainty inherent in RDS.
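
    A minimal sketch of the tree bootstrap idea as described above, using only the structure of the recruitment trees: seeds are resampled with replacement, and within each sampled tree every respondent's recruits are resampled with replacement. The toy recruitment forest is illustrative, and this is not the authors' implementation.

      # Sketch of a tree bootstrap over RDS recruitment trees.
      import random

      def bootstrap_tree(node, children):
          """Return one bootstrap replicate (a list of respondent ids) rooted at `node`."""
          sample = [node]
          kids = children.get(node, [])
          if kids:
              for child in random.choices(kids, k=len(kids)):   # resample recruits with replacement
                  sample.extend(bootstrap_tree(child, children))
          return sample

      def bootstrap_forest(seeds, children):
          replicate = []
          for seed in random.choices(seeds, k=len(seeds)):      # resample seeds with replacement
              replicate.extend(bootstrap_tree(seed, children))
          return replicate

      # Toy recruitment structure: two seeds, each recruiting a few respondents.
      children = {"s1": ["a", "b"], "a": ["c"], "s2": ["d"], "d": ["e", "f"]}
      print(bootstrap_forest(["s1", "s2"], children))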

  8. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    PubMed

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and in reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs, and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
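
    A minimal sketch of two ingredients named above, assuming NumPy/SciPy: the covariance (SPD) descriptor of a multichannel EEG epoch and the log-Euclidean Gaussian kernel between two descriptors, k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2σ^2)). The kernel sparse-coding and classification stages are omitted, and the parameter values are illustrative.

      # Covariance descriptors of multichannel epochs and the log-Euclidean Gaussian kernel.
      import numpy as np
      from scipy.linalg import logm

      def covariance_descriptor(epoch, reg=1e-6):
          """epoch: (n_channels, n_samples) -> regularized SPD covariance matrix."""
          c = np.cov(epoch)
          return c + reg * np.eye(c.shape[0])

      def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
          d = np.linalg.norm(logm(X) - logm(Y), ord="fro")   # log-Euclidean distance
          return np.exp(-d ** 2 / (2 * sigma ** 2))

      rng = np.random.default_rng(0)
      epoch_a = rng.standard_normal((8, 512))      # 8 channels, 512 samples (synthetic)
      epoch_b = rng.standard_normal((8, 512))
      K = log_euclidean_gaussian_kernel(covariance_descriptor(epoch_a),
                                        covariance_descriptor(epoch_b))
      print("kernel value:", K)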

  9. Advanced signal processing methods applied to guided waves for wire rope defect detection

    NASA Astrophysics Data System (ADS)

    Tse, Peter W.; Rostami, Javad

    2016-02-01

    Steel wire ropes, which are usually composed of a polymer core enclosed by twisted wires, are used to hoist heavy loads. These loads are different structures such as clamshells, draglines and elevators. Since the loading of these structures is dynamic, the ropes work under fluctuating forces in a corrosive environment. This consequently leads to progressive loss of the metallic cross-section due to abrasion and corrosion. These defects can be seen in the form of roughened and pitted rope surfaces, reduction in diameter, and broken wires. Therefore, their deterioration must be monitored so that any unexpected damage or corrosion can be detected before it causes a fatal accident. This is of vital importance in the case of passenger transportation, particularly in elevators, in which any failure may cause a catastrophic disaster. At present, the widely used methods for thorough inspection of wire ropes include visual inspection and magnetic flux leakage (MFL). The reliability of the first method is questionable since it depends only on the operator's eyes, which fail to determine the integrity of internal wires. The latter method has the drawback of being a point-by-point and time-consuming inspection method. Ultrasonic guided wave (UGW) based inspection, which has proved its capability in inspecting plate-like structures such as tubes and pipes, can monitor the cross-section of wire ropes along their entire length from a single point. However, UGW have drawn less attention for defect detection in wire ropes. This paper reports the condition monitoring of a steel wire rope from a hoisting elevator with broken wires resulting from a corrosive environment and fatigue. Experiments were conducted to investigate the efficiency of using magnetostrictive-based UGW for rope defect detection. The obtained signals were analyzed by two time-frequency representation (TFR) methods, namely the Short Time Fourier Transform (STFT) and wavelet analysis. The location of the defect and its severity were successfully identified and characterized.
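
    As a pointer to the first of the two time-frequency representations mentioned above, a minimal STFT sketch on a synthetic tone burst, assuming NumPy/SciPy; the sampling rate, centre frequency and envelope are illustrative, not taken from the reported experiments.

      # Short-time Fourier transform of a synthetic tone burst, the kind of
      # time-frequency map used to localize defect echoes in guided-wave signals.
      import numpy as np
      from scipy.signal import stft

      fs = 1_000_000                                   # 1 MHz sampling (illustrative)
      t = np.arange(0, 2e-3, 1 / fs)
      burst = np.sin(2 * np.pi * 60e3 * t) * np.exp(-((t - 1e-3) ** 2) / (2 * (5e-5) ** 2))

      f, tau, Z = stft(burst, fs=fs, nperseg=256)
      peak_bin = np.unravel_index(np.argmax(np.abs(Z)), Z.shape)
      print("energy peak near %.0f kHz at %.2f ms" % (f[peak_bin[0]] / 1e3, tau[peak_bin[1]] * 1e3))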

  10. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

    Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present image-based, markerless 3D tracking of different da Vinci instruments in near real time without an explicit model. The method segments the instrument tip based on several visual cues, calculates a tip point, and uses a multiple-object particle filter for tracking. Accuracy and robustness are evaluated with in vivo data.
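
    The sketch below is a generic single-target bootstrap particle filter in the image plane, meant only to illustrate the predict/update/resample loop; the random-walk motion model, Gaussian measurement model, noise levels, and image size are assumptions, and the paper itself uses a multiple-object filter driven by richer visual cues.

      import numpy as np

      rng = np.random.default_rng(1)

      def particle_filter_step(particles, weights, measurement,
                               motion_std=3.0, meas_std=5.0):
          # Predict: random-walk motion model in pixel coordinates.
          particles = particles + rng.normal(0.0, motion_std, particles.shape)
          # Update: weight by a Gaussian likelihood of the segmented tip point.
          d2 = np.sum((particles - measurement) ** 2, axis=1)
          weights = weights * np.exp(-d2 / (2.0 * meas_std ** 2))
          weights /= weights.sum()
          # Resample (multinomial) to counter weight degeneracy.
          idx = rng.choice(len(particles), size=len(particles), p=weights)
          return particles[idx], np.full(len(particles), 1.0 / len(particles))

      # Example: 500 particles converging on a tip detection near pixel (320, 240).
      particles = rng.uniform(0, 640, size=(500, 2))
      weights = np.full(500, 1.0 / 500)
      for _ in range(10):
          particles, weights = particle_filter_step(particles, weights,
                                                    np.array([320.0, 240.0]))
      print(particles.mean(axis=0))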

  11. 4-channels coherent perfect absorption (CPA)-type demultiplexer using plasmonic nano spheres

    NASA Astrophysics Data System (ADS)

    Soltani, Mohamadreza; Keshavarzi, Rasul

    2017-10-01

    The current research presents a compact, nanoscale four-channel plasmonic demultiplexer composed of eight coherent perfect absorption (CPA)-type filters. The operating principle is based on the absorbable formation of a conductive path in the dielectric layer of a plasmonic nanosphere waveguide. Since the CPA efficiency depends strongly on the number and locations of the plasmonic nanospheres, an efficient binary optimization method based on the particle swarm optimization (PSO) algorithm is used to design an optimized nanosphere array that achieves the maximum absorption coefficient in the 'off' state.
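
    To illustrate only the optimization layer, the following binary PSO sketch uses a placeholder fitness function; the real objective would be the simulated CPA absorption coefficient of a candidate nanosphere arrangement, and the site count, swarm size, and coefficients here are assumptions rather than values from the paper.

      import numpy as np

      rng = np.random.default_rng(2)

      def absorption(bits):
          # Placeholder fitness; the real objective would be the simulated CPA
          # absorption coefficient of the candidate nanosphere arrangement.
          return -abs(int(bits.sum()) - 12)   # toy target: exactly 12 occupied sites

      def binary_pso(n_sites=32, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
          x = rng.integers(0, 2, size=(n_particles, n_sites))      # 1 = sphere present
          v = rng.normal(0.0, 1.0, size=(n_particles, n_sites))
          pbest = x.copy()
          pbest_val = np.array([absorption(p) for p in x])
          gbest = pbest[pbest_val.argmax()].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              # Sigmoid transfer function turns velocities into bit probabilities.
              x = (rng.random(x.shape) < 1.0 / (1.0 + np.exp(-v))).astype(int)
              vals = np.array([absorption(p) for p in x])
              improved = vals > pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[pbest_val.argmax()].copy()
          return gbest

      print(binary_pso().sum())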

  12. Capturing the vital vascular fingerprint with optical coherence tomography

    PubMed Central

    Liu, Gangjun; Chen, Zhongping

    2014-01-01

    Using fingerprints to identify an individual has been accepted in forensics since the nineteenth century, and the fingerprint has become one of the most widely used biometric characteristics. Most modern fingerprint recognition systems are based on the print pattern of the finger surface and are not robust against spoofing attacks. We demonstrate a novel vital vascular fingerprint system using Doppler optical coherence tomography that provides highly sensitive and reliable personal identification. Because the system is based on blood flow, which only exists in a living person, the technique is robust against spoofing attacks. PMID:23913068

  13. Stereoscopic display technologies for FHD 3D LCD TV

    NASA Astrophysics Data System (ADS)

    Kim, Dae-Sik; Ko, Young-Ji; Park, Sang-Moo; Jung, Jong-Hoon; Shestak, Sergey

    2010-04-01

    Stereoscopic display technologies have been developed as a form of advanced display, and many TV manufacturers have been pursuing the commercialization of 3D TV. We have been developing 3D TVs based on LCD panels with an LED backlight unit (BLU) since Samsung launched the world's first 3D TV based on PDP. However, the panel's data scanning and the liquid crystal response characteristics of LCD TVs cause interference among frames (i.e., crosstalk), which degrades 3D video quality. We propose a method to reduce crosstalk through LCD driving and backlight control in FHD 3D LCD TVs.

  14. A method for analyzing the financial viability of a rural provider-based geriatric clinic.

    PubMed

    McAtee, Robin E; Beverly, Claudia J

    2005-01-01

    Little is known about the financial impact of rural provider-based geriatric outpatient clinics on their parent hospitals since the implementation of the outpatient prospective payment system. In this study, systems theory was used to develop a methodology for determining the financial viability of one such clinic in a rural hospital, using data commonly found in rural hospital financial systems. Formulas were developed to assess overall financial viability, and a case-study model was used to test them; however, the hospital did not track a key data element, resulting in an incomplete analysis.

  15. The effectiveness of the liquid-based preparation method in cerebrospinal fluid cytology.

    PubMed

    Argon, Asuman; Uyaroğlu, Mehmet Ali; Nart, Deniz; Veral, Ali; Kitapçıoğlu, Gül

    2013-01-01

    Since malignant cells were first detected in cerebrospinal fluid (CSF), numerous methods have been used for CSF examination; cytocentrifugation and liquid-based cytology (LBC) are two of these. We aimed to investigate whether the results of the LBC method differed from the cytological diagnoses of CSF materials prepared using the cytocentrifugation method. A retrospective analysis was conducted using the pathological records of 3,491 cytological CSF specimens (cytocentrifugation for 1,306 and LBC for 2,185) diagnosed over a 4-year period between January 2007 and December 2011. The Fisher exact test was used to compare the results of the LBC and cytocentrifugation methods. While there was a noticeable decrease in nondiagnostic diagnoses and a slight decrease in suspicious diagnoses, there was an increase in malignant and benign diagnoses with the LBC method compared with the centrifugation method. The decrease in nondiagnostic diagnoses was statistically significant (p < 0.0001). The LBC method appears to be a better option than the cytocentrifugation method because of its many preparatory, screening, and diagnostic advantages, especially in pathology departments where materials arrive from distant sites and large volumes are examined. Copyright © 2013 S. Karger AG, Basel.
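
    For context on the statistical comparison only, the sketch below runs a Fisher exact test on a 2x2 table of nondiagnostic versus diagnostic outcomes by preparation method; the counts are invented for illustration and are not the study's data.

      from scipy.stats import fisher_exact

      #          nondiagnostic  diagnostic   (hypothetical counts, not the study's data)
      table = [[40, 1266],    # cytocentrifugation
               [20, 2165]]    # liquid-based cytology
      odds_ratio, p_value = fisher_exact(table, alternative='two-sided')
      print(odds_ratio, p_value)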

  16. A machine-learning approach for predicting palmitoylation sites from integrated sequence-based features.

    PubMed

    Li, Liqi; Luo, Qifa; Xiao, Weidong; Li, Jinhui; Zhou, Shiwen; Li, Yongsheng; Zheng, Xiaoqi; Yang, Hua

    2017-02-01

    Palmitoylation is the covalent attachment of lipids to amino acid residues in proteins. As an important form of protein posttranslational modification, it increases the hydrophobicity of proteins, which contributes to protein transport, organelle localization, and function, and therefore plays an important role in a variety of cell biological processes. Identification of palmitoylation sites is necessary for understanding protein-protein interactions, protein stability, and activity. Since conventional experimental techniques for determining palmitoylation sites in proteins are both labor-intensive and costly, a fast and accurate computational approach for predicting palmitoylation sites from protein sequences is urgently needed. In this study, a support vector machine (SVM)-based method was proposed by integrating the PSI-BLAST profile, physicochemical properties, [Formula: see text]-mer amino acid compositions (AACs), and [Formula: see text]-mer pseudo-AACs into the principal feature vector. A recursive feature selection scheme was then implemented to single out the most discriminative features. Finally, an SVM was used to predict palmitoylation sites in proteins based on the optimal features. The proposed method achieved an accuracy of 99.41% and a Matthews correlation coefficient of 0.9773 on a benchmark dataset. These results indicate the efficiency and accuracy of our method in predicting palmitoylation sites from protein sequences.
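
    A minimal scikit-learn sketch of this kind of pipeline is shown below; the synthetic feature matrix, feature counts, and hyperparameters are placeholders standing in for the PSI-BLAST, physicochemical, and composition features described above, not the authors' implementation.

      import numpy as np
      from sklearn.svm import SVC, LinearSVC
      from sklearn.feature_selection import RFE
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(3)
      X = rng.standard_normal((400, 120))   # 400 candidate sites, 120 stacked features
      y = rng.integers(0, 2, 400)           # 1 = palmitoylation site, 0 = non-site

      # Recursive feature elimination with a linear SVM ranks features; an RBF SVM
      # then classifies on the retained subset.
      model = make_pipeline(
          RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=40),
          SVC(kernel='rbf', C=1.0, gamma='scale'),
      )
      print(cross_val_score(model, X, y, cv=5).mean())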

  17. Change detection from synthetic aperture radar images based on neighborhood-based ratio and extreme learning machine

    NASA Astrophysics Data System (ADS)

    Gao, Feng; Dong, Junyu; Li, Bo; Xu, Qizhi; Xie, Cui

    2016-10-01

    Change detection is of high practical value for hazard assessment, crop growth monitoring, and urban sprawl detection. A synthetic aperture radar (SAR) image is an ideal information source for change detection since it is independent of atmospheric and sunlight conditions. Existing SAR image change detection methods usually generate a difference image (DI) first and use clustering to classify the pixels of the DI into changed and unchanged classes, so some useful information may be lost in the DI generation process. This paper proposes a SAR image change detection method based on the neighborhood-based ratio (NR) and an extreme learning machine (ELM). The NR operator is used to obtain pixels of interest that have a high probability of being changed or unchanged. Image patches centered at these pixels are then generated, and an ELM model is trained on these patches. Finally, the pixels in both original SAR images are classified by the pretrained ELM model, and the preclassification result and the ELM classification result are combined to form the final change map. Experimental results obtained on three real SAR image datasets and one simulated dataset show that the proposed method is robust to speckle noise and effective in detecting change information in multitemporal SAR images.
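
    As a rough sketch of the preclassification idea, the snippet below builds a ratio-style difference image from local neighborhood means, one common speckle-robust variant and not necessarily the exact NR operator defined in the paper; the window size and synthetic images are assumptions.

      import numpy as np
      from scipy.ndimage import uniform_filter

      def neighborhood_ratio_image(img1, img2, size=3, eps=1e-8):
          # Per-pixel ratio of local neighborhood means; values near 0 suggest an
          # unchanged pixel, values near 1 a changed one.
          m1 = uniform_filter(img1.astype(float), size=size) + eps
          m2 = uniform_filter(img2.astype(float), size=size) + eps
          return 1.0 - np.minimum(m1, m2) / np.maximum(m1, m2)

      # Example on two synthetic speckled intensity images with one changed square.
      rng = np.random.default_rng(4)
      a = rng.gamma(shape=4.0, scale=25.0, size=(128, 128))
      b = a * rng.gamma(shape=4.0, scale=0.25, size=(128, 128))   # multiplicative speckle
      b[40:60, 40:60] *= 3.0                                       # simulated change
      di = neighborhood_ratio_image(a, b)
      print(di[50, 50] > di[10, 10])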

  18. Reconstruction of secular variation in seawater sulfate concentrations

    NASA Astrophysics Data System (ADS)

    Algeo, T. J.; Luo, G. M.; Song, H. Y.; Lyons, T. W.; Canfield, D. E.

    2015-04-01

    Long-term secular variation in seawater sulfate concentrations ([SO42-]SW) is of interest owing to its relationship to the oxygenation history of Earth's surface environment. In this study, we develop two complementary approaches for quantification of sulfate concentrations in ancient seawater and test their application to late Neoproterozoic (635 Ma) to Recent marine units. The "rate method" is based on two measurable parameters of paleomarine systems: (1) the S-isotope fractionation associated with microbial sulfate reduction (MSR), as proxied by Δ34SCAS-PY, and (2) the maximum rate of change in seawater sulfate, as proxied by ∂δ34SCAS/∂t(max). The "MSR-trend method" is based on the empirical relationship of Δ34SCAS-PY to aqueous sulfate concentrations in 81 modern depositional systems. For a given paleomarine system, the rate method yields an estimate of maximum possible [SO42-]SW (although results are dependent on assumptions regarding the pyrite burial flux, FPY), and the MSR-trend method yields an estimate of mean [SO42-]SW. An analysis of seawater sulfate concentrations since 635 Ma suggests that [SO42-]SW was low during the late Neoproterozoic (<5 mM), rose sharply across the Ediacaran-Cambrian boundary (~5-10 mM), and rose again during the Permian (~10-30 mM) to levels that have varied only slightly since 250 Ma. However, Phanerozoic seawater sulfate concentrations may have been drawn down to much lower levels (~1-4 mM) during short (<~2 Myr) intervals of the Cambrian, Early Triassic, Early Jurassic, and Cretaceous as a consequence of widespread ocean anoxia, intense MSR, and pyrite burial. The procedures developed in this study offer potential for future high-resolution quantitative analyses of paleo-seawater sulfate concentrations.
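
    One way to write the mass-balance logic behind such a rate method is sketched below; this is the standard isotopic box-model argument stated under assumptions, not an equation reproduced from the paper, with M_SO4 denoting the seawater sulfate reservoir and FPY the pyrite burial flux.

      \[
        \left(\frac{\partial \delta^{34}\mathrm{S}_{\mathrm{CAS}}}{\partial t}\right)_{\max}
          \approx \frac{F_{\mathrm{PY}}\,\Delta^{34}\mathrm{S}_{\mathrm{CAS\text{-}PY}}}{M_{\mathrm{SO_4}}}
        \quad\Longrightarrow\quad
        M_{\mathrm{SO_4}} \lesssim
          \frac{F_{\mathrm{PY}}\,\Delta^{34}\mathrm{S}_{\mathrm{CAS\text{-}PY}}}
               {\left(\partial \delta^{34}\mathrm{S}_{\mathrm{CAS}}/\partial t\right)_{\max}}
      \]

    This is consistent with the statement above that the rate method returns a maximum possible [SO42-]SW whose value scales with the assumed pyrite burial flux FPY.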

  19. HEAT TREATMENT OF ELECTROPLATED URANIUM

    DOEpatents

    Hoglund, P.F.

    1958-07-01

    A method is described for improving electroplated coatings on uranium. Such coatings are often porous, and in an effort to remedy this, the coatings are heat treated by immersing the coated specimen in a bath of fused salt or molten metal. Since the base metal, uranium, is an active metal, such a procedure often results in reactions between the base metal and the heating medium. This difficulty can be overcome by using liquid organopolysiloxanes as the heating medium.

  20. Inventory Funding Methods on Navy Ships: NWCF vs. End-Use

    DTIC Science & Technology

    2013-05-30

    Category OPTAR Operating Target OSO Other Supply Officer POM Pre/Post-Overseas Movement R-Supply Relational Supply RoR Reorder Review SAC Service... OSO) transfer. Since end-use ships own their inventory, the supply officer can choose to transfer a part being requested by another ship at their... discretion, based on their ship's anticipated requirements and their own goodwill. OSO transfers among end-use ships do not require a transfer of OPTAR
