Sample records for proposed method aims

  1. New method for characterization of retroreflective materials

    NASA Astrophysics Data System (ADS)

    Junior, O. S.; Silva, E. S.; Barros, K. N.; Vitro, J. G.

    2018-03-01

    The present article proposes a new method of analyzing the properties of retroreflective materials using a goniophotometer. The aim is to establish a higher-resolution test method with a wide range of viewing angles, taking into account a three-dimensional analysis of the retroreflection of the tested material. Validation was performed by collecting data from specimens of materials used in safety clothing and road signs. The results obtained by the proposed method are comparable to those obtained by the normative protocols, representing an advance for the metrology of these materials.

  2. An Exact Model-Based Method for Near-Field Sources Localization with Bistatic MIMO System.

    PubMed

    Singh, Parth Raj; Wang, Yide; Chargé, Pascal

    2017-03-30

    In this paper, we propose an exact model-based method for near-field source localization with a bistatic multiple-input multiple-output (MIMO) radar system and compare it with an approximated model-based method. The aim of this paper is to propose an efficient way to use the exact model of the received signals of near-field sources in order to eliminate the systematic error introduced by the approximated model used in most existing near-field source localization techniques. The proposed method uses parallel factor (PARAFAC) decomposition to deal with the exact model. Thanks to the exact model, the proposed method has better precision and resolution than the compared approximated model-based method. The simulation results show the performance of the proposed method.
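
    A minimal sketch of the central computational tool named in the abstract, PARAFAC (CP) decomposition, assuming the tensorly library; the data cube, its dimensions, and the rank are hypothetical placeholders, not the paper's signal model.

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import parafac

      # Received data from a bistatic MIMO system can be arranged as a
      # transmit x receive x pulse tensor; PARAFAC factors it into per-mode
      # matrices from which location parameters are then estimated.
      rng = np.random.default_rng(0)
      X = tl.tensor(rng.standard_normal((8, 10, 64)))  # hypothetical data cube
      weights, factors = parafac(X, rank=3)            # rank-3 CP decomposition
      A_tx, A_rx, S = factors                          # transmit, receive, signal factors
      print(A_tx.shape, A_rx.shape, S.shape)           # (8, 3) (10, 3) (64, 3)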

  3. Objectification of perceptual image quality for mobile video

    NASA Astrophysics Data System (ADS)

    Lee, Seon-Oh; Sim, Dong-Gyu

    2011-06-01

    This paper presents an objective video quality evaluation method for quantifying the subjective quality of digital mobile video. The proposed method aims to objectify the subjective quality by extracting edgeness and blockiness parameters. To evaluate the performance of the proposed algorithms, we carried out subjective video quality tests with the double-stimulus continuous quality scale method and obtained differential mean opinion score (DMOS) values for 120 mobile video clips. We then compared the performance of the proposed methods with that of existing methods in terms of DMOS on the same clips. Experimental results showed that the proposed methods were approximately 10% better than the edge peak signal-to-noise ratio of the J.247 method in terms of the Pearson correlation.

  4. Target deception jamming method against spaceborne synthetic aperture radar using electromagnetic scattering

    NASA Astrophysics Data System (ADS)

    Sun, Qingyang; Shu, Ting; Tang, Bin; Yu, Wenxian

    2018-01-01

    A method is proposed to perform target deception jamming against spaceborne synthetic aperture radar. Compared with traditional jamming methods that use deception templates to cover the target or region of interest, the proposed method aims to generate a verisimilar deceptive target in various attitudes with high fidelity using electromagnetic (EM) scattering. Based on the geometrical model for target deception jamming, the EM scattering data of the deceptive target are first simulated using EM calculation software. Then, the proposed jamming frequency response (JFR) is calculated offline by further processing. Finally, the deception jamming is achieved in real time by multiplying the proposed JFR with the spectrum of the intercepted radar signals. The practical implementation is presented. The simulation results prove the validity of the proposed method.
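
    The final real-time step described above (multiplying the precomputed JFR with the spectrum of the intercepted signal) reduces to a frequency-domain product. A toy sketch follows; the LFM pulse and the JFR used here are illustrative placeholders, not the paper's EM-derived response.

      import numpy as np

      fs, T = 100e6, 10e-6                          # toy sampling rate and pulse width
      t = np.arange(0, T, 1 / fs)
      intercepted = np.exp(1j * np.pi * 1e12 * t**2)        # toy LFM radar pulse
      f = np.fft.fftfreq(t.size, 1 / fs)
      jfr = np.exp(-2j * np.pi * f * 2e-6)          # placeholder JFR: a pure 2 us delay
      jammed = np.fft.ifft(jfr * np.fft.fft(intercepted))   # deceptive return to retransmit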

  5. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. Addressing the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a simple linear regression equation was then established. The aquifer parameters can be obtained by solving for the coefficients of the regression equation. The application of the proposed method is illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters. The proposed method can reliably identify the aquifer parameters from both long-distance observed drawdowns and early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
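
    The paper's fitted regression is not reproduced here, but the same idea can be illustrated with the classical Cooper-Jacob straight-line approximation of the Theis well function, under which drawdown is linear in ln(t) and the transmissivity T and storativity S follow from the regression coefficients. All numbers below are synthetic.

      import numpy as np

      Q, r = 0.01, 30.0                            # pumping rate (m^3/s), radius (m)
      t = np.array([60, 120, 300, 600, 1800, 3600.0])     # time since pumping began (s)
      s = np.array([0.12, 0.17, 0.24, 0.29, 0.37, 0.42])  # observed drawdown (m)

      m, b = np.polyfit(np.log(t), s, 1)           # fit s = m*ln(t) + b
      T = Q / (4 * np.pi * m)                      # transmissivity from the slope
      S = 2.25 * T * np.exp(-b / m) / r**2         # storativity from the intercept
      print(f"T = {T:.3e} m^2/s, S = {S:.3e}")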

  6. Modeling and quantification of repolarization feature dependency on heart rate.

    PubMed

    Minchole, A; Zacur, E; Pueyo, E; Laguna, P

    2014-01-01

    This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model including memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent optimization methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and to have a shorter memory lag than the QT interval.

  7. [Joint correction for motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging].

    PubMed

    Wu, Wenchuan; Fang, Sheng; Guo, Hua

    2014-06-01

    Aiming at motion artifacts and off-resonance artifacts in multi-shot diffusion magnetic resonance imaging (MRI), we propose a joint correction method to correct the two kinds of artifacts simultaneously, without additional acquisition of navigation data or a field map. The proposed method uses a multi-shot variable-density spiral sequence to acquire MRI data and an auto-focusing technique for image deblurring. A direct method or an iterative method is used to correct motion-induced phase errors in the process of deblurring. In vivo MRI experiments demonstrated that the proposed method can effectively suppress motion artifacts and off-resonance artifacts and achieve images with fine structures. In addition, applying the proposed method does not increase the scan time.

  8. An optimization design proposal of automated guided vehicles for mixed type transportation in hospital environments

    PubMed Central

    González, Domingo; Espinosa, María del Mar; Domínguez, Manuel

    2017-01-01

    Aim: The aim of this paper is to present an optimization proposal for the design of automated guided vehicles used in hospital logistics, and to analyze the impact of its implementation in a real environment. Method: The proposal is based on the design of those elements that would allow the vehicles to deliver an extra cart by the towing method. The intention is to improve the productivity and performance of the current vehicles by using a transportation method of combined carts. Results: The study was developed following concurrent engineering premises from three different viewpoints. First, the sequence of operations is described; second, a design proposal for the equipment is undertaken. Finally, the impact of the proposal is analyzed according to real data from the Hospital Universitario Rio Hortega in Valladolid (Spain). In this particular case, implementing the analyzed proposal in the hospital can reduce the current time of use by over 35%. This result may allow new tasks to be added to the vehicles and, accordingly, both a new kind of vehicle and a specific module can be developed to obtain better performance. PMID:28562681

  9. A Proposal to Build an Education Research and Development Program: The Kamehameha Early Education Project Proposal. Technical Report #3.

    ERIC Educational Resources Information Center

    Gallimore, Ronald; And Others

    This report summarizes the programmatic features of a proposal for the Kamehameha Early Education Project (KEEP), a program aimed at the development, demonstration, and dissemination of methods for improving the education of Hawaiian and part-Hawaiian children. A brief description of the proposed project goals, structure, organization, and…

  10. Principles and Overview of Sampling Methods for Modeling Macromolecular Structure and Dynamics

    PubMed Central

    Moffatt, Ryan; Ma, Buyong; Nussinov, Ruth

    2016-01-01

    Investigation of macromolecular structure and dynamics is fundamental to understanding how macromolecules carry out their functions in the cell. Significant advances have been made toward this end in silico, with a growing number of computational methods proposed yearly to study and simulate various aspects of macromolecular structure and dynamics. This review aims to provide an overview of recent advances, focusing primarily on methods proposed for exploring the structure space of macromolecules in isolation and in assemblies for the purpose of characterizing equilibrium structure and dynamics. In addition to surveying recent applications that showcase current capabilities of computational methods, this review highlights state-of-the-art algorithmic techniques proposed to overcome challenges posed in silico by the disparate spatial and time scales accessed by dynamic macromolecules. This review is not meant to be exhaustive, as such an endeavor is impossible, but rather aims to balance breadth and depth of strategies for modeling macromolecular structure and dynamics for a broad audience of novices and experts. PMID:27124275

  11. Principles and Overview of Sampling Methods for Modeling Macromolecular Structure and Dynamics.

    PubMed

    Maximova, Tatiana; Moffatt, Ryan; Ma, Buyong; Nussinov, Ruth; Shehu, Amarda

    2016-04-01

    Investigation of macromolecular structure and dynamics is fundamental to understanding how macromolecules carry out their functions in the cell. Significant advances have been made toward this end in silico, with a growing number of computational methods proposed yearly to study and simulate various aspects of macromolecular structure and dynamics. This review aims to provide an overview of recent advances, focusing primarily on methods proposed for exploring the structure space of macromolecules in isolation and in assemblies for the purpose of characterizing equilibrium structure and dynamics. In addition to surveying recent applications that showcase current capabilities of computational methods, this review highlights state-of-the-art algorithmic techniques proposed to overcome challenges posed in silico by the disparate spatial and time scales accessed by dynamic macromolecules. This review is not meant to be exhaustive, as such an endeavor is impossible, but rather aims to balance breadth and depth of strategies for modeling macromolecular structure and dynamics for a broad audience of novices and experts.

  12. Investigation of methods to search for boundaries in the image and their use in lightweight hardware implementations of saliency map computation

    NASA Astrophysics Data System (ADS)

    Semenishchev, E. A.; Marchuk, V. I.; Fedosov, V. P.; Stradanchenko, S. G.; Ruslyakov, D. V.

    2015-05-01

    This work aimed to study a computationally simple method of saliency map calculation. Research in this field has received increasing interest because of the use of such techniques in portable devices. A saliency map allows increasing the speed of many subsequent algorithms and reducing their computational complexity. The proposed method of saliency map detection is based on both image-space and frequency-space analysis. Several examples of test images from the Kodak dataset with different levels of detail are considered in this paper and demonstrate the effectiveness of the proposed approach. We present experiments which show that the proposed method provides better results than the Saliency Toolbox framework in terms of accuracy and speed.
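
    The abstract does not give the exact formulation, so as a stand-in the sketch below implements the classic spectral-residual saliency algorithm (Hou and Zhang, 2007), which is likewise computationally simple and based on frequency-space analysis.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def spectral_residual_saliency(gray):
          f = np.fft.fft2(gray)
          log_amp = np.log(np.abs(f) + 1e-8)
          phase = np.angle(f)
          residual = log_amp - gaussian_filter(log_amp, 3)   # spectral residual
          sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
          return gaussian_filter(sal, 3)                     # smoothed saliency map

      saliency = spectral_residual_saliency(np.random.rand(128, 128))  # toy input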

  13. An Approach for Integrating the Prioritization of Functional and Nonfunctional Requirements

    PubMed Central

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements that need to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach is presented to consider both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. The outcome of applying the proposed approach produces two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment aimed at comparing the approach with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption, while preserving the quality of the results at a high level of agreement with the results produced by the other two approaches. PMID:24982987

  14. An approach for integrating the prioritization of functional and nonfunctional requirements.

    PubMed

    Dabbagh, Mohammad; Lee, Sai Peck

    2014-01-01

    Due to budgetary deadlines and time-to-market constraints, it is essential to prioritize software requirements. The outcome of requirements prioritization is an ordering of the requirements that need to be considered first during the software development process. To achieve a high-quality software system, both functional and nonfunctional requirements must be taken into consideration during the prioritization process. Although several requirements prioritization methods have been proposed so far, no particular method or approach is presented to consider both functional and nonfunctional requirements during the prioritization stage. In this paper, we propose an approach which aims to integrate the process of prioritizing functional and nonfunctional requirements. The outcome of applying the proposed approach produces two separate prioritized lists of functional and nonfunctional requirements. The effectiveness of the proposed approach has been evaluated through an empirical experiment aimed at comparing the approach with two state-of-the-art approaches, the analytic hierarchy process (AHP) and the hybrid assessment method (HAM). Results show that our proposed approach outperforms AHP and HAM in terms of actual time consumption, while preserving the quality of the results at a high level of agreement with the results produced by the other two approaches.

  15. Joint Concept Correlation and Feature-Concept Relevance Learning for Multilabel Classification.

    PubMed

    Zhao, Xiaowei; Ma, Zhigang; Li, Zhi; Li, Zhihui

    2018-02-01

    In recent years, multilabel classification has attracted significant attention in multimedia annotation. However, most of the multilabel classification methods focus only on the inherent correlations existing among multiple labels and concepts and ignore the relevance between features and the target concepts. To obtain more robust multilabel classification results, we propose a new multilabel classification method aiming to capture the correlations among multiple concepts by leveraging hypergraph that is proved to be beneficial for relational learning. Moreover, we consider mining feature-concept relevance, which is often overlooked by many multilabel learning algorithms. To better show the feature-concept relevance, we impose a sparsity constraint on the proposed method. We compare the proposed method with several other multilabel classification methods and evaluate the classification performance by mean average precision on several data sets. The experimental results show that the proposed method outperforms the state-of-the-art methods.

  16. Automatic comic page image understanding based on edge segment analysis

    NASA Astrophysics Data System (ADS)

    Liu, Dong; Wang, Yongtao; Tang, Zhi; Li, Luyuan; Gao, Liangcai

    2013-12-01

    Comic page image understanding aims to analyse the layout of the comic page images by detecting the storyboards and identifying the reading order automatically. It is the key technique to produce the digital comic documents suitable for reading on mobile devices. In this paper, we propose a novel comic page image understanding method based on edge segment analysis. First, we propose an efficient edge point chaining method to extract Canny edge segments (i.e., contiguous chains of Canny edge points) from the input comic page image; second, we propose a top-down scheme to detect line segments within each obtained edge segment; third, we develop a novel method to detect the storyboards by selecting the border lines and further identify the reading order of these storyboards. The proposed method is performed on a data set consisting of 2000 comic page images from ten printed comic series. The experimental results demonstrate that the proposed method achieves satisfactory results on different comics and outperforms the existing methods.

  17. An individual and dynamic Body Segment Inertial Parameter validation method using ground reaction forces.

    PubMed

    Hansen, Clint; Venture, Gentiane; Rezzoug, Nasser; Gorce, Philippe; Isableu, Brice

    2014-05-07

    Over the last decades a variety of research has been conducted with the goal of improving Body Segment Inertial Parameter (BSIP) estimations, but to our knowledge a real validation has never been completely successful, because no ground truth is available. The aim of this paper is to propose a validation method for a BSIP identification method (IM) and to confirm the results by comparing contact forces recalculated using inverse dynamics with those obtained from a force plate. Furthermore, the results are compared with the estimation method recently proposed by Dumas et al. (2007). Additionally, the results are cross-validated with a high-velocity overarm throwing movement. Across all conditions, higher correlations, smaller error metrics, and smaller RMSE are found for the proposed BSIP estimation (IM), which shows its advantage compared with recently proposed methods such as that of Dumas et al. (2007). The purpose of the paper is to validate an already proposed method and to show that this method can be of significant advantage compared to conventional methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Effectiveness Evaluation Method of Anti-Radiation Missile against Active Decoy

    NASA Astrophysics Data System (ADS)

    Tang, Junyao; Cao, Fei; Li, Sijia

    2017-06-01

    In the problem of an anti-radiation missile (ARM) against an active decoy, whether the ARM can effectively kill the target radiation source and the decoy is an important index for evaluating the operational effectiveness of the missile. Aiming at this problem, this paper proposes a method to evaluate the effectiveness of an ARM against an active decoy. Based on the calculation of the ARM's ability to resist the decoy, the paper proposes to evaluate decoy resistance in terms of hitting the key components of the radar. The method has the advantages of being scientific and reliable.

  19. Mean-Reverting Portfolio With Budget Constraint

    NASA Astrophysics Data System (ADS)

    Zhao, Ziping; Palomar, Daniel P.

    2018-05-01

    This paper considers the mean-reverting portfolio design problem arising from statistical arbitrage in the financial markets. We first propose a general problem formulation aimed at finding a portfolio of underlying component assets by optimizing a mean-reversion criterion characterizing the mean-reversion strength, taking into consideration the variance of the portfolio and an investment budget constraint. Then several specific problems are considered based on the general formulation, and efficient algorithms are proposed. Numerical results on both synthetic and market data show that our proposed mean-reverting portfolio design methods can generate consistent profits and outperform the traditional design methods and the benchmark methods in the literature.
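
    The paper's specific formulations are not reproduced here, but the flavor of such designs can be sketched with the classical predictability criterion, whose unconstrained minimizer is a generalized eigenvector; the synthetic data, matrix names, and unit-budget normalization below are illustrative assumptions.

      import numpy as np
      from scipy.linalg import eigh

      rng = np.random.default_rng(1)
      y = np.cumsum(rng.standard_normal((500, 4)), axis=0)  # toy spread series
      M0 = np.cov(y.T)                                      # covariance
      C = np.cov(y[1:].T, y[:-1].T)[:4, 4:]                 # lag-1 cross-covariance
      M1 = 0.5 * (C + C.T)                                  # symmetrized
      vals, vecs = eigh(M1, M0)            # minimize w'M1w / w'M0w (predictability)
      w = vecs[:, 0]
      w /= np.abs(w).sum()                 # scale weights to a unit budget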

  20. A weakly-compressible Cartesian grid approach for hydrodynamic flows

    NASA Astrophysics Data System (ADS)

    Bigay, P.; Oger, G.; Guilcher, P.-M.; Le Touzé, D.

    2017-11-01

    The present article aims at proposing an original strategy to solve hydrodynamic flows. In the introduction, the motivations for this strategy are developed. It aims at modeling viscous and turbulent flows including complex moving geometries, while avoiding meshing constraints. The proposed approach relies on a weakly-compressible formulation of the Navier-Stokes equations. Unlike most hydrodynamic CFD (Computational Fluid Dynamics) solvers, which are usually based on implicit incompressible formulations, a fully-explicit temporal scheme is used. A purely Cartesian grid is adopted for numerical accuracy and algorithmic simplicity purposes. This characteristic allows an easy use of Adaptive Mesh Refinement (AMR) methods embedded within a massively parallel framework. Geometries are automatically immersed within the Cartesian grid with an AMR-compatible treatment. The proposed method uses an Immersed Boundary Method (IBM) adapted to the weakly-compressible formalism and imposed smoothly through a regularization function, which stands as another originality of this work. All these features have been implemented within an in-house solver based on this WCCH (Weakly-Compressible Cartesian Hydrodynamic) method, which meets the above requirements whilst allowing the use of high-order (> 3) spatial schemes rarely used in existing hydrodynamic solvers. The details of this WCCH method are presented and validated in this article.

  1. Detecting text in natural scenes with multi-level MSER and SWT

    NASA Astrophysics Data System (ADS)

    Lu, Tongwei; Liu, Renjun

    2018-04-01

    The detection of characters in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles, and diverse forms of language, which lead to poor detection results. Aiming at these problems, a new text detection method is proposed, which consists of two main stages: candidate region extraction and text region detection. In the first stage, the method uses multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) to detect candidate character regions comprehensively. In the second stage, SWT maps are obtained by applying the stroke width transform (SWT) algorithm to the candidate regions, and cascaded classifiers are then used to prune non-text regions. The proposed method was evaluated on the standard ICDAR2011 benchmark dataset and on a dataset of our own making. The experimental results showed that the proposed method achieves a large improvement compared with other text detection methods.
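
    A sketch of the first stage only (multi-scale, multi-threshold MSER extraction), assuming OpenCV 4.x; the SWT computation and the cascaded pruning classifiers of the second stage are not reproduced, and the input filename is hypothetical.

      import cv2

      img = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)
      candidates = []
      for scale in (1.0, 0.75, 0.5):                 # multiple scale transformations
          resized = cv2.resize(img, None, fx=scale, fy=scale)
          for delta in (2, 5, 10):                   # multiple MSER stability thresholds
              mser = cv2.MSER_create(delta=delta)
              regions, _ = mser.detectRegions(resized)
              candidates.extend((scale, r) for r in regions)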

  2. Path Planning for Robot based on Chaotic Artificial Potential Field Method

    NASA Astrophysics Data System (ADS)

    Zhang, Cheng

    2018-03-01

    Robot path planning in unknown environments is one of the hot research topics in the field of robot control. Aiming at the shortcomings of traditional artificial potential field methods, we propose a new path planning method for robots based on a chaotic artificial potential field. The method adopts the potential function as the objective function and introduces the robot's direction of movement as the control variable, combining the improved artificial potential field method with a chaotic optimization algorithm. Simulations have been carried out, and the results demonstrate the superior practicality and high efficiency of the proposed method.
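
    A minimal sketch of the classical artificial potential field (attractive plus repulsive terms followed by normalized gradient descent); the chaotic optimization used by the paper to escape local minima is not reproduced, and all gains and geometry are made up.

      import numpy as np

      goal = np.array([9.0, 9.0])
      obstacles = [np.array([4.0, 5.0]), np.array([6.0, 3.0])]
      k_att, k_rep, d0, step = 1.0, 5.0, 1.5, 0.05   # gains, influence radius, step size

      pos = np.array([0.0, 0.0])
      for _ in range(2000):
          grad = k_att * (pos - goal)                # gradient of attractive potential
          for obs in obstacles:
              d = np.linalg.norm(pos - obs)
              if d < d0:                             # repulsion only inside radius d0
                  grad += k_rep * (1 / d0 - 1 / d) * (pos - obs) / d**3
          pos = pos - step * grad / max(np.linalg.norm(grad), 1e-9)  # descend potential
          if np.linalg.norm(pos - goal) < 0.1:
              break
      print(pos)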

  3. A ℓ2, 1 norm regularized multi-kernel learning for false positive reduction in Lung nodule CAD.

    PubMed

    Cao, Peng; Liu, Xiaoli; Zhang, Jian; Li, Wei; Zhao, Dazhe; Huang, Min; Zaiane, Osmar

    2017-03-01

    The aim of this paper is to describe a novel algorithm for false positive reduction in lung nodule computer-aided detection (CAD). In this paper, we describe a new CT lung CAD method which aims to detect solid nodules. Specifically, we propose a multi-kernel classifier with an ℓ2,1 norm regularizer for heterogeneous feature fusion and selection at the feature-subset level, and design two efficient strategies to optimize the kernel weights in the non-smooth ℓ2,1-regularized multiple kernel learning algorithm. The first optimization algorithm adapts a proximal gradient method for solving the ℓ2,1 norm of the kernel weights and uses an accelerated method based on FISTA; the second employs an iterative scheme based on an approximate gradient descent method. The results demonstrate that the FISTA-style accelerated proximal descent method is efficient for the ℓ2,1 norm formulation of multiple kernel learning, with a theoretical guarantee on the convergence rate. Moreover, the experimental results demonstrate the effectiveness of the proposed methods in terms of geometric mean (G-mean) and area under the ROC curve (AUC), significantly outperforming the competing methods. The proposed approach exhibits some remarkable advantages in both the heterogeneous feature subset fusion and classification phases. Compared with feature-level and decision-level fusion strategies, the proposed ℓ2,1 norm multi-kernel learning algorithm is able to accurately fuse the complementary and heterogeneous feature sets, and automatically prunes the irrelevant and redundant feature subsets to form a more discriminative feature set, leading to promising classification performance. Moreover, the proposed algorithm consistently outperforms comparable classification approaches in the literature. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
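
    The non-smooth part of such an objective is typically handled through its proximal operator; the sketch below gives the standard row-wise (group) soft-thresholding prox of the ℓ2,1 norm that makes FISTA-style proximal methods applicable. Names are illustrative, not the paper's.

      import numpy as np

      def prox_l21(W, tau):
          """argmin_X 0.5*||X - W||_F^2 + tau * sum_i ||X_i||_2, rows as groups."""
          norms = np.linalg.norm(W, axis=1, keepdims=True)
          scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
          return scale * W   # rows with norm below tau are zeroed out entirely

      print(prox_l21(np.random.randn(5, 3), 0.5))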

  4. Critical appraisal of fundamental items in approved clinical trial research proposals in Mashhad University of Medical Sciences

    PubMed Central

    Shakeri, Mohammad-Taghi; Taghipour, Ali; Sadeghi, Masoumeh; Nezami, Hossein; Amirabadizadeh, Ali-Reza; Bonakchi, Hossein

    2017-01-01

    Background: Writing, designing, and conducting a clinical trial research proposal has an important role in achieving valid and reliable findings. Thus, this study aimed at critically appraising fundamental information in approved clinical trial research proposals in Mashhad University of Medical Sciences (MUMS) from 2008 to 2014. Methods: This cross-sectional study was conducted on all 935 approved clinical trial research proposals in MUMS from 2008 to 2014. A valid, reliable, comprehensive, simple, and usable checklist consisting of 11 main items, developed in sessions with biostatisticians and methodologists, was used as the research tool. The agreement rate between the reviewers of the proposals, who were responsible for data collection, was assessed during 3 sessions, and the Kappa statistic calculated at the last session was 97%. Results: More than 60% of the research proposals had a methodologist consultant; moreover, the type of study or study design had been specified in almost all of them (98%). Appropriateness of study aims with hypotheses was not observed in a significant number of research proposals (585 proposals, 62.6%). The required sample size for 66.8% of the approved proposals was based on a sample size formula; however, in 25% of the proposals, the sample size formula was not in accordance with the study design. The data collection tool was not selected appropriately in 55.2% of the approved research proposals. The type and method of randomization were unknown in 21% of the proposals, and dealing with missing data had not been described in most of them (98%). Inclusion and exclusion criteria were fully and adequately explained in 92% of proposals. Moreover, 44% and 31% of the research proposals were ranked moderate and weak, respectively, with respect to the correctness of the statistical analysis methods. Conclusion: The findings of the present study revealed that a large portion of the approved proposals were highly biased or ambiguous with respect to randomization, blinding, dealing with missing data, data collection tools, sampling methods, and statistical analysis. Thus, it is essential to consult and collaborate with a methodologist in all parts of a proposal to control the possible and specific biases in clinical trials. PMID:29445703

  5. Critical appraisal of fundamental items in approved clinical trial research proposals in Mashhad University of Medical Sciences.

    PubMed

    Shakeri, Mohammad-Taghi; Taghipour, Ali; Sadeghi, Masoumeh; Nezami, Hossein; Amirabadizadeh, Ali-Reza; Bonakchi, Hossein

    2017-01-01

    Background: Writing, designing, and conducting a clinical trial research proposal has an important role in achieving valid and reliable findings. Thus, this study aimed at critically appraising fundamental information in approved clinical trial research proposals in Mashhad University of Medical Sciences (MUMS) from 2008 to 2014. Methods: This cross-sectional study was conducted on all 935 approved clinical trial research proposals in MUMS from 2008 to 2014. A valid, reliable, comprehensive, simple, and usable checklist consisting of 11 main items, developed in sessions with biostatisticians and methodologists, was used as the research tool. The agreement rate between the reviewers of the proposals, who were responsible for data collection, was assessed during 3 sessions, and the Kappa statistic calculated at the last session was 97%. Results: More than 60% of the research proposals had a methodologist consultant; moreover, the type of study or study design had been specified in almost all of them (98%). Appropriateness of study aims with hypotheses was not observed in a significant number of research proposals (585 proposals, 62.6%). The required sample size for 66.8% of the approved proposals was based on a sample size formula; however, in 25% of the proposals, the sample size formula was not in accordance with the study design. The data collection tool was not selected appropriately in 55.2% of the approved research proposals. The type and method of randomization were unknown in 21% of the proposals, and dealing with missing data had not been described in most of them (98%). Inclusion and exclusion criteria were fully and adequately explained in 92% of proposals. Moreover, 44% and 31% of the research proposals were ranked moderate and weak, respectively, with respect to the correctness of the statistical analysis methods. Conclusion: The findings of the present study revealed that a large portion of the approved proposals were highly biased or ambiguous with respect to randomization, blinding, dealing with missing data, data collection tools, sampling methods, and statistical analysis. Thus, it is essential to consult and collaborate with a methodologist in all parts of a proposal to control the possible and specific biases in clinical trials.

  6. What does it mean to "employ" the RE-AIM model?

    PubMed

    Kessler, Rodger S; Purcell, E Peyton; Glasgow, Russell E; Klesges, Lisa M; Benkeser, Rachel M; Peek, C J

    2013-03-01

    Many grant proposals identify the use of a given evaluation model or framework but offer little about how such models are implemented. The authors discuss what it means to employ a specific model, RE-AIM, and key dimensions from this model for program planning, implementation, evaluation, and reporting. The authors report both conceptual and content specifications for the use of the RE-AIM model and a content review of 42 recent dissemination and implementation grant applications to the National Institutes of Health that proposed the use of this model. Outcomes include the extent to which proposals addressed the overall RE-AIM model and specific items within the five dimensions in their methods or evaluation plans. The majority of grants used only some elements of the model (less than 10% contained thorough measures across all RE-AIM dimensions). Few met the criteria for "fully developed use" of RE-AIM, and the percentage of key issues addressed varied from, on average, 45% to 78% across the RE-AIM dimensions. The results and discussion of key criteria should help investigators in their use of RE-AIM and illuminate the broader issue of comprehensive use of evaluation models.

  7. Communicative Competence of the Fourth Year Students: Basis for Proposed English Language Program

    ERIC Educational Resources Information Center

    Tuan, Vu Van

    2017-01-01

    This study on the level of communicative competence, covering linguistic/grammatical and discourse competence, aimed at constructing a proposed English language program for 5 key universities in Vietnam. The descriptive method was employed, with comparative techniques and correlational analysis. The researcher treated the surveyed data…

  8. Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati

    2016-10-01

    The Newton method has some shortcomings, including the computation of the Jacobian matrix, which may be difficult or even impossible to obtain, and the need to solve the Newton system at every iteration. A common setback of some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate Jacobian inverse Hk of the PSB update is updated with improved efficiency and low memory storage requirements, which is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
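
    A hedged sketch of a quasi-Newton iteration using the PSB update of the Jacobian approximation B on a toy 2x2 system; the paper instead updates the approximate Jacobian inverse Hk, which is not reproduced here, and the test system below is not one of the paper's benchmarks.

      import numpy as np

      def F(x):  # toy nonlinear system with a root at (1, 2)
          return np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])

      x = np.array([1.2, 1.8])
      eps = 1e-6                         # finite-difference initial Jacobian
      B = np.column_stack([(F(x + eps * e) - F(x)) / eps for e in np.eye(2)])
      for _ in range(50):
          s = np.linalg.solve(B, -F(x))  # quasi-Newton step
          y = F(x + s) - F(x)
          x = x + s
          r, ss = y - B @ s, s @ s
          # PSB update: symmetric rank-2 correction of the Jacobian approximation
          B = B + (np.outer(r, s) + np.outer(s, r)) / ss - (r @ s) * np.outer(s, s) / ss**2
          if np.linalg.norm(F(x)) < 1e-12:
              break
      print(x, F(x))                     # converges to the root (1, 2)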

  9. Game Theory Based Trust Model for Cloud Environment

    PubMed Central

    Gokulnath, K.; Uthariaraj, Rhymend

    2015-01-01

    The aim of this work is to propose a method to establish trust at the bootstrap level in a cloud computing environment. This work proposes a game-theoretic approach for achieving trust at the bootstrap level from both the resources' and the users' perception. Nash equilibrium (NE) enhances the trust evaluation of first-time users and providers. It also restricts the service providers and the users from violating the service level agreement (SLA). Significantly, the problems of cold start and whitewashing are addressed by the proposed method. In addition, appropriate mapping of cloud users' applications to cloud service providers for segregating trust levels is achieved. Thus, time complexity and space complexity are handled efficiently. Experiments were carried out to compare and contrast the performance of the conventional methods and the proposed method. Several metrics like execution time, accuracy, error identification, and undecidability of the resources were considered. PMID:26380365

  10. Correlation Filter Learning Toward Peak Strength for Visual Tracking.

    PubMed

    Sui, Yao; Wang, Guanghui; Zhang, Li

    2018-04-01

    This paper presents a novel visual tracking approach to correlation filter learning toward peak strength of correlation response. Previous methods leverage all features of the target and the immediate background to learn a correlation filter. Some features, however, may be distractive to tracking, like those from occlusion and local deformation, resulting in unstable tracking performance. This paper aims at solving this issue and proposes a novel algorithm to learn the correlation filter. The proposed approach, by imposing an elastic net constraint on the filter, can adaptively eliminate those distractive features in the correlation filtering. A new peak strength metric is proposed to measure the discriminative capability of the learned correlation filter. It is demonstrated that the proposed approach effectively strengthens the peak of the correlation response, leading to more discriminative performance than previous methods. Extensive experiments on a challenging visual tracking benchmark demonstrate that the proposed tracker outperforms most state-of-the-art methods.
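
    The baseline on top of which such constraints are imposed is the standard correlation filter learned by ridge regression in the Fourier domain (the MOSSE/KCF idea). A single-channel sketch follows; the elastic-net constraint and the peak strength metric of the paper are not reproduced.

      import numpy as np

      x = np.random.rand(64, 64)                           # toy training patch
      yy, xx = np.mgrid[0:64, 0:64]
      g = np.exp(-((yy - 32)**2 + (xx - 32)**2) / 8.0)     # desired Gaussian response

      X, G, lam = np.fft.fft2(x), np.fft.fft2(g), 1e-2
      H = np.conj(X) * G / (np.conj(X) * X + lam)          # closed-form ridge filter
      response = np.real(np.fft.ifft2(H * np.fft.fft2(x))) # correlation response
      print(np.unravel_index(response.argmax(), response.shape))  # peak at (32, 32)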

  11. A new local-global approach for classification.

    PubMed

    Peres, R T; Pedreira, C E

    2010-09-01

    In this paper, we propose a new local-global pattern classification scheme that combines supervised and unsupervised approaches, taking advantage of both local and global environments. We understand as global methods the ones concerned with constructing a model for the whole problem space using the totality of the available observations. Local methods focus on subregions of the space, possibly using an appropriately selected subset of the sample. In the proposed method, the sample is first divided into local cells by using a vector quantization unsupervised algorithm, the LBG (Linde-Buzo-Gray). In a second stage, the generated assemblage of much easier problems is locally solved with a scheme inspired by Bayes' rule. Four classification methods were implemented for comparison purposes with the proposed scheme: Learning Vector Quantization (LVQ), feedforward neural networks, Support Vector Machine (SVM), and k-Nearest Neighbors. These four methods and the proposed scheme were applied to eleven datasets: two controlled experiments plus nine publicly available datasets from the UCI repository. The proposed method has shown quite competitive performance when compared to these classical and largely used classifiers. Our method is simple to understand and implement and is based on very intuitive concepts. Copyright 2010 Elsevier Ltd. All rights reserved.

  12. Smoke regions extraction based on two steps segmentation and motion detection in early fire

    NASA Astrophysics Data System (ADS)

    Jian, Wenlin; Wu, Kaizhi; Yu, Zirong; Chen, Lijuan

    2018-03-01

    Aiming at the problems of video-based smoke detection in early fire video, this paper proposes a method to extract suspected smoke regions by combining a two-step segmentation with motion characteristics. Early smoldering smoke can be seen as gray or gray-white regions. In the first stage, regions of interest (ROIs) containing smoke are obtained using the two-step segmentation method. Then, suspected smoke regions are detected by combining the two-step segmentation with motion detection. Finally, morphological processing is used to extract the smoke regions. The Otsu algorithm is used as the segmentation method and the ViBe algorithm is used to detect the motion of smoke. The proposed method was tested on 6 test videos with smoke. The experimental results, judged against visual observation, show the effectiveness of the proposed method.
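
    A sketch of the two cues being combined, assuming OpenCV: Otsu thresholding for the gray/gray-white regions and a background-subtraction motion mask, intersected and cleaned morphologically. OpenCV's MOG2 is used as a stand-in because ViBe is not bundled with OpenCV, and the filename is hypothetical.

      import cv2

      cap = cv2.VideoCapture("smoke_test.avi")
      bg = cv2.createBackgroundSubtractorMOG2()
      kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
          _, seg = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
          motion = bg.apply(frame)                     # moving pixels (ViBe stand-in)
          _, motion = cv2.threshold(motion, 200, 255, cv2.THRESH_BINARY)  # drop shadows
          suspected = cv2.bitwise_and(seg, motion)     # bright AND moving regions
          suspected = cv2.morphologyEx(suspected, cv2.MORPH_OPEN, kernel)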

  13. An adaptive field detection method for bridge scour monitoring using motion-sensing radio transponders (RFIDs).

    DOT National Transportation Integrated Search

    2014-01-01

    A comprehensive field detection method is proposed that is aimed at developing advanced capability for reliable monitoring, inspection, and life estimation of bridge infrastructure. The goal is to utilize Motion-Sensing Radio Transponders (RFIDs) on...

  14. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    PubMed

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at the interpolation instant is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by mixing both elegantly. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and a smoothness-constrained optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.

  15. A modified precise integration method for transient dynamic analysis in structural systems with multiple damping models

    NASA Astrophysics Data System (ADS)

    Ding, Zhe; Li, Li; Hu, Yujin

    2018-01-01

    Sophisticated engineering systems are usually assembled from subcomponents with significantly different levels of energy dissipation. Such damping systems therefore often contain multiple damping models, which leads to great difficulties in analysis. This paper aims at developing a time integration method for structural systems with multiple damping models. The dynamical system is first represented by a generally damped model. Based on this, a new extended state-space method for the damped system is derived. A modified precise integration method with Gauss-Legendre quadrature is then proposed. The numerical stability and accuracy of the proposed integration method are discussed in detail. It is verified that the method is conditionally stable and has inherent algorithmic damping, period error, and amplitude decay. Numerical examples are provided to assess the performance of the proposed method compared with other methods. It is demonstrated that the method is more accurate than other methods, with rather good efficiency, and that the stability condition is easy to satisfy in practice.
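
    A minimal sketch of precise-integration-style stepping for a homogeneous state-space system x' = Ax, where the transfer matrix exp(A*dt) is computed once to high precision and reused; the paper's extended state space for multiple damping models and its Gauss-Legendre treatment of forcing are not reproduced, and the system below is a toy example.

      import numpy as np
      from scipy.linalg import expm

      m, c, k = 1.0, 0.1, 40.0                  # toy SDOF mass, damping, stiffness
      A = np.array([[0.0, 1.0], [-k / m, -c / m]])
      dt = 0.01
      T = expm(A * dt)                          # transfer matrix for one time step
      x = np.array([1.0, 0.0])                  # initial displacement and velocity
      history = [x.copy()]
      for _ in range(1000):
          x = T @ x                             # exact free-vibration advance
          history.append(x.copy())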

  16. A novel method for overlapping community detection using Multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Ebrahimi, Morteza; Shahmoradi, Mohammad Reza; Heshmati, Zainabolhoda; Salehi, Mostafa

    2018-09-01

    The problem of community detection, one of the most important applications of network science, can be addressed effectively by multi-objective optimization. In this paper, we present a novel, efficient method based on this approach. This study also introduces the idea of using all Pareto fronts to detect overlapping communities. The proposed method has two main advantages compared to other multi-objective optimization based approaches: the first is scalability, and the second is the ability to find overlapping communities. Unlike most existing works, the proposed method is able to find overlapping communities effectively. The new algorithm works by extracting appropriate communities from all the Pareto optimal solutions, instead of choosing a single optimal solution. Empirical experiments on different features of separated and overlapping communities, on both synthetic and real networks, show that the proposed method performs better in comparison with other methods.

  17. Spatio-temporal alignment of multiple sensors

    NASA Astrophysics Data System (ADS)

    Zhang, Tinghua; Ni, Guoqiang; Fan, Guihua; Sun, Huayan; Yang, Biao

    2018-01-01

    Aiming to achieve the spatio-temporal alignment of multiple sensors on the same platform for space target observation, a joint spatio-temporal alignment method is proposed. To calibrate the parameters and measure the attitude of the cameras, an astronomical calibration method is proposed based on star chart simulation and the collinear invariant features of quadrilateral diagonals between observed star charts. In order to satisfy temporal correspondence and spatial alignment similarity simultaneously, the method, based on astronomical calibration and attitude measurement, folds spatial and temporal alignment into a joint alignment framework. The advantage of this method is reinforced by exploiting the similarities and prior knowledge of the velocity vector field between adjacent frames, which is calculated by the SIFT Flow algorithm. The proposed method provides the highest spatio-temporal alignment accuracy compared with state-of-the-art methods on sequences recorded from multiple sensors at different times.

  18. The backward phase flow and FBI-transform-based Eulerian Gaussian beams for the Schroedinger equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Leung Shingyu, E-mail: masyleung@ust.h; Qian Jianliang, E-mail: qian@math.msu.ed

    2010-11-20

    We propose the backward phase flow method to implement the Fourier-Bros-Iagolnitzer (FBI)-transform-based Eulerian Gaussian beam method for solving the Schroedinger equation in the semi-classical regime. The idea of Eulerian Gaussian beams has been first proposed in [12]. In this paper we aim at two crucial computational issues of the Eulerian Gaussian beam method: how to carry out long-time beam propagation and how to compute beam ingredients rapidly in phase space. By virtue of the FBI transform, we address the first issue by introducing the reinitialization strategy into the Eulerian Gaussian beam framework. Essentially we reinitialize beam propagation by applying the FBI transform to wavefields at intermediate time steps when the beams become too wide. To address the second issue, inspired by the original phase flow method, we propose the backward phase flow method which allows us to compute beam ingredients rapidly. Numerical examples demonstrate the efficiency and accuracy of the proposed algorithms.

  19. The backward phase flow and FBI-transform-based Eulerian Gaussian beams for the Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Leung, Shingyu; Qian, Jianliang

    2010-11-01

    We propose the backward phase flow method to implement the Fourier-Bros-Iagolnitzer (FBI)-transform-based Eulerian Gaussian beam method for solving the Schrödinger equation in the semi-classical regime. The idea of Eulerian Gaussian beams has been first proposed in [12]. In this paper we aim at two crucial computational issues of the Eulerian Gaussian beam method: how to carry out long-time beam propagation and how to compute beam ingredients rapidly in phase space. By virtue of the FBI transform, we address the first issue by introducing the reinitialization strategy into the Eulerian Gaussian beam framework. Essentially we reinitialize beam propagation by applying the FBI transform to wavefields at intermediate time steps when the beams become too wide. To address the second issue, inspired by the original phase flow method, we propose the backward phase flow method which allows us to compute beam ingredients rapidly. Numerical examples demonstrate the efficiency and accuracy of the proposed algorithms.

  20. Coronary arteries segmentation based on the 3D discrete wavelet transform and 3D neutrosophic transform.

    PubMed

    Chen, Shuo-Tsung; Wang, Tzung-Dau; Lee, Wen-Jeng; Huang, Tsai-Wei; Hung, Pei-Kai; Wei, Cheng-Yu; Chen, Chung-Ming; Kung, Woon-Man

    2015-01-01

    Most applications in the field of medical image processing require precise estimation. To improve the accuracy of segmentation, this study proposes a novel segmentation method for coronary arteries to allow the automatic and accurate detection of coronary pathologies. The proposed segmentation method includes two parts. First, 3D region growing is applied to give an initial segmentation of the coronary arteries. Next, the vessel information contained in the HHH subband coefficients of the 3D DWT is detected by the proposed vessel-texture discrimination algorithm. Based on the initial segmentation, the 3D DWT integrated with the 3D neutrosophic transformation can accurately detect the coronary arteries. Each subbranch of the segmented coronary arteries is segmented correctly by the proposed method. The obtained results are compared with ground truth values obtained from commercial software from GE Healthcare and with the level-set method proposed by Yang et al., 2007. The results indicate that the proposed method is better in terms of the efficiency analyzed. Based on the initial segmentation of the coronary arteries obtained from 3D region growing, a one-level 3D DWT and the 3D neutrosophic transformation can be applied to detect coronary pathologies accurately.
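
    A sketch of one ingredient, extracting the HHH (detail in all three directions) subband of a one-level 3D DWT, assuming PyWavelets; the region growing, vessel-texture discrimination, and neutrosophic stages are not reproduced, and the volume is a random stand-in for CT data.

      import numpy as np
      import pywt

      volume = np.random.rand(64, 64, 64)          # stand-in for a CT volume
      coeffs = pywt.dwtn(volume, wavelet="haar")   # one-level 3D DWT
      hhh = coeffs["ddd"]                          # high-pass along all three axes
      print(hhh.shape)                             # (32, 32, 32)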

  1. The review and results of different methods for facial recognition

    NASA Astrophysics Data System (ADS)

    Le, Yifan

    2017-09-01

    In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement since it can be operated without the cooperation of the people under detection. Hence, facial recognition is being taken into defense systems, medical detection, human behavior understanding, etc. Several theories and methods have been established to make progress in facial recognition: (1) a novel two-stage facial landmark localization method is proposed which has a more accurate facial localization effect on a specific database; (2) a statistical face frontalization method is proposed which outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm is proposed to handle images with severe occlusion and images with large head poses; (4) three methods are proposed for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method via regressing local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and performance under various databases. In addition, some improvement measures and suggestions for potential applications will be put forward.

  2. Analysis of full disc Ca II K spectroheliograms. I. Photometric calibration and centre-to-limb variation compensation

    NASA Astrophysics Data System (ADS)

    Chatzistergos, Theodosios; Ermolli, Ilaria; Solanki, Sami K.; Krivova, Natalie A.

    2018-01-01

    Context. Historical Ca II K spectroheliograms (SHG) are unique in representing long-term variations of the solar chromospheric magnetic field. They usually suffer from numerous problems and lack photometric calibration. Thus accurate processing of these data is required to get meaningful results from their analysis. Aims: In this paper we aim at developing an automatic processing and photometric calibration method that provides precise and consistent results when applied to historical SHG. Methods: The proposed method is based on the assumption that the centre-to-limb variation of the intensity in quiet Sun regions does not vary with time. We tested the accuracy of the proposed method on various sets of synthetic images that mimic problems encountered in historical observations. We also tested our approach on a large sample of images randomly extracted from seven different SHG archives. Results: The tests carried out on the synthetic data show that the maximum relative errors of the method are generally <6.5%, while the average error is <1%, even if rather poor quality observations are considered. In the absence of strong artefacts the method returns images that differ from the ideal ones by <2% in any pixel. The method gives consistent values for both plage and network areas. We also show that our method returns consistent results for images from different SHG archives. Conclusions: Our tests show that the proposed method is more accurate than other methods presented in the literature. Our method can also be applied to process images from photographic archives of solar observations at other wavelengths than Ca II K.
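
    The core assumption (a time-invariant quiet-Sun centre-to-limb variation) suggests a simple compensation: estimate the radial intensity profile and divide it out. The sketch below is a bare-bones illustration of that idea on a synthetic centred disc, not the authors' calibration pipeline.

      import numpy as np

      img = 1.0 + 0.1 * np.random.rand(512, 512)        # stand-in for a centred disc image
      yy, xx = np.indices(img.shape)
      r = np.hypot(yy - 255.5, xx - 255.5).astype(int)  # integer radial distance (px)
      profile = np.array([np.median(img[r == k]) for k in range(r.max() + 1)])
      contrast = img / profile[r] - 1.0                 # CLV-compensated contrast image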

  3. Intelligent Distribution Voltage Control with Distributed Generation

    NASA Astrophysics Data System (ADS)

    Castro Mendieta, Jose

    In this thesis, three methods for the optimal participation of the reactive power of distributed generations (DGs) in unbalanced distribution networks have been proposed, developed, and tested. These new methods were developed with the objectives of maintaining voltage within permissible limits and reducing losses. The first method proposes an optimal participation of the reactive power of all devices available in the network. The proposed approach is validated by comparing the results with other methods reported in the literature. The proposed method was implemented using Simulink of Matlab and OpenDSS; the optimization techniques and the presentation of results are handled in Matlab. A co-simulation with the Electric Power Research Institute's (EPRI) OpenDSS program solves a three-phase optimal power flow problem in the unbalanced IEEE 13 and 34-node test feeders. The results from this work showed a better loss reduction compared to the Coordinated Voltage Control (CVC) method. The second method aims to minimize the voltage variation on the pilot bus of the distribution network using DGs. It uses Pareto and Fuzzy-PID logic to reduce the voltage variation. Results indicate that the proposed method reduces the voltage variation more than the other methods. Simulink of Matlab and OpenDSS are used in the development of the proposed approach. The performance of the method is evaluated on the IEEE 13-node test feeder with one and three DGs. Variable and unbalanced loads are used, based on real consumption data, over a time window of 48 hours. The third method aims to minimize the reactive losses using DGs on distribution networks. This method analyzes the problem using the IEEE 13-node test feeder with three different loads and the IEEE 123-node test feeder with four DGs. The DGs can be fixed or variable. Results indicate that integrating DGs to optimize the reactive power of the network helps to maintain the voltage within the allowed limits and to reduce the reactive power losses. The thesis is presented in the form of three articles. The first article is published in the journal Electrical Power and Energy System, the second is published in the international journal Energies, and the third was submitted to the journal Electrical Power and Energy System. Two other articles have been published in conferences with a reviewing committee. This work is organized in six chapters, which are detailed in the various sections of the thesis.

  4. Identification of potential inhibitors based on compound proposal contest: Tyrosine-protein kinase Yes as a target.

    PubMed

    Chiba, Shuntaro; Ikeda, Kazuyoshi; Ishida, Takashi; Gromiha, M Michael; Taguchi, Y-H; Iwadate, Mitsuo; Umeyama, Hideaki; Hsin, Kun-Yi; Kitano, Hiroaki; Yamamoto, Kazuki; Sugaya, Nobuyoshi; Kato, Koya; Okuno, Tatsuya; Chikenji, George; Mochizuki, Masahiro; Yasuo, Nobuaki; Yoshino, Ryunosuke; Yanagisawa, Keisuke; Ban, Tomohiro; Teramoto, Reiji; Ramakrishnan, Chandrasekaran; Thangakani, A Mary; Velmurugan, D; Prathipati, Philip; Ito, Junichi; Tsuchiya, Yuko; Mizuguchi, Kenji; Honma, Teruki; Hirokawa, Takatsugu; Akiyama, Yutaka; Sekijima, Masakazu

    2015-11-26

    A search of a broader range of chemical space is important for drug discovery. Different methods of computer-aided drug discovery (CADD) are known to propose compounds in different chemical spaces as hit molecules for the same target protein. This study aimed at using multiple CADD methods through open innovation to achieve a level of hit molecule diversity that is not achievable with any particular single method. We held a compound proposal contest, in which multiple research groups participated and predicted inhibitors of tyrosine-protein kinase Yes. This showed whether collective knowledge based on individual approaches helped to obtain hit compounds from a broad range of chemical space and whether the contest-based approach was effective.

  5. Identification of potential inhibitors based on compound proposal contest: Tyrosine-protein kinase Yes as a target

    PubMed Central

    Chiba, Shuntaro; Ikeda, Kazuyoshi; Ishida, Takashi; Gromiha, M. Michael; Taguchi, Y-h.; Iwadate, Mitsuo; Umeyama, Hideaki; Hsin, Kun-Yi; Kitano, Hiroaki; Yamamoto, Kazuki; Sugaya, Nobuyoshi; Kato, Koya; Okuno, Tatsuya; Chikenji, George; Mochizuki, Masahiro; Yasuo, Nobuaki; Yoshino, Ryunosuke; Yanagisawa, Keisuke; Ban, Tomohiro; Teramoto, Reiji; Ramakrishnan, Chandrasekaran; Thangakani, A. Mary; Velmurugan, D.; Prathipati, Philip; Ito, Junichi; Tsuchiya, Yuko; Mizuguchi, Kenji; Honma, Teruki; Hirokawa, Takatsugu; Akiyama, Yutaka; Sekijima, Masakazu

    2015-01-01

    A search of a broader range of chemical space is important for drug discovery. Different methods of computer-aided drug discovery (CADD) are known to propose compounds in different chemical spaces as hit molecules for the same target protein. This study aimed at using multiple CADD methods through open innovation to achieve a level of hit molecule diversity that is not achievable with any particular single method. We held a compound proposal contest, in which multiple research groups participated and predicted inhibitors of tyrosine-protein kinase Yes. This showed whether collective knowledge based on individual approaches helped to obtain hit compounds from a broad range of chemical space and whether the contest-based approach was effective. PMID:26607293

  6. Development of methods for analysis of knee articular cartilage degeneration by magnetic resonance imaging data

    NASA Astrophysics Data System (ADS)

    Suponenkovs, Artjoms; Glazs, Aleksandrs; Platkajis, Ardis

    2017-03-01

    The aim of this paper is to describe new methods for analyzing knee articular cartilage degeneration. The most important aspects of magnetic resonance imaging, knee joint anatomy, the stages of knee osteoarthritis, medical image segmentation and relaxation-time calculation are reviewed. This paper proposes new methods for relaxation-time calculation and medical image segmentation. The experimental part describes the most important aspects of analysing changes in articular cartilage relaxation times. This part contains experimental results, which show the codependence between relaxation times and organic structure. These experimental results and the proposed methods can be helpful for early osteoarthritis diagnostics.

  7. Developing a Self-Report-Based Sequential Analysis Method for Educational Technology Systems: A Process-Based Usability Evaluation

    ERIC Educational Resources Information Center

    Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse

    2015-01-01

    The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…

  8. Who Needs What? Some Thoughts on the Possibility of Using Psychology in Actor Training.

    ERIC Educational Resources Information Center

    Vulova, Marina

    2001-01-01

    Contends that the aim of professional actor training is to reveal and develop an actor's individuality. Proposes that the responsibility of drama teachers is to lead training in such a way that students feel accepted, understood, and respected. Proposes that psychodrama is the most appropriate method for student and professional actors' personal…

  9. Cognitive Approaches for Medicine in Cloud Computing.

    PubMed

    Ogiela, Urszula; Takizawa, Makoto; Ogiela, Lidia

    2018-03-03

    This paper will present the application potential of the cognitive approach to data interpretation, with special reference to medical areas. The possibilities of using the meaning approach to data description and analysis will be proposed for data analysis tasks in Cloud Computing. The methods of cognitive data management in Cloud Computing are aimed at supporting the processes of protecting data against unauthorised takeover, and they serve to enhance the data management processes. The accomplishment of the proposed tasks will be the definition of algorithms for the execution of meaning data interpretation processes in safe Cloud Computing. • We proposed cognitive methods for data description. • We proposed techniques for securing data in Cloud Computing. • The application of cognitive approaches in medicine was described.

  10. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying components, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised demodulation method with singular-value decomposition, the parametric time-frequency analysis method with filtering, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.
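
    The demodulation step can be illustrated with a toy example (a sketch under simplifying assumptions, not the paper's full framework): once a common FM law is estimated from the time-frequency plane, multiplying by its conjugate maps the overlapped components onto constant frequencies, so a latent feature model only needs to separate stationary tones.

    ```python
    import numpy as np

    fs, T = 1000.0, 1.0
    t = np.arange(0, T, 1 / fs)

    # Two overlapping components sharing one quadratic FM law; complex
    # exponentials stand in for the analytic signal (in practice it would
    # be obtained from the real measurement, e.g. via a Hilbert transform).
    phi = 2 * np.pi * (50 * t + 40 * t**2)          # estimated common FM phase
    x = np.exp(1j * phi) + 0.6 * np.exp(1j * (phi + 2 * np.pi * 30 * t))

    # Demodulation: the conjugate of the FM law maps both components onto
    # constant frequencies (here 0 Hz and 30 Hz).
    x_demod = x * np.exp(-1j * phi)
    f = np.fft.fftfreq(len(t), 1 / fs)
    X = np.abs(np.fft.fft(x_demod)) / len(t)
    print(sorted(f[np.argsort(X)[-2:]]))            # -> [0.0, 30.0]
    ```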

  11. Qualitative approaches to use of the RE-AIM framework: rationale and methods.

    PubMed

    Holtrop, Jodi Summers; Rabin, Borsika A; Glasgow, Russell E

    2018-03-13

    There have been over 430 publications using the RE-AIM model for planning and evaluation of health programs and policies, as well as numerous applications of the model in grant proposals and national programs. Full use of the model includes use of qualitative methods to understand why and how results were obtained on different RE-AIM dimensions; however, recent reviews have revealed that qualitative methods have been used infrequently. Having quantitative and qualitative methods and results iteratively inform each other should enhance understanding and lessons learned. Because there have been few published examples of qualitative approaches and methods using RE-AIM for planning or assessment and no guidance on how qualitative approaches can inform these processes, we provide guidance on qualitative methods to address the RE-AIM model and its various dimensions. The intended audience is researchers interested in applying RE-AIM or similar implementation models, but the methods discussed should also be relevant to those in community or clinical settings. We present directions for, examples of, and guidance on how qualitative methods can be used to address each of the five RE-AIM dimensions. Formative qualitative methods can be helpful in planning interventions and designing for dissemination. Summative qualitative methods are useful when used in an iterative, mixed methods approach for understanding how and why different patterns of results occur. In summary, qualitative and mixed methods approaches to RE-AIM help understand complex situations and results, why and how outcomes were obtained, and contextual factors not easily assessed using quantitative measures.

  12. (Un)Earthing a Vocabulary of Values: A Discourse Analysis for Ecocomposition

    ERIC Educational Resources Information Center

    Walker, Paul

    2010-01-01

    In this article, the author aims to propose an analytic method through which composition students and others might discover and understand the ecological complexities of prevailing environmental terminology that create "wicked problems." Through this method, students engage in "discursive ecology" by exploring the connections…

  13. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.

  14. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE PAGES

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    2016-01-01

    This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages 1) the optimal ordering of POD basis vectors, and 2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
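
    A minimal sketch of the POD ingredient (illustrative only; the goal-oriented weighting and the three-stage solver of the paper are not reproduced): the leading left singular vectors of a snapshot matrix of previous solutions define a low-dimensional subspace, and a Galerkin projection onto it provides a cheap warm start for the next system in the sequence.

    ```python
    import numpy as np

    def pod_basis(snapshots, k):
        """k leading POD modes of a matrix whose columns are previous
        solutions; the SVD orders modes by energy (singular value)."""
        U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
        return U[:, :k]

    def galerkin_guess(A, b, V):
        """Warm start for A y = b from the small projected system."""
        return V @ np.linalg.solve(V.T @ A @ V, V.T @ b)

    rng = np.random.default_rng(0)
    M = rng.standard_normal((200, 200))
    A = M @ M.T + 200 * np.eye(200)          # symmetric positive definite
    b0 = rng.standard_normal(200)
    snaps = np.column_stack(                 # solutions of a slowly varying sequence
        [np.linalg.solve(A, b0 + 0.1 * rng.standard_normal(200)) for _ in range(10)])
    V = pod_basis(snaps, 4)
    b = b0 + 0.1 * rng.standard_normal(200)  # next right-hand side in the sequence
    x0 = galerkin_guess(A, b, V)
    print(np.linalg.norm(b - A @ x0) / np.linalg.norm(b))  # residual far below 1
    ```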

  15. A Synthetic Approach to the Transfer Matrix Method in Classical and Quantum Physics

    ERIC Educational Resources Information Center

    Pujol, O.; Perez, J. P.

    2007-01-01

    The aim of this paper is to propose a synthetic approach to the transfer matrix method in classical and quantum physics. This method is an efficient tool to deal with complicated physical systems of practical importance in geometrical light or charged particle optics, classical electronics, mechanics, electromagnetics and quantum physics. Teaching…

  16. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    PubMed

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated manually by optometrists. The proposed work aims to provide an efficient imaging solution which can help in automating the process of Glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical feature based strategic framework which improves the detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point contour joining are proposed to construct smooth contours of the Optic Disc. Based on a clinical approach as used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend from the optic disc boundary, and connects them to obtain the contours of the Optic Cup. The proposed method has been compared with the ground truth marked by medical experts, and the similarity parameters used to determine the performance of the proposed method have yielded a high segmentation similarity. The proposed method achieved a macro-averaged f-score of 0.9485 and an accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant and can be used for Glaucoma screening over a large population, working in real time. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Eddy current loss analysis of open-slot fault-tolerant permanent-magnet machines based on conformal mapping method

    NASA Astrophysics Data System (ADS)

    Ji, Jinghua; Luo, Jianhua; Lei, Qian; Bian, Fangfang

    2017-05-01

    This paper proposes an analytical method, based on the conformal mapping (CM) method, for the accurate evaluation of the magnetic field and eddy current (EC) loss in fault-tolerant permanent-magnet (FTPM) machines. The aim of the modulation function applied in the CM method is to transform the open-slot structure into a fully closed-slot structure, whose air-gap flux density is easy to calculate analytically. Therefore, with the help of the Matlab Schwarz-Christoffel (SC) Toolbox, both the magnetic flux density and the EC density of the FTPM machine are obtained accurately. Finally, a time-stepped transient finite-element method (FEM) is used to verify the theoretical analysis, showing that the proposed method is able to predict the magnetic flux density and EC loss precisely.

  18. A hybrid method for evaluating enterprise architecture implementation.

    PubMed

    Nikpay, Fatemeh; Ahmad, Rodina; Yin Kia, Chiam

    2017-02-01

    Enterprise Architecture (EA) implementation evaluation provides a set of methods and practices for evaluating the EA implementation artefacts within an EA implementation project. Existing EA evaluation models have insufficient practices in terms of considering all EA functions and processes, using structured methods in developing EA implementation, employing mature practices, and using appropriate metrics to achieve proper evaluation. The aim of this research is to develop a hybrid evaluation method that supports achieving the objectives of EA implementation. To attain this aim, the first step is to identify EA implementation evaluation practices. To this end, a Systematic Literature Review (SLR) was conducted. Second, the proposed hybrid method was developed based on the foundation and information extracted from the SLR, semi-structured interviews with EA practitioners, program theory evaluation and Information Systems (ISs) evaluation. Finally, the proposed method was validated by means of a case study and expert reviews. This research provides a suitable foundation for researchers who wish to extend and continue this research topic with further analysis and exploration, and for practitioners who would like to employ an effective and lightweight evaluation method for EA projects. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Combining large number of weak biomarkers based on AUC.

    PubMed

    Yan, Li; Tian, Lili; Liu, Song

    2015-12-20

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Combining large number of weak biomarkers based on AUC

    PubMed Central

    Yan, Li; Tian, Lili; Liu, Song

    2018-01-01

    Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. PMID:26227901
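
    As a toy illustration of the building block such a combination method rests on (a sketch, not the authors' pairwise algorithm): with two weak markers, the empirical AUC of a linear combination can be maximized by a one-dimensional grid search over the mixing angle, and the combined AUC exceeds that of either marker alone.

    ```python
    import numpy as np

    def auc(score, y):
        """Empirical AUC = P(score|case > score|control); ties count 1/2."""
        pos, neg = score[y == 1], score[y == 0]
        d = pos[:, None] - neg[None, :]
        return (d > 0).mean() + 0.5 * (d == 0).mean()

    def combine_pair(x1, x2, y, n_grid=180):
        """Best empirical AUC over combinations cos(t)*x1 + sin(t)*x2."""
        best_a, best_t = 0.0, 0.0
        for th in np.linspace(0.0, np.pi, n_grid, endpoint=False):
            s = np.cos(th) * x1 + np.sin(th) * x2
            a = max(auc(s, y), auc(-s, y))   # allow a direction flip
            if a > best_a:
                best_a, best_t = a, th
        return best_a, best_t

    rng = np.random.default_rng(1)
    n, shift = 200, 0.35                     # two deliberately weak markers
    y = np.repeat([0, 1], n)
    x1 = rng.standard_normal(2 * n) + shift * y
    x2 = rng.standard_normal(2 * n) + shift * y
    print(round(auc(x1, y), 3), round(auc(x2, y), 3),
          round(combine_pair(x1, x2, y)[0], 3))   # combined AUC is largest
    ```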

  1. Cooperative parallel adaptive neighbourhood search for the disjunctively constrained knapsack problem

    NASA Astrophysics Data System (ADS)

    Quan, Zhe; Wu, Lei

    2017-09-01

    This article investigates the use of parallel computing for solving the disjunctively constrained knapsack problem. The proposed parallel computing model can be viewed as a cooperative algorithm based on a multi-neighbourhood search. The cooperation system is composed of a team manager and a crowd of team members. The team members aim at applying their own search strategies to explore the solution space. The team manager collects the solutions from the members and shares the best one with them. The performance of the proposed method is evaluated on a group of benchmark data sets. The results obtained are compared to those reached by the best methods from the literature. The results show that the proposed method is able to provide the best solutions in most cases. In order to highlight the robustness of the proposed parallel computing model, a new set of large-scale instances is introduced. Encouraging results have been obtained.

  2. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Wang, Wen-xi; Zhu, Ke-ren; Zhang, Chao-yong; Rao, Yun-qing

    2014-11-01

    Mixed-model assembly line sequencing is significant in reducing the production time and overall cost of production. To improve production efficiency, a mathematical model aiming simultaneously to minimize overtime, idle time and total set-up costs is developed. To obtain high-quality and stable solutions, an advanced scatter search approach is proposed. In the proposed algorithm, a new diversification generation method based on a genetic algorithm is presented to generate a set of potentially diverse and high-quality initial solutions. Many methods, including reference set update, subset generation, solution combination and improvement methods, are designed to maintain the diversification of populations and to obtain high-quality ideal solutions. The proposed model and algorithm are applied and validated in a case company. The results indicate that the proposed advanced scatter search approach is significant for mixed-model assembly line sequencing in this company.

  3. Microwave-assisted Derivatization of Fatty Acids for Its Measurement in Milk Using High-Performance Liquid Chromatography.

    PubMed

    Shrestha, Rojeet; Miura, Yusuke; Hirano, Ken-Ichi; Chen, Zhen; Okabe, Hiroaki; Chiba, Hitoshi; Hui, Shu-Ping

    2018-01-01

    Fatty acid (FA) profiling of milk has important applications in human health and nutrition. Conventional methods for the saponification and derivatization of FA are time-consuming and laborious. We aimed to develop a simple, rapid, and economical method for the determination of FA in milk. We applied a beneficial approach of microwave-assisted saponification (MAS) of milk fats and microwave-assisted derivatization (MAD) of FA to its hydrazides, integrated with HPLC-based analysis. The optimal conditions for MAS and MAD were determined. Microwave irradiation significantly reduced the sample preparation time from 80 min in the conventional method to less than 3 min. We used three internal standards for the measurement of short-, medium- and long-chain FA. The proposed method showed satisfactory analytical sensitivity, recovery and reproducibility. There was a significant correlation in the milk FA concentrations between the proposed and conventional methods. Being quick, economical, and convenient, the proposed method for milk FA measurement can be a substitute for the conventional method.

  4. Learning locality preserving graph from data.

    PubMed

    Zhang, Yan-Ming; Huang, Kaizhu; Hou, Xinwen; Liu, Cheng-Lin

    2014-11-01

    Machine learning based on graph representation, or manifold learning, has attracted great interest in recent years. As the discrete approximation of a data manifold, the graph plays a crucial role in these kinds of learning approaches. In this paper, we propose a novel learning method for graph construction, which is distinct from previous methods in that it solves an optimization problem with the aim of directly preserving the local information of the original data set. We show that the proposed objective has close connections with the popular Laplacian Eigenmap problem, and is hence well justified. The optimization turns out to be a quadratic programming problem with n(n-1)/2 variables (n is the number of data points). Exploiting the sparsity of the graph, we further propose a more efficient cutting plane algorithm to solve the problem, making the method more scalable in practice. In the context of clustering and semi-supervised learning, we demonstrate the advantages of our proposed method through experiments.

  5. An Improved Distributed Secondary Control Method for DC Microgrids With Enhanced Dynamic Current Sharing Performance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Panbao; Lu, Xiaonan; Yang, Xu

    This paper proposes an improved distributed secondary control scheme for dc microgrids (MGs), aiming at overcoming the drawbacks of the conventional droop control method. The proposed secondary control scheme can remove the dc voltage deviation and improve the current sharing accuracy by using voltage-shifting and slope-adjusting approaches simultaneously. Meanwhile, the average value of the droop coefficients is calculated, and then it is controlled by an additional controller included in the distributed secondary control layer to ensure that each droop coefficient converges at a reasonable value. Hence, by adjusting the droop coefficient, each participating converter has equal output impedance, and accurate proportional load current sharing can be achieved with different line resistances. Furthermore, the current sharing performance in steady and transient states can be enhanced by using the proposed method. The effectiveness of the proposed method is verified by detailed experimental tests based on a 3 × 1 kW prototype with three interface converters.
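
    A quasi-static sketch of the two correction handles named above, with illustrative numbers (the paper's distributed averaging and converter dynamics are not modelled): voltage shifting adds a common term that restores the bus voltage, while slope adjusting perturbs each droop coefficient until the converter currents equalize despite unequal line resistances.

    ```python
    import numpy as np

    # Two droop-controlled converters feed one load bus through unequal
    # line resistances (the classic cause of unequal current sharing).
    V_ref, R_load = 48.0, 2.4
    r_line = np.array([0.10, 0.40])          # ohm, unequal feeders
    R_d = np.array([0.50, 0.50])             # initial droop slopes
    dV = 0.0                                 # common voltage-shifting term

    def steady_state(R_d, dV):
        """Solve V_i = V_ref + dV - R_d[i]*I_i, V_i - V_b = r_line[i]*I_i,
        V_b = R_load*(I_0 + I_1) for the converter currents."""
        A = np.array([[R_d[0] + r_line[0] + R_load, R_load],
                      [R_load, R_d[1] + r_line[1] + R_load]])
        I = np.linalg.solve(A, np.full(2, V_ref + dV))
        return I, R_load * I.sum()

    for _ in range(200):                     # quasi-static secondary iterations
        I, V_bus = steady_state(R_d, dV)
        dV += 0.05 * (V_ref - V_bus)         # voltage shifting: restore the bus
        R_d += 0.05 * (I - I.mean())         # slope adjusting: equalize currents
    I, V_bus = steady_state(R_d, dV)
    print(V_bus, I)                          # bus near 48 V, currents nearly equal
    ```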

  6. Synthesis and use of 2-[18F]fluoromalondialdehyde, an accessible synthon for bioconjugation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hooker, Jacob M.

    We proposed methods for the synthesis and purification of 2-[18F]fluoromalondialdehyde, which will be a readily accessible synthon for bioconjugation. Our achievements in these areas will specifically address a stated goal of the DOE by providing a transformational technology for macromolecule radiolabeling. Accomplishment of our aims will serve both DOE mission-related research and nuclear medicine research supported by the NIH and industry. At the heart of our proposal is the aim to “improve synthetic methodology for rapidly and efficiently incorporating radionuclides into a wide range of organic compounds.”

  7. The Hull Method for Selecting the Number of Common Factors

    ERIC Educational Resources Information Center

    Lorenzo-Seva, Urbano; Timmerman, Marieke E.; Kiers, Henk A. L.

    2011-01-01

    A common problem in exploratory factor analysis is how many factors need to be extracted from a particular data set. We propose a new method for selecting the number of major common factors: the Hull method, which aims to find a model with an optimal balance between model fit and number of parameters. We examine the performance of the method in an…

  8. Diverse task scheduling for individualized requirements in cloud manufacturing

    NASA Astrophysics Data System (ADS)

    Zhou, Longfei; Zhang, Lin; Zhao, Chun; Laili, Yuanjun; Xu, Lida

    2018-03-01

    Cloud manufacturing (CMfg) has emerged as a new manufacturing paradigm that provides ubiquitous, on-demand manufacturing services to customers through network and CMfg platforms. In CMfg system, task scheduling as an important means of finding suitable services for specific manufacturing tasks plays a key role in enhancing the system performance. Customers' requirements in CMfg are highly individualized, which leads to diverse manufacturing tasks in terms of execution flows and users' preferences. We focus on diverse manufacturing tasks and aim to address their scheduling issue in CMfg. First of all, a mathematical model of task scheduling is built based on analysis of the scheduling process in CMfg. To solve this scheduling problem, we propose a scheduling method aiming for diverse tasks, which enables each service demander to obtain desired manufacturing services. The candidate service sets are generated according to subtask directed graphs. An improved genetic algorithm is applied to searching for optimal task scheduling solutions. The effectiveness of the scheduling method proposed is verified by a case study with individualized customers' requirements. The results indicate that the proposed task scheduling method is able to achieve better performance than some usual algorithms such as simulated annealing and pattern search.

  9. Hyperspectral imagery super-resolution by compressive sensing inspired dictionary learning and spatial-spectral regularization.

    PubMed

    Huang, Wei; Xiao, Liang; Liu, Hongyi; Wei, Zhihui

    2015-01-19

    Due to limitations of the instrument and imaging optics, it is difficult to acquire high spatial resolution hyperspectral imagery (HSI). Super-resolution (SR) imaging aims at inferring high quality images of a given scene from degraded versions of the same scene. This paper proposes a novel hyperspectral imagery super-resolution (HSI-SR) method via dictionary learning and spatial-spectral regularization. The main contributions of this paper are twofold. First, inspired by the compressive sensing (CS) framework, for learning the high resolution dictionary, we encourage stronger sparsity on image patches and promote smaller coherence between the learned dictionary and the sensing matrix. Thus, a sparsity and incoherence restricted dictionary learning method is proposed to achieve a more efficient sparse representation. Second, a variational regularization model combining a spatial sparsity regularization term and a new local spectral similarity preserving term is proposed to integrate the spectral and spatial-contextual information of the HSI. Experimental results show that the proposed method can effectively recover spatial information and better preserve spectral information. The high spatial resolution HSI reconstructed by the proposed method outperforms the results reconstructed by other well-known methods in terms of both objective measurements and visual evaluation.

  10. Cross-domain latent space projection for person re-identification

    NASA Astrophysics Data System (ADS)

    Pu, Nan; Wu, Song; Qian, Li; Xiao, Guoqiang

    2018-04-01

    In this paper, we study the problem of person re-identification and propose a cross-domain latent space projection (CDLSP) method to address the absence or insufficiency of labeled data in the target domain. Under the assumption that the visual features in the source domain and target domain share a similar geometric structure, we transform the visual features from the source domain and target domain to a common latent space by optimizing the objective function defined in the manifold alignment method. Moreover, the proposed objective function takes into account re-id-specific knowledge with the aim of improving the performance of re-id under complex situations. Extensive experiments conducted on four benchmark datasets show that the proposed CDLSP outperforms or is competitive with state-of-the-art methods for person re-identification.

  11. Investigation of KDP crystal surface based on an improved bidimensional empirical mode decomposition method

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi

    2018-03-01

    This paper proposes a novel spatial frequency analysis method for the investigation of potassium dihydrogen phosphate (KDP) crystal surfaces based on an improved bidimensional empirical mode decomposition (BEMD) method. Aiming to eliminate end effects of the BEMD method and improve the intrinsic mode functions (IMFs) for the efficient identification of texture features, a denoising process was embedded in the sifting iteration of the BEMD method. By removing redundant information from the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes were identified. Comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), as well as the traditional BEMD method, demonstrated that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. Additionally, the proposed method is a promising tool for online monitoring and optimal control of precision machining processes.

  12. Adolescents' Views about a Proposed Rewards Intervention to Promote Healthy Food Choice in Secondary School Canteens

    ERIC Educational Resources Information Center

    McEvoy, C. T.; Lawton, J.; Kee, F.; Young, I. S.; Woodside, J. V.; McBratney, J.; McKinley, M. C.

    2014-01-01

    Using rewards may be an effective method to positively influence adolescent eating behaviour, but evidence regarding this approach is limited. The aim of this study was to explore young adolescent views about a proposed reward intervention associated with food choice in school canteens. Focus groups were held in 10 schools located in lower…

  13. Validation of Proposed "DSM-5" Criteria for Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.; Speer, Leslie; Embacher, Rebecca; Law, Paul; Constantino, John; Findling, Robert L.; Hardan, Antonio Y.; Eng, Charis

    2012-01-01

    Objective: The primary aim of the present study was to evaluate the validity of proposed "DSM-5" criteria for autism spectrum disorder (ASD). Method: We analyzed symptoms from 14,744 siblings (8,911 ASD and 5,863 non-ASD) included in a national registry, the Interactive Autism Network. Youth 2 through 18 years of age were included if at least one…

  14. Proposal for National Targets in the Framework of the European Reduction Goal for Early School Leaving

    ERIC Educational Resources Information Center

    Lastra-Bravo, Xavier B.; Tolón-Becerra, Alfredo; Salinas-Andújar, José A.

    2013-01-01

    According to the European Commission's "Europe 2020" strategy, the early school leaving (ESL) rate in European Union (EU) Member States must be reduced to a maximum of 10 per cent by 2020. This paper proposes a nonlinear distribution method based on dynamic targets for reducing the percentage of early school leavers. The aim of this…

  15. Link Prediction in Evolving Networks Based on Popularity of Nodes.

    PubMed

    Wang, Tong; He, Xing-Sheng; Zhou, Ming-Yang; Fu, Zhong-Qian

    2017-08-02

    Link prediction aims to uncover the underlying relationships behind networks, which could be utilized to predict missing edges or identify spurious edges. The key issue of link prediction is to estimate the likelihood of potential links in networks. Most classical static-structure-based methods ignore the temporal aspects of networks; limited by time-varying features, such approaches perform poorly in evolving networks. In this paper, we propose the hypothesis that the ability of each node to attract links depends not only on its structural importance, but also on its current popularity (activeness), since active nodes have a much higher probability of attracting future links. A novel approach named the popularity based structural perturbation method (PBSPM) and its fast algorithm are then proposed to characterize the likelihood of an edge from both the existing connectivity structure and the current popularity of its two endpoints. Experiments on six evolving networks show that the proposed methods outperform state-of-the-art methods in accuracy and robustness. Besides, visual results and statistical analysis reveal that the proposed methods are inclined to predict future edges between active nodes, rather than edges between inactive nodes.
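
    The hypothesis can be illustrated with a toy score (not the PBSPM algorithm itself): weight a structural similarity such as common neighbours by an exponentially decayed count of each endpoint's recent edges, so candidate links between currently active nodes are ranked first. The 'time' edge attribute and the decay constant are illustrative assumptions.

    ```python
    import math
    import networkx as nx

    def activeness(G, node, t_now, tau=5.0):
        """Popularity proxy: exponentially decayed count of recent edges
        (each edge is assumed to carry a 'time' attribute)."""
        return sum(math.exp(-(t_now - d["time"]) / tau)
                   for _, _, d in G.edges(node, data=True))

    def popularity_score(G, u, v, t_now):
        """Common-neighbour score modulated by endpoint activeness, a toy
        stand-in for the structural-perturbation score used by PBSPM."""
        cn = len(list(nx.common_neighbors(G, u, v)))
        return cn * (activeness(G, u, t_now) + activeness(G, v, t_now))

    G = nx.Graph()
    G.add_edges_from([(0, 1, {"time": 9}), (0, 2, {"time": 3}),
                      (1, 2, {"time": 8}), (1, 3, {"time": 8}),
                      (2, 4, {"time": 1}), (3, 4, {"time": 0})])
    candidates = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
    ranked = sorted(candidates, key=lambda e: -popularity_score(G, *e, t_now=10))
    print(ranked[:2])   # links touching the recently active node 1 rank first
    ```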

  16. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
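
    A compact sketch of the described chain using scikit-learn on synthetic data (illustrative; the paper's decision-tree search of the feature space is approximated here by impurity-based importance selection, and the BCI Competition recordings are replaced by random features):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectFromModel
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import Pipeline
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 64))       # stand-in for EEG epochs x features
    y = (X[:, :3].sum(axis=1) + 0.5 * rng.standard_normal(200) > 0).astype(int)

    pipe = Pipeline([
        ("pca", PCA(n_components=20)),                     # feature extraction
        ("select", SelectFromModel(                        # DT-driven selection
            DecisionTreeClassifier(max_depth=5, random_state=0))),
        ("svm", SVC(kernel="rbf", C=1.0)),                 # final classifier
    ])
    print(cross_val_score(pipe, X, y, cv=5).mean())
    ```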

  17. A consensus reaching model for 2-tuple linguistic multiple attribute group decision making with incomplete weight information

    NASA Astrophysics Data System (ADS)

    Zhang, Wancheng; Xu, Yejun; Wang, Huimin

    2016-01-01

    The aim of this paper is to put forward a consensus reaching method for multi-attribute group decision-making (MAGDM) problems with linguistic information, in which the weight information of experts and attributes is unknown. First, some basic concepts and operational laws of the 2-tuple linguistic label are introduced. Then, a grey relational analysis method and a maximising deviation method are proposed to determine the incomplete weight information of experts and attributes, respectively. To eliminate conflict in the group, a weight-updating model is employed to derive the weights of experts based on their contribution to the consensus reaching process. After conflict elimination, the final group preference can be obtained, which gives the ranking of the alternatives. The model can effectively avoid the information distortion which occurs regularly in linguistic information processing. Finally, an illustrative example is given to illustrate the application of the proposed method, and comparative analyses with existing methods are offered to show the advantages of the proposed method.

  18. On-line estimation and compensation of measurement delay in GPS/SINS integration

    NASA Astrophysics Data System (ADS)

    Yang, Tao; Wang, Wei

    2008-10-01

    The chief aim of this paper is to propose a simple on-line estimation and compensation method for GPS/SINS measurement delay. The causes of time delay in GPS/SINS integration are analyzed in this paper. New Kalman filter state equations augmented by the measurement delay and modified measurement equations are derived. Based on an open-loop Kalman filter, several simulations are run, the results of which show that with the proposed method, the estimation and compensation error of the measurement delay is below 0.1 s.
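
    One common way to realize what the abstract describes, sketched here as an assumption since the paper's exact derivation is not reproduced, is to append the delay τ to the state vector and linearize the delayed measurement with a first-order Taylor expansion:

    ```latex
    x_a = \begin{bmatrix} x \\ \tau \end{bmatrix}, \qquad
    \dot{x}_a = \begin{bmatrix} F x + w \\ 0 \end{bmatrix}, \qquad
    z(t) = H\,x(t-\tau) + v \approx H\,x(t) - \tau\,H\,\dot{x}(t) + v
    ```

    The modified measurement matrix then becomes $H_a = [\,H \;\; -H\hat{\dot{x}}\,]$ evaluated at the current estimate, so the filter estimates and compensates τ on-line like any other state.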

  19. Efficient solution methodology for calibrating the hemodynamic model using functional Magnetic Resonance Imaging (fMRI) measurements.

    PubMed

    Zambri, Brian; Djellouli, Rabia; Laleg-Kirati, Taous-Meriem

    2015-08-01

    Our aim is to propose a numerical strategy for accurately and efficiently retrieving the biophysiological parameters as well as the external stimulus characteristics corresponding to the hemodynamic mathematical model that describes changes in blood flow and blood oxygenation during brain activation. The proposed method employs the TNM-CKF method developed in [1], but in a prediction/correction framework. We present numerical results using both real and synthetic functional Magnetic Resonance Imaging (fMRI) measurements to highlight the performance characteristics of this computational methodology.

  20. Monitoring biodiesel reactions of soybean oil and sunflower oil using ultrasonic parameters

    NASA Astrophysics Data System (ADS)

    Figueiredo, M. K. K.; Silva, C. E. R.; Alvarenga, A. V.; Costa-Félix, R. P. B.

    2015-01-01

    Biodiesel is an innovation that attempts to substitute diesel oil with biomass. The aim of this paper is to show the development of a real-time method to monitor transesterification reactions by using low-power ultrasound and pulse/echo techniques. The results showed that it is possible to identify different events during the transesterification process by using the proposed parameters, showing that the proposed method is a feasible way to monitor the reactions of biodiesel during its fabrication, in real time, and with relatively low-cost equipment.

  1. Forecasting Construction Cost Index based on visibility graph: A network approach

    NASA Astrophysics Data System (ADS)

    Zhang, Rong; Ashuri, Baabak; Shyr, Yu; Deng, Yong

    2018-03-01

    Engineering News-Record (ENR), a professional magazine in the field of global construction engineering, publishes the Construction Cost Index (CCI) every month. Cost estimators and contractors assess projects, arrange budgets and prepare bids by forecasting CCI. However, fluctuations and uncertainties of CCI cause irrational estimations now and then. This paper aims at achieving more accurate predictions of CCI based on a network approach in which the time series is first converted into a visibility graph and future values are forecast by link prediction. According to the experimental results, the proposed method shows satisfactory performance, since the error measures are acceptable. Compared with other methods, the proposed method is easier to implement and is able to forecast CCI with smaller errors. The proposed method is thus an efficient way to provide considerably accurate CCI predictions, which will contribute to construction engineering by assisting individuals and organizations in reducing costs and making project schedules.
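
    The first step of the approach, converting the CCI series into a natural visibility graph, can be sketched as follows (the link-prediction forecasting stage is not reproduced, and the sample values are made up):

    ```python
    import numpy as np

    def natural_visibility_edges(y):
        """Natural visibility graph of a time series: nodes a < b are linked
        iff every sample between them lies strictly below the straight line
        joining (a, y[a]) and (b, y[b])."""
        n, edges = len(y), []
        for a in range(n - 1):
            for b in range(a + 1, n):
                c = np.arange(a + 1, b)
                line = y[b] + (y[a] - y[b]) * (b - c) / (b - a)
                if np.all(y[c] < line):
                    edges.append((a, b))
        return edges

    cci = np.array([100.2, 100.9, 100.5, 101.4, 101.1, 102.0, 101.7])
    print(natural_visibility_edges(cci))
    ```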

  2. Feature and Score Fusion Based Multiple Classifier Selection for Iris Recognition

    PubMed Central

    Islam, Md. Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al. PMID:25114676

  3. Feature and score fusion based multiple classifier selection for iris recognition.

    PubMed

    Islam, Md Rabiul

    2014-01-01

    The aim of this work is to propose a new feature and score fusion based iris recognition approach in which a voting method on the Multiple Classifier Selection technique has been applied. The outputs of four Discrete Hidden Markov Model classifiers, that is, a left iris based unimodal system, a right iris based unimodal system, a left-right iris feature fusion based multimodal system, and a left-right iris likelihood ratio score fusion based multimodal system, are combined using the voting method to achieve the final recognition result. The CASIA-IrisV4 database has been used to measure the performance of the proposed system with various dimensions. Experimental results show the versatility of the proposed system of four different classifiers with various dimensions. Finally, the recognition accuracy of the proposed system has been compared with the existing Hamming distance score fusion approach proposed by Ma et al., the log-likelihood ratio score fusion approach proposed by Schmid et al., and the single level feature fusion approach proposed by Hollingsworth et al.
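
    The final fusion stage reduces to majority voting over the four classifier decisions; a minimal sketch (tie-breaking by first appearance is a design choice of this sketch, not taken from the paper):

    ```python
    from collections import Counter

    def vote(decisions):
        """Majority vote across classifier decisions; ties are broken by
        first appearance (a design choice of this sketch)."""
        return Counter(decisions).most_common(1)[0][0]

    # decisions from the left, right, feature-fusion and score-fusion classifiers
    print(vote(["subject_07", "subject_07", "subject_12", "subject_07"]))
    ```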

  4. A Study on Human Oriented Autonomous Distributed Manufacturing System —Real-time Scheduling Method Based on Preference of Human Operators

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Kuwahara, Shinya; Tanimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new distributed architectures of manufacturing systems have been proposed, aiming at realizing more flexible control structures for manufacturing systems. Much research has been carried out on distributed architectures for the planning and control of manufacturing systems. However, human operators have not yet been considered as autonomous components of distributed manufacturing systems. A real-time scheduling method is proposed, in this research, to select suitable combinations of human operators, resources and jobs for the manufacturing processes. The proposed scheduling method consists of the following three steps. In the first step, the human operators select their favorite manufacturing processes, which they will carry out in the next time period, based on their preferences. In the second step, the machine tools and the jobs select suitable combinations for the next machining processes. In the third step, the automated guided vehicles and the jobs select suitable combinations for the next transportation processes. The second and third steps are carried out by using the utility value based method and the dispatching rule-based method proposed in previous research. Some case studies have been carried out to verify the effectiveness of the proposed method.

  5. Automated control of robotic camera tacheometers for measurements of industrial large scale objects

    NASA Astrophysics Data System (ADS)

    Heimonen, Teuvo; Leinonen, Jukka; Sipola, Jani

    2013-04-01

    Modern robotic tacheometers equipped with digital cameras (also called imaging total stations) and capable of reflectorless measurement offer new possibilities for gathering 3D data. In this paper an automated approach for the tacheometer measurements needed in the dimensional control of industrial large scale objects is proposed. There are two new contributions in the approach: the automated extraction of the vital points (i.e. the points to be measured) and the automated fine aiming of the tacheometer. The proposed approach proceeds through the following steps: First, the coordinates of the vital points are automatically extracted from the computer aided design (CAD) data. The extracted design coordinates are then used to aim the tacheometer at the designed location of the points, one after another. However, due to the deviations between the designed and the actual location of the points, the aiming needs to be adjusted. An automated dynamic image-based look-and-move type servoing architecture is proposed for this task. After successful fine aiming, the actual coordinates of the point in question can be automatically measured by using the measuring functionalities of the tacheometer. The approach was validated experimentally and found to be feasible. On average, 97% of the points actually measured in four different shipbuilding measurement cases were indeed proposed as vital points by the automated extraction algorithm. The accuracy of the results obtained with the automatic control method of the tacheometer was comparable to the results obtained with manual control, and the reliability of the image processing step of the method was also found to be high in the laboratory experiments.

  6. An Accurate Transmitting Power Control Method in Wireless Communication Transceivers

    NASA Astrophysics Data System (ADS)

    Zhang, Naikang; Wen, Zhiping; Hou, Xunping; Bi, Bo

    2018-01-01

    Power control circuits are widely used in transceivers, aiming at stabilizing the transmitted signal power to a specified value, thereby reducing power consumption and interference with other frequency bands. In order to overcome the shortcomings of traditional modes of power control, this paper proposes an accurate signal power detection method that multiplexes the receiver, and realizes transmitting power control in the digital domain. The simulation results show that this novel digital power control approach has the advantages of small delay, high precision and a simplified design procedure. The proposed method is applicable to transceivers working over a large frequency and dynamic range, and has good engineering practicability.

  7. Electrochemical model based charge optimization for lithium-ion batteries

    NASA Astrophysics Data System (ADS)

    Pramanik, Sourav; Anwar, Sohel

    2016-05-01

    In this paper, we propose the design of a novel optimal strategy for charging lithium-ion batteries, based on an electrochemical battery model, that is aimed at improved performance. A performance index that aims at minimizing the charging effort along with a minimum deviation from the rated maximum thresholds for cell temperature and charging current has been defined. The method proposed in this paper aims at achieving a faster charging rate while maintaining safe limits for various battery parameters. Safe operation of the battery is achieved by including the battery bulk temperature as a control component in the performance index, which is of critical importance for electric vehicles. Another important aspect of the performance objective proposed here is the efficiency of the algorithm, which allows higher charging rates without compromising the internal electrochemical kinetics of the battery, thereby preventing abusive conditions and improving long-term durability. A more realistic model, based on battery electrochemistry, has been used for the design of the optimal algorithm, as opposed to the conventional equivalent circuit models. To solve the optimization problem, Pontryagin's principle has been used, which is very effective for constrained optimization problems with both state and input constraints. Simulation results show that the proposed optimal charging algorithm is capable of shortening the charging time of a lithium-ion cell while maintaining the temperature constraint when compared with standard constant-current charging. The designed method also maintains the internal states within limits that avoid abusive operating conditions.
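
    A plausible form of the performance index described above (an assumption for illustration; the paper's exact terms and weights may differ) penalizes charging effort together with deviations of current and bulk temperature from their rated maxima:

    ```latex
    J = \int_0^{t_f} \left[ w_1\, u^2(t)
        + w_2 \big( I(t) - I_{\max} \big)^2
        + w_3 \big( T_b(t) - T_{\max} \big)^2 \right] dt
    ```

    minimized subject to the electrochemical state dynamics and input bounds; Pontryagin's principle then characterizes the optimal charging current through the stationarity of the Hamiltonian $H = L + \lambda^{\top} f$.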

  8. Fast methodology of analysing major steviol glycosides from Stevia rebaudiana leaves.

    PubMed

    Lorenzo, Cándida; Serrano-Díaz, Jéssica; Plaza, Miguel; Quintanilla, Carmen; Alonso, Gonzalo L

    2014-08-15

    The aim of this work is to propose an HPLC method for analysing major steviol glycosides as well as to optimise the extraction and clarification conditions for obtaining these compounds. Toward this aim, standards of stevioside and rebaudioside A with purities ⩾99.0%, commercial samples from different companies and Stevia rebaudiana Bertoni leaves from Paraguay supplied by Insobol, S.L., were used. The analytical method proposed is adequate in terms of selectivity, sensitivity and accuracy. Optimum extraction conditions and adequate clarification conditions have been set. Moreover, this methodology is safe and eco-friendly, as we use only water for extraction and do not use solid-phase extraction, which requires solvents that are banned in the food industry to condition the cartridge and elute the steviol glycosides. In addition, this methodology consumes little time as leaves are not ground and the filtration is faster, and the peak resolution is better as we used an HPLC method with gradient elution. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Listening to Children: Exploring Intuitive Strategies and Interactive Methods in a Study of Children's Special Places

    ERIC Educational Resources Information Center

    Green, Carie

    2012-01-01

    Stemming from the UNCRC, childhood researchers have proposed a variety of methodological strategies for upholding children's rights and understanding their perspectives. This paper aims to advance the conversation on engaging children's perspectives by presenting data collection methods used in a qualitative study exploring children's special…

  10. An Impact-Based Filtering Approach for Literature Searches

    ERIC Educational Resources Information Center

    Vista, Alvin

    2013-01-01

    This paper aims to present an alternative and simple method to improve the filtering of search results so as to increase the efficiency of literature searches, particularly for individual researchers who have limited logistical resources. The method proposed here is scope restriction using an impact-based filter, made possible by the emergence of…

  11. Alternative evaluation of innovations’ effectiveness in mechanical engineering

    NASA Astrophysics Data System (ADS)

    Puryaev, A. S.

    2017-09-01

    The aim of the present work is the approbation of the developed technique for assessing the effectiveness of innovations. We demonstrate an alternative assessment of the effectiveness of innovations (innovation projects) in mechanical engineering on an illustrative example. As an alternative to the traditional method, a technique based on the value concept and the "Cash flow" method is proposed.

  12. Research Knowledge Transfer through Business-Driven Student Assignment

    ERIC Educational Resources Information Center

    Sas, Corina

    2009-01-01

    Purpose: The purpose of this paper is to present a knowledge transfer method that capitalizes on both research and teaching dimensions of academic work. It also aims to propose a framework for evaluating the impact of such a method on the involved stakeholders. Design/methodology/approach: The case study outlines and evaluates the six-stage…

  13. A Proposed Methodology for Contextualised Evaluation in Higher Education

    ERIC Educational Resources Information Center

    Nygaard, Claus; Belluigi, Dina Zoe

    2011-01-01

    This paper aims to inspire stakeholders working with quality of higher education (such as members of study boards, study programme directors, curriculum developers and teachers) to critically consider their evaluation methods in relation to a focus on student learning. We argue that many of the existing methods of evaluation in higher education…

  14. Exchanging large data object in multi-agent systems

    NASA Astrophysics Data System (ADS)

    Al-Yaseen, Wathiq Laftah; Othman, Zulaiha Ali; Nazri, Mohd Zakree Ahmad

    2016-08-01

    One of the Business Intelligence solutions currently in use is the Multi-Agent System (MAS). Communication is one of the most important elements in MAS, especially for exchanging large low-level data between physically distributed agents. The Agent Communication Language in JADE has been offered as a secure method for sending data, whereby the data is defined as an object. However, the object cannot be used to send data to another agent in a different location. Therefore, the aim of this paper is to propose a method for exchanging large low-level data as an object by creating a proxy agent, known as a Delivery Agent, which temporarily imitates the Receiver Agent. The results showed that the proposed method is able to send large-sized data. The experiments were conducted using 16 datasets ranging from 100,000 to 7 million instances. For the proposed method, the RAM and CPU of the Receiver Agent's machine had to be slightly increased, but the latency was not significantly different compared with the Java Socket method (non-agent and less secure). With such results, it was concluded that the proposed method can be used to securely send large data between agents.

  15. Application of concepts from cross-recurrence analysis in speech production: an overview and comparison with other nonlinear methods.

    PubMed

    Lancia, Leonardo; Fuchs, Susanne; Tiede, Mark

    2014-06-01

    The aim of this article was to introduce an important tool, cross-recurrence analysis, to speech production applications by showing how it can be adapted to evaluate the similarity of multivariate patterns of articulatory motion. The method differs from classical applications of cross-recurrence analysis because no phase space reconstruction is conducted, and a cleaning algorithm removes the artifacts from the recurrence plot. The main features of the proposed approach are robustness to nonstationarity and efficient separation of amplitude variability from temporal variability. The authors tested these claims by applying their method to synthetic stimuli whose variability had been carefully controlled. The proposed method was also demonstrated in a practical application: It was used to investigate the role of biomechanical constraints in articulatory reorganization as a consequence of speeded repetition of CVCV utterances containing a labial and a coronal consonant. Overall, the proposed approach provided more reliable results than other methods, particularly in the presence of high variability. The proposed method is a useful and appropriate tool for quantifying similarity and dissimilarity in patterns of speech articulator movement, especially in such research areas as speech errors and pathologies, where unpredictable divergent behavior is expected.
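
    The core computation is compact (a minimal sketch on synthetic two-dimensional trajectories; the paper's artifact-cleaning algorithm and similarity indices are not reproduced): because no phase-space reconstruction is performed, the cross-recurrence plot is simply a thresholded distance matrix between the two multivariate patterns.

    ```python
    import numpy as np

    def cross_recurrence(X, Y, eps):
        """Cross-recurrence plot of two multivariate trajectories:
        R[i, j] = 1 when pattern X at time i is within eps of pattern Y
        at time j (Euclidean distance, no phase-space embedding)."""
        D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
        return (D < eps).astype(int)

    t = np.linspace(0, 2 * np.pi, 200)
    X = np.column_stack([np.sin(t), np.cos(t)])              # reference pattern
    Y = np.column_stack([np.sin(1.1 * t), np.cos(1.1 * t)])  # time-warped copy
    R = cross_recurrence(X, Y, eps=0.15)
    print(R.shape, R.mean())   # diagonal structures encode timing differences
    ```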

  16. Novel maximum-margin training algorithms for supervised neural networks.

    PubMed

    Ludwig, Oswaldo; Nunes, Urbano

    2010-06-01

    This paper proposes three novel training methods, two of them based on the backpropagation approach and a third one based on information theory, for multilayer perceptron (MLP) binary classifiers. Both backpropagation methods are based on the maximal-margin (MM) principle. The first one, based on the gradient descent with adaptive learning rate algorithm (GDX) and named maximum-margin GDX (MMGDX), directly increases the margin of the MLP output-layer hyperplane. The proposed method jointly optimizes both MLP layers in a single process, backpropagating the gradient of an MM-based objective function through the output and hidden layers, in order to create a hidden-layer space that enables a higher margin for the output-layer hyperplane, avoiding the testing of many arbitrary kernels, as occurs in the case of support vector machine (SVM) training. The proposed MM-based objective function aims to stretch out the margin to its limit. An objective function based on the Lp-norm is also proposed in order to take into account the idea of support vectors, while avoiding the complexity involved in solving the constrained optimization problem usually found in SVM training. In fact, all the training methods proposed in this paper have time and space complexities of O(N), while usual SVM training methods have time complexity O(N^3) and space complexity O(N^2), where N is the training-data-set size. The second approach, named minimization of interclass interference (MICI), has an objective function inspired by Fisher discriminant analysis. This algorithm aims to create an MLP hidden output where the patterns have a desirable statistical distribution. In both training methods, the maximum area under the ROC curve (AUC) is applied as the stopping criterion. The third approach offers a robust training framework able to take the best of each proposed training method. The main idea is to compose a neural model using neurons extracted from three other neural networks, each one previously trained by MICI, MMGDX, and Levenberg-Marquardt (LM), respectively. The resulting neural network is named the assembled neural network (ASNN). Benchmark data sets of real-world problems have been used in experiments that enable a comparison with other state-of-the-art classifiers. The results provide evidence of the effectiveness of our methods regarding accuracy, AUC, and balanced error rate.
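
    To make the margin idea concrete, the following NumPy sketch backpropagates the gradient of a hinge-style margin objective jointly through both layers of a one-hidden-layer MLP; the toy data, architecture and learning rate are hypothetical, and the fragment illustrates the principle rather than reproducing MMGDX itself.

      import numpy as np

      rng = np.random.default_rng(0)
      # Toy data: two Gaussian blobs with labels in {-1, +1}.
      X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(+1, 1, (50, 2))])
      y = np.array([-1.0] * 50 + [+1.0] * 50)

      # One-hidden-layer MLP: f(x) = w2 . tanh(W1 x + b1) + b2
      W1 = rng.normal(0, 0.5, (8, 2)); b1 = np.zeros(8)
      w2 = rng.normal(0, 0.5, 8);      b2 = 0.0

      lr = 0.05
      for epoch in range(200):
          H = np.tanh(X @ W1.T + b1)        # hidden activations, (n, 8)
          f = H @ w2 + b2                   # output scores, (n,)
          viol = y * f < 1.0                # margin violators
          # Hinge-style objective: mean over violators of (1 - y f);
          # its gradient is pushed through BOTH layers in one process.
          g_f = np.where(viol, -y, 0.0) / len(y)
          g_H = np.outer(g_f, w2) * (1 - H ** 2)
          w2 -= lr * (H.T @ g_f); b2 -= lr * g_f.sum()
          W1 -= lr * (g_H.T @ X); b1 -= lr * g_H.sum(0)

      H = np.tanh(X @ W1.T + b1)
      print((np.sign(H @ w2 + b2) == y).mean())   # training accuracy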

  17. The Proposed Creation of University System in the Province of Ilocos Sur, Philippines

    ERIC Educational Resources Information Center

    Alviento, Severino G.

    2017-01-01

    This study aimed to determine the level of acceptability of the proposed amalgamation of the three Public Higher Education Institutions in Ilocos Sur. It employed the descriptive method of research. The respondents of the study are the 3rd and 4th year AB Political Science and BSE major in Social Science students and the 4th year BEED students of…

  18. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    PubMed

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.
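
    As a sketch of the fuzzy-AHP side, this Python fragment derives crisp criterion weights from a triangular-fuzzy pairwise comparison matrix using Buckley's geometric-mean method, one common fuzzy-AHP variant; the three-criterion matrix is hypothetical and the paper's exact formulation may differ.

      import numpy as np

      # Pairwise comparisons as triangular fuzzy numbers (l, m, u).
      M = np.array([
          [[1, 1, 1],        [2, 3, 4],      [4, 5, 6]],
          [[1/4, 1/3, 1/2],  [1, 1, 1],      [1, 2, 3]],
          [[1/6, 1/5, 1/4],  [1/3, 1/2, 1],  [1, 1, 1]],
      ])

      geo = np.prod(M, axis=1) ** (1.0 / M.shape[0])  # fuzzy geometric means
      total = geo.sum(axis=0)                         # fuzzy sum (l, m, u)
      w_fuzzy = geo / total[::-1]                     # divide (l,m,u) by (u,m,l)
      w = w_fuzzy.mean(axis=1)                        # centroid defuzzification
      w /= w.sum()
      print(w)                                        # crisp criterion weights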

  19. Dynamic video encryption algorithm for H.264/AVC based on a spatiotemporal chaos system.

    PubMed

    Xu, Hui; Tong, Xiao-Jun; Zhang, Miao; Wang, Zhu; Li, Ling-Hao

    2016-06-01

    Video encryption schemes mostly employ the selective encryption method to encrypt parts of important and sensitive video information, aiming to ensure real-time performance and encryption efficiency. The classic block cipher is not applicable to video encryption due to its high computational overhead. In this paper, we propose an encryption selection control module, controlled by a chaotic pseudorandom sequence, to encrypt video syntax elements dynamically. A novel spatiotemporal chaos system and binarization method are used to generate a key stream for encrypting the chosen syntax elements. The proposed scheme enhances the resistance against attacks through the dynamic encryption process and a high-security stream cipher. Experimental results show that the proposed method exhibits high security and high efficiency with little effect on the compression ratio and time cost.
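
    A minimal sketch of the keystream idea, using a plain logistic map as a stand-in for the paper's spatiotemporal chaos system: the chaotic orbit is binarized into a key stream that selectively XORs only the chosen syntax elements. The initial condition, control parameter and toy "syntax elements" are all hypothetical.

      import numpy as np

      def keystream(n_bytes, x0=0.37, r=3.99):
          # Iterate the logistic map and threshold the orbit at 0.5 to
          # binarize it; pack the bits into bytes.
          x, bits = x0, []
          for _ in range(8 * n_bytes):
              x = r * x * (1.0 - x)
              bits.append(x >= 0.5)
          return np.packbits(np.array(bits, dtype=np.uint8))

      def xor_elements(elements):
          # Selective encryption: XOR only the chosen syntax elements,
          # leaving the rest of the bitstream untouched. Applying the
          # same operation again decrypts.
          data = np.frombuffer(bytes(elements), dtype=np.uint8)
          return np.bitwise_xor(data, keystream(len(data)))

      cipher = xor_elements([12, 250, 3, 77])   # toy "syntax elements"
      print(list(cipher))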

  20. Acoustic pressure measurement of pulsed ultrasound using acousto-optic diffraction

    NASA Astrophysics Data System (ADS)

    Jia, Lecheng; Chen, Shili; Xue, Bin; Wu, Hanzhong; Zhang, Kai; Yang, Xiaoxia; Zeng, Zhoumo

    2018-01-01

    Compared with continuous ultrasound waves, pulsed ultrasound has been widely used in ultrasound imaging. The aim of this work is to show the applicability of acousto-optic diffraction to pulsed ultrasound transducers. In this paper, the acoustic pressure of two ultrasound transducers is measured based on Raman-Nath diffraction. The frequencies of the transducers are 5 MHz and 10 MHz. The pulse-echo method and simulation data are used to evaluate the results. The results show that the proposed method is capable of measuring absolute sound pressure. A sectional view of the acoustic pressure field is obtained using a displacement platform as an auxiliary scanning stage. Compared with traditional sound pressure measurement methods, the proposed method is non-invasive, with high sensitivity and spatial resolution.
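
    For reference, the standard Raman-Nath relations underlying this kind of measurement can be summarized as follows (these are textbook acousto-optics formulas with their usual symbols, not equations quoted from the paper):

      \frac{I_n}{I_0} = J_n^2(v), \qquad
      v = \frac{2\pi}{\lambda}\,\Delta n\, L, \qquad
      \Delta n = \sqrt{\tfrac{1}{2} M_2 I_a}, \qquad
      I_a = \frac{p^2}{2\rho c}

    Here J_n is the Bessel function of order n, λ the optical wavelength, L the acousto-optic interaction length, M_2 the figure of merit of the medium, ρ its density and c the sound speed; measuring a diffraction-order efficiency therefore yields the pressure amplitude p.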

  1. Development of Fabrication Methods of Filler/Polymer Nanocomposites: With Focus on Simple Melt-Compounding-Based Approach without Surface Modification of Nanofillers

    PubMed Central

    Tanahashi, Mitsuru

    2010-01-01

    Many attempts have been made to fabricate various types of inorganic nanoparticle-filled polymers (filler/polymer nanocomposites) by a mechanical or chemical approach. However, these approaches require modification of the nanofiller surfaces and/or complicated polymerization reactions, making them unsuitable for industrial-scale production of the nanocomposites. The author and coworkers have proposed a simple melt-compounding method for the fabrication of silica/polymer nanocomposites, wherein silica nanoparticles without surface modification were dispersed through the breakdown of loose agglomerates of colloidal nano-silica spheres in a kneaded polymer melt. This review aims to discuss experimental techniques of the proposed method and its advantages over other developed methods.

  2. Feature Selection for Object-Based Classification of High-Resolution Remote Sensing Images Based on the Combination of a Genetic Algorithm and Tabu Search

    PubMed Central

    Shi, Lei; Wan, Youchuan; Gao, Xianjun

    2018-01-01

    In object-based image analysis of high-resolution images, the number of features can reach hundreds, so it is necessary to perform feature reduction prior to classification. In this paper, a feature selection method based on the combination of a genetic algorithm (GA) and tabu search (TS) is presented. The proposed GATS method aims to reduce the premature convergence of the GA by the use of TS. A prematurity index is first defined to judge the convergence situation during the search. When premature convergence does take place, an improved mutation operator is executed, in which TS is performed on individuals with higher fitness values. As for the other individuals with lower fitness values, mutation with a higher probability is carried out. Experiments using the proposed GATS feature selection method and three other methods, a standard GA, the multistart TS method, and ReliefF, were conducted on WorldView-2 and QuickBird images. The experimental results showed that the proposed method outperforms the other methods in terms of the final classification accuracy. PMID:29581721
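
    The control flow described above can be illustrated with a toy Python sketch: a GA whose mutation switches to a tabu-search-style move for fitter individuals once a prematurity index signals convergence. The one-max fitness, the duplicate-share prematurity index and all thresholds are hypothetical stand-ins for the paper's definitions.

      import random

      def fitness(bits):                 # toy one-max objective
          return sum(bits)

      def prematurity(pop):              # share of duplicated individuals
          return 1.0 - len({tuple(p) for p in pop}) / len(pop)

      def tabu_mutate(ind, tabu, tries=20):
          # Flip one bit while avoiding recently visited solutions.
          for _ in range(tries):
              j = random.randrange(len(ind))
              cand = ind[:]; cand[j] ^= 1
              if tuple(cand) not in tabu:
                  tabu.append(tuple(cand))
                  if len(tabu) > 50: tabu.pop(0)
                  return cand
          return ind

      random.seed(0)
      pop = [[random.randint(0, 1) for _ in range(30)] for _ in range(40)]
      tabu = []
      for gen in range(100):
          pop.sort(key=fitness, reverse=True)
          if prematurity(pop) > 0.3:     # premature convergence detected
              half = len(pop) // 2       # TS move on the fitter half,
              pop[:half] = [tabu_mutate(p, tabu) for p in pop[:half]]
              pop[half:] = [[b ^ (random.random() < 0.2) for b in p]
                            for p in pop[half:]]   # heavier mutation below
          parents = pop[:20]             # one-point crossover on the best
          children = []
          for a, b in zip(parents[::2], parents[1::2]):
              cut = random.randrange(1, 29)
              children += [a[:cut] + b[cut:], b[:cut] + a[cut:]]
          pop = parents + children
      print(max(map(fitness, pop)))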

  3. Extended depth of field integral imaging using multi-focus fusion

    NASA Astrophysics Data System (ADS)

    Piao, Yongri; Zhang, Miao; Wang, Xiaohui; Li, Peihua

    2018-03-01

    In this paper, we propose a new method for extending the depth of field in integral imaging by applying image fusion to multi-focus elemental images. In the proposed method, a camera is translated on a 2D grid to capture multi-focus elemental images by sweeping the focal plane across the scene. Simply applying an image fusion method to elemental images holding rich parallax information does not work effectively, because registration accuracy is a prerequisite for image fusion. To solve this problem, an elemental image generalization method is proposed. The aim of this generalization process is to geometrically align the objects in all elemental images so that the correct regions of the multi-focus elemental images can be extracted. The all-in-focus elemental images are then generated by fusing the generalized elemental images using a block-based fusion method. The experimental results demonstrate that the depth of field of the synthetic aperture integral imaging system is extended by combining the generalization method with image fusion on multi-focus elemental images.
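
    A minimal sketch of the block-based fusion step: for each block, keep the tile from whichever already-aligned multi-focus image maximizes a sharpness measure. The variance-of-Laplacian measure and the block size are assumptions, and the generalization/alignment stage is presumed to have been done.

      import numpy as np
      from scipy.ndimage import laplace

      def block_fusion(images, block=16):
          # images: equally sized, geometrically aligned 2-D arrays.
          images = [np.asarray(im, dtype=float) for im in images]
          h, w = images[0].shape
          out = np.zeros((h, w))
          for i in range(0, h, block):
              for j in range(0, w, block):
                  tiles = [im[i:i + block, j:j + block] for im in images]
                  sharp = [laplace(t).var() for t in tiles]
                  out[i:i + block, j:j + block] = tiles[int(np.argmax(sharp))]
          return out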

  4. Experimental Quasi-Microwave Whole-Body Averaged SAR Estimation Method Using Cylindrical-External Field Scanning

    NASA Astrophysics Data System (ADS)

    Kawamura, Yoshifumi; Hikage, Takashi; Nojima, Toshio

    The aim of this study is to develop a new whole-body averaged specific absorption rate (SAR) estimation method based on the external-cylindrical field scanning technique. This technique is adopted with the goal of simplifying the dosimetry estimation of human phantoms that have different postures or sizes. An experimental scaled model system is constructed. In order to examine the validity of the proposed method for realistic human models, we discuss the pros and cons of measurements and numerical analyses based on the finite-difference time-domain (FDTD) method. We consider anatomical European human phantoms and plane-wave exposure in the 2 GHz mobile phone frequency band. The measured whole-body averaged SAR results obtained by the proposed method are compared with the results of the FDTD analyses.

  5. Modeling crime events by d-separation method

    NASA Astrophysics Data System (ADS)

    Aarthee, R.; Ezhilmaran, D.

    2017-11-01

    Problematic legal cases have recently called for a scientifically founded method of dealing with the qualitative and quantitative roles of evidence in a case [1]. To deal with the quantitative role, we propose a d-separation method for modeling crime events. d-separation is a graphical criterion for identifying independence in a directed acyclic graph. By developing a d-separation method, we aim to lay the foundations for the development of a software support tool that can handle evidential reasoning in legal cases. Such a tool is meant to be used by a judge or juror, in alliance with various experts who can provide information about the details. This will hopefully improve the communication between judges or jurors and experts. The proposed method can be used to uncover more valid independencies than other graphical criteria.
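
    For readers unfamiliar with the criterion, the sketch below checks d-separation on a toy evidence DAG with networkx (version 2.8 or later, where the helper is called d_separated; newer releases rename it is_d_separator). The graph is invented for illustration.

      import networkx as nx

      # Toy crime DAG: motive -> crime <- opportunity, crime -> dna_evidence
      G = nx.DiGraph([("motive", "crime"), ("opportunity", "crime"),
                      ("crime", "dna_evidence")])

      # Collider at "crime": motive and opportunity are independent a priori.
      print(nx.d_separated(G, {"motive"}, {"opportunity"}, set()))             # True
      # Conditioning on the collider's descendant opens the path.
      print(nx.d_separated(G, {"motive"}, {"opportunity"}, {"dna_evidence"}))  # False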

  6. A Study on Real-Time Scheduling Methods in Holonic Manufacturing Systems

    NASA Astrophysics Data System (ADS)

    Iwamura, Koji; Taimizu, Yoshitaka; Sugimura, Nobuhiro

    Recently, new architectures of manufacturing systems have been proposed to realize flexible control structures that can cope with dynamic changes in the volume and variety of products, as well as with unforeseen disruptions such as failures of manufacturing resources and interruptions by high-priority jobs. These are the so-called autonomous distributed manufacturing system, biological manufacturing system and holonic manufacturing system. Rule-based scheduling methods were proposed and applied to the real-time production scheduling problems of the HMS (Holonic Manufacturing System) in a previous report. However, problems remain from the viewpoint of optimizing the whole production schedule. In the present paper, new procedures are proposed for selecting production schedules, aimed at generating effective production schedules in real time. The proposed methods enable the individual holons to select suitable machining operations to be carried out in the next time period. A coordination process among the holons, carried out on the basis of the effectiveness values of the individual holons, is also proposed.

  7. Pathological brain detection based on wavelet entropy and Hu moment invariants.

    PubMed

    Zhang, Yudong; Wang, Shuihua; Sun, Ping; Phillips, Preetha

    2015-01-01

    With the aim of developing an accurate pathological brain detection system, we proposed a novel automatic computer-aided diagnosis (CAD) to detect pathological brains from normal brains obtained by magnetic resonance imaging (MRI) scanning. The problem still remained a challenge for technicians and clinicians, since MR imaging generated an exceptionally large information dataset. A new two-step approach was proposed in this study. We used wavelet entropy (WE) and Hu moment invariants (HMI) for feature extraction, and the generalized eigenvalue proximal support vector machine (GEPSVM) for classification. To further enhance classification accuracy, the popular radial basis function (RBF) kernel was employed. The 10 runs of k-fold stratified cross validation result showed that the proposed "WE + HMI + GEPSVM + RBF" method was superior to existing methods w.r.t. classification accuracy. It obtained the average classification accuracies of 100%, 100%, and 99.45% over Dataset-66, Dataset-160, and Dataset-255, respectively. The proposed method is effective and can be applied to realistic use.
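
    A compact Python sketch of the feature-extraction stage: wavelet entropy computed as the Shannon entropy of the normalized detail sub-band energies (one common definition; the paper's exact formulation may differ), concatenated with OpenCV's seven Hu moment invariants. The wavelet family and decomposition level are assumptions.

      import cv2
      import numpy as np
      import pywt

      def features(img):
          # Wavelet entropy over the detail sub-bands of a 3-level 2-D DWT.
          coeffs = pywt.wavedec2(np.asarray(img, dtype=float), "db4", level=3)
          energies = np.array([np.sum(np.square(c))
                               for band in coeffs[1:] for c in band])
          p = energies / energies.sum()
          we = -np.sum(p * np.log(p + 1e-12))
          # Hu moment invariants of the image.
          hu = cv2.HuMoments(cv2.moments(img.astype(np.float32))).ravel()
          return np.hstack([we, hu])

      img = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
      print(features(img).shape)   # (8,) = 1 wavelet entropy + 7 Hu moments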

  8. A New Group-Formation Method for Student Projects

    ERIC Educational Resources Information Center

    Borges, Jose; Dias, Teresa Galvao; Cunha, Joao Falcao E.

    2009-01-01

    In BSc/MSc engineering programmes at Faculty of Engineering of the University of Porto (FEUP), the need to provide students with teamwork experiences close to a real world environment was identified as an important issue. A new group-formation method that aims to provide an enriching teamwork experience is proposed. Students are asked to answer a…

  9. Data-Driven Hint Generation from Peer Debugging Solutions

    ERIC Educational Resources Information Center

    Liu, Zhongxiu

    2015-01-01

    Data-driven methods have been a successful approach to generating hints for programming problems. However, the majority of previous studies are focused on procedural hints that aim at moving students to the next closest state to the solution. In this paper, I propose a data-driven method to generate remedy hints for BOTS, a game that teaches…

  10. Case Method in the Teaching of Food Safety

    ERIC Educational Resources Information Center

    Gallego, Alfredo; Fortunato, Maria S.; Rossi, Susana L.; Korol, Sonia E.; Moretton, Juan A.

    2013-01-01

    One of the fundamental aims of education is the integration of theory and practice. The case method is a teaching strategy in which students must apply their knowledge to solve real-life situations. They have to analyze the case described and propose the best possible solution. Although the case may be written, the use of new information and…

  11. Dynamic Scheduling for Web Monitoring Crawler

    DTIC Science & Technology

    2009-02-27

    researches on static scheduling methods, but they are not included in this project, because this project mainly focuses on the event-driven…pages from public search engines. This research aims to propose various query generation methods using the MCRDR knowledge base and evaluates them to…

  12. Locating Structural Centers: A Density-Based Clustering Method for Community Detection

    PubMed Central

    Liu, Gongshen; Li, Jianhua; Nees, Jan P.

    2017-01-01

    Uncovering underlying community structures in complex networks has received considerable attention because of its importance in understanding the structural attributes and group characteristics of networks. The algorithmic identification of such structures is a significant challenge. Local expanding methods have proven to be efficient and effective in community detection, but most are sensitive to initial seeds and built-in parameters. In this paper, we present a local expansion method based on density-based clustering, which aims to uncover intrinsic network communities by locating the structural centers of communities using a proposed structural centrality. The structural centrality takes into account the local density of nodes and the relative distance between nodes. The proposed algorithm expands a community from its structural center to its border with a single local search procedure. The local expanding procedure follows a heuristic strategy, allowing it to find complete community structures. Moreover, it can identify different node roles (cores and outliers) in communities by defining a border region. The experiments involve both real-world and artificial networks and provide a comparative view for evaluating the proposed method. The results show that the proposed method runs more efficiently than current state-of-the-art methods while achieving comparable clustering performance. PMID:28046030
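
    The structural centrality combining local density and relative distance closely parallels density-peaks clustering, so a sketch along those lines may help; the Gaussian density kernel, the cutoff and the gamma = rho * delta ranking below are assumptions patterned on the classic Rodriguez-Laio formulation rather than the paper's exact definitions.

      import numpy as np

      def structural_centers(dist, k_centers=2, cutoff=1.0):
          # dist: (n, n) matrix of distances between nodes (for a network,
          # e.g. shortest-path distances). rho_i is the local density and
          # delta_i the distance to the nearest point of higher density;
          # centers maximize gamma_i = rho_i * delta_i.
          rho = np.sum(np.exp(-(dist / cutoff) ** 2), axis=1)
          delta = np.empty(len(rho))
          for i in range(len(rho)):
              higher = dist[i, rho > rho[i]]
              delta[i] = higher.min() if higher.size else dist[i].max()
          return np.argsort(rho * delta)[::-1][:k_centers]

      rng = np.random.default_rng(0)
      pts = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
      D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
      print(structural_centers(D))   # indices of the two density peaks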

  13. Analysis of optimisation method for a two-stroke piston ring using the Finite Element Method and the Simulated Annealing Method

    NASA Astrophysics Data System (ADS)

    Kaliszewski, M.; Mazuro, P.

    2016-09-01

    A Simulated Annealing optimisation method for the sealing piston ring geometry is tested. The aim of the optimisation is to develop a ring geometry that exerts the demanded pressure on the cylinder simply by being bent to fit it. A method of FEM analysis of an arbitrary piston ring geometry is implemented in ANSYS software. The demanded pressure function (based on formulae presented by A. Iskra) as well as the objective function are introduced. A geometry definition constructed from polynomials in a radial coordinate system is presented and discussed. A possible application of the Simulated Annealing Method to the piston ring optimisation task is proposed and visualised. Difficulties leading to a possible lack of convergence of the optimisation are presented. An example of an unsuccessful optimisation performed in APDL is discussed. A possible line of further improvement of the optimisation is proposed.
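
    A generic simulated-annealing loop of the kind used here is short enough to sketch; the Metropolis acceptance rule is standard, while the toy "ring pressure" objective, cooling schedule and neighborhood move are purely illustrative stand-ins for the FEM-based objective described above.

      import math
      import random

      def simulated_annealing(energy, x0, neighbor, t0=1.0, cool=0.995, steps=5000):
          # Accept worse candidates with probability exp(-dE / T)
          # while the temperature T decays geometrically.
          x, e, t = x0, energy(x0), t0
          best, best_e = x, e
          for _ in range(steps):
              cand = neighbor(x)
              ce = energy(cand)
              if ce < e or random.random() < math.exp(-(ce - e) / t):
                  x, e = cand, ce
                  if e < best_e:
                      best, best_e = x, e
              t *= cool
          return best, best_e

      # Toy usage: fit polynomial coefficients so a "ring pressure" curve
      # matches a demanded constant pressure of 1 (stand-in objective).
      random.seed(1)
      energy = lambda c: sum((c[0] + c[1] * th / 10 + c[2] * (th / 10) ** 2 - 1.0) ** 2
                             for th in range(10))
      neighbor = lambda c: [ci + random.gauss(0, 0.05) for ci in c]
      print(simulated_annealing(energy, [0.0, 0.0, 0.0], neighbor))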

  14. Method Engineering: A Service-Oriented Approach

    NASA Astrophysics Data System (ADS)

    Cauvet, Corine

    In the past, a large variety of methods have been published, ranging from very generic frameworks to methods for specific information systems. Method Engineering has emerged as a research discipline for designing, constructing and adapting methods for Information Systems development. Several approaches have been proposed as paradigms in method engineering: the meta-modeling approach provides means for building methods by instantiation, while the component-based approach aims at supporting the development of methods by using modularization constructs such as method fragments, method chunks and method components. This chapter presents an approach (SO2M) for method engineering based on the service paradigm. We consider services as autonomous computational entities that are self-describing, self-configuring and self-adapting. They can be described, published, discovered and dynamically composed to process a consumer's demand (a developer's requirement). The method service concept is proposed to capture a development process fragment for achieving a goal. Goal orientation in service specification and the principle of dynamic service composition support method construction and method adaptation to different development contexts.

  15. Intelligent design of permanent magnet synchronous motor based on CBR

    NASA Astrophysics Data System (ADS)

    Li, Cong; Fan, Beibei

    2018-05-01

    Aiming at problems in the design process of permanent magnet synchronous motors (PMSM), such as the complexity of the design process, the over-reliance on designers' experience, and the lack of accumulation and inheritance of design knowledge, a design method for PMSM based on case-based reasoning (CBR) is proposed. In this paper, a CBR case-similarity calculation is proposed for retrieving a suitable initial scheme. This method helps designers make a conceptual PMSM solution quickly by referencing previous design cases. The case retention process gives the system a self-enriching capability, which improves the design ability of the system with continued use.
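
    The retrieval step of such a CBR system can be sketched as a weighted nearest-neighbour similarity over normalized case attributes; the attribute set, the weights and the tiny case base below are hypothetical.

      import numpy as np

      def retrieve(case_base, query, weights):
          # Per-attribute similarity = 1 - normalized absolute difference;
          # overall score = weighted sum across attributes.
          cb = np.asarray(case_base, dtype=float)
          q = np.asarray(query, dtype=float)
          span = cb.max(axis=0) - cb.min(axis=0) + 1e-12
          sim = 1.0 - np.abs(cb - q) / span
          scores = sim @ (np.asarray(weights) / np.sum(weights))
          return int(np.argmax(scores)), scores

      # Hypothetical case base: [rated power kW, speed rpm, voltage V]
      cases = [[5.5, 1500, 380], [7.5, 3000, 380], [11.0, 1500, 660]]
      best, s = retrieve(cases, query=[7.0, 2800, 380], weights=[0.5, 0.3, 0.2])
      print(best, s)   # index of the most similar previous design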

  16. Plenoptic image watermarking to preserve copyright

    NASA Astrophysics Data System (ADS)

    Ansari, A.; Dorado, A.; Saavedra, G.; Martinez Corral, M.

    2017-05-01

    A common camera loses a huge amount of the information obtainable from a scene: it does not record the values of the individual rays passing through a point, but merely keeps the summation of the intensities of all the rays passing through that point. Plenoptic images can be exploited to provide a 3D representation of the scene, and watermarking such images can help protect their ownership. In this paper we propose a method for watermarking plenoptic images to achieve this aim. The performance of the proposed method is validated by experimental results, and a compromise is struck between imperceptibility and robustness.

  17. Evaluating a normalized conceptual representation produced from natural language patient discharge summaries.

    PubMed Central

    Zweigenbaum, P.; Bouaud, J.; Bachimont, B.; Charlet, J.; Boisvieux, J. F.

    1997-01-01

    The Menelas project aimed to produce a normalized conceptual representation from natural-language patient discharge summaries. Because of the complex and detailed nature of conceptual representations, evaluating the quality of the output of such a system is difficult. We present the method designed to measure the quality of Menelas output, and its application to the state of the French Menelas prototype as of the end of the project. We examine this method in the framework recently proposed by Friedman and Hripcsak. We also propose two conditions which make it possible to reduce the evaluation preparation workload. PMID:9357694

  18. 3D Measurement of Anatomical Cross-sections of Foot while Walking

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Mochimaru, Masaaki; Kanade, Takeo

    Recently, techniques for measuring and modeling the human body have been attracting attention, because human models are useful for ergonomic design in manufacturing. We aim to measure the accurate shape of the human foot, which will be useful for the design of shoes. For this purpose, shape measurement of the foot in motion is clearly important, because the foot is deformed inside the shoe while walking or running. In this paper, we propose a method to measure anatomical cross-sections of the foot while walking. The dynamic shape of anatomical cross-sections had not previously been measured, although such cross-sections are basic and widely used in the field of biomechanics. Our proposed method is based on a multi-view stereo method. The target cross-sections are painted in individual colors (red, green, yellow and blue), and the proposed method utilizes the characteristics of the target shape in the captured camera images. Several nonlinear conditions are introduced in the process of finding consistent correspondences across all images. Our desired accuracy is less than 1 mm of error, which is similar to that of existing 3D scanners for static foot measurement. In our experiments, the proposed method achieved the desired accuracy.

  19. A Unified Framework for Brain Segmentation in MR Images

    PubMed Central

    Yazdani, S.; Yusof, R.; Karimian, A.; Riazi, A. H.; Bennamoun, M.

    2015-01-01

    Brain MRI segmentation is an important issue for discovering brain structure and diagnosing subtle anatomical changes in different brain diseases. However, due to several imaging artifacts, brain tissue segmentation remains a challenging task. The aim of this paper is to improve the automatic segmentation of the brain into gray matter, white matter, and cerebrospinal fluid in magnetic resonance images (MRI). We propose an automatic hybrid image segmentation method that integrates a modified statistical expectation-maximization (EM) method and spatial information combined with a support vector machine (SVM). The combined method achieves more accurate results than its individual techniques, as demonstrated through experiments carried out on both synthetic and real MRI. The results of the proposed technique are evaluated against manual segmentation results and other methods, based on real T1-weighted scans from the Internet Brain Segmentation Repository (IBSR) and simulated images from BrainWeb. The Kappa index is calculated to assess the performance of the proposed framework relative to the ground truth and expert segmentations. The results demonstrate that the proposed combined method gives satisfactory results on both simulated MRI and real brain datasets. PMID:26089978
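
    A minimal sketch of the EM component alone, fitting a three-class Gaussian mixture to masked voxel intensities with scikit-learn (the modified EM, the spatial information and the SVM refinement of the full method are omitted); the toy volume is synthetic.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      def segment_tissues(volume, mask):
          # EM segmentation of brain voxels into 3 intensity classes
          # (e.g. CSF, gray matter, white matter); 0 marks background.
          intens = volume[mask].reshape(-1, 1)
          gmm = GaussianMixture(n_components=3, random_state=0).fit(intens)
          labels = np.zeros(volume.shape, dtype=int)
          labels[mask] = gmm.predict(intens) + 1
          return labels

      # Toy volume: three intensity blobs standing in for the tissues.
      rng = np.random.default_rng(0)
      vol = rng.choice([30.0, 80.0, 140.0], size=(16, 16, 16))
      vol += rng.normal(0, 5, vol.shape)
      print(np.unique(segment_tissues(vol, vol > 0)))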

  20. Tracing Technological Development Trajectories: A Genetic Knowledge Persistence-Based Main Path Approach.

    PubMed

    Park, Hyunseok; Magee, Christopher L

    2017-01-01

    The aim of this paper is to propose a new method to identify main paths in a technological domain using patent citations. Previous approaches for using main path analysis have greatly improved our understanding of actual technological trajectories but nonetheless have some limitations. They have a high potential to miss some dominant patents from the identified main paths; moreover, the high network complexity of their main paths makes qualitative tracing of trajectories problematic. The proposed method searches backward and forward paths from high-persistence patents, which are identified based on a standard genetic knowledge persistence algorithm. We tested the new method by applying it to the desalination and the solar photovoltaic domains and compared the results to output from the same domains using a prior method. The empirical results show that the proposed method can dramatically reduce network complexity without missing any dominantly important patents. The main paths identified by our approach for the two test cases are almost 10x less complex than the main paths identified by the existing approach. The proposed approach identifies all dominantly important patents on the main paths, whereas the main paths identified by the existing approach miss about 20% of the dominantly important patents.

  1. The Application of Auto-Disturbance Rejection Control Optimized by Least Squares Support Vector Machines Method and Time-Frequency Representation in Voltage Source Converter-High Voltage Direct Current System.

    PubMed

    Liu, Ying-Pei; Liang, Hai-Ping; Gao, Zhong-Ke

    2015-01-01

    In order to improve the performance of the voltage source converter-high voltage direct current (VSC-HVDC) system, we propose an improved auto-disturbance rejection control (ADRC) method based on least squares support vector machines (LSSVM) on the rectifier side. Firstly, we deduce the high-frequency transient mathematical model of the VSC-HVDC system. Then we investigate the ADRC and LSSVM principles. We omit the tracking differentiator in the ADRC controller, aiming to improve the system's dynamic response speed. On this basis, we derive the mathematical model of the ADRC controller optimized by LSSVM for the direct-current voltage loop. Finally, we carry out simulations to verify the feasibility and effectiveness of our proposed control method. In addition, we employ time-frequency representation methods, i.e., the Wigner-Ville distribution (WVD) and the adaptive optimal kernel (AOK) time-frequency representation, to demonstrate that our proposed method performs better than the traditional method from the perspective of energy distribution in the time-frequency plane.

  2. Guided filter and principal component analysis hybrid method for hyperspectral pansharpening

    NASA Astrophysics Data System (ADS)

    Qu, Jiahui; Li, Yunsong; Dong, Wenqian

    2018-01-01

    Hyperspectral (HS) pansharpening aims to generate a fused HS image with high spectral and spatial resolution through integrating an HS image with a panchromatic (PAN) image. A guided filter (GF) and principal component analysis (PCA) hybrid HS pansharpening method is proposed. First, the HS image is interpolated and the PCA transformation is performed on the interpolated HS image. The first principal component (PC1) channel concentrates on the spatial information of the HS image. Different from the traditional PCA method, the proposed method sharpens the PAN image and utilizes the GF to obtain the spatial information difference between the HS image and the enhanced PAN image. Then, in order to reduce spectral and spatial distortion, an appropriate tradeoff parameter is defined and the spatial information difference is injected into the PC1 channel through multiplying by this tradeoff parameter. Once the new PC1 channel is obtained, the fused image is finally generated by the inverse PCA transformation. Experiments performed on both synthetic and real datasets show that the proposed method outperforms other several state-of-the-art HS pansharpening methods in both subjective and objective evaluations.
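
    One plausible reading of the pipeline, sketched in Python: PCA across the (upsampled) HS bands, a guided-filter low pass of the PAN image to isolate its spatial detail, injection of that detail into PC1 scaled by a tradeoff parameter, and inverse PCA. The guided filter is built from box means (He et al.); beta and the filter settings are assumptions.

      import numpy as np
      import cv2

      def guided_filter(I, p, r=8, eps=1e-4):
          # Classic guided filter assembled from box-filter means.
          I = I.astype(np.float32); p = p.astype(np.float32)
          mean = lambda x: cv2.blur(x, (r, r))
          mI, mp = mean(I), mean(p)
          a = (mean(I * p) - mI * mp) / (mean(I * I) - mI * mI + eps)
          b = mp - a * mI
          return mean(a) * I + mean(b)

      def gf_pca_pansharpen(hs, pan, beta=0.8):
          # hs: (H, W, B) HS cube upsampled to the PAN grid; pan: (H, W).
          H, W, B = hs.shape
          X = hs.reshape(-1, B).astype(np.float32)
          mu = X.mean(0)
          _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
          pcs = (X - mu) @ Vt.T               # PC1 carries the spatial info
          detail = pan.astype(np.float32) - guided_filter(pan, pan)
          pcs[:, 0] += beta * detail.ravel()  # inject detail into PC1 (note:
          # the sign of PC1 may need fixing in practice)
          return (pcs @ Vt + mu).reshape(H, W, B)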

  3. On the use of the Fourier modal method for calculation of localized eigenmodes of integrated optical resonators

    NASA Astrophysics Data System (ADS)

    Bykov, D. A.; Doskolovich, L. L.

    2015-12-01

    We propose a generalization of the Fourier modal method aimed at calculating localized eigenmodes of integrated optical resonators. The method is based on constructing the analytic continuation of the structure's scattering matrix and calculating its poles. The method allows one to calculate the complex frequency of a localized mode and the corresponding field distribution. We use the proposed method to calculate the eigenmodes of a rectangular dielectric block located on a metal surface. We show that excitation of these modes by a surface plasmon-polariton (SPP) results in resonant features in the SPP transmission spectrum. The proposed method can be used to design and investigate the optical properties of integrated and plasmonic optical devices.

  4. An Improved Image Matching Method Based on SURF Algorithm

    NASA Astrophysics Data System (ADS)

    Chen, S. J.; Zheng, S. Z.; Xu, Z. G.; Guo, C. C.; Ma, X. L.

    2018-04-01

    Many state-of-the-art image matching methods, based on feature matching, have been widely studied in the remote sensing field. These feature matching methods achieve high operating efficiency but have the disadvantage of low accuracy and robustness. This paper proposes an improved image matching method based on the SURF algorithm. The proposed method introduces a color invariant transformation, information entropy theory and a series of constraint conditions to increase feature point detection and matching accuracy. First, the color invariant transformation model is introduced for the two matching images, aiming to obtain more color information during the matching process, and information entropy theory is used to extract the most informative content of the two matching images. Then the SURF algorithm is applied to detect and describe points from the images. Finally, constraint conditions, including Delaunay triangulation construction, a similarity function and a projective invariant, are employed to eliminate mismatches and thereby improve matching precision. The proposed method has been validated on remote sensing images, and the results demonstrate its high precision and robustness.

  5. Health condition identification of multi-stage planetary gearboxes using a mRVM-based method

    NASA Astrophysics Data System (ADS)

    Lei, Yaguo; Liu, Zongyao; Wu, Xionghui; Li, Naipeng; Chen, Wu; Lin, Jing

    2015-08-01

    Multi-stage planetary gearboxes are widely applied in aerospace, automotive and heavy industries. Their key components, such as gears and bearings, can easily suffer damage due to tough working environments. Health condition identification of planetary gearboxes aims to prevent accidents and save costs. This paper proposes a method based on a multiclass relevance vector machine (mRVM) to identify the health condition of multi-stage planetary gearboxes. In this method, an mRVM algorithm is adopted as a classifier, and two features, i.e., accumulative amplitudes of carrier orders (AACO) and energy ratio based on difference spectra (ERDS), are used as the input of the classifier to classify different health conditions of multi-stage planetary gearboxes. To test the proposed method, seven health conditions of a two-stage planetary gearbox are considered, and vibration data is acquired from the planetary gearbox under different motor speeds and loading conditions. The results of three tests based on different data show that the proposed method obtains improved identification performance and robustness compared with the existing method.

  6. Predicting missing links via correlation between nodes

    NASA Astrophysics Data System (ADS)

    Liao, Hao; Zeng, An; Zhang, Yi-Cheng

    2015-10-01

    As a fundamental problem in many different fields, link prediction aims to estimate the likelihood of an existing link between two nodes based on observed information. Since this problem is related to many applications, ranging from uncovering missing data to predicting the evolution of networks, link prediction has been intensively investigated recently, and many methods have been proposed. The essential challenge of link prediction is to estimate the similarity between nodes. Most existing methods are based on the common neighbor index and its variants. In this paper, we propose to calculate the similarity between nodes by the Pearson correlation coefficient. This method is found to be very effective when applied to similarity calculation based on high-order paths. We finally fuse the correlation-based method with the resource allocation method, and find that the combined method can substantially outperform existing methods, especially in sparse networks.
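
    A minimal sketch of the node-correlation idea: score every non-adjacent pair by the Pearson correlation of their adjacency rows (first-order neighborhoods here; the paper applies the coefficient to higher-order path information). The toy graph is invented.

      import numpy as np

      def pearson_scores(A):
          # Pearson correlation between the neighborhood vectors of
          # each node pair; existing edges and self-pairs are masked out.
          C = np.corrcoef(A)
          np.fill_diagonal(C, -np.inf)
          C[A > 0] = -np.inf
          return C

      # Toy graph 0-1, 0-2, 1-2, 2-3; find the most likely missing link.
      A = np.zeros((4, 4))
      for i, j in [(0, 1), (0, 2), (1, 2), (2, 3)]:
          A[i, j] = A[j, i] = 1
      print(np.unravel_index(np.argmax(pearson_scores(A)), A.shape))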

  7. A Self-Alignment Algorithm for SINS Based on Gravitational Apparent Motion and Sensor Data Denoising

    PubMed Central

    Liu, Yiting; Xu, Xiaosu; Liu, Xixiang; Yao, Yiqing; Wu, Liang; Sun, Jin

    2015-01-01

    Initial alignment is always a key topic and difficult to achieve in an inertial navigation system (INS). In this paper a novel self-initial alignment algorithm is proposed using gravitational apparent motion vectors at three different moments and vector operations. Simulation and analysis showed that this method easily suffers from the random noise contained in accelerometer measurements, which are used to construct the apparent motion directly. Aiming to resolve this problem, an online sensor data denoising method based on a Kalman filter is proposed, and a novel reconstruction method for apparent motion is designed to avoid collinearity among the vectors participating in the alignment solution. Simulation, turntable tests and vehicle tests indicate that the proposed alignment algorithm can fulfill initial alignment of a strapdown INS (SINS) under both static and swinging conditions. The accuracy can either reach or approach the theoretical values determined by sensor precision under static or swinging conditions. PMID:25923932

  8. Solving optimization problems by the public goods game

    NASA Astrophysics Data System (ADS)

    Javarone, Marco Alberto

    2017-09-01

    We introduce a method based on the Public Goods Game for solving optimization tasks. In particular, we focus on the Traveling Salesman Problem, i.e., an NP-hard problem whose search space grows exponentially with the number of cities. The proposed method considers a population whose agents are provided with a random solution to the given problem. In doing so, agents interact by playing the Public Goods Game using the fitness of their solution as the currency of the game. Notably, agents with better solutions provide higher contributions, while those with lower ones tend to imitate the solution of richer agents to increase their fitness. Numerical simulations show that the proposed method allows one to compute exact solutions, and suboptimal ones, in the considered search spaces. As a result, beyond proposing a new heuristic for combinatorial optimization problems, our work aims to highlight the potential of evolutionary game theory beyond its current horizons.

  9. Analysing the Image Building Effects of TV Advertisements Using Internet Community Data

    NASA Astrophysics Data System (ADS)

    Uehara, Hiroshi; Sato, Tadahiko; Yoshida, Kenichi

    This paper proposes a method to measure the effects of TV advertisements using Internet bulletin boards. It aims to clarify how viewers' interest in TV advertisements is reflected in their images of the promoted products. Two kinds of time series data are generated based on the proposed method: the first represents the time series fluctuation of interest in the TV advertisements; the second represents the time series fluctuation of the images of the products. By analysing the correlations between these two time series, we try to clarify the implicit relationship between viewers' interest in a TV advertisement and their images of the promoted products. By applying the proposed method to an Internet bulletin board that deals with a certain cosmetic brand, we show that the images of the products vary depending on differences in interest in each TV advertisement.

  10. Topology optimization under stochastic stiffness

    NASA Astrophysics Data System (ADS)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables the discovery of new, high-performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities, including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties, transforming the problem of topology optimization under uncertainty into an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations of the response quantities allow efficient and accurate calculation of the sensitivities of response statistics with respect to the design variables. The proposed methods are shown to be successful at generating robust optimal topologies. Examples from topology optimization in continuum and discrete domains (truss structures) under uncertainty are presented. The proposed methods also lead to significant computational savings compared to Monte Carlo-based optimization, which involves multiple formations and inversions of the global stiffness matrix, and results obtained from the proposed methods are in excellent agreement with those obtained from a Monte Carlo-based optimization algorithm.

  11. Analysis of the dynamic behavior of structures using the high-rate GNSS-PPP method combined with a wavelet-neural model: Numerical simulation and experimental tests

    NASA Astrophysics Data System (ADS)

    Kaloop, Mosbeh R.; Yigit, Cemal O.; Hu, Jong W.

    2018-03-01

    Recently, the high-rate global navigation satellite system-precise point positioning (GNSS-PPP) technique has been used to detect the dynamic behavior of structures. This study aimed to increase the accuracy of extracting the oscillation properties of structural movements based on the high-rate (10 Hz) GNSS-PPP monitoring technique. A model based on the combination of wavelet package transformation (WPT) de-noising and neural network (NN) prediction was proposed to improve the detection of the dynamic behavior of structures with the GNSS-PPP method. A complicated numerical simulation involving highly noisy data and 13 experimental cases with different loads were utilized to confirm the efficiency of the proposed model design and the monitoring technique in detecting the dynamic behavior of structures. The results revealed that, when combined with the proposed model, the GNSS-PPP method can accurately detect the dynamic behavior of engineering structures as an alternative to the relative GNSS method.

  12. Metaheuristic Algorithms for Convolution Neural Network

    PubMed Central

    Rere, L M Rasdi; Fanany, Mohamad Ivan; Arymurthy, Aniati Murni

    2016-01-01

    A typical modern optimization technique is usually either heuristic or metaheuristic. Such techniques have managed to solve some optimization problems in science, engineering, and industry. However, implementation strategies of metaheuristics for accuracy improvement of convolution neural networks (CNN), a famous deep learning method, are still rarely investigated. Deep learning relates to a type of machine learning technique whose aim is to move closer to the goal of artificial intelligence: creating a machine that can successfully perform any intellectual task that can be carried out by a human. In this paper, we propose implementation strategies for three popular metaheuristic approaches, i.e., simulated annealing, differential evolution, and harmony search, to optimize CNN. The performances of these metaheuristic methods in optimizing CNN on the MNIST and CIFAR datasets were evaluated and compared, and the proposed methods were also compared with the original CNN. Although the proposed methods show an increase in computation time, their accuracy is also improved (by up to 7.14 percent). PMID:27375738

  13. Structured Matrix Completion with Applications to Genomic Data Integration.

    PubMed

    Cai, Tianxi; Cai, T Tony; Zhang, Anru

    2016-01-01

    Matrix completion has attracted significant recent attention in many fields, including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models, under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive a lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite samples under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extents of genomic measurement, which enables us to construct more accurate prediction rules for ovarian cancer survival.

  14. CNN Based Retinal Image Upscaling Using Zero Component Analysis

    NASA Astrophysics Data System (ADS)

    Nasonov, A.; Chesnakov, K.; Krylov, A.

    2017-05-01

    The aim of the paper is to obtain high-quality image upscaling for noisy images that are typical in medical image processing. A new training scenario for a convolutional neural network based image upscaling method is proposed. Its main idea is a novel dataset preparation method for deep learning. The dataset contains pairs of noisy low-resolution images and corresponding noiseless high-resolution images. To achieve better results at edges and in textured areas, Zero Component Analysis is applied to these images. The upscaling results are compared with other state-of-the-art methods, such as DCCI, SI-3 and SRCNN, on noisy medical ophthalmological images. Objective evaluation of the results confirms the high quality of the proposed method. Visual analysis shows that fine details and structures such as blood vessels are preserved, the noise level is reduced, and no artifacts or non-existing details are added. These properties are essential for establishing a retinal diagnosis, so the proposed algorithm is recommended for use in real medical applications.

  15. Multi-Target State Extraction for the SMC-PHD Filter

    PubMed Central

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-01-01

    The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large number of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via a clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274

  16. An optimization method for defects reduction in fiber laser keyhole welding

    NASA Astrophysics Data System (ADS)

    Ai, Yuewei; Jiang, Ping; Shao, Xinyu; Wang, Chunming; Li, Peigen; Mi, Gaoyang; Liu, Yang; Liu, Wei

    2016-01-01

    Laser welding has been widely used in the automotive, power, chemical, nuclear and aerospace industries. The quality of welded joints is closely related to the defects present, which are primarily determined by the welding process parameters. This paper proposes a defect optimization method that takes the formation mechanism of welding defects and weld geometric features into consideration. The analysis of the welding defect formation mechanism aims to investigate the relationship between welding defects and process parameters, and weld features are considered to identify the optimal process parameters for the desired welded joints with minimum defects. An improved back-propagation neural network, well suited to modeling nonlinear problems, is adopted to establish the mathematical model, and the obtained model is solved by a genetic algorithm. The proposed method is validated by macroweld profile, microstructure and microhardness in confirmation tests. The results show that the proposed method is effective at reducing welding defects and obtaining high-quality joints in practical fiber laser keyhole welding production.
The results show that the proposed method is effective at reducing welding defects and obtaining high-quality joints for fiber laser keyhole welding in practical production.

140. Assessment of System Frequency Support Effect of PMSG-WTG Using Torque-Limit-Based Inertial Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiao; Gao, Wenzhong; Wang, Jianhui

    To release the 'hidden inertia' of variable-speed wind turbines for temporary frequency support, a method of torque-limit-based inertial control is proposed in this paper. This method aims to improve the frequency support capability considering the maximum torque restriction of a permanent magnet synchronous generator. The advantages of the proposed method are an improved frequency nadir (FN) in the event of an under-frequency disturbance, and avoidance of over-deceleration and a second frequency dip during the inertial response. The system frequency response differs with different slope values in the power-speed plane when the inertial response is performed. The proposed method is evaluated in a modified three-machine, nine-bus system. The simulation results show that there is a trade-off between the recovery time and the FN, such that a gradual slope tends to improve the FN and restrict the rate of change of frequency aggressively while causing an extension of the recovery time.
These results provide insight into how to properly design such inertial control strategies for practical applications.
141. Change Detection of Remote Sensing Images by DT-CWT and MRF

    NASA Astrophysics Data System (ADS)

    Ouyang, S.; Fan, K.; Wang, H.; Wang, Z.

    2017-05-01

    To address the significant loss of high-frequency information during noise reduction and the assumption of pixel independence in change detection of multi-scale remote sensing images, an unsupervised algorithm is proposed based on the combination of the Dual-Tree Complex Wavelet Transform (DT-CWT) and a Markov Random Field (MRF) model. The method first performs multi-scale decomposition of the difference image by the DT-CWT and extracts the change characteristics in high-frequency regions using an MRF-based segmentation algorithm. It then estimates the final maximum a posteriori (MAP) solution according to an iterated conditional modes (ICM) segmentation algorithm based on fuzzy c-means (FCM), after reconstructing the high-frequency and low-frequency sub-bands of each layer respectively. Finally, the method fuses the segmentation results of each layer using the proposed fusion rule to obtain the mask of the final change detection result. Experimental results prove that the proposed method achieves higher precision and strong robustness.

142. Underwater image enhancement based on the dark channel prior and attenuation compensation

    NASA Astrophysics Data System (ADS)

    Guo, Qingwen; Xue, Lulu; Tang, Ruichun; Guo, Lingrui

    2017-10-01

    Aimed at two problems of underwater imaging, fog effects and color cast, an Improved Segmentation Dark Channel Prior (ISDCP) defogging method is proposed to address the fog effects caused by the physical properties of water. Due to the mass refraction of light in the process of underwater imaging, fog effects lead to image blurring, while color cast is closely related to the different degrees of attenuation that light of different wavelengths undergoes while traveling in water. The proposed method integrates the ISDCP and quantitative histogram stretching techniques into the image enhancement procedure. Firstly, a threshold value is set during the refinement of the transmission maps to identify the original mismatching and to conduct a differentiated defogging process. Secondly, a method of judging the propagation distance of light is adopted to obtain the attenuation degree of energy during propagation underwater. Finally, the image histogram is stretched quantitatively in the red, green and blue channels respectively, according to the degree of attenuation in each color channel. The proposed ISDCP method reduces the computational complexity and improves the efficiency of defogging to meet real-time requirements. Qualitative and quantitative comparisons for several different underwater scenes reveal that the proposed method can significantly improve visibility compared with previous methods.
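Since the entry above builds on the dark channel prior, a compact sketch of that prior may help. The snippet implements the generic dark channel and coarse transmission estimate of He et al., not the paper's ISDCP segmentation variant; the patch size, `omega` and the recovery step are standard illustrative choices.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch=15):
    """Dark channel of an RGB image in [0, 1]: the per-pixel minimum over
    color channels, followed by a local minimum filter over patches."""
    return minimum_filter(image.min(axis=2), size=patch)

def transmission(image, atmospheric, omega=0.95, patch=15):
    """Coarse transmission map t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(image / atmospheric, patch)

# Minimal usage: estimate A from the brightest dark-channel pixel, then
# invert the fog model I = J*t + A*(1 - t) to recover the scene J.
frame = np.random.rand(120, 160, 3)            # stand-in for a real frame
flat = frame.reshape(-1, 3)
a = flat[dark_channel(frame).ravel().argmax()]
t = np.clip(transmission(frame, a), 0.1, 1.0)
recovered = (frame - a) / t[..., None] + a
```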
143. A Simple Method Based on the Application of a CCD Camera as a Sensor to Detect Low Concentrations of Barium Sulfate in Suspension

    PubMed Central

    de Sena, Rodrigo Caciano; Soares, Matheus; Pereira, Maria Luiza Oliveira; da Silva, Rogério Cruz Domingues; do Rosário, Francisca Ferreira; da Silva, Joao Francisco Cajaiba

    2011-01-01

    The development of a simple, rapid and low-cost method based on video image analysis and aimed at the detection of low concentrations of precipitated barium sulfate is described. The proposed system is basically composed of a webcam with a CCD sensor and a conventional dichroic lamp. For this purpose, software for processing and analyzing the digital images based on the RGB (red, green and blue) color system was developed. The proposed method showed very good repeatability and linearity and also presented higher sensitivity than the standard turbidimetric method. The developed method is presented as a simple alternative for future applications in the study of the precipitation of inorganic salts and also for detecting the crystallization of organic compounds.
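As a rough illustration of how such a video-based sensor reduces frames to a scalar response and maps it through a linear calibration curve, consider the sketch below. The concentrations and grey levels are hypothetical numbers invented for the example, and the linear model is assumed rather than taken from the paper.

```python
import numpy as np

def mean_rgb(frame):
    """Average R, G, B intensities over a frame (or region of interest)."""
    return frame.reshape(-1, 3).mean(axis=0)

# Hypothetical calibration: light scattered by suspended BaSO4 brightens
# the frames, so the mean grey level grows with concentration.
conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0])    # mg/L, illustrative
resp = np.array([12.1, 17.8, 24.2, 35.9, 60.3])  # mean grey level, illustrative
slope, intercept = np.polyfit(conc, resp, deg=1)

# Invert the fitted line to estimate the concentration of an unknown sample.
unknown_resp = 30.0
print((unknown_resp - intercept) / slope)
```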
144. Novel SHM method to locate damages in substructures based on VARX models

    NASA Astrophysics Data System (ADS)

    Ugalde, U.; Anduaga, J.; Martínez, F.; Iturrospe, A.

    2015-07-01

    A novel damage localization method is proposed, which is based on a substructuring approach and makes use of Vector Auto-Regressive with eXogenous input (VARX) models. The substructuring approach divides the monitored structure into several multi-DOF isolated substructures. Each individual substructure is then modelled as a VARX model, and the health of each substructure is determined by analyzing the variation of its VARX model. The method makes it possible to detect whether an isolated substructure is damaged, and also to locate and quantify the damage within the substructure. No theoretical model of the structure is necessary; only the measured displacement data are required to estimate the isolated substructure's VARX model. The proposed method is validated by simulations of a two-dimensional lattice structure.
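A VARX model of the kind used above can be fitted by ordinary least squares. The sketch below estimates the coefficients of a VARX(p, p) model from substructure outputs y and boundary inputs u; the function and the damage-index idea (comparing reference and current coefficients) are a generic illustration, with model orders and data shapes chosen only for the example.

```python
import numpy as np

def fit_varx(y, u, p=2):
    """Least-squares fit of y[t] = sum_i A_i y[t-i] + sum_i B_i u[t-i] + e[t].
    y: (T, n_outputs) substructure responses; u: (T, n_inputs) boundary
    inputs. Returns the stacked coefficient matrix."""
    t = y.shape[0]
    rows = [np.concatenate([y[k - i] for i in range(1, p + 1)] +
                           [u[k - i] for i in range(1, p + 1)])
            for k in range(p, t)]
    coef, *_ = np.linalg.lstsq(np.asarray(rows), y[p:], rcond=None)
    return coef

# Damage indicator: distance between coefficients fitted on reference
# (healthy) data and on current data.
rng = np.random.default_rng(1)
u = rng.normal(size=(500, 1))
y = rng.normal(size=(500, 2))
theta_ref = fit_varx(y[:250], u[:250])
theta_now = fit_varx(y[250:], u[250:])
print(np.linalg.norm(theta_now - theta_ref))
```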
145. Prediction of nocturnal hypoglycemia by an aggregation of previously known prediction approaches: proof of concept for clinical application.

    PubMed

    Tkachenko, Pavlo; Kriukova, Galyna; Aleksandrova, Marharyta; Chertov, Oleg; Renard, Eric; Pereverzyev, Sergei V.

    2016-10-01

    Nocturnal hypoglycemia (NH) is common in patients with insulin-treated diabetes. Despite the risk associated with NH, there are only a few methods aiming at the prediction of such events based on intermittent blood glucose monitoring data, and none has been validated for clinical use. Here we propose a method of combining several predictors into a new one that performs at the level of the best involved one, or even outperforms all individual candidates. The idea of the method is to use a recently developed strategy for aggregating ranking algorithms. The method has been calibrated and tested on data extracted from clinical trials performed in the European FP7-funded project DIAdvisor. We have then tested the proposed approach on other datasets to show the portability of the method; this feature allows its simple implementation in the form of a diabetic smartphone app. On the considered datasets the proposed approach exhibits good performance in terms of sensitivity, specificity and predictive values, and the resulting predictor automatically performs at the level of the best involved method or even outperforms it. We propose a strategy for combining NH predictors that leads to a method exhibiting reliable performance and the potential for everyday use by any patient who performs self-monitoring of blood glucose.

146. Social Search: A Taxonomy of, and a User-Centred Approach to, Social Web Search

    ERIC Educational Resources Information Center

    McDonnell, Michael; Shiri, Ali

    2011-01-01

    Purpose: The purpose of this paper is to introduce the notion of social search as a new concept, drawing upon the patterns of web search behaviour. It aims to: define social search; present a taxonomy of social search; and propose a user-centred social search method. Design/methodology/approach: A mixed method approach was adopted to investigate…

147. An Intelligent Web-Based System for Diagnosing Student Learning Problems Using Concept Maps

    ERIC Educational Resources Information Center

    Acharya, Anal; Sinha, Devadatta

    2017-01-01

    The aim of this article is to propose a method for developing concept maps in a web-based environment for identifying the concepts a student is deficient in after learning using traditional methods. The Direct Hashing and Pruning algorithm was used to construct the concept map. Redundancies within the concept map were removed to generate a learning sequence.…

148. An approach to solve replacement problems under intuitionistic fuzzy nature

    NASA Astrophysics Data System (ADS)

    Balaganesan, M.; Ganesan, K.

    2018-04-01

    Owing to the imprecision inherent in day-to-day problems, researchers use fuzzy sets in their discussions of replacement problems. The aim of this paper is to solve replacement theory problems with triangular intuitionistic fuzzy numbers. An effective methodology based on a fuzziness index and a location index is proposed to determine the optimal solution of the replacement problem. A numerical example is given to validate the proposed method.

149. A Matrix Pencil Algorithm Based Multiband Iterative Fusion Imaging Method

    NASA Astrophysics Data System (ADS)

    Zou, Yong Qiang; Gao, Xun Zhang; Li, Xiang; Liu, Yong Xiang

    2016-01-01

    Multiband signal fusion is a practicable and efficient way to improve the range resolution of ISAR images. The classical fusion method estimates the poles of each subband signal by the root-MUSIC method, and good results have been obtained in several experiments. However, this method is fragile in noise, since the proper poles are not easy to obtain at low signal-to-noise ratio (SNR). In order to eliminate the influence of noise, this paper proposes a matrix pencil algorithm based method to estimate the multiband signal poles. To deal with the mutual incoherence between subband signals, the incoherence parameters (ICP) are predicted through the relation of the corresponding poles of each subband. Then, an iterative algorithm aimed at minimizing the 2-norm of the signal difference is introduced to reduce the signal fusion error. Applications to simulated data verify that the proposed method achieves better fusion results at low SNR.
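The matrix pencil step referenced above admits a compact sketch: stack the samples into a Hankel matrix, split it into two shifted sub-matrices, and read the signal poles off the dominant eigenvalues of the pencil. The pencil parameter, pole count and toy signal below are illustrative choices, not the paper's settings.

```python
import numpy as np

def matrix_pencil_poles(x, n_poles, pencil=None):
    """Estimate poles z_k of x[n] = sum_k a_k z_k**n by the matrix pencil
    method: eigenvalues of pinv(Y1) @ Y2 for shifted Hankel blocks Y1, Y2."""
    n = len(x)
    L = pencil if pencil is not None else n // 3     # pencil parameter
    Y = np.array([x[i:i + L + 1] for i in range(n - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    # Keep the n_poles eigenvalues of largest magnitude (signal poles);
    # the remaining eigenvalues cluster near zero and absorb the noise.
    return eigvals[np.argsort(-np.abs(eigvals))[:n_poles]]

# Toy usage: two complex exponentials in mild noise.
n = np.arange(100)
true_poles = [np.exp(0.4j), 0.98 * np.exp(1.1j)]
x = sum(z ** n for z in true_poles) + 0.01 * np.random.randn(100)
print(matrix_pencil_poles(x, n_poles=2))
```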
150. Infrared and visible image fusion based on visual saliency map and weighted least square optimization

    NASA Astrophysics Data System (ADS)

    Ma, Jinlei; Zhou, Zhiqiang; Wang, Bo; Zong, Hua

    2017-05-01

    The goal of infrared (IR) and visible image fusion is to produce a more informative image for human observation or other computer vision tasks. In this paper, we propose a novel multi-scale fusion method based on a visual saliency map (VSM) and weighted least square (WLS) optimization, aiming to overcome some common deficiencies of conventional methods. Firstly, we introduce a multi-scale decomposition (MSD) using the rolling guidance filter (RGF) and a Gaussian filter to decompose the input images into base and detail layers. Compared with conventional MSDs, this MSD achieves the unique property of preserving the information of specific scales and reducing halos near edges. Secondly, we argue that the base layers obtained by most MSDs contain a certain amount of residual low-frequency information, which is important for controlling the contrast and overall visual appearance of the fused image, and that the conventional "averaging" fusion scheme is unable to achieve the desired effects. To address this problem, an improved VSM-based technique is proposed to fuse the base layers. Lastly, a novel WLS optimization scheme is proposed to fuse the detail layers. This optimization aims to transfer more visual details and less irrelevant IR detail or noise into the fused image. As a result, the fused image details appear more natural and suited to human visual perception. Experimental results demonstrate that our method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
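To make the two-scale logic above concrete, here is a deliberately simplified sketch: Gaussian smoothing stands in for the rolling guidance filter, a crude deviation-from-the-local-mean map replaces the paper's VSM, and a max-magnitude rule replaces the WLS detail optimization. All parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_scale_fusion(ir, vis, sigma=5.0):
    """Simplified two-scale fusion of co-registered IR and visible images
    (floats in [0, 1]). Base layers come from Gaussian smoothing; base
    weights follow a crude saliency map (deviation from a heavily blurred
    version), and detail layers are fused by picking the larger magnitude."""
    bases = [gaussian_filter(img, sigma) for img in (ir, vis)]
    details = [img - base for img, base in zip((ir, vis), bases)]
    saliency = [np.abs(img - gaussian_filter(img, 4 * sigma)) + 1e-6
                for img in (ir, vis)]
    w = saliency[0] / (saliency[0] + saliency[1])
    fused_base = w * bases[0] + (1 - w) * bases[1]
    fused_detail = np.where(np.abs(details[0]) >= np.abs(details[1]),
                            details[0], details[1])
    return np.clip(fused_base + fused_detail, 0.0, 1.0)

fused = two_scale_fusion(np.random.rand(128, 128), np.random.rand(128, 128))
```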
151. Prospective and retrospective high order eddy current mitigation for diffusion weighted echo planar imaging.

    PubMed

    Xu, Dan; Maier, Joseph K.; King, Kevin F.; Collick, Bruce D.; Wu, Gaohong; Peters, Robert D.; Hinks, R. Scott

    2013-11-01

    The proposed method is aimed at reducing eddy current (EC) induced distortion in diffusion weighted echo planar imaging, without the need to perform further image coregistration between diffusion weighted and T2 images. These ECs typically have significant high order spatial components that cannot be compensated by preemphasis. High order ECs are first calibrated at the system level in a protocol-independent fashion. The resulting amplitudes and time constants of the high order ECs can then be used to calculate imaging protocol specific corrections. A combined prospective and retrospective approach is proposed to apply the correction during data acquisition and image reconstruction. Various phantom, brain, body, and whole body diffusion weighted images with and without the proposed method were acquired. Significantly reduced image distortion and misregistration are consistently seen in images acquired with the proposed method compared with images without it. The proposed method is a powerful (e.g., effective at 48 cm field of view and 30 cm slice coverage) and flexible (e.g., compatible with other image enhancements and arbitrary scan planes) technique to correct high order EC induced distortion and misregistration for various diffusion weighted echo planar imaging applications, without the need for further image post-processing, protocol dependent prescan, or sacrifice in signal-to-noise ratio.

152. Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework.

    PubMed

    Zhou, Ronggang; Chan, Alan H. S.

    2017-01-01

    In order to compare existing usability data to ideal goals or to those of other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, and often fail to take into account the inherent uncertainties in these judgments during the evaluation process. This paper presents a universal method of usability evaluation combining the analytic hierarchy process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the proposed method derives an index that is structured hierarchically in terms of the three usability components of effectiveness, efficiency, and user satisfaction. With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation technique to characterize fuzzy human judgments. Then, using AHP, the weights of the usability components were elicited from these experts. Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework combining the vague judgments from multiple stages of a product evaluation process.
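The synthesis step of such a fuzzy comprehensive evaluation reduces to a weighted combination of membership vectors, b = w · R. The weights, grades and membership matrix below are hypothetical values chosen purely for illustration.

```python
import numpy as np

def fuzzy_comprehensive_evaluation(weights, membership):
    """Weighted fuzzy synthesis b = w . R: `weights` are AHP-derived
    component weights (effectiveness, efficiency, satisfaction) and each
    row of `membership` gives that component's membership degrees over
    the appraisal grades (e.g. poor/fair/good/excellent)."""
    b = np.asarray(weights) @ np.asarray(membership)
    return b / b.sum()                  # normalise the appraisal vector

# Hypothetical panel data: rows = usability components, columns = grades.
w = [0.41, 0.26, 0.33]                  # AHP weights (illustrative)
R = [[0.1, 0.2, 0.5, 0.2],
     [0.0, 0.3, 0.4, 0.3],
     [0.1, 0.1, 0.3, 0.5]]
appraisal = fuzzy_comprehensive_evaluation(w, R)
print(appraisal, "-> grade index", appraisal.argmax())
```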
153. Underground Mining Method Selection Using WPM and PROMETHEE

    NASA Astrophysics Data System (ADS)

    Balusa, Bhanu Chander; Singam, Jayanthu

    2018-04-01

    The aim of this paper is to present a solution to the problem of selecting a suitable underground mining method for the mining industry. This is achieved by using two multi-attribute decision-making techniques: the weighted product method (WPM) and the preference ranking organization method for enrichment evaluation (PROMETHEE). The analytic hierarchy process is used to calculate the weights of the attributes. Mining method selection depends on physical, mechanical, economic and technical parameters. The WPM and PROMETHEE techniques have the ability to consider the relationships between the parameters and the mining methods, and they offer higher accuracy and faster computation compared with other decision-making techniques. The proposed techniques are applied to determine the effective mining method for a bauxite mine, and the results are compared with methods used in earlier research works. The results show that the conventional cut-and-fill method is the most suitable mining method.
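Of the two techniques, the WPM score is simple enough to sketch in a few lines: normalise each attribute, raise it to its weight, and multiply. The attribute values and weights below are hypothetical and serve only to show the mechanics.

```python
import numpy as np

def weighted_product_scores(ratings, weights):
    """Weighted product model: each alternative's score is the product of
    its normalised attribute ratings raised to the attribute weights;
    higher is better. `ratings` is (alternatives x attributes)."""
    r = np.asarray(ratings, dtype=float)
    r = r / r.max(axis=0)               # simple linear normalisation
    return np.prod(r ** np.asarray(weights), axis=1)

# Illustrative data: 3 candidate mining methods rated on 4 attributes,
# with AHP-style weights summing to 1.
ratings = [[7, 5, 8, 6],
           [6, 8, 7, 7],
           [9, 6, 5, 8]]
weights = [0.35, 0.25, 0.20, 0.20]
scores = weighted_product_scores(ratings, weights)
print("best alternative:", int(scores.argmax()), scores.round(3))
```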
154. Analog self-powered harvester achieving switching pause control to increase harvested energy

    NASA Astrophysics Data System (ADS)

    Makihara, Kanjuro; Asahina, Kei

    2017-05-01

    In this paper, we propose a self-powered analog controller circuit to increase the efficiency of electrical energy harvesting from vibrational energy using piezoelectric materials. Although the existing synchronized switch harvesting on inductor (SSHI) method is designed to produce efficient harvesting, its switching operation generates a vibration-suppression effect that reduces the harvested levels of electrical energy. To solve this problem, the authors proposed, in a previous paper, a switching method that takes this vibration-suppression effect into account. This method temporarily pauses the switching operation, allowing the recovery of the mechanical displacement and, therefore, of the piezoelectric voltage. In this paper, we propose a self-powered analog circuit to implement this switching control method. Self-powered vibration harvesting is achieved by attaching a newly designed circuit to an existing analog controller for SSHI. This circuit effectively implements the new switching control strategy, in which switching is paused at some vibration peaks to allow motion recovery and a consequent increase in the harvested energy. Harvesting experiments performed using the proposed circuit reveal that the proposed method can increase the energy stored in the storage capacitor by a factor of 8.5 relative to the conventional SSHI circuit. The proposed technique is especially useful for increasing the harvested energy in piezoelectric systems with a large coupling factor.

155. Full cost accounting in the analysis of separated waste collection efficiency: A methodological proposal.

    PubMed

    D'Onza, Giuseppe; Greco, Giulio; Allegrini, Marco

    2016-02-01

    Recycling implies additional costs for separated municipal solid waste (MSW) collection. The aim of the present study is to propose and implement a management tool, the full cost accounting (FCA) method, to calculate the full collection costs of different types of waste. Our analysis aims for a better understanding of the difficulties of putting FCA into practice in the MSW sector. We propose an FCA methodology that uses standard costs and actual quantities to calculate the collection costs of separated and undifferentiated waste. The methodology allows cost-efficiency analysis and benchmarking, overcoming problems related to firm-specific accounting choices, earnings management policies and purchase policies. It supports variance analysis that can be used to identify the causes of off-standard performance and guide managers to deploy resources more efficiently, and it can be implemented by companies lacking a sophisticated management accounting system.
156. A proposed procedure for expressing the behavior of a full engine cycle by identifying its critical load timings

    NASA Astrophysics Data System (ADS)

    Mihalache, Marius Andrei; Nagit, Gheorghe; Musca, Gavril; Merticaru, Vasile, Jr.; Ripanu, Marius Ionut

    2016-11-01

    In the present study the authors propose a new algorithm for identifying the loads that act upon a functional connecting rod during a full engine cycle. The loads are divided into three categories depending on the results they produce: static, semi-dynamic and dynamic. Because an engine cycle extends over 720°, the authors aim to identify a method for substituting values that produce the same effect as a previous value of a considered angle. In other words, the proposed method aims to pinpoint the critical values that produce an effect different from those seen before during a full engine cycle. Only those values are then considered valid loads acting upon the connecting rod in FEA analyses. This technique has been applied to each of the three categories mentioned above and produced different critical values for each of them. The study relies on a theoretical mechanical project developed to identify the values corresponding to each degree of the entire engine cycle of a Daewoo Tico automobile.

157. A study on the dependence of exposure dose reduction and image evaluation on the distance from the dental periapical X-ray machine

    NASA Astrophysics Data System (ADS)

    Joo, Kyu-Ji; Shin, Jae-Woo; Dong, Kyung-Rae; Lim, Chang-Seon; Chung, Woon-Kwan; Kim, Young-Jae

    2013-11-01

    Reducing the exposure dose from a periapical X-ray machine is an important aim in dental radiography. Although the radiation exposure dose is generally low, any radiation exposure is harmful to the human body. Therefore, this study developed a method that reduces the exposure dose significantly compared to a normal procedure while still producing an image of similar resolution. The correlation between the image resolution and the exposure dose of the proposed method was examined with increasing distance between the dosimeter and the X-ray tube, and the results were compared with those obtained from the existing radiography method. When periapical radiography was performed once according to the recommendations of the International Commission on Radiological Protection (ICRP), the measured skin surface dose was low, at 7 mGy or below. In contrast, the skin surface dose measured using the proposed method was only 1.57 mGy, a five-fold reduction.
These results suggest that further decreases in dose might be achieved using the proposed method.

158. Hair and bare skin discrimination for laser-assisted hair removal systems.

    PubMed

    Cayir, Sercan; Yetik, Imam Samil

    2017-07-01

    Laser-assisted hair removal devices aim to remove body hair permanently. In most cases, these devices irradiate the whole area of the skin with a homogeneous power density; thus, a significant portion of the skin where hair is not present is burnt unnecessarily, causing health risks. Methods that can distinguish hair regions automatically would therefore be very helpful in avoiding these unnecessary applications of laser. This study proposes a new system of algorithms to detect hair regions with the help of a digital camera. Unlike the limited number of previous studies, our methods are very fast, allowing for real-time application. The proposed methods are based on features derived from histograms of hair and skin regions. We compare our algorithm with competing methods in terms of localization performance and computation time, and show that much faster, accurate real-time localization of hair regions is possible with the proposed method. Our results show that the developed algorithm is extremely fast (around 45 milliseconds), allowing for real-time application with high-accuracy hair localization (96.48%).

159. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing
annoying shaky motion from videos. We propose a practical and robust approach to video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up producing smaller-size stabilized videos, our completion method produces full-frame videos by naturally filling in missing image parts with locally aligned image data from neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels from neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which naturally preserves the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.

160. Classification of Hyperspectral Data Based on Guided Filtering and Random Forest

    NASA Astrophysics Data System (ADS)

    Ma, H.; Feng, W.; Cao, X.; Wang, L.

    2017-09-01

    Hyperspectral images usually consist of more than one hundred spectral bands, which have the potential to provide rich spatial and spectral information. However, the application of hyperspectral data is still challenging due to "the curse of dimensionality". In this context, many techniques that aim to make full use of both the spatial and spectral information have been investigated. To preserve the geometric information while using fewer spectral bands, we propose a novel method which combines principal component analysis (PCA), guided image filtering and the random forest classifier (RF). In detail, PCA is first employed to reduce the dimension of the spectral bands. Secondly, the guided image filtering technique is introduced to smooth land objects while preserving their edges. Finally, the features are fed into the RF classifier. To illustrate the effectiveness of the method, we carry out experiments over the popular Indian Pines data set, which was collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. By comparing the proposed method with methods using only PCA or the guided image filter, we find that the proposed method performs better.
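A sketch of that PCA / guided-filtering / random-forest pipeline is given below. The guided filter follows the standard He et al. box-filter formulation; the choice of the first principal component as the guide image, the component count, filter radius and forest size are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Edge-preserving guided filter: smooth `src` while following the
    edges of `guide`; both are 2-D float arrays."""
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    corr_ip = uniform_filter(guide * src, size)
    corr_ii = uniform_filter(guide * guide, size)
    a = (corr_ip - mean_i * mean_p) / (corr_ii - mean_i ** 2 + eps)
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def classify_cube(cube, labels, n_components=10):
    """PCA -> guided filtering of each component -> random forest.
    `cube` is (rows, cols, bands); `labels` is (rows, cols), 0 = unlabelled."""
    rows, cols, bands = cube.shape
    pcs = PCA(n_components).fit_transform(cube.reshape(-1, bands))
    pcs = pcs.reshape(rows, cols, n_components)
    guide = pcs[..., 0]                       # first PC as the guide image
    feats = np.stack([guided_filter(guide, pcs[..., k])
                      for k in range(n_components)], axis=-1)
    x = feats.reshape(-1, n_components)
    y = labels.ravel()
    clf = RandomForestClassifier(n_estimators=200).fit(x[y > 0], y[y > 0])
    return clf.predict(x).reshape(rows, cols)

# Toy usage with random data and a handful of labelled pixels.
cube = np.random.rand(30, 30, 40)
labels = np.zeros((30, 30), dtype=int)
labels[:5, :5], labels[-5:, -5:] = 1, 2
print(classify_cube(cube, labels).shape)
```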
161. Sub-pattern based multi-manifold discriminant analysis for face recognition

    NASA Astrophysics Data System (ADS)

    Dai, Jiangyan; Guo, Changlu; Zhou, Wei; Shi, Yanjiao; Cong, Lin; Yi, Yugen

    2018-04-01

    In this paper, we present a Sub-pattern based Multi-manifold Discriminant Analysis (SpMMDA) algorithm for face recognition. Unlike the existing Multi-manifold Discriminant Analysis (MMDA) approach, which is based on holistic information of the face image, SpMMDA operates on sub-images partitioned from the original face image and extracts discriminative local features from the sub-images separately. Moreover, the structure information of different sub-images from the same face image is considered in the proposed method, with the aim of further improving the recognition performance. Extensive experiments on three standard face databases (Extended YaleB, CMU PIE and AR) demonstrate that the proposed method is effective and outperforms other sub-pattern based face recognition methods.

162. A novel approach to describing and detecting performance anti-patterns

    NASA Astrophysics Data System (ADS)

    Sheng, Jinfang; Wang, Yihan; Hu, Peipei; Wang, Bin

    2017-08-01

    An anti-pattern, as an extension of the pattern concept, describes a widely used poor solution that can bring negative influences to application systems. Addressing the shortcomings of existing anti-pattern descriptions, an anti-pattern description method based on first-order predicates is proposed. This method synthesizes anti-pattern forms and symptoms, which makes the description more accurate while providing good scalability and versatility. To improve the accuracy of anti-pattern detection, a Bayesian classification method is applied to validate the detection results, which reduces the false negatives and false positives of anti-pattern detection. Finally, the proposed approach is applied to a small e-commerce system, and its feasibility and effectiveness are further demonstrated through experiments.

163. Reduction of hybrid EMG-EOG artifacts in electroencephalography evoked by prefrontal transcranial magnetic stimulation

    NASA Astrophysics Data System (ADS)

    Bai, Yang; Wan, Xiaohong; Zeng, Ke; Ni, Yinmei; Qiu, Lirong; Li, Xiaoli

    2016-12-01

    Objective. When prefrontal transcranial magnetic stimulation (p-TMS) is performed, it may evoke hybrid artifacts mixing muscle activity and blink activity in EEG recordings. Reducing this kind of hybrid artifact challenges traditional preprocessing methods; we aim to explore a method for removing p-TMS evoked hybrid artifacts. Approach. We propose a novel method used as independent component analysis (ICA) post-processing to reduce the p-TMS evoked hybrid artifacts. Ensemble empirical mode decomposition (EEMD) was used to decompose the signal into multiple components, which were then separated, with artifacts reduced, by blind source separation (BSS). Three standard BSS methods were tested: ICA, independent vector analysis, and canonical correlation analysis (CCA). Main results.
Synthetic results showed that EEMD-CCA outperformed the others as an ICA post-processing step in hybrid artifact reduction, and its superiority was clearer when the signal-to-noise ratio (SNR) was lower. In application to a real experiment, the SNR was significantly increased and the p-TMS evoked potential could be recovered from the artifact-contaminated signal. Our proposed method can effectively reduce the p-TMS evoked hybrid artifacts. Significance. Our proposed method may facilitate future prefrontal TMS-EEG research.

164. Enhancement of Satellite Image Compression Using a Hybrid (DWT-DCT) Algorithm

    NASA Astrophysics Data System (ADS)

    Shihab, Halah Saadoon; Shafie, Suhaidi; Ramli, Abdul Rahman; Ahmad, Fauzan

    2017-12-01

    Discrete Cosine Transform (DCT) and Discrete Wavelet Transform (DWT) image compression techniques have been utilized in most of the earth observation satellites launched during the last few decades. However, these techniques have some issues that should be addressed. The DWT method has proven to be more efficient than DCT for several reasons; nevertheless, the DCT can be exploited to improve high-resolution satellite image compression when combined with the DWT technique. Hence, a hybrid (DWT-DCT) method was developed and implemented in the current work, simulating an image compression system on board a small remote sensing satellite, with the aim of achieving a higher compression ratio to decrease the onboard data storage and downlink bandwidth, while avoiding further complex levels of DWT. This method also succeeded in maintaining the reconstructed satellite image quality by replacing the standard forward DWT thresholding and quantization processes with an alternative process that employed the zero-padding technique, which also helped to reduce the processing time of DWT compression. The DCT, DWT and the proposed hybrid methods were implemented individually, for comparison, on three LANDSAT 8 images, using the MATLAB software package. A comparison was also made between the proposed method and three other previously published hybrid methods. The evaluation of all the objective and subjective results indicated the feasibility of using the proposed hybrid (DWT-DCT) method to enhance the image compression process on board satellites.
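One way such a DWT+DCT hybrid can be organised is sketched below in Python (the paper itself used MATLAB): a single-level 2-D DWT, a DCT on the low-frequency subband, and retention of only the largest coefficients. The wavelet choice, the kept fraction and the decision to leave the detail subbands untouched are simplifying assumptions, not the published scheme, and a real codec would follow the thresholding with quantization and entropy coding.

```python
import numpy as np
import pywt
from scipy.fft import dctn, idctn

def hybrid_compress(img, keep=0.1, wavelet="haar"):
    """Single-level 2-D DWT, DCT on the low-frequency LL subband, and
    retention of only the largest `keep` fraction of DCT coefficients.
    Returns the reconstructed image."""
    ll, details = pywt.dwt2(img, wavelet)
    coeffs = dctn(ll, norm="ortho")
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep)
    coeffs[np.abs(coeffs) < threshold] = 0.0   # discard small coefficients
    ll_rec = idctn(coeffs, norm="ortho")
    return pywt.idwt2((ll_rec, details), wavelet)

img = np.random.rand(256, 256)
rec = hybrid_compress(img, keep=0.05)
print("reconstruction RMSE:", np.sqrt(np.mean((img - rec) ** 2)))
```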
165. Structural Equation Models in a Redundancy Analysis Framework With Covariates.

    PubMed

    Lovaglio, Pietro Giorgio; Vittadini, Giorgio

    2014-01-01

    A recent method to specify and fit structural equation models in the Redundancy Analysis framework, based on so-called Extended Redundancy Analysis (ERA), has been proposed in the literature. In this approach, the relationships between the observed exogenous variables and the observed endogenous variables are moderated by the presence of unobservable composites estimated as linear combinations of exogenous variables. However, in the presence of direct effects linking exogenous and endogenous variables, or of concomitant indicators, the composite scores are estimated by ignoring the specified direct effects. To fit structural equation models, we propose a new specification and estimation method, called Generalized Redundancy Analysis (GRA), which allows us to specify and fit a variety of relationships among composites, endogenous variables, and external covariates. The proposed methodology extends the ERA method with a more suitable specification and estimation algorithm, allowing for covariates that affect endogenous indicators indirectly through the composites and/or directly. To illustrate the advantages of GRA over ERA, we present a simulation study with small samples, as well as an application aimed at estimating the impact of formal human capital on the initial earnings of graduates of an Italian university, utilizing a structural model consistent with well-established economic theory.

166. An exploration of a research-oriented teaching model in biology teaching.

    PubMed

    Xing, Wanjin; Mo, Morigen; Su, Huimin

    2014-07-01

    Training innovative talent, one of the major aims of Chinese universities, requires reform of traditional teaching methods. The research-oriented teaching method has been introduced into Chinese universities, and its connotations and significance have been discussed for years. However, few practical teaching methods for routine class teaching have been proposed. In this paper, a comprehensive and concrete research-oriented teaching model, with contents of reference value and an evaluation method for class teaching, is proposed based on the current teacher-guided teaching model in China. We propose that a research-oriented teaching model should include at least seven aspects: (1) telling the scientific history to develop the skill of finding scientific questions; (2) replaying experiments to develop the skill of solving scientific problems; (3) analyzing experimental data to learn how to draw conclusions; (4) designing virtual experiments to learn how to construct a proposal; (5) teaching the lesson the way detectives solve crimes, to convey the logic of scientific exploration; (6) guiding students in reading and consulting the relevant references; and (7) teaching students differently according to their aptitude and learning ability.
In addition, we also discussed how to evaluate the effects of the research-oriented teaching model in examinations.

167. Automatic Detection of Galaxy Type From Datasets of Galaxies Image Based on Image Retrieval Approach.

    PubMed

    Abd El Aziz, Mohamed; Selim, I. M.; Xiong, Shengwu

    2017-06-30

    This paper presents a new approach for the automatic detection of galaxy morphology from datasets, based on image retrieval. Currently, several classification methods have been proposed to detect galaxy types within an image. However, in some situations the aim is not only to determine the type of galaxy within the queried image, but also to find the images most similar to the query image. Therefore, this paper proposes an image-retrieval method to detect the type of galaxy within an image and return the most similar images. The proposed method consists of two stages: in the first stage, a set of features is extracted based on shape, color and texture descriptors, and a binary sine cosine algorithm then selects the most relevant features. In the second stage, the similarity between the features of the queried galaxy image and the features of the other galaxy images is computed. Our experiments were performed using the EFIGI catalogue, which contains about 5000 galaxy images of different types (edge-on spiral, spiral, elliptical and irregular). We demonstrate that our proposed approach has better performance compared with the particle swarm optimization (PSO) and genetic algorithm (GA) methods.

168. A Truncated Nuclear Norm Regularization Method Based on Weighted Residual Error for Matrix Completion.

    PubMed

    Liu, Qing; Lai, Zhihui; Zhou, Zongwei; Kuang, Fangjun; Jin, Zhong

    2016-01-01

    Low-rank matrix completion aims to recover a matrix from a small subset of its entries and has received much attention in the field of computer vision. Most existing methods formulate the task as a low-rank matrix approximation problem. The truncated nuclear norm has recently been proposed as a better approximation to the rank of a matrix than the nuclear norm. The corresponding optimization method, truncated nuclear norm regularization (TNNR), converges better than nuclear norm minimization-based methods. However, it is not robust to the number of subtracted singular values and requires a large number of iterations to converge. In this paper, a TNNR method based on weighted residual error (TNNR-WRE) for matrix completion and its extension (ETNNR-WRE) are proposed. TNNR-WRE assigns different weights to the rows of the residual error matrix in an augmented Lagrange function to accelerate the convergence of the TNNR method. The ETNNR-WRE is much more robust to the number of subtracted singular values than TNNR-WRE, the TNNR alternating direction method of multipliers, and the TNNR accelerated proximal gradient with line search methods. Experimental results using both synthetic and real visual data sets show that the proposed TNNR-WRE and ETNNR-WRE methods perform better than the TNNR and Iteratively Reweighted Nuclear Norm (IRNN) methods.
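To ground the terminology, the truncated nuclear norm of a matrix is the sum of its singular values beyond the r largest, and TNNR-style completion repeatedly shrinks that tail while keeping the observed entries fixed. The loop below is a minimal sketch of this idea only; it omits the paper's weighted-residual acceleration, and the shrinkage step and iteration count are arbitrary illustrative choices.

```python
import numpy as np

def truncated_nuclear_norm(x, r):
    """||X||_r = sum of all singular values except the r largest; the
    surrogate for rank used by TNNR-style matrix completion."""
    s = np.linalg.svd(x, compute_uv=False)
    return s[r:].sum()

def complete_tnnr(observed, mask, r, tau=1.0, iters=200):
    """Tiny sketch of truncated-nuclear-norm completion: repeatedly
    soft-threshold all but the r leading singular values, then restore
    the known entries."""
    x = observed.copy()
    for _ in range(iters):
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        s[r:] = np.maximum(s[r:] - tau, 0.0)   # shrink only the tail
        x = (u * s) @ vt
        x[mask] = observed[mask]               # keep observed entries fixed
    return x

rng = np.random.default_rng(2)
truth = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 50))
mask = rng.random((50, 50)) < 0.5
obs = np.where(mask, truth, 0.0)
est = complete_tnnr(obs, mask, r=3)
print(np.linalg.norm((est - truth)[~mask]) / np.linalg.norm(truth[~mask]))
```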
169. Optimization of cell seeding in a 2D bio-scaffold system using computational models.

    PubMed

    Ho, Nicholas; Chua, Matthew; Chui, Chee-Kong

    2017-05-01

    The cell expansion process is a crucial part of generating cells on a large scale in a bioreactor system, so it is important to set operating conditions (e.g., initial cell seeding distribution, culture medium flow rate) at optimal levels. The initial cell seeding distribution factor is often neglected and/or overlooked in bioreactor design using conventional seeding distribution methods. This paper proposes a novel seeding distribution method that aims to maximize cell growth and minimize production time and cost. The proposed method utilizes two computational models: the first represents cell growth patterns, whereas the second determines optimal initial cell seeding positions for adherent cell expansion. Cell growth simulations from the first model demonstrate that it can represent various cell types with known probabilities. The second model combines combinatorial optimization, Monte Carlo methods and concepts from the first model, and is used to design a multi-layer 2D bio-scaffold system that increases cell production efficiency in bioreactor applications. Simulation results show that the recommended input configurations obtained from the proposed optimization method are optimal, illustrating the effectiveness of the method. The results highlight the potential of the proposed seeding distribution method as a useful tool for optimizing the cell expansion process in modern bioreactor system applications.

170. Method for Optimal Sensor Deployment on 3D Terrains Utilizing a Steady State Genetic Algorithm with a Guided Walk Mutation Operator Based on the Wavelet Transform

    PubMed Central

    Unaldi, Numan; Temel, Samil; Asari, Vijayan K.

    2012-01-01

    One of the most critical issues for Wireless Sensor Networks (WSNs) is the deployment of a limited number of sensors so as to achieve maximum coverage of a terrain.
  Cellular neural networks, the Navier-Stokes equation, and microarray image reconstruction.

    PubMed

    Zineddin, Bachar; Wang, Zidong; Liu, Xiaohui

    2011-11-01

    Although the last decade has witnessed a great deal of improvement in microarray technology, major developments are still needed in all its main stages, including image processing. Some hardware implementations of microarray image processing have been proposed in the literature and proved to be promising alternatives to the currently available software systems. However, the main drawback of those approaches is that they do not address quantification of the gene spot in a realistic way, that is, without assumptions about the image surface. Our aim in this paper is to present a new image-reconstruction algorithm using a cellular neural network that solves the Navier-Stokes equation. This algorithm offers a robust method for estimating the background signal within the gene-spot region. The MATCNN toolbox for Matlab is used to test the proposed method. Quantitative comparisons are carried out in terms of objective criteria between our approach and other available methods. It is shown that the proposed algorithm gives highly accurate and realistic measurements in a fully automated manner within a remarkably efficient time.

  Improving the complementary methods to estimate evapotranspiration under diverse climatic and physical conditions

    NASA Astrophysics Data System (ADS)

    Anayah, F. M.; Kaluarachchi, J. J.

    2014-06-01

    Reliable estimation of evapotranspiration (ET) is important for water resources planning and management. Complementary methods, including complementary relationship areal evapotranspiration (CRAE), advection aridity (AA) and Granger and Gray (GG), have been used to estimate ET because these methods are simple and practical, estimating regional ET from meteorological data only. However, prior studies have found limitations in these methods, especially in contrasting climates. This study aims to develop a calibration-free universal method using the complementary relationships to compute regional ET under contrasting climatic and physical conditions with meteorological data only. The proposed methodology consists of a systematic sensitivity analysis using the existing complementary methods. This work used 34 global FLUXNET sites where eddy covariance (EC) fluxes of ET are available for validation. A total of 33 alternative model variations from the original complementary methods were proposed. Further analysis using statistical methods and simplified climatic class definitions produced one distinctly improved GG-model-based alternative. The proposed model produced a single-step ET formulation with results equal to or better than recent studies using data-intensive, classical methods. Average root mean square error (RMSE), mean absolute bias (BIAS) and coefficient of determination (R²) across the 34 global sites were 20.57 mm month⁻¹, 10.55 mm month⁻¹ and 0.64, respectively. The proposed model is a step toward predicting ET in large river basins with limited data and no calibration.
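The three validation statistics quoted in this record are standard and easy to reproduce. A short sketch with our own function and variable names; note that BIAS is read here as mean absolute bias, following the abstract's wording:

```python
import numpy as np

def validation_stats(obs, pred):
    """RMSE, mean absolute bias and R^2 between observed and predicted ET."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    bias = np.mean(np.abs(pred - obs))
    r2 = 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, bias, r2

# Example with made-up monthly ET values (mm/month):
obs = [55.0, 60.0, 72.0, 90.0, 110.0, 95.0]
pred = [50.0, 66.0, 70.0, 97.0, 100.0, 99.0]
print(validation_stats(obs, pred))
```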
  [New method of conduction anesthesia in the maxilla].

    PubMed

    Efimov, Iu V; Tel'ianova, Iu V; Efimova, E Iu

    2014-01-01

    This study aimed to improve the efficiency of intraosseous anesthesia in the maxilla by blocking infraorbital nerve conduction along its entire length. The experimental part defined the puncture site for the needle and characterized the spread of contrast medium in the upper jaw. The clinical part of the study demonstrated the advantages of the proposed method of intraosseous anesthesia.

  The Intelligence of Complexity: Do the Ethical Aims of Research and Intervention in Education Not Lead Us to a New Discourse "On the Study Methods of Our Time"?

    ERIC Educational Resources Information Center

    Le Moigne, Jean-Louis

    2013-01-01

    To better appreciate the contribution of the "paradigm of complexity" to the educational sciences, this paper proposes a framework discussing its cultural and historical roots.
    First, it focuses on Giambattista Vico's (1668-1744) critique of René Descartes' method (1637), contrasting Cartesian principles (evidence, disjunction, linear…

  Human joint motion estimation for electromyography (EMG)-based dynamic motion control.

    PubMed

    Zhang, Qin; Hosoda, Ryo; Venture, Gentiane

    2013-01-01

    This study aims to investigate a joint motion estimation method from electromyography (EMG) signals during dynamic movement. In most EMG-based humanoid or prosthetics control systems, EMG features are directly or indirectly used to trigger intended motions. However, both physiological and non-physiological factors can influence EMG characteristics during dynamic movements, resulting in subject-specific, non-stationary and crosstalk problems. In particular, when motion velocity and/or joint torque are not constrained, joint motion estimation from EMG signals is more challenging. In this paper, we propose a joint motion estimation method based on muscle activation recorded from a pair of agonist and antagonist muscles of the joint. A linear state-space model with multiple inputs and a single output is proposed to map the muscle activity to joint motion, and an adaptive estimation method is proposed to train the model. The estimation performance is evaluated on a single elbow flexion-extension movement in two subjects. All the results, for two subjects at two load levels, indicate the feasibility and suitability of the proposed method for joint motion estimation. The estimation root-mean-square error is within 8.3%-10.6%, which is lower than that reported in several previous studies. Moreover, the method is able to overcome the subject-specific problem and compensate for non-stationary EMG properties.
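The record above maps a pair of agonist/antagonist muscle activations to joint motion with a linear state-space model trained adaptively; the details are not in the abstract. As a rough stand-in, the sketch below fits a lagged linear multi-input, single-output model by ordinary least squares (all names and the synthetic data are ours):

```python
import numpy as np

def fit_linear_miso(u_agonist, u_antagonist, angle, lags=3):
    """Least-squares fit of a lagged linear model from a pair of muscle
    activations (two inputs) to joint angle (single output).

    Simplification: the paper uses a state-space model with adaptive
    estimation, but the input/output structure is the same.
    """
    T = len(angle)
    rows = []
    for t in range(lags, T):
        rows.append(np.concatenate([
            u_agonist[t - lags:t + 1],      # current + past agonist samples
            u_antagonist[t - lags:t + 1],   # current + past antagonist samples
            [1.0],                          # bias term
        ]))
    A = np.vstack(rows)
    y = np.asarray(angle[lags:])
    theta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return theta

# Synthetic stand-in for processed EMG envelopes and a joint angle.
rng = np.random.default_rng(1)
u1, u2 = rng.random(200), rng.random(200)
angle = 40.0 * u1 - 35.0 * u2 + rng.normal(0.0, 0.5, 200)
theta = fit_linear_miso(u1, u2, angle)
print(theta.shape)  # (2 * (lags + 1) + 1,)
```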
  Ultra-Short-Term Wind Power Prediction Using a Hybrid Model

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    This paper aims to develop and apply a hybrid model of two data-analytical methods, multiple linear regression and least squares (MLR&LS), for ultra-short-term wind power prediction (WPP), taking the Northeast China electricity demand as an example. The data were obtained from historical records of wind power from an offshore region and from a wind farm of the wind power plant in the area. The WPP is achieved in two stages: first, the ratios of wind power are forecast using the proposed hybrid method, and then these ratios are transformed to obtain the forecast values. The hybrid model combines the persistence method, MLR and LS, and includes two prediction types, multi-point prediction and single-point prediction. WPP is tested against different models such as the autoregressive moving average (ARMA), autoregressive integrated moving average (ARIMA) and artificial neural network (ANN). By comparing the results of the above models, the validity of the proposed hybrid model is confirmed in terms of error and correlation coefficient, and the comparison confirms that the proposed method works effectively. Additionally, forecasting errors were computed and compared to improve the understanding of how to depict highly variable WPP and the correlations between actual and predicted wind power.

  Comparison of Control Group Generating Methods.

    PubMed

    Szekér, Szabolcs; Fogarassy, György; Vathy-Fogarassy, Ágnes

    2017-01-01

    Retrospective studies suffer from drawbacks such as selection bias. As the selection of the control group has a significant impact on the evaluation of the results, it is very important to find the proper method to generate the most appropriate control group. In this paper we suggest two nearest-neighbor-based control group selection methods that aim to achieve good matching between the individuals of the case and control groups. The effectiveness of the proposed methods is evaluated by runtime and accuracy tests, and the results are compared to the classical stratified sampling method.
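To make the matching idea in this record concrete, here is a greedy 1:1 nearest-neighbor control selection sketch. The paper proposes two nearest-neighbor variants whose details are not given in the abstract; this simple greedy version, with our own names and synthetic covariates, is only meant to illustrate the general idea:

```python
import numpy as np

def select_controls(cases, pool):
    """For each case, pick the closest not-yet-used candidate from the
    pool (Euclidean distance over standardized covariates)."""
    pool = np.asarray(pool, dtype=float)
    used = np.zeros(len(pool), dtype=bool)
    chosen = []
    for c in np.asarray(cases, dtype=float):
        d = np.linalg.norm(pool - c, axis=1)
        d[used] = np.inf                 # each control used at most once
        j = int(np.argmin(d))
        used[j] = True
        chosen.append(j)
    return chosen

rng = np.random.default_rng(2)
cases = rng.normal(0, 1, size=(10, 3))   # e.g. age, BMI, score (standardized)
pool = rng.normal(0, 1, size=(100, 3))   # candidate controls
print(select_controls(cases, pool))      # indices of the matched controls
```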
  Non-invasive, investigative methods in skin aging.

    PubMed

    Longo, C; Ciardo, S; Pellacani, G

    2015-12-01

    A precise and noninvasive quantification of aging is of utmost importance for in vivo assessment of the skin aging "stage", and thus for acting to minimize it. Several bioengineering methods have been proposed to objectively, precisely and non-invasively measure skin aging, and to detect early skin damage that is observable only sub-clinically. In this review we describe the most relevant methods that have emerged from recently introduced technologies, aiming at quantitatively assessing the effects of aging on the skin.

  Noise-Assisted Concurrent Multipath Traffic Distribution in Ad Hoc Networks

    PubMed Central

    Murata, Masayuki

    2013-01-01

    The concept of biologically inspired networking has been introduced to tackle unpredictable and unstable situations in computer networks, especially in wireless ad hoc networks, where network conditions are continuously changing and control methods therefore need robustness and adaptability. Unfortunately, existing methods often rely heavily on detailed knowledge of each network component and on preconfigured, that is, fine-tuned, parameters. In this paper, we utilize a new concept, called attractor perturbation (AP), which enables controlling the network performance using only end-to-end information. Based on AP, we propose a concurrent multipath traffic distribution method which aims at lowering the average end-to-end delay by adjusting only the transmission rate on each path. We demonstrate through simulations that, by utilizing the attractor perturbation relationship, the proposed method achieves a lower average end-to-end delay compared to other methods which do not take fluctuations into account. PMID:24319375
  Research on Finite Element Model Generating Method of General Gear Based on Parametric Modelling

    NASA Astrophysics Data System (ADS)

    Lei, Yulong; Yan, Bo; Fu, Yao; Chen, Wei; Hou, Liguo

    2017-06-01

    To address the low efficiency and poor quality of gear meshing in current mainstream finite element software, a universal three-dimensional gear model is established and the rules of element and node arrangement are explored. In this paper, a parameterization-based finite element model generation method for universal gears is proposed. A Visual Basic program is used to carry out the finite element meshing, assign the material properties, and set the boundary/load conditions and other pre-processing work. The dynamic meshing analysis of the gears is carried out with the method proposed in this paper and compared with calculated values to verify the correctness of the method. The method greatly reduces the workload of gear finite element pre-processing, improves the quality of the gear mesh, and provides a new idea for FEM pre-processing.

  A Novel Method for Block Size Forensics Based on Morphological Operations

    NASA Astrophysics Data System (ADS)

    Luo, Weiqi; Huang, Jiwu; Qiu, Guoping

    Passive forensics analysis aims to find out how multimedia data were acquired and processed, without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensic analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. Experimental results evaluated on over 1300 natural images show the effectiveness of our proposed method.
    Compared with the existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.

  Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. It is impossible to solve the problem exactly because of its complexity; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
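The objective driving this optimization, mutual coherence, is simple to compute for a given projection/sparsifying pair; a sketch (function name and toy matrices are ours):

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Mutual coherence of the effective dictionary D = Phi @ Psi:
    the largest absolute inner product between distinct normalized
    columns. Lower coherence means fewer samples needed for recovery.
    """
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(D.T @ D)                               # Gram matrix
    np.fill_diagonal(G, 0.0)
    return G.max()

rng = np.random.default_rng(3)
Phi = rng.normal(size=(20, 64))  # random projection (the usual baseline)
Psi = np.eye(64)                 # identity sparsifying basis, for simplicity
print(mutual_coherence(Phi, Psi))
```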
  Segmentation of malignant lesions in 3D breast ultrasound using a depth-dependent model.

    PubMed

    Tan, Tao; Gubern-Mérida, Albert; Borelli, Cristina; Manniesing, Rashindra; van Zelst, Jan; Wang, Lei; Zhang, Wei; Platel, Bram; Mann, Ritse M; Karssemeijer, Nico

    2016-07-01

    Automated 3D breast ultrasound (ABUS) has been proposed as a complementary screening modality to mammography for early detection of breast cancers. To facilitate the interpretation of ABUS images, automated diagnosis and detection techniques are being developed, in which malignant lesion segmentation plays an important role. However, automated segmentation of cancer in ABUS is challenging since lesion edges might not be well defined. In this study, the authors aim at developing an automated segmentation method for malignant lesions in ABUS that is robust to ill-defined cancer edges and posterior shadowing. A segmentation method using depth-guided dynamic programming based on spiral scanning is proposed. The method automatically adjusts the aggressiveness of the segmentation according to the position of the voxels relative to the lesion center: segmentation is more aggressive in the upper part of the lesion (close to the transducer) than at the bottom (far away from the transducer), where posterior shadowing is usually visible. The authors used the Dice similarity coefficient (Dice) for evaluation, comparing the proposed method to existing state-of-the-art approaches such as graph cut, level set and smart opening, and to an existing dynamic programming method without depth dependence. In a dataset of 78 cancers, the proposed segmentation method achieved a mean Dice of 0.73 ± 0.14, outperforming the existing dynamic programming method (0.70 ± 0.16) on this task (p = 0.03); it is also significantly (p < 0.001) better than graph cut (0.66 ± 0.18), the level set based approach (0.63 ± 0.20) and smart opening (0.65 ± 0.12). The proposed depth-guided dynamic programming method achieves accurate malignant lesion segmentation results in automated breast ultrasound.
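The evaluation metric used throughout this record, the Dice similarity coefficient, is worth stating concretely; a short sketch with made-up binary masks:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1."""
    seg = np.asarray(seg, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

a = np.zeros((10, 10), dtype=bool); a[2:7, 2:7] = True
b = np.zeros((10, 10), dtype=bool); b[3:8, 3:8] = True
print(round(dice(a, b), 2))  # 0.64 for two shifted squares
```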
  Using a fuzzy comprehensive evaluation method to determine product usability: A proposed theoretical framework

    PubMed Central

    Zhou, Ronggang; Chan, Alan H. S.

    2016-01-01

    BACKGROUND: In order to compare existing usability data to ideal goals or to those of other products, usability practitioners have tried to develop a framework for deriving an integrated metric. However, most current usability methods with this aim rely heavily on human judgment about the various attributes of a product, and often fail to take into account the inherent uncertainties in these judgments in the evaluation process. OBJECTIVE: This paper presents a universal method of usability evaluation combining the analytic hierarchical process (AHP) and the fuzzy evaluation method. By integrating multiple sources of uncertain information during product usability evaluation, the method proposed here aims to derive an index that is structured hierarchically in terms of the three usability components of a product: effectiveness, efficiency and user satisfaction. METHODS: With consideration of the theoretical basis of fuzzy evaluation, a two-layer comprehensive evaluation index was first constructed. After the membership functions were determined by an expert panel, the evaluation appraisals were computed using the fuzzy comprehensive evaluation technique to characterize fuzzy human judgments. Then, with the use of AHP, the weights of usability components were elicited from these experts. RESULTS AND CONCLUSIONS: Compared to traditional usability evaluation methods, the major strength of the fuzzy method is that it captures the fuzziness and uncertainties in human judgments and provides an integrated framework that combines the vague judgments from multiple stages of a product evaluation process. PMID:28035943
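To make the aggregation step of this framework concrete: in a fuzzy comprehensive evaluation, expert judgments enter as a membership matrix and the AHP supplies component weights. The numbers, grade scale and weighted-average operator below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# AHP-derived weights for effectiveness, efficiency and satisfaction
# (made-up numbers; the paper elicits them from an expert panel).
weights = np.array([0.5, 0.3, 0.2])

# Membership matrix R: for each usability component (rows), the degree of
# membership in each appraisal grade (columns: poor, fair, good, excellent).
R = np.array([
    [0.1, 0.2, 0.5, 0.2],
    [0.0, 0.3, 0.4, 0.3],
    [0.2, 0.2, 0.4, 0.2],
])

# Fuzzy comprehensive evaluation with the common weighted-average operator:
b = weights @ R
b /= b.sum()                 # normalized appraisal vector
print(b)                     # collective grade distribution
grades = np.array([25, 50, 75, 100])
print(float(b @ grades))     # defuzzified usability score (~70.5 here)
```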
  SU-G-IeP4-03: Cone Beam X-Ray Luminescence Computed Tomography Based On Generalized Gaussian Markov Random Field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, G; Xing, L

    2016-06-15

    Purpose: Cone beam X-ray luminescence computed tomography (CB-XLCT), which aims to achieve molecular and functional imaging by X-rays, has recently been proposed as a new imaging modality. However, the inverse problem of CB-XLCT is seriously ill-conditioned, hindering good image quality. In this work, a novel reconstruction method based on Bayesian theory is proposed to tackle this problem. Methods: Bayesian theory provides a natural framework for utilizing various kinds of available prior information to improve reconstruction image quality. A generalized Gaussian Markov random field (GGMRF) model is proposed here to construct the prior model of the Bayesian framework. The most important feature of the GGMRF model is the adjustable shape parameter p, which can be continuously adjusted from 1 to 2. The reconstructed image tends to have more edge-preserving properties as p approaches 1, and more noise tolerance as p approaches 2, mirroring the behavior of L1 and L2 regularization methods, respectively. The proposed method thus provides a flexible regularization framework that adapts to a wide range of applications. Results: Numerical simulations were implemented to test the performance of the proposed method. The Digimouse atlas was employed to construct a three-dimensional mouse model, with two small cylinders placed inside to serve as targets. Reconstruction results show that the proposed method tends to obtain better spatial resolution with a smaller shape parameter, and a better signal-to-noise image with a larger shape parameter. Quantitative indexes, contrast-to-noise ratio (CNR) and full-width at half-maximum (FWHM), were used to assess the performance of the proposed method and confirmed its effectiveness in CB-XLCT reconstruction. Conclusion: A novel reconstruction method for CB-XLCT is proposed based on the GGMRF model, which enables an adjustable performance tradeoff between L1 and L2 regularization. Numerical simulations were conducted to demonstrate its performance.
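A minimal sketch of the kind of prior energy a GGMRF model assigns, assuming a 2D image and a first-order (horizontal/vertical) neighborhood; the normalization and all names are ours, not from the paper:

```python
import numpy as np

def ggmrf_energy(img, p=1.5, sigma=1.0):
    """Prior energy of a 2D image under a generalized Gaussian MRF:
    sum over horizontally/vertically adjacent pixel pairs of
    |x_i - x_j|**p / (p * sigma**p). p near 1 gives an edge-preserving,
    L1-like penalty; p near 2 gives a smoothing, L2-like penalty.
    """
    dx = np.abs(np.diff(img, axis=0)) ** p
    dy = np.abs(np.diff(img, axis=1)) ** p
    return (dx.sum() + dy.sum()) / (p * sigma ** p)

# With p = 1 a sharp step costs the same as a gradual ramp of equal
# height (edges are not discouraged); with p = 2 the sharp step costs
# far more, pushing reconstructions toward smoothness.
sharp = np.zeros((32, 32)); sharp[:, 16:] = 1.0
soft = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))
for p in (1.0, 1.5, 2.0):
    print(p, ggmrf_energy(sharp, p), ggmrf_energy(soft, p))
```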
  A fast point-cloud computing method based on spatial symmetry of Fresnel field

    NASA Astrophysics Data System (ADS)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holograms (CGH) pose a great challenge for real-time holographic video display systems because of the high spatial-bandwidth product (SBP) they require. This paper is based on the point-cloud method and takes advantage of the reversibility of Fresnel diffraction along the propagation direction, together with the spatial symmetry of the fringe pattern of a point source, known as the Gabor zone plate, which can serve as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at the virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at the dummy plane is obtained; finally, Fresnel propagation from the dummy plane to the hologram plane yields the hologram. Simulation experiments and optical experiments based on Liquid Crystal on Silicon (LCOS) are set up to demonstrate the validity of the proposed method. While ensuring the quality of the 3D reconstruction, the method can shorten computation time and improve computational efficiency.

  Feature extraction using extrema sampling of discrete derivatives for spike sorting in implantable upper-limb neural prostheses.

    PubMed

    Zamani, Majid; Demosthenous, Andreas

    2014-07-01

    Next-generation neural interfaces for upper-limb (and other) prostheses aim to develop implantable interfaces for one or more nerves, each interface having many neural signal channels that work reliably in the stump without harming the nerves. To achieve real-time multi-channel processing it is important to integrate spike sorting on-chip to overcome limitations in transmission bandwidth. This requires computationally efficient algorithms for feature extraction and clustering that are suitable for low-power hardware implementation. This paper describes a new feature extraction method for real-time spike sorting based on extrema analysis (namely positive and negative peaks) of spike shapes and their discrete derivatives at different frequency bands. Employing simulation across different datasets, the accuracy and computational complexity of the proposed method are assessed and compared with other methods. The average classification accuracy of the proposed method in conjunction with online sorting (O-Sort) is 91.6%, outperforming all the other methods tested with the O-Sort clustering algorithm. The proposed method offers a better tradeoff between classification error and computational complexity, making it a particularly strong choice for on-chip spike sorting.

  An optimization design proposal of automated guided vehicles for mixed type transportation in hospital environments.

    PubMed

    González, Domingo; Romero, Luis; Espinosa, María Del Mar; Domínguez, Manuel

    2017-01-01

    The aim of this paper is to present an optimization proposal for the design of the automated guided vehicles used in hospital logistics, and to analyze the impact of its implementation in a real environment. The proposal is based on the design of those elements that would allow the vehicles to deliver an extra cart by the towing method; its intention is thus to improve the productivity and performance of the current vehicles by using a transportation method with combined carts. The study was developed following concurrent engineering premises from three different viewpoints: first, the sequence of operations is described; second, a design of the equipment is undertaken; finally, the impact of the proposal is analyzed using real data from the Hospital Universitario Río Hortega in Valladolid (Spain). In this particular case, implementing the analyzed proposal in the hospital can reduce the current time of use by over 35%. This result may allow new tasks to be added to the vehicles, and accordingly both a new kind of vehicle and a specific module can be developed to obtain better performance.

  Lipid-anthropometric index optimization for insulin sensitivity estimation

    NASA Astrophysics Data System (ADS)

    Velásquez, J.; Wong, S.; Encalada, L.; Herrera, H.; Severeyn, E.

    2015-12-01

    Insulin sensitivity (IS) is the ability of cells to react to insulin's presence; when this ability is diminished, low insulin sensitivity or insulin resistance (IR) is considered. IR has been related to other metabolic disorders such as metabolic syndrome (MS), obesity, dyslipidemia and diabetes. IS can be determined using direct or indirect methods. The indirect methods are less accurate and less invasive than the direct ones, and use glucose and insulin values from the oral glucose tolerance test (OGTT). Accuracy is established via the Spearman rank correlation coefficient between the direct and indirect method. This paper aims to propose a lipid-anthropometric index which offers acceptable correlation with insulin sensitivity for different populations (DB1 = MS subjects, DB2 = sedentary subjects without MS, DB3 = marathoner subjects) without using OGTT glucose and insulin values.
    The proposed method is parametrically optimized through random cross-validation, using the Spearman rank correlation with the CAUMO method as the comparator. CAUMO is an indirect method designed as a simplification of the minimal-model intravenous glucose tolerance test (MINMOD-IGTT) direct method, with acceptable correlation (0.89). The results show that the optimized method achieves better correlation with CAUMO in all populations than the non-optimized one. It was also observed that the optimized method correlates better with CAUMO in the DB2 and DB3 groups than the HOMA-IR method, which is the most widely used for diagnosing insulin resistance. The optimized method could detect incipient insulin resistance by classifying as insulin-resistant those subjects who present impaired postprandial insulin and glucose values.
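Since HOMA-IR is the baseline index the optimized method is compared against, its definition is worth recalling; a sketch (the cutoff mentioned in the comment is a common convention, not a value from the paper):

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uU_ml):
    """HOMA-IR: (fasting glucose [mg/dL] * fasting insulin [uU/mL]) / 405.
    With glucose in mmol/L the usual denominator is 22.5 instead."""
    return fasting_glucose_mg_dl * fasting_insulin_uU_ml / 405.0

# A value above roughly 2.5 is a commonly used insulin-resistance cutoff,
# although thresholds are population-dependent.
print(homa_ir(100.0, 12.0))  # ~2.96
```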
  A Simple and Automatic Method for Locating Surgical Guide Hole

    NASA Astrophysics Data System (ADS)

    Li, Xun; Chen, Ming; Tang, Kai

    2017-12-01

    Restoration-driven surgical guides are widely used in implant surgery. This study aims to provide a simple and valid method for automatically locating the surgical guide hole, which can reduce reliance on the operator's experience and improve the design efficiency and quality of surgical guides. Little literature can be found on this topic, and this paper proposes a novel and simple method to solve the problem. A local coordinate system for each objective tooth is geometrically constructed in a CAD system. This coordinate system captures the dental anatomical features well, and the central axis of the objective tooth (coinciding with the corresponding guide hole axis) can be quickly evaluated in it, completing the location of the guide hole. The proposed method has been verified against two types of benchmarks: manual operation by a skilled doctor with over 15 years of experience (used in most hospitals) and automatic location using the popular commercial package Simplant (used in few hospitals). Both the benchmarks and the proposed method are analyzed in terms of their stress distribution when chewing and biting, with the stress distribution visually shown and plotted as a graph. The results show that the proposed method yields a much better stress distribution than the manual operation and a slightly better one than Simplant, which will significantly reduce the risk of cervical margin collapse and extend the wear life of the restoration.

  A Multi-Channel Method for Retrieving Surface Temperature for High-Emissivity Surfaces from Hyperspectral Thermal Infrared Images

    PubMed Central

    Zhong, Xinke; Labed, Jelila; Zhou, Guoqing; Shao, Kun; Li, Zhao-Liang

    2015-01-01

    The surface temperature (ST) of high-emissivity surfaces is an important parameter in climate systems. The empirical methods for retrieving ST for high-emissivity surfaces from hyperspectral thermal infrared (HypTIR) images require spectrally continuous channel data. This paper aims to develop a multi-channel method for retrieving ST for high-emissivity surfaces from space-borne HypTIR data. With an assumed land surface emissivity (LSE) of 1, ST is proposed as a function of 10 brightness temperatures measured at the top of the atmosphere by a radiometer having a spectral interval of 800–1200 cm−1 and a spectral sampling frequency of 0.25 cm−1. We have analyzed the sensitivity of the proposed method to spectral sampling frequency and instrumental noise, and evaluated the proposed method using satellite data. The results indicated that the parameters of the developed function depend on the spectral sampling frequency, and that the ST of high-emissivity surfaces can be accurately retrieved by the proposed method if appropriate values are used for each spectral sampling frequency. The results also showed that the accuracy of the retrieved ST is of the order of magnitude of the instrumental noise, and that the root mean square error (RMSE) of the ST retrieved from satellite data is 0.43 K in comparison with the AVHRR SST product. PMID:26061199

  An efficient diagnosis system for Parkinson's disease using kernel-based extreme learning machine with subtractive clustering features weighting approach.

    PubMed

    Ma, Chao; Ouyang, Jihong; Chen, Hui-Ling; Zhao, Xue-Hua

    2014-01-01

    A novel hybrid method named SCFW-KELM, which integrates effective subtractive clustering features weighting (SCFW) and a fast kernel-based extreme learning machine (KELM) classifier, has been introduced for the diagnosis of PD. In the proposed method, SCFW is used as a data preprocessing tool which aims at decreasing the variance in the features of the PD dataset, in order to further improve the diagnostic accuracy of the KELM classifier. The impact of the type of kernel function on the performance of KELM has been investigated in detail.
    The efficiency and effectiveness of the proposed method have been rigorously evaluated on the PD dataset in terms of classification accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), f-measure and kappa statistic. Experimental results demonstrate that the proposed SCFW-KELM significantly outperforms SVM-based, KNN-based and ELM-based approaches and other methods in the literature, and achieved the highest classification results reported so far via the 10-fold cross-validation scheme, with a classification accuracy of 99.49%, sensitivity of 100%, specificity of 99.39%, AUC of 99.69%, f-measure of 0.9964 and kappa value of 0.9867. Promisingly, the proposed method might serve as a new candidate among powerful methods for the diagnosis of PD.

  Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in EEG-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set, with two significant advantages: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model that selects the optimal feature subset based on the Kullback-Leibler divergence measure and automatically selects the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency band filtering and common spatial pattern enhancement as preprocessing; feature extraction by an autoregressive model and log-variance; Kullback-Leibler divergence based optimal feature and time segment selection; and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signals classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that it yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
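The divergence measure at the core of this record has a closed form for Gaussian-distributed features. A sketch of KL-based feature ranking under a univariate-Gaussian assumption (the model choice and all names are ours; the paper's pipeline is more elaborate):

```python
import numpy as np

def kl_gauss(mu0, var0, mu1, var1):
    """KL divergence between two univariate Gaussians:
    KL(N0 || N1) = log(s1/s0) + (var0 + (mu0 - mu1)^2) / (2*var1) - 1/2."""
    return 0.5 * np.log(var1 / var0) \
        + (var0 + (mu0 - mu1) ** 2) / (2.0 * var1) - 0.5

def rank_features(X, y):
    """Score each feature by the symmetrized KL divergence between its
    class-conditional Gaussian fits; higher = more discriminative."""
    scores = []
    for j in range(X.shape[1]):
        a, b = X[y == 0, j], X[y == 1, j]
        d = kl_gauss(a.mean(), a.var(), b.mean(), b.var()) \
            + kl_gauss(b.mean(), b.var(), a.mean(), a.var())
        scores.append(d)
    return np.argsort(scores)[::-1]  # feature indices, best first

rng = np.random.default_rng(4)
y = rng.integers(0, 2, 300)
X = rng.normal(size=(300, 5))
X[:, 2] += 1.5 * y                   # only feature 2 carries class information
print(rank_features(X, y))           # feature 2 should rank first
```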
  Optimizing location of manufacturing industries in the context of economic globalization: A bi-level model based approach

    NASA Astrophysics Data System (ADS)

    Wu, Shanhua; Yang, Zhongzhen

    2018-07-01

    This paper aims to optimize the locations of manufacturing industries in the context of economic globalization by proposing a bi-level programming model that integrates a location optimization model with a traffic assignment model. In the model, the transport network is divided into subnetworks for raw materials and for products. The upper-level model is used to determine the locations of industries and the OD matrices of raw materials and products, while the lower-level model calculates the attributes of traffic flow under given OD matrices. A genetic algorithm is designed to solve the model. The proposed method is tested using the Chinese steel industry as an example.
    The results indicate that the proposed method can help decision-makers implement location decisions for manufacturing industries effectively.

  Orthogonal Array Testing for Transmit Precoding based Codebooks in Space Shift Keying Systems

    NASA Astrophysics Data System (ADS)

    Al-Ansi, Mohammed; Alwee Aljunid, Syed; Sourour, Essam; Mat Safar, Anuar; Rashidi, C. B. M.

    2018-03-01

    In Space Shift Keying (SSK) systems, transmit precoding based codebook approaches have been proposed to improve performance in limited-feedback channels. The receiver performs an exhaustive search in a predefined Full-Combination (FC) codebook to select the optimal codeword that maximizes the Minimum Euclidean Distance (MED) between the received constellations. This research aims to reduce the codebook size in order to minimize the selection time and the number of feedback bits. We therefore propose constructing the codebooks based on Orthogonal Array Testing (OAT) methods, owing to their powerful inherent properties. These methods make it possible to obtain a short codebook whose codewords cover almost all the possible effects included in the FC codebook. Numerical results show the effectiveness of the proposed OAT codebooks in terms of system performance and complexity.

  A Support Vector Machine-Based Gender Identification Using Speech Signal

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk

    We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that classifies two groups by finding a nonlinear boundary in a feature space, and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel-frequency cepstral coefficients (MFCC). A novel approach incorporating a features-fusion scheme based on a combination of the MFCC and the fundamental frequency is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme.
    Moreover, the performance is substantially improved when the proposed features-fusion technique is applied.
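A minimal sketch of this kind of classifier, assuming precomputed per-utterance features (synthetic stand-ins below; real MFCCs and fundamental-frequency estimates would come from a speech front end, which the abstract does not detail):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: rows are utterances, columns are 12 averaged MFCCs plus a
# fundamental-frequency estimate (the fused feature vector); labels are
# 0 = male, 1 = female.
rng = np.random.default_rng(5)
n = 400
f0 = np.where(rng.random(n) < 0.5,
              rng.normal(120.0, 20.0, n),   # typical male F0 range
              rng.normal(210.0, 25.0, n))   # typical female F0 range
labels = (f0 > 165.0).astype(int)
mfcc = rng.normal(size=(n, 12)) + 0.5 * labels[:, None]
X = np.column_stack([mfcc, f0])             # MFCC + F0 fusion

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25,
                                          random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))  # nonlinear boundary
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```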
  Hyperspectral interventional imaging for enhanced tissue visualization and discrimination combining band selection methods.

    PubMed

    Nouri, Dorra; Lucas, Yves; Treuillet, Sylvie

    2016-12-01

    Hyperspectral imaging is an emerging technology recently introduced in medical applications, inasmuch as it provides a powerful tool for noninvasive tissue characterization. In this context, a new system was designed to be easily integrated in the operating room in order to detect anatomical tissues hardly noticeable to the surgeon's naked eye. Our LCTF-based spectral imaging system operates over the visible, near- and middle-infrared spectral ranges (400-1700 nm) and is dedicated to enhancing critical biological tissues such as the ureter and the facial nerve. We aim to find the three most relevant bands to create an RGB image to display during the intervention, with maximal contrast between the target tissue and its surroundings. A comparative study was carried out between band selection methods and band transformation methods, and combined band selection methods are proposed; all methods are compared using different evaluation criteria. Experimental results show that the proposed combined band selection methods provide the best performance, with rich information, high tissue separability and short computational time, and yield a significant discrimination between biological tissues. The proposed methods provide an acceptable trade-off between the evaluation criteria, especially in the SWIR spectral band, which outperforms the naked eye's capabilities.

  An adaptive block-based fusion method with LUE-SSIM for multi-focus images

    NASA Astrophysics Data System (ADS)

    Zheng, Jianing; Guo, Yongcai; Huang, Yukun

    2016-09-01

    Because of the lenses' limited depth of field, digital cameras are incapable of acquiring an all-in-focus image of objects at varying distances in a scene. Multi-focus image fusion techniques can effectively solve this problem, but block-based multi-focus image fusion methods often suffer from blocking artifacts. An adaptive block-based fusion method based on lifting undistorted-edge structural similarity (LUE-SSIM) is put forward. In this method, the image quality metric LUE-SSIM is first proposed; it utilizes the characteristics of the human visual system (HVS) and structural similarity (SSIM) to make the metric consistent with human visual perception. A particle swarm optimization (PSO) algorithm, with LUE-SSIM as the objective function, is used to optimize the block size for constructing the fused image. Experimental results on the LIVE image database show that LUE-SSIM outperforms SSIM on Gaussian defocus blur image quality assessment. Besides, a multi-focus image fusion experiment is carried out to verify the proposed image fusion method in terms of visual and quantitative evaluation. The results show that the proposed method performs better than some other block-based methods, especially in reducing the blocking artifacts of the fused image.
    Our method can also effectively preserve the undistorted edge details in the focused regions of the source images.

  An Easy Way to One-Dimensional Elastic Collisions

    ERIC Educational Resources Information Center

    Sztrajman, Jorge; Sztrajman, Alejandro

    2017-01-01

    The aim of this paper is to propose a method for solving head-on elastic collisions without algebraic complications, emphasizing the use of the fundamental conservation laws. Head-on elastic collisions are treated in many physics textbooks as examples of conservation of momentum and kinetic energy.
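The standard result the record above refers to follows directly from the two conservation laws; a sketch with the textbook formulas (the example values are ours):

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities of a head-on (1D) elastic collision, from
    conservation of momentum and kinetic energy:
        v1' = ((m1 - m2) v1 + 2 m2 v2) / (m1 + m2)
        v2' = ((m2 - m1) v2 + 2 m1 v1) / (m1 + m2)
    """
    M = m1 + m2
    return (((m1 - m2) * v1 + 2.0 * m2 * v2) / M,
            ((m2 - m1) * v2 + 2.0 * m1 * v1) / M)

# Equal masses simply exchange velocities:
print(elastic_collision_1d(1.0, 5.0, 1.0, 0.0))   # (0.0, 5.0)
# A heavy ball barely slows; the light one shoots off:
print(elastic_collision_1d(10.0, 5.0, 1.0, 0.0))  # (~4.09, ~9.09)
```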
204. Subband directional vector quantization in radiological image compression

    NASA Astrophysics Data System (ADS)

    Akrout, Nabil M.; Diab, Chaouki; Prost, Remy; Goutte, Robert; Amiel, Michel

    1992-05-01

    The aim of this paper is to propose a new scheme for image compression. The method is very efficient for images which have directional edges, such as the tree-like structure of the coronary vessels in digital angiograms. The method involves two steps. First, the original image is decomposed at different resolution levels using a pyramidal subband decomposition scheme. For decomposition/reconstruction of the image, free of aliasing and boundary errors, we use an ideal band-pass filter bank implemented in the Discrete Cosine Transform (DCT) domain. Second, the high-frequency subbands are vector quantized using a multiresolution codebook with vertical and horizontal codewords which take into account the edge orientation of each subband. The proposed method reduces the blocking effect encountered at low bit rates in conventional vector quantization.

205. Infrared images target detection based on background modeling in the discrete cosine domain

    NASA Astrophysics Data System (ADS)

    Ye, Han; Pei, Jihong

    2018-02-01

    Background modeling is the critical technology for detecting moving targets in video surveillance. Most background modeling techniques are aimed at land monitoring and operate in the spatial domain. Establishing a background becomes difficult when the scene is a complex, fluctuating sea surface. In this paper, the background stability and the separability between target and background are analyzed in depth in the discrete cosine transform (DCT) domain, and on this basis we propose a background modeling method. The proposed method models each frequency point as a single Gaussian model to represent the background, and the target is extracted by suppressing the background coefficients. Experimental results show that our approach can establish an accurate background model for seawater, and the detection results outperform those of other background modeling methods operating in the spatial domain.
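A compact sketch of the modeling idea in record 205, assuming grayscale frames supplied as NumPy arrays: every DCT coefficient carries its own running single-Gaussian background model, and coefficients close to the background mean are zeroed before the inverse transform. The learning rate alpha and the threshold k are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

class DCTBackgroundModel:
    """One Gaussian per DCT frequency point; targets are the coefficients
    that deviate strongly from their background statistics."""

    def __init__(self, shape, alpha=0.05, k=3.0):
        self.mean = np.zeros(shape)   # running mean per coefficient
        self.var = np.ones(shape)     # running variance per coefficient
        self.alpha = alpha            # exponential learning rate
        self.k = k                    # sigma threshold for "not background"

    def apply(self, frame):
        coeffs = dctn(frame.astype(float), norm="ortho")
        z = np.abs(coeffs - self.mean) / np.sqrt(self.var)
        target = np.where(z > self.k, coeffs, 0.0)  # suppress background coefficients
        # update the per-coefficient Gaussian background model
        self.mean = (1 - self.alpha) * self.mean + self.alpha * coeffs
        self.var = (1 - self.alpha) * self.var + self.alpha * (coeffs - self.mean) ** 2
        return idctn(target, norm="ortho")          # back to the spatial domain
```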
206. Empirical solution of Green-Ampt equation using soil conservation service - curve number values

    NASA Astrophysics Data System (ADS)

    Grimaldi, S.; Petroselli, A.; Romano, N.

    2012-09-01

    The Soil Conservation Service - Curve Number (SCS-CN) method is a popular and widely used rainfall-runoff model for quantifying the total stream-flow volume generated by storm rainfall, but its application is not appropriate at sub-daily resolutions. In order to overcome this drawback, the Green-Ampt (GA) infiltration equation is considered and an empirical solution is proposed and evaluated. The procedure, named CN4GA (Curve Number for Green-Ampt), aims to calibrate the Green-Ampt model parameters by distributing in time the global information provided by the SCS-CN method. The proposed procedure is evaluated by analysing observed rainfall-runoff events; the results show that CN4GA provides better agreement with the observed hydrographs than the classic SCS-CN method.
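For reference, the global volume that CN4GA distributes in time comes from the standard SCS-CN relation Q = (P - Ia)^2 / (P - Ia + S), with S = 25400/CN - 254 in millimetres and Ia usually taken as 0.2 S. A small helper:

```python
def scs_cn_runoff(p_mm, cn, lambda_ia=0.2):
    """Total storm runoff depth (mm) from the SCS-CN method; this is the
    global volume CN4GA uses to calibrate the Green-Ampt parameters."""
    s = 25400.0 / cn - 254.0   # potential maximum retention (mm)
    ia = lambda_ia * s         # initial abstraction (mm)
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_cn_runoff(80.0, 75))  # e.g., an 80 mm storm on a CN = 75 catchment
```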
207. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks.

    PubMed

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method incurs too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme.

208. Monitoring surface water quality using social media in the context of citizen science

    NASA Astrophysics Data System (ADS)

    Zheng, Hang; Hong, Yang; Long, Di; Jing, Hua

    2017-02-01

    Surface water quality monitoring (SWQM) provides essential information for water environmental protection. However, SWQM is costly and limited in terms of equipment and sites. The global popularity of social media and of intelligent mobile devices with GPS and photography functions allows citizens to monitor surface water quality. This study aims to propose a method for SWQM using social media platforms. Specifically, a WeChat-based application platform is built to collect water quality reports from volunteers, which have proven valuable for water quality monitoring. Methods for data screening and volunteer recruitment are discussed based on the collected reports. The proposed methods provide a framework for collecting water quality data from citizens and offer a foundation for big data analysis in future research.

209. Sustainable energy planning decision using the intuitionistic fuzzy analytic hierarchy process: choosing energy technology in Malaysia: necessary modifications

    NASA Astrophysics Data System (ADS)

    Al-Qudaimi, Abdullah; Kumar, Amit

    2018-05-01

    Recently, Abdullah and Najib (International Journal of Sustainable Energy 35(4): 360-377, 2016) proposed an intuitionistic fuzzy analytic hierarchy process to deal with uncertainty in decision-making and applied it to establish preferences in the sustainable energy planning decision-making of Malaysia. This work may attract researchers of other countries seeking to choose energy technologies for their countries. However, after a deep study of the published paper, it is noticed that the expression used by Abdullah and Najib in Step 6 of their proposed method, for evaluating the intuitionistic fuzzy entropy of the aggregate of each row of an intuitionistic fuzzy matrix, is not valid. Therefore, it is not sound to use the method of Abdullah and Najib for solving real-life problems. The aim of this paper is to suggest the modifications necessary to resolve these flaws of the Abdullah and Najib method.
210. Intelligent Scheduling for Underground Mobile Mining Equipment.

    PubMed

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    Many studies have been carried out and many commercial software applications have been developed to improve the performance of surface mining operations, especially the loader-truck cycle of surface mining. However, few studies have aimed to improve the mining process of underground mines. In underground mines, mobile mining equipment is mostly scheduled intuitively, without theoretical support for these decisions. Furthermore, in the case of unexpected events, it is hard for miners to rapidly find solutions to reschedule and adapt to the changes. This investigation first introduces the motivation, the technical background, and the objective of the study. A decision support instrument (a schedule optimizer for mobile mining equipment) is proposed and described to address this issue. The method and related algorithms used in this instrument are presented and discussed. The proposed method was tested on a real case at the Kittilä mine in Finland. The results suggest that the proposed method can considerably improve working efficiency and reduce the working time of the underground mine.

211. Optimal External Wrench Distribution During a Multi-Contact Sit-to-Stand Task.

    PubMed

    Bonnet, Vincent; Azevedo-Coste, Christine; Robert, Thomas; Fraisse, Philippe; Venture, Gentiane

    2017-07-01

    This paper aims at developing and evaluating a new practical method for the real-time estimation of joint torques and external wrenches during a multi-contact sit-to-stand (STS) task using kinematics data only. The proposed method also identifies the subject-specific body segment inertial parameters that are required to perform inverse dynamics. The identification phase is performed using simple and repeatable motions. Thanks to an accurately identified model, the estimate of the total external wrench can be used as the input to solve an under-determined multi-contact problem. The problem is solved using a constrained quadratic optimization process minimizing a hybrid human-like energetic criterion. The weights of this hybrid cost function are adjusted, and a sensitivity analysis is performed, in order to reproduce the human external wrench distribution robustly. The results showed that the proposed method could successfully estimate the external wrenches under the buttocks, feet, and hands during STS tasks (RMS error lower than 20 N and 6 N.m). The simplicity and generalization ability of the proposed method pave the way for future diagnosis solutions and rehabilitation applications, including in-home use.
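The distribution step in record 211 can be illustrated with a simpler stand-in for the paper's hybrid human-like criterion: minimize a weighted quadratic cost on the per-contact wrenches subject to their sum matching the measured total, which admits a closed-form solution. The contact list and weights below are hypothetical.

```python
import numpy as np

def distribute_wrench(total_wrench, weights):
    """Split a total external wrench among contacts by minimizing
    sum_i w_i * ||x_i||^2 subject to sum_i x_i = total (closed form:
    each contact takes a share proportional to 1/w_i)."""
    total_wrench = np.asarray(total_wrench, dtype=float)  # shape (6,): force + moment
    inv_w = 1.0 / np.asarray(weights, dtype=float)        # one cost weight per contact
    shares = inv_w / inv_w.sum()
    return np.outer(shares, total_wrench)                 # shape (n_contacts, 6)

# e.g., buttocks, feet, hands with made-up weights penalizing hand loading
print(distribute_wrench([0, 0, 600, 10, 0, 0], weights=[1.0, 0.5, 4.0]))
```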
212. Robotic Online Path Planning on Point Cloud.

    PubMed

    Liu, Ming

    2016-05-01

    This paper deals with the path-planning problem for mobile wheeled or tracked robots which drive in 2.5-D environments, where the traversable surface is usually considered as a 2-D manifold embedded in a 3-D ambient space. Specifically, we aim at solving the 2.5-D navigation problem using a raw point cloud as input. The proposed method is independent of traditional surface parametrization or reconstruction methods, such as a meshing process, which generally have high computational complexity. Instead, we utilize the output of a 3-D tensor voting framework on the raw point clouds. The computation of tensor voting is accelerated by an optimized implementation on a graphics processing unit. Based on the tensor voting results, a novel local Riemannian metric is defined using the saliency components, which helps the modeling of the latent traversable surface. Using the proposed metric, we show experimentally that the geodesic in the 3-D tensor space leads to rational path-planning results. Compared to traditional methods, the results reveal the advantages of the proposed method in terms of smoothing the robot maneuvers while considering the minimum travel distance.

213. Appearance-based representative samples refining method for palmprint recognition

    NASA Astrophysics Data System (ADS)

    Wen, Jiajun; Chen, Yan

    2012-07-01

    Sparse representation can deal with the lack-of-samples problem by utilizing all the training samples. However, the discrimination ability degrades when more training samples are used for representation. We propose a novel appearance-based palmprint recognition method. We aim to find a compromise between discrimination ability and the lack-of-samples problem so as to obtain a proper representation scheme. Under the assumption that the test sample can be well represented by a linear combination of a certain number of training samples, we first select the representative training samples according to the contributions of the samples. Then we further refine the training samples by an iterative procedure, excluding at each iteration the training sample with the least contribution to the test sample. Experiments on the PolyU multispectral palmprint database and the two-dimensional plus three-dimensional palmprint database show that the proposed method outperforms conventional appearance-based palmprint recognition methods. Moreover, we also explore the principles governing the key parameters of the proposed algorithm, which facilitates obtaining high recognition accuracy.
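The refinement loop of record 213 can be sketched as below, assuming training samples stored as rows of a NumPy array; measuring each sample's contribution as the energy of its term in the least-squares representation is one plausible reading of the paper's criterion, not its exact definition.

```python
import numpy as np

def refine_training_set(X_train, y_test, n_keep):
    """Iteratively drop the training sample contributing least to the linear
    representation of the test sample; returns indices of kept samples."""
    idx = list(range(X_train.shape[0]))
    while len(idx) > n_keep:
        A = X_train[idx].T                                   # columns = samples
        coef, *_ = np.linalg.lstsq(A, y_test, rcond=None)
        contrib = [np.linalg.norm(c * A[:, j]) for j, c in enumerate(coef)]
        idx.pop(int(np.argmin(contrib)))                     # discard weakest
    return idx
```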
214. Double-dictionary matching pursuit for fault extent evaluation of rolling bearing based on the Lempel-Ziv complexity

    NASA Astrophysics Data System (ADS)

    Cui, Lingli; Gong, Xiangyang; Zhang, Jianyu; Wang, Huaqing

    2016-12-01

    The quantitative diagnosis of rolling bearing fault severity is crucial to making proper maintenance decisions. Targeting the fault features of rolling bearings, a novel double-dictionary matching pursuit (DDMP) for fault extent evaluation based on the Lempel-Ziv complexity (LZC) index is proposed in this paper. In order to match the features of rolling bearing faults, an impulse time-frequency dictionary and a modulation dictionary are constructed with a parameterized function model, together forming the double dictionary. A novel matching pursuit method based on this double dictionary is then proposed. Vibration signals of rolling bearings with different fault sizes are decomposed and reconstructed by the DDMP. After noise reduction and signal reconstruction, the LZC index is introduced to evaluate the fault extent. Applications of this method to experimental fault signals from bearing outer races and inner races with different degrees of injury have shown that the proposed method can effectively evaluate the fault extent.

215. Assessment of System Frequency Support Effect of PMSG-WTG Using Torque-Limit-Based Inertial Control

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiao; Gao, Wenzhong; Wang, Jianhui

    2017-02-16

    To release the 'hidden inertia' of variable-speed wind turbines for temporary frequency support, a torque-limit-based inertial control method is proposed in this paper. The method aims to improve the frequency support capability while respecting the maximum torque restriction of a permanent magnet synchronous generator. The advantages of the proposed method are an improved frequency nadir (FN) in the event of an under-frequency disturbance, and the avoidance of over-deceleration and of a second frequency dip during the inertial response. The system frequency response differs for different slope values in the power-speed plane along which the inertial response is performed. The proposed method is evaluated in a modified three-machine, nine-bus system. The simulation results show that there is a trade-off between the recovery time and the FN: a gradual slope tends to improve the FN and restrict the rate of change of frequency aggressively, at the cost of an extended recovery time. These results provide insight into how to properly design such inertial control strategies for practical applications.
216. Photovoltaic panel extraction from very high-resolution aerial imagery using region-line primitive association analysis and template matching

    NASA Astrophysics Data System (ADS)

    Wang, Min; Cui, Qi; Sun, Yujie; Wang, Qiao

    2018-07-01

    In object-based image analysis (OBIA), object classification performance is jointly determined by image segmentation, sample or rule setting, and classifiers. Typically, as a crucial step in obtaining object primitives, image segmentation quality significantly influences subsequent feature extraction and analyses. By contrast, template matching extracts specific objects from images and prevents the shape defects caused by image segmentation. However, creating or editing templates is tedious and sometimes results in incomplete or inaccurate templates. In this study, we combine OBIA and template matching techniques to address these problems, aiming at accurate photovoltaic panel (PVP) extraction from very high-resolution (VHR) aerial imagery. The proposed method is based on the previously proposed region-line primitive association framework, in which complementary information between region (segment) and line (straight line) primitives is utilized to achieve more powerful performance than routine OBIA. Several novel concepts, including the mutual fitting ratio and the best-fitting template based on region-line primitive association analyses, are proposed. An automatic template generation and matching method for PVP extraction from VHR imagery is designed to validate the concepts and model. Results show that the proposed method can successfully extract PVPs without any user-specified matching template or training sample. High user independence and accuracy are the main characteristics of the proposed method in comparison with routine OBIA and template matching techniques.

217. Research on vehicle detection based on background feature analysis in SAR images

    NASA Astrophysics Data System (ADS)

    Zhang, Bochuan; Tang, Bo; Zhang, Cong; Hu, Ruiguang; Yun, Hongquan; Xiao, Liping

    2017-10-01

    Aiming at detecting vehicles on the ground in low-resolution SAR images, a method is proposed that first determines the region containing the vehicles and then detects the targets within that specific region. The experimental results show that this method not only reduces the target detection area but also reduces the influence of terrain clutter on detection, which greatly improves the reliability of target detection.
218. "Sure, I Would Like to Continue": A Method for Mapping the Experience of Engagement in Video Games

    ERIC Educational Resources Information Center

    Schonau-Fog, Henrik; Bjorner, Thomas

    2012-01-01

    In order to explore one aspect of the engaging nature of computer games, this study proposes a method that aims at classifying the experience of engagement in video games. Inspired by a literature review, we focus on the fundamental causes of engagement that motivate a player so much that he or she wants to continue playing. By organizing…

219. Quality data collection and management technology of aerospace complex product assembly process

    NASA Astrophysics Data System (ADS)

    Weng, Gang; Liu, Jianhua; He, Yongxi; Zhuang, Cunbo

    2017-04-01

    Aiming at solving the problems of difficult management and poor traceability of quality data in discrete assembly processes, a data collection and management method is proposed which takes the assembly process and the BOM as its core. The method comprises a data collection approach based on workflow technology, a data model based on the BOM, and quality traceability of the assembly process. Finally, an assembly process quality data management system is developed, realizing effective control and management of quality information for complex product assembly processes.

220. Reducing maintenance costs in agreement with CNC machine tools reliability

    NASA Astrophysics Data System (ADS)

    Ungureanu, A. L.; Stan, G.; Butunoi, P. A.

    2016-08-01

    Aligning maintenance strategy with reliability is a challenge due to the need to find an optimal balance between them. Because the various methods described in the relevant literature involve laborious calculations or the use of software that can be costly, this paper proposes a method that is easier to implement on CNC machine tools. The new method, called Consequence of Failure Analysis (CFA), is based on technical and economic optimization, aimed at obtaining the required level of performance with minimum investment and maintenance costs.
221. NUEN-618 Class Project: Actually Implicit Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vega, R. M.; Brunner, T. A.

    2017-12-14

    This research describes a new method for the solution of the thermal radiative transfer (TRT) equations that is implicit in time, which will be called Actually Implicit Monte Carlo (AIMC). This section aims to introduce the TRT equations, as well as the current workhorse method, known as Implicit Monte Carlo (IMC). As the name of the method proposed here indicates, IMC is a misnomer in that it is only semi-implicit, which will be shown in this section as well.

222. Measuring the Return on Information Technology: A Knowledge-Based Approach for Revenue Allocation at the Process and Firm Level

    DTIC Science & Technology

    2005-07-01

    … approach for measuring the return on Information Technology (IT) investments. A review of existing methods suggests the difficulty of adequately measuring the returns on IT at various levels of analysis (e.g., firm or process level). To address this issue, this study aims to develop a method for … the knowledge-based view (KBV), this paper proposes an analytic method for measuring the historical revenue and cost of IT investments by estimating the amount of …
223. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    PubMed

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

    Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCI), as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals, to improve classification performance and to reduce time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for the extraction of discriminatory information from the mental states based on EEG signals in BCI applications. We apply correlation-based variable selection with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets, and their performance is compared with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) with the proposed feature set. Among these classifiers, the MLP and LS-SVM methods yield the best performance. The average sensitivity, specificity, and classification accuracy for these two classifiers are the same, namely 99.32%, 100%, and 99.66%, respectively, for BCI competition dataset IVa, and 100%, 100%, and 100% for BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy improvement on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for extracting representative information from mental-state EEG signals in BCI applications and for reducing the computational complexity of classifiers by reducing the number of extracted features.
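A rough sketch of the PCA-plus-cross-covariance idea, assuming a channels-by-samples NumPy epoch; the component count, lag range, and pairing scheme are illustrative guesses rather than the paper's configuration.

```python
import numpy as np

def pca_ccov_features(epoch, n_components=4, max_lag=8):
    """Project an EEG epoch onto its top principal components, then use
    cross-covariances between component pairs at several lags as features."""
    X = epoch - epoch.mean(axis=1, keepdims=True)
    cov = X @ X.T / X.shape[1]                 # channel covariance
    _, vecs = np.linalg.eigh(cov)              # ascending eigenvalue order
    comps = vecs[:, ::-1][:, :n_components].T @ X
    feats = []
    for i in range(n_components):
        for j in range(i + 1, n_components):
            for lag in range(max_lag):
                a = comps[i, lag:]
                b = comps[j, :comps.shape[1] - lag]
                feats.append(np.mean(a * b))   # cross-covariance at this lag
    return np.array(feats)
```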
224. Liver segmentation using multiple region appearances and graph cuts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jialin, E-mail: 2004pjl@163.com; Zhang, Hongbo; Hu, Peijun

    Purpose: Efficient and accurate 3D liver segmentations from contrast-enhanced computed tomography (CT) images play an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty. Despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiple subregions, a geodesic distance based appearance selection scheme is introduced to utilize the proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method only requires initialization inside the liver surface; no additional constraints from user interaction are utilized. Results: The proposed method was validated on 50 3D CT images from three datasets, i.e., the Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing sets and a local dataset. On the MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applied to the MICCAI training set and the local dataset, it yielded mean Dice similarity coefficients (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively. These results demonstrate the accuracy of the method when applied to different CT datasets. In addition, operator variability experiments showed good reproducibility. Conclusions: A multiregion-appearance based method is proposed and evaluated for liver segmentation. This approach does not require prior model construction, and so eliminates the burdens associated with model construction and matching. The proposed method provides results comparable with state-of-the-art methods. Validation results suggest that it may be suitable for clinical use.
225. Dynamic Chest Image Analysis: Evaluation of Model-Based Pulmonary Perfusion Analysis With Pyramid Images

    DTIC Science & Technology

    2001-10-25

    Dynamic Chest Image Analysis aims to develop model-based computer analysis and visualization methods for showing focal and general abnormalities of lung ventilation and perfusion based on a sequence of digital chest fluoroscopy frames collected with the Dynamic Pulmonary Imaging technique. We have proposed and evaluated a multiresolutional method, with an explicit ventilation model based on pyramid images, for ventilation analysis. We have further extended the method from ventilation analysis to pulmonary perfusion. This paper focuses on the clinical evaluation of our method for …

226. A hybrid CS-SA intelligent approach to solve uncertain dynamic facility layout problems considering dependency of demands

    NASA Astrophysics Data System (ADS)

    Moslemipour, Ghorbanali

    2018-07-01

    This paper aims at proposing a quadratic assignment-based mathematical model to deal with the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density functions and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design-of-experiment and benchmark methods. The results show that the hybrid algorithm has outstanding performance in terms of both solution quality and computational time. Besides, the proposed model can be used in both stochastic and deterministic situations.

227. Mortise terrorism on the main pipelines

    NASA Astrophysics Data System (ADS)

    Komarov, V. A.; Nigrey, N. N.; Bronnikov, D. A.; Nigrey, A. A.

    2018-01-01

    The research aim of this work is to analyze the effectiveness of the methods proposed in the article for the physical protection of main pipelines from "mortise terrorism". A mathematical model has been developed that makes it possible to predict the dynamics of "mortise terrorism" in the short term. An analysis of the effectiveness of the proposed physical protection methods for preventing unauthorized impacts on the objects under investigation is given. A variant of a video analytics system has been developed that can detect violators, and recognize the types of work they perform, at a distance of 150 meters under conditions of complex natural backgrounds and precipitation, with a detection probability of 0.959.
228. Adaptive marginal median filter for colour images.

    PubMed

    Morillas, Samuel; Gregori, Valentín; Sapena, Almanzor

    2011-01-01

    This paper describes a new filter for impulse noise reduction in colour images which aims to improve the noise reduction capability of the classical vector median filter. The filter is inspired by the application of a vector marginal median filtering process over a selected group of pixels in each filtering window. This selection, which is based on the vector median, along with the application of the marginal median operation, constitutes an adaptive process that leads to a more robust filter design. Also, the proposed method is able to process colour images without introducing colour artifacts. Experimental results show that images filtered with the proposed method contain fewer noisy pixels than those obtained with the vector median filter.

229. A protocol of rope skipping exercise for primary school children: A pilot test

    NASA Astrophysics Data System (ADS)

    Radzi, A. N. M.; Rambely, A. S.; Chellapan, K.

    2014-06-01

    This paper aims to investigate the methods and samples used in rope skipping as an exercise approach. A systematic literature review was conducted to identify skipping performance in related research. The methods were compared to determine the best methodological approach for the targeted skipping-based research measure. A pilot test was performed among seven students below 12 years of age. As the outcome of the review, a skipping protocol is proposed for 10-year-old primary school students. The proposed protocol design is to be submitted to the PPUKM Ethical Committee for approval prior to its implementation in investigating memory enhancement in relation to the designed skipping activities.

230. Online frequency estimation with applications to engine and generator sets

    NASA Astrophysics Data System (ADS)

    Manngård, Mikael; Böling, Jari M.

    2017-07-01

    Frequency and spectral analysis based on the discrete Fourier transform is a fundamental task in signal processing and machine diagnostics. This paper presents computationally efficient methods for the real-time estimation of stationary and time-varying frequency components in signals. A brief survey of the sliding-time-window discrete Fourier transform and the Goertzel filter is given, and two filter banks, consisting of (i) sliding-time-window Goertzel filters and (ii) infinite impulse response narrow bandpass filters, are proposed for estimating instantaneous frequencies. The proposed methods show excellent results in both simulation studies and a case study using angular speed measurements from the crankshaft of a marine diesel engine-generator set.
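The Goertzel filter behind record 230's first filter bank evaluates a single DFT bin with a two-term recursion, which is why a bank of them is cheap enough for online use. A standard formulation (textbook algorithm, not the authors' code):

```python
import numpy as np

def goertzel_power(x, k, n):
    """Power of DFT bin k computed over the first n samples of x."""
    w = 2.0 * np.pi * k / n
    coeff = 2.0 * np.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for sample in x[:n]:
        s = sample + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

# e.g., track a 50 Hz component at fs = 1 kHz over 200 samples (bin k = f*n/fs)
fs, f, n = 1000.0, 50.0, 200
t = np.arange(n) / fs
print(goertzel_power(np.sin(2 * np.pi * f * t), k=int(f * n / fs), n=n))
```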
231. A modified multi-objective particle swarm optimization approach and its application to the design of a deepwater composite riser

    NASA Astrophysics Data System (ADS)

    Zheng, Y.; Chen, J.

    2017-09-01

    A modified multi-objective particle swarm optimization method is proposed for obtaining Pareto-optimal solutions effectively. Unlike traditional multi-objective particle swarm optimization methods, the proposed method integrates Kriging meta-models and the trapezoid index into the traditional approach. Kriging meta-models are built to stand in for expensive or black-box functions. By applying Kriging meta-models, the number of function evaluations is decreased and the boundary Pareto-optimal solutions are identified rapidly. For bi-objective optimization problems, the trapezoid index is calculated as the sum of the areas of the trapezoids formed by the Pareto-optimal solutions and one objective axis. It serves as a measure of whether the Pareto-optimal solutions have converged to the Pareto front. Illustrative examples indicate that the proposed method needs fewer function evaluations to obtain Pareto-optimal solutions than the traditional multi-objective particle swarm optimization method and the non-dominated sorting genetic algorithm II, improving both accuracy and computational efficiency. The proposed method is also applied to the design of a deepwater composite riser, in which the structural performance is calculated by numerical analysis. The design aim was to enhance the tensile strength and minimize the cost. Under the buckling constraint, the optimal trade-off between tensile strength and material volume is obtained. The results demonstrate that the proposed method can effectively deal with multi-objective optimization problems involving black-box functions.

232. A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices.

    PubMed

    Ravi, Daniele; Wong, Charence; Lo, Benny; Yang, Guang-Zhong

    2017-01-01

    The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, and deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology which combines features learned from inertial sensor data with complementary information from a set of shallow features to enable accurate and real-time activity classification. The design of this combined method aims to overcome some of the limitations present in a typical deep learning framework where on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral-domain preprocessing is applied before the data are passed to the deep learning framework. The classification accuracy of our proposed deep learning approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times of the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
233. EEMD-based multiscale ICA method for slewing bearing fault detection and diagnosis

    NASA Astrophysics Data System (ADS)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2016-05-01

    A novel multivariate and multiscale statistical process monitoring method is proposed with the aim of detecting incipient failures in large slewing bearings, where subjective influence plays a minor role. The proposed method integrates the strengths of the Independent Component Analysis (ICA) multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD), which adaptively decomposes signals into different time scales and can thus cope with multiscale system dynamics. The method, named EEMD-based multiscale ICA (EEMD-MSICA), not only enables bearing fault detection but also offers a mechanism for multivariate signal denoising and, in combination with Envelope Analysis (EA), a diagnostic tool. The multiscale nature of the proposed approach makes the method well suited to data which emanate from bearings in complex real-world rotating machinery and frequently represent the cumulative effect of many underlying phenomena occupying different regions of the time-frequency plane. The efficiency of the proposed method was tested on simulated as well as real vibration and Acoustic Emission (AE) signals obtained through an accelerated run-to-failure lifetime experiment on a purpose-built laboratory slewing bearing test stand. The ability to detect and locate the early-stage rolling-sliding contact fatigue failure of the bearing indicates that AE and vibration signals carry sufficient information on the bearing condition and that the developed EEMD-MSICA method is able to extract it effectively, thereby representing a reliable bearing fault detection and diagnosis strategy.
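The Envelope Analysis step that record 233 pairs with EEMD-MSICA is conventional and easy to sketch: take the amplitude envelope of the (denoised) vibration signal via the Hilbert transform and inspect the envelope spectrum for peaks at bearing defect frequencies.

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Amplitude-envelope spectrum of a vibration signal; bearing defect
    frequencies appear as peaks in the returned spectrum."""
    env = np.abs(hilbert(x))        # amplitude envelope
    env -= env.mean()               # remove the DC component
    spec = np.abs(np.fft.rfft(env)) / len(env)
    freqs = np.fft.rfftfreq(len(env), d=1.0 / fs)
    return freqs, spec
```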
234. A novel minimum cost maximum power algorithm for future smart home energy management.

    PubMed

    Singaravelan, A; Kowsalya, M

    2017-11-01

    With the latest developments in smart grid technology, energy management systems can be efficiently implemented at consumer premises. In this paper, an energy management system with wireless communication and a smart meter is designed for scheduling electric home appliances efficiently, with the aim of reducing cost and peak demand. For an efficient scheduling scheme, the appliances are classified into two types: uninterruptible and interruptible appliances. The problem formulation is constructed based on practical constraints that allow the proposed algorithm to cope with real-time situations. The formulated problem is identified as a Mixed Integer Linear Programming (MILP) problem and is therefore solved by a step-wise approach. This paper proposes a novel Minimum Cost Maximum Power (MCMP) algorithm to solve the formulated problem. The proposed algorithm was simulated with the input data available in the existing method, and the results were compared with that method for validation. The comparison proves that the proposed algorithm efficiently reduces consumer electricity cost and peak demand to an optimum level, with 100% task completion and without sacrificing consumer comfort.

235. University/government/industry relations in aeronautics

    NASA Technical Reports Server (NTRS)

    Schairer, G. S.

    1975-01-01

    Methods for improving the relationships between universities, the aircraft industry, and the Government are proposed. The author submits nine specific recommendations aimed at more effective aeronautical engineering education and employment of graduate engineers. The need for improved communication between the organizations which influence the advancement of the aeronautical sciences is stressed.
236. The Design Process of Corporate Universities: A Stakeholder Approach

    ERIC Educational Resources Information Center

    Patrucco, Andrea Stefano; Pellizzoni, Elena; Buganza, Tommaso

    2017-01-01

    Purpose: Corporate universities (CUs) have been experiencing tremendous growth during the past years and can represent a real driving force for structured organizations. The paper aims to define the process of CU design shaped around company strategy. For each step, the authors propose specific roles, activities and methods.…

237. Eliminating the influence of source spectrum of white light scanning interferometry through time-delay estimation algorithm

    NASA Astrophysics Data System (ADS)

    Zhou, Yunfei; Cai, Hongzhi; Zhong, Liyun; Qiu, Xiang; Tian, Jindong; Lu, Xiaoxu

    2017-05-01

    In white light scanning interferometry (WLSI), the accuracy of profile measurement achieved with the conventional zero optical path difference (ZOPD) position locating method is closely related to the shape of the interference signal envelope (ISE), which is mainly determined by the spectral distribution of the illumination source. For a broadband source with a Gaussian spectral distribution, the ISE is symmetric, so the accurate ZOPD position can be found easily. However, if the spectral distribution of the source is irregular, the ISE becomes asymmetric or takes on a complex multi-peak shape, and WLSI with the ZOPD position locating method no longer works well. Aiming at this problem, we propose a time-delay estimation (TDE) based WLSI method, in which the surface profile information is obtained from the relative displacement of the interference signal between different pixels instead of from the conventional ZOPD position locating method. Because all the spectral information of the interference signal (envelope and phase) is utilized, the proposed method is not only highly accurate but also achieves accurate profile measurement in cases where the ISE is irregular and the ZOPD position locating method cannot work. That is to say, the proposed method effectively eliminates the influence of the source spectrum.
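The core of the TDE idea in record 237, reduced to an integer-sample cross-correlation estimate between the interference signals of two pixels; the paper's estimator is finer, exploiting both envelope and phase spectra. Multiplying the estimated delay by the scan step would give the relative surface height between the two pixels.

```python
import numpy as np

def estimate_delay(sig_a, sig_b):
    """Integer-sample delay of sig_a relative to sig_b via cross-correlation
    (positive result: sig_a is a delayed copy of sig_b)."""
    a = sig_a - sig_a.mean()
    b = sig_b - sig_b.mean()
    xcorr = np.correlate(a, b, mode="full")
    return int(np.argmax(xcorr)) - (len(b) - 1)
```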
238. A genetic algorithm-based framework for wavelength selection on sample categorization.

    PubMed

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra are partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique is applied to each interval to assign samples to the proper classes. In the next step, the selected intervals are refined by a genetic algorithm (GA), which identifies a limited number of wavelengths from the previously selected intervals so as to maximize classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information for monitoring the illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checks, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to retain irrelevant wavelengths in the selected intervals.

239. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset demonstrate the feasibility and effectiveness of the method.
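The lag-selection problem of record 239 can be illustrated with a histogram-based mutual-information ranking. Note this generic dependence measure merely stands in for the paper's XEF and JCE criteria, whose exact definitions are not reproduced here.

```python
import numpy as np

def mutual_info(x, y, bins=16):
    """Histogram estimate of the mutual information between two 1-D signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

def rank_lags(u, y, max_lag=24):
    """Rank candidate input lags by their dependence with the output."""
    scores = [(lag, mutual_info(u[:-lag], y[lag:])) for lag in range(1, max_lag + 1)]
    return sorted(scores, key=lambda t: -t[1])
```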
  239. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.

  240. Water stress assessment of cork oak leaves and maritime pine needles based on LIF spectra

    NASA Astrophysics Data System (ADS)

    Lavrov, A.; Utkin, A. B.; Marques da Silva, J.; Vilar, Rui; Santos, N. M.; Alves, B.

    2012-02-01

    The aim of the present work was to develop a method for the remote assessment of the impact of fire and drought stress on Mediterranean forest species such as the cork oak (Quercus suber) and maritime pine (Pinus pinaster). The proposed method is based on laser induced fluorescence (LIF): chlorophyll fluorescence is remotely excited by frequency-doubled Nd:YAG laser radiation pulses and collected and analyzed using a telescope and a gated high-sensitivity spectrometer. The plant health criterion used is based on the I685/I740 ratio calculated from the fluorescence spectra. The method was benchmarked by comparing its results with those obtained by a conventional continuous-excitation fluorometric method and by gravimetric water-loss measurements. The results obtained with the two methods show a strong correlation with each other and with the weight-loss measurements, showing that the proposed method is suitable for fire and drought impact assessment on these two species.
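    The I685/I740 criterion itself amounts to a ratio of band-integrated fluorescence intensities. A minimal sketch follows (the band half-width and the synthetic spectrum are assumptions for illustration; the ratio varies with chlorophyll content and hence with plant stress):

```python
import numpy as np

def stress_ratio(wavelength, intensity, bw=5.0):
    """Plant-health criterion I685/I740: chlorophyll fluorescence integrated
    over small bands (+/- bw nm) around the two emission peaks."""
    def band(center):
        sel = np.abs(wavelength - center) <= bw
        return np.trapz(intensity[sel], wavelength[sel])
    return band(685.0) / band(740.0)

# Synthetic LIF spectrum: two Gaussian emission bands at 685 nm and 740 nm.
wl = np.linspace(600.0, 800.0, 1000)
spec = 1.2 * np.exp(-((wl - 685) / 12) ** 2) + 1.0 * np.exp(-((wl - 740) / 18) ** 2)
print(stress_ratio(wl, spec))
```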
  241. Comparison of fusion methods from the abstract level and the rank level in a dispersed decision-making system

    NASA Astrophysics Data System (ADS)

    Przybyła-Kasperek, M.; Wakulicz-Deja, A.

    2017-05-01

    Issues related to decision making based on dispersed knowledge are discussed in the paper. A dispersed decision-making system, which was proposed by the authors in previous articles, is used in this paper. In the system, a process of combining classifiers into coalitions with a negotiation stage is realized. The novelty proposed in this article involves the use of six different methods of conflict analysis that are known from the literature. The main purpose of the tests that were performed was to compare methods from the two groups - the abstract level and the rank level. An additional aim was to compare the efficiency of the fusion methods used in a dispersed system with a dynamic structure against the efficiency obtained when no structure is used. Conclusions were drawn that, in most cases, the use of a dispersed system improves the efficiency of inference.
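    The distinction between the two fusion levels can be illustrated with generic textbook rules (these are not the six conflict-analysis methods compared in the paper): majority voting uses only each classifier's top label (abstract level), while a Borda count exploits the full ranking each classifier produces (rank level).

```python
from collections import Counter

def majority_vote(labels):
    """Abstract-level fusion: each classifier contributes only its top label."""
    return Counter(labels).most_common(1)[0][0]

def borda_count(rankings):
    """Rank-level fusion: a label ranked r-th from the top by a classifier
    receives (n_labels - r) points; the label with the most points wins."""
    points = Counter()
    for ranking in rankings:
        n = len(ranking)
        for r, label in enumerate(ranking):
            points[label] += n - r
    return points.most_common(1)[0][0]

print(majority_vote(["A", "B", "A"]))                                    # A
print(borda_count([["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]))  # B
```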
  242. Identifying Audiences of E-Infrastructures - Tools for Measuring Impact

    PubMed Central

    van den Besselaar, Peter

    2012-01-01

    Research evaluation should take into account the intended scholarly and non-scholarly audiences of the research output. This holds too for research infrastructures, which often aim at serving a large variety of audiences. With research and research infrastructures moving to the web, new possibilities are emerging for evaluation metrics. This paper proposes a feasible indicator for measuring the scope of audiences who use web-based e-infrastructures, as well as the frequency of use. In order to apply this indicator, a method is needed for classifying visitors to e-infrastructures into relevant user categories. The paper proposes such a method, based on an inductive logic program and a Bayesian classifier. The method is tested, showing that visitors are efficiently classified with 90% accuracy into the selected categories. Consequently, the method can be used to evaluate the use of the e-infrastructure within and outside academia. PMID:23239995

  243. A dynamic integrated fault diagnosis method for power transformers.

    PubMed

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer faults efficiently and accurately, a dynamic integrated fault diagnosis method based on a Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationships among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most probable failure mode. Then, considering that the evidence input into the diagnosis model is acquired gradually and that fault diagnosis in reality is a multistep process, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in the next step. It can therefore reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of the method is verified.
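    The static half of such a model can be illustrated with a naive-Bayes-style posterior over failure modes (a minimal sketch; the priors and conditional probabilities below are hypothetical, and a full Bayesian network would also model abnormal working conditions):

```python
import numpy as np

# Hypothetical priors over transformer failure modes.
modes = ["winding_fault", "insulation_aging", "core_overheating"]
prior = np.array([0.2, 0.5, 0.3])

# Hypothetical P(symptom present | mode); rows: modes, columns: symptoms.
symptoms = ["high_H2", "high_C2H2", "abnormal_noise"]
likelihood = np.array([[0.7, 0.8, 0.4],
                       [0.6, 0.1, 0.2],
                       [0.3, 0.2, 0.7]])

def posterior(observed):
    """Posterior over failure modes given a dict {symptom: True/False}
    of the diagnostic test results acquired so far."""
    p = prior.copy()
    for j, s in enumerate(symptoms):
        if s in observed:
            p *= likelihood[:, j] if observed[s] else (1 - likelihood[:, j])
    return p / p.sum()

print(dict(zip(modes, posterior({"high_C2H2": True}).round(3))))
```

    A dynamic mechanism would then, at each step, recommend the unobserved test whose outcome is expected to reduce the posterior uncertainty the most.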
  244. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) has been developed to address the mode-mixing problem in the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need a large enough ensemble number to reduce the residue noise, which makes the computational cost high. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computational cost, and that the IMF evaluation index can select the meaningful IMFs automatically.
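    The complementary-pair idea at the core of CEEMD is compact enough to sketch. The fragment below (illustrative only; it assumes the third-party PyEMD package for the EMD step and truncates unequal IMF counts before averaging) decomposes positive and negative noise-added copies of the signal so that the injected white noise largely cancels:

```python
import numpy as np
from PyEMD import EMD  # assumes the PyEMD package (pip install EMD-signal)

def ceemd_pairs(signal, n_pairs=8, noise_std=0.2):
    """Complementary ensemble EMD sketch: decompose signal + noise and
    signal - noise for each noise realisation, then average the IMFs so
    the added noise largely cancels in the reconstruction."""
    emd, stacks = EMD(), []
    for _ in range(n_pairs):
        noise = noise_std * np.random.randn(len(signal))
        for s in (signal + noise, signal - noise):
            stacks.append(emd.emd(s))
    n_imfs = min(imfs.shape[0] for imfs in stacks)   # align IMF counts
    return np.mean([imfs[:n_imfs] for imfs in stacks], axis=0)
```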
  245. Gaussian mass optimization for kernel PCA parameters

    NASA Astrophysics Data System (ADS)

    Liu, Yong; Wang, Zulin

    2011-10-01

    This paper proposes a novel kernel parameter optimization method based on Gaussian mass, which aims to overcome the current brute-force parameter optimization method in a heuristic way. Generally speaking, the choice of kernel parameter should be tightly related to the target objects, while the variance between the samples, the most commonly used kernel parameter, does not capture many features of the target; this motivates the Gaussian mass proposed here. The Gaussian mass defined in this paper is invariant to rotation and translation and is capable of depicting the edge, topology, and shape information. Simulation results show that Gaussian mass provides a promising heuristic boost for kernel method optimization. On the MNIST handwriting database, the recognition rate improves by 1.6% compared with the common kernel method without Gaussian mass optimization. Several promising directions in which Gaussian mass might help are also proposed at the end of the paper.

  246. Therapy Decision Support Based on Recommender System Methods

    PubMed Central

    Gräßer, Felix; Beckert, Stefanie; Küster, Denise; Schmitt, Jochen; Abraham, Susanne; Malberg, Hagen

    2017-01-01

    We present a system for data-driven therapy decision support based on techniques from the field of recommender systems. Two methods for therapy recommendation, namely a Collaborative Recommender and a Demographic-based Recommender, are proposed. Both algorithms aim to predict the individual response to different therapy options using diverse patient data and to recommend the therapy that is expected to provide the best outcome for a specific patient and time, that is, consultation. The proposed methods are evaluated using a clinical database incorporating patients suffering from the autoimmune skin disease psoriasis. The Collaborative Recommender proves to generate both better outcome predictions and better recommendation quality. However, due to sparsity in the data, this approach cannot provide recommendations for the entire database. In contrast, the Demographic-based Recommender performs worse on average but covers more consultations. Consequently, both methods profit from a combination into an overall recommender system. PMID:29065657
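    The collaborative variant can be sketched as neighborhood-based outcome prediction (a minimal illustration; the feature encoding, similarity measure, and data below are assumptions, not the study's actual model):

```python
import numpy as np

def predict_outcome(target, neighbors, outcomes, k=3):
    """Collaborative recommender sketch: predict a patient's response to a
    therapy as the similarity-weighted mean outcome of the k most similar
    patients who already received that therapy."""
    sims = neighbors @ target / (np.linalg.norm(neighbors, axis=1)
                                 * np.linalg.norm(target) + 1e-12)
    top = np.argsort(sims)[-k:]
    return np.average(outcomes[top], weights=np.clip(sims[top], 0, None) + 1e-12)

# Hypothetical feature vectors (e.g., age, severity score, prior response)
# and observed outcomes under one therapy option.
patients = np.array([[65.0, 7.0, 1.0], [60.0, 6.5, 0.0], [30.0, 2.0, 1.0]])
responses = np.array([0.8, 0.6, 0.1])
print(predict_outcome(np.array([62.0, 6.8, 1.0]), patients, responses, k=2))
```

    Repeating the prediction for every candidate therapy and recommending the one with the best predicted outcome yields the decision-support behavior described above.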
  247. Localization of synchronous cortical neural sources.

    PubMed

    Zerouali, Younes; Herry, Christophe L; Jemel, Boutheina; Lina, Jean-Marc

    2013-03-01

    Neural synchronization is a key mechanism in a wide variety of brain functions, such as cognition, perception, and memory. The high temporal resolution achieved by EEG recordings allows the study of the dynamical properties of synchronous patterns of activity at a very fine temporal scale, but with very low spatial resolution. Spatial resolution can be improved by retrieving the neural sources of the EEG signal, thus solving the so-called inverse problem. Although many methods have been proposed to solve the inverse problem and localize brain activity, few of them target synchronous brain regions. In this paper, we propose a novel algorithm aimed specifically at localizing synchronous brain regions and reconstructing the time course of their activity. Using multivariate wavelet ridge analysis, we extract signals capturing the synchronous events buried in the EEG and then solve the inverse problem on these signals. Using simulated data, we compare the source reconstruction accuracy achieved by our method to that of a standard source reconstruction approach. We show that the proposed method performs better across a wide range of noise levels and source configurations. In addition, we applied our method to a real dataset and successfully identified cortical areas involved in the functional network underlying visual face perception. We conclude that the proposed approach allows an accurate localization of synchronous brain regions and a robust estimation of their activity.

  248. Social Image Tag Ranking by Two-View Learning

    NASA Astrophysics Data System (ADS)

    Zhuang, Jinfeng; Hoi, Steven C. H.

    Tags play a central role in text-based social image retrieval and browsing. However, the tags annotated by web users can be noisy, irrelevant, and often incomplete for describing the image contents, which may severely deteriorate the performance of text-based image retrieval models. To address this problem, researchers have proposed techniques to rank the annotated tags of a social image according to their relevance to the visual content of the image. In this paper, we aim to overcome the challenge of social image tag ranking for a corpus of social images with rich user-generated tags by proposing a novel two-view learning approach. It can effectively exploit both the textual and visual contents of social images to discover the complicated relationship between tags and images. Unlike conventional learning approaches, which usually assume some parametric model, our method is completely data-driven and makes no assumption about the underlying models, making the proposed solution practically more effective. We formulate our method as an optimization task and present an efficient algorithm to solve it. To evaluate the efficacy of our method, we conducted an extensive set of experiments by applying our technique to both text-based social image retrieval and automatic image annotation tasks. Our empirical results show that the proposed method can be more effective than conventional approaches.
  249. Pose Invariant Face Recognition Based on Hybrid Dominant Frequency Features

    NASA Astrophysics Data System (ADS)

    Wijaya, I. Gede Pasek Suta; Uchimura, Keiichi; Hu, Zhencheng

    Face recognition is one of the most active research areas in pattern recognition, not only because the face is a biometric characteristic of human beings, but also because there are many potential applications of face recognition, ranging from human-computer interaction to authentication, security, and surveillance. This paper presents an approach to pose-invariant human face image recognition. The proposed scheme is based on the analysis of discrete cosine transforms (DCT) and discrete wavelet transforms (DWT) of face images. From both the DCT and DWT domain coefficients, which describe the facial information, we build a compact and meaningful feature vector using simple statistical measures and quantization. This feature vector is called the hybrid dominant frequency features. We then apply a combination of the L2 and Lq metrics to classify the hybrid dominant frequency features into a person's class. The aim of the proposed system is to overcome the high memory space requirement, the high computational load, and the retraining problems of previous methods. The proposed system is tested using several face databases and the experimental results are compared to the well-known Eigenface method. The proposed method shows good performance, robustness, stability, and accuracy without requiring geometrical normalization. Furthermore, the proposed method has low computational cost, requires little memory space, and can overcome the retraining problem.
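    A hybrid frequency descriptor of this flavor can be sketched as follows (illustrative only; the block size, wavelet, and statistics are assumptions, and the quantization step of the paper is omitted). It concatenates low-frequency 2-D DCT coefficients with simple statistics of the DWT subbands:

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.fft import dctn

def hybrid_frequency_features(face, n_dct=32):
    """Hybrid dominant-frequency sketch: a few low-frequency 2-D DCT
    coefficients plus mean/std statistics of the DWT subbands."""
    dct_block = dctn(face, norm="ortho")[:8, :8].ravel()[:n_dct]
    cA, (cH, cV, cD) = pywt.dwt2(face, "haar")
    stats = [f(b) for b in (cA, cH, cV, cD) for f in (np.mean, np.std)]
    return np.concatenate([dct_block, stats])

face = np.random.rand(64, 64)    # stand-in for a grey-level face image
print(hybrid_frequency_features(face).shape)   # (40,)
```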
  250. Spatiotemporal monitoring of soil salinization in irrigated Tadla Plain (Morocco) using satellite spectral indices

    NASA Astrophysics Data System (ADS)

    El Harti, Abderrazak; Lhissou, Rachid; Chokmani, Karem; Ouzemou, Jamal-eddine; Hassouna, Mohamed; Bachaoui, El Mostafa; El Ghmari, Abderrahmene

    2016-08-01

    Soil salinization is a major environmental issue in irrigated agricultural production. Conventional methods for salinization monitoring are time-consuming and costly, and are limited by the high spatiotemporal variability of the phenomenon. This work proposes a spatiotemporal monitoring method for soil salinization in the Tadla plain in central Morocco using spectral indices derived from Thematic Mapper (TM) and Operational Land Imager (OLI) data. Six Landsat TM/OLI satellite images acquired over a 13-year period (2000-2013), coupled with in-situ electrical conductivity (EC) measurements, were used to develop the proposed method. After radiometric and atmospheric correction of the TM/OLI images, a new soil salinity index (OLI-SI) is proposed for soil EC estimation. Validation shows that this index allows a satisfactory EC estimation in the Tadla irrigated perimeter, with a coefficient of determination R2 varying from 0.55 to 0.77 and a root mean square error (RMSE) ranging between 1.02 dS/m and 2.35 dS/m. The time-series of salinity maps produced over the Tadla plain using the proposed method shows that salinity decreased in intensity and progressively increased in spatial extent over the 2000-2013 period. This trend resulted in a decrease in agricultural activities in the southwestern part of the perimeter, located at the hydraulic downstream end.

  251. Setting health research priorities using the CHNRI method: IV. Key conceptual advances.

    PubMed

    Rudan, Igor

    2016-06-01

    The Child Health and Nutrition Research Initiative (CHNRI) started as an initiative of the Global Forum for Health Research in Geneva, Switzerland. Its aim was to develop a method that could assist priority setting in health research investments. The first version of the CHNRI method was published in 2007-2008. The aim of this paper is to summarize the history of the development of the CHNRI method and its key conceptual advances. The guiding principle of the CHNRI method is to expose the potential of many competing health research ideas to reduce disease burden and inequities that exist in the population in a feasible and cost-effective way. The CHNRI method introduced three key conceptual advances that led to its increased popularity in comparison to other priority-setting methods and processes. First, it proposed a systematic approach to listing a large number of possible research ideas, using the "4D" framework (description, delivery, development and discovery research) and a well-defined "depth" of proposed research ideas (research instruments, avenues, options and questions). Second, it proposed a systematic approach for discriminating between the many proposed research ideas based on a well-defined context and criteria. The five "standard" components of the context are the population of interest, the disease burden of interest, geographic limits, time scale and the preferred style of investing with respect to risk. The five "standard" criteria proposed for prioritization between research ideas are answerability, effectiveness, deliverability, maximum potential for disease burden reduction and the effect on equity. However, both the context and the criteria can be flexibly changed to meet the specific needs of each priority-setting exercise. Third, it facilitated consensus development by measuring collective optimism on each component of each research idea among a larger group of experts using a simple scoring system. This enables the use of the knowledge of many experts in the field, "visualising" their collective opinion and presenting the list of many research ideas with their ranks, based on an intuitive score that ranges between 0 and 100. Two recent reviews showed that the CHNRI method, an approach essentially based on "crowdsourcing", has become the dominant approach to setting health research priorities in the global biomedical literature over the past decade. With more than 50 published examples of implementation to date, it is now widely used in many international organisations for collective decision-making on health research priorities. The applications have been helpful in promoting a better balance between investments in fundamental research, translation research and implementation research.
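    The scoring arithmetic behind such an exercise is simple to sketch. In the usual CHNRI setup, experts answer each criterion with yes (1), no (0) or informed-but-undecided (0.5); the fragment below (with hypothetical answers) averages these into per-criterion collective optimism and a 0-100 research priority score:

```python
import numpy as np

# answers[e, i, c]: expert e's answer for research idea i on criterion c,
# coded 1.0 (yes), 0.5 (undecided), 0.0 (no); np.nan marks "no answer".
answers = np.array([
    [[1.0, 1.0, 0.5, 1.0, 0.0], [0.5, 1.0, 1.0, 0.0, 1.0]],
    [[1.0, 0.5, 1.0, 1.0, 0.5], [0.0, 1.0, np.nan, 0.5, 1.0]],
])

criterion_scores = np.nanmean(answers, axis=0)   # collective optimism per criterion
rps = 100 * criterion_scores.mean(axis=1)        # research priority score, 0-100
for i, score in enumerate(rps):
    print(f"idea {i}: RPS = {score:.1f}")        # idea 0: 75.0, idea 1: 70.0
```

    Ranking the ideas by this score yields the intuitive 0-100 ordering described above; criterion weights, where used, multiply the per-criterion scores before averaging.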
  252. Decompositions of large-scale biological systems based on dynamical properties.

    PubMed

    Soranzo, Nicola; Ramezani, Fahimeh; Iacono, Giovanni; Altafini, Claudio

    2012-01-01

    Given a large-scale biological network represented as an influence graph, in this article we investigate possible decompositions of the network aimed at highlighting specific dynamical properties. The first decomposition we study consists of finding a maximal directed acyclic subgraph of the network, which dynamically corresponds to searching for a maximal open-loop subsystem of the given system. Another dynamical property investigated is strong monotonicity. We propose two methods to deal with this property, both aimed at decomposing the system into strongly monotone subsystems, but with different structural characteristics: one method tends to produce a single large strongly monotone component, while the other typically generates a set of smaller disjoint strongly monotone subsystems. Original heuristics for the methods investigated are described in the article.
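    For the first decomposition, a simple greedy heuristic conveys the idea (a generic sketch, not the article's original heuristics): any linear order of the nodes induces a DAG once the backward edges are discarded, so one looks for an order that keeps as many edges as possible.

```python
def greedy_dag(nodes, edges):
    """Greedy maximal-DAG heuristic: order nodes by out-degree minus
    in-degree (source-like nodes first), then keep only the edges that
    point forward in that order; the result is acyclic by construction."""
    out_deg = {u: sum(1 for a, _ in edges if a == u) for u in nodes}
    in_deg = {u: sum(1 for _, b in edges if b == u) for u in nodes}
    ranked = sorted(nodes, key=lambda v: in_deg[v] - out_deg[v])
    position = {u: i for i, u in enumerate(ranked)}
    return [(a, b) for a, b in edges if position[a] < position[b]]

edges = [("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]
print(greedy_dag(["a", "b", "c"], edges))   # drops one edge of the 3-cycle
```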
  253. Natural Aggregation Approach based Home Energy Manage System with User Satisfaction Modelling

    NASA Astrophysics Data System (ADS)

    Luo, F. J.; Ranzi, G.; Dong, Z. Y.; Murata, J.

    2017-07-01

    With the prevalence of advanced sensing and two-way communication technologies, the Home Energy Management System (HEMS) has attracted much attention in recent years. This paper proposes a HEMS that optimally schedules controllable Residential Energy Resources (RERs) in a Time-of-Use (TOU) pricing environment with high solar power penetration. The HEMS aims to minimize the overall operational cost of the home, and the user's satisfaction with and requirements on the operation of different household appliances are modelled and considered in the HEMS. Further, a biological self-aggregation intelligence based optimization technique previously proposed by the authors, the Natural Aggregation Algorithm (NAA), is applied to solve the proposed HEMS optimization model. Simulations are conducted to validate the proposed method.

  254. Enriching plausible new hypothesis generation in PubMed.

    PubMed

    Baek, Seung Han; Lee, Dahee; Kim, Minjoo; Lee, Jong Ho; Song, Min

    2017-01-01

    Most earlier studies in the field of literature-based discovery have adopted Swanson's ABC model, which links pieces of knowledge entailed in disjoint literatures. However, the issue of their practicability remains to be solved, since most of them did not deal with the context surrounding the discovered associations and were usually not accompanied by clinical confirmation. In this study, we aim to propose a method that expands and elaborates an existing hypothesis using advanced text-mining techniques for capturing context. We extend the ABC model to allow for multiple B terms with various biological types. We were able to concretize a specific, metabolite-related hypothesis with abundant contextual information by using the proposed method. Starting from explaining the relationship between lactosylceramide and arterial stiffness, the hypothesis was extended to suggest a potential pathway consisting of lactosylceramide, nitric oxide, malondialdehyde, and arterial stiffness. An experiment by domain experts showed that it is clinically valid. The proposed method is designed to provide plausible candidates for the concretized hypothesis, based on extracted heterogeneous entities and detailed relation information, along with a reliable ranking criterion. Statistical tests conducted in collaboration with biomedical experts support the validity and practical usefulness of the method, unlike previous studies. Applying the proposed method to other cases would help biologists support existing hypotheses and readily lay out the logical processes within them.
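    The underlying ABC skeleton is easy to state in code. The toy sketch below (with the entities of the hypothesis above used as stand-in data) finds B terms that co-occur with A in some documents and with C in others while A and C never co-occur directly; the paper's contribution layers multiple typed B terms and contextual relation information on top of this:

```python
from collections import defaultdict
from itertools import combinations

# Toy corpus: each "abstract" is the set of biomedical terms tagged in it.
docs = [
    {"lactosylceramide", "nitric oxide"},
    {"nitric oxide", "arterial stiffness"},
    {"lactosylceramide", "malondialdehyde"},
    {"malondialdehyde", "arterial stiffness"},
]

def b_terms(a, c, docs):
    """Swanson-style ABC linking: B terms shared by the A-literature and
    the C-literature when A and C are not directly connected."""
    co = defaultdict(set)
    for d in docs:
        for x, y in combinations(sorted(d), 2):
            co[x].add(y); co[y].add(x)
    if c in co[a]:
        return set()                 # A and C already directly linked
    return co[a] & co[c]

print(b_terms("lactosylceramide", "arterial stiffness", docs))
# {'nitric oxide', 'malondialdehyde'}
```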
  255. Gradient-based Electrical Properties Tomography (gEPT): a Robust Method for Mapping Electrical Properties of Biological Tissues In Vivo Using Magnetic Resonance Imaging

    PubMed Central

    Liu, Jiaen; Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-Francois; He, Bin

    2014-01-01

    Purpose: To develop high-resolution electrical properties tomography (EPT) methods and investigate a gradient-based EPT (gEPT) approach which aims to reconstruct the electrical properties (EP), including conductivity and permittivity, of an imaged sample from experimentally measured B1 maps, with improved boundary reconstruction and robustness against measurement noise. Theory and Methods: Using a multi-channel transmit/receive stripline head coil, with acquired B1 maps for each coil element, and assuming a negligible Bz component compared to the transverse B1 components, a theory describing the relationship between the B1 field, the EP values and their spatial gradient is proposed. The final EP images are obtained through spatial integration over the reconstructed EP gradient. Numerical simulation, physical phantom and in vivo human experiments at 7 T were conducted to evaluate the performance of the proposed methods. Results: Reconstruction results were compared with target EP values in both simulations and phantom experiments. Human experimental results were compared with EP values in the literature. Satisfactory agreement was observed, with improved boundary reconstruction. Importantly, the proposed gEPT method proved to be more robust against noise than previously described non-gradient-based EPT approaches. Conclusion: The proposed gEPT approach holds promise for improving EP mapping quality by recovering boundary information and enhancing robustness against noise. PMID:25213371

  256. Adaptive control of the packet transmission period with solar energy harvesting prediction in wireless sensor networks.

    PubMed

    Kwon, Kideok; Yang, Jihoon; Yoo, Younghwan

    2015-04-24

    A number of research works have studied packet scheduling policies in energy-scavenging wireless sensor networks based on the predicted amount of harvested energy. Most of them aim to achieve energy neutrality, which means that an embedded system can operate perpetually while meeting application requirements. Unlike other renewable energy sources, solar energy exhibits a distinct periodicity in the amount of harvested energy over a day. Using this feature, this paper proposes a packet transmission control policy that can enhance network performance while keeping sensor nodes alive. Furthermore, this paper suggests a novel solar energy prediction method that exploits the relation between cloudiness and solar radiation. The experimental results and analyses show that the proposed packet transmission policy outperforms others in terms of deadline miss rate and data throughput. Furthermore, the proposed solar energy prediction method predicts more accurately than others, by 6.92%.

  257. Portable and low-cost colorimetric office paper-based device for phenacetin detection in seized cocaine samples.

    PubMed

    da Silva, Gabriela O; de Araujo, William R; Paixão, Thiago R L C

    2018-01-01

    An office paper-based colorimetric device is proposed as a portable, rapid, and low-cost sensor for forensic applications, aiming to detect phenacetin used as an adulterant in illicit seized materials such as cocaine.
    The proposed method uses white office paper as the substrate and wax printing technology to fabricate the detection zones. Under the optimum conditions, a linear analytical curve was obtained for phenacetin concentrations ranging from 0 to 64.52 µg/mL, described by the equation (magenta percentage color) = 1.19 + 0.458 (C_Phe / µg mL⁻¹), with R² = 0.990. The limit of detection was calculated as 3.5 µg/mL (3σ/slope). The accuracy of the proposed method was evaluated using real seized cocaine samples and a spike-recovery procedure. Copyright © 2017 Elsevier B.V. All rights reserved.
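    The calibration arithmetic reported above is standard and can be reproduced as follows (the data points and blank standard deviation are hypothetical, chosen only to be consistent with the published line and detection limit):

```python
import numpy as np

# Hypothetical calibration data: phenacetin standards (ug/mL) vs the
# magenta colour percentage read from the device's detection zone.
conc = np.array([0.0, 8.0, 16.0, 32.0, 64.52])
magenta = np.array([1.3, 4.8, 8.6, 15.9, 30.7])

slope, intercept = np.polyfit(conc, magenta, 1)       # least-squares line
pred = intercept + slope * conc
r2 = 1 - np.sum((magenta - pred) ** 2) / np.sum((magenta - magenta.mean()) ** 2)

sigma_blank = 0.53                     # std. dev. of blank responses (assumed)
lod = 3 * sigma_blank / slope          # the 3*sigma/slope criterion of the paper
print(f"magenta% = {intercept:.2f} + {slope:.3f}*C, R2 = {r2:.3f}, "
      f"LOD = {lod:.1f} ug/mL")
```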
  258. Development and optimization of an energy-regenerative suspension system under stochastic road excitation

    NASA Astrophysics Data System (ADS)

    Huang, Bo; Hsieh, Chen-Yu; Golnaraghi, Farid; Moallem, Mehrdad

    2015-11-01

    In this paper a vehicle suspension system with energy harvesting capability is developed, and an analytical methodology for the optimal design of the system is proposed. The optimization technique provides design guidelines for determining the stiffness and damping coefficients aimed at optimal performance in terms of ride comfort and energy regeneration. The corresponding performance metrics are the root-mean-square (RMS) of the sprung mass acceleration and the expectation of the generated power. Actual road roughness is considered as the stochastic excitation, defined by the ISO 8608:1995 standard road profiles, and used in deriving the optimization method. An electronic circuit is proposed to provide variable damping in real time based on the optimization rule. A test-bed is utilized and experiments under different driving conditions are conducted to verify the effectiveness of the proposed method. The test results suggest that the analytical approach is credible in determining the optimality of system performance.

  259. Integral Sliding Mode Fault-Tolerant Control for Uncertain Linear Systems Over Networks With Signals Quantization.

    PubMed

    Hao, Li-Ying; Park, Ju H; Ye, Dan

    2017-09-01

    In this paper, a new robust fault-tolerant compensation control method for uncertain linear systems over networks is proposed, where only quantized signals are assumed to be available. This approach is based on the integral sliding mode (ISM) method, in which two kinds of integral sliding surfaces are constructed. One is the continuous-state-dependent surface, used for sliding mode stability analysis, and the other is the quantization-state-dependent surface, used for ISM controller design. A scheme that combines an adaptive ISM controller and a quantization parameter adjustment strategy is then proposed. By utilizing the H∞ control analytical technique, it is shown that once the system is in the sliding mode, disturbance attenuation and fault tolerance are achieved from the initial time without requiring any fault information. Finally, the effectiveness of the proposed ISM fault-tolerant control schemes against quantization errors is demonstrated in simulation.

  260. Improvement in Brightness Uniformity by Compensating for the Threshold Voltages of Both the Driving Thin-Film Transistor and the Organic Light-Emitting Diode for Active-Matrix Organic Light-Emitting Diode Displays

    NASA Astrophysics Data System (ADS)

    Fan, Ching-Lin; Lai, Hui-Lung; Chang, Jyu-Yu

    2010-05-01

    In this paper, we propose a novel pixel design and driving method for active-matrix organic light-emitting diode (AM-OLED) displays using low-temperature polycrystalline silicon thin-film transistors (LTPS-TFTs). The proposed threshold voltage compensation circuit, which comprises five transistors and two capacitors, has been verified to supply uniform output current by simulation using the automatic integrated circuit modeling simulation program with integrated circuit emphasis (AIM-SPICE) simulator. The driving scheme of this voltage programming method includes four periods: precharging, compensation, data input, and emission. The simulation results demonstrate excellent properties, such as a low error rate of OLED anode voltage variation (<1%) and high output current. The proposed pixel circuit shows high immunity to threshold voltage deviation of both the driving poly-Si TFT and the OLED.
  261. Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates

    NASA Astrophysics Data System (ADS)

    Caliciotti, Andrea; Fasano, Giovanni; Roma, Massimo

    2016-10-01

    This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large-scale unconstrained optimization. On the one hand, the common idea of our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating, in some sense, the inverse of the Hessian matrix. Since we deal with large-scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contribution relies on using information on the objective function collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation in order to impose interpolation conditions on the objective function. In the second proposal, we adopt an ad hoc modified-secant approach in order to possibly guarantee some additional theoretical properties.
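    A preconditioner inspired by L-BFGS updates is typically applied through the standard two-loop recursion, sketched below (a generic implementation of that classical recursion, not the paper's specific low-rank formulae); the returned vector replaces the raw gradient when forming the NCG search direction:

```python
import numpy as np

def lbfgs_precondition(g, s_hist, y_hist):
    """Two-loop recursion: apply an L-BFGS approximation of the inverse
    Hessian, built from recent steps s_k and gradient changes y_k,
    to the gradient g, yielding a preconditioned NCG direction."""
    q, stack = g.copy(), []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        stack.append((rho, alpha, s, y))
    if y_hist:                                  # initial scaling H0 = gamma*I
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    for rho, alpha, s, y in reversed(stack):
        beta = rho * (y @ q)
        q += s * (alpha - beta)
    return q                                    # approximately H^{-1} g
```

    Because only the stored pairs (s_k, y_k) are needed, the scheme stays matrix-free, which is what makes it viable at large scale.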
  262. Standardized Methods for Enhanced Quality and Comparability of Tuberculous Meningitis Studies.

    PubMed

    Marais, Ben J; Heemskerk, Anna D; Marais, Suzaan S; van Crevel, Reinout; Rohlwink, Ursula; Caws, Maxine; Meintjes, Graeme; Misra, Usha K; Mai, Nguyen T H; Ruslami, Rovina; Seddon, James A; Solomons, Regan; van Toorn, Ronald; Figaji, Anthony; McIlleron, Helen; Aarnoutse, Robert; Schoeman, Johan F; Wilkinson, Robert J; Thwaites, Guy E

    2017-02-15

    Tuberculous meningitis (TBM) remains a major cause of death and disability in tuberculosis-endemic areas, especially in young children and immunocompromised adults. Research aimed at improving outcomes is hampered by poor standardization, which limits study comparison and the generalizability of results. We propose standardized methods for the conduct of TBM clinical research, drafted at an international tuberculous meningitis research meeting organized by the Oxford University Clinical Research Unit in Vietnam. We propose a core dataset including demographic and clinical information to be collected at study enrollment, important aspects related to patient management and monitoring, and standardized reporting of patient outcomes. The criteria proposed for the conduct of observational and intervention TBM studies should improve the quality of future research outputs, facilitate multicenter studies and meta-analyses of pooled data, and could provide the foundation for a global TBM data repository.

  263. Stochastic collocation using Kronrod-Patterson-Hermite quadrature with moderate delay for subsurface flow and transport

    NASA Astrophysics Data System (ADS)

    Liao, Q.; Tchelepi, H.; Zhang, D.

    2015-12-01

    Uncertainty quantification aims at characterizing the impact of input parameters on output responses and plays an important role in many areas, including subsurface flow and transport. In this study, a sparse grid collocation approach that uses a nested Kronrod-Patterson-Hermite quadrature rule with moderate delay for Gaussian random parameters is proposed to quantify the uncertainty of model solutions. The conventional stochastic collocation method serves as a promising non-intrusive approach and has drawn a great deal of interest. The collocation points are usually chosen to be Gauss-Hermite quadrature nodes, which are naturally unnested. Kronrod-Patterson-Hermite nodes have been shown to be more efficient than Gauss-Hermite nodes due to their nestedness. We propose a Kronrod-Patterson-Hermite rule with moderate delay to further improve performance. Our study demonstrates the effectiveness of the proposed method for uncertainty quantification through subsurface flow and transport examples.
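    For a single Gaussian parameter, the collocation idea reduces to Gauss-Hermite quadrature, sketched below (plain unnested Gauss-Hermite for clarity; the nested Kronrod-Patterson-Hermite rules of the paper reuse node evaluations across accuracy levels, which is what makes high-dimensional sparse grids affordable):

```python
import numpy as np

def gaussian_expectation(f, mu=0.0, sigma=1.0, level=9):
    """E[f(X)] for X ~ N(mu, sigma^2) via Gauss-Hermite quadrature:
    the model f is evaluated only at the collocation nodes."""
    nodes, weights = np.polynomial.hermite.hermgauss(level)
    x = mu + np.sqrt(2.0) * sigma * nodes        # change of variables
    return (weights @ f(x)) / np.sqrt(np.pi)

# Toy example: log-permeability uncertainty propagated through a toy
# flow response f(k) = exp(0.5*k); exact answer is exp(1/8) ~ 1.1331.
print(gaussian_expectation(lambda k: np.exp(0.5 * k), mu=0.0, sigma=1.0))
```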
  264. Morphological rational multi-scale algorithm for color contrast enhancement

    NASA Astrophysics Data System (ADS)

    Peregrina-Barreto, Hayde; Terol-Villalobos, Iván R.

    2010-01-01

    The main goal of contrast enhancement is to improve the visual appearance of an image, but it is also used to provide a transformed image for subsequent segmentation. In mathematical morphology, several works have been derived from the framework for contrast enhancement proposed by Meyer and Serra. However, when working with images with a wide range of scene brightness, for example when strong highlights and deep shadows appear in the same image, these morphological methods do not achieve the enhancement. In this work, a rational multi-scale method is proposed that uses a class of morphological connected filters called filters by reconstruction. Granulometry is used to find the most relevant scales for the filters, avoiding the use of scales of little significance. The CIE u'v'Y' space was used for the results since it takes Weber's law into account and, by avoiding the creation of new colors, permits modifying the luminance values without affecting the hue. The luminance component (Y') is enhanced separately using the proposed method and is then used to enhance the chromatic components (u', v') by means of the center-of-gravity law of color mixing.
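    The building block of such filters by reconstruction can be sketched with scikit-image (an illustrative single-scale sketch; the multi-scale rational combination and the granulometry-based scale selection of the paper are omitted):

```python
import numpy as np
from skimage.morphology import reconstruction, erosion, disk

def open_by_reconstruction(luminance, radius):
    """Connected filter at one scale: erode to suppress bright details
    smaller than the structuring element, then reconstruct by dilation so
    the surviving structures recover their exact contours (no new edges,
    which is why these filters suit contrast enhancement)."""
    seed = erosion(luminance, disk(radius))
    return reconstruction(seed, luminance, method="dilation")

# Filter the luminance channel at one granulometry-chosen scale; residues
# between the original and filtered images can then be used for stretching.
img = np.random.rand(128, 128)
filtered = open_by_reconstruction(img, radius=5)
```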
  265. A novel HTS magnetic levitation dining table

    NASA Astrophysics Data System (ADS)

    Lu, Yiyun; Huang, Huiying

    2018-05-01

    A high temperature superconducting (HTS) bulk can levitate above, or be suspended below, a permanent magnet stably. Many attractive potential applications of HTS bulks have been proposed by researchers, yet few reports of real applications exist to date. A complete small-scale HTS magnetic levitation table is proposed in this paper. The table includes an annular HTS magnetic levitation system composed of an annular HTS bulk array and an annular permanent magnet guideway (PMG). The annular PMG and the annular cryogenic vessel used to maintain the low-temperature environment of the HTS bulk array are designed; 62 YBCO bulks are placed at the bottom of the annular vessel. A 3D-model finite element numerical method is used to design the HTS bulk magnetic levitation system. Equivalent calculation rules for the magnetic levitation and guidance forces are proposed with the stability of the annular HTS magnetic levitation system in mind. Based on the proposed method, the levitation and guidance force curves of a single YBCO bulk above the PMG can be obtained. The method can also assist PMG design by checking whether a designed PMG meets the basic requirements of the HTS magnetic levitation table.

  266. Uncontrolled Manifold Reference Feedback Control of Multi-Joint Robot Arms

    PubMed Central

    Togo, Shunta; Kagawa, Takahiro; Uno, Yoji

    2016-01-01

    The brain must coordinate a redundant body to perform motion tasks. The aim of the present study is to propose a novel control model that predicts the characteristics of human joint coordination at a behavioral level. To evaluate joint coordination, an uncontrolled manifold (UCM) analysis that focuses on the trial-to-trial variance of joints has been proposed. The UCM is a nonlinear manifold associated with redundant kinematics. In this study, we directly applied the notion of the UCM to our proposed control model, called "UCM reference feedback control." To simplify the problem, the present study considered how the redundant joints were controlled to regulate a given target hand position. We considered a conventional method that pre-determines a unique target joint trajectory by inverse kinematics or another optimization method. In contrast, our proposed control method generates a UCM as a control target at each time step. The target UCM is a subspace of joint angles whose variability does not affect the hand position. The joint combination in the target UCM is then selected so as to minimize a cost function consisting of the joint torque and torque change. To examine whether the proposed method could reproduce human-like joint coordination, we conducted simulation and measurement experiments. In the simulation experiments, a three-link arm with a shoulder, elbow, and wrist regulates a one-dimensional hand target through the proposed method. In the measurement experiments, subjects performed a one-dimensional target-tracking task. The kinematics, dynamics, and joint coordination were quantitatively compared with the simulation data of the proposed method. As a result, the UCM reference feedback control quantitatively reproduced the differences in mean final hand position between initial postures, the peaks of the bell-shaped tangential hand velocity, the sum of squared torques, the mean torque change, the variance components, and the index of synergy, in agreement with the human subjects. We concluded that UCM reference feedback control can reproduce human-like joint coordination. Inferences for the motor control of the human central nervous system based on the proposed method are discussed. PMID:27462215
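    The kinematic core of UCM-style control can be illustrated with a resolved-rate sketch for a planar three-link arm (a simplified stand-in: the null-space term here merely keeps joints near zero, whereas the paper selects the joint combination inside the UCM by minimizing a torque and torque-change cost):

```python
import numpy as np

def ucm_step(q, lengths, x_target, kp=1.0, dt=0.01):
    """One control step: correct the 1-D hand-position error through the
    Jacobian pseudoinverse, and confine all remaining joint motion to the
    Jacobian null space - the subspace (UCM) where joint variability does
    not move the hand."""
    angles = np.cumsum(q)                       # absolute link angles
    x = np.sum(lengths * np.cos(angles))        # hand x-coordinate
    J = -np.array([np.sum((lengths * np.sin(angles))[i:]) for i in range(3)])[None, :]
    J_pinv = np.linalg.pinv(J)
    task = J_pinv @ np.array([kp * (x_target - x)])   # task-space correction
    null = (np.eye(3) - J_pinv @ J) @ (-0.1 * q)      # self-motion inside the UCM
    return q + dt * (task + null)

q, lengths = np.array([0.4, 0.6, -0.2]), np.array([0.30, 0.25, 0.15])
for _ in range(200):
    q = ucm_step(q, lengths, x_target=0.5)
```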
  267. The Accuracy of Anthropometric Equations to Assess Body Fat in Adults with Down Syndrome

    ERIC Educational Resources Information Center

    Rossato, Mateus; Dellagrana, Rodolfo André; da Costa, Rafael Martins; de Souza Bezerra, Ewertton; dos Santos, João Otacílio Libardoni; Rech, Cassiano Ricardo

    2018-01-01

    Background: The aim of this study was to verify the accuracy of anthropometric equations to estimate the body density (BD) of adults with Down syndrome (DS), and propose new regression equations. Materials and methods: Twenty-one males (30.5 ± 9.4 years) and 17 females (27.3 ± 7.7 years) with DS participated in this study. The reference method for…

  268. Joint sparse reconstruction of multi-contrast MRI images with graph based redundant wavelet transform.

    PubMed

    Lai, Zongying; Zhang, Xinlin; Guo, Di; Du, Xiaofeng; Yang, Yonggui; Guo, Gang; Chen, Zhong; Qu, Xiaobo

    2018-05-03

    Multi-contrast images in magnetic resonance imaging (MRI) provide abundant contrast information reflecting the characteristics of the internal tissues of human bodies, and have therefore been widely utilized in clinical diagnosis. However, long acquisition times limit the application of multi-contrast MRI. One efficient way to accelerate data acquisition is to under-sample the k-space data and then reconstruct images with a sparsity constraint. However, image quality is compromised at high acceleration factors if the images are reconstructed individually. We aim to improve the images with a jointly sparse reconstruction and a graph-based redundant wavelet transform (GBRWT). First, the sparsifying transform, GBRWT, is trained to reflect the similarity of tissue structures in multi-contrast images. Second, joint multi-contrast image reconstruction is formulated as an ℓ2,1-norm optimization problem under GBRWT representations. Third, the optimization problem is numerically solved using a derived alternating direction method. Experimental results on synthetic and in vivo MRI data demonstrate that the proposed joint reconstruction method can achieve lower reconstruction errors and better preserve image structures than the compared joint reconstruction methods. In addition, the proposed method outperforms single-image reconstruction with a joint sparsity constraint on multi-contrast images. The proposed method explores the joint sparsity of multi-contrast MRI images under a graph-based redundant wavelet transform and realizes joint sparse reconstruction of multi-contrast images. Experiments demonstrate that the proposed method outperforms the compared joint reconstruction methods as well as individual reconstructions. With this high-quality image reconstruction method, it is possible to achieve high acceleration factors by exploiting the complementary information provided by multi-contrast MRI.
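    The joint-sparsity penalty has a closed-form proximal step that drives such alternating-direction solvers. A minimal sketch of the ℓ2,1 proximal operator follows (generic; the GBRWT coefficients themselves are not constructed here):

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau*||W||_{2,1}: each row of W collects the same
    transform coefficient across all contrasts, so rows are shrunk jointly;
    a row survives only if the contrasts agree it carries signal."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# W: (n_coefficients, n_contrasts) stacked wavelet coefficients.
W = np.array([[3.0, 4.0], [0.1, -0.2]])
print(prox_l21(W, tau=1.0))   # strong joint row shrunk mildly, weak row zeroed
```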
271. A combined Fisher and Laplacian score for feature selection in QSAR based drug design using compounds with known and unknown activities

    NASA Astrophysics Data System (ADS)

    Valizade Hasanloei, Mohammad Amin; Sheikhpour, Razieh; Sarram, Mehdi Agha; Sheikhpour, Elnaz; Sharifi, Hamdollah

    2018-02-01

    Duplicate of the preceding PubMed record; the abstract is identical.

272. A Machine Learning-based Method for Question Type Classification in Biomedical Question Answering.

    PubMed

    Sarrouti, Mourad; Ouatik El Alaoui, Said

    2017-05-18

    Biomedical question type classification is one of the important components of an automatic biomedical question answering system. The performance of the latter depends directly on the performance of its question type classification system, which assigns a category to each question in order to determine the appropriate answer extraction algorithm. This study aims to automatically classify biomedical questions into one of four categories: (1) yes/no, (2) factoid, (3) list, and (4) summary. In this paper, we propose a biomedical question type classification method based on machine learning approaches. First, we extract features from biomedical questions using proposed handcrafted lexico-syntactic patterns. Then, we feed these features to machine-learning algorithms. Finally, the class label is predicted using the trained classifiers. Experimental evaluations performed on large standard annotated datasets of biomedical questions, provided by the BioASQ challenge, demonstrated that our method achieves significantly improved performance compared to four baseline systems, with roughly a 10-point increase over the best baseline in terms of accuracy. Moreover, the results show that using handcrafted lexico-syntactic patterns as features for a support vector machine (SVM) leads to the highest accuracy of 89.40%. The proposed method can automatically classify BioASQ questions into one of the four categories (yes/no, factoid, list, and summary) and produced the best classification performance among the systems compared.
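A minimal stand-in for the classification stage is shown below, using TF-IDF word and bigram features with a linear SVM in scikit-learn; the paper's handcrafted lexico-syntactic patterns are replaced by these generic features, and the four example questions are hypothetical, not BioASQ data.

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

# Tiny illustrative training set (hypothetical examples, one per category).
questions = [
    "Is tocilizumab effective for rheumatoid arthritis?",
    "Which gene is mutated in cystic fibrosis?",
    "List the drugs included in the FOLFIRINOX regimen.",
    "What is the role of the blood-brain barrier?",
]
labels = ["yes/no", "factoid", "list", "summary"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, labels)

# With real training data, the leading auxiliary verb is a strong cue
# for the yes/no class:
pred = clf.predict(["Is aspirin associated with Reye syndrome?"])
```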
273. Quality evaluation of no-reference MR images using multidirectional filters and image statistics.

    PubMed

    Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik

    2018-09-01

    This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function, whose shape parameters yielded different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with the conventional full-reference assessment methods; accordingly, it yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, demonstrating its superior performance over other no-reference IQAs. Our proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
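The distribution-fitting step can be sketched with a moment-matching estimator of the generalized Gaussian shape parameter, of the kind commonly used in no-reference IQA; this is not the authors' exact estimator, and the multidirectional filtering and standard-statistics training are assumed to happen elsewhere.

```python
import numpy as np
from scipy.special import gamma

def ggd_shape(x):
    """Moment-matching estimate of the generalized Gaussian shape parameter.

    Solves r(beta) = E[x^2] / E[|x|]^2 with
    r(beta) = Gamma(1/beta) * Gamma(3/beta) / Gamma(2/beta)^2
    by a simple grid search. For Gaussian data the estimate is ~2; distortions
    shift the fitted shape away from the "pristine" statistics.
    """
    x = x - x.mean()
    r_target = np.mean(x ** 2) / (np.mean(np.abs(x)) ** 2 + 1e-12)
    betas = np.linspace(0.1, 6.0, 2000)
    r = gamma(1.0 / betas) * gamma(3.0 / betas) / gamma(2.0 / betas) ** 2
    return betas[np.argmin(np.abs(r - r_target))]

# Directional filter responses of an MR slice would be passed in here;
# the quality score compares the fitted shapes against trained standards.
```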
274. Supporting the design of office layout meeting ergonomics requirements.

    PubMed

    Margaritis, Spyros; Marmaras, Nicolas

    2007-11-01

    This paper proposes a method and an information technology tool to support the ergonomic layout design of individual workstations in a given space (building). The proposed method shares common ideas with previous generic methods for office layout. However, it goes a step further and focuses on the cognitive tasks that must be carried out by the designer or the design team, trying to alleviate them. This is achieved in two ways: (i) by decomposing the layout design problem into six main stages, during which only a limited number of variables and requirements are considered, and (ii) by converting the ergonomics requirements into functional design guidelines. The information technology tool (ErgoOffice 0.1) automates certain phases of the layout design process and supports the design team both with its editing and graphical facilities and by providing adequate memory support.

275. The algorithm of fast image stitching based on multi-feature extraction

    NASA Astrophysics Data System (ADS)

    Yang, Chunde; Wu, Ge; Shi, Jing

    2018-05-01

    This paper proposes an improved image registration method combining Hu-based invariant moment contour information with feature point detection, aiming to solve the problems of traditional image stitching algorithms, such as a time-consuming feature point extraction process, an overload of redundant invalid information, and inefficiency. First, pixel neighborhoods are used to extract contour information, employing the Hu invariant moment as a similarity measure so that SIFT feature points are extracted only in similar regions. Then, the Euclidean distance is replaced with the Hellinger kernel function to improve the initial matching efficiency and obtain fewer mismatched points, from which the affine transformation matrix between the images is estimated. Finally, a local color mapping method is adopted to correct uneven exposure, and an improved multiresolution fusion algorithm is used to fuse the mosaic images and realize seamless stitching. Experimental results confirm the high accuracy and efficiency of the proposed method.
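The Hellinger-kernel matching step has a well-known closed form: after L1 normalization and an element-wise square root (the "RootSIFT" transformation), plain inner products compute the Hellinger kernel. A minimal sketch follows, assuming `D1` and `D2` are non-negative SIFT-like descriptor arrays; this is an illustration of the kernel trick, not the authors' full pipeline.

```python
import numpy as np

def root_descriptors(D):
    """L1-normalize then square-root: inner products of the result equal the
    Hellinger kernel of the original descriptors (the "RootSIFT" trick)."""
    D = D / np.maximum(D.sum(axis=1, keepdims=True), 1e-12)
    return np.sqrt(D)

def hellinger_match(D1, D2, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test under the Hellinger kernel."""
    S = np.clip(root_descriptors(D1) @ root_descriptors(D2).T, 0.0, 1.0)
    order = np.argsort(-S, axis=1)
    matches = []
    for i, (j1, j2) in enumerate(order[:, :2]):
        d1 = np.sqrt(2.0 - 2.0 * S[i, j1])    # distance to best candidate
        d2 = np.sqrt(2.0 - 2.0 * S[i, j2])    # distance to second-best candidate
        if d1 < ratio * d2:
            matches.append((i, j1))
    return matches
```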
276. Robust Mosaicking of Stereo Digital Elevation Models from the Ames Stereo Pipeline

    NASA Technical Reports Server (NTRS)

    Kim, Tae Min; Moratto, Zachary M.; Nefian, Ara Victor

    2010-01-01

    A robust estimation method is proposed to combine multiple observations and create consistent, accurate, dense Digital Elevation Models (DEMs) from lunar orbital imagery. The NASA Ames Intelligent Robotics Group (IRG) aims to produce higher-quality terrain reconstructions of the Moon from Apollo Metric Camera (AMC) data than is currently possible. In particular, IRG makes use of a stereo vision process, the Ames Stereo Pipeline (ASP), to automatically generate DEMs from consecutive AMC image pairs. However, the DEMs currently produced by the ASP often contain errors and inconsistencies due to image noise, shadows, etc. The proposed method addresses this problem by making use of multiple observations and by considering their goodness of fit, improving both the accuracy and robustness of the estimate. A stepwise regression method is applied to estimate the relaxed weight of each observation.
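The flavor of goodness-of-fit weighting can be conveyed with a generic per-pixel iteratively reweighted merge. The sketch below is not the ASP/IRG algorithm (which uses stepwise regression); it is a minimal robust combination of stacked DEMs with Huber-style weights.

```python
import numpy as np

def robust_merge(dems, iters=10, k=1.345):
    """Per-pixel robust weighted mean of stacked DEMs (IRLS, Huber weights).

    dems: array (n_obs, H, W) with np.nan where an observation has no data.
    Observations that fit the consensus poorly are progressively down-weighted.
    """
    est = np.nanmedian(dems, axis=0)                          # robust initial estimate
    for _ in range(iters):
        r = dems - est                                        # residuals to current estimate
        s = np.nanmedian(np.abs(r), axis=0) * 1.4826 + 1e-6   # robust scale (MAD)
        w = np.minimum(1.0, k / np.maximum(np.abs(r) / s, 1e-12))
        w = np.where(np.isnan(dems), 0.0, w)                  # missing data gets zero weight
        est = np.nansum(w * dems, axis=0) / np.maximum(w.sum(axis=0), 1e-12)
    return est
```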
277. A Scheme to Optimize Flow Routing and Polling Switch Selection of Software Defined Networks

    PubMed Central

    Chen, Huan; Li, Lemin; Ren, Jing; Wang, Yang; Zhao, Yangming; Wang, Xiong; Wang, Sheng; Xu, Shizhong

    2015-01-01

    This paper aims at minimizing the communication cost of collecting flow information in Software Defined Networks (SDN). Since the flow-based information collection method requires too much communication cost, and the recently proposed switch-based method cannot benefit from controlling flow routing, we propose to jointly optimize flow routing and polling switch selection to reduce the communication cost. To this end, the joint optimization problem is first formulated as an Integer Linear Programming (ILP) model. Since the ILP model is intractable for large networks, we also design an optimal algorithm for multi-rooted tree topologies and an efficient heuristic algorithm for general topologies. Extensive simulations show that our method can save up to 55.76% of the communication cost compared with the state-of-the-art switch-based scheme. PMID:26690571

278. The optional selection of micro-motion feature based on Support Vector Machine

    NASA Astrophysics Data System (ADS)

    Li, Bo; Ren, Hongmei; Xiao, Zhi-he; Sheng, Jing

    2017-11-01

    Targets exhibit a variety of micro-motion forms, and different forms are easily confused after modulation, which makes feature extraction and recognition difficult. Aiming at feature extraction for cone-shaped objects with different micro-motion forms, this paper proposes an optimal micro-motion feature selection method based on the support vector machine (SVM). After computing the time-frequency distribution of the radar echoes, the time-frequency spectra of objects with different micro-motion forms are compared, and features are extracted from the differences between the instantaneous frequency variations of the different micro-motions. Features are then selected with the SVM-based method to acquire the best feature subset. The results show that the proposed method is feasible under test conditions with a given signal-to-noise ratio (SNR).
279. A simplified real time method to forecast semi-enclosed basins storm surge

    NASA Astrophysics Data System (ADS)

    Pasquali, D.; Di Risio, M.; De Girolamo, P.

    2015-11-01

    Semi-enclosed basins are often prone to storm surge events. Indeed, their meteorological exposure, the presence of a large continental shelf, and their shape can lead to strong sea-level set-up. A real-time system aimed at forecasting storm surge may be of great help in protecting human activities (i.e. forecasting flooding due to storm surge events), managing ports, and safeguarding coastal safety. This paper illustrates a simple method able to forecast storm surge events in semi-enclosed basins in real time. The method is based on a mixed approach in which the results obtained by a simplified physics-based model with low computational costs are corrected by means of statistical techniques. The proposed method is applied to a point of interest located in the northern part of the Adriatic Sea. The comparison of forecasted levels against observed values shows the satisfactory reliability of the forecasts.

280. On correct evaluation techniques of brightness enhancement effect measurement data

    NASA Astrophysics Data System (ADS)

    Kukačka, Leoš; Dupuis, Pascal; Motomura, Hideki; Rozkovec, Jiří; Kolář, Milan; Zissis, Georges; Jinno, Masafumi

    2017-11-01

    This paper aims to establish confidence intervals for the quantification of brightness enhancement effects resulting from the use of pulsing bright light. It is found that the methods used so far may yield significant bias in the published results, overestimating or underestimating the enhancement effect. The authors propose to use a linear algebra method called total least squares. On an example dataset, it is shown that this method does not yield biased results, and the statistical significance of the results is computed. It is concluded over an observation set that the currently used linear algebra methods present many patterns of noise sensitivity, and that changing algorithm details leads to inconsistent results. It is thus recommended to use the method with the lowest noise sensitivity. Moreover, it is shown that this method also permits one to obtain an estimate of the confidence interval. This paper neither aims to publish results about a particular experiment nor to draw any particular conclusion about the existence or nonexistence of the brightness enhancement effect.
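Total least squares has a compact SVD solution, which is part of what makes it attractive for this kind of errors-in-variables problem. A minimal sketch for a line fit with noise on both variables:

```python
import numpy as np

def tls_fit(A, b):
    """Total least squares solution of A x ~ b via SVD.

    Unlike ordinary least squares, noise is allowed in A as well as in b,
    which is what removes the bias discussed for enhancement estimates.
    """
    n = A.shape[1]
    C = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(C, full_matrices=False)
    v = Vt[-1]                      # right singular vector of the smallest singular value
    return -v[:n] / v[n]

# Example: slope through noisy (x, y) pairs with noise on both coordinates.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 200)
y = 2.0 * x
x_n = x + 0.05 * rng.normal(size=x.size)
y_n = y + 0.05 * rng.normal(size=x.size)
slope = tls_fit(x_n.reshape(-1, 1), y_n)   # ~2, whereas OLS is biased low here
```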
281. Optimization of PID Parameters Utilizing Variable Weight Grey-Taguchi Method and Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd

    2018-03-01

    A controller that uses PID parameters requires a good tuning method in order to improve control system performance. PID tuning methods are divided into two groups: classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods, and researchers have previously integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi design of experiments (DOE) method. This is done by conducting the DOE on the two PSO optimization parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, both implemented on a hydraulic positioning system. Simulation results show that the proposed method reduced the rise time by 48.13% and the settling time by 48.57% compared to the Ziegler-Nichols method. The physical experiments likewise show that the proposed tuning method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in a hydraulic positioning system.
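A bare-bones PSO loop makes clear where the two tuned meta-parameters enter. The sketch below is generic: the hydraulic plant model and its step-response cost (e.g. ITAE or ISE) are replaced by a placeholder objective, while the inertia weight `w` and the velocity limit `vmax` correspond to the quantities the DOE optimizes.

```python
import numpy as np

def pso(cost, bounds, n=30, iters=100, w=0.7, c1=1.5, c2=1.5, vmax=0.2):
    """Minimal particle swarm optimizer; `w` is the inertia weight and
    `vmax` caps the velocity as a fraction of the search range."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, size=(n, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        v = np.clip(v, -vmax * (hi - lo), vmax * (hi - lo))   # velocity limit
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# PID gains (Kp, Ki, Kd) would be the particle coordinates; `cost` would
# simulate the closed hydraulic loop and score the step response.
gains, _ = pso(lambda p: np.sum((p - np.array([2.0, 0.5, 0.1])) ** 2),
               bounds=(np.array([0.0, 0.0, 0.0]), np.array([10.0, 5.0, 1.0])))
```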
282. Simulated maximum likelihood method for estimating kinetic rates in gene expression.

    PubMed

    Tian, Tianhai; Xu, Songlin; Gao, Junbin; Burrage, Kevin

    2007-01-01

    Kinetic rates in gene expression are a key measurement of the stability of gene products and give important information for the reconstruction of genetic regulatory networks. Recent developments in experimental technologies have made it possible to measure the numbers of transcripts and protein molecules in single cells. Although estimation methods based on deterministic models have been proposed for evaluating kinetic rates from experimental observations, these methods cannot tackle noise in gene expression, which may arise from the discrete processes of gene expression, small numbers of mRNA transcripts, fluctuations in the activity of transcription factors, and variability in the experimental environment. In this paper, we develop effective methods for estimating kinetic rates in genetic regulatory networks. The simulated maximum likelihood method is used to evaluate parameters in stochastic models described by either stochastic differential equations or discrete biochemical reactions. Different types of non-parametric density functions are used to measure the transitional probability of experimental observations. For stochastic models described by biochemical reactions, we propose to use the simulated frequency distribution to evaluate the transitional density, based on the discrete nature of stochastic simulations. A genetic optimization algorithm is used as an efficient tool to search for the optimal reaction rates. Numerical results indicate that the proposed methods can give robust estimates of kinetic rates with good accuracy.

283. Increasing the computational efficient of digital cross correlation by a vectorization method

    NASA Astrophysics Data System (ADS)

    Chang, Ching-Yuan; Ma, Chien-Ching

    2017-08-01

    This study presents a vectorization method for use in MATLAB programming, aimed at increasing the computational efficiency of digital cross correlation in sound and images, and resulting in speedups of 6.387 and 36.044 times compared with the looped expressions. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. Numerical simulation is used to verify the speedup of the proposed vectorization method, and experiments measure the quantitative transient displacement response under dynamic impact loading. The experiments involved a high-speed camera together with a fiber-optic system to measure the transient displacement of a cantilever beam struck by a steel ball. Measurement data obtained from the two methods are in excellent agreement in both the time and frequency domains, with discrepancies of only 0.68%. Numerical and experimental results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and the high precision of the correlation algorithm. The source code for building MATLAB-executable functions on Windows and Linux platforms is provided, along with a series of examples demonstrating the application of the proposed method.
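The equivalence the paper exploits, between a looped cross-correlation and its vectorized counterpart, can be shown in a few lines; NumPy is used here rather than MATLAB, and the FFT route is the standard vectorization of full cross-correlation.

```python
import numpy as np

def xcorr_loop(a, b):
    """Direct (looped) full cross-correlation, O(N*M)."""
    n, m = len(a), len(b)
    out = np.zeros(n + m - 1)
    for lag in range(n + m - 1):
        for j in range(m):
            i = lag - (m - 1) + j
            if 0 <= i < n:
                out[lag] += a[i] * b[j]
    return out

def xcorr_fft(a, b):
    """Vectorized full cross-correlation via the FFT convolution theorem."""
    size = len(a) + len(b) - 1
    fa = np.fft.rfft(a, size)
    fb = np.fft.rfft(b[::-1], size)       # correlation = convolution with reversed b
    return np.fft.irfft(fa * fb, size)

a, b = np.random.default_rng(2).normal(size=(2, 512))
assert np.allclose(xcorr_loop(a, b), xcorr_fft(a, b))   # identical results, far faster
```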
284. An effective trust-based recommendation method using a novel graph clustering algorithm

    NASA Astrophysics Data System (ADS)

    Moradi, Parham; Ahmadian, Sajad; Akhlaghian, Fardin

    2015-10-01

    Recommender systems are programs that aim to provide personalized recommendations to users for specific items (e.g. music, books) in online sharing communities or on e-commerce sites. Collaborative filtering methods are important and widely accepted types of recommender systems that generate recommendations based on the ratings of like-minded users. However, these systems confront several inherent issues, such as data sparsity and cold-start problems, caused by having few ratings against the unknowns that need to be predicted. Incorporating trust information into collaborative filtering systems is an attractive approach to resolving these problems. In this paper, we present a model-based collaborative filtering method that applies a novel graph clustering algorithm and also considers trust statements. In the proposed method, the problem space is first represented as a graph, and a sparsest-subgraph-finding algorithm is applied to the graph to find the initial cluster centers. Then, the proposed graph clustering algorithm is performed to obtain the appropriate user/item clusters. Finally, the identified clusters are used as a set of neighbors to recommend unseen items to the current active user. Experimental results based on three real-world datasets demonstrate that the proposed method outperforms several state-of-the-art recommender system methods.

285. [An improved low spectral distortion PCA fusion method]

    PubMed

    Peng, Shi; Zhang, Ai-Wu; Li, Han-Lun; Hu, Shao-Xing; Meng, Xian-Gang; Sun, Wei-Dong

    2013-10-01

    Aiming at the spectral distortion produced in the PCA fusion process, this paper proposes an improved low-spectral-distortion PCA fusion method. The method uses the NCUT (normalized cut) image segmentation algorithm to divide a complex hyperspectral remote sensing image into multiple sub-images, increasing the separability of samples and weakening the spectral distortion of traditional PCA fusion. A pixel-similarity weighting matrix and masks are produced using graph theory and clustering theory, and these masks are used to cut the hyperspectral image and the high-resolution image into corresponding sub-region objects. Each pair of corresponding sub-region objects from the two images is fused using the PCA method, and all sub-regional fusion results are spliced together to produce the new image. In the experiment, Hyperion hyperspectral data and RapidEye data were used. The results show that the proposed method matches the traditional approach in enhancing spatial resolution while offering a greater ability to preserve spectral fidelity.
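For orientation, the classical PCA component-substitution fusion that the paper refines is sketched below, applied globally for brevity; the NCUT segmentation and per-object fusion that reduce spectral distortion are not reproduced.

```python
import numpy as np

def pca_fuse(hs, pan):
    """Classic PCA fusion: substitute the first principal component of the
    low-resolution bands with a histogram-matched high-resolution image.

    hs: (H, W, B) upsampled hyperspectral cube; pan: (H, W) high-resolution image.
    """
    H, W, B = hs.shape
    X = hs.reshape(-1, B).astype(float)
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    pcs = Xc @ Vt.T                              # principal component scores
    p = pan.reshape(-1).astype(float)
    # Match the pan image to the first PC's mean/variance before substitution.
    p = (p - p.mean()) / (p.std() + 1e-12) * pcs[:, 0].std() + pcs[:, 0].mean()
    pcs[:, 0] = p
    fused = pcs @ Vt + mu                        # back-transform to band space
    return fused.reshape(H, W, B)
```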
286. Ship Detection Based on Multiple Features in Random Forest Model for Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Li, N.; Ding, L.; Zhao, H.; Shi, J.; Wang, D.; Gong, X.

    2018-04-01

    A novel ship detection method is proposed that aims to make full use of both the spatial and spectral information in hyperspectral images. First, a band with a high signal-to-noise ratio in the near-infrared or short-wave infrared range is used to segment land and sea with the Otsu threshold segmentation method. Second, multiple features, both spectral and textural, are extracted from the hyperspectral images: principal component analysis (PCA) is used to extract spectral features, and the grey-level co-occurrence matrix (GLCM) is used to extract texture features. Finally, a Random Forest (RF) model is introduced to detect ships based on the extracted features. To illustrate the effectiveness of the method, we carried out experiments on EO-1 data, comparing a single feature against different combinations of multiple features. Compared with the traditional single-feature method and a Support Vector Machine (SVM) model, the proposed method reliably detects ships against complex backgrounds and effectively improves detection accuracy.
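A hedged sketch of the feature side of such a pipeline follows, with PCA spectral features, a hand-rolled GLCM contrast statistic standing in for the full GLCM texture set, and a scikit-learn random forest; `X_spec`, `patches`, and `y` are hypothetical inputs, not the EO-1 data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

def glcm_contrast(patch, levels=16):
    """Contrast statistic of a grey-level co-occurrence matrix (horizontal
    neighbours only, for brevity; a full GLCM set would use several offsets)."""
    top = patch.max()
    q = np.floor(patch / top * (levels - 1)).astype(int) if top > 0 \
        else np.zeros(patch.shape, int)
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    glcm /= max(glcm.sum(), 1.0)
    i, j = np.indices(glcm.shape)
    return float(((i - j) ** 2 * glcm).sum())

def build_features(X_spec, patches, n_pc=5):
    """Stack spectral PCA scores with one texture statistic per sample."""
    pcs = PCA(n_components=n_pc).fit_transform(X_spec)
    tex = np.array([[glcm_contrast(p)] for p in patches])
    return np.hstack([pcs, tex])

# clf = RandomForestClassifier(n_estimators=200, random_state=0)
# clf.fit(build_features(X_spec, patches), y)   # y: ship / non-ship labels
```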
287. Developing and Testing a Method to Measure Academic Societal Impact

    ERIC Educational Resources Information Center

    Phillips, Paul; Moutinho, Luiz; Godinho, Pedro

    2018-01-01

    This paper aims to extend understanding of the business and societal impact of academic research. From a business school perspective, it takes stock of the role of academic research and relevance in business and society. The proposed conceptual framework highlights the forces influencing the pursuit of academic rigour and relevance in…

288. Demographic Accounting and Model-Building. Education and Development Technical Reports

    ERIC Educational Resources Information Center

    Stone, Richard

    This report describes and develops a model for coordinating a variety of demographic and social statistics within a single framework. The framework proposed, together with its associated methods of analysis, serves both general and specific functions. The general aim of these functions is to give numerical definition to the pattern of society and…

289. Analysis of Tasks in Pre-Service Elementary Teacher Education Courses

    ERIC Educational Resources Information Center

    Sierpinska, Anna; Osana, Helena

    2012-01-01

    This paper presents some results of research aimed at contributing to the development of a professional knowledge base for teachers of elementary mathematics methods courses, called here "teacher educators." We propose that a useful unit of analysis for this knowledge could be the tasks in which teacher-educators engage pre-service…

290. A Core Journal Decision Model Based on Weighted Page Rank

    ERIC Educational Resources Information Center

    Wang, Hei-Chia; Chou, Ya-lin; Guo, Jiunn-Liang

    2011-01-01

    Purpose: The paper's aim is to propose a core journal decision method, called the local impact factor (LIF), which can evaluate the requirements of the local user community by combining both the access rate and the weighted impact factor, and by tracking citation information on the local users' articles. Design/methodology/approach: Many…

291. Standard and Robust Methods in Regression Imputation

    ERIC Educational Resources Information Center

    Moraveji, Behjat; Jafarian, Koorosh

    2014-01-01

    The aim of this paper is to provide an introduction to new imputation algorithms for estimating missing values from official statistics in larger data sets of data pre-processing, or outliers. The goal is to propose a new algorithm called IRMI (iterative robust model-based imputation). This algorithm is able to deal with all challenges like…

292. Automatic Generation of Tests from Domain and Multimedia Ontologies

    ERIC Educational Resources Information Center

    Papasalouros, Andreas; Kotis, Konstantinos; Kanaris, Konstantinos

    2011-01-01

    The aim of this article is to present an approach for generating tests in an automatic way. Although other methods have already been reported in the literature, the proposed approach is based on ontologies, representing both domain and multimedia knowledge. The article also reports on a prototype implementation of this approach, which…
293. Measuring Globalization: Existing Methods and Their Implications for Teaching Global Studies and Forecasting

    ERIC Educational Resources Information Center

    Zinkina, Julia; Korotayev, Andrey; Andreev, Aleksey I.

    2013-01-01

    Purpose: The purpose of this paper is to encourage discussion regarding the existing approaches to globalization measurement (taking mainly the form of indices and rankings) and their shortcomings in terms of applicability to developing Global Studies curricula. Another aim is to propose an outline for a globalization measurement methodology…

294. Measuring general relativity effects in a terrestrial lab by means of laser gyroscopes

    NASA Astrophysics Data System (ADS)

    Beverini, N.; Allegrini, M.; Beghi, A.; Belfi, J.; Bouhadef, B.; Calamai, M.; Carelli, G.; Cuccato, D.; Di Virgilio, A.; Maccioni, E.; Ortolan, A.; Porzio, A.; Santagata, R.; Solimeno, S.; Tartaglia, A.

    2014-07-01

    GINGER is a proposed three-dimensional array of laser gyroscopes with the aim of measuring the Lense-Thirring effect, predicted by the theory of general relativity, in a terrestrial laboratory environment. We discuss the required accuracy, the methods to achieve it, and the preliminary experimental work in this direction.

295. Grasping the Social through Movies

    ERIC Educational Resources Information Center

    Kennedy, Nilgun Fehim; Senses, Nazli; Ayan, Pelin

    2011-01-01

    In Turkey, one of the major challenges facing university education is the indifference of young people towards social issues. The aim of this article is to contribute to the "practice" of critical pedagogy by proposing that showing movies is an important critical teaching method with the power both to give pleasure to the students…
296. Comorbidities impacting on prognosis after lung transplant.

    PubMed

    Vaquero Barrios, José Manuel; Redel Montero, Javier; Santos Luna, Francisco

    2014-01-01

    The aim of this review is to give an overview of the clinical circumstances presenting before lung transplant that may have negative repercussions on the short- and long-term prognosis of the transplant. Methods for the screening and diagnosis of common comorbidities with a negative impact on the prognosis of the transplant are proposed, for both pulmonary and extrapulmonary diseases, and measures aimed at correcting these factors are discussed. Coordination and information exchange between referral centers and transplant centers would allow these comorbidities to be detected and corrected, with the aim of minimizing the risks and improving the life expectancy of transplant receivers. Copyright © 2013 SEPAR. Published by Elsevier Espana. All rights reserved.

297. Prediction of anti-cancer drug response by kernelized multi-task learning.

    PubMed

    Tan, Mehmet

    2016-10-01

    Chemotherapy and targeted therapy are two of the main treatment options for many types of cancer. Due to the heterogeneous nature of cancer, the success of the therapeutic agents differs among patients. In this sense, determining the chemotherapeutic response of malignant cells is essential for establishing personalized treatment protocols and designing new drugs. With recent technological advances producing large amounts of pharmacogenomic data, in silico methods have become important tools for achieving this aim. Data produced from cancer cell lines provide a test bed for machine learning algorithms that try to predict the response of cancer cells to different agents. The potential use of these algorithms in drug discovery/repositioning and personalized treatment motivated us, in this study, to work on predicting drug response by exploiting recent pharmacogenomic databases. We aim to improve the prediction of drug response of cancer cell lines, and propose a method that employs multi-task learning to improve learning by transfer, and kernels to extract non-linear relationships. The method outperforms three state-of-the-art algorithms on three anti-cancer drug screen datasets: we achieved mean squared errors of 3.305 and 0.501 on two large-scale screen data sets, and an error of 0.556 on a recent challenge dataset. We report the methodological comparison results as well as the performance of the proposed algorithm on each single drug. The results show that the proposed method is a strong candidate for predicting the drug response of cancer cell lines in silico for pre-clinical studies. The source code of the algorithm and the data used can be obtained from http://mtan.etu.edu.tr/Supplementary/kMTrace/. Copyright © 2016 Elsevier B.V. All rights reserved.
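The paper's exact kernelized multi-task formulation is not reproduced here, but its core intuition, letting one non-linear model share information across drugs by learning on (cell line, drug) descriptor pairs, can be sketched with kernel ridge regression; all arrays below are synthetic placeholders.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Each training row pairs a cell-line descriptor (e.g. expression features)
# with a drug descriptor; a single RBF model over the joint space lets
# related tasks (drugs) borrow strength from one another.
rng = np.random.default_rng(3)
cells = rng.normal(size=(50, 20))            # hypothetical cell-line features
drugs = rng.normal(size=(8, 10))             # hypothetical drug features
pairs = np.array([np.r_[c, d] for c in cells for d in drugs])
y = rng.normal(size=len(pairs))              # placeholder responses (e.g. log IC50)

model = KernelRidge(kernel="rbf", gamma=0.05, alpha=1.0).fit(pairs, y)
pred = model.predict(np.r_[cells[0], drugs[0]].reshape(1, -1))
```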
298. A Robust Random Forest-Based Approach for Heart Rate Monitoring Using Photoplethysmography Signal Contaminated by Intense Motion Artifacts.

    PubMed

    Ye, Yalan; He, Wenwen; Cheng, Yunfei; Huang, Wenxia; Zhang, Zhilin

    2017-02-16

    The estimation of heart rate (HR) with wearable devices is of interest in fitness. Photoplethysmography (PPG) is a promising approach to HR estimation due to its low cost; however, it is easily corrupted by motion artifacts (MA). In this work, a robust two-stage approach based on random forests is proposed for accurately estimating HR from PPG signals contaminated by intense motion artifacts. Stage 1 is a hybrid method that removes MA with low computational complexity: two MA removal algorithms are combined by an accurate binary decision algorithm that decides whether or not to adopt the second algorithm. Stage 2 is a random forest-based spectral peak-tracking algorithm that locates the spectral peak corresponding to HR, formulating spectral peak tracking as a pattern classification problem. Experiments on the PPG datasets of the 22 subjects used in the 2015 IEEE Signal Processing Cup showed that the proposed approach achieved an average absolute error of 1.65 beats per minute (BPM). Compared to state-of-the-art approaches, the proposed approach has better accuracy and robustness to intense motion artifacts, indicating its potential use in wearable sensors for health monitoring and fitness tracking.
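The candidate-generation side of Stage 2 can be sketched simply. Where the paper scores candidate spectral peaks with a trained random forest, the sketch below falls back on the simplest hand-crafted rule, proximity to the previous estimate, to show where such a classifier would plug in.

```python
import numpy as np
from scipy.signal import periodogram, find_peaks

def track_hr(ppg_window, fs, prev_bpm):
    """Pick the spectral peak closest to the previous HR estimate.

    A learned peak classifier (as in the paper) would replace the
    nearest-to-previous rule in the final line.
    """
    f, p = periodogram(ppg_window, fs=fs)
    band = (f >= 0.7) & (f <= 3.5)              # 42-210 BPM physiological band
    idx, _ = find_peaks(p * band)
    if len(idx) == 0:
        return prev_bpm                          # no usable peak: hold the estimate
    bpm = f[idx] * 60.0
    return bpm[np.argmin(np.abs(bpm - prev_bpm))]
```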
299. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    PubMed

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with highly nonlinear distributions. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC that remedies this drawback, but KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] was proposed to learn a kernel from a set of base kernels; however, MKL-SRC considers only the within-class reconstruction residual and ignores the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC) and use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm learns a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be found efficiently with the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm over state-of-the-art methods.

300. Proposing national identification number on dental prostheses as universal personal identification code - A revolution in forensic odontology

    PubMed Central

    Baad, Rajendra K.; Belgaumi, Uzma; Vibhute, Nupura; Kadashetti, Vidya; Chandrappa, Pramod Redder; Gugwad, Sushma

    2015-01-01

    The proper identification of a decedent is important not only for humanitarian and emotional reasons, but also for legal and administrative purposes. During the reconstructive identification process, all necessary information is gathered from the unknown body of the victim, so that an objective reconstructed profile can be established. Denture marking systems are being used in various situations, and a number of direct and indirect methods have been reported. We propose that national identification numbers be incorporated in all removable and fixed prostheses, so as to adopt a single, definitive universal personal identification code, with the aim of achieving a uniform, standardized, easy, and fast identification method worldwide for forensic identification. PMID:26005294
301. Research on Fault Rate Prediction Method of T/R Component

    NASA Astrophysics Data System (ADS)

    Hou, Xiaodong; Yang, Jiangping; Bi, Zengjun; Zhang, Yu

    2017-07-01

    The T/R component is an important part of large phased-array radar antennas; because of its large numbers and high fault rate, fault prediction for it is of great significance. Aiming at the problems of the traditional grey model GM(1,1) in practical operation, this paper establishes a discrete grey model based on the original one, introduces an optimization factor to optimize the background value, and adds a linear form of the prediction model, yielding an improved linear-regression discrete grey model. Finally, an example is simulated and compared with other models. The results show that the proposed method has higher accuracy, a simple solution, and a wider scope of application.
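The baseline the paper improves on, the classical GM(1,1) grey model, fits in a dozen lines; the improved discrete model's optimized background value and linear correction term are built on top of exactly these quantities. The fault-rate series below is hypothetical.

```python
import numpy as np

def gm11_forecast(x0, horizon=4):
    """Classical grey model GM(1,1).

    Fits the whitened equation dx1/dt + a*x1 = b on the accumulated series
    x1 and restores predictions to the original scale by differencing.
    """
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                # accumulated (AGO) series
    z = 0.5 * (x1[1:] + x1[:-1])                      # background values
    B = np.column_stack([-z, np.ones_like(z)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]  # developing / grey coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(x1_hat, prepend=x1_hat[0] - x0[0]) # restore to original scale

rates = [0.82, 0.85, 0.90, 0.93, 0.99]                # hypothetical fault-rate series
print(gm11_forecast(rates)[-4:])                      # next four predictions
```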
302. Image reconstruction of dynamic infrared single-pixel imaging system

    NASA Astrophysics Data System (ADS)

    Tong, Qi; Jiang, Yilin; Wang, Haiyan; Guo, Limin

    2018-03-01

    The single-pixel imaging technique has recently received much attention. Most current single-pixel imaging is aimed at relatively static targets or a fixed imaging system, being limited by the number of measurements received through the single detector. In this paper, we propose a novel dynamic compressive imaging method for the infrared (IR) rosette scanning system, addressing the imaging problem that arises when the imaging system itself moves. The relationships between adjacent target images and the scene are analyzed under different system movement scenarios, and these relationships are used to build dynamic compressive imaging models. Simulation results demonstrate that the proposed method can improve the reconstruction quality of IR images and enhance the contrast between target and background in the presence of system movement.

303. Human resource recommendation algorithm based on ensemble learning and Spark

    NASA Astrophysics Data System (ADS)

    Cong, Zihan; Zhang, Xingming; Wang, Haoxiang; Xu, Hongjie

    2017-08-01

    Aiming at the problem of "information overload" in the human resources industry, this paper proposes a human resource recommendation algorithm based on ensemble learning. The algorithm considers the characteristics and behaviours of both job seekers and jobs in real business circumstances. First, the algorithm uses two ensemble learning methods, Bagging and Boosting, and the outputs from both are merged to form the user interest model. Based on this user interest model, job recommendations can be extracted for users. The algorithm is implemented as a parallelized recommendation system on Spark. A set of experiments has been performed and analysed, showing that the proposed algorithm achieves significant improvements in accuracy, recall rate, and coverage compared with recommendation algorithms such as UserCF and ItemCF.
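The merge of the two learners can be sketched with scikit-learn stand-ins; the Spark parallelization and the real job-seeker interaction features are out of scope, and a simple convex combination of predicted probabilities plays the role of the merged user interest model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier

# Hypothetical stand-in for "seeker clicked / applied to this job" data.
X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

bag = BaggingClassifier(n_estimators=50, random_state=0).fit(X, y)
boost = GradientBoostingClassifier(random_state=0).fit(X, y)

# Merge the two learners' outputs into one interest score, mirroring how
# the paper merges Bagging and Boosting outputs into the user interest model.
interest = 0.5 * bag.predict_proba(X)[:, 1] + 0.5 * boost.predict_proba(X)[:, 1]
ranked_jobs = interest.argsort()[::-1]        # recommend the highest-scoring items
```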
304. ALOHA—Astronomical Light Optical Hybrid Analysis - From experimental demonstrations to a MIR instrument proposal

    NASA Astrophysics Data System (ADS)

    Lehmann, L.; Darré, P.; Szemendera, L.; Gomes, J. T.; Baudoin, R.; Ceus, D.; Brustlein, S.; Delage, L.; Grossard, L.; Reynaud, F.

    2018-04-01

    This paper gives an overview of the Astronomical Light Optical Hybrid Analysis (ALOHA) project, dedicated to investigating a new method for high-resolution imaging in mid-infrared astronomy. The proposal aims to use a non-linear frequency conversion process to shift thermal infrared radiation to a shorter wavelength domain compatible with proven technology such as guided optics and detectors. After a description of the principle, we summarise the evolution of our study from the seminal high-flux experiments to the latest results in the photon-counting regime.

305. A rapid place name locating algorithm based on ontology qualitative retrieval, ranking and recommendation

    NASA Astrophysics Data System (ADS)

    Fan, Hong; Zhu, Anfeng; Zhang, Weixia

    2015-12-01

    To enable the rapid localization of 12315 telephone complaints, and addressing the natural-language expression of such complaints, a semantic retrieval framework is proposed based on natural language parsing and geographical-name ontology reasoning. Within this framework, a search result ranking and recommendation algorithm is proposed that considers both geo-name conceptual similarity and spatial geometric-relation similarity. The experiments show that this method can assist operators in quickly locating 12315 complaints, increasing customer satisfaction with the industry and commerce administration.

306. An evaluation method for nanoscale wrinkle

    NASA Astrophysics Data System (ADS)

    Liu, Y. P.; Wang, C. G.; Zhang, L. M.; Tan, H. F.

    2016-06-01

    In this paper, a spectrum-based wrinkling analysis method via two-dimensional Fourier transformation is proposed to address the difficulty of nanoscale wrinkle evaluation. It evaluates wrinkle characteristics, including wrinkling wavelength and direction, using only a single wrinkle image. Based on this method, the evaluation results for nanoscale wrinkle characteristics agree with published experimental results within an error of 6%. The method is also verified to be appropriate for macro-scale wrinkle evaluation, without scale limitations. Spectrum-based wrinkling analysis is thus an effective method for nanoscale evaluation, contributing to revealing the mechanism of nanoscale wrinkling.
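The core of such a spectrum-based analysis is compact: take the 2-D Fourier transform of the height map and read the wavelength and direction off the dominant spatial-frequency peak. A minimal sketch with a synthetic check follows; it illustrates the idea rather than the authors' exact procedure.

```python
import numpy as np

def wrinkle_params(img, dx=1.0):
    """Dominant wrinkle wavelength and direction from one topography image.

    Takes the 2-D FFT (DC removed), then reads wavelength and direction off
    the strongest spatial-frequency peak; dx is the pixel pitch.
    """
    F = np.fft.fftshift(np.abs(np.fft.fft2(img - img.mean())))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0], d=dx))
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1], d=dx))
    i, j = np.unravel_index(np.argmax(F), F.shape)
    f = np.hypot(fy[i], fx[j])
    wavelength = 1.0 / f if f > 0 else np.inf
    direction = np.degrees(np.arctan2(fy[i], fx[j])) % 180.0  # orientation mod 180 degrees
    return wavelength, direction

# Synthetic check: sinusoidal wrinkles of wavelength 8 pixels along x.
x = np.arange(128)
img = np.tile(np.sin(2 * np.pi * x / 8.0), (128, 1))
w, d = wrinkle_params(img)    # w ~ 8.0 pixels, d ~ 0 degrees
```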
307. Exponential Methods for the Time Integration of Schroedinger Equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cano, B.; Gonzalez-Pachon, A.

    2010-09-30

    We consider exponential methods of second order in time for the integration of the cubic nonlinear Schroedinger equation, with the aim of exploiting the special structure of this equation. We therefore examine the symmetry, symplecticity, and invariant-approximation properties of the proposed methods, which allow integration to long times with reasonable accuracy. Computational efficiency is also our aim, so we perform numerical computations to compare the methods considered, and we conclude that explicit Lawson schemes projected on the norm of the solution are an efficient tool for integrating this equation.

308. Deep imitation learning for 3D navigation tasks.

    PubMed

    Hussein, Ahmed; Elyan, Eyad; Gaber, Mohamed Medhat; Jayne, Chrisina

    2018-01-01

    Deep learning techniques have shown success in learning from raw high-dimensional data in various applications. While deep reinforcement learning has recently gained popularity as a method for training intelligent agents, the use of deep learning in imitation learning has been scarcely explored. Imitation learning can be an efficient way to teach intelligent agents by providing a set of demonstrations to learn from, but generalizing to situations not represented in the demonstrations can be challenging, especially in 3D environments. In this paper, we propose a deep imitation learning method for learning navigation tasks from demonstrations in a 3D environment, in which the supervised policy is refined using active learning in order to generalize to unseen situations. This approach is compared with two popular deep reinforcement learning techniques: deep Q-networks (DQN) and asynchronous advantage actor-critic (A3C). The proposed method and the reinforcement learning methods all employ deep convolutional neural networks and learn directly from raw visual input. Methods for combining learning from demonstrations with learning from experience are also investigated; this combination aims to join the generalization ability of learning by experience with the efficiency of learning by imitation. The proposed methods are evaluated on four navigation tasks in a 3D simulated environment. Navigation tasks are a typical problem relevant to many real applications; they pose the challenge of requiring demonstrations of long trajectories to reach the target while providing only delayed (usually terminal) rewards to the agent. The experiments show that the proposed method can successfully learn navigation tasks from raw visual input, whereas the learning-from-experience methods fail to learn an effective policy. Moreover, active learning is shown to significantly improve the performance of the initially learned policy using a small number of active samples.
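As a loose illustration of the supervised (behavioural-cloning) stage of such a pipeline, the sketch below trains a classifier to map observed frames to expert actions and flags the least-confident states for active querying. The MLP on flattened frames is a stand-in for the paper's deep CNN, and all data are random placeholders:

```python
# Toy behavioural cloning plus an uncertainty-based active-learning query.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
H, W, ACTIONS = 32, 32, 4                     # frame size, action set (assumed)
frames = rng.random((500, H, W))              # demonstrated observations
actions = rng.integers(0, ACTIONS, size=500)  # expert actions per frame

policy = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300,
                       random_state=0)
policy.fit(frames.reshape(len(frames), -1), actions)

# Active-learning style refinement: query the expert on the states where the
# current policy is least confident, then retrain on the union of the data.
proba = policy.predict_proba(frames.reshape(len(frames), -1))
uncertain = np.argsort(proba.max(axis=1))[:50]   # lowest-confidence states
# (in practice the expert would label *new* states encountered by the agent)
```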
309. Mass type-specific sparse representation for mass classification in computer-aided detection on mammograms

    PubMed Central

    2013-01-01

    Background: Breast cancer is the leading cancer among women in terms of both incidence and mortality. For this reason, much research effort has been devoted to developing Computer-Aided Detection (CAD) systems for the early detection of breast cancer on mammograms. In this paper, we propose a novel dictionary configuration underpinning sparse representation-based classification (SRC). The key idea of the proposed algorithm is to improve sparsity with respect to mass margins, for the purpose of improving classification performance in CAD systems.

    Methods: The aim of the proposed SRC framework is to construct separate dictionaries according to the types of mass margins. The underlying idea is that separated dictionaries can enhance the sparsity of the mass (true-positive) class, leading to improved performance in differentiating mammographic masses from normal tissues (false positives). When a mass sample is given for classification, the sparse solutions based on the corresponding dictionaries are solved separately and combined at the score level. Experiments were performed on both the Digital Database for Screening Mammography (DDSM) and a clinical Full-Field Digital Mammogram (FFDM) database. In our experiments, the sparsity concentration in the true class (SCTC) and the area under the receiver operating characteristic (ROC) curve (AUC) were measured for the comparison between the proposed method and a conventional single-dictionary approach. In addition, a support vector machine (SVM), a state-of-the-art classifier extensively used for mass classification, was used for comparison.

    Results: Compared with the conventional single-dictionary configuration, the proposed approach improves the SCTC by up to 13.9% and 23.6% on the DDSM and FFDM databases, respectively. Moreover, the proposed method improves the AUC by 8.2% and 22.1% on the DDSM and FFDM databases, respectively. Compared with the SVM classifier, the proposed method improves the AUC by 2.9% and 11.6% on the DDSM and FFDM databases, respectively.

    Conclusions: The proposed dictionary configuration is found to improve the sparsity of the dictionaries, resulting in enhanced classification performance. Moreover, the results show that the proposed method outperforms the conventional SVM classifier in distinguishing breast masses with various margins from normal tissues.

    PMID:24564973
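The dictionary configuration can be sketched compactly: one dictionary per margin type, a sparse code solved against each, and the class residuals combined at score level. Orthogonal matching pursuit stands in for whichever sparse solver the authors used, and the dictionaries, labels, and query below are random placeholders:

```python
# Hedged sketch of SRC with margin-specific dictionaries, score-level fusion.
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
DIM, ATOMS = 64, 40
# One dictionary per margin type; columns 0..19 = mass atoms, 20..39 = normal.
dictionaries = {m: rng.normal(size=(DIM, ATOMS))
                for m in ("circumscribed", "spiculated", "ill-defined")}
labels = np.array([0] * 20 + [1] * 20)         # 0 = mass, 1 = normal tissue

def src_residuals(D, x, n_nonzero=5):
    """Class-wise reconstruction residuals; lower residual = better fit."""
    code = orthogonal_mp(D, x, n_nonzero_coefs=n_nonzero).ravel()
    out = []
    for c in (0, 1):
        part = np.where(labels == c, code, 0.0)    # keep only class-c atoms
        out.append(np.linalg.norm(x - D @ part))
    return out                                     # [mass, normal]

x = rng.normal(size=DIM)                           # query ROI feature vector
scores = np.array([src_residuals(D, x) for D in dictionaries.values()])
combined = scores.mean(axis=0)                     # score-level combination
print("mass" if combined[0] < combined[1] else "normal")
```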
310. Probabilistic peak detection in CE-LIF for STR DNA typing.

    PubMed

    Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel

    2017-07-01

    In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at an exhaustive use of the raw electropherogram data from a laser-induced fluorescence multi-capillary-electrophoresis system. Because the raw data are informative down to the single data point, conventional threshold-based approaches discard relevant forensic information early in the data-analysis pipeline. Our proposed method instead assigns each data point a posterior probability reflecting its relevance with respect to the peak detection criteria. Low-intensity peaks generated by a truly present allele can thus contribute evidential value, rather than being discarded outright while a potential allele drop-out is contemplated. This way of working utilizes the information available within each individual data point and avoids making early (binary) decisions in the data analysis that can lead to error propagation. The proposed method was tested and compared with the application of a set threshold, as is current practice in forensic STR DNA profiling, and was found to yield a significant improvement in the number of alleles identified, regardless of peak height and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

311. A local immunization strategy for networks with overlapping community structure

    NASA Astrophysics Data System (ADS)

    Taghavian, Fatemeh; Salehi, Mostafa; Teimouri, Mehdi

    2017-02-01

    Since full-coverage treatment is not feasible with limited resources, an immunization strategy is needed to distribute the available vaccines effectively. At the same time, the structure of the contact network among people has a significant impact on the epidemics of infectious diseases (such as SARS and influenza) in a population. Network-based immunization strategies therefore aim to reduce the spreading rate by removing vaccinated nodes from the contact network; such strategies try to identify the nodes most important to epidemic spreading over the network. In this paper, we address the effect of nodes that overlap several communities on epidemic spreading. The proposed strategy is an optimized random-walk-based selection of these nodes. The whole process is local, i.e. it requires contact-network information only at the level of nodes, and is thus applicable to large-scale and unknown networks for which global methods are usually unrealizable. Our simulation results on different synthetic and real networks show that the proposed method outperforms existing local methods in most cases; in particular, it performs better for networks with strong community structure, highly overlapping node membership, or small communities.
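A plain (unoptimized) version of the random-walk selection is easy to sketch: walk the contact network, count visits, and vaccinate the most-visited nodes, which in community-structured graphs tend to include the bridging overlap nodes. The budget, walk length, and restart rule below are illustrative assumptions, not the paper's exact procedure:

```python
# Simplified local, random-walk-based immunization pick (illustrative only).
import random
import networkx as nx

def random_walk_immunize(G, budget, steps=10000, seed=0):
    rng = random.Random(seed)
    visits = {v: 0 for v in G}
    node = rng.choice(list(G))
    for _ in range(steps):
        neighbors = list(G[node])
        if not neighbors:                  # dead end: restart the walk
            node = rng.choice(list(G))
            continue
        node = rng.choice(neighbors)
        visits[node] += 1
    return sorted(visits, key=visits.get, reverse=True)[:budget]

G = nx.connected_caveman_graph(10, 8)      # toy community-structured graph
vaccinated = random_walk_immunize(G, budget=10)
G.remove_nodes_from(vaccinated)            # immunized nodes leave the network
```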
312. On the use of Schwarz-Christoffel conformal mappings to the grid generation for global ocean models

    NASA Astrophysics Data System (ADS)

    Xu, S.; Wang, B.; Liu, J.

    2015-10-01

    In this article we propose two grid generation methods for global ocean general circulation models. In contrast to conventional dipolar or tripolar grids, the proposed methods are based on Schwarz-Christoffel conformal mappings, which map regions with user-prescribed, irregular boundaries onto regions with regular boundaries (i.e., disks, slits, etc.). The first method aims at improving existing dipolar grids: compared with existing grids, the sample grid achieves a better trade-off between enlarging the latitudinal-longitudinal portion and keeping the overall grid-cell size transition smooth. The second method addresses more modern and advanced grid-design requirements arising from high-resolution and multi-scale ocean modeling; the generated grids can potentially achieve alignment of grid lines with large-scale coastlines, enhanced spatial resolution in coastal regions, and easier computational load balancing. Since the grids are orthogonal curvilinear, they can be readily used by the majority of ocean general circulation models that are based on finite differences and require grid orthogonality. The proposed grid generation algorithms can also be applied to grid generation for regional ocean modeling where a complex land-sea distribution is present.

313. Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme.

    PubMed

    Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun

    2015-01-01

    Chest radiography is a common diagnostic imaging test that contains an enormous amount of information about a patient, yet its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms, so enhancement of the images is indispensable for improving the visibility of details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Contrast-limited adaptive histogram equalization and noise suppression are performed simultaneously in the wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of the morphological filters allows adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from the Japanese Society of Radiological Technology database, and the results are compared with conventional enhancement techniques such as histogram equalization, contrast-limited adaptive histogram equalization, Retinex, and some recently proposed methods. The experimental results reveal that the proposed method can remarkably improve image contrast while preserving sensitive chest tissue information, so that radiologists may arrive at a more precise interpretation.
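The CLAHE plus top-hat/bottom-hat chain maps directly onto OpenCV primitives. A hedged sketch follows, assuming an 8-bit grayscale radiograph in a placeholder file 'xray.png'; the parameter values are arbitrary and the paper's wavelet-domain denoising step is omitted for brevity:

```python
# CLAHE followed by morphological top-hat/bottom-hat enhancement (sketch).
import cv2
import numpy as np

img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)   # placeholder file name

# Contrast-limited adaptive histogram equalization.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq = clahe.apply(img)

# Top-hat extracts bright details, bottom-hat (black-hat) dark details.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
tophat = cv2.morphologyEx(eq, cv2.MORPH_TOPHAT, kernel)
bottomhat = cv2.morphologyEx(eq, cv2.MORPH_BLACKHAT, kernel)

# Add bright details, subtract dark ones; clip back to 8-bit range.
enhanced = np.clip(eq.astype(np.int16) + tophat - bottomhat,
                   0, 255).astype(np.uint8)
cv2.imwrite("xray_enhanced.png", enhanced)
```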
314. Multi-scale Morphological Image Enhancement of Chest Radiographs by a Hybrid Scheme

    PubMed Central

    Alavijeh, Fatemeh Shahsavari; Mahdavi-Nasab, Homayoun

    2015-01-01

    Chest radiography is a common diagnostic imaging test that contains an enormous amount of information about a patient, yet its interpretation is highly challenging. The accuracy of the diagnostic process is greatly influenced by image processing algorithms, so enhancement of the images is indispensable for improving the visibility of details. This paper aims at improving radiograph parameters such as contrast, sharpness, noise level, and brightness to enhance chest radiographs, making use of a triangulation method. Contrast-limited adaptive histogram equalization and noise suppression are performed simultaneously in the wavelet domain in a new scheme, followed by morphological top-hat and bottom-hat filtering. A unique implementation of the morphological filters allows adjustment of the image brightness and significant enhancement of the contrast. The proposed method is tested on chest radiographs from the Japanese Society of Radiological Technology database, and the results are compared with conventional enhancement techniques such as histogram equalization, contrast-limited adaptive histogram equalization, Retinex, and some recently proposed methods. The experimental results reveal that the proposed method can remarkably improve image contrast while preserving sensitive chest tissue information, so that radiologists may arrive at a more precise interpretation.

    PMID:25709942

315. Computer-aided classification of patients with dementia of Alzheimer's type based on cerebral blood flow determined with arterial spin labeling technique

    NASA Astrophysics Data System (ADS)

    Yamashita, Yasuo; Arimura, Hidetaka; Yoshiura, Takashi; Tokunaga, Chiaki; Magome, Taiki; Monji, Akira; Noguchi, Tomoyuki; Toyofuku, Fukai; Oki, Masafumi; Nakamura, Yasuhiko; Honda, Hiroshi

    2010-03-01

    Arterial spin labeling (ASL) is a promising non-invasive magnetic resonance (MR) imaging technique for the diagnosis of Alzheimer's disease (AD) through measurement of cerebral blood flow (CBF). The aim of this study was to develop a computer-aided classification system for AD patients based on CBF measured with the ASL technique. The average CBF values in cortical regions were determined as functional image features from the CBF map, which was non-linearly transformed to a Talairach brain atlas using a free-form deformation. An artificial neural network (ANN) was trained with the CBF features in 10 cortical regions and employed to distinguish patients with AD from control subjects. For evaluation, we applied the proposed method to 20 cases, comprising ten AD patients and ten control subjects scanned on a 3.0-Tesla MR unit. The area under the receiver operating characteristic curve obtained by the proposed method was 0.893, based on a leave-one-out-by-case test for identifying AD cases among the 20 cases.
These results suggest that the proposed method would be feasible for the classification of patients with AD.

316. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values

    PubMed Central

    Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Background: Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of the disease severity distribution on values of diagnostic accuracy, and to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests.

    Methods and findings: We evaluated the diagnostic relationship between two hypothetical methods of measuring serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software, (1) keeping the numerical relationship between the methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity, and specificity). Finally, a novel methodology to calculate and display accuracy values was presented (the V-plot of accuracies).

    Conclusion: No single value of diagnostic accuracy can describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution. Our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.

    PMID:29387424
317. Reduction in training time of a deep learning model in detection of lesions in CT

    NASA Astrophysics Data System (ADS)

    Makkinejad, Nazanin; Tajbakhsh, Nima; Zarshenas, Amin; Khokhar, Ashfaq; Suzuki, Kenji

    2018-02-01

    Deep learning (DL) has emerged as a powerful tool for object detection and classification in medical images. Building a well-performing DL model, however, requires a huge number of training images, and training can take days even on a cutting-edge high-performance computing platform. This study develops a method for selecting a "small" number of representative samples from a large collection of training samples, to train a DL model for detecting polyps in CT colonography (CTC) without compromising classification performance. The proposed method for representative sample selection (RSS) consists of a K-means clustering algorithm. For the performance evaluation, we applied the proposed method to select samples for training a massive-training artificial neural network DL model used to classify polyps and non-polyps in CTC. Our results show that the proposed method reduces the training time by a factor of 15 while maintaining classification performance equivalent to that of the model trained with the full training set, with performance compared using the area under the receiver operating characteristic curve (AUC).
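The representative-sample-selection step has a straightforward reading: cluster the training pool and keep the sample nearest each centroid. A sketch under that reading, with the pool contents and cluster count as placeholders:

```python
# K-means representative sample selection (RSS): keep one sample per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import pairwise_distances_argmin

rng = np.random.default_rng(0)
pool = rng.normal(size=(10000, 256))        # feature vectors of candidate ROIs

km = KMeans(n_clusters=500, n_init=10, random_state=0).fit(pool)
# Index of the pool sample closest to each centroid = the representatives.
representatives = pairwise_distances_argmin(km.cluster_centers_, pool)
small_train_set = pool[np.unique(representatives)]
print(small_train_set.shape)                # ~500 samples instead of 10000
```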
318. A Framework for Spatial Interaction Analysis Based on Large-Scale Mobile Phone Data

    PubMed Central

    Li, Weifeng; Cheng, Xiaoyun; Guo, Gaohua

    2014-01-01

    An overall understanding of spatial interaction and exact knowledge of its dynamic evolution are required in urban planning and transportation planning. This study analyzes spatial interaction based on large-scale mobile phone data, a newly arisen mass dataset that requires a methodology compatible with its peculiar characteristics. A three-stage framework is proposed, comprising data preprocessing, critical activity identification, and spatial interaction measurement; the framework introduces frequent-pattern mining and measures spatial interaction through the obtained associations. A case study of three communities in Shanghai was carried out to verify the proposed method and demonstrate its practical application, and the resulting spatial interaction patterns and representative features support the rationality of the proposed framework.

    PMID:25435865

319. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    PubMed Central

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. Its ultimate aims are to store and manage biological data and to develop and analyze computational tools that enhance their understanding. The amount of data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed; classifying protein sequences into existing superfamilies helps predict the structure and function of the large numbers of newly discovered proteins. Existing classification results are unsatisfactory, however, owing to the huge number of features produced by the various feature-encoding methods. In this work, a statistical metric-based feature selection technique is proposed to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in performance metrics such as accuracy, sensitivity, specificity, recall, and F-measure.

    PMID:25045727
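Generic statistical feature selection of this kind can be sketched with scikit-learn; the ANOVA F-statistic below is a stand-in for the paper's own metric, and the encoded features and labels are random placeholders:

```python
# Score each feature against the labels, keep the top-k, then classify.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4000))          # encoded protein-sequence features
y = rng.integers(0, 5, size=300)          # superfamily labels (placeholder)

clf = make_pipeline(SelectKBest(f_classif, k=200), SVC())
clf.fit(X, y)                             # classify on the reduced vector
print(clf.score(X, y))
```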
    Highlights: This paper is informed by an effort to develop research with the aims of:
    • improving existing qualitative and quantitative methods for assessing the impacts;
    • a better understanding of the relations between probabilities and consequences;
    • a methodology for the EIA of flood protection constructions based on risk analysis;
    • creative approaches in the search for environmentally friendly proposed activities.

321. Intelligent Scheduling for Underground Mobile Mining Equipment

    PubMed Central

    Song, Zhen; Schunnesson, Håkan; Rinne, Mikael; Sturgul, John

    2015-01-01

    Many studies have been carried out, and many commercial software applications have been developed, to improve the performance of surface mining operations, especially the loader-truck cycle. Far fewer studies have aimed at improving the mining process of underground mines, where mobile mining equipment is mostly scheduled instinctively, without theoretical support for the decisions, and where, in the case of unexpected events, it is hard for miners to rapidly find solutions for rescheduling and adapting to the changes. This investigation first introduces the motivation, technical background, and objective of the study. A decision support instrument (a schedule optimizer for mobile mining equipment) is then proposed and described to address this issue, and the method and related algorithms used in the instrument are presented and discussed. The proposed method was tested on a real case at the Kittilä mine in Finland; the results suggest that it can considerably improve working efficiency and reduce the working time of the underground mine.

    PMID:26098934
322. Sliding Window-Based Region of Interest Extraction for Finger Vein Images

    PubMed Central

    Yang, Lu; Yang, Gongping; Yin, Yilong; Xiao, Rongyang

    2013-01-01

    Region of interest (ROI) extraction is a crucial step in an automatic finger vein recognition system; its aim is to decide which part of the image is suitable for finger vein feature extraction. This paper proposes a finger vein ROI extraction method that is robust to finger displacement and rotation. First, we determine the middle line of the finger, which is used to correct the image skew. Then, a sliding window is used to detect the phalangeal joints and thereby ascertain the height of the ROI. Last, for the skew-corrected image of known height, we obtain the ROI by using the internal tangents of the finger edges as the left and right boundaries. The experimental results show that the proposed method can extract the ROI more accurately and effectively than other methods and thus improve the performance of a finger vein identification system. In addition, to support the acquisition of high-quality finger vein images during capture, we propose eight criteria for finger vein capture covering different aspects, which should be helpful for finger vein capture.

    PMID:23507824
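One plausible reading of the sliding-window joint detector: the phalangeal joints transmit more light, so they appear as peaks in the row-brightness profile that a window can scan for. The sketch below implements that reading; the window size, the synthetic image, and the two-joint assumption are all illustrative:

```python
# Sliding-window detection of bright joint rows in a finger image (sketch).
import numpy as np

def detect_joints(img, win=21, n_joints=2):
    """Return the strongest local maxima of the smoothed row-brightness
    profile; these are candidate phalangeal-joint rows."""
    profile = img.mean(axis=1)                         # brightness per row
    smooth = np.convolve(profile, np.ones(win) / win, mode="same")
    half = win // 2
    peaks = [i for i in range(half, len(smooth) - half)
             if smooth[i] == smooth[i - half:i + half + 1].max()]
    return sorted(peaks, key=lambda i: smooth[i], reverse=True)[:n_joints]

rng = np.random.default_rng(0)
img = rng.random((200, 80)) * 0.1
img[60] += 0.5
img[140] += 0.5                                        # two synthetic joints
print(detect_joints(img))                              # rows near 60 and 140
```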
323. Effects of disease severity distribution on the performance of quantitative diagnostic methods and proposal of a novel 'V-plot' methodology to display accuracy values.

    PubMed

    Petraco, Ricardo; Dehbi, Hakim-Moulay; Howard, James P; Shun-Shin, Matthew J; Sen, Sayan; Nijjer, Sukhjinder S; Mayet, Jamil; Davies, Justin E; Francis, Darrel P

    2018-01-01

    Diagnostic accuracy is widely accepted by researchers and clinicians as an optimal expression of a test's performance. The aim of this study was to evaluate the effects of the disease severity distribution on values of diagnostic accuracy, and to propose a sample-independent methodology to calculate and display the accuracy of diagnostic tests. We evaluated the diagnostic relationship between two hypothetical methods of measuring serum cholesterol (Chol_rapid and Chol_gold) by generating samples with statistical software, (1) keeping the numerical relationship between the methods unchanged and (2) changing the distribution of cholesterol values. Metrics of categorical agreement were calculated (accuracy, sensitivity, and specificity), and a novel methodology to calculate and display accuracy values was presented (the V-plot of accuracies). No single value of diagnostic accuracy can describe the relationship between tests, as accuracy is a metric heavily affected by the underlying sample distribution; our novel proposed methodology, the V-plot of accuracies, can be used as a sample-independent measure of a test's performance against a reference gold standard.

324. Multiview point clouds denoising based on interference elimination

    NASA Astrophysics Data System (ADS)

    Hu, Yang; Wu, Qian; Wang, Le; Jiang, Huanyu

    2018-03-01

    Newly emerging low-cost depth sensors offer huge potential for three-dimensional (3-D) modeling, but their high noise prevents them from obtaining accurate results. We therefore propose a method for denoising registered multiview point clouds with high noise. The method aims to make full use of redundant information to eliminate the interferences among point clouds of different views, based on an iterative procedure in which noisy points are either deleted or moved to their weighted-average targets according to two cases. Simulated data and practical data captured by a Kinect v2 sensor were tested qualitatively and quantitatively in experiments. The results show that the proposed method can effectively reduce noise and recover local features from highly noisy multiview point clouds with good robustness, compared with the truncated signed distance function (TSDF) and moving least squares (MLS) approaches; moreover, the resulting low-noise point clouds can be further smoothed by MLS to achieve improved results. This study demonstrates the feasibility of obtaining fine 3-D models with high-noise devices, especially depth sensors such as the Kinect.
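One iteration of the delete-or-move rule can be sketched with a k-d tree over the merged views; the search radius, Gaussian weighting, and deletion criterion below are assumptions, not the paper's exact two-case logic:

```python
# One denoising iteration: isolated points are deleted, the rest are moved
# to the weighted average of their cross-view neighbours (illustrative only).
import numpy as np
from scipy.spatial import cKDTree

def denoise_iteration(points, radius=0.05, min_neighbors=3):
    tree = cKDTree(points)
    kept = []
    for p in points:
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < min_neighbors:
            continue                      # isolated point: treat as noise
        neigh = points[idx]
        w = np.exp(-np.linalg.norm(neigh - p, axis=1) ** 2 / radius ** 2)
        kept.append((w[:, None] * neigh).sum(axis=0) / w.sum())
    return np.array(kept)

rng = np.random.default_rng(0)
base = rng.random((1500, 3))
base[:, 2] = 0.0                          # flat toy surface
views = [base + rng.normal(scale=0.01, size=base.shape) for _ in range(3)]
cloud = denoise_iteration(np.vstack(views))   # registered multiview cloud
```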
325. Localization of thermal anomalies in electrical equipment using Infrared Thermography and support vector machine

    NASA Astrophysics Data System (ADS)

    Laib dit Leksir, Y.; Mansour, M.; Moussaoui, A.

    2018-03-01

    Analysis and processing of the databases obtained from infrared thermal inspections of electrical installations require new tools that extract more information than visual inspection provides. Methods based on the capture of thermal images consequently show great potential and are increasingly employed in this field, but effective techniques are still needed to analyse these databases and extract significant information on the state of the infrastructure. This paper presents a technique explaining how this approach can be implemented and proposes a system that can help detect faults in thermal images of electrical installations. The proposed method classifies and identifies the region of interest (ROI), with identification conducted using a support vector machine (SVM) algorithm. The aim is to capture the faults present in electrical equipment during an inspection of machines using a FLIR A40 camera; binarization techniques are then employed to select the region of interest. Finally, a comparative analysis of the misclassification errors obtained with the proposed method, fuzzy c-means, and Otsu thresholding is also presented.

326. Adaptive Localization of Focus Point Regions via Random Patch Probabilistic Density from Whole-Slide, Ki-67-Stained Brain Tumor Tissue

    PubMed Central

    Alomari, Yazan M.; MdZin, Reena Rahayu

    2015-01-01

    Analysis of whole-slide tissue for digital pathology images has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in proliferation-rate estimation. Pathologists use eye-pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions from the whole slide under the microscope; these are called focus-point regions. This procedure leads to high interpersonal variability in observations and is time-consuming, tedious work that can produce inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using a random patch probabilistic density (RPPD) method. Unlike other clustering methods, random patch probabilistic density can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods and achieved good performance when the results were evaluated by three expert pathologists, with an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, with RPPD used to localize tissue from whole-slide images, 228 whole-slide images were tested and 97.3% localization accuracy was achieved.

    PMID:25793010
327. A Combined Independent Source Separation and Quality Index Optimization Method for Fetal ECG Extraction from Abdominal Maternal Leads

    PubMed Central

    Billeci, Lucia; Varanini, Maurizio

    2017-01-01

    The non-invasive fetal electrocardiogram (fECG) technique has recently received considerable interest for fetal health monitoring. The aim of our paper is to propose a novel fECG algorithm based on the combination of the criteria of independent source separation and quality index optimization (ICAQIO-based). The algorithm was compared with two methods applying the two criteria independently, the ICA-based and QIO-based methods, both previously developed by our group. All three methods were tested on the recently implemented Fetal ECG Synthetic Database (FECGSYNDB), and the performance of the algorithm was additionally tested on real data from the PhysioNet fetal ECG Challenge 2013 Database. The proposed combined method outperformed the other two algorithms on the FECGSYNDB (ICAQIO-based: 98.78%; QIO-based: 97.77%; ICA-based: 97.61%), with significant differences obtained in particular under conditions with uterine contractions and maternal and fetal ectopic beats. On the real data, all three methods achieved very high performance, with the QIO-based method proving slightly better than the other two (ICAQIO-based: 99.38%; QIO-based: 99.76%; ICA-based: 99.37%). These findings suggest that the proposed method could be applied as a novel algorithm for the accurate extraction of the fECG, especially in critical recording conditions.

    PMID:28509860

328. Numerical solution of the unsteady diffusion-convection-reaction equation based on improved spectral Galerkin method

    NASA Astrophysics Data System (ADS)

    Zhong, Jiaqi; Zeng, Cheng; Yuan, Yupeng; Zhang, Yuzhe; Zhang, Ye

    2018-04-01

    The aim of this paper is to present an explicit numerical algorithm based on an improved spectral Galerkin method for solving the unsteady diffusion-convection-reaction equation. The principal characteristic of this approach is that it yields explicit eigenvalues and eigenvectors based on the time-space separation method and an analysis of the boundary conditions. With the help of Fourier series and Galerkin truncation, the finite-dimensional ordinary differential equations are obtained, which facilitates system analysis and controller design. The numerical solutions are demonstrated via two examples and compared with the finite element method, and it is shown that the proposed method is effective.
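As a toy illustration of the time-space separation idea behind such spectral Galerkin schemes, the sketch below solves u_t = D u_xx - c u_x + r u on a periodic domain: each Fourier mode is an eigenfunction, so the truncated system decouples into scalar ODEs with explicit eigenvalues lambda_k = -D k^2 - i c k + r. The equation, coefficients, and periodic boundary conditions are simplified assumptions, not the paper's exact setup:

```python
# Fourier-Galerkin solution of u_t = D*u_xx - c*u_x + r*u, periodic in x.
import numpy as np

D, c, r = 0.1, 1.0, -0.05      # diffusion, convection, reaction (assumed)
L, N, t = 2 * np.pi, 64, 0.5   # domain length, truncation order, final time

x = np.linspace(0.0, L, N, endpoint=False)
u0 = np.exp(-10 * (x - np.pi) ** 2)          # initial condition (assumed)

k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # wavenumbers of the modes
lam = -D * k**2 - 1j * c * k + r             # explicit eigenvalues
u_hat = np.fft.fft(u0) * np.exp(lam * t)     # each mode evolves independently
u = np.real(np.fft.ifft(u_hat))              # solution at time t
```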
329. Clustering Multiple Sclerosis Subgroups with Multifractal Methods and Self-Organizing Map Algorithm

    NASA Astrophysics Data System (ADS)

    Karaca, Yeliz; Cattani, Carlo

    Magnetic resonance imaging (MRI) is the most sensitive method for detecting chronic nervous system diseases such as multiple sclerosis (MS). In this paper, Brownian-motion Hölder regularity functions (polynomial, periodic (sine), and exponential) for 2D images, as multifractal methods, were applied to MR brain images with the aim of easily identifying distressed regions in MS patients. Using these regions, we propose an MS classification based on the multifractal method and the Self-Organizing Map (SOM) algorithm, thereby obtaining a cluster analysis that identifies pixels from distressed regions in MR images through multifractal methods and diagnoses subgroups of MS patients through artificial neural networks.

330. A new optimal seam method for seamless image stitching

    NASA Astrophysics Data System (ADS)

    Xue, Jiale; Chen, Shengyong; Cheng, Xu; Han, Ying; Zhao, Meng

    2017-07-01

    A novel optimal seam method is proposed that aims to stitch images with overlapping areas more seamlessly. Because the traditional gradient-domain optimal seam method measures color difference poorly and fusion algorithms are time-consuming, the input images are converted to HSV space and a new energy function is designed to seek the optimal stitching path. To smooth the optimal stitching path, simplified pixel correction and weighted-average methods are utilized individually. The proposed methods exhibit good performance in eliminating the stitching seam compared with the traditional gradient-domain optimal seam, and high efficiency compared with multi-band blending algorithms.
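A generic dynamic-programming optimal seam, of the kind such stitching methods build on, finds the top-to-bottom path of least color difference through the overlap region. In the sketch below, a simple squared-difference energy stands in for the paper's HSV-space energy function, and the overlap strips are random placeholders:

```python
# Dynamic-programming optimal seam through an overlap-region cost map.
import numpy as np

def optimal_seam(energy):
    """energy: (H, W) cost map of the overlap; returns one column per row."""
    H, W = energy.shape
    cost = energy.copy()
    for i in range(1, H):
        for j in range(W):
            lo, hi = max(j - 1, 0), min(j + 2, W)
            cost[i, j] += cost[i - 1, lo:hi].min()   # best reachable parent
    seam = [int(np.argmin(cost[-1]))]
    for i in range(H - 2, -1, -1):                   # backtrack upwards
        j = seam[-1]
        lo = max(j - 1, 0)
        seam.append(lo + int(np.argmin(cost[i, lo:min(j + 2, W)])))
    return seam[::-1]

rng = np.random.default_rng(0)
a, b = rng.random((100, 40, 3)), rng.random((100, 40, 3))  # overlap strips
energy = ((a - b) ** 2).sum(axis=2)
print(optimal_seam(energy)[:5])
```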
331. Session-RPE Method for Training Load Monitoring: Validity, Ecological Usefulness, and Influencing Factors

    PubMed Central

    Haddad, Monoem; Stylianides, Georgios; Djaoui, Leo; Dellal, Alexandre; Chamari, Karim

    2017-01-01

    Purpose: The aims of this review are to (1) retrieve all data validating the session rating of perceived exertion (session-RPE) method using various criteria, (2) highlight the rationale of this method and its ecological usefulness, and (3) describe factors that can alter RPE and that users of this method should take into consideration.

    Method: Search engines such as the SPORTDiscus, PubMed, and Google Scholar databases were consulted for English-language studies published between 2001 and 2016 on the validity and usefulness of the session-RPE method. Studies were considered for further analysis when they used the session-RPE method proposed by Foster et al. in 2001. Participants were athletes of any gender, age, or level of competition. Studies in languages other than English were excluded from the analysis of the validity and reliability of the session-RPE method; other studies were examined to explain the rationale of the session-RPE method and the origin of RPE.

    Results: A total of 950 studies cited the Foster et al. study that proposed the session-RPE method, and 36 of them examined its validity and reliability using the modified CR-10 scale.

    Conclusion: These studies confirmed the validity, good reliability, and internal consistency of the session-RPE method in several sports and physical activities, with men and women of different age categories (children, adolescents, and adults) and various levels of expertise. The method can be used as a stand-alone method for training load (TL) monitoring, although some authors recommend combining it with other physiological parameters, such as heart rate.

    PMID:29163016
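For reference, the Foster session-RPE training load is simply the CR-10 rating multiplied by the session duration in minutes. A minimal sketch, together with the weekly monotony and strain indices commonly derived from it (the session values are made up):

```python
# Session-RPE training load: TL (arbitrary units) = CR-10 RPE x minutes.
import statistics

def session_tl(rpe_cr10, duration_min):
    return rpe_cr10 * duration_min

# One training week: (RPE, minutes) per day; zeros are rest days.
week = [session_tl(r, m) for r, m in
        [(6, 90), (4, 60), (7, 75), (0, 0), (5, 80), (8, 95), (0, 0)]]
weekly_load = sum(week)
monotony = statistics.mean(week) / statistics.pstdev(week)  # day-to-day variation
strain = weekly_load * monotony
print(weekly_load, round(monotony, 2), round(strain))
```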
332. Temporal sparsity exploiting nonlocal regularization for 4D computed tomography reconstruction

    PubMed Central

    Kazantsev, Daniil; Guo, Enyu; Kaestner, Anders; Lionheart, William R. B.; Bent, Julian; Withers, Philip J.; Lee, Peter D.

    2016-01-01

    X-ray imaging applications in the medical and material sciences are frequently limited by the number of tomographic projections collected. Inversion of the limited projection data is an ill-posed problem that needs regularization, but traditional spatial regularization is not well adapted to the dynamic nature of time-lapse tomography, since it discards the redundancy of the temporal information. In this paper, we propose a novel iterative reconstruction algorithm with a nonlocal regularization term to account for time-evolving datasets. The aim of the proposed nonlocal penalty is to collect the maximum relevant information in the spatial and temporal domains; with the proposed sparsity-seeking approach in the temporal space, the computational complexity of the classical nonlocal regularizer is reduced by at least one order of magnitude. The presented reconstruction method can be applied directly to various big-data 4D (x, y, z + time) tomographic experiments in many fields. We apply the proposed technique to modelled data and to real, high-resolution dynamic X-ray microtomography (XMT) data. Compared with the classical spatio-temporal nonlocal regularization approach, the proposed method delivers reconstructed images of improved resolution and higher contrast while remaining significantly less computationally demanding.

    PMID:27002902

333. Applying linear programming model to aggregate production planning of coated peanut products

    NASA Astrophysics Data System (ADS)

    Rohmah, W. G.; Purwaningsih, I.; Santoso, EF S. M.

    2018-03-01

    The aim of this study was to set the overall production level for each grade of coated peanut product so as to meet market demand at minimum production cost. A linear programming model was applied to minimize the total production cost subject to the limited demand for coated peanuts. The demand values used in the model were forecasted beforehand with a time-series method, and the production capacity was used to plan aggregate production for the next six-month period. The results indicate that production planning using the proposed model fits the pattern of customer demand better than the company's policy does. The production capacity of product families A, B, and C was relatively stable for the first three months of the planning period and began to fluctuate over the following three months, while the capacity of product families D and E fluctuated over the whole six-month planning period, in the ranges of 10,864-32,580 kg and 255-5,069 kg, respectively. The total production cost for all products was 27.06% lower than the cost calculated using the company's policy-based method.

334. Authorship attribution of source code by using back propagation neural network based on particle swarm optimization

    PubMed Central

    Xu, Guoai; Li, Qi; Guo, Yanhui; Zhang, Miao

    2017-01-01

    Authorship attribution is the task of identifying the most likely author of a given sample among a set of known candidate authors. It can be applied not only to discover the original author of plain text, such as novels, blogs, emails, and posts, but also to identify the programmer of source code. Authorship attribution of source code is required in diverse applications, ranging from malicious-code tracking to resolving authorship disputes and detecting software plagiarism. This paper proposes a new method to identify the programmers of Java source code samples with higher accuracy. To this end, it introduces a back propagation (BP) neural network based on particle swarm optimization (PSO) into authorship attribution of source code. The method begins by computing a set of defined feature metrics, 19 dimensions in total, including lexical and layout metrics as well as structure and syntax metrics. These metrics are then input to a neural network for supervised learning, the weights of which are produced by the hybrid PSO and BP algorithm. The effectiveness of the proposed method was evaluated on a collected dataset of 3,022 Java files belonging to 40 authors. Experimental results show that the proposed method achieves 91.060% accuracy, and a comparison with previous work on authorship attribution of Java source code illustrates that the proposed method outperforms the others overall, with acceptable overhead.

    PMID:29095934
335. Iris Image Classification Based on Hierarchical Visual Codebook.

    PubMed

    Zhenan Sun; Hui Zhang; Tieniu Tan; Jianyu Wang

    2014-06-01

    Iris recognition as a reliable method for personal identification has been well studied, with the objective of assigning the class label of each iris image to a unique subject. In contrast, iris image classification aims to classify an iris image into an application-specific category, e.g., iris liveness detection (classification of genuine and fake iris images), race classification (e.g., classification of iris images of Asian and non-Asian subjects), or coarse-to-fine iris identification (classification of all iris images in a central database into multiple categories). This paper proposes a general framework for iris image classification based on texture analysis. A novel texture pattern representation method called the Hierarchical Visual Codebook (HVC) is proposed to encode the texture primitives of iris images. The HVC method integrates two existing bag-of-words models, the Vocabulary Tree (VT) and Locality-constrained Linear Coding (LLC); it adopts a coarse-to-fine visual coding strategy and takes advantage of both VT and LLC to obtain an accurate and sparse representation of iris texture. Extensive experimental results demonstrate that the proposed iris image classification method achieves state-of-the-art performance for iris liveness detection, race classification, and coarse-to-fine iris identification. A comprehensive fake iris image database, simulating four types of iris spoof attacks, is developed as a benchmark for research on iris liveness detection.
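A much simplified flat bag-of-words pipeline (K-means vocabulary, histogram encoding, linear SVM) conveys the general idea that the hierarchical VT-plus-LLC codebook refines; the patch descriptors and class labels below are random placeholders:

```python
# Flat bag-of-words texture classification (simplified stand-in for HVC).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
VOCAB = 64
# Local texture descriptors: one (n_patches, dim) array per iris image.
images = [rng.normal(size=(200, 32)) for _ in range(60)]
labels = rng.integers(0, 2, size=60)       # e.g. genuine vs fake iris

vocab = KMeans(n_clusters=VOCAB, n_init=4, random_state=0)
vocab.fit(np.vstack(images))               # learn the visual codebook

def encode(desc):
    """Normalized histogram of visual-word assignments for one image."""
    words = vocab.predict(desc)
    return np.bincount(words, minlength=VOCAB) / len(words)

X = np.array([encode(d) for d in images])
clf = LinearSVC().fit(X, labels)
```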
336. Opacity annotation of diffuse lung diseases using deep convolutional neural network with multi-channel information

    NASA Astrophysics Data System (ADS)

    Mabu, Shingo; Kido, Shoji; Hashimoto, Noriaki; Hirano, Yasushi; Kuremoto, Takashi

    2018-02-01

    This research proposes a multi-channel deep convolutional neural network (DCNN) for computer-aided diagnosis (CAD) that classifies normal and abnormal opacities of diffuse lung diseases in computed tomography (CT) images. Because CT images are grayscale, a DCNN usually uses one channel for the input image data. This research instead uses a multi-channel DCNN in which each channel corresponds to the original raw image or to an image transformed by a preprocessing technique. The information obtained from raw images alone is limited, and previous research suggests that preprocessing contributes to improved classification accuracy, so the combination of original and preprocessed images is expected to yield higher accuracy. The proposed method realizes region-of-interest (ROI)-based opacity annotation. We used lung CT images taken at Yamaguchi University Hospital, Japan, divided into 32 × 32 ROI images containing six kinds of opacity: consolidation, ground-glass opacity (GGO), emphysema, honeycombing, nodular, and normal. The aim of the proposed method is to classify each ROI into one of the six opacity classes. The DCNN structure is based on the VGG network, which secured the first and second places in ImageNet ILSVRC-2014. The experimental results show that the classification accuracy of the proposed method was better than that of the conventional single-channel method, with a significant difference between them.

337. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear discriminant analysis (LDA) is a well-known method for feature extraction and dimension reduction that has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem: it fails when all scatter matrices are singular. Many LDA extensions have been proposed to overcome the singularity problem; among them, PCA+LDA, a two-stage method in which an intermediate dimension-reduction stage using principal component analysis (PCA) precedes the LDA stage, has received relatively more attention. Most previous LDA extensions are computationally expensive and not scalable because they use the singular value decomposition or the generalized singular value decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, that aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage: LDA/QR applies QR decomposition to a small matrix involving the class centroids, whereas PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.

338. AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.

    PubMed

    Zhao, X G; Dai, W; Li, Y; Tian, L

    2011-11-01

    The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure of the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely because the associated objective function is neither continuous nor concave; indeed, there is no reliable numerical algorithm for identifying the combination of a set of biomarkers that maximizes the AUC, especially when the number of biomarkers is large. We have proposed a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest.
  337. A two-stage linear discriminant analysis via QR-decomposition.

    PubMed

    Ye, Jieping; Li, Qi

    2005-06-01

    Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problem; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problem. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problem of classical LDA while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship between LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
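    A compact NumPy sketch of the two-stage LDA/QR idea as described: QR decomposition of the class-centroid matrix, then classical LDA in the reduced space. The toy data and variable names are illustrative, not from the paper:

        import numpy as np

        def lda_qr(X, y):
            """Two-stage LDA/QR: QR on class centroids, then LDA in the reduced space.
            X: (n_samples, n_features); y: integer class labels."""
            classes = np.unique(y)
            centroids = np.stack([X[y == c].mean(axis=0) for c in classes])  # (k, d)
            # Stage 1: QR decomposition of the small (d, k) centroid matrix.
            Q, _ = np.linalg.qr(centroids.T)      # Q: (d, k), orthonormal columns
            Z = X @ Q                             # data reduced to k dimensions
            # Stage 2: classical LDA in the k-dimensional space.
            mean = Z.mean(axis=0)
            Sb = np.zeros((Q.shape[1], Q.shape[1]))
            Sw = np.zeros_like(Sb)
            for c in classes:
                Zc = Z[y == c]
                d = (Zc.mean(axis=0) - mean)[:, None]
                Sb += len(Zc) * d @ d.T                               # between-class
                Sw += (Zc - Zc.mean(axis=0)).T @ (Zc - Zc.mean(axis=0))  # within-class
            evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
            order = np.argsort(evals.real)[::-1][: len(classes) - 1]
            return Q @ evecs[:, order].real       # (d, k-1) projection matrix

        X = np.random.randn(60, 1000)             # n << d: classical LDA would fail
        y = np.repeat([0, 1, 2], 20)
        print((X @ lda_qr(X, y)).shape)            # (60, 2)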
  338. AUC-based biomarker ensemble with an application on gene scores predicting low bone mineral density.

    PubMed

    Zhao, X G; Dai, W; Li, Y; Tian, L

    2011-11-01

    The area under the receiver operating characteristic (ROC) curve (AUC), long regarded as a 'golden' measure for the predictiveness of a continuous score, has propelled the need to develop AUC-based predictors. However, AUC-based ensemble methods are rather scant, largely because the associated objective function is neither continuous nor concave. Indeed, there is no reliable numerical algorithm for identifying the optimal combination of a set of biomarkers to maximize the AUC, especially when the number of biomarkers is large. We propose a novel AUC-based statistical ensemble method for combining multiple biomarkers to differentiate a binary response of interest. Specifically, we propose to replace the non-continuous and non-convex AUC objective function by a convex surrogate loss function, whose minimizer can be efficiently identified. With the established framework, the lasso and other regularization techniques enable feature selection. Extensive simulations have demonstrated the superiority of the new methods over the existing methods. The proposal has been applied to a gene expression dataset to construct gene expression scores to differentiate elderly women with low bone mineral density (BMD) from those with normal BMD. The AUCs of the resulting scores in the independent test dataset have been satisfactory. Aiming to directly maximize the AUC, the proposed AUC-based ensemble method provides an efficient means of generating a stable combination of multiple biomarkers, which is especially useful under high-dimensional settings. Supplementary data are available at Bioinformatics online.
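    The core trick described above, replacing the indicator-based AUC objective with a convex surrogate over positive-negative score differences, can be sketched in NumPy; here a pairwise logistic loss with plain gradient descent stands in for the paper's solver, and the data are synthetic:

        import numpy as np

        def auc_surrogate_weights(X, y, lr=0.1, iters=500):
            """Combine biomarkers linearly by minimizing a convex pairwise
            logistic surrogate of 1 - AUC over all (positive, negative) pairs."""
            diffs = X[y == 1][:, None, :] - X[y == 0][None, :, :]
            diffs = diffs.reshape(-1, X.shape[1])      # one row per pair
            w = np.zeros(X.shape[1])
            for _ in range(iters):
                margins = diffs @ w
                # Gradient of mean log(1 + exp(-margin)): smooth and convex in w.
                grad = -(diffs * (1.0 / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)
                w -= lr * grad
            return w

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200) > 0).astype(int)
        scores = X @ auc_surrogate_weights(X, y)
        pos, neg = scores[y == 1], scores[y == 0]
        print((pos[:, None] > neg[None, :]).mean())    # empirical AUC of the score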
href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Ghaderi, Parviz; Marateb, Hamid R</p> <p>2017-07-01</p> <p>The aim of this study was to reconstruct low-quality High-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes may provide low-quality signals. We used variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDE outperformed the other methods (average RMSE 8.7 μV rms ± 6.1 μV rms and 7.5 μV rms ± 5.9 μV rms ) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic Spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running time of the proposed Poisson and wave PDE methods, implemented using a Vectorization package, was 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.</p> </li> <li> <p><a target="_blank" rel="noopener noreferrer" onclick="trackOutboundLink('https://www.osti.gov/biblio/22130564-intervertebral-anticollision-constraints-improve-out-plane-translation-accuracy-single-plane-fluoroscopy-ct-registration-method-measuring-spinal-motion','SCIGOV-STC'); return false;" href="https://www.osti.gov/biblio/22130564-intervertebral-anticollision-constraints-improve-out-plane-translation-accuracy-single-plane-fluoroscopy-ct-registration-method-measuring-spinal-motion"><span>Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion</span></a></p> <p><a target="_blank" rel="noopener noreferrer" href="http://www.osti.gov/search">DOE Office of Scientific and Technical Information (OSTI.GOV)</a></p> <p>Lin, Cheng-Chung; Tsai, Tsung-Yuan; Hsu, Shih-Jung</p> <p>2013-03-15</p> <p>Purpose: The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine; and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. Methods: The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopymore » images. 
  341. Muscle Activity Map Reconstruction from High Density Surface EMG Signals With Missing Channels Using Image Inpainting and Surface Reconstruction Methods.

    PubMed

    Ghaderi, Parviz; Marateb, Hamid R

    2017-07-01

    The aim of this study was to reconstruct low-quality high-density surface EMG (HDsEMG) signals, recorded with 2-D electrode arrays, using image inpainting and surface reconstruction methods. It is common that some fraction of the electrodes provide low-quality signals. We used a variety of image inpainting methods, based on partial differential equations (PDEs), and surface reconstruction methods to reconstruct the time-averaged or instantaneous muscle activity maps of those outlier channels. Two novel reconstruction algorithms were also proposed. HDsEMG signals were recorded from the biceps femoris and brachial biceps muscles during low-to-moderate-level isometric contractions, and some of the channels (5-25%) were randomly marked as outliers. The root-mean-square error (RMSE) between the original and reconstructed maps was then calculated. Overall, the proposed Poisson and wave PDEs outperformed the other methods (average RMSE 8.7 ± 6.1 μV rms and 7.5 ± 5.9 μV rms) for the time-averaged single-differential and monopolar map reconstruction, respectively. Biharmonic Spline, the discrete cosine transform, and the Poisson PDE outperformed the other methods for the instantaneous map reconstruction. The running times of the proposed Poisson and wave PDE methods, implemented using a vectorization package, were 4.6 ± 5.7 ms and 0.6 ± 0.5 ms, respectively, for each signal epoch or time sample in each channel. The proposed reconstruction algorithms could be promising new tools for reconstructing muscle activity maps in real-time applications. Proper reconstruction methods could recover the information of low-quality recorded channels in HDsEMG signals.
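    A minimal stand-in for the PDE-based map reconstruction: harmonic (Laplace) inpainting by Jacobi iteration over the outlier channels, with good channels held fixed as Dirichlet boundary data. The paper's Poisson and wave PDE variants add source and temporal terms on top of this basic idea; the grid size and mask below are illustrative:

        import numpy as np

        def harmonic_inpaint(amap, bad_mask, iters=2000):
            """Fill masked entries of a 2-D activity map by solving Laplace's
            equation (Jacobi iteration); unmasked entries stay fixed."""
            m = amap.copy()
            m[bad_mask] = amap[~bad_mask].mean()   # neutral initial guess
            for _ in range(iters):
                # 4-neighbour average; np.roll gives periodic edges, kept for brevity.
                avg = 0.25 * (np.roll(m, 1, 0) + np.roll(m, -1, 0) +
                              np.roll(m, 1, 1) + np.roll(m, -1, 1))
                m[bad_mask] = avg[bad_mask]        # only outlier channels update
            return m

        rng = np.random.default_rng(1)
        amap = rng.random((8, 16))                 # e.g. an 8 x 16 electrode grid
        bad = rng.random((8, 16)) < 0.15           # ~15% outlier channels
        print(harmonic_inpaint(amap, bad)[bad])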
  342. Intervertebral anticollision constraints improve out-of-plane translation accuracy of a single-plane fluoroscopy-to-CT registration method for measuring spinal motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Cheng-Chung; Tsai, Tsung-Yuan; Hsu, Shih-Jung

    2013-03-15

    Purpose: The study aimed to propose a new single-plane fluoroscopy-to-CT registration method integrated with intervertebral anticollision constraints for measuring three-dimensional (3D) intervertebral kinematics of the spine, and to evaluate the performance of the method without anticollision and with three variations of the anticollision constraints via an in vitro experiment. Methods: The proposed fluoroscopy-to-CT registration approach, called the weighted edge-matching with anticollision (WEMAC) method, was based on the integration of geometrical anticollision constraints for adjacent vertebrae and the weighted edge-matching score (WEMS) method that matched the digitally reconstructed radiographs of the CT models of the vertebrae and the measured single-plane fluoroscopy images. Three variations of the anticollision constraints, namely the T-DOF, R-DOF, and A-DOF methods, were proposed. An in vitro experiment using four porcine cervical spines in different postures was performed to evaluate the performance of the WEMS and WEMAC methods. Results: The WEMS method gave high precision and small bias in all components for both vertebral pose and intervertebral pose measurements, except for relatively large errors in the out-of-plane translation component. The WEMAC method successfully reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five degrees of freedom (DOF) more or less unaltered. The means (standard deviations) of the out-of-plane translational errors were less than -0.5 (0.6) and -0.3 (0.8) mm for the T-DOF and R-DOF methods, respectively. Conclusions: The proposed single-plane fluoroscopy-to-CT registration method reduced the out-of-plane translation errors for intervertebral kinematic measurements while keeping the measurement accuracies for the other five DOF more or less unaltered. With submillimeter and subdegree accuracy, the WEMAC method was considered accurate enough for measuring 3D intervertebral kinematics during various functional activities for research and clinical applications.

  343. Single image super-resolution via an iterative reproducing kernel Hilbert space method.

    PubMed

    Deng, Liang-Jian; Guo, Weihong; Huang, Ting-Zhu

    2016-11-01

    Image super-resolution, a process to enhance image resolution, has important applications in satellite imaging, high definition television, medical imaging, etc. Many existing approaches use multiple low-resolution images to recover one high-resolution image. In this paper, we present an iterative scheme to solve single image super-resolution problems. It recovers a high quality high-resolution image from solely one low-resolution image without using a training data set. We solve the problem from the image intensity function estimation perspective and assume that the image contains smooth and edge components. We model the smooth components of an image using a thin-plate reproducing kernel Hilbert space (RKHS) and the edges using approximated Heaviside functions. The proposed method is applied to image patches, aiming to reduce computation and storage. Visual and quantitative comparisons with some competitive approaches show the effectiveness of the proposed method.

  344. Application of modified extended method in CREAM for safety inspector in coal mines

    NASA Astrophysics Data System (ADS)

    Wang, Jinhe; Zhang, Xiaohong; Zeng, Jianchao

    2018-01-01

    Safety inspectors often perform their duties in circumstances that contribute to the occurrence of human failures. Therefore, the paper aims at quantifying the human failure probability (HFP) of safety inspectors during coal mine operations with the cognitive reliability and error analysis method (CREAM). However, this approach has some shortcomings: it does not consider the applicability of the common performance conditions (CPCs), and the subjectivity of evaluating CPC levels weakens the accuracy of the quantitative prediction results. A modified extended method in CREAM that addresses these difficulties with a CPC framework table is proposed, and the methodology is demonstrated by means of a coal-mine accident example. The results are expected to be useful in predicting the HFP of safety inspectors and to contribute to the enhancement of coal mine safety.
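    For orientation, an extended-CREAM style of quantification multiplies a nominal cognitive failure probability by a weighting factor per CPC level; the sketch below illustrates only that arithmetic, and every numeric value and CPC name in it is an invented placeholder rather than the paper's calibrated data:

        # Hypothetical extended-CREAM calculation: HFP = nominal CFP x product of
        # CPC weighting factors. All numbers here are illustrative placeholders.
        NOMINAL_CFP = {"observation": 0.003, "interpretation": 0.01,
                       "planning": 0.01, "execution": 0.003}

        # Invented weighting factors per CPC level (improving < 1 < reducing).
        CPC_WEIGHTS = {
            "adequacy_of_organisation": {"efficient": 0.8, "inefficient": 1.2},
            "working_conditions":       {"advantageous": 0.8, "incompatible": 2.0},
            "available_time":           {"adequate": 0.5, "inadequate": 5.0},
        }

        def human_failure_probability(function, cpc_levels):
            p = NOMINAL_CFP[function]
            for cpc, level in cpc_levels.items():
                p *= CPC_WEIGHTS[cpc][level]
            return min(p, 1.0)

        # Safety inspector example with invented CPC ratings:
        print(human_failure_probability("interpretation",
              {"adequacy_of_organisation": "inefficient",
               "working_conditions": "incompatible",
               "available_time": "inadequate"}))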
  345. Gaussian processes for personalized e-health monitoring with wearable sensors.

    PubMed

    Clifton, Lei; Clifton, David A; Pimentel, Marco A F; Watkinson, Peter J; Tarassenko, Lionel

    2013-01-01

    Advances in wearable sensing and communications infrastructure have allowed the widespread development of prototype medical devices for patient monitoring. However, such devices have not penetrated into clinical practice, primarily due to a lack of research into "intelligent" analysis methods that are sufficiently robust to support large-scale deployment. Existing systems are typically plagued by large false-alarm rates and an inability to cope with sensor artifact in a principled manner. This paper has two aims: 1) the proposal of a novel, patient-personalized system for analysis and inference in the presence of data uncertainty, typically caused by sensor artifact and data incompleteness; 2) the demonstration of the method in a large-scale clinical study in which 200 patients were monitored using the proposed system. The latter provides much-needed evidence that personalized e-health monitoring is feasible within an actual clinical environment, at scale, and that the method is capable of improving patient outcomes via personalized healthcare.

  346. Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem.

    PubMed

    Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue

    2015-01-01

    As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in the last decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact method using mixed integer programming (MIP) and a metaheuristic method that combines simulated annealing with local search. We develop MIP formulations for homogeneous and heterogeneous fleet problems respectively and solve the models with the MIP solver CPLEX. The bus type-based formulation for the heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of the buses. We evaluate the proposed MIP and metaheuristic methods on two benchmark datasets, showing that on both, our metaheuristic method significantly outperforms the respective state-of-the-art methods.
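    The metaheuristic stage lends itself to a generic simulated-annealing skeleton; the toy below reassigns trips among buses to minimize a weighted mix of fleet size and workload, a simplification that ignores the paper's time-window constraints. All problem data are synthetic:

        import math, random

        def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=20000):
            """Generic SA loop: accept worse moves with probability exp(-delta / T)."""
            x, fx = x0, cost(x0)
            best, fbest = x, fx
            t = t0
            for _ in range(steps):
                y = neighbor(x)
                fy = cost(y)
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                t *= cooling
            return best, fbest

        # Toy instance: assign 12 trips to up to 4 buses.
        random.seed(0)
        trip_len = [random.randint(5, 30) for _ in range(12)]

        def cost(assign):
            buses = set(assign)
            load = {b: sum(l for l, a in zip(trip_len, assign) if a == b) for b in buses}
            return 1000 * len(buses) + max(load.values())   # few buses, balanced work

        def neighbor(assign):
            y = assign[:]
            y[random.randrange(len(y))] = random.randrange(4)   # move one trip
            return y

        print(simulated_annealing(cost, neighbor, [random.randrange(4) for _ in range(12)]))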
  347. A new method of passive modifications for partial frequency assignment of general structures

    NASA Astrophysics Data System (ADS)

    Belotti, Roberto; Ouyang, Huajiang; Richiedei, Dario

    2018-01-01

    The assignment of a subset of natural frequencies to vibrating systems can be conveniently achieved by means of suitable structural modifications. It has been observed that such an approach usually leads to undesired changes in the unassigned natural frequencies, a phenomenon known as frequency spill-over. This issue has been dealt with in the literature only in simple specific cases. In this paper, a new and general method is proposed that aims to assign a subset of natural frequencies with low spill-over. The optimal structural modifications are determined through a three-step procedure that considers both the prescribed eigenvalues and the feasibility constraints, assuring that the obtained solution is physically realizable. The proposed method is therefore applicable to very general vibrating systems, such as those obtained through the finite element method. The numerical difficulties that may occur as a result of employing the method are also carefully addressed. Finally, the capabilities of the method are validated in three test cases in which both lumped and distributed parameters are modified to obtain the desired eigenvalues.

  348. Lazy collaborative filtering for data sets with missing values.

    PubMed

    Ren, Yongli; Li, Gang; Zhang, Jun; Zhou, Wanlei

    2013-12-01

    As one of the biggest challenges in research on recommender systems, the data sparsity issue is mainly caused by the fact that users tend to rate a small proportion of items from the huge number of available items. This issue becomes even more problematic for neighborhood-based collaborative filtering (CF) methods, as there are even fewer ratings available in the neighborhood of the query item. In this paper, we aim to address the data sparsity issue in the context of neighborhood-based CF. For a given query (user, item), a set of key ratings is first identified by taking the historical information of both the user and the item into account. Then, an auto-adaptive imputation (AutAI) method is proposed to impute the missing values in the set of key ratings. We present a theoretical analysis to show that the proposed imputation method effectively improves the performance of conventional neighborhood-based CF methods. The experimental results show that our new method of CF with AutAI outperforms six existing recommendation methods in terms of accuracy.
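    A bare-bones illustration of imputation-assisted neighborhood CF: missing ratings are filled before computing an item-based prediction, with a simple user-mean fill standing in for the paper's auto-adaptive AutAI scheme. The rating matrix is a toy:

        import numpy as np

        def predict(R, user, item, k=2):
            """Item-based CF prediction with mean imputation of missing ratings.
            R: ratings matrix with np.nan marking missing entries."""
            filled = R.copy()
            user_means = np.nanmean(R, axis=1)
            for u in range(R.shape[0]):            # naive stand-in for AutAI
                filled[u, np.isnan(R[u])] = user_means[u]
            # Cosine similarity between the query item and all other items.
            target = filled[:, item]
            sims = filled.T @ target / (np.linalg.norm(filled, axis=0)
                                        * np.linalg.norm(target) + 1e-12)
            sims[item] = -np.inf                   # exclude the item itself
            neighbors = np.argsort(sims)[-k:]      # k most similar items
            return filled[user, neighbors].mean()

        R = np.array([[5, 4, np.nan, 1],
                      [4, np.nan, 4, 1],
                      [1, 2, 1, np.nan],
                      [np.nan, 5, 4, 2.0]])
        print(predict(R, user=0, item=2))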
  349. Machine learning methods for locating re-entrant drivers from electrograms in a model of atrial fibrillation

    NASA Astrophysics Data System (ADS)

    McGillivray, Max Falkenberg; Cheng, William; Peters, Nicholas S.; Christensen, Kim

    2018-04-01

    Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation (AF). Using a simple cellular automata model of AF, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The proposed method employs simple, out-of-the-box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 95.1% (92.6%) for the first (second) driver in tissues containing two drivers of AF. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. In its current form, the techniques presented are not refined enough for a clinical setting. However, the methods proposed offer a promising path for future investigations aimed at improving targeted ablation for AF.

  350. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases, which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods.

  351. Use of Clerkship Learning Objectives by Members of the Association of Directors of Medical Student Education in Psychiatry

    ERIC Educational Resources Information Center

    Brodkey, Amy C.; Sierles, Frederick S.; Woodard, John L.

    2006-01-01

    Objective: The authors aimed to determine the extent and use of the 1995 psychiatry clerkship goals and objectives published by the Association of Directors of Medical Student Education in Psychiatry (ADMSEP) and to obtain members' guidance regarding their proposed revision. Methods: ADMSEP members were surveyed regarding their awareness and…

  352. Sex Differences in Attitudes towards Online Privacy and Anonymity among Israeli Students with Different Technical Backgrounds

    ERIC Educational Resources Information Center

    Weinberger, Maor; Zhitomirsky-Geffet, Maayan; Bouhnik, Dan

    2017-01-01

    Introduction: In this exploratory study, we proposed an experimental framework to investigate and model male/female differences in attitudes towards online privacy and anonymity among Israeli students. Our aim was to comparatively model men and women's online privacy attitudes, and to assess the online privacy gender gap. Method: Various factors…
  353. Semiology and a Semiological Reading of Power Myths in Education

    ERIC Educational Resources Information Center

    Kükürt, Remzi Onur

    2016-01-01

    By referring to the theory of semiology, this study aims to present how certain phrases, applications, images, and objects, which are assumed to go unnoticed in the educational process as if they were natural, could be read as signs encrypted with certain ideologically-loaded cultural codes, and to propose semiology as a method for educational…

  354. Evidence-Based Administration for Decision Making in the Framework of Knowledge Strategic Management

    ERIC Educational Resources Information Center

    Del Junco, Julio Garcia; Zaballa, Rafael De Reyna; de Perea, Juan Garcia Alvarez

    2010-01-01

    Purpose: This paper seeks to present a model based on evidence-based administration (EBA), which aims to facilitate the creation, transformation and diffusion of knowledge in learning organizations. Design/methodology/approach: A theoretical framework is proposed based on EBA and the case method. Accordingly, an empirical study was carried out in…

  355. An Ensemble Framework Coping with Instability in the Gene Selection Process.

    PubMed

    Castellanos-Garzón, José A; Ramos, Juan; López-Sánchez, Daniel; de Paz, Juan F; Corchado, Juan M

    2018-03-01

    This paper proposes an ensemble framework for gene selection, which is aimed at addressing instability problems presented in the gene filtering task. The complex process of gene selection from gene expression data faces different instability problems arising from the informative gene subsets found by different filter methods. This makes the identification of significant genes by experts difficult. The instability of results can come from filter methods, gene classifier methods, different datasets of the same disease, and multiple valid groups of biomarkers. Even though there is a wide range of proposals, the complexity imposed by this problem remains a challenge today. This work proposes a framework involving five stages of gene filtering to discover biomarkers for diagnosis and classification tasks. This framework performs a process of stable feature selection, facing the problems above and thus providing a more suitable and reliable solution for clinical and research purposes. Our proposal involves a process of multistage gene filtering, in which several ensemble strategies for gene selection are added in such a way that different classifiers simultaneously assess gene subsets to face instability. Firstly, we apply an ensemble of recent gene selection methods to obtain diversity in the genes found (stability with respect to filter methods). Next, we apply an ensemble of known classifiers to filter genes relevant to all classifiers at a time (stability with respect to classification methods). The achieved results were evaluated on two different datasets of the same disease (pancreatic ductal adenocarcinoma), in search of stability with respect to the disease, for which promising results were achieved.
  356. Reliable Wireless Broadcast with Linear Network Coding for Multipoint-to-Multipoint Real-Time Communications

    NASA Astrophysics Data System (ADS)

    Kondo, Yoshihisa; Yomo, Hiroyuki; Yamaguchi, Shinji; Davis, Peter; Miura, Ryu; Obana, Sadao; Sampei, Seiichi

    This paper proposes multipoint-to-multipoint (MPtoMP) real-time broadcast transmission using network coding for ad-hoc networks like video game networks. We aim to achieve highly reliable MPtoMP broadcasting using IEEE 802.11 media access control (MAC), which does not include a retransmission mechanism. When each node detects packets from the other nodes in a sequence, the correctly detected packets are network-encoded, and the encoded packet is broadcast in the next sequence as a piggy-back for its native packet. To prevent an increase of overhead in each packet due to piggy-back packet transmission, the network coding vector for each node is exchanged between all nodes in the negotiation phase. Each user keeps using the same coding vector generated in the negotiation phase, and only coding information that represents which user signals are included in the network coding process is transmitted along with the piggy-back packet. Our simulation results show that the proposed method can provide higher reliability than other schemes using multipoint relay (MPR) or redundant transmissions such as forward error correction (FEC). We also implement the proposed method in a wireless testbed, and show that it achieves high reliability in a real-world environment with a practical degree of complexity when installed on current wireless devices.
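    In its simplest binary form, the piggy-back idea reduces to XOR network coding: a node rebroadcasts the XOR of packets it heard, and a receiver that missed one of them recovers it from the coded packet. A self-contained toy, with invented packet contents and loss pattern:

        def xor(a, b):
            """Bytewise XOR of two equal-length packets."""
            return bytes(x ^ y for x, y in zip(a, b))

        # Nodes A and B broadcast native packets; node C heard both and
        # piggy-backs their XOR in the next sequence.
        pkt_a = b"position:12"
        pkt_b = b"position:34"
        coded = xor(pkt_a, pkt_b)      # what C broadcasts alongside its own packet

        # A receiver that lost pkt_b but got pkt_a and the coded packet
        # recovers pkt_b without any retransmission:
        recovered = xor(coded, pkt_a)
        assert recovered == pkt_b
        print(recovered)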
  357. Impedance-Based Pre-Stress Monitoring of Rock Bolts Using a Piezoceramic-Based Smart Washer - A Feasibility Study.

    PubMed

    Wang, Bo; Huo, Linsheng; Chen, Dongdong; Li, Weijie; Song, Gangbing

    2017-01-27

    Pre-stress degradation or looseness of rock bolts in mining or tunnel engineering threatens the stability and reliability of the structures. In this paper, an innovative piezoelectric device named a "smart washer", used with the impedance method, is proposed with the aim of developing a real-time device to monitor the pre-stress level of rock bolts. The proposed method was verified through tests on a rock bolt specimen. By applying high-frequency sweep excitations (typically >30 kHz) to the smart washer installed on the rock bolt specimen, we observed that the variation in impedance signatures indicated the rock bolt pre-stress status: with the degradation of rock bolt pre-stress, the frequency of the dominating peak of the real part of the electrical impedance signature increased. To quantify the effectiveness of the proposed technique, a normalized root mean square deviation (RMSD) index was developed to evaluate the degradation level of the rock bolt pre-stress. The experimental results demonstrated that the normalized RMSD-based looseness index, computed from the impedance values detected by the smart washer, increased with loss of pre-stress in the rock bolt. Therefore, the proposed method can effectively detect the degradation of rock bolt pre-stress, as demonstrated by experiments.
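    The looseness metric itself is straightforward: a root-mean-square deviation between a baseline impedance signature and the current one, normalized to the baseline. A sketch with synthetic sweep data standing in for measured impedance (the upward peak shift mimics the reported pre-stress behaviour):

        import numpy as np

        def normalized_rmsd(baseline, current):
            """Normalized RMSD looseness index between two impedance signatures
            sampled on the same frequency grid; larger = more pre-stress loss."""
            return float(np.sqrt(np.sum((current - baseline) ** 2)
                                 / np.sum(baseline ** 2)))

        freqs = np.linspace(30e3, 40e3, 500)                 # >30 kHz sweep
        baseline = 100 + 10 * np.exp(-((freqs - 34.0e3) / 300) ** 2)
        loosened = 100 + 10 * np.exp(-((freqs - 34.2e3) / 300) ** 2)  # peak shifted up
        print(normalized_rmsd(baseline, loosened))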
  358. Investigation of a novel common subexpression elimination method for low power and area efficient DCT architecture.

    PubMed

    Siddiqui, M F; Reza, A W; Kanesan, J; Ramiah, H

    2014-01-01

    A wide interest has been observed in finding a low power and area efficient hardware design of the discrete cosine transform (DCT) algorithm. This research work proposes a novel Common Subexpression Elimination (CSE) based pipelined architecture for DCT, aimed at reducing the cost metrics of power and area while maintaining high speed and accuracy in DCT applications. The proposed design combines the techniques of Canonical Signed Digit (CSD) representation and CSE to implement a multiplier-less method for fixed constant multiplication of the DCT coefficients. Furthermore, symmetry in the DCT coefficient matrix is used with CSE to further decrease the number of arithmetic operations. This architecture needs a single-port memory to feed the inputs instead of multiport memory, which leads to a reduction of the hardware cost and area. From the analysis of experimental results and performance comparisons, it is observed that the proposed scheme uses minimal logic, utilizing a mere 340 slices and 22 adders. Moreover, this design meets the real-time constraints of different video/image coders and peak-signal-to-noise-ratio (PSNR) requirements. Furthermore, while maintaining accuracy, the proposed technique improves on recent well-known methods in terms of power reduction, silicon area usage, and maximum operating frequency by 41%, 15%, and 15%, respectively.
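    The CSD recoding behind the multiplier-less design can be shown in a few lines: rewrite a constant with signed digits so that no two nonzero digits are adjacent, then realize multiplication as one shifted add or subtract per nonzero digit. A standard recoding sketch, not the paper's hardware pipeline:

        def to_csd(n):
            """Canonical signed digit recoding: digits in {-1, 0, 1}, LSB first,
            with no two adjacent nonzero digits."""
            digits = []
            while n != 0:
                if n & 1:
                    d = 2 - (n & 3)    # +1 if n % 4 == 1, -1 if n % 4 == 3
                    n -= d
                else:
                    d = 0
                digits.append(d)
                n >>= 1
            return digits

        def csd_multiply(x, digits):
            """Multiplier-less product: one shifted add/subtract per nonzero digit."""
            return sum(d * (x << i) for i, d in enumerate(digits) if d)

        c = 221                        # 11011101 in binary: six adds as plain shifts
        csd = to_csd(c)                # [1, 0, -1, 0, 0, -1, 0, 0, 1]: four terms
        assert csd_multiply(7, csd) == 7 * c
        print(csd)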
  361. Forecasting of the electrical actuators condition using stator's current signals

    NASA Astrophysics Data System (ADS)

    Kruglova, T. N.; Yaroshenko, I. V.; Rabotalov, N. N.; Melnikov, M. A.

    2017-02-01

    This article describes a forecasting method for electrical actuators realized through the combination of Fourier transformation and neural network techniques. The method allows finding the values of the diagnostic functions in the current operating cycle and the number of operating cycles remaining before the BLDC actuator fails. For forecasting the condition of the actuator, we propose a hierarchical structure of the neural network, aiming to reduce training time and improve estimation accuracy.
  362. Formal Solutions for Polarized Radiative Transfer. II. High-order Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Janett, Gioele; Steiner, Oskar; Belluzzi, Luca

    When integrating the radiative transfer equation for polarized light, the necessity of high-order numerical methods is well known. In fact, well-performing high-order formal solvers enable higher accuracy and the use of coarser spatial grids. Aiming to provide a clear comparison between formal solvers, this work presents different high-order numerical schemes and applies the systematic analysis proposed by Janett et al., emphasizing their advantages and drawbacks in terms of order of accuracy, stability, and computational cost.

  363. Research on Aircraft Target Detection Algorithm Based on Improved Radial Gradient Transformation

    NASA Astrophysics Data System (ADS)

    Zhao, Z. M.; Gao, X. M.; Jiang, D. N.; Zhang, Y. Q.

    2018-04-01

    Aiming at the problem that a target may appear in different orientations in unmanned aerial vehicle (UAV) images, this paper studies a target detection algorithm based on rotation-invariant features and proposes a RIFF (Rotation-Invariant Fast Features) method, based on a look-up table and polar-coordinate acceleration, for aircraft target detection. Experiments show that the detection performance of this method is essentially equal to that of RIFF, while the operational efficiency is greatly improved.

  364. [Econometric and ethical validation of logistic regression. Reducing the number of patients in the evaluation of mortality].

    PubMed

    Castiel, D; Herve, C

    1992-01-01

    In general, a large number of patients is needed to conclude whether the results of a therapeutic strategy are significant or not. One can lower this number with a logit model. The method was proposed in an article published recently (Cost-utility analysis of early thrombolytic therapy, PharmacoEconomics, 1992). The present article is an essay aimed at validating the method, both from the econometric and ethical points of view.

  365. Video enhancement method with color-protection post-processing

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Kwak, Youngshin

    2015-01-01

    The current study proposes a post-processing method for video enhancement that adopts a color-protection technique. Color-protection attenuates perceptible artifacts due to over-enhancement in visually sensitive image regions such as low-chroma colors, including skin and gray objects. In addition, it reduces the loss in color texture caused by out-of-color-gamut signals. Consequently, the color reproducibility of video sequences can be remarkably enhanced while undesirable visual exaggerations are minimized.
  366. Extraction of trace elements by ultrasound-assisted emulsification from edible oils producing detergentless microemulsions.

    PubMed

    Kara, Derya; Fisher, Andrew; Hill, Steve

    2015-12-01

    The aim of this study was to develop a new method for the extraction and preconcentration of trace elements from edible oils via ultrasound-assisted extraction using ethylenediaminetetraacetic acid (EDTA), producing detergentless microemulsions. These were then analyzed using ICP-MS against matrix-matched standards. Optimum experimental conditions were determined and the applicability of the proposed ultrasound-assisted extraction method was investigated. Under the optimal conditions, the detection limits (μg kg(-1)) were 2.47, 2.81, 0.013, 0.037, 1.37, 0.050, 0.049, 0.47, 0.032 and 0.087 for Al, Ca, Cd, Cu, Mg, Mn, Ni, Ti, V and Zn, respectively, for edible oils (3Sb/m). The accuracy of the developed method was checked by analyzing certified reference material. The proposed method was applied to different edible oils such as sunflower seed oil, rapeseed oil, olive oil and cod liver oil.

  367. Chatter detection in milling process based on VMD and energy entropy

    NASA Astrophysics Data System (ADS)

    Liu, Changfu; Zhu, Lida; Ni, Chenbing

    2018-05-01

    This paper presents a novel approach to detecting milling chatter based on Variational Mode Decomposition (VMD) and energy entropy. VMD has already been employed for feature extraction from non-stationary signals. Parameters such as the number of modes (K) and the quadratic penalty (α) need to be selected empirically when a raw signal is decomposed by VMD. To solve the problem of how to select K and α, an automatic, kurtosis-based selection method for the VMD parameters is proposed in this paper. When chatter occurs in the milling process, energy is concentrated in the chatter frequency bands. To detect the chatter frequency bands automatically, a chatter detection method based on energy entropy is presented. A vibration signal containing chatter frequencies was simulated, and three groups of experiments representing three cutting conditions were conducted. To verify the effectiveness of the method presented in this paper, chatter feature extraction was successfully applied to both the simulated and the experimental signals. The simulation and experimental results show that the proposed method can effectively detect chatter.
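    The energy-entropy criterion can be illustrated without a full VMD implementation: split the spectrum into bands, normalize the band energies, and compute Shannon entropy, which drops when energy concentrates in a chatter band. Synthetic signals stand in for cutting-force measurements:

        import numpy as np

        def energy_entropy(signal, n_bands=8):
            """Shannon entropy of normalized spectral band energies; lower entropy
            means energy concentrating in few bands, a symptom of chatter."""
            spectrum = np.abs(np.fft.rfft(signal)) ** 2
            band_energy = np.array([b.sum() for b in np.array_split(spectrum, n_bands)])
            p = band_energy / band_energy.sum()
            p = p[p > 0]
            return float(-(p * np.log(p)).sum())

        fs = 10_000                                  # assumed sampling rate, Hz
        t = np.arange(0, 1.0, 1 / fs)
        rng = np.random.default_rng(2)
        stable = rng.normal(size=t.size)             # broadband cutting noise
        chatter = stable + 8 * np.sin(2 * np.pi * 1600 * t)   # dominant chatter tone
        print(energy_entropy(stable), energy_entropy(chatter))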
  368. Multivariate fault isolation of batch processes via variable selection in partial least squares discriminant analysis.

    PubMed

    Yan, Zhengbing; Kuang, Te-Hui; Yao, Yuan

    2017-09-01

    In recent years, multivariate statistical monitoring of batch processes has become a popular research topic, wherein multivariate fault isolation is an important step aimed at identifying the faulty variables contributing most to the detected process abnormality. Although contribution plots have been commonly used in statistical fault isolation, such methods suffer from the smearing effect between correlated variables. In particular, in batch process monitoring, the high autocorrelations and cross-correlations that exist in variable trajectories make the smearing effect unavoidable. To address this problem, a variable selection-based fault isolation method is proposed in this research, which transforms the fault isolation problem into a variable selection problem in partial least squares discriminant analysis and solves it by calculating a sparse partial least squares model. Unlike traditional methods, the proposed method emphasizes the relative importance of each process variable. Such information may help process engineers in conducting root-cause diagnosis.

  369. Method of gear fault diagnosis based on EEMD and improved Elman neural network

    NASA Astrophysics Data System (ADS)

    Zhang, Qi; Zhao, Wei; Xiao, Shungen; Song, Mengmeng

    2017-05-01

    Fault information for gear defects such as cracks and wear is usually difficult to diagnose because the signals are weak, so a gear fault diagnosis method based on the fusion of EEMD and an improved Elman neural network is proposed. A number of intrinsic mode function (IMF) components are obtained by decomposing the denoised fault signals with EEMD, and pseudo IMF components are eliminated using the correlation coefficient method to obtain the effective IMF components. The energy characteristic value of each effective component is calculated as the input feature of the Elman neural network; the improved Elman neural network extends the standard network by adding a feedback factor. Fault data for normal, broken-tooth, cracked, and worn gears were collected in the field and analyzed by the diagnostic method proposed in this paper. The results show that, compared with the standard Elman neural network, the improved Elman neural network achieves higher diagnostic efficiency.
  370. Multi-label learning with fuzzy hypergraph regularization for protein subcellular location prediction.

    PubMed

    Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei

    2014-12-01

    Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model, FHML, for both single-location and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only the feature space, but also the label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on six protein benchmark datasets demonstrate the superiority of our proposed method over the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.

  371. Modelling soil water retention using support vector machines with genetic algorithm optimisation.

    PubMed

    Lamorski, Krzysztof; Sławiński, Cezary; Moreno, Felix; Barna, Gyöngyi; Skierucha, Wojciech; Arrue, José L

    2014-01-01

    This work presents point pedotransfer function (PTF) models of the soil water retention curve. The developed models allow for the estimation of the soil water content at the specified soil water potentials: -0.98, -3.10, -9.81, -31.02, -491.66, and -1554.78 kPa, based on the following soil characteristics: soil granulometric composition, total porosity, and bulk density. Support Vector Machine (SVM) methodology was used for model development. A new methodology for the elaboration of retention function models is proposed. As an alternative to previous attempts known from the literature, the ν-SVM method was used for model development and the results were compared with those of the formerly used C-SVM method. For the models' parameter search, genetic algorithms were used as the optimisation framework. A new form of the aim function used for the model parameter search is proposed, which allowed the development of models with better prediction capabilities. This new aim function avoids the model overestimation typically encountered when root mean squared error is used as the aim function. The elaborated models showed good agreement with measured soil water retention data, with coefficients of determination in the range 0.67-0.92. The studies demonstrated the usability of the ν-SVM methodology together with genetic algorithm optimisation for retention modelling, which gave better-performing models than the other tested approaches.
  372. Extrinsic Calibration of a Laser Galvanometric Setup and a Range Camera.

    PubMed

    Sels, Seppe; Bogaerts, Boris; Vanlanduit, Steve; Penne, Rudi

    2018-05-08

    Currently, galvanometric scanning systems (like the one used in a scanning laser Doppler vibrometer) rely on a planar calibration procedure between a two-dimensional (2D) camera and the laser galvanometric scanning system to automatically aim a laser beam at a particular point on an object. In the case of nonplanar or moving objects, this calibration is no longer sufficiently accurate. In this work, a three-dimensional (3D) calibration procedure that uses a 3D range sensor is proposed. The 3D calibration is valid for all types of objects and retains its accuracy when objects are moved between subsequent measurement campaigns. The proposed 3D calibration uses a Non-Perspective-n-Point (NPnP) problem solution. The 3D range sensor is used to calculate the position of the object under test relative to the laser galvanometric system. With this extrinsic calibration, the laser galvanometric scanning system can automatically aim a laser beam at this object. In experiments, the mean accuracy of aiming the laser beam at an object is below 10 mm for 95% of the measurements. The achieved accuracy is mainly determined by the accuracy and resolution of the 3D range sensor. The new calibration method is significantly better than the original 2D calibration method, which in our setup achieves errors below 68 mm for 95% of the measurements.
  373. A survey of stakeholder organisations on the proposed new European chemical policy.

    PubMed

    Dandrea, Jennifer; Combes, Robert

    2006-03-01

    In February 2001, the European Commission published a White Paper proposing that a single new system of chemical regulation should be applied throughout the Member States of the European Union. The proposed Registration, Evaluation and Authorisation of Chemicals (REACH) system was to include both new and existing chemicals, with the aim of ensuring that sufficient pertinent data were made available to enable human health and the environment to be protected. The policy was founded on the principle of sustainable industrial development, and ambitiously attempted to incorporate the needs and views of key stakeholder organisations, such as industry, trade associations, consumer groups, environmentalists, animal welfarists and Member State governments. During the period between the publication of the White Paper and the on-line publication of consultation documents, as part of a public consultation exercise, in May 2003, many of these key stakeholder organisations produced material in support of or critical of the White Paper, either in part or as a whole. In this paper, we have attempted to review this extensive material and to present it in the context of the current chemical regulatory system that the REACH system will replace. Emphasis is placed on the impact of the new policy on the number of animals used in the testing regimes within the REACH system and the inclusion of alternative methods into the legislation. Although supportive of the overriding aims of the new policy, FRAME believes that the fundamental concept of a risk-free environment is flawed, and that the new REACH system will involve the unjustifiable use of millions of laboratory animals. The new policy does include alternative methods, particularly for base-set substances. Nevertheless, alternative testing methods that are already available have been excluded and replaced with outdated in vivo versions. There is also insufficient detail with regard to the further development and validation of alternative methods, particularly for substances of high concern, such as endocrine disrupters or reproductive toxins, for which no alternative testing methods currently exist.

  374. A survey of stakeholder organisations on the proposed new European chemicals policy.

    PubMed

    Dandrea, Jennifer; Combes, Robert D

    2003-11-01

    In February 2001, the European Commission published a White Paper proposing that a single new system of chemical regulation should be applied throughout the Member States of the European Union. The proposed Registration, Evaluation and Authorisation of Chemicals (REACH) system was to include both new and existing chemicals, with the aim of ensuring that sufficient pertinent data were made available to enable human health and the environment to be protected. The policy was founded on the principle of sustainable industrial development, and ambitiously attempted to incorporate the needs and views of key stakeholder organisations, such as industry, trade associations, consumer groups, environmentalists, animal welfarists and Member State governments. During the period between the publication of the White Paper and the on-line publication of consultation documents, as part of a public consultation exercise, in May 2003, many of these key stakeholder organisations produced material in support of or critical of the White Paper, either in part or as a whole. In this paper, we have attempted to review this extensive material and to present it in the context of the current chemical regulatory system that the REACH system will replace.
Emphasis is placed on the impact of the new policy on the number of animals used in the testing regimes within the REACH system and the inclusion of alternative methods into the legislation. Although supportive of the overriding aims of the new policy, FRAME believes that the fundamental concept of a risk-free environment is flawed, and that the new REACH system will involve the unjustifiable use of millions of laboratory animals. The new policy does include alternative methods, particularly for base-set substances. Nevertheless, alternative testing methods that are already available have been excluded and replaced with outdated in vivo versions. There is also insufficient detail with regard to the further development and validation of alternative methods, particularly for substances of high concern, such as endocrine disrupters or reproductive toxins, for which no alternative testing methods currently exist.

  375. QUAGOL: a guide for qualitative data analysis.

    PubMed

    Dierckx de Casterlé, Bernadette; Gastmans, Chris; Bryon, Els; Denier, Yvonne

    2012-03-01

    Data analysis is a complex and contested part of the qualitative research process, which has received limited theoretical attention. Researchers often need useful instructions or guidelines on how to analyse the mass of qualitative data, but face a lack of clear guidance for using particular analytic methods. The aim of this paper is to propose and discuss the Qualitative Analysis Guide of Leuven (QUAGOL), a guide developed to truly capture the rich insights of qualitative interview data. The article describes six major problems researchers often struggle with during the process of qualitative data analysis. Consequently, the QUAGOL is proposed as a guide to facilitate the process of analysis. Challenges that emerged and lessons learned from our own extensive experience with qualitative data analysis within the Grounded Theory approach, as well as from the experiences of other researchers (as described in the literature), are discussed, and recommendations are presented. Strengths and pitfalls of the proposed method are discussed in detail. The QUAGOL offers a comprehensive method to guide the process of qualitative data analysis. The process consists of two parts, each comprising five stages. The method is systematic but not rigid. It is characterized by iterative processes of digging deeper, constantly moving between the various stages of the process. As such, it aims to stimulate the researcher's intuition and creativity as much as possible. The QUAGOL is a theory- and practice-based guide that supports and facilitates the analysis of qualitative interview data. Although the method can facilitate the process of analysis, it cannot guarantee automatic quality. The skills of the researcher and the quality of the research team remain the most crucial components of a successful process of analysis. Additionally, the importance of constantly moving between the various stages throughout the research process cannot be overstated. Copyright © 2011 Elsevier Ltd.
All rights reserved.

  376. Parameter sensitivity analysis of the mixed Green-Ampt/Curve-Number method for rainfall excess estimation in small ungauged catchments

    NASA Astrophysics Data System (ADS)

    Romano, N.; Petroselli, A.; Grimaldi, S.

    2012-04-01

    With the aim of combining the practical advantages of the Soil Conservation Service - Curve Number (SCS-CN) method and the Green-Ampt (GA) infiltration model, we have developed a mixed procedure, which is referred to as CN4GA (Curve Number for Green-Ampt). The basic concept is that, for a given storm, the total net rainfall amount computed by the SCS-CN method is used to calibrate the soil hydraulic conductivity parameter of the Green-Ampt model, so as to distribute in time the information provided by the SCS-CN method. In a previous contribution, the proposed mixed procedure was evaluated on 100 observed events, showing encouraging results. In this study, a sensitivity analysis is carried out to further explore the feasibility of applying the CN4GA tool in small ungauged catchments. The proposed mixed procedure constrains the GA model with boundary and initial conditions, so that the GA soil hydraulic parameters are expected to be insensitive to the net hyetograph peak. To verify and evaluate this behaviour, synthetic design hyetographs and synthetic rainfall time series were selected and used in a Monte Carlo analysis. The results are encouraging and confirm that the parameter variability makes the proposed method an appropriate tool for hydrologic predictions in ungauged catchments. Keywords: SCS-CN method, Green-Ampt method, rainfall excess, ungauged basins, design hydrograph, rainfall-runoff modelling.
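    The CN4GA procedure in record 376 uses the SCS-CN total net rainfall as the calibration target for the Green-Ampt conductivity. The SCS-CN step itself is standard, and a minimal sketch is given below; the storm depth and curve number are hypothetical example values.

      def scs_cn_runoff(P_mm, CN, lam=0.2):
          """Cumulative runoff depth Q (mm) from cumulative rainfall P (mm)
          by the SCS-CN method: S = 25400/CN - 254 (mm), Ia = lam * S,
          and Q = (P - Ia)^2 / (P - Ia + S) once P exceeds Ia."""
          S = 25400.0 / CN - 254.0      # potential maximum retention (mm)
          Ia = lam * S                  # initial abstraction
          if P_mm <= Ia:
              return 0.0
          return (P_mm - Ia) ** 2 / (P_mm - Ia + S)

      # Example: an 80 mm storm on a soil/land-use complex with CN = 75
      print(round(scs_cn_runoff(80.0, 75), 1))   # total net rainfall, ~27 mm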
  377. Maximal likelihood correspondence estimation for face recognition across pose.

    PubMed

    Li, Shaoxin; Liu, Xin; Chai, Xiujuan; Zhang, Haihong; Lao, Shihong; Shan, Shiguang

    2014-10-01

    Due to the misalignment of image features, the performance of many conventional face recognition methods degrades considerably in across-pose scenarios. To address this problem, many image-matching-based methods have been proposed to estimate the semantic correspondence between faces in different poses. In this paper, we aim to solve two critical problems of previous image-matching-based correspondence learning methods: 1) they fail to fully exploit face-specific structure information in correspondence estimation, and 2) they fail to learn a personalized correspondence for each probe image. To this end, we first build a model, termed the morphable displacement field (MDF), to encode face-specific structure information of semantic correspondence from a set of real samples of correspondences calculated from 3D face models. Then, we propose a maximal likelihood correspondence estimation (MLCE) method to learn a personalized correspondence based on the maximal-likelihood frontal face assumption. After obtaining the semantic correspondence encoded in the learned displacement, we can synthesize virtual frontal images of the profile faces for subsequent recognition. Using the linear discriminant analysis method with pixel-intensity features, state-of-the-art performance is achieved on three multipose benchmarks, i.e., the CMU-PIE, FERET, and MultiPIE databases. Owing to the rational MDF regularization and the use of a novel maximal likelihood objective, the proposed MLCE method can reliably learn the correspondence between faces in different poses even in complex wild environments, i.e., the Labeled Faces in the Wild database.

  378. Clustering methods for the optimization of atomic cluster structure

    NASA Astrophysics Data System (ADS)

    Bagattini, Francesco; Schoen, Fabio; Tigli, Luca

    2018-04-01

    In this paper, we propose a revised global optimization method and apply it to large-scale cluster conformation problems. In the 1990s, the so-called clustering methods were considered among the most efficient general-purpose global optimization techniques; however, their usage has quickly declined in recent years, mainly due to the inherent difficulties of clustering approaches in high-dimensional spaces. Inspired by the machine learning literature, we redesigned clustering methods in order to deal with molecular structures in a reduced feature space. Our aim is to show that by suitably choosing a good set of geometrical features, coupled with a very efficient descent method, an effective optimization tool is obtained which is capable of finding, with a very high success rate, all known putative optima for medium-size clusters without any prior information, both for Lennard-Jones and Morse potentials. The main result is that, beyond being a reliable approach, the proposed method, based on the idea of starting a computationally expensive deep local search only when it seems worth doing so, is capable of saving a huge number of searches with respect to an analogous algorithm which does not employ a clustering phase.
In this paper, we are not claiming the superiority of the proposed method over specific, refined, state-of-the-art procedures, but rather indicating a quite straightforward way to save local searches by means of a clustering scheme working in a reduced variable space, which might prove useful when included in many modern methods.

  379. Development and application of the RE-AIM QuEST mixed methods framework for program evaluation.

    PubMed

    Forman, Jane; Heisler, Michele; Damschroder, Laura J; Kaselitz, Elizabeth; Kerr, Eve A

    2017-06-01

    To increase the likelihood of successful implementation of interventions and to promote dissemination across real-world settings, it is essential to evaluate outcomes related to dimensions other than Effectiveness alone. Glasgow and colleagues' RE-AIM framework specifies four additional types of outcomes that are important to decision-makers: Reach, Adoption, Implementation (including cost), and Maintenance. To further strengthen RE-AIM, we propose integrating qualitative assessments in an expanded framework: RE-AIM Qualitative Evaluation for Systematic Translation (RE-AIM QuEST), a mixed methods framework. RE-AIM QuEST guides formative evaluation to identify real-time implementation barriers and to explain how the implementation context may influence translation to additional settings. RE-AIM QuEST was used to evaluate a pharmacist-led hypertension management intervention at three VA facilities in 2008-2009. We systematically reviewed each of the five RE-AIM dimensions, created open-ended companion questions to the quantitative measures, and identified qualitative and quantitative data sources, measures, and analyses. To illustrate the use of the RE-AIM QuEST framework, we provide examples of real-time, coordinated use of quantitative process measures and qualitative methods to identify site-specific issues, and of retrospective use of these data sources and analyses to understand variation across sites and explain outcomes. For example, in the Reach dimension, we conducted real-time measurement of enrollment across sites and used qualitative data to better understand and address barriers at a low-enrollment site. The RE-AIM QuEST framework may be a useful tool for improving interventions in real time, for understanding retrospectively why an intervention did or did not work, and for enhancing its sustainability and translation to other settings.

  380. Multimodal Event Detection in Twitter Hashtag Networks

    DOE PAGES

    Yilmaz, Yasin; Hero, Alfred O.

    2016-07-01

    In this study, event detection in a multimodal Twitter dataset is considered. We treat the hashtags in the dataset as instances with two modes: text and geolocation features.
The text feature consists of a bag-of-words representation. The geolocation feature consists of geotags (i.e., geographical coordinates) of the tweets. By fusing the multimodal data, we aim to detect the interesting events, and the associated hashtags, in terms of topic and geolocation. To this end, a generative latent variable model is assumed, and a generalized expectation-maximization (EM) algorithm is derived to learn the model parameters. The proposed method is computationally efficient and lends itself to big datasets. Lastly, experimental results on a Twitter dataset from August 2014 show the efficacy of the proposed method.

  381. Comic image understanding based on polygon detection

    NASA Astrophysics Data System (ADS)

    Li, Luyuan; Wang, Yongtao; Tang, Zhi; Liu, Dong

    2013-01-01

    Comic image understanding aims to automatically decompose scanned comic page images into storyboards and then identify their reading order, which is the key technique for producing digital comic documents that are suitable for reading on mobile devices. In this paper, we propose a novel comic image understanding method based on polygon detection. First, we segment a comic page image into storyboards by finding the polygonal enclosing box of each storyboard. Then, each storyboard can be represented by a polygon, and the reading order is determined by analyzing the relative geometric relationship between each pair of polygons.
The proposed method was tested on 2000 comic images from ten printed comic series, and the experimental results demonstrate that it works well on different types of comic images.

  382. LSTM for diagnosis of neurodegenerative diseases using gait data

    NASA Astrophysics Data System (ADS)

    Zhao, Aite; Qi, Lin; Li, Jie; Dong, Junyu; Yu, Hui

    2018-04-01

    Neurodegenerative diseases (NDs) usually cause gait disorders and postural disorders, which provide an important basis for ND diagnosis. By observing and analyzing these clinical manifestations, medical specialists give diagnostic results to the patient, which is inefficient and can easily be affected by a doctor's subjectivity. In this paper, we propose a two-layer Long Short-Term Memory (LSTM) model to learn the gait patterns exhibited in three NDs. The model was trained and tested using temporal data recorded by force-sensitive resistors, including time series such as stride interval and swing interval. Our proposed method outperforms other methods in the literature in terms of the accuracy of the predicted diagnostic result. Our approach aims at providing a quantitative assessment to support the diagnosis and treatment of these neurodegenerative diseases in clinical practice.
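    A two-layer LSTM classifier of the kind outlined in record 382 can be sketched in a few lines of PyTorch; the feature count, hidden size, and class count below are hypothetical placeholders, not the authors' configuration.

      import torch
      import torch.nn as nn

      class GaitLSTM(nn.Module):
          """Two-layer LSTM over gait time series (e.g., stride and swing
          intervals), followed by a linear head over disease classes."""
          def __init__(self, n_features=2, hidden=64, n_classes=4):
              super().__init__()
              self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
              self.head = nn.Linear(hidden, n_classes)

          def forward(self, x):             # x: (batch, time, n_features)
              out, _ = self.lstm(x)
              return self.head(out[:, -1])  # classify from the last time step

      model = GaitLSTM()
      x = torch.randn(8, 300, 2)            # 8 sequences, 300 steps, 2 gait features
      logits = model(x)                     # (8, 4); train with nn.CrossEntropyLoss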
  383. A hybrid group method of data handling with discrete wavelet transform for GDP forecasting

    NASA Astrophysics Data System (ADS)

    Isa, Nadira Mohamed; Shabri, Ani

    2013-09-01

    This study proposes the application of a hybrid model combining the Group Method of Data Handling (GMDH) and the Discrete Wavelet Transform (DWT) to time series forecasting. The objective of this paper is to examine the flexibility of the hybrid GMDH in time series forecasting using Gross Domestic Product (GDP) data. A real-life time series dataset is used to demonstrate the effectiveness of the forecasting model. The experiment compares the performance of the hybrid model with that of single models: Wavelet-Linear Regression (WR), an Artificial Neural Network (ANN), and conventional GMDH. It is shown that the proposed model can provide a promising alternative technique for GDP forecasting.

  384. Local facet approximation for image stitching

    NASA Astrophysics Data System (ADS)

    Li, Jing; Lai, Shiming; Liu, Yu; Wang, Zhengming; Zhang, Maojun

    2018-01-01

    Image stitching aims at eliminating multiview parallax and generating a seamless panorama from a set of input images. This paper proposes a local adaptive stitching method that achieves accurate and robust image alignment across the whole panorama. A transformation estimation model is introduced by approximating the scene as a combination of neighboring facets. Then, the local adaptive stitching field is constructed using a series of linear systems of the facet parameters, which enables parallax handling in three-dimensional space. We also provide a concise but effective global projectivity-preserving technique that smoothly varies the transformations from locally adaptive to globally planar. The proposed model is capable of stitching both normal images and fisheye images. The efficiency of our method is quantitatively demonstrated in comparative experiments on several challenging cases.

  385. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, I.; Ferré, G.; Poirion, F.

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields from a given database is proposed. It is based on two successive biorthogonal decompositions aimed at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even in the case of large-size problems, by separating the time, space, and random parts of the field. A Gaussian kernel estimator is used to simulate the high-dimensional set of random variables appearing in the decomposition.
The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).

  386. Paule-Mandel estimators for network meta-analysis with random inconsistency effects

    PubMed Central

    Veroniki, Areti Angeliki; Law, Martin; Tricco, Andrea C.; Baker, Rose

    2017-01-01

    Network meta-analysis is used to simultaneously compare multiple treatments in a single analysis. However, network meta-analyses may exhibit inconsistency, where direct and different forms of indirect evidence are not in agreement with each other, even after allowing for between-study heterogeneity. Models for network meta-analysis with random inconsistency effects have the dual aim of allowing for inconsistencies and estimating average treatment effects across the whole network. To date, two classical estimation methods for fitting this type of model have been developed: a method of moments that extends DerSimonian and Laird's univariate method, and maximum likelihood estimation. However, the Paule and Mandel estimator is another recommended classical estimation method for univariate meta-analysis. In this paper, we extend the Paule and Mandel method so that it can be used to fit models for network meta-analysis with random inconsistency effects. We apply all three estimation methods to a variety of examples that have been used previously, and we also examine a challenging new dataset that is highly heterogeneous. We perform a simulation study based on this new example. We find that the proposed Paule and Mandel method performs satisfactorily, and generally better than the previously proposed method of moments, because it provides more accurate inferences. Furthermore, the Paule and Mandel method possesses some advantages over likelihood-based methods because it is both semiparametric and requires no convergence diagnostics. Although restricted maximum likelihood estimation remains the gold standard, the proposed methodology is a fully viable alternative to this and other estimation methods. PMID:28585257
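    Record 386 builds on the univariate Paule-Mandel estimator, which chooses the between-study variance tau^2 so that the weighted residual sum of squares Q(tau^2) matches its expectation k - 1. A minimal sketch of that univariate iteration follows, using a DerSimonian-Kacker-style Newton update; the study effects and variances in the example are hypothetical.

      import numpy as np

      def paule_mandel_tau2(y, v, tol=1e-8, max_iter=100):
          """Univariate Paule-Mandel estimate of between-study variance tau^2.
          Solves Q(tau^2) = sum w_i (y_i - mu_hat)^2 = k - 1,
          where w_i = 1 / (v_i + tau^2)."""
          y, v = np.asarray(y, float), np.asarray(v, float)
          k, tau2 = len(y), 0.0
          for _ in range(max_iter):
              w = 1.0 / (v + tau2)
              mu = np.sum(w * y) / np.sum(w)
              Q = np.sum(w * (y - mu) ** 2)
              if tau2 == 0.0 and Q <= k - 1:
                  break                       # estimate truncated at zero
              if abs(Q - (k - 1)) < tol:
                  break                       # converged
              # Newton-type step: Q is decreasing in tau^2
              tau2 += (Q - (k - 1)) / np.sum(w ** 2 * (y - mu) ** 2)
              tau2 = max(tau2, 0.0)
          return tau2

      # Hypothetical study effects and within-study variances
      print(paule_mandel_tau2([0.2, 0.5, -0.1, 0.8], [0.04, 0.09, 0.05, 0.12]))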
  387. Reflexion measurements for inverse characterization of steel diffusion bond mechanical properties

    NASA Astrophysics Data System (ADS)

    Le Bourdais, Florian; Cachon, Lionel; Rigal, Emmanuel

    2017-02-01

    The present work describes a non-destructive testing method aimed at securing high manufacturing quality of the innovative compact heat exchanger developed under the framework of the CEA R&D program dedicated to the Advanced Sodium Technological Reactor for Industrial Demonstration (ASTRID). The heat exchanger assembly procedure currently proposed involves high-temperature, high-pressure diffusion welding of stainless steel plates. The aim of the non-destructive method presented herein is to characterize the quality of the welds obtained through this assembly process. Based on a low-frequency model developed by Baik and Thompson [1], pulse-echo normal-incidence measurements are calibrated according to a specific procedure and allow the determination of the welding interface stiffness using a nonlinear fitting procedure in the frequency domain. Characterizing the plates after diffusion welding with this method provides a useful assessment of the material state as a function of the diffusion bonding process.

  388. Evolution of Toxoplasma-PCR methods and practices: a French national survey and proposal for technical guidelines.

    PubMed

    Roux, Guillaume; Varlet-Marie, Emmanuelle; Bastien, Patrick; Sterkers, Yvon

    2018-06-08

    The molecular diagnosis of toxoplasmosis lacks standardisation due to the use of numerous methods with variable performance. This diversity of methods also impairs robust performance comparisons between laboratories. The harmonisation of practices through the diffusion of technical guidelines is a useful way to improve performance, and knowledge of the methods and practices used for this molecular diagnosis is an essential step in providing guidelines for Toxoplasma-PCR. In the present study, we aimed (i) to describe the methods and practices of Toxoplasma-PCR used by clinical microbiology laboratories in France and (ii) to propose technical guidelines to improve the molecular diagnosis of toxoplasmosis. To do so, a yearly self-administered questionnaire-based survey was undertaken in proficient French laboratories from 2008 to 2015, and guidelines were proposed based on its results as well as on previously published work. This period saw the progressive abandonment of conventional PCR methods, of Toxoplasma-PCR targeting the B1 gene, and of the use of two concomitant molecular methods for this diagnosis. The diversity of practices persisted during the study, in spite of the increasing use of commercial kits such as PCR kits, DNA extraction controls, and PCR inhibition controls. We also observed a tendency towards the automation of DNA extraction. The evolution of practices did not always go together with an improvement in them, as shown notably by the declining use of Uracil-DNA Glycosylase to avoid carry-over contamination. We here propose technical recommendations corresponding to the items explored during the survey, with respect to DNA extraction, Toxoplasma-PCR, and good PCR practices. Copyright © 2018 Australian Society for Parasitology. Published by Elsevier Ltd.
All rights reserved.

  389. An efficient multi-objective optimization method for water quality sensor placement within water distribution systems considering contamination probability variations.

    PubMed

    He, Guilin; Zhang, Tuqiao; Zheng, Feifei; Zhang, Qingzhou

    2018-06-20

    Water quality security within water distribution systems (WDSs) has been an important issue due to their inherent vulnerability to contamination intrusion. This motivates intensive studies to identify optimal water quality sensor placement (WQSP) strategies, aimed at the timely and effective detection of (un)intentional intrusion events. However, the available WQSP optimization methods have consistently presumed that each WDS node has an equal contamination probability. While simple to implement, this assumption may not conform to the fact that nodal contamination probability may vary significantly across regions owing to variations in population density and user properties. Furthermore, low computational efficiency is another important factor that has seriously hampered the practical application of the currently available WQSP optimization approaches. To address these two issues, this paper proposes an efficient multi-objective WQSP optimization method that explicitly accounts for contamination probability variations. Four different contamination probability functions (CPFs) are proposed to represent the potential variations of nodal contamination probabilities within the WDS. Two real-world WDSs are used to demonstrate the utility of the proposed method. Results show that WQSP strategies can be significantly affected by the choice of the CPF. For example, when the proposed method is applied to the large case study with the CPF accounting for user properties, the event detection probabilities of the resultant solutions are approximately 65%, while these values are around 25% for the traditional approach, and such design solutions are achieved approximately 10,000 times faster than with the traditional method. This paper provides an alternative method for identifying optimal WQSP solutions for WDSs, and also builds knowledge regarding the impacts of different CPFs on sensor deployments. Copyright © 2018 Elsevier Ltd. All rights reserved.

  390. Optimum tuned mass damper design using harmony search with comparison of classical methods

    NASA Astrophysics Data System (ADS)

    Nigdeli, Sinan Melih; Bekdaş, Gebrail; Sayin, Baris

    2017-07-01

    As is well known, tuned mass dampers (TMDs) are added to mechanical systems in order to obtain good vibration damping, the main aim being to reduce the maximum amplitude at resonance. In this study, a metaheuristic algorithm called harmony search is employed for the optimum design of TMDs.
As the optimization objective, the transfer function of the acceleration of the system with respect to the ground acceleration was minimized. Numerical trials were conducted for four single-degree-of-freedom systems, and the results were compared with classical methods. In conclusion, the proposed method is feasible and more effective than the other documented methods.

  391. A new method for evaluating the dissolution of orodispersible films.

    PubMed

    Xia, Yiran; Chen, Fang; Zhang, Huiping; Luo, Chunlin

    2015-05-01

    The aim of this research was to develop and assess a new dissolution apparatus for orodispersible films (ODFs). The new apparatus was based on a flow-through cell design, which requires only a limited amount of dissolution medium and can automatically collect samples at short time intervals. Compared with the dissolution method in the Chinese Pharmacopeia, our method simulated the flow conditions of the oral cavity and produced reproducible dissolution data with a remarkably discriminating capability. We therefore concluded that the proposed dissolution method is particularly suitable for evaluating the dissolution of ODFs and should also be applicable to other fast-dissolving solid dosage forms.

  392. Modified GMDH-NN algorithm and its application for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Song, Shufang; Wang, Lu

    2017-11-01

    Global sensitivity analysis (GSA) is a very useful tool for evaluating the influence of input variables over their whole distribution range. Sobol' method is the most commonly used of the variance-based methods, which are efficient and popular GSA techniques. High dimensional model representation (HDMR) is a popular way to compute Sobol' indices; however, its drawbacks cannot be ignored. We show that a modified GMDH-NN algorithm can calculate the coefficients of the metamodel efficiently, so this paper combines it with HDMR and proposes the GMDH-HDMR method. The new method shows higher precision and a faster convergence rate. Several numerical and engineering examples are used to confirm its advantages.
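    Record 392 computes Sobol' indices through an HDMR metamodel; the quantity being approximated can be illustrated directly with a standard Monte Carlo (pick-freeze) estimator of first-order indices. The sketch below applies it to an Ishigami-style test function of our choosing, not the paper's examples.

      import numpy as np

      def sobol_first_order(f, d, n=100_000, rng=None):
          """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
          S_i = Var(E[Y|X_i]) / Var(Y), for f with d independent U(0,1) inputs."""
          rng = rng or np.random.default_rng(0)
          A, B = rng.random((n, d)), rng.random((n, d))
          yA, yB = f(A), f(B)
          var = np.var(np.concatenate([yA, yB]))
          S = np.empty(d)
          for i in range(d):
              ABi = A.copy()
              ABi[:, i] = B[:, i]           # replace only coordinate i
              # Saltelli (2010) estimator of Var(E[Y|X_i])
              S[i] = np.mean(yB * (f(ABi) - yA)) / var
          return S

      # Ishigami-style test function, whose indices are known analytically
      def g(X):
          x = 2 * np.pi * (X - 0.5)         # map U(0,1) inputs to [-pi, pi]
          return (np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2
                  + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

      print(np.round(sobol_first_order(g, 3), 3))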
  393. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  394. Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization

    PubMed Central

    Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu

    2016-01-01

    In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics-engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications. PMID:26771830

  395. Voltage oriented control of self-excited induction generator for wind energy system with MPPT

    NASA Astrophysics Data System (ADS)

    Amieur, Toufik; Taibi, Djamel; Amieur, Oualid

    2018-05-01

    This paper presents the study and simulation of the self-excited induction generator for wind power production on isolated sites.
With this in mind, a model of the wind turbine was established. An extremum-seeking control method using Maximum Power Point Tracking (MPPT) is proposed; the control solution aims at driving the average position of the operating point near to optimality. The reference turbine rotor speed is adjusted such that the turbine operates around maximum power for the current wind speed value. After a brief review of the concepts of converting wind energy into electrical energy, the proposed modelling tools are developed to study the performance of standalone induction generators connected to a capacitor bank. The purpose of this technique is to maintain a constant voltage at the output of the rectifier regardless of load and speed. The system studied in this work was developed and tested in the MATLAB/Simulink environment. Simulation results validate the performance and effectiveness of the proposed control methods.
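    The extremum-seeking MPPT loop in record 395 drives the operating point toward the power maximum by perturbation and observation. A minimal perturb-and-observe sketch is shown below; the static power curve is a hypothetical stand-in for the turbine-plus-converter dynamics.

      def perturb_and_observe(measure_power, step0=0.01, n_iter=200, d0=0.5):
          """Minimal perturb-and-observe MPPT loop (a simple extremum-seeking
          scheme): perturb the control variable (e.g., converter duty cycle),
          keep the direction if power increased, reverse it otherwise."""
          d, p_prev, step = d0, measure_power(d0), step0
          for _ in range(n_iter):
              d = min(max(d + step, 0.0), 1.0)   # apply perturbation, clamp to [0, 1]
              p = measure_power(d)
              if p < p_prev:                     # power dropped: reverse direction
                  step = -step
              p_prev = p
          return d

      # Hypothetical static power curve with its maximum at d = 0.62
      power_curve = lambda d: 1.0 - (d - 0.62) ** 2
      print(round(perturb_and_observe(power_curve), 2))  # oscillates near 0.62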
  396. Using a two-phase evolutionary framework to select multiple network spreaders based on community structure

    NASA Astrophysics Data System (ADS)

    Fu, Yu-Hsiang; Huang, Chung-Yuan; Sun, Chuen-Tsai

    2016-11-01

    Using network community structures to identify multiple influential spreaders is an appropriate method for analyzing the dissemination of information, ideas, and infectious diseases. For example, data on spreaders selected from groups of customers who make similar purchases may be used to advertise products and to optimize the allocation of limited resources. Other examples include community detection approaches aimed at identifying structures and groups in social or complex networks. However, determining the number of communities in a network remains a challenge. In this paper, we describe our proposal for a two-phase evolutionary framework (TPEF) for determining community numbers and maximizing community modularity. Lancichinetti-Fortunato-Radicchi benchmark networks were used to test our proposed method and to analyze execution time, community structure quality, convergence, and the network spreading effect. Results indicate that our proposed TPEF generates satisfactory levels of community quality and convergence. They also suggest a need for an index, mechanism, or sampling technique to determine whether a community detection approach should be used for selecting multiple network spreaders.

  397. Data-Driven Engineering of Social Dynamics: Pattern Matching and Profit Maximization.

    PubMed

    Peng, Huan-Kai; Lee, Hao-Chih; Pan, Jia-Yu; Marculescu, Radu

    2016-01-01

    In this paper, we define a new problem related to social media, namely, the data-driven engineering of social dynamics. More precisely, given a set of observations from the past, we aim at finding the best short-term intervention that can lead to predefined long-term outcomes. Toward this end, we propose a general formulation that covers two useful engineering tasks as special cases, namely, pattern matching and profit maximization. By incorporating a deep learning model, we derive a solution using convex relaxation and quadratic-programming transformation. Moreover, we propose a data-driven evaluation method in place of expensive field experiments. Using a Twitter dataset, we demonstrate the effectiveness of our dynamics-engineering approach for both pattern matching and profit maximization, and study the multifaceted interplay among several important factors of dynamics engineering, such as solution validity, pattern-matching accuracy, and intervention cost. Finally, the method we propose is general enough to work with multi-dimensional time series, so it can potentially be used in many other applications.

  398. Automatic Analysis of Pronunciations for Children with Speech Sound Disorders.

    PubMed

    Dudy, Shiran; Bedrick, Steven; Asgari, Meysam; Kain, Alexander

    2018-07-01

    Computer-Assisted Pronunciation Training (CAPT) systems aim to help a child learn the correct pronunciations of words. However, while there are many online commercial CAPT apps, there is no consensus among Speech Language Therapists (SLPs) or non-professionals about which CAPT systems, if any, work well. The prevailing assumption is that practicing with such programs is less reliable and thus does not provide the feedback necessary to allow children to improve their performance. The most common method for assessing pronunciation performance is the Goodness of Pronunciation (GOP) technique. Our paper proposes two new GOP techniques. We have found that pronunciation models that use explicit knowledge about error pronunciation patterns can lead to more accurate classification of whether a phoneme was correctly pronounced. We evaluate the proposed pronunciation assessment methods against a baseline state-of-the-art GOP approach, and show that the proposed techniques lead to classification performance that is closer to that of a human expert.
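    The baseline GOP technique referenced in record 398 scores a realized phone by the duration-normalized gap between the forced-alignment likelihood of the canonical phone and the best competing phone score. A minimal sketch with hypothetical acoustic scores (the record's two new variants are not reproduced here):

      def goodness_of_pronunciation(ll_forced, ll_free, n_frames):
          """Classic GOP score for one realized phone: the per-frame difference
          between the log-likelihood of the canonical phone under forced
          alignment and the best score of any phone over the same frames.
          Scores near 0 suggest a correct pronunciation; large negative values
          suggest mispronunciation (thresholds are typically tuned per phone)."""
          return (ll_forced - ll_free) / n_frames

      # Hypothetical acoustic scores for a 12-frame segment of the phone /s/
      ll_canonical = -310.0    # log p(O | /s/), forced alignment
      ll_best_any = -295.0     # max over all phones q of log p(O | q)
      print(goodness_of_pronunciation(ll_canonical, ll_best_any, 12))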
  399. Supervised extensions of chemography approaches: case studies of chemical liabilities assessment

    PubMed Central

    2014-01-01

    Chemical liabilities, such as adverse effects and toxicity, play a significant role in the modern drug discovery process. In silico assessment of chemical liabilities is an important step aimed at reducing costs and animal testing by complementing or replacing in vitro and in vivo experiments. Herein, we propose an approach combining several classification and chemography methods to predict chemical liabilities and to interpret the obtained results in the context of the impact of structural changes of compounds on their pharmacological profile. To our knowledge, this is the first time the supervised extension of Generative Topographic Mapping has been proposed as an effective new chemography method. A new approach for mapping new data using supervised Isomap, without rebuilding the model from scratch, is also proposed. Two approaches for estimating a model's applicability domain are used in our study, to our knowledge for the first time in chemoinformatics. The structural alerts responsible for the negative characteristics of the pharmacological profile of chemical compounds were found as a result of model interpretation. PMID:24868246

  400. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
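    The scheme in records 393, 400, and 401 has three ingredients: sampling from the equilibrium measure, Christoffel-function row weights, and $\ell^1$ minimization. The rough sketch below illustrates our reading of those ingredients for the Legendre case on [-1, 1], using cvxpy for the basis-pursuit step; the problem sizes, tolerance, and synthetic sparse target are arbitrary, and this is not the authors' code.

      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(1)
      N, m = 60, 30                          # basis size, number of samples (m < N)

      # Samples from the equilibrium measure of [-1, 1] (arcsine/Chebyshev density)
      x = np.cos(np.pi * rng.random(m))

      # Design matrix of orthonormal Legendre polynomials: phi_j = sqrt(2j+1) P_j
      A = np.polynomial.legendre.legvander(x, N - 1) * np.sqrt(2.0 * np.arange(N) + 1.0)

      # Diagonal preconditioner from the Christoffel function:
      # w_i = (sum_j phi_j(x_i)^2)^(-1/2), damping rows where the basis is large
      w = 1.0 / np.sqrt((A ** 2).sum(axis=1))

      # Synthetic sparse target: 5 active Legendre modes
      c_true = np.zeros(N)
      c_true[rng.choice(N, 5, replace=False)] = rng.normal(size=5)
      y = A @ c_true

      # Preconditioned basis pursuit: min ||c||_1  s.t.  ||W(Ac - y)||_2 <= eps
      c = cp.Variable(N)
      prob = cp.Problem(cp.Minimize(cp.norm(c, 1)),
                        [cp.norm(cp.multiply(w, A @ c - y), 2) <= 1e-6])
      prob.solve()
      print(np.linalg.norm(c.value - c_true, np.inf))  # small if recovery succeeds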
  401. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  402. Design of Energy Storage Management System Based on FPGA in Micro-Grid

    NASA Astrophysics Data System (ADS)

    Liang, Yafeng; Wang, Yanping; Han, Dexiao

    2018-01-01

    The energy storage system is the core for maintaining the stable operation of a smart micro-grid. Aiming at the existing problems of energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field-programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Finally, to address the problem that inaccurate initialization of weights and thresholds during neural-network estimation of the battery state of charge leads to large prediction errors, a genetic algorithm is proposed to optimize the neural network, and experimental simulations are carried out. The experimental results show that the algorithm has high precision and helps guarantee the stable operation of the micro-grid.

  403. Non-iterative geometric approach for inverse kinematics of redundant lead-module in a radiosurgical snake-like robot.

    PubMed

    Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei

    2017-08-01

    Snake-like robots are an emerging form of serial-link manipulator with the morphologic design of biological snakes. These redundant robots can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery; however, the few that have been developed are yet to be fully explored for clinical procedures. This is due to the lack of capability for full-fledged spatial navigation. In the rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we propose a non-iterative geometric method for solving the IK of the lead-module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method is aimed at providing an accurate and fast IK solution for given target points in the robot's workspace. n-1 virtual points (VPs) were geometrically computed and set as coordinates of intermediary joints in an n-link module.
403. Non-iterative geometric approach for inverse kinematics of redundant lead-module in a radiosurgical snake-like robot.

    PubMed

    Omisore, Olatunji Mumini; Han, Shipeng; Ren, Lingxue; Zhang, Nannan; Ivanov, Kamen; Elazab, Ahmed; Wang, Lei

    2017-08-01

    The snake-like robot is an emerging form of serial-link manipulator with the morphologic design of biological snakes. This redundant robot can be used to assist medical experts in accessing internal organs with minimal or no invasion. Several snake-like robotic designs have been proposed for minimally invasive surgery; however, the few that were developed are yet to be fully explored for clinical procedures, owing to a lack of capability for full-fledged spatial navigation. In the rare cases where such snake-like designs are spatially flexible, there exists no inverse kinematics (IK) solution with both precise control and fast response. In this study, we propose a non-iterative geometric method for solving the IK of the lead-module of a snake-like robot designed for therapy or ablation of abdominal tumors. The proposed method aims to provide an accurate and fast IK solution for given target points in the robot's workspace. n-1 virtual points (VPs) are geometrically computed and set as the coordinates of the intermediary joints in an n-link module. Suitable joint angles that place the end-effector at a given target point are then computed by vectorizing the coordinates of the VPs, together with the coordinates of the base point, the target point, and the tip of the first link in its default pose. The proposed method is applied to solve the IK of two-link and redundant four-link modules, both simulated with the Robotics Toolbox in Matlab 8.3 (R2014a). Implementation results show that the proposed method solves the IK of the spatially flexible robot with minimal error values. Furthermore, analyses of results from both modules show that the geometric method can reach 99.21 and 88.61% of points in their respective workspaces with an error threshold of 1 mm. The proposed method is non-iterative and has a maximum execution time of 0.009 s. This paper focuses on solving the IK problem of a spatially flexible robot that is part of a developmental project for abdominal surgery through minimal invasion or natural orifices. The study showed that the proposed geometric method can resolve the IK of the snake-like robot with negligible error offset. Evaluation against well-known methods shows that the proposed method can reach several points in the robot's workspace with high accuracy and shorter computational time, simultaneously.
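For the two-link case, a non-iterative geometric IK solution can be written in closed form. The sketch below is the standard law-of-cosines construction, shown only to make the "non-iterative geometric" idea concrete; it is not the authors' virtual-point formulation, and the link lengths and target are arbitrary.

    import numpy as np

    def two_link_ik(x, y, l1=1.0, l2=1.0):
        """Closed-form joint angles placing the tip of a planar two-link arm at (x, y)."""
        r2 = x ** 2 + y ** 2
        c2 = (r2 - l1 ** 2 - l2 ** 2) / (2 * l1 * l2)    # law of cosines
        if abs(c2) > 1:
            raise ValueError("target outside workspace")
        q2 = np.arccos(c2)                               # elbow-down branch
        q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
        return q1, q2

    q1, q2 = two_link_ik(1.2, 0.7)
    # forward-kinematics check: the tip reproduces the target up to rounding
    tip = (np.cos(q1) + np.cos(q1 + q2), np.sin(q1) + np.sin(q1 + q2))

The redundant four-link case is where the extra virtual points earn their keep: with more joints than task coordinates, infinitely many joint configurations reach the same target, and the VPs pin down one of them.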
404. Bidirectional Teleportation Protocol in Quantum Wireless Multi-hop Network

    NASA Astrophysics Data System (ADS)

    Cai, Rui; Yu, Xu-Tao; Zhang, Zai-Chen

    2018-06-01

    We propose a bidirectional quantum teleportation protocol based on a composite GHZ-Bell state. In this protocol, the composite GHZ-Bell state channel is transformed into a two-Bell-state channel through gate operations and single-qubit measurements. The channel transformation leads to different kinds of quantum channel states, so a method is proposed to help determine the unitary matrices effectively under the different quantum channels. Furthermore, we discuss the bidirectional teleportation protocol in the quantum wireless multi-hop network. This paper aims to provide a bidirectional teleportation protocol and to study bidirectional multi-hop teleportation in the quantum wireless communication network.

405. Comparative evaluation of ultrasound scanner accuracy in distance measurement

    NASA Astrophysics Data System (ADS)

    Branca, F. P.; Sciuto, S. A.; Scorza, A.

    2012-10-01

    The aim of the present study is to develop and compare two different automatic methods for accuracy evaluation in ultrasound phantom measurements on B-mode images: both yield the relative error e between distances measured by 14 brand-new ultrasound medical scanners and nominal distances among nylon wires embedded in a reference test object. The first method is based on least squares estimation, while the second applies the mean value of the same distance evaluated at different locations in the ultrasound image (same distance method). Results for both are presented and explained.
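The least-squares variant of the relative-error estimate is a one-liner if the measured distance is modeled as (1 + e) times the nominal distance; the wire spacings below are invented for illustration.

    import numpy as np

    nominal = np.array([10.0, 20.0, 40.0, 80.0])     # mm, test-object wire spacings
    measured = np.array([10.1, 20.3, 40.2, 80.9])    # mm, scanner read-outs

    # Least-squares slope of measured vs nominal through the origin gives 1 + e.
    e = measured @ nominal / (nominal @ nominal) - 1.0
    print(f"relative error e = {e:+.3%}")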
406. Towards sound epistemological foundations of statistical methods for high-dimensional biology.

    PubMed

    Mehta, Tapan; Tanik, Murat; Allison, David B

    2004-09-01

    A sound epistemological foundation for biological inquiry comes, in part, from application of valid statistical procedures. This tenet is widely appreciated by scientists studying the new realm of high-dimensional biology, or 'omic' research, which involves multiplicity at unprecedented scales. Many papers aimed at the high-dimensional biology community describe the development or application of statistical techniques. The validity of many of these is questionable, and a shared understanding about the epistemological foundations of the statistical methods themselves seems to be lacking. Here we offer a framework in which the epistemological foundation of proposed statistical methods can be evaluated.

407. Safety training priorities

    NASA Astrophysics Data System (ADS)

    Thompson, N. A.; Ruck, H. W.

    1984-04-01

    The Air Force is interested in identifying potentially hazardous tasks and in the prevention of accidents. This effort proposes four methods for determining safety training priorities for job tasks in three enlisted specialties. These methods can be used to design training aimed at avoiding the loss of people, time, materials, and money associated with on-the-job accidents. Job tasks performed by airmen were measured using task and job factor ratings. Combining accident reports and job inventories, subject-matter experts identified tasks associated with accidents over a 3-year period. Applying correlational, multiple regression, and cost-benefit analysis, four methods were developed for ordering hazardous tasks to determine safety training priorities.

408. Proposal of a method for the evaluation of inaccuracy of home sphygmomanometers.

    PubMed

    Akpolat, Tekin

    2009-10-01

    There is no formal protocol for evaluating the individual accuracy of home sphygmomanometers. The aims of this study were to propose a method for achieving accuracy in automated home sphygmomanometers and to test the applicability of the defined method. The purposes of this method were to avoid major inaccuracies and to estimate the optimal circumstance for individual accuracy. The method has three stages, and sequential measurement of blood pressure is used. The tested devices were categorized into four groups: accurate, acceptable, inaccurate, and very inaccurate (major inaccuracy). The defined method takes approximately 10 min (excluding relaxation time) and was tested on three different occasions. Application of the method has shown that inaccuracy is a common problem among non-tested devices; that validated devices are superior to those that are non-validated or whose validation status is unknown; that major inaccuracy is common, especially in non-tested devices; and that validation does not guarantee individual accuracy. A protocol addressing the accuracy of a particular sphygmomanometer in an individual patient is required, and a practical method has been suggested to achieve this. This method can be modified, but the main idea and approach should be preserved unless a better method is proposed. The purchase of validated devices and evaluation of the accuracy of the purchased device in an individual patient will improve the monitoring of self-measurement of blood pressure at home. This study addresses device inaccuracy, but errors related to the patient, observer, or blood pressure measurement technique should not be underestimated, and strict adherence to the manufacturer's instructions is essential.
409. Image Mosaic Method Based on SIFT Features of Line Segment

    PubMed Central

    Zhu, Jun; Ren, Mingwu

    2014-01-01

    This paper proposes a novel image mosaic method based on SIFT (Scale Invariant Feature Transform) features of line segments, aiming to handle scaling, rotation, changes in lighting conditions, and so on between two images in the panoramic image mosaic process. The method first uses the Harris corner detection operator to detect key points. Second, it constructs directed line segments, describes them with SIFT features, and matches those directed segments to acquire a rough point matching. Finally, the RANSAC method is used to eliminate wrong pairs in order to accomplish the image mosaic. Results from experiments based on four pairs of images show that the method is strongly robust to resolution, lighting, rotation, and scaling.

410. Motif-Synchronization: A new method for analysis of dynamic brain networks with EEG

    NASA Astrophysics Data System (ADS)

    Rosário, R. S.; Cardoso, P. T.; Muñoz, M. A.; Montoya, P.; Miranda, J. G. V.

    2015-12-01

    The major aim of this work was to propose a new association method known as Motif-Synchronization. This method was developed to provide information about the degree and direction of synchronization between two nodes of a network by counting the number of occurrences of certain patterns between any two time series. The second objective of this work was to present a new methodology for the analysis of dynamic brain networks, by combining the Time-Varying Graph (TVG) method with a directional association method. We further applied the new algorithms to a set of human electroencephalogram (EEG) signals to perform a dynamic analysis of brain functional networks (BFN).
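The pattern-counting idea behind Motif-Synchronization can be illustrated with ordinal (rank) motifs. The sketch below is our own reading of the approach: it encodes each series as 3-point rank motifs and counts lagged motif co-occurrences, whose asymmetry suggests a direction; the motif length, lag window, and normalization are illustrative assumptions.

    import numpy as np

    def motifs(x):
        # rank pattern of each consecutive triple -> integer motif label
        w = np.lib.stride_tricks.sliding_window_view(x, 3)
        return np.array([int("".join(map(str, np.argsort(t)))) for t in w])

    def motif_sync(x, y, max_lag=3):
        mx, my = motifs(x), motifs(y)
        n = min(len(mx), len(my))
        c_xy = sum((mx[:n - L] == my[L:n]).sum() for L in range(1, max_lag + 1))
        c_yx = sum((my[:n - L] == mx[L:n]).sum() for L in range(1, max_lag + 1))
        c0 = (mx[:n] == my[:n]).sum()
        strength = (c0 + c_xy + c_yx) / (n * (2 * max_lag + 1))   # crude [0, 1] scale
        direction = np.sign(c_xy - c_yx)   # +1: x tends to lead y, -1: y leads x
        return strength, direction

    rng = np.random.default_rng(7)
    x = rng.normal(size=500)
    y = np.roll(x, 2) + 0.3 * rng.normal(size=500)   # y lags x by 2 samples
    print(motif_sync(x, y))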
411. AIM - Agile Instrumented Monitoring for Improving User Experience of Participation in HealthIT Development.

    PubMed

    Pitkänen, Janne; Nieminen, Marko

    2017-01-01

    Participation of healthcare professionals in information technology development has emerged as an important challenge. As end-users, the professionals are willing to participate in development activities, but their experiences with the current methods of participation remain mostly negative. There is a lack of methods that meet the needs of an agile development approach and scale up to the largest implementation projects, while maintaining the professional users' interest in participating in development activities and keeping up their ability to continue working productively. In this paper, we describe Agile Instrumented Monitoring as a methodology, based on the methods of instrumented usability evaluation, for improving the user experience in HealthIT development. The contribution of the proposed methodology is analyzed in relation to the activities of the whole iteration cycle and chosen usability evaluation methods, with the user experience of participation considered from the perspective of healthcare professionals. Prospective weak and strong market tests for AIM are discussed in the conclusions as future work.

412. Methodological considerations of the GRADE method.

    PubMed

    Malmivaara, Antti

    2015-02-01

    The GRADE method (Grading of Recommendations, Assessment, Development, and Evaluation) provides a tool for rating the quality of evidence for systematic reviews and clinical guidelines. This article aims to analyse conceptually how well grounded the GRADE method is, and to suggest improvements. The eight criteria for rating the quality of evidence proposed by GRADE are analysed in terms of each criterion's potential to provide valid information for grading evidence. Secondly, the GRADE method of allocating weights and summarizing the values of the criteria is considered. It is concluded that three GRADE criteria have an appropriate conceptual basis to be used as indicators of confidence in research evidence in systematic reviews: internal validity of a study, consistency of the findings, and publication bias. In network meta-analyses, the indirectness of evidence may also be considered. It is here proposed that the grade for the internal validity of a study could in some instances justifiably decrease the overall grade by three grades (e.g. from high to very low) instead of the decrease of up to two grades suggested by the GRADE method.
413. Detecting brain tumor in computed tomography images using Markov random fields and fuzzy C-means clustering techniques

    NASA Astrophysics Data System (ADS)

    Abdulbaqi, Hayder Saad; Jafri, Mohd Zubir Mat; Omar, Ahmad Fairuz; Mustafa, Iskandar Shahrim Bin; Abood, Loay Kadom

    2015-04-01

    Brain tumors are an abnormal growth of tissues in the brain. They may arise in people of any age. They must be detected early, diagnosed accurately, monitored carefully, and treated effectively in order to optimize patient outcomes regarding both survival and quality of life. Manual segmentation of brain tumors from CT scan images is a challenging and time-consuming task. Accurate detection of the size and location of a brain tumor plays a vital role in successful diagnosis and treatment. Brain tumor detection is considered a challenging task in medical image processing. The aim of this paper is to introduce a scheme for tumor detection in CT scan images using two different techniques: Hidden Markov Random Fields (HMRF) and Fuzzy C-means (FCM). The method developed in this research constructs a hybrid of HMRF and thresholding. These methods have been applied to four different patient data sets. The comparison among these methods shows that the proposed method gives good results for brain tissue detection and is more robust and effective compared with the FCM technique.
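The FCM half of the scheme is compact enough to sketch directly; the implementation below is a generic fuzzy C-means on intensity features (fuzzifier m = 2, synthetic 1-D data), not the authors' HMRF-threshold hybrid.

    import numpy as np

    def fcm(X, c=3, m=2.0, iters=100, seed=0):
        """X: (n, d) feature vectors (e.g., CT intensities); returns memberships U, centers V."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c)); U /= U.sum(1, keepdims=True)
        for _ in range(iters):
            Um = U ** m
            V = (Um.T @ X) / Um.sum(0)[:, None]                 # weighted cluster centers
            D = np.linalg.norm(X[:, None] - V[None], axis=2) + 1e-12
            U = 1.0 / (D ** (2 / (m - 1)))                      # inverse-distance update
            U /= U.sum(1, keepdims=True)
        return U, V

    # toy example: 1-D "intensities" drawn from three tissue classes
    rng = np.random.default_rng(8)
    X = np.concatenate([rng.normal(mu, 2, 200) for mu in (30, 80, 140)])[:, None]
    U, V = fcm(X)
    labels = U.argmax(1)   # hard segmentation from the soft memberships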
414. Spreading to localized targets in complex networks

    NASA Astrophysics Data System (ADS)

    Sun, Ye; Ma, Long; Zeng, An; Wang, Wen-Xu

    2016-12-01

    As an important type of dynamics on complex networks, spreading is widely used to model many real processes such as epidemic contagion and information propagation. One of the most significant research questions in spreading is how to rank the spreading ability of nodes in the network. To this end, substantial effort has been made and a variety of effective methods have been proposed. These methods usually define the spreading ability of a node as the number of finally infected nodes given that the spreading is initialized from the node. However, in many real cases such as advertising and news propagation, the spreading only aims to cover a specific group of nodes. Therefore, it is necessary to study the spreading ability of nodes towards localized targets in complex networks. In this paper, we propose a reversed local path algorithm for this problem. Simulation results show that our method outperforms the existing methods in identifying influential nodes with respect to these localized targets. Moreover, the influential spreaders identified by our method can effectively avoid infecting the non-target nodes in the spreading process.

415. Transform-Based Channel-Data Compression to Improve the Performance of a Real-Time GPU-Based Software Beamformer.

    PubMed

    Lok, U-Wai; Li, Pai-Chi

    2016-03-01

    Graphics processing unit (GPU)-based software beamforming has advantages over hardware-based beamforming: easier programmability and a faster design cycle, since complicated imaging algorithms can be efficiently programmed and modified. However, the need for a high data rate when transferring ultrasound radio-frequency (RF) data from the hardware front end to the software back end limits real-time performance. Data compression methods can be applied at the hardware front end to mitigate the data transfer issue. Nevertheless, most decompression processes cannot be performed efficiently on a GPU, thus becoming another bottleneck for real-time imaging. Moreover, lossless (or nearly lossless) compression is desirable to avoid image quality degradation. In a previous study, we proposed a real-time lossless compression-decompression algorithm and demonstrated that it can reduce the overall processing time because the reduction in data transfer time is greater than the computation time required for compression/decompression. This paper analyzes the lossless compression method in order to understand the factors limiting the compression efficiency. Based on the analytical results, a nearly lossless compression is proposed to further enhance the compression efficiency. The proposed method comprises a transformation coding method involving modified lossless compression that aims at suppressing amplitude data. The simulation results indicate that the compression ratio (CR) of the proposed approach can be enhanced from nearly 1.8 to 2.5, thus allowing a higher data acquisition rate at the front end. The spatial and contrast resolutions with and without compression were almost identical, and decompressing the data of a single frame on a GPU took only several milliseconds. Moreover, the proposed method has been implemented in a 64-channel system built in-house to demonstrate the feasibility of the algorithm in a real system. Channel data from the 64-channel system can be transferred over the standard USB 3.0 interface in most practical imaging applications.
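A generic transform-coding experiment along these lines, not the paper's coder: a first-difference (lossless, invertible) transform decorrelates a synthetic RF trace, and the empirical entropy of the residuals bounds the compression ratio attainable by an entropy coder on a 16-bit source. The signal and all numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(2)
    rf = (4000 * np.sin(np.linspace(0, 80 * np.pi, 65536))
          + rng.normal(0, 50, 65536)).astype(np.int16)

    residual = np.diff(rf, prepend=rf[:1])            # lossless, invertible transform
    vals, counts = np.unique(residual, return_counts=True)
    p = counts / counts.sum()
    entropy = -(p * np.log2(p)).sum()                 # bits/sample after the transform
    print(f"estimated CR ~ {16 / entropy:.2f} vs 16-bit raw samples")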
416. Deep Correlated Holistic Metric Learning for Sketch-Based 3D Shape Retrieval.

    PubMed

    Dai, Guoxian; Xie, Jin; Fang, Yi

    2018-07-01

    How to effectively retrieve desired 3D models with simple queries is a long-standing problem in the computer vision community. The model-based approach is quite straightforward but nontrivial, since people do not always have the desired 3D query model at hand. Recently, large numbers of wide-screen electronic devices have become prevalent in daily life, which makes sketch-based 3D shape retrieval a promising candidate owing to its simplicity and efficiency. The main challenge of the sketch-based approach is the huge modality gap between sketches and 3D shapes. In this paper, we propose a novel deep correlated holistic metric learning (DCHML) method to mitigate the discrepancy between the sketch and 3D shape domains. DCHML trains two distinct deep neural networks (one for each domain) jointly, learning two deep nonlinear transformations that map features from both domains into a new feature space. The proposed loss, comprising a discriminative loss and a correlation loss, aims to increase the discrimination of features within each domain as well as the correlation between the different domains. In the new feature space, the discriminative loss minimizes the intra-class distance of the transformed features and pushes the inter-class distance beyond a large margin within each domain, while the correlation loss focuses on mitigating the distribution discrepancy across the domains. Unlike existing deep metric learning methods with a loss only at the output layer, DCHML is trained with losses at both the hidden layer and the output layer, further improving performance by encouraging features in the hidden layer to also exhibit the desired properties. The proposed method is evaluated on three benchmarks, the 3D Shape Retrieval Contest 2013, 2014, and 2016 benchmarks, and the experimental results demonstrate its superiority over state-of-the-art methods.
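The two loss terms can be written down in a few lines of numpy. The sketch below is our paraphrase of the objective (a margin-based discriminative term per domain plus a moment-matching stand-in for the correlation term), operating on already-extracted features; it is not the authors' network or exact loss.

    import numpy as np

    def dchml_loss(f_sketch, f_shape, labels, margin=1.0, lam=0.5):
        """f_sketch, f_shape: (n, d) features from the two domain networks,
        one sketch/shape pair per row; labels: (n,) class ids."""
        same = labels[:, None] == labels[None, :]
        def disc(F):
            # pull same-class pairs together, push different-class pairs past the margin
            D = np.linalg.norm(F[:, None] - F[None, :], axis=-1)
            return (D ** 2)[same].mean() + (np.maximum(0, margin - D) ** 2)[~same].mean()
        # correlation term: first/second moment discrepancy across the domains
        corr = np.linalg.norm(f_sketch.mean(0) - f_shape.mean(0)) ** 2 \
             + np.linalg.norm(np.cov(f_sketch.T) - np.cov(f_shape.T)) ** 2
        return disc(f_sketch) + disc(f_shape) + lam * corr

    rng = np.random.default_rng(5)
    labels = np.repeat([0, 1, 2], 10)
    loss = dchml_loss(rng.normal(size=(30, 16)), rng.normal(size=(30, 16)), labels)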
417. A review of protocols for monitoring streams and juvenile fish in forested regions of the Pacific Northwest.

    Treesearch

    Scott A. Stolnack; Mason D. Bryant; Robert C. Wissmar

    2005-01-01

    This document reviews existing and proposed protocols used to monitor stream ecosystem conditions and responses to land management activities in the Pacific Northwest. Because of recent work aimed at improving the utility of habitat survey and fish abundance assessment methods, this review focuses on current (since 1993) monitoring efforts that assess stream habitat...

418. Life Story of Chinese College Students with Perfectionism Personality: A Qualitative Study Based on a Life Story Model

    ERIC Educational Resources Information Center

    Ma, Min; Zi, Fei

    2015-01-01

    This manuscript aims to explore and delineate the common characteristics of college students with perfectionism, to promote an in-depth understanding of the dynamic personality development of perfectionists from the perspective of the life story model proposed by McAdams (1985). The researchers adopted a narrative qualitative research method. The life stories…

419. Design and Development of Rubrics to Improve Assessment Outcomes: A Pilot Study in a Master's Level Business Program in India

    ERIC Educational Resources Information Center

    Reddy, Malini Y.

    2011-01-01

    Purpose: This paper seeks to discuss the characteristics that describe a rubric. It aims to propose a systematic method for developing curriculum-wide rubrics and to discuss their potential utility for program quality assessment. Design/methodology/approach: Implementation of rubrics is a recent phenomenon in higher education. Prior research and…

420. Review: A Systematic Review of Quality of Life Measures for People with Intellectual Disabilities and Challenging Behaviours

    ERIC Educational Resources Information Center

    Townsend-White, C.; Pham, A. N. T.; Vassos, M. V.

    2012-01-01

    Background: The quality of life (QOL) construct is proposed as a method to assess service outcomes for people utilising disability services. With this in mind, the aim of this study was to conduct a systematic review of available QOL measures for people with intellectual disability (ID) to pinpoint psychometrically sound measures that can be…
421. Purification through Emotions: The Role of Shame in Plato's "Sophist" 230B4-E5

    ERIC Educational Resources Information Center

    Candiotto, Laura

    2018-01-01

    This article proposes an analysis of Plato's "Sophist" (230b4-e5) that underlines the bond between the logical and the emotional components of the Socratic "elenchus", with the aim of depicting the social valence of this philosophical practice. The use of emotions characterizing the 'elenctic' method described by Plato is…

422. Statistical testing of association between menstruation and migraine.

    PubMed

    Barra, Mathias; Dahl, Fredrik A; Vetvik, Kjersti G

    2015-02-01

    To repair and refine a previously proposed method for statistical analysis of the association between migraine and menstruation. Menstrually related migraine (MRM) affects about 20% of female migraineurs in the general population. The exact pathophysiological link from menstruation to migraine is hypothesized to run through fluctuations in female reproductive hormones, but the exact mechanisms remain unknown. Therefore, the main diagnostic criterion today is concurrence of migraine attacks with menstruation. Methods aiming to exclude spurious associations are wanted, so that further research into these mechanisms can be performed on a population with a true association. The statistical method is based on a simple two-parameter null model of MRM (which allows for simulation modeling) and Fisher's exact test (with mid-p correction) applied to standard 2 × 2 contingency tables derived from the patients' headache diaries. Our method is a corrected version of a previously published, flawed framework. To the best of our knowledge, no other published methods for establishing a menstruation-migraine association by statistical means exist today. The probabilistic methodology shows good performance when subjected to receiver operating characteristic curve analysis. Quick-reference cutoff values for the clinical setting were tabulated for assessing association given a patient's headache history. In this paper, we correct a proposed method for establishing an association between menstruation and migraine by statistical means. We conclude that the proposed standard of 3-cycle observations prior to setting an MRM diagnosis should be extended by at least one perimenstrual window to obtain sufficient information for statistical processing. © 2014 American Headache Society.
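The core test is small enough to sketch. Below, a one-sided Fisher exact test with mid-p correction is computed from the hypergeometric distribution for a 2 × 2 diary table (perimenstrual/other days versus migraine/no-migraine days); the counts are invented for illustration.

    import numpy as np
    from scipy.stats import hypergeom

    def fisher_midp(table):
        """table = [[a, b], [c, d]]; one-sided (greater) mid-p for association,
        with a = migraine days inside the perimenstrual window."""
        (a, b), (c, d) = table
        M, n, N = a + b + c + d, a + b, a + c    # population, successes, draws
        rv = hypergeom(M, n, N)
        # mid-p: P(more extreme than observed) plus half the observed table's probability
        return rv.sf(a) + 0.5 * rv.pmf(a)

    midp = fisher_midp([[12, 3], [20, 55]])   # hypothetical diary counts
    print(f"one-sided mid-p = {midp:.4f}")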
423. Color calibration of an RGB camera mounted in front of a microscope with strong color distortion.

    PubMed

    Charrière, Renée; Hébert, Mathieu; Trémeau, Alain; Destouches, Nathalie

    2013-07-20

    This paper aims to show that color calibration of an RGB camera can be achieved even when the optical system in front of the camera introduces strong color distortion. In the present case, the optical system is a microscope containing a halogen lamp, with nonuniform irradiance over the viewed surface. The calibration method proposed in this work is based on an existing method, but it is preceded by a three-step preprocessing of the RGB images aimed at extracting relevant color information from the strongly distorted images, taking into account especially the nonuniform irradiance map and the perturbing texture due to the surface topology of standard color calibration charts when observed at micrometric scale. The proposed color calibration process consists first of computing the average color of the color-chart patches viewed under the microscope; then computing white balance, gamma correction, and saturation enhancement; and finally applying a third-order polynomial regression color calibration transform. Despite the unusual conditions for color calibration, fairly good performance is achieved with a 48-patch Lambertian color chart: an average CIE-94 color difference on the color-chart colors of less than 2.5 units is obtained.
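The final regression step can be sketched as follows, assuming the patch colors have already been averaged and preprocessed. The random arrays stand in for the measured and reference patch values, and the feature construction (all monomials of R, G, B up to total degree 3, twenty terms in all) is the usual choice for a third-order polynomial transform.

    import numpy as np
    from itertools import combinations_with_replacement

    def poly_features(rgb):
        """All monomials of (R, G, B) up to total degree 3, including the constant."""
        cols = [np.ones(len(rgb))]
        for deg in (1, 2, 3):
            for idx in combinations_with_replacement(range(3), deg):
                cols.append(np.prod(rgb[:, idx], axis=1))
        return np.column_stack(cols)

    # stand-ins: (48, 3) average patch colors after white balance / gamma,
    # and (48, 3) known reference chart values
    rng = np.random.default_rng(9)
    measured, reference = rng.random((48, 3)), rng.random((48, 3))

    A = poly_features(measured)
    coeff, *_ = np.linalg.lstsq(A, reference, rcond=None)   # (20, 3) transform
    calibrated = A @ coeff                                   # corrected patch colors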
424. Detection of Hypertension Retinopathy Using Deep Learning and Boltzmann Machines

    NASA Astrophysics Data System (ADS)

    Triwijoyo, B. K.; Pradipto, Y. D.

    2017-01-01

    Hypertensive retinopathy (HR) is a disturbance of the retina caused by high blood pressure, in which systemic changes of the arterial blood vessels of the retina occur. Many heart attacks occur in patients whose high blood pressure has gone undiagnosed. Symptoms of hypertensive retinopathy include arteriolar narrowing, retinal haemorrhage, and cotton wool spots. For these reasons, early diagnosis of the symptoms of hypertensive retinopathy is urgent for more accurate prevention and treatment. This research aims to develop a system for early detection of the hypertensive retinopathy stage. The proposed method determines combined features, the artery-to-vein diameter ratio (AVR) as well as changes of position relative to the optic disc (OD) in retinal images, for the classification of hypertensive retinopathy using Deep Neural Networks (DNN) and a Boltzmann machine approach. We chose this approach because, based on previous research, DNN models are more accurate in image pattern recognition, whereas the Boltzmann machine was selected because it allows speedy iteration in the neural network learning process. The expected results of this research are a designed prototype system for early detection of the hypertensive retinopathy stage and an analysis of the effectiveness and accuracy of the proposed methods.
425. Using pilot data to size a two-arm randomized trial to find a nearly optimal personalized treatment strategy.

    PubMed

    Laber, Eric B; Zhao, Ying-Qi; Regh, Todd; Davidian, Marie; Tsiatis, Anastasios; Stanford, Joseph B; Zeng, Donglin; Song, Rui; Kosorok, Michael R

    2016-04-15

    A personalized treatment strategy formalizes evidence-based treatment selection by mapping patient information to a recommended treatment. Personalized treatment strategies can produce better patient outcomes while reducing cost and treatment burden. Thus, among clinical and intervention scientists, there is growing interest in conducting randomized clinical trials in which one of the primary aims is estimation of a personalized treatment strategy. However, at present, there are no appropriate sample size formulae to assist in the design of such a trial. Furthermore, because the sampling distribution of the estimated outcome under an estimated optimal treatment strategy can be highly sensitive to small perturbations in the underlying generative model, sample size calculations based on standard (uncorrected) asymptotic approximations or computer simulations may not be reliable. We offer a simple and robust method for powering a single-stage, two-armed randomized clinical trial when the primary aim is estimating the optimal single-stage personalized treatment strategy. The proposed method is based on inverting a plug-in projection confidence interval and is thereby regular and robust to small perturbations of the underlying generative model. It requires elicitation of two clinically meaningful parameters from clinical scientists and uses data from a small pilot study to estimate nuisance parameters, which are not easily elicited. The method performs well in simulated experiments and is illustrated using data from a pilot study of time to conception and fertility awareness. Copyright © 2015 John Wiley & Sons, Ltd.

426. An ensemble heterogeneous classification methodology for discovering health-related knowledge in social media messages.

    PubMed

    Tuarob, Suppawong; Tucker, Conrad S; Salathe, Marcel; Ram, Nilam

    2014-06-01

    The role of social media as a source of timely and massive information has become more apparent since the era of Web 2.0. Multiple studies have illustrated the use of information in social media to discover biomedical and health-related knowledge. Most methods proposed in the literature employ traditional document classification techniques that represent a document as a bag of words. These techniques work well when documents are rich in text and conform to standard English; however, they are not optimal for social media data, where sparsity and noise are the norm. This paper aims to address the limitations posed by traditional bag-of-words based methods and proposes the use of heterogeneous features in combination with ensemble machine learning techniques to discover health-related information, which could prove useful to multiple biomedical applications, especially those needing to discover health-related knowledge in large-scale social media data. Furthermore, the proposed methodology could be generalized to discover different types of information in various kinds of textual data. Social media data are characterized by an abundance of short, social-oriented messages that do not conform to standard languages, either grammatically or syntactically. The problem of discovering health-related knowledge in social media data streams is then transformed into a text classification problem, where a text is identified as positive if it is health-related and negative otherwise. We first identify the limitations of the traditional methods that train machines with N-gram word features, then propose to overcome these limitations by utilizing a collaboration of machine learning based classifiers, each of which is trained to learn a semantically different aspect of the data. The parameter analysis for tuning each classifier is also reported. Three data sets are used in this research. The first comprises approximately 5000 hand-labeled tweets, and is used for cross-validation of the classification models in the small-scale experiment and for training the classifiers in the real-world large-scale experiment. The second is a random sample of real-world Twitter data in the US. The third is a random sample of real-world Facebook Timeline posts. Two sets of evaluations are conducted to investigate the proposed model's ability to discover health-related information in the social media domain: small-scale and large-scale evaluations. The small-scale evaluation employs 10-fold cross-validation on the labeled data, and aims to tune the parameters of the proposed models and to compare with the state-of-the-art method. The large-scale evaluation tests the trained classification models on the native, real-world data sets, and is needed to verify the ability of the proposed model to handle the massive heterogeneity of real-world social media. The small-scale experiment reveals that the proposed method is able to mitigate the limitations of the well-established techniques in the literature, resulting in a performance improvement of 18.61% (F-measure). The large-scale experiment further reveals that the baseline fails to perform well on larger data with higher degrees of heterogeneity, while the proposed method yields reasonably good performance and outperforms the baseline by 46.62% (F-measure) on average. Copyright © 2014 Elsevier Inc. All rights reserved.
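A minimal scikit-learn rendition of the ensemble idea: two classifiers trained on different feature views of the same short texts, combined by soft voting. The feature views, models, and toy data are our own stand-ins, not the paper's exact configuration.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline
    from sklearn.ensemble import VotingClassifier

    texts = ["got my flu shot today", "new phone who dis",
             "my asthma is acting up", "great game last night"]
    labels = [1, 0, 1, 0]    # 1 = health-related, toy data

    # view 1: word n-grams; view 2: character n-grams (robust to misspellings)
    word_view = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    char_view = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
                              MultinomialNB())

    ensemble = VotingClassifier([("word", word_view), ("char", char_view)], voting="soft")
    ensemble.fit(texts, labels)
    print(ensemble.predict(["hay fever season again"]))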
427. Review of devices used in neuromuscular electrical stimulation for stroke rehabilitation.

    PubMed

    Takeda, Kotaro; Tanino, Genichi; Miyasaka, Hiroyuki

    2017-01-01

    Neuromuscular electrical stimulation (NMES), specifically functional electrical stimulation (FES), which compensates for voluntary motion, and therapeutic electrical stimulation (TES), aimed at muscle strengthening and recovery from paralysis, are widely used in stroke rehabilitation. The electrical stimulation of muscle contraction should be synchronized with the intended motion to restore paralysis. Therefore, NMES devices that monitor electromyogram (EMG) or electroencephalogram (EEG) changes with motor intention and use them as a trigger have been developed. Devices that modify the current intensity of NMES based on EMG or EEG have also been proposed. Given the diversity in devices and stimulation methods of NMES, the aim of the current review was to introduce some commercial FES and TES devices and application methods, which depend on the condition of the patient with stroke, including the degree of paralysis.

428. Comparative Study of Novel Ratio Spectra and Isoabsorptive Point Based Spectrophotometric Methods: Application on a Binary Mixture of Ascorbic Acid and Rutin.

    PubMed

    Darwish, Hany W; Bakheit, Ahmed H; Naguib, Ibrahim A

    2016-01-01

    This paper presents novel methods for spectrophotometric determination of ascorbic acid (AA) in the presence of rutin (RU) (a coformulated drug) in their combined pharmaceutical formulation. The seven methods are ratio difference (RD), isoabsorptive_RD (Iso_RD), amplitude summation (A_Sum), isoabsorptive point, first derivative of the ratio spectra (1DD), mean centering (MCN), and ratio subtraction (RS). On the other hand, RU was determined directly by measuring the absorbance at 358 nm, in addition to the two novel Iso_RD and A_Sum methods. The work introduced in this paper aims to compare these different methods, showing the advantages of each and comparing analysis results. The calibration curve is linear over the concentration range of 4-50 μg/mL for AA and RU. The results show the high performance of the proposed methods for the analysis of the binary mixture. The optimum assay conditions were established, and the proposed methods were successfully applied for the assay of the two drugs in laboratory-prepared mixtures and combined pharmaceutical tablets with excellent recoveries. No interference was observed from common pharmaceutical additives.
429. A fast button surface defects detection method based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Liu, Lizhe; Cao, Danhua; Wu, Songlin; Wu, Yubin; Wei, Taoran

    2018-01-01

    Considering the complexity of button surface textures and the variety of buttons and defects, we propose a fast visual method for button surface defect detection based on a convolutional neural network (CNN). A CNN has the ability to extract essential features by training, avoiding the design of complex feature operators adapted to different kinds of buttons, textures, and defects. First, we obtain the normalized button region and then use a HOG-SVM method to identify the front and back sides of the button. Finally, a convolutional neural network is developed to recognize the defects. Aiming at detecting subtle defects, we propose a network structure with multiple feature channels as input. To deal with defects of different scales, we adopt a strategy of multi-scale image block detection. The experimental results show that our method is valid for a variety of buttons and able to recognize all kinds of occurring defects, including dents, cracks, stains, holes, wrong paint, and unevenness. The detection rate exceeds 96%, which is much better than traditional methods based on SVM and on template matching. Our method reaches a speed of 5 fps on a DSP-based smart camera with a 600 MHz frequency.
430. Point Cloud Based Relative Pose Estimation of a Satellite in Close Range

    PubMed Central

    Liu, Lujiang; Zhao, Gaopeng; Bo, Yuming

    2016-01-01

    Determination of the relative pose of satellites is essential in space rendezvous operations and on-orbit servicing missions. The key problems are the adoption of a suitable sensor on board the chaser and efficient techniques for pose estimation. This paper aims to estimate the pose of a target satellite in close range on the basis of its known model, using point cloud data generated by a flash LIDAR sensor. A novel model-based pose estimation method is proposed; it includes a fast and reliable initial pose acquisition method based on global optimal searching, which processes the dense point cloud data directly, and a pose tracking method based on the Iterative Closest Point (ICP) algorithm. A simulation system is also presented in order to evaluate the performance of the sensor and generate simulated sensor point cloud data; it further provides the true pose of the test target so that the pose estimation error can be quantified. To investigate the effectiveness of the proposed approach and the achievable pose accuracy, numerical simulation experiments were performed; the results demonstrate the algorithm's capability to operate on point clouds directly and under large pose variations. A field testing experiment was also conducted, and the results show that the proposed method is effective.
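The tracking half of such a pipeline is typically a point-to-point ICP loop; a bare-bones SVD-based version is sketched below (brute-force nearest neighbors, no outlier rejection), as a generic implementation rather than the authors' system.

    import numpy as np

    def icp(src, dst, iters=50):
        """Estimate R, t aligning src to dst (both (n, 3) row-point clouds)."""
        R, t = np.eye(3), np.zeros(3)
        for _ in range(iters):
            P = src @ R.T + t
            nn = ((P[:, None] - dst[None]) ** 2).sum(-1).argmin(1)   # closest points
            Q = dst[nn]
            mp, mq = P.mean(0), Q.mean(0)
            U, _, Vt = np.linalg.svd((P - mp).T @ (Q - mq))          # Kabsch step
            dR = Vt.T @ U.T
            if np.linalg.det(dR) < 0:                                # avoid reflections
                Vt[-1] *= -1; dR = Vt.T @ U.T
            R, t = dR @ R, dR @ (t - mp) + mq                        # compose updates
        return R, t

    # usage: recover a known rotation/translation from synthetic clouds
    rng = np.random.default_rng(6)
    dst = rng.normal(size=(200, 3))
    ang = 0.3
    R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                       [np.sin(ang),  np.cos(ang), 0], [0, 0, 1]])
    src = (dst - [0.2, -0.1, 0.4]) @ R_true   # dst rotated/shifted back into src frame
    R_est, t_est = icp(src, dst)              # R_est ~ R_true, t_est ~ (0.2, -0.1, 0.4)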
431. System identification of velocity mechanomyogram measured with a capacitor microphone for muscle stiffness estimation.

    PubMed

    Uchiyama, Takanori; Tomoshige, Taiki

    2017-04-01

    A mechanomyogram (MMG) measured with a displacement sensor (displacement MMG) can provide a better estimation of longitudinal muscle stiffness than one measured with an acceleration sensor (acceleration MMG), but the displacement MMG cannot provide transverse muscle stiffness. We propose a method to estimate both longitudinal and transverse muscle stiffness from a velocity MMG using a system identification technique. The aim of this study is to show the advantages of the proposed method. The velocity MMG was measured using a capacitor microphone and a differentiating circuit, and the MMG of the tibialis anterior muscle, evoked by electrical stimulation, was measured five times in seven healthy young male volunteers. The evoked MMG system was identified using the singular value decomposition method and approximated with a fourth-order model, which provides two undamped natural frequencies corresponding to the longitudinal and transverse muscle stiffness. The fluctuation of the undamped natural frequencies estimated from the velocity MMG was significantly smaller than that from the acceleration MMG. There was no significant difference between the fluctuation of the undamped natural frequencies estimated from the velocity MMG and that from the displacement MMG. The proposed method using the velocity MMG is thus more advantageous for muscle stiffness estimation. Copyright © 2017 Elsevier Ltd. All rights reserved.

432. Missing value imputation for gene expression data by tailored nearest neighbors.

    PubMed

    Faisal, Shahla; Tutz, Gerhard

    2017-04-25

    High-dimensional data like gene expression and RNA-sequences often contain missing values. The subsequent analysis and the results based on these incomplete data can suffer strongly from the presence of these missing values. Several approaches to imputation of missing values in gene expression data have been developed, but the task is difficult due to the high dimensionality (number of genes) of the data. Here an imputation procedure is proposed that uses weighted nearest neighbors. Instead of using nearest neighbors defined by a distance that includes all genes, the distance is computed for genes that are apt to contribute to the accuracy of imputed values. The method aims at avoiding the curse of dimensionality, which typically occurs if local methods such as nearest neighbors are applied in high-dimensional settings. The proposed weighted nearest neighbors algorithm is compared to existing missing value imputation techniques like mean imputation, KNNimpute, and the recently proposed imputation by random forests. We use RNA-sequence and microarray data from studies on human cancer to compare the performance of the methods. The results from simulations as well as real studies show that the weighted distance procedure can successfully handle missing values for high-dimensional data structures where the number of predictors is larger than the number of samples. The method typically outperforms the considered competitors.
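The weighting idea can be illustrated with a toy variant: distances are computed only over genes correlated with the gene being imputed, and neighbor values are averaged with inverse-distance weights. The selection rule and constants below are simplifications of the paper's procedure, not its exact algorithm.

    import numpy as np

    def wknn_impute(X, k=10):
        """X: (samples, genes) with np.nan for missing entries (toy variant)."""
        Xf = np.where(np.isnan(X), np.nanmean(X, 0), X)       # rough initial fill
        out = X.copy()
        for i, j in zip(*np.where(np.isnan(X))):
            # restrict the distance to genes most correlated with gene j
            corr = np.abs(np.corrcoef(Xf, rowvar=False)[j])
            genes = np.argsort(-corr)[1:21]                   # 20 informative genes
            d = np.sqrt(((Xf[:, genes] - Xf[i, genes]) ** 2).sum(1))
            d[i] = np.inf                                     # exclude the sample itself
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] + 1e-9)                          # distance-decaying weights
            out[i, j] = (w * Xf[nn, j]).sum() / w.sum()
        return out

    rng = np.random.default_rng(4)
    X = rng.normal(size=(30, 50))
    X[rng.random(X.shape) < 0.05] = np.nan                    # punch 5% holes
    X_filled = wknn_impute(X)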
433. Individual and population pharmacokinetic compartment analysis: a graphic procedure for quantification of predictive performance.

    PubMed

    Eksborg, Staffan

    2013-01-01

    Pharmacokinetic studies are important for optimizing drug dosing, but they require proper validation of the pharmacokinetic procedures used. However, simple and reliable statistical methods suitable for evaluating the predictive performance of pharmacokinetic analysis are essentially lacking. The aim of the present study was to construct and evaluate a graphic procedure for quantification of the predictive performance of individual and population pharmacokinetic compartment analysis. Original data from previously published pharmacokinetic compartment analyses after intravenous, oral, and epidural administration, and digitized data obtained from published scatter plots of observed vs predicted drug concentrations from population pharmacokinetic studies using the NPEM algorithm, the NONMEM computer program, and Bayesian forecasting procedures, were used to estimate the predictive performance according to the proposed graphical method and the method of Sheiner and Beal. The graphical plot proposed in the present paper proved to be a useful tool for evaluating the predictive performance of both individual and population compartment pharmacokinetic analysis. The proposed method is simple to use and gives valuable information concerning time- and concentration-dependent inaccuracies that might occur in individual and population pharmacokinetic compartment analysis. Predictive performance can be quantified by the fraction of concentration ratios within arbitrarily specified ranges, e.g. within the range 0.8-1.2.

434. Study on a Simple Method for Controlling the Engine Output Power of Hybrid Powered Railway Vehicles with Electric Double Layer Capacitors

    NASA Astrophysics Data System (ADS)

    Okano, Shota; Shibuya, Hiroyuki; Kondo, Keiichiro

    This paper presents a simple and energy-saving method for controlling hybrid powered railway vehicles that run on rural non-electrified railway lines and have a diesel engine and electric double layer capacitors (EDLCs). The aim of this study is to reduce both the fuel consumption and the capacitance of the EDLCs. The basic idea proposed in this paper is that the EDLCs supply and absorb the kinetic energy of the vehicle, while the engine output compensates for the energy loss as the vehicle runs. Thus, the energy loss is not taken into consideration when expressing the EDLC voltage reference (equation 1); energy loss is considered when the engine is in operating mode. The proposed method is examined by performing numerical simulations for various values of engine operation time, load, and grade section. The results of this study reveal the relationship between the capacitance of the EDLCs and the fuel consumption. Using the proposed control method, excessive charging of the EDLCs can be avoided. The results of this study are expected to expedite the development of energy-saving railway vehicles for non-electrified lines. Finally, the results of this study increase the possibility of developing hybrid powered railway vehicles.
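The abstract's equation (1) is not reproduced above, so the sketch below encodes the stated idea under our own assumption: the EDLC voltage reference is chosen so that the capacitor bank absorbs exactly the vehicle's kinetic energy, 0.5*C*(Vmax^2 - Vref^2) = 0.5*m*v^2. All constants are illustrative.

    import numpy as np

    C, V_MAX = 40.0, 600.0     # farads, volts -- illustrative bank parameters
    MASS = 40_000.0            # kg, illustrative vehicle mass

    def edlc_voltage_ref(v):
        """Target EDLC voltage for vehicle speed v (m/s); fully charged at standstill."""
        v_sq = V_MAX ** 2 - MASS * v ** 2 / C
        if v_sq < 0:
            raise ValueError("EDLC bank too small to hold the kinetic energy")
        return np.sqrt(v_sq)

    for kmh in (0, 30, 60):
        print(kmh, "km/h ->", round(edlc_voltage_ref(kmh / 3.6), 1), "V")

The reference falls as speed rises, which is exactly the stated behavior: the capacitors empty into traction during acceleration and refill from braking, with the engine covering only losses.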
437. Meeting the security requirements of electronic medical records in the ERA of high-speed computing.

    PubMed

    Alanazi, H O; Zaidan, A A; Zaidan, B B; Kiah, M L Mat; Al-Bakri, S H

    2015-01-01

    This study has two objectives. First, it aims to develop a system with a highly secured approach to transmitting electronic medical records (EMRs), and second, it aims to identify entities that transmit private patient information without permission. The NTRU and Advanced Encryption Standard (AES) cryptosystems are secure encryption methods. AES is a tested technology that has already been utilized in several systems to secure sensitive data. The United States government has been using AES since June 2003 to protect sensitive and essential information. Meanwhile, NTRU protects sensitive data against attacks by quantum computers, which can break the RSA cryptosystem and elliptic curve cryptography algorithms. A hybrid of AES and NTRU is developed in this work to improve EMR security. The proposed hybrid cryptography technique is implemented to secure the data transmission process of EMRs. The proposed security solution can provide protection for over 40 years and is resistant to quantum computers. Moreover, the technique provides the necessary evidence required by law to identify disclosure or misuse of patient records. The proposed solution can effectively secure EMR transmission and protect patient rights. It also identifies the source responsible for disclosing confidential patient records. The proposed hybrid technique for securing data managed by institutional websites must be improved in the future.
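Of the two cryptosystems named above, AES is the one with a widely available implementation. A minimal sketch of the AES-GCM half of such a hybrid follows; the NTRU key-protection step is omitted, and the function and field names are illustrative only, not the paper's design.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_emr(record_bytes: bytes, patient_id: str):
    """Encrypt one EMR payload with AES-256-GCM.

    In a hybrid scheme like the one described above, the AES key itself
    would additionally be protected with NTRU before transmission; that
    step is omitted here."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)              # 96-bit nonce; never reuse per key
    aad = patient_id.encode()           # authenticated but not encrypted
    ciphertext = AESGCM(key).encrypt(nonce, record_bytes, aad)
    return key, nonce, ciphertext

key, nonce, ct = encrypt_emr(b"blood pressure: 120/80", patient_id="P-0042")
plain = AESGCM(key).decrypt(nonce, ct, b"P-0042")
assert plain == b"blood pressure: 120/80"
```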
438. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-04-01

    The electromagnetically guided endoscopic procedure, which aims at accurately and robustly localizing the endoscope, involves multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to effectively fuse the multimodal information, including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since evolutionary computation methods are limited by possible premature convergence and fixed evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization, and adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, significantly better than that of the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors, from (4.3 mm, 7.8°) to (3.0 mm, 5.6°), compared to state-of-the-art methods.

439. The integrative review: updated methodology.

    PubMed

    Whittemore, Robin; Knafl, Kathleen

    2005-12-01

    The aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process. Recent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated. A modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review. An updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.
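For readers unfamiliar with the underlying optimizer, a plain global-best particle swarm minimizer is sketched below. It deliberately omits the paper's enhancements (observation-boosted particles and adaptively updated evolutionary parameters); the inertia and acceleration constants are conventional defaults, not values from the paper.

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Plain global-best PSO over a box-constrained search space."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # global best
    w, c1, c2 = 0.7, 1.5, 1.5                        # inertia, cognitive, social
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best, val = pso_minimize(lambda p: np.sum(p ** 2), dim=6)   # toy objective
```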
440. Vehicle Detection in Aerial Images Based on Region Convolutional Neural Networks and Hard Negative Example Mining.

    PubMed

    Tang, Tianyu; Zhou, Shilin; Deng, Zhipeng; Zou, Huanxin; Lei, Lin

    2017-02-10

    Detecting vehicles in aerial imagery plays an important role in a wide range of applications. Current vehicle detection methods are mostly based on sliding-window search and handcrafted or shallow-learning-based features, which have limited description capability and heavy computational costs. Recently, due to their powerful feature representations, region convolutional neural network (CNN) based detection methods have achieved state-of-the-art performance in computer vision, especially Faster R-CNN. However, directly using it for vehicle detection in aerial images has many limitations: (1) the region proposal network (RPN) in Faster R-CNN performs poorly at accurately locating small-sized vehicles, due to the relatively coarse feature maps; and (2) the classifier after the RPN cannot distinguish vehicles from complex backgrounds well. In this study, an improved detection method based on Faster R-CNN is proposed to address the two challenges mentioned above. Firstly, to improve recall, we employ a hyper region proposal network (HRPN) to extract vehicle-like targets with a combination of hierarchical feature maps. Then, we replace the classifier after the RPN with a cascade of boosted classifiers to verify the candidate regions, aiming to reduce false detections by negative example mining. We evaluate our method on the Munich vehicle dataset and a collected vehicle dataset, with improvements in accuracy and robustness compared to existing methods.
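The negative-example-mining loop mentioned in this abstract can be illustrated independently of the detection network. The sketch below uses a linear SVM as a stand-in for the cascade of boosted classifiers; feature dimensions, sample counts, and round counts are arbitrary.

```python
import numpy as np
from sklearn.svm import LinearSVC

def mine_hard_negatives(X_pos, X_neg_pool, rounds=3, n_start=500, n_add=200, seed=0):
    """Iteratively grow the negative training set with the pool negatives the
    current classifier scores highest, i.e., its worst false alarms."""
    rng = np.random.default_rng(seed)
    neg_idx = rng.choice(len(X_neg_pool), n_start, replace=False)
    clf = None
    for _ in range(rounds):
        X = np.vstack([X_pos, X_neg_pool[neg_idx]])
        y = np.r_[np.ones(len(X_pos)), np.zeros(len(neg_idx))]
        clf = LinearSVC(C=1.0).fit(X, y)
        scores = clf.decision_function(X_neg_pool)   # high score = vehicle-like
        scores[neg_idx] = -np.inf                    # skip negatives already used
        hard = np.argsort(scores)[-n_add:]           # hardest remaining negatives
        neg_idx = np.union1d(neg_idx, hard)
    return clf

# toy data: 64-dim features for "vehicle" and "background" windows
rng = np.random.default_rng(1)
clf = mine_hard_negatives(rng.normal(1, 1, (300, 64)), rng.normal(0, 1, (5000, 64)))
```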
441. Object Manifold Alignment for Multi-Temporal High Resolution Remote Sensing Images Classification

    NASA Astrophysics Data System (ADS)

    Gao, G.; Zhang, M.; Gu, Y.

    2017-05-01

    Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of image information. With spatial resolution improvement, the "pepper and salt" effect appears, and classification results are affected when pixelwise classification algorithms are applied to high-resolution satellite images, in which the spatial relationship among pixels is ignored. To classify multi-temporal high resolution images with limited labelled samples, spectral drift, and the "pepper and salt" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are cut into superpixels by simple linear iterative clustering (SLIC). Secondly, features obtained from the superpixels are formed into vectors. Thirdly, a majority voting manifold alignment method aimed at solving the high resolution problem is proposed to map the vector data to the alignment space. Finally, all the data in the alignment space are classified using the KNN method. Multi-temporal images from different areas or the same area are both considered in this paper. In the experiments, 2 groups of multi-temporal HR images collected by the China GF1 and GF2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively overcomes the "pepper and salt" problem.
442. LFNet: A Novel Bidirectional Recurrent Convolutional Neural Network for Light-Field Image Super-Resolution.

    PubMed

    Wang, Yunlong; Liu, Fei; Zhang, Kunbo; Hou, Guangqi; Sun, Zhenan; Tan, Tieniu

    2018-09-01

    The low spatial resolution of light-field images poses significant difficulties in exploiting their advantages. To mitigate the dependency on accurate depth or disparity information as priors for light-field image super-resolution, we propose an implicitly multi-scale fusion scheme to accumulate contextual information from multiple scales for super-resolution reconstruction. The implicitly multi-scale fusion scheme is then incorporated into a bidirectional recurrent convolutional neural network, which aims to iteratively model spatial relations between horizontally or vertically adjacent sub-aperture images of light-field data. Within the network, the recurrent convolutions are modified to be more effective and flexible in modeling the spatial correlations between neighboring views. A horizontal sub-network and a vertical sub-network of the same network structure are ensembled for final outputs via stacked generalization. Experimental results on synthetic and real-world data sets demonstrate that the proposed method outperforms other state-of-the-art methods by a large margin in peak signal-to-noise ratio and gray-scale structural similarity indexes, and also achieves superior quality for the human visual system. Furthermore, the proposed method can enhance the performance of light field applications such as depth estimation.
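PSNR, the headline metric in this abstract, is simple to compute; the short sketch below assumes 8-bit image data (peak value 255).

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(float) - estimate.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```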
443. Data pieces-based parameter identification for lithium-ion battery

    NASA Astrophysics Data System (ADS)

    Gao, Wei; Zou, Yuan; Sun, Fengchun; Hu, Xiaosong; Yu, Yang; Feng, Sen

    2016-10-01

    Because battery characteristics vary with temperature and aging, it is necessary to identify battery parameters periodically for electric vehicles to ensure reliable state-of-charge (SoC) estimation, battery equalization and safe operation. Aiming at on-board applications, this paper proposes a data pieces-based parameter identification (DPPI) method to identify comprehensive battery parameters, including capacity, the OCV (open circuit voltage)-Ah relationship and the impedance-Ah relationship, simultaneously, based only on battery operation data. First, a vehicle field test was conducted and battery operation data were recorded; then the DPPI method is elaborated based on the vehicle test data, and the parameters of all 97 cells of the battery package are identified and compared. To evaluate the adaptability of the proposed DPPI method, it is used to identify battery parameters at different aging levels and different temperatures based on battery aging experiment data. Then the concept of an "OCV-Ah aging database" is proposed, based on which battery capacity can be identified even though the battery was never fully charged or discharged. Finally, to further examine the effectiveness of the identified battery parameters, they are used to perform SoC estimation for the test vehicle with an adaptive extended Kalman filter (AEKF). The result shows good accuracy and reliability.

444. High-kVp Assisted Metal Artifact Reduction for X-ray Computed Tomography

    PubMed Central

    Xi, Yan; Jin, Yannan; De Man, Bruno; Wang, Ge

    2016-01-01

    In X-ray computed tomography (CT), the presence of metallic parts in patients causes serious artifacts and degrades image quality. Many algorithms have been published for metal artifact reduction (MAR) over the past decades, with various degrees of success but without a perfect solution. Some MAR algorithms are based on the assumption that metal artifacts are due only to strong beam hardening, and may fail in the case of serious photon starvation. Iterative methods handle photon starvation by discarding or underweighting corrupted data, but the results are not always stable and they come with high computational cost. In this paper, we propose a high-kVp-assisted CT scan mode combining a standard CT scan with a few projection views at a high kVp value to obtain critical projection information near the metal parts. This method only requires minor hardware modifications on a modern CT scanner. Two MAR algorithms are proposed: dual-energy normalized MAR (DNMAR) and high-energy embedded MAR (HEMAR), aimed at situations without and with photon starvation, respectively. Simulation results obtained with the CT simulator CatSim demonstrate that the proposed DNMAR and HEMAR methods can eliminate metal artifacts effectively. PMID:27891293
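The closing idea of the battery paper, identifying capacity without a full charge or discharge, can be illustrated with a toy OCV-SoC table. The curve values below are invented for illustration; an OCV-Ah aging database as described above would supply real, age-dependent curves.

```python
import numpy as np

# Assumed OCV-SoC table for one cell (volts vs. state of charge); real curves
# come from characterization tests and shift with aging.
SOC_GRID = np.linspace(0.0, 1.0, 11)
OCV_GRID = np.array([3.20, 3.45, 3.55, 3.60, 3.63, 3.66,
                     3.70, 3.78, 3.88, 4.00, 4.15])

def soc_from_ocv(ocv_volts):
    """Invert the (monotonic) OCV curve by linear interpolation."""
    return np.interp(ocv_volts, OCV_GRID, SOC_GRID)

def capacity_from_partial_cycle(ocv_rest1, ocv_rest2, ah_throughput):
    """Identify capacity from two rest points without a full charge/discharge:
    Ah counted between the rests divided by the SoC change they imply."""
    d_soc = soc_from_ocv(ocv_rest2) - soc_from_ocv(ocv_rest1)
    return ah_throughput / d_soc

print(capacity_from_partial_cycle(3.60, 3.88, ah_throughput=25.0))  # ~50 Ah
```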
445. Real-Time Safety Risk Assessment Based on a Real-Time Location System for Hydropower Construction Sites

    PubMed Central

    Fan, Qixiang; Qiang, Maoshan

    2014-01-01

    The concern for workers' safety in the construction industry is reflected in many studies focusing on static safety risk identification and assessment. However, studies on real-time safety risk assessment aimed at reducing uncertainty and supporting quick responses are rare. A method for real-time safety risk assessment (RTSRA) to implement a dynamic evaluation of worker safety states on construction sites is proposed in this paper. The method provides construction managers in charge of safety with richer information to reduce the uncertainty of the site. A quantitative calculation formula, integrating the influence of static and dynamic hazards and that of safety supervisors, is established to link the safety risk of workers with the locations of on-site assets. By employing the hidden Markov model (HMM), the RTSRA provides a mechanism for processing the location data provided by the real-time location system (RTLS) and analyzing the probability distributions of different states in terms of false positives and negatives. Simulation analysis demonstrated the logic of the proposed method and how it works. An application case shows that the proposed RTSRA is both feasible and effective in managing construction project safety concerns. PMID:25114958

446. Real-time safety risk assessment based on a real-time location system for hydropower construction sites.

    PubMed

    Jiang, Hanchen; Lin, Peng; Fan, Qixiang; Qiang, Maoshan

    2014-01-01

    The concern for workers' safety in the construction industry is reflected in many studies focusing on static safety risk identification and assessment. However, studies on real-time safety risk assessment aimed at reducing uncertainty and supporting quick responses are rare. A method for real-time safety risk assessment (RTSRA) to implement a dynamic evaluation of worker safety states on construction sites is proposed in this paper. The method provides construction managers in charge of safety with richer information to reduce the uncertainty of the site. A quantitative calculation formula, integrating the influence of static and dynamic hazards and that of safety supervisors, is established to link the safety risk of workers with the locations of on-site assets. By employing the hidden Markov model (HMM), the RTSRA provides a mechanism for processing the location data provided by the real-time location system (RTLS) and analyzing the probability distributions of different states in terms of false positives and negatives. Simulation analysis demonstrated the logic of the proposed method and how it works. An application case shows that the proposed RTSRA is both feasible and effective in managing construction project safety concerns.
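How an HMM turns noisy RTLS zone readings into state probabilities can be sketched with the standard forward algorithm. The two-state model and all probabilities below are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical two-state worker model: 0 = safe zone, 1 = hazardous zone.
A = np.array([[0.95, 0.05],      # state transition probabilities
              [0.20, 0.80]])
B = np.array([[0.90, 0.10],      # P(observed zone | true zone): RTLS noise
              [0.15, 0.85]])
pi = np.array([0.99, 0.01])      # initial state distribution

def forward_filter(obs):
    """HMM forward algorithm: P(state_t | obs_1..t), normalized each step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    out = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
        out.append(alpha)
    return np.array(out)

# observed zone readings from the RTLS (1 = reported inside a hazard area)
print(forward_filter([0, 0, 1, 1, 1, 0])[:, 1])   # P(truly in hazard) over time
```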
447. Cellular automata rule characterization and classification using texture descriptors

    NASA Astrophysics Data System (ADS)

    Machicao, Jeaneth; Ribas, Lucas C.; Scabini, Leonardo F. S.; Bruno, Odermir M.

    2018-05-01

    Cellular automata (CA) spatio-temporal patterns have attracted the attention of many researchers, since they can exhibit emergent behavior resulting from the dynamics of each individual cell. In this manuscript, we propose a texture image analysis approach to characterize and classify CA rules. The proposed method converts the CA spatio-temporal pattern into a gray-scale image. The gray scale is obtained by creating a binary number based on the 8-connected neighborhood of each dot of the CA spatio-temporal pattern. We demonstrate that this technique enhances the CA rule characterization and allows the use of different texture image analysis algorithms. Thus, various texture descriptors were evaluated in a supervised training approach aiming to characterize the CA's global evolution. Our results show the efficiency of the proposed method for the classification of elementary CA (ECAs), reaching a maximum accuracy of 99.57% according to the Li-Packard scheme (6 classes) and 94.36% for the classification of the 88-rule scheme. Moreover, within the image analysis context, we found better performance of the method by means of a transformation of the binary states to gray scale.
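The binary-to-gray-scale conversion described above is easy to reproduce. The bit ordering of the 8-connected neighbourhood below is an assumption; any fixed ordering yields an equivalent 0-255 encoding.

```python
import numpy as np

def ca_to_grayscale(pattern):
    """Map a binary CA spatio-temporal pattern (time x cells, values 0/1) to a
    gray-scale image: each interior cell becomes the byte formed by its eight
    neighbors (here taken clockwise from the top-left)."""
    p = np.asarray(pattern, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    gray = np.zeros((p.shape[0] - 2, p.shape[1] - 2), dtype=np.uint8)
    for bit, (dr, dc) in enumerate(offsets):
        window = p[1 + dr : p.shape[0] - 1 + dr, 1 + dc : p.shape[1] - 1 + dc]
        gray |= window << (7 - bit)       # pack the neighbor bit into the byte
    return gray

rng = np.random.default_rng(0)
img = ca_to_grayscale(rng.integers(0, 2, (100, 100)))   # values in 0..255
```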
448. Multi-Atlas Segmentation using Partially Annotated Data: Methods and Annotation Strategies.

    PubMed

    Koch, Lisa M; Rajchl, Martin; Bai, Wenjia; Baumgartner, Christian F; Tong, Tong; Passerat-Palmbach, Jonathan; Aljabar, Paul; Rueckert, Daniel

    2017-08-22

    Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.

449. Fast and accurate computation of system matrix for area integral model-based algebraic reconstruction technique

    NASA Astrophysics Data System (ADS)

    Zhang, Shunli; Zhang, Dinghua; Gong, Hao; Ghasemalizadeh, Omid; Wang, Ge; Cao, Guohua

    2014-11-01

    Iterative algorithms, such as the algebraic reconstruction technique (ART), are popular for image reconstruction. For iterative reconstruction, the area integral model (AIM) is more accurate and yields better reconstruction quality than the line integral model (LIM). However, the computation of the system matrix for AIM is more complex and time-consuming than that for LIM. Here, we propose a fast and accurate method to compute the system matrix for AIM. First, we calculate the intersection of each boundary line of a narrow fan-beam with the pixels in a recursive and efficient manner. Then, by grouping the beam-pixel intersection areas into six types according to the slopes of the two boundary lines, we analytically compute the intersection area of the narrow fan-beam with the pixels in a simple algebraic fashion. Overall, experimental results show that our method is about three times faster than the Siddon algorithm and about two times faster than the distance-driven model (DDM) in computation of the system matrix. The reconstruction speed of our AIM-based ART is also faster than that of LIM-based ART using the Siddon algorithm and of DDM-based ART, for one iteration. The fast reconstruction speed of our method was accomplished without compromising the image quality.
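For context, the sketch below shows how a system matrix, however it is computed, is consumed by a Kaczmarz-style ART iteration; the relaxation factor, iteration count, and toy problem are arbitrary choices, not the paper's setup.

```python
import numpy as np

def art_reconstruct(A, b, iters=50, relax=0.5):
    """Kaczmarz-style ART: cycle through the projection rows, correcting the
    image x so that each measured ray sum b[i] is matched. The entries of A
    would come from a line or area integral model."""
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(iters):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# tiny toy problem: 3 rays over 4 pixels; ART finds an image consistent
# with the measured ray sums (underdetermined, so not necessarily x_true)
A = np.array([[1., 1., 0., 0.], [0., 0., 1., 1.], [1., 0., 1., 0.]])
x_true = np.array([1., 2., 3., 4.])
print(art_reconstruct(A, A @ x_true))
```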
450. Long-term surface EMG monitoring using K-means clustering and compressive sensing

    NASA Astrophysics Data System (ADS)

    Balouchestani, Mohammadreza; Krishnan, Sridhar

    2015-05-01

    In this work, we present an advanced K-means clustering algorithm based on compressed sensing (CS) theory in combination with the K-singular value decomposition (K-SVD) method for clustering long-term recordings of surface electromyography (sEMG) signals. The long-term monitoring of sEMG signals aims at recording the electrical activity produced by muscles, which is a very useful procedure for treatment and diagnostic purposes as well as for the detection of various pathologies. The proposed algorithm is examined for three scenarios of sEMG signals: a healthy person (sEMG-Healthy), a patient with myopathy (sEMG-Myopathy), and a patient with neuropathy (sEMG-Neuropathy). The proposed algorithm can easily scan large sEMG datasets from long-term recordings. We test the proposed algorithm with principal component analysis (PCA) and linear correlation coefficient (LCC) dimensionality reduction methods. Then, the output of the proposed algorithm is fed to K-nearest neighbours (K-NN) and probabilistic neural network (PNN) classifiers in order to calculate the clustering performance. The proposed algorithm achieves a classification accuracy of 99.22%, reducing the average classification error (ACE) by 17%, the training error (TE) by 9%, and the root mean square error (RMSE) by 18%. The proposed algorithm also reduces clustering energy consumption by 14% compared to the existing K-means clustering algorithm.

451. Hidden pattern discovery on epileptic EEG with 1-D local binary patterns and epileptic seizures detection by grey relational analysis.

    PubMed

    Kaya, Yılmaz

    2015-09-01

    This paper proposes a novel approach to detecting epileptic seizures using electroencephalography (EEG), one of the most common methods for the diagnosis of epilepsy, based on 1-dimensional local binary patterns (1D-LBP) and grey relational analysis (GRA). The main aim of this paper is to evaluate and validate a novel computer-based quantitative EEG analysis approach, based on grey systems, intended to support the decision-maker. In this study, 1D-LBP, which utilizes all data points, was employed to extract features from raw EEG signals; the Fisher score (FS) was employed to select the representative features, which can also be regarded as hidden patterns. Additionally, GRA is performed to classify EEG signals through these Fisher-scored features. The experimental results of the proposed approach, which was validated on a public dataset, showed that it has high accuracy in identifying epileptic EEG signals. For various combinations of epileptic EEG, such as the A-E, B-E, C-E, D-E, and A-D clusters, accuracies of 100%, 96%, 100%, 99.00%, and 100% were achieved, respectively. This work also presents an attempt to develop a new general-purpose hidden pattern determination scheme that can be utilized for different categories of time-varying signals.
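A common formulation of the 1D-LBP operator mentioned above is sketched here; the window size and bit ordering are one conventional choice, not necessarily the paper's exact scheme.

```python
import numpy as np

def lbp_1d(signal, half_window=4):
    """1D local binary patterns: each sample is encoded by thresholding its
    neighbors (half_window on each side) against the center value and packing
    the resulting bits into one integer code per position."""
    s = np.asarray(signal, dtype=float)
    n = 2 * half_window                             # bits per code
    center = s[half_window:len(s) - half_window]
    codes = np.zeros(len(s) - n, dtype=np.int64)
    bit = 0
    for off in range(-half_window, half_window + 1):
        if off == 0:
            continue
        neighbor = s[half_window + off : len(s) - half_window + off]
        codes |= (neighbor >= center).astype(np.int64) << bit
        bit += 1
    # the histogram of pattern codes is the feature vector for a segment
    return np.bincount(codes, minlength=2 ** n)

feat = lbp_1d(np.sin(np.linspace(0, 20, 500)))      # 256-bin feature histogram
```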
452. The Proposal of a BDS Syllabus Framework to Suit Choice Based Credit System (CBCS)

    PubMed Central

    Sethuraman, KR; Narayan, KA

    2016-01-01

    Introduction: Higher education is taking on a new dimension universally in the form of the choice based credit system (CBCS). In India, the University Grants Commission (UGC) has made CBCS mandatory in all fields except the health professions. Few attempts have been made to design a BDS syllabus to suit CBCS. Aim: The aim of the study was to propose a model dental syllabus that fits the choice based credit system. Materials and Methods: A model BDS syllabus prototype for CBCS was designed based on the UGC guidelines for terms as well as the calculations for CBCS. Engineering curriculum models from IIT and Anna University were also referred to. Results: A semester-based BDS syllabus was designed without changing the norms of the Dental Council of India (DCI). All the must-know areas of the subjects were considered "core" areas, and the desirable and nice-to-know areas are left as "electives" for the students. By this method, no subject is left out, while at the same time students are provided with electives to learn their topics of choice in greater depth. Conclusion: The existing BDS syllabus can be effectively modified by incorporating a few changes based on the UGC regulations for the choice based credit system. The proposed framework gives an insight into the nature of the modifications that are needed. By adopting this, BDS course regulations can also follow CBCS without neglecting or reducing the weightage of any subject. PMID:27656467

453. Human Activity Recognition in AAL Environments Using Random Projections.

    PubMed

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and their environment by exploiting heterogeneous sensors attached to the subject's body, and permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL), and for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results using the proposed method with Human Activity Dataset (HAD) data are presented.
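The projection-then-compare idea from the HAR paper can be sketched as follows. A single Gaussian random projection direction and a generalized (min/max) Jaccard distance between histogram densities are assumptions made here; the paper's exact projection dimension and density estimator are not reproduced.

```python
import numpy as np

def jaccard_distance(p, q):
    """Generalized Jaccard distance between two nonnegative densities:
    1 - sum(min) / sum(max)."""
    return 1.0 - np.minimum(p, q).sum() / np.maximum(p, q).sum()

def project_and_compare(x, y, bins=32, seed=0):
    """Randomly project two high-dimensional feature sets to one dimension,
    histogram the projections on a common grid, and compare the densities."""
    rng = np.random.default_rng(seed)
    r = rng.normal(size=x.shape[1])                 # random projection direction
    zx, zy = x @ r, y @ r
    lo, hi = min(zx.min(), zy.min()), max(zx.max(), zy.max())
    px, _ = np.histogram(zx, bins=bins, range=(lo, hi), density=True)
    py, _ = np.histogram(zy, bins=bins, range=(lo, hi), density=True)
    return jaccard_distance(px, py)

rng = np.random.default_rng(1)
# toy "two activities" with different feature distributions
d = project_and_compare(rng.normal(0, 1, (500, 200)), rng.normal(1, 1, (500, 200)))
print(d)    # larger distance = easier to separate the two classes
```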
454. Design, analysis, and testing of a flexure-based vibration-assisted polishing device

    NASA Astrophysics Data System (ADS)

    Gu, Yan; Zhou, Yan; Lin, Jieqiong; Lu, Mingming; Zhang, Chenglong; Chen, Xiuyuan

    2018-05-01

    A vibration-assisted polishing device (VAPD) composed of leaf-spring and right-circular flexure hinges is proposed with the aim of realizing vibration-assisted machining along elliptical trajectories. To design the structure, energy methods and the finite element method are used to calculate the performance of the proposed VAPD. An improved bacterial foraging optimization algorithm is used to optimize the structural parameters. In addition, the performance of the VAPD is tested experimentally. The experimental results indicate that the maximum strokes of the two directional mechanisms operating along the Z1 and Z2 directions are 29.5 μm and 29.3 μm, respectively, and the maximum motion resolutions are 10.05 nm and 10.01 nm, respectively. The maximum working bandwidth is 1,879 Hz, and the device has a good step response.

455. Human Activity Recognition in AAL Environments Using Random Projections

    PubMed Central

    Damaševičius, Robertas; Vasiljevas, Mindaugas; Šalkevičius, Justas; Woźniak, Marcin

    2016-01-01

    Automatic human activity recognition systems aim to capture the state of the user and their environment by exploiting heterogeneous sensors attached to the subject's body, and permit continuous monitoring of numerous physiological signals reflecting the state of human actions. Successful identification of human activities can be immensely useful in healthcare applications for Ambient Assisted Living (AAL), and for automatic and intelligent activity monitoring systems developed for elderly and disabled people. In this paper, we propose a method for activity recognition and subject identification based on random projections from a high-dimensional feature space to a low-dimensional projection space, where the classes are separated using the Jaccard distance between probability density functions of the projected data. Two HAR domain tasks are considered: activity identification and subject identification. Experimental results using the proposed method with Human Activity Dataset (HAD) data are presented. PMID:27413392
456. A rapid parallelization of cone-beam projection and back-projection operator based on texture fetching interpolation

    NASA Astrophysics Data System (ADS)

    Xie, Lizhe; Hu, Yining; Chen, Yang; Shi, Luyao

    2015-03-01

    Projection and back-projection are the most computationally consuming parts of computed tomography (CT) reconstruction. Parallelization strategies using GPU computing techniques have been introduced. In this paper we present a new parallelization scheme for both projection and back-projection. The proposed method is based on the CUDA technology provided by NVIDIA Corporation. Instead of building a complex model, we aim at optimizing the existing algorithm and making it suitable for CUDA implementation so as to gain fast computation speed. Besides making use of the texture fetching operation, which helps gain faster interpolation speed, we fix the sampling numbers in the computation of projection to ensure the synchronization of blocks and threads, thus preventing the latency caused by inconsistent computation complexity. Experimental results demonstrate the computational efficiency and imaging quality of the proposed method.

457. Transmission and Reproduction of Force Sensation by Bilateral Control

    NASA Astrophysics Data System (ADS)

    Katsura, Seiichiro; Ohnishi, Kouhei

    Minimally invasive surgery (MIS), which places great importance on the patient's quality of life (QOL), has attracted attention for about ten years. This paper aims at the development of technology for transmitting the force sensation required in medical treatment, especially through surgical instruments such as forceps. In bilateral control, the problem is how the master and slave robots realize the law of action and reaction with respect to the environment. A mechanism for contact with the environment and a bilateral controller based on stiffness are presented. The master arm in contact with the human and the slave arm in contact with the environment are given compliance, so that stable contact with the environment can be realized. The proposed method is applied to 3-link master-slave manipulators. As a result, transmission and reproduction of force sensation are realized. The experimental results show the viability of the proposed method.
458. Fixation strength analysis of cup to bone material using finite element simulation

    NASA Astrophysics Data System (ADS)

    Anwar, Iwan Budiwan; Saputra, Eko; Ismail, Rifky; Jamari, J.; van der Heide, Emile

    2016-04-01

    Fixation of the acetabular cup to bone material provides important initial stability for an artificial hip joint. In general, fixation of a cementless acetabular cup uses press-fit and screw methods. These methods can be applied alone or together. Based on a literature survey, an additional screw inside the cup is effective; however, it has little effect on the fixation as a whole. Therefore, an acetabular cup with good fixation, easy manufacture and easy installation is required. This paper aims at evaluating and proposing a new cup fixation design. To prove the strength of the present cup fixation design, a finite element simulation of the three-dimensional cup with the new fixation design was performed. The present cup design was examined under twist axial and radial rotation. Results showed that the proposed cup design was better than the general version.

459. An Autonomous Satellite Time Synchronization System Using Remotely Disciplined VC-OCXOs

    PubMed Central

    Gu, Xiaobo; Chang, Qing; Glennon, Eamonn P.; Xu, Baoda; Dempseter, Andrew G.; Wang, Dun; Wu, Jiapeng

    2015-01-01

    An autonomous remote clock control system is proposed to provide time synchronization and frequency syntonization for satellite-to-satellite or ground-to-satellite time transfer, with the system comprising on-board voltage controlled oven controlled crystal oscillators (VC-OCXOs) that are disciplined to a remote master atomic clock or oscillator. The synchronization loop aims to provide autonomous operation over extended periods, be widely applicable to a variety of scenarios, and be robust. A new architecture comprising the use of frequency division duplex (FDD), synchronous time division duplex (STDD) and code division multiple access (CDMA) with a centralized topology is employed. This new design utilizes dual one-way ranging methods to precisely measure the clock error, adopts least squares (LS) methods to predict the clock error, and employs a third-order phase-locked loop (PLL) to generate the voltage control signal. A general functional model for this system is proposed, and the error sources and delays that affect the time synchronization are discussed. Related algorithms for estimating and correcting these errors are also proposed. The performance of the proposed system is simulated and guidance for selecting the clock is provided. PMID:26213929
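The least-squares clock-error prediction step named in the satellite-synchronization abstract can be illustrated with a polynomial model. The quadratic form (phase offset, frequency offset, drift) is a conventional clock model and an assumption here, not a detail from the paper.

```python
import numpy as np

def predict_clock_error(t_meas, err_meas, t_future, degree=2):
    """Fit phase offset + frequency offset + drift (quadratic model) to the
    measured clock errors by least squares, then extrapolate."""
    coeffs = np.polyfit(t_meas, err_meas, degree)
    return np.polyval(coeffs, t_future)

# toy data: 0.1 us offset, 2e-3 us/s frequency error, small drift + noise
t = np.arange(0.0, 60.0, 5.0)
rng = np.random.default_rng(0)
err = 0.1 + 2e-3 * t + 1e-6 * t**2 + rng.normal(0, 1e-3, t.size)
print(predict_clock_error(t, err, t_future=90.0))   # error expected at t = 90 s
```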
460. Sequential Nonlinear Learning for Distributed Multiagent Systems via Extreme Learning Machines.

    PubMed

    Vanli, Nuri Denizcan; Sayin, Muhammed O; Delibalta, Ibrahim; Kozat, Suleyman Serdar

    2017-03-01

    We study online nonlinear learning over distributed multiagent systems, where each agent employs a single hidden layer feedforward neural network (SLFN) structure to sequentially minimize arbitrary loss functions. In particular, each agent trains its own SLFN using only the data that is revealed to itself. On the other hand, the aim of the multiagent system is to train the SLFN at each agent as well as the optimal centralized batch SLFN that has access to all the data, by exchanging information between neighboring agents. We address this problem by introducing a distributed subgradient-based extreme learning machine algorithm. The proposed algorithm provides guaranteed upper bounds on the performance of the SLFN at each agent and shows that each of these individual SLFNs asymptotically achieves the performance of the optimal centralized batch SLFN. Our performance guarantees explicitly distinguish the effects of data- and network-dependent parameters on the convergence rate of the proposed algorithm. The experimental results illustrate that the proposed algorithm achieves the oracle performance significantly faster than state-of-the-art methods in the machine learning and signal processing literature. Hence, the proposed method is highly appealing for applications involving big data.
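A minimal consensus-plus-subgradient round, the building block behind distributed algorithms of this kind, is sketched below on a scalar toy problem. The mixing matrix, step size, and toy objective are illustrative; this is not the paper's ELM-based algorithm.

```python
import numpy as np

def distributed_subgradient_step(W, params, grads, lr):
    """One round: each agent mixes its neighbors' parameters through the
    doubly stochastic matrix W, then steps along its own local subgradient."""
    return W @ params - lr * grads

# ring of 4 agents, each holding one scalar parameter
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
theta = np.array([0.0, 1.0, 2.0, 3.0])
targets = np.array([1.0, 2.0, 3.0, 4.0])    # each agent's local data mean
for _ in range(200):
    # local objective per agent: 0.5*(theta_i - target_i)^2, gradient below
    theta = distributed_subgradient_step(W, theta, theta - targets, lr=0.1)
# agents cluster near the global optimum 2.5 (the mean of all targets);
# a diminishing step size would make the convergence exact
print(theta)
```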
461. Optimized Controller Design for a 12-Pulse Voltage Source Converter Based HVDC System

    NASA Astrophysics Data System (ADS)

    Agarwal, Ruchi; Singh, Sanjeev

    2017-12-01

    The paper proposes an optimized controller design scheme for power quality improvement in a 12-pulse voltage source converter based high voltage direct current (HVDC) system. The proposed scheme is a hybrid combination of the golden section search and successive linear search methods. The paper aims at the reduction of current sensors and the optimization of the controller. The voltage and current controller parameters are selected for optimization due to their impact on power quality. The proposed algorithm optimizes the objective function, which is composed of current harmonic distortion, power factor, and DC voltage ripple. The detailed design and modeling of the complete system are discussed and its simulation is carried out in the MATLAB-Simulink environment. The obtained results are presented to demonstrate the effectiveness of the proposed scheme under different transient conditions such as load perturbation, non-linear load, voltage sag, and tapped load fault under one-phase-open condition at both points of common coupling.

462. The effect of different distance measures in detecting outliers using clustering-based algorithm for circular regression model

    NASA Astrophysics Data System (ADS)

    Di, Nur Faraidah Muhammad; Satari, Siti Zanariah

    2017-05-01

    Outlier detection in linear data sets has been studied vigorously, but only a small amount of work has been done on outlier detection in circular data. In this study, we propose multiple-outlier detection in circular regression models based on a clustering algorithm. Clustering techniques basically utilize a distance measure to define the distance between various data points. Here, we introduce a similarity distance based on the Euclidean distance for the circular model, and obtain a cluster tree using the single linkage clustering algorithm. Then, a stopping rule for the cluster tree based on the mean direction and circular standard deviation of the tree height is proposed. We classify the cluster groups that exceed the stopping rule as potential outliers. Our aim is to demonstrate the effectiveness of the proposed algorithms with the similarity distances in detecting the outliers. It is found that the proposed methods perform well and are applicable for the circular regression model.
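Golden section search, one half of the hybrid scheme named in the HVDC abstract, is a standard derivative-free line search. A generic implementation with a toy gain-tuning objective (the objective and interval are invented for illustration):

```python
import numpy as np

def golden_section_search(f, a, b, tol=1e-6):
    """Minimize a unimodal function on [a, b] by golden section search."""
    inv_phi = (np.sqrt(5.0) - 1.0) / 2.0            # ~0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    fc, fd = f(c), f(d)
    while abs(b - a) > tol:
        if fc < fd:                                  # minimum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - inv_phi * (b - a)
            fc = f(c)
        else:                                        # minimum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + inv_phi * (b - a)
            fd = f(d)
    return (a + b) / 2.0

# toy controller-gain tuning: quadratic cost with its optimum at gain = 1.8
print(golden_section_search(lambda k: (k - 1.8) ** 2 + 0.5, 0.0, 10.0))
```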
463. QSAR modelling using combined simple competitive learning networks and RBF neural networks.

    PubMed

    Sheikhpour, R; Sarram, M A; Rezaeian, M; Sheikhpour, E

    2018-04-01

    The aim of this study was to propose a QSAR modelling approach based on the combination of simple competitive learning (SCL) networks with radial basis function (RBF) neural networks for predicting the biological activity of chemical compounds. The proposed QSAR method consisted of two phases. In the first phase, an SCL network was applied to determine the centres of an RBF neural network. In the second phase, the RBF neural network was used to predict the biological activity of various phenols and Rho kinase (ROCK) inhibitors. The predictive ability of the proposed QSAR models was evaluated and compared with other QSAR models using external validation. The results of this study showed that the proposed QSAR modelling approach leads to better performance than other models in predicting the biological activity of chemical compounds. This indicated the efficiency of simple competitive learning networks in determining the centres of RBF neural networks.

464. Model reduction method using variable-separation for stochastic saddle point problems

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2018-02-01

    In this paper, we consider a variable-separation (VS) method to solve stochastic saddle point (SSP) problems. The VS method is applied to obtain the solution in tensor product structure for stochastic partial differential equations (SPDEs) in a mixed formulation. The aim of such a technique is to construct a reduced basis approximation of the solution of the SSP problems. The VS method attempts to obtain a low-rank separated representation of the solution for SSP in a systematic enrichment manner. No iteration is performed at each enrichment step. In order to satisfy the inf-sup condition in the mixed formulation, we enrich the separated terms for the primal system variable at each enrichment step. For SSP problems treated by regularization or penalty, we propose a more efficient variable-separation method, i.e., the variable-separation by penalty method. This can avoid further enrichment of the separated terms in the original mixed formulation. The computation of the variable-separation method decomposes into an offline phase and an online phase. A sparse low-rank tensor approximation method is used to significantly improve the online computation efficiency when the number of separated terms is large. For the applications of SSP problems, we present three numerical examples to illustrate the performance of the proposed methods.
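The two-phase structure of the QSAR approach (centre selection, then RBF fitting) can be sketched with k-means standing in for the simple competitive learning stage; the kernel width, centre count, and toy target below are arbitrary assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def fit_rbf(X, y, n_centers=20, gamma=1.0, seed=0):
    """RBF network: centres from k-means (a stand-in for the SCL phase),
    output weights from linear least squares on Gaussian activations."""
    centers = KMeans(n_clusters=n_centers, n_init=10,
                     random_state=seed).fit(X).cluster_centers_
    def design(Z):
        d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)                  # Gaussian basis activations
    w, *_ = np.linalg.lstsq(design(X), y, rcond=None)
    return lambda Z: design(Z) @ w

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (200, 2))                    # toy molecular descriptors
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # toy "activity" target
predict = fit_rbf(X, y)
print(predict(np.array([[1.0, 0.0]])))              # ~ sin(1.0)
```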
465. An Evaluation Model for a Multidisciplinary Chronic Pelvic Pain Clinic: Application of the RE-AIM Framework.

    PubMed

    Chen, Innie; Money, Deborah; Yong, Paul; Williams, Christina; Allaire, Catherine

    2015-09-01

    Chronic pelvic pain (CPP) is a prevalent, debilitating, and costly condition. Although national guidelines and empirical evidence support the use of a multidisciplinary model of care for such patients, such clinics are uncommon in Canada. The BC Women's Centre for Pelvic Pain and Endometriosis was created to respond to this need, and there is interest in this model of care's impact on the burden of disease in British Columbia. We sought to create an approach to its evaluation using the RE-AIM (Reach, Efficacy, Adoption, Implementation, Maintenance) evaluation framework to assess the impact of the care model and to guide clinical decision-making and policy. The RE-AIM evaluation framework was applied to consider the different dimensions of impact of the BC Centre. The proposed measures, data sources, and data management strategies for this mixed-methods approach were identified. The five dimensions of impact were considered at the individual and organizational levels, and corresponding indicators were proposed to enable integration into the existing data infrastructure to facilitate collection and early program evaluation. The RE-AIM framework can be applied to the evaluation of a multidisciplinary chronic pelvic pain clinic. This will allow better assessment of the impact of innovative models of care for women with chronic pelvic pain.

466. Finite element method formulation in polar coordinates for transient heat conduction problems

    NASA Astrophysics Data System (ADS)

    Duda, Piotr

    2016-04-01

    The aim of this paper is the formulation of the finite element method in polar coordinates to solve transient heat conduction problems. It is hard to find in the literature a formulation of the finite element method (FEM) in polar or cylindrical coordinates for the solution of heat transfer problems. This document shows how to apply the most often used boundary conditions. The global equation system is solved by the Crank-Nicolson method. The proposed algorithm is verified in three numerical tests. In the first example, the obtained transient temperature distribution is compared with the temperature obtained from the presented analytical solution. In the second numerical example, a variable boundary condition is assumed. In the last numerical example a component with a shape different from cylindrical is used. All examples show that the introduction of the polar coordinate system gives better results than the Cartesian coordinate system. The finite element method formulation in polar coordinates is valuable since it provides higher accuracy of the calculations without compacting the mesh in cylindrical or similar tubular components. The proposed method can be applied to circular elements such as boiler drums, outlet headers, and flux tubes. This algorithm can be useful for the solution of inverse problems, which do not allow a high density grid. This method can calculate the temperature distribution in bodies with different properties in the circumferential and radial directions. The presented algorithm can be developed for other coordinate systems. The examples demonstrate good accuracy and stability of the proposed method.
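The Crank-Nicolson time stepping named in the abstract can be shown on a one-dimensional radial conduction problem. Note this sketch uses finite differences rather than the paper's FEM formulation, and the geometry, diffusivity, and boundary values are invented for illustration.

```python
import numpy as np

def crank_nicolson_radial(T0, r, alpha, dt, steps):
    """Transient conduction dT/dt = alpha*(T'' + T'/r) on a radial grid with
    Dirichlet ends, advanced by the Crank-Nicolson scheme."""
    n, dr = len(r), r[1] - r[0]
    L = np.zeros((n, n))                      # spatial operator, interior rows
    for i in range(1, n - 1):
        L[i, i - 1] = alpha * (1.0 / dr**2 - 1.0 / (2.0 * dr * r[i]))
        L[i, i]     = alpha * (-2.0 / dr**2)
        L[i, i + 1] = alpha * (1.0 / dr**2 + 1.0 / (2.0 * dr * r[i]))
    I = np.eye(n)
    A, B = I - 0.5 * dt * L, I + 0.5 * dt * L  # zero rows pin the boundaries
    T = T0.copy()
    for _ in range(steps):
        T = np.linalg.solve(A, B @ T)
    return T

# tube wall from 50 mm to 100 mm radius, inner surface suddenly held at 300 C
r = np.linspace(0.05, 0.10, 51)
T0 = np.full(51, 20.0) + np.r_[280.0, np.zeros(50)]
T = crank_nicolson_radial(T0, r, alpha=1e-5, dt=0.5, steps=200)
```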
<li> <p><a href="http://adsabs.harvard.edu/abs/2017AIPC.1817d0007R"><span>Numerical design of RF ablation applicator for hepatic cancer treatment</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Rakhmadi, Aditya; Basari</p> <p>2017-02-01</p> <p>Cancer remains one of the most difficult health problems to overcome: it is hard to cure, hard to detect, and may cause death. For this reason, an RF ablation treatment method is proposed. RF ablation therapy is a method in which an applicator is inserted into the body to kill cancer cells by heating them. The cancer cells are exposed to temperatures above 60°C for a short duration (a few seconds to a few minutes) so that cell destruction occurs locally. For a successful treatment, a minimally invasive method is selected so that a well-localized temperature distribution in the cancer cells can be achieved. In this paper, a coax-fed dipole-type applicator with an interstitial irradiation technique is proposed for RF ablation of hepatic cells. Numerical simulation is performed to obtain suitable geometric dimensions at an operating frequency around 2.45 GHz, in order to localize the ablation area. The proposed applicator is inserted into a simple phantom representing an adult human body in which normal and cancerous liver cells are modelled. The simulated results show that the proposed applicator is able to operate at a center frequency of 2.355 GHz with a blood-droplet-shaped ablation zone, and a temperature of about 60°C around the cancer cells can be achieved.</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2008SPIE.6963E..0RH"><span>Multi-objects recognition for distributed intelligent sensor networks</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>He, Haibo; Chen, Sheng; Cao, Yuan; Desai, Sachi; Hohil, Myron E.</p> <p>2008-04-01</p> <p>This paper proposes an innovative approach to multi-object recognition for homeland security and defense based intelligent sensor networks. Unlike the conventional way of information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality, and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat flights, and other resources, it is critical to develop intelligent data mining approaches to fuse different information resources to understand dynamic environments, to support decision making processes, and finally to achieve the goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and such knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.</p> </li>
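<p>The feature-scaling idea — representing variable-size regions in a fixed number of dimensions before SVM training — can be sketched compactly. The snippet below (segmentation omitted, synthetic data, scikit-learn assumed available) shows one simple realization by linear resampling; it is an illustration of the general approach, not the authors' implementation.</p>
<pre><code class="language-python">
import numpy as np
from sklearn.svm import SVC

def scale_features(vec, target_dim=64):
    """Resample a variable-length feature vector to a fixed dimension
    by linear interpolation (one simple realization of feature scaling)."""
    vec = np.asarray(vec, dtype=float)
    old = np.linspace(0.0, 1.0, len(vec))
    new = np.linspace(0.0, 1.0, target_dim)
    return np.interp(new, old, vec)

# toy data: two "object types" whose segmented regions vary in size
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    n = rng.integers(30, 120)              # region size varies per object
    label = int(rng.integers(0, 2))
    raw = rng.normal(loc=label, size=n)    # class-dependent feature statistics
    X.append(scale_features(raw))
    y.append(label)
X = np.array(X)
clf = SVC(kernel="rbf").fit(X[:150], y[:150])
print("toy accuracy:", clf.score(X[150:], y[150:]))
</code></pre>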
<li> <p><a href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4541904"><span>Non-Destructive Current Sensing for Energy Efficiency Monitoring in Buildings with Environmental Certification</span></a></p> <p><a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Mota, Lia Toledo Moreira; Mota, Alexandre de Assis; Coiado, Lorenzo Campos</p> <p>2015-01-01</p> <p>Nowadays, building environmental certifications encourage the implementation of initiatives aiming to increase energy efficiency in buildings. In these certification systems, the increased energy efficiency arising from such initiatives must be demonstrated. Thus, a challenge to be faced is how to verify the increase in energy efficiency related to each of the employed initiatives without a considerable building retrofit. In this context, this work presents a non-destructive method of electric current sensing to assess initiatives implemented to increase energy efficiency in buildings with environmental certification. The method uses a sensor that can be installed directly on the low-voltage circuit conductors powering the initiative under evaluation, without the need for works that would entail significant cost, repair, and maintenance. The proposed sensor consists of three elements: an air-core transformer current sensor, an amplifying/filtering stage, and a microprocessor. A prototype of the proposed sensor was developed and tests were performed to validate it. Based on laboratory tests, it was possible to characterize the proposed current sensor with respect to the number of turns and the cross-sectional areas of the primary and secondary coils. Furthermore, using the least squares method, it was possible to determine the efficiency of the air-core transformer current sensor (the best efficiency found, considering different test conditions, was 2%), which leads to a linear output response. PMID:26184208</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2015EJASP2015...56X"><span>Non-integer expansion embedding techniques for reversible image watermarking</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Xiang, Shijun; Wang, Yi</p> <p>2015-12-01</p> <p>This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding. The rounding operation places a constraint on a predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer part of a prediction error while keeping its fractional part unchanged.
The advantage of the NIPE technique is that it can bring a predictor fully into play, estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method to estimate a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces distortion because half of the pixels are predicted from already-watermarked pixels). In comparison with several existing state-of-the-art works, experimental results have shown that the NIPE technique with the new noncausal prediction strategy can reduce the embedding distortion for the same embedding payload.</p> </li>
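<p>The core idea — expand only the integer part of a possibly non-integer prediction error, leaving the fraction untouched — can be written down in a few lines. This is a sketch of that idea for a single pixel and bit; details (overflow handling, location maps) of the published scheme are omitted.</p>
<pre><code class="language-python">
import math

def nipe_embed(pixel, prediction, bit):
    """Embed one bit by expanding the integer part of the (possibly
    non-integer) prediction error; the fractional part is unchanged."""
    e = pixel - prediction                 # non-integer prediction error
    e_int = math.floor(e)
    e_frac = e - e_int                     # kept as-is by the embedding
    e_marked = 2 * e_int + bit + e_frac    # classical expansion on the integer part
    return prediction + e_marked

def nipe_extract(marked_pixel, prediction):
    e_marked = marked_pixel - prediction
    e_int = math.floor(e_marked)
    bit = e_int & 1                        # LSB of the expanded integer part
    e = (e_int - bit) / 2 + (e_marked - e_int)
    return prediction + e, bit

px, pred = 127, 125.4                      # the predictor may return a non-integer
marked = nipe_embed(px, pred, 1)
restored, bit = nipe_extract(marked, pred)
print(marked, restored, bit)               # 129.0, 127.0, 1 -- fully reversible
</code></pre>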
<li> <p><a href="http://adsabs.harvard.edu/abs/2005JMiMi..15.2184Y"><span>A novel fabrication method for suspended high-aspect-ratio microstructures</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Yang, Yao-Joe; Kuo, Wen-Cheng</p> <p>2005-11-01</p> <p>Suspended high-aspect-ratio structures (suspended HARS) are widely used for MEMS devices such as micro-gyroscopes, micro-accelerometers, and optical switches. Various fabrication methods, such as the SOI, SCREAM, AIM, SBM and BELST processes, have been proposed to fabricate HARS. However, these methods focus on the fabrication of suspended microstructures with relatively small widths of trench opening (e.g. less than 10 µm). In this paper, we propose a novel process for fabricating very high-aspect-ratio suspended structures with large widths of trench opening, using photoresist as an etching mask. By enhancing the microtrenching effect, we can easily release the suspended structure without thoroughly removing the floor polymer inside the trenches for cases with a relatively small trench aspect ratio. All the process steps can be integrated into a single-run, single-mask ICP-RIE process, which effectively reduces the process complexity and fabrication cost. We also discuss the phenomenon of corner erosion, which results in undesired etching of silicon structures during the structure-releasing step. Using the proposed process, 100 µm thick suspended structures with a trench aspect ratio of about 20 are demonstrated. The proposed process can also be used to fabricate devices for applications that require large in-plane displacement. This paper was presented orally at Transducers'05, Seoul, Korea (paper ID: 3B1.3).</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/29249347"><span>Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan</p> <p>2018-02-01</p> <p>The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for the restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given that the image characteristics are hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background and dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic <sup>82</sup>Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive <sup>82</sup>Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac <sup>82</sup>Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for the restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.</p> </li>
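<p>An L + S split of a frames-as-columns matrix by iterative soft thresholding can be sketched in generic robust-PCA style: singular-value thresholding for the low-rank background, elementwise shrinkage for the sparse dynamic part. This is the general decomposition idea with made-up toy data and regularization weights, not the paper's exact penalized model.</p>
<pre><code class="language-python">
import numpy as np

def soft(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def l_plus_s(M, lam_l=1.0, lam_s=0.05, n_iter=50):
    """Split M (pixels x time-frames) into low-rank L (background kinetics)
    plus sparse S (dynamic component) by iterative soft thresholding."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * soft(sv, lam_l)) @ Vt     # singular-value thresholding
        S = soft(M - L, lam_s)             # elementwise shrinkage
    return L, S

rng = np.random.default_rng(1)
background = np.outer(rng.random(400), np.linspace(1, 2, 30))  # rank-1 kinetics
dynamic = np.zeros((400, 30))
dynamic[50:60, 10:] = 3.0                                      # localized change
M = background + dynamic + 0.01 * rng.normal(size=(400, 30))
L, S = l_plus_s(M)
print("rank(L) ~", np.linalg.matrix_rank(L, tol=1e-2),
      "| strong entries in S:", int((np.abs(S) > 0.5).sum()))
</code></pre>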
<li> <p><a href="http://adsabs.harvard.edu/abs/2006NIMPA.569..467B"><span>Quantitative imaging for clinical dosimetry</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Bardiès, Manuel; Flux, Glenn; Lassmann, Michael; Monsieurs, Myriam; Savolainen, Sauli; Strand, Sven-Erik</p> <p>2006-12-01</p> <p>Patient-specific dosimetry in nuclear medicine is now a legal requirement in many countries throughout the EU for targeted radionuclide therapy (TRT) applications. In order to achieve that goal, an increased level of accuracy in dosimetry procedures is needed. Current research in nuclear medicine dosimetry should not only aim at developing new methods to assess the delivered radiation absorbed dose at the patient level, but also at ensuring that the proposed methods can be put into practice in a sufficient number of institutions. A unified dosimetry methodology is required for making clinical outcome comparisons possible.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/24426676"><span>The Proposal and Early Results of Capitate Forage as a New Treatment Method for Kienböck's Disease.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Bekler, Halil Ibrahim; Erdag, Yiğit; Gumustas, Seyit Ali; Pehlivanoglu, Gökhan</p> <p>2013-12-01</p> <p>Kienböck's disease is a type of avascular necrosis which disrupts the biomechanics of the wrist as a result of the changes it creates in the lunate bone. Its treatment generally consists of osteotomies intended to relieve the pressure on the bone, pedicle bone grafting applications aiming to increase bone blood supply, and salvage procedures.
Capitate forage is a safe and simple-to-apply surgical treatment method intended to enhance neovascularization of the lunate, much like a radius osteotomy or core decompression.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/29272537"><span>Improved Maximum Parsimony Models for Phylogenetic Networks.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Van Iersel, Leo; Jones, Mark; Scornavacca, Celine</p> <p>2018-05-01</p> <p>Phylogenetic networks are well suited to represent evolutionary histories comprising reticulate evolution. Several methods aiming at reconstructing explicit phylogenetic networks have been developed in the last two decades. In this article, we propose a new definition of maximum parsimony for phylogenetic networks that makes it possible to model biological scenarios that cannot be captured by the definitions currently present in the literature (namely, the "hardwired" and "softwired" parsimony). Building on this new definition, we provide several algorithmic results that lay the foundations for new parsimony-based methods for phylogenetic network reconstruction.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/21934700"><span>Bias correction for estimated QTL effects using the penalized maximum likelihood method.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, J; Yue, C; Zhang, Y-M</p> <p>2012-04-01</p> <p>A penalized maximum likelihood method has been proposed as an important approach to the detection of epistatic quantitative trait loci (QTL). However, this approach is not optimal in two special situations: (1) closely linked QTL with effects in opposite directions and (2) small-effect QTL, because the method produces downwardly biased estimates of QTL effects. The present study aims to correct the bias by using correction coefficients and by shifting from a uniform prior on the variance parameter of a QTL effect to a scaled inverse chi-square prior. The results of Monte Carlo simulation experiments show that the improved method increases the power from 25 to 88% in the detection of two closely linked QTL of equal size in opposite directions and from 60 to 80% in the identification of QTL with small effects (0.5% of the total phenotypic variance). We used the improved method to detect QTL responsible for the barley kernel weight trait using 145 doubled haploid lines developed in the North American Barley Genome Mapping Project.
Application of the proposed method to other shrinkage estimators of QTL effects is discussed.</p> </li> <li> <p><a href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4503688"><span>Exact and Metaheuristic Approaches for a Bi-Objective School Bus Scheduling Problem</span></a></p> <p><a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Chen, Xiaopan; Kong, Yunfeng; Dang, Lanxue; Hou, Yane; Ye, Xinyue</p> <p>2015-01-01</p> <p>As a class of hard combinatorial optimization problems, the school bus routing problem has received considerable attention in recent decades. For a multi-school system, given the bus trips for each school, the school bus scheduling problem aims at optimizing bus schedules to serve all the trips within the school time windows. In this paper, we propose two approaches for solving the bi-objective school bus scheduling problem: an exact mixed integer programming (MIP) method and a metaheuristic method which combines simulated annealing with local search. We develop MIP formulations for homogeneous and heterogeneous fleet problems, respectively, and solve the models with the MIP solver CPLEX. The bus type-based formulation for the heterogeneous fleet problem reduces the model complexity in terms of the number of decision variables and constraints. The metaheuristic method is a two-stage framework for minimizing the number of buses to be used as well as the total travel distance of buses. We evaluate the proposed MIP and the metaheuristic method on two benchmark datasets, showing that on both instances, our metaheuristic method significantly outperforms the respective state-of-the-art methods. PMID:26176764</p> </li>
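<p>The two-stage metaheuristic objective — fewer buses first, then less idle time, over non-overlapping trip assignments — can be mimicked with a tiny simulated-annealing loop. The following toy sketch (random trips, lexicographic-style weighted cost, single-move neighborhood) illustrates the flavor of the approach, not the authors' implementation.</p>
<pre><code class="language-python">
import random, math

random.seed(0)
starts = sorted(random.uniform(0, 8) for _ in range(30))
trips = [(s, s + random.uniform(0.5, 1.5)) for s in starts]   # (start, end)

def cost(assign):
    buses = {}
    for t, b in zip(trips, assign):
        buses.setdefault(b, []).append(t)
    total_idle = 0.0
    for ts in buses.values():
        ts.sort()
        if any(ts[i][1] > ts[i + 1][0] for i in range(len(ts) - 1)):
            return float("inf")                    # overlapping trips: infeasible
        total_idle += ts[-1][1] - ts[0][0] - sum(e - s for s, e in ts)
    return 1000 * len(buses) + total_idle          # bus count dominates idle time

assign = list(range(len(trips)))                   # start: one bus per trip
c, T = cost(assign), 10.0
for _ in range(20000):                             # annealing, single-move neighborhood
    i = random.randrange(len(trips))
    new = assign[:]
    new[i] = random.randrange(len(trips))
    nc = cost(new)
    if nc < c or random.random() < math.exp((c - nc) / max(T, 1e-9)):
        assign, c = new, nc
    T *= 0.9995
print("buses used:", len(set(assign)), "| cost:", round(c, 2))
</code></pre>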
<li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/27884873"><span>Optimization of Robust HPLC Method for Quantitation of Ambroxol Hydrochloride and Roxithromycin Using a DoE Approach.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Patel, Rashmin B; Patel, Nilay M; Patel, Mrunali R; Solanki, Ajay B</p> <p>2017-03-01</p> <p>The aim of this work was to develop and optimize a robust HPLC method for the separation and quantitation of ambroxol hydrochloride and roxithromycin utilizing a Design of Experiments (DoE) approach. A Plackett-Burman design was used to assess the impact of the independent variables (concentration of organic phase, mobile phase pH, flow rate, and column temperature) on peak resolution, USP tailing, and number of plates. A central composite design was utilized to evaluate the main, interaction, and quadratic effects of the independent variables on the selected dependent variables. The optimized HPLC method was validated based on the ICH Q2R1 guideline and was used to separate and quantify ambroxol hydrochloride and roxithromycin in tablet formulations. The findings showed that the DoE approach could be effectively applied to optimize a robust HPLC method for the quantification of ambroxol hydrochloride and roxithromycin in tablet formulations. Statistical comparison between the results of the proposed and previously reported HPLC methods revealed no significant difference, indicating the suitability of the proposed method for the analysis of ambroxol hydrochloride and roxithromycin in pharmaceutical formulations. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2017MeScT..28d5401G"><span>Application of optimized multiscale mathematical morphology for bearing fault diagnosis</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Gong, Tingkai; Yuan, Yanbin; Yuan, Xiaohui; Wu, Xiaotao</p> <p>2017-04-01</p> <p>In order to suppress noise effectively and extract the impulsive features in the vibration signals of faulty rolling element bearings, an optimized multiscale morphology (OMM) method based on conventional multiscale morphology (CMM) and iterative morphology (IM) is presented in this paper. Firstly, the operator used in the IM method must be non-idempotent; therefore, an optimized difference (ODIF) operator is designed. Furthermore, in the iterative process the current operation is performed on the basis of the previous one, which means that if a larger scale is employed, more fault features are inhibited; hence, a unit scale is proposed as the structuring element (SE) scale in IM. According to the above definitions, the IM method is applied to the results obtained by CMM over different scales. The validity of the proposed method is first evaluated on a simulated signal. Subsequently, for an outer race fault, two vibration signals sampled by different accelerometers are analyzed by OMM and CMM, respectively; the same is done for an inner race fault. The results show that the optimized method is effective in diagnosing the two bearing faults. Compared with the CMM method, the OMM method can extract many more fault features against a strong noise background.</p> </li>
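<p>The CMM-style starting point — averaging opening/closing difference operators over several structuring-element scales to expose fault impulses — is easy to sketch on a 1-D signal with SciPy's grey-scale morphology. This shows only the conventional multiscale flavor; the optimized ODIF/IM operators of the paper differ.</p>
<pre><code class="language-python">
import numpy as np
from scipy.ndimage import grey_opening, grey_closing

def multiscale_morph(signal, scales=(2, 3, 4, 5)):
    """Average of top-hat (peaks) and bottom-hat (valleys) responses over
    several flat structuring-element scales -- a CMM-style feature signal."""
    out = np.zeros_like(signal, dtype=float)
    for s in scales:
        top = signal - grey_opening(signal, size=s)    # upward impulses
        bot = grey_closing(signal, size=s) - signal    # downward impulses
        out += (top + bot) / 2.0
    return out / len(scales)

t = np.linspace(0, 1, 2000)
rng = np.random.default_rng(2)
sig = 0.3 * np.sin(2 * np.pi * 30 * t) + 0.2 * rng.normal(size=t.size)
sig[::200] += 2.0                                      # periodic fault impulses
feat = multiscale_morph(sig)
print("strongest responses at samples:", np.sort(np.argsort(feat)[-5:]))
</code></pre>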
</ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_24 --> <div id="page_25"> <div class="row"> <div class="col-sm-12"> <ol class="result-class" start="481"> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/27448355"><span>Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo</p> <p>2016-10-01</p> <p>Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data in large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate the pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of the nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. It then exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for the large-scale cross-modal retrieval task.</p> </li>
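<p>Framing hash learning as classification can be illustrated in a bare-bones, single-modality form: fit a linear map from features to one-hot labels, lift it to the code length, and binarize. This is far simpler than the MDBE model itself (no cross-modal coupling, no shared-structure discovery); all dimensions and data below are synthetic.</p>
<pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(6)
n, d, n_classes, n_bits = 300, 32, 3, 16
y = rng.integers(0, n_classes, size=n)
X = rng.normal(size=(n, d)) + 3.0 * rng.normal(size=(n_classes, d))[y]

Y = np.eye(n_classes)[y]                                    # one-hot labels
W = np.linalg.solve(X.T @ X + 1e-2 * np.eye(d), X.T @ Y)    # ridge regression to labels
proj = X @ W @ rng.normal(size=(n_classes, n_bits))         # lift to n_bits dimensions
codes = (proj > 0).astype(np.uint8)                         # binary hash codes

ham = (codes[:1] ^ codes).sum(axis=1)                       # Hamming distance to item 0
print("mean distance, same class :", ham[y == y[0]].mean())
print("mean distance, other class:", ham[y != y[0]].mean())
</code></pre>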
<li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/29628913"><span>A Retrospective Review of Microbiological Methods Applied in Studies Following the Deepwater Horizon Oil Spill.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Zhang, Shuangfei; Hu, Zhong; Wang, Hui</p> <p>2018-01-01</p> <p>The Deepwater Horizon (DWH) oil spill in the Gulf of Mexico in 2010 resulted in serious damage to local marine and coastal environments. In addition to the physical removal and chemical dispersion of spilled oil, biodegradation by indigenous microorganisms was regarded as the most effective way of cleaning up residual oil. Different microbiological methods were applied to investigate the changes and responses of bacterial communities after the DWH oil spill. By summarizing and analyzing these microbiological methods, giving recommendations, and proposing some methods that have not been used, this review aims to provide constructive guidelines for microbiological studies after environmental disasters, especially those involving organic pollutants. PMID:29628913</p> </li>
<li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/23994758"><span>Novel method for screening of enteric film coatings properties with magnetic resonance imaging.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Dorożyński, Przemysław; Jamróz, Witold; Niwiński, Krzysztof; Kurek, Mateusz; Węglarz, Władysław P; Jachowicz, Renata; Kulinowski, Piotr</p> <p>2013-11-18</p> <p>The aim of the study is to present the concept of a novel method for fast screening of the properties of enteric coating compositions without the need to prepare tablet batches for fluid-bed coating. The proposed method involves the evaluation of enteric-coated model tablets in a specially designed testing cell using MRI. The results obtained in the testing cell were compared with the results of dissolution studies of mini-tablets coated in a fluid-bed apparatus. The method could be useful in the early stages of formulation development for screening film coating properties, shortening and simplifying development work. Copyright © 2013 Elsevier B.V. All rights reserved.</p> </li> <li> <p><a href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4240524"><span>RO1 Funding for Mixed Methods Research: Lessons learned from the Mixed-Method Analysis of Japanese Depression Project</span></a></p> <p><a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Arnault, Denise Saint; Fetters, Michael D.</p> <p>2013-01-01</p> <p>Mixed methods research has made significant in-roads in the effort to examine complex health-related phenomena. However, little has been published on the funding of mixed methods research projects. This paper addresses that gap by presenting an example of an NIMH-funded project using a mixed methods QUAL-QUAN triangulation design entitled "The Mixed-Method Analysis of Japanese Depression." We present the Cultural Determinants of Health Seeking model that framed the study, the specific aims, the quantitative and qualitative data sources informing the study, and an overview of the mixing of the two studies. Finally, we examine reviewers' comments and our insights related to writing a mixed-methods proposal that can succeed in achieving RO1-level funding. PMID:25419196</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/28979004"><span>Summary and Synthesis: How to Present a Research Proposal.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Setia, Maninder Singh; Panda, Saumya</p> <p>2017-01-01</p> <p>This concluding module attempts to synthesize the key learning points discussed during the course of the previous ten sets of modules on methodology and biostatistics. The objective of this module is to discuss how to present a model research proposal, based on what was discussed in the preceding modules.
The lynchpin of a research proposal is the protocol, and the key component of a protocol is the study design. However, one must not neglect the other areas, be it the project summary through which one catches the eye of the reviewer, the background and literature review, or the aims and objectives of the study. Two critical areas in the "methods" section that cannot be emphasized enough are the sampling strategy and a formal estimation of sample size. Without a legitimate sample size, none of the conclusions based on the statistical analysis would be valid. Finally, the ethical parameters of the study should be well understood by the researchers, and that should be reflected in the proposal.</p> </li> <li> <p><a href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=4482002"><span>Collaborative Localization and Location Verification in WSNs</span></a></p> <p><a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Miao, Chunyu; Dai, Guoyong; Ying, Kezhen; Chen, Qingzhang</p> <p>2015-01-01</p> <p>Localization is one of the most important technologies in wireless sensor networks. A lightweight distributed node localization scheme is proposed that takes into account the limited computational capacity of WSNs. The proposed scheme introduces the virtual force model to determine the location by incremental refinement. Aiming at solving the drifting problem and the malicious anchor problem, a location verification algorithm based on the virtual force model is presented. In addition, an anchor promotion algorithm using the localization reliability model is proposed to re-locate the drifted nodes. Extended simulation experiments indicate that the localization algorithm has relatively high precision and the location verification algorithm has relatively high accuracy. The communication overhead of these algorithms is relatively low, and the whole set of reliable localization methods is practical as well as comprehensive. PMID:25954948</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2017MS%26E..231a2058G"><span>HWDA: A coherence recognition and resolution algorithm for hybrid web data aggregation</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Guo, Shuhang; Wang, Jian; Wang, Tong</p> <p>2017-09-01</p> <p>Aiming at the object conflict recognition and resolution problem in hybrid distributed data stream aggregation, a distributed data stream object coherence technology is proposed. Firstly, a framework for object coherence conflict recognition and resolution, named HWDA, is defined. Secondly, an object coherence recognition technology is proposed based on formal language description logic and hierarchical dependency relationships between logic rules. Thirdly, a conflict traversal recognition algorithm is proposed based on the defined dependency graph.
Next, a conflict resolution technology is proposed based on resolution pattern matching, including the definition of three types of conflict, conflict resolution matching patterns, and an arbitration resolution method. Finally, experiments on two kinds of web test data sets validate the effectiveness of the HWDA conflict recognition and resolution technology.</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2011SPIE.7963E..1JL"><span>Multiscale quantification of tissue spiculation and distortion for detection of architectural distortion and spiculated mass in mammography</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Lao, Zhiqiang; Zheng, Xin</p> <p>2011-03-01</p> <p>This paper proposes a multiscale method to quantify tissue spiculation and distortion in mammography CAD systems, aiming at improving the sensitivity in detecting architectural distortion and spiculated mass. The approach addresses the difficulty of predetermining the neighborhood size for feature extraction when characterizing lesions demonstrating spiculated mass/architectural distortion that may appear in different sizes. The quantification is based on the recognition of tissue spiculation and distortion patterns using a multiscale first-order phase portrait model in a texture orientation field generated by a Gabor filter bank. A feature map is generated based on the multiscale quantification for each mammogram, and two features are then extracted from the feature map. These two features are combined with other mass features to provide enhanced discriminative ability in detecting lesions demonstrating spiculated mass and architectural distortion. The efficiency and efficacy of the proposed method are demonstrated with results obtained by applying the method to over 500 cancer cases and over 1000 normal cases.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/19905185"><span>Graph-based analysis of kinetics on multidimensional potential-energy surfaces.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Okushima, T; Niiyama, T; Ikeda, K S; Shimizu, Y</p> <p>2009-09-01</p> <p>The aim of this paper is twofold: one is to give a detailed description of an alternative graph-based analysis method, which we call the saddle connectivity graph, for analyzing the global topography and dynamical properties of many-dimensional potential-energy landscapes; the other is to give examples of applications of this method in the analysis of the kinetics of realistic systems. A Dijkstra-type shortest path algorithm is proposed to extract dynamically dominant transition pathways by kinetically defining transition costs. The applicability of this approach is first confirmed by an illustrative example of a low-dimensional random potential. We then show that a coarse-graining procedure tailored for saddle connectivity graphs can be used to obtain the kinetic properties of 13- and 38-atom Lennard-Jones clusters. The coarse-graining method not only reduces the complexity of the graphs, but also, with iterative use, reveals a self-similar hierarchical structure in these clusters. We also propose that the self-similarity is common to many-atom Lennard-Jones clusters.</p> </li>
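<p>A Dijkstra-type search with kinetically defined transition costs is standard shortest-path machinery. The sketch below runs it on a made-up toy landscape where edge weights stand in for barrier heights between minima; the graph and values are illustrative, not data from the paper.</p>
<pre><code class="language-python">
import heapq

def dijkstra(graph, source, target):
    """Shortest path with kinetically defined edge costs, e.g.
    cost(i -> j) ~ barrier height between minimum i and minimum j."""
    dist = {source: 0.0}
    prev = {}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            break
        if d > dist.get(u, float("inf")):
            continue                          # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], dist[target]

# minima A..E connected through saddles; weights are toy barrier heights
graph = {"A": [("B", 1.2), ("C", 3.0)], "B": [("D", 0.8)],
         "C": [("D", 0.4)], "D": [("E", 1.5)]}
print(dijkstra(graph, "A", "E"))   # (['A', 'B', 'D', 'E'], 3.5)
</code></pre>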
<li> <p><a href="http://adsabs.harvard.edu/abs/2018EnOp...50..599L"><span>Concurrent topological design of composite structures and materials containing multiple phases of distinct Poisson's ratios</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Long, Kai; Yuan, Philip F.; Xu, Shanqing; Xie, Yi Min</p> <p>2018-04-01</p> <p>Most studies on composites assume that the constituent phases have different values of stiffness. Little attention has been paid to the effect of constituent phases having distinct Poisson's ratios. This research focuses on a concurrent optimization method for simultaneously designing composite structures and materials with distinct Poisson's ratios. The proposed method aims to minimize the mean compliance of the macrostructure with a given mass of base materials. In contrast to the traditional interpolation of the stiffness matrix through numerical results, an interpolation scheme for the Young's modulus and Poisson's ratio using different parameters is adopted. The numerical results demonstrate that the Poisson effect plays a key role in reducing the mean compliance of the final design. An important contribution of the present study is that the proposed concurrent optimization method can automatically distribute base materials with distinct Poisson's ratios between the macrostructural and microstructural levels under a single constraint on the total mass.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/15613398"><span>Dynamic network reconstruction from gene expression data applied to immune response during bacterial infection.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Guthke, Reinhard; Möller, Ulrich; Hoffmann, Martin; Thies, Frank; Töpfer, Susanne</p> <p>2005-04-15</p> <p>The immune response to bacterial infection represents a complex network of dynamic gene and protein interactions. We present an optimized reverse engineering strategy aimed at the reconstruction of this kind of interaction network. The proposed approach is based on both microarray data and available biological knowledge. The main kinetics of the immune response were identified by fuzzy clustering of gene expression profiles (time series). The number of clusters was optimized using various evaluation criteria. For each cluster a representative gene with a high fuzzy-membership was chosen in accordance with available physiological knowledge. Then hypothetical network structures were identified by seeking systems of ordinary differential equations whose simulated kinetics could fit the gene expression profiles of the cluster-representative genes. For the construction of hypothetical network structures, singular value decomposition (SVD)-based methods and a newly introduced heuristic network generation method were compared. It turned out that the proposed novel method could find sparser networks and gave better fits to the experimental data.</p> </li>
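<p>The fitting step — finding an ODE system whose kinetics reproduce the representative time series — can be illustrated in its simplest linear form: estimate derivatives by finite differences and solve a least-squares problem for the interaction matrix. This deliberately simple stand-in (synthetic three-"gene" data, no sparsity heuristic) shows the principle rather than the authors' heuristic network generation method.</p>
<pre><code class="language-python">
import numpy as np

# Fit a linear ODE system  dx/dt = A x  to expression time series.
rng = np.random.default_rng(3)
A_true = np.array([[-0.5, 0.0, 0.0],
                   [ 0.8, -0.6, 0.0],
                   [ 0.0,  0.7, -0.4]])          # sparse "gene" interactions
dt, n_steps = 0.1, 80
X = np.zeros((n_steps, 3))
X[0] = [1.0, 0.1, 0.1]
for k in range(n_steps - 1):                     # simulate noisy kinetics
    X[k + 1] = X[k] + dt * (A_true @ X[k]) + 0.002 * rng.normal(size=3)

dXdt = np.gradient(X, dt, axis=0)                # finite-difference derivatives
A_est, *_ = np.linalg.lstsq(X, dXdt, rcond=None) # solves X @ A^T ~ dX/dt
print(np.round(A_est.T, 2))                      # compare with A_true
</code></pre>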
<li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/26025553"><span>New approach to the determination of contaminants of emerging concern in natural water: study of alprazolam employing adsorptive cathodic stripping voltammetry.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Nunes, Chalder Nogueira; Pauluk, Lucas Ely; Dos Anjos, Vanessa Egéa; Lopes, Mauro Chierici; Quináia, Sueli Pércio</p> <p>2015-08-01</p> <p>Contaminants of emerging concern (CECs) are chemicals, including pharmaceutical and personal care products, not commonly monitored in the aquatic environment. Pharmaceuticals are nowadays considered an important class of environmental contaminants. Chromatographic methods, which require expensive equipment and complicated sample pretreatment, are used for the detection of CECs in natural water. Thus, in this study we proposed a simple, fast, and low-cost voltammetric method as a screening tool for the determination of CECs in natural water prior to chromatography. A case study was conducted with alprazolam (a benzodiazepine). The method was optimized and validated in-house. The limit of quantification was 0.4 μg L(-1) for a 120 s preconcentration time. The recoveries ranged from 93 to 120% in accuracy tests. A further aim was to determine for the first time the occurrence of alprazolam in Brazilian river water and to evaluate its potential use as a marker of contamination by wastewater.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/19152365"><span>SHOP: a method for structure-based fragment and scaffold hopping.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Fontaine, Fabien; Cross, Simon; Plasencia, Guillem; Pastor, Manuel; Zamora, Ismael</p> <p>2009-03-01</p> <p>A new method for fragment and scaffold replacement is presented that generates new families of compounds with biological activity, using GRID molecular interaction fields (MIFs) and the crystal structure of the targets. In contrast to virtual screening strategies, this methodology aims only to replace a fragment of the original molecule, maintaining the other structural elements that are known or suspected to have a critical role in ligand binding. First, we report a validation of the method, recovering up to 95% of the original fragments searched among the top-five proposed solutions, using 164 fragment queries from 11 diverse targets. Second, six key customizable parameters are investigated, concluding that filtering the receptor MIF using the co-crystallized ligand atom type has the greatest impact on the ranking of the proposed solutions.
Finally, 11 examples using more realistic scenarios were run; diverse chemotypes are returned, including some that are similar to compounds known to bind to similar targets.</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2018SPIE10615E..36W"><span>An efficient method for the fusion of light field refocused images</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Wang, Yingqian; Yang, Jungang; Xiao, Chao; An, Wei</p> <p>2018-04-01</p> <p>Light field cameras have drawn much attention due to the advantage of post-capture adjustments such as refocusing after exposure. The depth of field in refocused images is always shallow because of the large equivalent aperture. As a result, a large number of multi-focus images are obtained and an all-in-focus image is demanded. Most multi-focus image fusion algorithms are not designed for large numbers of source images, and the traditional DWT-based fusion approach has serious problems in dealing with many multi-focus images, causing color distortion and ringing artifacts. To solve this problem, this paper proposes an efficient multi-focus image fusion method based on the stationary wavelet transform (SWT), which can deal with a large quantity of multi-focus images with shallow depths of field. We compare the SWT-based approach with the DWT-based approach in various settings, and the results demonstrate that the proposed method performs much better both visually and quantitatively.</p> </li> <li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/27608451"><span>Social Collaborative Filtering by Trust.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Yang, Bo; Lei, Yu; Liu, Jiming; Li, Wenjie</p> <p>2017-08-01</p> <p>Recommender systems are used to accurately and actively provide users with potentially interesting information or services. Collaborative filtering is a widely adopted approach to recommendation, but sparse data and cold-start users are often barriers to providing high-quality recommendations. To address such issues, we propose a novel method that works to improve the performance of collaborative filtering recommendations by integrating sparse rating data given by users and the sparse social trust network among these same users. This is a model-based method that adopts a matrix factorization technique mapping users into low-dimensional latent feature spaces in terms of their trust relationships, aiming to reflect more accurately the users' reciprocal influence on the formation of their own opinions and to learn better preferential patterns of users for high-quality recommendations. We use four large-scale datasets to show that the proposed method performs much better, especially for cold-start users, than state-of-the-art recommendation algorithms for social collaborative filtering based on trust.</p> </li>
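<p>Trust-regularized matrix factorization is commonly written as SGD updates in which a user's latent vector is pulled toward those of trusted users. The following is a generic SoRec/SoReg-flavored sketch of that idea on synthetic data, not the authors' exact model; the trust lists and all hyperparameters are made up.</p>
<pre><code class="language-python">
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, k = 50, 40, 8
mask = rng.random((n_users, n_items)) < 0.1           # ~10% observed ratings
R = (rng.random((n_users, n_items)) * 5) * mask
trust = {u: rng.choice(n_users, size=3, replace=False).tolist()
         for u in range(n_users)}                     # toy trust network

P = 0.1 * rng.normal(size=(n_users, k))               # user latent factors
Q = 0.1 * rng.normal(size=(n_items, k))               # item latent factors
lr, lam, beta = 0.01, 0.05, 0.1                       # beta weighs the trust pull
for epoch in range(30):
    for u, i in zip(*R.nonzero()):
        err = R[u, i] - P[u] @ Q[i]
        social = P[u] - np.mean([P[v] for v in trust[u]], axis=0)
        P[u] += lr * (err * Q[i] - lam * P[u] - beta * social)
        Q[i] += lr * (err * P[u] - lam * Q[i])

rmse = np.sqrt(np.mean([(R[u, i] - P[u] @ Q[i]) ** 2
                        for u, i in zip(*R.nonzero())]))
print("train RMSE:", round(float(rmse), 3))
</code></pre>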
<li> <p><a href="https://www.ncbi.nlm.nih.gov/pubmed/21172745"><span>Fast H.264/AVC FRExt intra coding using belief propagation.</span></a></p> <p><a href="https://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pubmed">PubMed</a></p> <p>Milani, Simone</p> <p>2011-01-01</p> <p>In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method permits saving up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, unlike other methods where the computational complexity depends on the coded sequence, it permits accurate control of the computational complexity.</p> </li>
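<p>Selecting a reduced mode set from estimated probabilities can be sketched as keeping the most probable modes until a coverage threshold or a cap is reached, and running rate-distortion optimization only on those. The probability vector below is purely illustrative (in the paper it would come from the belief-propagation stage), and the exact selection rule is an assumption.</p>
<pre><code class="language-python">
import numpy as np

def select_modes(probs, coverage=0.9, k_max=4):
    """Keep the most probable intra prediction modes until their cumulative
    probability reaches `coverage`, capped at k_max modes."""
    order = np.argsort(probs)[::-1]
    kept, cum = [], 0.0
    for m in order:
        kept.append(int(m))
        cum += probs[m]
        if cum >= coverage or len(kept) == k_max:
            break
    return kept

probs = np.array([0.02, 0.40, 0.05, 0.25, 0.03, 0.15, 0.04, 0.03, 0.02, 0.01])
print(select_modes(probs))   # [1, 3, 5, 2]: 4 of 10 modes cover 85% of the mass
</code></pre>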
<li> <p><a href="http://adsabs.harvard.edu/abs/2018MSSP..104..575C"><span>An efficient iterative model reduction method for aeroviscoelastic panel flutter analysis in the supersonic regime</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Cunha-Filho, A. G.; Briend, Y. P. J.; de Lima, A. M. G.; Donadon, M. V.</p> <p>2018-05-01</p> <p>The flutter boundary prediction of complex aeroelastic systems is not an easy task. In some cases, these analyses may become prohibitive due to the high computational cost and time associated with the large number of degrees of freedom of the aeroelastic models, particularly when the aeroelastic model incorporates a control strategy aimed at suppressing the flutter phenomenon, such as the use of viscoelastic treatments. In this situation, the use of a model reduction method is essential. However, the construction of a modal reduction basis for aeroviscoelastic systems is still a challenge, owing to the inherent frequency- and temperature-dependent behavior of the viscoelastic materials. Thus, the main contribution of the present study is to propose an efficient and accurate iterative enriched Ritz basis for aeroviscoelastic systems. The main features and capabilities of the proposed model reduction method are illustrated in the prediction of the flutter boundary for a thin three-layer sandwich flat panel and a typical aeronautical stiffened panel, both under supersonic flow.</p> </li> <li> <p><a href="http://adsabs.harvard.edu/abs/2016SPIE10155E..3DC"><span>Beam hardening correction for interior tomography based on exponential formed model and radon inversion transform</span></a></p> <p><a href="http://adsabs.harvard.edu/abstract_service.html">NASA Astrophysics Data System (ADS)</a></p> <p>Chen, Siyu; Zhang, Hanming; Li, Lei; Xi, Xiaoqi; Han, Yu; Yan, Bin</p> <p>2016-10-01</p> <p>X-ray computed tomography (CT) has been extensively applied in industrial non-destructive testing (NDT). However, in practical applications, the polychromaticity of the X-ray beam often results in beam hardening problems for image reconstruction. The beam hardening artifacts, which manifest as cupping, streaks, and flares, not only degrade the image quality but also disturb subsequent analyses. Unfortunately, conventional CT scanning requires that the scanned object be completely covered by the field of view (FOV); state-of-the-art beam hardening correction methods consider only this ideal scanning configuration and often suffer from problems in interior tomography due to projection truncation. To address this problem, this paper proposes a beam hardening correction method based on the Radon inversion transform for interior tomography. Experimental results show that, compared to conventional correction algorithms, the proposed approach achieves excellent performance in both beam hardening artifact reduction and truncation artifact suppression. Therefore, the presented method has vital theoretical and practical significance for artifact correction in industrial CT.</p> </li> <li> <p><a href="https://www.pubmedcentral.nih.gov/articlerender.fcgi?tool=pmcentrez&artid=3061932"><span>Separation of left and right lungs using 3D information of sequential CT images and a guided dynamic programming algorithm</span></a></p> <p><a href="http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?DB=pmc">PubMed Central</a></p> <p>Park, Sang Cheol; Leader, Joseph Ken; Tan, Jun; Lee, Guee Sang; Kim, Soo Hyung; Na, In Seop; Zheng, Bin</p> <p>2011-01-01</p> <p>Objective: this article presents a new computerized scheme that aims to accurately and robustly separate the left and right lungs on CT examinations. Methods: we developed and tested a method to separate the left and right lungs using sequential CT information and a guided dynamic programming algorithm with adaptively and automatically selected start and end points, targeting especially severe and multiple connections. Results: the scheme successfully identified and separated all 827 connections on the 4034 CT images in an independent testing dataset of CT examinations. The proposed scheme separated multiple connections regardless of their locations, and the guided dynamic programming algorithm reduced the computation time to approximately 4.6% of that of traditional dynamic programming while avoiding permeation of the separation boundary into normal lung tissue. Conclusions: the proposed method is able to robustly and accurately disconnect all connections between the left and right lungs, and the guided dynamic programming algorithm removes redundant processing. PMID:21412104</p> </li>
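<p>The dynamic-programming core of such a separation can be shown on a 2-D cost array: starting from a seed column, each row may shift the path at most one column, and the minimum accumulated cost defines the boundary. This toy version illustrates only the DP step; the clinical scheme adds 3-D guidance from neighboring slices, and the cost array here is random.</p>
<pre><code class="language-python">
import numpy as np

def min_cost_path(cost, start_col):
    """Top-to-bottom minimum-cost separation path; each row may move
    at most one column left or right relative to the previous row."""
    h, w = cost.shape
    acc = np.full((h, w), np.inf)
    acc[0, start_col] = cost[0, start_col]
    back = np.zeros((h, w), dtype=int)
    for r in range(1, h):
        for c in range(w):
            prev = [(acc[r - 1, c + d], c + d) for d in (-1, 0, 1) if 0 <= c + d < w]
            best, bc = min(prev)
            acc[r, c] = cost[r, c] + best
            back[r, c] = bc
    c = int(np.argmin(acc[-1]))              # best endpoint on the last row
    path = [c]
    for r in range(h - 1, 0, -1):
        c = back[r, c]
        path.append(c)
    return path[::-1]

rng = np.random.default_rng(5)
cost = rng.random((20, 15))
cost[:, 7] *= 0.1                            # a cheap "junction" column
print(min_cost_path(cost, start_col=7))
</code></pre>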
</ol> </div><!-- col-sm-12 --> </div><!-- row --> </div><!-- page_25 --> </div><!-- container -->