Linnorm: improved statistical analysis for single cell RNA-seq expression data
Yip, Shun H.; Wang, Panwen; Kocher, Jean-Pierre A.; Sham, Pak Chung; Wang, Junwen
2017-01-01
Abstract Linnorm is a novel normalization and transformation method for the analysis of single cell RNA sequencing (scRNA-seq) data. Linnorm is designed to remove technical noise while preserving biological variation in scRNA-seq data, so that existing statistical methods can be improved. Using real scRNA-seq data, we compared Linnorm with existing normalization methods, including NODES, SAMstrt, SCnorm, scran, DESeq and TMM. Linnorm shows advantages in speed, technical noise removal and preservation of cell heterogeneity, which can improve existing methods in the discovery of novel subtypes, pseudo-temporal ordering of cells, clustering analysis, etc. Linnorm also performs better than existing DEG analysis methods, including BASiCS, NODES, SAMstrt, Seurat and DESeq2, in false positive rate control and accuracy. PMID:28981748
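The normalize-then-transform pattern described above can be sketched minimally as library-size normalization followed by a log transform. This is NOT the actual Linnorm algorithm (which additionally models and filters technical noise); it only illustrates the class of operation the abstract describes.

```python
import numpy as np

def normalize_log_transform(counts, scale=1e4):
    """Library-size normalization followed by log1p.

    A generic sketch of scRNA-seq normalization/transformation; the real
    Linnorm method additionally estimates and removes technical noise.

    counts: genes x cells matrix of raw read counts.
    """
    counts = np.asarray(counts, dtype=float)
    lib_sizes = counts.sum(axis=0)           # total counts per cell
    scaled = counts / lib_sizes * scale      # equalize sequencing depth
    return np.log1p(scaled)                  # variance-stabilizing transform
```

After this step every cell contributes the same total (here `scale` counts), so downstream statistics are not dominated by sequencing depth.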
Tan, York Kiat; Allen, John C; Lye, Weng Kit; Conaghan, Philip G; D'Agostino, Maria Antonietta; Chew, Li-Ching; Thumboo, Julian
2016-01-01
A pilot study testing novel ultrasound (US) joint-selection methods in rheumatoid arthritis. The responsiveness of the novel individualized US (IUS) and individualized composite US (ICUS) methods was compared with existing US methods and the Disease Activity Score at 28 joints (DAS28) for 12 patients followed for 3 months. IUS selected up to the 7 or 12 most ultrasonographically inflamed joints, while ICUS additionally incorporated clinically symptomatic joints. The existing, IUS, and ICUS methods' standardized response means were -0.39, -1.08, and -1.11, respectively, for 7 joints; -0.49, -1.00, and -1.16, respectively, for 12 joints; and -0.94 for DAS28. The novel methods demonstrated inflammatory improvement more effectively than the existing methods and DAS28.
Accurate reconstruction of viral quasispecies spectra through improved estimation of strain richness
2015-01-01
Background Estimating the number of different species (richness) in a mixed microbial population has been a main focus in metagenomic research. Existing methods of species richness estimation ride on the assumption that the reads in each assembled contig correspond to only one of the microbial genomes in the population. This assumption and the underlying probabilistic formulations of existing methods are not useful for quasispecies populations where the strains are highly genetically related. The lack of knowledge on the number of different strains in a quasispecies population is observed to hinder the precision of existing Viral Quasispecies Spectrum Reconstruction (QSR) methods due to the uncontrolled reconstruction of a large number of in silico false positives. In this work, we formulated a novel probabilistic method for strain richness estimation specifically targeting viral quasispecies. By using this approach we improved our recently proposed spectrum reconstruction pipeline ViQuaS to achieve higher levels of precision in reconstructed quasispecies spectra without compromising the recall rates. We also discuss how one other existing popular QSR method named ShoRAH can be improved using this new approach. Results On benchmark data sets, our estimation method provided accurate richness estimates (< 0.2 median estimation error) and improved the precision of ViQuaS by 2%-13% and F-score by 1%-9% without compromising the recall rates. We also demonstrate that our estimation method can be used to improve the precision and F-score of ShoRAH by 0%-7% and 0%-5% respectively. Conclusions The proposed probabilistic estimation method can be used to estimate the richness of viral populations with a quasispecies behavior and to improve the accuracy of the quasispecies spectra reconstructed by the existing methods ViQuaS and ShoRAH in the presence of a moderate level of technical sequencing errors. Availability http://sourceforge.net/projects/viquas/ PMID:26678073
Existing methods for improving the accuracy of digital-to-analog converters
NASA Astrophysics Data System (ADS)
Eielsen, Arnfinn A.; Fleming, Andrew J.
2017-09-01
The performance of digital-to-analog converters is principally limited by errors in the output voltage levels. Such errors are known as element mismatch and are quantified by the integral non-linearity. Element mismatch limits the achievable accuracy and resolution in high-precision applications as it causes gain and offset errors, as well as harmonic distortion. In this article, five existing methods for mitigating the effects of element mismatch are compared: physical level calibration, dynamic element matching, noise-shaping with digital calibration, large periodic high-frequency dithering, and large stochastic high-pass dithering. These methods are suitable for improving accuracy when using digital-to-analog converters that use multiple discrete output levels to reconstruct time-varying signals. The methods improve linearity and therefore reduce harmonic distortion and can be retrofitted to existing systems with minor hardware variations. The performance of each method is compared theoretically and confirmed by simulations and experiments. Experimental results demonstrate that three of the five methods provide significant improvements in the resolution and accuracy when applied to a general-purpose digital-to-analog converter. As such, these methods can directly improve performance in a wide range of applications including nanopositioning, metrology, and optics.
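The effect of large dithering on element mismatch can be sketched with a toy simulation: averaging a mismatched quantizer's output over one period of a wide uniform dither smooths the integral non-linearity. The mismatch values below are illustrative only, and this is a simplification of the methods compared in the article.

```python
import numpy as np

# Hypothetical 3-bit DAC: code k should output k volts, but each level
# carries a fixed mismatch error (illustrative values, in LSB).
mismatch = np.array([0.3, -0.2, 0.25, -0.3, 0.2, -0.25, 0.3, -0.2])

def dac(x):
    """Quantize to the nearest code and output the mismatched level."""
    code = np.clip(np.round(x).astype(int), 0, 7)
    return code + mismatch[code]

def dithered_dac(x, width=4, n=400):
    """Average the DAC output over one period of a large uniform dither.

    Spanning `width` LSBs averages the element-mismatch profile, so the
    effective transfer approaches x plus a running mean of the mismatch,
    which is much smaller than the worst-case static error.
    """
    d = np.linspace(-width / 2, width / 2, n, endpoint=False)
    return np.mean([dac(x + di) for di in d])

xs = np.linspace(2.5, 4.5, 41)                     # mid-range inputs
plain_err = max(abs(dac(x) - x) for x in xs)       # static worst case
dith_err = max(abs(dithered_dac(x) - x) for x in xs)
```

The undithered error includes both the quantization step and the level mismatch, while the dither-averaged transfer is nearly linear; in hardware the averaging is performed by the reconstruction low-pass filter rather than an explicit mean.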
Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.
Ye, Jun
2015-03-01
In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposes improved cosine similarity measures of SNSs based on the cosine function, including single valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Weighted cosine similarity measures of SNSs are then introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures is proposed to solve diagnosis problems with simplified neutrosophic information: a proper diagnosis is found via the cosine similarity measures between the symptoms and the considered diseases, each represented by SNSs. We compared the improved measures with existing cosine similarity measures of SNSs in numerical examples, and applied the diagnosis method to two medical diagnosis problems. Both numerical examples demonstrated that the improved cosine similarity measures based on the cosine function can overcome the shortcomings of the existing measures between two vectors in some cases, and in both diagnosis problems the various similarity measures of SNSs yielded identical diagnosis results, demonstrating the effectiveness and rationality of the proposed method.
The improved cosine measures of SNSs based on the cosine function can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
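One plausible form of the cosine-function-based similarity described above can be sketched as follows. The use of the maximum component difference is an assumption for illustration; the paper defines its own specific single-valued and interval variants.

```python
import math

def improved_cosine_similarity(A, B):
    """Cosine-function-based similarity between two single-valued
    neutrosophic sets A, B, given as equal-length lists of (T, I, F)
    triples with components in [0, 1].

    Sketch of the cosine-function family: for each element, take the
    largest of the truth/indeterminacy/falsity differences (an assumed
    choice) and map it through cos(pi * diff / 2), then average.
    """
    total = 0.0
    for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
        diff = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
        total += math.cos(math.pi * diff / 2)
    return total / len(A)
```

Identical sets score 1, and maximally different elements (component difference 1) contribute 0, which matches the general behavior the abstract attributes to the cosine-function measures.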
This manual is intended as a source document for individuals responsible for improving the performance of an existing, non-complying wastewater treatment facility. Described are: 1) methods to evaluate an existing facility's capability to achieve improved performance, 2) a ...
Infrared Ship Target Segmentation Based on Spatial Information Improved FCM.
Bai, Xiangzhi; Chen, Zhiguo; Zhang, Yu; Liu, Zhaoying; Lu, Yi
2016-12-01
Segmentation of infrared (IR) ship images is always a challenging task because of intensity inhomogeneity and noise. Fuzzy C-means (FCM) clustering is a classical method widely used in image segmentation. However, it has shortcomings, such as ignoring spatial information and sensitivity to noise. In this paper, an improved FCM method based on spatial information is proposed for IR ship target segmentation. The improvements comprise two parts: 1) adding nonlocal spatial information based on the ship target and 2) using the spatial shape information of the contour of the ship target to refine the local spatial constraint via a Markov random field. In addition, the results of K-means are used to initialize the improved FCM method. Experimental results show that the improved method is effective and performs better than existing methods, including existing FCM methods, for segmentation of IR ship images.
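The baseline that the improved method builds on is plain FCM. A minimal 1-D sketch of the standard alternating updates (no spatial or nonlocal terms, which are the paper's contribution) looks like this:

```python
import numpy as np

def fcm(data, k=2, m=2.0, iters=100, eps=1e-9):
    """Plain fuzzy C-means on a 1-D array (no spatial constraints).

    Alternates the standard membership update
        u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
    and the weighted-mean center update. Centers are initialized from
    quantiles for determinism; the paper instead uses K-means results.
    """
    data = np.asarray(data, dtype=float)
    centers = np.quantile(data, np.linspace(0, 1, k))
    for _ in range(iters):
        d = np.abs(data[None, :] - centers[:, None]) + eps   # k x n
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)),
                         axis=1)
        # u[i, j]: membership of point j in cluster i; columns sum to 1
        w = u ** m
        centers = (w @ data) / w.sum(axis=1)
    return centers, u
```

On well-separated data the memberships become nearly crisp and the centers converge to the cluster means; the paper's spatial terms modify the membership update to resist noise.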
NASA Astrophysics Data System (ADS)
Wang, Bin; Wu, Xinyuan
2014-11-01
In this paper we consider multi-frequency highly oscillatory second-order differential equations x″(t) + Mx(t) = f(t, x(t), x′(t)), where the high-frequency oscillations are generated by the linear part Mx(t), and M is positive semi-definite (not necessarily nonsingular). Filon-type methods are known to be an effective approach to numerically solving highly oscillatory problems. Unfortunately, however, existing Filon-type asymptotic methods fail to apply to highly oscillatory second-order differential equations when M is singular. We propose an efficient improvement of the existing Filon-type asymptotic methods, so that the improved methods can numerically solve this class of multi-frequency highly oscillatory systems with a singular matrix M. The improved Filon-type asymptotic methods are designed by combining Filon-type methods with asymptotic methods based on the variation-of-constants formula. We also present one efficient and practical improved Filon-type asymptotic method that can be performed at lower cost. Accompanying numerical results show the remarkable efficiency.
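The variation-of-constants formula underlying such methods can be written, in a standard form from the literature on oscillatory second-order systems (notation may differ from the paper's), as

```latex
x(t) = \phi_0\!\left(t^2 M\right) x_0 + t\,\phi_1\!\left(t^2 M\right) x'_0
     + \int_0^{t} (t-\xi)\,\phi_1\!\left((t-\xi)^2 M\right)
       f\bigl(\xi, x(\xi), x'(\xi)\bigr)\,\mathrm{d}\xi ,
\qquad
\phi_0(W) = \sum_{k\ge 0} \frac{(-W)^k}{(2k)!},\quad
\phi_1(W) = \sum_{k\ge 0} \frac{(-W)^k}{(2k+1)!}.
```

Because \(\phi_0\) and \(\phi_1\) are entire matrix functions, this representation remains well defined even when M is singular, which is precisely the case where the improved methods are needed.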
PARTNERING TO IMPROVE HUMAN EXPOSURE METHODS
Methods development research is an application-driven scientific area that addresses programmatic needs. The goals are to reduce measurement uncertainties, address data gaps, and improve existing analytical procedures for estimating human exposures. Partnerships have been develop...
Groth, Katrina M.; Smith, Curtis L.; Swiler, Laura P.
2014-04-05
In the past several years, several international agencies have begun to collect data on human performance in nuclear power plant simulators [1]. These data provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without the implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but they have not been adopted by the HRA community. In this article, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods in improving the technical basis of HRA.
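The kind of update described can be sketched with a minimal conjugate Beta-Binomial step: treat the existing method's HEP as the mean of a Beta prior and update it with simulator error counts. The prior-strength and count values below are illustrative, not taken from the article.

```python
def update_hep(prior_hep, prior_strength, errors, trials):
    """Beta-Binomial update of a human error probability (HEP).

    prior_hep:      HEP assigned by the existing HRA method (prior mean)
    prior_strength: pseudo-count weight placed on that prior
    errors, trials: observed simulator failures out of total attempts
    Returns the posterior mean HEP.
    """
    a = prior_hep * prior_strength          # prior pseudo-failures
    b = (1 - prior_hep) * prior_strength    # prior pseudo-successes
    return (a + errors) / (a + b + trials)
```

For example, a SPAR-H-style prior of 0.01 held with weight 100, confronted with 5 errors in 50 simulator trials, moves to a posterior mean of 0.04: sparse data shift the estimate without discarding the existing method's assignment.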
Boomerang: A method for recursive reclassification.
Devlin, Sean M; Ostrovnaya, Irina; Gönen, Mithat
2016-09-01
While there are many validated prognostic classifiers used in practice, often their accuracy is modest and heterogeneity in clinical outcomes exists in one or more risk subgroups. Newly available markers, such as genomic mutations, may be used to improve the accuracy of an existing classifier by reclassifying patients from a heterogeneous group into a higher or lower risk category. The statistical tools typically applied to develop the initial classifiers are not easily adapted toward this reclassification goal. In this article, we develop a new method designed to refine an existing prognostic classifier by incorporating new markers. The two-stage algorithm called Boomerang first searches for modifications of the existing classifier that increase the overall predictive accuracy and then merges to a prespecified number of risk groups. Resampling techniques are proposed to assess the improvement in predictive accuracy when an independent validation data set is not available. The performance of the algorithm is assessed under various simulation scenarios where the marker frequency, degree of censoring, and total sample size are varied. The results suggest that the method selects few false positive markers and is able to improve the predictive accuracy of the classifier in many settings. Lastly, the method is illustrated on an acute myeloid leukemia data set where a new refined classifier incorporates four new mutations into the existing three category classifier and is validated on an independent data set. © 2016, The International Biometric Society.
Embedded WENO: A design strategy to improve existing WENO schemes
NASA Astrophysics Data System (ADS)
van Lith, Bart S.; ten Thije Boonkkamp, Jan H. M.; IJzerman, Wilbert L.
2017-02-01
Embedded WENO methods utilise all adjacent smooth substencils to construct a desirable interpolation. Conventional WENO schemes under-use this possibility close to large gradients or discontinuities. We develop a general approach for constructing embedded versions of existing WENO schemes. Embedded methods based on the WENO schemes of Jiang and Shu [1] and on the WENO-Z scheme of Borges et al. [2] are explicitly constructed. Several possible choices are presented that result in either better spectral properties or a higher order of convergence for sufficiently smooth solutions. Moreover, these improvements carry over to discontinuous solutions. The embedded methods are demonstrated to be indeed improvements over their standard counterparts by several numerical examples. All the embedded methods presented require no added computational effort compared with their standard counterparts.
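The standard scheme that the embedded variants improve on can be sketched concretely: the classical fifth-order WENO reconstruction of Jiang and Shu blends three third-order substencil candidates with nonlinear weights built from smoothness indicators.

```python
import numpy as np

def weno5_reconstruct(v, eps=1e-6):
    """Classical WENO-JS reconstruction of the interface value v_{i+1/2}
    from the five cell averages v = (v_{i-2}, ..., v_{i+2}).

    Three third-order candidate stencils are blended with nonlinear
    weights derived from the Jiang-Shu smoothness indicators.
    """
    v0, v1, v2, v3, v4 = v
    # candidate reconstructions from the three substencils
    p = np.array([(2*v0 - 7*v1 + 11*v2) / 6,
                  ( -v1 + 5*v2 +  2*v3) / 6,
                  (2*v2 + 5*v3 -    v4) / 6])
    # smoothness indicators
    b = np.array([13/12*(v0 - 2*v1 + v2)**2 + 1/4*(v0 - 4*v1 + 3*v2)**2,
                  13/12*(v1 - 2*v2 + v3)**2 + 1/4*(v1 - v3)**2,
                  13/12*(v2 - 2*v3 + v4)**2 + 1/4*(3*v2 - 4*v3 + v4)**2])
    d = np.array([0.1, 0.6, 0.3])     # optimal linear weights
    a = d / (eps + b)**2              # nonlinear weights
    w = a / a.sum()
    return float(w @ p)
```

Near a discontinuity the weights of non-smooth substencils collapse toward zero; the embedded construction in the paper changes how the remaining smooth substencils are combined in exactly that regime.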
The Data-to-Action Framework: A Rapid Program Improvement Process
ERIC Educational Resources Information Center
Zakocs, Ronda; Hill, Jessica A.; Brown, Pamela; Wheaton, Jocelyn; Freire, Kimberley E.
2015-01-01
Although health education programs may benefit from quality improvement methods, scant resources exist to help practitioners apply these methods for program improvement. The purpose of this article is to describe the Data-to-Action framework, a process that guides practitioners through rapid-feedback cycles in order to generate actionable data to…
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, a reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, it uses a deterministic method to find a nearby solution. Examples of the generation of boundaries, including tests with an equilibrium solver, are given.
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
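The bead-placement idea described above (new beads at midpoints, concentrating resolution near the transition state) can be illustrated with a 1-D toy version; the actual method works on reaction pathways in molecular coordinate space, so everything here is illustrative.

```python
def refine_path(energy, beads, iterations=20):
    """Insert new beads at the midpoints flanking the current
    highest-energy bead, so path resolution grows fastest around the
    transition-state region (1-D toy of the midpoint-placement idea).
    """
    beads = sorted(beads)
    for _ in range(iterations):
        i = max(range(len(beads)), key=lambda j: energy(beads[j]))
        new = []
        if i > 0:
            new.append((beads[i - 1] + beads[i]) / 2)
        if i < len(beads) - 1:
            new.append((beads[i] + beads[i + 1]) / 2)
        beads = sorted(beads + new)
    return max(beads, key=energy)

# Toy double-well energy (x^2 - 1)^2: minima at x = -1 and x = 1,
# transition state (the barrier maximum between them) at x = 0.
ts = refine_path(lambda x: (x**2 - 1)**2, [-1.0, -0.3, 1.0])
```

Because the spacing around the highest-energy bead roughly halves with each insertion, the best transition-state estimate converges geometrically, mirroring the exponential growth in local resolution described in the abstract.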
Model Uncertainty Quantification Methods In Data Assimilation
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.
2017-12-01
Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high-dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real-world scenarios. These include methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.
ECO-ITS : Intelligent Transportation System Applications to Improve Environmental Performance
DOT National Transportation Integrated Search
2012-05-01
This report describes recent research supported by the US DOT's AERIS program, building upon existing work through developing and improving data collection methods, developing new data fusion techniques to improve estimates, and applying appropriat...
A hybrid voice/data modulation for the VHF aeronautical channels
NASA Technical Reports Server (NTRS)
Akos, Dennis M.
1993-01-01
A method of improving the spectral efficiency of the existing Very High Frequency (VHF) Amplitude Modulation (AM) voice communication channels is proposed. The technique is to phase modulate the existing voice amplitude modulated carrier with digital data. This allows the transmission of digital information over an existing AM voice channel with no change to the existing AM signal format. There is no modification to the existing AM receiver to demodulate the voice signal and an additional receiver module can be added for processing of the digital data. The existing VHF AM transmitter requires only a slight modification for the addition of the digital data signal. The past work in the area is summarized and presented together with an improved system design and the proposed implementation.
A basic guide to overlay design using nondestructive testing equipment data
NASA Astrophysics Data System (ADS)
Turner, Vernon R.
1990-08-01
The purpose of this paper is to provide a basic and concise guide to designing asphalt concrete (AC) overlays over existing AC pavements. The basis for these designs is deflection data obtained from nondestructive testing (NDT) equipment. These data are used in design procedures that produce a required overlay thickness or an estimate of remaining pavement life. This guide enables one to design overlays or to better monitor designs performed by others. The paper discusses three types of NDT equipment; the Asphalt Institute overlay designs by deflection analysis and by the effective-thickness method; a method of estimating remaining pavement life; and correlations between NDT equipment, including recent correlations in Washington State. Asphalt overlays provide one of the most cost-effective methods of improving existing pavements: they can be used to strengthen existing pavements, reduce maintenance costs, increase pavement life, provide a smoother ride, and improve skid resistance.
Patient, staff and physician satisfaction: a new model, instrument and their implications.
York, Anne S; McCarthy, Kim A
2011-01-01
Customer satisfaction's importance is well-documented in the marketing literature and is rapidly gaining wide acceptance in the healthcare industry. The purpose of this paper is to introduce a new customer-satisfaction measuring method - Reichheld's ultimate question - and compare it with traditional techniques using data gathered from four healthcare clinics. A new survey method, called the ultimate question, was used to collect patient satisfaction data. It was subsequently compared with the data collected via an existing method. Findings suggest that the ultimate question provides similar ratings to existing models at lower costs. A relatively small sample size may affect the generalizability of the results; it is also possible that potential spill-over effects exist owing to two patient satisfaction surveys administered at the same time. This new ultimate question method greatly improves the process and ease with which hospital or clinic administrators are able to collect patient (as well as staff and physician) satisfaction data in healthcare settings. Also, the feedback gained from this method is actionable and can be used to make strategic improvements that will impact business and ultimately increase profitability. The paper's real value is pinpointing specific quality improvement areas based not just on patient ratings but also physician and staff satisfaction, which often underlie patients' clinical experiences.
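Reichheld's "ultimate question" ("How likely are you to recommend us?") underlies the Net Promoter Score. A minimal sketch of the standard scoring, which the clinics' survey data would feed into, looks like this:

```python
def net_promoter_score(ratings):
    """Net Promoter Score from 0-10 answers to the 'ultimate question'.

    Standard scoring: promoters rate 9-10, detractors rate 0-6, and
    passives (7-8) count only in the denominator.
    NPS = 100 * (promoters - detractors) / respondents.
    """
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100.0 * (promoters - detractors) / len(ratings)
```

Part of the method's low cost, as the paper notes, is this simplicity: a single question and a one-line aggregate replace a multi-item satisfaction instrument.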
DOT National Transportation Integrated Search
2011-06-01
Micro-electromechanical systems (MEMS) provide vast improvements over existing sensing methods in the context of structural health monitoring (SHM) of highway infrastructure systems, including improved system reliability, improved longevity and enhan...
Sunyit Visiting Faculty Research
2012-01-01
Deblurring Images Corrupted by Mixed Impulse plus Gaussian Noise / Department of Mathematics, Syracuse University. This work studies a problem of image restoration in which observed images are contaminated by Gaussian and impulse noise. Existing methods in the literature are based on minimizing an objective function. Improvements in both PSNR and visual quality of IFASDA over a typical existing method are demonstrated.
An improved conjugate gradient scheme to the solution of least squares SVM.
Chu, Wei; Ong, Chong Jin; Keerthi, S Sathiya
2005-03-01
The least squares support vector machine (LS-SVM) formulation corresponds to the solution of a linear system of equations. Several approaches to its numerical solution have been proposed in the literature. In this letter, we propose an improved method for the numerical solution of the LS-SVM and show that the problem can be solved using one reduced system of linear equations. Compared with the existing algorithm for the LS-SVM, the approach used in this letter is about twice as efficient. Numerical results using the proposed method are provided for comparisons with other existing algorithms.
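The linear system at the heart of the LS-SVM can be made concrete with the standard dual formulation. The sketch below solves the well-known full KKT system directly with a dense solver, not the letter's reduced conjugate-gradient scheme, so it only illustrates the problem being solved.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between 1-D sample arrays."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=1e6, sigma=1.0):
    """Solve the LS-SVM regression dual as one linear system:

        [ 0   1^T          ] [b]     [0]
        [ 1   K + I/gamma  ] [alpha] [y]

    Returns (alpha, b) for prediction with the RBF kernel.
    """
    K = rbf(X, X, sigma)
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]

def lssvm_predict(Xtr, alpha, b, Xte, sigma=1.0):
    return rbf(Xte, Xtr, sigma) @ alpha + b
```

Because the residual at the training points equals alpha/gamma, a large gamma makes the fit interpolate the targets almost exactly; the letter's contribution is solving this same system more efficiently via one reduced system.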
Rail Inspection Systems Analysis and Technology Survey
DOT National Transportation Integrated Search
1977-09-01
The study was undertaken to identify existing rail inspection system capabilities and methods which might be used to improve these capabilities. Task I was a study to quantify existing inspection parameters and Task II was a cost effectiveness study ...
EMUDRA: Ensemble of Multiple Drug Repositioning Approaches to Improve Prediction Accuracy.
Zhou, Xianxiao; Wang, Minghui; Katsyv, Igor; Irie, Hanna; Zhang, Bin
2018-04-24
Availability of large-scale genomic, epigenetic and proteomic data in complex diseases makes it possible to objectively and comprehensively identify therapeutic targets that can lead to new therapies. The Connectivity Map has been widely used to explore novel indications of existing drugs. However, the prediction accuracy of the existing methods, such as the Kolmogorov-Smirnov statistic, remains low. Here we present a novel high-performance drug repositioning approach that improves over the state-of-the-art methods. We first designed an expression-weighted cosine method (EWCos) to minimize the influence of uninformative expression changes and then developed an ensemble approach termed EMUDRA (Ensemble of Multiple Drug Repositioning Approaches) to integrate EWCos and three existing state-of-the-art methods. EMUDRA significantly outperformed the individual drug repositioning methods when applied to simulated and independent evaluation datasets. Using EMUDRA, we predicted and then experimentally validated the antibiotic rifabutin as an inhibitor of cell growth in triple-negative breast cancer. EMUDRA can identify drugs that more effectively target disease gene signatures and will thus be a useful tool for identifying novel therapies for complex diseases and predicting new indications for existing drugs. The EMUDRA R package is available at doi:10.7303/syn11510888. bin.zhang@mssm.edu or zhangb@hotmail.com. Supplementary data are available at Bioinformatics online.
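The idea of an expression-weighted cosine can be sketched as a weighted cosine similarity that down-weights genes with small (uninformative) expression changes. The weighting by the absolute query signature is an illustrative assumption; the actual EWCos definition is in the EMUDRA paper and R package.

```python
import numpy as np

def weighted_cosine(query, drug, weights=None):
    """Weighted cosine similarity between a disease signature and a
    drug-induced signature (both log-fold-change vectors over genes).

    Weighting scheme is an assumption for illustration: genes with
    larger absolute query changes count more, muting the influence of
    uninformative expression changes.
    """
    q = np.asarray(query, dtype=float)
    d = np.asarray(drug, dtype=float)
    w = np.abs(q) if weights is None else np.asarray(weights, dtype=float)
    num = np.sum(w * q * d)
    den = np.sqrt(np.sum(w * q * q) * np.sum(w * d * d))
    return num / den
```

As with the plain cosine, a perfectly mimicking signature scores +1 and a perfectly reversing one scores -1; for repositioning against a disease signature one typically ranks drugs by the most negative (reversal) scores.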
An Improved Aerial Target Localization Method with a Single Vector Sensor
Zhao, Anbang; Bi, Xuejie; Hui, Juan; Zeng, Caigao; Ma, Lin
2017-01-01
This paper focuses on problems encountered when processing real data with existing aerial target localization methods, analyzes their causes, and proposes an improved algorithm. Processing of sea-experiment data shows that the existing algorithms place stringent requirements on the accuracy of the angle estimation. The improved algorithm relaxes these requirements and obtains robust estimation results. A closest-distance matching estimation algorithm and a horizontal-distance estimation compensation algorithm are proposed. Post-processing the data with a forward-backward two-direction double-filtering method improves smoothing and allows the initial-stage data to be filtered, so that the filtering results retain more useful information. Aerial target height measurement methods are also studied and estimation results are given, realizing three-dimensional localization of the aerial target and increasing the underwater platform's awareness of aerial targets, thereby giving the platform better mobility and concealment. PMID:29135956
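The "forward and backward two-direction double-filtering" can be sketched as running the same one-pass smoother in both directions, so the phase lags of the two passes cancel and early-time samples are smoothed as well as late ones. The first-order exponential smoother below is an assumed stand-in for whatever filter the paper uses.

```python
import numpy as np

def ema(x, alpha):
    """One-pass (causal) exponential smoother."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

def forward_backward_filter(x, alpha=0.3):
    """Filter forward, then filter the reversed result and reverse back.

    The two passes cancel each other's phase lag (zero-phase smoothing),
    which is why the initial-stage data can be smoothed without the
    startup transient dominating, as described in the abstract.
    """
    forward = ema(np.asarray(x, dtype=float), alpha)
    return ema(forward[::-1], alpha)[::-1]
```

This is the same idea as SciPy's `filtfilt`-style zero-phase filtering, applied here with a deliberately simple smoother.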
Improvement in Existing Chest Wall Irregularities During Breast Reconstruction
Huber, Katherine M.; Zimmerman, Amanda; Dayicioglu, Deniz
2018-01-01
Mastectomies for both cancer resection and risk reduction are becoming more common. Existing chest wall irregularities are found in these women presenting for breast reconstruction after mastectomy and can pose reconstructive challenges. Women who desired breast reconstruction after mastectomy were evaluated preoperatively for existing chest wall irregularities. Case reports were selected to highlight common irregularities and methods for improving cosmetic outcome concurrently with breast reconstruction procedures. Muscular anomalies, pectus excavatum, scoliosis, polythelia case reports are discussed. Relevant data from the literature are presented. Chest wall irregularities are occasionally encountered in women who request breast reconstruction. Correction of these deformities is possible and safe during breast reconstruction and can lead to improved cosmetic outcome and patient satisfaction. PMID:29318956
Lombaert, Herve; Grady, Leo; Polimeni, Jonathan R.; Cheriet, Farida
2013-01-01
Existing methods for surface matching are limited by the trade-off between precision and computational efficiency. Here we present an improved algorithm for dense vertex-to-vertex correspondence that directly matches features defined on a surface and regularizes the matching with spectral correspondence. The algorithm has the speed of both feature matching and spectral matching while exhibiting greatly improved precision (distance errors of 1.4%). The method, FOCUSR, implicitly incorporates such additional features to calculate the correspondence and relies on the smoothness of the lowest-frequency harmonics of a graph Laplacian to spatially regularize the features. In its simplest form, FOCUSR is an improved spectral correspondence method that nonrigidly deforms spectral embeddings. We provide here a full realization of spectral correspondence in which virtually any feature can serve as additional information, not only as weights on graph edges but also on graph nodes and as extra embedded coordinates. As an example, the full power of FOCUSR is demonstrated in a real-case scenario: the challenging task of brain surface matching across several individuals. Our results show that combining features and regularizing them in a spectral embedding greatly improves matching precision (to a sub-millimeter level) while performing at much greater speed than existing methods. PMID:23868776
A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.
Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang
2017-01-01
Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection in noisy biological time series. Our method is a significant improvement over traditional change point detection methods, which examine a potential anomaly only at a single time point. In contrast, our method considers all suspected anomaly points and the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method on three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements from a fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
DOT National Transportation Integrated Search
2003-10-01
Fog seals are a method of adding asphalt to an existing pavement surface to improve sealing or waterproofing, prevent further stone loss by holding aggregate in place, or simply improve the surface appearance. However, inappropriate use can result in...
Balbale, Salva N.; Locatelli, Sara M.; LaVela, Sherri L.
2016-01-01
In this methodological article, we examine participatory methods in-depth to demonstrate how these methods can be adopted for quality improvement (QI) projects in health care. We draw on existing literature and our QI initiatives in the Department of Veterans Affairs to discuss the application of photovoice and guided tours in QI efforts. We highlight lessons learned and several benefits of using participatory methods in this area. Using participatory methods, evaluators can engage patients, providers and other stakeholders as partners to enhance care. Participant involvement helps yield actionable data that can be translated into improved care practices. Use of these methods also helps generate key insights to inform improvements that truly resonate with stakeholders. Using participatory methods is a valuable strategy to harness participant engagement and drive improvements that address individual needs. In applying these innovative methodologies, evaluators can transcend traditional approaches to uniquely support evaluations and improvements in health care. PMID:26667882
Numerical simulation for the air entrainment of aerated flow with an improved multiphase SPH model
NASA Astrophysics Data System (ADS)
Wan, Hang; Li, Ran; Pu, Xunchi; Zhang, Hongwei; Feng, Jingjie
2017-11-01
Aerated flow is a complex hydraulic phenomenon that exists widely in the field of environmental hydraulics. It is generally characterised by large deformation and violent fragmentation of the free surface. Compared to Euler methods (the volume of fluid (VOF) method or the rigid-lid hypothesis method), the existing single-phase Smoothed Particle Hydrodynamics (SPH) method has performed well for solving particle motion. A lack of research on interphase interaction and air concentration, however, has limited the application of the SPH model. In our study, an improved multiphase SPH model is presented to simulate aerated flows. A drag force is included in the momentum equation to ensure the accuracy of the air particle slip velocity. Furthermore, a calculation method for air concentration is developed to analyse the air entrainment characteristics. Two case studies were used to simulate the hydraulic and air entrainment characteristics, and the simulation results agree well with the experimental results.
Selection of remedial alternatives for mine sites: a multicriteria decision analysis approach.
Betrie, Getnet D; Sadiq, Rehan; Morin, Kevin A; Tesfamariam, Solomon
2013-04-15
The selection of remedial alternatives for mine sites is a complex task because it involves multiple criteria, often with conflicting objectives. However, the existing framework used to select remedial alternatives lacks multicriteria decision analysis (MCDA) aids and does not consider uncertainty in the selection of alternatives. The objective of this paper is to improve the existing framework by introducing deterministic and probabilistic MCDA methods. The Preference Ranking Organization Method for Enrichment Evaluation (PROMETHEE) methods have been implemented in this study. The MCDA analysis involves preparing the inputs to the PROMETHEE methods, namely identifying the alternatives, defining the criteria, deriving the criteria weights using the analytic hierarchy process (AHP), defining the probability distributions of the criteria weights, and conducting Monte Carlo simulation (MCS); running the PROMETHEE methods with these inputs; and conducting a sensitivity analysis. A case study was presented to demonstrate the improved framework at a mine site. The results showed that the improved framework provides a reliable way of selecting remedial alternatives as well as quantifying the impact of different criteria on the selection. Copyright © 2013 Elsevier Ltd. All rights reserved.
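For readers unfamiliar with PROMETHEE II, the core computation (pairwise preference degrees aggregated into net outranking flows) can be sketched as follows. The usual-criterion (step) preference function is chosen purely for brevity, and the scores and weights are illustrative, not from the paper:

```python
def promethee_ii(scores, weights):
    """scores[a][j]: performance of alternative a on criterion j (higher is
    better); weights[j] sum to 1. Returns the net outranking flow phi(a)."""
    n = len(scores)

    def pref(a, b):
        # usual (step) preference: full weight on any criterion where a beats b
        return sum(w for j, w in enumerate(weights) if scores[a][j] > scores[b][j])

    phi = []
    for a in range(n):
        plus = sum(pref(a, b) for b in range(n) if b != a) / (n - 1)   # phi+
        minus = sum(pref(b, a) for b in range(n) if b != a) / (n - 1)  # phi-
        phi.append(plus - minus)  # net flow: rank alternatives by this
    return phi
```

Alternatives are then ranked by decreasing net flow; the probabilistic variant in the paper repeats this ranking over weights sampled by Monte Carlo simulation.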
A New Adaptive Framework for Collaborative Filtering Prediction
Almosallam, Ibrahim A.; Shang, Yi
2010-01-01
Collaborative filtering is one of the most successful techniques for recommendation systems and has been used in many commercial services provided by major companies including Amazon, TiVo and Netflix. In this paper we focus on memory-based collaborative filtering (CF). Existing CF techniques work well on dense data but poorly on sparse data. To address this weakness, we propose to use z-scores instead of explicit ratings and introduce a mechanism that adaptively combines global statistics with item-based values based on data density level. We present a new adaptive framework that encapsulates various CF algorithms and the relationships among them. An adaptive CF predictor is developed that can self adapt from user-based to item-based to hybrid methods based on the amount of available ratings. Our experimental results show that the new predictor consistently obtained more accurate predictions than existing CF methods, with the most significant improvement on sparse data sets. When applied to the Netflix Challenge data set, our method performed better than existing CF and singular value decomposition (SVD) methods and achieved 4.67% improvement over Netflix’s system. PMID:21572924
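The z-score idea described above can be roughly sketched as follows; the de-normalisation scheme shown is a simplified assumption for illustration, not the paper's exact adaptive predictor:

```python
import statistics

def zscore_predict(ratings, target_user, item):
    """ratings: {user: {item: rating}}. Predict target_user's rating for item
    by averaging co-raters' z-scores for that item, then mapping back through
    the target user's own mean and spread (simplified illustrative scheme)."""
    mu = {u: statistics.mean(r.values()) for u, r in ratings.items()}
    sd = {u: statistics.pstdev(r.values()) or 1.0 for u, r in ratings.items()}
    zs = [(ratings[u][item] - mu[u]) / sd[u]
          for u in ratings if u != target_user and item in ratings[u]]
    z = sum(zs) / len(zs) if zs else 0.0  # fall back to the user's mean
    return mu[target_user] + z * sd[target_user]
```

Because z-scores remove each user's rating scale, sparse co-rating data contribute comparable quantities, which is the weakness of raw-rating CF that the paper targets.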
Indoor Air Quality in Chemistry Laboratories.
ERIC Educational Resources Information Center
Hays, Steve M.
This paper presents air quality and ventilation data from an existing chemical laboratory facility and discusses the work practice changes implemented in response to deficiencies in ventilation. General methods for improving air quality in existing laboratories are presented and investigation techniques for characterizing air quality are…
FMLRC: Hybrid long read error correction using an FM-index.
Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D
2018-02-09
Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limits their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency relative to existing methods will help researchers make better economic use of emerging long read sequencing technologies.
Ensemble-based prediction of RNA secondary structures.
Aghaeepour, Nima; Hoos, Holger H
2013-04-24
Accurate structure prediction methods play an important role for the understanding of RNA function. Energy-based, pseudoknot-free secondary structure prediction is one of the most widely used and versatile approaches, and improved methods for this task have received much attention over the past five years. Despite the impressive progress that has been achieved in this area, existing evaluations of the prediction accuracy achieved by various algorithms do not provide a comprehensive, statistically sound assessment. Furthermore, while there is increasing evidence that no prediction algorithm consistently outperforms all others, no work has been done to exploit the complementary strengths of multiple approaches. In this work, we present two contributions to the area of RNA secondary structure prediction. Firstly, we use state-of-the-art, resampling-based statistical methods together with a previously published and increasingly widely used dataset of high-quality RNA structures to conduct a comprehensive evaluation of existing RNA secondary structure prediction procedures. The results from this evaluation clarify the performance relationship between ten well-known existing energy-based pseudoknot-free RNA secondary structure prediction methods and clearly demonstrate the progress that has been achieved in recent years. Secondly, we introduce AveRNA, a generic and powerful method for combining a set of existing secondary structure prediction procedures into an ensemble-based method that achieves significantly higher prediction accuracies than obtained from any of its component procedures. Our new, ensemble-based method, AveRNA, improves the state of the art for energy-based, pseudoknot-free RNA secondary structure prediction by exploiting the complementary strengths of multiple existing prediction procedures, as demonstrated using a state-of-the-art statistical resampling approach.
In addition, AveRNA allows an intuitive and effective control of the trade-off between false negative and false positive base pair predictions. Finally, AveRNA can make use of arbitrary sets of secondary structure prediction procedures and can therefore be used to leverage improvements in prediction accuracy offered by algorithms and energy models developed in the future. Our data, MATLAB software and a web-based version of AveRNA are publicly available at http://www.cs.ubc.ca/labs/beta/Software/AveRNA.
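One simple way to combine several secondary-structure predictions, loosely in the spirit of an ensemble method like AveRNA (though not its actual algorithm), is to vote on individual base pairs and keep well-supported, non-conflicting ones:

```python
from collections import Counter

def combine_structures(structures, threshold=0.5):
    """structures: list of sets of (i, j) base pairs, one set per predictor.
    Keep pairs supported by at least `threshold` of the predictors, greedily
    resolving conflicts so each base ends up in at most one pair."""
    votes = Counter(p for s in structures for p in s)
    n = len(structures)
    chosen, used = set(), set()
    for pair, v in votes.most_common():   # most-supported pairs first
        i, j = pair
        if v / n >= threshold and i not in used and j not in used:
            chosen.add(pair)
            used.update(pair)
    return chosen
```

Raising the threshold trades false positive base pairs for false negatives, mirroring the trade-off control the abstract describes.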
A Design To Improve Children's Competencies in Solving Mathematical Word Problems.
ERIC Educational Resources Information Center
Zimmerman, Helene
A discrepancy exists between children's ability to compute and their ability to solve mathematical word problems. The literature suggests a variety of methods that have been attempted to improve this skill with varying success. The utilization of manipulatives, visualization, illustration, and emphasis on improving listening skills all were…
Alanazi, Hamdan O; Abdullah, Abdul Hanan; Qureshi, Kashif Naseer
2017-04-01
Recently, Artificial Intelligence (AI) has been widely used in medicine and the health care sector. In machine learning, classification and prediction are major fields of AI. Today, the study of existing predictive models based on machine learning methods is extremely active. Doctors need accurate predictions of the outcomes of their patients' diseases. In addition, for accurate predictions, timing is another significant factor that influences treatment decisions. In this paper, existing predictive models in medicine and health care are critically reviewed. Furthermore, the most prominent machine learning methods are explained, and the confusion between a statistical approach and machine learning is clarified. A review of the related literature reveals that the predictions of existing predictive models differ even when the same dataset is used. Therefore, existing predictive models are essential, and current methods must be improved.
Hybrid Tracking Algorithm Improvements and Cluster Analysis Methods.
1982-02-26
UPGMA), and Ward's method. Ling's papers describe a (k,r) clustering method. Each of these methods has individual characteristics which make them...Reference 7), UPGMA is probably the most frequently used clustering strategy. UPGMA tries to group new points into an existing cluster by using an
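UPGMA (unweighted pair group method with arithmetic mean) itself is easy to state: repeatedly merge the two closest clusters and recompute distances to the merged cluster as size-weighted averages. A minimal pure-Python sketch (toy data, not from the report):

```python
def upgma(dist, labels):
    """dist[(x, y)]: distance between leaves x and y (each unordered pair
    listed once). Returns a nested tuple recording the merge order."""
    size = {lab: 1 for lab in labels}                 # cluster -> leaf count
    d = {frozenset(k): v for k, v in dist.items()}
    while len(size) > 1:
        pair = min(d, key=d.get)                      # closest pair of clusters
        a, b = tuple(pair)
        merged = (a, b)
        na, nb = size.pop(a), size.pop(b)
        # average-linkage update: distance to each remaining cluster is the
        # size-weighted mean of the distances from a and from b
        for c in list(size):
            dac = d.pop(frozenset((a, c)))
            dbc = d.pop(frozenset((b, c)))
            d[frozenset((merged, c))] = (na * dac + nb * dbc) / (na + nb)
        del d[pair]
        size[merged] = na + nb
    return next(iter(size))
```

On the toy matrix below, A and B (distance 2) merge first, then join C.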
Sculpting bespoke mountains: Determining free energies with basis expansions
NASA Astrophysics Data System (ADS)
Whitmer, Jonathan K.; Fluitt, Aaron M.; Antony, Lucas; Qin, Jian; McGovern, Michael; de Pablo, Juan J.
2015-07-01
The intriguing behavior of a wide variety of physical systems, ranging from amorphous solids or glasses to proteins, is a direct manifestation of underlying free energy landscapes riddled with local minima separated by large barriers. Exploring such landscapes has arguably become one of the great challenges of statistical physics. A new method is proposed here for uniform sampling of rugged free energy surfaces. The method, which relies on special Green's functions to approximate the Dirac delta function, improves significantly on existing simulation techniques by providing a boundary-agnostic approach capable of mapping complex features in multidimensional free energy surfaces. The usefulness of the proposed approach is established in the context of a simple model glass former and model proteins, demonstrating improved convergence and accuracy over existing methods.
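As a loosely related illustration (not the paper's method), replacing the delta-function counts of an ordinary histogram with smooth Gaussian basis functions and taking F(x) = -kT ln P(x) gives a crude free-energy estimate from sampled configurations:

```python
import math

def free_energy_estimate(samples, xs, sigma=0.25, kT=1.0):
    """Kernel-density stand-in for a basis expansion of the delta function:
    each sample contributes a Gaussian of width sigma instead of a delta
    spike. Returns F(x) = -kT ln P(x), shifted so min(F) = 0."""
    norm = 1.0 / (len(samples) * sigma * math.sqrt(2 * math.pi))
    fes = []
    for x in xs:
        p = norm * sum(math.exp(-0.5 * ((x - s) / sigma) ** 2) for s in samples)
        fes.append(-kT * math.log(p))
    fmin = min(fes)
    return [f - fmin for f in fes]
```

Regions rich in samples come out as low free energy; sparsely visited regions are high, which is the basic landscape picture the abstract describes.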
The 1999 ICSI/IHI colloquium on clinical quality improvement--"quality: settling the frontier".
Palmersheim, T M
1999-12-01
A Colloquium on Clinical Quality Improvement, "Quality: Setting the Frontier," held in May 1999, covered methods and programs in clinical quality improvement. Leadership and organizational behavior were the main themes of the breakout sessions; specific topics included implementing guidelines, applying continuous quality improvement (CQI) methods in preventive services and primary care, and using systems thinking to improve clinical outcomes. Three keynote addresses were presented. James L. Reinertsen, MD (CareGroup, Boston), characterized the financial challenges faced by many health care organizations as a "clarion call" for leadership on quality. "The leadership imperative is to establish an environment in which quality can thrive, despite unprecedented, severe economic pressures on our health systems." How do we make improvement more effective? G. Ross Baker, PhD (University of Toronto), reviewed what organizational literature says about making teams more effective, understanding the organizational context to enable improvement work, and augmenting existing methods for creating sustainable improvement. For example, he noted the increasing interest among many organizations in rapid-cycle improvement but cautioned that such efforts may work best where problems can be addressed by existing clinical teams (not cross-functional work groups) and where there are available solutions that have worked in other settings. Mark Chassin, MD (Mount Sinai School of Medicine, New York), stated that critical tasks for improving quality include increasing public awareness, engaging clinicians in improvement, increasing the investment in producing measures and improvement tools, and reinventing health care delivery, clinical education and training, and QI.
Re-refinement from deposited X-ray data can deliver improved models for most PDB entries
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joosten, Robbie P.; Womack, Thomas; Vriend, Gert, E-mail: vriend@cmbi.ru.nl
2009-02-01
An evaluation of validation and real-space intervention possibilities for improving existing automated (re-)refinement methods. The deposition of X-ray data along with the customary structural models defining PDB entries makes it possible to apply large-scale re-refinement protocols to these entries, thus giving users the benefit of improvements in X-ray methods that have occurred since the structure was deposited. Automated gradient refinement is an effective method to achieve this goal, but real-space intervention is most often required in order to adequately address problems detected by structure-validation software. In order to improve the existing protocol, automated re-refinement was combined with structure validation and difference-density peak analysis to produce a catalogue of problems in PDB entries that are amenable to automatic correction. It is shown that re-refinement can be effective in producing improvements, which are often associated with the systematic use of the TLS parameterization of B factors, even for relatively new and high-resolution PDB entries, while the accompanying manual or semi-manual map analysis and fitting steps show good prospects for eventual automation. It is proposed that the potential for simultaneous improvements in methods and in re-refinement results be further encouraged by broadening the scope of depositions to include refinement metadata and ultimately primary rather than reduced X-ray data.
Evaluation of Service and Methods Demonstration Projects : Philosophy and Approach
DOT National Transportation Integrated Search
1976-05-01
The Urban Mass Transportation Administration's Service and Methods Demonstration (SMD) Program has the objective of improving existing transit operations by sponsoring the development and implementation of new techniques and services on a nation-wide...
NASA Astrophysics Data System (ADS)
Tamboli, Prakash Kumar; Duttagupta, Siddhartha P.; Roy, Kallol
2015-08-01
The paper deals with dynamic compensation of delayed Self-Powered Flux Detectors (SPFDs) using a discrete-time H∞ filtering method to improve the response of SPFDs with significant delayed components, such as Platinum and Vanadium SPFDs. We also present a comparative study between Linear Matrix Inequality (LMI) based H∞ filtering and Algebraic Riccati Equation (ARE) based Kalman filtering with respect to their delay compensation capabilities. Finally, an improved recursive H∞ filter based on an adaptive fading memory technique is proposed, which provides improved performance over existing methods. The existing delay compensation algorithms do not account for the rate of change of the signal when determining the filter gain and therefore add significant noise during delay compensation. The proposed adaptive fading memory H∞ filter minimizes the overall noise very effectively while keeping the response time at a minimum. The recursive algorithm is easy to implement in real time compared to the LMI (or ARE) based solutions.
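The fading-memory idea can be illustrated with a scalar recursive filter that inflates its prior covariance by a factor alpha > 1 each step, so old data are progressively discounted. This is a simplified Kalman-style stand-in, not the paper's H∞ formulation:

```python
def fading_memory_filter(zs, q=0.01, r=1.0, alpha=1.05):
    """Scalar recursive filter. alpha > 1 inflates the predicted covariance
    each step ("fading memory"), which keeps the gain from collapsing and
    lets the estimate track changes faster at the cost of more noise."""
    x, p = zs[0], 1.0
    out = []
    for z in zs:
        p = alpha * alpha * p + q        # fade memory, add process noise
        k = p / (p + r)                  # gain
        x = x + k * (z - x)              # measurement update
        p = (1 - k) * p
        out.append(x)
    return out
```

With alpha = 1 this reduces to an ordinary steady-state scalar Kalman filter; an adaptive variant would tune alpha from the innovation z - x, analogous in spirit to the adaptive scheme in the paper.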
An Abstraction-Based Data Model for Information Retrieval
NASA Astrophysics Data System (ADS)
McAllister, Richard A.; Angryk, Rafal A.
Language ontologies provide an avenue for automated lexical analysis that may be used to supplement existing information retrieval methods. This paper presents a method of information retrieval that takes advantage of WordNet, a lexical database, to generate paths of abstraction, and uses them as the basis for an inverted index structure for retrieving documents from an indexed corpus. We present this method as an entry point to a line of research on using ontologies to perform word-sense disambiguation and improve the precision of existing information retrieval techniques.
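The abstraction-path indexing idea can be sketched with a toy hand-built ontology standing in for WordNet's hypernym hierarchy (the terms and structure below are invented for illustration): each document is indexed not only under its words but under every ancestor concept on the word's path of abstraction.

```python
# Hypothetical toy ontology: term -> parent concept (root maps to None).
ONTOLOGY = {"dog": "canine", "canine": "mammal", "cat": "feline",
            "feline": "mammal", "mammal": "animal", "animal": None}

def abstraction_path(term):
    """Walk from a term up to the root, collecting its path of abstraction."""
    path = []
    while term is not None:
        path.append(term)
        term = ONTOLOGY.get(term)
    return path

def build_index(docs):
    """docs: {doc_id: [words]}. Inverted index keyed by every concept on
    each word's abstraction path, so a query for 'mammal' finds 'dog' docs."""
    index = {}
    for doc_id, words in docs.items():
        for w in words:
            for concept in abstraction_path(w):
                index.setdefault(concept, set()).add(doc_id)
    return index
```

A query for an abstract concept then retrieves documents that never mention it literally, which is the retrieval behaviour the paper is after.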
Method and apparatus for improving the performance of a nuclear power electrical generation system
Tsiklauri, Georgi V.; Durst, Bruce M.
1995-01-01
A method and apparatus for improving the efficiency and performance of a nuclear electrical generation system that comprises the addition of steam handling equipment to an existing plant, resulting in a surprising increase in plant performance. More particularly, a gas turbine electrical generation system with heat recovery boiler is installed along with a high pressure and a low pressure mixer superheater. Depending upon plant characteristics, the existing moisture separator reheater (MSR) can be either augmented or done away with. The instant invention enables a reduction in T.sub.hot without a derating of the reactor unit, and improves efficiency of the plant's electrical conversion cycle. Coupled with this advantage is a possible extension of the plant's fuel cycle length due to an increased electrical conversion efficiency. The reduction in T.sub.hot further allows for a surprising extension of steam generator life. An additional advantage is the reduction in erosion/corrosion of secondary system components including turbine blades and diaphragms. The gas turbine generator used in the instant invention can also replace or augment existing peak or emergency power needs.
NASA Astrophysics Data System (ADS)
Jang, T. S.
2018-03-01
A dispersion-relation preserving (DRP) method, a semi-analytic iterative procedure, was proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating a nonlinear dispersive wave system because it preserves the dispersion relation; however, it has potential flaws, e.g., a restriction on nonlinear wave amplitude and a small region of convergence (ROC). To remedy these flaws, a new DRP method is proposed in this paper, aimed at improving convergence performance. The improved method is proved to have convergence properties and a dispersion-relation preserving nature for small waves; unique existence of the solutions is also proved. In addition, a numerical experiment confirms that the method is well suited to observing nonlinear wave phenomena such as moving solitary waves and their binary collision with different wave amplitudes. In particular, it presents a much wider ROC than that of the previous method by Jang (2017) and gives numerical simulations of high (large-amplitude) nonlinear dispersive waves. In fact, it is demonstrated to simulate a large-amplitude solitary wave and the collision of two large-amplitude solitary waves, which the previous method failed to simulate. Conclusively, better convergence results are achieved compared to Jang (2017), representing a major improvement in practice over the previous method.
Efficient genotype compression and analysis of large genetic variation datasets
Layer, Ryan M.; Kindlon, Neil; Karczewski, Konrad J.; Quinlan, Aaron R.
2015-01-01
Genotype Query Tools (GQT) is a new indexing strategy that expedites analyses of genome variation datasets in VCF format based on sample genotypes, phenotypes and relationships. GQT's compressed genotype index minimizes decompression for analysis, and its performance advantage over existing methods grows with cohort size. We show substantial (up to 443-fold) performance gains over existing methods and demonstrate GQT's utility for exploring massive datasets involving thousands to millions of genomes. PMID:26550772
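The genotype-indexing idea (answering sample-level queries without decompressing full records) can be caricatured with per-variant carrier bitmasks, so that set operations over samples become bitwise operations. This is an illustrative sketch, not GQT's actual compressed index:

```python
def build_index(genotypes):
    """genotypes[variant] = list of per-sample allele counts (0/1/2).
    Store one integer bitmask per variant marking alt-allele carriers."""
    index = {}
    for variant, gts in genotypes.items():
        mask = 0
        for sample, g in enumerate(gts):
            if g > 0:
                mask |= 1 << sample
        index[variant] = mask
    return index

def carriers_of_all(index, variants):
    """Samples carrying an alt allele at every listed variant: one AND per
    variant instead of scanning genotype records."""
    mask = ~0
    for v in variants:
        mask &= index[v]
    return [s for s in range(mask.bit_length()) if mask >> s & 1]
```

Real implementations additionally compress these bit vectors (e.g. word-aligned run-length schemes) so the AND runs directly on compressed data.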
Improved mapping of radio sources from VLBI data by least-square fit
NASA Technical Reports Server (NTRS)
Rodemich, E. R.
1985-01-01
A method is described for improved mapping of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. If the deviation of the model from the observed values is used to measure the closeness of this fit, the problem becomes one of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods, which are shown to converge automatically to the minimum with no user intervention. The resulting brightness distribution furnishes the best fit to the data among all brightness distributions of the given resolution.
NASA Astrophysics Data System (ADS)
Ebrahimian, Ali; Wilson, Bruce N.; Gulliver, John S.
2016-05-01
Impervious surfaces are useful indicators of the urbanization impacts on water resources. Effective impervious area (EIA), which is the portion of total impervious area (TIA) that is hydraulically connected to the drainage system, is a better catchment parameter in the determination of actual urban runoff. Development of reliable methods for quantifying EIA rather than TIA is currently one of the knowledge gaps in the rainfall-runoff modeling context. The objective of this study is to improve the rainfall-runoff data analysis method for estimating the EIA fraction in urban catchments by eliminating the subjective part of the existing method and reducing the uncertainty of EIA estimates. First, the theoretical framework is generalized using a general linear least squares model and a general criterion for categorizing runoff events. Issues with the existing method that reduce the precision of the EIA fraction estimates are then identified and discussed. Two improved methods, based on ordinary least squares (OLS) and weighted least squares (WLS) estimates, are proposed to address these issues. The proposed weighted least squares method is then applied to eleven urban catchments in Europe, Canada, and Australia. The results are compared to map-measured directly connected impervious area (DCIA) and are shown to be consistent with DCIA values. In addition, both of the improved methods are applied to nine urban catchments in Minnesota, USA. Both methods were successful in removing the subjective component inherent in the analysis of rainfall-runoff data of the current method. The WLS method is more robust than the OLS method and generates results that are different and more precise than the OLS method in the presence of heteroscedastic residuals in our rainfall-runoff data.
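The weighted least squares fit at the heart of the improved method has a simple closed form for a straight line; in the rainfall-runoff setting the slope of runoff depth against rainfall depth estimates the EIA fraction. A sketch (variable names and weights are illustrative, not the paper's exact formulation):

```python
def weighted_least_squares(x, y, w):
    """Fit y = a + b*x minimising sum w_i (y_i - a - b*x_i)^2.
    With x = rainfall depth and y = runoff depth per event, the slope b
    plays the role of the EIA fraction; w_i down-weights noisy events."""
    sw = sum(w)
    xm = sum(wi * xi for wi, xi in zip(w, x)) / sw   # weighted means
    ym = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - xm) * (yi - ym) for wi, xi, yi in zip(w, x, y)) /
         sum(wi * (xi - xm) ** 2 for wi, xi in zip(w, x)))
    a = ym - b * xm
    return a, b
```

Setting all weights equal recovers the OLS estimate; unequal weights are what make the fit robust to heteroscedastic residuals.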
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict incipient faults in transformer oil accurately so that transformer maintenance can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions because each method is only suitable for certain conditions. Many previous works have reported the use of intelligent methods to predict transformer faults; however, it is believed that their accuracy can still be improved. Since a combination of artificial neural network (ANN) and particle swarm optimisation (PSO) techniques has not been used in previously reported work, this work proposes combining ANN with various PSO techniques to predict incipient transformer faults. The advantages of PSO are its simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with results from actual fault diagnoses, an existing diagnosis method and ANN alone. The results of the proposed methods were also compared with previously reported work to show their improvement. The proposed ANN-Evolutionary PSO method was found to yield a higher percentage of correct transformer fault-type identification than the existing diagnosis method and previously reported works.
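As a minimal illustration of the PSO component alone (not the paper's ANN-PSO hybrid), the following sketch minimizes a toy objective standing in for a network's diagnosis error; the swarm size and coefficients are conventional defaults assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def cost(x):                        # toy objective standing in for ANN error
    return np.sum(x**2, axis=-1)

n, dim = 20, 2
pos = rng.uniform(-5, 5, (n, dim))  # particle positions
vel = np.zeros((n, dim))            # particle velocities
pbest = pos.copy()                  # each particle's best-seen position
pbest_val = cost(pos)
gbest = pbest[np.argmin(pbest_val)].copy()   # swarm's best position

w_in, c1, c2 = 0.7, 1.5, 1.5        # inertia and acceleration coefficients
for _ in range(100):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = cost(pos)
    better = val < pbest_val
    pbest[better], pbest_val[better] = pos[better], val[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(float(cost(gbest)))           # near the minimum at the origin
```

In the hybrid scheme the paper describes, the position vector would encode ANN weights and `cost` would be the network's misclassification error rather than this toy function.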
Methods for estimating bicycling and walking in Washington state.
DOT National Transportation Integrated Search
2014-05-01
This report presents the work performed in the first and second phases in the process of creating a method to : calculate Bicycle and Pedestrian Miles Traveled (BMT/PMT) for the state of Washington. First, we recommend : improvements to the existing ...
The economic impact of drag in general aviation
NASA Technical Reports Server (NTRS)
Neal, R. D.
1975-01-01
General aviation aircraft fuel consumption and operating costs are closely linked to drag reduction methods. Improvements in airplane drag are envisioned for new models; their effects will be in the 5 to 10% range. Major improvements in fuel consumption over existing turbofan airplanes will be the combined results of improved aerodynamics plus additional effects from advanced turbofan engine designs.
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
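The 51-state filter itself is beyond the scope of an abstract, but the predict/update cycle it builds on can be illustrated with a scalar Kalman filter estimating a constant sensor bias; all noise values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
true_bias = 0.3                                  # hypothetical constant bias
meas = true_bias + 0.5 * rng.normal(size=200)    # noisy observations of it

x, P = 0.0, 1.0      # state estimate and its variance
Q, R = 1e-6, 0.25    # process and measurement noise variances (assumed)
for z in meas:
    # predict: constant-bias model, so the variance just grows by Q
    P = P + Q
    # update: blend prediction with measurement via the Kalman gain
    K = P / (P + R)
    x = x + K * (z - x)
    P = (1 - K) * P

print(round(x, 3))   # converges near the true bias
```

A calibration filter like the paper's stacks many such states (biases, scale factors, misalignments, g-sensitivities, lever arms) into one vector and replaces the scalars with matrices, but the predict/update logic is the same.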
Polymer/Silicate Nanocomposites Developed for Improved Thermal Stability and Barrier Properties
NASA Technical Reports Server (NTRS)
Campbell, Sandi G.
2001-01-01
The nanoscale reinforcement of polymers is becoming an attractive means of improving the properties and stability of polymers. Polymer-silicate nanocomposites are a relatively new class of materials with phase dimensions typically on the order of a few nanometers. Because of their nanometer-size features, nanocomposites possess unique properties typically not shared by more conventional composites. Polymer-layered silicate nanocomposites can attain a certain degree of stiffness, strength, and barrier properties with far less ceramic content than comparable glass- or mineral-reinforced polymers. Reinforcement of existing and new polyimides by this method offers an opportunity to greatly improve existing polymer properties without altering current synthetic or processing procedures.
Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial
2013-01-01
Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies, and that the many existing methods provide differing results, we aim to construct a morphometric tool for measuring different aspects of the shape and size of vascular vessels in a complete and accurate way. The tool presented is based on vessel closing, an essential property for properly characterizing the size and shape of vascular and lymphatic vessels. The method is fast and accurate, improving on existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered as a single object. PMID:24489494
Comparing and improving reconstruction methods for proxies based on compositional data
NASA Astrophysics Data System (ADS)
Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.
2017-12-01
Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.
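As a minimal sketch of the Gaussian-process machinery referenced (not the authors' hierarchical multivariate model), the following computes a GP posterior mean with a squared-exponential kernel on hypothetical one-dimensional data:

```python
import numpy as np

def rbf(a, b, ell=1.0, var=1.0):
    """Squared-exponential covariance between point sets a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ell**2)

# Hypothetical training data: an environmental variable sampled along
# one proxy axis (here a smooth sine curve stands in for real data).
X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.sin(X)
noise = 1e-4

K = rbf(X, X) + noise * np.eye(len(X))
Xs = np.array([1.5])       # test location between training points
Ks = rbf(Xs, X)

# GP posterior mean at the test point: k(x*, X) K^{-1} y
mu = Ks @ np.linalg.solve(K, y)
print(mu[0])               # close to sin(1.5)
```

The full model in the paper couples one such process per species hierarchically, which is what lets the reconstruction propagate uncertainty honestly instead of underestimating it.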
Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging
Patel, Tapan P.; Man, Karen; Firestein, Bonnie L.; Meaney, David F.
2017-01-01
Background: Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s–1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high-speed fluorescence imaging data is lacking. New method: Here we introduce FluoroSNNAP, the Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. Results: We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule-associated protein tau and wild-type tau. Comparison with existing method(s): We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. Conclusions: We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. PMID:25629800
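Template-based transient detection of the kind the comparison favors can be sketched as normalized cross-correlation against a calcium-transient-shaped template; this is illustrative only, not FluoroSNNAP's actual implementation, and the trace and template shapes are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic trace: baseline noise plus one calcium-like transient at t = 100
t = np.arange(30)
template = np.exp(-t / 10.0) * (1 - np.exp(-t / 2.0))   # fast rise, slow decay
trace = 0.05 * rng.normal(size=300)
trace[100:130] += template

# Slide the template and compute a normalized correlation at each offset
tmpl = (template - template.mean()) / template.std()
n = len(template)
scores = np.array([
    np.dot(tmpl, (trace[i:i + n] - trace[i:i + n].mean())
           / (trace[i:i + n].std() + 1e-12)) / n
    for i in range(len(trace) - n)
])

onset = int(np.argmax(scores))
print(onset)   # detected onset near sample 100
```

Because the score is normalized per window, a matched transient scores near 1 while noise-only windows score near 0, which is why template matching outperforms simple peak thresholds on noisy traces.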
NASA Astrophysics Data System (ADS)
Vinh, T.
1980-08-01
There is a need for better and more effective lightning protection for transmission and switching substations. In the past, a number of empirical methods were utilized to design systems to protect substations and transmission lines from direct lightning strokes. There is a need for convenient analytical lightning models adequate for engineering usage. In this study, analytical lightning models were developed, along with a method for improved analysis of the physical properties of lightning through their use. This method of analysis is based upon the most recent statistical field data. The result is an improved method for predicting the occurrence of shielding failure and for designing more effective protection of high and extra-high voltage substations from direct strokes.
Improve the prediction of RNA-binding residues using structural neighbours.
Li, Quan; Cao, Zanxia; Liu, Haiyan
2010-03-01
The interactions of RNA-binding proteins (RBPs) with RNA play key roles in managing some of the cell's basic functions. The identification and prediction of RNA-binding sites is important for understanding RNA-binding mechanisms. Computational approaches are being developed to predict RNA-binding residues based on sequence- or structure-derived features. To achieve higher prediction accuracy, improvements on current prediction methods are necessary. We identified that the structural neighbors of RNA-binding and non-RNA-binding residues have different amino acid compositions. Combining this structure-derived feature with evolutionary (PSSM) and other structural information (secondary structure and solvent accessibility) significantly improves the predictions over existing methods. Using a multiple linear regression approach and 6-fold cross validation, our best model achieves an overall correct rate of 87.8% and an MCC of 0.47, with a specificity of 93.4%, and correctly predicts 52.4% of the RNA-binding residues for a dataset containing 107 non-homologous RNA-binding proteins. Compared with existing methods, including the amino acid compositions of structural neighbors leads to a clear improvement. A web server was developed for predicting RNA-binding residues in a protein sequence (or structure), which is available at http://mcgill.3322.org/RNA/.
Statistical Methods Applied to Gamma-ray Spectroscopy Algorithms in Nuclear Security Missions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fagan, Deborah K.; Robinson, Sean M.; Runkle, Robert C.
2012-10-01
In a wide range of nuclear security missions, gamma-ray spectroscopy is a critical research and development priority. One particularly relevant challenge is the interdiction of special nuclear material, for which gamma-ray spectroscopy supports the goals of detecting and identifying gamma-ray sources. This manuscript examines the existing set of spectroscopy methods, attempts to categorize them by the statistical methods on which they rely, and identifies methods that have yet to be considered. Our examination shows that current methods effectively estimate the effect of counting uncertainty but in many cases do not address larger sources of decision uncertainty, ones that are significantly more complex. We thus explore the premise that significantly improving algorithm performance requires greater coupling between the problem physics that drives data acquisition and the statistical methods that analyze such data. Untapped statistical methods, such as Bayesian model averaging and hierarchical and empirical Bayes methods, have the potential to reduce decision uncertainty by more rigorously and comprehensively incorporating all sources of uncertainty. We expect that application of such methods will demonstrate progress in meeting the needs of nuclear security missions by improving on the existing numerical infrastructure for which these analyses have not been conducted.
An overview of recent developments and current status of gluten ELISA methods
USDA-ARS?s Scientific Manuscript database
ELISA methods for detecting and quantitating allergens have been around for some time and they are continuously improved. In this context, the development of gluten methods is no exception. Around the turn of the millennium, doubts were raised whether the existing “Skerritt-ELISA” would meet the 20 ...
Liang, Sai; Qu, Shen; Xu, Ming
2016-02-02
To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (the production-based method) or that indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to these important producers or drivers of environmental pressures, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using fewer upstream inputs to produce unitary output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths, extracted from structural path analysis, that pass through a particular sector. Taking China as an example, we find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, unobtainable with existing methods, into the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods for guiding sector-level environmental pressure mitigation strategies.
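The betweenness idea, scoring a sector by the supply-chain path weight that passes through it, can be sketched on a tiny hypothetical flow network; the sectors and flow shares below are invented, and simple path enumeration stands in for structural path analysis:

```python
# Tiny hypothetical supply-chain DAG: edges sector -> sector with flow shares
edges = {
    "mining":       [("metals", 0.8), ("chemicals", 0.2)],
    "metals":       [("machinery", 0.7), ("construction", 0.3)],
    "chemicals":    [("construction", 1.0)],
    "machinery":    [],
    "construction": [],
}

def paths(node, weight=1.0, trail=()):
    """Enumerate weighted paths from `node` down to terminal sectors."""
    trail = trail + (node,)
    if not edges[node]:
        yield trail, weight
        return
    for nxt, share in edges[node]:
        yield from paths(nxt, weight * share, trail)

# Betweenness-like score: total path weight passing *through* a sector,
# i.e. counting only interior positions on each path.
score = {s: 0.0 for s in edges}
for trail, w in paths("mining"):
    for sector in trail[1:-1]:
        score[sector] += w

print(score)   # "metals" dominates as the key transmission sector
```

Neither endpoint of a path is credited, which is exactly why a sector can rank highly here while being unremarkable under production-based (origin) or consumption-based (destination) accounting.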
NASA Technical Reports Server (NTRS)
Garcia, F., Jr.
1974-01-01
The solution of a complex entry optimization problem was studied. The problem was transformed into a two-point boundary value problem by using classical calculus of variations methods. Two perturbation methods were devised. These methods attempted to reduce the sensitivity of the solution of this type of problem to the required initial co-state estimates. Numerical results are also presented for the optimal solutions resulting from a number of different initial co-state estimates. The perturbation methods were compared and found to be an improvement over existing methods.
Measure Guideline. Wood Window Repair, Rehabilitation, and Replacement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, P.; Eng, P.
2012-12-01
This measure guideline provides information and guidance on rehabilitating, retrofitting, and replacing existing window assemblies in residential construction. The intent is to provide information regarding means and methods to improve the energy and comfort performance of existing wood window assemblies in a way that takes into consideration component durability, in-service operation, and long term performance of the strategies.
Measure Guideline: Window Repair, Rehabilitation, and Replacement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, P.
2012-12-01
This measure guideline provides information and guidance on rehabilitating, retrofitting, and replacing existing window assemblies in residential construction. The intent is to provide information regarding means and methods to improve the energy and comfort performance of existing wood window assemblies in a way that takes into consideration component durability, in-service operation, and long term performance of the strategies.
Methods for Upgrading an Intramural-Recreational Sports Program: An Agency Report.
ERIC Educational Resources Information Center
Newman, Richard E.; Miller, Michael T.
This study assessed the state of intramural-recreational (IR) programs at Peru State College (Nebraska) and offered suggestions for the improvement of existing IR programs. The existing IR sports program is directed by a part-time adjunct staff member with the aid of student assistants and receives limited support. Upgrading the directorship of…
Big Data is a powerful tool for environmental improvements in the construction business
NASA Astrophysics Data System (ADS)
Konikov, Aleksandr; Konikov, Gregory
2017-10-01
The work investigates the possibility of applying Big Data methods as a tool for environmental improvements in the construction business. The approach is recognized as effective in analyzing large volumes of heterogeneous data, and all preconditions exist for it to be used successfully to resolve environmental issues in the construction business. It is shown that the principal Big Data techniques (cluster analysis, crowdsourcing, data mixing and integration) can be applied in the sphere in question. It is concluded that Big Data is a truly powerful tool for implementing environmental improvements in the construction business.
An improved design method for EPC middleware
NASA Astrophysics Data System (ADS)
Lou, Guohuan; Xu, Ran; Yang, Chunming
2014-04-01
To address the problems and difficulties that small and medium enterprises currently face when using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, and based on an analysis of EPC middleware principles, an improved design method for EPC middleware is presented. This method leverages the MySQL database, using it to connect readers with the upper application system instead of developing an ALE application program interface, to achieve middleware with general functionality. The structure is simple and easy to implement and maintain. Under this structure, newly added readers of different types can be configured conveniently, improving the extensibility of the system.
[The methods of assessment of health risk from exposure to radon and radon daughters].
Demin, V F; Zhukovskiy, M V; Kiselev, S M
2014-01-01
A critical analysis of existing models of the dose-effect relationship (RDE) for the effect of radon exposure on human health has been performed, leading to the conclusion that improving these models is both necessary and possible. A new, improved version of the RDE has been developed. A technique for assessing the human health risk of radon exposure is described, including a method for estimating radon exposure doses, the improved RDE model, and the risk assessment methodology proper. The methodology is proposed for use in the territory of Russia.
Multiple Testing of Gene Sets from Gene Ontology: Possibilities and Pitfalls.
Meijer, Rosa J; Goeman, Jelle J
2016-09-01
The use of multiple testing procedures in the context of gene-set testing is an important but relatively underexposed topic. If a multiple testing method is used, it is usually a standard familywise error rate (FWER) or false discovery rate (FDR) controlling procedure in which the logical relationships that exist between the different (self-contained) hypotheses are not taken into account. Taking those relationships into account, however, can lead to more powerful variants of existing multiple testing procedures and can make summarizing and interpreting the final results easier. We show that, from the perspective of interpretation as well as from the perspective of power improvement, FWER controlling methods are more suitable than FDR controlling methods. As an example of a possible power improvement, we suggest a modified version of the popular method by Holm, which we have also implemented in the R package cherry.
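For reference, the unmodified Holm step-down procedure that the suggested variant builds on can be sketched as follows; this is plain Holm, not the cherry package's modified version:

```python
def holm(pvals, alpha=0.05):
    """Return a boolean rejection decision per hypothesis (Holm step-down)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):
        # compare the rank-th smallest p-value against alpha / (m - rank)
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break                 # step-down: stop at the first failure
    return reject

print(holm([0.001, 0.04, 0.03, 0.005]))   # -> [True, False, False, True]
```

Holm controls the FWER at level alpha with no independence assumptions; exploiting the logical relationships between gene-set hypotheses, as the paper proposes, tightens the per-step thresholds further.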
Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images
Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki
2015-01-01
In various unmanned aerial vehicle (UAV) imaging applications, the multisensor super-resolution (SR) technique has become a chronic problem and attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to make a higher resolution (HR) image to improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm by combining the directionally-adaptive constraints and multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744
Prediction-Correction Algorithms for Time-Varying Constrained Optimization
Simonetto, Andrea; Dall'Anese, Emiliano
2017-07-26
This article develops online algorithms to track solutions of time-varying constrained optimization problems. In particular, resembling workhorse Kalman filtering-based approaches for dynamical systems, the proposed methods involve prediction-correction steps to provably track the trajectory of the optimal solutions of time-varying convex problems. The merits of existing prediction-correction methods have been shown for unconstrained problems and for setups where computing the inverse of the Hessian of the cost function is computationally affordable. This paper addresses the limitations of existing methods by tackling constrained problems and by designing first-order prediction steps that rely on the Hessian of the cost function (and do not require the computation of its inverse). In addition, the proposed methods are shown to improve the convergence speed of existing prediction-correction methods when applied to unconstrained problems. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the proposed algorithms. A realistic application of the proposed method to real-time control of energy resources is presented.
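The prediction-correction template can be illustrated on a scalar time-varying quadratic whose minimizer drifts linearly: the prediction extrapolates the optimizer's last observed movement, and the correction takes a few gradient steps on the current cost. This is a toy sketch, not the paper's constrained first-order scheme, and all values are hypothetical:

```python
# Track the minimizer x*(t) = 0.1 * t of f_t(x) = (x - 0.1 t)^2,
# sampled at times t = 1, 2, ...
drift = 0.1

def grad(x, t):
    return 2.0 * (x - drift * t)

x = 0.0
step = 0.4
pred = 0.0          # running estimate of the optimizer's per-step drift
prev_x = 0.0
for t in range(1, 51):
    # prediction: extrapolate using the last observed movement of x
    x = x + pred
    # correction: a few gradient steps on the current cost f_t
    for _ in range(3):
        x = x - step * grad(x, t)
    pred = x - prev_x
    prev_x = x

err = abs(x - drift * 50)
print(err)          # small steady-state tracking error
```

A correction-only scheme lags behind the drifting optimum by a constant offset; the prediction step removes most of that lag, which is the intuition behind the Kalman-like framing in the abstract.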
Treated Wastewater Effluent as a Source of Microbial Pollution of Surface Water Resources
Naidoo, Shalinee; Olaniran, Ademola O.
2013-01-01
Since 1990, more than 1.8 billion people have gained access to potable water and improved sanitation worldwide. Whilst this represents a vital step towards improving global health and well-being, accelerated population growth coupled with rapid urbanization has further strained existing water supplies. Whilst South Africa aims at spending 0.5% of its GDP on improving sanitation, additional factors such as hydrological variability and growing agricultural needs have further increased dependence on this finite resource. Increasing pressure on existing wastewater treatment plants has led to the discharge of inadequately treated effluent, reinforcing the need to improve and adopt more stringent methods for monitoring discharged effluent and surrounding water sources. This review provides an overview of the relative efficiencies of the different steps involved in wastewater treatment as well as the commonly detected microbial indicators with their associated health implications. In addition, it highlights the need to enforce more stringent measures to ensure compliance of treated effluent quality to the existing guidelines. PMID:24366046
Search automation of the generalized method of device operational characteristics improvement
NASA Astrophysics Data System (ADS)
Petrova, I. Yu; Puchkova, A. A.; Zaripova, V. M.
2017-01-01
The article presents brief results of an analysis of existing methods for searching the closest patents, which can be applied to determine generalized methods of improving device operational characteristics. The most widespread clustering algorithms and metrics for determining the degree of proximity between two documents were reviewed. The article proposes a technique for determining generalized methods; it has two implementation variants and consists of 7 steps. This technique has been implemented in the “Patents search” subsystem of the “Intellect” system. The article also gives an example of the use of the proposed technique.
Artificial mismatch hybridization
Guo, Zhen; Smith, Lloyd M.
1998-01-01
An improved nucleic acid hybridization process is provided which employs a modified oligonucleotide and improves the ability to discriminate a control nucleic acid target from a variant nucleic acid target containing a sequence variation. The modified probe contains at least one artificial mismatch relative to the control nucleic acid target in addition to any mismatch(es) arising from the sequence variation. The invention has direct and advantageous application to numerous existing hybridization methods, including, applications that employ, for example, the Polymerase Chain Reaction, allele-specific nucleic acid sequencing methods, and diagnostic hybridization methods.
Research of ceramic matrix for a safe immobilization of radioactive sludge waste
NASA Astrophysics Data System (ADS)
Dorofeeva, Ludmila; Orekhov, Dmitry
2018-03-01
Research to improve the existing method of hardening radioactive waste by fixation in a ceramic matrix was carried out. For samples coated with sodium silicate and tested after storage in air, the rate of radionuclide leaching was determined. The properties of a clay ceramic and the optimum sintering conditions were defined. Experimental data were obtained on the influence of the sintering temperature mode and of the quantities of water, sludge and additives in the samples on their mechanical durability and water resistance. The comparative analysis of the conducted research is aimed at improving the existing method of hardening radioactive waste by inclusion in a ceramic matrix and reveals the advantages of the obtained results over analogs.
A comparison of cover pole with standard vegetation monitoring methods
USDA-ARS?s Scientific Manuscript database
The ability of resource managers to make informed decisions regarding wildlife habitat could be improved with the use of existing datasets and the use of cost effective, standardized methods to simultaneously quantify vertical and horizontal cover. The objectives of this study were to (1) characteri...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tukey, J.W.; Bloomfield, P.
In its most general terms, the work carried out under the contract consists of the development of new data analytic methods and the improvement of existing methods, their implementation on computer, especially minicomputers, and the development of non-statistical, systems-level software to support these activities. The work reported or completed is reviewed. (GHT)
Use of the Transformative Framework in Mixed Methods Studies
ERIC Educational Resources Information Center
Sweetman, David; Badiee, Manijeh; Creswell, John W.
2010-01-01
A concern exists that mixed methods studies do not contain advocacy stances. Preliminary evidence suggests that this is not the case, but to address this issue in more depth the authors examined 13 mixed methods studies that contained an advocacy, transformative lens. Such a lens consisted of incorporating intent to advocate for an improvement in…
ERIC Educational Resources Information Center
Sondergeld, Toni A.; Koskey, Kristin L. K.
2011-01-01
An abundance of comprehensive school reform (CSR) literature exists illustrating CSRs are effective in improving student outcomes. However, much of this research reports on top-down reforms, focuses on academic outcomes, and uses quantitative methods alone. Many educational researchers have argued for the use of mixed methods for providing a…
Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun
2014-12-19
In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. 
We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different situations.
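The scenario-one estimators above (sample mean and standard deviation from the minimum, median, maximum, and sample size) can be sketched in a few lines. This is a minimal illustration assuming the divisor 2Φ⁻¹((n − 0.375)/(n + 0.25)) reported by Wan et al.; it is not the authors' code or a substitute for their summary spreadsheet.

```python
from statistics import NormalDist

def estimate_mean(a, m, b):
    """Mean estimate from the minimum a, median m, and maximum b."""
    return (a + 2 * m + b) / 4

def estimate_sd(a, b, n):
    """Sample-size-aware SD estimate from the range (b - a):
    SD ~ (b - a) / (2 * Phi^-1((n - 0.375) / (n + 0.25))),
    the scenario-one formula attributed to Wan et al. (2014)."""
    xi = 2 * NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
    return (b - a) / xi
```

Unlike Hozo et al.'s fixed range/4 rule, the divisor grows with n, so the same range implies a smaller standard deviation for larger trials.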
Gas-Dynamic Methods to Reduce Gas Flow Nonuniformity from the Annular Frames of Gas Turbine Engines
NASA Astrophysics Data System (ADS)
Kolmakova, D.; Popov, G.
2018-01-01
Gas flow nonuniformity is one of the main sources of rotor blade vibrations in gas turbine engines. Usually, circumferential flow nonuniformity occurs near the annular frames located in the flow channel of the engine. This leads to increased dynamic stresses in the blades and, consequently, to blade damage. The goal of the research was to find an acceptable method of reducing the level of gas flow nonuniformity. Two different methods were investigated during this research. The study thus gives ideas about methods of improving the flow structure in gas turbine engines. Depending on the existing conditions (an engine under development or an existing one), it allows selection of the most suitable method for reducing gas flow nonuniformity.
Automated Transition State Theory Calculations for High-Throughput Kinetics.
Bhoorasingh, Pierre L; Slakman, Belinda L; Seyedzadeh Khanshan, Fariba; Cain, Jason Y; West, Richard H
2017-09-21
A scarcity of known chemical kinetic parameters leads to the use of many reaction rate estimates, which are not always sufficiently accurate, in the construction of detailed kinetic models. To reduce the reliance on these estimates and improve the accuracy of predictive kinetic models, we have developed a high-throughput, fully automated reaction rate calculation method, AutoTST. The algorithm integrates automated saddle-point geometry search methods and a canonical transition state theory kinetics calculator. The automatically calculated reaction rates compare favorably to existing estimated rates. Comparisons against high-level theoretical calculations show that the new automated method performs better than rate estimates when the estimate is made by a poor analogy. The method will improve by accounting for internal rotor contributions and by improving methods to determine molecular symmetry.
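Once a saddle-point search has produced an activation free energy, the canonical transition state theory step that AutoTST automates reduces to the Eyring expression. The sketch below shows only that final rate evaluation, with an assumed barrier and transmission coefficient; it is not part of AutoTST itself.

```python
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J s
R   = 8.314462618     # gas constant, J/(mol K)

def tst_rate(delta_g_act, temperature, kappa=1.0):
    """Canonical transition state theory (Eyring) rate constant:
    k(T) = kappa * (k_B * T / h) * exp(-dG_act / (R * T)),
    with the activation free energy delta_g_act in J/mol."""
    return kappa * (K_B * temperature / H) * math.exp(
        -delta_g_act / (R * temperature))
```

For instance, with an assumed barrier of 80 kJ/mol, `tst_rate(8.0e4, 1000.0)` gives the high-pressure-limit rate constant in s⁻¹ at 1000 K.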
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
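The kernel idea described above — embedding prior information from composite data in the forward model as x = Kα and iterating an EM update on the coefficients — can be illustrated on a toy system. Everything here (the 3 × 3 system matrix, the composite values, the Gaussian row-normalized kernel) is a hypothetical example, not the HYPR kernel or the authors' reconstruction code.

```python
import math

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

# Hypothetical 3-pixel image and 3-bin detector system matrix.
P = [[1.0, 0.5, 0.0],
     [0.5, 1.0, 0.5],
     [0.0, 0.5, 1.0]]

# Kernel matrix from a high-count composite image: pixels with similar
# composite values are coupled by a Gaussian kernel; rows normalized.
composite = [1.0, 2.0, 1.1]
sigma = 0.5
K = [[math.exp(-(fi - fj) ** 2 / (2 * sigma ** 2)) for fj in composite]
     for fi in composite]
K = [[kij / sum(row) for kij in row] for row in K]

# Noise-free data from a true image lying in the range of K.
alpha_true = [2.0, 4.0, 2.0]
x_true = matvec(K, alpha_true)
y = matvec(P, x_true)

# Kernelized MLEM: image x = K @ alpha, EM update on the coefficients.
Kt, Pt = transpose(K), transpose(P)
alpha = [1.0, 1.0, 1.0]
sens = matvec(Kt, matvec(Pt, [1.0, 1.0, 1.0]))  # K^T P^T 1
for _ in range(1000):
    ybar = matvec(P, matvec(K, alpha))
    ratio = [yi / max(yb, 1e-12) for yi, yb in zip(y, ybar)]
    back = matvec(Kt, matvec(Pt, ratio))
    alpha = [a * b / s for a, b, s in zip(alpha, back, sens)]

x_hat = matvec(K, alpha)  # reconstructed image
```

Note that no regularization term appears: the smoothing lives entirely in K inside the forward model, which is the design point the abstract contrasts with penalty-based methods.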
Development of an Improved Simulator for Chemical and Microbial EOR Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pope, Gary A.; Sepehrnoori, Kamy; Delshad, Mojdeh
2000-09-11
The objective of this research was to extend the capability of an existing simulator (UTCHEM) to improved oil recovery methods that use surfactants, polymers, gels, alkaline chemicals, microorganisms, and foam, as well as various combinations of these, in both conventional and naturally fractured oil reservoirs. Task 1 is the addition of a dual-porosity model for chemical improved oil recovery processes in naturally fractured oil reservoirs. Task 2 is the addition of a foam model. Task 3 addresses several numerical and coding enhancements that will greatly improve the versatility and performance of UTCHEM. Task 4 is the enhancement of physical property models.
Monitoring total mixed rations and feed delivery systems.
Oelberg, Thomas J; Stone, William
2014-11-01
This article is intended to give practitioners a method to evaluate total mixed ration (TMR) consistency and to give them practical solutions to improve TMR consistency that will improve cattle performance and health. Practitioners will learn how to manage the variation in moisture and nutrients that exists in haylage and corn silage piles and in bales of hay, and methods to reduce variation in the TMR mixing and delivery process. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Wang, Fengyu
Traditional deterministic reserve requirements rely on ad-hoc, rule-of-thumb methods to determine adequate reserve in order to ensure a reliable unit commitment. Since congestion and uncertainties exist in the system, both the quantity and the location of reserves are essential to ensure system reliability and market efficiency. Existing deterministic reserve requirements acquire operating reserves on a zonal basis and do not fully capture the impact of congestion. The purpose of a reserve zone is to ensure that operating reserves are spread across the network. Operating reserves are shared inside each reserve zone, but intra-zonal congestion may block the deliverability of operating reserves within a zone. Thus, improving reserve policies such as reserve zones may improve the location and deliverability of reserves. As more non-dispatchable renewable resources are integrated into the grid, it will become increasingly difficult to predict the transfer capabilities and the network congestion. At the same time, renewable resources require operators to acquire more operating reserves. With existing deterministic reserve requirements unable to ensure optimal reserve locations, the importance of reserve location and reserve deliverability will increase. While stochastic programming can be used to determine reserves by explicitly modeling uncertainties, there are still scalability as well as pricing issues. Therefore, new methods to improve existing deterministic reserve requirements are desired. One key barrier to improving existing deterministic reserve requirements is their potential market impacts. A metric, quality of service, is proposed in this thesis to evaluate the price signal and market impacts of proposed hourly reserve zones.
Three main goals of this thesis are: 1) to develop a theoretical and mathematical model to better locate reserve while maintaining the deterministic unit commitment and economic dispatch structure, especially with the consideration of renewables, 2) to develop a market settlement scheme of proposed dynamic reserve policies such that the market efficiency is improved, 3) to evaluate the market impacts and price signal of the proposed dynamic reserve policies.
A new collage steganographic algorithm using cartoon design
NASA Astrophysics Data System (ADS)
Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip
2014-02-01
Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
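The LSB embedding core that such collage methods build on can be sketched as a plain byte-level round trip; the object segmentation, per-object permutation, and collage-composition steps of the proposed algorithm are not shown here.

```python
def embed_lsb(cover, message):
    """Embed message bytes into the least significant bits of cover
    bytes, one bit per cover byte (MSB of the message first).
    Requires len(cover) >= 8 * len(message)."""
    out = bytearray(cover)
    for i, byte in enumerate(message):
        for bit in range(8):
            b = (byte >> (7 - bit)) & 1
            j = 8 * i + bit
            out[j] = (out[j] & 0xFE) | b   # overwrite only the LSB
    return bytes(out)

def extract_lsb(stego, nbytes):
    """Recover nbytes of message from the LSBs of the stego bytes."""
    msg = bytearray()
    for i in range(nbytes):
        byte = 0
        for bit in range(8):
            byte = (byte << 1) | (stego[8 * i + bit] & 1)
        msg.append(byte)
    return bytes(msg)
```

Each cover byte changes by at most 1, which is why LSB embedding is visually imperceptible; the payload ceiling of one message bit per cover byte is the limitation the abstract says collage methods try to raise.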
Yoshie, Ayano; Kanda, Ayato; Nakamura, Takahiro; Igusa, Hisao; Hara, Setsuko
2009-01-01
Although there are various determination methods for γ-oryzanol contained in rice bran oil by absorptiometry, normal-phase HPLC, and reversed-phase HPLC, their accuracies and the correlations among them have not been revealed yet. Chloroform-containing mixed solvents are widely used as mobile phases in some HPLC methods, but researchers have been apprehensive about their use in terms of safety for the human body and the environment. In the present study, a simple and accurate determination method was developed by improving the reversed-phase HPLC method. This novel HPLC method uses methanol/acetonitrile/acetic acid (52/45/3 v/v/v), a non-chlorinated solvent, as the mobile phase, and shows an excellent linearity (y = 0.9527x + 0.1241, R^2 = 0.9974) with absorptiometry. The mean relative errors among the three existing methods and the novel method, determined by adding fixed amounts of γ-oryzanol into refined rice salad oil, were -4.7% for the absorptiometry, -6.8% for the existing normal-phase HPLC, +4.6% for the existing reversed-phase HPLC, and -1.6% for the novel reversed-phase HPLC method. The γ-oryzanol contents in 12 kinds of crude rice bran oils obtained from different sources were determined by the four methods. The mean contents of those oils were 1.75+/-0.18% for the absorptiometry, 1.29+/-0.11% for the existing normal-phase HPLC, 1.51+/-0.10% for the existing reversed-phase HPLC, and 1.54+/-0.19% for the novel reversed-phase HPLC method.
ERIC Educational Resources Information Center
Hunt, Pete; Barrios, Lisa; Telljohann, Susan K.; Mazyck, Donna
2015-01-01
Background: The Whole School, Whole Community, Whole Child (WSCC) model shows the interrelationship between health and learning and the potential for improving educational outcomes by improving health outcomes. However, current descriptions do not explain how to implement the model. Methods: The existing literature, including scientific articles,…
The Changing Role of Guidance and Counselling in Alberta: Fact or Fiction?
ERIC Educational Resources Information Center
Carstensen, Peter; Melnychuk, Don
1980-01-01
Increased activity and the production of materials and methods hold potential for constructive change in guidance and counseling. But there is a need to reorganize existing materials to alleviate the bandwagon effect if new methods of guidance and counseling are to bring improvement in Alberta. (JAC)
A Vector Representation for Thermodynamic Relationships
ERIC Educational Resources Information Center
Pogliani, Lionello
2006-01-01
The existing vector formalism method for thermodynamic relationships maintains tractability and uses accessible mathematics, which can be seen as a diverting and entertaining step into the mathematical formalism of thermodynamics and as an elementary application of matrix algebra. The method is based on ideas and operations apt to improve the…
Brain Network Regional Synchrony Analysis in Deafness
Xu, Lei; Liang, Mao-Jin
2018-01-01
Deafness, the most common auditory disease, has greatly affected people for a long time. The major treatment for deafness is cochlear implantation (CI). However, to this day, there is still a lack of an objective and precise indicator for evaluating the effectiveness of cochlear implantation. The goal of this EEG-based study is to effectively distinguish CI children from prelingually deafened children without cochlear implantation. The proposed method is based on functional connectivity analysis, which focuses on brain network regional synchrony. Specifically, we first compute the functional connectivity between each channel pair. Then, we quantify the brain network synchrony among regions of interest (ROIs), where both intraregional synchrony and interregional synchrony are computed. Finally, the synchrony values are concatenated to form the feature vector for the SVM classifier. Moreover, we develop a new ROI partition method for the 128-channel EEG recording system; both the existing ROI partition method and the proposed ROI partition method are used in the experiments. Compared with the existing EEG signal classification methods, our proposed method has achieved significant improvements, as large as 87.20% and 86.30% when the existing ROI partition method and the proposed ROI partition method are used, respectively. This further demonstrates that the new ROI partition method is comparable to the existing ROI partition method. PMID:29854776
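The feature pipeline described above — channel-pair connectivity, then intra- and inter-ROI synchrony averages concatenated into one feature vector — can be sketched with Pearson correlation as an assumed connectivity measure. The SVM training step and the actual 128-channel ROI partitions are omitted.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length sample lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def roi_synchrony_features(signals, rois):
    """signals: list of per-channel sample lists; rois: lists of channel
    indices.  Returns intra-ROI mean |r| values followed by inter-ROI
    mean |r| values, concatenated into one feature vector."""
    feats = []
    for roi in rois:  # intraregional synchrony
        vals = [abs(pearson(signals[i], signals[j]))
                for a, i in enumerate(roi) for j in roi[a + 1:]]
        feats.append(sum(vals) / len(vals))
    for a in range(len(rois)):  # interregional synchrony
        for b in range(a + 1, len(rois)):
            vals = [abs(pearson(signals[i], signals[j]))
                    for i in rois[a] for j in rois[b]]
            feats.append(sum(vals) / len(vals))
    return feats
```

The resulting vector (one value per ROI plus one per ROI pair) is what would be fed to a classifier such as an SVM in the study's setup.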
[Watsu: a modern method in physiotherapy, body regeneration, and sports].
Weber-Nowakowska, Katarzyna; Gebska, Magdalena; Zyzniewska-Banaszak, Ewelina
2013-01-01
Progress in existing methods of physiotherapy and body regeneration, and the introduction of new methods, has made it possible to precisely select techniques according to patient needs. The modern therapist is capable of improving both the physical and the mental condition of the patient. Watsu helps the therapist eliminate symptoms from the locomotor system and at the same time reach the psychic sphere.
2014-01-01
Automatic reconstruction of metabolic pathways for an organism from genomics and transcriptomics data has been a challenging and important problem in bioinformatics. Traditionally, known reference pathways can be mapped into organism-specific ones based on genome annotation and protein homology. However, this simple knowledge-based mapping method may produce incomplete pathways and generally cannot predict unknown new relations and reactions. In contrast, ab initio metabolic network construction methods can predict novel reactions and interactions, but their accuracy tends to be low, leading to many false positives. Here we combine existing pathway knowledge and a new ab initio Bayesian probabilistic graphical model together in a novel fashion to improve automatic reconstruction of metabolic networks. Specifically, we built a knowledge database containing known, individual gene/protein interactions and metabolic reactions extracted from existing reference pathways. Known reactions and interactions were then used as constraints for Bayesian network learning methods to predict metabolic pathways. Using individual reactions and interactions extracted from different pathways of many organisms to guide pathway construction is new and improves both the coverage and accuracy of metabolic pathway construction. We applied this probabilistic knowledge-based approach to construct metabolic networks from yeast gene expression data and compared its results with 62 known metabolic networks in the KEGG database. The experiments showed that the method improved the coverage of metabolic network construction over the traditional reference pathway mapping method and was more accurate than pure ab initio methods. PMID:25374614
NASA Technical Reports Server (NTRS)
Dinar, N.
1978-01-01
Several aspects of multigrid methods are briefly described. The main subjects include the development of very efficient multigrid algorithms for systems of elliptic equations (Cauchy-Riemann, Stokes, Navier-Stokes), as well as the development of control and prediction tools (based on local mode Fourier analysis), used to analyze, check and improve these algorithms. Preliminary research on multigrid algorithms for time dependent parabolic equations is also described. Improvements in existing multigrid processes and algorithms for elliptic equations were studied.
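As a minimal illustration of the multigrid machinery discussed above, the sketch below runs V-cycles with a weighted-Jacobi smoother on the 1-D Poisson problem −u″ = f with zero Dirichlet boundaries. It is a textbook two-grid recursion, not the report's algorithms for elliptic systems such as Cauchy-Riemann or Navier-Stokes.

```python
import math

def residual(u, f, h):
    """Residual of -u'' = f on a uniform grid with fixed end values."""
    r = [0.0] * len(u)
    for i in range(1, len(u) - 1):
        r[i] = f[i] - (2 * u[i] - u[i - 1] - u[i + 1]) / h ** 2
    return r

def jacobi(u, f, h, sweeps, w=2.0 / 3.0):
    """Weighted-Jacobi smoother (damps high-frequency error modes)."""
    for _ in range(sweeps):
        un = u[:]
        for i in range(1, len(u) - 1):
            un[i] = (1 - w) * u[i] + w * 0.5 * (u[i - 1] + u[i + 1]
                                                + h * h * f[i])
        u = un
    return u

def restrict(r):
    """Full-weighting restriction to the next coarser grid."""
    nc = (len(r) - 1) // 2 + 1
    rc = [0.0] * nc
    for i in range(1, nc - 1):
        rc[i] = 0.25 * (r[2 * i - 1] + 2 * r[2 * i] + r[2 * i + 1])
    return rc

def prolong(e):
    """Linear-interpolation prolongation to the next finer grid."""
    ef = [0.0] * (2 * (len(e) - 1) + 1)
    for i in range(len(e)):
        ef[2 * i] = e[i]
    for i in range(len(e) - 1):
        ef[2 * i + 1] = 0.5 * (e[i] + e[i + 1])
    return ef

def vcycle(u, f, h):
    """One multigrid V-cycle for -u'' = f, zero Dirichlet boundaries."""
    if len(u) <= 3:
        u = u[:]
        if len(u) == 3:
            u[1] = 0.5 * h * h * f[1]   # exact solve, one interior point
        return u
    u = jacobi(u, f, h, 3)                        # pre-smooth
    rc = restrict(residual(u, f, h))              # coarse-grid residual
    ec = vcycle([0.0] * len(rc), rc, 2 * h)       # coarse correction
    u = [ui + ei for ui, ei in zip(u, prolong(ec))]
    return jacobi(u, f, h, 3)                     # post-smooth

n, h = 65, 1.0 / 64
f = [math.sin(math.pi * i * h) for i in range(n)]
u = [0.0] * n
for _ in range(5):
    u = vcycle(u, f, h)
```

The residual shrinks by roughly an order of magnitude per cycle independent of grid size, which is the efficiency property the report's algorithms exploit.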
Ergonomics and simulation-based approach in improving facility layout
NASA Astrophysics Data System (ADS)
Abad, Jocelyn D.
2018-02-01
The use of simulation-based techniques in facility layout has been a common choice in industry due to their convenience and efficient generation of results. Nevertheless, the solutions generated are not capable of addressing delays due to workers' health and safety, which significantly impact overall operational efficiency. It is, therefore, critical to incorporate ergonomics in facility design. In this study, workstation analysis was incorporated into Promodel simulation to improve the facility layout of a garment manufacturing facility. To test the effectiveness of the method, the existing and improved facility designs were measured using comprehensive risk level, efficiency, and productivity. Results indicated that the improved facility layout generated a decrease in comprehensive risk level and rapid upper limb assessment score, a 78% increase in efficiency, and a 194% increase in productivity compared with the existing design, and thus proved that the approach is effective in attaining overall facility design improvement.
Improving concrete overlay construction : executive summary.
DOT National Transportation Integrated Search
2010-06-01
As the US highway system ages and available funding diminishes, transportation agencies : are looking for effective methods for preserving and extending the life of existing : pavements. These agencies are also being encouraged to minimize constructi...
Soft symmetry improvement of two particle irreducible effective actions
NASA Astrophysics Data System (ADS)
Brown, Michael J.; Whittingham, Ian B.
2017-01-01
Two particle irreducible effective actions (2PIEAs) are valuable nonperturbative techniques in quantum field theory; however, finite truncations of them violate the Ward identities (WIs) of theories with spontaneously broken symmetries. The symmetry improvement (SI) method of Pilaftsis and Teresi attempts to overcome this by imposing the WIs as constraints on the solution; however, the method suffers from the nonexistence of solutions in linear response theory and in certain truncations in equilibrium. Motivated by this, we introduce a new method called soft-symmetry improvement (SSI) which relaxes the constraint. Violations of WIs are allowed but punished in a least-squares implementation of the symmetry improvement idea. A new parameter ξ controls the strength of the constraint. The method interpolates between the unimproved (ξ → ∞) and SI (ξ → 0) cases, and the hope is that practically useful solutions can be found for finite ξ. We study the SSI 2PIEA for a scalar O(N) model in the Hartree-Fock approximation. We find that the method is IR sensitive; the system must be formulated in finite volume V and temperature T = 1/β, and the Vβ → ∞ limit must be taken carefully. Three distinct limits exist. Two are equivalent to the unimproved 2PIEA and SI 2PIEA respectively, and the third is a new limit where the WI is satisfied but the phase transition is strongly first order and solutions can fail to exist depending on ξ. Further, these limits are disconnected from each other; there is no smooth way to interpolate from one to another. These results suggest that any potential advantages of SSI methods, and indeed any application of (S)SI methods out of equilibrium, must occur in finite volume.
Improved phase-ellipse method for in-situ geophone calibration.
Liu, Huaibao P.; Peselnick, L.
1986-01-01
For amplitude and phase response calibration of moving-coil electromagnetic geophones, two parameters are needed, namely the geophone natural frequency, fo, and the geophone upper resonance frequency, fu. The phase-ellipse method is commonly used for the in situ determination of these parameters. For a given signal-to-noise ratio, the precision of the measurement of fo and fu depends on the phase sensitivity, dΦ/df. For some commercial geophones, the phase sensitivity at fu can be an order of magnitude less than the sensitivity at fo. An improved phase-ellipse method with increased precision is presented. Compared to measurements made with the existing phase-ellipse methods, the method shows a 6-fold and a 3-fold improvement in precision, respectively, on measurements of fo and fu on a commercial geophone. (from Authors)
NASA Astrophysics Data System (ADS)
Hong, Wei; Wang, Shaoping; Liu, Haokuo; Tomovic, Mileta M.; Chao, Zhang
2017-01-01
Inductive debris detection is an effective method for monitoring mechanical wear and can be used to prevent serious accidents. However, debris detection during the early phase of mechanical wear, when small debris (<100 μm) is generated, requires that the sensor have high sensitivity with respect to background noise. In order to detect smaller debris with existing sensors, this paper presents a hybrid method which combines a band-pass filter and a correlation algorithm to improve the sensor signal-to-noise ratio (SNR). The simulation results indicate that the SNR will be improved at least 2.67 times after signal processing. In other words, this method ensures debris identification when the sensor's SNR is greater than -3 dB. Thus, smaller debris can be detected at the same SNR. Finally, the effectiveness of the proposed method is experimentally validated.
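The correlation stage of such a hybrid scheme can be sketched as a matched-filter scan: correlating the noisy trace with an assumed debris signature concentrates the pulse energy relative to the background noise. The pulse shape, noise level, and pulse position below are all hypothetical, not the paper's sensor model.

```python
import math
import random

def crosscorr(x, template):
    """Sliding-window correlation of signal x with a template."""
    m = len(template)
    return [sum(template[j] * x[i + j] for j in range(m))
            for i in range(len(x) - m + 1)]

random.seed(1)

# Assumed debris signature: one damped oscillation (hypothetical shape).
template = [math.sin(2 * math.pi * j / 16) * math.exp(-((j - 8) / 6) ** 2)
            for j in range(16)]

# Sensor trace: Gaussian background noise with one weak pulse at sample 200.
noise_sigma = 0.15
signal = [random.gauss(0.0, noise_sigma) for _ in range(512)]
for j, t in enumerate(template):
    signal[200 + j] += t

# The correlation output peaks where the trace matches the signature.
out = crosscorr(signal, template)
peak = max(range(len(out)), key=lambda i: abs(out[i]))
```

In a full pipeline, a band-pass filter would precede this step to strip out-of-band noise before correlation, which is where the paper's combined SNR gain comes from.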
NASA Technical Reports Server (NTRS)
Mcmillan, O. J.; Mendenhall, M. R.; Perkins, S. C., Jr.
1984-01-01
Work is described dealing with two areas which are dominated by the nonlinear effects of vortex flows. The first area concerns the stall/spin characteristics of a general aviation wing with a modified leading edge. The second area concerns the high-angle-of-attack characteristics of high performance military aircraft. For each area, the governing phenomena are described as identified with the aid of existing experimental data. Existing analytical methods are reviewed, and the most promising method for each area used to perform some preliminary calculations. Based on these results, the strengths and weaknesses of the methods are defined, and research programs recommended to improve the methods as a result of better understanding of the flow mechanisms involved.
Method and apparatus for improving the performance of a steam driven power system by steam mixing
Tsiklauri, Georgi V.; Durst, Bruce M.; Prichard, Andrew W.; Reid, Bruce D.; Burritt, James
1998-01-01
A method and apparatus for improving the efficiency and performance of a steam driven power plant wherein the addition of steam handling equipment to an existing plant results in a surprising increase in plant performance. For example, a gas turbine electrical generation system with heat recovery boiler may be installed along with a micro-jet high pressure and a low pressure mixer superheater. Depending upon plant characteristics, the existing moisture separator reheater (MSR) can be either augmented or done away with. The instant invention enables a reduction in T.sub.hot without a derating of the reactor unit, and improves the efficiency of the plant's electrical conversion cycle. Coupled with this advantage is a possible extension of the plant's fuel cycle length due to an increased electrical conversion efficiency. The reduction in T.sub.hot further allows for a surprising extension of steam generator life. An additional advantage is the reduction in erosion/corrosion of secondary system components, including turbine blades and diaphragms. The gas turbine generator used in the instant invention can also replace or augment existing peak or emergency power needs. Another benefit of the instant invention is the extension of plant life and the reduction of downtime due to refueling.
An oscillatory kernel function method for lifting surfaces in mixed transonic flow
NASA Technical Reports Server (NTRS)
Cunningham, A. M., Jr.
1974-01-01
A study was conducted on the use of combined subsonic and supersonic linear theory to obtain economical and yet realistic solutions to unsteady transonic flow problems. With some modification, existing linear theory methods were combined into a single computer program. The method was applied to problems for which measured steady Mach number distributions and unsteady pressure distributions were available. By comparing theory and experiment, the transonic method showed a significant improvement over uniform flow methods. The results also indicated that more exact local Mach number effects and normal shock boundary conditions on the perturbation potential were needed. The validity of these improvements was demonstrated by application to steady flow.
Glatfelter, D.R.; Butch, G.K.
1994-01-01
The study results indicate that installation of streamflow-gaging stations at 15 new sites would improve collection of flood data. Instrumenting the 15 new sites plus 26 existing streamflow-gaging stations with telemetry, preferably data-collection platforms with satellite transmitters, would improve transmission of data to users of the information.
Methods to Fabricate and Improve Stand-alone and Integrated Filters
NASA Technical Reports Server (NTRS)
Greer, Frank (Inventor); Nikzad, Shouleh (Inventor)
2014-01-01
Embodiments of the invention provide for fabricating a filter, for electromagnetic radiation, in at least three ways, including (1) fabricating integrated thin film filters directly on a detector; (2) fabricating a free standing thin film filter that may be used with a detector; and (3) treating an existing filter to improve the filter's properties.
A method for improved visual landscape compatibility of mobile home park
Daniel R. Jones
1979-01-01
This paper is a description of a research effort directed to improving the visual image of mobile home parks in the landscape. The study is an application of existing methodologies for measuring scenic quality and visual landscape compatibility to an unsolved problem. The paper summarizes two major areas of investigation: regional location factors based on visual...
Incorporating Total Quality Management in an Engineering Design Course. Report 5-1993.
ERIC Educational Resources Information Center
Wilczynski, V.; And Others
One definition of creativity is the conviction that each and every existing idea can be improved. It is proposed that creativity in an engineering design process can be encouraged by the adoption of Total Quality Management (TQM) methods based on a commitment to continuous improvement. This paper addresses the introduction and application of TQM…
Educational Data Mining Applications and Tasks: A Survey of the Last 10 Years
ERIC Educational Resources Information Center
Bakhshinategh, Behdad; Zaiane, Osmar R.; ElAtia, Samira; Ipperciel, Donald
2018-01-01
Educational Data Mining (EDM) is the field of using data mining techniques in educational environments. There exist various methods and applications in EDM which can follow both applied research objectives such as improving and enhancing learning quality, as well as pure research objectives, which tend to improve our understanding of the learning…
Minimum maximum temperature gradient coil design.
While, Peter T; Poole, Michael S; Forbes, Larry K; Crozier, Stuart
2013-08-01
Ohmic heating is a serious problem in gradient coil operation. A method is presented for redesigning cylindrical gradient coils to operate at minimum peak temperature, while maintaining field homogeneity and coil performance. To generate these minimaxT coil windings, an existing analytic method for simulating the spatial temperature distribution of single layer gradient coils is combined with a minimax optimization routine based on sequential quadratic programming. Simulations are provided for symmetric and asymmetric gradient coils that show considerable improvements in reducing maximum temperature over existing methods. The winding patterns of the minimaxT coils were found to be heavily dependent on the assumed thermal material properties and generally display an interesting "fish-eye" spreading of windings in the dense regions of the coil. Small prototype coils were constructed and tested for experimental validation and these demonstrate that with a reasonable estimate of material properties, thermal performance can be improved considerably with negligible change to the field error or standard figures of merit. © 2012 Wiley Periodicals, Inc.
Carbohydrate-Loading: A Safe and Effective Method of Improving Endurance Performance.
ERIC Educational Resources Information Center
Beeker, Richard T.; Israel, Richard G.
Carbohydrate-loading prior to distance events is a common practice among endurance athletes. The purposes of this paper are to review previous research and to clarify misconceptions which may exist concerning carbohydrate-loading. The most effective method of carbohydrate-loading involves a training run of sufficient intensity and duration to…
ERIC Educational Resources Information Center
Bell, Robin
2016-01-01
Existing literature examining the teaching of research methods highlights difficulties students face when developing research competencies. Studies of student-centred teaching approaches have found increased student performance and improved confidence in undertaking research projects. To develop a student-centred approach, it could be beneficial…
The most remote point method for the site selection of the future GGOS network
NASA Astrophysics Data System (ADS)
Hase, Hayo; Pedreros, Felipe
2014-10-01
The Global Geodetic Observing System (GGOS) proposes 30-40 geodetic observatories as global infrastructure for the most accurate reference frame to monitor global change. To reach this goal, several geodetic observatories have upgrade plans to become GGOS stations. Most initiatives are driven by national institutions following national interests. From a global perspective, the site distribution remains incomplete, and the initiatives to improve it have so far been insufficient. This article contributes to answering the question of where to install new GGOS observatories and where to add observation techniques to existing observatories. It introduces the iterative most remote point (MRP) method for filling in the largest gaps in existing technique-specific networks. A spherical version of the Voronoi diagram is used to pick the optimal location of the new observatory, while practical concerns determine its realistic location. Once chosen, the process is iterated. A quality parameter and a homogeneity parameter of global networks measure the progress of improving the homogeneity of the global site distribution. This method is applied to the global networks of VGOS, and of VGOS co-located with SLR, to derive clues about where additional observatory sites or additional observation techniques at existing observatories would improve the GGOS network configuration. With only six additional VGOS stations, the homogeneity of the global VGOS network could be significantly improved. From the presented analysis, 25 known or new co-located VGOS and SLR sites are proposed as the future GGOS backbone: Colombo, Easter Island, Fairbanks, Fortaleza, Galapagos, GGAO, Hartebeesthoek, Honiara, Ibadan, Kokee Park, La Plata, Mauritius, McMurdo, Metsähovi, Ny-Ålesund, Riyadh, San Diego, Santa Maria, Shanghai, Syowa, Tahiti, Tristan da Cunha, Warkworth, Wettzell, and Yarragadee.
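The iterative gap-filling idea can be sketched without the spherical Voronoi machinery: place each new candidate station at the point that maximizes the great-circle distance to its nearest existing site, then repeat. A minimal sketch (coarse latitude/longitude grid search on a unit sphere; the station coordinates are invented, and this is not the authors' Voronoi-based implementation):

```python
import math

def gc_dist(a, b):
    """Great-circle distance in radians between two (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = map(math.radians, (a[0], a[1], b[0], b[1]))
    c = (math.sin(la1) * math.sin(la2) +
         math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
    return math.acos(max(-1.0, min(1.0, c)))  # clamp against rounding error

def most_remote_point(stations, step=5.0):
    """Grid search for the point maximizing the distance to the nearest station."""
    best, best_d = None, -1.0
    lat = -90.0
    while lat <= 90.0:
        lon = -180.0
        while lon < 180.0:
            d = min(gc_dist((lat, lon), s) for s in stations)
            if d > best_d:
                best, best_d = (lat, lon), d
            lon += step
        lat += step
    return best, best_d

# two polar stations: the most remote point lies on the equator
p, d = most_remote_point([(90.0, 0.0), (-90.0, 0.0)])
```

Iterating (append `p` to the station list and search again) mimics the MRP method's successive filling of the largest network gaps.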
Material identification based on electrostatic sensing technology
NASA Astrophysics Data System (ADS)
Liu, Kai; Chen, Xi; Li, Jingnan
2018-04-01
When a robot travels on the surface of different media, uncertainty about the medium seriously affects its autonomous action. In this paper, the distribution characteristics of multiple electrostatic charges on the surface of materials are detected in order to improve the accuracy of existing electrostatic-signal material identification methods, which is of great significance in helping a robot optimize its control algorithm. Building on a previously proposed electrostatic-signal material identification method, a multi-channel detection circuit is used to obtain the electrostatic charge distribution at different positions on the material surface, weights are introduced into the eigenvalue matrix, and the weight distribution is optimized by an evolutionary algorithm, which makes the eigenvalue matrix reflect the surface charge distribution characteristics of the material more accurately. The matrix is used as the input of a k-Nearest Neighbor (kNN) classification algorithm to classify the dielectric materials. The experimental results show that the proposed method can significantly improve the recognition rate of existing electrostatic-signal material recognition methods.
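The weighted-distance kNN step can be illustrated with a toy sketch; here the per-channel weights are fixed by hand rather than tuned by an evolutionary algorithm, and the data are invented:

```python
import math
from collections import Counter

def knn_classify(x, data, labels, weights, k=3):
    """kNN with per-feature weights scaling each channel's distance contribution."""
    dists = []
    for xi, yi in zip(data, labels):
        d = math.sqrt(sum(w * (a - b) ** 2 for w, a, b in zip(weights, x, xi)))
        dists.append((d, yi))
    dists.sort(key=lambda t: t[0])           # nearest first
    votes = Counter(y for _, y in dists[:k])  # majority vote among k nearest
    return votes.most_common(1)[0][0]
```

Changing the weights changes which charge channels dominate the distance, which is the lever the evolutionary optimization adjusts.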
Active link selection for efficient semi-supervised community detection
NASA Astrophysics Data System (ADS)
Yang, Liang; Jin, Di; Wang, Xiao; Cao, Xiaochun
2015-03-01
Several semi-supervised community detection algorithms have been proposed recently to improve the performance of traditional topology-based methods. However, most of them focus on how to integrate supervised information with topology information; few of them pay attention to which information is critical for performance improvement. This leads to a large demand for supervised information, which is expensive or difficult to obtain in most fields. For this problem we propose an active link selection framework; that is, we actively select the most uncertain and informative links for human labeling, for efficient utilization of the supervised information. We also disconnect the most likely inter-community edges to further improve efficiency. Our main idea is that, by connecting uncertain nodes to their community hubs and disconnecting the inter-community edges, one can sharpen the block structure of the adjacency matrix more efficiently than by randomly labeling links as existing methods do. Experiments on both synthetic and real networks demonstrate that our new approach significantly outperforms existing methods in terms of the efficiency of using supervised information. It needs ~13% of the supervised information to achieve performance similar to that of the original semi-supervised approaches.
Molecular Dynamics Information Improves cis-Peptide-Based Function Annotation of Proteins.
Das, Sreetama; Bhadra, Pratiti; Ramakumar, Suryanarayanarao; Pal, Debnath
2017-08-04
cis-Peptide bonds, whose occurrence in proteins is rare but evolutionarily conserved, are implicated to play an important role in protein function. This has led to their previous use in a homology-independent, fragment-match-based protein function annotation method. However, proteins are not static molecules; dynamics is integral to their activity. This is nicely epitomized by the geometric isomerization of cis-peptide to trans form for molecular activity. Hence we have incorporated both static (cis-peptide) and dynamics information to improve the prediction of protein molecular function. Our results show that cis-peptide information alone cannot detect functional matches in cases where cis-trans isomerization exists but 3D coordinates have been obtained for only the trans isomer or when the cis-peptide bond is incorrectly assigned as trans. On the contrary, use of dynamics information alone includes false-positive matches for cases where fragments with similar secondary structure show similar dynamics, but the proteins do not share a common function. Combining the two methods reduces errors while detecting the true matches, thereby enhancing the utility of our method in function annotation. A combined approach, therefore, opens up new avenues of improving existing automated function annotation methodologies.
Contrast-dependent saturation adjustment for outdoor image enhancement.
Wang, Shuhang; Cho, Woon; Jang, Jinbeum; Abidi, Mongi A; Paik, Joonki
2017-01-01
Outdoor images captured in bad weather usually have poor intensity contrast and color saturation, since the light arriving at the camera is severely scattered or attenuated. Improving image quality in such conditions remains a challenge. Existing image quality improvement methods are usually effective for a small group of images but often fail to produce satisfactory results for a broader variety of images. In this paper, we propose an image enhancement method applicable to outdoor images, using content-adaptive contrast improvement together with contrast-dependent saturation adjustment. The main contribution of this work is twofold: (1) we propose content-adaptive histogram equalization based on the human visual system to improve intensity contrast; and (2) we introduce a simple yet effective prior for adjusting color saturation depending on intensity contrast. The proposed method is tested on different kinds of images and compared with eight state-of-the-art methods: four enhancement methods and four haze removal methods. Experimental results show that the proposed method more effectively improves visibility and preserves the naturalness of the images than the compared methods.
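As a reference point for the contrast step, plain global histogram equalization looks as follows (a textbook baseline; the paper's content-adaptive variant and the saturation prior are not reproduced here):

```python
def hist_equalize(img, levels=256):
    """Global histogram equalization of a 2-D grayscale image (list of rows)."""
    flat = [p for row in img for p in row]
    n = len(flat)
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # cumulative distribution function of pixel intensities
    cdf, c = [], 0
    for h in hist:
        c += h
        cdf.append(c)
    cdf_min = next(c for c in cdf if c > 0)
    # map each intensity so the output histogram is approximately flat
    lut = [round((c - cdf_min) / max(1, n - cdf_min) * (levels - 1)) for c in cdf]
    return [[lut[p] for p in row] for row in img]
```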
Ndabarora, Eléazar; Mchunu, Gugu
2014-01-01
Various studies have reported that university students, who are mostly young people, rarely use existing HIV/AIDS preventive methods. Although studies have shown that young university students have a high degree of knowledge about HIV/AIDS and HIV modes of transmission, they are still not utilising the existing HIV prevention methods and still engage in risky sexual practices favourable to HIV. Some variables, such as awareness of existing HIV/AIDS prevention methods, have been associated with utilisation of such methods. The study aimed to explore factors that influence use of existing HIV/AIDS prevention methods among university students residing in a selected campus, using the Health Belief Model (HBM) as a theoretical framework. A quantitative research approach and an exploratory-descriptive design were used to describe perceived factors that influence utilisation by university students of HIV/AIDS prevention methods. A total of 335 students completed online and manual questionnaires. Study findings showed that the factors which influenced utilisation of HIV/AIDS prevention methods were mainly determined by awareness of the existing university-based HIV/AIDS prevention strategies. Most utilised prevention methods were voluntary counselling and testing services and free condoms. Perceived susceptibility and perceived threat of HIV/AIDS score was also found to correlate with HIV risk index score. Perceived susceptibility and perceived threat of HIV/AIDS showed correlation with self-efficacy on condoms and their utilisation. Most HBM variables were not predictors of utilisation of HIV/AIDS prevention methods among students. Intervention aiming to improve the utilisation of HIV/AIDS prevention methods among students at the selected university should focus on removing identified barriers, promoting HIV/AIDS prevention services and providing appropriate resources to implement such programmes.
Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang
2018-02-20
We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 με within a spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 με.
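The zero-padding idea generalizes to any cross-correlation peak search: padding the spectrum before the inverse FFT samples the correlation on a finer grid, so the peak can be located at sub-sample resolution. A NumPy sketch under that general framing (a synthetic shifted pulse, not the authors' OFDR processing chain):

```python
import numpy as np

def subsample_shift(a, b, upsample=16):
    """Delay of b relative to a, located by zero-padding the cross-spectrum
    before the inverse FFT so the correlation peak lands on a finer grid."""
    n = len(a)
    cross = np.conj(np.fft.fft(a)) * np.fft.fft(b)
    pad = np.zeros(n * upsample, dtype=complex)
    half = n // 2
    pad[:half] = cross[:half]      # positive frequencies
    pad[-half:] = cross[-half:]    # negative frequencies
    cc = np.fft.ifft(pad)
    shift = int(np.argmax(np.abs(cc))) / upsample
    return shift - n if shift > n / 2 else shift

# a Gaussian pulse and a copy delayed by 2.5 samples (circular, band-limited)
n = 64
t = np.arange(n)
a = np.exp(-0.5 * ((t - 32) / 4.0) ** 2)
delay = np.exp(-2j * np.pi * np.fft.fftfreq(n) * 2.5)
b = np.fft.ifft(np.fft.fft(a) * delay).real
est = subsample_shift(a, b)
```

The estimated delay resolves the 2.5-sample shift even though the raw correlation is sampled only at integer lags.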
A cross-correlation-based estimate of the galaxy luminosity function
NASA Astrophysics Data System (ADS)
van Daalen, Marcel P.; White, Martin
2018-06-01
We extend existing methods for using cross-correlations to derive redshift distributions for photometric galaxies, without using photometric redshifts. The model presented in this paper simultaneously yields highly accurate and unbiased redshift distributions and, for the first time, redshift-dependent luminosity functions, using only clustering information and the apparent magnitudes of the galaxies as input. In contrast to many existing techniques for recovering unbiased redshift distributions, the output of our method is not degenerate with the galaxy bias b(z), which is achieved by modelling the shape of the luminosity bias. We successfully apply our method to a mock galaxy survey and discuss improvements to be made before applying our model to real data.
Munoz-Plaza, Corrine E; Parry, Carla; Hahn, Erin E; Tang, Tania; Nguyen, Huong Q; Gould, Michael K; Kanter, Michael H; Sharp, Adam L
2016-08-15
Despite reports advocating for integration of research into healthcare delivery, scant literature exists describing how this can be accomplished. Examples highlighting application of qualitative research methods embedded into a healthcare system are particularly needed. This article describes the process and value of embedding qualitative research as the second phase of an explanatory, sequential, mixed methods study to improve antibiotic stewardship for acute sinusitis. Purposive sampling of providers for in-depth interviews improved understanding of unwarranted antibiotic prescribing and elicited stakeholder recommendations for improvement. Qualitative data collection, transcription and constant comparative analyses occurred iteratively. Emerging themes and sub-themes identified primary drivers of unwarranted antibiotic prescribing patterns and recommendations for improving practice. These findings informed the design of a health system intervention to improve antibiotic stewardship for acute sinusitis. Core components of the intervention are also described. Qualitative research can be effectively applied in learning healthcare systems to elucidate quantitative results and inform improvement efforts.
Deep learning methods for protein torsion angle prediction.
Li, Haiou; Hou, Jie; Adhikari, Badri; Lyu, Qiang; Cheng, Jianlin
2017-09-18
Deep learning is one of the most powerful machine learning methods and has achieved state-of-the-art performance in many domains. Since deep learning was introduced to the field of bioinformatics in 2012, it has achieved success in a number of areas such as protein residue-residue contact prediction, secondary structure prediction, and fold recognition. In this work, we developed deep learning methods to improve the prediction of torsion (dihedral) angles of proteins. We designed four different deep learning architectures to predict protein torsion angles: a deep neural network (DNN), a deep restricted Boltzmann machine (DRBM), a deep recurrent neural network (DRNN), and a deep recurrent restricted Boltzmann machine (DReRBM), since protein torsion angle prediction is a sequence-related problem. In addition to existing protein features, two new features (predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments) are used as input to each of the four deep learning architectures to predict the phi and psi angles of the protein backbone. The mean absolute error (MAE) of the phi and psi angles predicted by DRNN, DReRBM, DRBM and DNN is about 20-21° and 29-30°, respectively, on an independent dataset. The MAE of the phi angle is comparable to that of existing methods, but the MAE of the psi angle is 29°, 2° lower than that of existing methods. On the latest CASP12 targets, our methods also achieved performance better than or comparable to a state-of-the-art method. Our experiments demonstrate that deep learning is a valuable method for predicting protein torsion angles. The deep recurrent network architecture performs slightly better than the deep feed-forward architecture, and the predicted residue contact number and the error distribution of torsion angles extracted from sequence fragments are useful features for improving prediction accuracy.
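When reproducing MAE numbers like those above, note that torsion angles are periodic: the error between 175° and -175° is 10°, not 350°. A wraparound-aware MAE (an illustrative helper, not the authors' evaluation code):

```python
def angular_mae(pred, true):
    """Mean absolute error for angles in degrees, accounting for wraparound."""
    errs = []
    for p, t in zip(pred, true):
        d = abs(p - t) % 360.0
        errs.append(min(d, 360.0 - d))  # shorter way around the circle
    return sum(errs) / len(errs)
```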
Gao, Xiang; Lin, Huaiying; Revanna, Kashi; Dong, Qunfeng
2017-05-10
Species-level classification for 16S rRNA gene sequences remains a serious challenge for microbiome researchers, because existing taxonomic classification tools for 16S rRNA gene sequences either do not provide species-level classification, or their classification results are unreliable. The unreliable results are due to the limitations in the existing methods which either lack solid probabilistic-based criteria to evaluate the confidence of their taxonomic assignments, or use nucleotide k-mer frequency as the proxy for sequence similarity measurement. We have developed a method that shows significantly improved species-level classification results over existing methods. Our method calculates true sequence similarity between query sequences and database hits using pairwise sequence alignment. Taxonomic classifications are assigned from the species to the phylum levels based on the lowest common ancestors of multiple database hits for each query sequence, and further classification reliabilities are evaluated by bootstrap confidence scores. The novelty of our method is that the contribution of each database hit to the taxonomic assignment of the query sequence is weighted by a Bayesian posterior probability based upon the degree of sequence similarity of the database hit to the query sequence. Our method does not need any training datasets specific for different taxonomic groups. Instead only a reference database is required for aligning to the query sequences, making our method easily applicable for different regions of the 16S rRNA gene or other phylogenetic marker genes. Reliable species-level classification for 16S rRNA or other phylogenetic marker genes is critical for microbiome research. Our software shows significantly higher classification accuracy than the existing tools and we provide probabilistic-based confidence scores to evaluate the reliability of our taxonomic classification assignments based on multiple database matches to query sequences. 
Despite its higher computational costs, our method is still suitable for analyzing large-scale microbiome datasets for practical purposes. Furthermore, our method can be applied for taxonomic classification of any phylogenetic marker gene sequences. Our software, called BLCA, is freely available at https://github.com/qunfengdong/BLCA .
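The similarity-weighted voting at the heart of the approach can be caricatured in a few lines. This toy uses raw alignment similarity as the weight and reports a per-rank support score; the actual BLCA tool weights hits by Bayesian posterior probabilities and evaluates reliability with bootstrap confidence scores:

```python
def weighted_vote(hits):
    """hits: list of (similarity, lineage) pairs, lineage ordered phylum..species.
    For each rank, return the taxon with the largest similarity-weighted support
    and that support normalized to [0, 1] as a crude confidence."""
    total = sum(s for s, _ in hits)
    result = []
    for rank in range(len(hits[0][1])):
        support = {}
        for s, lineage in hits:
            support[lineage[rank]] = support.get(lineage[rank], 0.0) + s
        taxon, weight = max(support.items(), key=lambda kv: kv[1])
        result.append((taxon, weight / total))
    return result

# invented example: two close hits agree at genus level, one distant hit differs
hits = [(0.99, ('Firmicutes', 'Bacillus', 'Bacillus subtilis')),
        (0.98, ('Firmicutes', 'Bacillus', 'Bacillus licheniformis')),
        (0.80, ('Proteobacteria', 'Escherichia', 'Escherichia coli'))]
ranks = weighted_vote(hits)
```

Note how support drops from phylum to species, mirroring the intended behavior of rank-dependent reliability scores.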
Quality improvement in pediatrics: past, present, and future.
Schwartz, Stephanie P; Rehder, Kyle J
2017-01-01
Almost two decades ago, the landmark report "To Err is Human" compelled healthcare to address the large numbers of hospitalized patients experiencing preventable harm. Concurrently, it became clear that the rapidly rising cost of healthcare would be unsustainable in the long term. As a result, quality improvement methodologies initially rooted in other high-reliability industries have become a primary focus of healthcare. Multiple pediatric studies demonstrate remarkable quality and safety improvements in several domains including handoffs, catheter-associated blood stream infections, and other serious safety events. While both quality improvement and research are data-driven processes, significant differences exist between the two. Research utilizes a hypothesis-driven approach to obtain new knowledge, while quality improvement often incorporates a cyclic approach to translate existing knowledge into clinical practice. Recent publications have provided guidelines and methods for effectively reporting quality and safety work and improvement implementations. This review examines not only how quality improvement in pediatrics has led to improved outcomes, but also looks to the future of quality improvement in healthcare, with a focus on education and collaboration to ensure best-practice approaches to caring for children.
NASA Astrophysics Data System (ADS)
Gilliom, R.; Hogue, T. S.; McCray, J. E.
2017-12-01
There is a need for improved parameterization of stormwater best management practice (BMP) performance estimates to improve modeling of urban hydrology, planning and design of green infrastructure projects, and water quality crediting for stormwater management. Percent removal is commonly used to estimate BMP pollutant removal efficiency, but there is general agreement that this approach has significant uncertainties and is easily affected by site-specific factors. Additionally, some fraction of monitored BMPs have negative percent removal, so it is important to understand the probability that a BMP will provide the desired water quality function versus exacerbating water quality problems. The widely used k-C* equation has been shown to provide a more adaptable and accurate method to model BMP contaminant attenuation, and previous work has begun to evaluate the strengths and weaknesses of the k-C* method. However, no systematic method exists for obtaining the first-order removal rate constants needed to use the k-C* equation for stormwater BMPs; thus there is minimal application of the method. The current research analyzes existing water quality data in the International Stormwater BMP Database to provide screening-level parameterization of the k-C* equation for selected BMP types and analysis of factors that skew the distribution of efficiency estimates from the database. Results illustrate that while certain BMPs are more likely to provide desired contaminant removal than others, site- and design-specific factors strongly influence performance. For example, bioretention systems show both the highest and lowest removal rates of dissolved copper, total phosphorus, and total nitrogen. Exploration and discussion of this and other findings will inform the application of the probabilistic pollutant removal rate constants.
Though data limitations exist, this research will facilitate improved accuracy of BMP modeling and ultimately aid decision-making for stormwater quality management in urban systems.
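For concreteness, the k-C* model referenced above treats removal as first-order decay toward an irreducible background concentration C*. A sketch using the common areal form with hydraulic loading rate q (the parameter names and values here are illustrative assumptions, not figures from the BMP Database):

```python
import math

def kcstar_effluent(c_in, c_star, k, q):
    """k-C* model: effluent concentration from influent c_in, background C*,
    areal rate constant k and hydraulic loading rate q (same units, e.g. m/yr)."""
    return c_star + (c_in - c_star) * math.exp(-k / q)

def percent_removal(c_in, c_out):
    """Conventional percent-removal metric for comparison."""
    return 100.0 * (1.0 - c_out / c_in)

c_out = kcstar_effluent(2.0, 0.1, 30.0, 30.0)
```

The model also reproduces the negative percent removal noted above: when the influent concentration is already below C*, the predicted effluent exceeds the influent.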
Existence of topological multi-string solutions in Abelian gauge field theories
NASA Astrophysics Data System (ADS)
Han, Jongmin; Sohn, Juhee
2017-11-01
In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.
NASA Astrophysics Data System (ADS)
Wei, Zhongbao; Tseng, King Jet; Wai, Nyunt; Lim, Tuti Mariana; Skyllas-Kazacos, Maria
2016-11-01
Reliable state estimation depends largely on an accurate battery model. However, the parameters of a battery model vary over time with operating conditions and battery aging. Existing co-estimation methods address this model uncertainty by integrating online model identification with state estimation and have shown improved accuracy. However, cross interference may arise from the integrated framework and compromise numerical stability and accuracy. This paper therefore proposes decoupling model identification from state estimation to eliminate the possibility of cross interference. The model parameters are adapted online with the recursive least squares (RLS) method, based on which a novel joint estimator based on the extended Kalman filter (EKF) is formulated to estimate the state of charge (SOC) and capacity concurrently. The proposed joint estimator effectively reduces the filter order, which leads to substantial improvement in computational efficiency and numerical stability. A lab-scale experiment on a vanadium redox flow battery shows that the proposed method is highly accurate, with good robustness to varying operating conditions and battery aging. The proposed method is further compared with some existing methods and shown to be superior in terms of accuracy, convergence speed, and computational cost.
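The decoupled identification half can be illustrated with a generic recursive-least-squares update (regressor form only; the battery model structure and the EKF state estimator are not reproduced, and the example data are synthetic):

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update for y ≈ phi·theta with forgetting factor lam."""
    phi = phi.reshape(-1, 1)
    err = y - (phi.T @ theta).item()                     # prediction error
    gain = (P @ phi) / (lam + (phi.T @ P @ phi).item())  # Kalman-like gain
    theta = theta + gain.ravel() * err
    P = (P - gain @ phi.T @ P) / lam                     # covariance update
    return theta, P

# identify y = 2*x1 + 3*x2 from noiseless samples
theta = np.zeros(2)
P = np.eye(2) * 1000.0
data = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0), (1.0, 2.0)]
for _ in range(3):
    for x1, x2 in data:
        theta, P = rls_step(theta, P, np.array([x1, x2]), 2.0 * x1 + 3.0 * x2)
```

Because the identifier runs independently of any state estimator, its output can be handed to an EKF without the cross interference the paper describes.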
DOE Office of Scientific and Technical Information (OSTI.GOV)
Millis, Andrew
Understanding the behavior of interacting electrons in molecules and solids so that one can predict new superconductors, catalysts, light harvesters, energy and battery materials and optimize existing ones is the ``quantum many-body problem''. This is one of the scientific grand challenges of the 21st century. A complete solution to the problem has been proven to be exponentially hard, meaning that straightforward numerical approaches fail. New insights and new methods are needed to provide accurate yet feasible approximate solutions. This CMSCN project brought together chemists and physicists to combine insights from the two disciplines to develop innovative new approaches. Outcomes included the Density Matrix Embedding method, a new, computationally inexpensive and extremely accurate approach that may enable first-principles treatment of superconducting and magnetic properties of strongly correlated materials; new techniques for existing methods, including an Adaptively Truncated Hilbert Space approach that will vastly expand the capabilities of the dynamical mean field method; a self-energy embedding theory; and a new memory-function-based approach to calculations of the behavior of driven systems. The methods developed under this project are now being applied to improve our understanding of superconductivity, to calculate novel topological properties of materials, and to characterize and improve the properties of nanoscale devices.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santos-Villalobos, Hector J; Barstow, Del R; Karakaya, Mahmut
Iris recognition has been proven to be an accurate and reliable biometric. However, the recognition of non-ideal iris images, such as off-angle images, is still an unsolved problem. We propose a new biometric-targeted eye model and a method to reconstruct the off-axis eye to its frontal view, allowing recognition with existing methods and algorithms. This allows existing enterprise-level algorithms and approaches to remain largely unmodified, using our work as a pre-processor to improve performance. In addition, we describe the `Limbus effect' and its importance for accurate segmentation of off-axis irides. Our method uses an anatomically accurate human eye model and ray-tracing techniques to compute a transformation function, which reconstructs the iris to its frontal, non-refracted state. Then the same eye model is used to render a frontal view of the reconstructed iris. The proposed method is fully described, and results from synthetic data are shown to establish an upper limit on performance improvement and to establish the importance of the proposed approach over traditional linear elliptical unwrapping methods. Our results with synthetic data demonstrate the ability to perform accurate iris recognition with an image taken as much as 70 degrees off-axis.
1991-08-01
being used in both current and long-range research programs that are expected to make the Army more effective in matching the requirements for first- and... make substantial improvements to the existing selection and classification system. IMPROVING THE SELECTION, CLASSIFICATION, AND UTILIZATION OF... basis for new methods of allocating personnel, and making near-real-time decisions on the best match between characteristics of an individual enlistee
Initial Ship Design Using a Pearson Correlation Coefficient and Artificial Intelligence Techniques
NASA Astrophysics Data System (ADS)
Moon, Byung Young; Kim, Soo Young; Kang, Gyung Ju
In this paper we analyzed the correlation between geometrical characteristics and resistance and effective horsepower by using the Pearson correlation coefficient, one of the data mining methods. We also built input data from the ship's geometrical characteristics that have strong correlation with the output data. We calculated effective horsepower and resistance by using a Neuro-Fuzzy system. To verify the calculation, data from 9 of 11 container ships were employed in the Neuro-Fuzzy system and the others were used as verification data. After analyzing the rate of error between the existing data and the calculated data, we concluded that the calculated data are in sound agreement with the existing data.
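The screening step rests on the Pearson coefficient, which is just covariance normalized by the two standard deviations; a minimal implementation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length numeric sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

Features whose |r| against resistance or effective horsepower is high are the natural candidates for the Neuro-Fuzzy inputs.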
ERIC Educational Resources Information Center
Van Zyl, Douglas G.
2011-01-01
Purpose of the study. The purpose of this study was to examine single-gender groupings for sixth grade mathematics classes as a strategy to improve student achievement. The method of research was quantitative, with MAP mathematics test data being used to determine if any relationship exists between the strategy and student achievement. Findings.…
Reduction of gas flow nonuniformity in gas turbine engines by means of gas-dynamic methods
NASA Astrophysics Data System (ADS)
Matveev, V.; Baturin, O.; Kolmakova, D.; Popov, G.
2017-08-01
Gas flow nonuniformity is one of the main sources of rotor blade vibrations in gas turbine engines. Usually, circumferential flow nonuniformity occurs near the annular frames located in the flow channel of the engine. This leads to increased dynamic stresses in the blades and, as a consequence, to blade damage. The goal of the research was to find an acceptable method of reducing the level of gas flow nonuniformity as a source of dynamic stresses in the rotor blades. Two different methods were investigated during this research. Thus, this study gives ideas about methods of improving the flow structure in a gas turbine engine. Depending on the existing conditions (an engine under development or an existing one), it allows selection of the most suitable method for reducing gas flow nonuniformity.
Incorporating conditional random fields and active learning to improve sentiment identification.
Zhang, Kunpeng; Xie, Yusheng; Yang, Yi; Sun, Aaron; Liu, Hengchang; Choudhary, Alok
2014-10-01
Many machine learning, statistical, and computational linguistic methods have been developed to identify the sentiment of sentences in documents, yielding promising results. However, most state-of-the-art methods focus on individual sentences and ignore the impact of context on the meaning of a sentence. In this paper, we propose a method based on conditional random fields to incorporate sentence structure and context information, in addition to syntactic information, for improving sentiment identification. We also investigate how human interaction affects the accuracy of sentiment labeling using limited training data. We propose and evaluate two different active learning strategies for labeling sentiment data. Our experiments with the proposed approach demonstrate a 5%-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.
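One common active learning strategy of the kind evaluated here is uncertainty sampling: label the items whose predicted probability is closest to 0.5. A sketch for the binary case (the paper's exact strategies may differ):

```python
def select_uncertain(probs, k):
    """Pick the indices of the k most uncertain items
    (predicted positive-class probability nearest 0.5)."""
    order = sorted(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))
    return order[:k]
```

The selected indices are the ones handed to a human annotator before retraining.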
Documenting Preservice Teacher Growth through Critical Assessment of Online Lesson Plans
ERIC Educational Resources Information Center
Cude, Michelle D.; Haraway, Dana L.
2017-01-01
This research explores the question of how students in a social studies methods course improve skills in analyzing and critiquing pre-existing lesson plans. It utilizes a pre-post authentic assessment tool to measure student growth in key skills of lesson plan critique over the course of one semester's methods instruction. The results support the…
ERIC Educational Resources Information Center
Corbett, Patrick
2010-01-01
This article presents a consideration of how students' existing information-seeking behaviors affect traditional methods of teaching library research in first-year writing courses and offers an alternative method that uses both library and popular Internet search tools. It addresses one aspect of the ongoing pedagogical struggle with new…
Serang, Oliver; Noble, William Stafford
2012-01-01
The problem of identifying the proteins in a complex mixture using tandem mass spectrometry can be framed as an inference problem on a graph that connects peptides to proteins. Several existing protein identification methods make use of statistical inference methods for graphical models, including expectation maximization, Markov chain Monte Carlo, and full marginalization coupled with approximation heuristics. We show that, for this problem, the majority of the cost of inference usually comes from a few highly connected subgraphs. Furthermore, we evaluate three different statistical inference methods using a common graphical model, and we demonstrate that junction tree inference substantially improves rates of convergence compared to existing methods. The python code used for this paper is available at http://noble.gs.washington.edu/proj/fido. PMID:22331862
A Review On Missing Value Estimation Using Imputation Algorithm
NASA Astrophysics Data System (ADS)
Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina
2017-09-01
The presence of missing values in a data set has always been a major problem for precise prediction. A method for imputing missing values needs to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed to counter the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on whether global or local information in the data set is used for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are also described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that it gives readers a better understanding of trends in imputation methods.
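As a minimal sketch of the global- versus local-information distinction the review draws, the following contrasts column-mean imputation (global) with a toy k-nearest-rows imputation (local). The data and the `k=2` choice are illustrative only (assuming NumPy):

```python
import numpy as np

def mean_impute(X):
    """Global-information imputation: replace NaNs with the column mean."""
    X = X.astype(float).copy()
    col_means = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = np.take(col_means, cols)
    return X

def knn_impute(X, k=2):
    """Local-information imputation: replace NaNs with the mean of the
    k nearest complete rows (Euclidean distance on observed columns)."""
    X = X.astype(float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for row in X:
        miss = np.isnan(row)
        if not miss.any():
            continue
        d = np.linalg.norm(complete[:, ~miss] - row[~miss], axis=1)
        neighbors = complete[np.argsort(d, kind="stable")[:k]]
        row[miss] = neighbors[:, miss].mean(axis=0)
    return X

X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0], [3.0, 4.0]])
X_mean = mean_impute(X)
X_knn = knn_impute(X, k=2)
```

Validation would typically mask known values, impute them, and compare against the ground truth with an error metric such as RMSE.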
A Novel Evaluation Model for the Vehicle Navigation Device Market Using Hybrid MCDM Techniques
NASA Astrophysics Data System (ADS)
Lin, Chia-Li; Hsieh, Meng-Shu; Tzeng, Gwo-Hshiung
The development strategy of the navigation device (ND) is also presented to initiate the product roadmap. Criteria for evaluation are constructed via reviewing papers, interviewing experts and brainstorming. The ISM (interpretive structural modeling) method was used to construct the relationships between the criteria. The existing NDs were sampled to benchmark the gap between the consumers' aspired/desired utilities and the utilities of existing/developing NDs. The VIKOR method was applied to rank the sampled NDs. This paper proposes the key criteria driving the purchase of a new ND and compares the consumer behavior of various user profiles. These conclusions can serve as a reference for ND producers in improving existing functions or planning further utilities for the next-generation ND in the e-era.
Fused methods for visual saliency estimation
NASA Astrophysics Data System (ADS)
Danko, Amanda S.; Lyu, Siwei
2015-02-01
In this work, we present a new model of visual saliency by combining results from existing methods, improving upon their performance and accuracy. By fusing pre-attentive and context-aware methods, we highlight the abilities of state-of-the-art models while compensating for their deficiencies. We put this theory to the test in a series of experiments, comparatively evaluating the visual saliency maps and employing them for content-based image retrieval and thumbnail generation. We find that on average our model yields definitive improvements upon recall and f-measure metrics with comparable precision. In addition, we find that all image searches using our fused method return more correct images and additionally rank them higher than searches using the original methods alone.
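A minimal sketch of map fusion, assuming each saliency method returns a 2-D score map: min-max normalize each map and take a (weighted) mean. The maps and equal weights here are hypothetical; the paper's actual fusion rule may differ:

```python
import numpy as np

def normalize(m):
    """Min-max normalize a saliency map to [0, 1]."""
    m = m.astype(float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse(maps, weights=None):
    """Fuse several saliency maps as a weighted mean of normalized maps."""
    maps = [normalize(m) for m in maps]
    if weights is None:
        weights = np.ones(len(maps))
    weights = np.asarray(weights, float)
    weights = weights / weights.sum()
    return sum(w * m for w, m in zip(weights, maps))

# Tiny synthetic outputs of a pre-attentive and a context-aware model.
pre_attentive = np.array([[0.0, 8.0], [2.0, 4.0]])
context_aware = np.array([[1.0, 3.0], [1.0, 1.0]])
fused = fuse([pre_attentive, context_aware])
```

The fused map stays in [0, 1], so it can be thresholded or ranked directly for retrieval and thumbnail cropping.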
NASA Astrophysics Data System (ADS)
Maris, E.; Froelich, D.
The designers of products subject to the European regulations on waste have an obligation to improve the recyclability of their products from the very first design stages. The statutory texts refer to ISO standard 22628, which proposes a method to calculate vehicle recyclability. There are several scientific studies that propose other calculation methods as well. Yet the feedback from the CREER club, a group of manufacturers and suppliers expert in ecodesign and recycling, is that the product recyclability calculation method proposed in this standard is not satisfactory: only a mass indicator is used, the calculation scope is not clearly defined, and common data on the recycling industry do not exist to allow comparable calculations to be made for different products. For these reasons, it is difficult for manufacturers to have access to a method and common data for calculation purposes.
Automated Geometry assisted PEC for electron beam direct write nanolithography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ocola, Leonidas E.; Gosztola, David J.; Rosenmann, Daniel
Nanoscale geometry assisted proximity effect correction (NanoPEC) is demonstrated to improve PEC for nanoscale structures over standard PEC in terms of feature sharpness for sub-100 nm structures. The method was implemented in an existing commercially available PEC software package. Plasmonic arrays of crosses were fabricated using regular PEC and NanoPEC, and optical absorbance was measured. Results confirm that the improved sharpness of the structures leads to increased sharpness in the optical absorbance spectrum features. We also demonstrated that this method of PEC is applicable to arbitrarily shaped structures beyond crosses.
Sparse Matrix for ECG Identification with Two-Lead Features.
Tseng, Kuo-Kun; Luo, Jiao; Hegarty, Robert; Wang, Wenmin; Haiting, Dong
2015-01-01
Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.
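One plausible reading of the two-lead mapping is a joint-quantization matrix: each time sample's (lead I, lead II) amplitude pair indexes a cell, and only nonzero cells are kept as the sparse feature. This is an illustrative guess at the construction, not the paper's exact algorithm; the signals and bin count are synthetic:

```python
import numpy as np

def two_lead_matrix(lead1, lead2, bins=8):
    """Map a pair of ECG leads onto a 2-D co-occurrence matrix:
    each sample pair (lead1[t], lead2[t]) is quantized into a cell."""
    def quantize(x):
        x = np.asarray(x, float)
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        return np.minimum((x * bins).astype(int), bins - 1)
    q1, q2 = quantize(lead1), quantize(lead2)
    M = np.zeros((bins, bins))
    np.add.at(M, (q1, q2), 1)  # accumulate counts per cell
    return M

def sparsify(M):
    """Keep only nonzero cells as (row, col, value) triples."""
    r, c = np.nonzero(M)
    return list(zip(r.tolist(), c.tolist(), M[r, c].tolist()))

# Synthetic stand-ins for two ECG leads.
t = np.linspace(0, 1, 200)
lead_i = np.sin(2 * np.pi * 5 * t)
lead_ii = np.cos(2 * np.pi * 5 * t)
M = two_lead_matrix(lead_i, lead_ii, bins=8)
triples = sparsify(M)
```

Because most cells stay empty, the triple list is a compact signature that could feed a distance-based identifier.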
New advances in the partial-reflection-drifts experiment using microprocessors
NASA Technical Reports Server (NTRS)
Ruggerio, R. L.; Bowhill, S. A.
1982-01-01
Improvements to the partial reflection drifts experiment are completed. The results of the improvements include real time processing and simultaneous measurements of the D region with coherent scatter. Preliminary results indicate a positive correlation between drift velocities calculated by both methods during a two day interval. The possibility now exists for extended observations between partial reflection and coherent scatter. In addition, preliminary measurements could be performed between partial reflection and meteor radar to complete a comparison of methods used to determine velocities in the D region.
Parameterizing the Variability and Uncertainty of Wind and Solar in CEMs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frew, Bethany
We present current and improved methods for estimating the capacity value and curtailment impacts from variable generation (VG) in capacity expansion models (CEMs). The ideal calculation of these variability metrics is through an explicit co-optimized investment-dispatch model using multiple years of VG and load data. Because of data and computational limitations, existing CEMs typically approximate these metrics using a subset of all hours from a single year and/or using statistical methods, which often do not capture the tail-event impacts or the broader set of interactions between VG, storage, and conventional generators. In our proposed new methods, we use hourly generation and load values across all hours of the year to characterize (1) the contribution of VG to system capacity during high load hours, (2) the curtailment level of VG, and (3) the reduction in VG curtailment due to storage and shutdown of select thermal generators. Using CEM model outputs from a preceding model solve period, we apply these methods to exogenously calculate capacity value and curtailment metrics for the subsequent model solve period. Preliminary results suggest that these hourly methods offer improved capacity value and curtailment representations of VG in the CEM over existing approximation methods without additional computational burdens.
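A toy version of the hourly calculations, on synthetic data: capacity value as mean VG output over the top-N load hours, and curtailment as the VG surplus above load net of a placeholder must-run level. The `min_gen` value, the top-N cutoff, and the random profiles are assumptions for illustration, not quantities from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = 8760
load = 800 + 200 * rng.random(hours)   # MW, synthetic hourly load
vg = 400 * rng.random(hours)           # MW, synthetic wind+solar output

def capacity_value(vg, load, top_n=100):
    """Approximate capacity credit: mean VG output over the top-N load
    hours, as a fraction of VG nameplate (proxied by its max output)."""
    top_hours = np.argsort(load)[-top_n:]
    return vg[top_hours].mean() / vg.max()

def curtailment(vg, load, min_gen=600):
    """Hourly curtailed VG energy when VG plus must-run generation
    exceeds load; min_gen stands in for inflexible thermal output."""
    surplus = vg + min_gen - load
    return np.clip(surplus, 0, vg).sum()  # cannot curtail more than VG

cv = capacity_value(vg, load)
curt = curtailment(vg, load)
```

Storage or flexible-thermal effects would enter by lowering `min_gen` or shifting surplus energy to other hours.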
Construction of mathematical model for measuring material concentration by colorimetric method
NASA Astrophysics Data System (ADS)
Liu, Bing; Gao, Lingceng; Yu, Kairong; Tan, Xianghua
2018-06-01
This paper uses multiple linear regression to analyze the data from Problem C of the 2017 mathematical modeling contest. First, we established regression models for the concentrations of five substances, but only the regression model for the concentration of urea in milk passed the significance test. The regression model established from the second set of data passed the significance test, but it suffered from serious multicollinearity. We improved the model by principal component analysis. The improved model is used to control the system so that the concentration of a material can be measured by the direct colorimetric method.
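The multicollinearity fix described above can be sketched as principal component regression: standardize the predictors, project onto the leading principal component, and regress the response on that score. The data below are synthetic stand-ins for the contest data (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 60
# Two nearly collinear predictors (severe multicollinearity) plus noise.
x1 = rng.random(n)
x2 = x1 + 0.01 * rng.standard_normal(n)
y = 3.0 * x1 + 3.0 * x2 + 0.1 * rng.standard_normal(n)
X = np.column_stack([x1, x2])

# Principal component regression: standardize, project onto the leading
# principal component, then regress y on the component score.
Xc = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
pc1 = Xc @ eigvecs[:, -1]                    # leading component scores
A = np.column_stack([np.ones(n), pc1])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef
r2 = 1 - ((y - y_hat) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Dropping the minor component removes the unstable direction that inflates coefficient variance in the ordinary least-squares fit.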
Pankratz, Curt; Warda, Lynne; Piotrowski, Caroline
2016-01-01
Motor vehicle collisions and bicycle collisions and falls are a leading cause of death by preventable injury for children. In order to design, implement and evaluate campaigns and programs aimed at improving child safety, accurate surveillance is needed. This paper examined the challenges that confront efforts to collect surveillance data relevant to child traffic safety, including observation, interview, and focus group methods. Strategies to address key challenges in order to improve the efficiency and accuracy of surveillance methods were recommended. The potential for new technology to enhance existing surveillance methods was also explored. PMID:27399749
Fast sparse recovery and coherence factor weighting in optoacoustic tomography
NASA Astrophysics Data System (ADS)
He, Hailong; Prakash, Jaya; Buehler, Andreas; Ntziachristos, Vasilis
2017-03-01
Sparse recovery algorithms have shown great potential for reconstructing images from limited-view datasets in optoacoustic tomography, with the disadvantage of being computationally expensive. In this paper, we improve the fast-converging Split Augmented Lagrangian Shrinkage Algorithm (SALSA) method, based on a least squares QR (LSQR) formulation, to perform accelerated reconstructions. Further, a coherence factor is calculated to weight the final reconstruction result, which can further reduce artifacts arising in limited-view scenarios and acoustically heterogeneous media. Several phantom and biological experiments indicate that the accelerated SALSA method with coherence factor (ASALSA-CF) can provide improved reconstructions and much faster convergence compared to existing sparse recovery methods.
The Dark Focus of Visual Accommodation: Its Existence, Its Measurement, Its Effects
1979-11-01
DaVinci depicted the lens as a light-focusing agent, but this went virtually unnoticed as there was no available means to mass-produce his drawings (see ... the Bates method of treating myopia, in which suggestion and relaxation techniques apparently yielded improved acuity. Working with hypnosis, he found: 1) Acute myopes had the greatest improvement during hypnosis. 2) Out of hypnosis, acuity improvement transferred, but no refractive changes
Multi-Target Tracking Using an Improved Gaussian Mixture CPHD Filter.
Si, Weijian; Wang, Liwei; Qu, Zhiyu
2016-11-23
The cardinalized probability hypothesis density (CPHD) filter is an alternative approximation to the full multi-target Bayesian filter for tracking multiple targets. However, although the joint propagation of the posterior intensity and cardinality distribution in its recursion allows more reliable estimates of the target number than the PHD filter, the CPHD filter suffers from the spooky effect where there exists arbitrary PHD mass shifting in the presence of missed detections. To address this issue in the Gaussian mixture (GM) implementation of the CPHD filter, this paper presents an improved GM-CPHD filter, which incorporates a weight redistribution scheme into the filtering process to modify the updated weights of the Gaussian components when missed detections occur. In addition, an efficient gating strategy that can adaptively adjust the gate sizes according to the number of missed detections of each Gaussian component is also presented to further improve the computational efficiency of the proposed filter. Simulation results demonstrate that the proposed method offers favorable performance in terms of both estimation accuracy and robustness to clutter and detection uncertainty over the existing methods.
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
NASA Astrophysics Data System (ADS)
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, resulting in a Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as the GA for this optimisation problem is illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, few snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, yielding erroneous results, whereas GA-based optimisation is attractive due to its global optimisation capability.
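A toy real-coded GA illustrates the optimization step. The `fitness` surface below is a multimodal stand-in, not the paper's cumulant-matrix fitness, and all GA hyperparameters (population, generations, mutation scale) are illustrative (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)

def fitness(theta):
    """Stand-in multimodal fitness over candidate DOA angles (degrees);
    the actual fitness would be built from the FOC cumulant matrix."""
    return 1.0 - np.cos(np.radians(theta - 20.0)) + 0.2 * np.cos(np.radians(5 * theta))

def ga_minimize(f, lo=0.0, hi=180.0, pop=40, gens=60):
    """Minimal GA: tournament selection, blend crossover, Gaussian
    mutation, and elitism (best individual always survives)."""
    P = rng.uniform(lo, hi, pop)
    for _ in range(gens):
        fit = f(P)
        i, j = rng.integers(0, pop, (2, pop))
        parents = np.where(fit[i] < fit[j], P[i], P[j])   # tournaments
        alpha = rng.random(pop)
        children = alpha * parents + (1 - alpha) * np.roll(parents, 1)
        children += rng.normal(0.0, 2.0, pop)             # mutation
        children = np.clip(children, lo, hi)
        children[0] = P[np.argmin(fit)]                   # elitism
        P = children
    return P[np.argmin(f(P))]

theta_hat = ga_minimize(fitness)
```

Elitism makes the best fitness found monotone non-increasing across generations, which is what gives the GA its robustness against the local optima that trap Newton's method.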
Mornkham, Tanupat; Wangsomnuk, Preeya Puangsomlee; Fu, Yong-Bi; Wangsomnuk, Pinich; Jogloy, Sanun; Patanothai, Aran
2013-04-29
Jerusalem artichoke (Helianthus tuberosus L.) is an important tuber crop. However, Jerusalem artichoke seeds contain high levels of starch and lipid, making the extraction of high-quality RNA extremely difficult and gene expression analysis challenging. This study aimed to improve existing methods for extracting total RNA from Jerusalem artichoke dry seeds and to assess the applicability of the improved method in other plant species. Five RNA extraction methods were evaluated on Jerusalem artichoke seeds and two were modified. The modified method showing significant improvement was applied to assay seeds of diverse Jerusalem artichoke accessions, sunflower, rice, maize, peanut and marigold. The effectiveness of the improved method for extracting total RNA from seeds was assessed using qPCR analysis of four selected genes. The improved method of Ma and Yang (2011) yielded maximum RNA solubility and removed most interfering substances. The improved protocol generated 29 to 41 µg RNA per 30 mg fresh weight, and A260/A280 ratios of 1.79 to 2.22 indicated RNA purity. The extracted RNA was effective for downstream applications such as first-strand cDNA synthesis, cDNA cloning and qPCR. The improved method was also effective for extracting total RNA from seeds of sunflower, rice, maize and peanut, which are rich in polyphenols, lipids and polysaccharides.
Research on Swivel Construction Technology of 22,400 Tons in Zoucheng Thirty Meter Bridge
NASA Astrophysics Data System (ADS)
Han, Jun; Benlin, Xiao
2018-05-01
In recent years, with the rapid development of highways and railways in our country, many new bridges need to cross existing routes. If conventional construction methods are used, existing traffic is affected, and building above busy traffic lines poses a major safety risk; construction methods must therefore be improved and innovated. This paper researches and develops some key technologies of swivel construction. In line with the construction features, the finite element method is used to analyse the swivel cable-stayed bridge. The swivel construction process is examined to solve the technical problems and difficulties in construction.
NASA Astrophysics Data System (ADS)
Liang, Li; Takaaki, Ohkubo; Guang-hui, Li
2018-03-01
In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement. However, there are few effective methods to evaluate the effect of reinforcement. Ambient vibration measurements were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in the acceleration response spectrum, natural periods and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. The method can evaluate the effect of seismic retrofitting qualitatively; however, it is difficult to evaluate the effect quantitatively at this stage.
Huang, Hao; Zhang, Guifu; Zhao, Kun; ...
2016-10-20
A hybrid method of combining linear programming (LP) and physical constraints is developed to estimate specific differential phase (K DP) and to improve rain estimation. Moreover, the hybrid K DP estimator and the existing estimators based on LP, least squares fitting, and a self-consistent relation of polarimetric radar variables are evaluated and compared using simulated data. Our simulation results indicate the new estimator's superiority, particularly in regions where the backscattering phase (δ hv) dominates. Further, a quantitative comparison between auto-weather-station rain-gauge observations and K DP-based radar rain estimates for a Meiyu event also demonstrates the superiority of the hybrid K DP estimator over existing methods.
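For context, K_DP is conventionally taken as half the range derivative of the differential phase Φ_DP, so the least squares fitting baseline mentioned above can be sketched as a sliding-window line fit. The profile below is a synthetic ideal, and the window length is an assumption:

```python
import numpy as np

def kdp_ls(phidp, ranges, window=9):
    """Least-squares K_DP estimate: fit a line to Phi_DP over a sliding
    range window; K_DP (deg/km) is half the fitted slope."""
    half = window // 2
    kdp = np.full(phidp.shape, np.nan)  # edges left undefined
    for i in range(half, len(phidp) - half):
        r = ranges[i - half:i + half + 1]
        p = phidp[i - half:i + half + 1]
        slope = np.polyfit(r, p, 1)[0]
        kdp[i] = slope / 2.0
    return kdp

ranges = np.linspace(0, 30, 121)   # km
phidp = 4.0 * ranges               # deg: ideal profile with K_DP = 2 deg/km
kdp = kdp_ls(phidp, ranges)
```

On real data, the window trades noise suppression against smearing of sharp rain cells, which is part of what the LP and hybrid estimators aim to handle better.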
Data distribution method of workflow in the cloud environment
NASA Astrophysics Data System (ADS)
Wang, Yong; Wu, Junjuan; Wang, Ying
2017-08-01
Cloud computing for workflow applications provides the required high efficiency calculation and large storage capacity and it also brings challenges to the protection of trade secrets and other privacy data. Because of privacy data will cause the increase of the data transmission time, this paper presents a new data allocation algorithm based on data collaborative damage degree, to improve the existing data allocation strategy? Safety and public cloud computer algorithm depends on the private cloud; the static allocation method in the initial stage only to the non-confidential data division to improve the original data, in the operational phase will continue to generate data to dynamically adjust the data distribution scheme. The experimental results show that the improved method is effective in reducing the data transmission time.
Remily-Wood, Elizabeth R.; Benson, Kaaron; Baz, Rachid C.; Chen, Y. Ann; Hussein, Mohamad; Hartley-Brown, Monique A.; Sprung, Robert W.; Perez, Brianna; Liu, Richard Z.; Yoder, Sean; Teer, Jamie; Eschrich, Steven A.; Koomen, John M.
2014-01-01
Purpose Quantitative mass spectrometry assays for immunoglobulins (Igs) are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, e.g. multiple myeloma. Experimental design Using LC-MS/MS data, Ig constant region peptides and transitions were selected for liquid chromatography-multiple reaction monitoring mass spectrometry (LC-MRM). Quantitative assays were used to assess Igs in serum from 83 patients. Results LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1–4, IgA1–2, IgM, IgD, and IgE, as well as kappa(κ) and lambda(λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 multiple myeloma cell line and two MM patients. Conclusions and Clinical Relevance LC-MRM assays targeting constant region peptides determine the type and isoform of the involved immunoglobulin and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher interassay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. PMID:24723328
Remily-Wood, Elizabeth R; Benson, Kaaron; Baz, Rachid C; Chen, Y Ann; Hussein, Mohamad; Hartley-Brown, Monique A; Sprung, Robert W; Perez, Brianna; Liu, Richard Z; Yoder, Sean J; Teer, Jamie K; Eschrich, Steven A; Koomen, John M
2014-10-01
Quantitative MS assays for Igs are compared with existing clinical methods in samples from patients with plasma cell dyscrasias, for example, multiple myeloma (MM). Using LC-MS/MS data, Ig constant region peptides, and transitions were selected for LC-MRM MS. Quantitative assays were used to assess Igs in serum from 83 patients. RNA sequencing and peptide-based LC-MRM are used to define peptides for quantification of the disease-specific Ig. LC-MRM assays quantify serum levels of Igs and their isoforms (IgG1-4, IgA1-2, IgM, IgD, and IgE, as well as kappa (κ) and lambda (λ) light chains). LC-MRM quantification has been applied to single samples from a patient cohort and a longitudinal study of an IgE patient undergoing treatment, to enable comparison with existing clinical methods. Proof-of-concept data for defining and monitoring variable region peptides are provided using the H929 MM cell line and two MM patients. LC-MRM assays targeting constant region peptides determine the type and isoform of the involved Ig and quantify its expression; the LC-MRM approach has improved sensitivity compared with the current clinical method, but slightly higher inter-assay variability. Detection of variable region peptides is a promising way to improve Ig quantification, which could produce a dramatic increase in sensitivity over existing methods, and could further complement current clinical techniques. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Inventing and improving ribozyme function: rational design versus iterative selection methods
NASA Technical Reports Server (NTRS)
Breaker, R. R.; Joyce, G. F.
1994-01-01
Two major strategies for generating novel biological catalysts exist. One relies on our knowledge of biopolymer structure and function to aid in the 'rational design' of new enzymes. The other, often called 'irrational design', aims to generate new catalysts, in the absence of detailed physicochemical knowledge, by using selection methods to search a library of molecules for functional variants. Both strategies have been applied, with considerable success, to the remodeling of existing ribozymes and the development of ribozymes with novel catalytic function. The two strategies are by no means mutually exclusive, and are best applied in a complementary fashion to obtain ribozymes with the desired catalytic properties.
NASA Astrophysics Data System (ADS)
Yamaguchi, Makoto; Midorikawa, Saburoh
The empirical equation for estimating the site amplification factor of ground motion from the average shear-wave velocity of the ground (AVS) is examined. In existing equations, the coefficient governing the dependence of the amplification factor on the AVS was treated as constant. The analysis showed that the coefficient varies with the AVS at short periods. A new estimation equation is proposed that accounts for this dependence on the AVS. The new equation can represent the soil characteristic that softer soil has a longer predominant period, and it makes better estimates at short periods than the existing method.
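Equations of this family are commonly written as log10(amp) = a + b·log10(AVS). The sketch below uses placeholder coefficients, not the paper's fitted values, to show why b < 0 makes softer (lower-AVS) soil amplify more:

```python
import math

def amplification(avs, a=2.0, b=-0.6):
    """Illustrative site-amplification model of the form
    log10(amp) = a + b * log10(AVS); a and b here are placeholder
    values, not the study's fitted coefficients."""
    return 10 ** (a + b * math.log10(avs))

# Softer soil (lower AVS, in m/s) amplifies more when b < 0.
amp_soft = amplification(150.0)
amp_stiff = amplification(600.0)
```

The paper's refinement amounts to letting b itself vary with AVS at short periods rather than holding it constant.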
Research in action: using positive deviance to improve quality of health care
Bradley, Elizabeth H; Curry, Leslie A; Ramanadhan, Shoba; Rowe, Laura; Nembhard, Ingrid M; Krumholz, Harlan M
2009-01-01
Background Despite decades of efforts to improve quality of health care, poor performance persists in many aspects of care. Less than 1% of the enormous national investment in medical research is focused on improving health care delivery. Furthermore, when effective innovations in clinical care are discovered, uptake of these innovations is often delayed and incomplete. In this paper, we build on the established principle of 'positive deviance' to propose an approach to identifying practices that improve health care quality. Methods We synthesize existing literature on positive deviance, describe major alternative approaches, propose benefits and limitations of a positive deviance approach for research directed toward improving quality of health care, and describe an application of this approach in improving hospital care for patients with acute myocardial infarction. Results The positive deviance approach, as adapted for use in health care, presumes that the knowledge about 'what works' is available in existing organizations that demonstrate consistently exceptional performance. Steps in this approach: identify 'positive deviants,' i.e., organizations that consistently demonstrate exceptionally high performance in the area of interest (e.g., proper medication use, timeliness of care); study the organizations in-depth using qualitative methods to generate hypotheses about practices that allow organizations to achieve top performance; test hypotheses statistically in larger, representative samples of organizations; and work in partnership with key stakeholders, including potential adopters, to disseminate the evidence about newly characterized best practices. 
The approach is particularly appropriate in situations where organizations can be ranked reliably based on valid performance measures, where there is substantial natural variation in performance within an industry, when openness about practices to achieve exceptional performance exists, and where there is an engaged constituency to promote uptake of discovered practices. Conclusion The identification and examination of health care organizations that demonstrate positive deviance provides an opportunity to characterize and disseminate strategies for improving quality. PMID:19426507
Linear combination methods to improve diagnostic/prognostic accuracy on future observations
Kang, Le; Liu, Aiyi; Tian, Lili
2014-01-01
Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combinations of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the receiver operating characteristic curve. A data set of Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
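The core quantity, the empirical AUC of a linear biomarker combination, can be sketched with a Mann-Whitney count plus a brute-force search over a single mixing weight. Two markers and synthetic data only; this grid search is a simplification of the methods the article reviews, not any one of them:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Empirical AUC via the Mann-Whitney U statistic
    (ties counted as one half)."""
    s = 0.0
    for p in scores_pos:
        for n in scores_neg:
            s += 1.0 if p > n else (0.5 if p == n else 0.0)
    return s / (len(scores_pos) * len(scores_neg))

def best_combo(X_pos, X_neg, weights=np.linspace(0, 1, 101)):
    """Search w in [0,1] so that w*marker1 + (1-w)*marker2
    maximizes the empirical AUC."""
    best_w, best_auc = 0.0, -1.0
    for w in weights:
        a = auc(X_pos @ [w, 1 - w], X_neg @ [w, 1 - w])
        if a > best_auc:
            best_w, best_auc = w, a
    return best_w, best_auc

rng = np.random.default_rng(2)
X_pos = rng.standard_normal((40, 2)) + [1.0, 0.2]   # diseased group
X_neg = rng.standard_normal((40, 2))                # healthy group
w, a = best_combo(X_pos, X_neg)
```

The re-substitution bias the article warns about is visible here: `a` is evaluated on the same samples used to pick `w`, which is exactly what leave-one-pair-out cross-validation corrects.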
El-Jardali, Fadi; Fadlallah, Racha
2017-08-16
Improving quality of care and patient safety practices can strengthen health care delivery systems, improve health sector performance, and accelerate attainment of health-related Sustainability Development Goals. Although quality improvement is now prominent on the health policy agendas of governments in low- and middle-income countries (LMICs), including countries of the Eastern Mediterranean Region (EMR), progress to date has not been optimal. The objective of this study is to comprehensively review existing quality improvement and patient safety policies and strategies in two selected countries of the EMR (Lebanon and Jordan) to determine the extent to which these have been institutionalized within existing health systems. We used a mixed methods approach that combined documentation review, stakeholder surveys and key informant interviews. Existing quality improvement and patient safety initiatives were assessed across five components of an analytical framework for assessing health care quality and patient safety: health systems context; national policies and legislation; organizations and institutions; methods, techniques and tools; and health care infrastructure and resources. Both Lebanon and Jordan have made important progress in terms of increased attention to quality and accreditation in national health plans and strategies, licensing requirements for health care professionals and organizations (albeit to varying extents), and investments in health information systems. A key deficiency in both countries is the absence of an explicit national policy for quality improvement and patient safety across the health system. Instead, there is a spread of several (disjointed) pieces of legal measures and national plans leading to fragmentation and lack of clear articulation of responsibilities across the entire continuum of care. 
Moreover, both countries lack national sets of standardized and applicable quality indicators for performance measurement and benchmarking. Importantly, incentive systems that link contractual agreement, regulations, accreditation, and performance indicators are underutilized in Lebanon and absent in Jordan. At the healthcare organizational level, there is a need to instill a culture of continuous quality improvement and promote professional training in quality improvement and patient safety. Study findings highlight the importance of aligning policies, organizations, methods, capacities and resources in order to institutionalize quality improvement and patient safety practices in health systems. Gaps and dysfunctions identified can help inform national deliberations and dialogues among key stakeholders in each study country. Findings can also inform future quality improvement efforts in the EMR and beyond, with a particular emphasis on LMICs.
Dynamic Bayesian Networks as a Probabilistic Metamodel for Combat Simulations
2014-09-18
test is commonly used for large data sets and is the method of comparison presented in Section 5.5. 4.3.3 Kullback-Leibler Divergence Goodness of Fit … methods exist that might improve the results. A goodness-of-fit test using the Kullback-Leibler divergence was proposed in the first paper, but still … Kullback-Leibler Divergence Goodness of Fit Test …
ERIC Educational Resources Information Center
Botagariyev, Tulegen A.; Kubiyeva, Svetlana S.; Baizakova, Venera E.; Mambetov, Nurolla; Tulegenov, Yerkin K.; Aralbayev, Alpysbay S.; Kairgozhin, Dulat U.
2016-01-01
The purpose of this study was to determine the effectiveness of the existing model of teaching physical training in secondary schools and to analyze a game-like method introduced to improve the physical fitness of students. The authors substantiated the use of a game-like method during physical training classes, the implementation of which should create…
Pseudorange Measurement Method Based on AIS Signals.
Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng
2017-05-22
In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.
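The paper's fusion step, an optimal combination of the zero-crossing and differential-peak timestamp estimates by minimum mean square error, can be sketched as an inverse-variance weighting; the function name and the numbers below are illustrative, not taken from the paper:

```python
# Hedged sketch: MMSE (inverse-variance) fusion of two unbiased timestamp
# estimates, standing in for the paper's optimal-estimation step.
def mmse_fuse(est_a, var_a, est_b, var_b):
    """Weight each estimate inversely to its variance; for unbiased,
    independent inputs this minimizes the variance of the fused estimate."""
    w_a = var_b / (var_a + var_b)
    w_b = var_a / (var_a + var_b)
    fused = w_a * est_a + w_b * est_b
    fused_var = (var_a * var_b) / (var_a + var_b)
    return fused, fused_var

# A noisier zero-crossing estimate fused with a differential-peak estimate:
# the result is pulled toward the lower-variance input.
t, v = mmse_fuse(100.4, 4.0, 100.1, 1.0)
```

Note that the fused variance is always smaller than either input variance, which is the motivation for fusing the two detectors rather than picking one.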
PMID:28531153
A robust and hierarchical approach for the automatic co-registration of intensity and visible images
NASA Astrophysics Data System (ADS)
González-Aguilera, Diego; Rodríguez-Gonzálvez, Pablo; Hernández-López, David; Luis Lerma, José
2012-09-01
This paper presents a new robust approach to integrate intensity and visible images which have been acquired with a terrestrial laser scanner and a calibrated digital camera, respectively. In particular, an automatic and hierarchical method for the co-registration of both sensors is developed. The approach integrates several existing solutions to improve the performance of the co-registration between range-based and visible images: the Affine Scale-Invariant Feature Transform (A-SIFT), the epipolar geometry, the collinearity equations, the Groebner basis solution and the RANdom SAmple Consensus (RANSAC), integrating a voting scheme. The approach presented herein improves the existing co-registration approaches in automation, robustness, reliability and accuracy.
Method and apparatus for steam mixing a nuclear fueled electricity generation system
Tsiklauri, Georgi V.; Durst, Bruce M.
1996-01-01
A method and apparatus for improving the efficiency and performance of a nuclear electrical generation system that comprises the addition of steam handling equipment to an existing plant that results in a surprising increase in plant performance. More particularly, a gas turbine electrical generation system with heat recovery boiler is installed along with a micro-jet high pressure and a low pressure mixer superheater. Depending upon plant characteristics, the existing moisture separator reheater (MSR) can be either augmented or done away with. The instant invention enables a reduction in T_hot without a derating of the reactor unit, and improves efficiency of the plant's electrical conversion cycle. Coupled with this advantage is a possible extension of the plant's fuel cycle length due to an increased electrical conversion efficiency. The reduction in T_hot further allows for a surprising extension of steam generator life. An additional advantage is the reduction in erosion/corrosion of secondary system components including turbine blades and diaphragms. The gas turbine generator used in the instant invention can also replace or augment existing peak or emergency power needs. Another benefit of the instant invention is the extension of plant life and the reduction of downtime due to refueling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oesterling, Patrick; Scheuermann, Gerik; Teresniak, Sven
During the last decades, electronic textual information has become the world's largest and most important information source available. People have added a variety of daily newspapers, books, scientific and governmental publications, blogs and private messages to this wellspring of endless information and knowledge. Since neither the existing nor the new information can be read in its entirety, computers are used to extract and visualize meaningful or interesting topics and documents from this huge information clutter. In this paper, we extend, improve and combine existing individual approaches into an overall framework that supports topological analysis of high dimensional document point clouds given by the well-known tf-idf document-term weighting method. We show that traditional distance-based approaches fail in very high dimensional spaces, and we describe an improved two-stage method for topology-based projections from the original high dimensional information space to both two dimensional (2-D) and three dimensional (3-D) visualizations. To show the accuracy and usability of this framework, we compare it to methods introduced recently and apply it to complex document and patent collections.
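The tf-idf document-term weighting that this framework takes as input can be sketched in a few lines; the toy corpus below is, of course, hypothetical:

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists -> one {term: tf-idf weight} dict per doc.
    Uses raw term frequency normalized by document length, and idf = log(N/df)."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))                      # document frequency per term
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["cat", "dog"], ["cat", "cat"], ["fish"]]
w = tfidf(docs)    # e.g. w[2]["fish"] == log(3): rare terms get high weight
```

With this weighting, each document becomes a point in a space with one dimension per vocabulary term, which is exactly the high-dimensional point cloud the abstract refers to.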
Salissou, Yacoubou; Panneton, Raymond
2010-11-01
Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method which is not well known or discussed in the literature while exhibiting great potential is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more ISO 10534-2 standard-compliant. Glass wool, melamine foam and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges the alternative method yields results that are comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. However, in the low frequency range, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.
A Novel Method for Block Size Forensics Based on Morphological Operations
NASA Astrophysics Data System (ADS)
Luo, Weiqi; Huang, Jiwu; Qiu, Guoping
Passive forensics analysis aims to find out how multimedia data is acquired and processed without relying on pre-embedded or pre-registered information. Since most existing compression schemes for digital images are based on block processing, one of the fundamental steps for subsequent forensics analysis is to detect the presence of block artifacts and estimate the block size for a given image. In this paper, we propose a novel method for blind block size estimation. A 2×2 cross-differential filter is first applied to detect all possible block artifact boundaries, morphological operations are then used to remove the boundary effects caused by the edges of the actual image contents, and finally maximum-likelihood estimation (MLE) is employed to estimate the block size. The experimental results, evaluated on over 1300 natural images, show the effectiveness of our proposed method. Compared with an existing gradient-based detection method, our method achieves over 39% accuracy improvement on average.
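A toy version of the underlying idea (not the paper's cross-differential filter, morphology and MLE pipeline) can be sketched as follows: block artifacts produce periodic jumps in the column gradient, so each candidate period is scored by how much stronger the gradient is on its boundary columns than in its interior.

```python
import numpy as np

# Toy sketch of blind block-size estimation; the candidate set and the
# boundary-vs-interior score are illustrative simplifications.
def estimate_block_size(img, candidates=(4, 8, 16)):
    col_grad = np.abs(np.diff(img.astype(float), axis=1)).mean(axis=0)
    scores = {}
    for b in candidates:
        idx = np.arange(b - 1, col_grad.size, b)       # boundary columns
        boundary = col_grad[idx].mean()
        interior = np.delete(col_grad, idx).mean()
        scores[b] = boundary - interior                # periodicity strength
    return max(scores, key=scores.get)

# Synthetic 64x64 image tiled with 8x8 constant blocks (checkerboard levels).
tile = (np.indices((8, 8)).sum(axis=0) % 2) * 200
img = np.kron(tile, np.ones((8, 8)))
est = estimate_block_size(img)   # -> 8
```

On a real JPEG the boundary jumps are far weaker than in this synthetic case, which is why the paper needs morphological cleanup of true image edges before estimation.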
Hybrid recommendation methods in complex networks.
Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N
2015-07-01
We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement of performances of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
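The convex-combination step the authors describe can be sketched as follows; the score values and the weight `lam` are illustrative, and min-max scaling stands in for whatever normalization the paper actually uses:

```python
# Hedged sketch: normalize two recommendation score lists to a common scale,
# then blend them with a convex combination weighted by lam.
def convex_blend(user_scores, item_scores, lam=0.5):
    """lam in [0, 1] is the weight on the user-based scores."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    u, i = norm(user_scores), norm(item_scores)
    return [lam * a + (1 - lam) * b for a, b in zip(u, i)]

# User-based and object-based scores on very different scales still
# combine sensibly once both are normalized.
blended = convex_blend([0.1, 0.9, 0.5], [10.0, 2.0, 6.0], lam=0.5)
```

Without the normalization step, the list with the larger numeric range would dominate the blend regardless of `lam`, which is the point of the "appropriate normalization" the abstract emphasizes.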
Max-margin multiattribute learning with low-rank constraint.
Zhang, Qiang; Chen, Lin; Li, Baoxin
2014-07-01
Attribute learning has attracted a lot of interests in recent years for its advantage of being able to model high-level concepts with a compact set of midlevel attributes. Real-world objects often demand multiple attributes for effective modeling. Most existing methods learn attributes independently without explicitly considering their intrinsic relatedness. In this paper, we propose max margin multiattribute learning with low-rank constraint, which learns a set of attributes simultaneously, using only relative ranking of the attributes for the data. By learning all the attributes simultaneously through low-rank constraint, the proposed method is able to capture their intrinsic correlation for improved learning; by requiring only relative ranking, the method avoids restrictive binary labels of attributes that are often assumed by many existing techniques. The proposed method is evaluated on both synthetic data and real visual data including a challenging video data set. Experimental results demonstrate the effectiveness of the proposed method.
Improving surgeon utilization in an orthopedic department using simulation modeling
Simwita, Yusta W; Helgheim, Berit I
2016-01-01
Purpose Worldwide more than two billion people lack appropriate access to surgical services due to the mismatch between existing human resources and patient demands. Improving utilization of the existing workforce capacity can reduce the gap between surgical demand and available workforce capacity. In this paper, the authors use discrete event simulation to explore the care process at an orthopedic department. Our main focus is improving the utilization of surgeons while minimizing patient wait time. Methods The authors collaborated with orthopedic department personnel to map the current operations of the orthopedic care process in order to identify factors that contribute to poor surgeon utilization and high patient waiting time. The authors used an observational approach to collect data. The developed model was validated by comparing the simulation output with the actual patient data collected from the studied orthopedic care process. The authors developed a proposal scenario to show how to improve surgeon utilization. Results The simulation results showed that if ancillary services could be performed before the start of clinic examination services, the orthopedic care process could be greatly improved, that is, with improved surgeon utilization and reduced patient waiting time. Simulation results demonstrate that with improved surgeon utilization, up to a 55% increase in future demand can be accommodated without patients exceeding the current waiting time at this clinic, thus improving patient access to health care services. Conclusion This study shows how simulation modeling can be used to improve health care processes. This study was limited to a single care process; however, the findings can be applied to improve other orthopedic care processes with similar operational characteristics. PMID:29355193
Global Optimization Ensemble Model for Classification Methods
Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab
2014-01-01
Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, each with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier when solving a supervised learning problem, such as the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data. All of these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. Nor is there any generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30%, depending upon the algorithm complexity. PMID:24883382
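One simple instance of the ensemble idea, a plain majority vote rather than the paper's GMC model, looks like this (classifier outputs are hypothetical):

```python
from collections import Counter

# Hedged sketch: fuse several classifiers' predictions by majority vote,
# which can reduce the impact of any single classifier's bias or noise.
def majority_vote(predictions):
    """predictions: list of per-classifier label lists -> fused label list."""
    fused = []
    for labels in zip(*predictions):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

clf_a = ["spam", "ham", "spam"]
clf_b = ["spam", "spam", "ham"]
clf_c = ["ham", "ham", "spam"]
fused = majority_vote([clf_a, clf_b, clf_c])   # -> ["spam", "ham", "spam"]
```

The vote helps only when the member classifiers make different mistakes; an optimization-based ensemble like the one proposed goes further by weighting members instead of counting them equally.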
I-SonReb: an improved NDT method to evaluate the in situ strength of carbonated concrete
NASA Astrophysics Data System (ADS)
Breccolotti, Marco; Bonfigli, Massimo F.
2015-10-01
Concrete strength evaluated in situ by means of the conventional SonReb method can be highly overestimated in the presence of carbonation. The latter, in fact, is responsible for the physical and chemical alteration of the outer layer of concrete. As most existing concrete structures are subject to carbonation, it is of high importance to overcome this problem. In this paper, an Improved SonReb method (I-SonReb) for carbonated concretes is proposed. It relies on the definition of a correction coefficient for the measured rebound index as a function of the carbonated concrete cover thickness, an additional parameter to be measured during in situ testing campaigns. The usefulness of the method has been validated by showing the improvement in the accuracy of concrete strength estimation on two sets of NDT experimental data collected from investigations on real structures.
Tilt measurement using inclinometer based on redundant configuration of MEMS accelerometers
NASA Astrophysics Data System (ADS)
Lu, Jiazhen; Liu, Xuecong; Zhang, Hao
2018-05-01
Inclinometers are widely used in tilt measurement and their required accuracy is becoming ever higher. Most existing methods work effectively only when the tilt is less than 60°, and their accuracy can still be improved. A redundant configuration of micro-electro-mechanical system accelerometers is proposed in this paper, and a least squares method and data processing normalization are used. A rigorous mathematical derivation is given, and simulation and experiment are used to verify its feasibility. The results of a Monte Carlo simulation, repeated 3000 times, and turntable reference experiments show that the tilt measurement range can be expanded to 0°–90° by this method, that the measurement accuracy of θ can be improved by more than 10 times, and that the measurement accuracy of γ can also be improved effectively. The proposed method is proved to be effective and significant in practical application.
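The redundant-configuration idea can be sketched with a least-squares solve: each accelerometer measures the projection of gravity onto its sensing axis, and with more axes than unknowns the gravity vector, and hence the tilt, is recovered by least squares. The sensing axes and readings below are hypothetical, not the paper's configuration:

```python
import numpy as np

# Hedged sketch: 4 sensing axes (one redundant) observing gravity projections.
U = np.array([[1.0,   0.0,   0.0],
              [0.0,   1.0,   0.0],
              [0.0,   0.0,   1.0],
              [0.577, 0.577, 0.577]])      # redundant diagonal axis
g_true = np.array([0.0, 0.5, 0.866])       # gravity direction, ~30 deg tilt
meas = U @ g_true                          # noiseless readings for the demo

# Over-determined system: solve U g = meas in the least-squares sense.
g_est, *_ = np.linalg.lstsq(U, meas, rcond=None)
tilt_deg = np.degrees(np.arccos(g_est[2] / np.linalg.norm(g_est)))
```

With noisy readings, the redundant axis lets the least-squares solution average out sensor errors, which is the accuracy gain the abstract reports.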
Ma, Junshui; Bayram, Sevinç; Tao, Peining; Svetnik, Vladimir
2011-03-15
After a review of the ocular artifact reduction literature, a high-throughput method designed to reduce the ocular artifacts in multichannel continuous EEG recordings acquired at clinical EEG laboratories worldwide is proposed. The proposed method belongs to the category of component-based methods, and does not rely on any electrooculography (EOG) signals. Based on a concept that all ocular artifact components exist in a signal component subspace, the method can uniformly handle all types of ocular artifacts, including eye-blinks, saccades, and other eye movements, by automatically identifying ocular components from decomposed signal components. This study also proposes an improved strategy to objectively and quantitatively evaluate artifact reduction methods. The evaluation strategy uses real EEG signals to synthesize realistic simulated datasets with different amounts of ocular artifacts. The simulated datasets enable us to objectively demonstrate that the proposed method outperforms some existing methods when no high-quality EOG signals are available. Moreover, the results of the simulated datasets improve our understanding of the involved signal decomposition algorithms, and provide us with insights into the inconsistency regarding the performance of different methods in the literature. The proposed method was also applied to two independent clinical EEG datasets involving 28 volunteers and over 1000 EEG recordings. This effort further confirms that the proposed method can effectively reduce ocular artifacts in large clinical EEG datasets in a high-throughput fashion. Copyright © 2011 Elsevier B.V. All rights reserved.
Zero leakage separable and semipermanent ducting joints
NASA Technical Reports Server (NTRS)
Mischel, H. T.
1973-01-01
A study program has been conducted to explore new methods of achieving zero-leakage, separable and semipermanent ducting joints for space flight vehicles. The study consisted of a literature search of existing zero-leakage methods, the generation of concepts for new methods of achieving the desired zero-leakage criteria, and the development of a detailed analysis and design of a selected concept. Other techniques of leak detection were also explored with a view toward improving this area.
A Method that Will Captivate U.
Martin, Sophie; Coller, Jeff
2015-09-03
In an age of next-generation sequencing, the ability to purify RNA transcripts has become a critical issue. In this issue, Duffy et al. (2015) improve on a pre-existing technique of RNA labeling and purification by 4-thiouridine tagging. By increasing the efficiency of RNA capture, this method will enhance the ability to study RNA dynamics, especially for transcripts normally inefficiently captured by previous methods. Copyright © 2015 Elsevier Inc. All rights reserved.
New developments in transit noise and vibration criteria
NASA Astrophysics Data System (ADS)
Hanson, Carl E.
2004-05-01
Federal Transit Administration (FTA) noise and vibration impact criteria were developed in the early 1990s. The noise criteria are ambient-based, developed from the Schultz curve and fundamental research performed by the U.S. Environmental Protection Agency in the 1970s. The vibration criteria are single-value rms vibration velocity levels. After 10 years of experience applying the criteria in assessments of new transit projects throughout the United States, FTA is updating its methods. The approach to assessing new projects in existing high-noise environments will be clarified. A method for assessing noise impacts due to horn blowing at grade crossings will be provided. The vibration criteria will be expanded to include spectral information. This paper summarizes the background of the current criteria, discusses examples where existing methods are lacking, and describes the planned remedies to improve the criteria and methods.
Study on application of aerospace technology to improve surgical implants
NASA Technical Reports Server (NTRS)
Johnson, R. E.; Youngblood, J. L.
1982-01-01
The areas where aerospace technology could be used to improve the reliability and performance of metallic orthopedic implants were assessed. Specifically, comparisons were made between the material controls, design approaches, analytical methods and inspection approaches used in the implant industry and those used in the aerospace industries. Several areas for possible improvement were noted, such as increased use of finite element stress analysis and fracture control programs on devices where the need exists for maximum reliability and high structural performance.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Vandoormaal, J. P.; Turan, A.; Raithby, G. D.
1986-01-01
The objective of the present study is to improve both the accuracy and computational efficiency of existing numerical techniques used to predict viscous recirculating flows in combustors. A review of the status of the study is presented along with some illustrative results. The effort to improve the numerical techniques consists of the following technical tasks: (1) selection of numerical techniques to be evaluated; (2) two dimensional evaluation of selected techniques; and (3) three dimensional evaluation of technique(s) recommended in Task 2.
Efficient Strategies for Estimating the Spatial Coherence of Backscatter
Hyun, Dongwoon; Crowley, Anna Lisa C.; Dahl, Jeremy J.
2017-01-01
The spatial coherence of ultrasound backscatter has been proposed to reduce clutter in medical imaging, to measure the anisotropy of the scattering source, and to improve the detection of blood flow. These techniques rely on correlation estimates that are obtained using computationally expensive strategies. In this study, we assess existing spatial coherence estimation methods and propose three computationally efficient modifications: a reduced kernel, a downsampled receive aperture, and the use of an ensemble correlation coefficient. The proposed methods are implemented in simulation and in vivo studies. Reducing the kernel to a single sample improved computational throughput and improved axial resolution. Downsampling the receive aperture was found to have negligible effect on estimator variance, and improved computational throughput by an order of magnitude for a downsample factor of 4. The ensemble correlation estimator demonstrated lower variance than the currently used average correlation. Combining the three methods, the throughput was improved 105-fold in simulation with a downsample factor of 4 and 20-fold in vivo with a downsample factor of 2. PMID:27913342
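The ensemble correlation coefficient the authors propose, pooling numerators and denominators over all pairs instead of averaging per-pair coefficients, can be sketched on synthetic signals (this ignores the aperture and kernel details of the actual beamforming setup):

```python
import numpy as np

# Hedged sketch: average-of-correlations vs a single ensemble correlation
# computed from pooled covariance sums over all signal pairs.
def avg_corr(pairs):
    return np.mean([np.corrcoef(a, b)[0, 1] for a, b in pairs])

def ensemble_corr(pairs):
    num = sum(np.dot(a - a.mean(), b - b.mean()) for a, b in pairs)
    den = np.sqrt(sum(np.dot(a - a.mean(), a - a.mean()) for a, _ in pairs) *
                  sum(np.dot(b - b.mean(), b - b.mean()) for _, b in pairs))
    return num / den

# Synthetic correlated pairs: a shared component plus independent noise.
rng = np.random.default_rng(1)
base = rng.standard_normal(64)
pairs = [(base + 0.3 * rng.standard_normal(64),
          base + 0.3 * rng.standard_normal(64)) for _ in range(8)]
r_avg, r_ens = avg_corr(pairs), ensemble_corr(pairs)
```

Because the ensemble estimator normalizes once over the pooled sums, short noisy segments contribute according to their energy rather than each getting an equal, high-variance vote, which is the variance reduction the abstract reports.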
NASA Astrophysics Data System (ADS)
Wang, Hongyan
2017-04-01
This paper addresses the waveform optimization problem for improving the detection performance of multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) radar-based space-time adaptive processing (STAP) in complex environments. By maximizing the output signal-to-interference-plus-noise ratio (SINR), the waveform optimization problem for improving the detection performance of STAP, subject to a constant modulus constraint, is derived. To tackle the resulting nonlinear and complicated optimization problem, a diagonal loading-based method is proposed to reformulate it as a semidefinite programming problem, which can then be solved very efficiently. The optimized waveform can thus be obtained to maximize the output SINR of MIMO-OFDM, so that the detection performance of STAP is improved. The simulation results show that the proposed method can improve the output SINR detection performance considerably compared with uncorrelated waveforms and the existing MIMO-based STAP method.
Improved dense trajectories for action recognition based on random projection and Fisher vectors
NASA Astrophysics Data System (ADS)
Ai, Shihui; Lu, Tongwei; Xiong, Yudian
2018-03-01
As an important application of intelligent monitoring systems, action recognition in video has become a very important research area of computer vision. In order to improve the accuracy of action recognition based on improved dense trajectories, an advanced encoding method is introduced: improved dense trajectories are combined with Fisher vectors and random projection. The method reduces the dimensionality of the trajectory descriptors by projecting them into a low-dimensional subspace via random projection, after defining and analyzing a Gaussian mixture model, and a GMM-FV hybrid model is introduced to encode the trajectory feature vectors. The computational complexity is reduced by the random projection, which shrinks the Fisher coding vector. Finally, a linear SVM classifier is used to predict labels. We tested the algorithm on the UCF101 and KTH datasets. Compared with several existing algorithms, the results show that the method not only reduces the computational complexity but also improves the accuracy of action recognition.
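The random projection step can be sketched as a scaled Gaussian matrix applied to the descriptors; the dimensions D and d below are illustrative, not the paper's settings:

```python
import numpy as np

# Hedged sketch: project D-dim trajectory descriptors down to d dimensions
# with a Gaussian random matrix, which approximately preserves pairwise
# distances (Johnson-Lindenstrauss style).
rng = np.random.default_rng(0)
D, d = 426, 64                       # hypothetical descriptor sizes
R = rng.standard_normal((d, D)) / np.sqrt(d)   # scaling keeps norms ~unchanged
x = rng.standard_normal((10, D))     # 10 hypothetical descriptors
x_low = x @ R.T                      # reduced features, shape (10, 64)
```

The attraction over learned reductions like PCA is that the projection matrix is data-independent and cheap to generate, which is what makes it useful before the expensive Fisher vector encoding.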
A reconsideration of negative ratings for network-based recommendation
NASA Astrophysics Data System (ADS)
Hu, Liang; Ren, Liang; Lin, Wenbin
2018-01-01
Recommendation algorithms based on bipartite networks have become increasingly popular, thanks to their accuracy and flexibility. Currently, many of these methods ignore users' negative ratings. In this work, we propose a method to exploit negative ratings for the network-based inference algorithm. We find that negative ratings play a positive role regardless of sparsity of data sets. Furthermore, we improve the efficiency of our method and compare it with the state-of-the-art algorithms. Experimental results show that the present method outperforms the existing algorithms.
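The baseline network-based inference (mass diffusion) that the negative-rating extension builds on can be sketched on a toy bipartite network; the adjacency matrix is illustrative, and the paper's treatment of negative ratings is not reproduced here:

```python
import numpy as np

# Hedged sketch of standard network-based inference: resource spreads from
# each object to the users who collected it, then back to all their objects.
A = np.array([[1, 1, 0],     # user 0 collected objects 0 and 1
              [1, 0, 1],     # user 1 collected objects 0 and 2
              [0, 1, 0]])    # user 2 collected object 1
ku = A.sum(axis=1)           # user degrees
ko = A.sum(axis=0)           # object degrees

# W[b, a]: fraction of object b's resource that lands on object a after the
# object -> users -> objects spreading step.
W = (A / ko).T @ (A / ku[:, None])
scores = A @ W               # per-user recommendation score for each object
```

Resource is conserved (each row of W sums to 1), and uncollected objects with high scores become the recommendations; here user 2 would be recommended object 0 over object 2.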
The use of periodization in exercise prescriptions for inactive adults: A systematic review
Strohacker, Kelley; Fazzino, Daniel; Breslin, Whitney L.; Xu, Xiaomeng
2015-01-01
Background Periodization of exercise is a method typically used in sports training, but the impact of periodized exercise on health outcomes in untrained adults is unclear. Purpose This review aims to summarize existing research wherein aerobic or resistance exercise was prescribed to inactive adults using a recognized periodization method. Methods A search of relevant databases, conducted between January and February of 2014, yielded 21 studies published between 2000 and 2013 that assessed the impact of periodized exercise on health outcomes in untrained participants. Results Substantial heterogeneity existed between studies, even under the same periodization method. Compared to baseline values or non-training control groups, prescribing periodized resistance or aerobic exercise yielded significant improvements in health outcomes related to traditional and emerging risk factors for cardiovascular disease, low-back and neck/shoulder pain, disease severity, and quality of life, with mixed results for increasing bone mineral density. Conclusions Although it is premature to conclude that periodized exercise is superior to non-periodized exercise for improving health outcomes, periodization appears to be a feasible means of prescribing exercise to inactive adults within an intervention setting. Further research is necessary to understand the effectiveness of periodizing aerobic exercise, the psychological effects of periodization, and the feasibility of implementing flexible non-linear methods. PMID:26844095
Bifurcating fronts for the Taylor-Couette problem in infinite cylinders
NASA Astrophysics Data System (ADS)
Hărăguş-Courcelle, M.; Schneider, G.
We show the existence of bifurcating fronts for the weakly unstable Taylor-Couette problem in an infinite cylinder. These fronts connect a stationary bifurcating pattern, here the Taylor vortices, with the trivial ground state, here the Couette flow. In order to show the existence result, we improve a method which was already used in establishing the existence of bifurcating fronts for the Swift-Hohenberg equation by Collet and Eckmann (1986) and by Eckmann and Wayne (1991). The existence proof is based on spatial dynamics and center manifold theory. One of the difficulties in applying center manifold theory comes from an infinite number of eigenvalues on the imaginary axis for vanishing bifurcation parameter. Nevertheless, a finite dimensional reduction is possible, since the eigenvalues leave the imaginary axis with different velocities as the bifurcation parameter is increased. In contrast to previous work, we have to use normal-form methods and a non-standard cut-off function to obtain a center manifold which is large enough to contain the bifurcating fronts.
A New Moving Object Detection Method Based on Frame-difference and Background Subtraction
NASA Astrophysics Data System (ADS)
Guo, Jiajia; Wang, Junping; Bai, Ruixue; Zhang, Yao; Li, Yong
2017-09-01
Although many methods of moving object detection have been proposed, moving object extraction is still at the core of video surveillance. With the complex scenes of the real world, however, false detections, missed detections and cavities inside the detected body still exist. In order to solve the problem of incomplete detection of moving objects, a new moving object detection method combining an improved frame-difference with Gaussian mixture background subtraction is proposed in this paper. To make the moving object detection more complete and accurate, image repair and morphological processing techniques, which are spatial compensations, are applied in the proposed method. Experimental results show that our method can effectively eliminate ghosts and noise and fill the cavities of the moving object. Compared to four other moving object detection methods (GMM, ViBe, frame-difference, and a method from the literature), the proposed method improves the efficiency and accuracy of the detection.
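A toy version of the combination (frame difference OR background subtraction, with a running-average background standing in for the paper's Gaussian mixture model, and without the repair/morphology stages) can be sketched as:

```python
import numpy as np

def detect(frames, alpha=0.1, thresh=20):
    """Union of frame-difference and background-subtraction masks; the union
    recovers pixels that either cue alone would miss inside the object."""
    bg = frames[0].astype(float)
    masks = []
    for prev, cur in zip(frames, frames[1:]):
        cur_f = cur.astype(float)
        fd = np.abs(cur_f - prev.astype(float)) > thresh   # frame difference
        bs = np.abs(cur_f - bg) > thresh                   # background subtraction
        masks.append(fd | bs)
        bg = (1 - alpha) * bg + alpha * cur_f              # update background
    return masks

# Static dark background; a bright 3x3 "object" appears in the second frame.
f0 = np.zeros((8, 8), dtype=np.uint8)
f1 = f0.copy()
f1[2:5, 2:5] = 255
masks = detect([f0, f1])
```

Frame differencing alone tends to hollow out slow, uniform objects while background subtraction alone suffers from ghosts after sudden changes; taking the union and then cleaning it morphologically, as the paper does, mitigates both failure modes.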
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties and is calculated in the same way as the previous method. Generally, a small number of arithmetic operations, resulting in a shorter simulation time, is desired. The alternating direction implicit technique can be considered a significant step forward in improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
The Data-to-Action Framework: A Rapid Program Improvement Process.
Zakocs, Ronda; Hill, Jessica A; Brown, Pamela; Wheaton, Jocelyn; Freire, Kimberley E
2015-08-01
Although health education programs may benefit from quality improvement methods, scant resources exist to help practitioners apply these methods for program improvement. The purpose of this article is to describe the Data-to-Action framework, a process that guides practitioners through rapid-feedback cycles in order to generate actionable data to improve implementation of ongoing programs. The framework was designed while implementing DELTA PREP, a 3-year project aimed at building the primary prevention capacities of statewide domestic violence coalitions. The authors describe the framework's main steps and provide a case example of a rapid-feedback cycle and several examples of rapid-feedback memos produced during the project period. The authors also discuss implications for health education evaluation and practice. © 2015 Society for Public Health Education.
NASA Astrophysics Data System (ADS)
Lin, Wei; Li, Xizhe; Yang, Zhengming; Lin, Lijun; Xiong, Shengchun; Wang, Zhiyuan; Wang, Xiangyang; Xiao, Qianhua
Based on the basic principle of the porosity method in image segmentation, and considering the relationship between the porosity of rocks and the fractal characteristics of their pore structures, a new improved image segmentation method is proposed that uses the calculated porosity of the core images as a constraint to obtain the best threshold. Comparative analysis shows that the porosity method can in theory segment images best, but its actual segmentation deviates from the real situation. Due to the heterogeneity and isolated pores of cores, the porosity method, which takes the experimental porosity of the whole core as the criterion, cannot achieve the desired segmentation effect. In contrast, the new improved method overcomes these shortcomings and produces a more reasonable binary segmentation of the core grayscale images, segmenting each image according to its own calculated porosity. Moreover, basing the segmentation on the calculated rather than the measured porosity also greatly saves manpower and material resources, especially for tight rocks.
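The porosity constraint can be illustrated with a toy threshold search; this sketch assumes pores are the darkest pixels and that the target porosity has already been calculated for the image:

```python
def porosity_threshold(pixels, porosity):
    """Pick the gray level below which the pore fraction best
    matches the target porosity (pores assumed dark)."""
    flat = sorted(p for row in pixels for p in row)
    n = len(flat)
    k = round(porosity * n)          # number of pixels to label as pore
    if k <= 0:
        return min(flat) - 1         # no pixel falls below threshold
    return flat[k - 1]               # the k darkest pixels are <= threshold

img = [[12, 200, 190],
       [15, 180, 210],
       [14, 205, 195]]
t = porosity_threshold(img, 3 / 9)   # target porosity ~0.33
print(t)  # 15: the three darkest pixels (12, 14, 15) become pores
```

A binary pore mask then follows as `p <= t` per pixel; the paper's method additionally ties this per-image constraint to the fractal character of the pore structure.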
VMT Mix Modeling for Mobile Source Emissions Forecasting: Formulation and Empirical Application
DOT National Transportation Integrated Search
2000-05-01
The purpose of the current report is to propose and implement a methodology for obtaining improved link-specific vehicle miles of travel (VMT) mix values compared to those obtained from existent methods. Specifically, the research is developing a fra...
Improved design of electrophoretic equipment for rapid sickle-cell-anemia screening
NASA Technical Reports Server (NTRS)
Reddick, J. M.; Hirsch, I.
1974-01-01
Effective mass screening may be accomplished by modifying existing electrophoretic equipment used in conjunction with a multisample applicator and cellulose-acetate-matrix test paper. Using this method, approximately 20 to 25 samples can undergo electrophoresis in 5 to 6 minutes.
DOT National Transportation Integrated Search
2016-05-01
Florida International University researchers examined the existing performance measures and the project prioritization method in the CMP and updated them to better reflect the current conditions and strategic goals of FDOT. They also developed visual...
Machine learning in heart failure: ready for prime time.
Awan, Saqib Ejaz; Sohel, Ferdous; Sanfilippo, Frank Mario; Bennamoun, Mohammed; Dwivedi, Girish
2018-03-01
The aim of this review is to present an up-to-date overview of the application of machine learning methods in heart failure including diagnosis, classification, readmissions and medication adherence. Recent studies have shown that the application of machine learning techniques may have the potential to improve heart failure outcomes and management, including cost savings by improving existing diagnostic and treatment support systems. Recently developed deep learning methods are expected to yield even better performance than traditional machine learning techniques in performing complex tasks by learning the intricate patterns hidden in big medical data. The review summarizes the recent developments in the application of machine and deep learning methods in heart failure management.
Online optimization of storage ring nonlinear beam dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Xiaobiao; Safranek, James
2015-08-01
We propose to optimize the nonlinear beam dynamics of existing and future storage rings with direct online optimization techniques. This approach may have crucial importance for the implementation of diffraction limited storage rings. In this paper considerations and algorithms for the online optimization approach are discussed. We have applied this approach to experimentally improve the dynamic aperture of the SPEAR3 storage ring with the robust conjugate direction search method and the particle swarm optimization method. The dynamic aperture was improved by more than 5 mm within a short period of time. Experimental setup and results are presented.
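A minimal particle swarm optimizer, for illustration only (the swarm size, inertia and acceleration coefficients here are generic textbook values, not the settings used on SPEAR3, and the toy objective stands in for the dynamic-aperture figure of merit):

```python
import random

def pso(f, dim=2, n=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal particle swarm minimizer of f over [lo, hi]^dim."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]             # per-particle best positions
    pbest_f = [f(p) for p in pos]
    gi = min(range(n), key=lambda i: pbest_f[i])
    g, g_f = pbest[gi][:], pbest_f[gi]      # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (g[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < g_f:
                    g, g_f = pos[i][:], fi
    return g

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best) < 1e-3)  # True: the swarm converges near the origin
```

In the online-optimization setting described above, `f` would be an experimentally measured objective (e.g. injection efficiency as a proxy for dynamic aperture) rather than an analytic function.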
REVIEW OF IMPROVEMENTS IN RADIO FREQUENCY PHOTONICS
2017-09-01
Recovered abstract fragments: control boards keep the MZM biased at quadrature; a couple of methods exist for bias control: optical power monitoring or second-harmonic power …; operating below the quadrature bias point, referred to as low-biasing, gives increased RF gain from the improved optical gain of the sidebands. (Figure 3: Optical Gain for an MZM at Quadrature and Low Bias Operation; Figure 4: RF Gain for an MZM at Different Bias Points.)
Fire Safety Aspects of Polymeric Materials. Volume 6. Aircraft. Civil and Military
1977-01-01
Recovered abstract fragments: it is recommended that the fire resistance of the existing polyurethane foam-based seating systems be improved through design, construction, and selection of covering materials; … aircraft interiors under real fire conditions; to provide the data base for developing improved fire safety standards for aircraft, four types of …; the determination of immobilizing effect was based on performance in the swimming test, a simple exercise method also favored by Kimmerle to provide …
NASA Astrophysics Data System (ADS)
Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei
2017-09-01
Due to the lever-arm effect and flexural deformation in the practical application of transfer alignment (TA), the TA performance is decreased. The existing polar TA algorithm only compensates a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm of the lever-arm effect and flexural deformation is proposed to promote the accuracy and speed of the polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to promote the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate the lever-arm effect and flexural deformation, and then improve the accuracy and speed of TA in the polar region.
Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B.; Gregson, Simon; Eaton, Jeffrey W.; Bao, Le
2017-01-01
Objective HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women, and can be used to improve estimates of national and sub-national HIV prevalence trends. We develop methods to incorporate this new data source into the UNAIDS Estimation and Projection Package (EPP) in Spectrum 2017. Methods We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (‘site-level’) or regionally (‘census-level’), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. Results We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. Conclusion We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends, and should be tested as more data become available from national ANC-RT programs. PMID:28296804
2018-01-01
ABSTRACT Background: Information from patient complaints – a widely accepted measure of patient satisfaction with services – can inform improvements in service quality, and contribute towards overall health systems performance. While analyses of data from patient complaints received much emphasis, there is limited published literature on key interventions to improve complaint management systems. Objectives: The objectives are two-fold: first, to synthesise existing evidence and provide practical options to inform future policy and practice and, second, to identify key outstanding gaps in the existing literature to inform agenda for future research. Methods: We report results of review of the existing literature. Peer-reviewed published literature was searched in OVID Medline, OVID Global Health and PubMed. In addition, relevant citations from the reviewed articles were followed up, and we also report grey literature from the UK and the Netherlands. Results: Effective interventions can improve collection of complaints (e.g. establishing easy-to-use channels and raising patients’ awareness of these), analysis of complaint data (e.g. creating structures and spaces for analysis and learning from complaints data), and subsequent action (e.g. timely feedback to complainants and integrating learning from complaints into service quality improvement). No one single measure can be sufficient, and any intervention to improve patient complaint management system must include different components, which need to be feasible, effective, scalable, and sustainable within local context. Conclusions: Effective interventions to strengthen patient complaints systems need to be: comprehensive, integrated within existing systems, context-specific and cognizant of the information asymmetry and the unequal power relations between the key actors. 
Four gaps in the published literature represent an agenda for future research: limited understanding of contexts of effective interventions, absence of system-wide approaches, lack of evidence from low- and middle-income countries and absence of focused empirical assessments of behaviour of staff who manage patient complaints. PMID:29658393
Improved Ant Algorithms for Software Testing Cases Generation
Yang, Shunkun; Xu, Jiaqi
2014-01-01
Ant colony optimization (ACO) for software test case generation is a popular topic in software testing engineering. However, traditional ACO has flaws: pheromone is relatively scarce in the early search, search efficiency is low, the search model is too simple, and the positive feedback mechanism easily produces stagnation and premature convergence. This paper introduces improved ACO algorithms for software test case generation: an improved local pheromone update strategy, an improved pheromone volatilization coefficient (IPVACO), and an improved global path pheromone update strategy (IGPACO). Finally, we put forward a comprehensive improved ant colony optimization (ACIACO) based on all three of these methods. The proposed technique is compared with a random algorithm (RND) and a genetic algorithm (GA) in terms of both efficiency and coverage. The results indicate that the improved method can effectively improve search efficiency, restrain premature convergence, promote case coverage, and reduce the number of iterations. PMID:24883391
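The flavor of a pheromone update step can be sketched as follows; this is generic ACO (evaporate, then reinforce the best ant's path), not the paper's exact IPVACO/IGPACO formulas, and the rate constants are illustrative:

```python
def update_pheromone(tau, best_path, quality, rho=0.1):
    """tau: dict edge -> pheromone level; best_path: list of edges
    used by the best ant; quality: reinforcement proportional to
    solution fitness; rho: evaporation rate."""
    for edge in tau:
        tau[edge] *= (1.0 - rho)                  # evaporation
    for edge in best_path:
        tau[edge] = tau.get(edge, 0.0) + quality  # reinforcement
    return tau

tau = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 1.0}
tau = update_pheromone(tau, [("a", "b"), ("b", "c")], quality=0.5, rho=0.5)
print(tau[("a", "b")], tau[("a", "c")])  # 1.0 0.5
```

The paper's improvements modify exactly these two knobs: how the local/global updates distribute reinforcement and how the volatilization (evaporation) coefficient adapts during the search.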
ERIC Educational Resources Information Center
Murmura, Federica; Casolani, Nicola; Bravi, Laura
2016-01-01
This paper develops a theoretical framework that could facilitate the application of the Autovalutazione, Valutazione periodica, Accreditamento (AVA) method in Italian universities, trying to simplify the use of this approach and to close the existing gap between Italy and other European academic institutions. The new competitive environment in…
Two Stochastic Phases of Tick-wise Price Fluctuation and the Price Prediction Generator
NASA Astrophysics Data System (ADS)
Tanaka-Yamawaki, Mieko; Tokuoka, Seiji
2007-07-01
We report in this paper the existence of two different stochastic phases in tick-wise price fluctuations. Based on this observation, we improve our earlier method of developing an evolutionary strategy to predict the direction of tick-wise price movements. We obtain stable predictive power even in the region where the old method had difficulty.
Wavelet-based image compression using shuffling and bit plane correlation
NASA Astrophysics Data System (ADS)
Kim, Seungjong; Jeong, Jechang
2000-12-01
In this paper, we propose a wavelet-based image compression method using shuffling and bit plane correlation. The proposed method improves coding performance in two steps: (1) removing the sign bit plane by a shuffling process on the quantized coefficients, (2) choosing the arithmetic coding context according to the maximum correlation direction. The experimental results are comparable or superior to existing coders for some images with low correlation.
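Bit-plane decomposition of quantized coefficients, the substrate on which the shuffling and context selection operate, can be sketched as follows (a generic illustration; the paper's shuffling step itself is not reproduced here):

```python
def bit_planes(coeffs, nbits=4):
    """Split quantized coefficients into a sign plane plus
    magnitude bit planes, most significant bit first."""
    sign = [1 if c < 0 else 0 for c in coeffs]
    mags = [abs(c) for c in coeffs]
    planes = [[(m >> b) & 1 for m in mags]
              for b in range(nbits - 1, -1, -1)]
    return sign, planes

sign, planes = bit_planes([3, -5, 0, 7])
print(sign)    # [0, 1, 0, 0]
print(planes)  # [[0, 0, 0, 0], [0, 1, 0, 1], [1, 0, 0, 1], [1, 1, 0, 1]]
```

The sign plane is the one the proposed method removes via shuffling; the remaining magnitude planes are then entropy-coded with direction-dependent contexts.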
Federating Cyber and Physical Models for Event-Driven Situational Awareness
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stephan, Eric G.; Pawlowski, Ronald A.; Sridhar, Siddharth
The purpose of this paper is to describe a novel method to improve the interoperability of electric power system monitoring and control software applications. This method employs the concept of federation, defined as the use of existing models that represent aspects of a system in specific domains (such as the physical and cyber security domains), and the building of interfaces to link the domain models.
Bahl, Rajiv; Martines, Jose; Bhandari, Nita; Biloglav, Zrinka; Edmond, Karen; Iyengar, Sharad; Kramer, Michael; Lawn, Joy E; Manandhar, D S; Mori, Rintaro; Rasmussen, Kathleen M; Sachdev, H P S; Singhal, Nalini; Tomlinson, Mark; Victora, Cesar; Williams, Anthony F; Chan, Kit Yee; Rudan, Igor
2012-06-01
This paper aims to identify health research priorities that could improve the rate of progress in reducing global neonatal mortality from preterm birth and low birth weight (PB/LBW), as set out in the UN's Millennium Development Goal 4. We applied the Child Health and Nutrition Research Initiative (CHNRI) methodology for setting priorities in health research investments. In the process coordinated by the World Health Organization in 2007-2008, 21 researchers with interest in child, maternal and newborn health suggested 82 research ideas that spanned the broad spectrum of epidemiological research, health policy and systems research, improvement of existing interventions and development of new interventions. The 82 research questions were then assessed for answerability, effectiveness, deliverability, maximum potential for mortality reduction and the effect on equity using the CHNRI method. The top 10 identified research priorities were dominated by health systems and policy research questions (eg, identification of LBW infants born at home within 24-48 hours of birth for additional care; approaches to improve quality of care of LBW infants in health facilities; identification of barriers to optimal home care practices including care seeking; and approaches to increase the use of antenatal corticosteroids in preterm labor and to improve access to hospital care for LBW infants). These were followed by priorities for improvement of the existing interventions (eg, early initiation of breastfeeding, including feeding mode and techniques for those unable to suckle directly from the breast; improved cord care, such as chlorhexidine application; and alternative methods to Kangaroo Mother Care (KMC) to keep LBW infants warm in community settings). The highest-ranked epidemiological question suggested improving criteria for identifying LBW infants who need to be cared for in a hospital.
Among the new interventions, the greatest support was shown for the development of new simple and effective interventions for providing thermal care to LBW infants, if KMC is not acceptable to the mother. The context for this exercise was set within the MDG4, requiring an urgent and rapid progress in mortality reduction from low birth weight, rather than identifying long-term strategic solutions of the greatest potential. In a short-term context, the health policy and systems research to improve access and coverage by the existing interventions, coupled with further research to improve effectiveness, deliverability and acceptance of existing interventions, and epidemiological research to address the key gaps in knowledge, were all highlighted as research priorities.
Robust digital image watermarking using distortion-compensated dither modulation
NASA Astrophysics Data System (ADS)
Li, Mianjie; Yuan, Xiaochen
2018-04-01
In this paper, we propose a robust feature extraction based digital image watermarking method using Distortion-Compensated Dither Modulation (DC-DM). Our proposed local watermarking method provides stronger robustness and better flexibility than traditional global watermarking methods. We improve robustness by introducing feature extraction and the DC-DM method. To extract robust feature points, we propose a DAISY-based Robust Feature Extraction (DRFE) method that employs the DAISY descriptor and applies filtering based on entropy calculation. The experimental results show that the proposed method achieves satisfactory robustness while ensuring watermark imperceptibility, compared to other existing methods.
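Distortion-compensated dither modulation itself can be sketched for scalar values; the step size and compensation factor below are illustrative, not the paper's settings:

```python
def dcdm_embed(x, bit, delta=8.0, alpha=0.75):
    """Embed one bit with distortion-compensated dither modulation:
    move x a fraction alpha of the way to the nearest point of the
    bit's dithered quantizer lattice (dither d_b = bit * delta / 2)."""
    d = bit * delta / 2.0
    q = delta * round((x - d) / delta) + d   # nearest lattice point
    return x + alpha * (q - x)

def dcdm_decode(y, delta=8.0):
    """Decode: pick the bit whose lattice point is closest to y."""
    dists = []
    for bit in (0, 1):
        d = bit * delta / 2.0
        q = delta * round((y - d) / delta) + d
        dists.append(abs(y - q))
    return 0 if dists[0] <= dists[1] else 1

s = dcdm_embed(10.3, 1)
print(dcdm_decode(s))  # 1
```

With `alpha = 1` this reduces to plain dither modulation; `alpha < 1` adds back a fraction of the quantization error, which is the distortion compensation that trades embedding distortion against robustness. In the paper, values like these would be samples drawn from regions around the DRFE feature points.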
Micro-optical-mechanical system photoacoustic spectrometer
Kotovsky, Jack; Benett, William J.; Tooker, Angela C.; Alameda, Jennifer B.
2013-01-01
All-optical photoacoustic spectrometer sensing systems (PASS systems) and methods include all the hardware needed to analyze the presence of a large variety of materials (solid, liquid and gas). Some of the all-optical PASS systems require only two optical fibers to communicate with the opto-electronic power and readout systems that exist outside of the material environment. Methods for improving the signal-to-noise ratio are provided and enable micro-scale systems and methods for operating such systems.
NASA Astrophysics Data System (ADS)
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it is inaccurate in calculating the failure probability for highly nonlinear performance functions. The second order reliability method is therefore required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the expensive computational cost incurred by the repeated reliability evaluations and Hessian calculations of the probabilistic constraints. In this article, a new improved stability transformation method is proposed to search for the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
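The symmetric rank-one (SR1) update referred to above can be written, in standard quasi-Newton notation (this is the textbook form, not necessarily the paper's exact variant):

```latex
B_{k+1} = B_k
  + \frac{(y_k - B_k s_k)(y_k - B_k s_k)^{\mathsf{T}}}
         {(y_k - B_k s_k)^{\mathsf{T}} s_k},
\qquad
s_k = x_{k+1} - x_k,
\quad
y_k = \nabla g(x_{k+1}) - \nabla g(x_k),
```

where $B_k$ approximates the Hessian of the performance function $g$. In practice the update is skipped when the denominator $(y_k - B_k s_k)^{\mathsf{T}} s_k$ is near zero, which avoids instability; the appeal for RBDO is that curvature is accumulated from gradients already computed during the most-probable-point search, with no explicit Hessian evaluations.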
Postprocessing for character recognition using pattern features and linguistic information
NASA Astrophysics Data System (ADS)
Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi
1993-04-01
We propose a new method of post-processing for character recognition using pattern features and linguistic information. This method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters, and is characterized by having two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and the existence of characters with similar patterns; it is therefore not practical for a character recognition system to recognize all characters in detail. First, this post-processing method generates a candidate character table by recognizing the simplest features of characters. Then, it selects words corresponding to the characters from the candidate character table by referring to word and grammar dictionaries, and chooses suitable words among them. If the correct character is included in the candidate character table, this process can correct an error; however, if the character is not included, it cannot. Therefore, the method also presumes characters that are absent from the candidate character table by using linguistic information (the word and grammar dictionaries), and then verifies a presumed character by character recognition using complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used on editorials from a Japanese newspaper (Asahi Shinbun).
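The dictionary-filtering step can be sketched as follows: given a candidate character table (a few visually similar candidates per position), keep only the strings that form dictionary words. This is a simplified illustration with Latin characters; the real system also applies grammar constraints:

```python
from itertools import product

def dictionary_filter(candidates, dictionary):
    """candidates: one set of recognition candidates per character
    position; return the dictionary words consistent with them."""
    words = {"".join(chars) for chars in product(*candidates)}
    return sorted(words & dictionary)

# Each position carries a few visually confusable candidates.
candidates = [{"c", "e"}, {"a", "o"}, {"t", "l"}]
dictionary = {"cat", "cot", "eat", "dog"}
print(dictionary_filter(candidates, dictionary))  # ['cat', 'cot', 'eat']
```

When this intersection comes up empty, the method's second stage kicks in: linguistic information proposes characters outside the candidate table, which are then verified against the image using complex pattern features.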
NASA Astrophysics Data System (ADS)
Rohling, E. J.
2014-12-01
Ice volume (and hence sea level) and deep-sea temperature are key measures of global climate change. Sea level has been documented using several independent methods over the past 0.5 million years (Myr). Older periods, however, lack such independent validation; all existing records are related to deep-sea oxygen isotope (d18O) data that are influenced by processes unrelated to sea level. For deep-sea temperature, only one continuous high-resolution (Mg/Ca-based) record exists, with related sea-level estimates, spanning the past 1.5 Myr. We have recently presented a novel sea-level reconstruction, with associated estimates of deep-sea temperature, which independently validates the previous 0-1.5 Myr reconstruction and extends it back to 5.3 Myr ago. A series of caveats applies to this new method, especially in the older part of its range, as is always the case with new methods. Independent validation exercises are needed to elucidate where consistency exists, and where solutions drift away from each other. A key observation from our new method is that a large temporal offset existed during the onset of Plio-Pleistocene ice ages, between a marked cooling step at 2.73 Myr ago and the first major glaciation at 2.15 Myr ago. This observation relies on relative changes within the dataset, which are more robust than absolute values. I will discuss our method, its main caveats, and avenues for improvement.
Landsat D Thematic Mapper image dimensionality reduction and geometric correction accuracy
NASA Technical Reports Server (NTRS)
Ford, G. E.
1986-01-01
To characterize and quantify the performance of the Landsat thematic mapper (TM), techniques for dimensionality reduction by linear transformation have been studied and evaluated and the accuracy of the correction of geometric errors in TM images analyzed. Theoretical evaluations and comparisons for existing methods for the design of linear transformation for dimensionality reduction are presented. These methods include the discrete Karhunen Loeve (KL) expansion, Multiple Discriminant Analysis (MDA), Thematic Mapper (TM)-Tasseled Cap Linear Transformation and Singular Value Decomposition (SVD). A unified approach to these design problems is presented in which each method involves optimizing an objective function with respect to the linear transformation matrix. From these studies, four modified methods are proposed. They are referred to as the Space Variant Linear Transformation, the KL Transform-MDA hybrid method, and the First and Second Version of the Weighted MDA method. The modifications involve the assignment of weights to classes to achieve improvements in the class conditional probability of error for classes with high weights. Experimental evaluations of the existing and proposed methods have been performed using the six reflective bands of the TM data. It is shown that in terms of probability of classification error and the percentage of the cumulative eigenvalues, the six reflective bands of the TM data require only a three dimensional feature space. It is shown experimentally as well that for the proposed methods, the classes with high weights have improvements in class conditional probability of error estimates as expected.
Good Laboratory Practices of Materials Testing at NASA White Sands Test Facility
NASA Technical Reports Server (NTRS)
Hirsch, David; Williams, James H.
2005-01-01
An approach to good laboratory practices of materials testing at NASA White Sands Test Facility is presented. The contents include: 1) Current approach; 2) Data analysis; and 3) Improvements sought by WSTF to enhance the diagnostic capability of existing methods.
This project will use existing and developing information to evaluate and demonstrate procedures for more fully characterizing risks of non-bioaccumulative toxicants to aquatic organisms, and for incorporating these risks into aquatic life criteria. These efforts will address a v...
Multiplexed biosensors for detection of mycotoxins
USDA-ARS?s Scientific Manuscript database
As analytical methods have improved it has become apparent that mycotoxins exist in many forms within a commodity or food. For the established toxins there has been increased interest in the presence of metabolites that might also harbor toxicity. These include biosynthetic precursors as well as pro...
23 CFR 972.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2011 CFR
2011-04-01
... management strategies; (v) Determine methods to monitor and evaluate the performance of the multi-modal... means the level at which transportation system performance is no longer acceptable due to traffic... improve existing transportation system efficiency. Approaches may include the use of alternate mode...
23 CFR 972.214 - Federal lands congestion management system (CMS).
Code of Federal Regulations, 2010 CFR
2010-04-01
... management strategies; (v) Determine methods to monitor and evaluate the performance of the multi-modal... means the level at which transportation system performance is no longer acceptable due to traffic... improve existing transportation system efficiency. Approaches may include the use of alternate mode...
Conomos, Matthew P; Miller, Michael B; Thornton, Timothy A
2015-05-01
Population structure inference with genetic data has been motivated by a variety of applications in population genetics and genetic association studies. Several approaches have been proposed for the identification of genetic ancestry differences in samples where study participants are assumed to be unrelated, including principal components analysis (PCA), multidimensional scaling (MDS), and model-based methods for proportional ancestry estimation. Many genetic studies, however, include individuals with some degree of relatedness, and existing methods for inferring genetic ancestry fail in related samples. We present a method, PC-AiR, for robust population structure inference in the presence of known or cryptic relatedness. PC-AiR utilizes genome-screen data and an efficient algorithm to identify a diverse subset of unrelated individuals that is representative of all ancestries in the sample. The PC-AiR method directly performs PCA on the identified ancestry representative subset and then predicts components of variation for all remaining individuals based on genetic similarities. In simulation studies and in applications to real data from Phase III of the HapMap Project, we demonstrate that PC-AiR provides a substantial improvement over existing approaches for population structure inference in related samples. We also demonstrate significant efficiency gains, where a single axis of variation from PC-AiR provides better prediction of ancestry in a variety of structure settings than using 10 (or more) components of variation from widely used PCA and MDS approaches. Finally, we illustrate that PC-AiR can provide improved population stratification correction over existing methods in genetic association studies with population structure and relatedness. © 2015 WILEY PERIODICALS, INC.
NASA Astrophysics Data System (ADS)
Mansourian, Leila; Taufik Abdullah, Muhamad; Nurliyana Abdullah, Lili; Azman, Azreen; Mustaffa, Mas Rina
2017-02-01
Pyramid Histogram of Words (PHOW) combines Bag of Visual Words (BoVW) with spatial pyramid matching (SPM) in order to add location information to the extracted features. However, the PHOW variants extracted from various color spaces do not extract color information individually; that is, they discard color information, an important characteristic of any image that is motivated by human vision. This article concatenates the PHOW Multi-Scale Dense Scale Invariant Feature Transform (MSDSIFT) histogram with a proposed color histogram to improve the performance of existing image classification algorithms. Performance evaluation on several datasets shows that the new approach outperforms other existing, state-of-the-art methods.
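The descriptor combination amounts to concatenating independently normalized histograms; a minimal sketch (equal weighting of the two parts is an assumption, and the tiny inputs stand in for real PHOW/MSDSIFT and color histograms):

```python
def l1_normalize(h):
    """Scale a histogram so its entries sum to 1 (no-op if empty)."""
    s = sum(h)
    return [v / s for v in h] if s else h

def combined_descriptor(phow_hist, color_hist):
    """Concatenate an L1-normalized PHOW histogram with an
    L1-normalized color histogram into one feature vector."""
    return l1_normalize(phow_hist) + l1_normalize(color_hist)

desc = combined_descriptor([2, 0, 2], [1, 3])
print(desc)  # [0.5, 0.0, 0.5, 0.25, 0.75]
```

Normalizing each part separately before concatenation keeps the shape/texture and color channels on a comparable scale so neither dominates the classifier's distance computations.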
How to improve the comfort of Kesawan Heritage Corridor, Medan City
NASA Astrophysics Data System (ADS)
Tegar; Ginting, Nurlisa; Suwantoro, H.
2018-03-01
Comfort is indispensable in making a friendly neighborhood or city, especially the comfort of the infrastructure in a corridor. People must be able to feel comfortable to act rationally in their physical environment. The existing infrastructure must be able to support Kesawan as a historic district. Kesawan is an area filled with many unique buildings; without comfort, however good the existing buildings' architecture, it cannot be enjoyed, and this also affects the identity of a region or city. The aim of this research is to re-design the public facilities of the Kesawan corridor from the comfort aspect: orientation, traffic calming, vegetation, signage, public facilities (toilets, seating, bus stops, bins), information center, parking and pedestrian paths, translating the design concept into design criteria. This research uses qualitative methods. Some facilities in this corridor are unsuitable, and some are not available at all, so improvements and additions to the existing facilities are needed. It is expected that upgrading the existing facilities will allow visitors to Kesawan to enjoy it more and help make Medan a friendlier city.
Compensation of Verdet Constant Temperature Dependence by Crystal Core Temperature Measurement
Petricevic, Slobodan J.; Mihailovic, Pedja M.
2016-01-01
Compensation of the temperature dependence of the Verdet constant in a polarimetric extrinsic Faraday sensor is of major importance for applying the magneto-optical effect to AC current measurements and magnetic field sensing. This paper presents a method for compensating the temperature effect on the Faraday rotation in a Bi12GeO20 crystal by sensing its optical activity effect on the polarization of a light beam. The method measures the temperature of the same volume of crystal that effects the beam polarization in a magnetic field or current sensing process. This eliminates the effect of temperature difference found in other indirect temperature compensation methods, thus allowing more accurate temperature compensation for the temperature dependence of the Verdet constant. The method does not require additional changes to an existing Δ/Σ configuration and is thus applicable for improving the performance of existing sensing devices. PMID:27706043
Fast reconstruction of off-axis digital holograms based on digital spatial multiplexing.
Sha, Bei; Liu, Xuan; Ge, Xiao-Lu; Guo, Cheng-Shan
2014-09-22
A method for fast reconstruction of off-axis digital holograms based on a digital multiplexing algorithm is proposed. Instead of the existing angular multiplexing (AM), the new method utilizes a spatial multiplexing (SM) algorithm, in which four off-axis holograms recorded in sequence are synthesized into one SM function by multiplying each hologram with a tilted plane wave and then adding them up. In comparison with the conventional methods, the SM algorithm reduces the two-dimensional (2-D) Fourier transforms (FTs) of four N*N arrays to approximately 1.25 2-D FTs of one N*N array. Experimental results demonstrate that, using the SM algorithm, the computational efficiency can be improved while the reconstructed wavefronts retain the same quality as those retrieved with the existing AM method. This algorithm may be useful in the design of a fast preview system for dynamic wavefront imaging in digital holography.
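The synthesis step described above (multiplying each hologram by a tilted plane wave, then summing, so that one FFT serves all four) can be sketched in NumPy. The carrier frequencies, array size, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def spatial_multiplex(holograms, carriers):
    """Combine several off-axis holograms into one complex SM function.

    Each hologram is multiplied by a tilted plane wave with its own
    carrier frequency (cycles/pixel), shifting its spectrum to a
    distinct region of Fourier space, then all are summed. Carrier
    choices here are assumptions for illustration only.
    """
    N = holograms[0].shape[0]
    y, x = np.mgrid[0:N, 0:N]
    sm = np.zeros((N, N), dtype=complex)
    for h, (fx, fy) in zip(holograms, carriers):
        tilt = np.exp(2j * np.pi * (fx * x + fy * y))
        sm += h * tilt
    return sm

# four toy holograms; carriers place each spectrum in a different quadrant
rng = np.random.default_rng(0)
holos = [rng.random((64, 64)) for _ in range(4)]
carriers = [(0.25, 0.25), (-0.25, 0.25), (0.25, -0.25), (-0.25, -0.25)]
sm = spatial_multiplex(holos, carriers)
spectrum = np.fft.fftshift(np.fft.fft2(sm))  # one FFT now serves all four
```

Each hologram's +1 order can then be cropped from its quadrant of `spectrum`, which is where the computational saving over four separate transforms comes from.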
Vessel extraction in retinal images using automatic thresholding and Gabor Wavelet.
Ali, Aziah; Hussain, Aini; Wan Zaki, Wan Mimi Diyana
2017-07-01
Retinal image analysis has been widely used for early detection and diagnosis of multiple systemic diseases. Accurate vessel extraction in retinal images is a crucial step towards a fully automated diagnosis system. This work presents an efficient unsupervised method for extracting blood vessels from retinal images by combining the existing Gabor Wavelet (GW) method with automatic thresholding. The green channel is extracted from the color retinal image and used to produce a Gabor feature image using GW. Both the green channel image and the Gabor feature image undergo a vessel-enhancement step to highlight blood vessels. Next, the two vessel-enhanced images are transformed to binary images using automatic thresholding before being combined to produce the final vessel output. Combining the images results in a significant improvement in blood vessel extraction performance compared to using either image individually. The effectiveness of the proposed method was proven via comparative analysis with existing methods validated on the publicly available DRIVE database.
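The final step (binarize each enhanced image with an automatic threshold, then combine) can be sketched as below. The abstract does not name its thresholding algorithm, so Otsu's method is used here as a stand-in assumption, and the random arrays stand in for the two vessel-enhanced images:

```python
import numpy as np

def otsu_threshold(img):
    """Automatic threshold maximizing between-class variance (Otsu's
    method) - an assumed stand-in for the paper's unnamed
    'automatic thresholding' step."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)              # pixel count at or below each level
    w1 = w0[-1] - w0                  # pixel count above each level
    m0 = np.cumsum(hist * centers)
    mu0 = m0 / np.maximum(w0, 1)      # mean of lower class
    mu1 = (m0[-1] - m0) / np.maximum(w1, 1)  # mean of upper class
    var_between = w0 * w1 * (mu0 - mu1) ** 2
    return centers[np.argmax(var_between)]

# binarize both vessel-enhanced images, then OR-combine into the final mask
rng = np.random.default_rng(0)
green_enh = rng.random((32, 32))   # stand-in for the enhanced green channel
gabor_enh = rng.random((32, 32))   # stand-in for the enhanced Gabor feature image
mask = (green_enh > otsu_threshold(green_enh)) | (gabor_enh > otsu_threshold(gabor_enh))
```

The OR-combination keeps a pixel as vessel if either enhanced image marks it, which is one plausible reading of "combined to produce the final vessel output".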
Normal response function method for mass and stiffness matrix updating using complex FRFs
NASA Astrophysics Data System (ADS)
Pradhan, S.; Modak, S. V.
2012-10-01
Quite often a structural dynamic finite element model is required to be updated so as to accurately predict dynamic characteristics like natural frequencies and mode shapes. Since in many situations undamped natural frequencies and mode shapes need to be predicted, it has generally been the practice in these situations to seek updating of only the mass and stiffness matrices so as to obtain a reliable prediction model. Updating using frequency response functions (FRFs) has been one of the widely used approaches, including for updating of mass and stiffness matrices. However, the problem with FRF based methods for updating mass and stiffness matrices is that they are based on the use of complex FRFs. Using complex FRFs to update mass and stiffness matrices is not theoretically correct, as complex FRFs are affected not only by these two matrices but also by the damping matrix. Therefore, in situations where updating of only mass and stiffness matrices using FRFs is required, a complex-FRF-based updating formulation is not fully justified and would lead to inaccurate updated models. This paper addresses this difficulty and proposes an improved FRF based finite element model updating procedure using the concept of normal FRFs. The proposed method is a modified version of the existing response function method, which is based on complex FRFs. The effectiveness of the proposed method is validated through a numerical study of a simple but representative beam structure. The effects of coordinate incompleteness and the robustness of the method in the presence of noise are investigated. The results of updating obtained by the improved method are compared with the existing response function method, and the performance of the two approaches is compared for cases of light, medium and heavy damping.
It is found that the proposed improved method is effective in updating mass and stiffness matrices in all the cases of complete and incomplete data and with all levels and types of damping.
Research on hotspot discovery in internet public opinions based on improved K-means.
Wang, Gensheng
2013-01-01
Hotspot discovery in Internet public opinion is an active research field and plays a key role in helping governments and corporations find useful information in massive Internet data. An improved K-means algorithm for hotspot discovery in Internet public opinion is presented, based on an analysis of the defects and the calculation principle of the original K-means algorithm. First, new methods are designed to preprocess website texts, select and express their characteristics, and define the similarity between two website texts. Second, the clustering principle and the method for selecting initial classification centers are analyzed and improved to overcome the limitations of the original K-means algorithm. Finally, experimental results verify that the improved algorithm can improve the clustering stability and classification accuracy of hotspot discovery in Internet public opinion in practice.
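The abstract does not specify its improved initial-center selection, so the sketch below uses farthest-first seeding as one generic way to make K-means less sensitive to random initialization; all names and the toy 2-D vectors (standing in for website text feature vectors) are assumptions:

```python
import numpy as np

def farthest_first_centers(X, k, rng):
    # pick the first center at random, then each next center as the
    # point farthest from all chosen centers - a simple improvement
    # over the purely random seeding of plain K-means
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    C = farthest_first_centers(X, k, rng)
    for _ in range(iters):
        # assign each point to its nearest center, then recompute centers
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(0)
    return labels, C

# demo: two well-separated clusters of toy feature vectors
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.1, (50, 2)), rng.normal(10, 0.1, (50, 2))])
labels, centers = kmeans(X, 2)
```

With deterministic farthest-first seeding, repeated runs on the same data give the same partition, which is one way to read the claimed improvement in clustering stability.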
Ecological Momentary Assessment and Alcohol Use Disorder Treatment
Morgenstern, Jon; Kuerbis, Alexis; Muench, Frederick
2014-01-01
The ability to capture real-time data on human behavior inexpensively, efficiently, and accurately holds promise to transform and broaden our understanding of many areas of health science. One approach to acquiring this type of real-time data is ecological momentary assessment (EMA). This method has been used to collect data in many domains of addiction research, including research on the treatment of alcohol use disorders (AUDs). Empirical evidence supports the hypothesis that use of EMA can improve the quality of AUD treatment research when compared with standard assessment methods because it provides more accurate reporting, allows investigators to examine the dynamic unfolding of the behavior change process at an individual level, and can be used to augment and improve clinical assessment and treatment. Overall, the existing literature provides strong support for the advantages of EMA when combined with standard assessment of addictive behaviors in general. Nevertheless, use of EMA in AUD treatment research thus far has been limited, especially in the area of research on mechanisms of behavior change. Existing research indicates, however, that EMA can be used to deliver tailored feedback as a novel and potentially transformative approach to improving AUD treatment. This research area clearly warrants additional future efforts. PMID:26259004
Coady, Katherine K.; Biever, Ronald C.; Denslow, Nancy D.; Gross, Melanie; Guiney, Patrick D.; Holbech, Henrik; Karouna-Renier, Natalie K.; Katsiadaki, Ioanna; Krueger, Hank; Levine, Steven L.; Maack, Gerd; Williams, Mike; Wolf, Jeffrey C.; Ankley, Gerald T.
2017-01-01
In the present study, existing regulatory frameworks and test systems for assessing potential endocrine active chemicals are described, and associated challenges are discussed, along with proposed approaches to address these challenges. Regulatory frameworks vary somewhat across geographies, but all basically evaluate whether a chemical possesses endocrine activity and whether this activity can result in adverse outcomes either to humans or to the environment. Current test systems include in silico, in vitro, and in vivo techniques focused on detecting potential endocrine activity, and in vivo tests that collect apical data to detect possible adverse effects. These test systems are currently designed to robustly assess endocrine activity and/or adverse effects in the estrogen, androgen, and thyroid hormone signaling pathways; however, there are some limitations of current test systems for evaluating endocrine hazard and risk. These limitations include a lack of certainty regarding: 1) adequately sensitive species and life stages; 2) mechanistic endpoints that are diagnostic for endocrine pathways of concern; and 3) the linkage between mechanistic responses and apical, adverse outcomes. Furthermore, some existing test methods are resource intensive with regard to time, cost, and use of animals. However, based on recent experiences, there are opportunities to improve approaches to and guidance for existing test methods and to reduce uncertainty. For example, in vitro high-throughput screening could be used to prioritize chemicals for testing and provide insights as to the most appropriate assays for characterizing hazard and risk. 
Other recommendations include adding endpoints for elucidating connections between mechanistic effects and adverse outcomes, identifying potentially sensitive taxa for which test methods currently do not exist, and addressing key endocrine pathways of possible concern in addition to those associated with estrogen, androgen, and thyroid signaling.
Improving Physical Properties via C–H Oxidation: Chemical and Enzymatic Approaches
Michaudel, Quentin; Journot, Guillaume; Regueiro-Ren, Alicia; Goswami, Animesh; Guo, Zhiwei; Tully, Thomas P.; Zou, Lufeng; Ramabhadran, Raghunath O.; Houk, Kendall N.
2014-01-01
Physicochemical properties constitute a key factor for the success of a drug candidate. Whereas many strategies to improve the physicochemical properties of small heterocycle-type leads exist, complex hydrocarbon skeletons are more challenging to derivatize due to the absence of functional groups. A variety of C–H oxidation methods have been explored on the betulin skeleton to improve the solubility of this very bioactive, yet poorly water soluble, natural product. Capitalizing on the innate reactivity of the molecule, as well as the few molecular handles present on the core, allowed for oxidations at different positions across the pentacyclic structure. Enzymatic oxidations afforded several orthogonal oxidations to chemical methods. Solubility measurements showed an enhancement for many of the synthesized compounds. PMID:25244630
Padula, William V; Lee, Ken K H; Pronovost, Peter J
2017-08-03
To scale and sustain successful quality improvement (QI) interventions, it is recommended for health system leaders to calculate the economic and financial sustainability of the intervention. Many methods of economic evaluation exist, and the type of method depends on the audience: providers, researchers, and hospital executives. This is a primer to introduce cost-effectiveness analysis, budget impact analysis, and return on investment calculation as 3 distinct methods for each stakeholder needing a measurement of the value of QI at the health system level. Using cases for the QI of hospital-acquired condition rates (e.g., pressure injuries), this primer proceeds stepwise through each method beginning from the same starting point of constructing a model so that the repetition of steps is minimized and thereby capturing the attention of all intended audiences.
Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.
Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek
2016-02-01
Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near 'one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at http://kniahini.med.virginia.edu/fitmunk/server/ or at http://fitmunk.bitbucket.org/.
An improved method to detect correct protein folds using partial clustering.
Zhou, Jianjun; Wishart, David S
2013-01-16
Structure-based clustering is commonly used to identify correct protein folds among candidate folds (also called decoys) generated by protein structure prediction programs. However, traditional clustering methods exhibit a poor runtime performance on large decoy sets. We hypothesized that a more efficient "partial" clustering approach in combination with an improved scoring scheme could significantly improve both the speed and performance of existing candidate selection methods. We propose a new scheme that performs rapid but incomplete clustering on protein decoys. Our method detects structurally similar decoys (measured using either Cα RMSD or GDT-TS score) and extracts representatives from them without assigning every decoy to a cluster. We integrated our new clustering strategy with several different scoring functions to assess both the performance and speed in identifying correct or near-correct folds. Experimental results on 35 Rosetta decoy sets and 40 I-TASSER decoy sets show that our method can improve the correct fold detection rate as assessed by two different quality criteria. This improvement is significantly better than two recently published clustering methods, Durandal and Calibur-lite. Speed and efficiency testing shows that our method can handle much larger decoy sets and is up to 22 times faster than Durandal and Calibur-lite. The new method, named HS-Forest, avoids the computationally expensive task of clustering every decoy, yet still allows superior correct-fold selection. Its improved speed, efficiency and decoy-selection performance should enable structure prediction researchers to work with larger decoy sets and significantly improve their ab initio structure prediction performance.
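The core idea, extracting representatives without assigning every decoy to a cluster, can be illustrated with a leader-style partial clustering sketch. This is a generic stand-in for that idea, not HS-Forest itself, and the distance callback and toy 1-D "decoys" are assumptions:

```python
import numpy as np

def partial_cluster(distance, n_items, cutoff):
    """Leader-style partial clustering: scan items once, keep one
    representative per region of structure space, and count each
    representative's neighbours - never building a full all-vs-all
    clustering. `distance(i, j)` stands in for Cα RMSD or a
    GDT-TS-based distance between decoys i and j."""
    reps, counts = [], []
    for i in range(n_items):
        for r, rep in enumerate(reps):
            if distance(i, rep) <= cutoff:
                counts[r] += 1   # i joins rep's neighbourhood
                break
        else:
            reps.append(i)       # i starts a new neighbourhood
            counts.append(1)
    order = np.argsort(counts)[::-1]   # most-populated region first
    return [reps[r] for r in order], [counts[r] for r in order]

# toy 1-D "decoys": three near 0, two near 5, one isolated at 9
points = [0.0, 0.1, 0.2, 5.0, 5.1, 9.0]
reps, counts = partial_cluster(lambda i, j: abs(points[i] - points[j]),
                               len(points), cutoff=0.5)
```

Ranking representatives by neighbour count mirrors the standard assumption that the most populated region of decoy space is the most likely correct fold.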
Character recognition from trajectory by recurrent spiking neural networks.
Jiangrong Shen; Kang Lin; Yueming Wang; Gang Pan
2017-07-01
Spiking neural networks are biologically plausible and power-efficient on neuromorphic hardware, while recurrent neural networks have proven efficient on time-series data. However, how to use recurrence to improve the performance of spiking neural networks remains an open problem. This paper proposes a recurrent spiking neural network for character recognition using trajectories. In the network, a new encoding method is designed in which varying time ranges of the input streams are used in different recurrent layers. This improves the generalization ability of the model compared with general encoding methods. Experiments are conducted on four groups of the character data set from the University of Edinburgh. The results show that the method achieves a higher average recognition accuracy than existing methods.
Konishi, Tatsunori; Harata, Masahiko
2014-01-01
We show here that the transformation efficiency of Saccharomyces cerevisiae is improved by altering carbon sources in media for pre-culturing cells prior to the transformation reactions. The transformation efficiency was increased up to sixfold by combination with existing transformation protocols. This method is widely applicable for yeast research since efficient transformation can be performed easily without changing any of the other procedures in the transformation.
Improving the Effectiveness of Speaker Verification Domain Adaptation With Inadequate In-Domain Data
2017-08-20
Borgström, Bengt J.; Singer, Elliot; Douglas…; omid.sadjadi@nist.gov
This paper addresses speaker verification domain adaptation with… contain speakers with low channel diversity. Existing domain adaptation methods are reviewed, and their shortcomings are discussed. We derive an…
Future needs for biomedical transducers
NASA Technical Reports Server (NTRS)
Wooten, F. T.
1971-01-01
In summary, there are three major classes of transducer improvements required: improvements to existing transducers, exploitation of as-yet-unexploited physical science phenomena in transducer design, and use of currently unutilized physiological phenomena in transducer design. During the next decade, increasing emphasis will be placed on noninvasive measurement in all of these areas. Patient safety, patient comfort, and the need for efficient utilization of the time of both patient and physician require that noninvasive methods of monitoring be developed.
SU-F-T-350: Continuous Leaf Optimization (CLO) for IMRT Leaf Sequencing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Long, T; Chen, M; Jiang, S
Purpose: To study a new step-and-shoot IMRT leaf sequencing model that avoids the two main pitfalls of conventional leaf sequencing: (1) target fluence being stratified into a fixed number of discrete levels and/or (2) aperture leaf positions being restricted to a discrete set of locations. These assumptions induce error into the sequence or reduce the feasible region of potential plans, respectively. Methods: We develop a one-dimensional (single leaf pair) methodology that makes neither assumption (1) nor (2) and can be easily extended to a multi-row model. The proposed continuous leaf optimization (CLO) methodology takes in an existing set of apertures and associated intensities, or solution "seed," and improves the plan without the restrictiveness of (1) or (2). It then uses a first-order descent algorithm to converge onto a locally optimal solution. A seed solution can come from models that assume (1) and (2), thus allowing the CLO model to improve upon existing leaf sequencing methodologies. Results: The CLO model was applied to 208 generated target fluence maps in one dimension. In all cases for all tested sequencing strategies, the CLO model improved on the starting seed objective function while keeping MUs low. Conclusion: The CLO model can improve upon existing leaf sequencing methods by avoiding the restrictions of (1) and (2). By allowing for more flexible leaf positioning, error can be reduced when matching a target fluence. This study lays the foundation for future models and solution methodologies that incorporate continuous leaf positions explicitly into the IMRT treatment planning model. Supported by Cancer Prevention & Research Institute of Texas (CPRIT) - ID RP150485.
Engaging academia to advance the science and practice of environmental public health tracking.
Strosnider, Heather; Zhou, Ying; Balluz, Lina; Qualters, Judith
2014-10-01
Public health agencies at the federal, state, and local level are responsible for implementing actions and policies that address health problems related to environmental hazards. These actions and policies can be informed by integrating or linking data on health, exposure, hazards, and population. The mission of the Centers for Disease Control and Prevention's National Environmental Public Health Tracking Program (Tracking Program) is to provide information from a nationwide network of integrated health, environmental hazard, and exposure data that drives actions to improve the health of communities. The Tracking Program and federal, state, and local partners collect, integrate, analyze, and disseminate data and information to inform environmental public health actions. However, many challenges exist regarding the availability and quality of data, the application of appropriate methods and tools to link data, and the state of the science needed to link and analyze health and environmental data. The Tracking Program has collaborated with academia to address key challenges in these areas. The collaboration has improved our understanding of the uses and limitations of available data and methods, expanded the use of existing data and methods, and increased our knowledge about the connections between health and environment. Valuable working relationships have been forged in this process, and together we have identified opportunities and improvements for future collaborations to further advance the science and practice of environmental public health tracking. Published by Elsevier Inc.
Risk Classification with an Adaptive Naive Bayes Kernel Machine Model.
Minnier, Jessica; Yuan, Ming; Liu, Jun S; Cai, Tianxi
2015-04-22
Genetic studies of complex traits have uncovered only a small number of risk markers, explaining a small fraction of heritability and adding little improvement to disease risk prediction. Standard single-marker methods may lack power in selecting informative markers or estimating effects, and most existing methods do not account for non-linearity. Identifying markers with weak signals and estimating their joint effects among many non-informative markers remains challenging. One potential approach is to group markers based on biological knowledge such as gene structure: if markers in a group tend to have similar effects, proper use of the group structure can improve power and efficiency in estimation. We propose a two-stage method relating markers to disease risk by taking advantage of known gene-set structures. Imposing a naive Bayes kernel machine (KM) model, we estimate gene-set-specific risk models that relate each gene-set to the outcome in stage I. The KM framework efficiently models potentially non-linear effects of predictors without requiring explicit specification of functional forms. In stage II, we aggregate information across gene-sets via a regularization procedure. Estimation and computational efficiency are further improved with kernel principal component analysis. Asymptotic results for model estimation and gene-set selection are derived, and numerical studies suggest that the proposed procedure could outperform existing procedures for constructing genetic risk models.
Six-sigma application in tire-manufacturing company: a case study
NASA Astrophysics Data System (ADS)
Gupta, Vikash; Jain, Rahul; Meena, M. L.; Dangayach, G. S.
2017-09-01
Globalization, the advancement of technology, and increasing customer demand have changed the way companies do business. To overcome these barriers, the six-sigma define-measure-analyze-improve-control (DMAIC) method is popular and useful; it helps trim waste and generate potential improvements in process as well as service industries. In the current research, the DMAIC method was used to decrease the process variation of the bead splice that was causing wastage of material. This six-sigma DMAIC study began with problem identification through the voice of the customer in the define step. The subsequent step consisted of gathering the specification data of the existing tire bead. This was followed by the analysis and improvement steps, where six-sigma quality tools such as the cause-effect diagram, statistical process control, and substantial analysis of the existing system were implemented for root-cause identification and reduction of process variation. Process control charts were used to systematically observe and control the process. Using the DMAIC methodology, the standard deviation was decreased from 2.17 to 1.69, the process capability index (Cp) was enhanced from 1.65 to 2.95, and the process performance capability index (Cpk) was enhanced from 0.94 to 2.66. A DMAIC methodology was established that can play a key role in reducing defects in the tire-manufacturing process in India.
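The Cp and Cpk indices reported above follow standard textbook formulas: Cp = (USL − LSL) / 6σ and Cpk = min(USL − μ, μ − LSL) / 3σ. A minimal sketch, with made-up bead-splice measurements and specification limits rather than the study's data:

```python
import statistics

def process_capability(data, lsl, usl):
    """Cp and Cpk from specification limits and sample statistics.
    Cp measures spread against the spec width; Cpk additionally
    penalizes a process mean that is off-center. The numbers passed
    in below are illustrative, not the study's measurements."""
    mu = statistics.mean(data)
    sigma = statistics.stdev(data)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# hypothetical bead-splice width samples (mm) against made-up spec limits
cp, cpk = process_capability([9.0, 10.0, 11.0], lsl=4.0, usl=16.0)
```

For a perfectly centered process, as here, Cp equals Cpk; the study's Cpk of 0.94 rising to 2.66 reflects both reduced spread and better centering.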
NASA Astrophysics Data System (ADS)
Jin, N.; Yang, F.; Shang, S. Y.; Tao, T.; Liu, J. S.
2016-08-01
To address the limitations of the low voltage ride through (LVRT) technology of traditional photovoltaic inverters, this paper proposes an LVRT control method based on model current predictive control (MCPC). This method can effectively improve the photovoltaic inverter's output characteristics and response speed. In the MCPC design for the photovoltaic grid-connected inverter, the sum of the absolute values of the errors between the predicted and the given currents is adopted as the cost function, and the optimal space voltage vector is selected accordingly. The photovoltaic inverter automatically switches between two control modes, prioritizing active or reactive power according to the operating state, which effectively improves the inverter's LVRT capability. Simulation and experimental results prove that the proposed method is correct and effective.
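One control step of the cost-function-based vector selection described above can be sketched as follows. The single-phase L-filter prediction model, the scalar stand-ins for the space voltage vectors, and all parameter values are illustrative assumptions, not the paper's inverter model:

```python
import numpy as np

def mcpc_select(i_meas, i_ref, v_candidates, e_grid, L, Ts):
    """One step of model current predictive control: predict the next
    current for every candidate inverter voltage vector and choose the
    vector minimizing the absolute error between predicted and given
    (reference) current - the cost function the abstract describes."""
    costs = []
    for v in v_candidates:
        # Euler-discretized inductor dynamics: di/dt = (v - e) / L
        i_pred = i_meas + (Ts / L) * (v - e_grid)
        costs.append(abs(i_ref - i_pred))
    best = int(np.argmin(costs))
    return best, costs[best]

# pick among four scalar stand-ins for the inverter's space voltage vectors
best, cost = mcpc_select(i_meas=0.0, i_ref=1.0,
                         v_candidates=[0.0, 5.0, 10.0, 15.0],
                         e_grid=0.0, L=1e-3, Ts=1e-4)
```

Because the candidate set is finite, no modulator is needed: the selected vector index is applied directly for the next switching period, which is what gives MCPC its fast response.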
Guided SAR image despeckling with probabilistic non local weights
NASA Astrophysics Data System (ADS)
Gokul, Jithin; Nair, Madhu S.; Rajan, Jeny
2017-12-01
SAR images are generally corrupted by granular disturbances called speckle, which make visual analysis and detail extraction difficult. Non-local despeckling techniques with probabilistic similarity have been a recent trend in SAR despeckling. To achieve effective speckle suppression without compromising detail preservation, we propose an improvement to the existing Generalized Guided Filter with Bayesian Non-Local Means (GGF-BNLM) method. The proposed method (Guided SAR Image Despeckling with Probabilistic Non Local Weights) replaces heuristic parametric constants in the GGF-BNLM method with values derived dynamically from image statistics for weight computation. These changes make the GGF-BNLM method adaptive and, as a result, yield significant improvement in performance. Experimental analysis on SAR images shows excellent speckle reduction without compromising feature preservation when compared to the GGF-BNLM method. Results are also compared with other state-of-the-art and classic SAR despeckling techniques to demonstrate the effectiveness of the proposed method.
Compensated Box-Jenkins transfer function for short term load forecast
DOE Office of Scientific and Technical Information (OSTI.GOV)
Breipohl, A.; Yu, Z.; Lee, F.N.
In past years, the Box-Jenkins ARIMA method and the Box-Jenkins transfer function (BJTF) method have been among the most commonly used methods for short-term electrical load forecasting. But when there is a sudden change in temperature, both methods tend to exhibit larger forecast errors. This paper demonstrates that the load forecasting errors resulting from either the BJ ARIMA model or the BJTF model are not simply white noise but rather well-patterned noise, and the patterns in the noise can be used to improve the forecasts. Thus a compensated Box-Jenkins transfer function (CBJTF) method is proposed to improve the accuracy of the load prediction. Case studies show about a 14-33% reduction in the root mean square (RMS) errors of the forecasts, depending on the compensation time period as well as the compensation method used.
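The compensation idea, that patterned (non-white) residuals carry usable signal, can be illustrated with the simplest possible pattern model: add the recent mean error back onto the base forecast. This is a toy stand-in, not the paper's CBJTF formulation, and the window length and units are assumptions:

```python
import numpy as np

def compensate_forecast(base_forecast, past_errors, window=24):
    """If recent forecast errors are patterned rather than white,
    their recent mean carries signal and can be added back onto the
    base (BJTF-style) forecast. The 24-sample window and simple-mean
    pattern model are illustrative assumptions."""
    bias = float(np.mean(past_errors[-window:]))
    return base_forecast + bias

# hypothetical hourly errors: the base model persistently under-forecasts by 5 MW
errors = np.full(48, 5.0)
corrected = compensate_forecast(100.0, errors)
```

For truly white residuals the correction would average to zero, so the method only helps when, as the paper observes, the errors are systematically patterned.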
48 CFR 1852.227-70 - New technology.
Code of Federal Regulations, 2011 CFR
2011-10-01
... method; or to operate, in case of a machine or system; and, in each case, under such conditions as to... contract. Reportable items include, but are not limited to, new processes, machines, manufactures, and compositions of matter, and improvements to, or new applications of, existing processes, machines, manufactures...
Searching for Ideal Contraceptives.
ERIC Educational Resources Information Center
Djerassi, Carl
1985-01-01
Discusses the problem of adolescent pregnancy and focuses on improving contraception as a practical solution. Describes the advantages and disadvantages of existing methods (the condom, the pill, and the contraceptive sponge). Predicts that the development of a fundamentally new contraceptive, such as a monthly menses-inducer pill, will not occur…
An EST database of the Caribbean fruit fly, Anastrepha suspensa (Diptera: Tephritidae)
USDA-ARS?s Scientific Manuscript database
The ability to create transgenic strains of economically and medically important insect species has the potential to greatly improve existing biological control methods, which is a major goal of our laboratory at the Center for Medical, Agricultural and Veterinary Entomology, USDA, Agricultural Rese...
Improving Clinical Practices for Children with Language and Learning Disorders
ERIC Educational Resources Information Center
Kamhi, Alan G.
2014-01-01
Purpose: This lead article of the Clinical Forum addresses some of the gaps that exist between clinical practice and current knowledge about instructional factors that influence learning and language development. Method: Topics reviewed and discussed include principles of learning, generalization, treatment intensity, processing interventions,…
Ma, Liheng; Zhan, Dejun; Jiang, Guangwen; Fu, Sihua; Jia, Hui; Wang, Xingshu; Huang, Zongsheng; Zheng, Jiaxing; Hu, Feng; Wu, Wei; Qin, Shiqiao
2015-09-01
The attitude accuracy of a star sensor decreases rapidly when star images become motion-blurred under dynamic conditions. Existing techniques concentrate on a single frame of star images to solve this problem, and improvements are obtained to a certain extent. An attitude-correlated frames (ACF) approach, which concentrates on the features of the attitude transforms of adjacent star image frames, is proposed to improve upon the existing techniques. The attitude transforms between different star image frames are measured precisely by the strap-down gyro unit. With the ACF method, a much larger star image frame is obtained through the combination of adjacent frames. As a result, the degradation of attitude accuracy caused by motion blurring is compensated for. The improvement of the attitude accuracy is approximately proportional to the square root of the number of correlated star image frames. Simulations and experimental results indicate that the ACF approach is effective in removing random noise and improving the attitude determination accuracy of the star sensor under highly dynamic conditions.
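The square-root scaling of the accuracy improvement can be checked with a small Monte Carlo sketch, assuming each frame contributes an independent, zero-mean noisy estimate of the same attitude (a simplification of the ACF combination):

```python
import random

def attitude_noise_std(n_frames, noise_std=1.0, trials=20000, seed=1):
    # Each frame contributes an independent noisy attitude estimate of the
    # same (zero) true attitude; combining frames averages the noise down.
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        frames = [rng.gauss(0.0, noise_std) for _ in range(n_frames)]
        estimates.append(sum(frames) / n_frames)
    mean = sum(estimates) / trials
    return (sum((e - mean) ** 2 for e in estimates) / trials) ** 0.5
```

Quadrupling the number of combined frames should roughly halve the random attitude error.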
Utilizing Wisconsin Afterschool Programs to Increase Physical Activity in Youth.
Cavanagh, Bradley D; Meinen, Amy
2015-10-01
Approximately 31.7% of children in the United States are overweight or obese. Interventions in the afterschool setting may help combat childhood obesity. Research exists on interventions in school settings, but few data exist on interventions in afterschool programs. This study investigates increasing physical activity (PA) in Wisconsin afterschool programs. A literature review was used to develop key informant interviews. Utilizing a constant comparative method, interview data were coded and themes were identified. The themes, literature review, and expert opinions were used to formulate recommendations for improving PA in afterschool programs. Programs had success in utilizing different resources to improve PA. Key barriers to improving PA included grant-imposed academic restrictions, the need for provider education, fears of conflict and competitiveness, and a lack of understanding of the relationship between health and sedentary behavior. There is a clear need for additional exploration into improving PA in Wisconsin afterschool programs. This study resulted in specific recommendations to increase PA in afterschool programming, including utilizing school wellness policies and staff professional development to improve PA in afterschool programs. © 2015, American School Health Association.
Ren, Xiaojie; Zhao, Xinhe; Turcotte, François; Deschênes, Jean-Sébastien; Tremblay, Réjean; Jolicoeur, Mario
2017-02-11
Microalgae have the potential to rapidly accumulate lipids of high interest for the food, cosmetics, pharmaceutical and energy (e.g. biodiesel) industries. However, current lipid extraction methods show limited efficiency and, until now, extraction protocols have not been fully optimized for specific lipid compounds. The present study thus presents a novel lipid extraction method, consisting of the addition of a water treatment of the biomass between the two solvent extraction steps of current two-stage extraction methods. The resulting modified method not only enhances lipid extraction efficiency, but also yields a higher triacylglycerol (TAG) ratio, which is highly desirable for biodiesel production. Modification of four existing methods using acetone, chloroform/methanol (Chl/Met), chloroform/methanol/H2O (Chl/Met/H2O) and dichloromethane/methanol (Dic/Met) showed respective lipid extraction yield enhancements of 72.3, 35.8, 60.3 and 60.9%. The modified acetone method resulted in the highest extraction yield, with 68.9 ± 0.2% DW total lipids. Extraction of TAG was particularly improved with the water treatment, especially for the Chl/Met/H2O and Dic/Met methods. The acetone method with the water treatment led to the highest extraction level of TAG, with 73.7 ± 7.3 µg/mg DW, which is 130.8 ± 10.6% higher than the maximum value obtained with the four classical methods (31.9 ± 4.6 µg/mg DW). Interestingly, the water treatment preferentially improved the extraction of intracellular fractions, i.e. TAG, sterols, and free fatty acids, compared to the lipid fractions of the cell membranes, which are constituted of phospholipids (PL), acetone mobile polar lipids and hydrocarbons. Finally, from the 32 fatty acids analyzed for both the neutral lipid (NL) and polar lipid (PL) fractions, it is clear that the water treatment greatly improves the NL-to-PL ratio for the four standard methods assessed.
Water treatment of the biomass after the first solvent extraction step helps the subsequent release of intracellular lipids in the second extraction step, thus improving the global lipid extraction yield. In addition, the water treatment positively modifies the intracellular lipid class ratios of the final extract, in which the TAG ratio is significantly increased without changes in the fatty acid composition. The novel method thus provides an efficient way to improve the lipid extraction yield of existing methods, as well as selectively favoring TAG, a lipid of the utmost interest for biodiesel production.
Method of Improved Fuzzy Contrast Combined Adaptive Threshold in NSCT for Medical Image Enhancement
Yang, Jie; Kasabov, Nikola
2017-01-01
Noise and artifacts are introduced into medical images by acquisition techniques and systems. This interference leads to low contrast and distortion in images, which not only reduces the effectiveness of the medical image but also seriously affects clinical diagnosis. This paper proposes an algorithm for medical image enhancement based on the nonsubsampled contourlet transform (NSCT), which combines an adaptive threshold and an improved fuzzy set. First, the original image is decomposed into the NSCT domain with a low-frequency subband and several high-frequency subbands. Then, a linear transformation is adopted for the coefficients of the low-frequency component. An adaptive threshold method is used for the removal of high-frequency image noise. Finally, the improved fuzzy set is used to enhance the global contrast and the Laplace operator is used to enhance the details of the medical images. Experiments and simulation results show that the proposed method is superior to existing methods of image noise removal, improves the contrast of the image significantly, and obtains a better visual effect. PMID:28744464
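A minimal sketch of an adaptive threshold for high-frequency coefficients, assuming a MAD-based noise estimate and soft shrinkage; the paper's NSCT-domain rule may differ in detail:

```python
import math

def soft_threshold(coeffs):
    # Threshold derived from a robust (median absolute deviation) noise
    # estimate rather than a fixed constant -- an "adaptive threshold".
    med = sorted(abs(c) for c in coeffs)[len(coeffs) // 2]
    sigma = med / 0.6745
    t = sigma * math.sqrt(2.0 * math.log(len(coeffs)))
    out = []
    for c in coeffs:
        mag = max(abs(c) - t, 0.0)          # shrink toward zero
        out.append(mag if c >= 0 else -mag)
    return out
```

Small coefficients, which are dominated by noise, are set to zero, while large (signal-bearing) coefficients are only slightly shrunk.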
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muhleman, T.; Dempsey, P.
Examples of new technology in drilling reflect, for the most part, the industry's determination to overcome harsh drilling environments and to improve drilling efficiency through new methods and better equipment. The technology addressed includes a BOP fire prevention device; a diverter system for floaters; a unique telescoping derrick; Sohio's mobile drilling island; more power from existing SCRs; a radio-based MWD system; better field tool joint inspection; a combined drilling/production platform; and a subsea BOP protection method.
Analytical concepts for health management systems of liquid rocket engines
NASA Technical Reports Server (NTRS)
Williams, Richard; Tulpule, Sharayu; Hawman, Michael
1990-01-01
Substantial improvement in health management systems performance can be realized by implementing advanced analytical methods of processing existing liquid rocket engine sensor data. In this paper, such techniques ranging from time series analysis to multisensor pattern recognition to expert systems to fault isolation models are examined and contrasted. The performance of several of these methods is evaluated using data from test firings of the Space Shuttle main engines.
Comparison of Provider Types Who Performed Prehospital Lifesaving Interventions: A Prospective Study
2014-12-01
In less than 2 hours, 15 critically ill children were triaged and admitted to the PICU or surge spaces. Conclusions: Identified strengths included...details increasing telemedicine utilization during a 4-year period and outlines program structural changes that improved utilization. Methods: The study...population survival. CSC ICU resource-allocation algorithms (ALGs) exist for adults. Our goal was to evaluate a CSC pandemic ALG for children. Methods
Defense Small Business Innovation Research Program (SBIR) FY 1984.
1984-01-12
nuclear submarine non-metallic, light weight, high strength piping. Includes the development of adequate fabrication procedures for attaching pipe ...waste heat economizer methods, require development. Improved conventional and hybrid heat pipes and/or two-phase transport devices are required...DESCRIPTION: A need exists to conceive, design, fabricate and test a method of adjusting the length of the individual legs of nylon or Kevlar rope sling
Andrew N. Gray; Thomas R. Whittier; David L. Azuma
2014-01-01
A substantial portion of the carbon (C) emitted by human activity is apparently being stored in forest ecosystems in the Northern Hemisphere, but the magnitude and cause are not precisely understood. Current official estimates of forest C flux are based on a combination of field measurements and other methods. The goal of this study was to improve on existing methods...
Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2018-03-01
Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.
Mines Systems Safety Improvement Using an Integrated Event Tree and Fault Tree Analysis
NASA Astrophysics Data System (ADS)
Kumar, Ranjan; Ghosh, Achyuta Krishna
2017-04-01
Mine systems such as the ventilation system, strata support system, and flame-proof safety equipment are exposed to dynamic operational conditions such as stress, humidity, dust, and temperature, and safety improvement of such systems is preferably done during the planning and design stage. However, existing safety analysis methods do not handle the accident initiation and progression of mine systems explicitly. To bridge this gap, this paper presents an integrated Event Tree (ET) and Fault Tree (FT) approach for safety analysis and improvement of mine systems design. This approach includes ET and FT modeling coupled with a redundancy allocation technique. In this method, a concept of top hazard probability is introduced for identifying system failure probability, and redundancy is allocated to the system at either the component or system level. A case study on mine methane explosion safety with two initiating events is performed. The results demonstrate that the presented method can reveal the accident scenarios and improve the safety of complex mine systems simultaneously.
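The top hazard probability concept can be sketched with elementary AND/OR gate arithmetic, assuming independent failures; the barrier structure below is illustrative, not the paper's case study:

```python
def and_gate(ps):
    # A barrier fails only if all of its redundant components fail.
    p = 1.0
    for q in ps:
        p *= q
    return p

def or_gate(ps):
    # The system fails if at least one barrier fails.
    ok = 1.0
    for q in ps:
        ok *= 1.0 - q
    return 1.0 - ok

def top_hazard(p_init, barriers):
    # Top hazard: initiating event AND failure of at least one barrier,
    # where each barrier is a list of component failure probabilities.
    return p_init * or_gate([and_gate(b) for b in barriers])
```

Allocating a redundant component to a barrier lowers that barrier's failure probability and hence the top hazard, which is the basis of the redundancy allocation step.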
Removing sun glint from optical remote sensing images of shallow rivers
Overstreet, Brandon T.; Legleiter, Carl
2017-01-01
Sun glint is the specular reflection of light from the water surface, which often causes unusually bright pixel values that can dominate fluvial remote sensing imagery and obscure the water-leaving radiance signal of interest for mapping bathymetry, bottom type, or water column optical characteristics. Although sun glint is ubiquitous in fluvial remote sensing imagery, river-specific methods for removing sun glint are not yet available. We show that existing sun glint-removal methods developed for multispectral images of marine shallow water environments over-correct shallow portions of fluvial remote sensing imagery, resulting in regions of unreliable data along channel margins. We build on existing marine glint-removal methods to develop a river-specific technique that removes sun glint from shallow areas of the channel without overcorrection by accounting for non-negligible water-leaving near-infrared radiance. This new sun glint-removal method can improve the accuracy of spectrally-based depth retrieval in cases where sun glint dominates the at-sensor radiance. For an example image of the gravel-bed Snake River, Wyoming, USA, observed-vs.-predicted R2 values for depth retrieval improved from 0.66 to 0.76 following sun glint removal. The methodology presented here is straightforward to implement and could be incorporated into image processing workflows for multispectral images that include a near-infrared band.
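The over-correction issue can be sketched with the classic marine deglinting arithmetic; the `ambient_nir` parameter below is a hypothetical stand-in for an estimate of water-leaving NIR radiance, not the paper's calibrated quantity:

```python
def deglint(band, nir, slope, ambient_nir):
    # Classic deglinting subtracts slope * (NIR - min_NIR). Substituting a
    # larger estimate of the water-leaving NIR radiance (ambient_nir)
    # shrinks the correction in shallow water and avoids over-correction.
    return [b - slope * max(n - ambient_nir, 0.0)
            for b, n in zip(band, nir)]
```

With `ambient_nir` set to the deep-water minimum, this reduces to the standard marine method; raising it toward the true water-leaving NIR leaves more signal in shallow pixels.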
Detecting bursts in the EEG of very and extremely premature infants using a multi-feature approach.
O'Toole, John M; Boylan, Geraldine B; Lloyd, Rhodri O; Goulding, Robert M; Vanhatalo, Sampsa; Stevenson, Nathan J
2017-07-01
To develop a method that segments preterm EEG into bursts and inter-bursts by extracting and combining multiple EEG features. Two EEG experts annotated bursts in individual EEG channels for 36 preterm infants with gestational age < 30 weeks. The feature set included spectral, amplitude, and frequency-weighted energy features. Using a consensus annotation, feature selection removed redundant features and a support vector machine combined the remaining features. Area under the receiver operating characteristic curve (AUC) and Cohen's kappa (κ) evaluated performance within a cross-validation procedure. The proposed channel-independent method improves AUC by 4-5% over existing methods (p < 0.001, n=36), with median (95% confidence interval) AUC of 0.989 (0.973-0.997) and sensitivity-specificity of 95.8-94.4%. Agreement rates between the detector and experts' annotations, κ=0.72 (0.36-0.83) and κ=0.65 (0.32-0.81), are comparable to inter-rater agreement, κ=0.60 (0.21-0.74). Automating the visual identification of bursts in preterm EEG is achievable with a high level of accuracy. Multiple features, combined using a data-driven approach, improve on existing single-feature methods. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
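A toy sketch of the detection pipeline, with fixed thresholds standing in for the support vector machine that, in the paper, learns the feature combination from expert annotations:

```python
def window_features(x):
    # Simplified stand-ins for the paper's amplitude/energy feature set.
    n = len(x)
    mean = sum(x) / n
    power = sum((v - mean) ** 2 for v in x) / n
    peak = max(abs(v - mean) for v in x)
    return power, peak

def detect_bursts(signal, win=4, power_t=0.5, peak_t=1.0):
    # Label each non-overlapping window as burst (1) or inter-burst (0)
    # by combining features; a trained classifier would replace the
    # hand-set AND rule used here.
    labels = []
    for i in range(0, len(signal) - win + 1, win):
        power, peak = window_features(signal[i:i + win])
        labels.append(1 if power > power_t and peak > peak_t else 0)
    return labels
```

Combining several features makes the decision more robust than thresholding any single feature, which is the core claim of the multi-feature approach.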
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than current existing simplified methods and can be applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not been fully exploited yet to achieve enhanced performance under different levels of seismic hazard. 
A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis, which makes it possible to achieve optimum seismic performance with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach both for the upgrade of existing seismically deficient bridges and for the design of new isolated bridges.
Distortion Correction of OCT Images of the Crystalline Lens: GRIN Approach
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-01-01
Purpose: To propose a method to correct Optical Coherence Tomography (OCT) images of the posterior surface of the crystalline lens by incorporating its gradient index (GRIN) distribution, and to explore its possibilities for posterior surface shape reconstruction in comparison to existing correction methods. Methods: 2-D images of 9 human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared to the performance of two other existing methods (simple division; refraction correction assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak-to-valley, and lens thickness shifts from the nominal data. Results: Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface, in terms of RMS and peak values, with errors less than 6 μm and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. Conclusions: The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in 2-D, it is expected that 3-D imaging will fully exploit the potential of the technique.
The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations. PMID:22466105
Samardžić, Selena; Milošević, Miodrag; Todorović, Nataša; Lakatoš, Robert
2018-04-04
The development of new methods and the improvement of existing methods for the specific activity determination of 90Sr and other distinct beta emitters have been of considerable interest. The reason for this interest is the notably small number of methods able to meet all of the set criteria, such as reliability of the results, measurement uncertainty and time, and minimal production of radioactive waste, as well as applicability to various samples with reference to their nature, geometry and composition. In this paper, two methods for rapid 90Sr activity determination based on Monte Carlo simulations are used: one for a Si semiconductor detector for beta spectrometric measurements and the other for a Geiger-Müller (GM) ionization probe. To improve the reliability of the measurement results, samples with high- and low-activity strontium solutions were prepared in the form of dry residues. The results of the proposed methodology were verified against a standard method using a liquid scintillation counter, and notably good agreement was achieved. Copyright © 2018 Elsevier Ltd. All rights reserved.
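A minimal sketch of the Monte Carlo idea, estimating only the geometric (solid-angle) efficiency of a circular detector window for an on-axis point source; a full detector simulation would also model absorption, scattering, and the detector response:

```python
import math
import random

def geometric_efficiency(distance, radius, n=200000, seed=7):
    # Fraction of isotropic emissions from an on-axis point source that
    # reach a circular detector window of the given radius.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        cos_t = rng.uniform(-1.0, 1.0)       # isotropic polar direction
        if cos_t <= 0.0:
            continue                          # emitted away from detector
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        if distance * sin_t / cos_t <= radius:
            hits += 1
    return hits / n
```

The estimate can be checked against the closed-form solid-angle result, efficiency = (1 − d/√(d² + R²))/2.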
Current limitations and recommendations to improve testing ...
In this paper, existing regulatory frameworks and test systems for assessing potential endocrine-active chemicals are described and associated challenges discussed, along with proposed approaches to address these challenges. Regulatory frameworks vary somewhat across organizations, but all basically evaluate whether a chemical possesses endocrine activity and whether this activity can result in adverse outcomes either to humans or the environment. Current test systems include in silico, in vitro and in vivo techniques focused on detecting potential endocrine activity, and in vivo tests that collect apical data to detect possible adverse effects. These test systems are currently designed to robustly assess endocrine activity and/or adverse effects in the estrogen, androgen, and thyroid hormonal pathways; however, there are some limitations of current test systems for evaluating endocrine hazard and risk. These limitations include a lack of certainty regarding: 1) adequately sensitive species and life-stages, 2) mechanistic endpoints that are diagnostic for endocrine pathways of concern, and 3) the linkage between mechanistic responses and apical, adverse outcomes. Furthermore, some existing test methods are resource intensive in regard to time, cost, and use of animals. However, based on recent experiences, there are opportunities to improve approaches to, and guidance for, existing test methods, and to reduce uncertainty. For example, in vitro high throughput
Variable Selection in the Presence of Missing Data: Imputation-based Methods.
Zhao, Yize; Long, Qi
2017-01-01
Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
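The first strategy can be sketched as follows; the per-dataset selector is a toy correlation-threshold rule standing in for lasso or stepwise selection, and the combination rule is a simple majority vote across imputed datasets:

```python
def corr(x, y):
    # Pearson correlation, with a zero-variance guard.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    if sx == 0.0 or sy == 0.0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def select_per_dataset(rows, threshold=0.5):
    # Toy per-dataset selector: keep predictors whose absolute correlation
    # with the outcome (last column) exceeds a threshold.
    y = [r[-1] for r in rows]
    p = len(rows[0]) - 1
    return [abs(corr([r[j] for r in rows], y)) > threshold for j in range(p)]

def combine_selections(selections, vote=0.5):
    # Strategy 1: keep a variable chosen in at least a `vote` fraction
    # of the imputed datasets.
    m = len(selections)
    return [sum(s[j] for s in selections) / m >= vote
            for j in range(len(selections[0]))]
```

The voting threshold controls how consistently a variable must be selected across imputations before it enters the final model.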
The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations
Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka
2011-01-01
Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007
Improved artificial bee colony algorithm based gravity matching navigation method.
Gao, Wei; Zhao, Bo; Zhou, Guang Tao; Wang, Qiu Ying; Yu, Chun Yang
2014-07-18
The gravity matching navigation algorithm is one of the key technologies for gravity-aided inertial navigation systems. With the development of intelligent algorithms, the powerful search ability of the Artificial Bee Colony (ABC) algorithm makes it possible to apply it to the gravity matching navigation field. However, the existing search mechanisms of basic ABC algorithms cannot meet the need for high accuracy in gravity-aided navigation. Firstly, proper modifications are proposed to improve the performance of the basic ABC algorithm. Secondly, a new search mechanism is presented in this paper which is based on an improved ABC algorithm using external speed information. Finally, a modified Hausdorff distance is introduced to screen the possible matching results. Both simulations and ocean experiments verify the feasibility of the method, and results show that the matching rate of the method is high enough to obtain a precise matching position.
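The modified Hausdorff distance (in the Dubuisson-Jain sense, which may differ in detail from the paper's variant) can be sketched as follows for 2-D point sets such as candidate track matches:

```python
def modified_hausdorff(A, B):
    # Modified Hausdorff distance (Dubuisson & Jain, 1994): the larger of
    # the two directed mean nearest-neighbour distances between the sets.
    def mean_nn(P, Q):
        return sum(
            min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5 for qx, qy in Q)
            for px, py in P) / len(P)
    return max(mean_nn(A, B), mean_nn(B, A))
```

Averaging nearest-neighbour distances makes the measure less sensitive to single outlier points than the classical Hausdorff distance, which is useful when screening noisy matching candidates.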
Soliton solutions of the quantum Zakharov-Kuznetsov equation which arises in quantum magneto-plasmas
NASA Astrophysics Data System (ADS)
Sindi, Cevat Teymuri; Manafian, Jalil
2017-02-01
In this paper, we extended the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGEM) proposed by Manafian and Fazli (Opt. Quantum Electron. 48, 413 (2016)) to construct new types of soliton wave solutions of nonlinear partial differential equations (NPDEs). Moreover, we use the improvement of the Exp-function method (IEFM) proposed by Jahani and Manafian (Eur. Phys. J. Plus 131, 54 (2016)) for obtaining solutions of NPDEs. The merit of the presented three methods is that they can find further solutions to the considered problems, including soliton, periodic, kink, and kink-singular wave solutions. This paper studies the quantum Zakharov-Kuznetsov (QZK) equation by the aid of the improved tan(φ/2)-expansion method, the generalized G'/G-expansion method and the improvement of the Exp-function method. Moreover, the 1-soliton solution of the modified QZK equation with power law nonlinearity is obtained by the aid of the traveling wave hypothesis, with the necessary constraints in place for the existence of the soliton. Comparing our new results with those of Ebadi et al. (Astrophys. Space Sci. 341, 507 (2012)), namely, the G'/G-expansion method, exp-function method and modified F-expansion method, shows that our results give further solutions. Finally, these solutions might play an important role in engineering, physics and applied mathematics fields.
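As a rough sketch of the ansatz behind the improved tan(φ/2)-expansion method (following Manafian's published formulation; the exact auxiliary equation should be checked against the cited paper), the solution of the NPDE reduced to a travelling-wave ODE in ξ is sought as a finite series:

```latex
u(\xi) = S(\phi) = \sum_{k=0}^{m} A_k \left[ p + \tan\frac{\phi(\xi)}{2} \right]^{k}
       + \sum_{k=1}^{m} B_k \left[ p + \tan\frac{\phi(\xi)}{2} \right]^{-k},
\qquad
\phi'(\xi) = a \sin\phi(\xi) + b \cos\phi(\xi) + c,
```

where m is fixed by balancing the highest-order derivative against the strongest nonlinear term, and the constants A_k, B_k, p, a, b, c are determined by substituting the ansatz into the equation and solving the resulting algebraic system.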
Real time algorithms for sharp wave ripple detection.
Sethi, Ankit; Kemere, Caleb
2014-01-01
Neural activity during sharp wave ripples (SWR), short bursts of co-ordinated oscillatory activity in the CA1 region of the rodent hippocampus, is implicated in a variety of memory functions from consolidation to recall. Detection of these events in an algorithmic framework has thus far relied on simple thresholding techniques with heuristically derived parameters. This study is an investigation into testing and improving the current methods for detection of SWR events in neural recordings. We propose and profile methods to reduce latency in ripple detection. The proposed algorithms are tested on simulated ripple data. The findings show that simple real-time algorithms can improve upon existing power thresholding methods and can detect ripple activity with latencies in the range of 10-20 ms.
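A minimal streaming power-threshold detector, illustrating the latency/window-length trade-off rather than the paper's specific algorithms; window length and threshold are illustrative values:

```python
def online_ripple_detector(samples, win=8, threshold=4.0):
    # Streaming mean-square power over a short window; returns the sample
    # index at which detection first fires. A shorter window lowers
    # latency at the cost of a noisier power estimate.
    buf = []
    energy = 0.0
    for i, s in enumerate(samples):
        buf.append(s * s)
        energy += s * s
        if len(buf) > win:
            energy -= buf.pop(0)
        if len(buf) == win and energy / win > threshold:
            return i
    return None
```

On a quiet signal followed by a high-amplitude oscillation, the detector fires a few samples into the oscillation, once enough high-power samples have entered the window.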
NASA Astrophysics Data System (ADS)
McCray, Wilmon Wil L., Jr.
The research was prompted by a need to conduct a study that assesses process improvement, quality management, and analytical techniques taught to students in U.S. colleges' and universities' undergraduate and graduate systems engineering and computing science (e.g., software engineering, computer science, and information technology) degree programs during their academic training that can be applied to quantitatively manage processes for performance. Everyone involved in executing repeatable processes in the software and systems development lifecycle needs to become familiar with the concepts of quantitative management, statistical thinking, process improvement methods, and how they relate to process performance. Organizations are starting to embrace the de facto Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) models as process improvement frameworks to improve business process performance. High-maturity process areas in the CMMI model imply the use of analytical, statistical, and quantitative management techniques, and process performance modeling, to identify and eliminate sources of variation, continually improve process performance, reduce cost, and predict future outcomes. The research study identifies and provides a detailed discussion of the gap analysis findings of process improvement and quantitative analysis techniques taught in U.S. universities' systems engineering and computing science degree programs, gaps that exist in the literature, and a comparison analysis which identifies the gaps that exist between the SEI's "healthy ingredients" of a process performance model and courses taught in U.S. university degree programs. The research also heightens awareness that academicians have conducted little research on applicable statistics and quantitative techniques that can be used to demonstrate high maturity as implied in the CMMI models.
The research also includes a Monte Carlo simulation optimization model and dashboard that demonstrates the use of statistical methods, statistical process control, sensitivity analysis, quantitative and optimization techniques to establish a baseline and predict future customer satisfaction index scores (outcomes). The American Customer Satisfaction Index (ACSI) model and industry benchmarks were used as a framework for the simulation model.
Clare, John; McKinney, Shawn T.; DePue, John E.; Loftin, Cynthia S.
2017-01-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture–recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could substantially bias distribution estimates if not corrected for.
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters.
Dynamically Evolving Sectors for Convective Weather Impact
NASA Technical Reports Server (NTRS)
Drew, Michael C.
2010-01-01
A new strategy for altering existing sector boundaries in response to blocking convective weather is presented. This method seeks to improve the reduced capacity of sectors directly affected by weather by moving boundaries in a direction that offers the greatest capacity improvement. The boundary deformations are shared by neighboring sectors within the region in a manner that preserves their shapes and sizes as much as possible. This reduces the controller workload involved with learning new sector designs. The algorithm that produces the altered sectors is based on a force-deflection mesh model that needs only nominal traffic patterns and the shape of the blocking weather for input. It does not require weather-affected traffic patterns that would have to be predicted by simulation. When compared to an existing optimal sector design method, the sectors produced by the new algorithm are more similar to the original sector shapes, resulting in sectors that may be more suitable for operational use because the change is not as drastic. Also, preliminary results show that this method produces sectors that can equitably distribute the workload of rerouted weather-affected traffic throughout the region where inclement weather is present. This is demonstrated by sector aircraft count distributions of simulated traffic in weather-affected regions.
NASA Astrophysics Data System (ADS)
Botha, J. D. M.; Shahroki, A.; Rice, H.
2017-12-01
This paper presents an enhanced method for predicting aerodynamically generated broadband noise produced by a Vertical Axis Wind Turbine (VAWT). The method improves on existing work for VAWT noise prediction and incorporates recently developed airfoil noise prediction models. Inflow-turbulence and airfoil self-noise mechanisms are both considered. Airfoil noise predictions are dependent on aerodynamic input data, and time-dependent Computational Fluid Dynamics (CFD) calculations are carried out to solve for the aerodynamic solution. Analytical flow methods are also benchmarked against the CFD-informed noise prediction results to quantify errors in the former approach. Comparisons to experimental noise measurements for an existing turbine are encouraging. A parameter study is performed and shows the sensitivity of overall noise levels to changes in inflow velocity and inflow turbulence. Noise sources are characterised, and the location and mechanism of the primary sources are determined; inflow-turbulence noise is seen to be the dominant source. The use of CFD calculations is seen to improve the accuracy of noise predictions when compared to the analytic flow solution, as well as showing that, for inflow-turbulence noise sources, blade-generated turbulence dominates the atmospheric inflow turbulence.
A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting
Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao
2014-01-01
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce a method to check for errors in the input nodes of the decoder using the solutions of these equations. In order to further correct the checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Because the error-checking equations can have multiple solutions, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than some existing decoding algorithms with the same code length. PMID:25540813
Byabagambi, John B; Broughton, Edward; Heltebeitel, Simon; Wuliji, Tana; Karamagi, Esther
2017-01-01
Inadequate medication dispensing and management by healthcare providers can contribute to poor outcomes among HIV-positive patients. Gaps in medication availability, often associated with pharmacy workforce shortages, are an important barrier to retention in HIV care in Uganda. An intervention to address pharmacy staffing constraints through strengthening pharmaceutical management, dispensing practices, and general competencies of facility clinical and pharmacy staff was implemented in 14 facilities in three districts in eastern Uganda. Teams of staff were organised in each facility and supported to apply quality improvement (QI) methods to address deficits in availability and rational use of HIV drugs. To evaluate the intervention, baseline and end line data were collected 24 months apart. Dispensing practices, clinical wellness and adherence to antiretrovirals improved by 45%, 28% and 20% from baseline to end line, respectively. All clients at end line received the medications prescribed, and medications were correctly, completely and legibly labelled more often. Clients better understood when, how much and for how long they were supposed to take their prescribed medicines at end line. Pharmaceutical management practices also improved from baseline in most categories by statistically significant margins. Facilities significantly improved on correctly recording stock information about antiretroviral drugs (53% vs 100%, P<0.0001). Coinciding with existing staff taking on pharmaceutical roles, facilities improved management of unwanted and expired drugs, notably by optimising use of existing health workers and making pharmaceutical management processes more efficient. Implementation of this improvement intervention in the 14 facilities appeared to have a positive impact on client outcomes, pharmacy department management and providers' self-reported knowledge of QI methods.
These results were achieved at a cost of about US$5.50 per client receiving HIV services at participating facilities.
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This leads to the following benefits: (1) Reduced design time: computer-aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: less training and fewer calculation errors yield substantial savings in design time and related cost. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, no such integrated knowledge-based conceptual and preliminary airplane design system currently exists. The third-generation AAA methods have been tested over a ten-year period on many different airplane designs. Use of the AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single-engine propeller aircraft, business jets, airliners and UAVs to fighters. Data from the various sizing methods will be compared with AAA results to validate these methods.
One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML, will lead to quicker resolving of problems as opposed to conventional methods.
Determining Semantically Related Significant Genes.
Taha, Kamal
2014-01-01
A GO relation embodies some aspects of existence dependency. If GO term x is existence-dependent on GO term y, the presence of y implies the presence of x. Therefore, the genes annotated with the function of the GO term y are usually functionally and semantically related to the genes annotated with the function of the GO term x. A large number of gene set enrichment analysis methods have been developed in recent years for analyzing gene set enrichment. However, most of these methods overlook the structural dependencies between GO terms in the GO graph by not considering the concept of existence dependency. We propose in this paper a biological search engine called RSGSearch that identifies enriched sets of genes annotated with different functions using the concept of existence dependency. We observe that GO term x cannot be existence-dependent on GO term y if x and y have the same specificity (biological characteristics). After encoding into a numeric format the contributions of GO terms annotating target genes to the semantics of their lowest common ancestors (LCAs), RSGSearch uses a microarray experiment to identify the most significant LCA that annotates the result genes. We evaluated RSGSearch experimentally and compared it with five gene set enrichment systems. Results showed marked improvement.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for the effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
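The systematic parameter scan at the heart of such a design process can be sketched generically: evaluate a figure of merit over a grid of foil parameters and keep the best configuration. The objective below is a toy stand-in with a known optimum (in the real workflow each evaluation would be a Monte Carlo transport run); all names and values are illustrative assumptions.

```python
from itertools import product

import numpy as np

def grid_scan(objective, grids):
    """Exhaustively evaluate `objective` on the Cartesian product of
    parameter grids and return (best_value, best_params)."""
    best_val, best_params = float("inf"), None
    for params in product(*grids):
        val = objective(*params)
        if val < best_val:
            best_val, best_params = val, params
    return best_val, best_params

# toy figure of merit standing in for a simulated beam-uniformity estimate,
# with its minimum at "foil thicknesses" (0.3, 0.7)
def figure_of_merit(t1, t2):
    return (t1 - 0.3) ** 2 + (t2 - 0.7) ** 2

grids = [np.linspace(0.0, 1.0, 11), np.linspace(0.0, 1.0, 11)]
best_val, best_params = grid_scan(figure_of_merit, grids)
```

In practice the scan would be run coarse-to-fine, since each grid point costs a full simulation.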
A vibroacoustic diagnostic system as an element improving road transport safety.
Komorska, Iwona
2013-01-01
Mechanical defects of a vehicle driving system can be dangerous on the road. Diagnostic systems, which monitor operations of electric and electronic elements and devices of vehicles, are continuously developed and improved, while defects of mechanical systems are still not managed properly. This article proposes supplementing existing on-board diagnostics with a system of diagnosing selected defects to minimize their impact. It presents a method of diagnosing mechanical defects of the engine, gearbox and other elements of the driving system on the basis of a model of the vibration signal obtained adaptively. This method is suitable for engine valves, engine head gasket, main gearbox, joints, etc.
Patient safety, quality of care, and knowledge translation in the intensive care unit.
Needham, Dale M
2010-07-01
A large gap exists between the completion of clinical research demonstrating the benefit of new treatment interventions and improved patient outcomes resulting from implementation of these interventions as part of routine clinical practice. This gap clearly affects patient safety and quality of care. Knowledge translation is important for addressing this gap, but evaluation of the most appropriate and effective knowledge translation methods is still ongoing. Through describing one model for knowledge translation and an example of its implementation, insights can be gained into systematic methods for advancing the implementation of evidence-based interventions to improve safety, quality, and patient outcomes.
Interacting multiple model forward filtering and backward smoothing for maneuvering target tracking
NASA Astrophysics Data System (ADS)
Nandakumaran, N.; Sutharsan, S.; Tharmarasa, R.; Lang, Tom; McDonald, Mike; Kirubarajan, T.
2009-08-01
The Interacting Multiple Model (IMM) estimator has been proven to be effective in tracking agile targets. Smoothing, or retrodiction, which uses measurements beyond the current estimation time, provides better estimates of target states. Various methods have been proposed for multiple model smoothing in the literature. In this paper, a new smoothing method is proposed that involves forward filtering followed by backward smoothing while maintaining the fundamental spirit of the IMM. The forward filtering is performed using the standard IMM recursion, while the backward smoothing is performed using a novel interacting smoothing recursion. This backward recursion mimics the IMM estimator in the backward direction, where each mode-conditioned smoother uses a standard Kalman smoothing recursion. The resulting algorithm provides improved but delayed estimates of target states. Simulation studies are performed to demonstrate the improved performance in a maneuvering target scenario. The comparison with existing methods confirms the improved smoothing accuracy. This improvement results from avoiding the augmented state vector used by other algorithms. In addition, the new technique to account for model switching in smoothing is key to improving the performance.
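The building block of each mode-conditioned smoother in such a scheme is the standard Kalman (Rauch-Tung-Striebel) forward-backward recursion. Below is a minimal single-model sketch, not the authors' interacting recursion; the constant-velocity model, noise levels, and track length are illustrative assumptions.

```python
import numpy as np

def rts_smooth(zs, F, H, Q, R, x0, P0):
    """Forward Kalman filter followed by backward RTS smoothing."""
    xf, Pf, xp, Pp = [], [], [], []
    x, P = x0.copy(), P0.copy()
    I = np.eye(len(x0))
    for z in zs:
        xpk, Ppk = F @ x, F @ P @ F.T + Q                    # predict
        K = Ppk @ H.T @ np.linalg.inv(H @ Ppk @ H.T + R)     # gain
        x = xpk + K @ (z - H @ xpk)                          # update
        P = (I - K @ H) @ Ppk
        xf.append(x); Pf.append(P); xp.append(xpk); Pp.append(Ppk)
    xs = [xf[-1]]                                            # backward pass
    for k in range(len(zs) - 2, -1, -1):
        C = Pf[k] @ F.T @ np.linalg.inv(Pp[k + 1])           # smoother gain
        xs.insert(0, xf[k] + C @ (xs[0] - xp[k + 1]))
    return np.array(xf), np.array(xs)

# constant-velocity track with noisy position measurements
rng = np.random.default_rng(1)
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2)
R = np.array([[4.0]])
truth, zs, x = [], [], np.array([0.0, 1.0])
for _ in range(100):
    x = F @ x + rng.multivariate_normal([0, 0], Q)
    truth.append(x)
    zs.append(H @ x + rng.normal(0, 2.0, 1))
truth = np.array(truth)
xf, xs = rts_smooth(zs, F, H, Q, R, np.zeros(2), 10.0 * np.eye(2))
rmse_f = np.sqrt(np.mean((xf[:, 0] - truth[:, 0]) ** 2))
rmse_s = np.sqrt(np.mean((xs[:, 0] - truth[:, 0]) ** 2))
```

The IMM variant described in the abstract runs one such smoother per motion model and mixes their outputs with the mode probabilities; the delayed-but-improved behavior is visible even here, where the smoothed RMSE falls below the filtered RMSE.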
Image superresolution by midfrequency sparse representation and total variation regularization
NASA Astrophysics Data System (ADS)
Xu, Jian; Chang, Zhiguo; Fan, Jiulun; Zhao, Xiaoqiang; Wu, Xiaomin; Wang, Yanzi
2015-01-01
Machine learning has provided many good tools for superresolution, but existing methods still need improvement in several respects. On one hand, the memory and time cost should be reduced. On the other hand, the step edges of the results obtained by existing methods are not clear enough. We make the following contributions. First, we propose a method to extract midfrequency features for dictionary learning. This method reduces the memory and time complexity without sacrificing performance. Second, we propose a detailed wiping-off total variation (DWO-TV) regularization model to reconstruct sharp step edges. This model adds a novel constraint on the downsampled version of the high-resolution image to wipe off the details and artifacts and sharpen the step edges. Finally, the step edges produced by DWO-TV regularization and the details provided by learning are fused. Experimental results show that the proposed method offers a desirable compromise between low time and memory cost and reconstruction quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mcwilliams, A. J.
2015-09-08
This report reviews literature on reprocessing high temperature gas-cooled reactor graphite fuel components. A basic review of the various fuel components used in pebble bed type reactors is provided, along with a survey of synthesis methods for the fabrication of the fuel components. Several disposal options are considered for the graphite pebble fuel elements, including the storage of intact pebbles, volume reduction by separating the graphite from fuel kernels, and complete processing of the pebbles for waste storage. Existing methods for graphite removal are presented and generally consist of mechanical separation techniques, such as crushing and grinding, and chemical techniques, such as acid digestion and oxidation. Potential methods for reprocessing the graphite pebbles include improvements to existing methods and novel technologies that have not previously been investigated for nuclear graphite waste applications. The best overall method will depend on the desired final waste form and needs to factor in technical efficiency, political concerns, cost, and implementation.
Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)
Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K
2011-01-01
To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069
Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.
Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng
2018-05-01
Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.
Palmer, Cameron; Pe’er, Itsik
2016-01-01
Missing data are an unavoidable component of modern statistical genetics. Different array or sequencing technologies cover different single nucleotide polymorphisms (SNPs), leading to a complicated mosaic pattern of missingness where both individual genotypes and entire SNPs are sporadically absent. Such missing data patterns cannot be ignored without introducing bias, yet cannot be inferred exclusively from nonmissing data. In genome-wide association studies, the accepted solution to missingness is to impute missing data using external reference haplotypes. The resulting probabilistic genotypes may be analyzed in place of genotype calls. A general-purpose paradigm, called Multiple Imputation (MI), is known to model uncertainty in many contexts, yet it is not widely used in association studies. Here, we undertake a systematic evaluation of existing imputed-data analysis methods and MI. We characterize biases related to uncertainty in association studies, and find that bias is introduced both at the imputation level, when imputation algorithms generate inconsistent genotype probabilities, and at the association level, when analysis methods inadequately model genotype uncertainty. We find that MI performs at least as well as existing methods, or in some cases much better, and provides a straightforward paradigm for adapting existing genotype association methods to uncertain data. PMID:27310603
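MI's combining step (Rubin's rules) is simple to state: analyze each imputed dataset separately, then pool the per-imputation estimates, with the total variance adding a between-imputation term to the average within-imputation variance. A minimal sketch with made-up numbers standing in for a single SNP effect estimate:

```python
import numpy as np

def rubin_pool(estimates, variances):
    """Pool m per-imputation estimates with Rubin's rules.

    Returns (pooled estimate, total variance), where
    total = within + (1 + 1/m) * between.
    """
    m = len(estimates)
    q_bar = np.mean(estimates)       # pooled point estimate
    w = np.mean(variances)           # within-imputation variance
    b = np.var(estimates, ddof=1)    # between-imputation variance
    t = w + (1 + 1 / m) * b          # total variance
    return q_bar, t

# hypothetical effect estimates and variances from m = 5 imputed datasets
est = [0.42, 0.45, 0.40, 0.44, 0.43]
var = [0.010, 0.011, 0.009, 0.010, 0.012]
q, t = rubin_pool(est, var)
```

The total variance t exceeds the average within-imputation variance w whenever the imputations disagree, which is exactly how MI propagates genotype uncertainty into the association test.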
Defining Learning Disability: Does IQ Have Anything Significant to Say?
ERIC Educational Resources Information Center
Dunn, Michael W.
2010-01-01
A debate exists in the research community about replacing the traditional IQ/achievement discrepancy method for learning disability identification with a "response-to-intervention model". This new assessment paradigm uses a student's level of improvement with small-group or individual programming to determine a possible need for…
Courseware Authoring and Delivering System for Chinese Language Instruction. Final Report.
ERIC Educational Resources Information Center
Mao, Tang
A study investigated technical methods for simplifying and improving the creation of software for teaching uncommonly taught languages such as Chinese. Research consisted of assessment of existing authoring systems, domestic and overseas, available hardware, peripherals, and software packages that could be integrated into this project. Then some…
USDA-ARS?s Scientific Manuscript database
A new method to refine existing dietary supplements for improving production of the yellow mealworm, Tenebrio molitor L. (Coleoptera: Tenebrionidae), was tested. Self-selected ratios of 6 dietary ingredients by T. molitor larvae were used to produce a dietary supplement. This supplement was compared...
Xi, Jianing; Wang, Minghui; Li, Ao
2018-06-05
Discovery of mutated driver genes is one of the primary objectives in studying tumorigenesis. To discover relatively infrequently mutated driver genes from somatic mutation data, many existing methods incorporate an interaction network as prior information. However, the prior information in mRNA expression patterns, which has also been proven highly informative of cancer progression, is not exploited by these existing network-based methods. To incorporate prior information from both the interaction network and mRNA expression, we propose a robust and sparse co-regularized nonnegative matrix factorization to discover driver genes from mutation data. Furthermore, our framework applies Frobenius norm regularization to overcome the overfitting issue. A sparsity-inducing penalty is employed to obtain sparse scores in the gene representations, of which the top-scored genes are selected as driver candidates. Evaluation experiments with known benchmark genes indicate that the performance of our method benefits from the two types of prior information. Our method also outperforms the existing network-based methods, and detects some driver genes that are not predicted by the competing methods. In summary, our proposed method can improve the performance of driver gene discovery by effectively incorporating prior information from the interaction network and mRNA expression patterns into a robust, sparse co-regularized matrix factorization framework.
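The core factorization can be illustrated with a generic multiplicative-update NMF carrying a Frobenius penalty on both factors. This is a simplified stand-in for the paper's co-regularized model, not its actual algorithm; the rank, penalty weight, iteration count, and synthetic data are arbitrary assumptions.

```python
import numpy as np

def nmf_frob(X, k, lam=0.01, iters=300, seed=0):
    """NMF with Frobenius-norm (Tikhonov) regularization on both factors:
        min_{W,H >= 0} ||X - WH||_F^2 + lam * (||W||_F^2 + ||H||_F^2)
    solved by multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k))
    H = rng.random((k, X.shape[1]))
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        # each update multiplies by (negative grad part) / (positive grad part)
        H *= (W.T @ X) / (W.T @ W @ H + lam * H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + lam * W + eps)
    return W, H

# exact rank-2 nonnegative data; the factorization should recover it closely
rng = np.random.default_rng(42)
X = rng.random((30, 2)) @ rng.random((2, 40))
W, H = nmf_frob(X, 2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

In the paper's setting X would be a gene-by-sample mutation matrix and the network and expression priors would enter as additional co-regularization terms on W; gene scores in W would then be ranked to pick driver candidates.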
Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.
Wang, Xiao; Li, Guo-Zheng
2013-03-12
Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods can only deal with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, namely transforming the multi-location proteins into multiple proteins with single locations, which does not take correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, with its consideration of label correlations, clearly outperforms the baseline BR method without consideration of label correlations, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark datasets also show that our proposed method achieves significantly higher performance than some other state-of-the-art methods in predicting the subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.
Computational inverse methods of heat source in fatigue damage problems
NASA Astrophysics Data System (ADS)
Chen, Aizhou; Li, Yuan; Yan, Bo
2018-04-01
Fatigue dissipation energy is a current research focus in the field of fatigue damage. Introducing inverse heat source methods into the parameter identification of fatigue dissipation energy models offers a new approach to the problem of calculating fatigue dissipation energy. This paper reviews research advances in computational inverse methods of heat source and in regularization techniques for solving the inverse problem, as well as existing heat source solution methods in the fatigue process; it also discusses the prospects of applying inverse heat source methods in the fatigue damage field, laying the foundation for further improving the effectiveness of rapid prediction of fatigue dissipation energy.
Improving Medical Device Regulation: The United States and Europe in Perspective
SORENSON, CORINNA; DRUMMOND, MICHAEL
2014-01-01
Context: Recent debates and events have brought into question the effectiveness of existing regulatory frameworks for medical devices in the United States and Europe to ensure their performance, safety, and quality. This article provides a comparative analysis of medical device regulation in the two jurisdictions, explores current reforms to improve the existing systems, and discusses additional actions that should be considered to fully meet this aim. Medical device regulation must be improved to safeguard public health and ensure that high-quality and effective technologies reach patients. Methods: We explored and analyzed medical device regulatory systems in the United States and Europe in accordance with the available gray and peer-reviewed literature and legislative documents. Findings: The two regulatory systems differ in their mandate and orientation, organization, pre- and postmarket evidence requirements, and transparency of process. Despite these differences, both jurisdictions face similar challenges for ensuring that only safe and effective devices reach the market, monitoring real-world use, and exchanging pertinent information on devices with key users such as clinicians and patients. To address these issues, reforms have recently been introduced or debated in the United States and Europe that are principally focused on strengthening regulatory processes, enhancing postmarket regulation through more robust surveillance systems, and improving the traceability and monitoring of devices. Some changes in premarket requirements for devices are being considered. Conclusions: Although the current reforms address some of the outstanding challenges in device regulation, additional steps are needed to improve existing policy.
We examine a number of actions to be considered, such as requiring high-quality evidence of benefit for medium- and high-risk devices; moving toward greater centralization and coordination of regulatory approval in Europe; creating links between device identifier systems and existing data collection tools, such as electronic health records; and fostering increased and more effective use of registries to ensure safe postmarket use of new and existing devices. PMID:24597558
Li, Jian-Long; Wang, Peng; Fung, Wing Kam; Zhou, Ji-Yuan
2017-10-16
For dichotomous traits, the generalized disequilibrium test with the moment estimate of the variance (GDT-ME) is a powerful family-based association method. Genomic imprinting is an important epigenetic phenomenon, and there has been increasing interest in incorporating imprinting to improve the power of association analysis. However, GDT-ME does not take imprinting effects into account, and it has not been investigated whether it can be used for association analysis when such effects indeed exist. In this article, based on a novel decomposition of the genotype score according to the paternal or maternal source of the allele, we propose the generalized disequilibrium test with imprinting (GDTI) for complete pedigrees without any missing genotypes. Then, we extend GDTI and GDT-ME to accommodate incomplete pedigrees with some pedigrees having missing genotypes, by using a Monte Carlo (MC) sampling and estimation scheme to infer missing genotypes given available genotypes in each pedigree, denoted by MCGDTI and MCGDT-ME, respectively. The proposed GDTI and MCGDTI methods evaluate the differences of the paternal as well as maternal allele scores for all discordant relative pairs in a pedigree, including beyond first-degree relative pairs. Advantages of the proposed GDTI and MCGDTI test statistics over existing methods are demonstrated by simulation studies under various simulation settings and by application to the rheumatoid arthritis dataset. Simulation results show that the proposed tests control the size well under the null hypothesis of no association, and outperform the existing methods under various imprinting effect models. The existing GDT-ME and the proposed MCGDT-ME can be used to test for association even when imprinting effects exist. For the application to the rheumatoid arthritis data, compared to the existing methods, MCGDTI identifies more loci statistically significantly associated with the disease.
Under complete and incomplete imprinting effect models, our proposed GDTI and MCGDTI methods, by considering the information on imprinting effects and all discordant relative pairs within each pedigree, outperform all the existing test statistics and MCGDTI can recapture much of the missing information. Therefore, MCGDTI is recommended in practice.
Ovretveit, John; Mittman, Brian; Rubenstein, Lisa; Ganz, David A
2017-10-09
Purpose The purpose of this paper is to enable improvers to use recent knowledge from implementation science to carry out improvement changes more effectively. It also highlights the importance of converting research findings into practical tools and guidance for improvers so as to make research easier to apply in practice. Design/methodology/approach This study provides an illustration of how a quality improvement (QI) team project can make use of recent findings from implementation research so as to make their improvement changes more effective and sustainable. The guidance is based on a review and synthesis of improvement and implementation methods. Findings The paper illustrates how research can help a quality project team in the phases of problem definition and preparation, in design and planning, in implementation, and in sustaining and spreading a QI. Examples of the use of different ideas and methods are cited where they exist. Research limitations/implications The example is illustrative, and there is limited experimental evidence of whether using all the steps and tools of the proposed approach enables a quality team to be more effective. Evidence supporting individual guidance proposals is cited where it exists. Practical implications If the steps proposed and illustrated in the paper were followed, it is possible that quality projects could avoid waste by ensuring the conditions they need for success are in place, and sustain and spread improvement changes more effectively. Social implications More patients could benefit more quickly from more effective implementation of proven interventions. Originality/value The paper is the first to describe how improvement and implementation science can be combined in a tangible way that practical improvers can use in their projects. It shows how QI project teams can take advantage of recent advances in improvement and implementation science to make their work more effective and sustainable.
Mandoda, Shilpa; Landry, Michel D.
2011-01-01
ABSTRACT Purpose: To explore the potential for different models of incorporating physical therapy (PT) services within the emerging network of family health teams (FHTs) in Ontario and to identify challenges and opportunities of each model. Methods: A two-phase mixed-methods qualitative descriptive approach was used. First, FHTs were mapped in relation to existing community-based PT practices. Second, semi-structured key-informant interviews were conducted with representatives from urban and rural FHTs and from a variety of community-based PT practices. Interviews were digitally recorded, transcribed verbatim, and analyzed using a categorizing/editing approach. Results: Most participants agreed that the ideal model involves embedding physical therapists directly into FHTs; in some situations, however, partnering with an existing external PT provider may be more feasible and sustainable. Access and funding remain the key issues, regardless of the model adopted. Conclusion: Although there are differences across the urban/rural divide, there exist opportunities to enhance and optimize existing delivery models so as to improve client access and address emerging demand for community-based PT services. PMID:22654231
Distortion correction of OCT images of the crystalline lens: gradient index approach.
Siedlecki, Damian; de Castro, Alberto; Gambra, Enrique; Ortiz, Sergio; Borja, David; Uhlhorn, Stephen; Manns, Fabrice; Marcos, Susana; Parel, Jean-Marie
2012-05-01
To propose a method to correct optical coherence tomography (OCT) images of posterior surface of the crystalline lens incorporating its gradient index (GRIN) distribution and explore its possibilities for posterior surface shape reconstruction in comparison to existing methods of correction. Two-dimensional images of nine human lenses were obtained with a time-domain OCT system. The shape of the posterior lens surface was corrected using the proposed iterative correction method. The parameters defining the GRIN distribution used for the correction were taken from a previous publication. The results of correction were evaluated relative to the nominal surface shape (accessible in vitro) and compared with the performance of two other existing methods (simple division, refraction correction: assuming a homogeneous index). Comparisons were made in terms of posterior surface radius, conic constant, root mean square, peak to valley, and lens thickness shifts from the nominal data. Differences in the retrieved radius and conic constant were not statistically significant across methods. However, GRIN distortion correction with optimal shape GRIN parameters provided more accurate estimates of the posterior lens surface in terms of root mean square and peak values, with errors <6 and 13 μm, respectively, on average. Thickness was also more accurately estimated with the new method, with a mean discrepancy of 8 μm. The posterior surface of the crystalline lens and lens thickness can be accurately reconstructed from OCT images, with the accuracy improving with an accurate model of the GRIN distribution. The algorithm can be used to improve quantitative knowledge of the crystalline lens from OCT imaging in vivo. Although the improvements over other methods are modest in two dimensions, it is expected that three-dimensional imaging will fully exploit the potential of the technique.
The method will also benefit from increasing experimental data of GRIN distribution in the lens of larger populations.
Clifton, Abigail; Lee, Geraldine; Norman, Ian J; O'Callaghan, David; Tierney, Karen; Richards, Derek
2015-01-01
Background Poor self-management of symptoms and psychological distress leads to worse outcomes and excess health service use in cardiovascular disease (CVD). Online-delivered therapy is effective, but generic interventions lack relevance for people with specific long-term conditions, such as cardiovascular disease. Objective To develop a comprehensive online CVD-specific intervention to improve both self-management and well-being, and to test acceptability and feasibility. Methods Informed by the Medical Research Council (MRC) guidance for the development of complex interventions, we adapted an existing evidence-based generic intervention for depression and anxiety for people with CVD. Content was informed by a literature review of existing resources and trial evidence, and the findings of a focus group study. Think-aloud usability testing was conducted to identify improvements to design and content. Acceptability and feasibility were tested in a cross-sectional study. Results Focus group participants (n=10) agreed that no existing resource met all their needs. Improvements such as "collapse and expand" features were added based on findings that participants’ information needs varied, and specific information, such as detecting heart attacks and when to seek help, was added. Think-aloud testing (n=2) led to changes in font size and design changes around navigation. All participants of the cross-sectional study (10/10, 100%) were able to access and use the intervention. Reported satisfaction was good, although the intervention was perceived to lack relevance for people without comorbid psychological distress. Conclusions We have developed an evidence-based, theory-informed, user-led online intervention for improving self-management and well-being in CVD. The use of multiple evaluation tests informed improvements to content and usability. Preliminary acceptability and feasibility have been demonstrated.
The Space from Heart Disease intervention is now ready to be tested for effectiveness. This work has also identified that people with CVD symptoms and comorbid distress would be the most appropriate sample for a future randomized controlled trial to evaluate its effectiveness. PMID:26133739
Pacini, Clare; Ajioka, James W; Micklem, Gos
2017-04-12
Correlation matrices are important in inferring relationships and networks between regulatory or signalling elements in biological systems. With currently available technology, sample sizes for experiments are typically small, meaning that these correlations can be difficult to estimate. At a genome-wide scale, estimation of correlation matrices can also be computationally demanding. We develop an empirical Bayes approach to improve covariance estimates for gene expression, where we assume the covariance matrix takes a block diagonal form. Our method shows lower false discovery rates than existing methods on simulated data. Applied to a real data set from Bacillus subtilis, we demonstrate its ability to detect known regulatory units and interactions between them. We demonstrate that, compared to existing methods, our method is able to find significant covariances and also to control false discovery rates, even when the sample size is small (n=10). The method can be used to find potential regulatory networks, and it may also be used as a pre-processing step for methods that calculate, for example, partial correlations, so enabling the inference of the causal and hierarchical structure of the networks.
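The core difficulty this abstract describes, that a sample covariance from few samples is too noisy, is commonly addressed by shrinking the estimate toward a structured target. The sketch below is a generic diagonal-shrinkage estimator in the same spirit; the paper's empirical Bayes estimator with a block-diagonal prior is more elaborate, and the fixed `lam` here is an illustrative assumption.

```python
import numpy as np

def shrink_covariance(X, lam=0.5):
    """Shrink the sample covariance toward its diagonal.

    With few samples (n small relative to p) the sample covariance is
    noisy; pulling off-diagonal entries toward zero stabilises it.
    X   : (n, p) data matrix, rows are samples.
    lam : shrinkage intensity in [0, 1]; lam=1 keeps only the diagonal.
    """
    S = np.cov(X, rowvar=False)      # sample covariance, p x p
    target = np.diag(np.diag(S))     # diagonal shrinkage target
    return lam * target + (1.0 - lam) * S
```

In practice the shrinkage intensity would be chosen from the data (e.g. by cross-validation or an analytic formula) rather than fixed.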
Cormack Research Project: Glasgow University
NASA Technical Reports Server (NTRS)
Skinner, Susan; Ryan, James M.
1998-01-01
The aim of this project was to investigate and improve upon existing methods of analysing data from COMPTEL on the Compton Gamma Ray Observatory for neutrons emitted during solar flares. In particular, a strategy for placing confidence intervals on neutron energy distributions, given uncertainties in the response matrix, has been developed. We have also been able to demonstrate the superior performance of one of a range of possible statistical regularization strategies. A method of generating likely models of neutron energy distributions has also been developed as a tool to this end. The project involved solving an inverse problem with noise being added to the data in various ways. To achieve this, pre-existing C code was used to run Fortran subroutines which performed statistical regularization on the data.
Iris movement based wheelchair control using Raspberry Pi
NASA Astrophysics Data System (ADS)
Sharma, Jatin; Anbarasu, M.; Chakraborty, Chandan; Shanmugasundaram, M.
2017-11-01
Paralysis is considered a major curse in this world. The number of persons who are paralyzed, and therefore dependent on others due to loss of self-mobility, is growing with the population. Quadriplegia is a form of paralysis in which a person can move only the eyes. Much work has been done to help disabled persons live independently. Various methods are used for this purpose, and this paper surveys some of the existing methods along with add-ons to improve the existing system. The add-ons include a system designed using a Raspberry Pi and an IR camera module. OpenCV is used for image processing, and Python is used for programming the Raspberry Pi.
Hue-preserving and saturation-improved color histogram equalization algorithm.
Song, Ki Sun; Kang, Hee; Kang, Moon Gi
2016-06-01
In this paper, an algorithm is proposed to improve contrast and saturation without color degradation. The local histogram equalization (HE) method offers better performance than the global HE method, although the local HE method sometimes produces undesirable results due to its block-based processing. The proposed contrast-enhancement (CE) algorithm reflects the characteristics of the global HE method in the local HE method to avoid these artifacts, while global and local contrasts are enhanced. There are two ways to apply the proposed CE algorithm to color images: one processes the luminance channel only, and the other processes each color channel independently. However, both approaches can incur excessive or reduced saturation and color degradation. The proposed algorithm solves these problems by using channel-adaptive equalization and the similarity of ratios between the channels. Experimental results show that the proposed algorithm enhances contrast and saturation while preserving the hue, and produces better performance than existing methods in terms of objective evaluation metrics.
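The ratio-based hue preservation mentioned in the abstract can be illustrated minimally: equalize the per-pixel intensity, then scale all three RGB channels of each pixel by the same factor, so the ratios between channels (and hence the hue) are unchanged wherever no clipping occurs. This is a hedged sketch of the general idea, not the proposed channel-adaptive algorithm.

```python
import numpy as np

def hue_preserving_equalize(img):
    """Equalize intensity, then rescale R, G, B by a common per-pixel ratio.

    img : uint8 array of shape (H, W, 3).
    Scaling all three channels by the same factor keeps the channel
    ratios, and therefore the hue, intact (up to clipping at 255).
    """
    img = img.astype(np.float64)
    inten = img.mean(axis=2)                        # per-pixel intensity
    hist, _ = np.histogram(inten, bins=256, range=(0, 256))
    cdf = hist.cumsum() / hist.sum()                # cumulative distribution
    new_inten = 255.0 * cdf[np.clip(inten.astype(int), 0, 255)]
    ratio = new_inten / np.maximum(inten, 1e-6)     # common scale factor
    out = img * ratio[..., None]                    # same ratio on R, G, B
    return np.clip(out, 0, 255).astype(np.uint8)
```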
Fan, Bingfei; Li, Qingguo; Liu, Tao
2017-12-28
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and low-cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbance affects attitude and heading estimation for a magnetic and inertial sensor. First, we reviewed four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic readings, gyro bias estimation, adaptive strategies for compensating magnetic disturbance, and sensor fusion algorithms, and analyzed the features of existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including a gradient descent algorithm, an improved explicit complementary filter, a dual-linear Kalman filter and an extended Kalman filter. Finally, a new standardized testing procedure has been developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods were easily examined, and suggestions were presented for selecting a proper sensor fusion algorithm or developing new sensor fusion methods.
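The simplest member of the sensor-fusion family this paper surveys is a one-axis complementary filter: the gyroscope integral is trusted at high frequency, the accelerometer-derived tilt angle at low frequency, which bounds gyro drift. The sketch below illustrates that general idea only; it is not one of the four algorithms the paper benchmarks, and the gain `alpha` is an illustrative assumption.

```python
import numpy as np

def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyro rates (rad/s) and accelerometer tilt angles (rad).

    At each step the gyro-propagated angle is blended with the
    accelerometer angle; alpha close to 1 favours the gyro short-term
    while the accelerometer term removes long-term drift.
    """
    angle = accel_angles[0]
    out = []
    for w, a in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return np.array(out)
```

With a biased gyro and a steady accelerometer reading, the estimate settles near the accelerometer angle instead of drifting without bound.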
Toward cost-efficient sampling methods
NASA Astrophysics Data System (ADS)
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small number of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient at sampling high-degree nodes, so they remain useful even when the sampling rate is low, making them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second improves the well-known snowball sampling (SBS) method. To demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods on three commonly used simulated networks (scale-free, random and small-world), as well as on two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
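The underlying intuition, that high-degree nodes carry most of the structural information, can be sketched with a simple degree-proportional sampler. This is an illustration only; the paper's SRS- and SBS-based methods are more involved than drawing nodes with probability proportional to degree.

```python
import random

def degree_biased_sample(adj, k, seed=0):
    """Sample k distinct nodes with probability proportional to degree.

    adj : dict mapping node -> set of neighbours (undirected graph).
    Draws with replacement until k distinct nodes are collected, so
    hubs are strongly favoured over low-degree vertices.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    weights = [len(adj[u]) for u in nodes]
    chosen = set()
    while len(chosen) < min(k, len(nodes)):
        u = rng.choices(nodes, weights=weights)[0]
        chosen.add(u)
    return chosen
```

On a star graph the hub is selected far more often than uniform sampling would select it, which is the cost-efficiency argument of the abstract in miniature.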
Cue-based assertion classification for Swedish clinical text – developing a lexicon for pyConTextSwe
Velupillai, Sumithra; Skeppstedt, Maria; Kvist, Maria; Mowery, Danielle; Chapman, Brian E.; Dalianis, Hercules; Chapman, Wendy W.
2014-01-01
Objective The ability of a cue-based system to accurately assert whether a disorder is affirmed, negated, or uncertain is dependent, in part, on its cue lexicon. In this paper, we continue our study of porting an assertion system (pyConTextNLP) from English to Swedish (pyConTextSwe) by creating an optimized assertion lexicon for clinical Swedish. Methods and material We integrated cues from four external lexicons, along with generated inflections and combinations. We used subsets of a clinical corpus in Swedish. We applied four assertion classes (definite existence, probable existence, probable negated existence and definite negated existence) and two binary classes (existence yes/no and uncertainty yes/no) to pyConTextSwe. We compared pyConTextSwe’s performance with and without the added cues on a development set, and improved the lexicon further after an error analysis. On a separate evaluation set, we calculated the system’s final performance. Results Following integration steps, we added 454 cues to pyConTextSwe. The optimized lexicon developed after an error analysis resulted in statistically significant improvements on the development set (83% F-score, overall). The system’s final F-scores on an evaluation set were 81% (overall). For the individual assertion classes, F-score results were 88% (definite existence), 81% (probable existence), 55% (probable negated existence), and 63% (definite negated existence). For the binary classifications existence yes/no and uncertainty yes/no, final system performance was 97%/87% and 78%/86% F-score, respectively. Conclusions We have successfully ported pyConTextNLP to Swedish (pyConTextSwe). We have created an extensive and useful assertion lexicon for Swedish clinical text, which could form a valuable resource for similar studies, and which is publicly available. PMID:24556644
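A cue-based assertion classifier in the spirit of pyConText can be sketched in a few lines: scan the sentence for lexicon cues and assign an assertion class. The cue lists below are hypothetical English examples for illustration; pyConTextSwe uses a curated Swedish lexicon, four assertion classes, and cue-scope rules that this toy version omits.

```python
def classify_assertion(sentence, negation_cues, uncertainty_cues):
    """Toy cue-based assertion classifier.

    Returns 'negated', 'uncertain', or 'affirmed' depending on which
    lexicon cues (lowercase substrings) appear in the sentence.
    """
    text = sentence.lower()
    if any(cue in text for cue in negation_cues):
        return "negated"
    if any(cue in text for cue in uncertainty_cues):
        return "uncertain"
    return "affirmed"
```

The quality of such a system depends almost entirely on the lexicon, which is the point of the paper's cue-integration and error-analysis steps.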
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to efficiently use SSA to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard text retrieval conference (TREC) collections. Experimental results on standard TREC collections show that the proposed method consistently outperforms the existing Wikipedia-based retrieval methods.
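Combining a bag-of-words language model with a conceptual (e.g. SSA-based) one is commonly done by linear interpolation of the two distributions. The sketch below shows that standard combination; the paper's exact estimation details may differ, and the weight `lam` is an illustrative assumption.

```python
def interpolate_lm(p_bow, p_concept, lam=0.7):
    """Linearly interpolate two unigram language models.

    p_bow, p_concept : dicts mapping term -> probability.
    lam weights the bag-of-words model; the result is again a valid
    distribution whenever both inputs sum to 1.
    """
    vocab = set(p_bow) | set(p_concept)
    return {w: lam * p_bow.get(w, 0.0) + (1 - lam) * p_concept.get(w, 0.0)
            for w in vocab}
```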
Clarifying values: an updated review
2013-01-01
Background Consensus guidelines have recommended that decision aids include a process for helping patients clarify their values. We sought to examine the theoretical and empirical evidence related to the use of values clarification methods in patient decision aids. Methods Building on the International Patient Decision Aid Standards (IPDAS) Collaboration’s 2005 review of values clarification methods in decision aids, we convened a multi-disciplinary expert group to examine key definitions, decision-making process theories, and empirical evidence about the effects of values clarification methods in decision aids. To summarize the current state of theory and evidence about the role of values clarification methods in decision aids, we undertook a process of evidence review and summary. Results Values clarification methods (VCMs) are best defined as methods to help patients think about the desirability of options or attributes of options within a specific decision context, in order to identify which option he/she prefers. Several decision making process theories were identified that can inform the design of values clarification methods, but no single “best” practice for how such methods should be constructed was determined. Our evidence review found that existing VCMs were used for a variety of different decisions, rarely referenced underlying theory for their design, but generally were well described in regard to their development process. Listing the pros and cons of a decision was the most common method used. The 13 trials that compared decision support with or without VCMs reached mixed results: some found that VCMs improved some decision-making processes, while others found no effect. Conclusions Values clarification methods may improve decision-making processes and potentially more distal outcomes. 
However, the small number of evaluations of VCMs and, where evaluations exist, the heterogeneity in outcome measures makes it difficult to determine their overall effectiveness or the specific characteristics that increase effectiveness. PMID:24625261
Nasir, Muhammad; Attique Khan, Muhammad; Sharif, Muhammad; Lali, Ikram Ullah; Saba, Tanzila; Iqbal, Tassawar
2018-02-21
Melanoma is the deadliest type of skin cancer, with the highest mortality rate. However, eradication at an early stage implies a high survival rate, so early diagnosis is essential. Conventional diagnostic methods are costly and cumbersome due to the involvement of experienced experts as well as the requirement for a highly equipped environment. Recent advancements in computerized solutions for these diagnoses are highly promising, with improved accuracy and efficiency. In this article, we propose a method for the classification of melanoma and benign skin lesions. Our approach integrates preprocessing, lesion segmentation, feature extraction, feature selection, and classification. Preprocessing is executed in the context of hair removal by DullRazor, whereas lesion texture and color information are utilized to enhance the lesion contrast. In lesion segmentation, a hybrid technique has been implemented and the results are fused using the additive law of probability. A serial-based method is applied subsequently that extracts and fuses traits such as color, texture, and HOG (shape). The fused features are then selected by implementing a novel Boltzmann entropy method. Finally, the selected features are classified by a Support Vector Machine. The proposed method is evaluated on the publicly available data set PH2. Our approach has provided promising results of sensitivity 97.7%, specificity 96.7%, accuracy 97.5%, and F-score 97.5%, which are significantly better than the results of existing methods on the same data set. The proposed method detects and classifies melanoma significantly better than existing methods. © 2018 Wiley Periodicals, Inc.
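The serial feature-fusion step, concatenating colour, texture and shape descriptors per sample, can be sketched as follows. A nearest-centroid classifier stands in for the paper's SVM so the example stays dependency-free, and the data here are synthetic, not PH2.

```python
import numpy as np

def serial_fuse(*feature_blocks):
    """Serial (concatenation) fusion of per-sample feature vectors.

    Each block is an (n_samples, d_i) array; the result has all
    descriptors side by side per sample, as in serial fusion.
    """
    return np.hstack(feature_blocks)

def nearest_centroid_predict(X_train, y_train, X_test):
    """Tiny stand-in classifier (the paper uses an SVM)."""
    centroids = {c: X_train[y_train == c].mean(axis=0)
                 for c in np.unique(y_train)}
    labels = np.array(sorted(centroids))
    C = np.stack([centroids[c] for c in labels])
    d = ((X_test[:, None, :] - C[None]) ** 2).sum(axis=2)  # squared distances
    return labels[d.argmin(axis=1)]
```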
Nagare, Mukund B; Patil, Bhushan D; Holambe, Raghunath S
2017-02-01
B-mode ultrasound images are degraded by an inherent noise called speckle, which has a considerable impact on image quality. This noise reduces the accuracy of image analysis and interpretation. Therefore, reduction of speckle noise is an essential task that improves the accuracy of clinical diagnostics. In this paper, a multi-directional perfect-reconstruction (PR) filter bank based on a 2-D eigenfilter approach is proposed. The proposed method is used for the design of two-dimensional (2-D) two-channel linear-phase FIR perfect-reconstruction filter banks, including fan-shaped, diamond-shaped and checkerboard-shaped filters. The quadratic measure of the error function between the passband and stopband of the filter is used as the objective function. First, the low-pass analysis filter is designed, and then the PR condition is expressed as a set of linear constraints on the corresponding synthesis low-pass filter. Subsequently, the corresponding synthesis filter is designed using the eigenfilter design method with linear constraints. The newly designed 2-D filters are used in a translation-invariant pyramidal directional filter bank (TIPDFB) for reduction of speckle noise in ultrasound images. The proposed 2-D filters give better symmetry, regularity and frequency selectivity than existing design methods. The proposed method is validated on synthetic and real ultrasound data, ensuring improvement in the quality of ultrasound images and efficient suppression of speckle noise compared to existing methods.
Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun
2016-01-01
Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on simulated data and localization tests in real scenes show that the proposed method improves localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
Improving physical properties via C-H oxidation: chemical and enzymatic approaches.
Michaudel, Quentin; Journot, Guillaume; Regueiro-Ren, Alicia; Goswami, Animesh; Guo, Zhiwei; Tully, Thomas P; Zou, Lufeng; Ramabhadran, Raghunath O; Houk, Kendall N; Baran, Phil S
2014-11-03
Physicochemical properties constitute a key factor for the success of a drug candidate. Whereas many strategies to improve the physicochemical properties of small heterocycle-type leads exist, complex hydrocarbon skeletons are more challenging to derivatize because of the absence of functional groups. A variety of C-H oxidation methods have been explored on the betulin skeleton to improve the solubility of this very bioactive, yet poorly water-soluble, natural product. Capitalizing on the innate reactivity of the molecule, as well as the few molecular handles present on the core, allowed oxidations at different positions across the pentacyclic structure. Enzymatic oxidations afforded several orthogonal oxidations to chemical methods. Solubility measurements showed an enhancement for many of the synthesized compounds. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Comparison of Different Attitude Correction Models for ZY-3 Satellite Imagery
NASA Astrophysics Data System (ADS)
Song, Wenping; Liu, Shijie; Tong, Xiaohua; Niu, Changling; Ye, Zhen; Zhang, Han; Jin, Yanmin
2018-04-01
ZY-3, launched in 2012, is the first civilian high-resolution stereo mapping satellite of China. This paper analyzed the positioning errors of ZY-3 satellite imagery and performed compensation for geo-positioning accuracy improvement using different correction models, including attitude quaternion correction, attitude angle offset correction, and attitude angle linear correction. The experimental results revealed that systematic errors exist in the ZY-3 attitude observations and that positioning accuracy can be improved after attitude correction with the aid of ground controls. There is no significant difference between the results of the attitude quaternion correction method and the attitude angle correction method. However, the attitude angle offset correction model produced steadier improvements than the linear correction model when only limited ground control points are available for a single scene.
Naccarella, Lucio; Biuso, Catuscia; Jennings, Amanda; Patsamanis, Harry
2018-05-29
Evidence exists for the association between health literacy and heart health outcomes. Cardiac rehabilitation is critical for recovery from heart attack and reducing hospital readmissions. Despite this, <30% of people participate in a program. Significant patient, hospital and health system challenges exist to improve recovery through increased heart health literacy. This brief case study reflects on and documents practice-based initiatives by Heart Foundation Victoria to improve access to recovery information for patients with low literacy levels. Three key initiatives, namely the Six Steps To Cardiac Recovery resource, the Love Your Heart book and the nurse ambassador program, were implemented, informed by mixed methods that assessed need and capacity at the individual, organisational and systems levels. Key outcomes included increased access to recovery information for patients with low health literacy, increased nurse knowledge and confidence to engage with patients on recovery information, improved education of patients, and improved availability and accessibility of information for patients in diverse formats. Given the challenges involved in addressing heart health literacy, multifaceted practice-based approaches are essential to improve access to recovery information for patients with low literacy levels. What is known about the topic? Significant challenges exist for patients with lower health literacy receiving recovery information after a heart attack in hospitals. What does this paper add? This case study provides insights into a practice-based initiative by Heart Foundation Victoria to improve access to recovery information for patients with low literacy levels. What are the implications for practitioners? Strategies to improve recovery through increased heart health literacy must address the needs of patients, nursing staff and the health system within hospitals.
Such strategies need to be multifaceted and designed to build the capacity of nurses, heart patients and their carers, as well as support from hospital management.
3D shape reconstruction of specular surfaces by using phase measuring deflectometry
NASA Astrophysics Data System (ADS)
Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan
2016-10-01
The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since the specular surfaces used in industry often cover complex and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce in this paper an improved 3D shape reconstruction method for specular surfaces based on Phase Measuring Deflectometry. Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. The real shape is then recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of Modal and Zonal estimation, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulations and experimental measurements demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% relative to Zonal estimation, and the surface error is about 6.68 μm for a reconstruction of 391×529 points of an experimentally measured spherical mirror. In general, this method converges quickly with high accuracy, providing an efficient, stable and real-time approach to the shape reconstruction of specular surfaces in practical situations.
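The Zonal stage described above amounts to solving a Poisson-type equation relating the height map to the measured gradients, which successive over-relaxation (SOR) handles iteratively. The following is a minimal sketch of that idea only; the grid, boundary handling and relaxation factor are illustrative assumptions, not the paper's modified algorithm:

```python
import numpy as np

def sor_zonal_reconstruct(gx, gy, h0, dx=1.0, omega=1.8, iters=2000):
    """Recover a height map h from measured gradients (gx, gy) by SOR
    iteration on the Poisson equation lap(h) = d(gx)/dx + d(gy)/dy.
    h0 supplies the boundary values and the initial interior guess
    (in the paper's scheme, a coarse Modal estimate)."""
    h = h0.copy()
    # divergence of the measured gradient field (central differences)
    f = np.zeros_like(h)
    f[1:-1, 1:-1] = ((gx[1:-1, 2:] - gx[1:-1, :-2]) +
                     (gy[2:, 1:-1] - gy[:-2, 1:-1])) / (2 * dx)
    for _ in range(iters):
        for i in range(1, h.shape[0] - 1):
            for j in range(1, h.shape[1] - 1):
                gauss_seidel = 0.25 * (h[i + 1, j] + h[i - 1, j] +
                                       h[i, j + 1] + h[i, j - 1] -
                                       dx * dx * f[i, j])
                h[i, j] = (1 - omega) * h[i, j] + omega * gauss_seidel
    return h
```

Seeding `h0` with the coarse Modal solution is what gives the combined scheme its fast convergence; a zero interior guess also works but needs many more sweeps.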
Quality Improvement on the Acute Inpatient Psychiatry Unit Using the Model for Improvement
Singh, Kuldeep; Sanderson, Joshua; Galarneau, David; Keister, Thomas; Hickman, Dean
2013-01-01
Background: A need exists for constant evaluation and modification of processes within healthcare systems to achieve quality improvement. One common approach is the Model for Improvement, which can be used to clearly define aims, measures, and changes that are then implemented through a plan-do-study-act (PDSA) cycle. This approach is a commonly used method for improving quality in a wide range of fields. The Model for Improvement allows for a systematic process that can be revised at set time intervals to achieve a desired result. Methods: We used the Model for Improvement in an acute psychiatry unit (APU) to improve the screening incidence of abnormal involuntary movements in eligible patients (those starting or continuing on standing neuroleptics) with the Abnormal Involuntary Movement Scale (AIMS). Results: After 8 weeks of using the Model for Improvement, both of the participating inpatient services in the APU showed substantial overall improvement in screening for abnormal involuntary movements using the AIMS. Conclusion: Crucial aspects of a successful quality improvement initiative based on the Model for Improvement are well-defined goals, process measures, and structured PDSA cycles. Success also requires communication, organization, and participation of the entire team. PMID:24052768
Lam, Lun Tak; Sun, Yi; Davey, Neil; Adams, Rod; Prapopoulou, Maria; Brown, Marc B; Moss, Gary P
2010-06-01
The aim was to employ Gaussian processes to assess mathematically the nature of a skin permeability dataset and, particularly through feature selection, to determine the key physicochemical descriptors that exert the most significant influence on percutaneous absorption, comparing such models with established existing models. Gaussian processes, including automatic relevance determination (GPRARD) methods, were employed to develop models of percutaneous absorption that identified key physicochemical descriptors. Using MATLAB software, the statistical performance of these models was compared with single linear networks (SLN) and quantitative structure-permeability relationships (QSPRs). Feature selection methods were used to examine in more detail the physicochemical parameters used in this study. A range of statistical measures was used to determine model quality. The inherently nonlinear nature of the skin dataset was confirmed. The Gaussian process regression (GPR) methods yielded predictive models that offered statistically significant improvements over SLN and QSPR models with regard to predictivity (rank order: GPR > SLN > QSPR). Feature selection analysis determined that the best GPR models were those that contained log P, melting point and the number of hydrogen bond donor groups as significant descriptors. Further statistical analysis also found that great synergy existed between certain parameters, suggesting that a number of the descriptors employed were effectively interchangeable, thus questioning the use of models where discrete variables are output, usually in the form of an equation. The use of a nonlinear GPR method produced models with significantly improved predictivity compared with SLN or QSPR models. Feature selection methods were able to provide important mechanistic information.
However, it was also shown that significant synergy existed between certain parameters, and as such it was possible to interchange certain descriptors (i.e. molecular weight and melting point) without incurring a loss of model quality. Such synergy suggested that a model constructed from discrete terms in an equation may not be the most appropriate way of representing mechanistic understandings of skin absorption.
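As a sketch of how an ARD-style Gaussian process exposes descriptor relevance, the snippet below fits scikit-learn's GaussianProcessRegressor with one RBF length scale per input on synthetic stand-in data (the real permeability dataset and descriptor values are not reproduced; the four columns and their weights are invented for illustration). A short learned length scale marks a descriptor as influential:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# synthetic stand-in: columns mimic log P, melting point, H-bond donors,
# molecular weight (all standardized); only the first three matter here
X = rng.normal(size=(80, 4))
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] + 0.1 * rng.normal(size=80)

# one length scale per input dimension = automatic relevance determination
kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# short length scale -> strong influence on the predicted permeability
length_scales = gpr.kernel_.k1.length_scale
```

After fitting, the fourth (irrelevant) column should receive a much longer length scale than the influential ones, which is the mechanism the paper's feature selection exploits.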
Highly accurate adaptive TOF determination method for ultrasonic thickness measurement
NASA Astrophysics Data System (ADS)
Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing
2018-04-01
Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method is developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step size control function is proposed. Meanwhile, a cubic spline fitting approach is employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrate the performance advantage of the proposed TOF determination method over existing ones. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm is generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicate that the proposed TOF determination method is more robust even under low SNR conditions and that ultrasonic thickness measurement accuracy can be significantly improved.
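The sub-sample role of the cubic spline step can be sketched as follows; this is not the paper's adaptive algorithm, just a plain cross-correlation TOF estimator whose coarse integer lag is refined by spline interpolation (the pulse shape and sampling settings are invented for illustration):

```python
import numpy as np
from scipy.interpolate import CubicSpline

def estimate_tof(ref, echo, fs):
    """TOF between a reference pulse and its echo: coarse lag from the
    cross-correlation peak, then sub-sample refinement with a cubic
    spline to alleviate the finite-sampling-interval restriction."""
    corr = np.correlate(echo, ref, mode='full')
    lags = np.arange(-len(ref) + 1, len(echo))
    k = int(np.argmax(corr))
    # fit a spline through a small window around the coarse peak,
    # then evaluate it densely to locate the fractional-lag maximum
    w = slice(max(k - 3, 0), min(k + 4, len(corr)))
    spline = CubicSpline(lags[w], corr[w])
    fine = np.linspace(lags[w][0], lags[w][-1], 2001)
    return fine[np.argmax(spline(fine))] / fs
```

With the TOF in hand, thickness follows as `c * tof / 2` for sound speed `c` in the plate.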
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features rather than exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates the probability feature values of the translation model in use based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvement. Our approach can be developed independently and integrated directly into a current SMT pipeline. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.
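The core of the feature-value update can be illustrated with a toy phrase table; the blending weights, entries and the variance feature below are invented stand-ins for the paper's actual TMG formulation:

```python
from statistics import pvariance

def generalize_phrase_table(tables, weights=None):
    """Smooth p(e|f) entries of a main phrase table using auxiliary
    tables (a TMG-style update against over-estimation) and attach a
    probability-variance feature for the log-linear model."""
    weights = weights or [1.0 / len(tables)] * len(tables)
    out = {}
    for key in set().union(*tables):
        probs = [t.get(key, 0.0) for t in tables]
        out[key] = {
            'prob': sum(w * q for w, q in zip(weights, probs)),
            'variance': pvariance(probs),   # high variance = models disagree
        }
    return out

main = {('c1', 'e1'): 0.9, ('c1', 'e2'): 0.1}
aux = {('c1', 'e1'): 0.5, ('c1', 'e2'): 0.5}
blended = generalize_phrase_table([main, aux])
```

An over-estimated entry such as `('c1', 'e1')` is pulled toward the auxiliary estimate, and the variance feature lets the decoder discount phrases the models disagree on.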
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing.
Li, Chunhua; Zhao, Pengpeng; Sheng, Victor S; Xian, Xuefeng; Wu, Jian; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. Significant work exists on automated algorithms for knowledge base refinement; such approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and perform inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refinement, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.
A novel double loop control model design for chemical unstable processes.
Cong, Er-Ding; Hu, Ming-Hui; Tu, Shan-Tung; Xuan, Fu-Zhen; Shao, Hui-He
2014-03-01
In this manuscript, based on the Smith predictor control scheme for unstable processes in industry, an improved double-loop control model is proposed for chemical unstable processes. The inner loop stabilizes the unstable (integrating) process and transforms the original process into a stable first-order-plus-dead-time process. The outer loop enhances the performance of the set-point response, and a disturbance controller is designed to enhance the performance of the disturbance response. The improved control system is simple, with exact physical meaning, and its characteristic equation is easy to stabilize. The three controllers are designed separately in the improved scheme, making it easy to design each controller and to obtain good control performance for each closed-loop transfer function. The robust stability of the proposed control scheme is analyzed. Finally, case studies illustrate that the improved method gives better system performance than existing design methods. © 2013 ISA. Published by ISA. All rights reserved.
Implicitly restarted Arnoldi/Lanczos methods for large scale eigenvalue calculations
NASA Technical Reports Server (NTRS)
Sorensen, Danny C.
1996-01-01
Eigenvalues and eigenfunctions of linear operators are important to many areas of applied mathematics. The ability to approximate these quantities numerically is becoming increasingly important in a wide variety of applications. This increasing demand has fueled interest in the development of new methods and software for the numerical solution of large-scale algebraic eigenvalue problems. In turn, the existence of these new methods and software, along with the dramatically increased computational capabilities now available, has enabled the solution of problems that would not even have been posed five or ten years ago. Until very recently, software for large-scale nonsymmetric problems was virtually non-existent. Fortunately, the situation is improving rapidly. The purpose of this article is to provide an overview of the numerical solution of large-scale algebraic eigenvalue problems. The focus will be on a class of methods called Krylov subspace projection methods. The well-known Lanczos method is the premier member of this class. The Arnoldi method generalizes the Lanczos method to the nonsymmetric case. A recently developed variant of the Arnoldi/Lanczos scheme called the Implicitly Restarted Arnoldi Method is presented here in some depth. This method is highlighted because of its suitability as a basis for software development.
Remote sensing of suspended sediment water research: principles, methods, and progress
NASA Astrophysics Data System (ADS)
Shen, Ping; Zhang, Jing
2011-12-01
In this paper, we review the principles, data, methods and steps of suspended sediment research using remote sensing, summarize some representative models and methods, and analyze the deficiencies of existing methods. Drawing on recent progress in remote sensing theory and its application to suspended sediment research, we introduce data processing methods such as atmospheric correction and adjacency-effect correction, together with intelligent algorithms such as neural networks, genetic algorithms and support vector machines, into suspended sediment inversion research. Combined with other geographic information and based on Bayesian theory, we improve the precision of suspended sediment inversion and aim to provide a reference for related researchers.
NASA Astrophysics Data System (ADS)
Ham, Youngjib
The emerging energy crisis in the building sector and the legislative measures on improving energy efficiency are steering the construction industry towards adopting new energy efficient design concepts and construction methods that decrease the overall energy loads. However, the problems of energy efficiency are not only limited to the design and construction of new buildings. Today, a significant amount of input energy in existing buildings is still being wasted during the operational phase. One primary source of the energy waste is attributed to unnecessary heat flows through building envelopes during hot and cold seasons. This inefficiency increases the operational frequency of heating and cooling systems to keep the desired thermal comfort of building occupants, and ultimately results in excessive energy use. Improving thermal performance of building envelopes can reduce the energy consumption required for space conditioning and in turn provide building occupants with an optimal thermal comfort at a lower energy cost. In this sense, energy diagnostics and retrofit analysis for existing building envelopes are key enablers for improving energy efficiency. Since proper retrofit decisions of existing buildings directly translate into energy cost saving in the future, building practitioners are increasingly interested in methods for reliable identification of potential performance problems so that they can take timely corrective actions. However, sensing what and where energy problems are emerging or are likely to emerge and then analyzing how the problems influence the energy consumption are not trivial tasks. The overarching goal of this dissertation focuses on understanding the gaps in knowledge in methods for building energy diagnostics and retrofit analysis, and filling these gaps by devising a new method for multi-modal visual sensing and analytics using thermography and Building Information Modeling (BIM). 
First, to address the challenges in scaling and localization issues of 2D thermal image-based inspection, a new computer vision-based method is presented for automated 3D spatio-thermal modeling of building environments from images and localizing the thermal images into the 3D reconstructed scenes, which helps better characterize the as-is condition of existing buildings in 3D. By using these models, auditors can conduct virtual walk-through in buildings and explore the as-is condition of building geometry and the associated thermal conditions in 3D. Second, to address the challenges in qualitative and subjective interpretation of visual data, a new model-based method is presented to convert the 3D thermal profiles of building environments into their associated energy performance metrics. More specifically, the Energy Performance Augmented Reality (EPAR) models are formed which integrate the actual 3D spatio-thermal models ('as-is') with energy performance benchmarks ('as-designed') in 3D. In the EPAR models, the presence and location of potential energy problems in building environments are inferred based on performance deviations. The as-is thermal resistances of the building assemblies are also calculated at the level of mesh vertex in 3D. Then, based on the historical weather data reflecting energy load for space conditioning, the amount of heat transfer that can be saved by improving the as-is thermal resistances of the defective areas to the recommended level is calculated, and the equivalent energy cost for this saving is estimated. The outcome provides building practitioners with unique information that can facilitate energy efficient retrofit decision-makings. This is a major departure from offhand calculations that are based on historical cost data of industry best practices. 
Finally, to improve the reliability of BIM-based energy performance modeling and analysis for existing buildings, a new model-based automated method is presented to map actual thermal resistance measurements at the level of 3D vertexes to the associated BIM elements and update their corresponding thermal properties in the gbXML schema. By reflecting the as-is building condition in the BIM-based energy modeling process, this method bridges over the gap between the architectural information in the as-designed BIM and the as-is building condition for accurate energy performance analysis. The performance of each method was validated on ten case studies from interiors and exteriors of existing residential and instructional buildings in IL and VA. The extensive experimental results show the promise of the proposed methods in addressing the fundamental challenges of (1) visual sensing : scaling 2D visual assessments to real-world building environments and localizing energy problems; (2) analytics: subjective and qualitative assessments; and (3) BIM-based building energy analysis : a lack of procedures for reflecting the as-is building condition in the energy modeling process. Beyond the technical contributions, the domain expert surveys conducted in this dissertation show that the proposed methods have potential to improve the quality of thermographic inspection processes and complement the current building energy analysis tools.
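The retrofit arithmetic sketched in the paragraphs above (an as-is thermal resistance from surface temperatures and heat flux, then the cost of the heat that a recommended R-value would save) reduces to a few lines; the numbers, the degree-hour load proxy and the price are illustrative assumptions, not the dissertation's models:

```python
def as_is_r_value(t_inside, t_outside, heat_flux):
    """Steady-state thermal resistance (m^2*K/W) of an assembly from its
    surface temperature difference and the measured heat flux: R = dT/q."""
    return (t_inside - t_outside) / heat_flux

def retrofit_cost_saving(r_as_is, r_target, degree_hours, area_m2, price_per_kwh):
    """Energy-cost saving from raising a defective area's R-value to the
    recommended level, with degree-hours standing in for the weather load."""
    flux_saved = 1.0 / r_as_is - 1.0 / r_target      # W per m^2 per K
    saved_kwh = flux_saved * area_m2 * degree_hours / 1000.0
    return saved_kwh * price_per_kwh

# hypothetical defective wall area: 20 C across it under 40 W/m^2 flux
r = as_is_r_value(20.0, 0.0, 40.0)
cost = retrofit_cost_saving(r, 2.0, 60000.0, 10.0, 0.12)
```

This is the per-vertex calculation the EPAR models aggregate over the mesh; the dissertation's contribution is measuring `r` at scale and in context, not the formula itself.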
Reyes, Teija; Quiroz, Roberto; Msikula, Shija
2005-11-01
The East Usambara Mountains, recognized as one of the 25 most important biodiversity hot spots in the world, have a high degree of species diversity and endemism that is threatened by increasing human pressure on resources. Traditional slash and burn cultivation in the area is no longer sustainable. However, it is possible to maintain land productivity, decrease land degradation, and improve rural people's livelihood by ameliorating cultivation methods. Improved agroforestry seems to be a very convincing and suitable method for buffer zones of conservation areas. Farmers could receive a reasonable net income from their farm with little investment in terms of time, capital, and labor. By increasing the diversity and production of already existing cultivations, the pressure on natural forests can be diminished. The present study shows a significant gap between traditional cultivation methods and improved agroforestry systems in socio-economic terms. Improved agroforestry systems provide approximately double income per capita in comparison to traditional methods. More intensified cash crop cultivation in the highlands of the East Usambara also results in double income compared to that in the lowlands. However, people are sensitive to risks of changing farming practices. Encouraging farmers to apply better land management and practice sustainable cultivation of cash crops in combination with multipurpose trees would be relevant in improving their economic situation in the relatively short term. The markets of most cash crops are already available. Improved agroforestry methods could ameliorate the living conditions of the local population and protect the natural reserves from human disturbance.
A conjugate gradient method with descent properties under strong Wolfe line search
NASA Astrophysics Data System (ADS)
Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.
2017-09-01
The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. Continuous and numerous studies of the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. The tests are measured in terms of iteration counts and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to the other well-known methods.
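A generic nonlinear CG under a strong Wolfe line search can be sketched with SciPy, whose `line_search` enforces the strong Wolfe conditions; the PRP+ coefficient below is a standard textbook choice standing in for the paper's new update, not a reproduction of it:

```python
import numpy as np
from scipy.optimize import line_search, rosen, rosen_der

def cg_minimize(f, grad, x0, max_iter=500, tol=1e-6):
    """Nonlinear conjugate gradient with a strong Wolfe line search.
    The PRP+ coefficient (non-negative Polak-Ribiere-Polyak) is used,
    a common way to retain descent under strong Wolfe conditions."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g, c1=1e-4, c2=0.1)[0]
        if alpha is None:              # line search failed: restart at -g
            d = -g
            alpha = line_search(f, grad, x, d, gfk=g)[0] or 1e-8
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(g_new @ (g_new - g) / (g @ g), 0.0)   # PRP+ formula
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

x_min = cg_minimize(rosen, rosen_der, [-1.2, 1.0])
```

Run on the Rosenbrock function from the classic starting point, the iterate should land near the minimizer at (1, 1).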
Sim, K S; Yeap, Z X; Tso, C P
2016-11-01
An improvement to the existing technique of quantifying the signal-to-noise ratio (SNR) of scanning electron microscope (SEM) images using the piecewise cubic Hermite interpolation (PCHIP) technique is proposed. The new technique applies adaptive tuning to the PCHIP and is thus named ATPCHIP. To test its accuracy, 70 images are corrupted with noise and their autocorrelation functions are plotted. The ATPCHIP technique is applied to estimate the uncorrupted, noise-free zero-offset point from a corrupted image. Three existing methods, the nearest neighborhood, first-order interpolation and the original PCHIP, are compared with the proposed ATPCHIP method with respect to their calculated SNR values. Results show that ATPCHIP is an accurate and reliable method for estimating SNR values from SEM images. SCANNING 38:502-514, 2016. © 2015 Wiley Periodicals, Inc.
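The idea behind PCHIP-based SNR estimation, without the paper's adaptive tuning, can be sketched as follows: white noise inflates the autocorrelation only at lag zero, so extrapolating a PCHIP fit of the neighbouring lags back to lag zero recovers the noise-free value. The test signal is an invented stand-in for an SEM scan line:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def estimate_snr(scan_line):
    """SNR of a 1-D signal: the noise-free zero-offset point of the
    autocorrelation is estimated by extrapolating a PCHIP fit through
    lags 1..5 back to lag 0; the excess at lag 0 is the noise power."""
    x = scan_line - scan_line.mean()
    acf = np.correlate(x, x, mode='full')[len(x) - 1:] / len(x)
    lags = np.arange(1, 6)
    s0 = float(PchipInterpolator(lags, acf[1:6], extrapolate=True)(0.0))
    return s0 / (acf[0] - s0)          # signal power / noise power
```

The shape-preserving property of PCHIP is what makes it preferable to plain polynomial extrapolation here: it does not overshoot when the autocorrelation falls off steeply near zero lag.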
Human error mitigation initiative (HEMI) : summary report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.
2004-11-01
Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and difficult to characterize as thorough. A proposed alternative method begins by leveraging historical data to understand the systemic issues and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations, indicating systemic issues associated with operating procedures that undergo notably less development rigor than other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation are delineated.
NASA Astrophysics Data System (ADS)
Guerra, J. E.; Ullrich, P. A.
2014-12-01
Tempest is a new non-hydrostatic atmospheric modeling framework that allows for investigation and intercomparison of high-order numerical methods. It is composed of a dynamical core based on a finite-element formulation of arbitrary order, operating on cubed-sphere and Cartesian meshes with topography. The underlying technology is briefly discussed, including a novel Hybrid Finite Element Method (HFEM) vertical coordinate coupled with high-order implicit/explicit (IMEX) time integration to control vertically propagating sound waves. Here, we show results from a suite of mesoscale test cases from the literature that demonstrate the accuracy, performance, and properties of Tempest on regular Cartesian meshes. The test cases include wave propagation behavior, Kelvin-Helmholtz instabilities, and flow interaction with topography. Comparisons are made to existing results, highlighting improvements in resolving atmospheric dynamics in the vertical direction, where many existing methods are deficient.
Herrera, Samantha; Enuameh, Yeetey; Adjei, George; Ae-Ngibise, Kenneth Ayuurebobi; Asante, Kwaku Poku; Sankoh, Osman; Owusu-Agyei, Seth; Yé, Yazoume
2017-10-23
Lack of valid and reliable data on malaria deaths continues to be a problem that plagues the global health community. To address this gap, the verbal autopsy (VA) method was developed to ascertain cause of death at the population level. Despite the adoption and wide use of VA, there are many recognized limitations of VA tools and methods, especially for measuring malaria mortality. This study synthesizes the strengths and limitations of existing VA tools and methods for measuring malaria mortality (MM) in low- and middle-income countries through a systematic literature review. The authors searched PubMed, Cochrane Library, Popline, WHOLIS, Google Scholar, and INDEPTH Network Health and Demographic Surveillance System sites' websites from 1 January 1990 to 15 January 2016 for articles and reports on MM measurement through VA. Inclusion criteria were that the article presented results from a VA study in which malaria was a cause of death, or that it discussed limitations or challenges related to the measurement of MM through VA. Two authors independently searched the databases and websites and conducted a synthesis of articles using a standard matrix. The authors identified 828 publications; 88 were included in the final review. Most publications were VA studies; others were systematic reviews discussing VA tools or methods, editorials or commentaries, and studies using VA data to develop MM estimates. The main limitations were the low sensitivity and specificity of VA tools for measuring MM. Other limitations included the lack of standardized VA tools and methods, and the lack of a 'true' gold standard for assessing the accuracy of VA-based malaria mortality measurement. Existing VA tools and methods for measuring MM have limitations. Given the need for data to measure progress toward the World Health Organization's Global Technical Strategy for Malaria 2016-2030 goals, the malaria community should define strategies for improving MM estimates, including exploring whether VA tools and methods could be further improved.
Longer term strategies should focus on improving countries' vital registration systems for more robust and timely cause of death data.
Nolte, Kurt B; Stewart, Douglas M; O'Hair, Kevin C; Gannon, William L; Briggs, Michael S; Barron, A Marie; Pointer, Judy; Larson, Richard S
2008-10-01
The authors developed a novel continuous quality improvement (CQI) process for academic biomedical research compliance administration. A challenge in developing a quality improvement program in a nonbusiness environment is that the terminology and processes are often foreign. Rather than training staff in an existing quality improvement process, the authors opted to develop a novel process based on the scientific method--a paradigm familiar to all team members. The CQI process included our research compliance units. Unit leaders identified problems in compliance administration where a resolution would have a positive impact and which could be resolved or improved with current resources. They then generated testable hypotheses about a change to standard practice expected to improve the problem, and they developed methods and metrics to assess the impact of the change. The CQI process was managed in a "peer review" environment. The program included processes to reduce the incidence of infections in animal colonies, decrease research protocol-approval times, improve compliance and protection of animal and human research subjects, and improve research protocol quality. This novel CQI approach is well suited to the needs and the unique processes of research compliance administration. Using the scientific method as the improvement paradigm fostered acceptance of the project by unit leaders and facilitated the development of specific improvement projects. These quality initiatives will allow us to improve support for investigators while ensuring that compliance standards continue to be met. We believe that our CQI process can readily be used in other academically based offices of research.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
NASA Astrophysics Data System (ADS)
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. The slices are then reconstructed by a modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. Compared with the existing 3-D sparse imaging method, our method performs better in both reconstruction quality and reconstruction time.
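A sketch of the SL0 family with the paper's hyperbolic-tangent surrogate, but with plain gradient-projection steps rather than the Newton direction, and with illustrative parameter choices:

```python
import numpy as np

def sl0_tanh(A, b, sigma_min=1e-3, sigma_decay=0.7, inner=3):
    """Sparse recovery of x with A x = b by smoothed-l0 minimization.
    tanh(x_i^2 / (2 sigma^2)) approximates the l0 indicator of each
    entry; sigma is annealed so the surrogate tightens toward l0."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                      # minimum-l2-norm feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # gradient step on the surrogate: shrinks entries small
            # relative to sigma, leaves large entries nearly untouched
            x = x - x / np.cosh(x ** 2 / (2 * sigma ** 2)) ** 2
            x = x - A_pinv @ (A @ x - b)    # project back onto A x = b
        sigma *= sigma_decay
    return x
```

In the paper's pipeline this recovery runs slice by slice after range compression; replacing the gradient step with a Newton step is what yields the reported speed-up.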
Cheating prevention in visual cryptography.
Hu, Chih-Ming; Tzeng, Wen-Guey
2007-01-01
Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented on transparencies, and each participant holds one transparency. Most previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we study the cheating problem in VC and extended VC. We consider attacks by malicious adversaries who may deviate from the scheme in any way. We present three cheating methods and apply them to attack existing VC or extended VC schemes. We improve one cheat-preventing scheme, and we propose a generic method that converts a VCS to another VCS that has the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
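For readers unfamiliar with VC, a minimal (2,2) scheme with pixel expansion 4 is sketched below; this is the classic basic construction, not the cheat-preventing conversion proposed in the paper:

```python
import numpy as np

def make_shares(secret, rng):
    """(2,2) visual secret sharing: every secret pixel (0 = white,
    1 = black) expands into a 2x2 subpixel block on each share."""
    patterns = np.array([[1, 0, 0, 1], [0, 1, 1, 0]])  # complementary pair
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            p = patterns[rng.integers(2)]              # random per pixel
            # white: identical blocks (stack shows 2 dark subpixels);
            # black: complementary blocks (stack shows 4 dark subpixels)
            q = p if secret[i, j] == 0 else 1 - p
            s1[2 * i:2 * i + 2, 2 * j:2 * j + 2] = p.reshape(2, 2)
            s2[2 * i:2 * i + 2, 2 * j:2 * j + 2] = q.reshape(2, 2)
    return s1, s2

secret = np.array([[0, 1], [1, 0]])
share1, share2 = make_shares(secret, np.random.default_rng(0))
stacked = share1 | share2        # stacking transparencies = subpixel OR
```

Each share alone is a uniformly random half-dark pattern and leaks nothing; only the OR of both shares exhibits the contrast (2 vs 4 dark subpixels) that makes the secret visible. Cheating attacks exploit exactly this reconstruction step by forging subpixel blocks.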
NASA Technical Reports Server (NTRS)
Lewis, Michael
1994-01-01
Statistical encoding techniques reduce the number of bits required to encode a set of symbols by exploiting the symbols' probabilities. Huffman encoding is an example of statistical encoding that has been used for error-free data compression. The degree of compression given by Huffman encoding in this application can be improved by the use of prediction methods. These replace the set of elevations with a set of corrections that have a more advantageous probability distribution. In particular, the method of Lagrange multipliers for minimization of the mean square error has been applied to local geometrical predictors. Using this technique, an 8-point predictor achieved about a 7 percent improvement over an existing simple triangular predictor.
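The gain from prediction can be reproduced in miniature: Huffman-code a smooth elevation profile directly, then code the corrections left by a simple previous-sample predictor (a stand-in for the 8-point Lagrange-multiplier predictor described above) and compare bit counts:

```python
import heapq
from collections import Counter

def huffman_lengths(freqs):
    """Code length per symbol from a frequency table (Huffman's merges)."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        # merging two subtrees deepens every leaf under them by one bit
        merged = {s: d + 1 for s, d in {**c1, **c2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def encoded_bits(symbols):
    freqs = Counter(symbols)
    if len(freqs) == 1:
        return len(symbols)            # degenerate one-symbol alphabet
    lengths = huffman_lengths(freqs)
    return sum(freqs[s] * lengths[s] for s in freqs)

# smooth synthetic elevation profile and its previous-sample corrections
elev = [100 + (i * i) // 50 for i in range(400)]
corrections = [elev[0]] + [b - a for a, b in zip(elev, elev[1:])]
raw_bits = encoded_bits(elev)
pred_bits = encoded_bits(corrections)
```

The corrections concentrate on a handful of small values, so their Huffman code is far shorter than that of the raw elevations, which is exactly the effect the better-designed predictors amplify.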
Bearing, gearing, and lubrication technology
NASA Technical Reports Server (NTRS)
Anderson, W. J.
1978-01-01
Results of selected NASA research programs on rolling-element and fluid-film bearings, gears, and elastohydrodynamic lubrication are reported. Advances in rolling-element bearing material technology, which have resulted in a significant improvement in fatigue life and make possible new applications for rolling bearings, are discussed. Research on whirl-resistant fluid-film bearings, suitable for very high-speed applications, is discussed. An improved method for predicting gear pitting life is reported. An improved formula for calculating the thickness of elastohydrodynamic films (whose existence helps to define the operating regime of concentrated-contact mechanisms such as bearings, gears, and cams) is described.
A Secure and Efficient Handover Authentication Protocol for Wireless Networks
Wang, Weijia; Hu, Lei
2014-01-01
Handover authentication protocol is a promising access control technology in the fields of WLANs and mobile wireless sensor networks. In this paper, we first review an efficient handover authentication protocol, named PairHand, together with its known security attacks and improvements. Then, we present an improved key recovery attack using the linear-combination method and reanalyze its feasibility on the improved PairHand protocol. Finally, we present a new handover authentication protocol, which not only achieves the same desirable efficiency features of PairHand, but also enjoys provable security in the random oracle model. PMID:24971471
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Jianxin; Mei, Deqing, E-mail: meidq-127@zju.edu.cn; Yang, Keji
2014-08-14
In existing ultrasonic transportation methods, long-range transportation of micro-particles is always realized in a step-by-step way. Due to the substantial decrease of the driving force in each step, the transportation is slow and stair-stepping. To improve the transporting velocity, a non-stepping ultrasonic transportation approach is proposed. By quantitatively analyzing the acoustic potential well, an optimal region is defined as the position where the largest driving force is provided, under the condition that the driving force is simultaneously the major component of the acoustic radiation force. To keep the micro-particle trapped in the optimal region during the whole transportation process, an approach of optimizing the phase-shifting velocity and phase-shifting step is adopted. Due to the stable and large driving force, the displacement of the micro-particle is an approximately linear function of time, instead of a stair-stepping function of time as in the existing step-by-step methods. An experimental setup is also developed to validate this approach. Long-range ultrasonic transportations of zirconium beads with high transporting velocity were realized. The experimental results demonstrated that this approach is an effective way to improve transporting velocity in the long-range ultrasonic transportation of micro-particles.
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of 0.47 and 5.6 degrees, respectively, on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
Improving estimates of genetic maps: a meta-analysis-based approach.
Stewart, William C L
2007-07-01
Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
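The variance-weighted combination this abstract describes is, at its core, inverse-variance meta-analysis. A minimal sketch for a single map position (the numbers in the usage note are illustrative, not real linkage-map data):

```python
import numpy as np

def combine_map_estimates(estimates, variances):
    """Inverse-variance weighted combination of independent map estimates
    for one marker position: the combined estimate weights each input by
    the reciprocal of its variance, and the combined variance is the
    reciprocal of the summed weights."""
    w = 1.0 / np.asarray(variances, dtype=float)
    est = np.asarray(estimates, dtype=float)
    combined = float(np.sum(w * est) / np.sum(w))
    combined_var = float(1.0 / np.sum(w))
    return combined, combined_var
```

For example, combining estimates 10.0 and 12.0 cM with variances 1.0 and 3.0 gives 10.5 cM with variance 0.75 — the more precise estimate dominates, and the combined variance is smaller than either input's.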
Transforming Teacher Preparation to Ensure Long-Term Improvement in STEM Teaching
ERIC Educational Resources Information Center
Hiebert, James
2013-01-01
An alternative mathematics preparation program for K-8 teachers is described as an existence proof that steadily increasing effectiveness of STEM (science, technology, engineering, and mathematics) preparation is possible. The program is based on treating every lesson in each of five mathematics content and methods courses as an object of study.…
An Integrated Model for Effective Knowledge Management in Chinese Organizations
ERIC Educational Resources Information Center
An, Xiaomi; Deng, Hepu; Wang, Yiwen; Chao, Lemen
2013-01-01
Purpose: The purpose of this paper is to provide organizations in the Chinese cultural context with a conceptual model for an integrated adoption of existing knowledge management (KM) methods and to improve the effectiveness of their KM activities. Design/methodology/approach: A comparative analysis is conducted between China and the western…
"Dear Fresher …"--How Online Questionnaires Can Improve Learning and Teaching Statistics
ERIC Educational Resources Information Center
Bebermeier, Sarah; Nussbeck, Fridtjof W.; Ontrup, Greta
2015-01-01
Lecturers teaching statistics are faced with several challenges supporting students' learning in appropriate ways. A variety of methods and tools exist to facilitate students' learning on statistics courses. The online questionnaires presented in this report are a new, slightly different computer-based tool: the central aim was to support students…
Undergraduate Chemistry Education in Chinese Universities: Addressing the Challenges of Rapid Growth
ERIC Educational Resources Information Center
Gou, Xiaojun; Cao, Haishi
2010-01-01
In the past 30 years, university-level chemistry education in China has been experiencing significant changes because of the rapid expansion of its university education system. These changes are reflected in improvements to the existing education goals, classroom teaching methods, textbooks, teaching facilities, teacher profiles, lab activities,…
USDA-ARS?s Scientific Manuscript database
MicroRNAs (miRNAs) ubiquitously exist in microorganisms, plants and animals, and appear to modulate a wide range of critical biological processes. However, no definitive conclusion has been reached regarding the uptake of exogenous dietary small RNAs into mammalian circulation and organs and cross-k...
An Exploratory Study of Software Cost Estimating at the Electronic Systems Division.
1976-07-01
actions to improve the software cost estimating process. While this research was limited to the ESD environment, the same types of problems may exist...Methods in Social Science. New York: Random House, 1969. 57. Smith, Ronald L. Structured Programming Series (Vol. XI) - Estimating Software Project
A Statewide Examination of the Training Satisfaction of Instructional Coaches
ERIC Educational Resources Information Center
Lubbers, Susanne L.
2017-01-01
As the roles of instructional coaches are expanding in school districts, little research exists about how instructional coaches are initially trained for their positions. Much of the research on teachers coaching peers shows it is a strong method of helping teachers improve their classroom effectiveness, but little is known about an…
ERIC Educational Resources Information Center
Wu, Ting-Ting
2018-01-01
Memorizing English vocabulary is often considered uninteresting, and a lack of motivation exists during learning activities. Moreover, most vocabulary practice systems automatically select words from articles and do not provide integrated model methods for students. Therefore, this study constructed a mobile game-based English vocabulary practice…
Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra
2016-05-03
A powder-based adsorbent and a related method of manufacture are provided. The powder-based adsorbent includes polymer powder with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the powder-based adsorbent includes irradiating polymer powder, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Powder-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.
Janke, Christopher J.; Dai, Sheng; Oyola, Yatsandra
2015-06-02
Foam-based adsorbents and a related method of manufacture are provided. The foam-based adsorbents include polymer foam with grafted side chains and an increased surface area per unit weight to increase the adsorption of dissolved metals, for example uranium, from aqueous solutions. A method for forming the foam-based adsorbents includes irradiating polymer foam, grafting with polymerizable reactive monomers, reacting with hydroxylamine, and conditioning with an alkaline solution. Foam-based adsorbents formed according to the present method demonstrated a significantly improved uranium adsorption capacity per unit weight over existing adsorbents.
Stabilization of computational procedures for constrained dynamical systems
NASA Technical Reports Server (NTRS)
Park, K. C.; Chiou, J. C.
1988-01-01
A new stabilization method of treating constraints in multibody dynamical systems is presented. By tailoring a penalty form of the constraint equations, the method achieves stabilization without artificial damping and yields a companion matrix differential equation for the constraint forces; the constraint forces are then obtained by integrating this companion differential equation in time. A principal feature of the method is that the error committed in each constraint condition decays with the characteristic time scale associated with its constraint force. Numerical experiments indicate that the method yields a marked improvement over existing techniques.
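The abstract does not give the companion-equation formulation in detail, so as a generic illustration of stabilized constraint enforcement (the classic Baumgarte scheme, not the authors' method), here is a planar pendulum treated as a constrained particle; the error in the constraint is driven to zero with a chosen time scale rather than drifting. All parameters are illustrative.

```python
import numpy as np

def pendulum_baumgarte(T=5.0, dt=1e-3, L=1.0, g=9.81, alpha=10.0, beta=10.0):
    """Unit-mass particle on a rigid rod: constraint phi = (x^2+y^2-L^2)/2 = 0
    is enforced by a Lagrange multiplier with Baumgarte stabilization,
    requiring phi'' + 2*alpha*phi' + beta^2*phi = 0 so constraint errors
    decay instead of accumulating. Returns the worst constraint drift."""
    q = np.array([L, 0.0])             # start exactly on the constraint
    v = np.array([0.0, 0.0])
    F = np.array([0.0, -g])            # gravity (unit mass)
    max_drift = 0.0
    for _ in range(int(T / dt)):
        phi = 0.5 * (q @ q - L**2)
        phidot = q @ v
        # solve the stabilized acceleration constraint for the multiplier
        lam = (-(q @ F + v @ v) - 2.0 * alpha * phidot - beta**2 * phi) / (q @ q)
        a = F + lam * q                # constraint force acts along grad(phi)=q
        v = v + dt * a                 # semi-implicit Euler
        q = q + dt * v
        max_drift = max(max_drift, abs(q @ q - L**2))
    return max_drift
```

With the damping terms active, the rod-length constraint stays satisfied to high accuracy over thousands of steps; setting alpha = beta = 0 reproduces the secular drift that stabilization methods are designed to suppress.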
Improving nuclear data accuracy of 241Am and 237Np capture cross sections
NASA Astrophysics Data System (ADS)
Žerovnik, Gašper; Schillebeeckx, Peter; Cano-Ott, Daniel; Jandel, Marian; Hori, Jun-ichi; Kimura, Atsushi; Rossbach, Matthias; Letourneau, Alain; Noguere, Gilles; Leconte, Pierre; Sano, Tadafumi; Kellett, Mark A.; Iwamoto, Osamu; Ignatyuk, Anatoly V.; Cabellos, Oscar; Genreith, Christoph; Harada, Hideo
2017-09-01
In the framework of the OECD/NEA WPEC subgroup 41, ways to improve neutron induced capture cross sections for 241Am and 237Np are being sought. Decay data, energy dependent cross section data and neutron spectrum averaged data are important for that purpose and were investigated. New time-of-flight measurements were performed and analyzed, and considerable effort was put into development of methods for analysis of spectrum averaged data and re-analysis of existing experimental data.
2013-07-02
in streamer discharge afterglow in a variety of fuel/air mixtures in order to account for the O reaction pathways in transient plasma ignition. It is... plasma ignition (TPI), the use of streamers for ignition in combustion engines, holds great promise for improving performance. TPI has been tested...standard spark gap or arc ignition methods [1-4]. These improvements to combustion allow increasing power and efficiency in existing engines such as
Increasingly mobile: How new technologies can enhance qualitative research
Moylan, Carrie Ann; Derr, Amelia Seraphia; Lindhorst, Taryn
2015-01-01
Advances in technology, such as the growth of smart phones, tablet computing, and improved access to the internet have resulted in many new tools and applications designed to increase efficiency and improve workflow. Some of these tools will assist scholars using qualitative methods with their research processes. We describe emerging technologies for use in data collection, analysis, and dissemination that each offer enhancements to existing research processes. Suggestions for keeping pace with the ever-evolving technological landscape are also offered. PMID:25798072
NASA Astrophysics Data System (ADS)
Levitan, Nathaniel; Gross, Barry
2016-10-01
New, high-resolution aerosol products are required in urban areas to improve the spatial coverage of the products, in terms of both resolution and retrieval frequency. These new products will improve our understanding of the spatial variability of aerosols in urban areas and will be useful in the detection of localized aerosol emissions. Urban aerosol retrieval is challenging for existing algorithms because of the high spatial variability of the surface reflectance, indicating the need for improved urban surface reflectance models. This problem can be stated in the language of novelty detection as the problem of selecting aerosol parameters whose effective surface reflectance spectrum is not an outlier in some space. In this paper, empirical orthogonal functions (EOFs), a reconstruction-based novelty detection technique, are used to perform single-pixel aerosol retrieval using the single angular and temporal sample provided by the MODIS sensor. The empirical orthogonal basis functions are trained for different land classes using the MODIS BRDF MCD43 product. Existing land classification products are used in training and aerosol retrieval. The retrieval is compared against the existing operational MODIS 3 KM Dark Target (DT) aerosol product and co-located AERONET data. Based on the comparison, our method allows for a significant increase in retrieval frequency and a moderate decrease in the known biases of MODIS urban aerosol retrievals.
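The reconstruction-based novelty idea — score a candidate surface reflectance spectrum by how poorly it reconstructs from a land class's EOF basis — can be sketched as follows. This is a toy illustration with synthetic spectra; in the paper the basis would be trained per land class from the MODIS MCD43 BRDF product.

```python
import numpy as np

def eof_novelty_scores(train, candidates, k=2):
    """Reconstruction-error novelty scores: project candidate spectra onto
    the top-k EOFs (principal components) of the training spectra; a large
    residual norm marks a spectrum as an outlier for that class."""
    mean = train.mean(axis=0)
    _, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
    basis = Vt[:k]                                   # top-k EOFs (rows)
    centered = candidates - mean
    resid = centered - (centered @ basis.T) @ basis  # part outside the basis
    return np.linalg.norm(resid, axis=1)
```

A spectrum lying in the training subspace scores near zero; any component orthogonal to the learned EOFs shows up directly in the score, which is the criterion used to accept or reject a candidate set of aerosol parameters.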
An Efficient, Lossless Database for Storing and Transmitting Medical Images
NASA Technical Reports Server (NTRS)
Fenstermacher, Marc J.
1998-01-01
This research aimed at creating new compression methods based on the central idea of Set Redundancy Compression (SRC). Set redundancy refers to the common information that exists in a set of similar images. SRC methods take advantage of this common information and can achieve improved compression of similar images by reducing their set redundancy. The current research resulted in the development of three new lossless SRC compression methods: MARS (Median-Aided Region Sorting), MAZE (Max-Aided Zero Elimination) and MaxGBA (Max-Guided Bit Allocation).
Restoration of hot pixels in digital imagers using lossless approximation techniques
NASA Astrophysics Data System (ADS)
Hadar, O.; Shleifer, A.; Cohen, E.; Dotan, Y.
2015-09-01
During the last twenty years, digital imagers have spread into industrial and everyday devices, such as satellites, security cameras, cell phones and laptops. "Hot pixels" are the main defects in remote digital cameras. In this paper we demonstrate an improvement over existing restoration methods that use (solely or as an auxiliary tool) some average of the surrounding pixels, such as the method of the Chapman-Koren study [1,2]. The proposed method uses the CALIC algorithm and adapts it to make full use of the surrounding pixels.
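The baseline family of restoration methods that this paper improves upon — estimating a hot pixel from its neighbors — can be sketched as below. This is a simple neighbor-median estimator for illustration, not the paper's CALIC-based predictor, which additionally exploits local gradients.

```python
import numpy as np

def restore_hot_pixels(img, hot_mask):
    """Replace each flagged hot pixel by the median of its valid
    8-neighborhood (neighbors that are themselves flagged are skipped)."""
    out = img.astype(float).copy()
    h, w = img.shape
    for r, c in zip(*np.nonzero(hot_mask)):
        neigh = [out[i, j]
                 for i in range(max(0, r - 1), min(h, r + 2))
                 for j in range(max(0, c - 1), min(w, c + 2))
                 if (i, j) != (r, c) and not hot_mask[i, j]]
        if neigh:  # leave the pixel untouched if no valid neighbor exists
            out[r, c] = np.median(neigh)
    return out
```

On a flat patch, a saturated hot pixel is restored exactly to the surrounding level; predictor-based methods like CALIC improve on this mainly near edges, where a plain average or median blurs structure.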
Alternating method applied to edge and surface crack problems
NASA Technical Reports Server (NTRS)
Hartranft, R. J.; Sih, G. C.
1972-01-01
The Schwarz-Neumann alternating method is employed to obtain stress intensity solutions to two crack problems of practical importance: a semi-infinite elastic plate containing an edge crack which is subjected to concentrated normal and tangential forces, and an elastic half space containing a semicircular surface crack which is subjected to uniform opening pressure. The solution to the semicircular surface crack is seen to be a significant improvement over existing approximate solutions. Application of the alternating method to other crack problems of current interest is briefly discussed.
Fan, M; Wang, K; Jiang, D
1999-08-01
In this paper, we study the existence and global attractivity of positive periodic solutions of periodic n-species Lotka-Volterra competition systems. By using the method of coincidence degree and Lyapunov functional, a set of easily verifiable sufficient conditions are derived for the existence of at least one strictly positive (componentwise) periodic solution of periodic n-species Lotka-Volterra competition systems with several deviating arguments and the existence of a unique globally asymptotically stable periodic solution with strictly positive components of periodic n-species Lotka-Volterra competition system with several delays. Some new results are obtained. As an application, we also examine some special cases of the system we considered, which have been studied extensively in the literature. Some known results are improved and generalized.
Dual-threshold segmentation using Arimoto entropy based on chaotic bee colony optimization
NASA Astrophysics Data System (ADS)
Li, Li
2018-03-01
In order to extract a target from a complex background more quickly and accurately, and to further improve the detection of defects, a dual-threshold segmentation method using Arimoto entropy based on chaotic bee colony optimization is proposed. Firstly, single-threshold selection based on Arimoto entropy is extended to dual-threshold selection in order to separate the target from the background more accurately. Then the intermediate variables in the formulae for Arimoto entropy dual-threshold selection are calculated by recursion, eliminating redundant computation and reducing the amount of calculation. Finally, the local search phase of the artificial bee colony algorithm is improved by a chaotic sequence based on the tent map. The fast search for two optimal thresholds is achieved using the improved bee colony optimization algorithm, noticeably accelerating the search. A large number of experimental results show that, compared with existing segmentation methods such as multi-threshold segmentation using maximum Shannon entropy, two-dimensional Shannon entropy segmentation, two-dimensional Tsallis gray entropy segmentation and multi-threshold segmentation using reciprocal gray entropy, the proposed method segments the target more quickly and accurately, with superior segmentation effect. It proves to be a fast and effective method for image segmentation.
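The chaotic ingredient of the improved local search is the tent map, which generates a deterministic but aperiodic sequence covering the unit interval; candidate threshold perturbations drawn from it explore the search space more evenly than a stagnating random walk. A minimal sketch (the slope is set slightly below the classic value 2 to avoid floating-point collapse of the orbit; the seed and length are illustrative):

```python
def tent_map_sequence(x0=0.37, n=10, mu=1.99):
    """Chaotic sequence from the tent map:
    x_{k+1} = mu * x_k if x_k < 0.5 else mu * (1 - x_k).
    Values stay in [0, 1) and can seed perturbations in the bee colony's
    local (onlooker) search phase."""
    xs, x = [], x0
    for _ in range(n):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        xs.append(x)
    return xs
```

A long orbit visits the unit interval densely without repeating, which is the property the chaotic bee colony relies on to escape local optima of the Arimoto-entropy objective.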
NASA Astrophysics Data System (ADS)
Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu
2016-05-01
The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities for detection. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, because no special assumptions are made about the size and shape of small targets. Because the decomposition is exact, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.
A New Approach for Mining Order-Preserving Submatrices Based on All Common Subsequences.
Xue, Yun; Liao, Zhengling; Li, Meihang; Luo, Jie; Kuang, Qiuhua; Hu, Xiaohui; Li, Tiechen
2015-01-01
Order-preserving submatrices (OPSMs) have been applied in many fields, such as DNA microarray data analysis, automatic recommendation systems, and target marketing systems, as an important unsupervised learning model. Unfortunately, most existing methods are heuristic algorithms that cannot reveal all OPSMs, as the problem is NP-complete. In particular, deep OPSMs, corresponding to long patterns with few supporting sequences, incur explosive computational costs and are completely pruned by most popular methods. In this paper, we propose an exact method to discover all OPSMs based on frequent sequential pattern mining. First, an existing algorithm was adjusted to disclose all common subsequences (ACS) between every two row sequences, so that no deep OPSM is missed. Then, an improved prefix-tree data structure was used to store and traverse the ACS, and the Apriori principle was employed to mine the frequent sequential patterns efficiently. Finally, experiments were implemented on gene and synthetic datasets. Results demonstrated the effectiveness and efficiency of this method.
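The defining property being mined here is easy to state in code: a set of columns forms an OPSM over a set of rows when every row orders those columns by value in exactly the same way. A minimal verification helper (a building block for illustration, not the paper's ACS mining algorithm; it assumes distinct values within each row):

```python
def is_order_preserving(rows, cols):
    """Check whether the selected columns form an order-preserving
    submatrix over the given rows: each row, restricted to `cols`,
    must induce the same ranking of the columns by value."""
    orders = [tuple(sorted(cols, key=lambda c: row[c])) for row in rows]
    return all(o == orders[0] for o in orders)
```

For example, in rows (1, 5, 3), (2, 9, 4), (0, 8, 1) the columns (0, 2, 1) are order-preserving — every row increases along column 0, then 2, then 1 — whereas two rows that disagree on a pairwise order rule the pattern out immediately.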
Improving Quality of Shoe Soles Product using Six Sigma
NASA Astrophysics Data System (ADS)
Jesslyn Wijaya, Athalia; Trusaji, Wildan; Akbar, Muhammad; Ma’ruf, Anas; Irianto, Dradjad
2018-03-01
A manufacturer in Bandung produces various rubber-based products, e.g. trim, rice rollers and shoe soles. After penetrating the shoe soles market, the manufacturer has encountered customers with tight quality control. Based on past data, the defect level of this product was 18.08%, which caused the manufacturer losses of time and money. A quality improvement effort was made using the six sigma method, which comprises the phases define, measure, analyse, improve, and control (DMAIC). In the define phase, the problem and its scope were defined; the Delphi method was also used in this phase to identify critical factors. In the measure phase, the stability of the existing process and its sigma quality level were measured. A fishbone diagram and failure mode and effect analysis (FMEA) were used in the next phase to analyse root causes and determine the priority issues. The improve phase was carried out by designing alternative improvement strategies using the 5W1H method. Several improvement efforts were identified, i.e. (i) modifying the design of the hanging rack, (ii) creating a Pantone colour book and check sheets, (iii) providing a pedestrian line at the compound department, (iv) buying stopwatches, and (v) modifying the shoe sole dies. Control strategies for continuous improvement were also proposed, such as SOPs and a reward and punishment system.
Apparatus for passive removal of subsurface contaminants and mass flow measurement
Jackson, Dennis G [Augusta, GA; Rossabi, Joseph [Aiken, SC; Riha, Brian D [Augusta, GA
2003-07-15
A system for improving the Baroball valve and a method for retrofitting an existing Baroball valve. This invention improves upon the Baroball valve by reshaping the interior chamber of the valve to form a flow meter measuring chamber. The Baroball valve sealing mechanism acts as a rotameter bob for determining mass flow rate through the Baroball valve. A method for retrofitting a Baroball valve includes providing static pressure ports and connecting a measuring device, to these ports, for measuring the pressure differential between the Baroball chamber and the well. A standard curve of nominal device measurements allows the mass flow rate to be determined through the retrofitted Baroball valve.
Apparatus for passive removal of subsurface contaminants and volume flow measurement
Jackson, Dennis G.; Rossabi, Joseph; Riha, Brian D.
2002-01-01
A system for improving the Baroball valve and a method for retrofitting an existing Baroball valve. This invention improves upon the Baroball valve by reshaping the interior chamber of the valve to form a flow meter measuring chamber. The Baroball valve sealing mechanism acts as a rotameter bob for determining volume flow rate through the Baroball valve. A method for retrofitting a Baroball valve includes providing static pressure ports and connecting a measuring device, to these ports, for measuring the pressure differential between the Baroball chamber and the well. A standard curve of nominal device measurements allows the volume flow rate to be determined through the retrofitted Baroball valve.
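The standard-curve lookup described in both Baroball patents — converting a measured pressure differential to a flow rate via a nominal calibration curve — amounts to interpolation on tabulated device measurements. A minimal sketch (the calibration points below are made up for illustration; they are not the device's actual curve):

```python
import numpy as np

def flow_from_differential(dp_measured, dp_curve, flow_curve):
    """Look up flow rate for a measured pressure differential by linear
    interpolation on a standard (calibration) curve of nominal device
    measurements. `dp_curve` must be increasing."""
    return float(np.interp(dp_measured, dp_curve, flow_curve))
```

With an illustrative curve of (0, 10, 20, 40) pressure units mapping to (0, 2, 5, 12) flow units, a reading of 15 falls midway between the second and third calibration points and interpolates to 3.5.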
Loeffler, Troy D; Sepehri, Aliasghar; Chen, Bin
2015-09-08
Reformulation of existing Monte Carlo algorithms used in the study of grand canonical systems has yielded massive improvements in efficiency. Here we present an energy biasing scheme designed to address targeting issues encountered in particle swap moves using sophisticated algorithms such as the Aggregation-Volume-Bias and Unbonding-Bonding methods. Specifically, this energy biasing scheme allows a particle to be inserted to (or removed from) a region that is more acceptable. As a result, this new method showed a several-fold increase in insertion/removal efficiency in addition to an accelerated rate of convergence for the thermodynamic properties of the system.
The algorithm of central axis in surface reconstruction
NASA Astrophysics Data System (ADS)
Zhao, Bao Ping; Zhang, Zheng Mei; Cai Li, Ji; Sun, Da Ming; Cao, Hui Ying; Xing, Bao Liang
2017-09-01
Reverse engineering is an important technical means of product imitation and new product development. Its core technology, surface reconstruction, is a current research focus. Among the various algorithms for surface reconstruction, reconstruction based on the medial axis is an important method. The various medial-axis-based reconstruction algorithms are summarized, the problems existing in these methods are pointed out, and the aspects that need improvement are identified. The later development of surface reconstruction along the axial direction is also discussed.
Development of solid dispersion systems of dapivirine to enhance its solubility.
Gorajana, Adinarayana; Ying, Chan Chiew; Shuang, Yeen; Fong, Pooi; Tan, Zhi; Gupta, Jyoti; Talekar, Meghna; Sharma, Manisha; Garg, Sanjay
2013-06-01
Dapivirine, formerly known as TMC 120, is a poorly water-soluble anti-HIV drug, currently being developed as a vaginal microbicide. The clinical use of this drug has been limited due to its poor solubility. The aim of this study was to design solid dispersion systems of Dapivirine to improve its solubility. Solid dispersions were prepared by solvent and fusion methods. Dapivirine release from the solid dispersion system was determined by conducting in-vitro dissolution studies. The physicochemical characteristics of the drug and its formulation were studied using Differential Scanning Calorimetry (DSC), powder X-ray Diffraction (XRD), Fourier-transform Infrared Spectroscopy (FTIR) and Scanning Electron Microscopy (SEM). A significant improvement in drug dissolution rate was observed with the solid dispersion systems. XRD, SEM and DSC results indicated the transformation of pure Dapivirine, which exists in crystalline form, into an amorphous form in selected solid dispersion formulations. FTIR and HPLC analysis confirmed the absence of drug-excipient interactions. Solid dispersion systems can be used to improve the dissolution rate of Dapivirine. This improvement could be attributed to the reduction or absence of drug crystallinity, existence of drug particles in an amorphous form and improved wettability of the drug.
Ridge, S E; Vizard, A L
1993-01-01
Traditionally, in order to improve diagnostic accuracy, existing tests have been replaced with newly developed diagnostic tests with superior sensitivity and specificity. However, it is possible to improve existing tests by altering the cutoff value chosen to distinguish infected individuals from uninfected individuals. This paper uses data obtained from an investigation of the operating characteristics of the Johne's Absorbed EIA to demonstrate a method of determining a preferred cutoff value from several potentially useful cutoff settings. A method of determining the financial gain from using the preferred rather than the current cutoff value and a decision analysis method to assist in determining the optimal cutoff value when critical population parameters are not known with certainty are demonstrated. The results of this study indicate that the currently recommended cutoff value for the Johne's Absorbed EIA is only close to optimal when the disease prevalence is very low and false-positive test results are deemed to be very costly. In other situations, there were considerable financial advantages to using cutoff values calculated to maximize the benefit of testing. It is probable that the current cutoff values for other diagnostic tests may not be the most appropriate for every testing situation. This paper offers methods for identifying the cutoff value that maximizes the benefit of medical and veterinary diagnostic tests. PMID:8501227
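The decision-analytic core of the cutoff-selection argument can be sketched as an expected-net-benefit calculation over candidate cutoffs: each cutoff trades sensitivity against specificity, and prevalence and costs decide which trade wins. The figures below are illustrative, not the Johne's Absorbed EIA operating characteristics.

```python
import numpy as np

def best_cutoff(cutoffs, sens, spec, prevalence, benefit_tp, cost_fp):
    """Select the cutoff maximizing expected net benefit per individual
    tested: prevalence*sens*benefit_tp - (1-prevalence)*(1-spec)*cost_fp.
    (A minimal sketch of the decision-analysis idea in the paper.)"""
    sens = np.asarray(sens, dtype=float)
    spec = np.asarray(spec, dtype=float)
    net = prevalence * sens * benefit_tp - (1.0 - prevalence) * (1.0 - spec) * cost_fp
    i = int(np.argmax(net))
    return cutoffs[i], float(net[i])
```

Note how the answer shifts with the inputs: at very low prevalence and a high false-positive cost, the term penalizing (1 - spec) dominates and a stricter cutoff wins — which mirrors the paper's finding that the recommended cutoff is only near-optimal in exactly that situation.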
Applications of He's semi-inverse method, ITEM and GGM to the Davey-Stewartson equation
NASA Astrophysics Data System (ADS)
Zinati, Reza Farshbaf; Manafian, Jalil
2017-04-01
We investigate the Davey-Stewartson (DS) equation and find travelling wave solutions. In this paper, we demonstrate the effectiveness of three analytical methods, namely He's semi-inverse variational principle method (SIVPM), the improved tan(φ/2)-expansion method (ITEM) and the generalized G'/G-expansion method (GGM), for seeking exact solutions of the DS equation. These methods are direct, concise and simple to implement compared with other existing methods. Exact solutions of four types have been obtained. The results demonstrate that the aforementioned methods are more efficient than the Ansatz method applied by Mirzazadeh (2015). Abundant exact travelling wave solutions, including soliton, kink, periodic and rational solutions, have been found by the improved tan(φ/2)-expansion and generalized G'/G-expansion methods. By He's semi-inverse variational principle we have obtained dark and bright soliton wave solutions, with implications for physical understanding. These solutions may play an important role in engineering and physics. Moreover, graphical simulations in Matlab illustrate the behavior of these solutions.
Lee, Woo-Chun; Lee, Sang-Woo; Yun, Seong-Taek; Lee, Pyeong-Koo; Hwang, Yu Sik; Kim, Soon-Oh
2016-01-15
Numerous technologies have been developed and applied to remediate acid mine drainage (AMD), but each has specific drawbacks. To overcome the limitations of existing methods and improve their effectiveness, we propose a novel method utilizing a permeable reactive kiddle (PRK). This manuscript explores the performance of the PRK method. In line with the concept of green technology, the PRK method recycles industrial waste such as steel slag and waste cast iron. Our results demonstrate that the PRK method can be applied to remediate AMD under optimal operational conditions. In particular, the method allows simple installation at low cost compared with established technologies. Copyright © 2015 Elsevier B.V. All rights reserved.
Hole filling with oriented sticks in ultrasound volume reconstruction
Vaughan, Thomas; Lasso, Andras; Ungi, Tamas; Fichtinger, Gabor
2015-01-01
Volumes reconstructed from tracked planar ultrasound images often contain regions where no information was recorded. Existing interpolation methods introduce image artifacts and tend to be slow in filling large missing regions. Our goal was to develop a computationally efficient method that fills missing regions while adequately preserving image features. We use directional sticks to interpolate between pairs of known opposing voxels in nearby images. We tested our method on 30 volumetric ultrasound scans acquired from human subjects, and compared its performance to that of other published hole-filling methods. Reconstruction accuracy, fidelity, and time were improved compared with other methods. PMID:26839907
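The stick idea can be illustrated in 2D: for each missing pixel, search a few fixed directions for the nearest pair of known opposing values and linearly interpolate along the shortest such stick. The sketch below is a simplified illustration of that principle, not the authors' implementation, which operates on 3D volumes with many more stick orientations.

```python
import numpy as np

def fill_holes_sticks(img, mask, max_len=10):
    # Fill pixels where mask is False by interpolating between the
    # nearest pair of known opposing pixels along the shortest "stick".
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                continue
            best = None  # (stick length, interpolated value)
            for dy, dx in dirs:
                a = b = None
                for s in range(1, max_len + 1):
                    ya, xa = y - s * dy, x - s * dx
                    yb, xb = y + s * dy, x + s * dx
                    if a is None and 0 <= ya < h and 0 <= xa < w and mask[ya, xa]:
                        a = (s, img[ya, xa])
                    if b is None and 0 <= yb < h and 0 <= xb < w and mask[yb, xb]:
                        b = (s, img[yb, xb])
                if a is not None and b is not None:
                    n = a[0] + b[0]
                    val = (b[0] * a[1] + a[0] * b[1]) / n  # linear along the stick
                    if best is None or n < best[0]:
                        best = (n, val)
            if best is not None:
                out[y, x] = best[1]
    return out
```

A hole flanked by known values 0 and 10 at equal distance is filled with their midpoint 5, as expected from linear interpolation along the stick.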
NASA Technical Reports Server (NTRS)
Thomsen, III, Donald Laurence (Inventor); Cano, Roberto J. (Inventor); Jensen, Brian J. (Inventor); Hales, Stephen J. (Inventor); Alexa, Joel A. (Inventor)
2014-01-01
Methods of building Z-graded radiation shielding and covers. In one aspect, the method includes: providing a substrate surface having about medium Z-grade; plasma spraying a first metal having higher Z-grade than the substrate surface; and infusing a polymer layer to form a laminate. In another aspect, the method includes electro/electroless plating a first metal having higher Z-grade than the substrate surface. In other aspects, the methods include improving an existing electronics enclosure to build a Z-graded radiation shield by applying a temperature controller to at least part of the enclosure and affixing at least one layer of a first metal having higher Z-grade from the enclosure.
Combinational Reasoning of Quantitative Fuzzy Topological Relations for Simple Fuzzy Regions
Liu, Bo; Li, Dajun; Xia, Yuanping; Ruan, Jian; Xu, Lili; Wu, Huanyi
2015-01-01
In recent years, formalization and reasoning of topological relations have become a hot topic as a means to generate knowledge about the relations between spatial objects at the conceptual and geometrical levels. These mechanisms have been widely used in spatial data query, spatial data mining, evaluation of equivalence and similarity in a spatial scene, as well as for consistency assessment of the topological relations of multi-resolution spatial databases. The concept of computational fuzzy topological space is applied to simple fuzzy regions to efficiently and more accurately solve fuzzy topological relations. Thus, extending the existing research and improving upon previous work, this paper presents a new method to describe fuzzy topological relations between simple spatial regions in Geographic Information Sciences (GIS) and Artificial Intelligence (AI). First, we propose new definitions for simple fuzzy line segments and simple fuzzy regions based on computational fuzzy topology. Then, based on these definitions, we propose a new combinational reasoning method to compute the topological relations between simple fuzzy regions. This study finds that there are (1) 23 different topological relations between a simple crisp region and a simple fuzzy region and (2) 152 different topological relations between two simple fuzzy regions. Finally, we discuss examples demonstrating the validity of the new method; comparisons with existing fuzzy models show that the proposed method is more expressive and can compute relations that existing models cannot. PMID:25775452
Measurement of plasma unbound unconjugated bilirubin.
Ahlfors, C E
2000-03-15
A method is described for measuring the unconjugated fraction of the unbound bilirubin concentration in plasma by combining the peroxidase method for determining unbound bilirubin with a diazo method for measuring conjugated and unconjugated bilirubin. The accuracy of the unbound bilirubin determination is improved by decreasing sample dilution, eliminating interference by conjugated bilirubin, monitoring changes in bilirubin concentration using diazo derivatives, and correcting for rate-limiting dissociation of bilirubin from albumin. The unbound unconjugated bilirubin concentration by the combined method in plasma from 20 jaundiced newborns was significantly greater than and poorly correlated with the unbound bilirubin determined by the existing peroxidase method (r = 0.7), possibly due to differences in sample dilution between the methods. The unbound unconjugated bilirubin was an unpredictable fraction of the unbound bilirubin in plasma samples from patients with similar total bilirubin concentrations but varying levels of conjugated bilirubin. A bilirubin-binding competitor was readily detected at a sample dilution typically used for the combined test but not at the dilution used for the existing peroxidase method. The combined method is ideally suited to measuring unbound unconjugated bilirubin in jaundiced human newborns or animal models of kernicterus. Copyright 2000 Academic Press.
Improved regulatory element prediction based on tissue-specific local epigenomic signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yupeng; Gorkin, David U.; Dickel, Diane E.
Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.
NASA Technical Reports Server (NTRS)
Gates, Thomas S.; Johnson, Theodore F.; Whitley, Karen S.
2005-01-01
The objective of this report is to contribute to the independent assessment of the Space Shuttle External Tank Foam Material. This report specifically addresses material modeling, characterization testing, data reduction methods, and data pedigree. A brief description of the External Tank foam materials, locations, and standard failure modes is provided to develop suitable background information. A review of mechanics based analysis methods from the open literature is used to provide an assessment of the state-of-the-art in material modeling of closed cell foams. Further, this report assesses the existing material property database and investigates sources of material property variability. The report presents identified deficiencies in testing methods and procedures, recommendations for additional testing as required, identification of near-term improvements that should be pursued, and long-term capabilities or enhancements that should be developed.
Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks
Zhang, Fu-Guo; Zeng, An
2015-01-01
The rapid expansion of the Internet brings us overwhelming online information, far more than any individual can sift through. Recommender systems were therefore created to help people dig through this abundance of information. In networks composed of users and objects, recommender algorithms based on diffusion have proven to be among the best performing methods. Previous works considered the diffusion process from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify state-of-the-art recommendation methods. The simulation results show that the new methods can outperform the existing methods in both recommendation accuracy and diversity. Finally, this modification is shown to improve recommendation in a realistic case. PMID:26125631
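The family of diffusion recommenders the abstract refers to can be sketched with the standard hybrid mass-diffusion/heat-conduction normalization, where a single exponent `lam` tunes how asymmetrically resource is redistributed between the two sides of the bipartite network. This is an illustration of that family under assumed conventions (binary users-by-objects matrix), not necessarily the exact modification studied in the paper.

```python
import numpy as np

def hybrid_recommend(A, user, lam=0.5):
    # Hybrid mass-diffusion / heat-conduction scores on a bipartite
    # users x objects adjacency matrix A; lam=1 recovers pure mass
    # diffusion (ProbS), lam=0 pure heat conduction (HeatS).
    A = A.astype(float)
    ku = np.maximum(A.sum(axis=1), 1.0)          # user degrees
    ko = np.maximum(A.sum(axis=0), 1.0)          # object degrees
    # M[a, b] = sum over users i of A[i, a] * A[i, b] / k_i
    M = A.T @ (A / ku[:, None])
    # Tunable normalization makes the two diffusion directions asymmetric.
    W = M / (ko[:, None] ** (1.0 - lam) * ko[None, :] ** lam)
    scores = W @ A[user]
    scores[A[user] > 0] = -np.inf                # hide already-collected items
    return scores
```

On a toy 3-user, 3-object network, the only uncollected object for a user receives a positive score while collected ones are masked out, so ranking the scores yields the recommendation list.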
Spectrophotometric analyses of hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX) in water.
Shi, Cong; Xu, Zhonghou; Smolinski, Benjamin L; Arienti, Per M; O'Connor, Gregory; Meng, Xiaoguang
2015-07-01
A simple and accurate spectrophotometric method for on-site analysis of royal demolition explosive (RDX) in water samples was developed based on the Berthelot reaction. The sensitivity and accuracy of an existing spectrophotometric method was improved by: replacing toxic chemicals with more stable and safer reagents; optimizing the reagent dose and reaction time; improving color stability; and eliminating the interference from inorganic nitrogen compounds in water samples. Cation and anion exchange resin cartridges were developed and used for sample pretreatment to eliminate the effect of ammonia and nitrate on RDX analyses. The detection limit of the method was determined to be 100 μg/L. The method was used successfully for analysis of RDX in untreated industrial wastewater samples. It can be used for on-site monitoring of RDX in wastewater for early detection of chemical spills and failure of wastewater treatment systems. Copyright © 2015. Published by Elsevier B.V.
Conflict management based on belief function entropy in sensor fusion.
Yuan, Kaijuan; Xiao, Fuyuan; Fei, Liguo; Kang, Bingyi; Deng, Yong
2016-01-01
Wireless sensor network plays an important role in intelligent navigation. It incorporates a group of sensors to overcome the limitation of single detection system. Dempster-Shafer evidence theory can combine the sensor data of the wireless sensor network by data fusion, which contributes to the improvement of accuracy and reliability of the detection system. However, due to different sources of sensors, there may be conflict among the sensor data under uncertain environment. Thus, this paper proposes a new method combining Deng entropy and evidence distance to address the issue. First, Deng entropy is adopted to measure the uncertain information. Then, evidence distance is applied to measure the conflict degree. The new method can cope with conflict effectually and improve the accuracy and reliability of the detection system. An example is illustrated to show the efficiency of the new method and the result is compared with that of the existing methods.
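Deng entropy itself is straightforward to compute from a mass function: each focal element's mass is weighted against the number of non-empty subsets it could split into, so larger focal elements carry more uncertainty. A minimal sketch of just that measure follows (the paper's combination with evidence distance is omitted); the mass-function encoding as a dict of frozensets is our own convention.

```python
import math

def deng_entropy(m):
    # Deng entropy of a Dempster-Shafer mass function:
    #   E_d(m) = -sum over focal elements A of m(A) * log2(m(A) / (2^|A| - 1))
    # m: dict mapping frozenset focal elements -> mass.
    e = 0.0
    for focal, mass in m.items():
        if mass > 0:
            e -= mass * math.log2(mass / (2 ** len(focal) - 1))
    return e

# With only singleton focal elements it reduces to Shannon entropy:
m = {frozenset({'a'}): 0.5, frozenset({'b'}): 0.5}
print(deng_entropy(m))  # -> 1.0
```

A mass of 1 on the two-element set {a, b} gives log2(3) ≈ 1.585 bits, strictly more than any Shannon entropy over two outcomes, reflecting the extra uncertainty of a multi-element focal set.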
Tabu Search enhances network robustness under targeted attacks
NASA Astrophysics Data System (ADS)
Sun, Shi-wen; Ma, Yi-lin; Li, Rui-qi; Wang, Li; Xia, Cheng-yi
2016-03-01
We focus on the optimization of network robustness with respect to intentional attacks on high-degree nodes. Given an existing network, this problem can be considered as a typical single-objective combinatorial optimization problem. Based on the heuristic Tabu Search optimization algorithm, a link-rewiring method is applied to reconstruct the network while keeping the degree of every node unchanged. Through numerical simulations, BA scale-free network and two real-world networks are investigated to verify the effectiveness of the proposed optimization method. Meanwhile, we analyze how the optimization affects other topological properties of the networks, including natural connectivity, clustering coefficient and degree-degree correlation. The current results can help to improve the robustness of existing complex real-world systems, as well as to provide some insights into the design of robust networks.
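The degree-preserving rewiring plus Tabu acceptance loop can be sketched as follows. This is a minimal illustration under assumed choices (double-edge-swap moves, a Schneider-style robustness score against an initial-degree attack, greedy acceptance of non-worsening moves), not the authors' exact algorithm or parameters.

```python
import random
from collections import deque

def largest_cc(nodes, adj):
    # Size of the largest connected component restricted to `nodes`.
    seen, best = set(), 0
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        q, size = deque([s]), 0
        while q:
            u = q.popleft()
            size += 1
            for v in adj.get(u, ()):
                if v in nodes and v not in seen:
                    seen.add(v)
                    q.append(v)
        best = max(best, size)
    return best

def robustness(adj):
    # Schneider-style R: mean largest-component size while deleting
    # nodes in decreasing-degree order (targeted attack), normalized.
    order = sorted(adj, key=lambda n: -len(adj[n]))
    alive, total, n = set(adj), 0.0, len(adj)
    for u in order:
        alive.discard(u)
        total += largest_cc(alive, adj)
    return total / n ** 2

def tabu_rewire(edges, iters=300, tabu_len=25, seed=1):
    # Degree-preserving double-edge swaps, accepted when robustness
    # does not drop; recently used swaps are tabu.
    rng = random.Random(seed)
    E = {tuple(sorted(e)) for e in edges}

    def adj_of(E):
        adj = {}
        for a, b in E:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        return adj

    cur = robustness(adj_of(E))
    tabu = deque(maxlen=tabu_len)
    for _ in range(iters):
        (a, b), (c, d) = rng.sample(sorted(E), 2)
        new1, new2 = tuple(sorted((a, c))), tuple(sorted((b, d)))
        if a == c or b == d or new1 in E or new2 in E or (new1, new2) in tabu:
            continue  # self-loop, multi-edge, or tabu move
        E2 = (E - {(a, b), (c, d)}) | {new1, new2}
        r = robustness(adj_of(E2))
        if r >= cur:
            E, cur = E2, r
            tabu.append((new1, new2))
    return E, cur
```

Because swaps replace edge pair (a, b), (c, d) with (a, c), (b, d), every node keeps its original degree, which is the constraint stated in the abstract.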
NASA Astrophysics Data System (ADS)
Lange, Jacob; O'Shaughnessy, Richard; Healy, James; Lousto, Carlos; Shoemaker, Deirdre; Lovelace, Geoffrey; Scheel, Mark; Ossokine, Serguei
2016-03-01
In this talk, we describe a procedure to reconstruct the parameters of sufficiently massive coalescing compact binaries via direct comparison with numerical relativity simulations. For sufficiently massive sources, existing numerical relativity simulations are long enough to cover the observationally accessible part of the signal. Due to the signal's brevity, the posterior parameter distribution it implies is broad, simple, and easily reconstructed from information gained by comparing to only the sparse sample of existing numerical relativity simulations. We describe how followup simulations can corroborate and improve our understanding of a detected source. Since our method can include all physics provided by full numerical relativity simulations of coalescing binaries, it provides a valuable complement to alternative techniques which employ approximations to reconstruct source parameters. Supported by NSF Grant PHY-1505629.
The Taguchi Method Application to Improve the Quality of a Sustainable Process
NASA Astrophysics Data System (ADS)
Titu, A. M.; Sandu, A. V.; Pop, A. B.; Titu, S.; Ciungu, T. C.
2018-06-01
Taguchi's method has long been used to improve the quality of analyzed processes and products. This research addresses an unusual situation, namely the modeling of certain technical parameters in a process intended to be sustainable, by improving process quality and ensuring quality through an experimental research method. Modern experimental techniques can be applied in any field, and this study reflects the benefits of interaction between the principles of agricultural sustainability and the application of Taguchi's method. The experimental method used in this practical study combines engineering techniques with statistical experimental modeling to achieve rapid improvement of quality costs, in effect seeking optimization of existing processes and their main technical parameters. The paper is a purely technical piece of research that promotes a technical experiment using the Taguchi method, considered effective because it allows rapid achievement of 70 to 90% of the desired optimization of the technical parameters. The missing 10 to 30 percent can be obtained with one or two complementary experiments, limited to the 2 to 4 technical parameters considered most influential. Applying Taguchi's method allowed the simultaneous study, in the same experiment, of the influence factors considered most important in different combinations, while also determining each factor's contribution.
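Two quantities central to a Taguchi analysis, the per-run signal-to-noise ratio and each factor's percent contribution, can be sketched numerically. This is a simplified illustration under stated assumptions: a larger-the-better S/N formula, and contributions apportioned from sums of squares of level-mean S/N ratios, ignoring replication counts and error terms.

```python
import math

def sn_larger_better(ys):
    # Taguchi larger-the-better signal-to-noise ratio, in dB:
    #   S/N = -10 * log10( mean(1 / y^2) )
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def factor_contributions(sn_by_level):
    # Percent contribution of each factor, apportioned from the sum of
    # squares of its level-mean S/N ratios around the factor mean.
    ss = {}
    for f, levels in sn_by_level.items():
        m = sum(levels) / len(levels)
        ss[f] = sum((v - m) ** 2 for v in levels)
    total = sum(ss.values())
    return {f: s / total for f, s in ss.items()}

# Identical responses of 10 give S/N = -10*log10(1/100), i.e. about 20 dB:
print(sn_larger_better([10, 10, 10]))
```

Feeding in hypothetical level means for two factors then ranks them by contribution, which is the kind of "each factor contribution" result the study reports.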
Pittenger, Amy L; Westberg, Sarah; Rowan, Mary; Schweiss, Sarah
2013-11-12
To improve pharmacy and nursing students' competency in collaborative practice by having them participate in an interprofessional diabetes experience involving social networking. An existing elective course on diabetes management was modified to include interprofessional content based on Interprofessional Education Collaborative (IPEC) competency domains. Web-based collaborative tools (social networking and video chat) were used to allow nursing and pharmacy students located on 2 different campuses to apply diabetes management content as an interprofessional team. Mixed-method analyses demonstrated an increase in students' knowledge of the roles and responsibilities of the other profession and developed an understanding of interprofessional communication strategies and their central role in effective teamwork. Interprofessional content and activities can be effectively integrated into an existing course and offered successfully to students from other professional programs and on remote campuses.
Distribution system model calibration with big data from AMI and PV inverters
Peppanen, Jouni; Reno, Matthew J.; Broderick, Robert J.; ...
2016-03-03
Efficient management and coordination of distributed energy resources with advanced automation schemes requires accurate distribution system modeling and monitoring. Big data from smart meters and photovoltaic (PV) micro-inverters can be leveraged to calibrate existing utility models. This paper presents computationally efficient distribution system parameter estimation algorithms to improve the accuracy of existing utility feeder radial secondary circuit model parameters. The method is demonstrated using a real utility feeder model with advanced metering infrastructure (AMI) and PV micro-inverters, along with alternative parameter estimation approaches that can be used to improve secondary circuit models when limited measurement data is available. Lastly, the parameter estimation accuracy is demonstrated for both a three-phase test circuit with typical secondary circuit topologies and single-phase secondary circuits in a real mixed-phase test system.
Advanced statistical methods for improved data analysis of NASA astrophysics missions
NASA Technical Reports Server (NTRS)
Feigelson, Eric D.
1992-01-01
The investigators under this grant studied ways to improve the statistical analysis of astronomical data. They looked at existing techniques, the development of new techniques, and the production and distribution of specialized software to the astronomical community. Abstracts of nine papers that were produced are included, as well as brief descriptions of four software packages. The articles that are abstracted discuss analytical and Monte Carlo comparisons of six different linear least squares fits, a (second) paper on linear regression in astronomy, two reviews of public domain software for the astronomer, subsample and half-sample methods for estimating sampling distributions, a nonparametric estimation of survival functions under dependent competing risks, censoring in astronomical data due to nondetections, an astronomy survival analysis computer package called ASURV, and improving the statistical methodology of astronomical data analysis.
Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Anderson, W. Kyle
2000-01-01
Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.
Brown, K M; Elliott, S J; Leatherdale, S T; Robertson-Wilson, J
2015-12-01
The environments in which population health interventions occur shape both their implementation and outcomes. Hence, when evaluating these interventions, we must explore both intervention content and context. Mixed methods (integrating quantitative and qualitative methods) provide this opportunity. However, although criteria exist for establishing rigour in quantitative and qualitative research, there is poor consensus regarding rigour in mixed methods. Using the empirical example of school-based obesity interventions, this methodological review examined how mixed methods have been used and reported, and how rigour has been addressed. Twenty-three peer-reviewed mixed methods studies were identified through a systematic search of five databases and appraised using the guidelines for Good Reporting of a Mixed Methods Study. In general, more detailed description of data collection and analysis, integration, inferences and justifying the use of mixed methods is needed. Additionally, improved reporting of methodological rigour is required. This review calls for increased discussion of practical techniques for establishing rigour in mixed methods research, beyond those for quantitative and qualitative criteria individually. A guide for reporting mixed methods research in population health should be developed to improve the reporting quality of mixed methods studies. Through improved reporting, mixed methods can provide strong evidence to inform policy and practice. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Buchari; Tarigan, U.; Ambarita, M. B.
2018-02-01
PT. XYZ is a wood processing company that produces semi-finished wood under a make-to-order production system. In the production process, the production line is not balanced; the imbalance is caused by differences in cycle time between work stations. In addition, the material flow pattern is irregular, resulting in backtracking and long displacement distances. This study aimed to obtain an allocation of work elements to specific work stations and to propose an improved production layout based on the results of line balancing. The method used for balancing is Ranked Positional Weight (RPW), also known as the Helgeson-Birnie method, while the method used for layout improvement is Systematic Layout Planning (SLP). Using RPW, line efficiency increases to 84.86% and balance delay decreases to 15.14%. Repairing the layout using SLP also gives good results, reducing the path length from 213.09 meters to 133.82 meters, a decrease of 37.2%.
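The Ranked Positional Weight assignment can be sketched directly: compute each task's weight as its own time plus the times of every task that must follow it, then assign tasks in descending weight to the earliest station that has spare cycle time and already holds all predecessors. The task set below is hypothetical, not the company's actual work elements.

```python
def rpw_balance(times, prec, cycle):
    # Ranked Positional Weight (Helgeson-Birnie) line balancing.
    # times: task -> duration; prec: task -> set of immediate predecessors.
    succ = {t: set() for t in times}
    for t, ps in prec.items():
        for p in ps:
            succ[p].add(t)

    def descendants(t):
        out, stack = set(), [t]
        while stack:
            for s in succ[stack.pop()]:
                if s not in out:
                    out.add(s)
                    stack.append(s)
        return out

    # Positional weight = own time + times of every task that must follow.
    pw = {t: times[t] + sum(times[d] for d in descendants(t)) for t in times}
    stations, assigned = [], {}
    for t in sorted(times, key=lambda t: -pw[t]):
        for i, st in enumerate(stations):
            if (sum(times[x] for x in st) + times[t] <= cycle
                    and all(assigned[p] <= i for p in prec.get(t, ()))):
                st.append(t)
                assigned[t] = i
                break
        else:
            stations.append([t])
            assigned[t] = len(stations) - 1
    efficiency = sum(times.values()) / (len(stations) * cycle)
    return stations, efficiency
```

Because a predecessor's positional weight always exceeds its successors', tasks are guaranteed to be placed after all their predecessors, so the `assigned[p] <= i` check never fails with a missing key. Line efficiency here is total task time over (stations × cycle time), the same quantity the abstract reports as 84.86%.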
Chinda, Betty; Medvedev, George; Siu, William; Ester, Martin; Arab, Ali; Gu, Tao; Moreno, Sylvain; D’Arcy, Ryan C N; Song, Xiaowei
2018-01-01
Introduction Haemorrhagic stroke is of significant healthcare concern due to its association with high mortality and lasting impact on the survivors’ quality of life. Treatment decisions and clinical outcomes depend strongly on the size, spread and location of the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while research-based approaches are yet to be tested for clinical utility; there is a demonstrated need for developing effective solutions. The project under review investigates the development of an automatic NCCT-based haematoma computation tool in support of accurate quantification of haematoma volumes. Methods and analysis Several existing research methods for haematoma volume estimation are studied. Selected methods are tested using NCCT images of patients diagnosed with acute haemorrhagic stroke. For inter-rater and intrarater reliability evaluation, different raters will analyse haemorrhage volumes independently. The efficiency with respect to time of haematoma volume assessments will be examined to compare with the results from routine clinical evaluations and planimetry assessment that are known to be more accurate. The project will target the development of an enhanced solution by adapting existing methods and integrating machine learning algorithms. NCCT-based information of brain haemorrhage (eg, size, volume, location) and other relevant information (eg, age, sex, risk factor, comorbidities) will be used in relation to clinical outcomes with future project development. Validity and reliability of the solution will be examined for potential clinical utility. Ethics and dissemination The project including procedures for deidentification of NCCT data has been ethically approved. 
The study involves secondary use of existing data and does not require new consent of participation. The team consists of clinical neuroimaging scientists, computing scientists and clinical professionals in neurology and neuroradiology and includes patient representatives. Research outputs will be disseminated following knowledge translation plans towards improving stroke patient care. Significant findings will be published in scientific journals. Anticipated deliverables include computer solutions for improved clinical assessment of haematoma using NCCT. PMID:29674371
Douville, Christopher; Masica, David L.; Stenson, Peter D.; Cooper, David N.; Gygax, Derek M.; Kim, Rick; Ryan, Michael
2015-01-01
Insertion/deletion variants (indels) alter protein sequence and length, yet are highly prevalent in healthy populations, presenting a challenge to bioinformatics classifiers. Commonly used features (DNA and protein sequence conservation, indel length, and occurrence in repeat regions) are useful for inference of protein damage. However, these features can cause false positives when predicting the impact of indels on disease. Existing methods for indel classification suffer from low specificities, severely limiting clinical utility. Here, we further develop our variant effect scoring tool (VEST) to include the classification of in-frame and frameshift indels (VEST-indel) as pathogenic or benign. We apply 24 features, including a new "PubMed" feature, to estimate a gene's importance in human disease. When compared with four existing indel classifiers, our method achieves a drastically reduced false-positive rate, improving specificity by as much as 90%. This approach of estimating gene importance might be generally applicable to missense and other bioinformatics pathogenicity predictors, which often fail to achieve high specificity. Finally, we tested all possible meta-predictors that can be obtained from combining the four different indel classifiers using Boolean conjunctions and disjunctions, and derived a meta-predictor with improved performance over any individual method. PMID:26442818
Douville, Christopher; Masica, David L; Stenson, Peter D; Cooper, David N; Gygax, Derek M; Kim, Rick; Ryan, Michael; Karchin, Rachel
2016-01-01
Insertion/deletion variants (indels) alter protein sequence and length, yet are highly prevalent in healthy populations, presenting a challenge to bioinformatics classifiers. Commonly used features--DNA and protein sequence conservation, indel length, and occurrence in repeat regions--are useful for inference of protein damage. However, these features can cause false positives when predicting the impact of indels on disease. Existing methods for indel classification suffer from low specificities, severely limiting clinical utility. Here, we further develop our variant effect scoring tool (VEST) to include the classification of in-frame and frameshift indels (VEST-indel) as pathogenic or benign. We apply 24 features, including a new "PubMed" feature, to estimate a gene's importance in human disease. When compared with four existing indel classifiers, our method achieves a drastically reduced false-positive rate, improving specificity by as much as 90%. This approach of estimating gene importance might be generally applicable to missense and other bioinformatics pathogenicity predictors, which often fail to achieve high specificity. Finally, we tested all possible meta-predictors that can be obtained from combining the four different indel classifiers using Boolean conjunctions and disjunctions, and derived a meta-predictor with improved performance over any individual method. © 2015 The Authors. Human Mutation published by Wiley Periodicals, Inc.
2013-01-01
Background Healthcare technology and quality improvement programs have been identified as a means to influence healthcare costs and healthcare quality in Canada. This study seeks to identify whether a hospital's ability to implement healthcare technology was related to the usage of quality improvement programs within the hospital and whether the culture within a hospital plays a role in the adoption of quality improvement programs. Methods A cross-sectional study of Canadian hospitals was conducted in 2010. The sample consisted of hospital administrators selected by provincial review boards. The questionnaire consisted of 3 sections: 20 healthcare technology items, 16 quality improvement program items and 63 culture items. Results Rasch model analysis revealed that a hierarchy existed among the healthcare technologies based upon the difficulty of implementation. The results also showed that a significant relationship existed between the ability to implement healthcare technologies and the number of quality improvement programs adopted. In addition, culture within a hospital served a mediating role in the adoption of quality improvement programs. Conclusions Healthcare technologies each have different levels of implementation difficulty. As a consequence, hospitals need to understand their current level of capability before selecting a particular technology in order to assess the level of resources needed. Further, the usage of quality improvement programs is related to the ability to implement technology and the culture within a hospital. PMID:24119419
Li, Qingguo
2017-01-01
With the advancements in micro-electromechanical systems (MEMS) technologies, magnetic and inertial sensors are becoming more accurate, lightweight, smaller in size and lower in cost, which in turn boosts their applications in human movement analysis. However, challenges still exist in the field of sensor orientation estimation, where magnetic disturbance represents one of the obstacles limiting their practical application. The objective of this paper is to systematically analyze exactly how magnetic disturbance affects the attitude and heading estimation for a magnetic and inertial sensor. First, we review four major components dealing with magnetic disturbance, namely decoupling attitude estimation from magnetic reading, gyro bias estimation, adaptive strategies of compensating magnetic disturbance and sensor fusion algorithms, and analyze the features of existing methods for each component. Second, to understand each component in magnetic disturbance rejection, four representative sensor fusion methods were implemented, including the gradient descent algorithm, the improved explicit complementary filter, the dual-linear Kalman filter and the extended Kalman filter. Finally, a new standardized testing procedure was developed to objectively assess the performance of each method against magnetic disturbance. Based upon the testing results, the strengths and weaknesses of the existing sensor fusion methods are easily examined, and suggestions are presented for selecting a proper sensor fusion algorithm or developing a new sensor fusion method. PMID:29283432
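The filter families surveyed in this abstract can be illustrated with a toy example. The sketch below is a minimal one-axis complementary filter, not any of the four implementations evaluated in the paper: the signals, sampling interval `dt` and weight `alpha` are all invented for illustration.

```python
# Minimal 1-axis complementary filter: fuse a gyroscope rate (fast but
# drifting) with an accelerometer tilt reading (noisy but drift-free).
# All signals here are synthetic; alpha and dt are illustrative values.

def complementary_filter(gyro_rates, accel_angles, dt=0.01, alpha=0.98):
    """Return tilt-angle estimates; alpha weights the gyro integration."""
    angle = accel_angles[0]          # initialise from the accelerometer
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        # Integrate the gyro, then pull the estimate toward the
        # accelerometer reading to suppress drift.
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# Synthetic example: true tilt is a constant 0.5 rad, gyro has a pure bias.
n = 1000
true_angle = 0.5
gyro = [0.02] * n                    # bias only, no real motion
accel = [true_angle] * n             # noise-free for brevity
est = complementary_filter(gyro, accel)
```

With a biased gyro and a clean accelerometer, the estimate settles near the true angle with only a small steady-state offset, which is the drift-rejection behaviour the surveyed filters share.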
Ilovitsh, Tali; Meiri, Amihai; Ebeling, Carl G.; Menon, Rajesh; Gerton, Jordan M.; Jorgensen, Erik M.; Zalevsky, Zeev
2013-01-01
Localization of a single fluorescent particle with sub-diffraction-limit accuracy is a key capability in localization microscopy. Existing methods such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) achieve localization accuracies for single emitters that can be an order of magnitude better than the conventional resolving capabilities of optical microscopy. However, these techniques require a sparse distribution of simultaneously activated fluorophores in the field of view, resulting in longer acquisition times for constructing the full image. In this paper we present the use of a nonlinear image decomposition algorithm termed K-factor, which reduces an image into a nonlinear set of contrast-ordered decompositions whose joint product reassembles the original image. The K-factor technique, when applied to raw data prior to localization, can improve the localization accuracy of standard existing methods, and also enables the localization of overlapping particles, allowing the use of increased fluorophore activation density and thereby increased data collection speed. Numerical simulations of fluorescence data with random probe positions, especially at high densities of activated fluorophores, demonstrate an improvement of up to 85% in localization precision compared to single-fitting techniques. Implementing the proposed concept on experimental data of cellular structures yielded a 37% improvement in resolution for the same super-resolution image acquisition time, and a 42% decrease in the collection time of super-resolution data at the same resolution. PMID:24466491
Cement bond evaluation method in horizontal wells using segmented bond tool
NASA Astrophysics Data System (ADS)
Song, Ruolong; He, Li
2018-06-01
Most of the existing cement evaluation technologies suffer from tool eccentralization due to gravity in highly deviated and horizontal wells. This paper proposes a correction method to lessen the effects of tool eccentralization on cement bond evaluation results using a segmented bond tool, which has an omnidirectional sonic transmitter and eight segmented receivers evenly arranged around the tool 2 ft from the transmitter. Using a 3-D finite difference parallel numerical simulation method, we investigate the logging responses of the centred and eccentred segmented bond tool in a variety of bond conditions. From the numerical results, we find that the tool eccentricity and channel azimuth can be estimated from the measured sector amplitude. The average sector amplitude measured with an eccentred tool can be corrected to the value that would be measured with a centred tool, and the corrected amplitude is then used to calculate the channel size. The proposed method is applied to both synthetic and field data. For synthetic data, the method estimates the tool eccentricity with small error, and the bond map is improved after correction. For field data, the estimated tool eccentricity agrees well with the measured well deviation angle. Although the method still suffers from low accuracy in calculating the channel azimuth, the credibility of the corrected bond map is improved, especially in horizontal wells. The method makes it possible to evaluate bond conditions in horizontal wells using an existing logging tool, and the numerical results in this paper can aid in understanding measurements of the segmented tool in both vertical and horizontal wells.
Disability Diversity Training in the Workplace: Systematic Review and Future Directions.
Phillips, Brian N; Deiches, Jon; Morrison, Blaise; Chan, Fong; Bezyak, Jill L
2016-09-01
Purpose Misinformation and negative attitudes toward disability contribute to lower employment rates among people with disabilities. Diversity training is an intervention intended to improve intergroup relations and reduce prejudice. We conducted a systematic review to determine the use and effectiveness of disability diversity training aimed at improving employment outcomes for employees with disabilities. Methods Five databases were searched for peer-reviewed studies of disability diversity training interventions provided within the workplace. Studies identified for inclusion were assessed for quality of methodology. Results Of the total of 1322 articles identified by the search, three studies met the criteria for inclusion. Two of the three articles focused specifically on training to improve outcomes related to workplace injuries among existing employees. The other study provided an initial test of a more general disability diversity training program. Conclusions There is currently a lack of empirically validated diversity training programs that focus specifically on disability. A number of disability diversity trainings and resources exist, but none have been well researched. Related literature on diversity training and disability awareness suggests the possibility of enhancing diversity training practices through considerations of training design, content, participants, and outcomes. By integrating best practices in workplace diversity training with existing disability training resources, practitioners and researchers may be able to design effective disability diversity training programs.
Fincel, M.J.; Chipps, S.R.; Bennett, D.H.
2009-01-01
Methods for improving spawning habitat for lakeshore spawning kokanee, Oncorhynchus nerka (Walbaum), were explored by quantifying incubation success of embryos exposed to three substrate treatments in Lake Pend Oreille, Idaho, USA. Substrate treatments included no modification that used existing gravels in the lake (EXISTING), a cleaned substrate treatment where existing gravels were sifted in the water column to remove silt (CLEANED) and the addition of new, silt-free gravel (ADDED). Incubation success was evaluated using Whitlock-Vibert incubation boxes buried within each substrate treatment that contained recently fertilised embryos. Upon retrieval, live and dead sac fry and eyed eggs were enumerated to determine incubation success (sac fry and eyed eggs × 100/number of fertilised embryos). Incubation success varied significantly among locations and redd treatments. In general, incubation success among ADDED redds (0.0-13.0%) was significantly lower than that for EXISTING (1.4-61.0%) and CLEANED (0.4-62.5%) redds. Adding new gravel to spawning areas changed the morphometry of the gravel-water interface and probably exposed embryos to disturbance from wave action and reduced embryo survival. Moreover, efforts to improve spawning habitat for lakeshore spawning kokanee should consider water depth and location (e.g. protected shorelines) as important variables. Adding clean gravel to existing spawning areas may provide little benefit if water depth or lake-bottom morphometry is altered. © 2009 Blackwell Publishing Ltd.
Tatinati, Sivanagaraja; Nazarpour, Kianoush; Tech Ang, Wei; Veluvolu, Kalyana C
2016-08-01
Successful treatment of tumors with motion-adaptive radiotherapy requires accurate prediction of respiratory motion, ideally with a prediction horizon larger than the latency in radiotherapy system. Accurate prediction of respiratory motion is however a non-trivial task due to the presence of irregularities and intra-trace variabilities, such as baseline drift and temporal changes in fundamental frequency pattern. In this paper, to enhance the accuracy of the respiratory motion prediction, we propose a stacked regression ensemble framework that integrates heterogeneous respiratory motion prediction algorithms. We further address two crucial issues for developing a successful ensemble framework: (1) selection of appropriate prediction methods to ensemble (level-0 methods) among the best existing prediction methods; and (2) finding a suitable generalization approach that can successfully exploit the relative advantages of the chosen level-0 methods. The efficacy of the developed ensemble framework is assessed with real respiratory motion traces acquired from 31 patients undergoing treatment. Results show that the developed ensemble framework improves the prediction performance significantly compared to the best existing methods. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
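The stacking idea described in this abstract can be sketched generically. In the toy below, the level-0 learners (persistence and linear extrapolation) and the least-squares level-1 combiner are illustrative stand-ins, not the level-0 methods or generalization approach the paper actually selected; the "respiratory" trace is a synthetic ramp.

```python
# Sketch of stacked regression for one-step-ahead prediction.
# Level-0 learners (persistence, linear extrapolation) are illustrative
# stand-ins; level-1 combines them with least-squares weights.

def persistence(series, t):
    return series[t - 1]                      # predict "no change"

def extrapolation(series, t):
    return 2 * series[t - 1] - series[t - 2]  # linear trend continuation

def fit_level1_weights(series, start, end):
    """Solve the 2x2 normal equations for the combination weights."""
    p = [(persistence(series, t), extrapolation(series, t))
         for t in range(start, end)]
    y = series[start:end]
    a11 = sum(x[0] * x[0] for x in p); a12 = sum(x[0] * x[1] for x in p)
    a22 = sum(x[1] * x[1] for x in p)
    b1 = sum(x[0] * yt for x, yt in zip(p, y))
    b2 = sum(x[1] * yt for x, yt in zip(p, y))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

def ensemble_predict(series, t, w):
    return w[0] * persistence(series, t) + w[1] * extrapolation(series, t)

# Toy "respiratory" trace: a noiseless linear ramp, where extrapolation
# is exact and persistence lags by one step.
trace = [0.1 * t for t in range(100)]
w = fit_level1_weights(trace, 2, 80)
pred = ensemble_predict(trace, 90, w)
```

On this trace the level-1 fit learns to trust the exact level-0 learner, which is the point of stacking: the combiner exploits whichever base predictor performs best on held-out data.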
Yong, Kar Wey; Wan Safwani, Wan Kamarul Zaman; Xu, Feng; Wan Abas, Wan Abu Bakar; Choi, Jane Ru; Pingguan-Murphy, Belinda
2015-08-01
Mesenchymal stem cells (MSCs) hold many advantages over embryonic stem cells (ESCs) and other somatic cells in clinical applications. MSCs are multipotent cells with strong immunosuppressive properties. They can be harvested from various locations in the human body (e.g., bone marrow and adipose tissues). Cryopreservation represents an efficient method for the preservation and pooling of MSCs, to obtain the cell counts required for clinical applications, such as cell-based therapies and regenerative medicine. Upon cryopreservation, it is important to preserve MSCs functional properties including immunomodulatory properties and multilineage differentiation ability. Further, a biosafety evaluation of cryopreserved MSCs is essential prior to their clinical applications. However, the existing cryopreservation methods for MSCs are associated with notable limitations, leading to a need for new or improved methods to be established for a more efficient application of cryopreserved MSCs in stem cell-based therapies. We review the important parameters for cryopreservation of MSCs and the existing cryopreservation methods for MSCs. Further, we also discuss the challenges to be addressed in order to preserve MSCs effectively for clinical applications.
Clare, John; McKinney, Shawn T; DePue, John E; Loftin, Cynthia S
2017-10-01
It is common to use multiple field sampling methods when implementing wildlife surveys to compare method efficacy or cost efficiency, integrate distinct pieces of information provided by separate methods, or evaluate method-specific biases and misclassification error. Existing models that combine information from multiple field methods or sampling devices permit rigorous comparison of method-specific detection parameters, enable estimation of additional parameters such as false-positive detection probability, and improve occurrence or abundance estimates, but with the assumption that the separate sampling methods produce detections independently of one another. This assumption is tenuous if methods are paired or deployed in close proximity simultaneously, a common practice that reduces the additional effort required to implement multiple methods and reduces the risk that differences between method-specific detection parameters are confounded by other environmental factors. We develop occupancy and spatial capture-recapture models that permit covariance between the detections produced by different methods, use simulation to compare estimator performance of the new models to models assuming independence, and provide an empirical application based on American marten (Martes americana) surveys using paired remote cameras, hair catches, and snow tracking. Simulation results indicate existing models that assume that methods independently detect organisms produce biased parameter estimates and substantially understate estimate uncertainty when this assumption is violated, while our reformulated models are robust to either methodological independence or covariance. Empirical results suggested that remote cameras and snow tracking had comparable probability of detecting present martens, but that snow tracking also produced false-positive marten detections that could potentially substantially bias distribution estimates if not corrected for. 
Remote cameras detected marten individuals more readily than passive hair catches. Inability to photographically distinguish individual sex did not appear to induce negative bias in camera density estimates; instead, hair catches appeared to produce detection competition between individuals that may have been a source of negative bias. Our model reformulations broaden the range of circumstances in which analyses incorporating multiple sources of information can be robustly used, and our empirical results demonstrate that using multiple field-methods can enhance inferences regarding ecological parameters of interest and improve understanding of how reliably survey methods sample these parameters. © 2017 by the Ecological Society of America.
McCaul, Michael; de Waal, Ben; Hodkinson, Peter; Pigoga, Jennifer L; Young, Taryn; Wallis, Lee A
2018-02-05
Methods for developing new (de novo) clinical practice guidelines (CPGs) have received substantial attention. However, this volume of literature is not matched by research into alternative methods of CPG development using existing CPG documents, a specific issue for guideline development groups in low- and middle-income countries. We report on how we developed a context-specific prehospital CPG using an alternative guideline development method. Difficulties experienced and lessons learnt in applying existing global guidelines' recommendations to a national context are highlighted. The project produced the first emergency care CPG for prehospital providers in Africa. It drew on more than 270 CPGs and produced over 1000 recommendations for prehospital emergency care. We encountered various difficulties, including (1) applicability issues: few prehospital CPGs applicable to Africa; (2) evidence synthesis: heterogeneous levels of evidence classifications; and (3) guideline quality. Learning points included (1) focusing on key CPGs and evidence mapping, (2) searching other resources for CPGs, (3) broad representation on CPG advisory boards and (4) transparency and knowledge translation. Re-inventing the wheel to produce CPGs is not always feasible. We hope this paper will encourage further projects to use existing CPGs in developing guidance to improve patient care in resource-limited settings.
Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality
NASA Astrophysics Data System (ADS)
Zhang, Xiaoxiao
Previous studies of ocular chromatic aberrations have concentrated on the chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. This insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by an interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF and the entrance pupil of the eye. In the retinal image obtained after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine the ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. 
The results show that: (1) A simple relationship exists between two major chromatic aberrations and the location of the pupil; (2) The ocular CDM is measurable and varies among individuals; (3) All existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; (4) Ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.
Determination of laser cutting process conditions using the preference selection index method
NASA Astrophysics Data System (ADS)
Madić, Miloš; Antucheviciene, Jurgita; Radovanović, Miroslav; Petković, Dušan
2017-03-01
Determination of adequate parameter settings for simultaneous improvement of multiple quality and productivity characteristics is of great practical importance in laser cutting. This paper discusses the application of the preference selection index (PSI) method for discrete optimization of the CO2 laser cutting of stainless steel. The main motivation for applying the PSI method is that it is an almost unexplored multi-criteria decision making (MCDM) method, and moreover, it does not require assessing the relative significance of the considered criteria. After reviewing and comparing the existing approaches for determining laser cutting parameter settings, the application of the PSI method is explained in detail. The experiment was conducted using Taguchi's L27 orthogonal array. Roughness of the cut surface, heat affected zone (HAZ), kerf width and material removal rate (MRR) were considered as optimization criteria. The proposed methodology is found to be very useful in a real manufacturing environment since it involves simple calculations which are easy to understand and implement. However, while applying the PSI method it was observed that it may not be suitable in situations where a large number of alternatives have attribute values (performances) very close to the preferred ones.
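As a rough illustration of the PSI calculation, the sketch below follows the standard PSI steps (criterion normalization, preference variation, deviation-based weights, overall index). The decision matrix is invented for illustration; it is not the paper's L27 experimental data.

```python
# Sketch of the preference selection index (PSI) computation for a
# discrete decision matrix. Rows are alternatives (parameter settings),
# columns are criteria; the numbers below are illustrative only.

def psi_rank(matrix, beneficial):
    """Return PSI scores; beneficial[j] is True for larger-is-better."""
    n = len(matrix)
    cols = list(zip(*matrix))
    # 1. Normalise each criterion to [0, 1].
    norm = [[(x / max(cols[j])) if beneficial[j] else (min(cols[j]) / x)
             for j, x in enumerate(row)] for row in matrix]
    ncols = list(zip(*norm))
    # 2-4. Preference variation and deviation per criterion.
    means = [sum(c) / n for c in ncols]
    phi = [sum((x - mu) ** 2 for x in c) for c, mu in zip(ncols, means)]
    omega = [1.0 - p for p in phi]
    # 5. Criterion weights derived from the deviations — no user-supplied
    # relative significances are needed, the method's main selling point.
    total = sum(omega)
    weights = [o / total for o in omega]
    # 6. Overall preference selection index per alternative.
    return [sum(r * w for r, w in zip(row, weights)) for row in norm]

# Two criteria: surface roughness (smaller is better), MRR (larger is better).
settings = [[2.0, 30.0],
            [1.5, 25.0],
            [3.0, 40.0]]
scores = psi_rank(settings, beneficial=[False, True])
best = max(range(len(scores)), key=scores.__getitem__)
```

The alternative with the highest index is preferred; note that step 5 is what removes the need for subjective criterion weights.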
Combining large number of weak biomarkers based on AUC.
Yan, Li; Tian, Lili; Liu, Song
2015-12-20
Combining multiple biomarkers to improve diagnosis and/or prognosis accuracy is a common practice in clinical medicine. Both parametric and non-parametric methods have been developed for finding the optimal linear combination of biomarkers to maximize the area under the receiver operating characteristic curve (AUC), primarily focusing on the setting with a small number of well-defined biomarkers. This problem becomes more challenging when the number of observations is not an order of magnitude greater than the number of variables, especially when the involved biomarkers are relatively weak. Such settings are not uncommon in certain applied fields. The first aim of this paper is to empirically evaluate the performance of existing linear combination methods under such settings. The second aim is to propose a new combination method, namely, the pairwise approach, to maximize AUC. Our simulation studies demonstrated that the performance of several existing methods can become unsatisfactory as the number of markers becomes large, while the newly proposed pairwise method performs reasonably well. Furthermore, we apply all the combination methods to real datasets used for the development and validation of MammaPrint. The implication of our study for the design of optimal linear combination methods is discussed. Copyright © 2015 John Wiley & Sons, Ltd.
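The idea of maximizing empirical AUC over linear combinations can be sketched for a single pair of markers. The grid search below is a generic illustration of combining two markers, not the paper's pairwise estimator, and the marker data are synthetic.

```python
# Empirical AUC of a linear two-marker combination, with a grid search
# over the mixing angle. Illustrative only; data below are synthetic.
import math

def empirical_auc(cases, controls):
    """Mann-Whitney estimate: P(case score > control score), ties = 1/2."""
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

def best_pairwise_combo(cases, controls, steps=180):
    """Search combinations cos(a)*m1 + sin(a)*m2 for the largest AUC."""
    best = (-1.0, (1.0, 0.0))
    for k in range(steps):
        a = math.pi * k / steps          # angle parameterises the weights
        w = (math.cos(a), math.sin(a))
        s_cases = [w[0] * x + w[1] * y for x, y in cases]
        s_controls = [w[0] * x + w[1] * y for x, y in controls]
        best = max(best, (empirical_auc(s_cases, s_controls), w))
    return best

# Two weak markers that separate the groups only when combined.
cases    = [(1.0, 0.0), (0.0, 1.0), (0.8, 0.4), (0.4, 0.8)]
controls = [(0.9, -0.9), (-0.9, 0.9), (0.5, -0.6), (-0.6, 0.5)]
auc, weights = best_pairwise_combo(cases, controls)
```

Each marker alone gives a mediocre AUC here, while the best linear combination separates cases from controls perfectly, which is the weak-marker phenomenon the paper exploits pair by pair.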
ERIC Educational Resources Information Center
Rogers, Valerie; Salzeider, Christine; Holzum, Laura; Milbrandt, Tracy; Zahnd, Whitney; Puczynski, Mark
2016-01-01
Background: It is important that collaborative relationships exist in a community to improve access to needed services for children. Such partnerships foster preventive services, such as immunizations, and other services that protect the health and well-being of all children. Methods: A collaborative relationship in Illinois involving an academic…
ERIC Educational Resources Information Center
Shaikh, Ulfat; Nettiksimmons, Jasmine; Romano, Patrick
2011-01-01
Objective: To determine health care provider needs related to pediatric obesity management in rural California and to explore strategies to improve care through telehealth. Methods: Cross-sectional survey of health care providers who treated children and adolescents at 41 rural clinics with existing telehealth connectivity. Results: Most of the…
USDA-ARS?s Scientific Manuscript database
Unintended effects to food quality and composition occur no matter the method of plant improvement. The existence of unintended effects is perhaps not the most important point within the discussion, but rather the identity and significance of the compositional changes observed. Here we report the ex...
ERIC Educational Resources Information Center
Dryden, Eileen M.; Desmarais, Jeffrey; Arsenault, Lisa
2017-01-01
Background: Research shows that individuals with disabilities are more likely to experience abuse than their peers without disabilities. Yet, few evidenced-based abuse prevention interventions exist. This study examines whether positive outcomes identified previously in an evaluation of IMPACT:Ability were maintained 1 year later. Methods: A…
Pre-Service Teachers Learn to Teach Geography: A Suggested Course Model
ERIC Educational Resources Information Center
Mitchell, Jerry T.
2018-01-01
How to improve geography education via teacher preparation programs has been a concern for nearly three decades, but few examples of a single, comprehensive university-level course exist. The purpose of this article is to share the model of a pre-service geography education methods course. Within the course, geography content (physical and social)…
A review of methods for predicting air pollution dispersion
NASA Technical Reports Server (NTRS)
Mathis, J. J., Jr.; Grose, W. L.
1973-01-01
Air pollution modeling, and problem areas in air pollution dispersion modeling were surveyed. Emission source inventory, meteorological data, and turbulent diffusion are discussed in terms of developing a dispersion model. Existing mathematical models of urban air pollution, and highway and airport models are discussed along with their limitations. Recommendations for improving modeling capabilities are included.
Quality of Care for Myocardial Infarction in Rural and Urban Hospitals
ERIC Educational Resources Information Center
Baldwin, Laura-Mae; Chan, Leighton; Andrilla, C. Holly A.; Huff, Edwin D.; Hart, L. Gary
2010-01-01
Background: In the mid-1990s, significant gaps existed in the quality of acute myocardial infarction (AMI) care between rural and urban hospitals. Since then, overall AMI care quality has improved. This study uses more recent data to determine whether rural-urban AMI quality gaps have persisted. Methods: Using inpatient records data for 34,776…
Akins, Ralitsa B.; Handal, Gilbert A.
2009-01-01
Objective Although there is an expectation for outcomes-oriented training in residency programs, the reality is that few guidelines and examples exist as to how to provide this type of education and training. We aimed to improve patient care outcomes in our pediatric residency program by using quality improvement (QI) methods, tools, and approaches. Methods A series of QI projects were implemented over a 3-year period in a pediatric residency program to improve patient care outcomes and teach the residents how to use QI methods, tools, and approaches. Residents experienced practice-based learning and systems-based assessment through group projects and review of their own patient outcomes. Resident QI experiences were reviewed quarterly by the program director and were a mandatory part of resident training portfolios. Results Using QI methodology, we were able to improve the management of children with obesity, achieve high compliance with the national patient safety goals, improve the pediatric hotline service, and implement better patient flow in the resident continuity clinic. Conclusion Based on our experiences, we conclude that to successfully implement QI projects in residency programs, QI techniques must be formally taught, the opportunities for resident participation must be multiple and diverse, and QI outcomes should be incorporated in resident training and assessment so that residents experience the benefits of the QI intervention. The lessons learned from our experiences, as well as the projects we describe, can be easily deployed and implemented in other residency programs. PMID:21975995
Sheng, Ben; Marsh, Kimberly; Slavkovic, Aleksandra B; Gregson, Simon; Eaton, Jeffrey W; Bao, Le
2017-04-01
HIV prevalence data collected from routine HIV testing of pregnant women at antenatal clinics (ANC-RT) are potentially available from all facilities that offer testing services to pregnant women and can be used to improve estimates of national and subnational HIV prevalence trends. We develop methods to incorporate this new data source into the Joint United Nations Programme on HIV/AIDS (UNAIDS) Estimation and Projection Package in Spectrum 2017. We develop a new statistical model for incorporating ANC-RT HIV prevalence data, aggregated either to the health facility level (site-level) or regionally (census-level), to estimate HIV prevalence alongside existing sources of HIV prevalence data from ANC unlinked anonymous testing (ANC-UAT) and household-based national population surveys. Synthetic data are generated to understand how the availability of ANC-RT data affects the accuracy of various parameter estimates. We estimate HIV prevalence and additional parameters using both ANC-RT and other existing data. Fitting HIV prevalence using synthetic data generally gives precise estimates of the underlying trend and other parameters. More years of ANC-RT data should improve prevalence estimates. More ANC-RT sites and continuation with existing ANC-UAT sites may improve the estimate of calibration between ANC-UAT and ANC-RT sites. We have proposed methods to incorporate ANC-RT data into Spectrum to obtain more precise estimates of prevalence and other measures of the epidemic. Many assumptions about the accuracy, consistency, and representativeness of ANC-RT prevalence underlie the use of these data for monitoring HIV epidemic trends and should be tested as more data become available from national ANC-RT programs.
Yin, Zheng; Zhou, Xiaobo; Bakal, Chris; Li, Fuhai; Sun, Youxian; Perrimon, Norbert; Wong, Stephen TC
2008-01-01
Background The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has been dependent upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods which aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and tackle three challenging tasks, i.e. restoring pre-defined biologically meaningful phenotypes, differentiating novel phenotypes from known ones, and distinguishing novel phenotypes from one another. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens. Results Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. This method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian Mixture Model (GMM) is employed to estimate the distribution of each existing phenotype, and is then used as the reference distribution in the gap statistics.
This method is broadly applicable to a number of different types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images that are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell cycle phase identification using HeLa cells [Additional files 1, 3, 4] and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively with an accuracy range of 85%–90%. When our method is implemented in the context of a Drosophila genome-scale RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes towards the regulation of cell shape, it efficiently discovers meaningful new phenotypes and provides novel biological insight. We also propose a two-step procedure to modify the novelty detection method based on one-class SVM, so that it can be used for online phenotype discovery. Under different conditions, we compared the SVM-based method with our method using various datasets, and our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can be used to better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms. Conclusion We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. Experimental results also validate that our method performs consistently under different orders of image input, variations in starting conditions including the number and composition of existing phenotypes, and datasets from different screens. Based on our findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens. PMID:18534020
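The reference-distribution idea can be sketched in miniature: model each known phenotype as a Gaussian and flag cells that are far, in Mahalanobis distance, from every known phenotype as candidate novel phenotypes. The function names and the fixed threshold below are illustrative assumptions, not the paper's actual GMM/gap-statistic pipeline.

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of x from a Gaussian (mean, cov)."""
    diff = x - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

def flag_novel(cells, phenotypes, threshold=16.0):
    """Flag cells whose squared distance to every known phenotype
    exceeds the threshold as candidate novel phenotypes."""
    flags = []
    for x in cells:
        d = min(mahalanobis_sq(x, m, c) for m, c in phenotypes)
        flags.append(d > threshold)
    return flags

# Known phenotype: standard Gaussian at the origin (2 features per cell).
phenotypes = [(np.zeros(2), np.eye(2))]
cells = np.array([[0.5, -0.3],   # typical cell
                  [9.0, 9.0]])   # far outlier -> candidate novel phenotype
print(flag_novel(cells, phenotypes))  # → [False, True]
```

In the full method the per-phenotype Gaussians would come from a fitted GMM, and candidate clusters would be merged iteratively using the gap statistic rather than a hard threshold.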
Photometry unlocks 3D information from 2D localization microscopy data.
Franke, Christian; Sauer, Markus; van de Linde, Sebastian
2017-01-01
We developed a straightforward photometric method, temporal, radial-aperture-based intensity estimation (TRABI), that allows users to extract 3D information from existing 2D localization microscopy data. TRABI uses the accurate determination of photon numbers in different regions of the emission pattern of single emitters to generate a z-dependent photometric parameter. This method can determine fluorophore positions up to 600 nm from the focal plane and can be combined with biplane detection to further improve axial localization.
IMU-based online kinematic calibration of robot manipulator.
Du, Guanglong; Zhang, Ping
2013-01-01
Robot calibration is a useful diagnostic method for improving the positioning accuracy in robot production and maintenance. An online robot self-calibration method based on an inertial measurement unit (IMU) is presented in this paper. The method requires that the IMU is rigidly attached to the robot manipulator, which makes it possible to obtain the orientation of the manipulator from the orientation of the IMU in real time. This paper proposes an efficient approach which incorporates the Factored Quaternion Algorithm (FQA) and a Kalman Filter (KF) to estimate the orientation of the IMU. Then, an Extended Kalman Filter (EKF) is used to estimate kinematic parameter errors. Using this orientation estimation method results in improved reliability and accuracy in determining the orientation of the manipulator. Compared with existing vision-based self-calibration methods, the great advantage of this method is that it does not need complex steps, such as camera calibration, image capture, and corner detection, which makes the robot calibration procedure more autonomous in a dynamic manufacturing environment. Experimental studies on a GOOGOL GRB3016 robot show that this method has better accuracy, convenience, and effectiveness than vision-based methods.
Indonesian name matching using machine learning supervised approach
NASA Astrophysics Data System (ADS)
Alifikri, Mohamad; Arif Bijaksana, Moch.
2018-03-01
Most existing name matching methods were developed for English and therefore cover the characteristics of that language. To date, no method has been specifically designed and implemented for Indonesian names. The purpose of this thesis is to develop an Indonesian name matching dataset as a contribution to academic research and to propose a suitable feature set that combines the context of name strings with their permute-winkler scores. Machine learning classification algorithms are taken as the method for performing name matching. Based on the experiments, using a tuned Random Forest algorithm and the proposed features improves matching performance by approximately 1.7% and reduces the misclassifications of state-of-the-art methods by up to 70%. This improved performance makes the matching system more effective and reduces the risk of misclassified matches.
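The abstract does not list the exact feature set, so the sketch below only illustrates the kind of string-similarity features (an edit-distance similarity plus an order-insensitive token overlap, useful for permuted Indonesian names) that could feed a Random Forest classifier; the function names are hypothetical.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def name_pair_features(a, b):
    """Features for one name pair: normalized edit similarity and
    token-set Jaccard overlap (order-insensitive for permuted names)."""
    a, b = a.lower(), b.lower()
    edit_sim = 1 - levenshtein(a, b) / max(len(a), len(b), 1)
    ta, tb = set(a.split()), set(b.split())
    jaccard = len(ta & tb) / len(ta | tb)
    return [edit_sim, jaccard]

print(levenshtein("kitten", "sitting"))  # → 3
# Token Jaccard is 1.0 for permuted names:
print(name_pair_features("budi santoso", "santoso budi")[1])  # → 1.0
```

Feature vectors like these, computed per candidate pair, would then be fed to a classifier such as scikit-learn's `RandomForestClassifier` trained on labeled match/non-match pairs.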
Engagement Assessment Using EEG Signals
NASA Technical Reports Server (NTRS)
Li, Feng; Li, Jiang; McKenzie, Frederic; Zhang, Guangfan; Wang, Wei; Pepe, Aaron; Xu, Roger; Schnell, Thomas; Anderson, Nick; Heitkamp, Dean
2012-01-01
In this paper, we present methods to analyze and improve an EEG-based engagement assessment approach, consisting of data preprocessing, feature extraction and engagement state classification. During data preprocessing, spikes, baseline drift and saturation caused by recording devices in EEG signals are identified and eliminated, and a wavelet based method is utilized to remove ocular and muscular artifacts in the EEG recordings. In feature extraction, power spectral densities in 1 Hz bins are calculated as features, and these features are analyzed using the Fisher score and the one-way ANOVA method. In the classification step, a committee classifier is trained based on the extracted features to assess engagement status. Finally, experimental results showed that significant differences exist in the extracted features among different subjects; we implemented a feature normalization procedure to mitigate these differences and significantly improved the engagement assessment performance.
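The PSD-per-1 Hz-bin features and the Fisher score can be sketched as follows; the simple FFT periodogram here is a stand-in for whatever spectral estimator the study actually used, and the function names are illustrative.

```python
import numpy as np

def psd_1hz_features(signal, fs):
    """Power spectrum collapsed into 1 Hz bins via an FFT periodogram."""
    spec = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    return [spec[(freqs >= f) & (freqs < f + 1)].sum()
            for f in range(int(fs // 2))]

def fisher_score(x1, x2):
    """Fisher score of one feature across two classes."""
    return (x1.mean() - x2.mean()) ** 2 / (x1.var() + x2.var() + 1e-12)

fs = 64
t = np.arange(fs) / fs
feats = psd_1hz_features(np.sin(2 * np.pi * 10 * t), fs)
print(np.argmax(feats))  # → 10  (energy lands in the 10 Hz bin)
```

Features with high Fisher scores discriminate well between engagement states and would be kept for the committee classifier.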
Compact illumination optic with three freeform surfaces for improved beam control.
Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Hernández, Maikel; Blen, José; Grabovičkić, Dejan; Benítez, Pablo; Miñano, Juan Carlos; Thienpont, Hugo; Duerr, Fabian
2017-11-27
Multi-chip and large-size LEDs dominate the lighting market in developed countries these days. Nevertheless, a general optical design method to create prescribed intensity patterns for this type of extended source does not exist. We present a design strategy in which the source and the target pattern are described by means of "edge wavefronts" of the system. The goal is then to find an optic coupling these wavefronts, which in the current work is a monolithic part comprising up to three freeform surfaces calculated with the simultaneous multiple surface (SMS) method. The resulting optic fully controls, for the first time, three freeform wavefronts, one more than previous SMS designs. Simulations with extended LEDs demonstrate improved intensity tailoring capabilities, confirming the effectiveness of our method and suggesting that enhanced performance features can be achieved by controlling additional wavefronts.
Multilabel learning via random label selection for protein subcellular multilocations prediction.
Wang, Xiao; Li, Guo-Zheng
2013-01-01
Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most of the existing protein subcellular localization methods can only deal with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, namely transforming the multilocation proteins into multiple proteins with single locations, which does not take correlations among different subcellular locations into account. In this paper, a novel method named random label selection (RALS) (multilabel learning via RALS), which extends the simple binary relevance (BR) method, is proposed to learn from multilocation proteins in an effective and efficient way. RALS does not explicitly find the correlations among labels, but rather implicitly attempts to learn the label correlations from data by augmenting the original feature space with randomly selected labels as additional input features. Through a fivefold cross-validation test on a benchmark data set, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to improved prediction performance. Experimental results on two benchmark data sets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting the subcellular multilocations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.
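The core RALS manipulation, augmenting the feature space with randomly selected label columns, can be sketched in a few lines. Names are illustrative; at prediction time the appended label values would come from a first-stage BR classifier rather than the ground truth.

```python
import numpy as np

def rals_augment(X, Y, n_extra, rng):
    """Augment the feature matrix with a random subset of label columns,
    so a per-label classifier can implicitly pick up label correlations."""
    chosen = rng.choice(Y.shape[1], size=n_extra, replace=False)
    return np.hstack([X, Y[:, chosen]]), chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))          # 6 proteins, 4 features
Y = rng.integers(0, 2, size=(6, 5))  # 5 subcellular-location labels
X_aug, chosen = rals_augment(X, Y, n_extra=2, rng=rng)
print(X_aug.shape)  # → (6, 6)
```

A standard binary classifier trained per label on `X_aug` then sees the selected labels as ordinary input features, which is how the label correlations enter implicitly.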
Improve accuracy for automatic acetabulum segmentation in CT images.
Liu, Hao; Zhao, Jianning; Dai, Ning; Qian, Hongbo; Tang, Yuehong
2014-01-01
Separation of the femur head and acetabulum is one of the main difficulties in the diseased hip joint due to deformed shapes and the extreme narrowness of the joint space. Improving segmentation accuracy is the key challenge for existing automatic or semi-automatic segmentation methods. In this paper, we propose a new method to improve the accuracy of the segmented acetabulum using surface fitting techniques, which essentially consists of three parts: (1) design a surface iteration process to obtain an optimized surface; (2) replace the ellipsoid fitting with a two-phase quadric surface fitting; (3) introduce a normal matching method and an optimization region method to capture edge points for the fitted quadric surface. Furthermore, this paper uses in vivo CT data sets of 40 actual patients (with 79 hip joints). Test results for these clinical cases show that: (1) the average error of the quadric surface fitting method is 2.3 mm; (2) the accuracy ratio of automatically recognized contours is larger than 89.4%; (3) the error ratio of section contours is less than 10% for acetabulums without severe malformation and less than 30% for acetabulums with severe malformation. Compared with similar methods, the accuracy of our method, which is applied in a software system, is significantly enhanced.
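The two-phase quadric fitting is not specified in detail here, but the building block it rests on, a single least-squares quadric surface fit, can be sketched as follows (synthetic data, illustrative names):

```python
import numpy as np

def fit_quadric(x, y, z):
    """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f."""
    A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

# Synthetic joint-surface patch with known coefficients (no noise).
rng = np.random.default_rng(1)
x, y = rng.normal(size=200), rng.normal(size=200)
true = np.array([0.5, 0.3, 0.1, -1.0, 2.0, 4.0])
z = true[0]*x**2 + true[1]*y**2 + true[2]*x*y + true[3]*x + true[4]*y + true[5]
print(np.allclose(fit_quadric(x, y, z), true))  # → True
```

In the paper's pipeline, the edge points captured by normal matching would supply the (x, y, z) samples, and the fit would be run per phase of the two-phase scheme.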
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCombes, Lucy, E-mail: l.mccombes@leedsbeckett.ac.uk; Vanclay, Frank, E-mail: frank.vanclay@rug.nl; Evers, Yvette, E-mail: y.evers@tft-earth.org
The discourse on the social impacts of tourism needs to shift from the current descriptive critique of tourism to considering what can be done in actual practice to embed the management of tourism's social impacts into the existing planning, product development and operational processes of tourism businesses. A pragmatic approach for designing research methodologies, social management systems and initial actions, shaped by real-world operational constraints and the existing systems used in the tourism industry, is needed. Our pilot study with a small Bulgarian travel company put social impact assessment (SIA) to the test to see if it could provide this desired approach and assist in implementing responsible tourism development practice, especially in small tourism businesses. Our findings showed that our adapted SIA method has value as a practical method for embedding a responsible tourism approach. While there were some challenges, SIA proved to be effective in assisting the staff of our test-case tourism business to better understand their social impacts on their local communities and to identify actions to take. - Highlights: • A pragmatic approach is needed for the responsible management of the social impacts of tourism. • Our adapted social impact assessment (SIA) method has value as a practical method. • SIA can be embedded into tourism businesses' existing ‘ways of doing things’. • We identified challenges and ways to improve our method to better suit the small tourism business context.
Methods for artifact detection and removal from scalp EEG: A review.
Islam, Md Kafiul; Rastegarnia, Amir; Yang, Zhi
2016-11-01
Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
Rural women in Africa and technological change: some issues.
Date-bah, E; Stevens, Y
1981-01-01
The attempt is made in this discussion to highlight some of the important sociological and technical issues relating to rural women in Africa and technological change which appear to have been underplayed, misconceived or overlooked in the past. Attention is directed to the rural woman as a member of the family unit, the image of the rural man, rural women as a diversified group, community and national governmental commitment to rural technology innovations, the use of already existing traditional groups and institutions to effect rural technological change, design specifications and shortcomings of equipment and tools (manufacturing costs, exploitation of locally available energy resources, the simplicity of the devices), and infrastructural and marketing problems. Numerous projects aimed at improving the lot of women in the rural areas have focused only on women, rather than the woman as a member of an extended as well as a nuclear family unit. Consequently, they have failed, for rural women do not exist or operate in isolation. It is difficult to believe the overall image given in much of the literature that the husbands of rural women show no sympathy or regard for their wives. In the effort to attract investment to improve the position of rural women, reality should not be distorted by this one-sided view. Men should be involved in the technology planned for rural women, and technological change should be planned and implemented in such a way that it improves the relationship between the rural couple and, generally, between members of the rural family and between males and females in the village. Another problem is overgeneralization, and it must be recognized that considerable differentiation exists among rural women themselves. The importance of community, governmental and political commitment to rural technology innovations in order to ensure their success is neglected in the literature.
The government and political leadership can do much to introduce improved technologies in rural areas. The use of existing traditional institutions to bring about technological change in rural areas needs to be stressed. Primary reasons why some of the improved devices introduced for use by rural women have been rejected include the following: the devices fail to meet the priority needs of the women, and socioeconomic and cultural factors are not considered in their design. Most developing countries are without the industries required to produce the needed basic components. Exploitation of some of the available natural resources would make life much easier for rural women. As rural societies are usually imperfectly linked to badly organized markets, infrastructural facilities, such as feeder roads, would have to be improved. The following are among the hypotheses suggested by this review: technological innovations linked to existing traditional skills and methods are likely to gain easier acceptance in rural areas than those divorced from these skills and methods; and technological innovations that are disseminated through existing traditional institutions and groups are likely to gain easier acceptance. Guidelines for future research are included.
Weighted spline based integration for reconstruction of freeform wavefront.
Pant, Kamal K; Burada, Dali R; Bichra, Mohamed; Ghosh, Amitava; Khan, Gufran S; Sinzinger, Stefan; Shakher, Chandra
2018-02-10
In the present work, a spline-based integration technique for the reconstruction of a freeform wavefront from slope data has been implemented. The slope data of a freeform surface contain noise due to the machining process, which introduces reconstruction error. We have proposed a weighted cubic spline based least-squares integration method (WCSLI) for the faithful reconstruction of a wavefront from noisy slope data. In the proposed method, the measured slope data are fitted to a piecewise polynomial. The fitted coefficients are determined using a smoothing cubic spline fitting method. The smoothing parameter locally assigns relative weight to the fitted slope data. The fitted slope data are then integrated using the standard least-squares technique to reconstruct the freeform wavefront. Simulation studies show improved results using the proposed technique as compared to the existing cubic spline-based integration (CSLI) and Southwell methods. The proposed reconstruction method has been experimentally applied to a subaperture stitching-based measurement of a freeform wavefront using a scanning Shack-Hartmann sensor. The boundary artifacts are minimal in WCSLI, which improves the subaperture stitching accuracy and demonstrates an improved Shack-Hartmann sensor for freeform metrology applications.
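The least-squares integration step at the heart of WCSLI can be illustrated in one dimension: solve the overdetermined system that maps the unknown surface samples to the measured slopes, optionally weighting each slope sample. This sketch omits the smoothing-spline fit that precedes integration in the actual method; names are illustrative.

```python
import numpy as np

def integrate_slopes(slopes, h, weights=None):
    """Least-squares reconstruction of f from slope samples: solve
    W D f ≈ W s, where D is the forward-difference operator, with the
    integration constant removed by pinning f[0] = 0."""
    n = len(slopes) + 1
    D = (np.eye(n - 1, n, k=1) - np.eye(n - 1, n)) / h
    W = np.diag(weights if weights is not None else np.ones(n - 1))
    A = np.vstack([W @ D, np.eye(1, n)])      # last row pins f[0] = 0
    b = np.concatenate([W @ slopes, [0.0]])
    f, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f

h = 0.1
x = np.arange(0, 1 + h / 2, h)
true_f = x**2
slopes = np.diff(true_f) / h       # exact forward-difference slopes
f = integrate_slopes(slopes, h)
print(np.allclose(f, true_f))  # → True
```

The 2D case (Southwell and CSLI) stacks difference operators for both slope directions; the per-sample weights are where the smoothing spline's local confidence would enter.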
Improved LSB matching steganography with histogram characters reserved
NASA Astrophysics Data System (ADS)
Chen, Zhihong; Liu, Wenyao
2008-03-01
This letter builds on research into the LSB (least significant bit, i.e. the last bit of a binary pixel value) matching steganographic method and on steganalytic methods that target the histograms of cover images, and proposes a modification to LSB matching. In LSB matching, if the LSB of the next cover pixel matches the next bit of secret data, nothing is done; otherwise, one is added to or subtracted from the cover pixel value at random. In our improved method, a steganographic information table is defined to record the changes introduced by the embedded secret bits. Using this table, the decision to add or subtract one at the next pixel with the same value is made dynamically so that the change to the cover image's histogram is minimized. The modified method therefore embeds the same payload as LSB matching but with improved steganographic security and less vulnerability to attacks. The experimental results of the new method show that the histograms maintain their attributes, such as peak values and alternative trends, to an acceptable degree, and have better performance than LSB matching with respect to histogram distortion and resistance against existing steganalysis.
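Plain LSB matching, the baseline being improved here, can be sketched as follows; the improved method would replace the random ±1 choice with a histogram-aware choice driven by the steganographic information table (not reproduced here):

```python
import random

def lsb_match_embed(pixels, bits, rng):
    """LSB matching: if the pixel's LSB already equals the secret bit,
    leave it; otherwise add or subtract 1 at random (the improved
    method would instead pick the sign that preserves the histogram)."""
    out = list(pixels)
    for i, b in enumerate(bits):
        if out[i] % 2 != b:
            step = rng.choice([-1, 1])
            if out[i] == 0:      # stay inside the 8-bit range
                step = 1
            elif out[i] == 255:
                step = -1
            out[i] += step
    return out

def lsb_extract(pixels, n):
    return [p % 2 for p in pixels[:n]]

rng = random.Random(42)
cover = [100, 101, 102, 103]
secret = [1, 1, 0, 0]
stego = lsb_match_embed(cover, secret, rng)
print(lsb_extract(stego, 4))  # → [1, 1, 0, 0]
```

Extraction is identical for both variants, which is why the improvement costs nothing on the decoding side.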
Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure.
Zhang, Wen; Xiao, Fan; Li, Bin; Zhang, Siguang
2016-01-01
Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is often criticized for having low discriminative power in representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a decomposed matrix by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods.
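The basic manipulation, running a truncated SVD separately within each document cluster, can be sketched as follows (toy matrix, illustrative names; the paper's folding-in updates are not shown):

```python
import numpy as np

def svd_on_clusters(td_matrix, cluster_ids, k):
    """Apply a truncated SVD separately within each document cluster
    and return k-dimensional document representations."""
    docs_reduced = np.zeros((td_matrix.shape[1], k))
    for c in set(cluster_ids):
        cols = [j for j, cid in enumerate(cluster_ids) if cid == c]
        sub = td_matrix[:, cols]                 # terms x cluster docs
        U, s, Vt = np.linalg.svd(sub, full_matrices=False)
        r = min(k, len(s))
        docs_reduced[cols, :r] = (np.diag(s[:r]) @ Vt[:r]).T
    return docs_reduced

# Toy term-document matrix: 5 terms, 4 docs in 2 clusters.
td = np.array([[2., 1., 0., 0.],
               [1., 2., 0., 0.],
               [0., 0., 1., 2.],
               [0., 0., 2., 1.],
               [0., 0., 0., 1.]])
reps = svd_on_clusters(td, [0, 0, 1, 1], k=2)
print(reps.shape)  # → (4, 2)
```

Fitting the decomposition per cluster lets each cluster keep its own dominant semantic directions, which is the source of the improved discriminative power claimed over a single global SVD.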
Using SVD on Clusters to Improve Precision of Interdocument Similarity Measure
Xiao, Fan; Li, Bin; Zhang, Siguang
2016-01-01
Recently, LSI (Latent Semantic Indexing) based on SVD (Singular Value Decomposition) has been proposed to overcome the problems of polysemy and homonymy in traditional lexical matching. However, it is often criticized for having low discriminative power in representing documents, although it has been validated as having good representative quality. In this paper, SVD on clusters is proposed to improve the discriminative power of LSI. The contribution of this paper is threefold. Firstly, we survey existing linear algebra methods for LSI, including both SVD-based and non-SVD-based methods. Secondly, we propose SVD on clusters for LSI and theoretically explain that dimension expansion of document vectors and dimension projection using SVD are the two manipulations involved in SVD on clusters. Moreover, we develop updating processes to fold new documents and terms into a decomposed matrix by SVD on clusters. Thirdly, two corpora, a Chinese corpus and an English corpus, are used to evaluate the performance of the proposed methods. Experiments demonstrate that, to some extent, SVD on clusters can improve the precision of the interdocument similarity measure in comparison with other SVD-based LSI methods. PMID:27579031
Improvement of a method for positioning of pithead by considering motion of the surface water
NASA Astrophysics Data System (ADS)
Yi, H.; Lee, D. K.
2016-12-01
Underground mining has weaknesses compared with open-pit mining in terms of efficiency, economy and working environment. However, the method is applied for the development of deep orebodies. A development plan is established once the economic valuation and technical analysis of the deposit are completed through mineral resource exploration. Development is the process of opening a passage from the ground surface to the orebody, one of the steps of the mining process. The planning covers details such as pithead positioning, mining method selection, and shaft design. Among these, pithead positioning is carried out by considering infrastructure, watersheds, geology, and economy. In this study, we propose a method that considers the motion of surface water in order to improve existing pithead positioning techniques. The method takes the terrain around the mine into account and generates surface-water flow information. The drainage treatment cost for each candidate pithead location is then estimated. This study covers the concept and design of the scheme.
NASA Astrophysics Data System (ADS)
Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang
2016-10-01
In order to achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the system circuit of existing methods, this paper presents a novel method that exploits a shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by a sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can enhance the measuring accuracy by almost one order of magnitude and reduce the root mean square error by 3.75 times under the same measurement conditions. The experimental results show that the novel method can significantly improve the measurement accuracy of resistance without increasing system cost or circuit complexity, which is valuable for application in electronic instruments.
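The resolution gain from oversampling can be illustrated numerically: averaging N noisy readings reduces the RMS error by roughly sqrt(N). The simulation below is a generic illustration of that principle, not the paper's measurement circuit.

```python
import numpy as np

rng = np.random.default_rng(7)

def oversampled_measurement(true_value, noise_std, n_samples, rng):
    """Average n_samples noisy readings; RMS error shrinks ~ 1/sqrt(n)."""
    readings = true_value + rng.normal(0, noise_std, n_samples)
    return readings.mean()

trials = 2000
single = np.array([oversampled_measurement(100.0, 1.0, 1, rng)
                   for _ in range(trials)])
avg16 = np.array([oversampled_measurement(100.0, 1.0, 16, rng)
                  for _ in range(trials)])
ratio = single.std() / avg16.std()
print(round(ratio))  # → 4  (≈ sqrt(16))
```

In the paper this averaging is combined with the sawtooth-shaped excitation, which spreads the measurement over many amplitude levels instead of a single constant one.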
A design procedure for the handling qualities optimization of the X-29A aircraft
NASA Technical Reports Server (NTRS)
Bosworth, John T.; Cox, Timothy H.
1989-01-01
The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.
Saturation-inversion-recovery: A method for T1 measurement
NASA Astrophysics Data System (ADS)
Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao
2017-01-01
Spin-lattice relaxation (T1) has always been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing T1 sensitivity is the exponential, e.g., exp(- τ /T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1-dependence than those of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful in improving the contrast between regions of differing T1 in T1-weighted MRI.
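The abstract does not give the SIR recovery function, but the conventional IR exponential it sharpens is standard: Mz(τ) = M0(1 − 2e^(−τ/T1)), which can be inverted pointwise to estimate T1. The sketch below illustrates only this baseline IR relation, with illustrative names.

```python
import numpy as np

def ir_signal(tau, T1, M0=1.0):
    """Conventional inversion-recovery magnetization:
    Mz(tau) = M0 * (1 - 2 * exp(-tau / T1))."""
    return M0 * (1 - 2 * np.exp(-tau / T1))

def fit_t1_ir(tau, mz, M0=1.0):
    """Invert the IR curve pointwise and average:
    T1 = -tau / ln((1 - Mz/M0) / 2)."""
    est = -tau / np.log((1 - mz / M0) / 2)
    return est.mean()

tau = np.array([0.1, 0.3, 0.6, 1.0])
mz = ir_signal(tau, T1=0.5)
print(round(fit_t1_ir(tau, mz), 3))  # → 0.5
```

The paper's point is that the SIR sequence makes the signal depend more sharply on T1 than this exponential does, improving T1 contrast; its exact functional form is given in the paper itself.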
Processes for manufacturing multifocal diffractive-refractive intraocular lenses
NASA Astrophysics Data System (ADS)
Iskakov, I. A.
2017-09-01
Manufacturing methods and design features of modern diffractive-refractive intraocular lenses are discussed. The implantation of multifocal intraocular lenses is the optimal method of restoring the accommodative ability of the eye after removal of the natural lens. Diffractive-refractive intraocular lenses are the most widely used implantable multifocal lenses worldwide. Existing methods for manufacturing such lenses implement various design solutions to provide the best visual function after surgery. The wide variety of available diffractive-refractive intraocular lens designs reflects the demand for this method of vision correction in clinical practice and the importance of further applied research and development of new technologies for designing improved lens models.
Kazemian, Majid; Zhu, Qiyun; Halfon, Marc S.; Sinha, Saurabh
2011-01-01
Despite recent advances in experimental approaches for identifying transcriptional cis-regulatory modules (CRMs, ‘enhancers’), direct empirical discovery of CRMs for all genes in all cell types and environmental conditions is likely to remain an elusive goal. Effective methods for computational CRM discovery are thus a critically needed complement to empirical approaches. However, existing computational methods that search for clusters of putative binding sites are ineffective if the relevant TFs and/or their binding specificities are unknown. Here, we provide a significantly improved method for ‘motif-blind’ CRM discovery that does not depend on knowledge or accurate prediction of TF-binding motifs and is effective when limited knowledge of functional CRMs is available to ‘supervise’ the search. We propose a new statistical method, based on ‘Interpolated Markov Models’, for motif-blind, genome-wide CRM discovery. It captures the statistical profile of variable length words in known CRMs of a regulatory network and finds candidate CRMs that match this profile. The method also uses orthologs of the known CRMs from closely related genomes. We perform in silico evaluation of predicted CRMs by assessing whether their neighboring genes are enriched for the expected expression patterns. This assessment uses a novel statistical test that extends the widely used Hypergeometric test of gene set enrichment to account for variability in intergenic lengths. We find that the new CRM prediction method is superior to existing methods. Finally, we experimentally validate 12 new CRM predictions by examining their regulatory activity in vivo in Drosophila; 10 of the tested CRMs were found to be functional, while 6 of the top 7 predictions showed the expected activity patterns. We make our program available as downloadable source code, and as a plugin for a genome browser installed on our servers. PMID:21821659
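An interpolated Markov model scorer can be sketched as follows: k-mer counts from known CRMs define conditional probabilities at several orders, which are blended when scoring a candidate window. The interpolation weights and pseudocounts below are illustrative assumptions, not the paper's trained values.

```python
import math
from collections import Counter

def train_counts(seqs, k):
    """k-mer counts for orders 0..k over the training CRM sequences."""
    counts = [Counter() for _ in range(k + 1)]
    for s in seqs:
        for order in range(k + 1):
            for i in range(len(s) - order):
                counts[order][s[i:i + order + 1]] += 1
    return counts

def imm_logprob(seq, counts, k, lam=0.5, alpha=0.1):
    """Score a candidate window with an interpolated Markov model:
    each symbol's probability blends estimates of increasing order."""
    logp = 0.0
    for i in range(len(seq)):
        p = 0.25  # uniform base over A/C/G/T
        for order in range(min(k, i) + 1):
            ctx = seq[i - order:i]
            num = counts[order][ctx + seq[i]] + alpha
            den = sum(counts[order][ctx + b] for b in "ACGT") + 4 * alpha
            p = (1 - lam) * p + lam * num / den  # interpolate up the orders
        logp += math.log(p)
    return logp

crms = ["ACGTACGTACGT", "ACGTACGTAC"]
counts = train_counts(crms, k=2)
print(imm_logprob("ACGTACGT", counts, k=2) >
      imm_logprob("AAAAAAAA", counts, k=2))  # → True
```

Genome-wide CRM discovery would slide such a scorer over candidate windows and rank them against a background model trained on non-CRM sequence.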
LaPelle, Nancy R; Luckmann, Roger; Simpson, E Hatheway; Martin, Elaine R
2006-01-01
Background Movement towards evidence-based practices in many fields suggests that public health (PH) challenges may be better addressed if credible information about health risks and effective PH practices is readily available. However, research has shown that many PH information needs are unmet. In addition to reviewing relevant literature, this study performed a comprehensive review of existing information resources and collected data from two representative PH groups, focusing on identifying current practices, expressed information needs, and ideal systems for information access. Methods Nineteen individual interviews were conducted among employees of two domains in a state health department – communicable disease control and community health promotion. Subsequent focus groups gathered additional data on preferences for methods of information access and delivery as well as information format and content. Qualitative methods were used to identify themes in the interview and focus group transcripts. Results Informants expressed similar needs for improved information access including single portal access with a good search engine; automatic notification regarding newly available information; access to best practice information in many areas of interest that extend beyond biomedical subject matter; improved access to grey literature as well as to more systematic reviews, summaries, and full-text articles; better methods for indexing, filtering, and searching for information; and effective ways to archive information accessed. Informants expressed a preference for improving systems with which they were already familiar such as PubMed and listservs rather than introducing new systems of information organization and delivery. A hypothetical ideal model for information organization and delivery was developed based on informants' stated information needs and preferred means of delivery. Features of the model were endorsed by the subjects who reviewed it. 
Conclusion Many critical information needs of PH practitioners are not being met efficiently or at all. We propose a dual strategy of: 1) promoting incremental improvements in existing information delivery systems based on the expressed preferences of the PH users of the systems and 2) the concurrent development and rigorous evaluation of new models of information organization and delivery that draw on successful resources already operating to deliver information to clinical medical practitioners. PMID:16597331
Improved dual-porosity models for petrophysical analysis of vuggy reservoirs
NASA Astrophysics Data System (ADS)
Wang, Haitao
2017-08-01
A new vug interconnection, isolated vug (IVG), was investigated through resistivity modeling, and the dual-porosity model for connected vug (CVG) vuggy reservoirs was tested. The vuggy models were built by pore-scale modeling, and their electrical resistivity was calculated by the finite difference method. For CVG vuggy reservoirs, the CVG reduced formation factors and increased the porosity exponents, and the existing dual-porosity model failed to match these results. Based on the existing dual-porosity model, a conceptual dual-porosity model for CVG was developed by introducing a decoupled term to reduce the resistivity of the model. For IVG vuggy reservoirs, IVG increased the formation factors and porosity exponents. The existing dual-porosity model succeeded due to accurate calculation of the formation factors of the deformed interparticle porous media caused by the insertion of the IVG. Based on the existing dual-porosity model, a new porosity model for IVG vuggy reservoirs was developed by simultaneously recalculating the formation factors of the altered interparticle pore-scale models. The formation factors and porosity exponents from the improved and extended dual-porosity models for CVG and IVG vuggy reservoirs matched the simulated values well. This work is helpful for understanding the influence of connected and disconnected vugs on resistivity factors, an issue of particular importance in carbonates.
Iterative methods for dose reduction and image enhancement in tomography
Miao, Jianwei; Fahimian, Benjamin Pooya
2012-09-18
A system and method for creating a three-dimensional cross-sectional image of an object by the reconstruction of its projections, iteratively refined through modification in object space and Fourier space, is disclosed. The invention provides systems and methods for use with any tomographic imaging system that reconstructs an object from its projections. In one embodiment, the invention presents a method to eliminate interpolations present in conventional tomography. The method has been experimentally shown to provide higher resolution and improved image quality parameters over existing approaches. A primary benefit of the method is radiation dose reduction, since the invention can produce an image of a desired quality with fewer projections than conventional methods require.
Innovative use of technologies and methods to redesign care: the problem of care transitions.
Richman, Mark; Sklaroff, Laura Myerchin; Hoang, Khathy; Wasson, Elijah; Gross-Schulman, Sandra
2014-01-01
Organizations are redesigning models of care in today's rapidly changing health care environment. Using proven innovation techniques maximizes likelihood of effective change. Our safety-net hospital aims to reduce high emergency department visit, admission, and readmission rates, key components to health care cost control. Twenty-five clinical stakeholders participated in mixed-methods innovation exercises to understand stakeholders, frame problems, and explore solutions. We identified existing barriers and means to improve post-emergency department/post-inpatient discharge care coordination/communication among patient-centered medical home care team members, including patients. Physicians and staff preferred automated e-mail notifications, including patient identifiers, medical home/primary care provider information, and relevant clinical documentation, to improve communication efficiency/efficacy.
Flow chemistry using milli- and microstructured reactors-from conventional to novel process windows.
Illg, Tobias; Löb, Patrick; Hessel, Volker
2010-06-01
The term Novel Process Windows unites different methods of improving existing processes by applying unconventional and harsh process conditions, such as process routes at greatly elevated pressure or temperature, or processing in a thermal runaway regime, to achieve a significant impact on process performance. This paper reviews part of IMM's work, in particular the applicability of the above-mentioned Novel Process Windows to selected chemical reactions. First, general characteristics of microreactors are discussed, such as excellent mass and heat transfer and improved mixing quality. Different types of reactions are presented in which the use of microstructured devices led to increased process performance through Novel Process Windows. These examples were chosen to demonstrate how chemical reactions can benefit from the use of milli- and microstructured devices, and how existing protocols can be shifted toward process conditions hitherto not applicable in standard laboratory equipment. The milli- and microstructured reactors used can also offer advantages in other areas, for example, high-throughput screening of catalysts and better control of size distribution in particle synthesis processes through improved mixing. The chemical industry is in continuous improvement, and much research is being done to synthesize high-value chemicals, to optimize existing processes with respect to process safety and energy consumption, and to search for new routes to produce such chemicals. Leitmotifs of such undertakings are often sustainable development(1) and Green Chemistry(2).
An improved, robust, axial line singularity method for bodies of revolution
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1989-01-01
The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for numerical solution of the governing equations; this technique is easily retrofitted to existing codes and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.
Quality: performance improvement, teamwork, information technology and protocols.
Coleman, Nana E; Pon, Steven
2013-04-01
Using the Institute of Medicine framework that outlines the domains of quality, this article considers four key aspects of health care delivery which have the potential to significantly affect the quality of health care within the pediatric intensive care unit. The discussion covers: performance improvement and how existing methods for reporting, review, and analysis of medical error relate to patient care; team composition and workflow; and the impact of information technologies on clinical practice. Also considered is how protocol-driven and standardized practice affects both patients and the fiscal interests of the health care system.
Refining Automatically Extracted Knowledge Bases Using Crowdsourcing
Xian, Xuefeng; Cui, Zhiming
2017-01-01
Machine-constructed knowledge bases often contain noisy and inaccurate facts. There exists significant work in developing automated algorithms for knowledge base refinement. Automated approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. As human labelling is costly, an important research challenge is how we can use limited human resources to maximize the quality improvement for a knowledge base. To address this problem, we first introduce a concept of semantic constraints that can be used to detect potential errors and do inference among candidate facts. Then, based on semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refining, which judiciously select the most beneficial candidate facts to conduct crowdsourcing and prune unnecessary questions. Our experiments show that our method improves the quality of knowledge bases significantly and outperforms state-of-the-art automatic methods under a reasonable crowdsourcing cost. PMID:28588611
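The rank-based question-selection step can be illustrated with a toy sketch. The benefit score below (label uncertainty weighted by the number of semantic-constraint links, so one crowd answer can be propagated to prune many candidates) is a plausible stand-in for the paper's criterion, and all fact records are hypothetical:

```python
def select_questions(facts, k=2):
    """Rank candidate facts by the expected benefit of asking the crowd.
    Here, benefit = uncertainty of the fact's confidence (largest at 0.5)
    weighted by how many other candidates it is linked to via semantic
    constraints, since verifying it lets us infer or prune those too."""
    def benefit(f):
        uncertainty = 1.0 - abs(f["conf"] - 0.5) * 2.0  # 1 at conf=0.5, 0 at 0 or 1
        return uncertainty * (1 + len(f["linked"]))
    return sorted(facts, key=benefit, reverse=True)[:k]

# Hypothetical extracted facts with confidences and constraint links.
facts = [
    {"id": "bornIn(Turing, London)", "conf": 0.55, "linked": ["cityIn(London, UK)"]},
    {"id": "bornIn(Turing, Paris)",  "conf": 0.50, "linked": ["cityIn(Paris, FR)",
                                                              "bornIn(Turing, London)"]},
    {"id": "cityIn(London, UK)",     "conf": 0.98, "linked": []},
]
picked = select_questions(facts, k=1)
```

The highly confident fact is never sent to the crowd, while the maximally uncertain, well-connected one is asked first; a graph-based variant would additionally propagate the answer along the links.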
An improved KCF tracking algorithm based on multi-feature and multi-scale
NASA Astrophysics Data System (ADS)
Wu, Wei; Wang, Ding; Luo, Xin; Su, Yang; Tian, Weiye
2018-02-01
The purpose of visual tracking is to associate the target object across continuous video frames. In recent years, methods based on the kernel correlation filter have become a research hotspot. However, such algorithms still suffer from problems such as fast jitter of the video capture equipment and changes in target scale. To improve the handling of scale transformation and the feature description, this paper presents an innovative algorithm based on multi-feature fusion and multi-scale transformation. The experimental results show that our method solves the problem of updating the target model when the target is occluded or its scale changes. The accuracies (OPE) are 77.0% and 75.4%, and the success rates are 69.7% and 66.4%, on the VOT and OTB datasets respectively. Compared with the best of the existing target-based tracking algorithms, the accuracy of the algorithm is improved by 6.7% and 6.3%, respectively, and the success rates are improved by 13.7% and 14.2%, respectively.
Image-guided filtering for improving photoacoustic tomographic image reconstruction.
Awasthi, Navchetan; Kalva, Sandeep Kumar; Pramanik, Manojit; Yalavarthy, Phaneendra K
2018-06-01
Several algorithms exist to solve the photoacoustic image reconstruction problem, depending on the expected reconstructed image features. These reconstruction algorithms typically promote one feature, such as smoothness or sharpness, in the output image. Combining these features using a guided filtering approach, which requires an input and a guiding image, was attempted in this work. This approach acts as a postprocessing step to improve the commonly used Tikhonov or total variation regularization methods; the result obtained from linear backprojection was used as the guiding image. Using both numerical and experimental phantom cases, it was shown that the proposed guided filtering approach was able to improve the signal-to-noise ratio of the reconstructed images by as much as 11.23 dB, with the added advantage of being computationally efficient. This approach was compared with state-of-the-art basis pursuit deconvolution as well as standard denoising methods and shown to outperform them. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
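The guided filtering step referred to above is the classic guided image filter from the computer-vision literature; a minimal NumPy sketch follows, with a naive box (mean) filter for clarity (the window radius and `eps` values are illustrative, not the paper's settings):

```python
import numpy as np

def box(img, r):
    """Mean filter with window radius r (simple loop version, edge-padded)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pad[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-3):
    """Classic guided filter: the output is locally a linear transform of
    the guide I, so edges of I (here, a backprojection result) are
    transferred to the filtered input p (a regularized reconstruction)."""
    mean_I, mean_p = box(I, r), box(p, r)
    corr_Ip = box(I * p, r)
    var_I = box(I * I, r) - mean_I ** 2
    a = (corr_Ip - mean_I * mean_p) / (var_I + eps)   # local slope
    b = mean_p - a * mean_I                           # local intercept
    return box(a, r) * I + box(b, r)
```

Because `a` vanishes wherever `p` is locally constant, a flat input passes through unchanged while sharp structure in the guide is injected only where the two images co-vary.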
Brain tumor segmentation in MR slices using improved GrowCut algorithm
NASA Astrophysics Data System (ADS)
Ji, Chunhong; Yu, Jinhua; Wang, Yuanyuan; Chen, Liang; Shi, Zhifeng; Mao, Ying
2015-12-01
The detection of brain tumor from MR images is very significant for medical diagnosis and treatment. However, the existing methods are mostly based on manual or semiautomatic segmentation which are awkward when dealing with a large amount of MR slices. In this paper, a new fully automatic method for the segmentation of brain tumors in MR slices is presented. Based on the hypothesis of the symmetric brain structure, the method improves the interactive GrowCut algorithm by further using the bounding box algorithm in the pre-processing step. More importantly, local reflectional symmetry is used to make up the deficiency of the bounding box method. After segmentation, 3D tumor image is reconstructed. We evaluate the accuracy of the proposed method on MR slices with synthetic tumors and actual clinical MR images. Result of the proposed method is compared with the actual position of simulated 3D tumor qualitatively and quantitatively. In addition, our automatic method produces equivalent performance as manual segmentation and the interactive GrowCut with manual interference while providing fully automatic segmentation.
Prediction of forces and moments for hypersonic flight vehicle control effectors
NASA Technical Reports Server (NTRS)
Maughmer, Mark D.; Long, Lyle N.; Pagano, Peter J.
1991-01-01
The development of methods for predicting flight control forces and moments for hypersonic vehicles included a preliminary assessment of subsonic/supersonic panel methods and hypersonic local flow inclination methods for such predictions. While these findings clearly indicate the usefulness of such methods for conceptual design activities, deficiencies exist in some areas. Thus, a second phase of research was proposed to seek a better understanding of the reasons for the successes and failures of the methods considered, particularly for cases at hypersonic Mach numbers. To obtain this additional understanding, a more careful study of the results obtained relative to the methods used was undertaken. In addition, where appropriate and necessary, a more complete modeling of the flow was performed using well-proven methods of computational fluid dynamics. As a result, assessments will be made that are more quantitative than those of phase 1 regarding the uncertainty involved in the prediction of the aerodynamic derivatives. In addition, with improved understanding, it is anticipated that improvements in accuracy will be made to the simple force and moment prediction methods.
Modeling Input Errors to Improve Uncertainty Estimates for Sediment Transport Model Predictions
NASA Astrophysics Data System (ADS)
Jung, J. Y.; Niemann, J. D.; Greimann, B. P.
2016-12-01
Bayesian methods using Markov chain Monte Carlo algorithms have recently been applied to sediment transport models to assess the uncertainty in the model predictions due to the parameter values. Unfortunately, the existing approaches can only attribute overall uncertainty to the parameters. This limitation is critical because no model can produce accurate forecasts if forced with inaccurate input data, even if the model is well founded in physical theory. In this research, an existing Bayesian method is modified to consider the potential errors in input data during the uncertainty evaluation process. The input error is modeled using Gaussian distributions, and the means and standard deviations are treated as uncertain parameters. The proposed approach is tested by coupling it to the Sedimentation and River Hydraulics - One Dimension (SRH-1D) model and simulating a 23-km reach of the Tachia River in Taiwan. The Wu equation in SRH-1D is used for computing the transport capacity for a bed material load of non-cohesive material. Three types of input data are considered uncertain: (1) the input flowrate at the upstream boundary, (2) the water surface elevation at the downstream boundary, and (3) the water surface elevation at a hydraulic structure in the middle of the reach. The benefits of modeling the input errors in the uncertainty analysis are evaluated by comparing the accuracy of the most likely forecast and the coverage of the observed data by the credible intervals to those of the existing method. The results indicate that the internal boundary condition has the largest uncertainty among those considered. Overall, the uncertainty estimates from the new method are notably different from those of the existing method for both the calibration and forecast periods.
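The idea of treating input errors as additional uncertain parameters can be sketched with a toy random-walk Metropolis sampler. Everything below is invented for illustration: the power-law "transport model", the 10% flow bias, the noise level, and the Gaussian prior stand in for SRH-1D, the Wu equation, and the study's actual boundary data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "transport model": predicted sediment load as a power law of flow.
def model(flow, a, b):
    return a * flow ** b

# Synthetic observations generated from the true flow; the recorded flow
# carries an unknown multiplicative measurement bias (here, 10%).
true_flow = np.linspace(10.0, 100.0, 20)
obs = model(true_flow, 0.05, 1.5)
flow_record = true_flow * 1.10

def log_post(theta):
    a, b, bias = theta
    if a <= 0 or bias <= 0:
        return -np.inf
    pred = model(flow_record / bias, a, b)        # correct input by sampled bias
    loglik = -0.5 * np.sum((obs - pred) ** 2) / 0.5 ** 2
    logprior = -0.5 * ((bias - 1.0) / 0.2) ** 2   # Gaussian prior on input bias
    return loglik + logprior

# Random-walk Metropolis over (a, b, input bias).
theta = np.array([0.05, 1.5, 1.0])
lp = log_post(theta)
samples = []
for _ in range(4000):
    prop = theta + rng.normal(0.0, [0.002, 0.01, 0.01])
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta.copy())
bias_est = float(np.mean([s[2] for s in samples[2000:]]))
```

Note that in this toy model the bias trades off against the coefficient `a`, so the posterior on the bias leans on its prior; separating input error from parameter error in practice requires informative priors or multiple data types, which is consistent with the identifiability issues such analyses face.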
Multi-Sensor Fusion with Interacting Multiple Model Filter for Improved Aircraft Position Accuracy
Cho, Taehwan; Lee, Changho; Choi, Sangbang
2013-01-01
The International Civil Aviation Organization (ICAO) has decided to adopt Communications, Navigation, and Surveillance/Air Traffic Management (CNS/ATM) as the 21st century standard for navigation. Accordingly, ICAO members have provided an impetus to develop related technology and build sufficient infrastructure. For aviation surveillance with CNS/ATM, Ground-Based Augmentation System (GBAS), Automatic Dependent Surveillance-Broadcast (ADS-B), multilateration (MLAT) and wide-area multilateration (WAM) systems are being established. These sensors can track aircraft positions more accurately than existing radar and can compensate for the blind spots in aircraft surveillance. In this paper, we applied a novel sensor fusion method with Interacting Multiple Model (IMM) filter to GBAS, ADS-B, MLAT, and WAM data in order to improve the reliability of the aircraft position. Results of performance analysis show that the position accuracy is improved by the proposed sensor fusion method with the IMM filter. PMID:23535715
Computer assisted diagnostic system in tumor radiography.
Faisal, Ahmed; Parveen, Sharmin; Badsha, Shahriar; Sarwar, Hasan; Reza, Ahmed Wasif
2013-06-01
An improved and efficient method is presented in this paper to achieve a better trade-off between noise removal and edge preservation, thereby detecting the tumor region of MRI brain images automatically. A compass operator has been used in the fourth-order Partial Differential Equation (PDE) based denoising technique to preserve the anatomically significant information at the edges. A new morphological technique is also introduced for stripping the skull region from the brain images, which consequently leads to accurate tumor detection. Finally, automatic seeded region growing segmentation based on an improved single seed point selection algorithm is applied to detect the tumor. The method is tested on publicly available MRI brain images and gives an average PSNR (Peak Signal to Noise Ratio) of 36.49. The obtained results also show a detection accuracy of 99.46%, which is a significant improvement over existing results.
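The final seeded region-growing step can be sketched as follows; choosing the brightest pixel as the seed is a deliberate simplification standing in for the paper's improved single-seed-point selection algorithm, and the tolerance value is illustrative:

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=20):
    """Grow a region from `seed`: add 4-connected neighbours whose
    intensity lies within `tol` of the running region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, n = float(img[seed]), 1
    q = deque([seed])
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not mask[ni, nj]:
                if abs(float(img[ni, nj]) - total / n) <= tol:
                    mask[ni, nj] = True
                    total += float(img[ni, nj])
                    n += 1
                    q.append((ni, nj))
    return mask

def detect(img, tol=20):
    """Toy automatic seed selection: start from the brightest pixel."""
    seed = np.unravel_index(int(np.argmax(img)), img.shape)
    return region_grow(img, seed, tol)
```

On a denoised, skull-stripped slice the grown mask would correspond to the hyperintense tumor region; the running-mean criterion keeps the region from leaking across preserved edges.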
Satellite estimation of incident photosynthetically active radiation using ultraviolet reflectance
NASA Technical Reports Server (NTRS)
Eck, Thomas F.; Dye, Dennis G.
1991-01-01
A new satellite remote sensing method for estimating the amount of photosynthetically active radiation (PAR, 400-700 nm) incident at the earth's surface is described and tested. Potential incident PAR for clear sky conditions is computed from an existing spectral model. A major advantage of the UV approach over existing visible band approaches to estimating insolation is the improved ability to discriminate clouds from high-albedo background surfaces. UV spectral reflectance data from the Total Ozone Mapping Spectrometer (TOMS) were used to test the approach for three climatically distinct, midlatitude locations. Estimates of monthly total incident PAR from the satellite technique differed from values computed from ground-based pyranometer measurements by less than 6 percent. This UV remote sensing method can be applied to estimate PAR insolation over ocean and land surfaces which are free of ice and snow.
Light Field Imaging Based Accurate Image Specular Highlight Removal
Wang, Haoqian; Xu, Chenxue; Wang, Xingzheng; Zhang, Yongbing; Peng, Bo
2016-01-01
Specular reflection removal is indispensable to many computer vision tasks. However, most existing methods fail or degrade in complex real scenarios because of their individual drawbacks. Benefiting from light field imaging technology, this paper proposes a novel and accurate approach to remove specularity and improve image quality. We first capture images with specularity using the light field camera (Lytro ILLUM). After accurately estimating the image depth, a simple and concise threshold strategy is adopted to cluster the specular pixels into “unsaturated” and “saturated” categories. Finally, a color variance analysis of multiple views and a local color refinement are individually conducted on the two categories to recover diffuse color information. Experimental evaluation by comparison with existing methods, based on our light field dataset together with the Stanford light field archive, verifies the effectiveness of our proposed algorithm. PMID:27253083
Bromaghin, Jeffrey F.; McDonald, Trent L.; Amstrup, Steven C.
2013-01-01
Mark-recapture models are extensively used in quantitative population ecology, providing estimates of population vital rates, such as survival, that are difficult to obtain using other methods. Vital rates are commonly modeled as functions of explanatory covariates, adding considerable flexibility to mark-recapture models, but also increasing the subjectivity and complexity of the modeling process. Consequently, model selection and the evaluation of covariate structure remain critical aspects of mark-recapture modeling. The difficulties involved in model selection are compounded in Cormack-Jolly-Seber models because they are composed of separate sub-models for survival and recapture probabilities, which are conceptualized independently even though their parameters are not statistically independent. The construction of models as combinations of sub-models, together with multiple potential covariates, can lead to a large model set. Although desirable, estimation of the parameters of all models may not be feasible. Strategies to search a model space and base inference on a subset of all models exist and enjoy widespread use. However, even though the methods used to search a model space can be expected to influence parameter estimation, the assessment of covariate importance, and therefore the ecological interpretation of the modeling results, the performance of these strategies has received limited investigation. We present a new strategy for searching the space of a candidate set of Cormack-Jolly-Seber models and explore its performance relative to existing strategies using computer simulation. The new strategy provides an improved assessment of the importance of covariates and covariate combinations used to model survival and recapture probabilities, while requiring only a modest increase in the number of models on which inference is based in comparison to existing techniques.
NASA Astrophysics Data System (ADS)
Fan, Xiao-Ning; Zhi, Bo
2017-07-01
Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.
Microbiome and pancreatic cancer: A comprehensive topic review of literature
Ertz-Archambault, Natalie; Keim, Paul; Von Hoff, Daniel
2017-01-01
AIM To review microbiome alterations associated with pancreatic cancer, its potential utility in diagnostics, risk assessment, and influence on disease outcomes. METHODS A comprehensive literature review was conducted by all-inclusive topic review from PubMed, MEDLINE, and Web of Science. The last search was performed in October 2016. RESULTS Diverse microbiome alterations exist among several body sites including oral, gut, and pancreatic tissue, in patients with pancreatic cancer compared to healthy populations. CONCLUSION Pilot study successes in non-invasive screening strategies warrant further investigation for future translational application in early diagnostics and to learn modifiable risk factors relevant to disease prevention. Pre-clinical investigations exist in other tumor types that suggest microbiome manipulation provides opportunity to favorably transform cancer response to existing treatment protocols and improve survival. PMID:28348497
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in microarray data, which usually contain more than 5% missing values, with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods on many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation on six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
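The core of the regression-based imputation pipeline, selecting correlated genes by Pearson correlation, fitting least squares, and shrinking the coefficients, can be sketched for a single missing entry. The uniform shrinkage factor below is an illustrative stand-in for the paper's shrinkage estimator, and the function name is an assumption:

```python
import numpy as np

def impute_entry(X, gene, cond, k=3, shrink=0.5):
    """Estimate the missing entry X[gene, cond] from the k genes most
    correlated with `gene` over the observed conditions, using least
    squares with coefficients shrunk toward zero (`shrink` in (0, 1];
    1.0 = plain least squares). All other entries are assumed observed."""
    obs = [j for j in range(X.shape[1]) if j != cond]
    target = X[gene, obs]
    others = [g for g in range(X.shape[0]) if g != gene]
    # Select the k most similar genes by absolute Pearson correlation.
    corr = [abs(np.corrcoef(X[g, obs], target)[0, 1]) for g in others]
    sel = [others[i] for i in np.argsort(corr)[::-1][:k]]
    # Least squares fit of the target gene on the selected genes.
    A = X[np.ix_(sel, obs)].T                     # conditions x k genes
    beta, *_ = np.linalg.lstsq(A, target, rcond=None)
    beta = shrink * beta                          # shrinkage adjustment
    return float(X[sel, cond] @ beta)
```

Shrinking the coefficients trades a small bias for reduced variance, which is what makes the adjusted regression more robust than plain least squares when the selected genes are noisy.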
NASA Astrophysics Data System (ADS)
Tripathi, Anjan Kumar
Electrically charged particles are found in a wide range of applications ranging from electrostatic powder coating, mineral processing, and powder handling to rain-producing cloud formation in atmospheric turbulent flows. In turbulent flows, particle dynamics is influenced by the electric force due to particle charge generation. Quantifying particle charges in such systems will help in better predicting and controlling particle clustering, relative motion, collision, and growth. However, there is a lack of noninvasive techniques to measure particle charges. Recently, a non-invasive method for particle charge measurement using in-line Digital Holographic Particle Tracking Velocimetry (DHPTV) was developed in our lab, in which the charged particles to be measured were introduced to a uniform electric field, and their movement towards the oppositely charged electrode was deemed proportional to the amount of charge on the particles (Fan Yang, 2014 [1]). However, inherent speckle noise associated with the reconstructed images was not adequately removed, and the particle tracking data were therefore contaminated. Furthermore, particle charge calculation based on particle deflection velocity neglected the particle drag force and the rebound effect of highly charged particles from the electrodes. We improved upon the existing particle charge measurement method by: 1) hologram post-processing, 2) taking the drag force into account in the charge calculation, and 3) considering the rebound effect. The improved method was first fine-tuned through a calibration experiment. The complete method was then applied to two different experiments, namely conduction charging and an enclosed fan-driven turbulence chamber, to measure particle charges. In all three experiments conducted, the particle charge was found to obey the non-central t location-scale family of distributions. It was also noted that the charge distribution was insensitive to the change in voltage applied between the electrodes.
The range of voltage applied where reliable particle charges can be measured was also quantified by taking into account the rebound effect of highly charged particles. Finally, in the enclosed chamber experiment, it was found that using carbon conductive coating on the inner walls of the chamber minimized the charge generation inside the chamber when glass bubble particles were used. The value of electric charges obtained in calibration experiment through the improved method was found to have the same order as reported in the existing work (Y.C Ahn et al. 2004 [2]), indicating that the method is indeed effective.
Gao, Yandong; Zhang, Shubi; Li, Tao; Chen, Qianfu; Li, Shijin; Meng, Pengfei
2018-06-02
Phase unwrapping (PU) is a key step in the reconstruction of digital elevation models (DEMs) and the monitoring of surface deformation from interferometric synthetic aperture radar (InSAR) data. In this paper, an improved PU method is proposed that combines an amended matrix pencil model, an adaptive unscented Kalman filter (AUKF), an efficient heapsort-based quality-guided strategy, and a circular median filter. PU theory and the existing UKFPU method are first reviewed. Then, the improved method is presented, with emphasis on the AUKF and the circular median filter. The AUKF has been used successfully in other fields but, to the best of our knowledge, this is its first application to the PU of interferometric images. First, the amended matrix pencil model is used to estimate the phase gradient. Then, an AUKF model is used to unwrap the interferometric phase following the efficient heapsort-based quality-guided strategy. Finally, the results are refined with a circular median filter. The proposed method is compared with the minimum cost network flow (MCF), statistical cost network flow (SNAPHU), regularized phase tracking technique (RPTPU), and UKFPU methods using two sets of simulated data and two sets of experimental GF-3 SAR data. The improved method yields the most accurate interferometric phase maps of the methods considered in this paper. Furthermore, the improved method is shown to be the most robust to noise and is thus most suitable for PU of GF-3 SAR data in high-noise and low-coherence regions.
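The heapsort-based quality-guided strategy can be sketched as a priority-queue flood fill: the highest-quality candidate pixel is always unwrapped next, relative to an already-unwrapped neighbour. The sketch below illustrates only that ordering idea (names and details are ours); the actual method couples this ordering with the AUKF rather than simple 2π-multiple matching.

```python
import heapq
import math

def quality_guided_unwrap(phase, quality, seed=(0, 0)):
    """Quality-guided 2D phase unwrapping using a heap (heapsort order).

    Pixels are unwrapped in order of decreasing quality: each new pixel's
    phase is shifted by the multiple of 2*pi that best matches an already-
    unwrapped neighbour. `phase` and `quality` are 2D lists of floats.
    """
    rows, cols = len(phase), len(phase[0])
    unwrapped = [[None] * cols for _ in range(rows)]
    unwrapped[seed[0]][seed[1]] = phase[seed[0]][seed[1]]
    heap = []  # max-heap via negated quality

    def push_neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and unwrapped[nr][nc] is None:
                heapq.heappush(heap, (-quality[nr][nc], nr, nc, r, c))

    push_neighbours(*seed)
    while heap:
        _, r, c, pr, pc = heapq.heappop(heap)
        if unwrapped[r][c] is not None:
            continue  # already unwrapped via a higher-quality path
        ref = unwrapped[pr][pc]
        diff = phase[r][c] - ref
        # shift by the 2*pi multiple that minimises the jump to the neighbour
        unwrapped[r][c] = phase[r][c] - 2 * math.pi * round(diff / (2 * math.pi))
        push_neighbours(r, c)
    return unwrapped
```

Because the heap always yields the best remaining candidate, the unwrapping path grows through high-coherence regions first and reaches noisy, low-quality pixels last, which is the point of quality guidance.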
Pre-processing by data augmentation for improved ellipse fitting.
Kumar, Pankaj; Belchamber, Erika R; Miklavcic, Stanley J
2018-01-01
Ellipse fitting is a highly researched and mature topic. Surprisingly, however, no existing method has thus far considered the data point eccentricity in its ellipse fitting procedure. Here, we introduce the concept of the eccentricity of a data point, in analogy with the idea of ellipse eccentricity. We then show empirically that, irrespective of the ellipse fitting method used, the root mean square error (RMSE) of a fit increases with the eccentricity of the data point set. The main contribution of the paper is based on the hypothesis that if the data point set were pre-processed to strategically add additional data points in regions of high eccentricity, then the quality of a fit could be improved. Conditional validity of this hypothesis is demonstrated mathematically using a model scenario. Based on this confirmation, we propose an algorithm that pre-processes the data so that data points with high eccentricity are replicated. The improvement in ellipse fitting is then demonstrated empirically in a real-world application: 3D reconstruction of a plant root system for phenotypic analysis. The degree of improvement for different underlying ellipse fitting methods as a function of data noise level is also analysed. We show that almost every method tested, irrespective of whether it minimizes algebraic or geometric error, shows improvement in the fit following data augmentation using the proposed pre-processing algorithm.
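The pre-processing idea, replicating data points in high-eccentricity regions before running any standard fitter, can be sketched as follows. The per-point eccentricity measure used here (distance from the centroid) is only a stand-in for the paper's definition, and the algebraic least-squares conic fit is just one of the many underlying methods the augmentation could feed.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares algebraic conic fit: a x^2 + b xy + c y^2 + d x + e y = 1."""
    D = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coeffs

def augment_high_eccentricity(x, y, factor=3):
    """Pre-processing sketch: replicate data points in the 'pointier' half of
    the set. Distance from the centroid is used as a stand-in for the paper's
    per-point eccentricity measure."""
    cx, cy = x.mean(), y.mean()
    r = np.hypot(x - cx, y - cy)
    hi = r > np.median(r)
    xa = np.concatenate([x] + [x[hi]] * (factor - 1))
    ya = np.concatenate([y] + [y[hi]] * (factor - 1))
    return xa, ya

# Noisy samples from an eccentric ellipse x^2/25 + y^2 = 1
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 60)
x = 5 * np.cos(t) + rng.normal(0, 0.05, t.size)
y = np.sin(t) + rng.normal(0, 0.05, t.size)
xa, ya = augment_high_eccentricity(x, y)
coeffs = fit_conic(xa, ya)
```

The replication simply re-weights the least-squares objective toward the high-eccentricity regions, which is why the same trick helps fitters that minimize either algebraic or geometric error.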
A new method to improve network topological similarity search: applied to fold recognition
Lhota, John; Hauptman, Ruth; Hart, Thomas; Ng, Clara; Xie, Lei
2015-01-01
Motivation: Similarity search is the foundation of bioinformatics. It plays a key role in establishing structural, functional and evolutionary relationships between biological sequences. Although the power of the similarity search has increased steadily in recent years, a high percentage of sequences remain uncharacterized in the protein universe. Thus, new similarity search strategies are needed to efficiently and reliably infer the structure and function of new sequences. The existing paradigm for studying protein sequence, structure, function and evolution has been established based on the assumption that the protein universe is discrete and hierarchical. Cumulative evidence suggests that the protein universe is continuous. As a result, conventional sequence homology search methods may not be able to detect novel structural, functional and evolutionary relationships between proteins from weak and noisy sequence signals. To overcome the limitations of existing similarity search methods, we propose a new algorithmic framework, Enrichment of Network Topological Similarity (ENTS), to improve the performance of large-scale similarity searches in bioinformatics. Results: We apply ENTS to a challenging unsolved problem: protein fold recognition. Our rigorous benchmark studies demonstrate that ENTS considerably outperforms state-of-the-art methods. As the concept of ENTS can be applied to any similarity metric, it may provide a general framework for similarity search on any set of biological entities, given their representation as a network. Availability and implementation: Source code freely available upon request. Contact: lxie@iscb.org PMID:25717198
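The general idea of enriching a raw similarity metric over a network can be sketched with a random-walk-with-restart iteration, the same machinery that underlies PageRank-style ranking. This is a generic illustration, not the authors' ENTS algorithm; the normalisation and restart parameter are assumptions.

```python
import numpy as np

def propagate_similarity(W, s0, alpha=0.85, iters=100):
    """Diffuse raw query-similarity scores s0 over a similarity network W
    using random walk with restart. Returns enriched scores in which
    network neighbours reinforce each other, so weak but topologically
    consistent hits are promoted.
    """
    # column-normalise W into a transition matrix
    P = W / W.sum(axis=0, keepdims=True)
    s = s0.copy()
    for _ in range(iters):
        # walk one step over the network, then restart at the raw scores
        s = alpha * P @ s + (1 - alpha) * s0
    return s
```

A protein whose raw score is zero but which sits next to strong hits in the network ends up with a non-trivial enriched score, which is exactly the kind of weak-signal relationship a direct pairwise search misses.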
DOE Office of Scientific and Technical Information (OSTI.GOV)
April M. Whaley; Stacey M. L. Hendrickson; Ronald L. Boring
In response to Staff Requirements Memorandum (SRM) SRM-M061020, the U.S. Nuclear Regulatory Commission (NRC) is sponsoring work to update the technical basis underlying human reliability analysis (HRA) in an effort to improve the robustness of HRA. The ultimate goal of this work is to develop a hybrid of existing methods addressing limitations of current HRA models, in particular issues related to intra- and inter-method variabilities in results. This hybrid method is now known as the Integrated Decision-tree Human Event Analysis System (IDHEAS). Existing HRA methods have looked at elements of the psychological literature, but there has not previously been a systematic attempt to translate the complete span of cognition from perception to action into mechanisms that can inform HRA. Therefore, a first step of this effort was to perform a literature search of psychology, cognition, behavioral science, teamwork, and operating performance to incorporate current understanding of human performance in operating environments, thus affording an improved technical foundation for HRA. However, this literature review went one step further by mining the literature findings to establish causal relationships and explicit links between the different types of human failures, performance drivers and associated performance measures ultimately used for quantification. This is the first of two papers that detail the literature review (paper 1) and its product (paper 2). This paper describes the literature review and the high-level architecture used to organize it, and the second paper (Whaley, Hendrickson, Boring, & Xing, these proceedings) describes the resultant cognitive framework.
Padula, Daniele; Cerezo, Javier; Pescitelli, Gennaro; Santoro, Fabrizio
2017-12-13
Comparison between chiroptical spectra and theoretical predictions is the method of choice for the assignment of the absolute configuration of chiral compounds in solution. Here we report the case of an apparently simple biarylcarbinol, whose electronic circular dichroism (ECD) in the ¹Lb region exhibits a peculiar alternation of negative and positive bands. Adopting Density Functional Theory, and describing solvent effects with implicit methods, we found three stable conformers in ethanol, each of them with two close-lying states corresponding to similar local ¹Lb excitations on the two phenyls. We computed the corresponding vibronic ECD spectra in the harmonic approximation, including Duschinsky mixings as well as both Franck-Condon (FC) and Herzberg-Teller (HT) effects. Exploiting a recently developed mixed quantum/classical method, we further investigated the contribution of the vibronic spectra of out-of-equilibrium structures along the interconversion path connecting the different conformers. In this way, we achieved a reasonable agreement with experiment and attributed the alternating signs of the bands to the existence of different conformers. The remaining discrepancies with experiment indicate that specific solute-solvent interactions modulate the relative conformers' stabilities, calling for new methods able to combine Molecular Dynamics explorations and vibronic calculations. Moreover, the poor performance of HT approaches and the existence of two closely lying states suggest the necessity of an improved fully non-adiabatic vibronic approach. These findings demonstrate that even for such a simple system as the biarylcarbinol investigated here, a full reproduction of the fine details of the ECD spectrum requires the development of new improved methods.
Scott A. Stolnack; Mason D. Bryant; Robert C. Wissmar
2005-01-01
This document reviews existing and proposed protocols used to monitor stream ecosystem conditions and responses to land management activities in the Pacific Northwest. Because of recent work aimed at improving the utility of habitat survey and fish abundance assessment methods, this review focuses on current (since 1993) monitoring efforts that assess stream habitat...
Background/Question/Methods: Lake and stream conditions respond to both natural and human-related landscape features. Characterizing these features within contributing areas (i.e., delineated watersheds) of streams and lakes could improve our understanding of how biological conditi...
Multi-dimensional tunnelling and complex momentum
NASA Technical Reports Server (NTRS)
Bowcock, Peter; Gregory, Ruth
1991-01-01
The problem of modeling tunneling phenomena in more than one dimension is examined. It is found that existing techniques are inadequate in a wide class of situations, due to their inability to deal with concurrent classical motion. The generalization of these methods to allow for complex momenta is shown, and improved techniques are demonstrated with a selection of illustrative examples. Possible applications are presented.
Methods to Improve Establishment and Growth of Bottomland Hardwood Artificial Regeneration
Callie Jo Schweitzer; Emile S. Gardiner; John A. Stanturf; Andrew W. Ezell
1999-01-01
With ongoing attempts to reforest both cut-over and abandoned agricultural land in the lower Mississippi alluvial plain, it has become evident that there exists a need for an efficient regeneration system that makes biological and economic sense. There is also a need to address how to minimize competition from invading weeds, to deter predation by small mammals, and...
Relative loading on biplane wings
NASA Technical Reports Server (NTRS)
Diehl, Walter S
1934-01-01
Recent improvements in stress analysis methods have made it necessary to revise and to extend the loading curves to cover all conditions of flight. This report is concerned with a study of existing biplane data by combining the experimental and theoretical data to derive a series of curves from which the lift curves of the individual wings of a biplane may be obtained.
Job Knowledge Test Design: A Cognitively-Oriented Approach. Institute Report No. 241.
ERIC Educational Resources Information Center
DuBois, David; And Others
Selected cognitive science methods were used to modify existing test development procedures so that the modified procedures could in turn be used to improve the usefulness of job knowledge tests as a proxy for hands-on performance. A plan-goal graph representation was used to capture the knowledge content and goal structure of the task of using a…
ERIC Educational Resources Information Center
Van Hoye, A.; Heuzé, J.-P.; Larsen, T.; Sarrazin, P.
2016-01-01
Despite the call to improve health promotion (HP) in sport clubs in the existing literature, little is known about sport clubs' organizational capacity. Grounded within the setting-based framework, this study compares HP activities and guidance among 10 football clubs. At least three grassroots coaches from each club (n = 68) completed the Health…
ERIC Educational Resources Information Center
Commission des Communautes Europeennes (Luxembourg).
The papers presented here have a double objective: to give those responsible for the Action plan for the improvement of information transfer between European languages a good view of existing and developing systems and to make future users of EURONET acquainted with methods and tools that will soon be available. The papers are arranged under six…
The Impact of a Therapy Dog Program on Children's Reading Skills and Attitudes toward Reading
ERIC Educational Resources Information Center
Kirnan, Jean; Siminerio, Steven; Wong, Zachary
2016-01-01
An existing school program in which therapy dogs are integrated into the reading curriculum was analyzed to determine the effect on student reading. Previous literature suggests an improvement in both reading skills and attitudes towards reading when students read in the presence of a therapy dog. Using a mixed method model, the researchers…
Learning context-sensitive shape similarity by graph transduction.
Bai, Xiang; Yang, Xingwei; Latecki, Longin Jan; Liu, Wenyu; Tu, Zhuowen
2010-05-01
Shape similarity and shape retrieval are very important topics in computer vision. Recent progress in this domain has been mostly driven by designing smart shape descriptors that provide better similarity measures between pairs of shapes. In this paper, we provide a new perspective on this problem by considering the existing shapes as a group and studying their similarity measures to the query shape in a graph structure. Our method is general and can be built on top of any existing shape similarity measure. For a given similarity measure, a new similarity is learned through graph transduction. The new similarity is learned iteratively so that the neighbors of a given shape influence its final similarity to the query. The basic idea is related to the PageRank ranking that forms the foundation of Google web search. The presented experimental results demonstrate that the proposed approach yields significant improvements over state-of-the-art shape matching algorithms. We obtained a retrieval rate of 91.61 percent on the MPEG-7 data set, the highest ever reported in the literature. Moreover, the similarity learned by the proposed method also achieves promising improvements in both shape classification and shape clustering.
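The iterative neighbour-influenced update can be sketched as a simple label-propagation scheme: the raw pairwise similarities define a transition matrix, and each shape's similarity to the query is repeatedly replaced by a weighted average over its neighbours while the query is clamped. This is a simplified sketch of graph transduction, not the authors' exact update rule.

```python
import numpy as np

def graph_transduction_similarity(S, query, iters=50):
    """Learn a context-sensitive similarity to `query` by graph transduction.

    S is a raw pairwise similarity matrix (symmetric, positive). Each
    shape's score becomes the transition-weighted average of its
    neighbours' scores, with the query's score clamped to 1, so shapes in
    the query's cluster reinforce each other.
    """
    P = S / S.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    f = np.zeros(len(S))
    f[query] = 1.0
    for _ in range(iters):
        f = P @ f
        f[query] = 1.0  # clamp the query's own score
    return f
```

Shapes that are only weakly similar to the query directly, but strongly similar to its strong matches, get promoted, which is how the learned similarity becomes context-sensitive.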
Improving Measures of Work-Related Physical Functioning
McDonough, Christine M.; Ni, Pengsheng; Peterik, Kara; Marfeo, Elizabeth E.; Marino, Molly E.; Meterko, Mark; Rasch, Elizabeth K.; Brandt, Diane E.; Jette, Alan M.; Chan, Leighton
2016-01-01
Purpose: To expand the content of the physical function domain of the Work Disability Functional Assessment Battery (WD-FAB), developed for the US Social Security Administration's (SSA) disability determination process. Methods: Newly developed questions were administered to 3,532 recent SSA applicants for work disability benefits and 2,025 US adults. Factor analyses and item response theory (IRT) methods were used to calibrate and link the new items to the existing WD-FAB, and computer-adaptive test simulations were conducted. Results: Factor and IRT analyses supported integration of 44 new items into 3 existing WD-FAB scales and the addition of a new 11-item scale (Community Mobility). The final physical function domain, consisting of Basic Mobility (56 items), Upper Body Function (34 items), Fine Motor Function (45 items), and Community Mobility (11 items), demonstrated acceptable psychometric properties. Conclusions: The WD-FAB offers an important tool for the enhancement of work disability determination. It could provide relevant information about work-related functioning for the initial assessment of claimants, identifying denied applicants who may benefit from interventions to improve work and health outcomes; enhancing periodic review of work disability beneficiaries; and assessing outcomes for policies, programs and services targeting people with work disability. PMID:28005243
Precision enhancement of pavement roughness localization with connected vehicles
NASA Astrophysics Data System (ADS)
Bridgelall, R.; Huang, Y.; Zhang, Z.; Deng, F.
2016-02-01
Transportation agencies rely on the accurate localization and reporting of roadway anomalies that could pose serious hazards to the traveling public. However, the cost and technical limitations of present methods prevent their scaling to all roadways. Connected vehicles with on-board accelerometers and conventional geospatial position receivers offer an attractive alternative because of their potential to monitor all roadways in real-time. The conventional global positioning system is ubiquitous and essentially free to use but it produces impractically large position errors. This study evaluated the improvement in precision achievable by augmenting the conventional geo-fence system with a standard speed bump or an existing anomaly at a pre-determined position to establish a reference inertial marker. The speed sensor subsequently generates position tags for the remaining inertial samples by computing their path distances relative to the reference position. The error model and a case study using smartphones to emulate connected vehicles revealed that the precision in localization improves from tens of metres to sub-centimetre levels, and the accuracy of measuring localized roughness more than doubles. The research results demonstrate that transportation agencies will benefit from using the connected vehicle method to achieve precision and accuracy levels that are comparable to existing laser-based inertial profilers.
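The reference-marker tagging step can be sketched directly: once one inertial sample is pinned to the surveyed marker position, every other sample's path distance follows by integrating the speed signal. A minimal sketch, assuming a uniformly sampled speed signal and rectangle-rule integration; the function name and the marker-detection step itself are ours.

```python
import numpy as np

def tag_positions(speeds, dt, marker_index, marker_position):
    """Assign a path distance to every inertial sample by integrating the
    vehicle speed signal relative to a reference inertial marker (e.g. a
    speed bump at a surveyed position).

    speeds          : speed samples (m/s), one per inertial sample
    dt              : sampling interval (s)
    marker_index    : sample index at which the marker was detected
    marker_position : surveyed path distance of the marker (m)
    """
    # cumulative distance travelled since the first sample (rectangle rule)
    d = np.concatenate([[0.0], np.cumsum(speeds[:-1] * dt)])
    # shift so the marker sample lands exactly on the surveyed position
    return marker_position + (d - d[marker_index])
```

Because every sample is referenced to the surveyed marker rather than to a GNSS fix, the localization error is set by the speed sensor and sampling rate instead of the tens-of-metres GPS error.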
Golder, Janet; Farlie, Melanie K; Sevenhuysen, Samantha
2016-01-01
Efficient utilisation of education resources is required for the delivery of effective learning opportunities for allied health professionals. This study aimed to develop an education framework to support delivery of high-quality education within existing education resources. The study was conducted in a large metropolitan health service. Homogeneous and purposive sampling methods were utilised in the Phase 1 (n=43) and Phase 2 (n=14) consultation stages. Participants included 25 allied health professionals, 22 managers, 1 educator, and 3 executives. Field notes taken during 43 semi-structured interviews and 4 focus groups were member-checked, and semantic thematic analysis methods were utilised. Framework design was informed by existing published framework development guides. The framework model contains governance, planning, delivery, and evaluation and research elements and identifies performance indicators, practice examples, and support tools for a range of stakeholders. Themes integrated into framework content include improving the quality of education and training provided, improving delivery efficiency, greater understanding of education role requirements, and workforce support for education-specific knowledge and skill development. This framework supports efficient delivery of allied health workforce education and training to the highest standard, whilst pragmatically considering current allied health education workforce demands.
NASA Astrophysics Data System (ADS)
Zhao, Jin; Han-Ming, Zhang; Bin, Yan; Lei, Li; Lin-Yuan, Wang; Ai-Long, Cai
2016-03-01
Sparse-view x-ray computed tomography (CT) imaging is an active topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. Project supported by the National High Technology Research and Development Program of China (Grant No. 2012AA011603) and the National Natural Science Foundation of China (Grant No. 61372172).
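The iterate-between-Fourier-and-image-space scheme can be sketched with a uniform FFT standing in for the NUFFT (which handles the non-uniformly spaced fan-beam samples in the actual method). Each pass enforces data consistency on the measured Fourier samples and then takes a gradient step on a smoothed TV penalty; all parameter values here are illustrative assumptions.

```python
import numpy as np

def fourier_tv_recon(y, mask, iters=200, step=1.0, lam=0.02, eps=1e-8):
    """Sketch of Fourier-based iterative reconstruction with TV regularization.

    y    : measured Fourier samples (zero outside the sampling pattern)
    mask : boolean sampling pattern in Fourier space
    A uniform fft2/ifft2 pair plays the role of the NUFFT and its adjoint.
    """
    x = np.zeros(mask.shape)
    for _ in range(iters):
        # data consistency: push the FFT of x toward the measurements
        r = (np.fft.fft2(x) - y) * mask
        x = x - step * np.real(np.fft.ifft2(r))
        # smoothed total-variation gradient step (forward diff, backward div)
        gx = np.roll(x, -1, 0) - x
        gy = np.roll(x, -1, 1) - x
        n = np.sqrt(gx**2 + gy**2 + eps)
        div = (gx / n - np.roll(gx / n, 1, 0)) + (gy / n - np.roll(gy / n, 1, 1))
        x = x + step * lam * div
    return x
```

The data-consistency step is cheap because it is a pair of FFTs rather than a large system matrix, which is the source of the speed and memory advantages the abstract mentions.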
Electrocardiogram signal denoising based on a new improved wavelet thresholding
NASA Astrophysics Data System (ADS)
Han, Guoqiang; Xu, Zhijun
2016-08-01
Good quality electrocardiogram (ECG) signals are used by physicians for the interpretation and identification of physiological and pathological phenomena. In practice, ECG signals may be contaminated by various noises, such as baseline wander, power line interference, and electromagnetic interference, during acquisition and recording. As ECG signals are non-stationary physiological signals, the wavelet transform is an effective tool for discarding noise from corrupted signals. A new compromise thresholding function, a sigmoid-based thresholding scheme, is adopted for processing ECG signals. Compared with hard/soft thresholding and other existing thresholding functions, the new algorithm has many advantages for noise reduction in ECG signals: it overcomes the discontinuity at ±T of hard thresholding and reduces the fixed deviation of soft thresholding. The improved wavelet thresholding denoising is shown to be more effective than existing algorithms for ECG signal denoising. The signal-to-noise ratio, mean square error, and percent root mean square difference are calculated as quantitative measures of denoising performance. The experimental results reveal that the P, Q, R, and S waves of the denoised ECG signals coincide with those of the original ECG signals when the proposed method is employed.
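A sigmoid-weighted compromise between hard and soft thresholding can be sketched as below. The paper's exact sigmoid-based function is not given in the abstract, so the form here (a logistic weight on the coefficient magnitude, with an assumed steepness k) is an assumption that merely reproduces the stated properties: continuity at ±T and vanishing bias for large coefficients.

```python
import numpy as np

def sigmoid_threshold(w, T, k=10.0):
    """Compromise wavelet thresholding via a sigmoid weight (assumed form).

    Coefficients well below the threshold T are suppressed, coefficients
    well above T are kept almost unchanged, and the transition is smooth,
    avoiding the jump of hard thresholding at |w| = T and the constant
    bias T of soft thresholding.
    """
    weight = 1.0 / (1.0 + np.exp(-k * (np.abs(w) - T)))
    return w * weight

def soft_threshold(w, T):
    """Classical soft thresholding: shrinks every surviving coefficient by T."""
    return np.sign(w) * np.maximum(np.abs(w) - T, 0.0)

def hard_threshold(w, T):
    """Classical hard thresholding: discontinuous at |w| = T."""
    return np.where(np.abs(w) > T, w, 0.0)
```

In a denoising pipeline the function would be applied to the detail coefficients of each wavelet decomposition level before the inverse transform; here only the thresholding rule itself is shown.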
Singh, Anushikha; Dutta, Malay Kishore; ParthaSarathi, M; Uher, Vaclav; Burget, Radim
2016-02-01
Glaucoma is a disease of the retina and one of the most common causes of permanent blindness worldwide. This paper presents an automatic image-processing-based method for glaucoma diagnosis from digital fundus images, in which wavelet feature extraction is followed by optimized genetic feature selection combined with several learning algorithms and various parameter settings. Unlike existing research works, where features are computed from the complete fundus or a sub-image of the fundus, this work extracts features from the segmented, blood-vessel-removed optic disc to improve the accuracy of identification. The experimental results presented in this paper indicate that the wavelet features of the segmented optic disc image are clinically more significant than features of the whole or sub fundus image in the detection of glaucoma from fundus images. The accuracy of glaucoma identification achieved in this work is 94.7%, and a comparison with existing methods of glaucoma detection from fundus images indicates that the proposed approach has improved classification accuracy. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.