Fuzzy support vector machine: an efficient rule-based classification technique for microarrays.
Hajiloo, Mohsen; Rabiee, Hamid R; Anooshahpour, Mahdi
2013-01-01
The abundance of gene expression microarray data has led to the development of machine learning algorithms for tackling disease diagnosis, disease prognosis, and treatment selection problems. However, these algorithms often produce classifiers with weaknesses in accuracy, robustness, and interpretability. This paper introduces the fuzzy support vector machine, a learning algorithm that combines fuzzy classifiers and kernel machines for microarray classification. Experimental results on public leukemia, prostate, and colon cancer datasets show that the fuzzy support vector machine, applied in combination with filter or wrapper feature selection methods, develops a robust model with higher accuracy than conventional microarray classification models such as the support vector machine, artificial neural network, decision trees, k nearest neighbors, and diagonal linear discriminant analysis. Furthermore, the interpretable rule base inferred from the fuzzy support vector machine helps extract biological knowledge from microarray data. With its high generalization power, robustness, and good interpretability, the fuzzy support vector machine appears to be a promising tool for gene expression microarray classification.
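The membership-weighting idea at the heart of a fuzzy SVM can be sketched in a few lines: each training sample receives a weight in (0, 1] that shrinks with its distance from its class centroid, so outliers influence the decision boundary less. This is a minimal illustration under assumptions, not the paper's implementation; the linear-decay scheme and function names are hypothetical.

```python
from collections import defaultdict

def class_memberships(samples, labels, delta=1e-6):
    """Assign each sample a fuzzy membership based on centroid distance.

    Membership is 1.0 at the class centroid and decays linearly to ~delta
    at the farthest point of the class (an illustrative decay scheme).
    """
    by_class = defaultdict(list)
    for x, y in zip(samples, labels):
        by_class[y].append(x)

    # Per-class centroid: coordinate-wise mean of the class members.
    centroids = {
        y: [sum(col) / len(pts) for col in zip(*pts)]
        for y, pts in by_class.items()
    }

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5

    # Class radius: distance of the farthest member from its centroid.
    radii = {
        y: max(dist(x, centroids[y]) for x in pts)
        for y, pts in by_class.items()
    }

    return [
        1.0 - (1.0 - delta) * dist(x, centroids[y]) / radii[y]
        if radii[y] > 0 else 1.0
        for x, y in zip(samples, labels)
    ]
```

In a downstream weighted SVM, these memberships would scale each sample's slack penalty, so a mislabeled or noisy outlier cannot pull the margin as strongly as a typical sample.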
Yu, Hualong; Hong, Shufang; Yang, Xibei; Ni, Jun; Dan, Yuanyuan; Qin, Bin
2013-01-01
DNA microarray technology can measure the activities of tens of thousands of genes simultaneously, which provides an efficient way to diagnose cancer at the molecular level. Although this strategy has attracted significant research attention, most studies neglect an important problem: most DNA microarray datasets are skewed, which causes traditional learning algorithms to produce inaccurate results. Some studies have considered this problem, yet they focus only on the binary-class case. In this paper, we dealt with the multiclass imbalanced classification problem, as encountered in cancer DNA microarray data, by using ensemble learning. We utilized a one-against-all coding strategy to transform the multiclass problem into multiple binary-class problems, each of which employs feature subspace, an evolving version of random subspace that generates multiple diverse training subsets. Next, we introduced one of two different correction technologies, namely decision threshold adjustment or random undersampling, into each training subset to mitigate the effects of class imbalance. Specifically, the support vector machine was used as the base classifier, and a novel voting rule called counter voting was presented for making the final decision. Experimental results on eight skewed multiclass cancer microarray datasets indicate that, unlike many traditional classification approaches, our methods are insensitive to class imbalance.
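The one-against-all decomposition and random undersampling steps described above can be sketched as follows. The helper names are illustrative assumptions; the paper's feature-subspace generation and counter-voting rule are omitted.

```python
import random

def one_against_all(labels):
    """Recode a multiclass label list into one binary problem per class.

    Returns a dict mapping each class to a 0/1 label vector in which
    that class is the positive class and all others are negative.
    """
    classes = sorted(set(labels))
    return {c: [1 if y == c else 0 for y in labels] for c in classes}

def undersample(indices_pos, indices_neg, rng):
    """Randomly drop majority-class indices until both classes match in size."""
    pos, neg = list(indices_pos), list(indices_neg)
    big, small = (pos, neg) if len(pos) > len(neg) else (neg, pos)
    return small + rng.sample(big, len(small))
```

Each balanced index set would then train one binary base classifier (an SVM in the paper), and the per-class outputs would be fused to yield the final multiclass decision.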
Multiclassifier information fusion methods for microarray pattern recognition
NASA Astrophysics Data System (ADS)
Braun, Jerome J.; Glina, Yan; Judson, Nicholas; Herzig-Marx, Rachel
2004-04-01
This paper addresses automatic recognition of microarray patterns, a capability that could have major significance for medical diagnostics, enabling development of diagnostic tools for automatic discrimination of specific diseases. The paper presents multiclassifier information fusion methods for microarray pattern recognition. The input space partitioning approach, based on fitness measures that constitute an a priori gauging of classification efficacy for each subspace, is investigated. Methods for generation of fitness measures, generation of input subspaces and their use in the multiclassifier fusion architecture are presented. In particular, two-level quantification of fitness that accounts for the quality of each subspace as well as the quality of individual neighborhoods within the subspace is described. Individual-subspace classifiers are Support Vector Machine-based. The decision fusion stage fuses the information from multiple SVMs along with the multi-level fitness information. Final decision fusion stage techniques, including weighted fusion as well as Dempster-Shafer theory-based fusion, are investigated. It should be noted that while the above methods are discussed in the context of microarray pattern recognition, they are applicable to a broader range of discrimination problems, in particular to problems involving a large number of information sources irreducible to a low-dimensional feature space.
Hierarchical Gene Selection and Genetic Fuzzy System for Cancer Microarray Data Classification
Nguyen, Thanh; Khosravi, Abbas; Creighton, Douglas; Nahavandi, Saeid
2015-01-01
This paper introduces a novel approach to gene selection based on a substantial modification of the analytic hierarchy process (AHP). The modified AHP systematically integrates the outcomes of individual filter methods to select the most informative genes for microarray classification. Five individual ranking methods, including t-test, entropy, receiver operating characteristic (ROC) curve, Wilcoxon and signal-to-noise ratio, are employed to rank genes. These ranked genes are then considered as inputs for the modified AHP. Additionally, a method that uses the fuzzy standard additive model (FSAM) for cancer classification based on genes selected by AHP is also proposed in this paper. Traditional FSAM learning is a hybrid process comprising unsupervised structure learning and supervised parameter tuning. A genetic algorithm (GA) is incorporated between unsupervised and supervised training to optimize the number of fuzzy rules. The integration of the GA enables FSAM to deal with the high-dimensional, low-sample nature of microarray data and thus enhances the efficiency of the classification. Experiments are carried out on numerous microarray datasets. Results demonstrate the performance dominance of the AHP-based gene selection over the single ranking methods. Furthermore, the AHP-FSAM combination shows great accuracy in microarray data classification compared with various competing classifiers. The proposed approach is therefore useful for medical practitioners and clinicians as a decision support system that can be implemented in real medical practice. PMID:25823003
Burgarella, Sarah; Cattaneo, Dario; Masseroli, Marco
2006-01-01
We developed MicroGen, a multi-database Web-based system for managing all the information characterizing spotted microarray experiments. It supports information gathering and storage according to the Minimum Information About Microarray Experiments (MIAME) standard. It also allows easy sharing of information and data among all the multidisciplinary actors involved in spotted microarray experiments. PMID:17238488
Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C
2004-09-01
Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process for the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve and keep up with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions for these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose linking to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, the Molecular Genetics Information System (MOLGENIS), implemented using code generation in a period of three months. This case can provide valuable insights and lessons to both software developers and a user community embarking on large-scale genomic projects. http://www.molgenis.nl
Honoré, Paul; Granjeaud, Samuel; Tagett, Rebecca; Deraco, Stéphane; Beaudoing, Emmanuel; Rougemont, Jacques; Debono, Stéphane; Hingamp, Pascal
2006-09-20
High throughput gene expression profiling (GEP) is becoming a routine technique in life science laboratories. With experimental designs that repeatedly span thousands of genes and hundreds of samples, relying on a dedicated database infrastructure is no longer an option. GEP technology is a fast-moving target, with new approaches constantly broadening the field diversity. This technology heterogeneity, compounded by the informatics complexity of GEP databases, means that software developments have so far focused on mainstream techniques, leaving less typical yet established techniques such as Nylon microarrays at best partially supported. MAF (MicroArray Facility) is the laboratory database system we have developed for managing the design, production and hybridization of spotted microarrays. Although it can support the widely used glass microarrays and oligo-chips, MAF was designed with the specific idiosyncrasies of Nylon based microarrays in mind. Notably single channel radioactive probes, microarray stripping and reuse, vector control hybridizations and spike-in controls are all natively supported by the software suite. MicroArray Facility is MIAME supportive and dynamically provides feedback on missing annotations to help users estimate effective MIAME compliance. Genomic data such as clone identifiers and gene symbols are also directly annotated by MAF software using standard public resources. The MAGE-ML data format is implemented for full data export. Journalized database operations (audit tracking), data anonymization, material traceability and user/project level confidentiality policies are also managed by MAF. MicroArray Facility is a complete data management system for microarray producers and end-users. Particular care has been devoted to adequately model Nylon based microarrays. The MAF system, developed and implemented in both private and academic environments, has proved a robust solution for shared facilities and industry service providers alike.
Prediction of clinical behaviour and treatment for cancers.
Futschik, Matthias E; Sullivan, Mike; Reeve, Anthony; Kasabov, Nikola
2003-01-01
Prediction of clinical behaviour and treatment for cancers is based on the integration of clinical and pathological parameters. Recent reports have demonstrated that gene expression profiling provides a powerful new approach for determining disease outcome. If clinical and microarray data each contain independent information then it should be possible to combine these datasets to gain more accurate prognostic information. Here, we have used existing clinical information and microarray data to generate a combined prognostic model for outcome prediction for diffuse large B-cell lymphoma (DLBCL). A prediction accuracy of 87.5% was achieved. This constitutes a significant improvement compared to the previously most accurate prognostic model with an accuracy of 77.6%. The model introduced here may be generally applicable to the combination of various types of molecular and clinical data for improving medical decision support systems and individualising patient care.
Multi-test decision tree and its application to microarray data classification.
Czajkowski, Marcin; Grześ, Marek; Kretowski, Marek
2014-05-01
A desirable property of tools used to investigate biological data is that they produce easy-to-understand models and predictive decisions. Decision trees are particularly promising in this regard due to their comprehensible nature, which resembles the hierarchical process of human decision making. However, existing algorithms for learning decision trees have a tendency to underfit gene expression data. The main aim of this work is to improve the performance and stability of decision trees with only a small increase in their complexity. We propose the multi-test decision tree (MTDT); our main contribution is the application of several univariate tests in each non-terminal node of the decision tree. We also search for alternative, lower-ranked features in order to obtain more stable and reliable predictions. Experimental validation was performed on several real-life gene expression datasets. Comparison results with eight classifiers show that MTDT has a statistically significantly higher accuracy than popular decision tree classifiers, and it was highly competitive with ensemble learning algorithms. The proposed solution managed to outperform its baseline algorithm on 14 datasets by an average of 6%. A study performed on one of the datasets showed that the discovered genes used in the MTDT classification model are supported by biological evidence in the literature. This paper introduces a new type of decision tree which is more suitable for solving biological problems. MTDTs are relatively easy to analyze and much more powerful in modeling high-dimensional microarray data than their popular counterparts. Copyright © 2014 Elsevier B.V. All rights reserved.
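The central multi-test idea, several univariate tests voting at a single non-terminal node, can be illustrated with a small sketch. The (feature index, threshold) test encoding and the majority-vote routing rule are assumptions for illustration, not the authors' exact formulation.

```python
def multi_test_route(sample, tests):
    """Route a sample at a multi-test node by majority vote.

    Each test is an assumed (feature_index, threshold) pair that votes
    'right' when the feature value exceeds the threshold. The node sends
    the sample right (True) when a strict majority of tests agree,
    making the split more robust than any single univariate test.
    """
    votes = sum(1 for feat, thr in tests if sample[feat] > thr)
    return votes * 2 > len(tests)
```

Because several correlated but distinct genes back each split, a single noisy expression value is less likely to misroute a sample, which is the stability gain the abstract describes.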
Abstract for presentation. Advances in genomics will have significant implications for risk assessment policies and regulatory decision making. In 2002, EPA issued its Interim Policy on Genomics which stated that such data may be considered in the decision making process, but tha...
Pre- and post-test genetic counseling for chromosomal and Mendelian disorders.
Fonda Allen, Jill; Stoll, Katie; Bernhardt, Barbara A
2016-02-01
Genetic carrier screening, prenatal screening for aneuploidy, and prenatal diagnostic testing have expanded dramatically over the past 2 decades. Driven in part by powerful market forces, new complex testing modalities have become available after limited clinical research. The responsibility for offering these tests lies primarily on the obstetrical care provider and has become more burdensome as the number of testing options expands. Genetic testing in pregnancy is optional, and decisions about undergoing tests, as well as follow-up testing, should be informed and based on individual patients' values and needs. Careful pre- and post-test counseling is central to supporting informed decision-making. This article explores three areas of technical expansion in genetic testing: expanded carrier screening, non-invasive prenatal screening for fetal aneuploidies using cell-free DNA, and diagnostic testing using fetal chromosomal microarray testing, and provides insights aimed at enabling the obstetrical practitioner to better support patients considering these tests. Copyright © 2016 Elsevier Inc. All rights reserved.
Cangelosi, Davide; Muselli, Marco; Parodi, Stefano; Blengio, Fabiola; Becherini, Pamela; Versteeg, Rogier; Conte, Massimo; Varesio, Luigi
2014-01-01
A cancer patient's outcome is written, in part, in the gene expression profile of the tumor. We previously identified a 62-probe-set signature (NB-hypo) to identify tissue hypoxia in neuroblastoma tumors and showed that NB-hypo stratified neuroblastoma patients into good- and poor-outcome groups. It was important to develop a prognostic classifier to cluster patients into risk groups benefiting from defined therapeutic approaches. Novel classification and data discretization approaches can be instrumental for generating accurate predictors and robust tools for clinical decision support. We explored the application to gene expression data of Rulex, a novel software suite including the Attribute Driven Incremental Discretization technique for transforming continuous variables into simplified discrete ones and the Logic Learning Machine model for intelligible rule generation. We applied Rulex components to the problem of predicting the outcome of neuroblastoma patients on the basis of the 62-probe-set NB-hypo gene expression signature. The resulting classifier consisted of 9 rules utilizing mainly two conditions of the relative expression of 11 probe sets. These rules were very effective predictors, as shown in an independent validation set, demonstrating the validity of the LLM algorithm applied to microarray data and patients' classification. The LLM performed as efficiently as Prediction Analysis of Microarray and Support Vector Machine, and outperformed other learning algorithms such as C4.5. Rulex carried out feature selection by selecting a new signature (NB-hypo-II) of 11 probe sets that turned out to be the most relevant in predicting outcome among the 62 of the NB-hypo signature. Rules are easily interpretable as they involve only a few conditions.
Our findings provided evidence that the application of Rulex to the expression values of NB-hypo signature created a set of accurate, high quality, consistent and interpretable rules for the prediction of neuroblastoma patients' outcome. We identified the Rulex weighted classification as a flexible tool that can support clinical decisions. For these reasons, we consider Rulex to be a useful tool for cancer classification from microarray gene expression data.
THE MAQC PROJECT: ESTABLISHING QC METRICS AND THRESHOLDS FOR MICROARRAY QUALITY CONTROL
Microarrays represent a core technology in pharmacogenomics and toxicogenomics; however, before this technology can successfully and reliably be applied in clinical practice and regulatory decision-making, standards and quality measures need to be developed. The Microarray Qualit...
Diagnostic classification of cancer using DNA microarrays and artificial intelligence.
Greer, Braden T; Khan, Javed
2004-05-01
The application of artificial intelligence (AI) to microarray data has been receiving much attention in recent years because of the possibility of automated diagnosis in the near future. Studies have been published predicting tumor type, estrogen receptor status, and prognosis using a variety of AI algorithms. The performance of intelligent computing decisions based on gene expression signatures is in some cases comparable to or better than the current clinical decision schemas. The goal of these tools is not to make clinicians obsolete, but rather to give clinicians one more tool in their armamentarium to accurately diagnose and hence better treat cancer patients. Several such applications are summarized in this chapter, and some of the common pitfalls are noted.
Women's experiences receiving abnormal prenatal chromosomal microarray testing results.
Bernhardt, Barbara A; Soucier, Danielle; Hanson, Karen; Savage, Melissa S; Jackson, Laird; Wapner, Ronald J
2013-02-01
Genomic microarrays can detect copy-number variants not detectable by conventional cytogenetics. This technology is diffusing rapidly into prenatal settings even though the clinical implications of many copy-number variants are currently unknown. We conducted a qualitative pilot study to explore the experiences of women receiving abnormal results from prenatal microarray testing performed in a research setting. Participants were a subset of women participating in a multicenter prospective study "Prenatal Cytogenetic Diagnosis by Array-based Copy Number Analysis." Telephone interviews were conducted with 23 women receiving abnormal prenatal microarray results. We found that five key elements dominated the experiences of women who had received abnormal prenatal microarray results: an offer too good to pass up, blindsided by the results, uncertainty and unquantifiable risks, need for support, and toxic knowledge. As prenatal microarray testing is increasingly used, uncertain findings will be common, resulting in greater need for careful pre- and posttest counseling, and more education of and resources for providers so they can adequately support the women who are undergoing testing.
An efficient ensemble learning method for gene microarray classification.
Osareh, Alireza; Shadgar, Bita
2013-01-01
Gene microarray analysis and classification have proved an effective way to diagnose diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging are also deployed. Experimental results reveal that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used ensemble learning methods, that is, Bagging and AdaBoost.
Informed Decision-Making in the Context of Prenatal Chromosomal Microarray.
Baker, Jessica; Shuman, Cheryl; Chitayat, David; Wasim, Syed; Okun, Nan; Keunen, Johannes; Hofstedter, Renee; Silver, Rachel
2018-03-07
The introduction of chromosomal microarray (CMA) into the prenatal setting has involved considerable deliberation due to the wide range of possible outcomes (e.g., copy number variants of uncertain clinical significance). Such issues are typically discussed in pre-test counseling for pregnant women to support informed decision-making regarding prenatal testing options. This research study aimed to assess the level of informed decision-making with respect to prenatal CMA and the factor(s) influencing decision-making to accept CMA for the selected prenatal testing procedure (i.e., chorionic villus sampling or amniocentesis). We employed a questionnaire that was adapted from a three-dimensional measure previously used to assess informed decision-making with respect to prenatal screening for Down syndrome and neural tube defects. This measure classifies an informed decision as one that is knowledgeable, value-consistent, and deliberated. Our questionnaire also included an optional open-ended question, soliciting factors that may have influenced the participants' decision to accept prenatal CMA; these responses were analyzed qualitatively. Data analysis on 106 participants indicated that 49% made an informed decision (i.e., meeting all three criteria of knowledgeable, deliberated, and value-consistent). Analysis of 59 responses to the open-ended question showed that "the more information the better" emerged as the dominant factor influencing both informed and uninformed participants' decisions to accept prenatal CMA. Despite learning about the key issues in pre-test genetic counseling, our study classified a significant portion of women as making uninformed decisions due to insufficient knowledge, lack of deliberation, value-inconsistency, or a combination of these three measures. Future efforts should focus on developing educational approaches and counseling strategies to effectively increase the rate of informed decision-making among women offered prenatal CMA.
Genetic programming based ensemble system for microarray data classification.
Liu, Kun-Hong; Tong, Muchenxuan; Xie, Shu-Tong; Yee Ng, Vincent To
2015-01-01
Recently, more and more machine learning techniques have been applied to microarray data analysis. The aim of this study is to propose a genetic programming (GP) based new ensemble system (named GPES), which can be used to effectively classify different types of cancers. Decision trees are deployed as base classifiers in this ensemble framework with three operators: Min, Max, and Average. Each individual of the GP is an ensemble system, and they become more and more accurate in the evolutionary process. The feature selection technique and balanced subsampling technique are applied to increase the diversity in each ensemble system. The final ensemble committee is selected by a forward search algorithm, which is shown to be capable of fitting data automatically. The performance of GPES is evaluated using five binary class and six multiclass microarray datasets, and results show that the algorithm can achieve better results in most cases compared with some other ensemble systems. By using elaborate base classifiers or applying other sampling techniques, the performance of GPES may be further improved.
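The three combination operators (Min, Max, and Average) used to fuse base-classifier outputs in the ensemble described above can be sketched as follows. Representing each base classifier's output as a per-class score list, and picking the class with the highest fused score, are assumptions for illustration rather than the paper's exact API.

```python
def combine(score_lists, op):
    """Fuse per-class scores from several base classifiers with one operator.

    score_lists: one list of per-class scores per base classifier.
    op: "min", "max", or "avg", mirroring the three GP ensemble operators.
    Returns the index of the class with the highest fused score.
    """
    ops = {
        "min": min,
        "max": max,
        "avg": lambda col: sum(col) / len(col),
    }
    # zip(*...) groups the classifiers' scores column-wise, per class.
    fused = [ops[op](col) for col in zip(*score_lists)]
    return fused.index(max(fused))
```

In the GP setting, each evolved individual is a tree of such operator nodes over base classifiers, so evolution effectively searches over fusion structures rather than over a single fixed voting rule.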
Glez-Peña, Daniel; Díaz, Fernando; Hernández, Jesús M; Corchado, Juan M; Fdez-Riverola, Florentino
2009-06-18
Bioinformatics and medical informatics are two research fields that serve the needs of different but related communities. Both domains share the common goal of providing new algorithms, methods and technological solutions to biomedical research, and contributing to the treatment and cure of diseases. Although different microarray techniques have been successfully used to investigate useful information for cancer diagnosis at the gene expression level, the true integration of existing methods into day-to-day clinical practice is still a long way off. Within this context, case-based reasoning emerges as a suitable paradigm specially intended for the development of biomedical informatics applications and decision support systems, given the support and collaboration involved in such a translational development. With the goals of removing barriers against multi-disciplinary collaboration and facilitating the dissemination and transfer of knowledge to real practice, case-based reasoning systems have the potential to be applied to translational research mainly because their computational reasoning paradigm is similar to the way clinicians gather, analyze and process information in their own practice of clinical medicine. In addressing the issue of bridging the existing gap between biomedical researchers and clinicians who work in the domain of cancer diagnosis, prognosis and treatment, we have developed and made accessible a common interactive framework. Our geneCBR system implements a freely available software tool that allows the use of combined techniques that can be applied to gene selection, clustering, knowledge extraction and prediction for aiding diagnosis in cancer research. For biomedical researchers, geneCBR expert mode offers a core workbench for designing and testing new techniques and experiments.
For pathologists or oncologists, geneCBR diagnostic mode implements an effective and reliable system that can diagnose cancer subtypes based on the analysis of microarray data using a CBR architecture. For programmers, geneCBR programming mode includes an advanced edition module for run-time modification of previously coded techniques. geneCBR is a new translational tool that can effectively support the integrative work of programmers, biomedical researchers and clinicians working together in a common framework. The code is freely available under the GPL license and can be obtained at http://www.genecbr.org.
Fully automated analysis of multi-resolution four-channel micro-array genotyping data
NASA Astrophysics Data System (ADS)
Abbaspour, Mohsen; Abugharbieh, Rafeef; Podder, Mohua; Tebbutt, Scott J.
2006-03-01
We present a fully automated and robust microarray image analysis system for handling multi-resolution images (down to 3-micron resolution, with sizes up to 80 MB per channel). The system is developed to provide rapid and accurate data extraction for our recently developed microarray analysis and quality control tool (SNP Chart). Currently available commercial microarray image analysis applications are inefficient, due to the considerable user interaction typically required. Four-channel DNA microarray technology is a robust and accurate tool for determining the genotypes of multiple genetic markers in individuals. It plays an important role in the ongoing shift from traditional medical treatments to personalized genetic medicine, i.e. individualized therapy based on the patient's genetic heritage. However, fast, robust, and precise image processing tools are required for the prospective practical use of microarray-based genetic testing for predicting disease susceptibilities and drug effects in clinical practice, which requires a turn-around timeline compatible with clinical decision-making. In this paper we have developed a fully automated image analysis platform for the rapid investigation of hundreds of genetic variations across multiple genes. Validation tests indicate very high accuracy levels for genotyping results. Our method achieves a significant reduction in analysis time, from several hours to just a few minutes, and is completely automated, requiring no manual interaction or guidance.
Barton, G; Abbott, J; Chiba, N; Huang, DW; Huang, Y; Krznaric, M; Mack-Smith, J; Saleem, A; Sherman, BT; Tiwari, B; Tomlinson, C; Aitman, T; Darlington, J; Game, L; Sternberg, MJE; Butcher, SA
2008-01-01
Background Microarray experimentation requires the application of complex analysis methods as well as the use of non-trivial computer technologies to manage the resultant large data sets. This, together with the proliferation of tools and techniques for microarray data analysis, makes it very challenging for a laboratory scientist to keep up-to-date with the latest developments in this field. Our aim was to develop a distributed e-support system for microarray data analysis and management. Results EMAAS (Extensible MicroArray Analysis System) is a multi-user rich internet application (RIA) providing simple, robust access to up-to-date resources for microarray data storage and analysis, combined with integrated tools to optimise real-time user support and training. The system leverages the power of distributed computing to perform microarray analyses, and provides seamless access to resources located at various remote facilities. The EMAAS framework allows users to import microarray data from several sources to an underlying database, to pre-process, quality assess and analyse the data, to perform functional analyses, and to track data analysis steps, all through a single easy-to-use web portal. This interface offers distance support to users both in the form of video tutorials and via live screen feeds using the web conferencing tool EVO. A number of analysis packages, including R-Bioconductor and Affymetrix Power Tools, have been integrated on the server side and are available programmatically through the Postgres-PLR library or on grid compute clusters. Integrated distributed resources include the functional annotation tool DAVID, GeneCards and the microarray data repositories GEO, CELSIUS and MiMiR. EMAAS currently supports analysis of Affymetrix 3' and Exon expression arrays, and the system is extensible to cater for other microarray and transcriptomic platforms.
Conclusion EMAAS enables users to track and perform microarray data management and analysis tasks through a single easy-to-use web application. The system architecture is flexible and scalable to allow new array types, analysis algorithms and tools to be added with relative ease and to cope with large increases in data volume. PMID:19032776
A Granular Self-Organizing Map for Clustering and Gene Selection in Microarray Data.
Ray, Shubhra Sankar; Ganivada, Avatharam; Pal, Sankar K
2016-09-01
A new granular self-organizing map (GSOM) is developed by integrating the concept of a fuzzy rough set with the SOM. While training the GSOM, the weights of a winning neuron and the neighborhood neurons are updated through a modified learning procedure. The neighborhood is newly defined using fuzzy rough sets. The clusters (granules) evolved by the GSOM are presented to a decision table as its decision classes. Based on the decision table, a method of gene selection is developed. The effectiveness of the GSOM is shown both in clustering samples and in developing an unsupervised fuzzy rough feature selection (UFRFS) method for gene selection in microarray data. While the GSOM yields superior results compared with related clustering methods in terms of β-index, DB-index, Dunn-index, and fuzzy rough entropy, the genes selected by the UFRFS are not only better in terms of classification accuracy and a feature evaluation index, but also statistically more significant than those selected by related unsupervised methods. The C-codes of the GSOM and UFRFS are available online at http://avatharamg.webs.com/software-code.
Recursive feature selection with significant variables of support vectors.
Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh
2012-01-01
The development of DNA microarrays enables researchers to screen thousands of genes simultaneously and helps determine high- and low-expression genes in normal and diseased tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily set a threshold for choosing genes. However, the parameter setting may not be compatible with the selected classification algorithm. In this paper, we propose a new gene selection method (SVM-t) based on t-statistics embedded in a support vector machine. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared through extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM and capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method identifies fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.
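The recursive elimination scheme underlying this family of methods can be sketched with a plain univariate t-statistic as the ranking criterion. This is a simplified stand-in for the paper's SVM-embedded criterion, not the SVM-t algorithm itself; all names and the toy data are hypothetical.

```python
import numpy as np

def t_scores(X, y):
    """Absolute two-sample t-statistic per feature (classes coded 0/1)."""
    a, b = X[y == 0], X[y == 1]
    num = a.mean(axis=0) - b.mean(axis=0)
    den = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
    return np.abs(num / den)

def recursive_select(X, y, n_keep, drop_frac=0.5):
    """Repeatedly re-rank the surviving features and drop the lowest-ranked
    fraction until only n_keep remain (the recursive-elimination pattern)."""
    idx = np.arange(X.shape[1])
    while len(idx) > n_keep:
        scores = t_scores(X[:, idx], y)
        n_next = max(n_keep, int(len(idx) * drop_frac))
        idx = idx[np.argsort(scores)[::-1][:n_next]]
    return np.sort(idx)

# toy data: only feature 2 separates the two classes
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 8))
y = np.array([0] * 10 + [1] * 10)
X[y == 1, 2] += 5.0

selected = recursive_select(X, y, n_keep=2)
print(selected)  # feature 2 should survive the elimination
```

SVMRFE follows the same loop but ranks features by the magnitude of the SVM weight vector; SVM-t, per the abstract, replaces that criterion with t-statistics computed on support vectors.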
Ontology-based, Tissue MicroArray oriented, image centered tissue bank
Viti, Federica; Merelli, Ivan; Caprera, Andrea; Lazzari, Barbara; Stella, Alessandra; Milanesi, Luciano
2008-01-01
Background The Tissue MicroArray technique is becoming increasingly important in pathology for the validation of experimental data from transcriptomic analysis. This approach produces many images which need to be properly managed, if possible with an infrastructure able to support tissue sharing between institutes. Moreover, the available frameworks oriented to Tissue MicroArray provide good storage for clinical patient, sample treatment and block construction information, but their utility is limited by the lack of integration with biomolecular information. Results In this work we propose a web-oriented Tissue MicroArray system to support researchers in managing bio-samples and, through the use of ontologies, to enable tissue sharing aimed at the design of Tissue MicroArray experiments and the evaluation of results. Indeed, our system provides an ontological description both for pre-analysis tissue images and for post-process analysis image results, which is crucial for information exchange. Moreover, working with well-defined terms, it is then possible to query web resources for literature articles to integrate both pathology and bioinformatics data. Conclusions Using this system, users associate an ontology-based description with each image uploaded into the database and also integrate results with the ontological description of biosequences identified in every tissue. Moreover, it is possible to integrate the user-provided ontological description with a fully compliant Gene Ontology definition, enabling statistical studies of the correlation between the analyzed pathology and the most commonly related biological processes. PMID:18460177
cluML: A markup language for clustering and cluster validity assessment of microarray data.
Bolshakova, Nadia; Cunningham, Pádraig
2005-01-01
cluML is a new markup language for microarray data clustering and cluster validity assessment. The XML-based format has been designed to address some of the limitations observed in traditional formats, such as the inability to store multiple clusterings (including biclusterings) and validation results within a dataset. cluML is an effective tool to support biomedical knowledge representation in gene expression data analysis. Although cluML was developed for DNA microarray analysis applications, it can also be used effectively to represent clustering and validation results for other biomedical and physical data.
Zhao, Zhengshan; Peytavi, Régis; Diaz-Quijada, Gerardo A.; Picard, Francois J.; Huletsky, Ann; Leblanc, Éric; Frenette, Johanne; Boivin, Guy; Veres, Teodor; Dumoulin, Michel M.; Bergeron, Michel G.
2008-01-01
Fabrication of microarray devices using traditional glass slides is not easily adaptable to integration into microfluidic systems. There is thus a need for the development of polymeric materials showing a high hybridization signal-to-background ratio, enabling sensitive detection of microbial pathogens. We have developed such plastic supports suitable for highly sensitive DNA microarray hybridizations. The proof of concept of this microarray technology was done through the detection of four human respiratory viruses that were amplified and labeled with a fluorescent dye via a sensitive reverse transcriptase PCR (RT-PCR) assay. The performance of the microarray hybridization with plastic supports made of PMMA [poly(methylmethacrylate)]-VSUVT or Zeonor 1060R was compared to that with high-quality glass slide microarrays by using both passive and microfluidic hybridization systems. Specific hybridization signal-to-background ratios comparable to that obtained with high-quality commercial glass slides were achieved with both polymeric substrates. Microarray hybridizations demonstrated an analytical sensitivity equivalent to approximately 100 viral genome copies per RT-PCR, which is at least 100-fold higher than the sensitivities of previously reported DNA hybridizations on plastic supports. Testing of these plastic polymers using a microfluidic microarray hybridization platform also showed results that were comparable to those with glass supports. In conclusion, PMMA-VSUVT and Zeonor 1060R are both suitable for highly sensitive microarray hybridizations. PMID:18784318
Cruella: developing a scalable tissue microarray data management system.
Cowan, James D; Rimm, David L; Tuck, David P
2006-06-01
Compared with DNA microarray technology, relatively little information is available concerning the special requirements, design influences, and implementation strategies of data systems for tissue microarray technology. These issues include the requirement to accommodate new and different data elements for each new project as well as the need to interact with pre-existing models for clinical, biological, and specimen-related data. Our objective was to design and implement a flexible, scalable tissue microarray data storage and management system that could accommodate information regarding different disease types, clinical investigators, and clinical investigation questions, all of which could potentially contribute unforeseen data types that require dynamic integration with existing data. The unpredictability of the data elements, combined with the novelty of automated analysis algorithms and controlled vocabulary standards in this area, requires flexible designs and practical decisions. Our design includes a custom Java-based persistence layer to mediate and facilitate interaction with an object-relational database model and a novel database schema. User interaction is provided through a Java Servlet-based web interface. Cruella has become an indispensable resource and is used by dozens of researchers every day. The system stores millions of experimental values covering more than 300 biological markers and more than 30 disease types. The experimental data are merged with clinical data that have been aggregated from multiple sources and are available to researchers for management, analysis, and export. Cruella addresses many of the special considerations for managing tissue microarray experimental data and the associated clinical information. A metadata-driven approach provides a practical solution to many of the unique issues inherent in tissue microarray research, and allows relatively straightforward interoperability with, and accommodation of, new data models.
Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections
Jrad, Nisrine; Grall-Maës, Edith; Beauseroy, Pierre
2009-01-01
Supervised learning on microarray data has received much attention in recent years. Multiclass cancer diagnosis, based on selected gene profiles, is used as an adjunct to clinical diagnosis. However, an erroneous supervised diagnosis may hinder patient care, add expense, or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. State-of-the-art multiclass algorithms can be considered a particular case of the proposed algorithm in which the decisions are given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and class-selective rejection frameworks. Five gene-selected datasets are used to assess the performance of the proposed method. Results are discussed, and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers. PMID:19584932
Bountris, Panagiotis; Haritou, Maria; Pouliakis, Abraham; Margari, Niki; Kyrgiou, Maria; Spathis, Aris; Pappas, Asimakis; Panayiotides, Ioannis; Paraskevaidis, Evangelos A.; Karakitsos, Petros; Koutsouris, Dimitrios-Dionyssios
2014-01-01
Nowadays, there are molecular biology techniques providing information related to cervical cancer and its cause: the human Papillomavirus (HPV), including DNA microarrays identifying HPV subtypes, mRNA techniques such as nucleic acid based amplification or flow cytometry identifying E6/E7 oncogenes, and immunocytochemistry techniques such as overexpression of p16. Each one of these techniques has its own performance, limitations and advantages, thus a combinatorial approach via computational intelligence methods could exploit the benefits of each method and produce more accurate results. In this article we propose a clinical decision support system (CDSS), composed by artificial neural networks, intelligently combining the results of classic and ancillary techniques for diagnostic accuracy improvement. We evaluated this method on 740 cases with complete series of cytological assessment, molecular tests, and colposcopy examination. The CDSS demonstrated high sensitivity (89.4%), high specificity (97.1%), high positive predictive value (89.4%), and high negative predictive value (97.1%), for detecting cervical intraepithelial neoplasia grade 2 or worse (CIN2+). In comparison to the tests involved in this study and their combinations, the CDSS produced the most balanced results in terms of sensitivity, specificity, PPV, and NPV. The proposed system may reduce the referral rate for colposcopy and guide personalised management and therapeutic interventions. PMID:24812614
MADGE: scalable distributed data management software for cDNA microarrays.
McIndoe, Richard A; Lanzen, Aaron; Hurtz, Kimberly
2003-01-01
The human genome project and the development of new high-throughput technologies have created unparalleled opportunities to study the mechanisms of disease, monitor disease progression and evaluate effective therapies. Gene expression profiling is a critical tool to accomplish these goals. The use of nucleic acid microarrays to assess the expression of thousands of genes simultaneously has seen phenomenal growth over the past five years. Although commercial sources of microarrays exist, investigators wanting more flexibility in the genes represented on the array will turn to in-house production. The creation and use of cDNA microarrays is a complicated process that generates an enormous amount of information. Effective management of this information is essential to efficiently access, analyze, troubleshoot and evaluate the microarray experiments. We have developed a distributable software package designed to track and store the various pieces of data generated by a cDNA microarray facility. This includes the clone collection storage data, annotation data, workflow queues, microarray data, data repositories, sample submission information, and project/investigator information. The application was designed using a 3-tier client-server model. The data access layer (1st tier) contains the relational database system, tuned to support a large number of transactions. The data services layer (2nd tier) is a distributed COM server with full database transaction support. The application layer (3rd tier) is an internet-based user interface containing both client- and server-side code for dynamic interactions with the user. This software is freely available to academic institutions and non-profit organizations at http://www.genomics.mcg.edu/niddkbtc.
Reboiro-Jato, Miguel; Arrais, Joel P; Oliveira, José Luis; Fdez-Riverola, Florentino
2014-01-30
The diagnosis and prognosis of several diseases can be accelerated through the use of different large-scale genome experiments. In this context, microarrays can generate expression data for a huge set of genes. However, to obtain solid statistical evidence from the resulting data, it is necessary to train and validate many classification techniques in order to find the best discriminative method. This is a time-consuming process that normally depends on intricate statistical tools. geneCommittee is a web-based interactive tool for routinely evaluating the discriminative classification power of custom hypotheses in the form of biologically relevant gene sets. While the user can work with different gene set collections and several microarray data files to configure specific classification experiments, the tool is able to run several tests in parallel. Provided with a straightforward and intuitive interface, geneCommittee is able to render valuable information for diagnostic analyses and clinical management decisions by systematically evaluating custom hypotheses over different data sets using complementary classifiers, a key aspect in clinical research. geneCommittee allows the enrichment of raw microarray data with gene functional annotations, producing integrated datasets that simplify the construction of better discriminative hypotheses, and allows the creation of a set of complementary classifiers. The trained committees can then be used for clinical research and diagnosis. Full documentation, including common use cases and guided analysis workflows, is freely available at http://sing.ei.uvigo.es/GC/.
Pashaei, Elnaz; Ozen, Mustafa; Aydin, Nizamettin
2015-08-01
Improving the accuracy of supervised classification algorithms in biomedical applications is an active area of research. In this study, we improve the performance of Particle Swarm Optimization (PSO) combined with a C4.5 decision tree (PSO+C4.5) classifier by applying a Boosted C5.0 decision tree as the fitness function. To evaluate the effectiveness of the proposed method, it is implemented on 1 microarray dataset and 5 different medical datasets obtained from the UCI machine learning databases. Moreover, the results of the PSO+Boosted C5.0 implementation are compared to eight well-known benchmark classification methods (PSO+C4.5, support vector machine with a Radial Basis Function kernel, Classification And Regression Tree (CART), C4.5 decision tree, C5.0 decision tree, Boosted C5.0 decision tree, Naive Bayes, and Weighted K-Nearest Neighbor). Repeated five-fold cross-validation was used to assess the performance of the classifiers. Experimental results show that the proposed method not only improves the performance of PSO+C4.5 but also obtains higher classification accuracy than the other classification methods.
Bennet, Jaison; Ganaprakasam, Chilambuchelvan Arul; Arputharaj, Kannan
2014-01-01
In earlier days, cancer classification by doctors and radiologists was based on morphological and clinical features and had limited diagnostic ability. The recent arrival of DNA microarray technology has enabled the concurrent monitoring of thousands of gene expressions on a single chip, which has stimulated progress in cancer classification. In this paper, we propose a hybrid approach for microarray data classification based on k-nearest neighbor (KNN), naive Bayes, and support vector machine (SVM). Feature selection prior to classification plays a vital role, and a feature selection technique combining the discrete wavelet transform (DWT) and the moving window technique (MWT) is used. The performance of the proposed method is compared with conventional classifiers such as the support vector machine, nearest neighbor, and naive Bayes. Experiments have been conducted on both real and benchmark datasets, and the results indicate that the ensemble approach produces higher classification accuracy than the conventional classifiers. This work serves as an automated system for cancer classification that can be applied by doctors in real cases, and it further reduces the misclassification of cancers, which is unacceptable in cancer detection.
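A hybrid of KNN, naive Bayes, and SVM can be sketched as a majority-vote ensemble. This is a minimal illustration assuming scikit-learn is available; the breast cancer dataset is a stand-in for microarray data, and the DWT/MWT feature selection step described in the abstract is omitted.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-in dataset (the paper uses microarray and benchmark datasets)
X, y = load_breast_cancer(return_X_y=True)

# majority vote over the three base classifiers named in the abstract
ensemble = VotingClassifier(
    estimators=[
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
        ("nb", GaussianNB()),
        ("svm", make_pipeline(StandardScaler(), SVC())),
    ],
    voting="hard",
)
score = cross_val_score(ensemble, X, y, cv=5).mean()
print(f"5-fold CV accuracy: {score:.3f}")
```

Scaling is applied inside per-classifier pipelines so that cross-validation does not leak test-fold statistics into training.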
On the classification techniques in data mining for microarray data classification
NASA Astrophysics Data System (ADS)
Aydadenta, Husna; Adiwijaya
2018-03-01
Cancer is one of the deadliest diseases; according to WHO data, by 2015 there were 8.8 million deaths caused by cancer, and this number will increase every year if the disease is not detected earlier. Microarray data has become one of the most popular resources for cancer-identification studies in the field of health, since it can be used to examine levels of gene expression in particular cell samples, analyzing thousands of genes simultaneously. By using data mining techniques, we can classify microarray samples and thereby identify whether they indicate cancer or not. In this paper we discuss research applying several data mining techniques to microarray data, namely Support Vector Machine (SVM), Artificial Neural Network (ANN), Naive Bayes, k-Nearest Neighbor (kNN), and C4.5, and present a simulation of the Random Forest algorithm with dimensionality reduction using Relief. The results show that the Random Forest algorithm achieves higher accuracy than the other classification algorithms (SVM, ANN, Naive Bayes, kNN, and C4.5). It is hoped that this paper provides useful information about the speed, accuracy, performance and computational cost of each data mining classification technique on microarray data.
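The Relief-based dimensionality reduction mentioned above scores each feature by how well it distinguishes an instance from its nearest neighbor of the other class versus its nearest neighbor of the same class. The sketch below is a minimal binary Relief, not the exact variant used in the paper; the function name and toy data are hypothetical.

```python
import numpy as np

def relief_scores(X, y, n_iter=100, rng=None):
    """Minimal binary Relief: for each sampled instance, reward features
    that differ from the nearest miss (other class) and penalise features
    that differ from the nearest hit (same class)."""
    if rng is None:
        rng = np.random.default_rng(0)
    X = (X - X.min(0)) / (X.max(0) - X.min(0))  # scale features to [0, 1]
    w = np.zeros(X.shape[1])
    for i in rng.integers(len(X), size=n_iter):
        d = np.abs(X - X[i]).sum(axis=1)        # Manhattan distances
        d[i] = np.inf                           # exclude the instance itself
        same = y == y[i]
        hit = np.argmin(np.where(same, d, np.inf))
        miss = np.argmin(np.where(~same, d, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# toy data: feature 0 tracks the class label, the rest are noise
rng = np.random.default_rng(2)
y = np.tile([0, 1], 30)
X = rng.normal(size=(60, 5))
X[:, 0] = y + 0.1 * rng.normal(size=60)

scores = relief_scores(X, y)
print(scores)  # feature 0 should receive the highest weight
```

In the pipeline the paper describes, the top-scoring features would then be passed to a Random Forest classifier.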
Fuzzy support vector machine for microarray imbalanced data classification
NASA Astrophysics Data System (ADS)
Ladayya, Faroh; Purnami, Santi Wulan; Irhamah
2017-11-01
DNA microarrays yield gene expression data with small sample sizes and a high number of features. Furthermore, class imbalance is a common problem in microarray data. This occurs when a dataset is dominated by a class which has significantly more instances than the other, minority classes. Therefore, a classification method is needed that addresses both high dimensionality and imbalanced data. The Support Vector Machine (SVM) is a classification method capable of handling large or small samples, nonlinearity, high dimensionality, overlearning, and local minimum issues. SVM has been widely applied to DNA microarray data classification, and it has been shown that SVM provides the best performance among other machine learning methods. However, imbalanced data remain a problem because SVM treats all samples with the same importance, so the results are biased toward the majority class. To overcome the imbalanced data, the Fuzzy SVM (FSVM) is proposed. This method applies a fuzzy membership to each input point and reformulates the SVM such that different input points make different contributions to the classifier. The minority classes are given large fuzzy memberships, so FSVM pays more attention to the samples with larger fuzzy membership. Since DNA microarray data are high dimensional with a very large number of features, feature selection is first performed using the Fast Correlation-Based Filter (FCBF). In this study, SVM and FSVM are analyzed both with and without FCBF, and their classification performance is compared. Based on the overall results, FSVM on the selected features has the best classification performance compared to SVM.
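The fuzzy-membership idea above can be sketched with per-sample weights passed to a standard SVM, which approximates FSVM's reweighted objective. This is a minimal illustration assuming scikit-learn; the inverse-frequency membership scheme and the toy data are assumptions, not the paper's exact membership function.

```python
import numpy as np
from sklearn.svm import SVC

def class_balance_memberships(y):
    """Fuzzy membership per sample: minority-class samples receive larger
    memberships so the weighted SVM penalises their misclassification more.
    (Inverse-frequency scheme; the paper's membership function may differ.)"""
    classes, counts = np.unique(y, return_counts=True)
    weight = {c: counts.max() / n for c, n in zip(classes, counts)}
    return np.array([weight[c] for c in y])

# imbalanced toy data: 50 majority-class vs 5 minority-class samples
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(2.0, 1.0, (5, 2))])
y = np.array([0] * 50 + [1] * 5)

m = class_balance_memberships(y)          # 1.0 for class 0, 10.0 for class 1
clf = SVC(kernel="linear").fit(X, y, sample_weight=m)
print(clf.predict([[3.0, 3.0]]))
```

With the 10x memberships, each minority sample's slack penalty matches ten majority samples, pulling the decision boundary back toward the majority cluster.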
Pathogen profiling for disease management and surveillance.
Sintchenko, Vitali; Iredell, Jonathan R; Gilbert, Gwendolyn L
2007-06-01
The usefulness of rapid pathogen genotyping is widely recognized, but its effective interpretation and application requires integration into clinical and public health decision-making. How can pathogen genotyping data best be translated to inform disease management and surveillance? Pathogen profiling integrates microbial genomics data into communicable disease control by consolidating phenotypic identity-based methods with DNA microarrays, proteomics, metabolomics and sequence-based typing. Sharing data on pathogen profiles should facilitate our understanding of transmission patterns and the dynamics of epidemics.
Dolled-Filhart, Marisa P; Gustavson, Mark D
2012-11-01
Translational oncology has been improved by using tissue microarrays (TMAs), which facilitate biomarker analysis of large cohorts on a single slide. This has allowed for rapid analysis and validation of potential biomarkers for prognostic and predictive value, as well as for evaluation of biomarker prevalence. Coupled with quantitative analysis of immunohistochemical (IHC) staining, objective and standardized biomarker data from tumor samples can further advance companion diagnostic approaches for the identification of drug-responsive or resistant patient subpopulations. This review covers the advantages, disadvantages and applications of TMAs for biomarker research. Research literature and reviews of TMAs and quantitative image analysis methodology have been surveyed for this review (with an AQUA® analysis focus). Applications such as multi-marker diagnostic development and pathway-based biomarker subpopulation analyses are described. Tissue microarrays are a useful tool for biomarker analyses including prevalence surveys, disease progression assessment and addressing potential prognostic or predictive value. By combining quantitative image analysis with TMAs, analyses will be more objective and reproducible, allowing for more robust IHC-based diagnostic test development. Quantitative multi-biomarker IHC diagnostic tests that can predict drug response will allow for greater success of clinical trials for targeted therapies and provide more personalized clinical decision making.
Rubel, M A; Werner-Lin, A; Barg, F K; Bernhardt, B A
2017-09-01
To assess how participants receiving abnormal prenatal genetic testing results seek information and understand the implications of results, 27 US female patients and 12 of their male partners receiving positive prenatal microarray testing results completed semi-structured phone interviews. These interviews documented participant experiences with chromosomal microarray testing, understanding of and emotional response to receiving results, factors affecting decision-making about testing and pregnancy termination, and psychosocial needs throughout the testing process. Interview data were analyzed using a modified grounded theory approach. In the absence of certainty about the implications of results, understanding of results is shaped by biomedical expert knowledge (BEK) and cultural expert knowledge (CEK). When there is a dearth of BEK, as in the case of receiving results of uncertain significance, participants rely on CEK, including religious/spiritual beliefs, "gut instinct," embodied knowledge, and social network informants. CEK is a powerful platform to guide understanding of prenatal genetic testing results. The utility of culturally situated expert knowledge during testing uncertainty emphasizes that decision-making occurs within discourses beyond the biomedical domain. These forms of "knowing" may be integrated into clinical consideration of efficacious patient assessment and counseling.
NASA Astrophysics Data System (ADS)
Phan, Sieu; Famili, Fazel; Liu, Ziying; Peña-Castillo, Lourdes
The advancement of omics technologies, in concert with enabling developments in information technology, has propelled biological research into a new realm of speed and sophistication. The shift from limited single-gene assays to high-throughput microarray assays, and from laborious manual base-pair counting to robot-assisted genome sequencing machinery, are two examples. More sophisticated still, recent developments in literature mining and artificial intelligence have allowed researchers to construct complex gene networks that unravel formidable biological puzzles. To harness these emerging technologies to their full potential in medical applications, the Bio-intelligence program at the Institute for Information Technology, National Research Council Canada, aims to develop and exploit artificial intelligence and bioinformatics technologies to facilitate the development of intelligent decision support tools and systems that improve patient care through early detection, accurate diagnosis/prognosis of disease, and better personalized therapeutic management.
BioconductorBuntu: a Linux distribution that implements a web-based DNA microarray analysis server.
Geeleher, Paul; Morris, Dermot; Hinde, John P; Golden, Aaron
2009-06-01
BioconductorBuntu is a custom distribution of Ubuntu Linux that automatically installs a server-side microarray processing environment, providing a user-friendly web-based GUI to many of the tools developed by the Bioconductor Project, accessible locally or across a network. System installation is via booting off a CD image or by using a Debian package provided to upgrade an existing Ubuntu installation. In its current version, several microarray analysis pipelines are supported, including oligonucleotide and dual- or single-dye experiments, with post-processing by Gene Set Enrichment Analysis. BioconductorBuntu is designed to be extensible, by server-side integration of further relevant Bioconductor modules as required, facilitated by its straightforward underlying Python-based infrastructure. BioconductorBuntu offers an ideal environment for the development of processing procedures to facilitate the analysis of next-generation sequencing datasets. BioconductorBuntu is available for download under a Creative Commons license, along with additional documentation and a tutorial, from http://bioinf.nuigalway.ie.
Classification of a large microarray data set: Algorithm comparison and analysis of drug signatures
Natsoulis, Georges; El Ghaoui, Laurent; Lanckriet, Gert R.G.; Tolley, Alexander M.; Leroy, Fabrice; Dunlea, Shane; Eynon, Barrett P.; Pearson, Cecelia I.; Tugendreich, Stuart; Jarnagin, Kurt
2005-01-01
A large gene expression database has been produced that characterizes the gene expression and physiological effects of hundreds of approved and withdrawn drugs, toxicants, and biochemical standards in various organs of live rats. In order to derive useful biological knowledge from this large database, a variety of supervised classification algorithms were compared using a 597-microarray subset of the data. Our studies show that several types of linear classifiers based on Support Vector Machines (SVMs) and Logistic Regression can be used to derive readily interpretable drug signatures with high classification performance. Both methods can be tuned to produce classifiers of drug treatments in the form of short, weighted gene lists which upon analysis reveal that some of the signature genes have a positive contribution (act as “rewards” for the class-of-interest) while others have a negative contribution (act as “penalties”) to the classification decision. The combination of reward and penalty genes enhances performance by keeping the number of false positive treatments low. The results of these algorithms are combined with feature selection techniques that further reduce the length of the drug signatures, an important step towards the development of useful diagnostic biomarkers and low-cost assays. Multiple signatures with no genes in common can be generated for the same classification end-point. Comparison of these gene lists identifies biological processes characteristic of a given class. PMID:15867433
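The reward/penalty mechanism described above amounts to a linear decision function with signed gene weights. A minimal illustrative sketch, with hypothetical gene names and weights (not taken from the study):

```python
# Hypothetical drug-signature classifier: a short, weighted gene list.
# Gene names and weights are illustrative, not taken from the study.
signature = {
    "GENE_A": +0.8,   # "reward" gene: supports the class-of-interest
    "GENE_B": +0.5,   # "reward" gene
    "GENE_C": -0.9,   # "penalty" gene: argues against the class
}

def classify(expression, signature, bias=0.0):
    """Linear decision: weighted sum of expression values vs. zero."""
    score = bias + sum(w * expression.get(g, 0.0) for g, w in signature.items())
    return score > 0

sample = {"GENE_A": 1.2, "GENE_B": 0.4, "GENE_C": 0.3}
print(classify(sample, signature))  # → True (rewards outweigh the penalty)
```

High expression of a penalty gene pulls the score below the threshold, which is how the combination of reward and penalty genes keeps false positive calls low.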
NASA Astrophysics Data System (ADS)
Zachary, Wayne; Eggleston, Robert; Donmoyer, Jason; Schremmer, Serge
2003-09-01
Decision-making is strongly shaped and influenced by the work context in which decisions are embedded. This suggests that decision support needs to be anchored by a model (implicit or explicit) of the work process, in contrast to traditional approaches that anchor decision support to either context-free decision models (e.g., utility theory) or to detailed models of the external (e.g., battlespace) environment. An architecture for cognitively-based, work-centered decision support called the Work-centered Informediary Layer (WIL) is presented. WIL separates decision support into three overall processes that build and dynamically maintain an explicit context model, use the context model to identify opportunities for decision support, and tailor generic decision-support strategies to the current context and offer them to the system-user/decision-maker. The generic decision support strategies include such things as activity/attention aiding, decision process structuring, work performance support (selective, contextual automation), explanation/elaboration, infosphere data retrieval, and what-if/action-projection and visualization. A WIL-based application is a work-centered decision support layer that provides active support without intent inferencing, and that is cognitively based without requiring classical cognitive task analyses. Example WIL applications are detailed and discussed.
Xu, Joshua; Gong, Binsheng; Wu, Leihong; Thakkar, Shraddha; Hong, Huixiao; Tong, Weida
2016-03-15
Studies on gene expression in response to therapy have led to the discovery of pharmacogenomics biomarkers and advances in precision medicine. Whole transcriptome sequencing (RNA-seq) is an emerging tool for profiling gene expression and has received wide adoption in the biomedical research community. However, its value in regulatory decision making requires rigorous assessment and consensus between various stakeholders, including the research community, regulatory agencies, and industry. The FDA-led SEquencing Quality Control (SEQC) consortium has made considerable progress in this direction, and is the subject of this review. Specifically, three RNA-seq platforms (Illumina HiSeq, Life Technologies SOLiD, and Roche 454) were extensively evaluated at multiple sites to assess cross-site and cross-platform reproducibility. The results demonstrated that relative gene expression measurements were consistently comparable across labs and platforms, but not so for the measurement of absolute expression levels. As part of the quality evaluation several studies were included to evaluate the utility of RNA-seq in clinical settings and safety assessment. The neuroblastoma study profiled tumor samples from 498 pediatric neuroblastoma patients by both microarray and RNA-seq. RNA-seq offers more utilities than microarray in determining the transcriptomic characteristics of cancer. However, RNA-seq and microarray-based models were comparable in clinical endpoint prediction, even when including additional features unique to RNA-seq beyond gene expression. The toxicogenomics study compared microarray and RNA-seq profiles of the liver samples from rats exposed to 27 different chemicals representing multiple toxicity modes of action. Cross-platform concordance was dependent on chemical treatment and transcript abundance. 
Though both RNA-seq and microarray are suitable for developing gene expression based predictive models with comparable prediction performance, RNA-seq offers advantages over microarray in profiling genes with low expression. The rat BodyMap study provided a comprehensive rat transcriptomic body map by performing RNA-Seq on 320 samples from 11 organs in either sex of juvenile, adolescent, adult and aged Fischer 344 rats. Lastly, the transferability study demonstrated that signature genes of predictive models are reciprocally transferable between microarray and RNA-seq data for model development using a comprehensive approach with two large clinical data sets. This result suggests continued usefulness of legacy microarray data in the coming RNA-seq era. In conclusion, the SEQC project enhances our understanding of RNA-seq and provides valuable guidelines for RNA-seq based clinical application and safety evaluation to advance precision medicine.
Direct on-chip DNA synthesis using electrochemically modified gold electrodes as solid support
NASA Astrophysics Data System (ADS)
Levrie, Karen; Jans, Karolien; Schepers, Guy; Vos, Rita; Van Dorpe, Pol; Lagae, Liesbet; Van Hoof, Chris; Van Aerschot, Arthur; Stakenborg, Tim
2018-04-01
DNA microarrays have propelled important advancements in the field of genomic research by enabling the monitoring of thousands of genes in parallel. The throughput can be increased even further by scaling down the microarray feature size. In this respect, microelectronics-based DNA arrays are promising as they can leverage semiconductor processing techniques with lithographic resolutions. We propose a method that enables the use of metal electrodes for de novo DNA synthesis without the need for an insulating support. By electrochemically functionalizing gold electrodes, these electrodes can act as solid support for phosphoramidite-based synthesis. The proposed method relies on the electrochemical reduction of diazonium salts, enabling site-specific incorporation of hydroxyl groups onto the metal electrodes. An automated DNA synthesizer was used to couple phosphoramidite moieties directly onto the OH-modified electrodes to obtain the desired oligonucleotide sequence. Characterization was done via cyclic voltammetry and fluorescence microscopy. Our results present a valuable proof-of-concept for the integration of solid-phase DNA synthesis with microelectronics.
ELISA-BASE: An Integrated Bioinformatics Tool for Analyzing and Tracking ELISA Microarray Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Amanda M.; Collett, James L.; Seurynck-Servoss, Shannon L.
ELISA-BASE is an open-source database for capturing, organizing and analyzing protein enzyme-linked immunosorbent assay (ELISA) microarray data. ELISA-BASE is an extension of the BioArray Software Environment (BASE) database system, which was developed for DNA microarrays. In order to make BASE suitable for protein microarray experiments, we developed several plugins for importing and analyzing quantitative ELISA microarray data. Most notably, our Protein Microarray Analysis Tool (ProMAT) for processing quantitative ELISA data is now available as a plugin to the database.
Burgarella, Sarah; Cattaneo, Dario; Pinciroli, Francesco; Masseroli, Marco
2005-12-01
Improvements of bio-nano-technologies and biomolecular techniques have led to increasing production of high-throughput experimental data. Spotted cDNA microarray is one of the most widespread technologies, used in single research laboratories and in biotechnology service facilities. Although they are routinely performed, spotted microarray experiments are complex procedures entailing several experimental steps and actors with different technical skills and roles. During an experiment, the actors involved, who may also be located at a distance from one another, need to access and share specific experiment information according to their roles. Furthermore, complete information describing all experimental steps must be collected in an orderly fashion to allow subsequent correct interpretation of experimental results. We developed MicroGen, a web system for managing information and workflow in the production pipeline of spotted microarray experiments. It consists of a core multi-database system able to store all data completely characterizing different spotted microarray experiments according to the Minimum Information About Microarray Experiments (MIAME) standard, and of an intuitive and user-friendly web interface able to support the collaborative work required among the multidisciplinary actors and roles involved in spotted microarray experiment production. MicroGen supports six types of user roles: the researcher who designs and requests the experiment, the spotting operator, the hybridisation operator, the image processing operator, the system administrator, and the generic public user who can access the unrestricted part of the system to get information about MicroGen services. MicroGen represents a MIAME-compliant information system that enables managing workflow and supporting collaborative work in spotted microarray experiment production.
Automatic Identification and Quantification of Extra-Well Fluorescence in Microarray Images.
Rivera, Robert; Wang, Jie; Yu, Xiaobo; Demirkan, Gokhan; Hopper, Marika; Bian, Xiaofang; Tahsin, Tasnia; Magee, D Mitchell; Qiu, Ji; LaBaer, Joshua; Wallstrom, Garrick
2017-11-03
In recent studies involving NAPPA microarrays, extra-well fluorescence is used as a key measure for identifying disease biomarkers because there is evidence to support that it is better correlated with strong antibody responses than statistical analysis involving intraspot intensity. Because this feature is not well quantified by traditional image analysis software, identification and quantification of extra-well fluorescence is performed manually, which is both time-consuming and highly susceptible to variation between raters. A system that could automate this task efficiently and effectively would greatly improve the process of data acquisition in microarray studies, thereby accelerating the discovery of disease biomarkers. In this study, we experimented with different machine learning methods, as well as novel heuristics, for identifying spots exhibiting extra-well fluorescence (rings) in microarray images and assigning each ring a grade of 1-5 based on its intensity and morphology. The sensitivity of our final system for identifying rings was found to be 72% at 99% specificity and 98% at 92% specificity. Our system performs this task significantly faster than a human, while maintaining high performance, and therefore represents a valuable tool for microarray image analysis.
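The sensitivity-at-specificity figures quoted above are derived from confusion-matrix counts. A generic sketch of that calculation (not the authors' pipeline, and with made-up labels):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)

# Ten spots: 1 = ring truly present; predictions from a hypothetical classifier.
truth = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, preds)
print(sens, spec)  # sensitivity 0.75, specificity ≈ 0.83
```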
Systematic Omics Analysis Review (SOAR) Tool to Support Risk Assessment
McConnell, Emma R.; Bell, Shannon M.; Cote, Ila; Wang, Rong-Lin; Perkins, Edward J.; Garcia-Reyero, Natàlia; Gong, Ping; Burgoon, Lyle D.
2014-01-01
Environmental health risk assessors are challenged to understand and incorporate new data streams as the field of toxicology continues to adopt new molecular and systems biology technologies. Systematic screening reviews can help risk assessors and assessment teams determine which studies to consider for inclusion in a human health assessment. A tool for systematic reviews should be standardized and transparent in order to consistently determine which studies meet minimum quality criteria prior to performing in-depth analyses of the data. The Systematic Omics Analysis Review (SOAR) tool is focused on assisting risk assessment support teams in performing systematic reviews of transcriptomic studies. SOAR is a spreadsheet tool of 35 objective questions developed by domain experts, focused on transcriptomic microarray studies, and including four main topics: test system, test substance, experimental design, and microarray data. The tool will be used as a guide to identify studies that meet basic published quality criteria, such as those defined by the Minimum Information About a Microarray Experiment standard and the Toxicological Data Reliability Assessment Tool. Seven scientists were recruited to test the tool by using it to independently rate 15 published manuscripts that study chemical exposures with microarrays. Using their feedback, questions were weighted based on importance of the information and a suitability cutoff was set for each of the four topic sections. The final validation resulted in 100% agreement between the users on four separate manuscripts, showing that the SOAR tool may be used to facilitate the standardized and transparent screening of microarray literature for environmental human health risk assessment. PMID:25531884
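The weighting-and-cutoff scheme described above can be sketched as a simple weighted screening score; the question weights and cutoff below are illustrative, not SOAR's actual values:

```python
# Hypothetical weighted screening of a topic section: each question's
# "yes" answer contributes its weight, and the section passes only if
# the total meets the suitability cutoff.
def section_passes(answers, weights, cutoff):
    """Sum the weights of affirmatively answered questions; compare to cutoff."""
    score = sum(w for a, w in zip(answers, weights) if a)
    return score >= cutoff

# Four questions in a hypothetical "experimental design" section:
weights = [3, 2, 2, 1]
print(section_passes([True, True, False, True], weights, cutoff=5))   # → True
print(section_passes([True, False, False, True], weights, cutoff=5))  # → False
```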
A remark on copy number variation detection methods.
Li, Shuo; Dou, Xialiang; Gao, Ruiqi; Ge, Xinzhou; Qian, Minping; Wan, Lin
2018-01-01
Copy number variations (CNVs) are gains and losses of DNA sequence in a genome. High-throughput platforms such as microarrays and next-generation sequencing (NGS) technologies have been applied to genome-wide detection of copy number losses. Although progress has been made in both approaches, the accuracy and consistency of CNV calling from the two platforms remain in dispute. In this study, we perform a deep analysis of copy number losses in 254 human DNA samples for which both SNP microarray data and NGS data are publicly available, from the HapMap Project and the 1000 Genomes Project respectively. We show that the copy number losses reported by the HapMap Project and the 1000 Genomes Project have < 30% overlap, even though these reports are required by their corresponding projects to have cross-platform (e.g. PCR, microarray, and high-throughput sequencing) experimental support, and state-of-the-art calling methods were employed. On the other hand, when copy number losses are called directly from HapMap microarray data by an accurate algorithm, CNVhac, almost all of the calls have lower read mapping depth in the NGS data, and 88% of them can be supported by sequences with breakpoints in the NGS data. Our results suggest that microarrays can call CNVs reliably and that the unessential requirement of additional cross-platform support may introduce false negatives. The inconsistency of the CNV reports from the HapMap Project and the 1000 Genomes Project might result from the inadequate information contained in microarray data, inconsistent detection criteria, or the filtering effect of cross-platform support. Statistical tests on the CNVs called by CNVhac show that microarray data can offer reliable CNV reports, and the majority of CNV candidates can be confirmed by raw sequences. Therefore, the CNV candidates given by a good caller can be highly reliable without cross-platform support, and additional experimental information should be applied as needed rather than as a blanket requirement.
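Cross-platform concordance of CNV calls, as discussed above, is commonly assessed by the overlap of genomic intervals. A minimal sketch with made-up coordinates and an assumed 50% overlap criterion (not the study's actual criteria):

```python
def overlap_fraction(a, b):
    """Intersection length divided by the length of the shorter interval."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / min(a[1] - a[0], b[1] - b[0])

def concordant(calls_1, calls_2, min_frac=0.5):
    """Calls from platform 1 overlapping any platform-2 call by >= min_frac."""
    return [c for c in calls_1
            if any(overlap_fraction(c, d) >= min_frac for d in calls_2)]

array_calls = [(100, 500), (2000, 2600), (9000, 9100)]  # (start, end) losses
ngs_calls = [(120, 480), (5000, 5300)]
print(concordant(array_calls, ngs_calls))  # → [(100, 500)]
```

Under this kind of rule, array calls with no sequencing counterpart are counted as discordant even when the microarray evidence itself is strong, which is the filtration effect discussed above.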
Tall, Ben Davies; Gangiredla, Jayanthi; Gopinath, Gopal R.; Yan, Qiongqiong; Chase, Hannah R.; Lee, Boram; Hwang, Seongeun; Trach, Larisa; Park, Eunbi; Yoo, YeonJoo; Chung, TaeJung; Jackson, Scott A.; Patel, Isha R.; Sathyamoorthy, Venugopal; Pava-Ripoll, Monica; Kotewicz, Michael L.; Carter, Laurenda; Iversen, Carol; Pagotto, Franco; Stephan, Roger; Lehner, Angelika; Fanning, Séamus; Grim, Christopher J.
2015-01-01
Cronobacter species cause infections in all age groups; however, neonates are at highest risk and remain the most susceptible age group for life-threatening invasive disease. The genus contains seven species: Cronobacter sakazakii, Cronobacter malonaticus, Cronobacter turicensis, Cronobacter muytjensii, Cronobacter dublinensis, Cronobacter universalis, and Cronobacter condimenti. Despite an abundance of published genomes of these species, genomics-based epidemiology of the genus is not well established. The gene content of a diverse group of 126 unique Cronobacter and taxonomically related isolates was determined using a pan genomic-based DNA microarray as a genotyping tool and as a means to identify outbreak isolates for food safety, environmental, and clinical surveillance purposes. The microarray constitutes 19,287 independent genes representing 15 Cronobacter genomes and 18 plasmids and 2,371 virulence factor genes of phylogenetically related Gram-negative bacteria. The Cronobacter microarray was able to distinguish the seven Cronobacter species from one another and from non-Cronobacter species; and within each species, strains grouped into distinct clusters based on their genomic diversity. These results also support the phylogenic divergence of the genus and clearly highlight the genomic diversity among each member of the genus. The current study establishes a powerful platform for further genomics research of this diverse genus, an important prerequisite toward the development of future countermeasures against this foodborne pathogen in the food safety and clinical arenas. PMID:25984509
Bayes multiple decision functions.
Wu, Wensong; Peña, Edsel A
2013-01-01
This paper deals with the problem of simultaneously making many (M) binary decisions based on one realization of a random data matrix X. M is typically large and X will usually have M rows associated with each of the M decisions to make, but for each row the data may be low dimensional. Such problems arise in many practical areas such as the biological and medical sciences, where the available dataset is from microarrays or other high-throughput technology and the goal is to decide which among many genes are relevant with respect to some phenotype of interest; in the engineering and reliability sciences; in astronomy; in education; and in business. A Bayesian decision-theoretic approach to this problem is implemented with the overall loss function being a cost-weighted linear combination of Type I and Type II loss functions. The class of loss functions considered allows for use of the false discovery rate (FDR), false nondiscovery rate (FNR), and missed discovery rate (MDR) in assessing the quality of decisions. Through this Bayesian paradigm, the Bayes multiple decision function (BMDF) is derived and an efficient algorithm to obtain the optimal Bayes action is described. In contrast to many works in the literature where the rows of the matrix X are assumed to be stochastically independent, we allow a dependent data structure with the associations obtained through a class of frailty-induced Archimedean copulas. In particular, non-Gaussian dependent data structures, which are typical with failure-time data, can be entertained. The numerical determination of the Bayes optimal action is facilitated through sequential Monte Carlo techniques. The theory developed could also be extended to the problems of multiple hypothesis testing, multiple classification and prediction, and high-dimensional variable selection.
The proposed procedure is illustrated for the simple versus simple hypotheses setting and for the composite hypotheses setting through simulation studies. The procedure is also applied to a subset of a microarray data set from a colon cancer study.
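Setting aside the copula dependence structure and the sequential Monte Carlo machinery, the core of a cost-weighted Bayes rule can be sketched as thresholding posterior probabilities at a level set by the ratio of Type I to Type II costs. This is an assumption-laden simplification for independent decisions, not the paper's BMDF:

```python
def bayes_decisions(posteriors, cost_fp=1.0, cost_fn=1.0):
    """Declare a signal when P(signal | data) exceeds the cost-derived
    threshold cost_fp / (cost_fp + cost_fn); higher false-positive cost
    raises the bar for discovery."""
    t = cost_fp / (cost_fp + cost_fn)
    return [p > t for p in posteriors]

post = [0.95, 0.40, 0.72, 0.10, 0.55]      # posteriors for five hypothetical genes
print(bayes_decisions(post))                # threshold 0.5 → [True, False, True, False, True]
print(bayes_decisions(post, cost_fp=3.0))   # threshold 0.75 → [True, False, False, False, False]
```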
The MGED Ontology: a resource for semantics-based description of microarray experiments.
Whetzel, Patricia L; Parkinson, Helen; Causton, Helen C; Fan, Liju; Fostel, Jennifer; Fragoso, Gilberto; Game, Laurence; Heiskanen, Mervi; Morrison, Norman; Rocca-Serra, Philippe; Sansone, Susanna-Assunta; Taylor, Chris; White, Joseph; Stoeckert, Christian J
2006-04-01
The generation of large amounts of microarray data and the need to share these data bring challenges for both data management and annotation, and highlight the need for standards. MIAME specifies the minimum information needed to describe a microarray experiment, and the Microarray Gene Expression Object Model (MAGE-OM) and resulting MAGE-ML provide a mechanism to standardize data representation for data exchange; however, a common terminology for data annotation is needed to support these standards. Here we describe the MGED Ontology (MO) developed by the Ontology Working Group of the Microarray Gene Expression Data (MGED) Society. The MO provides terms for annotating all aspects of a microarray experiment, from the design of the experiment and array layout, through to the preparation of the biological sample and the protocols used to hybridize the RNA and analyze the data. The MO was developed to provide terms for annotating experiments in line with the MIAME guidelines, i.e. to provide the semantics to describe a microarray experiment according to the concepts specified in MIAME. The MO does not attempt to incorporate terms from existing ontologies, e.g. those that deal with anatomical parts or developmental stage terms, but provides a framework to reference terms in other ontologies and therefore facilitates the use of ontologies in microarray data annotation. The MGED Ontology version 1.2.0 is available as a file in both DAML and OWL formats at http://mged.sourceforge.net/ontologies/index.php. Release notes and annotation examples are provided. The MO is also provided via the NCICB's Enterprise Vocabulary System (http://nciterms.nci.nih.gov/NCIBrowser/Dictionary.do). Stoeckrt@pcbi.upenn.edu Supplementary data are available at Bioinformatics online.
Fluorescence-based bioassays for the detection and evaluation of food materials.
Nishi, Kentaro; Isobe, Shin-Ichiro; Zhu, Yun; Kiyama, Ryoiti
2015-10-13
We summarize here the recent progress in fluorescence-based bioassays for the detection and evaluation of food materials by focusing on fluorescent dyes used in bioassays and applications of these assays for food safety, quality and efficacy. Fluorescent dyes have been used in various bioassays, such as biosensing, cell assay, energy transfer-based assay, probing, protein/immunological assay and microarray/biochip assay. Among the arrays used in microarray/biochip assay, fluorescence-based microarrays/biochips, such as antibody/protein microarrays, bead/suspension arrays, capillary/sensor arrays, DNA microarrays/polymerase chain reaction (PCR)-based arrays, glycan/lectin arrays, immunoassay/enzyme-linked immunosorbent assay (ELISA)-based arrays, microfluidic chips and tissue arrays, have been developed and used for the assessment of allergy/poisoning/toxicity, contamination and efficacy/mechanism, and quality control/safety. DNA microarray assays have been used widely for food safety and quality as well as searches for active components. DNA microarray-based gene expression profiling may be useful for such purposes due to its advantages in the evaluation of pathway-based intracellular signaling in response to food materials.
Ensemble Feature Learning of Genomic Data Using Support Vector Machine
Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.
2016-01-01
The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, and mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models on randomly drawn bootstrap samples from the training set will produce different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate features is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that on average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over the random forest based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters within the selected data. PMID:27304923
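The bootstrap-and-aggregate ranking idea behind ESVM-RFE can be sketched with a toy scorer standing in for SVM feature weights; this is an illustrative simplification, not the authors' implementation:

```python
import random

def ensemble_rfe(score_fn, n_features, n_models=7, n_keep=2, seed=42):
    """Ensemble backward elimination: each round, rank the surviving
    features under several noisy (bootstrap-like) scorings, average the
    ranks, and drop the single worst-ranked feature."""
    rng = random.Random(seed)
    surviving = list(range(n_features))
    while len(surviving) > n_keep:
        rankings = [sorted(surviving, key=lambda f: -score_fn(f, rng))
                    for _ in range(n_models)]
        mean_rank = {f: sum(r.index(f) for r in rankings) / n_models
                     for f in surviving}
        surviving.sort(key=mean_rank.get)   # best-ranked features first
        surviving.pop()                     # eliminate the worst
    return surviving

# Toy scorer standing in for an SVM's feature weight: the "true"
# importance of feature f is 10*f, blurred by sampling noise.
noisy_score = lambda f, rng: 10 * f + rng.gauss(0.0, 0.5)
print(ensemble_rfe(noisy_score, n_features=6))  # → [5, 4]
```

Averaging ranks across the models makes the elimination decision robust to the variability of any single bootstrap fit, which is the point of the ensemble step.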
Missing value imputation for microarray data: a comprehensive comparison study and a web tool.
Chiu, Chia-Chun; Chan, Shih-Yao; Wang, Chung-Ching; Wu, Wei-Sheng
2013-01-01
Microarray data are usually peppered with missing values due to various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed for improving the performance of microarray data analyses. Although many algorithms have been developed, there is still much debate over the selection of the optimal algorithm. Existing studies comparing the performance of different algorithms are still not comprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether the datasets from different species have different impacts on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of a dataset, not on the species the samples come from. In addition to the statistical measure, two other measures with biological meanings are useful to reflect the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices for handling missing values in most microarray datasets. In this work, we carried out a comprehensive comparison of the algorithms for microarray missing value imputation. Based on such a comprehensive comparison, researchers can easily choose the optimal algorithm for their datasets.
Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses.
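The local-least-squares family of methods recommended above can be sketched compactly: each gene row with missing entries is regressed on its nearest complete neighbor rows, and the fitted coefficients fill the gaps. The NumPy sketch below illustrates the idea only; it is not the exact algorithm benchmarked in the study or implemented in MissVIA, and the function name and default `k` are illustrative.

```python
import numpy as np

def lls_impute(X, k=5):
    """Local-least-squares imputation sketch: for each row with
    missing entries (marked NaN), regress it on its k nearest
    complete rows (Euclidean distance over the observed columns)
    and fill the gaps with the fitted values."""
    X = np.asarray(X, float)
    out = X.copy()
    complete = X[~np.isnan(X).any(axis=1)]      # candidate neighbor rows
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        obs = ~miss
        d = np.linalg.norm(complete[:, obs] - row[obs], axis=1)
        nbrs = complete[np.argsort(d)[:k]]      # k nearest complete rows
        # least-squares fit of the incomplete row on its neighbors
        coef, *_ = np.linalg.lstsq(nbrs[:, obs].T, row[obs], rcond=None)
        out[i, miss] = coef @ nbrs[:, miss]
    return out
```

When a gene is (approximately) a linear combination of its neighbors, the regression recovers the missing value; in practice `k` is tuned and rows with too few observed columns need special handling.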
How Decision Support Systems Can Benefit from a Theory of Change Approach.
Allen, Will; Cruz, Jennyffer; Warburton, Bruce
2017-06-01
Decision support systems are now mostly computer and internet-based information systems designed to support land managers with complex decision-making. However, there is concern that many environmental and agricultural decision support systems remain underutilized and ineffective. Recent efforts to improve decision support systems use have focused on enhancing stakeholder participation in their development, but a mismatch between stakeholders' expectations and the reality of decision support systems outputs continues to limit uptake. Additional challenges remain in problem-framing and evaluation. We propose using an outcomes-based approach called theory of change in conjunction with decision support systems development to support both wider problem-framing and outcomes-based monitoring and evaluation. The theory of change helps with framing by placing the decision support systems within a wider context. It highlights how decision support systems use can "contribute" to long-term outcomes, and helps align decision support systems outputs with these larger goals. We illustrate the benefits of linking decision support systems development and application with a theory of change approach using an example of pest rabbit management in Australia. We develop a theory of change that outlines the activities required to achieve the outcomes desired from an effective rabbit management program, and two decision support systems that contribute to specific aspects of decision making in this wider problem context. Using a theory of change in this way should increase acceptance of the role of decision support systems by end-users, clarify their limitations and, importantly, increase effectiveness of rabbit management. The use of a theory of change should benefit those seeking to improve decision support systems design, use, and evaluation.
Multiclass classification of microarray data samples with a reduced number of genes
2011-01-01
Background Multiclass classification of microarray data samples with a reduced number of genes is a rich and challenging problem in Bioinformatics research. The problem gets harder as the number of classes is increased. In addition, the performance of most classifiers is tightly linked to the effectiveness of mandatory gene selection methods. Critical to gene selection is the availability of estimates about the maximum number of genes that can be handled by any classification algorithm. Lack of such estimates may lead to either computationally demanding explorations of a search space with thousands of dimensions or classification models based on gene sets of unrestricted size. In the former case, unbiased but possibly overfitted classification models may arise. In the latter case, biased classification models unable to support statistically significant findings may be obtained. Results A novel bound on the maximum number of genes that can be handled by binary classifiers in binary mediated multiclass classification algorithms of microarray data samples is presented. The bound suggests that high-dimensional binary output domains might favor the existence of accurate and sparse binary mediated multiclass classifiers for microarray data samples. Conclusions A comprehensive experimental work shows that the bound is indeed useful to induce accurate and sparse multiclass classifiers for microarray data samples. PMID:21342522
Microarray expression technology: from start to finish.
Elvidge, Gareth
2006-01-01
The recent introduction of new microarray expression technologies and the further development of established platforms ensure that the researcher is presented with a range of options for performing an experiment. Whilst this has opened up the possibilities for future applications, such as exon-specific arrays, increased sample throughput and 'chromatin immunoprecipitation (ChIP) on chip' experiments, the initial decision processes and experiment planning are made more difficult. This review will give an overview of the various technologies that are available to perform a microarray expression experiment, from the initial planning stages through to the final data analysis. Both practical aspects and data analysis options will be considered. The relative advantages and disadvantages will be discussed with insights provided for future directions of the technology.
An Introduction to MAMA (Meta-Analysis of MicroArray data) System.
Zhang, Zhe; Fenstermacher, David
2005-01-01
Analyzing microarray data across multiple experiments has been proven advantageous. To support this kind of analysis, we are developing a software system called MAMA (Meta-Analysis of MicroArray data). MAMA utilizes a client-server architecture with a relational database on the server-side for the storage of microarray datasets collected from various resources. The client-side is an application running on the end user's computer that allows the user to manipulate microarray data and analytical results locally. MAMA implementation will integrate several analytical methods, including meta-analysis within an open-source framework offering other developers the flexibility to plug in additional statistical algorithms.
Support vector machine and principal component analysis for microarray data classification
NASA Astrophysics Data System (ADS)
Astuti, Widi; Adiwijaya
2018-03-01
Cancer is a leading cause of death worldwide, although a significant proportion of cases can be cured if detected early. In recent decades, microarray technology has played an important role in the diagnosis of cancer. By using data mining techniques, microarray data classification can improve the accuracy of cancer diagnosis compared to traditional techniques. Microarray data are characterized by small sample sizes but huge dimensionality, so the challenge for researchers is to provide solutions for microarray data classification with high performance in both accuracy and running time. This research proposed the use of Principal Component Analysis (PCA) as a dimension reduction method along with a Support Vector Machine (SVM), optimized through its kernel function, as a classifier for microarray data classification. The proposed scheme was applied to seven data sets using 5-fold cross validation, and evaluation and analysis were then conducted in terms of both accuracy and running time. The results showed that the scheme obtained 100% accuracy on the Ovarian and Lung Cancer data when Linear and Cubic kernel functions were used. In terms of running time, PCA greatly reduced the running time for every data set.
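The PCA-plus-classifier pipeline described above can be sketched with a dependency-free stand-in: PCA via the SVD of the centered data matrix, followed by a nearest-centroid rule in place of the SVM step (in practice one would substitute an SVM with the chosen kernel). Function names are illustrative, not from the paper.

```python
import numpy as np

def pca_fit(X, n_components):
    """Center the data and return the mean plus the top principal
    axes, obtained from the SVD of the centered matrix."""
    X = np.asarray(X, float)
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:n_components]

def pca_transform(X, mu, axes):
    """Project samples onto the retained principal axes."""
    return (X - mu) @ axes.T

def nearest_centroid_predict(Z_train, y_train, Z_test):
    """Toy stand-in for the SVM: assign each reduced sample to the
    class whose centroid is closest in the PCA space."""
    y_train = np.asarray(y_train)
    classes = np.unique(y_train)
    cents = np.stack([Z_train[y_train == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(Z_test[:, None, :] - cents[None], axis=2)
    return classes[d.argmin(axis=1)]
```

Because distances between projected points are what the classifier uses, the sign ambiguity of the SVD axes does not affect the predictions.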
NASA Astrophysics Data System (ADS)
Bogdanov, Valery L.; Boyce-Jacino, Michael
1999-05-01
Confined arrays of biochemical probes deposited on a solid support surface (an analytical microarray or 'chip') provide an opportunity to analyze multiple reactions simultaneously. Microarrays are increasingly used in genetics, medicine, and environmental screening as research and analytical instruments. The power of microarray technology comes from its parallelism, which grows with array miniaturization, minimization of reagent volume per reaction site, and reaction multiplexing. An optical detector of microarray signals should combine high sensitivity with spatial and spectral resolution. Additionally, low cost and a high processing rate are needed to transfer microarray technology into biomedical practice. We designed an imager that provides confocal, complete-spectrum detection of an entire fluorescently labeled microarray in parallel. The imager uses a microlens array, a non-slit spectral decomposer, and a highly sensitive detector (cooled CCD). Two imaging channels provide simultaneous detection of localization, integrated intensity, and spectral intensity for each reaction site in the microarray. Dimensional matching between the microarray and the imager's optics eliminates all moving parts in the instrumentation, enabling highly informative, fast, and low-cost microarray detection. We report the theory of confocal hyperspectral imaging with a microlens array and experimental data from applying the developed imager to detect a fluorescently labeled microarray with a density of approximately 10^3 sites per cm^2.
Predicting breast cancer using an expression values weighted clinical classifier.
Thomas, Minta; De Brabanter, Kris; Suykens, Johan A K; De Moor, Bart
2014-12-31
Clinical data, such as patient history, laboratory analysis, and ultrasound parameters, which are the basis of day-to-day clinical decision support, are often used to guide the clinical management of cancer in the presence of microarray data. Several data fusion techniques are available to integrate genomics or proteomics data, but only a few studies have created a single prediction model using both gene expression and clinical data. These studies often remain inconclusive regarding an obtained improvement in prediction performance. To improve clinical management, these data should be fully exploited. This requires efficient algorithms to integrate these data sets and design a final classifier. LS-SVM classifiers and generalized eigenvalue/singular value decompositions are successfully used in many bioinformatics applications for prediction tasks. Building on the benefits of these two techniques, we propose a machine learning approach, a weighted LS-SVM classifier, to integrate two data sources: microarray and clinical parameters. We compared and evaluated the proposed methods on five breast cancer case studies. Compared to the LS-SVM classifier on individual data sets, generalized eigenvalue decomposition (GEVD), and kernel GEVD, the proposed weighted LS-SVM classifier offers good prediction performance, in terms of test area under the ROC curve (AUC), on all breast cancer case studies. Thus a clinical classifier weighted with the microarray data set results in significantly improved diagnosis, prognosis, and prediction of responses to therapy. The proposed model has been shown to be a promising mathematical framework for both data fusion and non-linear classification problems.
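The flavor of a weighted two-source classifier can be illustrated by a convex combination of per-source linear kernels followed by a regularized least-squares (LS-SVM-style) fit. This is a simplified sketch under stated assumptions (linear kernels, no bias term, a hand-picked weight `w`), not the authors' exact weighted LS-SVM formulation.

```python
import numpy as np

def combined_kernel(Aa, Ab, Ba, Bb, w):
    """Convex combination of linear kernels built from two data
    sources (e.g. clinical parameters and microarray features)."""
    Aa, Ab, Ba, Bb = (np.asarray(M, float) for M in (Aa, Ab, Ba, Bb))
    return w * (Aa @ Ba.T) + (1.0 - w) * (Ab @ Bb.T)

def lsfit(K, y, lam=1e-2):
    """LS-SVM-style dual coefficients: solve (K + lam*I) alpha = y."""
    return np.linalg.solve(K + lam * np.eye(len(K)), np.asarray(y, float))

def kernel_predict(K_test_train, alpha):
    """Sign of the kernel expansion gives the predicted class label."""
    return np.sign(K_test_train @ alpha)
```

In the full method the weight between the sources would itself be learned; here `w` simply encodes how much the clinical kernel dominates the microarray kernel.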
Research on web-based decision support system for sports competitions
NASA Astrophysics Data System (ADS)
Huo, Hanqiang
2010-07-01
This paper describes the system architecture and implementation technology of the decision support system for sports competitions, discusses the design of decision-making modules, management modules and security of the system, and proposes the development idea of building a web-based decision support system for sports competitions.
Microarray-integrated optoelectrofluidic immunoassay system
Han, Dongsik; Park, Je-Kyun
2016-01-01
A microarray-based analytical platform has been utilized as a powerful tool in biological assay fields. However, an analyte depletion problem due to the slow mass transport based on molecular diffusion causes low reaction efficiency, resulting in a limitation for practical applications. This paper presents a novel method to improve the efficiency of microarray-based immunoassay via an optically induced electrokinetic phenomenon by integrating an optoelectrofluidic device with a conventional glass slide-based microarray format. A sample droplet was loaded between the microarray slide and the optoelectrofluidic device on which a photoconductive layer was deposited. Under the application of an AC voltage, optically induced AC electroosmotic flows caused by a microarray-patterned light actively enhanced the mass transport of target molecules at the multiple assay spots of the microarray simultaneously, which reduced tedious reaction time from more than 30 min to 10 min. Based on this enhancing effect, a heterogeneous immunoassay with a tiny volume of sample (5 μl) was successfully performed in the microarray-integrated optoelectrofluidic system using immunoglobulin G (IgG) and anti-IgG, resulting in improved efficiency compared to the static environment. Furthermore, the application of multiplex assays was also demonstrated by multiple protein detection. PMID:27190571
The second phase of the MicroArray Quality Control (MAQC-II) project evaluated common practices for developing and validating microarray-based models aimed at predicting toxicological and clinical endpoints. Thirty-six teams developed classifiers for 13 endpoints - some easy, som...
An Advanced Approach to Simultaneous Monitoring of Multiple Bacteria in Space
NASA Technical Reports Server (NTRS)
Eggers, M.
1998-01-01
The utility of a novel microarray-based microbial analyzer was demonstrated by the rapid detection, imaging, and identification of a mixture of microorganisms found in a waste water sample from the Lunar-Mars Life Support Test Project through the synergistic combination of: (1) judicious RNA probe selection via algorithms developed by University of Houston scientists; (2) tuned surface chemistries developed by Baylor College of Medicine scientists to facilitate hybridization of rRNA targets to DNA probes under very low salt conditions, thereby minimizing secondary structure; and (3) integration of the microarray printing and detection/imaging instrumentation by Genometrix to complete the quantitative analysis of microorganism mixtures.
From guideline modeling to guideline execution: defining guideline-based decision-support services.
Tu, S. W.; Musen, M. A.
2000-01-01
We describe our task-based approach to defining the guideline-based decision-support services that the EON system provides. We categorize uses of guidelines in patient-specific decision support into a set of generic tasks (making of decisions, specification of work to be performed, interpretation of data, setting of goals, and issuance of alerts and reminders) that can be solved using various techniques. Our model includes constructs required for representing the knowledge used by these techniques. These constructs form a toolkit from which developers can select modeling solutions for guideline tasks. Based on the tasks and the guideline model, we define a guideline-execution architecture and a model of interactions between a decision-support server and clients that invoke services provided by the server. These services use generic interfaces derived from guideline tasks and their associated modeling constructs. We describe two implementations of these decision-support services and discuss how this work can be generalized. We argue that a well-defined specification of guideline-based decision-support services will facilitate sharing of tools that implement computable clinical guidelines. PMID:11080007
Bal, Mert; Amasyali, M Fatih; Sever, Hayri; Kose, Guven; Demirhan, Ayse
2014-01-01
The importance of decision support systems is increasing; they support the decision-making process in cases of uncertainty and lack of information, and they are widely used in various fields such as engineering, finance, and medicine. Medical decision support systems help healthcare personnel to select the optimal method during the treatment of patients. Decision support systems are intelligent software systems that support decision makers in their decisions. The design of decision support systems consists of four main components: the inference mechanism, the knowledge base, the explanation module, and the active memory. The inference mechanism constitutes the basis of a decision support system. Various methods can be used in these mechanisms, such as decision trees, artificial neural networks, statistical methods, and rule-based methods. In decision support systems, these methods can be used separately or combined into a hybrid system. In this study, synthetic data sets with 10, 100, 1000, and 2000 records were produced to reflect the probabilities of the ALARM network. The accuracy of 11 machine learning methods for the inference mechanism of a medical decision support system is compared on these various data sets.
Huser, Vojtech; Rasmussen, Luke V; Oberg, Ryan; Starren, Justin B
2011-04-10
Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture.
We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform.
Salehi, Reza; Tsoi, Stephen C M; Colazo, Marcos G; Ambrose, Divakar J; Robert, Claude; Dyck, Michael K
2017-01-30
Early embryonic loss is a large contributor to infertility in cattle. Moreover, the bovine is an interesting model for studying human preimplantation embryo development due to their similar developmental processes. Although genetic factors are known to affect early embryonic development, the discovery of such factors has been a serious challenge. Microarray technology allows quantitative measurement and gene expression profiling of transcript levels on a genome-wide basis. One of the main decisions that has to be made when planning a microarray experiment is whether to use a one- or two-color approach. A two-color design increases technical replication, minimizes variability, improves sensitivity and accuracy, and allows loop designs that define common reference samples. Although the microarray is a powerful biological tool, there are potential pitfalls that can attenuate its power. Hence, in this technical paper we demonstrate an optimized protocol for RNA extraction, amplification, labeling, hybridization of the labeled amplified RNA to the array, array scanning, and data analysis using the two-color analysis strategy.
Web-services-based spatial decision support system to facilitate nuclear waste siting
NASA Astrophysics Data System (ADS)
Huang, L. Xinglai; Sheng, Grant
2006-10-01
The availability of spatial web services enables data sharing among managers, decision and policy makers, and other stakeholders in much simpler ways than before and subsequently has created completely new opportunities in the process of spatial decision making. Though generally designed for a certain problem domain, web-services-based spatial decision support systems (WSDSS) can provide a flexible problem-solving environment to explore the decision problem, understand and refine the problem definition, and generate and evaluate multiple alternatives for a decision. This paper presents a new framework for the development of a web-services-based spatial decision support system. The WSDSS is composed of distributed web services, each of which implements its own functions or provides different geospatial data, and which may reside on different computers in different locations. WSDSS includes six key components, namely: database management system, catalog, analysis functions and models, GIS viewers and editors, report generators, and graphical user interfaces. In this study, the architecture of a web-services-based spatial decision support system to facilitate nuclear waste siting is described as an example. The theoretical, conceptual and methodological challenges and issues associated with developing a web-services-based spatial decision support system are described.
Big-Data Based Decision-Support Systems to Improve Clinicians' Cognition.
Roosan, Don; Samore, Matthew; Jones, Makoto; Livnat, Yarden; Clutter, Justin
2016-01-01
Complex clinical decision-making could be facilitated by using population health data to inform clinicians. In two previous studies, we interviewed 16 infectious disease experts to understand complex clinical reasoning. For this study, we focused on the experts' answers about how clinical reasoning can be supported by population-based Big Data. We found that cognitive strategies such as trajectory tracking, perspective taking, and metacognition have the potential to improve clinicians' cognition when dealing with complex problems. These cognitive strategies could be supported by population health data, and all have important implications for the design of Big-Data based decision-support tools that could be embedded in electronic health records. Our findings provide directions for task allocation and for the design of Big-Data based decision-support applications in health care.
Abruzzi, Katharine; Denome, Sylvia; Olsen, Jens Raabjerg; Assenholt, Jannie; Haaning, Line Lindegaard; Jensen, Torben Heick; Rosbash, Michael
2007-01-01
Genetic screens in Saccharomyces cerevisiae provide novel information about interacting genes and pathways. We screened for high-copy-number suppressors of a strain with the gene encoding the nuclear exosome component Rrp6p deleted, with either a traditional plate screen for suppressors of rrp6Δ temperature sensitivity or a novel microarray enhancer/suppressor screening (MES) strategy. MES combines DNA microarray technology with high-copy-number plasmid expression in liquid media. The plate screen and MES identified overlapping, but also different, suppressor genes. Only MES identified the novel mRNP protein Nab6p and the tRNA transporter Los1p, which could not have been identified in a traditional plate screen; both genes are toxic when overexpressed in rrp6Δ strains at 37°C. Nab6p binds poly(A)+ RNA, and the functions of Nab6p and Los1p suggest that mRNA metabolism and/or protein synthesis are growth rate limiting in rrp6Δ strains. Microarray analyses of gene expression in rrp6Δ strains and a number of suppressor strains support this hypothesis. PMID:17101774
Introduction to Decision Support Systems for Risk Based Management of Contaminated Sites
A book on Decision Support Systems for Risk-based Management of contaminated sites is appealing for two reasons. First, it addresses the problem of contaminated sites, which has worldwide importance. Second, it presents Decision Support Systems (DSSs), which are powerful comput...
Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.
Zhang, Wenqian; Yu, Ying; Hertwig, Falk; Thierry-Mieg, Jean; Zhang, Wenwei; Thierry-Mieg, Danielle; Wang, Jian; Furlanello, Cesare; Devanarayan, Viswanath; Cheng, Jie; Deng, Youping; Hero, Barbara; Hong, Huixiao; Jia, Meiwen; Li, Li; Lin, Simon M; Nikolsky, Yuri; Oberthuer, André; Qing, Tao; Su, Zhenqiang; Volland, Ruth; Wang, Charles; Wang, May D; Ai, Junmei; Albanese, Davide; Asgharzadeh, Shahab; Avigad, Smadar; Bao, Wenjun; Bessarabova, Marina; Brilliant, Murray H; Brors, Benedikt; Chierici, Marco; Chu, Tzu-Ming; Zhang, Jibin; Grundy, Richard G; He, Min Max; Hebbring, Scott; Kaufman, Howard L; Lababidi, Samir; Lancashire, Lee J; Li, Yan; Lu, Xin X; Luo, Heng; Ma, Xiwen; Ning, Baitang; Noguera, Rosa; Peifer, Martin; Phan, John H; Roels, Frederik; Rosswog, Carolina; Shao, Susan; Shen, Jie; Theissen, Jessica; Tonini, Gian Paolo; Vandesompele, Jo; Wu, Po-Yen; Xiao, Wenzhong; Xu, Joshua; Xu, Weihong; Xuan, Jiekun; Yang, Yong; Ye, Zhan; Dong, Zirui; Zhang, Ke K; Yin, Ye; Zhao, Chen; Zheng, Yuanting; Wolfinger, Russell D; Shi, Tieliu; Malkas, Linda H; Berthold, Frank; Wang, Jun; Tong, Weida; Shi, Leming; Peng, Zhiyu; Fischer, Matthias
2015-06-25
Gene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model. We generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models. We demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice.
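The random training/validation protocol used to build and evaluate the predictive models above can be sketched generically; `fit` and `predict` below are placeholders for whatever classifier is being benchmarked, and all names are illustrative rather than taken from the study.

```python
import numpy as np

def holdout_accuracy(X, y, fit, predict, frac=0.5, seed=0):
    """Randomly split the cohort into training and validation sets,
    train on the first part, and report validation accuracy."""
    X, y = np.asarray(X, float), np.asarray(y)
    idx = np.random.default_rng(seed).permutation(len(y))
    cut = int(frac * len(y))
    tr, va = idx[:cut], idx[cut:]
    model = fit(X[tr], y[tr])                  # any classifier under test
    return float((predict(model, X[va]) == y[va]).mean())
```

Repeating such splits across endpoints and platform/pipeline/feature-level combinations is what yields a grid of comparable models, from which the influence of each factor on prediction accuracy can be assessed.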
Missing value imputation for microarray data: a comprehensive comparison study and a web tool
2013-01-01
Background Microarray data are usually peppered with missing values for various reasons. However, most of the downstream analyses for microarray data require complete datasets. Therefore, accurate algorithms for missing value estimation are needed for improving the performance of microarray data analyses. Although many algorithms have been developed, there is still much debate over the selection of the optimal algorithm. Studies comparing the performance of different algorithms are still not comprehensive, especially in the number of benchmark datasets used, the number of algorithms compared, the rounds of simulation conducted, and the performance measures used. Results In this paper, we performed a comprehensive comparison by using (I) thirteen datasets, (II) nine algorithms, (III) 110 independent runs of simulation, and (IV) three types of measures to evaluate the performance of each imputation algorithm fairly. First, the effects of different types of microarray datasets on the performance of each imputation algorithm were evaluated. Second, we discussed whether datasets from different species have different impacts on the performance of different algorithms. To assess the performance of each algorithm fairly, all evaluations were performed using three types of measures. Our results indicate that the performance of an imputation algorithm mainly depends on the type of dataset rather than on the species the samples come from. In addition to the statistical measure, two other measures with biological meanings are useful for reflecting the impact of missing value imputation on the downstream data analyses. Our study suggests that local-least-squares-based methods are good choices for handling missing values in most microarray datasets. Conclusions In this work, we carried out a comprehensive comparison of the algorithms for microarray missing value imputation.
Based on such a comprehensive comparison, researchers could choose the optimal algorithm for their datasets easily. Moreover, new imputation algorithms could be compared with the existing algorithms using this comparison strategy as a standard protocol. In addition, to assist researchers in dealing with missing values easily, we built a web-based and easy-to-use imputation tool, MissVIA (http://cosbi.ee.ncku.edu.tw/MissVIA), which supports many imputation algorithms. Once users upload a real microarray dataset and choose the imputation algorithms, MissVIA will determine the optimal algorithm for the users' data through a series of simulations, and then the imputed results can be downloaded for the downstream data analyses. PMID:24565220
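The local-least-squares family that the comparison above favors works by regressing each incomplete gene on its most correlated complete genes and predicting the gaps. A minimal sketch of that idea in NumPy (the function name `lls_impute` and the neighbour-selection details are our own illustration, not MissVIA's implementation):

```python
import numpy as np

def lls_impute(X, k=3):
    """Impute NaNs in a gene-by-sample matrix X via local least squares:
    regress each incomplete gene on its k most correlated complete genes."""
    X = X.astype(float).copy()
    complete = ~np.isnan(X).any(axis=1)          # genes with no missing values
    for g in np.where(~complete)[0]:
        obs = ~np.isnan(X[g])                    # samples observed for gene g
        # |Pearson correlation| of gene g with every complete gene, on observed samples
        cors = [abs(np.corrcoef(X[g, obs], X[c, obs])[0, 1])
                for c in np.where(complete)[0]]
        nbrs = np.where(complete)[0][np.argsort(cors)[-k:]]
        A = X[nbrs][:, obs].T                    # design: neighbours' observed values
        coef, *_ = np.linalg.lstsq(A, X[g, obs], rcond=None)
        X[g, ~obs] = X[nbrs][:, ~obs].T @ coef   # predict the missing entries
    return X
```

Real implementations add refinements (adaptive k, regularization), but the regress-on-neighbours core is the same.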
Zeller, Tanja; Wild, Philipp S.; Truong, Vinh; Trégouët, David-Alexandre; Munzel, Thomas; Ziegler, Andreas; Cambien, François; Blankenberg, Stefan; Tiret, Laurence
2011-01-01
Background The hypothesis of dosage compensation of genes of the X chromosome, supported by previous microarray studies, was recently challenged by RNA-sequencing data. It was suggested that microarray studies were biased toward an over-estimation of X-linked expression levels as a consequence of the filtering of genes below the detection threshold of microarrays. Methodology/Principal Findings To investigate this hypothesis, we used microarray expression data from circulating monocytes in 1,467 individuals. In total, 25,349 and 1,156 probes were unambiguously assigned to autosomes and the X chromosome, respectively. Globally, there was a clear shift of X-linked expressions toward lower levels than autosomes. We compared the ratio of expression levels of X-linked to autosomal transcripts (X∶AA) using two different filtering methods: 1. gene expressions were filtered out using a detection threshold irrespective of gene chromosomal location (the standard method in microarrays); 2. equal proportions of genes were filtered out separately on the X and on autosomes. For a wide range of filtering proportions, the X∶AA ratio estimated with the first method was not significantly different from 1, the value expected if dosage compensation was achieved, whereas it was significantly lower than 1 with the second method, leading to the rejection of the hypothesis of dosage compensation. We further showed in simulated data that the choice of the most appropriate method was dependent on biological assumptions regarding the proportion of actively expressed genes on the X chromosome compared to the autosomes and the extent of dosage compensation. Conclusion/Significance This study shows that the method used for filtering out lowly expressed genes in microarrays may have a major impact depending on the hypothesis investigated. The hypothesis of dosage compensation of X-linked genes cannot be firmly accepted or rejected using microarray-based data. PMID:21912656
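The two filtering strategies contrasted above differ only in where the detection cutoff is applied, yet they pull the X∶AA estimate in opposite directions; a toy illustration (the function, thresholds, and simulated data are invented for the example, not taken from the study):

```python
import numpy as np

def x_aa_ratio(expr_x, expr_auto, mode, threshold=None, keep_frac=None):
    """Median X-linked to autosomal expression ratio under two filters:
    'global'  - drop probes below one detection threshold, wherever they map;
    'matched' - drop the same proportion of low probes on X and on autosomes."""
    if mode == "global":
        x = expr_x[expr_x >= threshold]
        a = expr_auto[expr_auto >= threshold]
    else:  # 'matched'
        x = np.sort(expr_x)[int(len(expr_x) * (1 - keep_frac)):]
        a = np.sort(expr_auto)[int(len(expr_auto) * (1 - keep_frac)):]
    return np.median(x) / np.median(a)
```

Because X-linked expression is globally shifted lower, the global threshold discards proportionally more X probes, pushing the surviving X∶AA ratio toward 1, whereas proportion-matched filtering preserves the shift.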
ERIC Educational Resources Information Center
Ballantine, R. Malcolm
Decision Support Systems (DSSs) are computer-based decision aids to use when making decisions which are partially amenable to rational decision-making procedures but contain elements where intuitive judgment is an essential component. In such situations, DSSs are used to improve the quality of decision-making. The DSS approach is based on Simon's…
Li, Dongmei; Le Pape, Marc A; Parikh, Nisha I; Chen, Will X; Dye, Timothy D
2013-01-01
Microarrays are widely used for examining differential gene expression, identifying single nucleotide polymorphisms, and detecting methylation loci. Multiple testing methods in microarray data analysis aim at controlling both Type I and Type II error rates; however, real microarray data do not always fit their distribution assumptions. Smyth's ubiquitous parametric method, for example, inadequately accommodates violations of normality assumptions, resulting in inflated Type I error rates. The Significance Analysis of Microarrays, another widely used microarray data analysis method, is based on a permutation test and is robust to non-normally distributed data; however, the fold change criteria of the Significance Analysis of Microarrays method are problematic, and can critically alter the conclusion of a study, as a result of compositional changes of the control data set in the analysis. We propose a novel approach, combining resampling with empirical Bayes methods: the Resampling-based empirical Bayes Methods. This approach not only reduces false discovery rates for non-normally distributed microarray data, but it is also impervious to the fold change threshold since no control data set selection is needed. Through simulation studies, sensitivities, specificities, total rejections, and false discovery rates are compared across Smyth's parametric method, the Significance Analysis of Microarrays, and the Resampling-based empirical Bayes Methods. Differences in false discovery rate control between each approach are illustrated through a preterm delivery methylation study. The results show that the Resampling-based empirical Bayes Methods offer significantly higher specificity and lower false discovery rates compared to Smyth's parametric method when data are not normally distributed.
The Resampling-based empirical Bayes Methods also offer higher statistical power than the Significance Analysis of Microarrays method when the proportion of significantly differentially expressed genes is large for both normally and non-normally distributed data. Finally, the Resampling-based empirical Bayes Methods are generalizable to next generation sequencing RNA-seq data analysis.
Systematic Review of Medical Informatics-Supported Medication Decision Making.
Melton, Brittany L
2017-01-01
This systematic review sought to assess the applications and implications of current medical informatics-based decision support systems related to medication prescribing and use. Studies published between January 2006 and July 2016 which were indexed in PubMed and written in English were reviewed, and 39 studies were ultimately included. Most of the studies looked at computerized provider order entry or clinical decision support systems. Most studies examined decision support systems as a means of reducing errors or risk, particularly associated with medication prescribing, whereas a few studies evaluated the impact medical informatics-based decision support systems have on workflow or operations efficiency. Most studies identified benefits associated with decision support systems, but some indicate there is room for improvement.
Kilicoglu, Halil; Shin, Dongwook; Rindflesch, Thomas C.
2014-01-01
Gene regulatory networks are a crucial aspect of systems biology in describing molecular mechanisms of the cell. Various computational models rely on random gene selection to infer such networks from microarray data. While incorporation of prior knowledge into data analysis has been deemed important, in practice, it has generally been limited to referencing genes in probe sets and using curated knowledge bases. We investigate the impact of augmenting microarray data with semantic relations automatically extracted from the literature, with the view that relations encoding gene/protein interactions eliminate the need for random selection of components in non-exhaustive approaches, producing a more accurate model of cellular behavior. A genetic algorithm is then used to optimize the strength of interactions using microarray data and an artificial neural network fitness function. The result is a directed and weighted network providing the individual contribution of each gene to its target. For testing, we used invasive ductal carcinoma of the breast to query the literature and a microarray set containing gene expression changes in these cells over several time points. Our model demonstrates significantly better fitness than the state-of-the-art model, which relies on an initial random selection of genes. Comparison to the component pathways of the KEGG Pathways in Cancer map reveals that the resulting networks contain both known and novel relationships. The p53 pathway results were manually validated in the literature. 60% of non-KEGG relationships were supported (74% for highly weighted interactions). The method was then applied to yeast data and our model again outperformed the comparison model. Our results demonstrate the advantage of combining gene interactions extracted from the literature in the form of semantic relations with microarray analysis in generating contribution-weighted gene regulatory networks. 
This methodology can make a significant contribution to understanding the complex interactions involved in cellular behavior and molecular physiology. PMID:24921649
Jupiter, Daniel; Chen, Hailin; VanBuren, Vincent
2009-01-01
Background Although expression microarrays have become a standard tool used by biologists, analysis of data produced by microarray experiments may still present challenges. Comparison of data from different platforms, organisms, and labs may involve complicated data processing, and inferring relationships between genes remains difficult. Results STARNET 2 is a new web-based tool that allows post hoc visual analysis of correlations that are derived from expression microarray data. STARNET 2 facilitates user discovery of putative gene regulatory networks in a variety of species (human, rat, mouse, chicken, zebrafish, Drosophila, C. elegans, S. cerevisiae, Arabidopsis and rice) by graphing networks of genes that are closely co-expressed across a large heterogeneous set of preselected microarray experiments. For each of the represented organisms, raw microarray data were retrieved from NCBI's Gene Expression Omnibus for a selected Affymetrix platform. All pairwise Pearson correlation coefficients were computed for expression profiles measured on each platform, respectively. These precompiled results were stored in a MySQL database, and supplemented by additional data retrieved from NCBI. A web-based tool allows user-specified queries of the database, centered at a gene of interest. The result of a query includes graphs of correlation networks, graphs of known interactions involving genes and gene products that are present in the correlation networks, and initial statistical analyses. Two analyses may be performed in parallel to compare networks, which is facilitated by the new HEATSEEKER module. Conclusion STARNET 2 is a useful tool for developing new hypotheses about regulatory relationships between genes and gene products, and has coverage for 10 species. 
Interpretation of the correlation networks is supported with a database of previously documented interactions, a test for enrichment of Gene Ontology terms, and heat maps of correlation distances that may be used to compare two networks. The list of genes in a STARNET network may be useful in developing a list of candidate genes to use for the inference of causal networks. The tool is freely available at , and does not require user registration. PMID:19828039
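The all-pairs Pearson correlations precomputed behind STARNET 2 can be mimicked in a few lines of NumPy; this is a sketch of the general co-expression-network idea, not STARNET's actual pipeline (function name and ranking details are ours):

```python
import numpy as np

def correlation_network(expr, genes, center, top=3):
    """Return the `top` genes most strongly co-expressed with `center`,
    ranked by absolute Pearson correlation across samples.
    expr: genes x samples expression matrix; genes: matching name list."""
    corr = np.corrcoef(expr)                    # all-pairs Pearson, genes x genes
    i = genes.index(center)
    order = np.argsort(-np.abs(corr[i]))        # strongest correlations first
    neighbours = [j for j in order if j != i][:top]
    return [(genes[j], round(float(corr[i, j]), 3)) for j in neighbours]
```

A real tool precomputes `corr` once over thousands of arrays and stores it in a database; the query step then reduces to the ranking shown here.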
Home care decision support using an Arden engine--merging smart home and vital signs data.
Marschollek, Michael; Bott, Oliver J; Wolf, Klaus-H; Gietzelt, Matthias; Plischke, Maik; Madiesh, Moaaz; Song, Bianying; Haux, Reinhold
2009-01-01
The demographic change with a rising proportion of very old people and diminishing resources leads to an intensification of the use of telemedicine and home care concepts. To provide individualized decision support, data from different sources, e.g. vital signs sensors and home environmental sensors, need to be combined and analyzed together. Furthermore, a standardized decision support approach is necessary. The aim of our research work is to present a laboratory prototype home care architecture that integrates data from different sources and uses a decision support system based on the HL7 standard Arden Syntax for Medical Logic Modules. Data from environmental sensors connected to a home bus system are stored in a database along with data from wireless medical sensors. All data are analyzed using an Arden engine with the medical knowledge represented in Medical Logic Modules. Multi-modal data from four different sensors in the home environment are stored in a single database and are analyzed using an HL7 standard conformant decision support system. Individualized home care decision support must be based on all data available, including context data from smart home systems and medical data from electronic health records. Our prototype implementation shows the feasibility of using an Arden engine for decision support in a home setting. Our future work will include the utilization of medical background knowledge for individualized decision support, as there is no one-size-fits-all knowledge base in medicine.
2011-01-01
Background Workflow engine technology represents a new class of software with the ability to graphically model step-based knowledge. We present application of this novel technology to the domain of clinical decision support. Successful implementation of decision support within an electronic health record (EHR) remains an unsolved research challenge. Previous research efforts were mostly based on healthcare-specific representation standards and execution engines and did not reach wide adoption. We focus on two challenges in decision support systems: the ability to test decision logic on retrospective data prior to prospective deployment and the challenge of user-friendly representation of clinical logic. Results We present our implementation of a workflow engine technology that addresses the two above-described challenges in delivering clinical decision support. Our system is based on a cross-industry standard of XML (extensible markup language) process definition language (XPDL). The core components of the system are a workflow editor for modeling clinical scenarios and a workflow engine for execution of those scenarios. We demonstrate, with an open-source and publicly available workflow suite, that clinical decision support logic can be executed on retrospective data. The same flowchart-based representation can also function in a prospective mode where the system can be integrated with an EHR system and respond to real-time clinical events. We limit the scope of our implementation to decision support content generation (which can be EHR system vendor independent). We do not focus on supporting complex decision support content delivery mechanisms due to lack of standardization of EHR systems in this area. We present results of our evaluation of the flowchart-based graphical notation as well as architectural evaluation of our implementation using an established evaluation framework for clinical decision support architecture. 
Conclusions We describe an implementation of a free workflow technology software suite (available at http://code.google.com/p/healthflow) and its application in the domain of clinical decision support. Our implementation seamlessly supports clinical logic testing on retrospective data and offers a user-friendly knowledge representation paradigm. With the presented software implementation, we demonstrate that workflow engine technology can provide a decision support platform which evaluates well against an established clinical decision support architecture evaluation framework. Due to cross-industry usage of workflow engine technology, we can expect significant future functionality enhancements that will further improve the technology's capacity to serve as a clinical decision support platform. PMID:21477364
A Web-Based Tool to Support Data-Based Early Intervention Decision Making
ERIC Educational Resources Information Center
Buzhardt, Jay; Greenwood, Charles; Walker, Dale; Carta, Judith; Terry, Barbara; Garrett, Matthew
2010-01-01
Progress monitoring and data-based intervention decision making have become key components of providing evidence-based early childhood special education services. Unfortunately, there is a lack of tools to support early childhood service providers' decision-making efforts. The authors describe a Web-based system that guides service providers…
Draghici, Sorin; Tarca, Adi L; Yu, Longfei; Ethier, Stephen; Romero, Roberto
2008-03-01
The BioArray Software Environment (BASE) is a very popular MIAME-compliant, web-based microarray data repository. However in BASE, like in most other microarray data repositories, the experiment annotation and raw data uploading can be very time-consuming, especially for large microarray experiments. We developed KUTE (Karmanos Universal daTabase for microarray Experiments), as a plug-in for BASE 2.0 that addresses these issues. KUTE provides an automatic experiment annotation feature and a completely redesigned data work-flow that dramatically reduce the human-computer interaction time. For instance, in BASE 2.0 a typical Affymetrix experiment involving 100 arrays required 4 h 30 min of user interaction time for experiment annotation, and 45 min for data upload/download. In contrast, for the same experiment, KUTE required only 28 min of user interaction time for experiment annotation, and 3.3 min for data upload/download. http://vortex.cs.wayne.edu/kute/index.html.
Clustering-based spot segmentation of cDNA microarray images.
Uslan, Volkan; Bucak, Ihsan Ömür
2010-01-01
Microarrays are utilized because they provide useful information about thousands of gene expressions simultaneously. In this study, the segmentation step of microarray image processing has been implemented. Clustering-based methods, fuzzy c-means and k-means, have been applied for the segmentation step that separates the spots from the background. The experiments show that fuzzy c-means segmented the spots of the microarray image more accurately than k-means.
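The clustering step described above amounts to partitioning pixel intensities into two clusters, bright spot versus background. A compact two-cluster k-means sketch on a synthetic spot (pure NumPy, our illustration rather than the authors' code; fuzzy c-means differs only in using soft membership weights instead of hard labels):

```python
import numpy as np

def kmeans_segment(intensities, iters=20):
    """Two-cluster k-means on pixel intensities; returns a boolean
    foreground mask (True = spot, i.e. the brighter cluster)."""
    flat = intensities.ravel().astype(float)
    centers = np.array([flat.min(), flat.max()])   # init at intensity extremes
    for _ in range(iters):
        # assign each pixel to its nearest cluster center
        labels = np.abs(flat[:, None] - centers).argmin(axis=1)
        # recompute each center as the mean of its assigned pixels
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = flat[labels == k].mean()
    fg = labels == centers.argmax()                # brighter cluster = spot
    return fg.reshape(intensities.shape)
```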
Müller-Staub, Maria; de Graaf-Waar, Helen; Paans, Wolter
2016-11-01
Nurses are accountable to apply the nursing process, which is key for patient care: It is a problem-solving process providing the structure for care plans and documentation. The state-of-the-art nursing process is based on classifications that contain standardized concepts, and therefore, it is named Advanced Nursing Process. It contains valid assessments, nursing diagnoses, interventions, and nursing-sensitive patient outcomes. Electronic decision support systems can assist nurses to apply the Advanced Nursing Process. However, nursing decision support systems are missing, and no "gold standard" is available. The study aim is to develop a valid Nursing Process-Clinical Decision Support System Standard to guide future developments of clinical decision support systems. In a multistep approach, a Nursing Process-Clinical Decision Support System Standard with 28 criteria was developed. After pilot testing (N = 29 nurses), the criteria were reduced to 25. The Nursing Process-Clinical Decision Support System Standard was then presented to eight internationally known experts, who performed qualitative interviews according to Mayring. Fourteen categories demonstrate expert consensus on the Nursing Process-Clinical Decision Support System Standard and its content validity. All experts agreed the Advanced Nursing Process should be the centerpiece for the Nursing Process-Clinical Decision Support System and should suggest research-based, predefined nursing diagnoses and correct linkages between diagnoses, evidence-based interventions, and patient outcomes.
The FDA's Experience with Emerging Genomics Technologies-Past, Present, and Future.
Xu, Joshua; Thakkar, Shraddha; Gong, Binsheng; Tong, Weida
2016-07-01
The rapid advancement of emerging genomics technologies and their application for assessing safety and efficacy of FDA-regulated products require a high standard of reliability and robustness supporting regulatory decision-making in the FDA. To facilitate regulatory application and to engage stakeholders, the FDA implemented a novel data submission program, Voluntary Genomics Data Submission (VGDS). As part of this endeavor, for the past 10 years, the FDA has led an international consortium of regulatory agencies, academia, pharmaceutical companies, and genomics platform providers, named the MicroArray Quality Control Consortium (MAQC), to address issues such as reproducibility, precision, specificity/sensitivity, and data interpretation. Three projects have been completed so far assessing these genomics technologies: gene expression microarrays, whole genome genotyping arrays, and whole transcriptome sequencing (i.e., RNA-seq). The resultant studies provide the basic parameters for fit-for-purpose application of these new data streams in regulatory environments, and the solutions have been made available to the public through peer-reviewed publications. The latest MAQC project, the SEquencing Quality Control (SEQC) project, focuses on next-generation sequencing. Using reference samples with built-in controls, SEQC studies have demonstrated that relative gene expression can be measured accurately and reliably across laboratories and RNA-seq platforms. Besides prediction performance comparable to microarrays in clinical settings and safety assessments, RNA-seq is shown to have better sensitivity for low expression and to reveal novel transcriptomic features. Future efforts of MAQC will focus on quality control of whole genome sequencing and targeted sequencing.
Vafaee Sharbaf, Fatemeh; Mosafer, Sara; Moattar, Mohammad Hossein
2016-06-01
This paper proposes an approach for gene selection in microarray data. The proposed approach consists of a primary filter stage using the Fisher criterion, which reduces the initial genes and hence the search space and time complexity. Then, a wrapper approach based on cellular learning automata (CLA) optimized with the ant colony method (ACO) is used to find the set of features that improve the classification accuracy. CLA is applied due to its capability to learn and model complicated relationships. The selected features from the last phase are evaluated using the ROC curve, and the smallest, most effective feature subset is determined. The classifiers evaluated in the proposed framework are K-nearest neighbor, support vector machine, and naïve Bayes. The proposed approach is evaluated on 4 microarray datasets. The evaluations confirm that the proposed approach can find the smallest subset of genes while approaching the maximum accuracy.
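The primary filter stage above ranks genes by the Fisher criterion, the squared difference of class means over the sum of class variances. A minimal two-class version (our sketch, not the paper's implementation; function names are invented):

```python
import numpy as np

def fisher_scores(X, y):
    """Fisher criterion per gene for a two-class problem.
    X: samples x genes expression matrix; y: 0/1 class labels."""
    X0, X1 = X[y == 0], X[y == 1]
    num = (X0.mean(axis=0) - X1.mean(axis=0)) ** 2
    den = X0.var(axis=0) + X1.var(axis=0) + 1e-12   # guard against zero variance
    return num / den

def fisher_filter(X, y, keep=100):
    """Indices of the `keep` top-ranked genes, best first."""
    return np.argsort(-fisher_scores(X, y))[:keep]
```

The wrapper stage (CLA/ACO in the paper) then searches only within the genes this filter retains, which is what makes the overall search tractable.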
Gene Expression Omnibus (GEO): Microarray data storage, submission, retrieval, and analysis
Barrett, Tanya
2006-01-01
The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely distributes high-throughput molecular abundance data, predominantly gene expression data generated by DNA microarray technology. The database has a flexible design that can handle diverse styles of both unprocessed and processed data in a MIAME- (Minimum Information About a Microarray Experiment) supportive infrastructure that promotes fully annotated submissions. GEO currently stores about a billion individual gene expression measurements, derived from over 100 organisms, submitted by over 1,500 laboratories, addressing a wide range of biological phenomena. To maximize the utility of these data, several user-friendly Web-based interfaces and applications have been implemented that enable effective exploration, query, and visualization of these data, at the level of individual genes or entire studies. This chapter describes how the data are stored, submission procedures, and mechanisms for data retrieval and query. GEO is publicly accessible at http://www.ncbi.nlm.nih.gov/projects/geo/. PMID:16939800
Shrinkage regression-based methods for microarray missing value imputation.
Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng
2013-01-01
Missing values commonly occur in the microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation results in reducing the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performances of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
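The shrinkage step described above can be illustrated by damping ordinary least-squares coefficients toward zero before predicting the missing entry. The uniform factor `1 - lam` used here is a deliberate simplification we assume for illustration; the paper's shrinkage estimator is more elaborate:

```python
import numpy as np

def shrink_impute(target, neighbours, miss_idx, lam=0.2):
    """Estimate target[miss_idx] from correlated neighbour genes:
    fit least squares on the observed samples, shrink the coefficients
    by (1 - lam), then predict the missing value.
    target: one gene's expression vector; neighbours: k x samples matrix."""
    obs = np.ones(len(target), dtype=bool)
    obs[miss_idx] = False
    A = neighbours[:, obs].T                     # observed design matrix
    coef, *_ = np.linalg.lstsq(A, target[obs], rcond=None)
    coef *= (1.0 - lam)                          # shrink toward zero
    return float(neighbours[:, miss_idx] @ coef)
```

Setting `lam=0` recovers plain regression imputation; a positive `lam` trades a little bias for lower variance when the fitted coefficients are noisy.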
Semantic Clinical Guideline Documents
Eriksson, Henrik; Tu, Samson W.; Musen, Mark
2005-01-01
Decision-support systems based on clinical practice guidelines can support physicians and other health-care personnel in the process of following best practice consistently. A knowledge-based approach to represent guidelines makes it possible to encode computer-interpretable guidelines in a formal manner, perform consistency checks, and use the guidelines directly in decision-support systems. Decision-support authors and guideline users require guidelines in human-readable formats in addition to computer-interpretable ones (e.g., for guideline review and quality assurance). We propose a new document-oriented information architecture that combines knowledge-representation models with electronic and paper documents. The approach integrates decision-support modes with standard document formats to create a combined clinical-guideline model that supports on-line viewing, printing, and decision support. PMID:16779037
Cheng, Ningtao; Wu, Leihong; Cheng, Yiyu
2013-01-01
The promise of microarray technology in providing prediction classifiers for cancer outcome estimation has been confirmed by a number of demonstrable successes. However, the reliability of prediction results relies heavily on the accuracy of the statistical parameters involved in classifiers, which cannot be reliably estimated with only a small number of training samples. Therefore, it is of vital importance to determine the minimum number of training samples and so ensure the clinical value of microarrays in cancer outcome prediction. We evaluated the impact of training sample size on model performance extensively, based on 3 large-scale cancer microarray datasets provided by the second phase of the MicroArray Quality Control project (MAQC-II). An SSNR-based (scale of signal-to-noise ratio) protocol was proposed in this study for minimum training sample size determination. External validation results based on another 3 cancer datasets confirmed that the SSNR-based approach could not only determine the minimum number of training samples efficiently, but also provide a valuable strategy for estimating the underlying performance of classifiers in advance. Once translated into clinical routine applications, the SSNR-based protocol would provide great convenience in microarray-based cancer outcome prediction by improving classifier reliability. PMID:23861920
A new functional membrane protein microarray based on tethered phospholipid bilayers.
Chadli, Meriem; Maniti, Ofelia; Marquette, Christophe; Tillier, Bruno; Cortès, Sandra; Girard-Egrot, Agnès
2018-04-30
A new prototype of a membrane protein biochip is presented in this article. This biochip was created by the combination of novel technologies of peptide-tethered bilayer lipid membrane (pep-tBLM) formation and solid support micropatterning. Pep-tBLMs integrating a membrane protein were obtained in the form of microarrays on a gold chip. The formation of the microspots was visualized in real-time by surface plasmon resonance imaging (SPRi) and the functionality of a GPCR (CXCR4), reinserted locally into microwells, was assessed by ligand binding studies. In brief, to achieve micropatterning, P19-4H, a 4 histidine-possessing peptide spacer, was spotted inside microwells obtained on polystyrene-coated gold, and Ni-chelating proteoliposomes were injected into the reaction chamber. Proteoliposome binding to the peptide was based on metal-chelate interaction. The peptide-tethered lipid bilayer was finally obtained by addition of a fusogenic peptide (AH peptide) to promote proteoliposome fusion. The CXCR4 pep-tBLM microarray was characterized by surface plasmon resonance imaging (SPRi) throughout the building-up process. This new generation of membrane protein biochip represents a promising method of developing a screening tool for drug discovery.
mRNA-Based Parallel Detection of Active Methanotroph Populations by Use of a Diagnostic Microarray
Bodrossy, Levente; Stralis-Pavese, Nancy; Konrad-Köszler, Marianne; Weilharter, Alexandra; Reichenauer, Thomas G.; Schöfer, David; Sessitsch, Angela
2006-01-01
A method was developed for the mRNA-based application of microbial diagnostic microarrays to detect active microbial populations. DNA- and mRNA-based analyses of environmental samples were compared and confirmed via quantitative PCR. Results indicated that mRNA-based microarray analyses may provide additional information on the composition and functioning of microbial communities. PMID:16461725
Adaptation of a Knowledge-Based Decision-Support System in the Tactical Environment.
1981-12-01
[Abstract garbled by OCR. Recoverable content: Keywords: Artificial Intelligence; Decision-Support Systems; Tactical Decision-making; Knowledge-based Decision-support. The system, TAC* ("Tactical Adaptable Consultant"), incorporates a database of tactical information to assist tactical commanders in making decisions.]
EMDS users guide (version 2.0): knowledge-based decision support for ecological assessment.
Keith M. Reynolds
1999-01-01
The USDA Forest Service Pacific Northwest Research Station in Corvallis, Oregon, has developed the ecosystem management decision support (EMDS) system. The system integrates the logical formalism of knowledge-based reasoning into a geographic information system (GIS) environment to provide decision support for ecological landscape assessment and evaluation. The...
Kumar, Mukesh; Rath, Nitish Kumar; Rath, Santanu Kumar
2016-04-01
Microarray-based gene expression profiling has emerged as an efficient technique for classification, prognosis, diagnosis, and treatment of cancer. Frequent changes in the behavior of this disease generate an enormous volume of data. Microarray data satisfy both the veracity and velocity properties of big data, as they keep changing with time. Therefore, the analysis of microarray datasets in a small amount of time is essential. These datasets contain a large number of expression values, but only a fraction correspond to genes that are significantly expressed. The precise identification of genes of interest that are responsible for causing cancer is imperative in microarray data analysis. Most existing schemes employ a two-phase process, such as feature selection/extraction followed by classification. In this paper, various statistical methods (tests) based on MapReduce are proposed for selecting relevant features. After feature selection, a MapReduce-based K-nearest neighbor (mrKNN) classifier is also employed to classify microarray data. These algorithms are successfully implemented in a Hadoop framework. A comparative analysis is done on these MapReduce-based models using microarray datasets of various dimensions. From the obtained results, it is observed that these models consume much less execution time than conventional models in processing big data. Copyright © 2016 Elsevier Inc. All rights reserved.
THE ABRF MARG MICROARRAY SURVEY 2005: TAKING THE PULSE ON THE MICROARRAY FIELD
Over the past several years microarray technology has evolved into a critical component of any discovery based program. Since 1999, the Association of Biomolecular Resource Facilities (ABRF) Microarray Research Group (MARG) has conducted biennial surveys designed to generate a pr...
A database for the analysis of immunity genes in Drosophila: PADMA database.
Lee, Mark J; Mondal, Ariful; Small, Chiyedza; Paddibhatla, Indira; Kawaguchi, Akira; Govind, Shubha
2011-01-01
While microarray experiments generate voluminous data, discerning trends that support an existing or alternative paradigm is challenging. To synergize hypothesis building and testing, we designed the Pathogen Associated Drosophila MicroArray (PADMA) database for easy retrieval and comparison of microarray results from immunity-related experiments (www.padmadatabase.org). PADMA also allows biologists to upload their microarray results and compare them with datasets housed within PADMA. We tested PADMA using a preliminary dataset from Ganaspis xanthopoda-infected fly larvae, and uncovered unexpected trends in gene expression, reshaping our hypothesis. Thus, the PADMA database will be a useful resource for fly researchers to evaluate, revise, and refine hypotheses.
Matthew Thompson; David Calkin; Joe H. Scott; Michael Hand
2017-01-01
Wildfire risk assessment is increasingly being adopted to support federal wildfire management decisions in the United States. Existing decision support systems, specifically the Wildland Fire Decision Support System (WFDSS), provide a rich set of probabilistic and risk-based information to support the management of active wildfire incidents. WFDSS offers a wide range...
A Java-based tool for the design of classification microarrays.
Meng, Da; Broschat, Shira L; Call, Douglas R
2008-08-04
Classification microarrays are used for purposes such as identifying strains of bacteria and determining genetic relationships to understand the epidemiology of an infectious disease. For these cases, mixed microarrays, which are composed of DNA from more than one organism, are more effective than conventional microarrays composed of DNA from a single organism. Selection of probes is a key factor in designing successful mixed microarrays because redundant sequences are inefficient and limited representation of diversity can restrict application of the microarray. We have developed a Java-based software tool, called PLASMID, for use in selecting the minimum set of probe sequences needed to classify different groups of plasmids or bacteria. The software program was successfully applied to several different sets of data. The utility of PLASMID was illustrated using existing mixed-plasmid microarray data as well as data from a virtual mixed-genome microarray constructed from different strains of Streptococcus. Moreover, use of data from expression microarray experiments demonstrated the generality of PLASMID. In this paper we describe a new software tool for selecting a set of probes for a classification microarray. While the tool was developed for the design of mixed microarrays (and mixed-plasmid microarrays in particular), it can also be used to design expression arrays. The user can choose from several clustering methods (including hierarchical, non-hierarchical, and a model-based genetic algorithm), several probe ranking methods, and several different display methods. A novel approach is used for probe redundancy reduction, and probe selection is accomplished via stepwise discriminant analysis. Data can be entered in different formats (including Excel and comma-delimited text), and dendrogram, heat map, and scatter plot images can be saved in several different formats (including jpeg and tiff).
Weights generated using stepwise discriminant analysis can be stored for analysis of subsequent experimental data. Additionally, PLASMID can be used to construct virtual microarrays with genomes from public databases, which can then be used to identify an optimal set of probes.
Knowledge-Based Information Management in Decision Support for Ecosystem Management
Keith Reynolds; Micahel Saunders; Richard Olson; Daniel Schmoldt; Michael Foster; Donald Latham; Bruce Miller; John Steffenson; Lawrence Bednar; Patrick Cunningham
1995-01-01
The Pacific Northwest Research Station (USDA Forest Service) is developing a knowledge-based information management system to provide decision support for watershed analysis in the Pacific Northwest region of the U.S. The decision support system includes: (1) a GIS interface that allows users to graphically navigate to specific provinces and watersheds and display a...
Amland, Robert C; Lyons, Jason J; Greene, Tracy L; Haley, James M
2015-10-01
To examine the diagnostic accuracy of a two-stage clinical decision support system for early recognition and stratification of patients with sepsis. Observational cohort study employing a two-stage sepsis clinical decision support to recognise and stratify patients with sepsis. The stage one component comprised a cloud-based clinical decision support with 24/7 surveillance to detect patients at risk of sepsis. The cloud-based clinical decision support delivered notifications to the patient's designated nurse, who then electronically contacted a provider. The stage two component comprised a sepsis screening and stratification form integrated into the patient electronic health record, essentially an evidence-based decision aid, used by providers to assess patients at the bedside. Urban, 284-bed acute care community hospital in the USA; 16,000 hospitalisations annually. Data on 2620 adult patients were collected retrospectively in 2014 after the clinical decision support was implemented. 'Suspected infection' was the established gold standard for assessing clinical decision support clinimetric performance. A sepsis alert activated on 417 (16%) of the 2620 adult patients hospitalised. Applying 'suspected infection' as the standard, the patient population characteristics showed 72% sensitivity and 73% positive predictive value. A post-alert screening conducted by providers at the bedside of the 417 patients achieved 81% sensitivity and 94% positive predictive value. Providers documented against 89% of patients with an alert activated by clinical decision support, and completed 75% of bedside screening and stratification of patients with sepsis within one hour of notification. A clinical decision support binary alarm system with cross-checking functionality improves early recognition and facilitates stratification of patients with sepsis.
Hoffman, Aubri S; Llewellyn-Thomas, Hilary A; Tosteson, Anna N A; O'Connor, Annette M; Volk, Robert J; Tomek, Ivan M; Andrews, Steven B; Bartels, Stephen J
2014-12-12
Over 100 trials show that patient decision aids effectively improve patients' information comprehension and values-based decision making. However, gaps remain in our understanding of several fundamental and applied questions, particularly related to the design of interactive, personalized decision aids. This paper describes an interdisciplinary development process for, and early field testing of, a web-based patient decision support research platform, or virtual decision lab, to address these questions. An interdisciplinary stakeholder panel designed the web-based research platform with three components: a) an introduction to shared decision making, b) a web-based patient decision aid, and c) interactive data collection items. Iterative focus groups provided feedback on paper drafts and online prototypes. A field test assessed a) feasibility for using the research platform, in terms of recruitment, usage, and acceptability; and b) feasibility of using the web-based decision aid component, compared to performance of a videobooklet decision aid in clinical care. This interdisciplinary, theory-based, patient-centered design approach produced a prototype for field-testing in six months. Participants (n = 126) reported that: the decision aid component was easy to use (98%), information was clear (90%), the length was appropriate (100%), it was appropriately detailed (90%), and it held their interest (97%). They spent a mean of 36 minutes using the decision aid and 100% preferred using their home/library computer. Participants scored a mean of 75% correct on the Decision Quality, Knowledge Subscale, and 74 out of 100 on the Preparation for Decision Making Scale. Completing the web-based decision aid reduced mean Decisional Conflict scores from 31.1 to 19.5 (p < 0.01). 
Combining decision science and health informatics approaches facilitated rapid development of a web-based patient decision support research platform that was feasible for use in research studies in terms of recruitment, acceptability, and usage. Within this platform, the web-based decision aid component performed comparably with the videobooklet decision aid used in clinical practice. Future studies may use this interactive research platform to study patients' decision making processes in real-time, explore interdisciplinary approaches to designing web-based decision aids, and test strategies for tailoring decision support to meet patients' needs and preferences.
Development of transportation asset management decision support tools : final report.
DOT National Transportation Integrated Search
2017-08-09
This study developed a web-based prototype decision support platform to demonstrate the benefits of transportation asset management in monitoring asset performance, supporting asset funding decisions, planning budget tradeoffs, and optimizing resourc...
Design and realization of tourism spatial decision support system based on GIS
NASA Astrophysics Data System (ADS)
Ma, Zhangbao; Qi, Qingwen; Xu, Li
2008-10-01
In this paper, the problems of existing tourism management information systems are analyzed. GIS, tourism, and spatial decision support systems are introduced, and the application of geographic information system technology and spatial decision support systems to tourism management, together with the establishment of a GIS-based tourism spatial decision support system, is proposed. The system's overall structure, hardware and software environment, database design, and module structure are described. Finally, the realization of the system's core functions is elaborated.
Mocellin, Simone; Lise, Mario; Nitti, Donato
2007-01-01
Advances in tumor immunology are supporting the clinical implementation of several immunological approaches to cancer. However, the mixed success of current immunotherapeutic regimens underscores the fact that the molecular mechanisms underlying immune-mediated tumor rejection are still poorly understood. Given the complexity of the immune system network and the multidimensionality of tumor/host interactions, the comprehension of tumor immunology might greatly benefit from high-throughput microarray analysis, which can portray the molecular kinetics of the immune response on a genome-wide scale, thus accelerating the discovery pace and ultimately catalyzing the development of new hypotheses in cell biology. Although in its infancy, the implementation of microarray technology in tumor immunology studies has already provided investigators with novel data and intriguing new hypotheses on the molecular cascade leading to an effective immune response against cancer. Although the general principles of microarray-based gene profiling have rapidly spread in the scientific community, the need to master this technique in order to produce meaningful data and correctly interpret the enormous output of information generated by this technology is critical and represents a tremendous challenge for investigators, as outlined in the first section of this book. In the present chapter, we report on some of the most significant results obtained with the application of DNA microarrays in this oncology field.
Suner, A; Karakülah, G; Dicle, O; Sökmen, S; Çelikoğlu, C C
2015-01-01
The selection of appropriate rectal cancer treatment is a complex multi-criteria decision making process, in which clinical decision support systems might be used to assist and enrich physicians' decision making. The objective of the study was to develop a web-based clinical decision support tool for physicians in the selection of potentially beneficial treatment options for patients with rectal cancer. The updated decision model contained 8 and 10 criteria in the first and second steps respectively. The decision support model, developed in our previous study by combining the Analytic Hierarchy Process (AHP) method which determines the priority of criteria and decision tree that formed using these priorities, was updated and applied to 388 patients data collected retrospectively. Later, a web-based decision support tool named corRECTreatment was developed. The compatibility of the treatment recommendations by the expert opinion and the decision support tool was examined for its consistency. Two surgeons were requested to recommend a treatment and an overall survival value for the treatment among 20 different cases that we selected and turned into a scenario among the most common and rare treatment options in the patient data set. In the AHP analyses of the criteria, it was found that the matrices, generated for both decision steps, were consistent (consistency ratio<0.1). Depending on the decisions of experts, the consistency value for the most frequent cases was found to be 80% for the first decision step and 100% for the second decision step. Similarly, for rare cases consistency was 50% for the first decision step and 80% for the second decision step. The decision model and corRECTreatment, developed by applying these on real patient data, are expected to provide potential users with decision support in rectal cancer treatment processes and facilitate them in making projections about treatment options.
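The consistency check reported above (consistency ratio < 0.1) is the standard AHP criterion. A minimal sketch of the priority-vector and consistency-ratio computation, assuming the usual principal-eigenvector method and Saaty's random-index table (generic AHP, not code from the study):

```python
import numpy as np

# Saaty's random-index table for matrix orders 1..10.
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
      6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

def ahp_priorities_and_cr(A):
    """Priority vector and consistency ratio of a pairwise comparison
    matrix A (order 3..10), via the principal-eigenvector method."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    i = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                          # normalised priorities
    ci = (eigvals[i].real - n) / (n - 1)  # consistency index
    return w, ci / RI[n]                  # CR = CI / RI
```

A perfectly consistent matrix (each entry the exact ratio of two weights) yields a consistency ratio of zero; matrices judged by humans are accepted when CR stays below 0.1, as in the study.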
Shedden, Kerby; Taylor, Jeremy M.G.; Enkemann, Steve A.; Tsao, Ming S.; Yeatman, Timothy J.; Gerald, William L.; Eschrich, Steve; Jurisica, Igor; Venkatraman, Seshan E.; Meyerson, Matthew; Kuick, Rork; Dobbin, Kevin K.; Lively, Tracy; Jacobson, James W.; Beer, David G.; Giordano, Thomas J.; Misek, David E.; Chang, Andrew C.; Zhu, Chang Qi; Strumpf, Dan; Hanash, Samir; Shepherd, Francis A.; Ding, Kuyue; Seymour, Lesley; Naoki, Katsuhiko; Pennell, Nathan; Weir, Barbara; Verhaak, Roel; Ladd-Acosta, Christine; Golub, Todd; Gruidl, Mike; Szoke, Janos; Zakowski, Maureen; Rusch, Valerie; Kris, Mark; Viale, Agnes; Motoi, Noriko; Travis, William; Sharma, Anupama
2009-01-01
Although prognostic gene expression signatures for survival in early stage lung cancer have been proposed, for clinical application it is critical to establish their performance across different subject populations and in different laboratories. Here we report a large, training-testing, multi-site blinded validation study to characterize the performance of several prognostic models based on gene expression for 442 lung adenocarcinomas. The hypotheses proposed examined whether microarray measurements of gene expression either alone or combined with basic clinical covariates (stage, age, sex) can be used to predict overall survival in lung cancer subjects. Several models examined produced risk scores that substantially correlated with actual subject outcome. Most methods performed better with clinical data, supporting the combined use of clinical and molecular information when building prognostic models for early stage lung cancer. This study also provides the largest available set of microarray data with extensive pathological and clinical annotation for lung adenocarcinomas. PMID:18641660
Geospatial Data Fusion and Multigroup Decision Support for Surface Water Quality Management
NASA Astrophysics Data System (ADS)
Sun, A. Y.; Osidele, O.; Green, R. T.; Xie, H.
2010-12-01
Social networking and social media have gained significant popularity and brought fundamental changes to many facets of our everyday life. With the ever-increasing adoption of GPS-enabled gadgets and technology, location-based content is likely to play a central role in social networking sites. While location-based content is not new to the geoscience community, where geographic information systems (GIS) are extensively used, the delivery of useful geospatial data to targeted user groups for decision support is new. Decision makers and modelers ought to make more effective use of the new web-based tools to expand the scope of environmental awareness education, public outreach, and stakeholder interaction. Environmental decision processes are often rife with uncertainty and controversy, requiring integration of multiple sources of information and compromises between diverse interests. Fusing of multisource, multiscale environmental data for multigroup decision support is a challenging task. Toward this goal, a multigroup decision support platform should strive to achieve transparency, impartiality, and timely synthesis of information. The latter criterion often constitutes a major technical bottleneck to traditional GIS-based media, featuring large file or image sizes and requiring special processing before web deployment. Many tools and design patterns have appeared in recent years to ease the situation somewhat. In this project, we explore the use of Web 2.0 technologies for “pushing” location-based content to multigroups involved in surface water quality management and decision making. In particular, our granular bottom-up approach facilitates effective delivery of information to most relevant user groups. Our location-based content includes in-situ and remotely sensed data disseminated by NASA and other national and local agencies. 
Our project is demonstrated for managing the total maximum daily load (TMDL) program in the Arroyo Colorado coastal river basin in Texas. The overall design focuses on assigning spatial information to decision support elements and on efficiently using Web 2.0 technologies to relay scientific information to the nonscientific community. We conclude that (i) social networking, if appropriately used, has great potential for mitigating difficulty associated with multigroup decision making; (ii) all potential stakeholder groups should be involved in creating a useful decision support system; and (iii) environmental decision support systems should be considered a must-have, instead of an optional component of TMDL decision support projects. Acknowledgment: This project was supported by NASA grant NNX09AR63G.
Combined rule extraction and feature elimination in supervised classification.
Liu, Sheng; Patel, Ronak Y; Daga, Pankaj R; Liu, Haining; Fu, Gang; Doerksen, Robert J; Chen, Yixin; Wilkins, Dawn E
2012-09-01
There are a vast number of biology related research problems involving a combination of multiple sources of data to achieve a better understanding of the underlying problems. It is important to select and interpret the most important information from these sources. Thus it will be beneficial to have a good algorithm to simultaneously extract rules and select features for better interpretation of the predictive model. We propose an efficient algorithm, Combined Rule Extraction and Feature Elimination (CRF), based on 1-norm regularized random forests. CRF simultaneously extracts a small number of rules generated by random forests and selects important features. We applied CRF to several drug activity prediction and microarray data sets. CRF is capable of producing performance comparable with state-of-the-art prediction algorithms using a small number of decision rules. Some of the decision rules are biologically significant.
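CRF is built on 1-norm regularized random forests. As a rough illustration of how a 1-norm penalty prunes a candidate rule set, here is a plain coordinate-descent lasso solver in which the columns of X are read as binary rule activations; the solver and its parameter names are generic assumptions for illustration, not the authors' implementation.

```python
def lasso_cd(X, y, lam=0.1, iters=100):
    """Coordinate-descent solver for
       min_w (1/2n) * ||y - Xw||^2 + lam * ||w||_1.
    Reading the columns of X as binary rule activations, the nonzero
    entries of the returned w mark the rules the 1-norm penalty keeps."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # Partial residual with feature j held out.
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            if z == 0.0:            # all-zero column: nothing to fit
                w[j] = 0.0
            elif rho > lam:         # soft-thresholding update
                w[j] = (rho - lam) / z
            elif rho < -lam:
                w[j] = (rho + lam) / z
            else:
                w[j] = 0.0
    return w
```

Rules whose weight is driven exactly to zero are discarded, which is what lets a method like CRF report a small, interpretable rule set.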
Spot detection and image segmentation in DNA microarray data.
Qin, Li; Rueda, Luis; Ali, Adnan; Ngom, Alioune
2005-01-01
Following the invention of microarrays in 1994, the development and applications of this technology have grown exponentially. The numerous applications of microarray technology include clinical diagnosis and treatment, drug design and discovery, tumour detection, and environmental health research. One of the key issues in the experimental approaches utilising microarrays is to extract quantitative information from the spots, which represent genes in a given experiment. For this process, the initial stages are important and they influence future steps in the analysis. Identifying the spots and separating the background from the foreground is a fundamental problem in DNA microarray data analysis. In this review, we present an overview of state-of-the-art methods for microarray image segmentation. We discuss the foundations of the circle-shaped approach, adaptive shape segmentation, histogram-based methods and the recently introduced clustering-based techniques. We analytically show that clustering-based techniques are equivalent to the one-dimensional, standard k-means clustering algorithm that utilises the Euclidean distance.
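The one-dimensional k-means algorithm the review refers to can be sketched directly: with k=2, the two clusters of pixel intensities play the roles of background and foreground. This is a generic sketch of standard 1-D k-means, not code from any of the surveyed segmentation methods.

```python
def kmeans_1d(intensities, k=2, iters=50):
    """Standard 1-D k-means on pixel intensities: assign each pixel to
    the nearest cluster mean (Euclidean distance), then recompute the
    means; with k=2 the clusters separate background from foreground."""
    lo, hi = min(intensities), max(intensities)
    # Initialise the k means evenly across the intensity range.
    means = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    labels = [0] * len(intensities)
    for _ in range(iters):
        # Assignment step: nearest mean in 1-D distance.
        labels = [min(range(k), key=lambda j: abs(x - means[j]))
                  for x in intensities]
        # Update step: recompute each non-empty cluster's mean.
        for j in range(k):
            members = [x for x, l in zip(intensities, labels) if l == j]
            if members:
                means[j] = sum(members) / len(members)
    return labels, means
```

On a spot image, `intensities` would be the pixel values inside a spot's bounding box; the lower-mean cluster is taken as background, the higher-mean cluster as foreground signal.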
Microarray platform for omics analysis
NASA Astrophysics Data System (ADS)
Mecklenburg, Michael; Xie, Bin
2001-09-01
Microarray technology has revolutionized genetic analysis. However, limitations in genome analysis have led to renewed interest in establishing 'omic' strategies. As we enter the post-genomic era, new microarray technologies are needed to address these new classes of 'omic' targets, such as proteins, as well as lipids and carbohydrates. We have developed a microarray platform that combines self-assembling monolayers with the biotin-streptavidin system to provide a robust, versatile immobilization scheme. A hydrophobic film is patterned on the surface, creating an array of tension wells that eliminates evaporation effects, thereby reducing the shear stress to which biomolecules are exposed during immobilization. The streptavidin linker layer makes it possible to adapt and/or develop microarray-based assays using virtually any class of biomolecules, including carbohydrates, peptides, antibodies, and receptors, as well as the more traditional DNA-based arrays. Our microarray technology is designed to furnish seamless compatibility across the various 'omic' platforms by providing a common blueprint for fabricating and analyzing arrays. The prototype microarray uses a microscope slide footprint patterned with 2 by 96 flat wells. Data on the microarray platform will be presented.
A knowledge-based decision support system for payload scheduling
NASA Technical Reports Server (NTRS)
Floyd, Stephen; Ford, Donnie
1988-01-01
The role that artificial intelligence/expert systems technologies play in the development and implementation of effective decision support systems is illustrated. A recently developed prototype system for supporting the scheduling of subsystems and payloads/experiments for NASA's Space Station program is presented and serves to highlight various concepts. The potential integration of knowledge based systems and decision support systems which has been proposed in several recent articles and presentations is illustrated.
Kawamoto, Kensaku; Lobach, David F
2003-01-01
Computerized physician order entry (CPOE) systems represent an important tool for providing clinical decision support. In undertaking this systematic review, our objective was to identify the features of CPOE-based clinical decision support systems (CDSSs) most effective at modifying clinician behavior. For this review, two independent reviewers systematically identified randomized controlled trials that evaluated the effectiveness of CPOE-based CDSSs in changing clinician behavior. Furthermore, each included study was assessed for the presence of 14 CDSS features. We screened 10,023 citations and included 11 studies. Of the 10 studies comparing a CPOE-based CDSS intervention against a non-CDSS control group, 7 reported a significant desired change in professional practice. Moreover, meta-regression analysis revealed that automatic provision of the decision support was strongly associated with improved professional practice (adjusted odds ratio, 23.72; 95% confidence interval, 1.75 to infinity). Thus, we conclude that automatic provision of decision support is a critical feature of successful CPOE-based CDSS interventions.
Lauriks, Steve; de Wit, Matty A S; Buster, Marcel C A; Fassaert, Thijs J L; van Wifferen, Ron; Klazinga, Niek S
2014-10-01
The current study set out to develop a decision support tool based on the Self-Sufficiency Matrix (Dutch version; SSM-D) for the clinical decision to allocate homeless people to the public mental health care system at the central access point of public mental health care in Amsterdam, The Netherlands. Logistic regression and receiver operating characteristic-curve analyses were used to model professional decisions and establish four decision categories based on SSM-D scores from half of the research population (Total n = 612). The model and decision categories were found to be accurate and reliable in predicting professional decisions in the second half of the population. Results indicate that the decision support tool based on the SSM-D is useful and feasible. The method to develop the SSM-D as a decision support tool could be applied to decision-making processes in other systems and services where the SSM-D has been implemented, to further increase the utility of the instrument.
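The approach described above, fit a logistic model to professional decisions and cut its predicted probability into decision categories, can be sketched as follows. The intercept, slope, and probability cut-offs below are illustrative placeholders, not the coefficients or categories of the published SSM-D model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative coefficients only, NOT the published SSM-D model:
# a higher self-sufficiency score lowers the predicted probability of allocation.
INTERCEPT, SLOPE = 6.0, -0.15

def allocation_category(ssm_score, cuts=(0.2, 0.5, 0.8)):
    """Map the model's predicted probability onto four decision categories."""
    p = sigmoid(INTERCEPT + SLOPE * ssm_score)
    if p < cuts[0]:
        return "no allocation"
    if p < cuts[1]:
        return "probably no allocation"
    if p < cuts[2]:
        return "probably allocate"
    return "allocate"
```

In practice the cut points would be chosen from the ROC analysis on the training half of the population, then validated on the held-out half, as the study does.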
A decision-based perspective for the design of methods for systems design
NASA Technical Reports Server (NTRS)
Mistree, Farrokh; Muster, Douglas; Shupe, Jon A.; Allen, Janet K.
1989-01-01
Topics covered include the organization of material, a definition of decision-based design, a hierarchy of decision-based design, the decision support problem technique, a conceptual model of design that can be manufactured and maintained, meta-design, computer-based design, action learning, and the characteristics of decisions.
Brenman, J E; Gao, F B; Jan, L Y; Jan, Y N
2001-11-01
Morphological complexity of neurons contributes to their functional complexity. How neurons generate different dendritic patterns is not known. We identified the sequoia mutant from a previous screen for dendrite mutants. Here we report that Sequoia is a pan-neural nuclear protein containing two putative zinc fingers homologous to the DNA binding domain of Tramtrack. sequoia mutants affect the cell fate decision of a small subset of neurons but have global effects on axon and dendrite morphologies of most and possibly all neurons. In support of sequoia as a specific regulator of neuronal morphogenesis, microarray experiments indicate that sequoia may regulate downstream genes that are important for executing neurite development rather than altering a variety of molecules that specify cell fates.
Wright, Adam; Sittig, Dean F
2008-12-01
In this paper, we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. The SANDS architecture for decision support has several significant advantages over other architectures for clinical decision support. The most salient of these are:
Chondrocyte channel transcriptomics
Lewis, Rebecca; May, Hannah; Mobasheri, Ali; Barrett-Jolley, Richard
2013-01-01
To date, a range of ion channels have been identified in chondrocytes using a number of different techniques, predominantly electrophysiological and/or biomolecular; each of these has its advantages and disadvantages. Here we aim to compare and contrast the data available from biophysical and microarray experiments. This letter analyses recent transcriptomics datasets from chondrocytes, accessible from the European Bioinformatics Institute (EBI). We discuss whether such bioinformatic analysis of microarray datasets can potentially accelerate identification and discovery of ion channels in chondrocytes. The ion channels which appear most frequently across these microarray datasets are discussed, along with their possible functions. We discuss whether functional or protein data exist which support the microarray data. A microarray experiment comparing gene expression in osteoarthritis and healthy cartilage is also discussed and we verify the differential expression of 2 of these genes, namely the genes encoding large calcium-activated potassium (BK) and aquaporin channels. PMID:23995703
Miller, Randolph A.; Waitman, Lemuel R.; Chen, Sutin; Rosenbloom, S. Trent
2006-01-01
The authors describe a pragmatic approach to the introduction of clinical decision support at the point of care, based on a decade of experience in developing and evolving Vanderbilt’s inpatient “WizOrder” care provider order entry (CPOE) system. The inpatient care setting provides a unique opportunity to interject CPOE-based decision support features that restructure clinical workflows, deliver focused relevant educational materials, and influence how care is delivered to patients. From their empirical observations, the authors have developed a generic model for decision support within inpatient CPOE systems. They believe that the model’s utility extends beyond Vanderbilt, because it is based on characteristics of end-user workflows and on decision support considerations that are common to a variety of inpatient settings and CPOE systems. The specific approach to implementing a given clinical decision support feature within a CPOE system should involve evaluation along three axes: what type of intervention to create (for which the authors describe 4 general categories); when to introduce the intervention into the user’s workflow (for which the authors present 7 categories), and how disruptive, during use of the system, the intervention might be to end-users’ workflows (for which the authors describe 6 categories). Framing decision support in this manner may help both developers and clinical end-users plan future alterations to their systems when needs for new decision support features arise. PMID:16290243
Fully Automated Complementary DNA Microarray Segmentation using a Novel Fuzzy-based Algorithm.
Saberkari, Hamidreza; Bahrami, Sheyda; Shamsi, Mousa; Amoshahy, Mohammad Javad; Ghavifekr, Habib Badri; Sedaaghi, Mohammad Hossein
2015-01-01
DNA microarrays are a powerful approach for studying the expression of thousands of genes simultaneously in a single experiment. The average fluorescent intensity of each spot can be calculated in a microarray experiment, and the calculated intensity values closely reflect the expression levels of the corresponding genes. However, determining the appropriate position of every spot in microarray images is a main challenge, one that underpins the accurate classification of normal and abnormal (cancer) cells. In this paper, a preprocessing step first eliminates the noise and artifacts present in microarray cells using nonlinear anisotropic diffusion filtering. Then, the coordinate center of each spot is positioned using mathematical morphology operations. Finally, the position of each spot is determined exactly by applying a novel hybrid model based on principal component analysis and the spatial fuzzy c-means clustering (SFCM) algorithm. Using a Gaussian kernel in the SFCM algorithm improves the quality of complementary DNA microarray segmentation. The performance of the proposed algorithm has been evaluated on real microarray images available in the Stanford Microarray Database. Results illustrate that the accuracy of microarray cell segmentation with the proposed algorithm reaches 100% and 98% for noiseless and noisy cells, respectively.
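The clustering core of such a segmentation can be illustrated with a plain fuzzy c-means pass over spot intensities. This is a minimal sketch: the spatial term and Gaussian kernel that distinguish SFCM are omitted, and the intensity values are made up for the example:

```python
def fuzzy_c_means(xs, c=2, m=2.0, iters=50):
    """Plain 1-D fuzzy c-means: returns cluster centers and one membership row per point."""
    lo, hi = min(xs), max(xs)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]  # deterministic init
    for _ in range(iters):
        # membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]  # guard against zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(c))
                      for i in range(c)])
        # center update: mean of the points weighted by u^m
        centers = [sum((u[k][i] ** m) * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, u

# made-up normalized pixel intensities: low values ~ background, high values ~ spot
data = [0.05, 0.10, 0.12, 0.90, 0.95, 1.00]
centers, memberships = fuzzy_c_means(data, c=2)
```

Each pixel receives a soft membership in every cluster; that soft assignment is what lets the spatial variant fold in neighborhood information before the final hard background/spot decision.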
NASA Astrophysics Data System (ADS)
Tibbetts, Clark; Lichanska, Agnieszka M.; Borsuk, Lisa A.; Weslowski, Brian; Morris, Leah M.; Lorence, Matthew C.; Schafer, Klaus O.; Campos, Joseph; Sene, Mohamadou; Myers, Christopher A.; Faix, Dennis; Blair, Patrick J.; Brown, Jason; Metzgar, David
2010-04-01
High-density resequencing microarrays support simultaneous detection and identification of multiple viral and bacterial pathogens. Because detection and identification using RPM is based upon multiple specimen-specific target pathogen gene sequences generated in the individual test, the test results enable both a differential diagnostic analysis and epidemiological tracking of detected pathogen strains and variants from one specimen to the next. The RPM assay enables detection and identification of pathogen sequences that share as little as 80% sequence similarity to prototype target gene sequences represented as detector tiles on the array. This capability enables the RPM to detect and identify previously unknown strains and variants of a detected pathogen, as in sentinel cases associated with an infectious disease outbreak. We illustrate this capability using assay results from testing influenza A virus vaccines configured with strains that were first defined years after the design of the RPM microarray. Results are also presented from RPM-Flu testing of three specimens independently confirmed to be positive for the 2009 novel H1N1 outbreak strain of influenza virus.
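The 80% similarity threshold cited above is simply percent identity over an aligned region; a minimal sketch, with sequences invented for illustration:

```python
def percent_identity(a, b):
    """Fraction of matching positions in two pre-aligned, equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(x == y for x, y in zip(a, b)) / len(a)

# invented prototype tile vs. a drifted variant differing at 4 of 20 positions
prototype = "ACGTACGTACGTACGTACGT"
variant   = "ACGAACGAACGAACGAACGT"
```

A variant at exactly this 0.8 identity level would sit at the stated detection limit of the assay.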
Microarray-based screening of heat shock protein inhibitors.
Schax, Emilia; Walter, Johanna-Gabriela; Märzhäuser, Helene; Stahl, Frank; Scheper, Thomas; Agard, David A; Eichner, Simone; Kirschning, Andreas; Zeilinger, Carsten
2014-06-20
Based on the importance of heat shock proteins (HSPs) in diseases such as cancer, Alzheimer's disease or malaria, inhibitors of these chaperones are needed. Today's state-of-the-art techniques to identify HSP inhibitors are performed in microplate format, requiring large amounts of proteins and potential inhibitors. In contrast, we have developed a miniaturized protein microarray-based assay to identify novel inhibitors, allowing analysis with 300 pmol of protein. The assay is based on competitive binding of fluorescence-labeled ATP and potential inhibitors to the ATP-binding site of HSP. Therefore, the developed microarray enables the parallel analysis of different ATP-binding proteins on a single microarray. We have demonstrated the possibility of multiplexing by immobilizing full-length human HSP90α and HtpG of Helicobacter pylori on microarrays. Fluorescence-labeled ATP was competed by novel geldanamycin/reblastatin derivatives with IC50 values in the range of 0.5 nM to 4 μM and Z(*)-factors between 0.60 and 0.96. Our results demonstrate the potential of a target-oriented multiplexed protein microarray to identify novel inhibitors for different members of the HSP90 family. Copyright © 2014 Elsevier B.V. All rights reserved.
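The two quantities reported above, IC50 from competitive binding and the Z-factor screening metric, follow standard formulas; a sketch with illustrative numbers, not values from this assay:

```python
def fraction_bound(inhibitor_nM, ic50_nM):
    """One-site competition: labeled-ATP signal remaining at inhibitor concentration [I].
    f = 1 / (1 + [I]/IC50), so f = 0.5 exactly when [I] = IC50."""
    return 1.0 / (1.0 + inhibitor_nM / ic50_nM)

def z_factor(mu_pos, sd_pos, mu_neg, sd_neg):
    """Screening-assay quality: Z' = 1 - 3*(sd_pos + sd_neg) / |mu_pos - mu_neg|.
    Values approaching 1 indicate wide separation between controls."""
    return 1.0 - 3.0 * (sd_pos + sd_neg) / abs(mu_pos - mu_neg)
```

Fitting IC50 in practice means finding the inhibitor concentration at which the measured fluorescent signal drops to half of the uninhibited control.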
Dolan, James G
2010-01-01
Current models of healthcare quality recommend that patient management decisions be evidence-based and patient-centered. Evidence-based decisions require a thorough understanding of current information regarding the natural history of disease and the anticipated outcomes of different management options. Patient-centered decisions incorporate patient preferences, values, and unique personal circumstances into the decision making process and actively involve both patients and health care providers as much as possible. Fundamentally, therefore, evidence-based, patient-centered decisions are multi-dimensional and typically involve multiple decision makers. Advances in the decision sciences have led to the development of a number of multiple criteria decision making methods. These multi-criteria methods are designed to help people make better choices when faced with complex decisions involving several dimensions. They are especially helpful when there is a need to combine "hard data" with subjective preferences, to make trade-offs between desired outcomes, and to involve multiple decision makers. Evidence-based, patient-centered clinical decision making has all of these characteristics. This close match suggests that clinical decision support systems based on multi-criteria decision making techniques have the potential to enable patients and providers to carry out the tasks required to implement evidence-based, patient-centered care effectively and efficiently in clinical settings. The goal of this paper is to give readers a general introduction to the range of multi-criteria methods available and show how they could be used to support clinical decision-making. Methods discussed include the balance sheet, the even swap method, ordinal ranking methods, direct weighting methods, multi-attribute decision analysis, and the analytic hierarchy process (AHP).
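Of the methods listed, direct weighting is the simplest to sketch: score each option as a weighted sum of its per-criterion ratings. The criteria, weights, and ratings below are invented for illustration (higher is better on every criterion):

```python
def weighted_score(ratings, weights):
    """Direct weighting: criterion weights sum to 1, ratings are on a 0-1 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * ratings[c] for c in weights)

# invented criteria, weights, and ratings for two management options;
# "side_effects" is rated as tolerability, so higher means fewer side effects
weights  = {"efficacy": 0.5, "side_effects": 0.3, "convenience": 0.2}
option_a = {"efficacy": 0.9, "side_effects": 0.4, "convenience": 0.6}
option_b = {"efficacy": 0.6, "side_effects": 0.9, "convenience": 0.9}
```

The weights carry the patient's subjective preferences while the ratings carry the "hard data", which is exactly the combination the abstract argues these methods support.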
ERIC Educational Resources Information Center
Jackson, Cath; Cheater, Francine M.; Peacock, Rose; Leask, Julie; Trevena, Lyndal
2010-01-01
Objective: The objective of this feasibility study was to evaluate the acceptability and potential effectiveness of a web-based MMR decision aid in supporting informed decision-making for the MMR vaccine. Design: This was a prospective before-and-after evaluation. Setting: Thirty parents of children eligible for MMR vaccination were recruited from…
User-centered design to improve clinical decision support in primary care.
Brunner, Julian; Chuang, Emmeline; Goldzweig, Caroline; Cain, Cindy L; Sugar, Catherine; Yano, Elizabeth M
2017-08-01
A growing literature has demonstrated the ability of user-centered design to make clinical decision support systems more effective and easier to use. However, studies of user-centered design have rarely examined more than a handful of sites at a time, and have frequently neglected the implementation climate and organizational resources that influence clinical decision support. The inclusion of such factors was identified by a systematic review as "the most important improvement that can be made in health IT evaluations." Our objectives were to: (1) identify the prevalence of four user-centered design practices at United States Veterans Affairs (VA) primary care clinics and assess the perceived utility of clinical decision support at those clinics; and (2) evaluate the association between those user-centered design practices and the perceived utility of clinical decision support. We analyzed clinic-level survey data collected in 2006-2007 from 170 VA primary care clinics. We examined four user-centered design practices: 1) pilot testing, 2) provider satisfaction assessment, 3) formal usability assessment, and 4) analysis of impact on performance improvement. We used a regression model to evaluate the association between user-centered design practices and the perceived utility of clinical decision support, while accounting for other important factors at those clinics, including implementation climate, available resources, and structural characteristics. We also examined associations separately at community-based clinics and at hospital-based clinics. User-centered design practices for clinical decision support varied across clinics: 74% conducted pilot testing, 62% conducted provider satisfaction assessment, 36% conducted a formal usability assessment, and 79% conducted an analysis of impact on performance improvement. Overall perceived utility of clinical decision support was high, with a mean rating of 4.17 (±.67) out of 5 on a composite measure.
"Analysis of impact on performance improvement" was the only user-centered design practice significantly associated with perceived utility of clinical decision support, b=.47 (p<.001). This association was present in hospital-based clinics, b=.34 (p<.05), but was stronger at community-based clinics, b=.61 (p<.001). Our findings are highly supportive of the practice of analyzing the impact of clinical decision support on performance metrics. This was the most common user-centered design practice in our study, and was the practice associated with higher perceived utility of clinical decision support. This practice may be particularly helpful at community-based clinics, which are typically less connected to VA medical center resources. Published by Elsevier B.V.
Gene ARMADA: an integrated multi-analysis platform for microarray data implemented in MATLAB.
Chatziioannou, Aristotelis; Moulos, Panagiotis; Kolisis, Fragiskos N
2009-10-27
The microarray data analysis realm is ever growing through the development of various tools, open source and commercial. However, there is an absence of predefined, rational algorithmic analysis workflows or standardized batch processing that incorporates all steps, from raw data import up to the derivation of significantly differentially expressed gene lists. This absence obfuscates the analytical procedure and obstructs the massive comparative processing of genomic microarray datasets. Moreover, the solutions provided depend heavily on the programming skills of the user, whereas GUI-embedded solutions do not provide direct support for various raw image analysis formats or a versatile yet flexible combination of signal processing methods. We describe here Gene ARMADA (Automated Robust MicroArray Data Analysis), a MATLAB-implemented platform with a graphical user interface. This suite integrates all steps of microarray data analysis including automated data import, noise correction and filtering, normalization, statistical selection of differentially expressed genes, clustering, classification and annotation. In its current version, Gene ARMADA fully supports two-colour cDNA and Affymetrix oligonucleotide arrays, plus custom arrays for which experimental details are given in tabular form (Excel spreadsheet, comma-separated values, or tab-delimited text formats). It also supports the analysis of already processed results through its versatile import editor. Besides being fully automated, Gene ARMADA incorporates numerous functionalities of the Statistics and Bioinformatics Toolboxes of MATLAB. In addition, it provides numerous visualization and exploration tools, plus customizable export data formats for seamless integration with other analysis tools or MATLAB for further processing. Gene ARMADA requires MATLAB 7.4 (R2007a) or higher and is also distributed as a stand-alone application with the MATLAB Component Runtime.
Gene ARMADA provides a highly adaptable, integrative, yet flexible tool which can be used for automated quality control, analysis, annotation and visualization of microarray data, constituting a starting point for further data interpretation and integration with numerous other tools.
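The "statistical selection of differentially expressed genes" step in such pipelines typically amounts to a per-gene two-sample test. A minimal Welch-t sketch in Python (Gene ARMADA itself is MATLAB code; the log-expression values and cutoff below are made up for illustration):

```python
from statistics import mean, variance

def welch_t(xs, ys):
    """Welch t statistic for a two-sample comparison with unequal variances."""
    vx, vy = variance(xs) / len(xs), variance(ys) / len(ys)
    return (mean(xs) - mean(ys)) / (vx + vy) ** 0.5

# made-up per-gene log-expression values in treated vs. control replicates
genes = {
    "geneA": ([2.1, 2.0, 2.3], [0.1, 0.0, 0.2]),    # clearly shifted
    "geneB": ([0.1, -0.1, 0.0], [0.0, 0.1, -0.1]),  # unchanged
}
hits = [g for g, (a, b) in genes.items() if abs(welch_t(a, b)) > 4.0]
```

A real pipeline would convert the statistics to p-values and correct for testing thousands of genes at once, rather than using a fixed cutoff.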
DNA Microarray-based Ecotoxicological Biomarker Discovery in a Small Fish Model Species
This paper addresses several issues critical to use of zebrafish oligonucleotide microarrays for computational toxicology research on endocrine disrupting chemicals using small fish models, and more generally, the use of microarrays in aquatic toxicology.
IMPROVING THE RELIABILITY OF MICROARRAYS FOR TOXICOLOGY RESEARCH: A COLLABORATIVE APPROACH
Microarray-based gene expression profiling is a critical tool to identify molecular biomarkers of specific chemical stressors. Although current microarray technologies have progressed from their infancy, biological and technical repeatability and reliability are often still limit...
Zhu, Yuerong; Zhu, Yuelin; Xu, Wei
2008-01-01
Background Though microarray experiments are very popular in life science research, managing and analyzing microarray data remain challenging tasks for many biologists. Most microarray programs require users to have sophisticated knowledge of mathematics and statistics, as well as computer skills. With accumulating microarray data deposited in public databases, easy-to-use programs to re-analyze previously published microarray data are in high demand. Results EzArray is a web-based Affymetrix expression array data management and analysis system for researchers who need to organize microarray data efficiently and have it analyzed instantly. EzArray organizes microarray data into projects that can be analyzed online with predefined or custom procedures. EzArray performs data preprocessing and detection of differentially expressed genes with statistical methods. All analysis procedures are optimized and highly automated so that even novice users with limited pre-knowledge of microarray data analysis can complete initial analysis quickly. Since all input files, analysis parameters, and executed scripts can be downloaded, EzArray provides maximum reproducibility for each analysis. In addition, EzArray integrates with Gene Expression Omnibus (GEO) and allows instantaneous re-analysis of published array data. Conclusion EzArray is a novel Affymetrix expression array data analysis and sharing system. EzArray provides easy-to-use tools for re-analyzing published microarray data and will help both novice and experienced users perform initial analysis of their microarray data from the location of data storage. We believe EzArray will be a useful system for facilities with microarray services and laboratories with multiple members involved in microarray data analysis. EzArray is freely available from . PMID:18218103
Intelligent Case Based Decision Support System for Online Diagnosis of Automated Production System
NASA Astrophysics Data System (ADS)
Ben Rabah, N.; Saddem, R.; Ben Hmida, F.; Carre-Menetrier, V.; Tagina, M.
2017-01-01
Diagnosis of an Automated Production System (APS) is a decision-making process designed to detect, locate, and identify a particular failure caused by the control law. In the literature, there are three major types of reasoning for industrial diagnosis: model-based, rule-based, and case-based. The common major limitation of the first two is that they lack automated learning ability. This paper presents an interactive and effective Case Based Decision Support System for online Diagnosis (CB-DSSD) of an APS. It offers a synergy between Case Based Reasoning (CBR) and Decision Support Systems (DSS) in order to support and assist the Human Operator of Supervision (HOS) in his or her decision process. An experimental evaluation performed on an Interactive Training System for PLC (ITS PLC), which allows control of a Programmable Logic Controller (PLC), simulating sensor and/or actuator failures and validating the control algorithm through a real-time interactive experience, showed the efficiency of our approach.
EDGE3: A web-based solution for management and analysis of Agilent two color microarray experiments
Vollrath, Aaron L; Smith, Adam A; Craven, Mark; Bradfield, Christopher A
2009-01-01
Background The ability to generate transcriptional data on the scale of entire genomes has been a boon both in the improvement of biological understanding and in the amount of data generated. The latter, the amount of data generated, has implications when it comes to effective storage, analysis and sharing of these data. A number of software tools have been developed to store, analyze, and share microarray data. However, a majority of these tools do not offer all of these features nor do they specifically target the commonly used two color Agilent DNA microarray platform. Thus, the motivating factor for the development of EDGE3 was to incorporate the storage, analysis and sharing of microarray data in a manner that would provide a means for research groups to collaborate on Agilent-based microarray experiments without a large investment in software-related expenditures or extensive training of end-users. Results EDGE3 has been developed with two major functions in mind. The first function is to provide a workflow process for the generation of microarray data by a research laboratory or a microarray facility. The second is to store, analyze, and share microarray data in a manner that doesn't require complicated software. To satisfy the first function, EDGE3 has been developed as a means to establish a well defined experimental workflow and information system for microarray generation. To satisfy the second function, the software application utilized as the user interface of EDGE3 is a web browser. Within the web browser, a user is able to access the entire functionality, including, but not limited to, the ability to perform a number of bioinformatics based analyses, collaborate between research groups through a user-based security model, and access to the raw data files and quality control files generated by the software used to extract the signals from an array image. 
Conclusion Here, we present EDGE3, an open-source, web-based application that allows for the storage, analysis, and controlled sharing of transcription-based microarray data generated on the Agilent DNA platform. In addition, EDGE3 provides a means for managing RNA samples and arrays during the hybridization process. EDGE3 is freely available for download at http://edge.oncology.wisc.edu/. PMID:19732451
Pirooznia, Mehdi; Deng, Youping
2006-12-12
Graphical user interface (GUI) software promotes novelty by allowing users to extend its functionality. SVM Classifier is a cross-platform graphical application that handles very large datasets well. The purpose of this study was to create a GUI application that allows SVM users to perform SVM training, classification, and prediction. The GUI provides user-friendly access to state-of-the-art SVM methods embodied in the LIBSVM implementation of the support vector machine. We implemented the Java interface using standard Swing libraries. We used sample data from a breast cancer study to test classification accuracy, and achieved 100% accuracy in classification among the BRCA1-BRCA2 samples with the RBF kernel of the SVM. We have developed a Java GUI application that allows SVM users to perform SVM training, classification, and prediction. We have demonstrated that support vector machines can accurately classify genes into functional categories based upon expression data from DNA microarray hybridization experiments. Among the different kernel functions that we examined, the SVM that uses a radial basis kernel function provides the best performance. The SVM Classifier is available at http://mfgn.usm.edu/ebl/svm/.
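The RBF kernel credited above with the best performance is easy to state. Below is a Python sketch (the tool itself is a Java/LIBSVM application) with a simple nearest-class-mean rule in kernel space standing in for the trained SVM decision function; the 2-D points and class labels are invented:

```python
import math

def rbf_kernel(x, y, gamma=0.5):
    """Radial basis function kernel: K(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

def classify(x, pos, neg, gamma=0.5):
    """Assign x to the class whose examples are most similar on average in kernel space.
    (A trained SVM would instead use a sparse, weighted sum over support vectors.)"""
    sim_pos = sum(rbf_kernel(x, p, gamma) for p in pos) / len(pos)
    sim_neg = sum(rbf_kernel(x, n, gamma) for n in neg) / len(neg)
    return "BRCA1" if sim_pos > sim_neg else "BRCA2"

# invented 2-D expression summaries for two sample groups
pos = [(1.0, 1.0), (1.2, 0.9)]      # stand-in "BRCA1" samples
neg = [(-1.0, -1.0), (-0.9, -1.1)]  # stand-in "BRCA2" samples
```

Because K(x, y) decays with squared distance, gamma controls how local the decision boundary is, which is the parameter usually tuned alongside the SVM's cost term.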
Alshamlan, Hala; Badr, Ghada; Alohali, Yousef
2015-01-01
An artificial bee colony (ABC) is a relatively recent swarm intelligence optimization approach. In this paper, we propose the first attempt at applying the ABC algorithm to the analysis of a microarray gene expression profile. In addition, we take an established feature selection algorithm, minimum redundancy maximum relevance (mRMR), and combine it with an ABC algorithm, yielding mRMR-ABC, to select informative genes from microarray profiles. The new approach uses a support vector machine (SVM) algorithm to measure the classification accuracy for selected genes. We evaluate the performance of the proposed mRMR-ABC algorithm by conducting extensive experiments on six binary and multiclass gene expression microarray datasets. Furthermore, we compare our proposed mRMR-ABC algorithm with previously known techniques. We reimplemented two of these techniques for the sake of a fair comparison using the same parameters. These two techniques are mRMR combined with a genetic algorithm (mRMR-GA) and mRMR combined with a particle swarm optimization algorithm (mRMR-PSO). The experimental results show that the proposed mRMR-ABC algorithm achieves accurate classification performance using a small number of predictive genes when tested on these datasets and compared to previously suggested methods. This shows that mRMR-ABC is a promising approach for solving gene selection and cancer classification problems. PMID:25961028
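The mRMR criterion, greedily selecting features that maximize relevance to the class while penalizing redundancy with already-chosen genes, can be sketched as follows. Absolute Pearson correlation stands in here for the mutual-information terms usually used, and the toy expression vectors are invented:

```python
from statistics import mean, stdev

def corr(xs, ys):
    """Pearson correlation of two equal-length vectors."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return num / ((len(xs) - 1) * stdev(xs) * stdev(ys))

def mrmr(features, target, k):
    """Greedy mRMR: maximize relevance to the class, penalize redundancy with picks."""
    chosen = []
    while len(chosen) < k:
        def score(name):
            relevance = abs(corr(features[name], target))
            redundancy = (mean(abs(corr(features[name], features[g])) for g in chosen)
                          if chosen else 0.0)
            return relevance - redundancy
        best = max((f for f in features if f not in chosen), key=score)
        chosen.append(best)
    return chosen

# invented expression vectors: g1 and g2 are near-duplicates, g3 adds new signal
target = [1.0, 2.0, 3.0, 4.0, 5.0]  # class score per sample
features = {"g1": [1.1, 2.0, 2.9, 4.2, 5.0],
            "g2": [1.0, 2.1, 3.0, 4.0, 5.1],
            "g3": [2.0, 1.0, 3.5, 1.5, 3.0]}
selected = mrmr(features, target, 2)  # keeps one of the twins plus g3
```

In the paper's hybrid, a wrapper such as ABC then searches over subsets of the mRMR-ranked genes, scoring each candidate subset by SVM classification accuracy.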
Schadt, Eric E; Edwards, Stephen W; GuhaThakurta, Debraj; Holder, Dan; Ying, Lisa; Svetnik, Vladimir; Leonardson, Amy; Hart, Kyle W; Russell, Archie; Li, Guoya; Cavet, Guy; Castle, John; McDonagh, Paul; Kan, Zhengyan; Chen, Ronghua; Kasarskis, Andrew; Margarint, Mihai; Caceres, Ramon M; Johnson, Jason M; Armour, Christopher D; Garrett-Engele, Philip W; Tsinoremas, Nicholas F; Shoemaker, Daniel D
2004-01-01
Background: Computational and microarray-based experimental approaches were used to generate a comprehensive transcript index for the human genome. Oligonucleotide probes designed from approximately 50,000 known and predicted transcript sequences from the human genome were used to survey transcription from a diverse set of 60 tissues and cell lines using ink-jet microarrays. Further, expression activity over at least six conditions was more generally assessed using genomic tiling arrays consisting of probes tiled through a repeat-masked version of the genomic sequence making up chromosomes 20 and 22. Results: The combination of microarray data with extensive genome annotations resulted in a set of 28,456 experimentally supported transcripts. This set of high-confidence transcripts represents the first experimentally driven annotation of the human genome. In addition, the results from genomic tiling suggest that a large amount of transcription exists outside of annotated regions of the genome and serve as an example of how this activity could be measured on a genome-wide scale. Conclusions: These data represent one of the most comprehensive assessments of transcriptional activity in the human genome and provide an atlas of human gene expression over a unique set of gene predictions. Before the annotation of the human genome is considered complete, however, the previously unannotated transcriptional activity throughout the genome must be fully characterized. PMID:15461792
Finding Patterns of Emergence in Science and Technology
2012-09-24
formal evaluation scheduled. Case Studies, Eight Examples: Tissue Engineering, Cold Fusion, RF Metamaterials, DNA Microarrays, Genetic Algorithms, RNAi... emerging capabilities. Demonstrate Evidence Quality (i.e., the rubric) and deliver comprehensible evidential support for nomination; demonstrate proof-of-concept nomination for Chinese
Geue, Lutz; Stieber, Bettina; Monecke, Stefan; Engelmann, Ines; Gunzer, Florian; Slickers, Peter; Braun, Sascha D; Ehricht, Ralf
2014-08-01
In this study, we developed a new rapid, economic, and automated microarray-based genotyping test for the standardized subtyping of Shiga toxins 1 and 2 of Escherichia coli. The microarrays from Alere Technologies can be used in two different formats, the ArrayTube and the ArrayStrip (which enables high-throughput testing in a 96-well format). One microarray chip harbors all the gene sequences necessary to distinguish between all Stx subtypes, facilitating the identification of single and multiple subtypes within a single isolate in one experiment. Specific software was developed to automatically analyze all data obtained from the microarray. The assay was validated with 21 Shiga toxin-producing E. coli (STEC) reference strains that were previously tested by the complete set of conventional subtyping PCRs. The microarray results showed 100% concordance with the PCR results. Essentially identical results were detected when the standard DNA extraction method was replaced by a time-saving heat lysis protocol. For further validation of the microarray, we identified the Stx subtypes or combinations of the subtypes in 446 STEC field isolates of human and animal origin. In summary, this oligonucleotide array represents an excellent diagnostic tool that provides some advantages over standard PCR-based subtyping. The number of the spotted probes on the microarrays can be increased by additional probes, such as for novel alleles, species markers, or resistance genes, should the need arise. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
An Integrated Web-based Decision Support System in Disaster Risk Management
NASA Astrophysics Data System (ADS)
Aye, Z. C.; Jaboyedoff, M.; Derron, M. H.
2012-04-01
Nowadays, web-based decision support systems (DSS) play an essential role in disaster risk management because they help decision makers improve their performance and make better decisions without having to solve complex problems themselves, while reducing demands on human resources and time. Since the decision-making process is one of the main factors that strongly influence the damages and losses suffered by society, it is extremely important to make the right decisions at the right time by combining available risk information with the advanced web technologies of Geographic Information Systems (GIS) and Decision Support Systems (DSS). This paper presents an integrated web-based decision support system (DSS) showing how to use risk information in risk management efficiently and effectively, while highlighting the importance of a decision support system in the field of risk reduction. Going beyond conventional systems, it allows users to define their own strategies, from risk identification through risk reduction, which leads to an integrated approach to risk management. In addition, it also considers the complexity of a changing environment from different perspectives and sectors, with diverse stakeholders involved in the development process. The aim of this platform is to contribute to the natural hazards and geosciences community by developing an open-source web platform where users can analyze risk profiles and make decisions by performing cost-benefit analysis, Environmental Impact Assessment (EIA) and Strategic Environmental Assessment (SEA) with the support of the other tools and resources provided. There are different access rights to the system depending on user profiles and responsibilities. The system is still under development; the current version provides map viewing, basic GIS functionality, assessment of important infrastructure (e.g., bridges, hospitals) affected by landslides, and visualization of the impact-probability matrix in terms of the socio-economic dimension.
DISC-BASED IMMUNOASSAY MICROARRAYS. (R825433)
Microarray technology as applied to areas that include genomics, diagnostics, environmental, and drug discovery, is an interesting research topic for which different chip-based devices have been developed. As an alternative, we have explored the principle of compact disc-based...
Using Data-Based Inquiry and Decision Making To Improve Instruction.
ERIC Educational Resources Information Center
Feldman, Jay; Tung, Rosann
2001-01-01
Discusses a study of six schools using data-based inquiry and decision-making process to improve instruction. Findings identified two conditions to support successful implementation of the process: administrative support, especially in providing teachers learning time, and teacher leadership to encourage and support colleagues to own the process.…
Wilkins, Ella J; Archibald, Alison D; Sahhar, Margaret A; White, Susan M
2016-11-01
Chromosomal microarray is an increasingly utilized diagnostic test, particularly in the pediatric setting. However, the clinical significance of copy number variants detected by this technology is not always understood, creating uncertainties in interpreting and communicating results. The aim of this study was to explore parents' experiences of an uncertain microarray result for their child. This research utilized a qualitative approach with a phenomenological methodology. Semi-structured interviews were conducted with nine parents of eight children who received an uncertain microarray result for their child, either a 16p11.2 microdeletion or 15q13.3 microdeletion. Interviews were transcribed verbatim and thematic analysis was used to identify themes within the data. Participants were unprepared for the abnormal test result. They had a complex perception of the extent of their child's condition and a mixed understanding of the clinical relevance of the result, but were accepting of the limitations of medical knowledge, and appeared to have adapted to the result. The test result was empowering for parents in terms of access to medical and educational services; however, they articulated significant unmet support needs. Participants expressed hope for the future, in particular that more information would become available over time. This research has demonstrated that parents of children who have an uncertain microarray result appeared to adapt to uncertainty and limited availability of information and valued honesty and empathic ongoing support from health professionals. Genetic health professionals are well positioned to provide such support and aid patients' and families' adaptation to their situation as well as promote empowerment. © 2016 Wiley Periodicals, Inc.
Supporting decision-making processes for evidence-based mental health promotion.
Jané-Llopis, Eva; Katschnig, Heinz; McDaid, David; Wahlbeck, Kristian
2011-12-01
The use of evidence is critical in guiding decision-making, but evidence from effect studies will be only one of a number of factors that will need to be taken into account in the decision-making processes. Equally important for policymakers will be the use of different types of evidence including implementation essentials and other decision-making principles such as social justice, political, ethical, equity issues, reflecting public attitudes and the level of resources available, rather than be based on health outcomes alone. This paper, aimed to support decision-makers, highlights the importance of commissioning high-quality evaluations, the key aspects to assess levels of evidence, the importance of supporting evidence-based implementation and what to look out for before, during and after implementation of mental health promotion and mental disorder prevention programmes.
Relational Algebra in Spatial Decision Support Systems Ontologies.
Diomidous, Marianna; Chardalias, Kostis; Koutonias, Panagiotis; Magnita, Adrianna; Andrianopoulos, Charalampos; Zimeras, Stelios; Mechili, Enkeleint Aggelos
2017-01-01
Decision Support Systems (DSS) are powerful tools that help researchers reach the correct decision based on their final results, especially in medical settings, where doctors can use such systems to overcome clinical misunderstandings. With these systems, queries must be constructed from the particular questions that doctors need answered. In this work, the mapping between questions and queries is presented via relational algebra.
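To make the question-to-query mapping concrete, here is a minimal sketch (not from the paper) of the three core relational algebra operators such a DSS might compose, applied to a hypothetical patients/diagnoses schema.

```python
# Minimal relational algebra operators over relations represented as
# lists of dicts (hypothetical clinical schema, not the paper's DSS).

def select(relation, predicate):          # selection: sigma_predicate(R)
    return [row for row in relation if predicate(row)]

def project(relation, attrs):             # projection: pi_attrs(R)
    return [{a: row[a] for a in attrs} for row in relation]

def join(r, s, on):                       # natural join on a shared attribute
    return [{**a, **b} for a in r for b in s if a[on] == b[on]]

patients = [
    {"pid": 1, "name": "Ann", "age": 67},
    {"pid": 2, "name": "Bob", "age": 45},
]
diagnoses = [
    {"pid": 1, "icd": "I10"},   # hypertension
    {"pid": 2, "icd": "E11"},   # type 2 diabetes
]

# Question: "Which patients over 60 have a hypertension diagnosis?"
# Query:    pi_name( sigma_age>60(patients) JOIN sigma_icd='I10'(diagnoses) )
answer = project(
    join(select(patients, lambda r: r["age"] > 60),
         select(diagnoses, lambda r: r["icd"] == "I10"),
         on="pid"),
    ["name"])
print(answer)  # [{'name': 'Ann'}]
```

The point of the algebraic form is that a clinical question decomposes into a fixed pipeline of selections, joins, and projections that the DSS can execute mechanically.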
Designing Computerized Decision Support That Works for Clinicians and Families
Fiks, Alexander G.
2011-01-01
Evidence-based decision-making is central to the practice of pediatrics. Clinical trials and other biomedical research provide a foundation for this process, and practice guidelines, drawing from their results, inform the optimal management of an increasing number of childhood health problems. However, many clinicians fail to adhere to guidelines. Clinical decision support delivered using health information technology, often in the form of electronic health records, provides a tool to deliver evidence-based information to the point of care and has the potential to overcome barriers to evidence-based practice. An increasing literature now informs how these systems should be designed and implemented to most effectively improve outcomes in pediatrics. Through the examples of computerized physician order entry, as well as the impact of alerts at the point of care on immunization rates, the delivery of evidence-based asthma care, and the follow-up of children with attention deficit hyperactivity disorder, the following review addresses strategies for success in using these tools. The following review argues that, as decision support evolves, the clinician should no longer be the sole target of information and alerts. Through the Internet and other technologies, families are increasingly seeking health information and gathering input to guide health decisions. By enlisting clinical decision support systems to deliver evidence-based information to both clinicians and families, help families express their preferences and goals, and connect families to the medical home, clinical decision support may ultimately be most effective in improving outcomes. PMID:21315295
A Platform for Combined DNA and Protein Microarrays Based on Total Internal Reflection Fluorescence
Asanov, Alexander; Zepeda, Angélica; Vaca, Luis
2012-01-01
We have developed a novel microarray technology based on total internal reflection fluorescence (TIRF) in combination with DNA and protein bioassays immobilized at the TIRF surface. Unlike conventional microarrays that exhibit reduced signal-to-background ratio, require several stages of incubation, rinsing and stringency control, and measure only end-point results, our TIRF microarray technology provides several orders of magnitude better signal-to-background ratio, performs analysis rapidly in one step, and measures the entire course of association and dissociation kinetics between target DNA and protein molecules and the bioassays. In many practical cases detection of only DNA or protein markers alone does not provide the necessary accuracy for diagnosing a disease or detecting a pathogen. Here we describe TIRF microarrays that detect DNA and protein markers simultaneously, which reduces the probabilities of false responses. Supersensitive and multiplexed TIRF DNA and protein microarray technology may provide a platform for accurate diagnosis or enhanced research studies. Our TIRF microarray system can be mounted on upright or inverted microscopes or interfaced directly with CCD cameras equipped with a single objective, facilitating the development of portable devices. As proof-of-concept we applied TIRF microarrays for detecting molecular markers from Bacillus anthracis, the pathogen responsible for anthrax. PMID:22438738
Microintaglio Printing for Soft Lithography-Based in Situ Microarrays
Biyani, Manish; Ichiki, Takanori
2015-01-01
Advances in lithographic approaches to fabricating bio-microarrays have been extensively explored over the last two decades. However, the need for pattern flexibility, a high density, a high resolution, affordability and on-demand fabrication is promoting the development of unconventional routes for microarray fabrication. This review highlights the development and uses of a new molecular lithography approach, called “microintaglio printing technology”, for large-scale bio-microarray fabrication using a microreactor array (µRA)-based chip consisting of uniformly-arranged, femtoliter-size µRA molds. In this method, a single-molecule-amplified DNA microarray pattern is self-assembled onto a µRA mold and subsequently converted into a messenger RNA or protein microarray pattern by simultaneously producing and transferring (immobilizing) a messenger RNA or a protein from a µRA mold to a glass surface. Microintaglio printing allows the self-assembly and patterning of in situ-synthesized biomolecules into high-density (kilo-giga-density), ordered arrays on a chip surface with µm-order precision. This holistic aim, which is difficult to achieve using conventional printing and microarray approaches, is expected to revolutionize and reshape proteomics. This review is not written comprehensively, but rather substantively, highlighting the versatility of microintaglio printing for developing a prerequisite platform for microarray technology for the postgenomic era. PMID:27600226
Walser, Sarah A; Werner-Lin, Allison; Russell, Amita; Wapner, Ronald J; Bernhardt, Barbara A
2016-10-01
This study aims to explore how couples' understanding of the nature and consequences of positive prenatal chromosomal microarray analysis (CMA) results impacts decision-making and concern about pregnancy. We interviewed 28 women and 12 male partners after receiving positive results and analyzed the transcripts to assess their understanding and level of concern about the expected clinical implications of results. Participant descriptions were compared to the original laboratory interpretation. When diagnosed prenatally, couples' understanding of the nature and consequences of copy number variants (CNVs) impacts decision-making and concern. Findings suggest women, but less so partners, generally understand the nature and clinical implications of prenatal CMA results. Couples feel reassured, perhaps sometimes falsely so, when a CNV is inherited from a "normal" parent and experience considerable uncertainty when a CNV is de novo, frequently precipitating a search for additional information and guidance. Five factors influenced participants' concern including: the pattern of inheritance, type of possible phenotypic involvement, perceived manageability of outcomes, availability and strength of evidence about outcomes associated with the CNV, and provider messages about continuing the pregnancy. A good understanding of results is vital as couples decide whether or not to continue with their pregnancy and seek additional information to assist in pregnancy decision-making.
Multiplex cDNA quantification method that facilitates the standardization of gene expression data
Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira
2011-01-01
Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides merely the relative amounts of gene expression levels. Therefore, valid comparisons of microarray data require standardized platforms, internal and/or external controls, and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (gene expression profiling by DCN-encoding-based analysis). The method was validated using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008
Dhukaram, Anandhi Vivekanandan; Baber, Chris
2015-06-01
Patients make various healthcare decisions on a daily basis. Such day-to-day decision making can have significant consequences on their own health, treatment, care, and costs. While decision aids (DAs) provide effective support in enhancing patient's decision making, to date there have been few studies examining patient's decision making process or exploring how the understanding of such decision processes can aid in extracting requirements for the design of DAs. This paper applies Cognitive Work Analysis (CWA) to analyse patient's decision making in order to inform requirements for supporting self-care decision making. This study uses focus groups to elicit information from elderly cardiovascular disease (CVD) patients concerning a range of decision situations they face on a daily basis. Specifically, the focus groups addressed issues related to the decision making of CVD in terms of medication compliance, pain, diet and exercise. The results of these focus groups are used to develop high level views using CWA. CWA framework decomposes the complex decision making problem to inform three approaches to DA design: one design based on high level requirements; one based on a normative model of decision-making for patients; and the third based on a range of heuristics that patients seem to use. CWA helps in extracting and synthesising decision making from different perspectives: decision processes, work organisation, patient competencies and strategies used in decision making. As decision making can be influenced by human behaviour like skills, rules and knowledge, it is argued that patients require support to different types of decision making. This paper also provides insights for designers in using CWA framework for the design of effective DAs to support patients in self-management. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Chemiluminescence microarrays in analytical chemistry: a critical review.
Seidel, Michael; Niessner, Reinhard
2014-09-01
Multi-analyte immunoassays on microarrays and on multiplex DNA microarrays have been described for quantitative analysis of small organic molecules (e.g., antibiotics, drugs of abuse, small molecule toxins), proteins (e.g., antibodies or protein toxins), and microorganisms, viruses, and eukaryotic cells. In analytical chemistry, multi-analyte detection by use of analytical microarrays has become an innovative research topic because of the possibility of generating several sets of quantitative data for different analyte classes in a short time. Chemiluminescence (CL) microarrays are powerful tools for rapid multiplex analysis of complex matrices. A wide range of applications for CL microarrays is described in the literature dealing with analytical microarrays. The motivation for this review is to summarize the current state of CL-based analytical microarrays. Combining analysis of different compound classes on CL microarrays reduces analysis time, cost of reagents, and use of laboratory space. Applications are discussed, with examples from food safety, water safety, environmental monitoring, diagnostics, forensics, toxicology, and biosecurity. The potential and limitations of research on multiplex analysis by use of CL microarrays are discussed in this review.
Wright, Adam; Sittig, Dean F.
2008-01-01
In this paper we describe and evaluate a new distributed architecture for clinical decision support called SANDS (Service-oriented Architecture for NHIN Decision Support), which leverages current health information exchange efforts and is based on the principles of a service-oriented architecture. The architecture allows disparate clinical information systems and clinical decision support systems to be seamlessly integrated over a network according to a set of interfaces and protocols described in this paper. The architecture described is fully defined and developed, and six use cases have been developed and tested using a prototype electronic health record which links to one of the existing prototype National Health Information Networks (NHIN): drug interaction checking, syndromic surveillance, diagnostic decision support, inappropriate prescribing in older adults, information at the point of care and a simple personal health record. Some of these use cases utilize existing decision support systems, which are either commercially or freely available at present, and developed outside of the SANDS project, while other use cases are based on decision support systems developed specifically for the project. Open source code for many of these components is available, and an open source reference parser is also available for comparison and testing of other clinical information systems and clinical decision support systems that wish to implement the SANDS architecture. PMID:18434256
Schönmann, Susan; Loy, Alexander; Wimmersberger, Céline; Sobek, Jens; Aquino, Catharine; Vandamme, Peter; Frey, Beat; Rehrauer, Hubert; Eberl, Leo
2009-04-01
For cultivation-independent and highly parallel analysis of members of the genus Burkholderia, an oligonucleotide microarray (phylochip) consisting of 131 hierarchically nested 16S rRNA gene-targeted oligonucleotide probes was developed. A novel primer pair was designed for selective amplification of a 1.3 kb 16S rRNA gene fragment of Burkholderia species prior to microarray analysis. The diagnostic performance of the microarray for identification and differentiation of Burkholderia species was tested with 44 reference strains of the genera Burkholderia, Pandoraea, Ralstonia and Limnobacter. Hybridization patterns based on presence/absence of probe signals were interpreted semi-automatically using the novel likelihood-based strategy of the web-tool PhyloDetect. Eighty-eight per cent of the reference strains were correctly identified at the species level. The evaluated microarray was applied to investigate shifts in the Burkholderia community structure in acidic forest soil upon addition of cadmium, a condition that selected for Burkholderia species. The microarray results were in agreement with those obtained from phylogenetic analysis of Burkholderia 16S rRNA gene sequences recovered from the same cadmium-contaminated soil, demonstrating the value of the Burkholderia phylochip for determinative and environmental studies.
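Likelihood-based interpretation of presence/absence hybridization patterns can be sketched as follows. This is a toy illustration under a simple independent per-probe error model; the reference patterns and error rate are invented, not PhyloDetect's actual data or algorithm.

```python
# Toy likelihood-based identification from presence/absence probe signals:
# each reference species has an expected hybridization pattern, and each
# probe is assumed to flip independently with probability EPS (invented).
import math

REFERENCE = {                        # hypothetical expected probe patterns
    "B. cepacia":       (1, 1, 0, 1, 0),
    "B. cenocepacia":   (1, 1, 1, 0, 0),
    "B. vietnamiensis": (1, 0, 0, 1, 1),
}
EPS = 0.05                           # assumed per-probe error rate

def log_likelihood(observed, expected, eps=EPS):
    # Each matching probe contributes log(1 - eps), each mismatch log(eps).
    return sum(math.log(1 - eps) if o == e else math.log(eps)
               for o, e in zip(observed, expected))

def identify(observed):
    # Pick the reference species whose pattern best explains the signals.
    return max(REFERENCE, key=lambda sp: log_likelihood(observed, REFERENCE[sp]))

print(identify((1, 1, 1, 0, 0)))  # B. cenocepacia
```

A practical implementation would also report how close the runner-up likelihood is, so that ambiguous patterns (equidistant from two references) are flagged rather than forced to a single species.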
Creating and sharing clinical decision support content with Web 2.0: Issues and examples.
Wright, Adam; Bates, David W; Middleton, Blackford; Hongsermeier, Tonya; Kashyap, Vipul; Thomas, Sean M; Sittig, Dean F
2009-04-01
Clinical decision support is a powerful tool for improving healthcare quality and patient safety. However, developing a comprehensive package of decision support interventions is costly and difficult. If used well, Web 2.0 methods may make it easier and less costly to develop decision support. Web 2.0 is characterized by online communities, open sharing, interactivity and collaboration. Although most previous attempts at sharing clinical decision support content have worked outside of the Web 2.0 framework, several initiatives are beginning to use Web 2.0 to share and collaborate on decision support content. We present case studies of three efforts: the Clinfowiki, a world-accessible wiki for developing decision support content; Partners Healthcare eRooms, web-based tools for developing decision support within a single organization; and Epic Systems Corporation's Community Library, a repository for sharing decision support content for customers of a single clinical system vendor. We evaluate the potential of Web 2.0 technologies to enable collaborative development and sharing of clinical decision support systems through the lens of three case studies; analyzing technical, legal and organizational issues for developers, consumers and organizers of clinical decision support content in Web 2.0. We believe the case for Web 2.0 as a tool for collaborating on clinical decision support content appears strong, particularly for collaborative content development within an organization.
Usadel, Björn; Nagel, Axel; Steinhauser, Dirk; Gibon, Yves; Bläsing, Oliver E; Redestig, Henning; Sreenivasulu, Nese; Krall, Leonard; Hannah, Matthew A; Poree, Fabien; Fernie, Alisdair R; Stitt, Mark
2006-12-18
Microarray technology has become a widely accepted and standardized tool in biology. The first microarray data analysis programs were developed to support pair-wise comparison. However, as microarray experiments have become more routine, large scale experiments have become more common, which investigate multiple time points or sets of mutants or transgenics. To extract biological information from such high-throughput expression data, it is necessary to develop efficient analytical platforms, which combine manually curated gene ontologies with efficient visualization and navigation tools. Currently, most tools focus on a few limited biological aspects, rather than offering a holistic, integrated analysis. Here we introduce PageMan, a multiplatform, user-friendly, and stand-alone software tool that annotates, investigates, and condenses high-throughput microarray data in the context of functional ontologies. It includes a GUI tool to transform different ontologies into a suitable format, enabling the user to compare and choose between different ontologies. It is equipped with several statistical modules for data analysis, including over-representation analysis and Wilcoxon statistical testing. Results are exported in a graphical format for direct use, or for further editing in graphics programs. PageMan provides a fast overview of single treatments, allows genome-level responses to be compared across several microarray experiments covering, for example, stress responses at multiple time points. This aids in searching for trait-specific changes in pathways using mutants or transgenics, analyzing development time-courses, and comparison between species. In a case study, we analyze the results of publicly available microarrays of multiple cold stress experiments using PageMan, and compare the results to a previously published meta-analysis. PageMan offers a complete user's guide, a web-based over-representation analysis as well as a tutorial, and is freely available at http://mapman.mpimp-golm.mpg.de/pageman/. PageMan allows multiple microarray experiments to be efficiently condensed into a single page graphical display. The flexible interface allows data to be quickly and easily visualized, facilitating comparisons within experiments and to published experiments, thus enabling researchers to gain a rapid overview of the biological responses in the experiments.
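The over-representation statistics behind tools like PageMan can be illustrated with a hypergeometric tail test. This is a generic sketch using only the standard library; the gene counts are invented, not drawn from the cold stress case study.

```python
# Over-representation analysis sketch: given N annotated genes of which K
# belong to a functional category, and n responsive genes of which k fall
# in that category, the hypergeometric tail gives the enrichment p-value.
from math import comb

def hypergeom_pvalue(k, n, K, N):
    """P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Invented counts: 22,000 genes on the array, 150 annotated to the
# category; 400 differentially expressed, 30 of them in the category.
p = hypergeom_pvalue(k=30, n=400, K=150, N=22000)
print(f"enrichment p-value: {p:.3g}")
```

Under the null, only about 400 × 150 / 22000 ≈ 2.7 of the responsive genes would land in the category by chance, so observing 30 yields a vanishingly small p-value; real pipelines would additionally correct for testing many categories at once.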
The emergence and diffusion of DNA microarray technology.
Lenoir, Tim; Giannella, Eric
2006-08-22
The network model of innovation widely adopted among researchers in the economics of science and technology posits relatively porous boundaries between firms and academic research programs and a bi-directional flow of inventions, personnel, and tacit knowledge between sites of university and industry innovation. Moreover, the model suggests that these bi-directional flows should be considered as mutual stimulation of research and invention in both industry and academe, operating as a positive feedback loop. One side of this bi-directional flow, namely the flow of inventions into industry through the licensing of university-based technologies, has been well studied; but the reverse phenomenon of the stimulation of university research through the absorption of new directions emanating from industry has yet to be investigated in much detail. We discuss the role of federal funding of academic research in the microarray field, and the multiple pathways through which federally supported development of commercial microarray technologies have transformed core academic research fields. Our study confirms the picture put forward by several scholars that the open character of networked economies is what makes them truly innovative. In an open system innovations emerge from the network. The emergence and diffusion of microarray technologies we have traced here provides an excellent example of an open system of innovation in action. Whether they originated in a startup company environment that operated like a think-tank, such as Affymax, the research labs of a large firm, such as Agilent, or within a research university, the inventors we have followed drew heavily on knowledge resources from all parts of the network in bringing microarray platforms to light.
Federal funding for high-tech startups and new industrial development was important at several phases in the early history of microarrays, and federal funding of academic researchers using microarrays was fundamental to transforming the research agendas of several fields within academe. The typical story told about the role of federal funding emphasizes the spillovers from federally funded academic research to industry. Our study shows that the knowledge spillovers worked both ways, with federal funding of non-university research providing the impetus for reshaping the research agendas of several academic fields.
Yamamoto, F; Yamamoto, M
2004-07-01
We previously developed a PCR-based DNA fingerprinting technique named the Methylation Sensitive (MS)-AFLP method, which permits comparative genome-wide scanning of methylation status with a manageable number of fingerprinting experiments. The technique uses the methylation sensitive restriction enzyme NotI in the context of the existing Amplified Fragment Length Polymorphism (AFLP) method. Here we report the successful conversion of this gel electrophoresis-based DNA fingerprinting technique into a DNA microarray hybridization technique (DNA Microarray MS-AFLP). By performing a total of 30 (15 x 2 reciprocal labeling) DNA Microarray MS-AFLP hybridization experiments on genomic DNA from two breast and three prostate cancer cell lines in all pairwise combinations, and Southern hybridization experiments using more than 100 different probes, we have demonstrated that the DNA Microarray MS-AFLP is a reliable method for genetic and epigenetic analyses. No statistically significant differences were observed in the number of differences between the breast-prostate hybridization experiments and the breast-breast or prostate-prostate comparisons.
Using Visualization in Cockpit Decision Support Systems
NASA Technical Reports Server (NTRS)
Aragon, Cecilia R.
2005-01-01
In order to safely operate their aircraft, pilots must make rapid decisions based on integrating and processing large amounts of heterogeneous information. Visual displays are often the most efficient method of presenting safety-critical data to pilots in real time. However, care must be taken to ensure the pilot is provided with the appropriate amount of information to make effective decisions and not become cognitively overloaded. The results of two usability studies of a prototype airflow hazard visualization cockpit decision support system are summarized. The studies demonstrate that such a system significantly improves the performance of helicopter pilots landing under turbulent conditions. Based on these results, design principles and implications for cockpit decision support systems using visualization are presented.
Development of a microarray-based assay for efficient testing of new HSP70/DnaK inhibitors.
Mohammadi-Ostad-Kalayeh, Sona; Hrupins, Vjaceslavs; Helmsen, Sabine; Ahlbrecht, Christin; Stahl, Frank; Scheper, Thomas; Preller, Matthias; Surup, Frank; Stadler, Marc; Kirschning, Andreas; Zeilinger, Carsten
2017-12-15
A facile method for testing ATP binding in a highly miniaturized microarray environment using human HSP70 and DnaK from Mycobacterium tuberculosis as biological targets is reported. Supported by molecular modelling studies, we demonstrate that the position of the fluorescence label on ATP has a strong influence on the binding to human HSP70. Importantly, the label has to be positioned on the adenine ring and not on the terminal phosphate group. Unlabelled ATP displaced bound Cy5-ATP from HSP70 in the micromolar range. The affinity of a well-known HSP70 inhibitor, VER155008, for the ATP binding site in HSP70 was determined, with an EC50 in the micromolar range, whereas reblastin, an HSP90 inhibitor, did not compete for ATP in the presence of HSP70. The applicability of the method was demonstrated by screening a small compound library of natural products. This revealed that the terphenyls rickenyl A and D, recently isolated from cultures of the fungus Hypoxylon rickii, are inhibitors of HSP70. They compete with ATP for the chaperone in the range of 29 µM (Rickenyl D) and 49 µM (Rickenyl A). Furthermore, the microarray-based test system enabled protein-protein interaction analysis using full-length HSP70 and HSP90 proteins. The labelled full-length human HSP90 binds with a half-maximal affinity of 5.5 µg/ml (∼40 µM) to HSP70. The data also demonstrate that the microarray test has potential for many applications, from inhibitor screening to target-oriented interaction studies.
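EC50 values of the kind reported above are read off competition curves: bound labelled-ATP signal measured across inhibitor concentrations, with the half-maximal point taken from the fitted curve. Real analyses fit a four-parameter logistic in log-concentration space; as a rough illustration of the idea only, here is a linear-interpolation estimate on invented displacement data:

```python
def ec50_by_interpolation(concs, signals):
    """Estimate the half-maximal concentration by linear interpolation
    between the two measurements that bracket the midpoint signal."""
    half = (max(signals) + min(signals)) / 2.0
    for i in range(len(concs) - 1):
        s1, s2 = signals[i], signals[i + 1]
        if (s1 - half) * (s2 - half) <= 0:  # midpoint crossed between i and i+1
            c1, c2 = concs[i], concs[i + 1]
            return c1 + (half - s1) * (c2 - c1) / (s2 - s1)
    raise ValueError("midpoint signal not bracketed by the data")

# Hypothetical displacement curve: bound Cy5-ATP signal (%) vs inhibitor
# concentration (in µM); illustrative numbers, not the paper's data.
concs = [1, 3, 10, 30, 100, 300]
signals = [95, 90, 70, 40, 15, 5]
ec50 = ec50_by_interpolation(concs, signals)  # lands between 10 and 30 µM
```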
CLIC, a tool for expanding biological pathways based on co-expression across thousands of datasets
Li, Yang; Liu, Jun S.; Mootha, Vamsi K.
2017-01-01
In recent years, there has been a huge rise in the number of publicly available transcriptional profiling datasets. These massive compendia comprise billions of measurements and provide a special opportunity to predict the function of unstudied genes based on co-expression to well-studied pathways. Such analyses can be very challenging, however, since biological pathways are modular and may exhibit co-expression only in specific contexts. To overcome these challenges we introduce CLIC, CLustering by Inferred Co-expression. CLIC accepts as input a pathway consisting of two or more genes. It then uses a Bayesian partition model to simultaneously partition the input gene set into coherent co-expressed modules (CEMs), while assigning the posterior probability for each dataset in support of each CEM. CLIC then expands each CEM by scanning the transcriptome for additional co-expressed genes, quantified by an integrated log-likelihood ratio (LLR) score weighted for each dataset. As a byproduct, CLIC automatically learns the conditions (datasets) within which a CEM is operative. We implemented CLIC using a compendium of 1774 mouse microarray datasets (28628 microarrays) or 1887 human microarray datasets (45158 microarrays). CLIC analysis reveals that of 910 canonical biological pathways, 30% consist of strongly co-expressed gene modules for which new members are predicted. For example, CLIC predicts a functional connection between protein C7orf55 (FMC1) and the mitochondrial ATP synthase complex that we have experimentally validated. CLIC is freely available at www.gene-clic.org. We anticipate that CLIC will be valuable both for revealing new components of biological pathways as well as the conditions in which they are active. PMID:28719601
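CLIC's expansion step ranks transcriptome genes by a dataset-weighted log-likelihood ratio of co-expression with the input module; a crude stand-in for that step is to correlate each candidate gene against the averaged module profile and rank by correlation. The gene names and expression values below are invented (FMC1 appears only to echo the abstract's example):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy)

# Hypothetical expression profiles over six arrays.
module = {"ATP5A1": [1, 2, 3, 4, 5, 6],
          "ATP5B":  [1.1, 2.0, 3.2, 3.9, 5.1, 6.0]}
candidates = {"FMC1":  [0.9, 2.1, 2.8, 4.2, 4.9, 6.1],
              "GAPDH": [3, 3, 3, 3, 2.9, 3.1]}

# Average the module profile, then rank candidates by correlation to it.
avg = [sum(vals) / len(module) for vals in zip(*module.values())]
ranked = sorted(candidates, key=lambda g: pearson(candidates[g], avg), reverse=True)
```

Unlike this sketch, CLIC learns *which* datasets support each module and weights the evidence accordingly, which is what lets it handle context-specific co-expression.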
Application of machine learning on brain cancer multiclass classification
NASA Astrophysics Data System (ADS)
Panca, V.; Rustam, Z.
2017-07-01
Classification of brain cancer is a problem of multiclass classification. One approach to solve this problem is by first transforming it into several binary problems. The microarray gene expression dataset has the two main characteristics of medical data: extremely many features (genes) and only a small number of samples. The application of machine learning on microarray gene expression datasets mainly consists of two steps: feature selection and classification. In this paper, the features are selected using a method based on the support vector machine recursive feature elimination (SVM-RFE) principle, improved to solve multiclass classification and called multiple multiclass SVM-RFE. Instead of using only the selected features on a single classifier, this method combines the results of multiple classifiers. The features are divided into subsets and SVM-RFE is used on each subset. Then, the selected features of each subset are put on separate classifiers. This method enhances the feature selection ability of each single SVM-RFE. Twin support vector machine (TWSVM) is used as the classifier to reduce computational complexity. While ordinary SVM finds a single optimum hyperplane, the main objective of Twin SVM is to find two non-parallel optimum hyperplanes. The experiment on the brain cancer microarray gene expression dataset shows that this method could classify 71.4% of the overall test data correctly, using 100 and 1000 genes selected with the multiple multiclass SVM-RFE feature selection method. Furthermore, the per-class results show that this method could classify data of the normal and MD classes with 100% accuracy.
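SVM-RFE iteratively trains a linear SVM and discards the feature with the smallest squared weight until the desired number of genes remains. The sketch below keeps the elimination loop but, to stay dependency-free, plugs in a simple class-mean-difference score where real SVM-RFE would use the trained SVM's weight; all data are toy values:

```python
def rfe_rank(X, y, score, n_keep):
    """Recursive feature elimination: repeatedly drop the feature with the
    smallest score until n_keep remain.  Real SVM-RFE scores feature j by
    the squared weight w_j^2 of a linear SVM retrained at each round."""
    features = list(range(len(X[0])))
    while len(features) > n_keep:
        worst = min(features, key=lambda j: score(X, y, j))
        features.remove(worst)
    return features

def mean_difference(X, y, j):
    """Surrogate score: |difference of class means| for feature j."""
    pos = [row[j] for row, label in zip(X, y) if label == 1]
    neg = [row[j] for row, label in zip(X, y) if label == 0]
    return abs(sum(pos) / len(pos) - sum(neg) / len(neg))

# Toy "expression matrix": feature 0 is informative, features 1-2 are noise.
X = [[5.0, 1.0, 2.0], [5.2, 0.9, 2.1], [1.0, 1.1, 2.0], [1.2, 1.0, 1.9]]
y = [1, 1, 0, 0]
kept = rfe_rank(X, y, mean_difference, 1)  # retains the informative feature
```

The paper's multiple multiclass variant runs this elimination separately on feature subsets and per one-against-all binary problem, then votes across the resulting classifiers.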
Opportunities at the Intersection of Bioinformatics and Health Informatics
Miller, Perry L.
2000-01-01
This paper provides a “viewpoint discussion” based on a presentation made to the 2000 Symposium of the American College of Medical Informatics. It discusses potential opportunities for researchers in health informatics to become involved in the rapidly growing field of bioinformatics, using the activities of the Yale Center for Medical Informatics as a case study. One set of opportunities occurs where bioinformatics research itself intersects with the clinical world. Examples include the correlations between individual genetic variation with clinical risk factors, disease presentation, and differential response to treatment; and the implications of including genetic test results in the patient record, which raises clinical decision support issues as well as legal and ethical issues. A second set of opportunities occurs where bioinformatics research can benefit from the technologic expertise and approaches that informaticians have used extensively in the clinical arena. Examples include database organization and knowledge representation, data mining, and modeling and simulation. Microarray technology is discussed as a specific potential area for collaboration. Related questions concern how best to establish collaborations with bioscientists so that the interests and needs of both sets of researchers can be met in a synergistic fashion, and the most appropriate home for bioinformatics in an academic medical center. PMID:10984461
Cell-Based Microarrays for In Vitro Toxicology
NASA Astrophysics Data System (ADS)
Wegener, Joachim
2015-07-01
DNA/RNA and protein microarrays have proven their outstanding bioanalytical performance throughout the past decades, given the unprecedented level of parallelization by which molecular recognition assays can be performed and analyzed. Cell microarrays (CMAs) make use of similar construction principles. They are applied to profile a given cell population with respect to the expression of specific molecular markers and also to measure functional cell responses to drugs and chemicals. This review focuses on the use of cell-based microarrays for assessing the cytotoxicity of drugs, toxins, or chemicals in general. It also summarizes CMA construction principles with respect to the cell types that are used for such microarrays, the readout parameters to assess toxicity, and the various formats that have been established and applied. The review ends with a critical comparison of CMAs and well-established microtiter plate (MTP) approaches.
NASA Astrophysics Data System (ADS)
Sabeur, Z. A.; Wächter, J.; Middleton, S. E.; Zlatev, Z.; Häner, R.; Hammitzsch, M.; Loewe, P.
2012-04-01
The intelligent management of large volumes of environmental monitoring data for early tsunami warning requires the deployment of a robust and scalable service-oriented infrastructure that is supported by an agile knowledge-base for critical decision-support. In the TRIDEC project (TRIDEC 2010-2013), a sensor observation service bus of the TRIDEC system is being developed for the advancement of complex tsunami event processing and management. Further, a dedicated TRIDEC system knowledge-base is being implemented to enable on-demand access to semantically rich OGC SWE compliant hydrodynamic observations and operationally oriented meta-information for multiple subscribers. TRIDEC decision support requires a scalable and agile real-time processing architecture which enables fast response to evolving subscriber requirements as the tsunami crisis develops. This is also achieved with the support of intelligent processing services which specialise in multi-level fusion methods with relevance feedback and deep learning. The TRIDEC knowledge-base development work, coupled with that of the generic sensor bus platform, shall be presented to demonstrate advanced decision-support with situation awareness in the context of tsunami early warning and crisis management.
D'Arrigo, Stefano; Gavazzi, Francesco; Alfei, Enrico; Zuffardi, Orsetta; Montomoli, Cristina; Corso, Barbara; Buzzi, Erika; Sciacca, Francesca L; Bulgheroni, Sara; Riva, Daria; Pantaleoni, Chiara
2016-05-01
Microarray-based comparative genomic hybridization is a method of molecular analysis that identifies chromosomal anomalies (or copy number variants) that correlate with clinical phenotypes. The aim of the present study was to apply a clinical score previously proposed by de Vries to 329 patients with intellectual disability/developmental disorder (intellectual disability/developmental delay) referred to our tertiary center and to see whether the clinical factors are associated with a positive outcome of aCGH analyses. Another goal was to test the association between a positive microarray-based comparative genomic hybridization result and the severity of intellectual disability/developmental delay. Microarray-based comparative genomic hybridization identified structural chromosomal alterations responsible for the intellectual disability/developmental delay phenotype in 16% of our sample. Our study showed that causative copy number variants are frequently found even in cases of mild intellectual disability (30.77%). We want to emphasize the need to conduct microarray-based comparative genomic hybridization on all individuals with intellectual disability/developmental delay, regardless of severity, because the degree of intellectual disability/developmental delay does not predict the diagnostic yield of microarray-based comparative genomic hybridization.
MiMiR – an integrated platform for microarray data sharing, mining and analysis
Tomlinson, Chris; Thimma, Manjula; Alexandrakis, Stelios; Castillo, Tito; Dennis, Jayne L; Brooks, Anthony; Bradley, Thomas; Turnbull, Carly; Blaveri, Ekaterini; Barton, Geraint; Chiba, Norie; Maratou, Klio; Soutter, Pat; Aitman, Tim; Game, Laurence
2008-01-01
Background Despite considerable efforts within the microarray community for standardising data format, content and description, microarray technologies present major challenges in managing, sharing, analysing and re-using the large amount of data generated locally or internationally. Additionally, it is recognised that inconsistent and low quality experimental annotation in public data repositories significantly compromises the re-use of microarray data for meta-analysis. MiMiR, the Microarray data Mining Resource, was designed to tackle some of these limitations and challenges. Here we present new software components and enhancements to the original infrastructure that increase accessibility, utility and opportunities for large scale mining of experimental and clinical data. Results A user-friendly Online Annotation Tool allows researchers to submit detailed experimental information via the web at the time of data generation rather than at the time of publication. This ensures easy access to, and high accuracy of, the meta-data collected. Experiments are programmatically built in the MiMiR database from the submitted information and details are systematically curated and further annotated by a team of trained annotators using a new Curation and Annotation Tool. Clinical information can be annotated and coded with a clinical Data Mapping Tool within an appropriate ethical framework. Users can visualise experimental annotation, assess data quality, download and share data via a web-based experiment browser called MiMiR Online. All requests to access data in MiMiR are routed through a sophisticated middleware security layer, thereby allowing secure data access and sharing amongst MiMiR registered users prior to publication. Data in MiMiR can be mined and analysed using the integrated EMAAS open source analysis web portal or via export of data and meta-data into the Rosetta Resolver data analysis package.
Conclusion The new MiMiR suite of software enables systematic and effective capture of extensive experimental and clinical information with the highest MIAME score, and secure data sharing prior to publication. MiMiR currently contains more than 150 experiments corresponding to over 3000 hybridisations and supports the Microarray Centre's large microarray user community and two international consortia. The MiMiR flexible and scalable hardware and software architecture enables secure warehousing of thousands of datasets, including clinical studies, from microarray and potentially other -omics technologies. PMID:18801157
Stakeholder perspectives on decision-analytic modeling frameworks to assess genetic services policy.
Guzauskas, Gregory F; Garrison, Louis P; Stock, Jacquie; Au, Sylvia; Doyle, Debra Lochner; Veenstra, David L
2013-01-01
Genetic services policymakers and insurers often make coverage decisions in the absence of complete evidence of clinical utility and under budget constraints. We evaluated genetic services stakeholder opinions on the potential usefulness of decision-analytic modeling to inform coverage decisions, and asked them to identify genetic tests for decision-analytic modeling studies. We presented an overview of decision-analytic modeling to members of the Western States Genetic Services Collaborative Reimbursement Work Group and state Medicaid representatives and conducted directed content analysis and an anonymous survey to gauge their attitudes toward decision-analytic modeling. Participants also identified and prioritized genetic services for prospective decision-analytic evaluation. Participants expressed dissatisfaction with current processes for evaluating insurance coverage of genetic services. Some participants expressed uncertainty about their comprehension of decision-analytic modeling techniques. All stakeholders reported openness to using decision-analytic modeling for genetic services assessments. Participants were most interested in application of decision-analytic concepts to multiple-disorder testing platforms, such as next-generation sequencing and chromosomal microarray. Decision-analytic modeling approaches may provide a useful decision tool to genetic services stakeholders and Medicaid decision-makers.
Kawamoto, Kensaku; Lobach, David F
2005-01-01
Despite their demonstrated ability to improve care quality, clinical decision support systems are not widely used. In part, this limited use is due to the difficulty of sharing medical knowledge in a machine-executable format. To address this problem, we developed a decision support Web service known as SEBASTIAN. In SEBASTIAN, individual knowledge modules define the data requirements for assessing a patient, the conclusions that can be drawn using that data, and instructions on how to generate those conclusions. Using standards-based XML messages transmitted over HTTP, client decision support applications provide patient data to SEBASTIAN and receive patient-specific assessments and recommendations. SEBASTIAN has been used to implement four distinct decision support systems; an architectural overview is provided for one of these systems. Preliminary assessments indicate that SEBASTIAN fulfills all original design objectives, including the re-use of executable medical knowledge across diverse applications and care settings, the straightforward authoring of knowledge modules, and use of the framework to implement decision support applications with significant clinical utility.
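The SEBASTIAN design separates each knowledge module's declared data requirements from its inference logic. The fragment below mimics that separation in plain Python; the module id, field names, and clinical rule are invented illustrations, not actual SEBASTIAN content (which is exchanged as standards-based XML over HTTP):

```python
# A knowledge module declares the patient data it needs and how to derive a
# patient-specific recommendation from it.  The rule (diabetics with HbA1c
# above 7% need follow-up) is a hypothetical example for illustration only.
MODULE = {
    "id": "hba1c-followup",
    "requires": ["diagnoses", "hba1c_percent"],
    "evaluate": lambda data: (
        "Recommend HbA1c follow-up"
        if "diabetes" in data["diagnoses"] and data["hba1c_percent"] > 7.0
        else None),
}

def run_module(module, patient_data):
    """Check the module's data requirements, then produce its conclusion."""
    missing = [f for f in module["requires"] if f not in patient_data]
    if missing:
        raise ValueError(f"missing required data: {missing}")
    return module["evaluate"](patient_data)

advice = run_module(MODULE, {"diagnoses": ["diabetes"], "hba1c_percent": 8.1})
```

Keeping requirements declarative is what lets a service advertise, to any client application, exactly which patient data it must be sent before it can return an assessment.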
Chorpita, Bruce F; Bernstein, Adam; Daleiden, Eric L
2008-03-01
This paper illustrates the application of design principles for tools that structure clinical decision-making. If the effort to implement evidence-based practices in community services organizations is to be effective, attention must be paid to the decision-making context in which such treatments are delivered. Clinical research trials commonly occur in an environment characterized by structured decision making and expert supports. Technology has great potential to serve mental health organizations by supporting these potentially important contextual features of the research environment, through organization and reporting of clinical data into interpretable information to support decisions and anchor decision-making procedures. This article describes one example of a behavioral health reporting system designed to facilitate clinical and administrative use of evidence-based practices. The design processes underlying this system-mapping of decision points and distillation of performance information at the individual, caseload, and organizational levels-can be implemented to support clinical practice in a wide variety of settings.
Toward the Modularization of Decision Support Systems
NASA Astrophysics Data System (ADS)
Raskin, R. G.
2009-12-01
Decision support systems are typically developed entirely from scratch without the use of modular components. This “stovepiped” approach is inefficient and costly because it prevents a developer from leveraging the data, models, tools, and services of other developers. Even when a decision support component is made available, it is difficult to know what problem it solves, how it relates to other components, or even that the component exists. The Spatial Decision Support (SDS) Consortium was formed in 2008 to organize the body of knowledge in SDS within a common portal. The portal identifies the canonical steps in the decision process and enables decision support components to be registered, categorized, and searched. This presentation describes how a decision support system can be assembled from modular models, data, tools and services, based on the needs of the Earth science application.
Microarray profiling of human white adipose tissue after exogenous leptin injection.
Taleb, S; Van Haaften, R; Henegar, C; Hukshorn, C; Cancello, R; Pelloux, V; Hanczar, B; Viguerie, N; Langin, D; Evelo, C; Zucker, J; Clément, K; Saris, W H M
2006-03-01
Leptin is a secreted adipocyte hormone that plays a key role in the regulation of body weight homeostasis. The leptin effect on human white adipose tissue (WAT) is still debated. The aim of this study was to assess whether the administration of polyethylene glycol-leptin (PEG-OB) in a single supraphysiological dose has transcriptional effects on genes of WAT and to identify its target genes and functional pathways in WAT. Blood samples and WAT biopsies were obtained from 10 healthy nonobese men before treatment and 72 h after the PEG-OB injection, leading to an approximate 809-fold increase in circulating leptin. The WAT gene expression profile before and after the PEG-OB injection was compared using pangenomic microarrays. Functional gene annotations based on the gene ontology of the PEG-OB regulated genes were performed using both an 'in house' automated procedure and GenMAPP (Gene Microarray Pathway Profiler), designed for viewing and analyzing gene expression data in the context of biological pathways. Statistical analysis of microarray data revealed that PEG-OB had a predominantly down-regulatory effect on WAT gene expression, yielding 1,822 down-regulated and 100 up-regulated genes. Microarray data were validated using reverse transcription quantitative PCR. Functional gene annotations of PEG-OB regulated genes revealed that the functional class related to immunity and inflammation was among the pathways most strongly mobilized by PEG-OB in WAT. These genes are mainly expressed in the cells of the stroma-vascular fraction rather than in adipocytes. Our observations support the hypothesis that leptin could act on WAT, particularly on genes related to inflammation and immunity, which may suggest a novel leptin target pathway in human WAT.
1984-09-01
The Feasibility of a Decision Support System for the Determination of Source Selection Evaluation Criteria
Thesis, Air Force Institute of Technology, Wright-Patterson Air Force Base, Ohio
[Abstract fragment] ...is not only difficult and time consuming, but also crucial to the success of the project; the question is whether a decision support system designed...
Dai, Yilin; Guo, Ling; Li, Meng; Chen, Yi-Bu
2012-06-08
Microarray data analysis presents a significant challenge to researchers who are unable to use the powerful Bioconductor and its numerous tools due to their lack of knowledge of the R language. Among the few existing software programs that offer a graphic user interface to Bioconductor packages, none have implemented a comprehensive strategy to address the accuracy and reliability issues of microarray data analysis arising from the well-known probe design problems associated with many widely used microarray chips. There is also a lack of tools that would expedite the functional analysis of microarray results. We present Microarray Я US, an R-based graphical user interface that implements over a dozen popular Bioconductor packages to offer researchers a streamlined workflow for routine differential expression analysis of microarray data without the need to learn the R language. In order to enable a more accurate analysis and interpretation of microarray data, we incorporated the latest custom probe re-definition and re-annotation for Affymetrix and Illumina chips. A versatile microarray results output utility tool was also implemented for easy and fast generation of input files for over 20 of the most widely used functional analysis software programs. Coupled with a well-designed user interface, Microarray Я US leverages cutting-edge Bioconductor packages for researchers with no knowledge of the R language. It also enables a more reliable and accurate microarray data analysis and expedites downstream functional analysis of microarray results.
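The core of a routine two-group differential-expression screen of the kind such interfaces automate is a per-gene test statistic followed by ranking. A self-contained sketch using Welch's t statistic on invented log2 expression values (real pipelines use moderated statistics such as limma's, plus multiple-testing correction):

```python
def welch_t(a, b):
    """Welch two-sample t statistic for two lists of expression values."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / ((va / na + vb / nb) ** 0.5)

# Hypothetical log2 expression: tumour vs normal, three arrays per group.
genes = {
    "GENE_A": ([8.1, 8.3, 8.0], [5.0, 5.2, 4.9]),  # up in tumour
    "GENE_B": ([6.0, 6.1, 5.9], [6.1, 5.9, 6.0]),  # unchanged
}
ranked = sorted(genes, key=lambda g: abs(welch_t(*genes[g])), reverse=True)
```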
Decision support tools to support the operations of traffic management centers (TMC)
DOT National Transportation Integrated Search
2011-01-31
The goal of this project is to develop decision support tools to support traffic management operations based on collected intelligent transportation system (ITS) data. The project developments are in accordance with the needs of traffic management ce...
2013-01-01
Background Differential diagnosis between malignant follicular thyroid cancer (FTC) and benign follicular thyroid adenoma (FTA) is a great challenge for even an experienced pathologist and requires special effort. Molecular markers may potentially support a differential diagnosis between FTC and FTA in postoperative specimens. The purpose of this study was to derive molecular support for differential post-operative diagnosis, in the form of a simple multigene mRNA-based classifier that would differentiate between FTC and FTA tissue samples. Methods A molecular classifier was created based on a combined analysis of two microarray datasets (using 66 thyroid samples). The performance of the classifier was assessed using an independent dataset comprising 71 formalin-fixed paraffin-embedded (FFPE) samples (31 FTC and 40 FTA), which were analysed by quantitative real-time PCR (qPCR). In addition, three other microarray datasets (62 samples) were used to confirm the utility of the classifier. Results Five of 8 genes selected from training datasets (ELMO1, EMCN, ITIH5, KCNAB1, SLCO2A1) were amplified by qPCR in FFPE material from an independent sample set. Three other genes did not amplify in FFPE material, probably due to low abundance. All 5 analysed genes were downregulated in FTC compared to FTA. The sensitivity and specificity of the 5-gene classifier tested on the FFPE dataset were 71% and 72%, respectively. Conclusions The proposed approach could support histopathological examination: 5-gene classifier may aid in molecular discrimination between FTC and FTA in FFPE material. PMID:24099521
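Sensitivity and specificity, as reported for the 5-gene classifier above, come directly from the confusion counts of predicted versus true labels. A minimal sketch on invented calls, coding FTC (malignant) as 1 and FTA (benign) as 0:

```python
def sensitivity_specificity(truth, predicted):
    """truth/predicted: lists of 0 (benign, FTA) or 1 (malignant, FTC)."""
    tp = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(truth, predicted) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, predicted) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical calls on 10 samples (not the study's data).
truth     = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(truth, predicted)  # 0.6 and 0.8
```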
Library of molecular associations: curating the complex molecular basis of liver diseases.
Buchkremer, Stefan; Hendel, Jasmin; Krupp, Markus; Weinmann, Arndt; Schlamp, Kai; Maass, Thorsten; Staib, Frank; Galle, Peter R; Teufel, Andreas
2010-03-20
Systems biology approaches offer novel insights into the development of chronic liver diseases. Current genomic databases supporting systems biology analyses are mostly based on microarray data. Although these data often cover genome-wide expression, the validity of single microarray experiments remains questionable. However, systems biology approaches addressing the interactions of molecular networks require comprehensive and highly validated data. We have therefore generated the first comprehensive database of published molecular associations in human liver diseases. It is based on published PubMed abstracts and is aimed at closing the gap between the genome-wide coverage of low validity from microarray data and individual highly validated data from PubMed. After an initial text mining process, the extracted abstracts were all manually validated to confirm content and potential genetic associations and may therefore be highly trusted. All data were stored in a publicly available database, Library of Molecular Associations http://www.medicalgenomics.org/databases/loma/news, currently holding approximately 1260 confirmed molecular associations for chronic liver diseases such as HCC, CCC, liver fibrosis, NASH/fatty liver disease, AIH, PBC, and PSC. We furthermore transformed these data into a powerful resource for molecular liver research by connecting them to multiple biomedical information resources. Together, this is the first available database providing a comprehensive view and analysis options for published molecular associations in multiple liver diseases.
A simulation-optimization-based decision support tool for mitigating traffic congestion.
DOT National Transportation Integrated Search
2009-12-01
"Traffic congestion has grown considerably in the United States over the past twenty years. In this paper, we develop : a robust decision support tool based on simulation optimization to evaluate and recommend congestion-mitigation : strategies to tr...
Cell-of-Origin in Diffuse Large B-Cell Lymphoma: Are the Assays Ready for the Clinic?
Scott, David W
2015-01-01
Diffuse large B-cell lymphoma (DLBCL) is the most common lymphoma worldwide and consists of a heterogeneous group of cancers classified together on the basis of shared morphology, immunophenotype, and aggressive clinical behavior. It is now recognized that this malignancy comprises at least two distinct molecular subtypes identified by gene expression profiling: the activated B-cell-like (ABC) and the germinal center B-cell-like (GCB) groups-the cell-of-origin (COO) classification. These two groups have different genetic mutation landscapes, pathobiology, and outcomes following treatment. Evidence is accumulating that novel agents have selective activity in one or the other COO group, making COO a predictive biomarker. Thus, there is now a pressing need for accurate and robust methods to assign COO, to support clinical trials, and ultimately guide treatment decisions for patients. The "gold standard" methods for COO are based on gene expression profiling (GEP) of RNA from fresh frozen tissue using microarray technology, which is an impractical solution when formalin-fixed paraffin-embedded tissue (FFPET) biopsies are the standard diagnostic material. This review outlines the history of the COO classification before examining the practical implementation of COO assays applicable to FFPET biopsies. The immunohistochemistry (IHC)-based algorithms and gene expression-based assays suitable for the highly degraded RNA from FFPET are discussed. Finally, the technical and practical challenges that still need to be addressed are outlined before robust gene expression-based assays are used in the routine management of patients with DLBCL.
2010-01-01
Background The large amount of high-throughput genomic data has facilitated the discovery of the regulatory relationships between transcription factors and their target genes. While early methods for discovery of transcriptional regulation relationships from microarray data often focused on the high-throughput experimental data alone, more recent approaches have explored the integration of external knowledge bases of gene interactions. Results In this work, we develop an algorithm that provides improved performance in the prediction of transcriptional regulatory relationships by supplementing the analysis of microarray data with a new method of integrating information from an existing knowledge base. Using a well-known dataset of yeast microarrays and the Yeast Proteome Database, a comprehensive collection of known information of yeast genes, we show that knowledge-based predictions demonstrate better sensitivity and specificity in inferring new transcriptional interactions than predictions from microarray data alone. We also show that comprehensive, direct and high-quality knowledge bases provide better prediction performance. Comparison of our results with ChIP-chip data and growth fitness data suggests that our predicted genome-wide regulatory pairs in yeast are reasonable candidates for follow-up biological verification. Conclusion High quality, comprehensive, and direct knowledge bases, when combined with appropriate bioinformatic algorithms, can significantly improve the discovery of gene regulatory relationships from high throughput gene expression data. PMID:20122245
Seok, Junhee; Kaushal, Amit; Davis, Ronald W; Xiao, Wenzhong
2010-01-18
Applied Empiricism: Ensuring the Validity of Causal Response to Intervention Decisions
ERIC Educational Resources Information Center
Kilgus, Stephen P.; Collier-Meek, Melissa A.; Johnson, Austin H.; Jaffery, Rose
2014-01-01
School personnel make a variety of decisions within multitiered problem-solving frameworks, including the decision to assign a student to group-based support, to design an individualized support plan, or classify a student as eligible for special education. Each decision is founded upon a judgment regarding whether the student has responded to…
A fisheye viewer for microarray-based gene expression data
Wu, Min; Thao, Cheng; Mu, Xiangming; Munson, Ethan V
2006-01-01
Background Microarrays have been widely used to measure the relative amounts of every mRNA transcript from the genome in a single scan. Biologists have been accustomed to reading their experimental data directly from tables. However, microarray data are quite large and are stored in a series of files in a machine-readable format, so direct reading of the full data set is not feasible. The challenge is to design a user interface that allows biologists to usefully view large tables of raw microarray-based gene expression data. This paper presents one such interface – an electronic table (E-table) that uses fisheye distortion technology. Results The Fisheye Viewer for microarray-based gene expression data has been successfully developed to view MIAME data stored in the MAGE-ML format. The viewer can be downloaded from the project web site. The fisheye viewer was implemented in Java so that it could run on multiple platforms. We implemented the E-table by adapting JTable, a default table implementation in the Java Swing user interface library. Fisheye views use variable magnification to balance magnification for easy viewing and compression for maximizing the amount of data on the screen. Conclusion This Fisheye Viewer is a lightweight but useful tool for biologists to quickly overview the raw microarray-based gene expression data in an E-table. PMID:17038193
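The fisheye idea described above can be sketched as a simple height-assignment function: the focused row gets full magnification, and rows shrink with distance so the whole table still fits on screen. The function name and parameter values below are illustrative assumptions, not the viewer's actual settings.

```python
def fisheye_heights(n_rows, focus, max_h=24, min_h=2, falloff=4):
    """Assign a display height to each table row: maximum magnification
    at the focus row, decaying linearly with distance down to a
    compressed minimum so distant rows stay visible but small."""
    heights = []
    for i in range(n_rows):
        distance = abs(i - focus)
        heights.append(max(min_h, max_h - falloff * distance))
    return heights

# The focused row is fully magnified; its neighbours shrink gradually.
row_heights = fisheye_heights(9, focus=4)  # [8, 12, 16, 20, 24, 20, 16, 12, 8]
```

In a Swing implementation, such per-row heights would be applied to the adapted JTable; the point of the distortion is that total height stays bounded while the focus region remains readable.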
Mining subspace clusters from DNA microarray data using large itemset techniques.
Chang, Ye-In; Chen, Jiun-Rung; Tsai, Yueh-Chi
2009-05-01
Mining subspace clusters from DNA microarrays could help researchers identify those genes which commonly contribute to a disease, where a subspace cluster indicates a subset of genes whose expression levels are similar under a subset of conditions. Since in a DNA microarray the number of genes is far larger than the number of conditions, previously proposed algorithms, which compute the maximum dimension sets (MDSs) for every pair of genes, take a long time to mine subspace clusters. In this article, we propose the Large Itemset-Based Clustering (LISC) algorithm for mining subspace clusters. Instead of constructing MDSs for every pair of genes, we construct MDSs only for pairs of conditions. Then, we transform the task of finding the maximal possible gene sets into the problem of mining large itemsets from the condition-pair MDSs. Since we are only interested in subspace clusters with gene sets as large as possible, it is desirable to focus on those gene sets which have reasonably large support values in the condition-pair MDSs. Our simulation results show that the proposed algorithm needs less processing time than the previously proposed algorithms, which must construct gene-pair MDSs.
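The key inversion in LISC, computing maximum dimension sets per condition pair rather than per gene pair, can be illustrated with a small sketch. The similarity test used here (absolute expression difference within a tolerance) and the tolerance value are simplifying assumptions for illustration, not the paper's exact definition.

```python
def condition_pair_mds(expr, tol=1.0):
    """For each pair of conditions (columns), collect the set of genes
    (rows) whose expression values under the two conditions differ by
    at most tol. These condition-pair gene sets are the 'transactions'
    from which large itemsets of genes can then be mined."""
    n_genes, n_cond = len(expr), len(expr[0])
    mds = {}
    for a in range(n_cond):
        for b in range(a + 1, n_cond):
            mds[(a, b)] = {g for g in range(n_genes)
                           if abs(expr[g][a] - expr[g][b]) <= tol}
    return mds

# Toy matrix: 3 genes (rows) under 3 conditions (columns).
expr = [[1.0, 1.5, 9.0],
        [2.0, 2.2, 2.1],
        [5.0, 9.0, 5.3]]
mds = condition_pair_mds(expr, tol=1.0)  # e.g. mds[(0, 1)] == {0, 1}
```

Because there are far fewer conditions than genes, the number of pairs enumerated here is quadratic in conditions rather than in genes, which is the source of the claimed speedup.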
NED-IIS: An Intelligent Information System for Forest Ecosystem Management
W.D. Potter; S. Somasekar; R. Kommineni; H.M. Rauscher
1999-01-01
We view an Intelligent Information System (IIS) as composed of a unified knowledge base, database, and model base. The model base includes, for example, decision support models, forecasting models, and visualization models. In addition, we feel that the model base should include domain-specific problem-solving modules as well as decision support models. This, then,...
Transfection microarray and the applications.
Miyake, Masato; Yoshikawa, Tomohiro; Fujita, Satoshi; Miyake, Jun
2009-05-01
Microarray transfection has been extensively studied for high-throughput functional analysis of mammalian cells. However, efficiency and reproducibility are the critical issues for practical use. By using solid-phase transfection accelerators and a nano-scaffold, we provide a highly efficient and reproducible microarray-transfection device, the "transfection microarray". The device could be applied to the limited number of available primary cells and stem cells, not only for large-scale functional analysis but also for reporter-based time-lapse cellular event analysis.
Lung Cancer Assistant: a hybrid clinical decision support application for lung cancer care.
Sesen, M Berkan; Peake, Michael D; Banares-Alcantara, Rene; Tse, Donald; Kadir, Timor; Stanley, Roz; Gleeson, Fergus; Brady, Michael
2014-09-06
Multidisciplinary team (MDT) meetings are becoming the model of care for cancer patients worldwide. While MDTs have improved the quality of cancer care, the meetings impose substantial time pressure on the members, who generally attend several such MDTs. We describe Lung Cancer Assistant (LCA), a clinical decision support (CDS) prototype designed to assist the experts in treatment selection decisions in lung cancer MDTs. A novel feature of LCA is its ability to provide rule-based and probabilistic decision support within a single platform. The guideline-based CDS is based on clinical guideline rules, while the probabilistic CDS is based on a Bayesian network trained on the English Lung Cancer Audit Database (LUCADA). We assess rule-based and probabilistic recommendations based on their concordance with the treatments recorded in LUCADA. Our results reveal that the guideline rule-based recommendations perform well in simulating the recorded treatments, with exact and partial concordance rates of 0.57 and 0.79, respectively. On the other hand, the exact and partial concordance rates achieved with probabilistic recommendations are poorer, at 0.27 and 0.76. However, probabilistic decision support fulfils a complementary role in providing accurate survival estimations. Compared to recorded treatments, both CDS approaches promote higher resection rates and multimodality treatments.
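Concordance rates of the kind reported above can be computed per patient in a straightforward way. The sketch below assumes each recommendation and each recorded treatment is a set of treatment modalities, with exact concordance meaning the sets match and partial concordance meaning they share at least one modality; this is a plausible reading for illustration, not necessarily the paper's exact definition.

```python
def concordance_rates(recommended, recorded):
    """Fraction of patients whose recommended treatment set exactly
    matches the recorded one, and fraction sharing at least one
    treatment modality with it."""
    pairs = list(zip(recommended, recorded))
    exact = sum(set(a) == set(b) for a, b in pairs)
    partial = sum(bool(set(a) & set(b)) for a, b in pairs)
    return exact / len(pairs), partial / len(pairs)

# Toy example: 4 patients, treatments as modality sets.
rec = [{"surgery"}, {"chemo", "radio"}, {"chemo"}, {"radio"}]
obs = [{"surgery"}, {"chemo"}, {"surgery"}, {"radio"}]
exact_rate, partial_rate = concordance_rates(rec, obs)  # 0.5, 0.75
```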
Development of Asset Management Decision Support Tools for Power Equipment
NASA Astrophysics Data System (ADS)
Okamoto, Tatsuki; Takahashi, Tsuguhiro
Development of asset management decision support tools has become very active as a way to reduce the maintenance costs of power equipment following the liberalization of the power business. This article reviews some aspects of the present status of asset management decision support tool development for power equipment, based on papers published in international conferences, domestic conventions, and several journals.
A Decision Support Framework For Science-Based, Multi-Stakeholder Deliberation: A Coral Reef Example
We present a decision support framework for science-based assessment and multi-stakeholder deliberation. The framework consists of two parts: a DPSIR (Drivers-Pressures-States-Impacts-Responses) analysis to identify the important causal relationships among anthropogenic environ...
Tu, Samson W; Hrabak, Karen M; Campbell, James R; Glasgow, Julie; Nyman, Mark A; McClure, Robert; McClay, James; Abarbanel, Robert; Mansfield, James G; Martins, Susana M; Goldstein, Mary K; Musen, Mark A
2006-01-01
Developing computer-interpretable clinical practice guidelines (CPGs) to provide decision support for guideline-based care is an extremely labor-intensive task. In the EON/ATHENA and SAGE projects, we formulated substantial portions of CPGs as computable statements that express declarative relationships between patient conditions and possible interventions. We developed query and expression languages that allow a decision-support system (DSS) to evaluate these statements in specific patient situations. A DSS can use these guideline statements in multiple ways, including: (1) as inputs for determining preferred alternatives in decision-making, and (2) as a way to provide targeted commentaries in the clinical information system. The use of these declarative statements significantly reduces the modeling expertise and effort required to create and maintain computer-interpretable knowledge bases for decision-support purposes. We discuss possible implications for sharing of such knowledge bases.
Development of a Digital Microarray with Interferometric Reflectance Imaging
NASA Astrophysics Data System (ADS)
Sevenler, Derin
This dissertation describes a new type of molecular assay for nucleic acids and proteins. We call this technique a digital microarray since it is conceptually similar to conventional fluorescence microarrays, yet it performs enumerative ('digital') counting of the number of captured molecules. Digital microarrays are approximately 10,000-fold more sensitive than fluorescence microarrays, yet maintain all of the strengths of the platform, including low cost and high multiplexing (i.e., many different tests on the same sample simultaneously). Digital microarrays use gold nanorods to label the captured target molecules. Each gold nanorod on the array is individually detected based on its light scattering, with an interferometric microscopy technique called SP-IRIS. Our optimized high-throughput version of SP-IRIS is able to scan a typical array of 500 spots in less than 10 minutes. Digital DNA microarrays may have utility in applications where sequencing is prohibitively expensive or slow. As an example, we describe a digital microarray assay for gene expression markers of bacterial drug resistance.
García-Sáez, Gema; Rigla, Mercedes; Martínez-Sarriegui, Iñaki; Shalom, Erez; Peleg, Mor; Broens, Tom; Pons, Belén; Caballero-Ruíz, Estefanía; Gómez, Enrique J; Hernando, M Elena
2014-03-01
The risks associated with gestational diabetes (GD) can be reduced with an active treatment able to improve glycemic control. Advances in mobile health can provide new patient-centric models for GD to create personalized health care services, increase patient independence and improve patients' self-management capabilities, and potentially improve their treatment compliance. In these models, decision-support functions play an essential role. The telemedicine system MobiGuide provides personalized medical decision support for GD patients that is based on computerized clinical guidelines and adapted to a mobile environment. The patient's access to the system is supported by a smartphone-based application that enhances the efficiency and ease of use of the system. We formalized the GD guideline into a computer-interpretable guideline (CIG). We identified several workflows that provide decision-support functionalities to patients and 4 types of personalized advice to be delivered through a mobile application at home, which is a preliminary step to providing decision-support tools in a telemedicine system: (1) therapy, to help patients to comply with medical prescriptions; (2) monitoring, to help patients to comply with monitoring instructions; (3) clinical assessment, to inform patients about their health conditions; and (4) upcoming events, to deal with patients' personal context or special events. The whole process to specify patient-oriented decision support functionalities ensures that it is based on the knowledge contained in the GD clinical guideline and thus follows evidence-based recommendations but at the same time is patient-oriented, which could enhance clinical outcomes and patients' acceptance of the whole system. © 2014 Diabetes Technology Society.
Zhang, Mingyuan; Velasco, Ferdinand T.; Musser, R. Clayton; Kawamoto, Kensaku
2013-01-01
Enabling clinical decision support (CDS) across multiple electronic health record (EHR) systems has been a desired but largely unattained aim of clinical informatics, especially in commercial EHR systems. A potential opportunity for enabling such scalable CDS is to leverage vendor-supported, Web-based CDS development platforms along with vendor-supported application programming interfaces (APIs). Here, we propose a potential staged approach for enabling such scalable CDS, starting with the use of custom EHR APIs and moving towards standardized EHR APIs to facilitate interoperability. We analyzed three commercial EHR systems for their capabilities to support the proposed approach, and we implemented prototypes in all three systems. Based on these analyses and prototype implementations, we conclude that the approach proposed is feasible, already supported by several major commercial EHR vendors, and potentially capable of enabling cross-platform CDS at scale. PMID:24551426
LS Bound based gene selection for DNA microarray data.
Zhou, Xin; Mao, K Z
2005-04-15
One problem with discriminant analysis of DNA microarray data is that each sample is represented by quite a large number of genes, and many of them are irrelevant, insignificant or redundant to the discriminant problem at hand. Methods for selecting important genes are, therefore, of much significance in microarray data analysis. In the present study, a new criterion, called the LS Bound measure, is proposed to address the gene selection problem. The LS Bound measure is derived from the leave-one-out procedure of LS-SVMs (least squares support vector machines); as an upper bound on leave-one-out classification results, it reflects to some extent the generalization performance of gene subsets. We applied the LS Bound measure to gene selection on two benchmark microarray datasets: colon cancer and leukemia. We also compared the LS Bound measure with other evaluation criteria, including the well-known Fisher's ratio and the Mahalanobis class separability measure, and with other published gene selection algorithms, including Weighting factor and SVM Recursive Feature Elimination. The strength of the LS Bound measure is that it provides gene subsets leading to more accurate classification results than the filter method while keeping its computational complexity at the level of a filter method. A companion website can be accessed at http://www.ntu.edu.sg/home5/pg02776030/lsbound/. The website contains: (1) the source code of the gene selection algorithm; (2) the complete set of tables and figures regarding the experimental study; (3) proof of the inequality (9). ekzmao@ntu.edu.sg.
Lynch, Abigail J.; Taylor, William W.; McCright, Aaron M.
2016-01-01
Decision support tools can aid decision making by systematically incorporating information, accounting for uncertainties, and facilitating evaluation between alternatives. Without user buy-in, however, decision support tools can fail to influence decision-making processes. We surveyed fishery researchers, managers, and fishers affiliated with the Lake Whitefish Coregonus clupeaformis fishery in the 1836 Treaty Waters of Lakes Huron, Michigan, and Superior to assess opinions of current and future management needs to identify barriers to, and opportunities for, developing a decision support tool based on Lake Whitefish recruitment projections with climate change. Approximately 64% of 39 respondents were satisfied with current management, and nearly 85% agreed that science was well integrated into management programs. Though decision support tools can facilitate science integration into management, respondents suggest that they face significant implementation barriers, including lack of political will to change management and perceived uncertainty in decision support outputs. Recommendations from this survey can inform development of decision support tools for fishery management in the Great Lakes and other regions.
Dynamic association rules for gene expression data analysis.
Chen, Shu-Chuan; Tsai, Tsung-Hsien; Chung, Cheng-Han; Li, Wen-Hsiung
2015-10-14
The purpose of gene expression analysis is to look for the association between regulation of gene expression levels and phenotypic variations. This association based on gene expression profiles has been used to determine whether the induction/repression of genes corresponds to phenotypic variations, including cell regulation, clinical diagnoses and drug development. Statistical analyses of microarray data have been developed to resolve the gene selection issue, but these methods do not inform us of the causality between genes and phenotypes. In this paper, we propose the dynamic association rule algorithm (DAR algorithm), which helps one to efficiently select a subset of significant genes for subsequent analysis. The DAR algorithm is based on association rules from market basket analysis in marketing. We first propose a statistical way, based on constructing a one-sided confidence interval and hypothesis testing, to determine whether an association rule is meaningful. Based on the proposed statistical method, we then developed the DAR algorithm for gene expression data analysis. The method was applied to analyze four microarray datasets and one Next Generation Sequencing (NGS) dataset: the Mice Apo A1 dataset, the whole-genome expression dataset of mouse embryonic stem cells, expression profiling of the bone marrow of leukemia patients, the Microarray Quality Control (MAQC) dataset, and the RNA-seq dataset of a mouse genomic imprinting study. A comparison of the proposed method with the t-test was conducted on the expression profiling of the bone marrow of leukemia patients. We developed a statistical way, based on the concept of a confidence interval, to determine the minimum support and minimum confidence for mining association relationships among items. With the minimum support and minimum confidence, one can find significant rules in a single step. The four gene expression datasets showed that the proposed DAR algorithm not only identified a set of differentially expressed genes that largely agreed with those of other methods, but also provided an efficient and accurate way to find influential genes of a disease. In this paper, the well-established association rule mining technique from marketing has been successfully modified to determine the minimum support and minimum confidence based on the concept of confidence intervals and hypothesis testing. It can be applied to gene expression data to mine significant association rules between gene regulation and phenotype. The proposed DAR algorithm provides an efficient way to find influential genes that underlie phenotypic variance.
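The statistical filtering idea, accepting a rule only when its observed support clears a one-sided confidence bound, can be sketched as follows. The normal-approximation bound around the independence baseline and the z = 1.645 one-sided 95% level are illustrative stand-ins for the paper's actual construction.

```python
from itertools import combinations
from math import sqrt

def significant_rules(transactions, z=1.645):
    """Toy association-rule filter: keep item pairs whose observed
    joint support exceeds a one-sided normal-approximation upper bound
    on the support expected if the items were independent."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    supp = {i: sum(i in t for t in transactions) / n for i in items}
    rules = []
    for a, b in combinations(items, 2):
        s_ab = sum(a in t and b in t for t in transactions) / n
        base = supp[a] * supp[b]                      # independence baseline
        bound = base + z * sqrt(base * (1 - base) / n)
        if s_ab > bound:
            rules.append((a, b, s_ab))
    return rules

# g1 and g2 co-occur in 90 of 100 "samples"; g3 never joins them.
tx = [{"g1", "g2"}] * 90 + [{"g3"}] * 10
rules = significant_rules(tx)  # [('g1', 'g2', 0.9)]
```

The same skeleton applies to gene expression once each sample is discretized into a transaction of up/down-regulated gene items.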
[Oligonucleotide microarray for subtyping avian influenza virus].
Xueqing, Han; Xiangmei, Lin; Yihong, Hou; Shaoqiang, Wu; Jian, Liu; Lin, Mei; Guangle, Jia; Zexiao, Yang
2008-09-01
Avian influenza viruses are important human and animal respiratory pathogens, and rapid diagnosis of novel emerging avian influenza viruses is vital for effective global influenza surveillance. We developed an oligonucleotide microarray-based method for subtyping all avian influenza viruses (16 HA and 9 NA subtypes). In total, 25 pairs of primers specific for the different subtypes and 1 pair of universal primers were carefully designed based on the genomic sequences of influenza A viruses retrieved from the GenBank database. Several multiplex RT-PCR methods were then developed, and the target cDNAs of the 25 subtype viruses were amplified by RT-PCR or overlapping PCR for evaluating the microarray. A further 52 oligonucleotide probes specific for all 25 subtype viruses were designed within the amplified target cDNA domains, according to published gene sequences of avian influenza viruses, and a microarray for subtyping influenza A virus was developed. Its specificity and sensitivity were then validated using different subtype strains and 2653 samples from 49 different areas. The results showed that all subtypes of influenza virus could be identified simultaneously on this microarray with high sensitivity, down to 2.47 pfu/mL of virus or 2.5 ng of target DNA. Furthermore, there was no cross-reaction with other avian respiratory viruses. An oligonucleotide microarray-based strategy for detection of avian influenza viruses has thus been developed. Such a diagnostic microarray will be useful in discovering and identifying all subtypes of avian influenza virus.
Armour, Christine M; Dougan, Shelley Danielle; Brock, Jo-Ann; Chari, Radha; Chodirker, Bernie N; DeBie, Isabelle; Evans, Jane A; Gibson, William T; Kolomietz, Elena; Nelson, Tanya N; Tihy, Frédérique; Thomas, Mary Ann; Stavropoulos, Dimitri J
2018-01-01
Background The aim of this guideline is to provide updated recommendations for Canadian genetic counsellors, medical geneticists, maternal fetal medicine specialists, clinical laboratory geneticists and other practitioners regarding the use of chromosomal microarray analysis (CMA) for prenatal diagnosis. This guideline replaces the 2011 Society of Obstetricians and Gynaecologists of Canada (SOGC)-Canadian College of Medical Geneticists (CCMG) Joint Technical Update. Methods A multidisciplinary group consisting of medical geneticists, genetic counsellors, maternal fetal medicine specialists and clinical laboratory geneticists was assembled to review existing literature and guidelines for use of CMA in prenatal care and to make recommendations relevant to the Canadian context. The statement was circulated for comment to the CCMG membership-at-large for feedback and, following incorporation of feedback, was approved by the CCMG Board of Directors on 5 June 2017 and the SOGC Board of Directors on 19 June 2017. Results and conclusions Recommendations include but are not limited to: (1) CMA should be offered following a normal rapid aneuploidy screen when multiple fetal malformations are detected (II-1A) or for nuchal translucency (NT) ≥3.5 mm (II-2B) (recommendation 1); (2) a professional with expertise in prenatal chromosomal microarray analysis should provide genetic counselling to obtain informed consent, discuss the limitations of the methodology, obtain the parental decisions for return of incidental findings (II-2A) (recommendation 4) and provide post-test counselling for reporting of test results (III-A) (recommendation 9); (3) the resolution of chromosomal microarray analysis should be similar to postnatal microarray platforms to ensure small pathogenic variants are detected. 
To minimise the reporting of uncertain findings, it is recommended that variants of unknown significance (VOUS) smaller than 500 Kb deletion or 1 Mb duplication not be routinely reported in the prenatal context. Additionally, VOUS above these cut-offs should only be reported if there is significant supporting evidence that deletion or duplication of the region may be pathogenic (III-B) (recommendation 5); (4) secondary findings associated with a medically actionable disorder with childhood onset should be reported, whereas variants associated with adult-onset conditions should not be reported unless requested by the parents or disclosure can prevent serious harm to family members (III-A) (recommendation 8). The working group recognises that there is variability across Canada in delivery of prenatal testing, and these recommendations were developed to promote consistency and provide a minimum standard for all provinces and territories across the country (recommendation 9). PMID:29496978
Chen, Zhenyu; Li, Jianping; Wei, Liwei
2007-10-01
Recently, gene expression profiling using microarray techniques has been shown to be a promising tool to improve the diagnosis and treatment of cancer. Gene expression data contain a high level of noise, and the number of genes overwhelms the number of available samples. This poses a great challenge for machine learning and statistical techniques. The support vector machine (SVM) has been successfully used to classify gene expression data from cancer tissue. In the medical field, it is crucial to deliver the user a transparent decision process, and how to explain the computed solutions and present the extracted knowledge becomes a main obstacle for SVM. A multiple kernel support vector machine (MK-SVM) scheme, consisting of feature selection, rule extraction and prediction modeling, is proposed to improve the explanation capacity of SVM. In this scheme, we show that the feature selection problem can be translated into an ordinary multiple-parameter learning problem, and a shrinkage approach, 1-norm-based linear programming, is proposed to obtain the sparse parameters and the corresponding selected features. We propose a novel rule extraction approach that uses the information provided by the separating hyperplane and support vectors to improve the generalization capacity and comprehensibility of the rules and reduce the computational complexity. Two public gene expression datasets, a leukemia dataset and a colon tumor dataset, are used to demonstrate the performance of this approach. Using the small number of selected genes, MK-SVM achieves encouraging classification accuracy: more than 90% for both datasets. Moreover, very simple rules with linguistic labels are extracted. The rule sets have high diagnostic power because of their good classification performance.
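The core mechanism here, a 1-norm penalty driving most gene weights exactly to zero so that the surviving genes are the selected features, can be sketched with a proximal-gradient stand-in (squared hinge loss plus soft-thresholding) rather than the paper's actual linear program; all names, data, and parameter values below are illustrative.

```python
import numpy as np

def l1_linear_select(X, y, lam=0.3, lr=0.1, epochs=300):
    """Sparse linear classifier: proximal gradient on a squared hinge
    loss with an L1 penalty. Genes whose weight survives the repeated
    soft-thresholding are the selected features."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(epochs):
        slack = np.maximum(1.0 - y * (X @ w), 0.0)          # hinge slack per sample
        grad = -(X * (y * slack)[:, None]).sum(axis=0) * 2.0 / n
        w -= lr * grad                                       # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return np.flatnonzero(np.abs(w) > 1e-6)

# Toy data: gene 0 perfectly tracks the class label; genes 1-3 are
# orthogonal distractors. Only gene 0 should survive the L1 penalty.
y = np.array([1, 1, 1, 1, -1, -1, -1, -1], dtype=float)
X = np.column_stack([y,
                     [1, -1, 1, -1, 1, -1, 1, -1],
                     [1, 1, -1, -1, 1, 1, -1, -1],
                     [1, -1, -1, 1, 1, -1, -1, 1]]).astype(float)
selected = l1_linear_select(X, y)  # array([0])
```

The same sparsity-inducing effect is what lets the full MK-SVM scheme report a small gene subset alongside its rules.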
Combat Service Support (CSS) Enabler Functional Assessment (CEFA)
1998-07-01
CDR), Combined Arms Support Command (CASCOM) with a tool to aid decision making related to mitigating E/I peacetime (programmatic) and wartime risks...not be fielded by Fiscal Year (FY) 10. Based on their estimates, any decisions , especially reductions in manpower, which rely on the existence of the E...Support (CSS) enablers/initiatives (E/I), thereby providing the Commander (CDR), Combined Arms Support Command (CASCOM) with a tool to aid decision
Controlling false-negative errors in microarray differential expression analysis: a PRIM approach.
Cole, Steve W; Galic, Zoran; Zack, Jerome A
2003-09-22
Theoretical considerations suggest that current microarray screening algorithms may fail to detect many true differences in gene expression (Type II analytic errors). We assessed 'false negative' error rates in differential expression analyses by conventional linear statistical models (e.g. t-test), microarray-adapted variants (e.g. SAM, Cyber-T), and a novel strategy based on hold-out cross-validation. The latter approach employs the machine-learning algorithm Patient Rule Induction Method (PRIM) to infer minimum thresholds for reliable change in gene expression from Boolean conjunctions of fold-induction and raw fluorescence measurements. Monte Carlo analyses based on four empirical data sets show that conventional statistical models and their microarray-adapted variants overlook more than 50% of genes showing significant up-regulation. Conjoint PRIM prediction rules recover approximately twice as many differentially expressed transcripts while maintaining strong control over false-positive (Type I) errors. As a result, experimental replication rates increase and total analytic error rates decline. RT-PCR studies confirm that gene inductions detected by PRIM but overlooked by other methods represent true changes in mRNA levels. PRIM-based conjoint inference rules thus represent an improved strategy for high-sensitivity screening of DNA microarrays. Freestanding JAVA application at http://microarray.crump.ucla.edu/focus
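The Patient Rule Induction Method relied on above works by "peeling": repeatedly trimming a small fraction of observations from one end of a variable's range whenever doing so raises the mean outcome inside the remaining box. A minimal one-variable toy with made-up fold-change data (the real method peels jointly over fold-induction and raw fluorescence):

```python
# Minimal one-variable sketch of PRIM "peeling" (the real method peels
# jointly over several variables): repeatedly trim an alpha-fraction from
# whichever end of the variable's range leaves the higher mean outcome,
# until support runs out or no peel improves the box mean.
def prim_peel(x, target, alpha=0.1, min_support=0.2):
    pts = sorted(zip(x, target))                # (x, outcome) pairs
    mean = lambda p: sum(t for _, t in p) / len(p)
    n0 = len(pts)
    while len(pts) > min_support * n0:
        k = max(1, int(alpha * len(pts)))
        low_peeled, high_peeled = pts[k:], pts[:-k]
        cand = max((low_peeled, high_peeled), key=mean)
        if mean(cand) <= mean(pts):             # no improvement: stop
            break
        pts = cand
    return pts[0][0], pts[-1][0]                # box bounds on x

# Outcome is high only when x (say, log fold-change) exceeds 1.0.
x = [i / 10 for i in range(-20, 21)]
t = [1.0 if xi > 1.0 else 0.0 for xi in x]
lo, hi = prim_peel(x, t)
print(lo, hi)  # the lower box bound should settle just above 1.0
```

The returned lower bound is the kind of data-driven threshold for "reliable change" that the abstract describes.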
Huerta, Mario; Munyi, Marc; Expósito, David; Querol, Enric; Cedano, Juan
2014-06-15
The number of microarrays performed by scientific teams grows exponentially. These microarray data could be useful for researchers around the world, but unfortunately they are underused. To fully exploit these data, it is necessary (i) to extract them from a repository of high-throughput gene expression data such as Gene Expression Omnibus (GEO) and (ii) to make the data from different microarrays comparable with tools that are easy for scientists to use. We have developed these two solutions in our server, implementing a database of microarray marker genes (Marker Genes Data Base). This database contains the marker genes of all GEO microarray datasets and is updated monthly with the new microarrays from GEO. Thus, researchers can see whether the marker genes of their microarray are marker genes in other microarrays in the database, expanding the analysis of their microarray to the rest of the public microarrays. This solution helps not only to corroborate the conclusions regarding a researcher's microarray but also to identify the phenotype of different subsets of individuals under investigation, to frame the results with microarray experiments from other species, pathologies or tissues, to search for drugs that promote the transition between the studied phenotypes, to detect undesirable side effects of the treatment applied, etc. Thus, researchers can quickly add relevant information to their studies from all of the previous analyses performed in other studies, as long as they have been deposited in public repositories. Marker-gene database tool: http://ibb.uab.es/mgdb © The Author 2014. Published by Oxford University Press.
Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses.
Syrowatka, Ania; Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn
2016-01-26
Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. 
As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than others. Integration of content control improved quality of decision making (SMD 0.59 vs 0.23 for knowledge; SMD 0.39 vs 0.29 for decisional conflict). In contrast, tailoring reduced quality of decision making (SMD 0.40 vs 0.71 for knowledge; SMD 0.25 vs 0.52 for decisional conflict). Similarly, patient narratives also reduced quality of decision making (SMD 0.43 vs 0.65 for knowledge; SMD 0.17 vs 0.46 for decisional conflict). Results were varied for different types of explicit values clarification, feedback, and social support. Integration of media-rich or interactive features into computer-based decision aids can improve quality of preference-sensitive decision making. However, this is an emerging field with limited evidence to guide use. The systematic review and thematic synthesis identified features that have been integrated into available computer-based decision aids, in an effort to facilitate reporting of these features and to promote integration of such features into decision aids. The meta-analyses and associated subgroup analyses provide preliminary evidence to support integration of specific features into future decision aids. Further research can focus on clarifying independent contributions of specific features through experimental designs and refining the designs of features to improve effectiveness.
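The pooled standardized mean differences reported above come from inverse-variance weighting; a fixed-effect version of that arithmetic is only a few lines. The SMDs and variances below are invented for illustration and are not the review's data:

```python
# Illustrative fixed-effect pooling of standardized mean differences
# (SMDs) by inverse-variance weighting; the input numbers are invented.
def pooled_smd(smds, variances):
    """Fixed-effect pooled SMD: each study weighted by 1/variance."""
    weights = [1.0 / v for v in variances]
    return sum(w * d for w, d in zip(weights, smds)) / sum(weights)

smd = [0.59, 0.23, 0.40]   # hypothetical per-study SMDs
var = [0.04, 0.02, 0.08]   # hypothetical sampling variances
print(round(pooled_smd(smd, var), 3))
```

Studies with smaller sampling variance (typically larger trials) pull the pooled estimate toward themselves, which is why the pooled value sits closest to the most precise study.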
Drost, Derek R; Novaes, Evandro; Boaventura-Novaes, Carolina; Benedict, Catherine I; Brown, Ryan S; Yin, Tongming; Tuskan, Gerald A; Kirst, Matias
2009-06-01
Microarrays have demonstrated significant power for genome-wide analyses of gene expression, and recently have also revolutionized the genetic analysis of segregating populations by genotyping thousands of loci in a single assay. Although microarray-based genotyping approaches have been successfully applied in yeast and several inbred plant species, their power has not been proven in an outcrossing species with extensive genetic diversity. Here we have developed methods for high-throughput microarray-based genotyping in such species using a pseudo-backcross progeny of 154 individuals of Populus trichocarpa and P. deltoides analyzed with long-oligonucleotide in situ-synthesized microarray probes. Our analysis resulted in high-confidence genotypes for 719 single-feature polymorphism (SFP) and 1014 gene expression marker (GEM) candidates. Using these genotypes and an established microsatellite (SSR) framework map, we produced a high-density genetic map comprising over 600 SFPs, GEMs and SSRs. The abundance of gene-based markers allowed us to localize over 35 million base pairs of previously unplaced whole-genome shotgun (WGS) scaffold sequence to putative locations in the genome of P. trichocarpa. A high proportion of sampled scaffolds could be verified for their placement with independently mapped SSRs, demonstrating the previously un-utilized power that high-density genotyping can provide in the context of map-based WGS sequence reassembly. Our results provide a substantial contribution to the continued improvement of the Populus genome assembly, while demonstrating the feasibility of microarray-based genotyping in a highly heterozygous population. The strategies presented are applicable to genetic mapping efforts in all plant species with similarly high levels of genetic diversity.
Screening Mammalian Cells on a Hydrogel: Functionalized Small Molecule Microarray.
Zhu, Biwei; Jiang, Bo; Na, Zhenkun; Yao, Shao Q
2017-01-01
Mammalian cell-based microarray technology has gained wide attention for its plethora of promising applications. The platform is able to provide simultaneous information on multiple parameters for a given target, or even multiple target proteins, in a complex biological system. Here we describe the preparation of mammalian cell-based microarrays that selectively capture human prostate cancer cells (PC-3). This platform was then used for controlled drug release and for measuring the associated drug effects on these cancer cells.
Yılmaz Isıkhan, Selen; Karabulut, Erdem; Alpar, Celal Reha
2016-01-01
Background/Aim. Evaluating the success of dose prediction based on genetic or clinical data has substantially advanced recently. The aim of this study is to predict various clinical dose values from DNA gene expression datasets using data mining techniques. Materials and Methods. Eleven real gene expression datasets containing dose values were included. First, important genes for dose prediction were selected using iterative sure independence screening. Then, the performances of regression trees (RTs), support vector regression (SVR), RT bagging, SVR bagging, and RT boosting were examined. Results. The results demonstrated that a regression-based feature selection method substantially reduced the number of irrelevant genes from the raw datasets. Overall, the best prediction performance in nine of 11 datasets was achieved using SVR; the second most accurate performance was provided by a gradient-boosting machine (GBM). Conclusion. Analysis of various dose values based on microarray gene expression data identified common genes found in our study and the referenced studies. According to our findings, SVR and GBM can be good predictors of dose-gene datasets. Another finding of the study was that a sample size of n = 25 is a cutoff point for RT bagging to outperform a single RT.
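The sure independence screening step mentioned above ranks genes by their marginal association with the dose and discards the rest; one non-iterative round can be sketched as follows, with invented expression vectors (the study's iterative variant re-screens after adjusting for genes already selected):

```python
# One non-iterative round of sure independence screening, sketched with
# invented data: rank genes by |marginal Pearson correlation| with the
# dose and keep only the top few.
def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def sis(genes, dose, keep=2):
    """genes: {name: expression vector}; return the top-|r| gene names."""
    return sorted(genes, key=lambda g: -abs(pearson(genes[g], dose)))[:keep]

dose = [1.0, 2.0, 3.0, 4.0, 5.0]
genes = {
    "gene_a": [1.1, 2.0, 2.9, 4.2, 5.1],       # tracks dose
    "gene_b": [5.0, 1.0, 4.0, 2.0, 3.0],       # unrelated
    "gene_c": [-1.0, -2.1, -2.9, -4.0, -5.2],  # anti-correlated
}
top = sis(genes, dose)
print(top)  # the two dose-associated genes should survive screening
```

Using the absolute correlation keeps both positively and negatively dose-associated genes, which matters when downstream models (SVR, boosting) can exploit either direction.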
Profiling In Situ Microbial Community Structure with an Amplification Microarray
Knickerbocker, Christopher; Bryant, Lexi; Golova, Julia; Wiles, Cory; Williams, Kenneth H.; Peacock, Aaron D.; Long, Philip E.
2013-01-01
The objectives of this study were to unify amplification, labeling, and microarray hybridization chemistries within a single, closed microfluidic chamber (an amplification microarray) and verify technology performance on a series of groundwater samples from an in situ field experiment designed to compare U(VI) mobility under conditions of various alkalinities (as HCO3−) during stimulated microbial activity accompanying acetate amendment. Analytical limits of detection were between 2 and 200 cell equivalents of purified DNA. Amplification microarray signatures were well correlated with 16S rRNA-targeted quantitative PCR results and hybridization microarray signatures. The succession of the microbial community was evident with and consistent between the two microarray platforms. Amplification microarray analysis of acetate-treated groundwater showed elevated levels of iron-reducing bacteria (Flexibacter, Geobacter, Rhodoferax, and Shewanella) relative to the average background profile, as expected. Identical molecular signatures were evident in the transect treated with acetate plus NaHCO3, but at much lower signal intensities and with a much more rapid decline (to nondetection). Azoarcus, Thaurea, and Methylobacterium were responsive in the acetate-only transect but not in the presence of bicarbonate. Observed differences in microbial community composition or response to bicarbonate amendment likely had an effect on measured rates of U reduction, with higher rates probable in the part of the field experiment that was amended with bicarbonate. The simplification in microarray-based work flow is a significant technological advance toward entirely closed-amplicon microarray-based tests and is generally extensible to any number of environmental monitoring applications. PMID:23160129
DesAutels, Spencer J; Fox, Zachary E; Giuse, Dario A; Williams, Annette M; Kou, Qing-Hua; Weitkamp, Asli; Patel, Neal R; Bettinsoli Giuse, Nunzia
2016-01-01
Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. Having a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems.
MacDonald-Wilson, Kim L; Hutchison, Shari L; Karpov, Irina; Wittman, Paul; Deegan, Patricia E
2017-04-01
Individual involvement in treatment decisions with providers, often through the use of decision support aids, improves quality of care. This study investigates an implementation strategy to bring decision support to community mental health centers (CMHC). Fifty-two CMHCs implemented a decision support toolkit supported by a 12-month learning collaborative using the Breakthrough Series model. Participation in learning collaborative activities was high, indicating feasibility of the implementation model. Progress by staff in meeting process aims around utilization of components of the toolkit improved significantly over time (p < .0001). Survey responses by individuals in service corroborate successful implementation. Community-based providers were able to successfully implement decision support in mental health services as evidenced by improved process outcomes and sustained practices over 1 year through the structure of the learning collaborative model.
Shin, Hwa Hui; Hwang, Byeong Hee; Seo, Jeong Hyun; Cha, Hyung Joon
2014-01-01
It is important to rapidly and selectively detect and analyze pathogenic Salmonella enterica subsp. enterica in contaminated food to reduce the morbidity and mortality of Salmonella infection and to guarantee food safety. In the present work, we developed an oligonucleotide microarray containing duplicate specific capture probes based on the carB gene, which encodes the carbamoyl phosphate synthetase large subunit, as a competent biomarker evaluated by genetic analysis to selectively and efficiently detect and discriminate three S. enterica subsp. enterica serotypes: Choleraesuis, Enteritidis, and Typhimurium. Using the developed microarray system, three serotype targets were successfully analyzed in a range as low as 1.6 to 3.1 nM and were specifically discriminated from each other without nonspecific signals. In addition, the constructed microarray did not have cross-reactivity with other common pathogenic bacteria and even enabled the clear discrimination of the target Salmonella serotype from a bacterial mixture. Therefore, these results demonstrated that our novel carB-based oligonucleotide microarray can be used as an effective and specific detection system for S. enterica subsp. enterica serotypes. PMID:24185846
Development of an evidence-based decision pathway for vestibular schwannoma treatment options.
Linkov, Faina; Valappil, Benita; McAfee, Jacob; Goughnour, Sharon L; Hildrew, Douglas M; McCall, Andrew A; Linkov, Igor; Hirsch, Barry; Snyderman, Carl
To integrate multiple sources of clinical information with patient feedback to build an evidence-based decision support model that facilitates treatment selection for patients suffering from vestibular schwannomas (VS). This was a mixed-methods study utilizing focus group and survey methodology to solicit feedback from patients on factors important for making treatment decisions. Two 90-minute focus groups were conducted by an experienced facilitator. Previously diagnosed VS patients were recruited by clinical investigators at the University of Pittsburgh Medical Center (UPMC). Classical content analysis was used for focus group data analysis. Providers were recruited from practices within the UPMC system and were surveyed using Delphi methods. This information can provide a basis for a multi-criteria decision analysis (MCDA) framework for developing a treatment decision support system for patients with VS. Eight themes were derived from these data (focus groups + surveys): doctor/health care system, side effects, effectiveness of treatment, anxiety, mortality, family/other people, quality of life, and post-operative symptoms. These data, as well as feedback from physicians, were utilized in building a multi-criteria decision model. The study illustrated the steps involved in the development of a decision support model that integrates evidence-based data and patient values to select treatment alternatives. Studies focusing on the actual development of the decision support technology for this group of patients are needed, as decisions are highly multifactorial. Such tools have the potential to improve decision making for complex medical problems with alternate treatment pathways. Copyright © 2016 Elsevier Inc. All rights reserved.
Rai, Muhammad Farooq; Tycksen, Eric D; Sandell, Linda J; Brophy, Robert H
2018-01-01
Microarrays and RNA-seq are at the forefront of high-throughput transcriptome analyses. Since these methodologies are based on different principles, there are concerns about the concordance of data between the two techniques. The concordance of RNA-seq and microarrays for genome-wide analysis of differential gene expression has not been rigorously assessed in clinically derived ligament tissues. To demonstrate the concordance between RNA-seq and microarrays and to assess potential benefits of RNA-seq over microarrays, we assessed differences in transcript expression in anterior cruciate ligament (ACL) tissues based on time-from-injury. ACL remnants were collected from patients with an ACL tear at the time of ACL reconstruction. RNA prepared from torn ACL remnants was subjected to Agilent microarrays (N = 24) and RNA-seq (N = 8). The correlation of biological replicates in RNA-seq and microarray data was similar (0.98 vs. 0.97), demonstrating that each platform has high internal reproducibility. Correlations between the RNA-seq data and the individual microarrays were low, but correlations between the RNA-seq values and the geometric mean of the microarray values were moderate. The cross-platform concordance for differentially expressed transcripts or enriched pathways was linearly correlated (r = 0.64). RNA-seq was superior in detecting low abundance transcripts and differentiating biologically critical isoforms. Additional independent validation of transcript expression was undertaken using microfluidic PCR for selected genes. PCR data showed 100% concordance (in expression pattern) with RNA-seq and microarray data. These findings demonstrate that RNA-seq has advantages over microarrays for transcriptome profiling of ligament tissues when available and affordable. Furthermore, these findings are likely transferable to other musculoskeletal tissues where tissue collection is challenging and cells are in low abundance. © 2017 Orthopaedic Research Society.
Published by Wiley Periodicals, Inc. J Orthop Res 36:484-497, 2018.
Web-based decision support system to predict risk level of long term rice production
NASA Astrophysics Data System (ADS)
Mukhlash, Imam; Maulidiyah, Ratna; Sutikno; Setiyono, Budi
2017-09-01
Appropriate decision making in risk management of rice production is very important in agricultural planning, especially for Indonesia, which is an agricultural country. A good decision can be obtained if the required supporting data are available and appropriate methods are used. This study aims to develop a Decision Support System that can be used to predict the risk level of rice production in districts that are centers of rice production in East Java. The web-based decision support system was constructed so that the information can be easily accessed and understood. The components of the system are data management, model management, and the user interface. This research uses OLS and Copula regression models: the OLS model is used to predict rainfall, while the Copula model is used to predict harvested area. Experimental results show that the models successfully predict the harvested area of rice production in the central rice-producing districts of East Java at any given time, based on the conditions and climate of a region. Furthermore, the system can predict the amount of rice production with its level of risk, and it generates long-term predictions of the production risk level for these districts that can be used as decision support for the authorities.
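The OLS component described above fits a line to historical observations and extrapolates it; a minimal sketch with made-up rainfall figures follows (the Copula model linking rainfall and harvested area is beyond a few lines and omitted here):

```python
# Minimal sketch of the OLS step with invented rainfall figures: fit a
# least-squares line against time and extrapolate one step ahead.
def ols_fit(x, y):
    """Return slope a and intercept b of the least-squares line y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

years = [1, 2, 3, 4]
rainfall = [110.0, 120.0, 130.0, 140.0]  # hypothetical monthly rainfall (mm)
a, b = ols_fit(years, rainfall)
print(a, b, a * 5 + b)  # → 10.0 100.0 150.0
```

In the real system the fitted value would feed the downstream risk-level classification rather than being reported directly.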
Keith Reynolds; Barry Bollenbacher; Chip Fisher; Melissa Hart; Mary Manning; Eric Henderson; Bruce Sims
2016-01-01
This report documents a decision-support process developed in the U.S. Department of Agriculture, Forest Service, Northern Region to assess management opportunities as part of an ecosystem-based approach to management that emphasizes ecological resilience. The decision-support system described in this work implements what is known as the Integrated Restoration and...
Towards generic online multicriteria decision support in patient-centred health care.
Dowie, Jack; Kjer Kaltoft, Mette; Salkeld, Glenn; Cunich, Michelle
2015-10-01
To introduce a new online generic decision support system based on multicriteria decision analysis (MCDA), implemented in practical and user-friendly software (Annalisa©). All parties in health care lack a simple and generic way to picture and process the decisions to be made in pursuit of improved decision making and more informed choice within an overall philosophy of person- and patient-centred care. The MCDA-based system generates patient-specific clinical guidance in the form of an opinion as to the merits of the alternative options in a decision, which are all scored and ranked. The scores for each option combine, in a simple expected value calculation, the best estimates available now for the performance of those options on patient-determined criteria with the individual patient's preferences, expressed as importance weightings for those criteria. The survey software within which the Annalisa file is embedded (Elicia©) customizes and personalizes the presentation and inputs. Principles relevant to the development of such decision-specific MCDA-based aids are noted and comparisons with alternative implementations presented. The necessity to trade off practicality (including resource constraints) against normative rigour and empirical complexity, in both their development and delivery, is emphasized. The MCDA-/Annalisa-based decision support system represents a prescriptive addition to the portfolio of decision-aiding tools available online to individuals and clinicians interested in pursuing shared decision making and informed choice within a commitment to transparency in relation to both the evidence and preference bases of decisions. Some empirical data establishing its usability are provided. © 2013 The Authors. Health Expectations published by John Wiley & Sons Ltd.
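The "simple expected value calculation" described above is a weighted sum of criterion ratings under patient-supplied importance weights. A toy version follows; the options, criteria, and all numbers are invented, and this is not the Annalisa software:

```python
# Toy version of the MCDA expected-value calculation: each option scores
# the weighted sum of its criterion ratings under the patient's importance
# weights, and options are ranked by that score. Ratings are on a 0-1
# scale where higher is better (so side_effects = 1.0 means none).
def mcda_rank(options, weights):
    """options: {name: {criterion: rating in [0, 1]}}; weights sum to 1."""
    score = lambda perf: sum(weights[c] * perf[c] for c in weights)
    return sorted(options, key=lambda name: -score(options[name]))

weights = {"effectiveness": 0.5, "side_effects": 0.3, "convenience": 0.2}
options = {
    "surgery":     {"effectiveness": 0.9, "side_effects": 0.3, "convenience": 0.4},
    "radiation":   {"effectiveness": 0.7, "side_effects": 0.6, "convenience": 0.6},
    "observation": {"effectiveness": 0.3, "side_effects": 1.0, "convenience": 0.9},
}
ranking = mcda_rank(options, weights)
print(ranking)  # → ['radiation', 'observation', 'surgery']
```

Changing the weights (the patient's preferences) reorders the options even though the performance estimates stay fixed, which is exactly the personalization the system aims at.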
Dehlendorf, Christine; Fitzpatrick, Judith; Steinauer, Jody; Swiader, Lawrence; Grumbach, Kevin; Hall, Cara; Kuppermann, Miriam
2017-07-01
We developed and formatively evaluated a tablet-based decision support tool for use by women prior to a contraceptive counseling visit to help them engage in shared decision making regarding method selection. Drawing upon formative work around women's preferences for contraceptive counseling and conceptual understanding of health care decision making, we iteratively developed a storyboard and then digital prototypes, based on best practices for decision support tool development. Pilot testing using both quantitative and qualitative data and cognitive testing was conducted. We obtained feedback from patient and provider advisory groups throughout the development process. Ninety-six percent of women who used the tool in pilot testing reported that it helped them choose a method, and qualitative interviews indicated acceptability of the tool's content and presentation. Compared to the control group, women who used the tool demonstrated trends toward increased likelihood of complete satisfaction with their method. Participant responses to cognitive testing were used in tool refinement. Our decision support tool appears acceptable to women in the family planning setting. Formative evaluation of the tool supports its utility among patients making contraceptive decisions, which can be further evaluated in a randomized controlled trial. Copyright © 2017 Elsevier B.V. All rights reserved.
Building Better Decision-Support by Using Knowledge Discovery.
ERIC Educational Resources Information Center
Jurisica, Igor
2000-01-01
Discusses knowledge-based decision-support systems that use artificial intelligence approaches. Addresses the issue of how to create an effective case-based reasoning system for complex and evolving domains, focusing on automated methods for system optimization and domain knowledge evolution that can supplement knowledge acquired from domain…
Wright, Adam; Sittig, Dean F; Ash, Joan S; Erickson, Jessica L; Hickman, Trang T; Paterno, Marilyn; Gebhardt, Eric; McMullen, Carmit; Tsurikova, Ruslana; Dixon, Brian E; Fraser, Greg; Simonaitis, Linas; Sonnenberg, Frank A; Middleton, Blackford
2015-11-01
To identify challenges, lessons learned and best practices for service-oriented clinical decision support, based on the results of the Clinical Decision Support Consortium, a multi-site study which developed, implemented and evaluated clinical decision support services in a diverse range of electronic health records. Ethnographic investigation using the rapid assessment process, a procedure for agile qualitative data collection and analysis, including clinical observation, system demonstrations and analysis and 91 interviews. We identified challenges and lessons learned in eight dimensions: (1) hardware and software computing infrastructure, (2) clinical content, (3) human-computer interface, (4) people, (5) workflow and communication, (6) internal organizational policies, procedures, environment and culture, (7) external rules, regulations, and pressures and (8) system measurement and monitoring. Key challenges included performance issues (particularly related to data retrieval), differences in terminologies used across sites, workflow variability and the need for a legal framework. Based on the challenges and lessons learned, we identified eight best practices for developers and implementers of service-oriented clinical decision support: (1) optimize performance, or make asynchronous calls, (2) be liberal in what you accept (particularly for terminology), (3) foster clinical transparency, (4) develop a legal framework, (5) support a flexible front-end, (6) dedicate human resources, (7) support peer-to-peer communication, (8) improve standards. The Clinical Decision Support Consortium successfully developed a clinical decision support service and implemented it in four different electronic health records and four diverse clinical sites; however, the process was arduous. The lessons identified by the Consortium may be useful for other developers and implementers of clinical decision support services. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Decision Support Systems (DSSs) For Contaminated Land Management - Gaps And Challenges
A plethora of information is available when considering decision support systems for risk-based management of contaminated land. Broad issues of what is contaminated land, what is a brownfield, and what is remediation are discussed in EU countries and the U.S. Making decisions ...
DOT National Transportation Integrated Search
2010-11-01
This project developed a GIS-based Spatial Decision Support System to help local, metropolitan, and state : jurisdictions and authorities in Texas understand the implications of transportation planning and : investment decisions, and plan appropriate...
Forecasting and communicating the potential outcomes of decision options requires support tools that aid in evaluating alternative scenarios in a user-friendly context and that highlight variables relevant to the decision options and valuable stakeholders. Envision is a GIS-base...
USDA-ARS?s Scientific Manuscript database
The invasive brown marmorated stink bug, Halyomorpha halys (Stål), has become a serious pest in mid-Atlantic apple orchards. Because no decision support tools exist for H. halys management, calendar-based insecticide applications have been the only successful technique for mitigating H. halys injur...
NCBI GEO: mining tens of millions of expression profiles--database and tools update.
Barrett, Tanya; Troup, Dennis B; Wilhite, Stephen E; Ledoux, Pierre; Rudnev, Dmitry; Evangelista, Carlos; Kim, Irene F; Soboleva, Alexandra; Tomashevsky, Maxim; Edgar, Ron
2007-01-01
The Gene Expression Omnibus (GEO) repository at the National Center for Biotechnology Information (NCBI) archives and freely disseminates microarray and other forms of high-throughput data generated by the scientific community. The database has a minimum information about a microarray experiment (MIAME)-compliant infrastructure that captures fully annotated raw and processed data. Several data deposit options and formats are supported, including web forms, spreadsheets, XML and Simple Omnibus Format in Text (SOFT). In addition to data storage, a collection of user-friendly web-based interfaces and applications are available to help users effectively explore, visualize and download the thousands of experiments and tens of millions of gene expression patterns stored in GEO. This paper provides a summary of the GEO database structure and user facilities, and describes recent enhancements to database design, performance, submission format options, data query and retrieval utilities. GEO is accessible at http://www.ncbi.nlm.nih.gov/geo/
Thermodynamically optimal whole-genome tiling microarray design and validation.
Cho, Hyejin; Chou, Hui-Hsien
2016-06-13
Microarrays are an efficient apparatus for interrogating the whole transcriptome of a species. A microarray can be designed according to annotated gene sets, but the resulting microarrays cannot be used to identify novel transcripts, and this design method is not applicable to unannotated species. Alternatively, a whole-genome tiling microarray can be designed using only genomic sequences without gene annotations, and it can be used to detect novel RNA transcripts as well as known genes. The difficulty with tiling microarray design lies in the tradeoff between probe specificity and coverage of the genome. Sequence comparison methods based on BLAST or similar software are commonly employed in microarray design, but they cannot precisely determine the subtle thermodynamic competition between probe targets and partially matched probe nontargets during hybridization. Using the whole-genome thermodynamic analysis software PICKY to design tiling microarrays, we can achieve the maximum whole-genome coverage allowable under the thermodynamic constraints of each target genome. The resulting tiling microarrays are thermodynamically optimal in the sense that all selected probes share the same melting temperature separation range between their targets and closest nontargets, and no additional probes can be added without violating the specificity of the microarray to the target genome. This new design method was used to create two whole-genome tiling microarrays for Escherichia coli MG1655 and Agrobacterium tumefaciens C58, and the experimental results validated the design.
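The melting-temperature separation constraint described above can be sketched in a few lines. This is a hypothetical illustration, not the PICKY algorithm itself: probe identifiers, Tm values, and the 10 °C separation margin are all invented for the example.

```python
# Sketch of the thermodynamic probe-selection constraint: keep a candidate
# probe only if the melting temperature (Tm) of its intended target exceeds
# the Tm of its closest partially matched nontarget by a fixed margin.
# All names and numbers below are illustrative, not from PICKY.

SEPARATION_C = 10.0  # required Tm gap (deg C) between target and closest nontarget

def select_probes(candidates, separation=SEPARATION_C):
    """candidates: list of (probe_id, target_tm, closest_nontarget_tm)."""
    return [pid for pid, t_tm, nt_tm in candidates if t_tm - nt_tm >= separation]

candidates = [
    ("p1", 72.0, 58.0),  # gap 14.0 -> kept
    ("p2", 70.0, 65.0),  # gap  5.0 -> rejected
    ("p3", 68.0, 57.5),  # gap 10.5 -> kept
]
print(select_probes(candidates))  # ['p1', 'p3']
```

In the real design problem the nontarget Tm must come from a whole-genome thermodynamic comparison, which is the computationally hard part the paper addresses.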
Scalable software architectures for decision support.
Musen, M A
1999-12-01
Interest in decision-support programs for clinical medicine soared in the 1970s. Since that time, workers in medical informatics have been particularly attracted to rule-based systems as a means of providing clinical decision support. Although developers have built many successful applications using production rules, they also have discovered that creation and maintenance of large rule bases is quite problematic. In the 1980s, several groups of investigators began to explore alternative programming abstractions that can be used to build decision-support systems. As a result, the notions of "generic tasks" and of reusable problem-solving methods became extremely influential. By the 1990s, academic centers were experimenting with architectures for intelligent systems based on two classes of reusable components: (1) problem-solving methods--domain-independent algorithms for automating stereotypical tasks--and (2) domain ontologies that captured the essential concepts (and relationships among those concepts) in particular application areas. This paper highlights how developers can construct large, maintainable decision-support systems using these kinds of building blocks. The creation of domain ontologies and problem-solving methods is the fundamental end product of basic research in medical informatics. Consequently, these concepts need more attention by our scientific community.
Data submission and quality in microarray-based microRNA profiling
Witwer, Kenneth W.
2014-01-01
Background Public sharing of scientific data has assumed greater importance in the ‘omics’ era. Transparency is necessary for confirmation and validation, and multiple examiners aid in extracting maximal value from large datasets. Accordingly, database submission and provision of the Minimum Information About a Microarray Experiment (MIAME) are required by most journals as a prerequisite for review or acceptance. Methods In this study, the level of data submission and MIAME compliance was reviewed for 127 articles that included microarray-based microRNA profiling and that were published from July, 2011 through April, 2012 in the journals that published the largest number of such articles—PLOS ONE, the Journal of Biological Chemistry, Blood, and Oncogene—along with articles from nine other journals, including Clinical Chemistry, that published smaller numbers of array-based articles. Results Overall, data submission was reported at publication for less than 40% of all articles, and almost 75% of articles were MIAME-noncompliant. On average, articles that included full data submission scored significantly higher on a quality metric than articles with limited or no data submission, and studies with adequate description of methods disproportionately included larger numbers of experimental repeats. Finally, for several articles that were not MIAME-compliant, data re-analysis revealed less than complete support for the published conclusions, in one case leading to retraction. Conclusions These findings buttress the hypothesis that reluctance to share data is associated with low study quality and suggest that most miRNA array investigations are underpowered and/or potentially compromised by a lack of appropriate reporting and data submission. PMID:23358751
Data submission and quality in microarray-based microRNA profiling.
Witwer, Kenneth W
2013-02-01
Public sharing of scientific data has assumed greater importance in the omics era. Transparency is necessary for confirmation and validation, and multiple examiners aid in extracting maximal value from large data sets. Accordingly, database submission and provision of the Minimum Information About a Microarray Experiment (MIAME) are required by most journals as a prerequisite for review or acceptance. In this study, the level of data submission and MIAME compliance was reviewed for 127 articles that included microarray-based microRNA (miRNA) profiling and were published from July 2011 through April 2012 in the journals that published the largest number of such articles--PLOS ONE, the Journal of Biological Chemistry, Blood, and Oncogene--along with articles from 9 other journals, including Clinical Chemistry, that published smaller numbers of array-based articles. Overall, data submission was reported at publication for <40% of all articles, and almost 75% of articles were MIAME noncompliant. On average, articles that included full data submission scored significantly higher on a quality metric than articles with limited or no data submission, and studies with adequate description of methods disproportionately included larger numbers of experimental repeats. Finally, for several articles that were not MIAME compliant, data reanalysis revealed less than complete support for the published conclusions, in 1 case leading to retraction. These findings buttress the hypothesis that reluctance to share data is associated with low study quality and suggest that most miRNA array investigations are underpowered and/or potentially compromised by a lack of appropriate reporting and data submission. © 2012 American Association for Clinical Chemistry
2010-01-01
Background The development of DNA microarrays has facilitated the generation of hundreds of thousands of transcriptomic datasets. The use of a common reference microarray design allows existing transcriptomic data to be readily compared and re-analysed in the light of new data, and the combination of this design with large datasets is ideal for 'systems'-level analyses. One issue is that these datasets are typically collected over many years and may be heterogeneous in nature, containing different microarray file formats and gene array layouts, dye-swaps, and showing varying scales of log2 ratios of expression between microarrays. Excellent software exists for the normalisation and analysis of microarray data but many data have yet to be analysed as existing methods struggle with heterogeneous datasets; options include normalising microarrays on an individual or experimental group basis. Our solution was to develop the Batch Anti-Banana Algorithm in R (BABAR) algorithm and software package, which uses cyclic loess to normalise across the complete dataset. We have already used BABAR to analyse the function of Salmonella genes involved in the process of infection of mammalian cells. Results The only input required by BABAR is unprocessed GenePix or BlueFuse microarray data files. BABAR provides a combination of 'within' and 'between' microarray normalisation steps and diagnostic boxplots. When applied to a real heterogeneous dataset, BABAR normalised the dataset to produce a comparable scaling between the microarrays, with the microarray data in excellent agreement with RT-PCR analysis. When applied to a real non-heterogeneous dataset and a simulated dataset, BABAR's performance in identifying differentially expressed genes showed some benefits over standard techniques. Conclusions BABAR is an easy-to-use software tool, simplifying the simultaneous normalisation of heterogeneous two-colour common reference design cDNA microarray-based transcriptomic datasets.
We show BABAR transforms real and simulated datasets to allow for the correct interpretation of these data, and is the ideal tool to facilitate the identification of differentially expressed genes or network inference analysis from transcriptomic datasets. PMID:20128918
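The cyclic-loess idea underlying BABAR can be illustrated with a toy normalisation loop. This is not the BABAR R package: a straight-line `numpy.polyfit` stands in for the loess smoother, and the data are simulated; only the cyclic pairwise structure (fit a trend to each M-vs-A pair and split the correction between the two arrays) is the point.

```python
import numpy as np

# Minimal sketch of cyclic pairwise normalisation (BABAR-style idea only).
# A linear fit replaces the loess smoother for brevity; data are simulated.

def cyclic_normalise(X, n_iter=3):
    """X: arrays-by-genes matrix of log2 intensities. Returns a normalised copy."""
    X = X.astype(float).copy()
    n = X.shape[0]
    for _ in range(n_iter):
        for i in range(n):
            for j in range(i + 1, n):
                M = X[i] - X[j]              # log-ratio between the two arrays
                A = (X[i] + X[j]) / 2.0      # average log intensity
                trend = np.polyval(np.polyfit(A, M, 1), A)  # loess stand-in
                X[i] -= trend / 2.0          # split the correction evenly
                X[j] += trend / 2.0
    return X

rng = np.random.default_rng(0)
base = rng.normal(8.0, 1.0, 200)
X = np.vstack([base + 0.5, base - 0.5, base])  # same signal, shifted scales
Xn = cyclic_normalise(X)
print(np.ptp(Xn.mean(axis=1)) < np.ptp(X.mean(axis=1)))  # True: scales converge
```

A real implementation would use an actual loess fit and handle within-array (print-tip, spatial) effects before the between-array cycle.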
Modifications and integration of the electronic tracking board in a pediatric emergency department.
Dexheimer, Judith W; Kennebeck, Stephanie
2013-07-01
Electronic health records (EHRs) are used for data storage; provider, laboratory, and patient communication; clinical decision support; procedure and medication orders; and decision support alerts. Clinical decision support is part of any EHR and is designed to help providers make better decisions. The emergency department (ED) poses a unique environment to the use of EHRs and clinical decision support. Used effectively, computerized tracking boards can help improve flow, communication, and the dissemination of pertinent visit information between providers and other departments in a busy ED. We discuss the unique modifications and decisions made in the implementation of an EHR and computerized tracking board in a pediatric ED. We discuss the changing views based on provider roles, customization to the user interface including the layout and colors, decision support, tracking board best practices collected from other institutions and colleagues, and a case study of using reminders on the electronic tracking board to drive pain reassessments.
Nurses' Clinical Decision Making on Adopting a Wound Clinical Decision Support System.
Khong, Peck Chui Betty; Hoi, Shu Yin; Holroyd, Eleanor; Wang, Wenru
2015-07-01
Healthcare information technology systems are considered the ideal tool to inculcate evidence-based nursing practices. The wound clinical decision support system was built locally to support nurses to manage pressure ulcer wounds in their daily practice. However, its adoption rate is not optimal. The study's objective was to discover the concepts that informed the RNs' decisions to adopt the wound clinical decision support system as an evidence-based technology in their nursing practice. This was an exploratory, descriptive, and qualitative design using face-to-face interviews, individual interviews, and active participatory observation. A purposive, theoretical sample of 14 RNs was recruited from one of the largest public tertiary hospitals in Singapore after obtaining ethics approval. After consenting, the nurses were interviewed and observed separately. Recruitment stopped when data saturation was reached. All transcribed interview data underwent a concurrent thematic analysis, whereas observational data were content analyzed independently and subsequently triangulated with the interview data. Eight emerging themes were identified, namely, use of the wound clinical decision support system, beliefs in the wound clinical decision support system, influences of the workplace culture, extent of the benefits, professional control over nursing practices, use of knowledge, gut feelings, and emotions (fear, doubt, and frustration). These themes represented the nurses' mental outlook as they made decisions on adopting the wound clinical decision support system in light of the complexities of their roles and workloads. This research has provided insight on the nurses' thoughts regarding their decision to interact with the computer environment in a Singapore context. It captured the nurses' complex thoughts when deciding whether to adopt or reject information technology as they practice in a clinical setting.
Zhang, Yi-Fan; Tian, Yu; Zhou, Tian-Shu; Araki, Kenji; Li, Jing-Song
2016-01-01
The broad adoption of clinical decision support systems within clinical practice has been hampered mainly by the difficulty in expressing domain knowledge and patient data in a unified formalism. This paper presents a semantic-based approach to the unified representation of healthcare domain knowledge and patient data for practical clinical decision making applications. A four-phase knowledge engineering cycle is implemented to develop a semantic healthcare knowledge base based on an HL7 reference information model, including an ontology to model domain knowledge and patient data and an expression repository to encode clinical decision making rules and queries. A semantic clinical decision support system is designed to provide patient-specific healthcare recommendations based on the knowledge base and patient data. The proposed solution is evaluated in the case study of type 2 diabetes mellitus inpatient management. The knowledge base is successfully instantiated with relevant domain knowledge and testing patient data. Ontology-level evaluation confirms model validity. Application-level evaluation of diagnostic accuracy reaches a sensitivity of 97.5%, a specificity of 100%, and a precision of 98%; an acceptance rate of 97.3% is given by domain experts for the recommended care plan orders. The proposed solution has been successfully validated in the case study as providing clinical decision support at a high accuracy and acceptance rate. The evaluation results demonstrate the technical feasibility and application prospect of our approach. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Maouche, Seraya; Poirier, Odette; Godefroy, Tiphaine; Olaso, Robert; Gut, Ivo; Collet, Jean-Phillipe; Montalescot, Gilles; Cambien, François
2008-01-01
Background In this study we assessed the respective ability of Affymetrix and Illumina microarray methodologies to answer a relevant biological question, namely the change in gene expression between resting monocytes and macrophages derived from these monocytes. Five RNA samples for each type of cell were hybridized to the two platforms in parallel. In addition, a reference list of differentially expressed genes (DEG) was generated from a larger number of hybridizations (mRNA from 86 individuals) using the RNG/MRC two-color platform. Results Our results show an important overlap of the Illumina and Affymetrix DEG lists. In addition, more than 70% of the genes in these lists were also present in the reference list. Overall the two platforms had very similar performance in terms of biological significance, evaluated by the presence in the DEG lists of an excess of genes belonging to Gene Ontology (GO) categories relevant for the biology of monocytes and macrophages. Our results support the conclusion of the MicroArray Quality Control (MAQC) project that the criteria used to constitute the DEG lists strongly influence the degree of concordance among platforms. However the importance of prioritizing genes by magnitude of effect (fold change) rather than statistical significance (p-value) to enhance cross-platform reproducibility recommended by the MAQC authors was not supported by our data. Conclusion Functional analysis based on GO enrichment demonstrates that the 2 compared technologies delivered very similar results and identified most of the relevant GO categories enriched in the reference list. PMID:18578872
A Fuzzy-Based Decision Support Model for Selecting the Best Dialyser Flux in Haemodialysis.
Oztürk, Necla; Tozan, Hakan
2015-01-01
Decision making is an important procedure for every organization. The procedure is particularly challenging for complicated multi-criteria problems. Selection of dialyser flux is one of the decisions routinely made for haemodialysis treatment provided for chronic kidney failure patients. This study provides a decision support model for selecting the best dialyser flux between high-flux and low-flux dialyser alternatives. The preferences of decision makers were collected via a questionnaire. A total of 45 questionnaires filled by dialysis physicians and nephrologists were assessed. A hybrid fuzzy-based decision support software that enables the use of Analytic Hierarchy Process (AHP), Fuzzy Analytic Hierarchy Process (FAHP), Analytic Network Process (ANP), and Fuzzy Analytic Network Process (FANP) was used to evaluate the flux selection model. In conclusion, the results showed that a high-flux dialyser is the best option for haemodialysis treatment.
Decision-support systems for natural-hazards and land-management issues
Dinitz, Laura; Forney, William; Byrd, Kristin
2012-01-01
Scientists at the USGS Western Geographic Science Center are developing decision-support systems (DSSs) for natural-hazards and land-management issues. DSSs are interactive computer-based tools that use data and models to help identify and solve problems. These systems can provide crucial support to policymakers, planners, and communities for making better decisions about long-term natural hazards mitigation and land-use planning.
New approaches for real time decision support systems
NASA Technical Reports Server (NTRS)
Hair, D. Charles; Pickslay, Kent
1994-01-01
NCCOSC RDT&E Division (NRaD) is conducting research into ways of improving decision support systems (DSS) that are used in tactical Navy decision making situations. The research has focused on the incorporation of findings about naturalistic decision-making processes into the design of the DSS. As part of that research, two computer tools were developed that model the two primary naturalistic decision-making strategies used by Navy experts in tactical settings. Current work is exploring how best to incorporate the information produced by those tools into an existing simulation of current Navy decision support systems. This work has implications for any applications involving the need to make decisions under time constraints, based on incomplete or ambiguous data.
Microtiter plate-based antibody microarrays for bacteria and toxins
USDA-ARS's Scientific Manuscript database
Research has focused on the development of rapid biosensor-based, high-throughput, and multiplexed detection of pathogenic bacteria in foods. Specifically, antibody microarrays in 96-well microtiter plates have been generated for the purpose of selective detection of Shiga toxin-producing E. coli (...
Microarray-based Comparative Genomic Indexing of the Cronobacter genus (Enterobacter sakazakii)
USDA-ARS's Scientific Manuscript database
Cronobacter is a recently defined genus synonymous with Enterobacter sakazakii. This new genus currently comprises 6 genomospecies. To extend our understanding of the genetic relationship between Cronobacter sakazakii BAA-894 and the other species of this genus, microarray-based comparative genomi...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-15
... DEPARTMENT OF DEFENSE Department of the Navy Record of Decision for the U.S. Marine Corps Basing of MV-22 and H-1 Aircraft in Support of III Marine Expeditionary Force Elements in Hawaii AGENCY... aircraft) in support of III Marine Expeditionary Force elements in Hawaii. SUPPLEMENTARY INFORMATION: The...
A pilot study of distributed knowledge management and clinical decision support in the cloud.
Dixon, Brian E; Simonaitis, Linas; Goldberg, Howard S; Paterno, Marilyn D; Schaeffer, Molly; Hongsermeier, Tonya; Wright, Adam; Middleton, Blackford
2013-09-01
Implement and perform pilot testing of web-based clinical decision support services using a novel framework for creating and managing clinical knowledge in a distributed fashion using the cloud. The pilot sought to (1) develop and test connectivity to an external clinical decision support (CDS) service, (2) assess the exchange of data to and knowledge from the external CDS service, and (3) capture lessons to guide expansion to more practice sites and users. The Clinical Decision Support Consortium created a repository of shared CDS knowledge for managing hypertension, diabetes, and coronary artery disease in a community cloud hosted by Partners HealthCare. A limited data set for primary care patients at a separate health system was securely transmitted to a CDS rules engine hosted in the cloud. Preventive care reminders triggered by the limited data set were returned to clinician end users for review and display. During a pilot study, we (1) monitored connectivity and system performance, (2) studied the exchange of data and decision support reminders between the two health systems, and (3) captured lessons. During the six month pilot study, there were 1339 patient encounters in which information was successfully exchanged. Preventive care reminders were displayed during 57% of patient visits, most often reminding physicians to monitor blood pressure for hypertensive patients (29%) and order eye exams for patients with diabetes (28%). Lessons learned were grouped into five themes: performance, governance, semantic interoperability, ongoing adjustments, and usability. Remote, asynchronous cloud-based decision support performed reasonably well, although issues concerning governance, semantic interoperability, and usability remain key challenges for successful adoption and use of cloud-based CDS that will require collaboration between biomedical informatics and computer science disciplines.
Decision support in the cloud is feasible and may be a reasonable path toward achieving better support of clinical decision-making across the widest range of health care providers. Published by Elsevier B.V.
Features of Computer-Based Decision Aids: Systematic Review, Thematic Synthesis, and Meta-Analyses
Krömker, Dörthe; Meguerditchian, Ari N; Tamblyn, Robyn
2016-01-01
Background Patient information and education, such as decision aids, are gradually moving toward online, computer-based environments. Considerable research has been conducted to guide content and presentation of decision aids. However, given the relatively new shift to computer-based support, little attention has been given to how multimedia and interactivity can improve upon paper-based decision aids. Objective The first objective of this review was to summarize published literature into a proposed classification of features that have been integrated into computer-based decision aids. Building on this classification, the second objective was to assess whether integration of specific features was associated with higher-quality decision making. Methods Relevant studies were located by searching MEDLINE, Embase, CINAHL, and CENTRAL databases. The review identified studies that evaluated computer-based decision aids for adults faced with preference-sensitive medical decisions and reported quality of decision-making outcomes. A thematic synthesis was conducted to develop the classification of features. Subsequently, meta-analyses were conducted based on standardized mean differences (SMD) from randomized controlled trials (RCTs) that reported knowledge or decisional conflict. Further subgroup analyses compared pooled SMDs for decision aids that incorporated a specific feature to other computer-based decision aids that did not incorporate the feature, to assess whether specific features improved quality of decision making. Results Of 3541 unique publications, 58 studies met the target criteria and were included in the thematic synthesis. The synthesis identified six features: content control, tailoring, patient narratives, explicit values clarification, feedback, and social support. A subset of 26 RCTs from the thematic synthesis was used to conduct the meta-analyses. 
As expected, computer-based decision aids performed better than usual care or alternative aids; however, some features performed better than others. Integration of content control improved quality of decision making (SMD 0.59 vs 0.23 for knowledge; SMD 0.39 vs 0.29 for decisional conflict). In contrast, tailoring reduced quality of decision making (SMD 0.40 vs 0.71 for knowledge; SMD 0.25 vs 0.52 for decisional conflict). Similarly, patient narratives also reduced quality of decision making (SMD 0.43 vs 0.65 for knowledge; SMD 0.17 vs 0.46 for decisional conflict). Results were varied for different types of explicit values clarification, feedback, and social support. Conclusions Integration of media rich or interactive features into computer-based decision aids can improve quality of preference-sensitive decision making. However, this is an emerging field with limited evidence to guide use. The systematic review and thematic synthesis identified features that have been integrated into available computer-based decision aids, in an effort to facilitate reporting of these features and to promote integration of such features into decision aids. The meta-analyses and associated subgroup analyses provide preliminary evidence to support integration of specific features into future decision aids. Further research can focus on clarifying independent contributions of specific features through experimental designs and refining the designs of features to improve effectiveness. PMID:26813512
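The standardized mean differences (SMDs) pooled in the meta-analyses above follow a standard recipe: compute a per-study Cohen's d from group means and standard deviations, then combine studies with inverse-variance weights. The sketch below shows that fixed-effect recipe on invented numbers; it is not the review's actual data or analysis code.

```python
import math

# Sketch of SMD (Cohen's d) and fixed-effect inverse-variance pooling,
# as commonly used in meta-analyses. All study numbers are invented.

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

def d_variance(d, n1, n2):
    # Standard approximate sampling variance of d.
    return (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))

def pooled_smd(studies):
    """studies: list of (m1, s1, n1, m2, s2, n2). Fixed-effect pooled SMD."""
    num = den = 0.0
    for s in studies:
        d = cohens_d(*s)
        w = 1.0 / d_variance(d, s[2], s[5])  # inverse-variance weight
        num += w * d
        den += w
    return num / den

studies = [
    (75.0, 10.0, 40, 70.0, 10.0, 40),   # d = 0.50
    (82.0, 12.0, 60, 79.0, 12.0, 60),   # d = 0.25
]
print(round(pooled_smd(studies), 2))  # 0.35
```

A published meta-analysis would typically also apply the Hedges' g small-sample correction and consider a random-effects model when heterogeneity is present.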
Knowledge bases, clinical decision support systems, and rapid learning in oncology.
Yu, Peter Paul
2015-03-01
One of the most important benefits of health information technology is to assist the cognitive process of the human mind in the face of vast amounts of health data, limited time for decision making, and the complexity of the patient with cancer. Clinical decision support tools are frequently cited as a technologic solution to this problem, but to date useful clinical decision support systems (CDSS) have been limited in utility and implementation. This article describes three unique sources of health data that underlie fundamentally different types of knowledge bases which feed into CDSS. CDSS themselves comprise a variety of models which are discussed. The relationship of knowledge bases and CDSS to rapid learning health systems design is critical as CDSS are essential drivers of rapid learning in clinical care. Copyright © 2015 by American Society of Clinical Oncology.
Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf
2012-01-01
Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability. PMID:22553239
Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf; Sachse, Konrad
2012-07-01
Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability.
Leong, T Y; Kaiser, K; Miksch, S
2007-01-01
Guideline-based clinical decision support is an emerging paradigm to help reduce error, lower cost, and improve quality in evidence-based medicine. The free and open source (FOS) approach is a promising alternative for delivering cost-effective information technology (IT) solutions in health care. In this paper, we survey the current FOS enabling technologies for patient-centric, guideline-based care, and discuss the current trends and future directions of their role in clinical decision support. We searched PubMed, major biomedical informatics websites, and the web in general for papers and links related to FOS health care IT systems. We also relied on our background and knowledge for specific subtopics. We focused on the functionalities of guideline modeling tools, and briefly examined the supporting technologies for terminology, data exchange and electronic health record (EHR) standards. To effectively support patient-centric, guideline-based care, the computerized guidelines and protocols need to be integrated with existing clinical information systems or EHRs. Technologies that enable such integration should be accessible, interoperable, and scalable. A plethora of FOS tools and techniques for supporting different knowledge management and quality assurance tasks involved are available. Many challenges, however, remain in their implementation. There are active and growing trends of deploying FOS enabling technologies for integrating clinical guidelines, protocols, and pathways into the main care processes. The continuing development and maturation of such technologies are likely to make increasingly significant contributions to patient-centric, guideline-based clinical decision support.
Decision blocks: A tool for automating decision making in CLIPS
NASA Technical Reports Server (NTRS)
Eick, Christoph F.; Mehta, Nikhil N.
1991-01-01
The human capability of making complex decisions is one of the most fascinating facets of human intelligence, especially if vague, judgemental, default or uncertain knowledge is involved. Unfortunately, most existing rule-based forward-chaining languages are not very suitable for simulating this aspect of human intelligence, because they lack support for the approximate reasoning techniques needed for this task and lack specific constructs to facilitate the coding of frequently recurring decision blocks, which would provide better support for the design and implementation of rule-based decision support systems. A language called BIRBAL, defined on top of CLIPS for the specification of decision blocks, is introduced. Empirical experiments comparing the length of CLIPS programs with the corresponding BIRBAL programs for three different applications are surveyed. The results of these experiments suggest that for decision-making-intensive applications, a CLIPS program tends to be about three times longer than the corresponding BIRBAL program.
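The "decision block" abstraction above, a reusable construct that weighs competing alternatives instead of hand-writing forward-chaining rules for each choice, can be caricatured in a few lines. This is an illustrative sketch only: BIRBAL itself compiles to CLIPS and supports approximate reasoning well beyond this toy, and the rules, weights, and fact names below are invented.

```python
# Toy "decision block": each rule contributes a weight to one alternative
# when its condition holds on the current facts; the block returns the
# highest-scoring alternative. Names and weights are hypothetical.

def decision_block(alternatives, rules, facts):
    """rules: list of (condition, alternative, weight). Returns best alternative."""
    scores = {a: 0.0 for a in alternatives}
    for condition, alternative, weight in rules:
        if condition(facts):
            scores[alternative] += weight
    return max(scores, key=scores.get)

rules = [
    (lambda f: f["contacts"] > 5,     "delegate", 0.6),
    (lambda f: f["time_left_s"] < 30, "engage",   0.9),
    (lambda f: f["ambiguous"],        "monitor",  0.4),
]
facts = {"contacts": 8, "time_left_s": 20, "ambiguous": True}
print(decision_block(["engage", "monitor", "delegate"], rules, facts))  # engage
```

The point of the paper's construct is that this scoring pattern recurs so often in decision-intensive rule bases that making it a language primitive shrinks programs roughly threefold.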
Kračun, Stjepan Krešimir; Fangel, Jonatan Ulrik; Rydahl, Maja Gro; Pedersen, Henriette Lodberg; Vidal-Melgosa, Silvia; Willats, William George Tycho
2017-01-01
Cell walls are an important feature of plant cells and a major component of the plant glycome. They have both structural and physiological functions and are critical for plant growth and development. The diversity and complexity of these structures demand advanced high-throughput techniques to answer questions about their structure, functions and roles in both fundamental and applied scientific fields. Microarray technology provides both the high-throughput and the feasibility aspects required to meet that demand. In this chapter, some of the most recent microarray-based techniques relating to plant cell walls are described together with an overview of related contemporary techniques applied to carbohydrate microarrays and their general potential in glycoscience. A detailed experimental procedure for high-throughput mapping of plant cell wall glycans using the comprehensive microarray polymer profiling (CoMPP) technique is included in the chapter and provides a good example of both the robust and high-throughput nature of microarrays as well as their applicability to plant glycomics.
Brodsky, Leonid; Leontovich, Andrei; Shtutman, Michael; Feinstein, Elena
2004-01-01
Mathematical methods of analysis of microarray hybridizations deal with gene expression profiles as elementary units. However, some of these profiles do not reflect a biologically relevant transcriptional response, but rather stem from technical artifacts. Here, we describe two technically independent but rationally interconnected methods for identification of such artifactual profiles. Our diagnostics are based on detection of deviations from uniformity, which is assumed as the main underlying principle of microarray design. Method 1 is based on detection of non-uniformity of microarray distribution of printed genes that are clustered based on the similarity of their expression profiles. Method 2 is based on evaluation of the presence of gene-specific microarray spots within the slides’ areas characterized by an abnormal concentration of low/high differential expression values, which we define as ‘patterns of differentials’. Applying two novel algorithms, for nested clustering (method 1) and for pattern detection (method 2), we can make a dual estimation of the profile’s quality for almost every printed gene. Genes with artifactual profiles detected by method 1 may then be removed from further analysis. Suspicious differential expression values detected by method 2 may be either removed or weighted according to the probabilities of patterns that cover them, thus diminishing their input in any further data analysis. PMID:14999086
Hu, Guohong; Wang, Hui-Yun; Greenawalt, Danielle M.; Azaro, Marco A.; Luo, Minjie; Tereshchenko, Irina V.; Cui, Xiangfeng; Yang, Qifeng; Gao, Richeng; Shen, Li; Li, Honghua
2006-01-01
Microarray-based analysis of single nucleotide polymorphisms (SNPs) has many applications in large-scale genetic studies. To minimize the influence of experimental variation, microarray data usually need to be processed in several respects, including background subtraction, normalization and low-signal filtering, before genotype determination. Although many sophisticated algorithms have been developed for these purposes, biases are still present. In the present paper, new algorithms for SNP microarray data analysis and the software, AccuTyping, developed based on these algorithms are described. The algorithms take advantage of the large number of SNPs included in each assay and the fact that the top and bottom 20% of SNPs can be safely treated as homozygous after sorting by the ratios between their signal intensities. These SNPs are then used as controls for color channel normalization and background subtraction. Genotype calls are made based on the logarithms of signal intensity ratios using two cutoff values, which were determined after training the program with a dataset of ∼160 000 genotypes and validated by non-microarray methods. AccuTyping was used to determine >300 000 genotypes of DNA and sperm samples. The accuracy was shown to be >99%. AccuTyping can be downloaded from . PMID:16982644
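The control-based calling scheme described above can be sketched in a few lines: the 20% tails of the ratio-sorted SNP list serve as presumed homozygotes, their median log-ratios define a channel-normalization offset, and calls are made from the centered log-ratios. The single symmetric cutoff and the normalization rule below are illustrative assumptions, not AccuTyping's published parameters.

```python
from math import log2
from statistics import median

def call_genotypes(signal_a, signal_b, cutoff=1.0):
    """Sketch of an AccuTyping-style caller (cutoff and normalization
    are assumed for illustration, not the published values)."""
    # Per-SNP log-ratio between the two color channels.
    log_ratios = [log2(a / b) for a, b in zip(signal_a, signal_b)]
    ranked = sorted(log_ratios)
    k = max(1, len(ranked) // 5)       # top/bottom 20% -> presumed homozygotes
    m_lo = median(ranked[:k])          # presumed 'BB' controls (low A/B ratio)
    m_hi = median(ranked[-k:])         # presumed 'AA' controls (high A/B ratio)
    center = (m_hi + m_lo) / 2.0       # channel-normalization offset
    calls = []
    for lr in log_ratios:
        lr -= center
        calls.append("AA" if lr > cutoff else "BB" if lr < -cutoff else "AB")
    return calls
```

With synthetic two-channel intensities carrying a systematic dye bias, the centering step recovers the heterozygote cluster at a log-ratio of zero before the cutoffs are applied.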
Timmerman, Peter; Barderas, Rodrigo; Desmet, Johan; Altschuh, Danièle; Shochat, Susana; Hollestelle, Martine J; Höppener, Jo W M; Monasterio, Alberto; Casal, J Ignacio; Meloen, Rob H
2009-12-04
The great success of therapeutic monoclonal antibodies has fueled research toward mimicry of their binding sites and the development of new strategies for peptide-based mimetics production. Here, we describe a new combinatorial approach for the production of peptidomimetics using the complementarity-determining regions (CDRs) from gastrin17 (pyroEGPWLEEEEEAYGWMDF-NH(2)) antibodies as starting material for cyclic peptide synthesis in a microarray format. Gastrin17 is a trophic factor in gastrointestinal tumors, including pancreatic cancer, which makes it an interesting target for development of therapeutic antibodies. Screening of microarrays containing bicyclic peptidomimetics identified a high number of gastrin binders. A strong correlation was observed between gastrin binding and overall charge of the peptidomimetic. Most of the best gastrin binders proceeded from CDRs containing charged residues. In contrast, CDRs from high affinity antibodies containing mostly neutral residues failed to yield good binders. Our experiments revealed essential differences in the mode of antigen binding between CDR-derived peptidomimetics (K(d) values in micromolar range) and the parental monoclonal antibodies (K(d) values in nanomolar range). However, chemically derived peptidomimetics from gastrin binders were very effective in gastrin neutralization studies using cell-based assays, yielding a neutralizing activity in pancreatic tumoral cell lines comparable with that of gastrin-specific monoclonal antibodies. These data support the use of combinatorial CDR-peptide microarrays as a tool for the development of a new generation of chemically synthesized cyclic peptidomimetics with functional activity.
Timmerman, Peter; Barderas, Rodrigo; Desmet, Johan; Altschuh, Danièle; Shochat, Susana; Hollestelle, Martine J.; Höppener, Jo W. M.; Monasterio, Alberto; Casal, J. Ignacio; Meloen, Rob H.
2009-01-01
The great success of therapeutic monoclonal antibodies has fueled research toward mimicry of their binding sites and the development of new strategies for peptide-based mimetics production. Here, we describe a new combinatorial approach for the production of peptidomimetics using the complementarity-determining regions (CDRs) from gastrin17 (pyroEGPWLEEEEEAYGWMDF-NH2) antibodies as starting material for cyclic peptide synthesis in a microarray format. Gastrin17 is a trophic factor in gastrointestinal tumors, including pancreatic cancer, which makes it an interesting target for development of therapeutic antibodies. Screening of microarrays containing bicyclic peptidomimetics identified a high number of gastrin binders. A strong correlation was observed between gastrin binding and overall charge of the peptidomimetic. Most of the best gastrin binders proceeded from CDRs containing charged residues. In contrast, CDRs from high affinity antibodies containing mostly neutral residues failed to yield good binders. Our experiments revealed essential differences in the mode of antigen binding between CDR-derived peptidomimetics (Kd values in micromolar range) and the parental monoclonal antibodies (Kd values in nanomolar range). However, chemically derived peptidomimetics from gastrin binders were very effective in gastrin neutralization studies using cell-based assays, yielding a neutralizing activity in pancreatic tumoral cell lines comparable with that of gastrin-specific monoclonal antibodies. These data support the use of combinatorial CDR-peptide microarrays as a tool for the development of a new generation of chemically synthesized cyclic peptidomimetics with functional activity. PMID:19808684
NASA Astrophysics Data System (ADS)
Bernardini, James Nicholas, III
An understanding of the microbiota within life support systems is essential for the prolonged presence of humans in space. This is because microbes may cause disease or induce biofouling and/or corrosion within spacecraft water systems. It is imperative that we develop effective high-throughput technologies for characterizing microbial populations that can eventually be used in the space environment. This dissertation describes testing and development of such methodologies, targeting both bacteria and viruses in water, and examines the bacterial and viral diversity within two spacecraft life support systems. The bacterial community of the International Space Station Internal Active Thermal Control System (IATCS) was examined using conventional culture-based and advanced molecular techniques including adenosine triphosphate (ATP) and Limulus Amebocyte Lysate (LAL) assays, direct microscopic examination, and analyses of 16S rRNA gene libraries from the community metagenome. The cultivable heterotrophs of the IATCS fluids ranged from below detection limit to 1.1x10^5/100 ml, and viable cells, measured by ATP, ranged from 1.4x10^3/100 ml to 7.7x10^5/100 ml. DNA extraction, cloning, sequencing, and bioinformatic analysis of the clones from 16S rRNA gene libraries showed members of the firmicutes, alpha-, beta-, and gamma-proteobacteria to be present in the fluids. This persistent microbial bioburden and the presence of probable metal reducers, biofilm formers, and opportunistic pathogens illustrate the need for better characterization of bacterial communities present within spacecraft fluids. A new methodology was developed for detection of viruses in water using microarrays. Samples were concentrated by lyophilization, resuspended and filtered (0.22 µm). Viral nucleic acids were then extracted, amplified, fluorescently labeled and hybridized onto a custom microarray with probes for ∼1000 known viruses. Numerous virus signatures were observed.
Human Adenovirus C and Influenza A viruses were used to verify positive microarray hybridizations by quantitative polymerase chain reaction (PCR), reverse transcriptase PCR, and conventional PCR. Experiments were performed using municipal drinking water, IATCS fluids, and Shuttle drinking water. Thus, this dissertation describes what we believe is the first molecular analysis of the IATCS bacterial ecology and the first use and validation of a microarray-based assay for the detection of viral genetic signatures within drinking waters.
Preparing for a decision support system.
Callan, K
2000-08-01
The increasing pressure to reduce costs and improve outcomes is driving the health care industry to view information as a competitive advantage. Timely information is required to help reduce inefficiencies and improve patient care. Numerous disparate operational or transactional information systems with inconsistent and often conflicting data are no longer adequate to meet the information needs of integrated care delivery systems and networks in competitive managed care environments. This article reviews decision support system characteristics and describes a process to assess the preparedness of an organization to implement and use decision support systems to achieve a more effective, information-based decision process. Decision support tools included in this article range from reports to data mining.
SimArray: a user-friendly and user-configurable microarray design tool
Auburn, Richard P; Russell, Roslin R; Fischer, Bettina; Meadows, Lisa A; Sevillano Matilla, Santiago; Russell, Steven
2006-01-01
Background Microarrays were first developed to assess gene expression but are now also used to map protein-binding sites and to assess allelic variation between individuals. Regardless of the intended application, efficient production and appropriate array design are key determinants of experimental success. Inefficient production can make larger-scale studies prohibitively expensive, whereas poor array design makes normalisation and data analysis problematic. Results We have developed a user-friendly tool, SimArray, which generates a randomised spot layout, computes a maximum meta-grid area, and estimates the print time, in response to user-specified design decisions. Selected parameters include: the number of probes to be printed; the microtitre plate format; the printing pin configuration, and the achievable spot density. SimArray is compatible with all current robotic spotters that employ 96-, 384- or 1536-well microtitre plates, and can be configured to reflect most production environments. Print time and maximum meta-grid area estimates facilitate evaluation of each array design for its suitability. Randomisation of the spot layout facilitates correction of systematic biases by normalisation. Conclusion SimArray is intended to help both established researchers and those new to the microarray field to develop microarray designs with randomised spot layouts that are compatible with their specific production environment. SimArray is an open-source program and is available from . PMID:16509966
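As a rough illustration of the kind of print-time estimate a tool like SimArray produces from user-specified design parameters, the sketch below derives spotting time from the probe count, pin configuration, and per-spot timing. The reload model and all timing constants are hypothetical assumptions for illustration, not SimArray's actual algorithm or values.

```python
import math

def estimate_print_time(n_probes, n_pins=48, spots_per_load=100,
                        load_seconds=10.0, spot_seconds=0.5):
    """Hypothetical spotting-time model: each pin in the print head deposits
    an equal share of the probes and must re-load sample after a fixed
    number of spots."""
    spots_per_pin = math.ceil(n_probes / n_pins)       # probes printed per pin
    loads = math.ceil(spots_per_pin / spots_per_load)  # sample re-loads per pin
    return loads * load_seconds + spots_per_pin * spot_seconds
```

For example, under these assumed constants, 4800 probes on a 48-pin head work out to 100 spots per pin in a single load, so the estimate scales linearly until an extra reload cycle is needed.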
2010-01-01
Background Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. 
In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved. PMID:20504357
McCaughey, Deirdre; Bruning, Nealia S
2010-05-26
Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. 
In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved.
Designing Tools for Supporting User Decision-Making in e-Commerce
NASA Astrophysics Data System (ADS)
Sutcliffe, Alistair; Al-Qaed, Faisal
The paper describes a set of tools designed to support a variety of user decision-making strategies. The tools are complemented by an online advisor so they can be adapted to different domains and users can be guided to adopt appropriate tools for different choices in e-commerce, e.g. purchasing high-value products, exploring product fit to users’ needs, or selecting products which satisfy requirements. The tools range from simple recommenders to decision support by interactive querying and comparison matrices. They were evaluated in a scenario-based experiment which varied the users’ task and motivation, with and without an advisor agent. The results show the tools and advisor were effective in supporting users and agreed with the predictions of ADM (adaptive decision making) theory, on which the design of the tools was based.
A DNA microarray-based assay to detect dual infection with two dengue virus serotypes.
Díaz-Badillo, Alvaro; Muñoz, María de Lourdes; Perez-Ramirez, Gerardo; Altuzar, Victor; Burgueño, Juan; Mendoza-Alvarez, Julio G; Martínez-Muñoz, Jorge P; Cisneros, Alejandro; Navarrete-Espinosa, Joel; Sanchez-Sinencio, Feliciano
2014-04-25
Here, we describe and test a microarray-based method for the screening of dengue virus (DENV) serotypes. This DNA microarray assay is specific and sensitive and can detect dual infections with two dengue virus serotypes as well as single-serotype infections. Other methodologies may underestimate samples containing more than one serotype. This technology can be used to discriminate between the four DENV serotypes. Single-stranded DNA targets were covalently attached to glass slides and hybridised with specific labelled probes. DENV isolates and dengue samples were used to evaluate microarray performance. Our results demonstrate that the probes hybridized specifically to DENV serotypes, with no detection of nonspecific signals. This finding provides evidence that specific probes can effectively identify single and double infections in DENV samples.
A DNA Microarray-Based Assay to Detect Dual Infection with Two Dengue Virus Serotypes
Díaz-Badillo, Alvaro; de Lourdes Muñoz, María; Perez-Ramirez, Gerardo; Altuzar, Victor; Burgueño, Juan; Mendoza-Alvarez, Julio G.; Martínez-Muñoz, Jorge P.; Cisneros, Alejandro; Navarrete-Espinosa, Joel; Sanchez-Sinencio, Feliciano
2014-01-01
Here, we describe and test a microarray-based method for the screening of dengue virus (DENV) serotypes. This DNA microarray assay is specific and sensitive and can detect dual infections with two dengue virus serotypes as well as single-serotype infections. Other methodologies may underestimate samples containing more than one serotype. This technology can be used to discriminate between the four DENV serotypes. Single-stranded DNA targets were covalently attached to glass slides and hybridised with specific labelled probes. DENV isolates and dengue samples were used to evaluate microarray performance. Our results demonstrate that the probes hybridized specifically to DENV serotypes, with no detection of nonspecific signals. This finding provides evidence that specific probes can effectively identify single and double infections in DENV samples. PMID:24776933
Martinez, Kathryn A; Resnicow, Ken; Williams, Geoffrey C; Silva, Marlene; Abrahamse, Paul; Shumway, Dean A; Wallner, Lauren P; Katz, Steven J; Hawley, Sarah T
2016-12-01
Provider communication that supports patient autonomy has been associated with numerous positive patient outcomes. However, to date, no research has examined the relationship between perceived provider communication style and patient-assessed decision quality in breast cancer. Using a population-based sample of women with localized breast cancer, we assessed patient perceptions of autonomy-supportive communication from their surgeons and medical oncologists, as well as patient-reported decision quality. We used multivariable linear regression to examine the association between autonomy-supportive communication and subjective decision quality for surgery and chemotherapy decisions, controlling for sociodemographic and clinical factors, as well as patient-reported communication preference (non-directive or directive). Among the 1690 women included in the overall sample, patient-reported decision quality scores were positively associated with higher levels of perceived autonomy-supportive communication from surgeons (β=0.30; p<0.001) and medical oncologists (β=0.26; p<0.001). Patient communication style preference moderated the association between physician communication style received and perceived decision quality. Autonomy-supportive communication by physicians was associated with higher subjective decision quality among women with localized breast cancer. These results support future efforts to design interventions that enhance autonomy-supportive communication. Autonomy-supportive communication by cancer doctors can improve patients' perceived decision quality. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Resnicow, Ken; Williams, Geoffrey C.; Silva, Marlene; Abrahamse, Paul; Shumway, Dean; Wallner, Lauren; Katz, Steven; Hawley, Sarah
2016-01-01
Objective Provider communication that supports patient autonomy has been associated with numerous positive patient outcomes. However, to date, no research has examined the relationship between perceived provider communication style and patient-assessed decision quality in breast cancer. Methods Using a population-based sample of women with localized breast cancer, we assessed patient perceptions of autonomy-supportive communication from their surgeons and medical oncologists, as well as patient-reported decision quality. We used multivariable linear regression to examine the association between autonomy-supportive communication and subjective decision quality for surgery and chemotherapy decisions, controlling for sociodemographic and clinical factors, as well as patient-reported communication preference (non-directive or directive). Results Among the 1,690 women included in the overall sample, patient-reported decision quality scores were positively associated with higher levels of perceived autonomy-supportive communication from surgeons (β=0.30; p<0.001) and medical oncologists (β=0.26; p<0.001). Patient communication style preference moderated the association between physician communication style received and perceived decision quality. Conclusion Autonomy-supportive communication by physicians was associated with higher subjective decision quality among women with localized breast cancer. These results support future efforts to design interventions that enhance autonomy-supportive communication. Practice Implications Autonomy-supportive communication by cancer doctors can improve patients’ perceived decision quality. PMID:27395750
DesAutels, Spencer J.; Fox, Zachary E.; Giuse, Dario A.; Williams, Annette M.; Kou, Qing-hua; Weitkamp, Asli; Patel, Neal R.; Bettinsoli Giuse, Nunzia
2016-01-01
Clinical decision support (CDS) knowledge, embedded over time in mature medical systems, presents an interesting and complex opportunity for information organization, maintenance, and reuse. To have a holistic view of all decision support requires an in-depth understanding of each clinical system as well as expert knowledge of the latest evidence. This approach to clinical decision support presents an opportunity to unify and externalize the knowledge within rules-based decision support. Driven by an institutional need to prioritize decision support content for migration to new clinical systems, the Center for Knowledge Management and Health Information Technology teams applied their unique expertise to extract content from individual systems, organize it through a single extensible schema, and present it for discovery and reuse through a newly created Clinical Support Knowledge Acquisition and Archival Tool (CS-KAAT). CS-KAAT can build and maintain the underlying knowledge infrastructure needed by clinical systems. PMID:28269846
Baptista, Sofia; Teles Sampaio, Elvira; Heleno, Bruno; Azevedo, Luís Filipe; Martins, Carlos
2018-06-26
Prostate cancer is a leading cause of cancer among men. Because screening for prostate cancer is a controversial issue, many experts in the field have defended the use of shared decision making using validated decision aids, which can be presented in different formats (eg, written, multimedia, Web). Recent studies have concluded that decision aids improve knowledge and reduce decisional conflict. This meta-analysis aimed to investigate the impact of using Web-based decision aids to support men's prostate cancer screening decisions in comparison with usual care and other formats of decision aids. We searched PubMed, CINAHL, PsycINFO, and Cochrane CENTRAL databases up to November 2016. This search identified randomized controlled trials, which assessed Web-based decision aids for men making a prostate cancer screening decision and reported quality of decision-making outcomes. Two reviewers independently screened citations for inclusion criteria, extracted data, and assessed risk of bias. Using a random-effects model, meta-analyses were conducted pooling results using mean differences (MD), standardized mean differences (SMD), and relative risks (RR). Of 2406 unique citations, 7 randomized controlled trials met the inclusion criteria. For risk of bias, selective outcome reporting and participant/personnel blinding were mostly rated as unclear due to inadequate reporting. Based on seven items, two studies had high risk of bias for one item. Compared to usual care, Web-based decision aids increased knowledge (SMD 0.46; 95% CI 0.18-0.75), reduced decisional conflict (MD -7.07%; 95% CI -9.44 to -4.71), and reduced the practitioner control role in the decision-making process (RR 0.50; 95% CI 0.31-0.81). Web-based decision aids compared to printed decision aids yielded no differences in knowledge, decisional conflict, and participation in decision or screening behaviors. 
Compared to video decision aids, Web-based decision aids showed lower average knowledge scores (SMD -0.50; 95% CI -0.88 to -0.12) and a slight increase in prostate-specific antigen screening (RR 1.12; 95% CI 1.01-1.25). According to this analysis, Web-based decision aids performed similarly to alternative formats (ie, printed, video) for the assessed decision-quality outcomes. The low cost, readiness, availability, and anonymity of the Web can be an advantage for increasing access to decision aids that support prostate cancer screening decisions among men. ©Sofia Baptista, Elvira Teles Sampaio, Bruno Heleno, Luís Filipe Azevedo, Carlos Martins. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 26.06.2018.
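The pooled MD/SMD estimates in a meta-analysis like this come from a random-effects model. The sketch below is a generic DerSimonian-Laird implementation (not the authors' actual analysis code) showing how per-study effect sizes and variances are combined into a pooled effect with a 95% confidence interval.

```python
from math import sqrt

def dersimonian_laird(effects, variances):
    """Generic DerSimonian-Laird random-effects pooling: estimate the
    between-study variance tau^2 from Cochran's Q, then take an
    inverse-variance weighted mean with tau^2 added to each study's variance."""
    w = [1.0 / v for v in variances]                 # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = sqrt(1.0 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)
```

When the studies are homogeneous, tau^2 collapses to zero and the result reduces to the fixed-effect inverse-variance mean; heterogeneity widens the interval instead.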
IBM's Health Analytics and Clinical Decision Support.
Kohn, M S; Sun, J; Knoop, S; Shabo, A; Carmeli, B; Sow, D; Syed-Mahmood, T; Rapp, W
2014-08-15
This survey explores the role of big data and health analytics developed by IBM in supporting the transformation of healthcare by augmenting evidence-based decision-making. Some problems in healthcare and strategies for change are described. It is argued that change requires better decisions, which, in turn, require better use of the many kinds of healthcare information. Analytic resources that address each of the information challenges are described. Examples of the role of each of the resources are given. There are powerful analytic tools that utilize the various kinds of big data in healthcare to help clinicians make more personalized, evidenced-based decisions. Such resources can extract relevant information and provide insights that clinicians can use to make evidence-supported decisions. There are early suggestions that these resources have clinical value. As with all analytic tools, they are limited by the amount and quality of data. Big data is an inevitable part of the future of healthcare. There is a compelling need to manage and use big data to make better decisions to support the transformation of healthcare to the personalized, evidence-supported model of the future. Cognitive computing resources are necessary to manage the challenges in employing big data in healthcare. Such tools have been and are being developed. The analytic resources, themselves, do not drive, but support healthcare transformation.
NASA Astrophysics Data System (ADS)
Roy, Jean; Breton, Richard; Paradis, Stephane
2001-08-01
Situation Awareness (SAW) is essential for commanders to conduct decision-making (DM) activities. Situation Analysis (SA) is defined as a process, the examination of a situation, its elements, and their relations, to provide and maintain a product, i.e., a state of SAW for the decision maker. Operational trends in warfare put the situation analysis process under pressure. This emphasizes the need for a real-time computer-based Situation Analysis Support System (SASS) to aid commanders in achieving the appropriate situation awareness, thereby supporting their response to actual or anticipated threats. Data fusion is clearly a key enabler for SA and a SASS. Since data fusion is used for SA in support of dynamic human decision-making, the exploration of SA concepts and the design of data fusion techniques must take human factors into account in order to ensure a cognitive fit of the fusion system with the decision-maker. Indeed, the tight integration of the human element with the SA technology is essential. Regarding these issues, this paper provides a description of CODSI (Command Decision Support Interface), an operational-like human-machine interface prototype for investigations in computer-based SA and command decision support. With CODSI, one objective was to apply recent developments in SA theory and information display technology to the problem of enhancing SAW quality. It thus provides a capability to adequately convey tactical information to command decision makers. It also supports the study of human-computer interactions for SA, and methodologies for SAW measurement.
Deciphering the Function of New Gonococcal Vaccine Antigens Using Phenotypic Microarrays
Baarda, Benjamin I.; Emerson, Sarah; Proteau, Philip J.
2017-01-01
The function and extracellular location of cell envelope proteins make them attractive candidates for developing vaccines against bacterial diseases, including challenging drug-resistant pathogens, such as Neisseria gonorrhoeae. A proteomics-driven reverse vaccinology approach has delivered multiple gonorrhea vaccine candidates; however, the biological functions of many of them remain to be elucidated. Herein, the functions of six gonorrhea vaccine candidates—NGO2121, NGO1985, NGO2054, NGO2111, NGO1205, and NGO1344—in cell envelope homeostasis were probed using phenotype microarrays under 1,056 conditions and a ΔbamE mutant (Δngo1780) as a reference of perturbed outer membrane integrity. Optimal growth conditions for an N. gonorrhoeae phenotype microarray assay in defined liquid medium were developed, which can be useful in other applications, including rapid and thorough antimicrobial susceptibility assessment. Our studies revealed 91 conditions having uniquely positive or negative effects on one of the examined mutants. A cluster analysis of 37 and 57 commonly beneficial and detrimental compounds, respectively, revealed three separate phenotype groups: NGO2121 and NGO1985; NGO1344 and BamE; and the trio of NGO1205, NGO2111, and NGO2054, with the last protein forming an independent branch of this cluster. Similar phenotypes were associated with loss of these vaccine candidates in the highly antibiotic-resistant WHO X strain. Based on their extensive sensitivity phenomes, NGO1985 and NGO2121 appear to be the most promising vaccine candidates. This study establishes the principle that phenotype microarrays can be successfully applied to a fastidious bacterial organism, such as N. gonorrhoeae. IMPORTANCE Innovative approaches are required to develop vaccines against prevalent and neglected sexually transmitted infections, such as gonorrhea. 
Herein, we have utilized phenotype microarrays in the first such investigation into Neisseria gonorrhoeae to probe the function of proteome-derived vaccine candidates in cell envelope homeostasis. Information gained from this screening can feed the vaccine candidate decision tree by providing insights into the roles these proteins play in membrane permeability, integrity, and overall N. gonorrhoeae physiology. The optimized screening protocol can be applied in investigations into the function of other hypothetical proteins of N. gonorrhoeae discovered in the expanding number of whole-genome sequences, in addition to revealing phenotypic differences between clinical and laboratory strains. PMID:28630127
2014-10-01
designed an Internet-based and mobile application (software) to assist with the following domains pertinent to diabetes self-management: 1...management that provides education, reminders, and support. The new tool is an internet-based and mobile application (software), now called Tracking...is mobile, provides decision support with actionable options, and is based on user input, will enhance diabetes self-care, improve glycemic control
Girard, Laurie D.; Boissinot, Karel; Peytavi, Régis; Boissinot, Maurice; Bergeron, Michel G.
2014-01-01
The combination of molecular diagnostic technologies is increasingly used to overcome limitations on sensitivity, specificity or multiplexing capabilities, and provide efficient lab-on-chip devices. Two such techniques, PCR amplification and microarray hybridization, are used serially to take advantage of the high sensitivity and specificity of the former combined with the high multiplexing capacities of the latter. These methods are usually performed in different buffers and reaction chambers. However, these elaborate methods have high complexity and cost related to reagent requirements, liquid storage and the number of reaction chambers to integrate into automated devices. Furthermore, microarray hybridizations have a sequence-dependent efficiency that is not always predictable. In this work, we have developed the concept of a structured oligonucleotide probe which is activated by cleavage from polymerase exonuclease activity. This technology is called SCISSOHR for Structured Cleavage Induced Single-Stranded Oligonucleotide Hybridization Reaction. The SCISSOHR probes enable indexing the target sequence to a tag sequence. The SCISSOHR technology also allows the combination of nucleic acid amplification and microarray hybridization in a single vessel in the presence of the PCR buffer only. The SCISSOHR technology uses an amplification probe that is irreversibly modified in the presence of the target, releasing a single-stranded DNA tag for microarray hybridization. Each tag is composed of a 3-nucleotide sequence-dependent segment and a unique “target sequence-independent” 14-nucleotide segment allowing for optimal hybridization with minimal cross-hybridization. We evaluated the performance of five (5) PCR buffers to support microarray hybridization, compared to a conventional hybridization buffer. Finally, as a proof of concept, we developed a multiplexed assay for the amplification, detection, and identification of three (3) DNA targets. 
This new technology will facilitate the design of lab-on-chip microfluidic devices, while also reducing consumable costs. At term, it will allow the cost-effective automation of highly multiplexed assays for detection and identification of genetic targets. PMID:25489607
Girard, Laurie D; Boissinot, Karel; Peytavi, Régis; Boissinot, Maurice; Bergeron, Michel G
2015-02-07
The combination of molecular diagnostic technologies is increasingly used to overcome limitations on sensitivity, specificity or multiplexing capabilities, and provide efficient lab-on-chip devices. Two such techniques, PCR amplification and microarray hybridization, are used serially to take advantage of the high sensitivity and specificity of the former combined with the high multiplexing capacities of the latter. These methods are usually performed in different buffers and reaction chambers. However, these elaborate methods have high complexity and cost related to reagent requirements, liquid storage and the number of reaction chambers to integrate into automated devices. Furthermore, microarray hybridizations have a sequence-dependent efficiency that is not always predictable. In this work, we have developed the concept of a structured oligonucleotide probe which is activated by cleavage from polymerase exonuclease activity. This technology is called SCISSOHR for Structured Cleavage Induced Single-Stranded Oligonucleotide Hybridization Reaction. The SCISSOHR probes enable indexing the target sequence to a tag sequence. The SCISSOHR technology also allows the combination of nucleic acid amplification and microarray hybridization in a single vessel in the presence of the PCR buffer only. The SCISSOHR technology uses an amplification probe that is irreversibly modified in the presence of the target, releasing a single-stranded DNA tag for microarray hybridization. Each tag is composed of a 3-nucleotide sequence-dependent segment and a unique "target sequence-independent" 14-nucleotide segment allowing for optimal hybridization with minimal cross-hybridization. We evaluated the performance of five (5) PCR buffers to support microarray hybridization, compared to a conventional hybridization buffer. Finally, as a proof of concept, we developed a multiplexed assay for the amplification, detection, and identification of three (3) DNA targets. 
This new technology will facilitate the design of lab-on-chip microfluidic devices, while also reducing consumable costs. At term, it will allow the cost-effective automation of highly multiplexed assays for detection and identification of genetic targets.
Tsalatsanis, Athanasios; Barnes, Laura E; Hozo, Iztok; Djulbegovic, Benjamin
2011-12-23
Despite the well-documented advantages of hospice care, most terminally ill patients do not reap the maximum benefit from hospice services, with the majority of them receiving hospice care either too early or too late. Decision systems to improve the hospice referral process are sorely needed. We present a novel theoretical framework that is based on well-established methodologies of prognostication and decision analysis to assist with the hospice referral process for terminally ill patients. We linked the SUPPORT statistical model, widely regarded as one of the most accurate models for prognostication of terminally ill patients, with the recently developed regret-based decision curve analysis (regret DCA). We extend the regret DCA methodology to consider harms associated with the prognostication test as well as harms and effects of the management strategies. To enable patients and physicians to make these complex decisions in real time, we developed an easily accessible web-based decision support system available at the point of care. The web-based decision support system facilitates the hospice referral process in three steps. First, the patient or surrogate is interviewed to elicit his/her personal preferences regarding the continuation of life-sustaining treatment vs. palliative care. Then, regret DCA is employed to identify the best strategy for the particular patient in terms of the threshold probability at which he/she is indifferent between continuation of treatment and hospice referral. Finally, if necessary, the probabilities of survival and death for the particular patient are computed based on the SUPPORT prognostication model and contrasted with the patient's threshold probability. The web-based design of the CDSS enables patients, physicians, and family members to participate in the decision process from anywhere internet access is available. We present a theoretical framework to facilitate the hospice referral process. 
Further rigorous clinical evaluation including testing in a prospective randomized controlled trial is required and planned.
2011-01-01
Background Despite the well-documented advantages of hospice care, most terminally ill patients do not reap the maximum benefit from hospice services, with the majority of them receiving hospice care either too early or too late. Decision systems to improve the hospice referral process are sorely needed. Methods We present a novel theoretical framework that is based on well-established methodologies of prognostication and decision analysis to assist with the hospice referral process for terminally ill patients. We linked the SUPPORT statistical model, widely regarded as one of the most accurate models for prognostication of terminally ill patients, with the recently developed regret-based decision curve analysis (regret DCA). We extend the regret DCA methodology to consider harms associated with the prognostication test as well as harms and effects of the management strategies. To enable patients and physicians to make these complex decisions in real time, we developed an easily accessible web-based decision support system available at the point of care. Results The web-based decision support system facilitates the hospice referral process in three steps. First, the patient or surrogate is interviewed to elicit his/her personal preferences regarding the continuation of life-sustaining treatment vs. palliative care. Then, regret DCA is employed to identify the best strategy for the particular patient in terms of the threshold probability at which he/she is indifferent between continuation of treatment and hospice referral. Finally, if necessary, the probabilities of survival and death for the particular patient are computed based on the SUPPORT prognostication model and contrasted with the patient's threshold probability. The web-based design of the CDSS enables patients, physicians, and family members to participate in the decision process from anywhere internet access is available. 
Conclusions We present a theoretical framework to facilitate the hospice referral process. Further rigorous clinical evaluation including testing in a prospective randomized controlled trial is required and planned. PMID:22196308
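The indifference threshold at the heart of this framework can be illustrated with a short sketch. This uses the classic decision-analytic threshold formula, p_t = harm / (harm + benefit), rather than the paper's exact regret formulation; the function names and the harm/benefit elicitation are illustrative assumptions:

```python
def threshold_probability(harm, benefit):
    """Classic indifference threshold (Pauker-Kassirer style): act, e.g.
    refer to hospice, when the predicted probability of the outcome
    exceeds p_t = harm / (harm + benefit).  `harm` is the regret of an
    unnecessary referral; `benefit` the gain from a timely one."""
    return harm / (harm + benefit)

def recommend(p_death, harm, benefit):
    """Compare a SUPPORT-style predicted probability of death against the
    elicited threshold.  Names and scales here are illustrative."""
    p_t = threshold_probability(harm, benefit)
    return "refer to hospice" if p_death >= p_t else "continue treatment"
```

For a patient who judges a timely referral three times as beneficial as an unnecessary one is harmful, the threshold is 0.25, so a predicted probability of death above 25% would favor referral under this simplified rule.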
Optimized Probe Masking for Comparative Transcriptomics of Closely Related Species
Poeschl, Yvonne; Delker, Carolin; Trenner, Jana; Ullrich, Kristian Karsten; Quint, Marcel; Grosse, Ivo
2013-01-01
Microarrays are commonly applied to study the transcriptome of specific species. However, many available microarrays are restricted to model organisms, and the design of custom microarrays for other species is often not feasible. Hence, transcriptomics approaches for non-model organisms, as well as comparative transcriptomics studies among two or more species, often rely on cost-intensive RNAseq studies or, alternatively, hybridize transcripts of a query species to a microarray of a closely related species. When analyzing these cross-species microarray expression data, differences in the transcriptome of the query species can cause problems, such as the following: (i) lower hybridization accuracy of probes due to mismatches or deletions, (ii) probes binding multiple transcripts of different genes, and (iii) probes binding transcripts of non-orthologous genes. So far, methods for (i) exist, but these neglect (ii) and (iii). Here, we propose an approach for comparative transcriptomics addressing problems (i) to (iii), which retains only transcript-specific probes binding transcripts of orthologous genes. We apply this approach to an Arabidopsis lyrata expression data set measured on a microarray designed for Arabidopsis thaliana, and compare it to two alternative approaches, a sequence-based approach and a genomic DNA hybridization-based approach. We investigate the number of retained probe sets, and we validate the resulting expression responses by qRT-PCR. We find that the proposed approach combines the benefits of sequence-based stringency and accuracy while allowing the expression analysis of many more genes than the alternative sequence-based approach. As an added benefit, the proposed approach requires probes to detect transcripts of orthologous genes only, which provides a superior base for biological interpretation of the measured expression responses. PMID:24260119
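The retention criteria (i) to (iii) amount to a filter over probe alignments. A minimal sketch, assuming simple illustrative data structures (a probe-to-alignment map and an ortholog set) rather than the authors' actual masking pipeline:

```python
def retain_probes(probe_hits, orthologs, max_mismatches=0):
    """Probe-masking sketch for cross-species microarrays.  `probe_hits`
    maps each probe id to a list of (transcript_id, mismatch_count)
    alignments in the query species; `orthologs` is the set of query
    transcripts with an ortholog on the array's design species.  A probe
    is retained only if it
      (i)   aligns well (mismatches within tolerance),
      (ii)  is transcript-specific (exactly one good alignment), and
      (iii) that transcript belongs to an orthologous gene."""
    kept = set()
    for probe, hits in probe_hits.items():
        good = [t for t, mm in hits if mm <= max_mismatches]
        if len(good) == 1 and good[0] in orthologs:
            kept.add(probe)
    return kept
```

A probe hitting two transcripts is dropped under (ii) even if both alignments are perfect, which is the distinction this approach adds over purely mismatch-based masking.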
Reuse of imputed data in microarray analysis increases imputation efficiency
Kim, Ki-Yeol; Kim, Byoung-Jin; Yi, Gwan-Su
2004-01-01
Background The imputation of missing values is necessary for the efficient use of DNA microarray data, because many clustering algorithms and some statistical analyses require a complete data set. A few imputation methods for DNA microarray data have been introduced, but their efficiency was low and the validity of the imputed values had not been fully checked. Results We developed a new cluster-based imputation method called the sequential K-nearest neighbor (SKNN) method. It imputes the missing values sequentially, starting from the gene with the fewest missing values, and uses the imputed values for later imputations. Although it reuses imputed values, the new method greatly improves on the conventional KNN-based method and on methods based on maximum likelihood estimation in both accuracy and computational complexity. The performance of SKNN was particularly strong relative to other imputation methods for data with high missing rates and large numbers of experiments. Applying Expectation Maximization (EM) to the SKNN method improved the accuracy but increased the computational time in proportion to the number of iterations. The Multiple Imputation (MI) method, well known but not previously applied to microarray data, showed accuracy similar to the SKNN method, with slightly higher dependency on the types of data sets. Conclusions Sequential reuse of imputed data in KNN-based imputation greatly increases imputation efficiency. The SKNN method should be practically useful for salvaging microarray experiments that contain high proportions of missing entries, and it generates reliable imputed values that can be used for further cluster-based analysis of microarray data. PMID:15504240
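The sequential-reuse idea behind SKNN can be sketched as follows. This is a minimal illustration of the described scheme (rows imputed in order of increasing missingness, with imputed rows rejoining the neighbor pool), not the authors' implementation; distance and averaging choices are assumptions:

```python
import numpy as np

def sknn_impute(X, k=3):
    """Sequential KNN imputation sketch: genes (rows) are imputed in
    order of increasing missing-value count, and each newly completed
    row joins the reference pool used for later rows."""
    X = np.array(X, dtype=float)
    order = np.argsort(np.isnan(X).sum(axis=1))
    pool = [i for i in order if not np.isnan(X[i]).any()]  # complete rows
    for i in order:
        miss = np.isnan(X[i])
        if not miss.any():
            continue
        obs = ~miss
        # Euclidean distance to pool rows over the observed columns only
        dists = [(np.linalg.norm(X[i, obs] - X[j, obs]), j) for j in pool]
        neighbors = [j for _, j in sorted(dists)[:k]]
        X[i, miss] = X[neighbors][:, miss].mean(axis=0)
        pool.append(i)  # sequential reuse: imputed row helps later rows
    return X
```

With high missing rates the pool of fully observed genes is small, and it is exactly this reuse step that keeps enough neighbors available, which matches the regime where the paper reports SKNN's advantage.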
Evidence and Obesity Prevention: Developing Evidence Summaries to Support Decision Making
ERIC Educational Resources Information Center
Clark, Rachel; Waters, Elizabeth; Armstrong, Rebecca; Conning, Rebecca; Allender, Steven; Swinburn, Boyd
2013-01-01
Public health practitioners make decisions based on research evidence in combination with a variety of other influences. Evidence summaries are one of a range of knowledge translation options used to support evidence-informed decision making. The literature relevant to obesity prevention requires synthesis for it to be accessible and relevant to…
Choosing to Decline: Finding Common Ground through the Perspective of Shared Decision Making.
Megregian, Michele; Nieuwenhuijze, Marianne
2018-05-18
Respectful communication is a key component of any clinical relationship. Shared decision making is the process of collaboration that occurs between a health care provider and patient in order to make health care decisions based upon the best available evidence and the individual's preferences. A midwife and woman (and her support persons) engage together to make health care decisions, using respectful communication that is based upon the best available evidence and the woman's preferences, values, and goals. Supporting a woman's autonomy, however, can be particularly challenging in maternity care when recommended treatments or interventions are declined. In the past, the real or perceived increased risk to a woman's health or that of her fetus as a result of that choice has occasionally resulted in coercion. Through the process of shared decision making, the woman's autonomy may be supported, including the choice to decline interventions. The case presented here demonstrates how a shared decision-making framework can support the health care provider-patient relationship in the context of informed refusal. © 2018 by the American College of Nurse-Midwives.
Bornstein, Aaron M.; Daw, Nathaniel D.
2013-01-01
How do we use our memories of the past to guide decisions we've never had to make before? Although extensive work describes how the brain learns to repeat rewarded actions, decisions can also be influenced by associations between stimuli or events not directly involving reward — such as when planning routes using a cognitive map or chess moves using predicted countermoves — and these sorts of associations are critical when deciding among novel options. This process is known as model-based decision making. While the learning of environmental relations that might support model-based decisions is well studied, and separately this sort of information has been inferred to impact decisions, there is little evidence concerning the full cycle by which such associations are acquired and drive choices. Of particular interest is whether decisions are directly supported by the same mnemonic systems characterized for relational learning more generally, or instead rely on other, specialized representations. Here, building on our previous work, which isolated dual representations underlying sequential predictive learning, we directly demonstrate that one such representation, encoded by the hippocampal memory system and adjacent cortical structures, supports goal-directed decisions. Using interleaved learning and decision tasks, we monitor predictive learning directly and also trace its influence on decisions for reward. We quantitatively compare the learning processes underlying multiple behavioral and fMRI observables using computational model fits. Across both tasks, a quantitatively consistent learning process explains reaction times, choices, and both expectation- and surprise-related neural activity. The same hippocampal and ventral stream regions engaged in anticipating stimuli during learning are also engaged in proportion to the difficulty of decisions. 
These results support a role for predictive associations learned by the hippocampal memory system to be recalled during choice formation. PMID:24339770
Conformance Testing: Measurement Decision Rules
NASA Technical Reports Server (NTRS)
Mimbs, Scott M.
2010-01-01
The goal of a Quality Management System (QMS) as specified in ISO 9001 and AS9100 is to provide assurance to the customer that end products meet specifications. Measuring devices, often called measuring and test equipment (MTE), are used to provide the evidence of product conformity to specified requirements. Unfortunately, processes that employ MTE can become a weak link to the overall QMS if proper attention is not given to the measurement process design, capability, and implementation. Documented "decision rules" establish the requirements to ensure measurement processes provide the measurement data that supports the needs of the QMS. Measurement data are used to make the decisions that impact all areas of technology. Whether measurements support research, design, production, or maintenance, ensuring the data supports the decision is crucial. Measurement data quality can be critical to the resulting consequences of measurement-based decisions. Historically, most industries required simplistic, one-size-fits-all decision rules for measurements. One-size-fits-all rules in some cases are not rigorous enough to provide adequate measurement results, while in other cases are overly conservative and too costly to implement. Ideally, decision rules should be rigorous enough to match the criticality of the parameter being measured, while being flexible enough to be cost effective. The goal of a decision rule is to ensure that measurement processes provide data with a sufficient level of quality to support the decisions being made - no more, no less. This paper discusses the basic concepts of providing measurement-based evidence that end products meet specifications. Although relevant to all measurement-based conformance tests, the target audience is the MTE end-user, which is anyone using MTE other than calibration service providers. 
Topics include measurement fundamentals, the associated decision risks, verifying conformance to specifications, and basic measurement decision rules.
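One common family of decision rules in this area is guard-banded acceptance, where the acceptance zone is shrunk by the expanded measurement uncertainty so that "accept" carries a reduced false-accept risk. The sketch below is a generic two-sided rule in the spirit of ISO 14253-1 / ILAC-G8 guidance, not a rule prescribed by this paper:

```python
def conformance_decision(measured, lower, upper, expanded_uncertainty):
    """Guard-banded acceptance sketch: the specification zone
    [lower, upper] is shrunk by the expanded uncertainty U on each side.
    Results inside the spec limits but within a guard band are flagged
    as indeterminate rather than accepted."""
    if lower + expanded_uncertainty <= measured <= upper - expanded_uncertainty:
        return "accept"
    if measured < lower or measured > upper:
        return "reject"
    return "indeterminate"  # inside spec, but within the guard band
```

This illustrates the paper's point that rule stringency can be matched to the criticality of the parameter: a critical parameter might treat "indeterminate" as a rejection, while a benign one might accept it.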
Need-Supportive Advising for Undecided Students
ERIC Educational Resources Information Center
Leach, Jennifer Kay; Patall, Erika A.
2016-01-01
To explore the relationship between need-supportive advising and students' decision making on academic majors, we conducted a longitudinal study of 145 students based on their reports of basic psychological need satisfaction and their decision-making processes. We hypothesized that need-supportive advising would positively contribute to autonomous…
Web-based health services and clinical decision support.
Jegelevicius, Darius; Marozas, Vaidotas; Lukosevicius, Arunas; Patasius, Martynas
2004-01-01
The purpose of this study was the development of a Web-based e-health service for comprehensive assistance and clinical decision support. The service structure consists of a Web server, a PHP-based Web interface linked to a clinical SQL database, Java applets for interactive manipulation and visualization of signals, and a Matlab server linked with signal and data processing algorithms implemented as Matlab programs. The service provides clinical decision support based on diagnostic signal and image analysis. Using this methodology, a pilot service for pathology specialists for automatic calculation of the proliferation index has been developed. Physicians use a simple Web interface to upload the images under investigation to the server; a Java applet interface is then used to outline the region of interest and, after processing on the server, the requested proliferation index value is calculated. There is also an "expert corner", where experts can submit their index estimates and comments on particular images, which is especially important for system developers. These expert evaluations are used for optimization and verification of the automatic analysis algorithms. Decision support trials have been conducted for ECG and for ophthalmologic ultrasonic investigations of intraocular tumor differentiation. Data mining algorithms have been applied and decision trees constructed; these services are also being implemented as Web-based systems. The study has shown that the Web-based structure ensures more effective, flexible and accessible services compared with standalone programs and is very convenient for biomedical engineers and physicians, especially in the development phase.
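The proliferation index computed server-side is, in its simplest form, the percentage of immunopositive nuclei in the outlined region of interest. The segmentation itself is done by the service's Matlab programs; this sketch only shows the final ratio, and the function name and percentage convention are assumptions:

```python
def proliferation_index(positive_nuclei, total_nuclei):
    """Proliferation index sketch: percentage of positively stained
    (e.g. immunopositive) nuclei among all nuclei counted in the
    outlined region of interest."""
    if total_nuclei == 0:
        raise ValueError("empty region of interest")
    return 100.0 * positive_nuclei / total_nuclei
```

The "expert corner" estimates described above would be compared against this automatically computed value to verify the segmentation and counting algorithms.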
arrayCGHbase: an analysis platform for comparative genomic hybridization microarrays
Menten, Björn; Pattyn, Filip; De Preter, Katleen; Robbrecht, Piet; Michels, Evi; Buysse, Karen; Mortier, Geert; De Paepe, Anne; van Vooren, Steven; Vermeesch, Joris; Moreau, Yves; De Moor, Bart; Vermeulen, Stefan; Speleman, Frank; Vandesompele, Jo
2005-01-01
Background The availability of the human genome sequence as well as the large number of physically accessible oligonucleotides, cDNA, and BAC clones across the entire genome has triggered and accelerated the use of several platforms for analysis of DNA copy number changes, amongst others microarray comparative genomic hybridization (arrayCGH). One of the challenges inherent to this new technology is the management and analysis of large numbers of data points generated in each individual experiment. Results We have developed arrayCGHbase, a comprehensive analysis platform for arrayCGH experiments consisting of a MIAME (Minimal Information About a Microarray Experiment) supportive database using MySQL underlying a data mining web tool, to store, analyze, interpret, compare, and visualize arrayCGH results in a uniform and user-friendly format. Following its flexible design, arrayCGHbase is compatible with all existing and forthcoming arrayCGH platforms. Data can be exported in a multitude of formats, including BED files to map copy number information on the genome using the Ensembl or UCSC genome browser. Conclusion ArrayCGHbase is a web based and platform independent arrayCGH data analysis tool, that allows users to access the analysis suite through the internet or a local intranet after installation on a private server. ArrayCGHbase is available at . PMID:15910681
cDNA Microarray Screening in Food Safety
ROY, SASHWATI; SEN, CHANDAN K
2009-01-01
cDNA microarray technology and related bioinformatics tools present a wide range of novel application opportunities. The technology may be productively applied to address food safety. In this mini-review article, we present an update highlighting late-breaking discoveries that demonstrate the vitality of cDNA microarray technology as a tool to analyze food safety with reference to microbial pathogens and genetically modified foods. To bring microarray technology to mainstream food safety, it is important to develop robust user-friendly tools that may be applied in a field setting. In addition, there needs to be a standardized process for regulatory agencies to interpret and act upon microarray-based data. The cDNA microarray approach is an emergent technology in diagnostics. Its value lies in being able to provide complementary molecular insight when employed alongside traditional tests for food safety, as part of a more comprehensive battery of tests. PMID:16466843
ArrayNinja: An Open Source Platform for Unified Planning and Analysis of Microarray Experiments.
Dickson, B M; Cornett, E M; Ramjan, Z; Rothbart, S B
2016-01-01
Microarray-based proteomic platforms have emerged as valuable tools for studying various aspects of protein function, particularly in the field of chromatin biochemistry. Microarray technology itself is largely unrestricted in regard to printable material and platform design, and efficient multidimensional optimization of assay parameters requires fluidity in the design and analysis of custom print layouts. This motivates the need for streamlined software infrastructure that facilitates the combined planning and analysis of custom microarray experiments. To this end, we have developed ArrayNinja as a portable, open source, and interactive application that unifies the planning and visualization of microarray experiments and provides maximum flexibility to end users. Array experiments can be planned, stored to a private database, and merged with the imaged results for a level of data interaction and centralization that is not currently attainable with available microarray informatics tools. © 2016 Elsevier Inc. All rights reserved.
Giordano, R; Passarella, G; Uricchio, V F; Vurro, M
2007-07-01
The importance of shared decision processes in water management derives from the awareness of the inadequacy of traditional--i.e. engineering--approaches in dealing with complex and ill-structured problems. It is becoming increasingly obvious that traditional problem solving and decision support techniques, based on optimisation and factual knowledge, have to be combined with stakeholder based policy design and implementation. The aim of our research is the definition of an integrated decision support system for consensus achievement (IDSS-C) able to support a participative decision-making process in all its phases: problem definition and structuring, identification of the possible alternatives, formulation of participants' judgments, and consensus achievement. Furthermore, the IDSS-C aims at structuring, i.e. systematising the knowledge which has emerged during the participative process in order to make it comprehensible for the decision-makers and functional for the decision process. Problem structuring methods (PSM) and multi-group evaluation methods (MEM) have been integrated in the IDSS-C. PSM are used to support the stakeholders in providing their perspective of the problem and to elicit their interests and preferences, while MEM are used to define not only the degree of consensus for each alternative, highlighting those where the agreement is high, but also the consensus label for each alternative and the behaviour of individuals during the participative decision-making. The IDSS-C is applied experimentally to a decision process regarding the use of treated wastewater for agricultural irrigation in the Apulia Region (southern Italy).
NASA Astrophysics Data System (ADS)
Liu, Robin H.; Lodes, Mike; Fuji, H. Sho; Danley, David; McShea, Andrew
Microarray assays typically involve multistage sample processing and fluidic handling, which are generally labor-intensive and time-consuming. Automation of these processes would improve robustness, reduce run-to-run and operator-to-operator variation, and reduce costs. In this chapter, a fully integrated and self-contained microfluidic biochip device that has been developed to automate the fluidic handling steps for microarray-based gene expression or genotyping analysis is presented. The device consists of a semiconductor-based CustomArray® chip with 12,000 features and a microfluidic cartridge. The CustomArray was manufactured using a semiconductor-based in situ synthesis technology. The microfluidic cartridge consists of microfluidic pumps, mixers, valves, fluid channels, and reagent storage chambers. Microarray hybridization and subsequent fluidic handling and reactions (including a number of washing and labeling steps) were performed in this fully automated and miniature device before fluorescent image scanning of the microarray chip. Electrochemical micropumps were integrated in the cartridge to provide pumping of liquid solutions. A micromixing technique based on gas bubbling generated by electrochemical micropumps was developed. Low-cost check valves were implemented in the cartridge to prevent cross-talk of the stored reagents. Gene expression study of the human leukemia cell line (K562) and genotyping detection and sequencing of influenza A subtypes have been demonstrated using this integrated biochip platform. For gene expression assays, the microfluidic CustomArray device detected sample RNAs with a concentration as low as 0.375 pM. Detection was quantitative over more than three orders of magnitude. Experiments also showed that chip-to-chip variability was low, indicating that the integrated microfluidic devices eliminate manual fluidic handling steps that can be a significant source of variability in genomic analysis. 
The genotyping results showed that the device identified influenza A hemagglutinin and neuraminidase subtypes and sequenced portions of both genes, demonstrating the potential of integrated microfluidic and microarray technology for multiple virus detection. The device provides a cost-effective solution to eliminate labor-intensive and time-consuming fluidic handling steps and allows microarray-based DNA analysis in a rapid and automated fashion.
DNA Microarray Wet Lab Simulation Brings Genomics into the High School Curriculum
ERIC Educational Resources Information Center
Campbell, A. Malcolm; Zanta, Carolyn A.; Heyer, Laurie J.; Kittinger, Ben; Gabric, Kathleen M.; Adler, Leslie
2006-01-01
We have developed a wet lab DNA microarray simulation as part of a complete DNA microarray module for high school students. The wet lab simulation has been field tested with high school students in Illinois and Maryland as well as in workshops with high school teachers from across the nation. Instead of using DNA, our simulation is based on pH…
Prostate Cancer Biorepository Network
2017-10-01
...clinical data including pathology and outcome data are annotated with the biospecimens. Specialized processing consists of tissue microarray design...Months 1-6): Completed in 1st quarter. Task 5. Report on performance metrics: Ongoing (accrual reports are provided on a quarterly basis). Task 6
Feature Selection with Conjunctions of Decision Stumps and Learning from Microarray Data.
Shah, M; Marchand, M; Corbeil, J
2012-01-01
One of the objectives of designing feature selection learning algorithms is to obtain classifiers that depend on a small number of attributes and have verifiable guarantees on future performance. There are few, if any, approaches that successfully address the two goals simultaneously. To the best of our knowledge, algorithms that give theoretical bounds on future performance have not been proposed so far in the context of the classification of gene expression data. In this work, we investigate the premise of learning a conjunction (or disjunction) of decision stumps in the Occam's Razor, Sample Compression, and PAC-Bayes learning settings for identifying a small subset of attributes that can be used to perform reliable classification tasks. We apply the proposed approaches to gene identification from DNA microarray data and compare our results to those of the well-known successful approaches proposed for the task. We show that our algorithm not only finds hypotheses with a much smaller number of genes while giving competitive classification accuracy, but also provides tight risk guarantees on future performance, unlike other approaches. The proposed approaches are general and extensible in terms of both designing novel algorithms and application to other domains.
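The conjunction-of-stumps idea can be illustrated with a minimal greedy learner. This is a hedged sketch on toy data, not the paper's bound-driven Occam's Razor / Sample Compression / PAC-Bayes algorithms; the function names and candidate-stump format are invented for illustration:

```python
# Greedy conjunction of decision stumps (illustrative sketch).
# Each stump tests one attribute against a threshold; the conjunction
# predicts positive only if every stump fires.

def stump(feature, threshold, direction):
    """Return a predicate x -> bool testing one attribute."""
    if direction == ">=":
        return lambda x: x[feature] >= threshold
    return lambda x: x[feature] < threshold

def predict(conjunction, x):
    """A conjunction is positive only when all its stumps fire."""
    return all(s(x) for s in conjunction)

def greedy_conjunction(X, y, candidates, max_stumps=3):
    """Greedily add the candidate stump that most reduces training errors."""
    conj = []
    for _ in range(max_stumps):
        best = None
        best_err = sum(predict(conj, x) != t for x, t in zip(X, y))
        for feature, threshold, direction in candidates:
            trial = conj + [stump(feature, threshold, direction)]
            err = sum(predict(trial, x) != t for x, t in zip(X, y))
            if err < best_err:
                best, best_err = trial, err
        if best is None:  # no candidate strictly improves; stop early
            break
        conj = best
    return conj
```

Because each stump is a one-attribute threshold test, the learned classifier reads as an interpretable rule, e.g. "attribute 0 >= 4 AND attribute 1 >= 1", which matches the motivation of depending on very few attributes.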
A Semantic Approach with Decision Support for Safety Service in Smart Home Management
Huang, Xiaoci; Yi, Jianjun; Zhu, Xiaomin; Chen, Shaoli
2016-01-01
Research on smart homes (SHs) has increased significantly in recent years because of the convenience provided by having an assisted living environment. Among the SH functions mentioned in previous studies, however, safety services are seldom discussed. Thus, this study proposes a semantic approach with decision support for safety service in SH management. The focus of this contribution is to explore a context-awareness and reasoning approach for risk recognition in the SH that enables proper decision support for flexible safety service provision. The framework of the SH, based on a wireless sensor network, is described from the perspective of neighbourhood management. The approach is based on the integration of semantic knowledge, in which a reasoner can make decisions about risk recognition and safety service. We present a management ontology for the SH and relevant monitoring contextual information, which considers its suitability in a pervasive computing environment and is service-oriented. We also propose a rule-based reasoning method to provide decision support through reasoning techniques and context-awareness. A system prototype is developed to evaluate the feasibility, response time and extendibility of the approach. The evaluation of our approach shows that it is effective in daily risk event recognition, and the decisions for service provision are shown to be accurate. PMID:27527170
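The rule-based reasoning step can be pictured as a tiny forward-chaining engine over sensed facts. The rules and fact names below are invented examples for illustration, not the paper's ontology or rule base:

```python
# Minimal forward-chaining rule engine for risk recognition (sketch).
# A rule maps a set of required facts (premises) to a derived fact.

RULES = [  # illustrative rules only, not from the paper
    ({"stove_on", "no_motion_kitchen_15min"}, "fire_risk"),
    ({"fire_risk"}, "notify_neighbour"),
    ({"door_open", "resident_absent"}, "intrusion_risk"),
]

def infer(facts, rules=RULES):
    """Apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Chaining lets a derived risk ("fire_risk") itself trigger a service decision ("notify_neighbour"), which is the essence of coupling risk recognition with safety service provision.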
A Peripheral Blood Signature of Vasodilator-Responsive Pulmonary Arterial Hypertension
Hemnes, Anna R.; Trammell, Aaron W.; Archer, Stephen L.; Rich, Stuart; Yu, Chang; Nian, Hui; Penner, Niki; Funke, Mitchell; Wheeler, Lisa; Robbins, Ivan M.; Austin, Eric D.; Newman, John H.; West, James
2014-01-01
Background Heterogeneity in response to treatment of pulmonary arterial hypertension (PAH) is a major challenge to improving outcome in this disease. Although vasodilator-responsive PAH (VR-PAH) accounts for a minority of cases, VR-PAH has a pronounced response to calcium channel blockers and better survival than non-responsive PAH (VN-PAH). We hypothesized that VR-PAH has a different molecular etiology from VN-PAH that can be detected in the peripheral blood. Methods and Results Microarray analysis of cultured lymphocytes from VR-PAH and VN-PAH patients followed at Vanderbilt University was performed, with quantitative PCR on peripheral blood for the 25 most differentially expressed genes. We developed a decision tree to identify VR-PAH patients based on the results, with validation in a second VR-PAH cohort from the University of Chicago. We found broad differences in gene expression patterns on microarray analysis, including cell-cell adhesion factors and cytoskeletal and rho/GTPase genes. 13/25 genes tested in whole blood were significantly different: EPDR1, DSG2, SCD5, P2RY5, MGAT5, RHOQ, UCHL1, ZNF652, RALGPS2, TPD52, MKNL1, RAPGEF2 and PIAS1. Seven decision trees were built using expression levels of two genes as the primary genes: DSG2, a desmosomal cadherin involved in Wnt/β-catenin signaling, and RHOQ, which encodes a cytoskeletal protein involved in insulin-mediated signaling. These trees correctly identified 5/5 VR-PAH patients in the validation cohort. Conclusions VR-PAH and VN-PAH can be differentiated using RNA expression patterns in peripheral blood. These differences may reflect different molecular etiologies of the two PAH phenotypes. This biomarker methodology may identify PAH patients that have a favorable treatment response. PMID:25361553
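A decision tree keyed on two primary genes has a very simple runtime form. The sketch below shows only the shape of such a classifier; the cutoffs are hypothetical placeholders, not values from the study:

```python
# Shape of a two-gene decision tree with DSG2 and RHOQ as primary
# genes. Thresholds are hypothetical placeholders, NOT study values.

DSG2_CUTOFF = 1.5   # hypothetical relative-expression cutoff
RHOQ_CUTOFF = 0.8   # hypothetical relative-expression cutoff

def classify(expression):
    """Return 'VR-PAH' or 'VN-PAH' from a dict of expression levels."""
    if expression["DSG2"] >= DSG2_CUTOFF:
        return "VR-PAH"
    # fall back to the second primary gene
    if expression["RHOQ"] >= RHOQ_CUTOFF:
        return "VR-PAH"
    return "VN-PAH"
```

A tree this shallow is trivially auditable, which is part of the appeal of decision trees as a peripheral-blood biomarker readout.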
A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology
NASA Astrophysics Data System (ADS)
Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli
2007-06-01
Based on an in-depth analysis of the current status and existing problems of GIS technology applications in epidemiology, this paper proposes a method and process for establishing a spatial decision support system for AIDS epidemic prevention by integrating COM GIS, spatial database, GPS, remote sensing, and communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such a system is how to integrate AIDS spreading models with GIS. The capabilities of GIS applications in AIDS epidemic prevention are described first. Then several mature epidemic spreading models are discussed for extracting the computation parameters. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and relevant geospatial technologies, in which the GIS and model-running platforms share a common spatial database and the computing results can be spatially visualized on desktop or Web GIS clients. Finally, a complete solution for establishing the decision support system for AIDS epidemic prevention is offered, based on the model-integration methods and ESRI COM GIS software packages. The overall decision support system is composed of sub-systems for data acquisition, network communication, model integration, the AIDS epidemic information spatial database, information querying and statistical analysis, dynamic epidemic surveillance, spatial analysis and decision support, and Web-GIS-based information publishing.
Sojda, Richard S.; Chen, Serena H.; El Sawah, Sondoss; Guillaume, Joseph H.A.; Jakeman, A.J.; Lautenbach, Sven; McIntosh, Brian S.; Rizzoli, A.E.; Seppelt, Ralf; Struss, Peter; Voinov, Alexey; Volk, Martin
2012-01-01
Two of the basic tenets of decision support system efforts are to help identify and structure the decisions to be supported, and then to provide analysis of how those decisions might best be made. One example from wetland management would be that wildlife biologists must decide when to draw down water levels to optimise aquatic invertebrates as food for breeding ducks. Once such a decision is identified, a system or tool could be developed to help them make that decision in the face of current and projected climate conditions. We examined a random sample of 100 papers published from 2001-2011 in Environmental Modelling and Software that used the phrase “decision support system” or “decision support tool”, and which are characteristic of different sectors. In our review, 41% of the systems and tools related to the water resources sector, 34% were related to agriculture, and 22% to the conservation of fish, wildlife, and protected area management. Only 60% of the papers were deemed to be reporting on DSS; the remainder had not directly identified a specific decision to be supported. We also report on the techniques that were used to identify the decisions, such as formal survey, focus group, expert opinion, or sole judgment of the author(s). The primary underlying modelling system, e.g., expert system, agent-based model, Bayesian belief network, geographical information system (GIS), and the like, was categorised next. Finally, since decision support typically should target some aspect of unstructured decisions, we subjectively determined to what degree this was the case. In only 23% of the papers reviewed did the system appear to tackle unstructured decisions. This knowledge should be useful in helping workers in the field develop more effective systems and tools, especially by being exposed to the approaches of different, but related, disciplines.
We propose that a standard blueprint for reporting on DSS be developed for consideration by journal editors to aid them in filtering papers that use the term, “decision support”.
Web-based Traffic Noise Control Support System for Sustainable Transportation
NASA Astrophysics Data System (ADS)
Fan, Lisa; Dai, Liming; Li, Anson
Traffic noise is considered one of the major forms of pollution that will affect our communities in the future. This paper presents a framework for a web-based traffic noise control support system (WTNCSS) for sustainable transportation. WTNCSS provides decision makers, engineers, and the public a platform to efficiently access information and effectively make decisions related to traffic control. The system is based on a Service Oriented Architecture (SOA), which combines the convenience of the World Wide Web with the XML data format. The whole system is divided into modules such as the prediction module, the ontology-based expert module, and the dynamic online survey module. Each module provides a distinct information service to the decision support center through the HTTP protocol.
Bures, Vladimír; Otcenásková, Tereza; Cech, Pavel; Antos, Karel
2012-11-01
Biological incidents jeopardising public health require decision-making characterised by one dominant feature: complexity. Public health decision-makers therefore need appropriate support. Based on an analogy with business intelligence (BI) principles, a contextual analysis of the environment and available data resources, and conceptual modelling within systems and knowledge engineering, this paper proposes a general framework for computer-based decision support in the case of a biological incident. At the outset, an analysis of potential inputs to the framework is conducted, considering resources such as demographic information, strategic documents, environmental characteristics, agent descriptors and surveillance systems. Consequently, three prototypes were developed, tested and evaluated by a group of experts; their selection was based on the overall framework scheme. Subsequently, an ontology prototype linked with an inference engine, a multi-agent model focusing on the simulation of an environment, and expert-system prototypes were created. All prototypes proved to be usable support tools for decision-making in the field of public health. Nevertheless, the research revealed further issues and challenges that might be investigated by both public-health-focused researchers and practitioners.
Bayesian Decision Support for Adaptive Lung Treatments
NASA Astrophysics Data System (ADS)
McShan, Daniel; Luo, Yi; Schipper, Matt; TenHaken, Randall
2014-03-01
Purpose: A Bayesian Decision Network is demonstrated to provide clinical decision support for adaptive, response-driven lung treatment management, based on evidence that physiologic metrics may correlate better with individual patient response than traditional (population-based) dose- and volume-based metrics. Further, there is evidence that information obtained during the course of radiation therapy may further improve response predictions. Methods: Clinical factors were gathered for 58 patients, including planned mean lung dose and the biomarkers IL-8 and TGF-β1 obtained prior to treatment and two weeks into treatment, along with complication outcomes for these patients. A Bayesian Decision Network was constructed using Netica 5.0.2 from Norsys, linking these clinical factors to obtain a prediction of radiation-induced lung disease (RILD) complication. A decision node was added to the network to provide a plan adaptation recommendation based on the trade-off between the RILD prediction and the complexity of replanning. A utility node provides the weighting cost between the competing factors. Results: The decision node predictions were optimized against the data for the 58 cases. With this decision network solution, one can consider the decision result for a new patient with specific findings to obtain a recommendation to adaptively modify the originally planned treatment course. Conclusions: A Bayesian approach allows handling and propagating probabilistic data in a logical and principled manner. Decision networks provide the further ability to make utility-based trade-offs, reflecting non-medical but practical cost/benefit analysis. The network demonstrated illustrates the basic concept, but many other factors may affect these decisions, and better models are being designed and tested. Acknowledgement: Supported by NIH-P01-CA59827
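The decision node's trade-off can be illustrated as a plain expected-utility comparison between keeping and adapting the plan. All utilities and the assumed risk reduction below are illustrative placeholders, not values from the Netica model:

```python
# Toy expected-utility version of the adapt-vs-keep decision node.
# Numbers are illustrative only, not from the study.

def expected_utility(p_rild, u_complication=-100.0, u_replan_cost=-10.0,
                     rild_reduction=0.5):
    """Return the utilities of (keep plan, adapt plan) for a RILD probability.

    Adapting incurs a fixed replanning cost but is assumed to cut the
    complication probability by `rild_reduction`.
    """
    keep = p_rild * u_complication
    adapt = u_replan_cost + p_rild * rild_reduction * u_complication
    return keep, adapt

def recommend(p_rild):
    """Pick the action with the higher expected utility."""
    keep, adapt = expected_utility(p_rild)
    return "adapt" if adapt > keep else "keep"
```

With these placeholder utilities, low predicted RILD risk favors keeping the plan (the replanning cost dominates), while higher risk tips the balance toward adaptation, which is exactly the behavior a utility node encodes.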
A web-based decision support tool for prognosis simulation in multiple sclerosis.
Veloso, Mário
2014-09-01
A multiplicity of natural history studies of multiple sclerosis provides valuable knowledge of disease progression, but individualized prognosis remains elusive. A few decision support tools that assist the clinician in this task have emerged but have not received proper attention from clinicians and patients. The objective of the current work is to implement a web-based tool, conveying decision-relevant prognostic scientific evidence, that will help clinicians discuss prognosis with individual patients. Data were extracted from a set of reference studies, especially those dealing with the natural history of multiple sclerosis. The web-based decision support tool for individualized prognosis simulation was implemented with NetLogo, a programming environment suited to the development of complex adaptive systems. Its prototype has been launched online; it enables clinicians to predict both the likelihood of CIS-to-CDMS conversion and the long-term prognosis of disability level and SPMS conversion, as well as assess and monitor the effects of treatment. More robust decision support tools, which convey scientific evidence and satisfy the needs of clinical practice by helping clinicians discuss prognosis expectations with individual patients, are required. The web-based simulation model introduced herein is a step toward this purpose. Copyright © 2014 Elsevier B.V. All rights reserved.
Modular Architecture for Integrated Model-Based Decision Support.
Gaebel, Jan; Schreiber, Erik; Oeser, Alexander; Oeltze-Jafra, Steffen
2018-01-01
Model-based decision support systems promise to be a valuable addition to oncological treatments and the implementation of personalized therapies. For the integration and sharing of decision models, the involved systems must be able to communicate with each other. In this paper, we propose a modularized architecture of dedicated systems for the integration of probabilistic decision models into existing hospital environments. These systems interconnect via web services and provide model sharing and processing capabilities for clinical information systems. Along the lines of IHE integration profiles from other disciplines and the meaningful reuse of routinely recorded patient data, our approach aims for the seamless integration of decision models into hospital infrastructure and the physicians' daily work.
A multicriteria decision making model for assessment and selection of an ERP in a logistics context
NASA Astrophysics Data System (ADS)
Pereira, Teresa; Ferreira, Fernanda A.
2017-07-01
The aim of this work is to apply a decision support methodology based on a multicriteria decision analysis (MCDA) model that allows the assessment and selection of an Enterprise Resource Planning (ERP) system in a Portuguese logistics company by a Group Decision Maker (GDM). A Decision Support System (DSS) implementing an MCDA methodology, the Multicriteria Methodology for the Assessment and Selection of Information Systems / Information Technologies (MMASSI/IT), is used based on its features and the ease of changing and adapting the model to a given scope. Using this DSS, the information system best suited to the decisional context was obtained, and the result was evaluated through a sensitivity and robustness analysis.
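At its core, MCDA selection of this kind often reduces to weighted-sum scoring of alternatives plus a sensitivity check on the weights. The sketch below is a generic illustration under that assumption, not the MMASSI/IT formulation; the weights and scores are invented:

```python
# Generic weighted-sum MCDA with a simple weight-sensitivity check.
# Criteria weights and alternative scores are illustrative only.

def score(weights, scores):
    """Weighted sum of normalized criterion scores."""
    return sum(w * s for w, s in zip(weights, scores))

def rank(weights, alternatives):
    """Return alternative names ordered from best to worst total score."""
    return sorted(alternatives,
                  key=lambda a: score(weights, alternatives[a]),
                  reverse=True)

def sensitivity(weights, alternatives, idx, delta=0.05):
    """True if perturbing weight `idx` by +/-delta leaves the winner unchanged."""
    base = rank(weights, alternatives)[0]
    for sign in (+1, -1):
        w = list(weights)
        w[idx] = max(0.0, w[idx] + sign * delta)
        if rank(w, alternatives)[0] != base:
            return False  # the ranking is not robust to this weight
    return True
```

The sensitivity check mirrors the robustness analysis mentioned in the abstract: a recommendation is more trustworthy when small weight perturbations do not change the top-ranked alternative.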
Personalized Clinical Diagnosis in Data Bases for Treatment Support in Phthisiology.
Lugovkina, T K; Skornyakov, S N; Golubev, D N; Egorov, E A; Medvinsky, I D
2016-01-01
Decision-making is a key event in clinical practice. Program products with clinical decision support models in an electronic database, together with the fixed decision moments of real clinical practice and treatment results, are highly relevant instruments for improving phthisiological practice and may be useful in severe cases caused by resistant strains of Mycobacterium tuberculosis. The methodology for gathering and structuring useful information (critical clinical signals for decisions) is described. Additional coding of clinical diagnosis characteristics was implemented to reflect personal situations numerically. The methodology created for systematizing and coding clinical events made it possible to improve clinical decision models and achieve better clinical results.
Advances in cell-free protein array methods.
Yu, Xiaobo; Petritis, Brianne; Duan, Hu; Xu, Danke; LaBaer, Joshua
2018-01-01
Cell-free protein microarrays represent a special form of protein microarray which display proteins made fresh at the time of the experiment, avoiding storage and denaturation. They have been used increasingly in basic and translational research over the past decade to study protein-protein interactions, the pathogen-host relationship, post-translational modifications, and antibody biomarkers of different human diseases. Their role in the first blood-based diagnostic test for early stage breast cancer highlights their value in managing human health. Cell-free protein microarrays will continue to evolve to become widespread tools for research and clinical management. Areas covered: We review the advantages and disadvantages of different cell-free protein arrays, with an emphasis on the methods that have been studied in the last five years. We also discuss the applications of each microarray method. Expert commentary: Given the growing roles and impact of cell-free protein microarrays in research and medicine, we discuss: 1) the current technical and practical limitations of cell-free protein microarrays; 2) the biomarker discovery and verification pipeline using protein microarrays; and 3) how cell-free protein microarrays will advance over the next five years, both in their technology and applications.
Reducing Diagnostic Error with Computer-Based Clinical Decision Support
ERIC Educational Resources Information Center
Greenes, Robert A.
2009-01-01
Information technology approaches to delivering diagnostic clinical decision support (CDS) are the subject of the papers to follow in the proceedings. These will address the history of CDS and present day approaches (Miller), evaluation of diagnostic CDS methods (Friedman), and the role of clinical documentation in supporting diagnostic decision…
Enhancing Results of Microarray Hybridizations Through Microagitation
Toegl, Andreas; Kirchner, Roland; Gauer, Christoph; Wixforth, Achim
2003-01-01
Protein and DNA microarrays have become a standard tool in proteomics/genomics research. In order to guarantee fast and reproducible hybridization results, the diffusion limit must be overcome. Surface acoustic wave (SAW) micro-agitation chips efficiently agitate the smallest sample volumes (down to 10 μL and below) without introducing any dead volume. The advantages are reduced reaction time, increased signal-to-noise ratio, improved homogeneity across the microarray, and better slide-to-slide reproducibility. The SAW micromixer chips are the heart of the Advalytix ArrayBooster, which is compatible with all microarrays based on the microscope slide format. PMID:13678150
Profiling protein function with small molecule microarrays
Winssinger, Nicolas; Ficarro, Scott; Schultz, Peter G.; Harris, Jennifer L.
2002-01-01
The regulation of protein function through posttranslational modification, local environment, and protein–protein interaction is critical to cellular function. The ability to analyze on a genome-wide scale protein functional activity rather than changes in protein abundance or structure would provide important new insights into complex biological processes. Herein, we report the application of a spatially addressable small molecule microarray to an activity-based profile of proteases in crude cell lysates. The potential of this small molecule-based profiling technology is demonstrated by the detection of caspase activation upon induction of apoptosis, characterization of the activated caspase, and inhibition of the caspase-executed apoptotic phenotype using the small molecule inhibitor identified in the microarray-based profile. PMID:12167675
Genome Consortium for Active Teaching: Meeting the Goals of BIO2010
Campbell, A. Malcolm; Ledbetter, Mary Lee S.; Hoopes, Laura L.M.; Eckdahl, Todd T.; Heyer, Laurie J.; Rosenwald, Anne; Fowlks, Edison; Tonidandel, Scott; Bucholtz, Brooke; Gottfried, Gail
2007-01-01
The Genome Consortium for Active Teaching (GCAT) facilitates the use of modern genomics methods in undergraduate education. Initially focused on microarray technology, but with an eye toward diversification, GCAT is a community working to improve the education of tomorrow's life science professionals. GCAT participants have access to affordable microarrays, microarray scanners, free software for data analysis, and faculty workshops. Microarrays provided by GCAT have been used by 141 faculty on 134 campuses, including 21 faculty that serve large numbers of underrepresented minority students. An estimated 9480 undergraduates a year will have access to microarrays by 2009 as a direct result of GCAT faculty workshops. Gains for students include significantly improved comprehension of topics in functional genomics and increased interest in research. Faculty reported improved access to new technology and gains in understanding thanks to their involvement with GCAT. GCAT's network of supportive colleagues encourages faculty to explore genomics through student research and to learn a new and complex method with their undergraduates. GCAT is meeting important goals of BIO2010 by making research methods accessible to undergraduates, training faculty in genomics and bioinformatics, integrating mathematics into the biology curriculum, and increasing participation by underrepresented minority students. PMID:17548873
System for selecting relevant information for decision support.
Kalina, Jan; Seidl, Libor; Zvára, Karel; Grünfeldová, Hana; Slovák, Dalibor; Zvárová, Jana
2013-01-01
We implemented a prototype of a decision support system called SIR, which takes the form of a web-based classification service for diagnostic decision support. The system can select the most relevant variables and learn a classification rule that is guaranteed to be suitable for high-dimensional measurements as well. The classification system can be useful for clinicians in primary care, supporting their decision-making tasks with relevant information extracted from any available clinical study. The implemented prototype was tested on a sample of patients in a cardiological study and performs information extraction from a high-dimensional set containing both clinical and gene expression data.
Design and evaluation of Actichip, a thematic microarray for the study of the actin cytoskeleton
Muller, Jean; Mehlen, André; Vetter, Guillaume; Yatskou, Mikalai; Muller, Arnaud; Chalmel, Frédéric; Poch, Olivier; Friederich, Evelyne; Vallar, Laurent
2007-01-01
Background The actin cytoskeleton plays a crucial role in supporting and regulating numerous cellular processes. Mutations or alterations in the expression levels affecting the actin cytoskeleton system or related regulatory mechanisms are often associated with complex diseases such as cancer. Understanding how qualitative or quantitative changes in expression of the set of actin cytoskeleton genes are integrated to control actin dynamics and organisation is currently a challenge and should provide insights in identifying potential targets for drug discovery. Here we report the development of a dedicated microarray, the Actichip, containing 60-mer oligonucleotide probes for 327 genes selected for transcriptome analysis of the human actin cytoskeleton. Results Genomic data and sequence analysis features were retrieved from GenBank and stored in an integrative database called Actinome. From these data, probes were designed using a home-made program (CADO4MI) allowing sequence refinement and improved probe specificity by combining the complementary information recovered from the UniGene and RefSeq databases. Actichip performance was analysed by hybridisation with RNAs extracted from epithelial MCF-7 cells and human skeletal muscle. Using thoroughly standardised procedures, we obtained microarray images with excellent quality resulting in high data reproducibility. Actichip displayed a large dynamic range extending over three logs with a limit of sensitivity between one and ten copies of transcript per cell. The array allowed accurate detection of small changes in gene expression and reliable classification of samples based on the expression profiles of tissue-specific genes. When compared to two other oligonucleotide microarray platforms, Actichip showed similar sensitivity and concordant expression ratios. Moreover, Actichip was able to discriminate the highly similar actin isoforms whereas the two other platforms did not. 
Conclusion Our data demonstrate that Actichip is a powerful alternative to commercial high density microarrays for cytoskeleton gene profiling in normal or pathological samples. Actichip is available upon request. PMID:17727702
RDFBuilder: a tool to automatically build RDF-based interfaces for MAGE-OM microarray data sources.
Anguita, Alberto; Martin, Luis; Garcia-Remesal, Miguel; Maojo, Victor
2013-07-01
This paper presents RDFBuilder, a tool that enables RDF-based access to MAGE-ML-compliant microarray databases. We have developed a system that automatically transforms the MAGE-OM model and microarray data stored in the ArrayExpress database into RDF format. Additionally, the system automatically enables a SPARQL endpoint. This allows users to execute SPARQL queries for retrieving microarray data, either from specific experiments or from more than one experiment at a time. Our system optimizes response times by caching and reusing information from previous queries. In this paper, we describe our methods for achieving this transformation. We show that our approach is complementary to other existing initiatives, such as Bio2RDF, for accessing and retrieving data from the ArrayExpress database. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
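The response-time optimization described (reusing information from previous queries) can be sketched as a memoized query function. The query text and mock executor below are hypothetical illustrations, not RDFBuilder's actual SPARQL interface or the ArrayExpress schema:

```python
# Sketch of query-result caching for a SPARQL endpoint. The executor
# is mocked; a real system would issue an HTTP request to the endpoint.

import functools

QUERY_LOG = []  # records which queries actually reached the (mock) endpoint

@functools.lru_cache(maxsize=128)
def run_query(sparql):
    """Execute a SPARQL query (mocked); repeats are served from cache."""
    QUERY_LOG.append(sparql)
    return ("row1", "row2")   # placeholder result rows

# Hypothetical query shape over a MAGE-OM-derived graph, for illustration.
EXPERIMENT_QUERY = """
SELECT ?probe ?value WHERE {
  ?m a :MeasuredBioAssayData ;
     :probe ?probe ;
     :value ?value .
}"""
```

Issuing the same query twice hits the endpoint only once, which is the caching effect the paper credits for its improved response times.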
Autoregressive-model-based missing value estimation for DNA microarray time series data.
Choong, Miew Keen; Charbit, Maurice; Yan, Hong
2009-01-01
Missing value estimation is important in DNA microarray data analysis. A number of algorithms have been developed to solve this problem, but they have several limitations. Most existing algorithms are not able to deal with the situation where a particular time point (column) of the data is missing entirely. In this paper, we present an autoregressive-model-based missing value estimation method (ARLSimpute) that takes into account the dynamic property of microarray temporal data and the local similarity structures in the data. ARLSimpute is especially effective for the situation where a particular time point contains many missing values or where the entire time point is missing. Experimental results suggest that our proposed algorithm is an accurate missing value estimator in comparison with other imputation methods on simulated as well as real microarray time series datasets.
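The autoregressive core of such an imputer can be sketched in a few lines: fit an AR(1) coefficient by least squares on observed consecutive pairs, then predict a missing time point from its predecessor. ARLSimpute additionally exploits local similarity across genes, which this minimal sketch omits:

```python
# Minimal AR(1) gap-filling sketch (the autoregressive core only).

def fit_ar1(series):
    """Least-squares AR(1) coefficient from observed pairs (x[t-1], x[t])."""
    pairs = [(a, b) for a, b in zip(series, series[1:])
             if a is not None and b is not None]
    num = sum(a * b for a, b in pairs)
    den = sum(a * a for a, _ in pairs)
    return num / den

def impute(series):
    """Fill missing (None) entries left-to-right with AR(1) predictions."""
    phi = fit_ar1(series)
    out = list(series)
    for t in range(1, len(out)):
        if out[t] is None and out[t - 1] is not None:
            out[t] = phi * out[t - 1]
    return out
```

Because the prediction chains forward from the last observed value, this scheme can fill an entire missing column of a time series, the case the abstract highlights as unsupported by most earlier methods.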
Medication-related clinical decision support in computerized provider order entry systems: a review.
Kuperman, Gilad J; Bobb, Anne; Payne, Thomas H; Avery, Anthony J; Gandhi, Tejal K; Burns, Gerard; Classen, David C; Bates, David W
2007-01-01
While medications can improve patients' health, the process of prescribing them is complex and error prone, and medication errors cause many preventable injuries. Computerized provider order entry (CPOE) with clinical decision support (CDS) can improve patient safety and lower medication-related costs. To realize the medication-related benefits of CDS within CPOE, one must overcome significant challenges. Healthcare organizations implementing CPOE must understand what classes of CDS their CPOE systems can support, assure that the clinical knowledge underlying their CDS systems is reasonable, and appropriately represent electronic patient data. These issues often influence to what extent an institution will succeed with its CPOE implementation and achieve its desired goals. Medication-related decision support is probably best introduced into healthcare organizations in two stages, basic and advanced. Basic decision support includes drug-allergy checking, basic dosing guidance, formulary decision support, duplicate therapy checking, and drug-drug interaction checking. Advanced decision support includes dosing support for renal insufficiency and geriatric patients, guidance for medication-related laboratory testing, drug-pregnancy checking, and drug-disease contraindication checking. In this paper, the authors outline some of the challenges associated with both basic and advanced decision support and discuss how those challenges might be addressed. The authors conclude with summary recommendations for delivering effective medication-related clinical decision support, addressed to healthcare organizations, application and knowledge base vendors, policy makers, and researchers.
Decision support systems in health economics.
Quaglini, S; Dazzi, L; Stefanelli, M; Barosi, G; Marchetti, M
1999-08-01
This article describes a system addressed to different health care professionals for building, using, and sharing decision support systems for resource allocation. The system deals with selected areas, namely the choice of diagnostic tests, therapy planning, and instrumentation purchase. Decision support is based on decision-analytic models, incorporating an explicit knowledge representation of both the medical domain knowledge and the economic evaluation theory. Application models are built on top of meta-models, which are used as guidelines for making explicit both the cost and effectiveness components. This approach improves the transparency and soundness of the collaborative decision-making process and facilitates interpretation of the results.
Design and implementation of the standards-based personal intelligent self-management system (PICS).
von Bargen, Tobias; Gietzelt, Matthias; Britten, Matthias; Song, Bianying; Wolf, Klaus-Hendrik; Kohlmann, Martin; Marschollek, Michael; Haux, Reinhold
2013-01-01
Against the background of demographic change and a diminishing care workforce, there is a growing need for personalized decision support. The aim of this paper is to describe the design and implementation of the standards-based personal intelligent care system (PICS). PICS makes consistent use of internationally accepted standards, such as the Health Level 7 (HL7) Arden syntax for the representation of the decision logic and the HL7 Clinical Document Architecture for information representation, and is based on an open-source service-oriented architecture framework and a business process management system. Its functionality is exemplified for the application scenario of a patient suffering from congestive heart failure. Several vital signs sensors provide data for the decision support system, and a number of flexible communication channels are available for interaction with the patient or caregiver. PICS is a standards-based, open and flexible system enabling personalized decision support. Further development will include the implementation of components on small computers and sensor nodes.
Gálvez, Juan Manuel; Castillo, Daniel; Herrera, Luis Javier; San Román, Belén; Valenzuela, Olga; Ortuño, Francisco Manuel; Rojas, Ignacio
2018-01-01
Many research studies applying microarray technology to the characterization of different pathological states of a disease fail to reach statistically significant results, largely owing to the small repertoire of analysed samples and to the limited number of states or pathologies usually addressed. Moreover, the influence of potential deviations on the gene expression quantification is usually disregarded. In spite of the continuous changes in omic sciences, reflected for instance in the emergence of new Next-Generation Sequencing-related technologies, the existing availability of a vast amount of gene expression microarray datasets should be properly exploited. Therefore, this work proposes a novel methodological approach involving the integration of several heterogeneous skin cancer series and a subsequent multiclass classifier design. This approach provides clinicians with an intelligent diagnosis support tool based on a robust set of selected biomarkers that simultaneously distinguishes among different cancer-related skin states. To achieve this, a multi-platform combination of microarray datasets from the Affymetrix and Illumina manufacturers was carried out. This integration is expected to strengthen the statistical robustness of the study as well as the finding of highly reliable skin cancer biomarkers. Specifically, the designed operation pipeline has allowed the identification of a small subset of 17 differentially expressed genes (DEGs) from which to distinguish among 7 involved skin states. These genes were obtained from the assessment of a number of potential batch effects on the gene expression data. The biological interpretation of these genes was inspected in the specific literature to understand their underlying information in relation to skin cancer. Finally, in order to assess their possible effectiveness in cancer diagnosis, a cross-validated Support Vector Machine (SVM)-based classification including feature ranking was performed. The accuracy attained exceeded 92% in overall recognition of the 7 different cancer-related skin states. The proposed integration scheme is expected to allow co-integration with other state-of-the-art technologies such as RNA-seq.
Keith M. Reynolds; Edward H. Holsten; Richard A. Werner
1994-01-01
SBexpert version 1.0 is a knowledge-based decision-support system for spruce beetle (Dendroctonus rufipennis (Kby.)) management, developed for use in Microsoft Windows with the KnowledgePro Windows development language. The SBexpert user's guide provides detailed instructions on the use of all SBexpert features. SBexpert has four main topics (...
ERIC Educational Resources Information Center
Kunisch, Joseph Martin
2012-01-01
Background: The Emergency Severity Index (ESI) is an emergency department (ED) triage classification system based on estimated patient-specific resource utilization. Rules for a computerized clinical decision support (CDS) system based on a patient's chief complaint were developed and tested using a stochastic model for predicting ESI scores.…
A Decision Support Model and Tool to Assist Financial Decision-Making in Universities
Bhayat, Imtiaz; Manuguerra, Maurizio; Baldock, Clive
2015-01-01
In this paper, a model and tool is proposed to assist universities and other mission-based organisations to ascertain systematically the optimal portfolio of projects, in any year, meeting the organisations risk tolerances and available funds. The model and tool presented build on previous work on university operations and decision support systems…
Knerr, Sarah; Wernli, Karen J; Leppig, Kathleen; Ehrlich, Kelly; Graham, Amanda L; Farrell, David; Evans, Chalanda; Luta, George; Schwartz, Marc D; O'Neill, Suzanne C
2017-05-01
Mammographic breast density is one of the strongest risk factors for breast cancer after age and family history. Mandatory breast density disclosure policies are increasing nationally without clear guidance on how to communicate density status to women. Coupling density disclosure with personalized risk counseling and decision support through a web-based tool may be an effective way to allow women to make informed, values-consistent risk management decisions without increasing distress. This paper describes the design and methods of Engaged, a prospective, randomized controlled trial examining the effect of online personalized risk counseling and decision support on risk management decisions in women with dense breasts and increased breast cancer risk. The trial is embedded in a large integrated health care system in the Pacific Northwest. A total of 1250 female health plan members aged 40-69 with a recent negative screening mammogram who are at increased risk for interval cancer based on their 5-year breast cancer risk and BI-RADS® breast density will be randomly assigned to access either a personalized web-based counseling and decision support tool or standard educational content. Primary outcomes will be assessed using electronic health record data (i.e., chemoprevention and breast MRI utilization) and telephone surveys (i.e., distress) at baseline, six weeks, and twelve months. Engaged will provide evidence about whether a web-based personalized risk counseling and decision support tool is an effective method for communicating with women about breast density and risk management. An effective intervention could be disseminated with minimal clinical burden to align with density disclosure mandates. Clinical Trials Registration Number: NCT03029286. Copyright © 2017 Elsevier Inc. All rights reserved.
Kirby, Ralph; Herron, Paul; Hoskisson, Paul
2011-02-01
Based on available genome sequences, Actinomycetales show significant gene synteny across a wide range of species and genera. In addition, many genera show varying degrees of complex morphological development. Using the presence of gene synteny as a basis, it is clear that an analysis of gene conservation across Streptomyces and various other Actinomycetales will provide information on both the importance of genes and gene clusters and the evolution of morphogenesis in these bacteria. Genome sequencing, although becoming cheaper, is still relatively expensive for comparing large numbers of strains. Thus, a heterologous DNA/DNA microarray hybridization dataset based on a Streptomyces coelicolor microarray allows a cheaper analysis of gene conservation at greater depth. This study, using both bioinformatic and microarray approaches, was able to classify genes previously identified as involved in morphogenesis in Streptomyces into various subgroups in terms of conservation across species and genera. This will allow the targeting of genes for further study based on their importance at the species level and at higher evolutionary levels.
Eide, Magnus S; Endresen, Oyvind; Brett, Per Olaf; Ervik, Jon Leon; Røang, Kjell
2007-02-01
The paper describes a model that estimates the risk levels of individual crude oil tankers. The intended use of the model, which is ready for trial implementation at the Norwegian Coastal Administration's new Vardø VTS (Vessel Traffic Service) centre, is to facilitate the comparison of ships and to support a risk-based decision on which ships to focus attention on. For a VTS operator tasked with monitoring hundreds of ships, this is a valuable decision support tool. The model answers the question: "Which ships are likely to produce an oil spill accident, and how much are they likely to spill?"
Large-scale analysis of gene expression using cDNA microarrays promises the rapid detection of the mode of toxicity for drugs and other chemicals. cDNA microarrays were used to examine chemically-induced alterations of gene expression in HepG2 cells exposed to oxidative ...
Nanotechnology: moving from microarrays toward nanoarrays.
Chen, Hua; Li, Jun
2007-01-01
Microarrays are important tools for high-throughput analysis of biomolecules. The use of microarrays for parallel screening of nucleic acid and protein profiles has become an industry standard. Limitations of microarrays include the requirement for relatively large sample volumes, long incubation times, and limits of detection. In addition, traditional microarrays rely on bulky detection instrumentation, and sample amplification and labeling are quite laborious, which increases analysis cost and delays results. These problems keep microarray techniques from point-of-care and field applications. One strategy for overcoming these problems is to develop nanoarrays, particularly electronics-based nanoarrays. With further miniaturization, higher sensitivity, and simplified sample preparation, nanoarrays could potentially be employed for biomolecular analysis in personal healthcare and monitoring of trace pathogens. This chapter introduces the concept and advantages of nanotechnology and then describes current methods and protocols for novel nanoarrays in three areas: (1) label-free nucleic acid analysis using nanoarrays; (2) nanoarrays for protein detection by conventional optical fluorescence microscopy as well as by novel label-free methods such as atomic force microscopy; and (3) nanoarrays for enzymatic-based assays. These nanoarrays will have significant applications in drug discovery, medical diagnosis, genetic testing, environmental monitoring, and food safety inspection.
Grenville-Briggs, Laura J; Stansfield, Ian
2011-01-01
This report describes a linked series of Masters-level computer practical workshops. They comprise an advanced functional genomics investigation, based upon analysis of a microarray dataset probing yeast DNA damage responses. The workshops require the students to analyse highly complex transcriptomics datasets, and were designed to stimulate active learning through experience of current research methods in bioinformatics and functional genomics. They seek to closely mimic a realistic research environment, and require the students first to propose research hypotheses, then test those hypotheses using specific sections of the microarray dataset. The complexity of the microarray data provides students with the freedom to propose their own unique hypotheses, tested using appropriate sections of the microarray data. This research latitude was highly regarded by students and is a strength of this practical. In addition, the focus on DNA damage by radiation and mutagenic chemicals allows them to place their results in a human medical context, and successfully sparks broad interest in the subject material. In evaluation, 79% of students scored the practical workshops on a five-point scale as 4 or 5 (totally effective) for student learning. More broadly, the general use of microarray data as a "student research playground" is also discussed. Copyright © 2011 Wiley Periodicals, Inc.
Microarray platform affords improved product analysis in mammalian cell growth studies
Li, Lingyun; Migliore, Nicole; Schaefer, Eugene; Sharfstein, Susan T.; Dordick, Jonathan S.; Linhardt, Robert J.
2014-01-01
High-throughput (HT) platforms serve as cost-efficient and rapid screening methods for evaluating the effect of cell culture conditions and for screening chemicals. The aim of the current study was to develop a high-throughput cell-based microarray platform to assess the effect of culture conditions on Chinese hamster ovary (CHO) cells. Specifically, growth, transgene expression and metabolism of a GS/MSX CHO cell line, which produces a therapeutic monoclonal antibody, were examined using the microarray system in conjunction with a conventional shake-flask platform in a non-proprietary medium. The microarray system consists of 60 nl spots of cells encapsulated in alginate and separated into groups via an 8-well chamber system attached to the chip. Results show that the non-proprietary medium developed allows cell growth, production and normal glycosylation of the recombinant antibody and metabolism of the recombinant CHO cells in both the microarray and shake-flask platforms. In addition, adding 10.3 mM glutamate to the defined base medium results in a lactate metabolism shift in the recombinant GS/MSX CHO cells in the shake-flask platform. Ultimately, the results demonstrate that the high-throughput microarray platform has the potential to be utilized for evaluating the impact of media additives on cellular processes such as cell growth, metabolism and productivity. PMID:24227746
Diversity Arrays Technology (DArT) for Pan-Genomic Evolutionary Studies of Non-Model Organisms
James, Karen E.; Schneider, Harald; Ansell, Stephen W.; Evers, Margaret; Robba, Lavinia; Uszynski, Grzegorz; Pedersen, Niklas; Newton, Angela E.; Russell, Stephen J.; Vogel, Johannes C.; Kilian, Andrzej
2008-01-01
Background: High-throughput tools for pan-genomic study, especially the DNA microarray platform, have sparked a remarkable increase in data production and enabled a shift in the scale at which biological investigation is possible. The use of microarrays to examine evolutionary relationships and processes, however, is predominantly restricted to model or near-model organisms. Methodology/Principal Findings: This study explores the utility of Diversity Arrays Technology (DArT) in evolutionary studies of non-model organisms. DArT is a hybridization-based genotyping method that uses microarray technology to identify and type DNA polymorphism. Theoretically applicable to any organism (even one for which no prior genetic data are available), DArT has not yet been explored in exclusively wild sample sets, nor extensively examined in a phylogenetic framework. DArT recovered 1349 markers of largely low copy-number loci in two lineages of seed-free land plants: the diploid fern Asplenium viride and the haploid moss Garovaglia elegans. Direct sequencing of 148 of these DArT markers identified 30 putative loci, including four routinely sequenced for evolutionary studies in plants. Phylogenetic analyses of DArT genotypes reveal phylogeographic and substrate specificity patterns in A. viride, a lack of phylogeographic pattern in Australian G. elegans, and additive variation in hybrid or mixed samples. Conclusions/Significance: These results enable methodological recommendations, including procedures for detecting and analysing DArT markers tailored specifically to evolutionary investigations and practical factors informing the decision to use DArT, and raise evolutionary hypotheses concerning substrate specificity and biogeographic patterns. Thus DArT is a demonstrably valuable addition to the set of existing molecular approaches used to infer biological phenomena such as adaptive radiations, population dynamics, hybridization, introgression, ecological differentiation and phylogeography. PMID:18301759
Broad spectrum microarray for fingerprint-based bacterial species identification
2010-01-01
Background: Microarrays are powerful tools for DNA-based molecular diagnostics and identification of pathogens. Most target a limited range of organisms and are based on only one or a very few genes for specific identification. Such microarrays are limited to organisms for which specific probes are available, and often have difficulty discriminating closely related taxa. We have developed an alternative broad-spectrum microarray that employs hybridisation fingerprints generated by high-density anonymous markers distributed over the entire genome for identification based on comparison to a reference database. Results: A high-density microarray carrying 95,000 unique 13-mer probes was designed. Optimized methods were developed to deliver reproducible hybridisation patterns that enabled confident discrimination of bacteria at the species, subspecies, and strain levels. High correlation coefficients were achieved between replicates. A sub-selection of 12,071 probes, determined by ANOVA and class prediction analysis, enabled the discrimination of all samples in our panel. Mismatch probe hybridisation was observed but was found to have no effect on the discriminatory capacity of our system. Conclusions: These results indicate the potential of our genome chip for reliable identification of a wide range of bacterial taxa at the subspecies level without laborious prior sequencing and probe design. With its high resolution capacity, our proof-of-principle chip demonstrates great potential as a tool for molecular diagnostics of broad taxonomic groups. PMID:20163710
Malinowski, Douglas P
2007-05-01
In recent years, the application of genomic and proteomic technologies to the problem of breast cancer prognosis and the prediction of therapy response have begun to yield encouraging results. Independent studies employing transcriptional profiling of primary breast cancer specimens using DNA microarrays have identified gene expression profiles that correlate with clinical outcome in primary breast biopsy specimens. Recent advances in microarray technology have demonstrated reproducibility, making clinical applications more achievable. In this regard, one such DNA microarray device based upon a 70-gene expression signature was recently cleared by the US FDA for application to breast cancer prognosis. These DNA microarrays often employ at least 70 gene targets for transcriptional profiling and prognostic assessment in breast cancer. The use of PCR-based methods utilizing a small subset of genes has recently demonstrated the ability to predict the clinical outcome in early-stage breast cancer. Furthermore, protein-based immunohistochemistry methods have progressed from using gene clusters and gene expression profiling to smaller subsets of expressed proteins to predict prognosis in early-stage breast cancer. Beyond prognostic applications, DNA microarray-based transcriptional profiling has demonstrated the ability to predict response to chemotherapy in early-stage breast cancer patients. In this review, recent advances in the use of multiple markers for prognosis of disease recurrence in early-stage breast cancer and the prediction of therapy response will be discussed.
GeneXplorer: an interactive web application for microarray data visualization and analysis.
Rees, Christian A; Demeter, Janos; Matese, John C; Botstein, David; Sherlock, Gavin
2004-10-01
When publishing large-scale microarray datasets, it is of great value to create supplemental websites where either the full data, or selected subsets corresponding to figures within the paper, can be browsed. We set out to create a CGI application containing many of the features of some of the existing standalone software for the visualization of clustered microarray data. We present GeneXplorer, a web application for interactive microarray data visualization and analysis in a web environment. GeneXplorer allows users to browse a microarray dataset in an intuitive fashion. It provides simple access to microarray data over the Internet and uses only HTML and JavaScript to display graphic and annotation information. It provides radar and zoom views of the data, allows display of the nearest neighbors to a gene expression vector based on their Pearson correlations and provides the ability to search gene annotation fields. The software is released under the permissive MIT Open Source license, and the complete documentation and the entire source code are freely available for download from CPAN http://search.cpan.org/dist/Microarray-GeneXplorer/.
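GeneXplorer's neighbor display ranks genes by Pearson correlation with a query expression vector. As a hedged sketch of that ranking step (not GeneXplorer's actual implementation; the function name and toy expression matrix are illustrative), the idea might look like:

```python
import numpy as np

def nearest_neighbors(data, query_index, k=3):
    """Rank rows of a genes-x-arrays expression matrix by Pearson
    correlation with the query gene's expression vector."""
    X = np.asarray(data, dtype=float)
    # correlation of every gene against the query gene
    r = np.corrcoef(X)[query_index]
    order = np.argsort(-r)               # highest correlation first
    order = order[order != query_index]  # drop the query itself
    return [(int(i), float(r[i])) for i in order[:k]]

expr = np.array([
    [1.0, 2.0, 3.0, 4.0],   # gene 0 (the query)
    [2.1, 4.2, 5.9, 8.1],   # gene 1: tracks gene 0 closely
    [4.0, 3.0, 2.0, 1.0],   # gene 2: anti-correlated with gene 0
])
print(nearest_neighbors(expr, query_index=0, k=2))
```

Gene 1 ranks first with a correlation near 1, and the anti-correlated gene 2 ranks last.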
Sinclair, Shane; Hagen, Neil A; Chambers, Carole; Manns, Braden; Simon, Anita; Browman, George P
2008-05-01
Drug decision-makers are involved in developing and implementing policy, procedure and processes to support health resource allocation regarding drug treatment formularies. A variety of approaches to decision-making, including formal decision-making frameworks, have been developed to support transparent and fair priority setting. Recently, a decision tool, 'The 6-STEPPPs Tool', was developed to assist in making decisions about new cancer drugs within the public health care system. We conducted a qualitative study, utilizing focus groups and participant observation, in order to investigate the internal frameworks that supported and challenged individual participants as they applied this decision tool within a multi-stakeholder decision process. We discovered that health care resource allocation engaged not only the minds of decision-makers but also profoundly called on the often conflicting values of the heart. Objective decision-making frameworks for new drug therapies need to consider the subjective internal frameworks of decision-makers that affect decisions. Understanding the very human internal turmoil experienced by individuals involved in health care resource allocation sheds additional insight into how to account for reasonableness and how to better support difficult decisions through transparent, values-based resource allocation policy, procedures and processes.
A Decision Support Framework for Science-Based, Multi-Stakeholder Deliberation: A Coral Reef Example
NASA Astrophysics Data System (ADS)
Rehr, Amanda P.; Small, Mitchell J.; Bradley, Patricia; Fisher, William S.; Vega, Ann; Black, Kelly; Stockton, Tom
2012-12-01
We present a decision support framework for science-based assessment and multi-stakeholder deliberation. The framework consists of two parts: a DPSIR (Drivers-Pressures-States-Impacts-Responses) analysis to identify the important causal relationships among anthropogenic environmental stressors, processes, and outcomes; and a Decision Landscape analysis to depict the legal, social, and institutional dimensions of environmental decisions. The Decision Landscape incorporates interactions among government agencies, regulated businesses, non-government organizations, and other stakeholders. It also identifies where scientific information regarding environmental processes is collected and transmitted to improve knowledge about elements of the DPSIR and to improve the scientific basis for decisions. Our application of the decision support framework to coral reef protection and restoration in the Florida Keys, focusing on anthropogenic stressors such as wastewater, proved to be successful and offered several insights. Using information from a management plan, it was possible to capture the current state of the science with a DPSIR analysis, as well as important decision options, decision makers and applicable laws with the Decision Landscape analysis. A structured elicitation of values and beliefs conducted at a coral reef management workshop held in Key West, Florida provided a diversity of opinion and also indicated a prioritization of several environmental stressors affecting coral reef health. The integrated DPSIR/Decision Landscape framework for the Florida Keys, developed based on the elicited opinion and the DPSIR analysis, can be used to inform management decisions, to reveal the role that further scientific information and research might play to populate the framework, and to facilitate better-informed agreement among participants.
Characterization and simulation of cDNA microarray spots using a novel mathematical model
Kim, Hye Young; Lee, Seo Eun; Kim, Min Jung; Han, Jin Il; Kim, Bo Kyung; Lee, Yong Sung; Lee, Young Seek; Kim, Jin Hyuk
2007-01-01
Background: The quality of cDNA microarray data is crucial for expanding its application to other research areas, such as the study of gene regulatory networks. Despite the fact that a number of algorithms have been suggested to increase the accuracy of microarray gene expression data, it is necessary to obtain reliable microarray images by improving wet-lab experiments. As the first step of a cDNA microarray experiment, spotting cDNA probes is critical to determining the quality of spot images. Results: We developed a governing equation of cDNA deposition during evaporation of a drop in the microarray spotting process. The governing equation included four parameters: the surface site density on the support, the extrapolated equilibrium constant for the binding of cDNA molecules with surface sites on glass slides, the macromolecular interaction factor, and the volume constant of a drop of cDNA solution. We simulated cDNA deposition from the single model equation by varying the value of the parameters. The morphology of the resulting cDNA deposit can be classified into three types: a doughnut shape, a peak shape, and a volcano shape. The spot morphology can be changed into a flat shape by varying the experimental conditions while considering the parameters of the governing equation of cDNA deposition. The four parameters were estimated by fitting the governing equation to the real microarray images. With the results of the simulation and the parameter estimation, the phenomenon of the formation of cDNA deposits in each type was investigated. Conclusion: This study explains how various spot shapes can exist and suggests which parameters are to be adjusted for obtaining a good spot. This system is able to explore the cDNA microarray spotting process in a predictable, manageable and descriptive manner. We hope it can provide a way to predict the incidents that can occur during a real cDNA microarray experiment, and produce useful data for several research applications involving cDNA microarrays. PMID:18096047
Armour, Christine M; Dougan, Shelley Danielle; Brock, Jo-Ann; Chari, Radha; Chodirker, Bernie N; DeBie, Isabelle; Evans, Jane A; Gibson, William T; Kolomietz, Elena; Nelson, Tanya N; Tihy, Frédérique; Thomas, Mary Ann; Stavropoulos, Dimitri J
2018-04-01
The aim of this guideline is to provide updated recommendations for Canadian genetic counsellors, medical geneticists, maternal fetal medicine specialists, clinical laboratory geneticists and other practitioners regarding the use of chromosomal microarray analysis (CMA) for prenatal diagnosis. This guideline replaces the 2011 Society of Obstetricians and Gynaecologists of Canada (SOGC)-Canadian College of Medical Geneticists (CCMG) Joint Technical Update. A multidisciplinary group consisting of medical geneticists, genetic counsellors, maternal fetal medicine specialists and clinical laboratory geneticists was assembled to review existing literature and guidelines for use of CMA in prenatal care and to make recommendations relevant to the Canadian context. The statement was circulated for comment to the CCMG membership-at-large for feedback and, following incorporation of feedback, was approved by the CCMG Board of Directors on 5 June 2017 and the SOGC Board of Directors on 19 June 2017. Recommendations include but are not limited to: (1) CMA should be offered following a normal rapid aneuploidy screen when multiple fetal malformations are detected (II-1A) or for nuchal translucency (NT) ≥3.5 mm (II-2B) (recommendation 1); (2) a professional with expertise in prenatal chromosomal microarray analysis should provide genetic counselling to obtain informed consent, discuss the limitations of the methodology, obtain the parental decisions for return of incidental findings (II-2A) (recommendation 4) and provide post-test counselling for reporting of test results (III-A) (recommendation 9); (3) the resolution of chromosomal microarray analysis should be similar to postnatal microarray platforms to ensure small pathogenic variants are detected. To minimise the reporting of uncertain findings, it is recommended that variants of unknown significance (VOUS) smaller than 500 Kb deletion or 1 Mb duplication not be routinely reported in the prenatal context. 
Additionally, VOUS above these cut-offs should only be reported if there is significant supporting evidence that deletion or duplication of the region may be pathogenic (III-B) (recommendation 5); (4) secondary findings associated with a medically actionable disorder with childhood onset should be reported, whereas variants associated with adult-onset conditions should not be reported unless requested by the parents or disclosure can prevent serious harm to family members (III-A) (recommendation 8).The working group recognises that there is variability across Canada in delivery of prenatal testing, and these recommendations were developed to promote consistency and provide a minimum standard for all provinces and territories across the country (recommendation 9). © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
IBM’s Health Analytics and Clinical Decision Support
Sun, J.; Knoop, S.; Shabo, A.; Carmeli, B.; Sow, D.; Syed-Mahmood, T.; Rapp, W.
2014-01-01
Objectives: This survey explores the role of big data and health analytics developed by IBM in supporting the transformation of healthcare by augmenting evidence-based decision-making. Methods: Some problems in healthcare and strategies for change are described. It is argued that change requires better decisions, which, in turn, require better use of the many kinds of healthcare information. Analytic resources that address each of the information challenges are described. Examples of the role of each of the resources are given. Results: There are powerful analytic tools that utilize the various kinds of big data in healthcare to help clinicians make more personalized, evidence-based decisions. Such resources can extract relevant information and provide insights that clinicians can use to make evidence-supported decisions. There are early suggestions that these resources have clinical value. As with all analytic tools, they are limited by the amount and quality of data. Conclusion: Big data is an inevitable part of the future of healthcare. There is a compelling need to manage and use big data to make better decisions to support the transformation of healthcare to the personalized, evidence-supported model of the future. Cognitive computing resources are necessary to manage the challenges in employing big data in healthcare. Such tools have been and are being developed. The analytic resources, themselves, do not drive, but support healthcare transformation. PMID:25123736
POWER-ENHANCED MULTIPLE DECISION FUNCTIONS CONTROLLING FAMILY-WISE ERROR AND FALSE DISCOVERY RATES.
Peña, Edsel A; Habiger, Joshua D; Wu, Wensong
2011-02-01
Improved procedures, in terms of smaller missed discovery rates (MDR), for performing multiple hypotheses testing with weak and strong control of the family-wise error rate (FWER) or the false discovery rate (FDR) are developed and studied. The improvement over existing procedures such as the Šidák procedure for FWER control and the Benjamini-Hochberg (BH) procedure for FDR control is achieved by exploiting possible differences in the powers of the individual tests. Results signal the need to take into account the powers of the individual tests and to have multiple hypotheses decision functions which are not limited to simply using the individual p-values, as is the case, for example, with the Šidák, Bonferroni, or BH procedures. They also enhance understanding of the role of the powers of individual tests, or more precisely the receiver operating characteristic (ROC) functions of decision processes, in the search for better multiple hypotheses testing procedures. A decision-theoretic framework is utilized, and through auxiliary randomizers the procedures could be used with discrete or mixed-type data or with rank-based nonparametric tests. This is in contrast to existing p-value based procedures whose theoretical validity is contingent on each of these p-value statistics being stochastically equal to or greater than a standard uniform variable under the null hypothesis. Proposed procedures are relevant in the analysis of high-dimensional "large M, small n" data sets arising in the natural, physical, medical, economic and social sciences, whose generation and creation is accelerated by advances in high-throughput technology, notably, but not limited to, microarray technology.
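The Benjamini-Hochberg step-up procedure that serves as the FDR-control baseline here can be sketched compactly. This is the classical p-value-only procedure, not the authors' power-enhanced decision functions, and the p-values below are invented for illustration.

```python
# Classical Benjamini-Hochberg step-up procedure (the baseline the
# abstract improves upon); a minimal sketch, not the paper's method.

def benjamini_hochberg(pvalues, q=0.05):
    """Return indices of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    # Sort p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * q:
            k_max = rank
    # Reject the hypotheses with the k_max smallest p-values.
    return sorted(order[:k_max])

rejected = benjamini_hochberg(
    [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 1.0], q=0.05
)
print(rejected)  # → [0, 1]
```

Note that the step-up search keeps the *largest* qualifying rank, so a small p-value can rescue slightly larger ones below the line, which is what distinguishes BH from Bonferroni.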
Bălăcescu, Loredana; Bălăcescu, O; Crişan, N; Fetica, B; Petruţ, B; Bungărdean, Cătălina; Rus, Meda; Tudoran, Oana; Meurice, G; Irimie, Al; Dragoş, N; Berindan-Neagoe, Ioana
2011-01-01
Prostate cancer represents the leading cause of cancer among the western male population, with clinical behavior ranging from indolent to metastatic disease. Although many molecules and deregulated pathways are known, the molecular mechanisms involved in the development of prostate cancer are not fully understood. The aim of this study was to explore the molecular variation underlying prostate cancer, based on microarray analysis and bioinformatics approaches. Normal and prostate cancer tissues were collected by macrodissection from prostatectomy pieces. All prostate cancer specimens used in our study were Gleason score 7. A gene expression microarray (Agilent Technologies) was used for whole human genome evaluation. The bioinformatics and functional analyses were based on Limma and Ingenuity software. The microarray analysis identified 1119 differentially expressed genes between prostate cancer and normal prostate, which were up- or down-regulated at least 2-fold. P-values were adjusted for multiple testing using the Benjamini-Hochberg method with a false discovery rate of 0.01. These genes were analyzed with Ingenuity Pathway Analysis software, and 23 genetic networks were established. Our microarray results provide new information regarding the molecular networks in prostate cancer stratified as Gleason 7. These data highlight gene expression profiles that support a better understanding of prostate cancer progression.
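The two selection criteria the study applies (at least 2-fold change, Benjamini-Hochberg adjusted p-value below the 0.01 FDR cut-off) amount to a simple filter. The gene names and expression values below are invented for illustration, and the adjusted p-values are assumed to be precomputed.

```python
import math

# Hypothetical toy records: (gene, mean_tumor, mean_normal, adjusted_p).
# Values are illustrative, not data from the study.
genes = [
    ("GENE_A", 80.0, 10.0, 0.0002),  # 8-fold up, significant
    ("GENE_B",  5.0, 40.0, 0.0010),  # 8-fold down, significant
    ("GENE_C", 55.0, 50.0, 0.0005),  # significant but only 1.1-fold
    ("GENE_D", 30.0, 10.0, 0.2000),  # 3-fold but not significant
]

def differentially_expressed(records, fold=2.0, fdr=0.01):
    """Keep genes changed at least 'fold'-fold with adjusted p below 'fdr'."""
    selected = []
    for gene, tumor, normal, adj_p in records:
        log2_fc = math.log2(tumor / normal)
        if abs(log2_fc) >= math.log2(fold) and adj_p < fdr:
            selected.append((gene, round(log2_fc, 2)))
    return selected

hits = differentially_expressed(genes)
print(hits)  # → [('GENE_A', 3.0), ('GENE_B', -3.0)]
```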
Lee, Joseph C; Stiles, David; Lu, Jun; Cam, Margaret C
2007-01-01
Background Microarrays are a popular tool used in experiments to measure gene expression levels. Improving the reproducibility of microarray results produced by different chips from various manufacturers is important to create comparable and combinable experimental results. Alternative splicing has been cited as a possible cause of differences in expression measurements across platforms, though no study to this point has been conducted to show its influence in cross-platform differences. Results Using probe sequence data, a new microarray probe/transcript annotation was created based on the AceView Aug05 release that allowed for the categorization of genes based on their expression measurements' susceptibility to alternative splicing differences across microarray platforms. Examining gene expression data from multiple platforms in light of the new categorization, genes unsusceptible to alternative splicing differences showed higher signal agreement than those genes most susceptible to alternative splicing differences. The analysis gave rise to a different probe-level visualization method that can highlight probe differences according to transcript specificity. Conclusion The results highlight the need for detailed probe annotation at the transcriptome level. The presence of alternative splicing within a given sample can affect gene expression measurements and is a contributing factor to overall technical differences across platforms. PMID:17708771
Brunner, C; Hoffmann, K; Thiele, T; Schedler, U; Jehle, H; Resch-Genger, U
2015-04-01
Commercial platforms consisting of ready-to-use microarrays printed with target-specific DNA probes, a microarray scanner, and software for data analysis are available for different applications in medical diagnostics and food analysis, detecting, e.g., viral and bacteriological DNA sequences. The transfer of these tools from basic research to routine analysis, their broad acceptance in regulated areas, and their use in medical practice require suitable calibration tools for regular control of instrument performance in addition to internal assay controls. Here, we present the development of a novel assay-adapted calibration slide for a commercialized DNA-based assay platform, consisting of precisely arranged fluorescent areas of various intensities obtained by incorporating different concentrations of a "green" dye and a "red" dye in a polymer matrix. These dyes are "Cy3" and "Cy5" analogues with improved photostability, chosen because their spectroscopic properties closely match those of common labels for the green and red channels of microarray scanners. This simple tool allows one to efficiently and regularly assess and control the performance of the microarray scanner provided with the biochip platform and to compare different scanners. It will eventually be used as a fluorescence intensity scale for referencing assay results and to enhance the overall comparability of diagnostic tests.
Stekel, Dov J.; Sarti, Donatella; Trevino, Victor; Zhang, Lihong; Salmon, Mike; Buckley, Chris D.; Stevens, Mark; Pallen, Mark J.; Penn, Charles; Falciani, Francesco
2005-01-01
A key step in the analysis of microarray data is the selection of genes that are differentially expressed. Ideally, such experiments should be properly replicated in order to infer both technical and biological variability, and the data should be subjected to rigorous hypothesis tests to identify the differentially expressed genes. However, in microarray experiments involving the analysis of very large numbers of biological samples, replication is not always practical. Therefore, there is a need for a method to select differentially expressed genes in a rational way from insufficiently replicated data. In this paper, we describe a simple method that uses bootstrapping to generate an error model from a replicated pilot study that can be used to identify differentially expressed genes in subsequent large-scale studies on the same platform, but in which there may be no replicated arrays. The method builds a stratified error model that includes array-to-array variability, feature-to-feature variability and the dependence of error on signal intensity. We apply this model to the characterization of the host response in a model of bacterial infection of human intestinal epithelial cells. We demonstrate the effectiveness of error model based microarray experiments and propose this as a general strategy for a microarray-based screening of large collections of biological samples. PMID:15800204
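A minimal sketch of the bootstrapping idea follows. It is not the authors' stratified error model (which also captures array-to-array and intensity-dependent variability): it simply resamples log-ratios from a replicated pilot to build an empirical null, then flags unreplicated observations falling in its tails. The pilot values are invented.

```python
import random

random.seed(0)  # deterministic for illustration

# Hypothetical log-ratios from replicated pilot arrays (noise only).
pilot_log_ratios = [-0.21, 0.05, -0.10, 0.18, -0.03, 0.09, -0.14, 0.02]

def bootstrap_null(values, n_boot=2000):
    """Empirical distribution of mean log-ratios under resampled pilot noise."""
    means = []
    for _ in range(n_boot):
        sample = [random.choice(values) for _ in values]
        means.append(sum(sample) / len(sample))
    return sorted(means)

def is_differential(observed, null_dist, alpha=0.01):
    """Flag an observation outside the central (1 - alpha) mass of the null."""
    lo = null_dist[int(len(null_dist) * alpha / 2)]
    hi = null_dist[int(len(null_dist) * (1 - alpha / 2)) - 1]
    return observed < lo or observed > hi

null = bootstrap_null(pilot_log_ratios)
print(is_differential(1.5, null))  # a ~2.8-fold change: well outside the noise
```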
Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte
2018-01-01
The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies in merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach is developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. The approach is based on the entity attribute value model and could be extended to other domains.
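The entity-attribute-value (EAV) pattern the approach is based on can be shown in a few lines: clinical facts are stored as (entity, attribute, value) triples, so new concepts require no schema change. The patient identifier and attributes below are illustrative only, not DESIREE's actual model.

```python
# Minimal EAV sketch: facts as (entity, attribute, value) triples.
# All names and values here are hypothetical.
triples = [
    ("patient42", "diagnosis",      "breast carcinoma"),
    ("patient42", "tumour_size_mm", 18),
    ("patient42", "er_status",      "positive"),
]

def get(entity, attribute, store):
    """Return all values recorded for (entity, attribute)."""
    return [v for e, a, v in store if e == entity and a == attribute]

print(get("patient42", "er_status", triples))  # → ['positive']
```

Adding a new clinical concept is just appending a triple, which is why EAV pairs naturally with termino-ontological resources that define the attribute vocabulary.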
Ooi, Chia Huey; Chetty, Madhu; Teng, Shyh Wei
2006-06-23
Due to the large number of genes in a typical microarray dataset, feature selection looks set to play an important role in reducing noise and computational cost in gene expression-based tissue classification while improving accuracy at the same time. Surprisingly, this does not appear to be the case for all multiclass microarray datasets. The reason is that many feature selection techniques applied on microarray datasets are either rank-based and hence do not take into account correlations between genes, or are wrapper-based, which require high computational cost, and often yield difficult-to-reproduce results. In studies where correlations between genes are considered, attempts to establish the merit of the proposed techniques are hampered by evaluation procedures which are less than meticulous, resulting in overly optimistic estimates of accuracy. We present two realistically evaluated correlation-based feature selection techniques which incorporate, in addition to the two existing criteria involved in forming a predictor set (relevance and redundancy), a third criterion called the degree of differential prioritization (DDP). DDP functions as a parameter to strike the balance between relevance and redundancy, providing our techniques with the novel ability to differentially prioritize the optimization of relevance against redundancy (and vice versa). This ability proves useful in producing optimal classification accuracy while using reasonably small predictor set sizes for nine well-known multiclass microarray datasets. For multiclass microarray datasets, especially the GCM and NCI60 datasets, DDP enables our filter-based techniques to produce accuracies better than those reported in previous studies which employed similarly realistic evaluation procedures.
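One plausible form of a DDP-weighted predictor-set score is sketched below, only to illustrate how a single parameter can trade relevance against redundancy; the exact measures and formula are defined in the paper.

```python
# Hedged sketch: a single exponent alpha balances relevance against
# antiredundancy. The functional form is an assumption for illustration.

def ddp_score(relevance, antiredundancy, alpha):
    """alpha near 1 prioritizes relevance; alpha near 0 prioritizes low redundancy."""
    return (relevance ** alpha) * (antiredundancy ** (1.0 - alpha))

# A highly relevant but redundant gene vs. a moderately relevant,
# non-redundant one: alpha decides which joins the predictor set.
redundant_gene = ddp_score(relevance=0.9, antiredundancy=0.2, alpha=0.5)
novel_gene     = ddp_score(relevance=0.6, antiredundancy=0.9, alpha=0.5)
print(novel_gene > redundant_gene)  # → True: at alpha=0.5 redundancy matters
```

Raising alpha toward 1 reverses the ranking, which is the "differential prioritization" the abstract describes.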
A GH-Based Ontology to Support Applications for Automating Decision Support
2005-03-01
architecture for a decision support system. For this reason, it obtains data from, and updates, a database. IDA also wanted the prototype's architecture...Chief Information Officer CoABS Control of Agent Based Systems DBMS Database Management System DoD Department of Defense DTD Document Type...Generic Hub, the Moyeu Générique, and the Generische Nabe, specifying each as a separate service description with property names and values of the GH
Solutions to pervasive environmental problems often are not amenable to a straightforward application of science-based actions. These problems encompass large-scale environmental policy questions where environmental concerns, economic constraints, and societal values conflict ca...
Decision Making: New Paradigm for Education.
ERIC Educational Resources Information Center
Wales, Charles E.; And Others
1986-01-01
Defines education's new paradigm as schooling based on decision making, the critical thinking skills serving it, and the knowledge base supporting it. Outlines a model decision-making process using a hypothetical breakfast problem; a late riser chooses goals, generates ideas, develops an action plan, and implements and evaluates it. (4 references)…
School-Based Decision-Making: The Canadian Perspective.
ERIC Educational Resources Information Center
Peters, Frank
1997-01-01
In Canada, school-based decision making is a political expedient to co-opt public support for public education at the same time as financial resources to schools are being curtailed. School councils are advisory in nature and have no statutory position in either school or school-system decisions. (17 references) (MLF)
Fuzzy-logic based strategy for validation of multiplex methods: example with qualitative GMO assays.
Bellocchi, Gianni; Bertholet, Vincent; Hamels, Sandrine; Moens, W; Remacle, José; Van den Eede, Guy
2010-02-01
This paper illustrates the advantages that a fuzzy-based aggregation method can bring to the validation of a multiplex method for GMO detection (DualChip GMO kit, Eppendorf). Guidelines for the validation of chemical, biochemical, pharmaceutical and genetic methods have been developed, and ad hoc validation statistics are available and routinely used for in-house and inter-laboratory testing and decision-making. Fuzzy logic allows the information obtained from independent validation statistics to be summarised into one synthetic indicator of overall method performance. The microarray technology, introduced for the simultaneous identification of multiple GMOs, poses specific validation issues (patterns of performance for a variety of GMOs at different concentrations). A fuzzy-based indicator for overall evaluation is illustrated in this paper and applied to validation data for different genetically modified elements. Conclusions were drawn from the analytical results. The fuzzy-logic based rules were shown to improve the interpretation of results and to facilitate the overall evaluation of the multiplex method.
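The aggregation idea can be sketched as follows: each validation statistic is mapped onto a [0, 1] membership for "good performance", and the memberships are combined into one indicator. The membership shapes, weights, and statistic values are illustrative assumptions, not the paper's expert rules.

```python
# Hedged sketch of fuzzy aggregation of validation statistics.
# Linear memberships and weighted-average aggregation are assumptions.

def membership(value, worst, best):
    """Linear fuzzy membership: 0 at 'worst', 1 at 'best', clipped to [0, 1]."""
    m = (value - worst) / (best - worst)
    return max(0.0, min(1.0, m))

# Three toy validation statistics for a multiplex GMO assay:
accuracy      = membership(0.96, worst=0.80, best=1.00)  # higher is better
false_pos     = membership(0.04, worst=0.20, best=0.00)  # lower is better
repeatability = membership(0.90, worst=0.70, best=1.00)

# Weighted-average aggregation into one synthetic performance indicator.
weights = [0.4, 0.3, 0.3]
indicator = sum(w * m for w, m in zip(weights, [accuracy, false_pos, repeatability]))
print(round(indicator, 2))  # → 0.76
```

The single indicator makes patterns of performance across many GMO targets and concentrations comparable at a glance, which is the point of the approach.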
An IT Architecture for Systems Medicine.
Ganzinger, Matthias; Gietzelt, Matthias; Karmen, Christian; Firnkorn, Daniel; Knaup, Petra
2015-01-01
Systems medicine aims to support treatment of complex diseases like cancer by integrating all available data for the disease. To provide such a decision support in clinical practice, a suitable IT architecture is necessary. We suggest a generic architecture comprised of the following three layers: data representation, decision support, and user interface. For the systems medicine research project "Clinically-applicable, omics-based assessment of survival, side effects, and targets in multiple myeloma" (CLIOMMICS) we developed a concrete instance of the generic architecture. We use i2b2 for representing the harmonized data. Since no deterministic model exists for multiple myeloma we use case-based reasoning for decision support. For clinical practice, visualizations of the results must be intuitive and clear. At the same time, they must communicate the uncertainty immanent in stochastic processes. Thus, we develop a specific user interface for systems medicine based on the web portal software Liferay.
Keith M. Reynolds; Edward H. Holsten
1997-01-01
SBexpert version 2.0 is a knowledge-based decision-support system for spruce beetle (Dendroctonus rufipennis (Kby.)) management developed for use in Microsoft (MS) Windows with the KnowledgePro Windows development language. Version 2.0 is a significant enhancement of version 1.0. The SBexpert users guide provides detailed instructions on the use of...
GET SMARTE: A DECISION SUPPORT SYSTEM TO REVITALIZE COMMUNITIES - CABERNET 2007
Sustainable Management Approaches and Revitalization Tools - electronic (SMARTe), is an open-source, web-based, decision support system for developing and evaluating future reuse scenarios for potentially contaminated land. SMARTe contains information and analysis tools for all a...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyle, C.A.; Baetz, B.W.
1998-12-31
Although there are a number of expert systems available which are designed to assist in resolving environmental problems, there is still a need for a system which would assist managers in determining waste management options for all types of wastes from one or more industrial plants, giving priority to sustainable use of resources, reuse and recycling. A prototype model was developed to determine the potentials for reuse and recycling of waste materials, to select the treatments needed to recycle waste materials or for treatment before disposal, and to determine potentials for co-treatment of wastes. A knowledge-based decision support system was then designed using this model. This paper describes the prototype model, the developed knowledge-based decision support system, the input and storage of data within the system and the inference engine developed for the system to determine the treatment options for the wastes. Options for sorting and selecting treatment trains are described, along with a discussion of the limitations of the approach and future developments needed for the system.
The Use of Atomic Force Microscopy for 3D Analysis of Nucleic Acid Hybridization on Microarrays.
Dubrovin, E V; Presnova, G V; Rubtsova, M Yu; Egorov, A M; Grigorenko, V G; Yaminsky, I V
2015-01-01
Oligonucleotide microarrays are considered today to be one of the most efficient methods of gene diagnostics. The capability of atomic force microscopy (AFM) to characterize the three-dimensional morphology of single molecules on a surface allows one to use it as an effective tool for the 3D analysis of a microarray for the detection of nucleic acids. The high resolution of AFM offers ways to decrease the detection threshold of target DNA and increase the signal-to-noise ratio. In this work, we suggest an approach to the evaluation of the results of hybridization of gold nanoparticle-labeled nucleic acids on silicon microarrays based on an AFM analysis of the surface both in air and in liquid, which takes into account their three-dimensional structure. We suggest a quantitative measure of the hybridization results which is based on the fraction of the surface area occupied by the nanoparticles.
Gadd, C S; Baskaran, P; Lobach, D F
1998-01-01
Extensive utilization of point-of-care decision support systems will be largely dependent on the development of user interaction capabilities that make them effective clinical tools in patient care settings. This research identified critical design features of point-of-care decision support systems that are preferred by physicians, through a multi-method formative evaluation of an evolving prototype of an Internet-based clinical decision support system. Clinicians used four versions of the system--each highlighting a different functionality. Surveys and qualitative evaluation methodologies assessed clinicians' perceptions regarding system usability and usefulness. Our analyses identified features that improve perceived usability, such as telegraphic representations of guideline-related information, facile navigation, and a forgiving, flexible interface. Users also preferred features that enhance usefulness and motivate use, such as an encounter documentation tool and the availability of physician instruction and patient education materials. In addition to identifying design features that are relevant to efforts to develop clinical systems for point-of-care decision support, this study demonstrates the value of combining quantitative and qualitative methods of formative evaluation with an iterative system development strategy to implement new information technology in complex clinical settings.
ERIC Educational Resources Information Center
Grenville-Briggs, Laura J.; Stansfield, Ian
2011-01-01
This report describes a linked series of Masters-level computer practical workshops. They comprise an advanced functional genomics investigation, based upon analysis of a microarray dataset probing yeast DNA damage responses. The workshops require the students to analyse highly complex transcriptomics datasets, and were designed to stimulate…
Caryoscope: An Open Source Java application for viewing microarray data in a genomic context
Awad, Ihab AB; Rees, Christian A; Hernandez-Boussard, Tina; Ball, Catherine A; Sherlock, Gavin
2004-01-01
Background Microarray-based comparative genome hybridization experiments generate data that can be mapped onto the genome. These data are interpreted more easily when represented graphically in a genomic context. Results We have developed Caryoscope, which is an open source Java application for visualizing microarray data from array comparative genome hybridization experiments in a genomic context. Caryoscope can read General Feature Format files (GFF files), as well as comma- and tab-delimited files, that define the genomic positions of the microarray reporters for which data are obtained. The microarray data can be browsed using an interactive, zoomable interface, which helps users identify regions of chromosomal deletion or amplification. The graphical representation of the data can be exported in a number of graphic formats, including publication-quality formats such as PostScript. Conclusion Caryoscope is a useful tool that can aid in the visualization, exploration and interpretation of microarray data in a genomic context. PMID:15488149
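The kind of input Caryoscope consumes can be illustrated with a minimal GFF parser. The data lines, probe names, and the use of the score column for the log2 ratio are assumptions for illustration, not Caryoscope's actual code.

```python
# Hedged sketch: parse tab-delimited GFF-style lines placing microarray
# reporters on the genome, then flag candidate deletions. Data invented.

gff_lines = [
    "chr1\tarrayCGH\treporter\t15000\t16000\t0.85\t+\t.\tID=probe_001",
    "chr1\tarrayCGH\treporter\t42000\t43200\t-1.40\t+\t.\tID=probe_002",
]

def parse_gff(lines):
    """Extract chromosome, coordinates, and measurement from each GFF line."""
    reporters = []
    for line in lines:
        seqid, _source, _type, start, end, score, *_rest = line.split("\t")
        reporters.append({
            "chrom": seqid,
            "start": int(start),
            "end": int(end),
            "log2_ratio": float(score),  # assumption: score holds the measurement
        })
    return reporters

probes = parse_gff(gff_lines)
deleted = [p for p in probes if p["log2_ratio"] < -1.0]  # candidate deletions
print(len(deleted))  # → 1
```

Mapping each reporter to coordinates like this is what lets a viewer zoom to regions of chromosomal deletion or amplification.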
Guo, Qingsheng; Bai, Zhixiong; Liu, Yuqian; Sun, Qingjiang
2016-03-15
In this work, we report the application of streptavidin-coated quantum dot (strAV-QD) in molecular beacon (MB) microarray assays by using the strAV-QD to label the immobilized MB, avoiding target labeling and meanwhile obviating the use of amplification. The MBs are stem-loop structured oligodeoxynucleotides, modified with a thiol and a biotin at two terminals of the stem. With the strAV-QD labeling an "opened" MB rather than a "closed" MB via streptavidin-biotin reaction, a sensitive and specific detection of label-free target DNA sequence is demonstrated by the MB microarray, with a signal-to-background ratio of 8. The immobilized MBs can be perfectly regenerated, allowing the reuse of the microarray. The MB microarray also is able to detect single nucleotide polymorphisms, exhibiting genotype-dependent fluorescence signals. It is demonstrated that the MB microarray can perform as a 4-to-2 encoder, compressing the genotype information into two outputs. Copyright © 2015 Elsevier B.V. All rights reserved.
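The 4-to-2 encoder behaviour can be stated as a truth table mapping four mutually exclusive genotype signals to two binary outputs. The labels and coding below are a hypothetical assignment, since the abstract does not give the table.

```python
# Hedged sketch of a 4-to-2 encoder: four genotype states compressed
# into two binary outputs. The coding table is an assumption.
ENCODER = {
    "wild-type homozygote": (0, 0),
    "heterozygote":         (0, 1),
    "mutant homozygote":    (1, 0),
    "no hybridization":     (1, 1),
}

def encode(genotype):
    """Compress one of four genotype calls into two output bits."""
    return ENCODER[genotype]

print(encode("heterozygote"))  # → (0, 1)
```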
2012-01-01
Over the last decade, the introduction of microarray technology has had a profound impact on gene expression research. The publication of studies with dissimilar or altogether contradictory results, obtained using different microarray platforms to analyze identical RNA samples, has raised concerns about the reliability of this technology. The MicroArray Quality Control (MAQC) project was initiated to address these concerns, as well as other performance and data analysis issues. Expression data on four titration pools from two distinct reference RNA samples were generated at multiple test sites using a variety of microarray-based and alternative technology platforms. Here we describe the experimental design and probe mapping efforts behind the MAQC project. We show intraplatform consistency across test sites as well as a high level of interplatform concordance in terms of genes identified as differentially expressed. This study provides a resource that represents an important first step toward establishing a framework for the use of microarrays in clinical and regulatory settings. PMID:16964229
Split-plot microarray experiments: issues of design, power and sample size.
Tsai, Pi-Wen; Lee, Mei-Ling Ting
2005-01-01
This article focuses on microarray experiments with two or more factors in which treatment combinations of the factors corresponding to the samples paired together onto arrays are not completely random. A main effect of one (or more) factor(s) is confounded with arrays (the experimental blocks). This is called a split-plot microarray experiment. We utilise an analysis of variance (ANOVA) model to assess differentially expressed genes for between-array and within-array comparisons that are generic under a split-plot microarray experiment. Instead of standard t- or F-test statistics that rely on mean square errors of the ANOVA model, we use a robust method, referred to as 'a pooled percentile estimator', to identify genes that are differentially expressed across different treatment conditions. We illustrate the design and analysis of split-plot microarray experiments based on a case application described by Jin et al. A brief discussion of power and sample size for split-plot microarray experiments is also presented.
Horne, Avril C; Szemis, Joanna M; Webb, J Angus; Kaur, Simranjit; Stewardson, Michael J; Bond, Nick; Nathan, Rory
2018-03-01
One important aspect of adaptive management is the clear and transparent documentation of hypotheses, together with the use of predictive models (complete with any assumptions) to test those hypotheses. Documentation of such models can improve the ability to learn from management decisions and supports dialog between stakeholders. A key challenge is how best to represent the existing scientific knowledge to support decision-making. Such challenges are currently emerging in the field of environmental water management in Australia, where managers are required to prioritize the delivery of environmental water on an annual basis, using a transparent and evidence-based decision framework. We argue that the development of models of ecological responses to environmental water use needs to support both the planning and implementation cycles of adaptive management. Here we demonstrate an approach based on the use of Conditional Probability Networks to translate existing ecological knowledge into quantitative models that include temporal dynamics to support adaptive environmental flow management. It equally extends to other applications where knowledge is incomplete, but decisions must still be made.
Gathering Real World Evidence with Cluster Analysis for Clinical Decision Support.
Xia, Eryu; Liu, Haifeng; Li, Jing; Mei, Jing; Li, Xuejun; Xu, Enliang; Li, Xiang; Hu, Gang; Xie, Guotong; Xu, Meilin
2017-01-01
Clinical decision support systems are information technology systems that assist clinical decision-making tasks and have been shown to enhance clinical performance. Cluster analysis, which groups similar patients together, aims to separate patient cases into phenotypically heterogeneous groups and to define therapeutically homogeneous patient subclasses. Useful as it is, the application of cluster analysis in clinical decision support systems is rarely reported. Here, we describe the use of cluster analysis in clinical decision support systems: patient cases are first divided into similar groups, and diagnosis or treatment suggestions are then provided based on the group profiles. This integration provides data for clinical decisions and compiles a wide range of clinical practices to inform the performance of individual clinicians. We also include an example usage of the system under the scenario of blood lipid management in type 2 diabetes. These efforts represent a step toward promoting patient-centered care and enabling precision medicine.
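The described pipeline (cluster similar patients, then attach a suggestion to each cluster profile) can be sketched with a tiny k-means. The features (LDL cholesterol, HbA1c), values, and suggestions are invented for the lipid-management illustration, not taken from the system.

```python
# Hedged sketch: k-means grouping of patients, then a suggestion per
# cluster profile. All clinical values and suggestions are hypothetical.

# (LDL mmol/L, HbA1c %) for a handful of type 2 diabetes patients.
patients = [(2.0, 6.5), (2.2, 6.8), (4.5, 8.9), (4.8, 9.2), (4.4, 9.0)]

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, centroids, iters=10):
    """Basic Lloyd iterations from fixed initial centroids (deterministic)."""
    groups = [[] for _ in centroids]
    for _ in range(iters):
        groups = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: dist2(p, centroids[i]))
            groups[nearest].append(p)
        centroids = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else c
            for g, c in zip(groups, centroids)
        ]
    return centroids, groups

centroids, groups = kmeans(patients, centroids=[(2.0, 6.0), (5.0, 9.0)])

# Hypothetical suggestion attached to each cluster profile:
suggestions = ["maintain current regimen", "consider statin intensification"]
new_patient = (4.6, 9.1)
cluster = min(range(len(centroids)), key=lambda i: dist2(new_patient, centroids[i]))
print(suggestions[cluster])  # → consider statin intensification
```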
Engelmann, Brett W
2017-01-01
The Src Homology 2 (SH2) domain family primarily recognizes phosphorylated tyrosine (pY)-containing peptide motifs. The relative affinity preferences among competing SH2 domains for phosphopeptide ligands define "specificity space," which underpins many functional pY-mediated interactions within signaling networks. The degree of promiscuity exhibited and the dynamic range of affinities supported by individual domains or phosphopeptides is best resolved by a carefully executed and controlled quantitative high-throughput experiment. Here, I describe the fabrication and application of a cellulose-peptide conjugate microarray (CPCMA) platform for the quantitative analysis of SH2 domain specificity space. Included herein are instructions for optimal experimental design, with special attention paid to common sources of systematic error, phosphopeptide SPOT synthesis, microarray fabrication, analyte titrations, data capture, and analysis.
DNA microarray-based PCR ribotyping of Clostridium difficile.
Schneeberg, Alexander; Ehricht, Ralf; Slickers, Peter; Baier, Vico; Neubauer, Heinrich; Zimmermann, Stefan; Rabold, Denise; Lübke-Becker, Antina; Seyboldt, Christian
2015-02-01
This study presents a DNA microarray-based assay for fast and simple PCR ribotyping of Clostridium difficile strains. Hybridization probes were designed to query the modularly structured intergenic spacer region (ISR), which is also the template for conventional and PCR ribotyping with subsequent capillary gel electrophoresis (seq-PCR) ribotyping. The probes were derived from sequences available in GenBank as well as from theoretical ISR module combinations. A database of reference hybridization patterns was set up from a collection of 142 well-characterized C. difficile isolates representing 48 seq-PCR ribotypes. The reference hybridization patterns calculated by the arithmetic mean were compared using a similarity matrix analysis. The 48 investigated seq-PCR ribotypes revealed 27 array profiles that were clearly distinguishable. The most frequent human-pathogenic ribotypes 001, 014/020, 027, and 078/126 were discriminated by the microarray. C. difficile strains related to 078/126 (033, 045/FLI01, 078, 126, 126/FLI01, 413, 413/FLI01, 598, 620, 652, and 660) and 014/020 (014, 020, and 449) showed similar hybridization patterns, confirming their genetic relatedness, which was previously reported. A panel of 50 C. difficile field isolates was tested by seq-PCR ribotyping and the DNA microarray-based assay in parallel. Taking into account that the current version of the microarray does not discriminate some closely related seq-PCR ribotypes, all isolates were typed correctly. Moreover, seq-PCR ribotypes without reference profiles available in the database (ribotype 009 and 5 new types) were correctly recognized as new ribotypes, confirming the performance and expansion potential of the microarray. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
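Comparing reference hybridization patterns can be illustrated with a Jaccard-style similarity over binary probe calls. The probe patterns below are invented and far shorter than the real array, and the study's similarity-matrix analysis may differ in detail.

```python
# Hedged sketch: Jaccard-style similarity between binary hybridization
# patterns of two ribotypes. Patterns are invented for illustration.

def pattern_similarity(a, b):
    """Fraction of probes positive in both, among probes positive in either."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 1.0

ribotype_078 = [1, 1, 0, 1, 0, 1, 1, 0]
ribotype_126 = [1, 1, 0, 1, 0, 1, 0, 0]  # near-identical, reflecting relatedness
ribotype_027 = [0, 1, 1, 0, 1, 0, 0, 1]

print(round(pattern_similarity(ribotype_078, ribotype_126), 2))  # → 0.8
print(round(pattern_similarity(ribotype_078, ribotype_027), 2))  # → 0.12
```

High similarity between related ribotypes (such as the 078/126 group in the study) is exactly what makes them hard to discriminate and easy to cluster.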
Schüler, Susann; Wenz, Ingrid; Wiederanders, B; Slickers, P; Ehricht, R
2006-06-12
Recent developments in DNA microarray technology led to a variety of open and closed devices and systems including high and low density microarrays for high-throughput screening applications as well as microarrays of lower density for specific diagnostic purposes. Besides predefined microarrays for specific applications, manufacturers offer the production of custom-designed microarrays adapted to customers' wishes. Array-based assays demand complex procedures including several steps for sample preparation (RNA extraction, amplification and sample labelling), hybridization and detection, thus leading to high variability between approaches and resulting in the necessity of extensive standardization and normalization procedures. In the present work a custom-designed human proteinase DNA microarray of lower density in ArrayTube format was established. This highly economic open platform only requires standard laboratory equipment and allows the study of the molecular regulation of cell behaviour by proteinases. We established a procedure for sample preparation and hybridization and verified the array-based gene expression profile by quantitative real-time PCR (QRT-PCR). Moreover, we compared the results with the well-established Affymetrix microarray. By application of standard labelling procedures with e.g. Klenow fragment exo-, single primer amplification (SPA) or In Vitro Transcription (IVT) we noticed a loss of signal conservation for some genes. To overcome this problem we developed a protocol in accordance with the SPA protocol, in which we included target-specific primers designed individually for each spotted oligomer. Here we present a complete array-based assay in which only the specific transcripts of interest are amplified in parallel and in a linear manner. The array represents a proof of principle which can be adapted to other species as well.
As the designed protocol for amplifying mRNA starts from as little as 100 ng total RNA, it presents an alternative method for detecting even weakly expressed genes by microarray experiments in a highly reproducible and sensitive manner. Preservation of signal integrity is demonstrated by QRT-PCR measurements. The small amounts of total RNA necessary for the analyses make this method applicable for investigations with limited material, such as clinical samples from, for example, organ or tumour biopsies. These are arguments in favour of the high potential of our assay compared to established procedures for amplification within the field of diagnostic expression profiling. Nevertheless, the screening character of microarray data must be kept in mind, and independent methods should verify the results.
Microarray-based cancer prediction using soft computing approach.
Wang, Xiaosheng; Gotoh, Osamu
2009-05-26
One of the difficulties in using gene expression profiles to predict cancer is how to effectively select a few informative genes to construct accurate prediction models from thousands or tens of thousands of genes. We screen highly discriminative genes and gene pairs to create simple prediction models involving single genes or gene pairs on the basis of a soft computing approach and rough set theory. Accurate cancer prediction is obtained when we apply the simple prediction models to four cancer gene expression datasets: CNS tumor, colon tumor, lung cancer and DLBCL. Some genes closely correlated with the pathogenesis of specific or general cancers are identified. In contrast with other models, our models are simple, effective and robust. Meanwhile, our models are interpretable because they are based on decision rules. Our results demonstrate that very simple models may perform well on molecular prediction of cancer, and that important gene markers of cancer can be detected if the gene selection approach is chosen reasonably.
Mass-transport limitations in spot-based microarrays.
Zhao, Ming; Wang, Xuefeng; Nolte, David
2010-09-20
Mass transport of analyte to surface-immobilized affinity reagents is the fundamental bottleneck for sensitive detection in solid-support microarrays and biosensors. Analyte depletion in the volume adjacent to the sensor causes deviation from ideal association, significantly slows down reaction kinetics, and causes inhomogeneous binding across the sensor surface. In this paper we use high-resolution molecular interferometric imaging (MI2), a label-free optical interferometry technique for direct detection of molecular films, to study the inhomogeneous distribution of intra-spot binding across 100 micron-diameter protein spots. By measuring intra-spot binding inhomogeneity, reaction kinetics can be determined accurately when combined with a numerical three-dimensional finite element model. To ensure homogeneous binding across a spot, a critical flow rate is identified in terms of the association rate k(a) and the spot diameter. The binding inhomogeneity across a spot can be used to distinguish high-affinity low-concentration specific reactions from low-affinity high-concentration non-specific binding of background proteins.
The Toxicological Prioritization Index (ToxPi) decision support framework was previously developed to facilitate incorporation of diverse data to prioritize chemicals based on potential hazard. This ToxPi index was demonstrated by considering results of bioprofiling related to po...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reynolds, K.M.; Holsten, E.H.; Werner, R.A.
1995-03-01
SBexpert version 1.0 is a knowledge-based decision-support system for management of spruce beetle developed for use in Microsoft Windows. The user's guide provides detailed instructions on the use of all SBexpert features. SBexpert has four main subprograms: introduction, analysis, textbook, and literature. The introduction is the first of the five subtopics in the SBexpert help system. The analysis topic is an advisory system for spruce beetle management that provides recommendations for reducing spruce beetle hazard and risk to spruce stands and is the main analytical topic in SBexpert. The textbook and literature topics provide complementary decision support for analysis.
Richard, Arianne C; Lyons, Paul A; Peters, James E; Biasci, Daniele; Flint, Shaun M; Lee, James C; McKinney, Eoin F; Siegel, Richard M; Smith, Kenneth G C
2014-08-04
Although numerous investigations have compared gene expression microarray platforms, preprocessing methods and batch correction algorithms using constructed spike-in or dilution datasets, there remains a paucity of studies examining the properties of microarray data using diverse biological samples. Most microarray experiments seek to identify subtle differences between samples with variable background noise, a scenario poorly represented by constructed datasets. Thus, microarray users lack important information regarding the complexities introduced in real-world experimental settings. The recent development of a multiplexed, digital technology for nucleic acid measurement enables counting of individual RNA molecules without amplification and, for the first time, permits such a study. Using a set of human leukocyte subset RNA samples, we compared previously acquired microarray expression values with RNA molecule counts determined by the nCounter Analysis System (NanoString Technologies) in selected genes. We found that gene measurements across samples correlated well between the two platforms, particularly for high-variance genes, while genes deemed unexpressed by the nCounter generally had both low expression and low variance on the microarray. Confirming previous findings from spike-in and dilution datasets, this "gold-standard" comparison demonstrated signal compression that varied dramatically by expression level and, to a lesser extent, by dataset. Most importantly, examination of three different cell types revealed that noise levels differed across tissues. Microarray measurements generally correlate with relative RNA molecule counts within optimal ranges but suffer from expression-dependent accuracy bias and precision that varies across datasets. We urge microarray users to consider expression-level effects in signal interpretation and to evaluate noise properties in each dataset independently.
Pricing and reimbursement frameworks in Central Eastern Europe: a decision tool to support choices.
Kolasa, Katarzyna; Kalo, Zoltan; Hornby, Edward
2015-02-01
Given limited financial resources in the Central Eastern European (CEE) region, challenges in obtaining access to innovative medical technologies are formidable. The objective of this research was to develop a decision tree that supports decision makers and drug manufacturers from the CEE region in their search for optimal innovative pricing and reimbursement schemes (IPRSs). A systematic literature review was performed to search for published IPRSs, and then ten experts from the CEE region were interviewed to ascertain their opinions on these schemes. In total, 33 articles representing 46 unique IPRSs were analyzed. Based on our literature review and subsequent expert input, key decision nodes and branches of the decision tree were developed. The results indicate that outcome-based schemes are better suited to deal with uncertainties surrounding cost effectiveness, while non-outcome-based schemes are more appropriate for pricing and budget impact challenges.
Data warehousing: toward knowledge management.
Shams, K; Farishta, M
2001-02-01
With rapid changes taking place in the practice and delivery of health care, decision support systems have assumed an increasingly important role. More and more health care institutions are deploying data warehouse applications as decision support tools for strategic decision making. By making the right information available at the right time to the right decision makers in the right manner, data warehouses empower employees to become knowledge workers with the ability to make the right decisions and solve problems, creating strategic leverage for the organization. Health care management must plan and implement data warehousing strategy using a best-practice approach. Through the power of data warehousing, health care management can negotiate better managed-care contracts based on the ability to provide accurate data on case mix and resource utilization. Management can also save millions of dollars through the implementation of clinical pathways, better resource utilization, and changing physician behavior toward best practices based on evidence-based medicine.
Radiomics biomarkers for accurate tumor progression prediction of oropharyngeal cancer
NASA Astrophysics Data System (ADS)
Hadjiiski, Lubomir; Chan, Heang-Ping; Cha, Kenny H.; Srinivasan, Ashok; Wei, Jun; Zhou, Chuan; Prince, Mark; Papagerakis, Silvana
2017-03-01
Accurate tumor progression prediction for oropharyngeal cancers is crucial for identifying patients who would best be treated with optimized treatment and therefore minimize the risk of under- or over-treatment. An objective decision support system that can merge the available radiomics, histopathologic and molecular biomarkers in a predictive model based on statistical outcomes of previous cases and machine learning may assist clinicians in making more accurate assessment of oropharyngeal tumor progression. In this study, we evaluated the feasibility of developing individual and combined predictive models based on quantitative image analysis from radiomics, histopathology and molecular biomarkers for oropharyngeal tumor progression prediction. With IRB approval, 31, 84, and 127 patients with head and neck CT (CT-HN), tumor tissue microarrays (TMAs) and molecular biomarker expressions, respectively, were collected. For 8 of the patients all 3 types of biomarkers were available and they were sequestered in a test set. The CT-HN lesions were automatically segmented using our level sets based method. Morphological, texture and molecular based features were extracted from CT-HN and TMA images, and selected features were merged by a neural network. The classification accuracy was quantified using the area under the ROC curve (AUC). Test AUCs of 0.87, 0.74, and 0.71 were obtained with the individual predictive models based on radiomics, histopathologic, and molecular features, respectively. Combining the radiomics and molecular models increased the test AUC to 0.90. Combining all 3 models increased the test AUC further to 0.94. This preliminary study demonstrates that the individual domains of biomarkers are useful and the integrated multi-domain approach is most promising for tumor progression prediction.
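The AUC used above to quantify classification accuracy has a simple rank interpretation: the probability that a randomly chosen progressor scores higher than a randomly chosen non-progressor. The sketch below computes AUC by pair counting; the scores are made up, and the study itself merged features with a neural network rather than by any scheme shown here.

```python
# AUC via the Mann-Whitney pair-counting statistic; toy scores only.

def auc(pos_scores, neg_scores):
    """P(score_pos > score_neg), counting ties as half a win."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Invented classifier outputs for 3 progressors and 3 non-progressors.
model_a_pos, model_a_neg = [0.9, 0.7, 0.4], [0.8, 0.3, 0.2]
auc_a = auc(model_a_pos, model_a_neg)   # 7 of 9 pairs correctly ordered
```

On these toy scores, 7 of the 9 progressor/non-progressor pairs are ordered correctly, giving an AUC of about 0.78, comparable in magnitude to the single-domain test AUCs reported in the abstract.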
Hu, Ting; Pan, Qinxin; Andrew, Angeline S; Langer, Jillian M; Cole, Michael D; Tomlinson, Craig R; Karagas, Margaret R; Moore, Jason H
2014-04-11
Several different genetic and environmental factors have been identified as independent risk factors for bladder cancer in population-based studies. Recent studies have turned to understanding the role of gene-gene and gene-environment interactions in determining risk. We previously developed the bioinformatics framework of statistical epistasis networks (SEN) to characterize the global structure of interacting genetic factors associated with a particular disease or clinical outcome. By applying SEN to a population-based study of bladder cancer among Caucasians in New Hampshire, we were able to identify a set of connected genetic factors with strong and significant interaction effects on bladder cancer susceptibility. To support our statistical findings using networks, in the present study, we performed pathway enrichment analyses on the set of genes identified using SEN, and found that they are associated with the carcinogen benzo[a]pyrene, a component of tobacco smoke. We further carried out an mRNA expression microarray experiment to validate statistical genetic interactions, and to determine if the set of genes identified in the SEN were differentially expressed in a normal bladder cell line and a bladder cancer cell line in the presence or absence of benzo[a]pyrene. Significant nonrandom sets of genes from the SEN were found to be differentially expressed in response to benzo[a]pyrene in both the normal bladder cells and the bladder cancer cells. In addition, the patterns of gene expression were significantly different between these two cell types. The enrichment analyses and the gene expression microarray results support the idea that SEN analysis of bladder cancer in population-based studies is able to identify biologically meaningful statistical patterns. These results bring us a step closer to a systems genetics approach to understanding cancer susceptibility that integrates population and laboratory-based studies.
eXframe: reusable framework for storage, analysis and visualization of genomics experiments
2011-01-01
Background Genome-wide experiments are routinely conducted to measure gene expression, DNA-protein interactions and epigenetic status. Structured metadata for these experiments is imperative for a complete understanding of experimental conditions, to enable consistent data processing and to allow retrieval, comparison, and integration of experimental results. Even though several repositories have been developed for genomics data, only a few provide annotation of samples and assays using controlled vocabularies. Moreover, many of them are tailored for a single type of technology or measurement and do not support the integration of multiple data types. Results We have developed eXframe - a reusable web-based framework for genomics experiments that provides 1) the ability to publish structured data compliant with accepted standards, 2) support for multiple data types including microarrays and next generation sequencing, and 3) integrated query, analysis and visualization tools (enabled by consistent processing of the raw data and annotation of samples); it is available as open-source software. We present two case studies where this software is currently being used to build repositories of genomics experiments - one contains data from hematopoietic stem cells and another from Parkinson's disease patients. Conclusion The web-based framework eXframe offers structured annotation of experiments as well as uniform processing and storage of molecular data from microarray and next generation sequencing platforms. The framework allows users to query and integrate information across species, technologies, measurement types and experimental conditions. Our framework is reusable and freely modifiable - other groups or institutions can deploy their own custom web-based repositories based on this software. It is interoperable with the most important data formats in this domain. We hope that other groups will not only use eXframe, but also contribute their own useful modifications.
PMID:22103807
Robertson, Eden G; Wakefield, Claire E; Signorelli, Christina; Cohn, Richard J; Patenaude, Andrea; Foster, Claire; Pettit, Tristan; Fardell, Joanna E
2018-07-01
We conducted a systematic review to identify the strategies that have been recommended in the literature to facilitate shared decision-making regarding enrollment in pediatric oncology clinical trials. We searched seven databases for peer-reviewed literature published 1990-2017. Of 924 articles identified, 17 studies were eligible for the review. We assessed study quality using the 'Mixed-Methods Appraisal Tool'. We coded the results and discussions of papers line-by-line using nVivo software. We categorized strategies thematically. Five main themes emerged: 1) decision-making as a process, 2) individuality of the process, 3) information provision, 4) the role of communication, and 5) decision and psychosocial support. Families should have adequate time to make a decision. HCPs should elicit parents' and patients' preferences for level of information and decision involvement. Information should be clear and provided in multiple modalities. Articles also recommended providing training for healthcare professionals and access to psychosocial support for families. High-quality, individually-tailored information, open communication and psychosocial support appear vital in supporting decision-making regarding enrollment in clinical trials. These data will usefully inform future decision-making interventions/tools to support families making clinical trial decisions. A solid evidence base for effective strategies which facilitate shared decision-making is needed. Copyright © 2018 Elsevier B.V. All rights reserved.
XWeB: The XML Warehouse Benchmark
NASA Astrophysics Data System (ADS)
Mahboubi, Hadj; Darmont, Jérôme
With the emergence of XML as a standard for representing business data, new decision support applications are being developed. These XML data warehouses aim at supporting On-Line Analytical Processing (OLAP) operations that manipulate irregular XML data. To ensure the feasibility of these new tools, important performance issues must be addressed. Performance is customarily assessed with the help of benchmarks. However, decision support benchmarks do not currently support XML features. In this paper, we introduce the XML Warehouse Benchmark (XWeB), which aims at filling this gap. XWeB derives from the relational decision support benchmark TPC-H. It is mainly composed of a test data warehouse that is based on a unified reference model for XML warehouses and that features XML-specific structures, and its associated XQuery decision support workload. XWeB's usage is illustrated by experiments on several XML database management systems.
Identification of new autoantigens for primary biliary cirrhosis using human proteome microarrays.
Hu, Chao-Jun; Song, Guang; Huang, Wei; Liu, Guo-Zhen; Deng, Chui-Wen; Zeng, Hai-Pan; Wang, Li; Zhang, Feng-Chun; Zhang, Xuan; Jeong, Jun Seop; Blackshaw, Seth; Jiang, Li-Zhi; Zhu, Heng; Wu, Lin; Li, Yong-Zhe
2012-09-01
Primary biliary cirrhosis (PBC) is a chronic cholestatic liver disease of unknown etiology and is considered to be an autoimmune disease. Autoantibodies are important tools for accurate diagnosis of PBC. Here, we employed serum profiling analysis using a human proteome microarray composed of about 17,000 full-length unique proteins and identified 23 proteins that correlated with PBC. To validate these results, we fabricated a PBC-focused microarray with 21 of these newly identified candidates and nine additional known PBC antigens. By screening the PBC microarrays with additional cohorts of 191 PBC patients and 321 controls (43 autoimmune hepatitis, 55 hepatitis B virus, 31 hepatitis C virus, 48 rheumatoid arthritis, 45 systemic lupus erythematosus, 49 systemic sclerosis, and 50 healthy), six proteins were confirmed as novel PBC autoantigens with high sensitivities and specificities, including hexokinase-1 (isoforms I and II), Kelch-like protein 7, Kelch-like protein 12, zinc finger and BTB domain-containing protein 2, and eukaryotic translation initiation factor 2C, subunit 1. To facilitate clinical diagnosis, we developed ELISAs for Kelch-like protein 12 and zinc finger and BTB domain-containing protein 2 and tested large cohorts (297 PBC and 637 control sera) to confirm the sensitivities and specificities observed in the microarray-based assays. In conclusion, our research showed that a strategy using a high-content protein microarray combined with a smaller but more focused protein microarray can effectively identify and validate novel PBC-specific autoantigens and has the capacity to be translated to clinical diagnosis by means of an ELISA-based method.
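The sensitivities and specificities reported for such validation cohorts reduce to simple ratios over confusion counts. A minimal sketch follows; the counts are invented for illustration and are not the study's results.

```python
# Sensitivity and specificity from confusion counts.
# The counts below are illustrative, not the study's data.

def sens_spec(tp, fn, tn, fp):
    """Return (sensitivity, specificity) = (TP/(TP+FN), TN/(TN+FP))."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical marker flagging 120 of 297 PBC sera and 13 of 637 controls.
sensitivity, specificity = sens_spec(tp=120, fn=177, tn=624, fp=13)
```

For autoantibody markers, high specificity on large control cohorts (as in the 637 control sera here) matters most, since a positive result may trigger further clinical workup.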
Samal, Lipika; D'Amore, John D; Bates, David W; Wright, Adam
2017-11-01
Clinical decision support tools for risk prediction are readily available but typically require workflow interruptions and manual data entry, so they are rarely used. Due to new data interoperability standards for electronic health records (EHRs), other options are available. As a clinical case study, we sought to build a scalable, web-based system that would automate calculation of kidney failure risk and display clinical decision support to users in primary care practices. We developed a single-page application, web server, database, and application programming interface to calculate and display kidney failure risk. Data were extracted from the EHR using the Consolidated Clinical Document Architecture interoperability standard for Continuity of Care Documents (CCDs). EHR users were presented with a noninterruptive alert on the patient's summary screen and a hyperlink to details and recommendations provided through a web application. Clinic schedules and CCDs were retrieved using existing application programming interfaces to the EHR, and we provided a clinical decision support hyperlink to the EHR as a service. We debugged a series of terminology and technical issues. The application was validated with data from 255 patients and subsequently deployed to 10 primary care clinics where, over the course of 1 year, 569,533 CCD documents were processed. We validated the use of interoperable documents and open-source components to develop a low-cost tool for automated clinical decision support. Since Consolidated Clinical Document Architecture-based data extraction extends to any certified EHR, this demonstrates a successful modular approach to clinical decision support. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
Kober, Catharina; Niessner, Reinhard; Seidel, Michael
2018-02-15
Increasing numbers of legionellosis outbreaks in recent years have shown that Legionella are a growing challenge for public health. Molecular biological detection methods capable of rapidly identifying viable Legionella are important for the control of engineered water systems. The current gold standard, based on culture methods, takes up to 10 days to show positive results. For this reason, a flow-based chemiluminescence (CL) DNA microarray was developed that is able to quantify viable and non-viable Legionella spp. as well as Legionella pneumophila in one hour. An isothermal heterogeneous asymmetric recombinase polymerase amplification (haRPA) was carried out on flow-based CL DNA microarrays. Detection limits of 87 genomic units (GU) µL^-1 and 26 GU µL^-1 for Legionella spp. and Legionella pneumophila, respectively, were achieved. In this work, it was shown for the first time that the combination of a propidium monoazide (PMA) treatment with haRPA, the so-called viability haRPA, is able to identify viable Legionella on DNA microarrays. Different proportions of viable and non-viable Legionella, shown with the example of L. pneumophila, ranging in total concentration from 10^1 to 10^5 GU µL^-1, were analyzed on the microarray analysis platform MCR 3. Recovery values for viable Legionella spp. were found between 81% and 133%. With the combination of these two methods, there is a chance to replace culture-based methods in the future for the monitoring of engineered water systems such as condensation recooling plants. Copyright © 2017 Elsevier B.V. All rights reserved.
Angela A. Davis; Barbara A. Kleiss; Charles G. O'Hara; Jennifer S. Derby
2000-01-01
The Eco-Assessor, a GIS-based decision-support system, has been developed for the lower part of the Yazoo River Basin, Mississippi, to help planners and managers determine the best locations for the restoration of wetlands based on defined ecological and geographic criteria and probability of success. To assess the functional characteristics of the potential...
Evaluating concentration estimation errors in ELISA microarray experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Daly, Don S.; White, Amanda M.; Varnum, Susan M.
Enzyme-linked immunosorbent assay (ELISA) is a standard immunoassay to predict a protein concentration in a sample. Deploying ELISA in a microarray format permits simultaneous prediction of the concentrations of numerous proteins in a small sample. These predictions, however, are uncertain due to processing error and biological variability. Evaluating prediction error is critical to interpreting biological significance and improving the ELISA microarray process. Evaluating prediction error must be automated to realize a reliable high-throughput ELISA microarray system. Methods: In this paper, we present a statistical method based on propagation of error to evaluate prediction errors in the ELISA microarray process. Although propagation of error is central to this method, it is effective only when comparable data are available. Therefore, we briefly discuss the roles of experimental design, data screening, normalization and statistical diagnostics when evaluating ELISA microarray prediction errors. We use an ELISA microarray investigation of breast cancer biomarkers to illustrate the evaluation of prediction errors. The illustration begins with a description of the design and resulting data, followed by a brief discussion of data screening and normalization. In our illustration, we fit a standard curve to the screened and normalized data, review the modeling diagnostics, and apply propagation of error.
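The propagation-of-error step can be sketched for the simplest case of a linear standard curve: if y = a + b·x is fitted and a sample's spot intensity y has variance σ²_y, then the predicted concentration x̂ = (y − a)/b carries variance ≈ σ²_y/b² to first order. The coefficients and variances below are invented; a real ELISA analysis would typically use a four-parameter logistic curve and a fuller error budget, but the delta-method pattern is the same.

```python
# First-order propagation of error through an inverted standard curve.
# Linear curve y = a + b*x for simplicity; all numbers are illustrative.

def predict_concentration(y, a, b, var_y):
    """Invert the standard curve and propagate measurement variance.

    x_hat = (y - a) / b
    var(x_hat) ~= var_y * (dx/dy)^2 = var_y / b^2   (first-order delta method)
    """
    x_hat = (y - a) / b
    var_x = var_y / b ** 2
    return x_hat, var_x

x_hat, var_x = predict_concentration(y=12.0, a=2.0, b=4.0, var_y=0.64)
```

The resulting variance is what turns a point prediction into an interval, which is the quantity the paper argues must be reported for each protein on the array.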
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luo, Y; McShan, D; Schipper, M
2014-06-01
Purpose: To develop a decision support tool to predict a patient's potential overall survival (OS) and radiation-induced toxicity (RIT) based on clinical factors and responses during the course of radiotherapy, and suggest appropriate radiation dose adjustments to improve therapeutic effect. Methods: Important relationships between a patient's basic information and their clinical features before and during the radiation treatment are identified from historical clinical data by using statistical learning and data mining approaches. During each treatment period, a data analysis (DA) module predicts radiotherapy features such as time to local progression (TTLP), time to distant metastases (TTDM), radiation toxicity to different organs, etc., under possible future treatment plans based on patient specifics or responses. An information fusion (IF) module estimates intervals for a patient's OS and the probabilities of RIT from a treatment plan by integrating the outcomes of module DA. A decision making (DM) module calculates "satisfaction" with the predicted radiation outcome based on trade-offs between OS and RIT, and finds the best treatment plan for the next time period via multi-criteria optimization. Results: Using physical and biological data from 130 lung cancer patients as our test bed, we were able to train and implement the 3 modules of our decision support tool. Examples demonstrate how it can help predict a new patient's potential OS and RIT with different radiation dose plans along with how these combinations change with dose, thus presenting a range of satisfaction/utility for use in individualized decision support. Conclusion: Although the decision support tool is currently developed from a small patient sample size, it shows the potential for the improvement of each patient's satisfaction in personalized radiation therapy.
The radiation treatment outcome prediction and decision making model needs to be evaluated with more patients and demonstrated for use in radiation treatments for other cancers. P01-CA59827; R01CA142840.
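The OS-versus-RIT trade-off in the decision-making module can be illustrated with a scalarized utility over candidate dose plans. The plans, predicted values, weighting, and linear score below are all invented for illustration; the paper's actual satisfaction function and multi-criteria optimization are not specified here.

```python
# Pick the dose plan maximizing a weighted survival-vs-toxicity score.
# Plans, predictions, and the weight are invented for illustration.

def satisfaction(os_months, p_toxicity, weight=0.5):
    """Simple linear trade-off: higher OS is good, higher RIT risk is bad."""
    return weight * os_months - (1 - weight) * 100 * p_toxicity

plans = {
    "60 Gy": {"os_months": 18.0, "p_toxicity": 0.10},
    "66 Gy": {"os_months": 20.0, "p_toxicity": 0.18},
    "74 Gy": {"os_months": 21.0, "p_toxicity": 0.35},
}

# With this weighting, the toxicity penalty outweighs the modest OS gain
# from dose escalation, so a lower dose scores best.
best = max(plans, key=lambda p: satisfaction(**plans[p]))
```

Varying `weight` per patient is one way such a tool could present the "range of satisfaction/utility" the abstract describes, since different patients will trade survival against toxicity differently.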
van der Krieke, Lian; Emerencia, Ando C; Boonstra, Nynke; Wunderink, Lex; de Jonge, Peter; Sytema, Sjoerd
2013-10-07
Mental health policy makers encourage the development of electronic decision aids to increase patient participation in medical decision making. Evidence is needed to determine whether these decision aids are helpful in clinical practice and whether they lead to increased patient involvement and better outcomes. This study reports the outcome of a randomized controlled trial and process evaluation of a Web-based intervention to facilitate shared decision making for people with psychotic disorders. The study was carried out in a Dutch mental health institution. Patients were recruited from 2 outpatient teams for patients with psychosis (N=250). Patients in the intervention condition (n=124) were provided an account to access a Web-based information and decision tool aimed to support patients in acquiring an overview of their needs and appropriate treatment options provided by their mental health care organization. Patients were given the opportunity to use the Web-based tool either on their own (at their home computer or at a computer of the service) or with the support of an assistant. Patients in the control group received care as usual (n=126). Half of the patients in the sample were patients experiencing a first episode of psychosis; the other half were patients with a chronic psychosis. Primary outcome was patient-perceived involvement in medical decision making, measured with the Combined Outcome Measure for Risk Communication and Treatment Decision-making Effectiveness (COMRADE). Process evaluation consisted of questionnaire-based surveys, open interviews, and researcher observation. In all, 73 patients completed the follow-up measurement and were included in the final analysis (response rate 29.2%). More than one-third (48/124, 38.7%) of the patients who were provided access to the Web-based decision aid used it, and most used its full functionality. 
No differences were found between the intervention and control conditions on perceived involvement in medical decision making (COMRADE satisfaction with communication: F1,68=0.422, P=.52; COMRADE confidence in decision: F1,67=0.086, P=.77). In addition, results of the process evaluation suggest that the intervention did not optimally fit in with routine practice of the participating teams. The development of electronic decision aids to facilitate shared medical decision making is encouraged and many people with a psychotic disorder can work with them. This holds for both first-episode patients and long-term care patients, although the latter group might need more assistance. However, results of this paper could not support the assumption that the use of electronic decision aids increases patient involvement in medical decision making. This may be because of weak implementation of the study protocol and a low response rate.
IONIO Project: Computer-mediated Decision Support System and Communication in Ocean Science
NASA Astrophysics Data System (ADS)
Oddo, Paolo; Acierno, Arianna; Cuna, Daniela; Federico, Ivan; Galati, Maria Barbara; Awad, Esam; Korres, Gerasimos; Lecci, Rita; Manzella, Giuseppe M. R.; Merico, Walter; Perivoliotis, Leonidas; Pinardi, Nadia; Shchekinova, Elena; Mannarini, Gianandrea; Vamvakaki, Chrysa; Pecci, Leda; Reseghetti, Franco
2013-04-01
A Decision Support System comprises four main steps. The first is the definition of the problem: the issue to be covered and the decisions to be taken. Different causes can provoke different problems, and for each cause or its effects it is necessary to define a list of the information and/or data required to make the best decision. The second step is the determination of the sources from which the information/data needed for decision making can be obtained, and of who holds that information. It must also be possible to evaluate the quality of the sources to see which of them can provide the best information, and to identify the mode and format in which the information is presented. The third step concerns the processing of knowledge, i.e. whether the information/data are fit for purpose. It has to be decided which parts of the information/data need to be used, what additional data or information must be accessed, and how the information can best be presented so that the situation can be understood and decisions taken. Finally, the decision-making process is an interactive and inclusive process involving all concerned parties, whose different views must be taken into consideration. A knowledge-based discussion forum is necessary to reach a consensus. A decision-making process needs to be examined closely, refined, and modified to meet differing needs over time. The report presents the legal framework and knowledge base for a science-based decision support system and a brief exploration of some of the skills that enhance the quality of the decisions taken.
Information support for decision making on dispatching control of water distribution in irrigation
NASA Astrophysics Data System (ADS)
Yurchenko, I. F.
2018-05-01
The research has been carried out on developing a technique to support decision making for on-line control and operational management of water allocation in interfarm irrigation projects, based on analytical patterns of dispatcher control. This technique increases labour productivity and management quality through an improved level of automation and through optimization of decision making that takes into account diagnostics of the issues, classification of solutions, and the information required by the decision makers.
Harris, Claire; Allen, Kelly; Waller, Cara; Dyer, Tim; Brooke, Vanessa; Garrubba, Marie; Melder, Angela; Voutier, Catherine; Gust, Anthony; Farjou, Dina
2017-06-21
This is the seventh in a series of papers reporting Sustainability in Health care by Allocating Resources Effectively (SHARE) in a local healthcare setting. The SHARE Program was a systematic, integrated, evidence-based program for resource allocation within a large Australian health service. It aimed to facilitate proactive use of evidence from research and local data; evidence-based decision-making for resource allocation, including disinvestment; and development, implementation and evaluation of disinvestment projects. From the literature and responses of local stakeholders it was clear that provision of expertise and education, training and support of health service staff would be required to achieve these aims. Four support services were proposed. This paper is a detailed case report of the development, implementation and evaluation of a Data Service, a Capacity Building Service and a Project Support Service. An Evidence Service is reported separately. Literature reviews, surveys, interviews, consultation and workshops were used to capture and process the relevant information. Existing theoretical frameworks were adapted for evaluation and explication of processes and outcomes. Surveys and interviews identified current practice in use of evidence in decision-making, implementation and evaluation; staff needs for evidence-based practice; the nature, type and availability of local health service data; and preferred formats for education and training. The Capacity Building and Project Support Services were successful in achieving short-term objectives, but long-term outcomes were not evaluated due to reduced funding. The Data Service was not implemented at all. Factors influencing the processes and outcomes are discussed. Health service staff need access to education, training, expertise and support to enable evidence-based decision-making and to implement and evaluate the changes arising from those decisions.
Three support services were proposed based on research evidence and local findings. Local factors, some unanticipated and some unavoidable, were the main barriers to successful implementation. All three proposed support services hold promise as facilitators of EBP in the local healthcare setting. The findings from this study will inform further exploration.
Morrison, James J; Hostetter, Jason; Wang, Kenneth; Siegel, Eliot L
2015-02-01
Real-time mining of large research trial datasets enables development of case-based clinical decision support tools. Several applicable research datasets exist, including the National Lung Screening Trial (NLST), a dataset unparalleled in size and scope for studying population-based lung cancer screening. Using these data, a clinical decision support tool was developed that matches patient demographics and lung nodule characteristics to a cohort of similar patients. The NLST dataset was converted into Structured Query Language (SQL) tables hosted on a web server, and a web-based JavaScript application was developed that performs real-time queries. JavaScript was used as both the server-side and client-side language, allowing for rapid development of a robust client interface and server-side data layer. Real-time data mining of user-specified patient cohorts achieved a rapid return of cohort cancer statistics and lung nodule distribution information. This system demonstrates the potential of individualized real-time data mining using large high-quality clinical trial datasets to drive evidence-based clinical decision-making.
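The cohort-matching query pattern this abstract describes can be sketched in a few lines. The sketch below is an illustrative assumption, not the actual NLST schema or the paper's server-hosted SQL/JavaScript stack: it uses Python with an in-memory SQLite table whose column names and values are invented.

```python
import sqlite3

# Hypothetical, simplified stand-in for the NLST SQL tables described above;
# table name, columns, and rows are illustrative assumptions only.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE participants (
    id INTEGER PRIMARY KEY, age INTEGER, pack_years REAL,
    nodule_size_mm REAL, cancer INTEGER)""")
rows = [
    (1, 62, 40.0, 8.5, 0),
    (2, 66, 55.0, 12.0, 1),
    (3, 59, 35.0, 4.0, 0),
    (4, 64, 48.0, 11.0, 1),
]
conn.executemany("INSERT INTO participants VALUES (?, ?, ?, ?, ?)", rows)

def cohort_cancer_rate(conn, min_age, max_age, min_nodule_mm):
    """Return (n, cancer_rate) for the cohort matching the given criteria."""
    cur = conn.execute(
        """SELECT COUNT(*), AVG(cancer) FROM participants
           WHERE age BETWEEN ? AND ? AND nodule_size_mm >= ?""",
        (min_age, max_age, min_nodule_mm))
    n, rate = cur.fetchone()
    return n, rate

# Match a hypothetical patient profile to similar participants.
n, rate = cohort_cancer_rate(conn, 60, 70, 8.0)
```

A real deployment would index the filter columns and expose such a query behind a web endpoint so the client application receives cohort statistics in real time.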
Autonomous system for Web-based microarray image analysis.
Bozinov, Daniel
2003-12-01
Software-based feature extraction from DNA microarray images still requires human intervention at various levels. Manual adjustment of grid and metagrid parameters, precise alignment of superimposed grid templates and gene spots, or simply identification of large-scale artifacts have to be performed beforehand to reliably analyze DNA signals and correctly quantify their expression values. Ideally, a Web-based system with input confined solely to a single microarray image, and a data table as output containing measurements for all gene spots, would directly transform raw image data into abstracted gene expression tables. Sophisticated algorithms with advanced procedures for iterative correction can overcome inherent challenges in image processing. Introduced herein is an integrated software system with a Java-based interface on the client side that allows for decentralized access and enables the scientist to instantly employ the most up-to-date software version at any given time. This software tool extends PixClust, as used in Extractiff, incorporating Java Web Start deployment technology. Ultimately, this setup is destined for high-throughput pipelines in genome-wide medical diagnostics labs or microarray core facilities aimed at providing a fully automated service to their users.
A Quick and Parallel Analytical Method Based on Quantum Dots Labeling for ToRCH-Related Antibodies
NASA Astrophysics Data System (ADS)
Yang, Hao; Guo, Qing; He, Rong; Li, Ding; Zhang, Xueqing; Bao, Chenchen; Hu, Hengyao; Cui, Daxiang
2009-12-01
Quantum dots are a special kind of nanomaterial composed of periodic groups of II-VI, III-V or IV-VI materials. Their high quantum yield, broad absorption with narrow photoluminescence spectra, and high resistance to photobleaching make them a promising labeling substance in biological analysis. Here, we report a quick and parallel analytical method based on quantum dots for ToRCH-related antibodies, including those against Toxoplasma gondii, Rubella virus, Cytomegalovirus and Herpes simplex virus types 1 (HSV1) and 2 (HSV2). First, we fabricated microarrays with the five kinds of ToRCH-related antigens, used CdTe quantum dots to label the secondary antibody, and then analyzed 100 specimens of randomly selected clinical sera from obstetric outpatients. The currently prevalent enzyme-linked immunosorbent assay (ELISA) kits were used as the “gold standard” for comparison. The results show that the quantum dot labeling-based ToRCH microarrays have sensitivity and specificity comparable to ELISA. Moreover, the microarrays hold distinct advantages over the ELISA test format in detection time, cost, operation and signal stability. Validated by the clinical assay, our quantum dot-based ToRCH microarrays have great potential in the detection of ToRCH-related pathogens.
Decision Support System for Disability Assessment and Intervention.
ERIC Educational Resources Information Center
Dowler, Denetta L.; And Others
1991-01-01
Constructed decision support system to aid referral of good candidates for rehabilitation from Social Security Administration to rehabilitation counselors. Three layers of system were gross screening based on policy guidelines, training materials, and interviews with experts; physical and mental functional capacity items derived from policy…
Integrated decision support tools for Puget Sound salmon recovery planning
We developed a set of tools to provide decision support for community-based salmon recovery planning in Salish Sea watersheds. Here we describe how these tools are being integrated and applied in collaboration with Puget Sound tribes and community stakeholders to address restora...
OASIS: A GEOGRAPHICAL DECISION SUPPORT SYSTEM FOR GROUND-WATER CONTAMINANT MODELING
Three new software technologies were applied to develop an efficient and easy to use decision support system for ground-water contaminant modeling. Graphical interfaces create a more intuitive and effective form of communication with the computer compared to text-based interfaces...
Advanced Decision-Support for Coastal Beach Health: Virtual Beach 3.0
Virtual Beach is a free decision-support system designed to help beach managers and researchers construct, evaluate, and operate site-specific statistical models that can predict levels of fecal indicator bacteria (FIB) based on environmental conditions that are more readily mea...
Schnipper, Jeffrey L.; Linder, Jeffrey A.; Palchuk, Matvey B.; Einbinder, Jonathan S.; Li, Qi; Postilnik, Anatoly; Middleton, Blackford
2008-01-01
Clinical decision support systems (CDSS) integrated within Electronic Medical Records (EMR) hold the promise of improving healthcare quality. To date the effectiveness of CDSS has been less than expected, especially concerning the ambulatory management of chronic diseases. This is due, in part, to the fact that clinicians do not use CDSS fully. Barriers to clinicians' use of CDSS have included lack of integration into workflow, software usability issues, and relevance of the content to the patient at hand. At Partners HealthCare, we are developing “Smart Forms” to facilitate documentation-based clinical decision support. Rather than being interruptive in nature, the Smart Form enables writing a multi-problem visit note while capturing coded information and providing sophisticated decision support in the form of tailored recommendations for care. The current version of the Smart Form is designed around two chronic diseases: coronary artery disease and diabetes mellitus. The Smart Form has potential to improve the care of patients with both acute and chronic conditions. PMID:18436911
Huser, Vojtech; Sincan, Murat; Cimino, James J
2014-01-01
Personalized medicine, the ability to tailor diagnostic and treatment decisions for individual patients, is seen as the evolution of modern medicine. We characterize here the informatics resources available today or envisioned in the near future that can support clinical interpretation of genomic test results. We assume a clinical sequencing scenario (germline whole-exome sequencing) in which a clinical specialist, such as an endocrinologist, needs to tailor patient management decisions within his or her specialty (targeted findings) but relies on a genetic counselor to interpret off-target incidental findings. We characterize the genomic input data and list various types of knowledge bases that provide genomic knowledge for generating clinical decision support. We highlight the need for patient-level databases with detailed lifelong phenotype content in addition to genotype data and provide a list of recommendations for personalized medicine knowledge bases and databases. We conclude that no single knowledge base can currently support all aspects of personalized recommendations and that consolidation of several current resources into larger, more dynamic and collaborative knowledge bases may offer a future path forward.
Pilot study of a point-of-use decision support tool for cancer clinical trials eligibility.
Breitfeld, P P; Weisburd, M; Overhage, J M; Sledge, G; Tierney, W M
1999-01-01
Many adults with cancer are not enrolled in clinical trials because caregivers do not have the time to match the patient's clinical findings with varying eligibility criteria associated with multiple trials for which the patient might be eligible. The authors developed a point-of-use portable decision support tool (DS-TRIEL) to automate this matching process. The support tool consists of a hand-held computer with a programmable relational database. A two-level hierarchic decision framework was used for the identification of eligible subjects for two open breast cancer clinical trials. The hand-held computer also provides protocol consent forms and schemas to further help the busy oncologist. This decision support tool and the decision framework on which it is based could be used for multiple trials and different cancer sites.
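As a rough illustration of the two-level hierarchic decision framework described above, the sketch below screens a patient record against invented trial criteria. The trial names, patient fields, and rules are hypothetical assumptions for illustration, not the actual DS-TRIEL rules or data model.

```python
# Hypothetical trial registry: a gross level-1 screen plus detailed
# level-2 criteria per trial, mirroring the two-level framework above.
TRIALS = {
    "BC-01": {
        "level1": lambda p: p["diagnosis"] == "breast cancer",
        "level2": [
            ("age 18-75", lambda p: 18 <= p["age"] <= 75),
            ("no prior chemo", lambda p: not p["prior_chemo"]),
        ],
    },
    "BC-02": {
        "level1": lambda p: p["diagnosis"] == "breast cancer",
        "level2": [
            ("stage II-III", lambda p: p["stage"] in ("II", "III")),
        ],
    },
}

def eligible_trials(patient):
    """Return IDs of trials whose level-1 and level-2 criteria all pass."""
    matches = []
    for trial_id, spec in TRIALS.items():
        if not spec["level1"](patient):
            continue  # fail fast at the gross-screening level
        if all(check(patient) for _, check in spec["level2"]):
            matches.append(trial_id)
    return matches

patient = {"diagnosis": "breast cancer", "age": 54,
           "prior_chemo": True, "stage": "II"}
matched = eligible_trials(patient)
```

The level-1 gross screen avoids evaluating detailed criteria for trials that are clearly inapplicable, which is what makes the hierarchic layout practical on a hand-held device.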
A cDNA microarray gene expression data classifier for clinical diagnostics based on graph theory.
Benso, Alfredo; Di Carlo, Stefano; Politano, Gianfranco
2011-01-01
Despite great advances in discovering cancer molecular profiles, the proper application of microarray technology to routine clinical diagnostics is still a challenge. Current practices in the classification of microarray data show two main limitations: the reliability of the training data sets used to build the classifiers, and the classifiers' performance, especially when the sample to be classified does not belong to any of the available classes. In this case, state-of-the-art algorithms usually produce a high rate of false positives that, in real diagnostic applications, are unacceptable. To address this problem, this paper presents a new cDNA microarray data classification algorithm, based on graph theory, that is able to overcome most of the limitations of known classification methodologies. The classifier works by analyzing gene expression data organized in an innovative data structure based on graphs, where vertices correspond to genes and edges to gene expression relationships. To demonstrate the novelty of the proposed approach, the authors present an experimental performance comparison between the proposed classifier and several state-of-the-art classification algorithms.
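One minimal, hypothetical reading of such a graph-based classifier is sketched below: each class gets a graph whose edges encode pairwise expression relationships holding across all training samples of that class, and a rejection threshold leaves poorly fitting samples unclassified rather than forcing them into a class (the false-positive problem noted above). This is an illustrative toy under invented data, not the authors' actual algorithm.

```python
from itertools import combinations

def class_edges(samples):
    """Edges (i, j): gene i is expressed above gene j in every class sample."""
    n_genes = len(samples[0])
    edges = set()
    for i, j in combinations(range(n_genes), 2):
        if all(s[i] > s[j] for s in samples):
            edges.add((i, j))
        elif all(s[j] > s[i] for s in samples):
            edges.add((j, i))
    return edges

def classify(sample, class_graphs, reject_below=0.8):
    """Pick the class whose relationship graph the sample satisfies best;
    return None (reject) when no class fits well enough."""
    best, best_score = None, 0.0
    for label, edges in class_graphs.items():
        if not edges:
            continue
        score = sum(sample[i] > sample[j] for i, j in edges) / len(edges)
        if score > best_score:
            best, best_score = label, score
    return best if best_score >= reject_below else None

# Toy expression profiles (rows = samples, columns = genes 0..2).
graphs = {
    "tumor":  class_edges([[5, 1, 3], [6, 2, 4]]),
    "normal": class_edges([[1, 5, 3], [2, 6, 4]]),
}
```

The rejection rule is the key design point: a sample consistent with neither graph is reported as "unknown" instead of being assigned the least-bad label.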
Strauss, Christian; Endimiani, Andrea; Perreten, Vincent
2015-01-01
A rapid and simple DNA labeling system has been developed for disposable microarrays and has been validated for the detection of 117 antibiotic resistance genes abundant in Gram-positive bacteria. The DNA was fragmented and amplified using phi-29 polymerase and random primers with linkers. Labeling and further amplification were then performed by classic PCR amplification using biotinylated primers specific for the linkers. The microarray developed by Perreten et al. (Perreten, V., Vorlet-Fawer, L., Slickers, P., Ehricht, R., Kuhnert, P., Frey, J., 2005. Microarray-based detection of 90 antibiotic resistance genes of gram-positive bacteria. J. Clin. Microbiol. 43, 2291-2302.) was improved with additional oligonucleotides. A total of 244 oligonucleotides (26 to 37 nucleotides in length, with similar melting temperatures) were spotted on the microarray, including genes conferring resistance to clinically important antibiotic classes such as β-lactams, macrolides, aminoglycosides, glycopeptides and tetracyclines. Each antibiotic resistance gene is represented by at least 2 oligonucleotides designed from consensus sequences of gene families. The specificity of the oligonucleotides and the quality of the amplification and labeling were verified by analysis of a collection of 65 strains belonging to 24 species. The association between genotype and phenotype was verified for 6 antibiotics using 77 Staphylococcus strains belonging to different species and revealed 95% test specificity and a 93% predictive value of a positive test. The DNA labeling and amplification are independent of the species and of the target genes and could be used for different types of microarrays. This system also has the advantage of detecting several genes within one bacterium at once, as in Staphylococcus aureus strain BM3318, in which up to 15 genes were detected. This new microarray-based detection system offers large potential for applications in clinical diagnostics, basic research, food safety and surveillance programs for antimicrobial resistance. Copyright © 2014 Elsevier B.V. All rights reserved.
An Environment for Guideline-based Decision Support Systems for Outpatients Monitoring.
Zini, Elisa M; Lanzola, Giordano; Bossi, Paolo; Quaglini, Silvana
2017-08-11
We propose an architecture for monitoring outpatients that relies on mobile technologies for acquiring data. The goal is to better control the onset of possible side effects between the scheduled visits at the clinic. We analyze the architectural components required to ensure a high level of abstraction from data. Clinical practice guidelines were formalized with Alium, an authoring tool based on the PROforma language, using SNOMED-CT as a terminology standard. The Alium engine is accessible through a set of APIs that may be leveraged for implementing an application based on standard web technologies to be used by doctors at the clinic. Data sent by patients using mobile devices need to be complemented with those already available in the Electronic Health Record to generate personalized recommendations. Thus a middleware pursuing data abstraction is required. To comply with current standards, we adopted the HL7 Virtual Medical Record for Clinical Decision Support Logical Model, Release 2. The developed architecture for monitoring outpatients includes: (1) a guideline-based Decision Support System accessible through a web application that helps the doctors with prevention, diagnosis and treatment of therapy side effects; (2) an application for mobile devices, which allows patients to regularly send data to the clinic. In order to tailor the monitoring procedures to the specific patient, the Decision Support System also helps physicians with the configuration of the mobile application, suggesting the data to be collected and the associated collection frequency that may change over time, according to the individual patient's conditions. A proof of concept has been developed with a system for monitoring the side effects of chemo-radiotherapy in head and neck cancer patients. Our environment introduces two main innovation elements with respect to similar works available in the literature. 
First, to meet specific patients' needs, the Decision Support System also helps physicians properly configure the mobile application. Second, the Decision Support System is continuously fed by patient-reported outcomes.
Development of a low-cost detection method for miRNA microarray.
Li, Wei; Zhao, Botao; Jin, Youxin; Ruan, Kangcheng
2010-04-01
MicroRNA (miRNA) microarray is a powerful tool to explore the expression profiling of miRNA. The current detection method used in miRNA microarrays is mainly fluorescence based, which usually requires a costly detection system such as a laser confocal scanner costing tens of thousands of dollars. Recently, we developed a low-cost yet sensitive detection method for miRNA microarrays based on an enzyme-linked assay. In this approach, the biotinylated miRNAs were captured by the corresponding oligonucleotide probes immobilized on the microarray slide, and then captured streptavidin-conjugated alkaline phosphatase. A purple-black precipitate formed on each biotinylated miRNA spot through the enzyme-catalyzed reaction. It could easily be detected by a charge-coupled device digital camera mounted on a microscope, lowering the detection cost more than 100-fold compared with the fluorescence method. Our data showed that the signal intensity of a spot correlates well with the biotinylated miRNA concentration; the detection limit for miRNAs is 0.4 fmol or better, and the detection dynamic range spans about 2.5 orders of magnitude, which is comparable to that of the fluorescence method.
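Given the reported dynamic range of roughly 2.5 orders of magnitude, a calibration curve for such an assay might be fit as intensity versus log amount and then inverted to quantify unknown spots. The numbers and function names below are invented for illustration, not the paper's data.

```python
import math

# Hypothetical calibration data for an enzyme-linked readout: spot intensity
# (arbitrary CCD units) vs biotinylated miRNA amount (fmol), invented values.
amounts = [0.4, 1.25, 4.0, 12.5, 40.0, 125.0]
intensities = [10.0, 22.0, 35.0, 47.0, 60.0, 72.0]

def fit_log_linear(x, y):
    """Least-squares fit of y = a*log10(x) + b."""
    lx = [math.log10(v) for v in x]
    n = len(x)
    mx, my = sum(lx) / n, sum(y) / n
    a = (sum((u - mx) * (v - my) for u, v in zip(lx, y))
         / sum((u - mx) ** 2 for u in lx))
    b = my - a * mx
    return a, b

def estimate_amount(intensity, a, b):
    """Invert the calibration: estimate miRNA amount (fmol) from intensity."""
    return 10 ** ((intensity - b) / a)

a, b = fit_log_linear(amounts, intensities)
```

Inverting a log-linear fit this way only makes sense inside the calibrated range; readings below the detection limit or above saturation should be flagged rather than extrapolated.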
Fluorescent labeling of NASBA amplified tmRNA molecules for microarray applications
Scheler, Ott; Glynn, Barry; Parkel, Sven; Palta, Priit; Toome, Kadri; Kaplinski, Lauris; Remm, Maido; Maher, Majella; Kurg, Ants
2009-01-01
Background: Here we present a novel and promising microbial diagnostic method that combines the sensitivity of Nucleic Acid Sequence Based Amplification (NASBA) with the high information content of microarray technology for the detection of bacterial tmRNA molecules. The NASBA protocol was modified to include aminoallyl-UTP (aaUTP) molecules that were incorporated into nascent RNA during the NASBA reaction. Post-amplification labeling with fluorescent dye was carried out subsequently, and tmRNA hybridization signal intensities were measured using microarray technology. Significant optimization of the labeled NASBA protocol was required to maintain the required sensitivity of the reactions. Results: Two different aaUTP salts were evaluated and optimum final concentrations were identified for both. The final 2 mM concentration of the aaUTP Li-salt in the NASBA reaction resulted in the highest microarray signals overall, twice as high as the strongest signals with 1 mM aaUTP Na-salt. Conclusion: We have successfully demonstrated an efficient combination of NASBA amplification technology with microarray-based hybridization detection. The method is applicable to many different areas of microbial diagnostics, including environmental monitoring, biothreat detection, industrial process monitoring and clinical microbiology. PMID:19445684
Shin, Hwa Hui; Seo, Jeong Hyun; Kim, Chang Sup; Hwang, Byeong Hee; Cha, Hyung Joon
2016-05-15
Life-threatening diarrheal cholera is usually caused by water or food contaminated with cholera toxin-producing Vibrio cholerae. For the prevention and surveillance of cholera, it is crucial to rapidly and precisely detect and identify the etiological causes, such as V. cholerae and/or its toxin. In the present work, we propose the use of a hybrid double biomolecular marker (DBM) microarray containing a 16S rRNA-based DNA capture probe to genotypically identify V. cholerae and a GM1 pentasaccharide capture probe to phenotypically detect cholera toxin. We employed a simple sample preparation method to directly obtain genomic DNA and secreted cholera toxin as target materials from bacterial cells. Using the constructed DBM microarray and prepared samples, V. cholerae and cholera toxin were detected successfully, selectively, and simultaneously; the DBM microarray was able to analyze the pathogenicity of the identified V. cholerae regardless of whether the bacterium produces the toxin. Therefore, our proposed DBM microarray is an effective new platform for identifying bacteria and analyzing bacterial pathogenicity simultaneously. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Eckman, Richard S.
2009-01-01
Earth observations are playing an increasingly significant role in informing decision making in the energy sector. In renewable energy applications, space-based observations now routinely augment sparse ground-based observations used as input for renewable energy resource assessment applications. As one of the nine Group on Earth Observations (GEO) societal benefit areas, the enhancement of management and policy decision making in the energy sector is receiving attention in activities conducted by the Committee on Earth Observation Satellites (CEOS). CEOS has become the "space arm" for the implementation of the Global Earth Observation System of Systems (GEOSS) vision. It is directly supporting the space-based, near-term tasks articulated in the GEO three-year work plan. This paper describes a coordinated program of demonstration projects conducted by CEOS member agencies and partners to utilize Earth observations to enhance energy management end-user decision support systems. I discuss the importance of engagement with stakeholders and understanding their decision support needs in successfully increasing the uptake of Earth observation products for societal benefit. Several case studies are presented, demonstrating the importance of providing data sets in formats and units familiar and immediately usable by decision makers. These projects show the utility of Earth observations to enhance renewable energy resource assessment in the developing world, forecast space-weather impacts on the power grid, and improve energy efficiency in the built environment.
Miller, A; Pilcher, D; Mercaldo, N; Leong, T; Scheinkestel, C; Schildcrout, J
2010-08-01
Screen designs in computerized clinical information systems (CIS) have been modeled on their paper predecessors. However, limited understanding about how paper forms support clinical work means that we risk repeating old mistakes and creating new opportunities for error and inefficiency, as illustrated by problems associated with computerized provider order entry systems. This study was designed to elucidate principles underlying a successful ICU paper-based CIS. The research was guided by two exploratory hypotheses: (1) paper-based artefacts (charts, notes, equipment, order forms) are used differently by nurses, doctors and other healthcare professionals in different (formal and informal) conversation contexts and (2) different artefacts support different decision processes that are distributed across role-based conversations. All conversations undertaken at the bedsides of five patients were recorded, with any supporting artefacts, for five days per patient. Data were coded according to conversational role-holders, clinical decision process, conversational context and artefacts. 2133 data points were analyzed using Poisson logistic regression analyses. Results show significant interactions between artefacts used during different professional conversations in different contexts (χ²(df=16)=55.8, p<0.0001). The interaction between artefacts used during different professional conversations for different clinical decision processes was not statistically significant, although all two-way interactions were statistically significant. Paper-based CIS have evolved to support complex interdisciplinary decision processes. The translation of two design principles - support interdisciplinary perspectives and integrate decision processes - from paper to computerized CIS may minimize the risks associated with computerization. 2010 Australian College of Critical Care Nurses Ltd. Published by Elsevier Ltd. All rights reserved.
Zheng, Hua; Rosal, Milagros C; Li, Wenjun; Borg, Amy; Yang, Wenyun; Ayers, David C
2018-01-01
Background: Data-driven surgical decisions will ensure proper use and timing of surgical care. We developed a Web-based patient-centered treatment decision and assessment tool to guide treatment decisions among patients with advanced knee osteoarthritis who are considering total knee replacement surgery. Objective: The aim of this study was to examine user experience and acceptance of the Web-based treatment decision support tool among older adults. Methods: User-centered formative and summative evaluations were conducted for the tool. A sample of 28 patients who were considering total knee replacement participated in the study. Participants’ responses to the user interface design, the clarity of information, as well as usefulness, satisfaction, and acceptance of the tool were collected through qualitative (ie, individual patient interviews) and quantitative (ie, standardized Computer System Usability Questionnaire) methods. Results: Participants were older adults with a mean age of 63 (SD 11) years. Three-quarters of them had no technical questions while using the tool. User interface design recommendations included larger fonts, bigger buttons, fewer colors, simpler navigation without an extra “next page” click, less mouse movement, and clearer illustrations with simple graphs. Color-coded bar charts and outcome-specific graphs with positive framing made the outcome data easiest for them to understand. Questionnaire data revealed high satisfaction with the tool’s usefulness and interface quality, and also showed ease of use of the tool, regardless of age or educational status. Conclusions: We evaluated the usability of a patient-centered decision support tool designed for advanced knee arthritis patients to facilitate their knee osteoarthritis treatment decision making. The lessons learned can inform other decision support tools to improve interface and content design for older patients’ use. PMID:29712620
Grimmett, Chloe; Pickett, Karen; Shepherd, Jonathan; Welch, Karen; Recio-Saucedo, Alejandra; Streit, Elke; Seers, Helen; Armstrong, Anne; Cutress, Ramsey I; Evans, D Gareth; Copson, Ellen; Meiser, Bettina; Eccles, Diana; Foster, Claire
2018-05-01
To identify existing resources, developed and/or evaluated empirically in the published literature, designed to support women with breast cancer making decisions regarding genetic testing for BRCA1/2 mutations. Systematic review of seven electronic databases. Studies were included if they described or evaluated resources that were designed to support women with breast cancer in making a decision to have genetic counselling or testing for familial breast cancer. Outcome and process evaluations, using any type of study design, as well as articles reporting the development of decision aids, were eligible for inclusion. A total of 9 publications, describing 6 resources, were identified. Resources were effective at increasing knowledge or understanding of hereditary breast cancer. Satisfaction with resources was high. There was no evidence that any resource increased distress, worry or decisional conflict. Few resources included active functionalities, for example values-based exercises, to support decision-making. Tailored resources supporting decision-making may be helpful and valued by patients and may increase knowledge of hereditary breast cancer without causing additional distress. Clinicians should provide supportive written information to patients where it is available. However, there is a need for robustly developed decision tools to support decision-making around genetic testing in women with breast cancer. Copyright © 2017 Elsevier B.V. All rights reserved.
Szkola, A; Linares, E M; Worbs, S; Dorner, B G; Dietrich, R; Märtlbauer, E; Niessner, R; Seidel, M
2014-11-21
Simultaneous detection of small and large molecules on microarray immunoassays is a challenge that limits some applications in multiplex analysis. This is the case for biosecurity, where fast, cheap and reliable simultaneous detection of proteotoxins and small toxins is needed. Two highly relevant proteotoxins, ricin (60 kDa) and bacterial toxin staphylococcal enterotoxin B (SEB, 30 kDa) and the small phycotoxin saxitoxin (STX, 0.3 kDa) are potential biological warfare agents and require an analytical tool for simultaneous detection. Proteotoxins are successfully detected by sandwich immunoassays, whereas competitive immunoassays are more suitable for small toxins (<1 kDa). Based on this need, this work provides a novel and efficient solution based on anti-idiotypic antibodies for small molecules to combine both assay principles on one microarray. The biotoxin measurements are performed on a flow-through chemiluminescence microarray platform MCR3 in 18 minutes. The chemiluminescence signal was amplified by using a poly-horseradish peroxidase complex (polyHRP), resulting in low detection limits: 2.9 ± 3.1 μg L(-1) for ricin, 0.1 ± 0.1 μg L(-1) for SEB and 2.3 ± 1.7 μg L(-1) for STX. The developed multiplex system for the three biotoxins is completely novel, relevant in the context of biosecurity and establishes the basis for research on anti-idiotypic antibodies for microarray immunoassays.
SADA: Ecological Risk Based Decision Support System for Selective Remediation
Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...
The statistics of identifying differentially expressed genes in Expresso and TM4: a comparison
Sioson, Allan A; Mane, Shrinivasrao P; Li, Pinghua; Sha, Wei; Heath, Lenwood S; Bohnert, Hans J; Grene, Ruth
2006-01-01
Background Analysis of DNA microarray data takes as input spot intensity measurements from scanner software and returns differential expression of genes between two conditions, together with a statistical significance assessment. This process typically consists of two steps: data normalization and identification of differentially expressed genes through statistical analysis. The Expresso microarray experiment management system implements these steps with a two-stage, log-linear ANOVA mixed model technique, tailored to individual experimental designs. The complement of tools in TM4, on the other hand, is based on a number of preset design choices that limit its flexibility. In the TM4 microarray analysis suite, normalization, filter, and analysis methods form an analysis pipeline. TM4 computes integrated intensity values (IIV) from the average intensities and spot pixel counts returned by the scanner software as input to its normalization steps. By contrast, Expresso can use either IIV data or median intensity values (MIV). Here, we compare Expresso and TM4 analysis of two experiments and assess the results against qRT-PCR data. Results The Expresso analysis using MIV data consistently identifies more genes as differentially expressed, when compared to Expresso analysis with IIV data. The typical TM4 normalization and filtering pipeline corrects systematic intensity-specific bias on a per microarray basis. Subsequent statistical analysis with Expresso or a TM4 t-test can effectively identify differentially expressed genes. The best agreement with qRT-PCR data is obtained through the use of Expresso analysis and MIV data. Conclusion The results of this research are of practical value to biologists who analyze microarray data sets. The TM4 normalization and filtering pipeline corrects microarray-specific systematic bias and complements the normalization stage in Expresso analysis. The results of Expresso using MIV data have the best agreement with qRT-PCR results. 
In one experiment, MIV is a better choice than IIV as input to data normalization and statistical analysis methods, as it yields a greater number of statistically significant differentially expressed genes; TM4 does not support the choice of MIV input data. Overall, the more flexible and extensive statistical models of Expresso achieve more accurate analytical results, when judged by the yardstick of qRT-PCR data, in the context of an experimental design of modest complexity. PMID:16626497
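The two per-spot summary statistics being compared can be sketched as follows. The exact IIV formula used by the scanner software is not stated in the abstract, so treating IIV as mean pixel intensity times pixel count is an illustrative assumption:

```python
import numpy as np

def spot_intensities(pixels):
    """Return (IIV, MIV) summaries for one microarray spot.

    IIV (integrated intensity value) is sketched here as mean pixel
    intensity times pixel count -- an assumption about the scanner's
    definition. MIV (median intensity value) is the per-spot median.
    """
    pixels = np.asarray(pixels, dtype=float)
    iiv = pixels.mean() * pixels.size  # integrated intensity (assumed form)
    miv = float(np.median(pixels))     # median intensity
    return iiv, miv
```

The MIV summary is robust to bright outlier pixels (note how a single pixel of 10 above barely moves the median), which is consistent with the observation that MIV-based analysis agrees better with qRT-PCR.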
Advancements in Risk-Informed Performance-Based Asset Management for Commercial Nuclear Power Plants
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liming, James K.; Ravindra, Mayasandra K.
2006-07-01
Over the past several years, ABSG Consulting Inc. (ABS Consulting) and the South Texas Project Nuclear Operating Company (STPNOC) have developed a decision support process and associated software for risk-informed, performance-based asset management (RIPBAM) of nuclear power plant facilities. RIPBAM applies probabilistic risk assessment (PRA) tools and techniques in the realm of plant physical and financial asset management. The RIPBAM process applies a tiered set of models and supporting performance measures (or metrics) that can ultimately be applied to support decisions affecting the allocation and management of plant resources (e.g., funding, staffing, scheduling, etc.). In general, the ultimate goal of the RIPBAM process is to continually support decision-making to maximize a facility's net present value (NPV) and long-term profitability for its owners. While the initial applications of RIPBAM have been for nuclear power stations, the methodology can easily be adapted to other types of power station or complex facility decision-making support. RIPBAM can also be designed to focus on performance metrics other than NPV and profitability (e.g., mission reliability, operational availability, probability of mission success per dollar invested, etc.). Recent advancements in the RIPBAM process focus on expanding the scope of previous RIPBAM applications to include not only operations, maintenance, and safety issues, but also broader risk perception components affecting plant owner (stockholder), operator, and regulator biases. Conceptually, RIPBAM is a comprehensive risk-informed cash flow model for decision support. It originated as a tool to help manage plant refueling outage scheduling, and was later expanded to include the full spectrum of operations and maintenance decision support.
However, it differs from conventional business modeling tools in that it employs a systems engineering approach with broadly based probabilistic analysis of organizational 'value streams'. The scope of value stream inclusion in the process can be established by the user, but in its broadest applications, RIPBAM can be used to address how risk perceptions of plant owners and regulators are impacted by plant performance. Plant staffs can expand and refine the scope of RIPBAM models via a phased program of activities over time. This paper shows how the multi-metric uncertainty analysis feature of RIPBAM can apply a wide spectrum of decision-influencing factors to support decisions designed to maximize the probability of achieving, maintaining, and improving upon plant goals and objectives. In this paper, the authors show how this approach can be extremely valuable to plant owners and operators in supporting plant value-impacting decision-making processes. (authors)
Computerised decision support in physical activity interventions: A systematic literature review.
Triantafyllidis, Andreas; Filos, Dimitris; Claes, Jomme; Buys, Roselien; Cornelissen, Véronique; Kouidi, Evangelia; Chouvarda, Ioanna; Maglaveras, Nicos
2018-03-01
The benefits of regular physical activity for health and quality of life are unarguable. New information, sensing and communication technologies have the potential to play a critical role in computerised decision support and coaching for physical activity. We provide a literature review of recent research in the development of physical activity interventions employing computerised decision support, their feasibility and effectiveness in healthy and diseased individuals, and map out challenges and future research directions. We searched the bibliographic databases of PubMed and Scopus to identify physical activity interventions with computerised decision support utilised in a real-life context. Studies were synthesized according to the target user group, the technological format (e.g., web-based or mobile-based) and decision-support features of the intervention, the theoretical model for decision support in health behaviour change, the study design, the primary outcome, the number of participants and their engagement with the intervention, as well as the total follow-up duration. From the 24 studies included in the review, the highest percentage (n = 7, 29%) targeted sedentary healthy individuals followed by patients with prediabetes/diabetes (n = 4, 17%) or overweight individuals (n = 4, 17%). Most randomized controlled trials reported significantly positive effects of the interventions, i.e., increase in physical activity (n = 7, 100%) for 7 studies assessing physical activity measures, weight loss (n = 3, 75%) for 4 studies assessing diet, and reductions in glycosylated hemoglobin (n = 2, 66%) for 3 studies assessing glucose concentration. Accelerometers/pedometers were used in almost half of the studies (n = 11, 46%). Most adopted decision support features included personalised goal-setting (n = 16, 67%) and motivational feedback sent to the users (n = 15, 63%).
Less frequently adopted features included integration with electronic health records (n = 3, 13%) and alerts sent to caregivers (n = 4, 17%). Theoretical models of decision support in health behaviour to drive the development of the intervention were not reported in most studies (n = 14, 58%). Interventions employing computerised decision support have the potential to promote physical activity and result in health benefits for both diseased and healthy individuals, and help healthcare providers to monitor patients more closely. Objectively measured activity through sensing devices, integration with clinical systems used by healthcare providers and theoretical frameworks for health behaviour change need to be employed on a larger scale in future studies in order to realise the development of evidence-based computerised systems for physical activity monitoring and coaching. Copyright © 2017 Elsevier B.V. All rights reserved.
Mueller, Martina; Wagner, Carol L; Annibale, David J; Knapp, Rebecca G; Hulsey, Thomas C; Almeida, Jonas S
2006-03-01
Approximately 30% of intubated preterm infants with respiratory distress syndrome (RDS) will fail attempted extubation, requiring reintubation and mechanical ventilation. Although ventilator technology and monitoring of premature infants have improved over time, optimal extubation remains challenging. Furthermore, extubation decisions for premature infants require complex informational processing, techniques implicitly learned through clinical practice. Computer-aided decision-support tools would benefit inexperienced clinicians, especially during peak neonatal intensive care unit (NICU) census. A five-step procedure was developed to identify predictive variables. Clinical expert (CE) thought processes comprised one model. Variables from that model were used to develop two mathematical models for the decision-support tool: an artificial neural network (ANN) and a multivariate logistic regression model (MLR). The ranking of the variables in the three models was compared using the Wilcoxon Signed Rank Test. The best performing model was used in a web-based decision-support tool with a user interface implemented in Hypertext Markup Language (HTML) and the mathematical model employing the ANN. CEs identified 51 potentially predictive variables for extubation decisions for an infant on mechanical ventilation. Comparisons of the three models showed a significant difference between the ANN and the CE (p = 0.0006). Of the original 51 potentially predictive variables, the 13 most predictive variables were used to develop an ANN as a web-based decision-tool. The ANN processes user-provided data and returns the prediction 0-1 score and a novelty index. The user then selects the most appropriate threshold for categorizing the prediction as a success or failure. 
Furthermore, the novelty index, indicating the similarity of the test case to the training case, allows the user to assess the confidence level of the prediction with regard to how much the new data differ from the data originally used for the development of the prediction tool. State-of-the-art, machine-learning methods can be employed for the development of sophisticated tools to aid clinicians' decisions. We identified numerous variables considered relevant for extubation decisions for mechanically ventilated premature infants with RDS. We then developed a web-based decision-support tool for clinicians which can be made widely available and potentially improve patient care worldwide.
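The tool's final step, applying a user-selected threshold to the 0-1 network score and reporting a novelty index, might be outlined as in the sketch below. The trained network itself and the paper's exact novelty definition are not given in the abstract, so the minimum-distance novelty measure here is a hypothetical stand-in:

```python
import numpy as np

def classify_with_novelty(score, threshold, x_new, x_train):
    """Categorize a 0-1 prediction score and compute a novelty index.

    `threshold` is chosen by the user, as in the web tool. The novelty
    index is sketched (an assumption, not the paper's definition) as the
    minimum Euclidean distance from the new case to the training cases:
    larger values mean the case is less similar to the training data,
    so the prediction deserves less confidence.
    """
    decision = "success" if score >= threshold else "failure"
    novelty = float(np.min(np.linalg.norm(x_train - x_new, axis=1)))
    return decision, novelty
```

A higher threshold trades sensitivity for specificity, which is why the tool leaves the choice to the clinician rather than fixing it.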
Improvement in the amine glass platform by bubbling method for a DNA microarray
Jee, Seung Hyun; Kim, Jong Won; Lee, Ji Hyeong; Yoon, Young Soo
2015-01-01
A glass platform with high sensitivity for sexually transmitted diseases microarray is described here. An amino-silane-based self-assembled monolayer was coated on the surface of a glass platform using a novel bubbling method. The optimized surface of the glass platform had highly uniform surface modifications using this method, as well as improved hybridization properties with capture probes in the DNA microarray. On the basis of these results, the improved glass platform serves as a highly reliable and optimal material for the DNA microarray. Moreover, in this study, we demonstrated that our glass platform, manufactured by utilizing the bubbling method, had higher uniformity, shorter processing time, lower background signal, and higher spot signal than the platforms manufactured by the general dipping method. The DNA microarray manufactured with a glass platform prepared using the bubbling method can be used as a clinical diagnostic tool. PMID:26468293
Prediction of regulatory gene pairs using dynamic time warping and gene ontology.
Yang, Andy C; Hsu, Hui-Huang; Lu, Ming-Da; Tseng, Vincent S; Shih, Timothy K
2014-01-01
Selecting informative genes is the most important task for data analysis on microarray gene expression data. In this work, we aim at identifying regulatory gene pairs from microarray gene expression data. However, microarray data often contain multiple missing expression values. Missing value imputation is thus needed before further processing for regulatory gene pairs becomes possible. We develop a novel approach to first impute missing values in microarray time series data by combining k-Nearest Neighbour (KNN), Dynamic Time Warping (DTW) and Gene Ontology (GO). After missing values are imputed, we then perform gene regulation prediction based on our proposed DTW-GO distance measurement of gene pairs. Experimental results show that our approach is more accurate when compared with existing missing value imputation methods on real microarray data sets. Furthermore, our approach can also discover more regulatory gene pairs that are known in the literature than other methods.
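The core DTW distance underlying the proposed DTW-GO measure can be sketched as the standard dynamic program below; the Gene Ontology-based weighting the authors combine with it is omitted here:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two expression time series.

    Standard O(n*m) dynamic program: cost[i][j] is the minimal cumulative
    alignment cost of the first i points of `a` with the first j points
    of `b`, allowing stretches and compressions along the time axis.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local mismatch cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return float(cost[n, m])
```

Because DTW tolerates temporal shifts between profiles, it is better suited than pointwise Euclidean distance for picking the k nearest neighbour series used to impute a missing value.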
Temperature Gradient Effect on Gas Discrimination Power of a Metal-Oxide Thin-Film Sensor Microarray
Sysoev, Victor V.; Kiselev, Ilya; Frietsch, Markus; Goschnick, Joachim
2004-01-01
The paper presents results concerning the effect of spatial inhomogeneous operating temperature on the gas discrimination power of a gas-sensor microarray, with the latter based on a thin SnO2 film employed in the KAMINA electronic nose. Three different temperature distributions over the substrate are discussed: a nearly homogeneous one and two temperature gradients, equal to approx. 3.3 °C/mm and 6.7 °C/mm, applied across the sensor elements (segments) of the array. The gas discrimination power of the microarray is judged by using the Mahalanobis distance in the LDA (Linear Discrimination Analysis) coordinate system between the data clusters obtained by the response of the microarray to four target vapors: ethanol, acetone, propanol and ammonia. It is shown that the application of a temperature gradient increases the gas discrimination power of the microarray by up to 35%.
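The cluster-separation measure used above, the Mahalanobis distance between a point and a data cluster in the LDA coordinate system, can be sketched as follows (a minimal illustration assuming a nonsingular cluster covariance):

```python
import numpy as np

def mahalanobis(x, cluster):
    """Mahalanobis distance from point x to a cluster (rows = samples).

    Distances are measured in units of the cluster's own spread: the
    covariance-weighted distance sqrt((x - mu)^T S^-1 (x - mu)), so a
    larger value between LDA-projected response clusters means better
    gas discrimination.
    """
    mu = cluster.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(cluster, rowvar=False))
    diff = x - mu
    return float(np.sqrt(diff @ cov_inv @ diff))
```

Unlike Euclidean distance, this measure shrinks along directions in which the cluster is already widely scattered, so it reflects separability rather than raw signal magnitude.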
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gentry, T.; Schadt, C.; Zhou, J.
Microarray technology has the unparalleled potential to simultaneously determine the dynamics and/or activities of most, if not all, of the microbial populations in complex environments such as soils and sediments. Researchers have developed several types of arrays that characterize the microbial populations in these samples based on their phylogenetic relatedness or functional genomic content. Several recent studies have used these microarrays to investigate ecological issues; however, most have only analyzed a limited number of samples, with relatively few experiments utilizing the full high-throughput potential of microarray analysis. This is due in part to the unique analytical challenges that these samples present with regard to sensitivity, specificity, quantitation, and data analysis. This review discusses specific applications of microarrays to microbial ecology research along with some of the latest studies addressing the difficulties encountered during analysis of complex microbial communities within environmental samples. With continued development, microarray technology may ultimately achieve its potential for comprehensive, high-throughput characterization of microbial populations in near real-time.
Double stranded nucleic acid biochips
Chernov, Boris; Golova, Julia
2006-05-23
This invention describes a new method of constructing double-stranded DNA (dsDNA) microarrays based on the use of pre-synthesized or natural DNA duplexes without a stem-loop structure. The complementary oligonucleotide chains are bonded together by a novel connector that includes a linker for immobilization on a matrix. A non-enzymatic method for synthesizing double-stranded nucleic acids with this novel connector enables the construction of inexpensive and robust dsDNA/dsRNA microarrays. DNA-DNA and DNA-protein interactions are investigated using the microarrays.
A decision technology system for health care electronic commerce.
Forgionne, G A; Gangopadhyay, A; Klein, J A; Eckhardt, R
1999-08-01
Mounting costs have escalated the pressure on health care providers and payers to improve decision making and control expenses. Transactions to form the needed decision data will routinely flow, often electronically, between the affected parties. Conventional health care information systems facilitate flow, process transactions, and generate useful decision information. Typically, such support is offered through a series of stand-alone systems that lose much useful decision knowledge and wisdom during health care electronic commerce (e-commerce). Integrating the stand-alone functions can enhance the quality and efficiency of the segmented support, create synergistic effects, and augment decision-making performance and value for both providers and payers. This article presents an information system that can provide complete and integrated support for e-commerce-based health care decision making. The article describes health care e-commerce, presents the system, examines the system's potential use and benefits, and draws implications for health care management and practice.
The Waste Reduction Decision Support System (WAR DSS) is a Java-based software product providing comprehensive modeling of potential adverse environmental impacts (PEI) predicted to result from newly designed or redesigned chemical manufacturing processes. The purpose of this so...
Development of prototype decision support systems for real-time freeway traffic routing. Volume I.
DOT National Transportation Integrated Search
1998-01-01
For a traffic management system (TMS) to improve traffic flow, TMS operators must develop effective routing strategies based on the data collected by the system. The purpose of this research was to build prototype decision support systems (DSS) for t...
Take the first heuristic, self-efficacy, and decision-making in sport.
Hepler, Teri J; Feltz, Deborah L
2012-06-01
Can taking the first (TTF) option in decision-making lead to the best decisions in sports contexts? And, is one's decision-making self-efficacy in that context linked to TTF decisions? The purpose of this study was to examine the role of the TTF heuristic and self-efficacy in decision-making on a simulated sports task. Undergraduate and graduate students (N = 72) participated in the study and performed 13 trials in each of two video-based basketball decision tasks. One task required participants to verbally generate options before making a final decision on what to do next, while the other task simply asked participants to make a decision regarding the next move as quickly as possible. Decision-making self-efficacy was assessed using a 10-item questionnaire comprising various aspects of decision-making in basketball. Participants also rated their confidence in the final decision. Results supported many of the tenets of the TTF heuristic, such that people used the heuristic on a majority of the trials (70%), earlier generated options were better than later ones, first options were meaningfully generated, and final options were meaningfully selected. Results did not support differences in dynamic inconsistency or decision confidence based on the number of options. Findings also supported the link between self-efficacy and the TTF heuristic. Participants with higher self-efficacy beliefs used TTF more frequently and generated fewer options than those with low self-efficacy. Thus, not only is TTF an important heuristic when making decisions in dynamic, time-pressure situations, but self-efficacy plays an influential role in TTF.
The Glycan Microarray Story from Construction to Applications.
Hyun, Ji Young; Pai, Jaeyoung; Shin, Injae
2017-04-18
Not only are glycan-mediated binding processes in cells and organisms essential for a wide range of physiological processes, but they are also implicated in various pathological processes. As a result, elucidation of glycan-associated biomolecular interactions and their consequences is of great importance in basic biological research and biomedical applications. In 2002, we and others were the first to utilize glycan microarrays in efforts aimed at the rapid analysis of glycan-associated recognition events. Because they contain a number of glycans immobilized in a dense and orderly manner on a solid surface, glycan microarrays enable multiple parallel analyses of glycan-protein binding events while utilizing only small amounts of glycan samples. Therefore, this microarray technology has become a leading edge tool in studies aimed at elucidating roles played by glycans and glycan binding proteins in biological systems. In this Account, we summarize our efforts on the construction of glycan microarrays and their applications in studies of glycan-associated interactions. Immobilization strategies of functionalized and unmodified glycans on derivatized glass surfaces are described. Although others have developed immobilization techniques, our efforts have focused on improving the efficiencies and operational simplicity of microarray construction. The microarray-based technology has been most extensively used for rapid analysis of the glycan binding properties of proteins. In addition, glycan microarrays have been employed to determine glycan-protein interactions quantitatively, detect pathogens, and rapidly assess substrate specificities of carbohydrate-processing enzymes. More recently, the microarrays have been employed to identify functional glycans that elicit cell surface lectin-mediated cellular responses. 
Owing to these efforts, it is now possible to use glycan microarrays to expand the understanding of roles played by glycans and glycan binding proteins in biological systems.
Assessing an AI knowledge-base for asymptomatic liver diseases.
Babic, A; Mathiesen, U; Hedin, K; Bodemar, G; Wigertz, O
1998-01-01
Discovering not yet seen knowledge from clinical data is of importance in the field of asymptomatic liver diseases. Avoiding liver biopsy, which is used as the ultimate confirmation of diagnosis, by making the decision based on relevant laboratory findings alone would be considered essential support. The system, based on Quinlan's ID3 algorithm, was simple and efficient in extracting the sought knowledge. Basic principles of applying AI systems are therefore described and complemented with a medical evaluation. Some of the diagnostic rules were found to be useful as decision algorithms, i.e., they could be directly applied in clinical work and made part of the knowledge-base of the Liver Guide, an automated decision support system.
Research Practice Partnerships: A Strategy for Promoting Evidence-Based Decision-Making in Education
ERIC Educational Resources Information Center
Wentworth, Laura; Mazzeo, Christopher; Connolly, Faith
2017-01-01
Background: In the United States, an emphasis on evidence-based decision-making in education has received renewed interest with the recent passage of the Every Student Succeeds Act. However, how best, in practice, to support the use of evidence in educational decision-making remains unclear. Research Practice Partnerships (RPPs) are a popular…
SMARTe (Sustainable Management Approaches and Revitalization Tools-electronic) is a web-based decision support tool developed by he Office of Research and Development (ORD) in partnership with the Office of Brownfields and Land Revitaliza...
NCBI GEO: archive for functional genomics data sets--update.
Barrett, Tanya; Wilhite, Stephen E; Ledoux, Pierre; Evangelista, Carlos; Kim, Irene F; Tomashevsky, Maxim; Marshall, Kimberly A; Phillippy, Katherine H; Sherman, Patti M; Holko, Michelle; Yefanov, Andrey; Lee, Hyeseung; Zhang, Naigong; Robertson, Cynthia L; Serova, Nadezhda; Davis, Sean; Soboleva, Alexandra
2013-01-01
The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data.
Gadd, C. S.; Baskaran, P.; Lobach, D. F.
1998-01-01
Extensive utilization of point-of-care decision support systems will be largely dependent on the development of user interaction capabilities that make them effective clinical tools in patient care settings. This research identified critical design features of point-of-care decision support systems that are preferred by physicians, through a multi-method formative evaluation of an evolving prototype of an Internet-based clinical decision support system. Clinicians used four versions of the system--each highlighting a different functionality. Surveys and qualitative evaluation methodologies assessed clinicians' perceptions regarding system usability and usefulness. Our analyses identified features that improve perceived usability, such as telegraphic representations of guideline-related information, facile navigation, and a forgiving, flexible interface. Users also preferred features that enhance usefulness and motivate use, such as an encounter documentation tool and the availability of physician instruction and patient education materials. In addition to identifying design features that are relevant to efforts to develop clinical systems for point-of-care decision support, this study demonstrates the value of combining quantitative and qualitative methods of formative evaluation with an iterative system development strategy to implement new information technology in complex clinical settings. PMID:9929188
Neighborhood graph and learning discriminative distance functions for clinical decision support.
Tsymbal, Alexey; Zhou, Shaohua Kevin; Huber, Martin
2009-01-01
There are two essential reasons for the slow progress in the acceptance of clinical case retrieval and similarity search-based decision support systems: the special complexity of clinical data, which makes it difficult to define a meaningful and effective distance function on them, and the lack of transparency and explanation ability in many existing clinical case retrieval decision support systems. In this paper, we try to address these two problems by introducing a novel technique for visualizing inter-patient similarity, based on a node-link representation with neighborhood graphs, and by considering two techniques for learning discriminative distance functions that help to combine the power of strong "black box" learners with the transparency of case retrieval and nearest neighbor classification.
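The node-link visualization rests on a neighborhood graph over patients. A minimal sketch of building such a graph's edge list is shown below, where `dist` can be any learned discriminative distance function (Euclidean is only the default placeholder here):

```python
import numpy as np

def knn_graph(X, k, dist=None):
    """Edge list of an undirected k-nearest-neighbour graph over the
    rows of X (one patient feature vector per row).

    `dist` may be swapped for a learned discriminative distance; the
    Euclidean default is an illustrative assumption. Self-distances are
    masked with infinity so a node never neighbours itself.
    """
    if dist is None:
        dist = lambda a, b: float(np.linalg.norm(a - b))
    n = len(X)
    edges = set()
    for i in range(n):
        order = sorted(range(n),
                       key=lambda j: dist(X[i], X[j]) if j != i else float("inf"))
        for j in order[:k]:
            edges.add((min(i, j), max(i, j)))  # store each edge once
    return sorted(edges)
```

Drawing this graph with nodes colored by diagnosis makes inter-patient similarity, and hence the behaviour of the learned distance, directly inspectable, which addresses the transparency problem the paper raises.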
Abe, James; Lobo, Jennifer M; Trifiletti, Daniel M; Showalter, Timothy N
2017-08-24
Despite the emergence of genomics-based risk prediction tools in oncology, there is not yet an established framework for communication of test results to cancer patients to support shared decision-making. We report findings from a stakeholder engagement program that aimed to develop a framework for using Markov models with individualized model inputs, including genomics-based estimates of cancer recurrence probability, to generate personalized decision aids for prostate cancer patients faced with radiation therapy treatment decisions after prostatectomy. We engaged a total of 22 stakeholders, including: prostate cancer patients, urological surgeons, radiation oncologists, genomic testing industry representatives, and biomedical informatics faculty. Slides were presented at each meeting to provide background information regarding the analytical framework. Participants were invited to provide feedback during the meeting, including revising the overall project aims. Stakeholder meeting content was reviewed and summarized by stakeholder group and by theme. The majority of stakeholder suggestions focused on aspects of decision aid design and formatting. Stakeholders were enthusiastic about the potential value of using decision analysis modeling with personalized model inputs for cancer recurrence risk, as well as competing risks from age and comorbidities, to generate a patient-centered tool to assist decision-making. Stakeholders did not view privacy considerations as a major barrier to the proposed decision aid program. A common theme was that decision aids should be portable across multiple platforms (electronic and paper), should allow for interaction by the user to adjust model inputs iteratively, and should be available to patients both before and during consult appointments. Emphasis was placed on the challenge of explaining the model's composite result of quality-adjusted life years.
A range of stakeholders provided valuable insights regarding the design of a personalized decision aid program, based upon Markov modeling with individualized model inputs, to provide a patient-centered framework to support genomics-based treatment decisions for cancer patients. The guidance provided by our stakeholders may be broadly applicable to the communication of genomic test results to patients in a patient-centered fashion that supports effective shared decision-making and reflects a spectrum of personal factors such as age, medical comorbidities, and individual priorities and values.
The design of patient decision support interventions: addressing the theory-practice gap.
Elwyn, Glyn; Stiel, Mareike; Durand, Marie-Anne; Boivin, Jacky
2011-08-01
Although an increasing number of decision support interventions for patients (including decision aids) are produced, few make explicit use of theory. We argue the importance of using theory to guide design. The aim of this work was to address this theory-practice gap and to examine how a range of selected decision-making theories could inform the design and evaluation of decision support interventions. We reviewed the decision-making literature and selected relevant theories. We assessed their key principles, theoretical pathways and predictions in order to determine how they could inform the design of two core components of decision support interventions, namely, information and deliberation components and to specify theory-based outcome measures. Eight theories were selected: (1) the expected utility theory; (2) the conflict model of decision making; (3) prospect theory; (4) fuzzy-trace theory; (5) the differentiation and consolidation theory; (6) the ecological rationality theory; (7) the rational-emotional model of decision avoidance; and finally, (8) the Attend, React, Explain, Adapt model of affective forecasting. Some theories have strong relevance to the information design (e.g. prospect theory); some are more relevant to deliberation processes (conflict theory, differentiation theory and ecological validity). None of the theories in isolation was sufficient to inform the design of all the necessary components of decision support interventions. It was also clear that most work in theory-building has focused on explaining or describing how humans think rather than on how tools could be designed to help humans make good decisions. It is not surprising therefore that a large theory-practice gap exists as we consider decision support for patients. There was no relevant theory that integrated all the necessary contributions to the task of making good decisions in collaborative interactions. 
Initiatives such as the International Patient Decision Aids Standards Collaboration influence standards for the design of decision support interventions. However, this analysis points to the need to undertake more work in providing theoretical foundations for these interventions. © 2010 Blackwell Publishing Ltd.
SynopSIS: integrating physician sign-out with the electronic medical record.
Sarkar, Urmimala; Carter, Jonathan T; Omachi, Theodore A; Vidyarthi, Arpana R; Cucina, Russell; Bokser, Seth; van Eaton, Erik; Blum, Michael
2007-09-01
Safe delivery of care depends on effective communication among all health care providers, especially during transfers of care. The traditional medical chart does not adequately support such communication. We designed a patient-tracking tool that enhances provider communication and supports clinical decision making. To develop a problem-based patient-tracking tool, called Sign-out, Information Retrieval, and Summary (SynopSIS), in order to support patient tracking, transfers of care (ie, sign-outs), and daily rounds. Tertiary-care, university-based teaching hospital. SynopSIS compiles and organizes information from the electronic medical record to support hospital discharge and disposition decisions, daily provider decisions, and overnight or cross-coverage decisions. It reflects the provider's patient-care and daily work-flow needs. We plan to use Web-based surveys, audits of daily use, and interdisciplinary focus groups to evaluate SynopSIS's impact on communication between providers, quality of sign-out, patient continuity of care, and rounding efficiency. We expect SynopSIS to improve care by facilitating communication between care teams, standardizing sign-out, and automating daily review of clinical and laboratory trends. SynopSIS redesigns the clinical chart to better serve provider and patient needs. (c) 2007 Society of Hospital Medicine.
A Model-Based Joint Identification of Differentially Expressed Genes and Phenotype-Associated Genes
Seo, Minseok; Shin, Su-kyung; Kwon, Eun-Young; Kim, Sung-Eun; Bae, Yun-Jung; Lee, Seungyeoun; Sung, Mi-Kyung; Choi, Myung-Sook; Park, Taesung
2016-01-01
Over the last decade, many analytical methods and tools have been developed for microarray data. The detection of differentially expressed genes (DEGs) among different treatment groups is often a primary purpose of microarray data analysis. In addition, association studies investigating the relationship between genes and a phenotype of interest such as survival time are also popular in microarray data analysis. Phenotype association analysis provides a list of phenotype-associated genes (PAGs). However, it is sometimes necessary to identify genes that are both DEGs and PAGs. We consider the joint identification of DEGs and PAGs in microarray data analyses. The first approach we used was a naïve approach that detects DEGs and PAGs separately and then identifies the genes in an intersection of the list of PAGs and DEGs. The second approach we considered was a hierarchical approach that detects DEGs first and then chooses PAGs from among the DEGs or vice versa. In this study, we propose a new model-based approach for the joint identification of DEGs and PAGs. Unlike the previous two-step approaches, the proposed method identifies genes simultaneously that are DEGs and PAGs. This method uses standard regression models but adopts different null hypothesis from ordinary regression models, which allows us to perform joint identification in one-step. The proposed model-based methods were evaluated using experimental data and simulation studies. The proposed methods were used to analyze a microarray experiment in which the main interest lies in detecting genes that are both DEGs and PAGs, where DEGs are identified between two diet groups and PAGs are associated with four phenotypes reflecting the expression of leptin, adiponectin, insulin-like growth factor 1, and insulin. Model-based approaches provided a larger number of genes, which are both DEGs and PAGs, than other methods. Simulation studies showed that they have more power than other methods. 
Through analysis of data from experimental microarrays and simulation studies, the proposed model-based approach was shown to provide a more powerful result than the naïve approach and the hierarchical approach. Since our approach is model-based, it is very flexible and can easily handle different types of covariates. PMID:26964035
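One simple way to formalize a one-step joint test of "both DEG and PAG" is an intersection-union test: the joint null is "not a DEG, or not a PAG", so a gene is called only when both component p-values are small, and p_joint = max(p_DEG, p_PAG) is a valid level-alpha test. This is a hedged illustration of the one-step idea, not necessarily the paper's exact model-based construction; all gene names and p-values are invented.

```python
# Intersection-union sketch of joint DEG/PAG identification: unlike the naive
# two-step intersection, the decision is made in a single test with
# p_joint = max(p_DEG, p_PAG). Inputs below are hypothetical p-values.

def joint_calls(p_deg, p_pag, alpha=0.05):
    """p_deg, p_pag: dict gene -> p-value from the DEG and PAG regressions.
    Returns the sorted list of genes whose joint p-value is below alpha."""
    return sorted(g for g in p_deg
                  if max(p_deg[g], p_pag.get(g, 1.0)) < alpha)

p_deg = {"geneA": 0.01, "geneB": 0.20, "geneC": 0.03}
p_pag = {"geneA": 0.04, "geneB": 0.01, "geneC": 0.20}
hits = joint_calls(p_deg, p_pag)
```

Here only `geneA` is significant on both axes; genes significant on one axis alone are correctly excluded by the joint null.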
Fuzzy Based Decision Support System for Condition Assessment and Rating of Bridges
NASA Astrophysics Data System (ADS)
Srinivas, Voggu; Sasmal, Saptarshi; Karusala, Ramanjaneyulu
2016-09-01
In this work, a knowledge-based decision support system has been developed to efficiently handle issues such as distress diagnosis, assessment of damages, and condition rating of existing bridges, towards developing an exclusive and robust Bridge Management System (BMS) for sustainable bridges. The Knowledge Based Expert System (KBES) diagnoses the distresses and finds the cause of distress in the bridge by processing data that are heuristic, combined with site inspection results, laboratory test results, etc. The coupling of symbolic and numeric types of data has been successfully implemented in the expert system to strengthen its decision-making process. Finally, the condition rating of the bridge is carried out using the assessment results obtained from the KBES and the information received from the bridge inspector. A systematic procedure has been developed using fuzzy mathematics for condition rating of bridges by combining the fuzzy weighted average and resolution identity technique. The proposed methodologies and the decision support system will facilitate the development of a robust and exclusive BMS for a network of bridges across the country and allow bridge engineers and decision makers to carry out maintenance of bridges in a rational and systematic way.
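The fuzzy-weighted-average step can be sketched with triangular fuzzy numbers (a, b, c): component condition ratings are averaged vertex-wise under component weights and the result defuzzified by its centroid. The component names, rating scale, and weights below are illustrative assumptions, and the vertex-wise average is a common approximation, not necessarily the paper's exact resolution-identity computation.

```python
# Hedged sketch: fuzzy weighted average of bridge-component condition ratings
# using triangular fuzzy numbers (a, b, c). Components, scale, and weights
# are hypothetical.

def fuzzy_weighted_average(ratings, weights):
    """Vertex-wise weighted average of triangular fuzzy numbers -- a common
    approximation to the exact fuzzy weighted average."""
    total_w = sum(weights)
    return tuple(sum(w * r[i] for r, w in zip(ratings, weights)) / total_w
                 for i in range(3))

def defuzzify(tfn):
    """Centroid of a triangular fuzzy number."""
    return sum(tfn) / 3.0

# Illustrative ratings (0-9 scale) for deck, girders, bearings.
ratings = [(5, 6, 7), (3, 4, 5), (6, 7, 8)]
weights = [0.5, 0.3, 0.2]
overall = fuzzy_weighted_average(ratings, weights)
score = defuzzify(overall)
```

The crisp `score` is the kind of single condition rating a bridge engineer would compare against maintenance thresholds.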
Design and usability of heuristic-based deliberation tools for women facing amniocentesis.
Durand, Marie-Anne; Wegwarth, Odette; Boivin, Jacky; Elwyn, Glyn
2012-03-01
Evidence suggests that in decision contexts characterized by uncertainty and time constraints (e.g. health-care decisions), fast and frugal decision-making strategies (heuristics) may perform better than complex rules of reasoning. To examine whether it is possible to design deliberation components in decision support interventions using simple models (fast and frugal heuristics). The 'Take The Best' heuristic (i.e. selection of a 'most important reason') and 'The Tallying' integration algorithm (i.e. unitary weighing of pros and cons) were used to develop two deliberation components embedded in a Web-based decision support intervention for women facing amniocentesis testing. Ten researchers (recruited from 15), nine health-care providers (recruited from 28) and ten pregnant women (recruited from 14) who had recently been offered amniocentesis testing appraised evolving versions of 'your most important reason' (Take The Best) and 'weighing it up' (Tallying). Most researchers found the tools useful in facilitating decision making although emphasized the need for simple instructions and clear layouts. Health-care providers however expressed concerns regarding the usability and clarity of the tools. By contrast, 7 out of 10 pregnant women found the tools useful in weighing up the pros and cons of each option, helpful in structuring and clarifying their thoughts and visualizing their decision efforts. Several pregnant women felt that 'weighing it up' and 'your most important reason' were not appropriate when facing such a difficult and emotional decision. Theoretical approaches based on fast and frugal heuristics can be used to develop deliberation tools that provide helpful support to patients facing real-world decisions about amniocentesis. © 2011 Blackwell Publishing Ltd.
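The two heuristics behind these deliberation tools can be sketched directly: Tallying gives every pro and con unit weight, while Take The Best decides on the single most important reason alone. The reasons and importance ranks below are hypothetical examples, not items from the actual amniocentesis tools.

```python
# Sketch of the two fast-and-frugal heuristics: Tallying (unit-weighted pros
# and cons) and Take The Best (decide on the most important reason only).
# Reason labels and ranks are illustrative.

def tallying(reasons):
    """reasons: list of (label, +1 pro / -1 con). A positive tally favours
    taking the option; negative favours declining it."""
    return sum(sign for _, sign in reasons)

def take_the_best(reasons, importance):
    """Decide on the highest-ranked reason alone; importance maps
    label -> rank, with lower rank meaning more important."""
    best = min(reasons, key=lambda r: importance[r[0]])
    return best[1]

reasons = [("risk of miscarriage", -1),
           ("diagnostic certainty", +1),
           ("reduced anxiety", +1)]
importance = {"risk of miscarriage": 1,
              "diagnostic certainty": 2,
              "reduced anxiety": 3}
tally = tallying(reasons)                  # pros minus cons
best = take_the_best(reasons, importance)  # sign of the top-ranked reason
```

Note how the two strategies can disagree: the tally favours testing, while Take The Best follows the single most important (negative) reason.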
Ling, Zhi-Qiang; Wang, Yi; Mukaisho, Kenichi; Hattori, Takanori; Tatsuta, Takeshi; Ge, Ming-Hua; Jin, Li; Mao, Wei-Min; Sugihara, Hiroyuki
2010-06-01
Tests of differentially expressed genes (DEGs) from microarray experiments are based on the null hypothesis that genes that are irrelevant to the phenotype/stimulus are expressed equally in the target and control samples. However, this strict hypothesis is not always true, as there can be several transcriptomic background differences between target and control samples, including different cell/tissue types, different cell cycle stages and different biological donors. These differences lead to increased false positives, which have little biological/medical significance. In this article, we propose a statistical framework to identify DEGs between target and control samples from expression microarray data allowing transcriptomic background differences between these samples by introducing a modified null hypothesis that the gene expression background difference is normally distributed. We use an iterative procedure to perform robust estimation of the null hypothesis and identify DEGs as outliers. We evaluated our method using our own triplicate microarray experiment, followed by validations with reverse transcription-polymerase chain reaction (RT-PCR) and on the MicroArray Quality Control dataset. The evaluations suggest that our technique (i) results in less false positive and false negative results, as measured by the degree of agreement with RT-PCR of the same samples, (ii) can be applied to different microarray platforms and results in better reproducibility as measured by the degree of DEG identification concordance both intra- and inter-platforms and (iii) can be applied efficiently with only a few microarray replicates. Based on these evaluations, we propose that this method not only identifies more reliable and biologically/medically significant DEG, but also reduces the power-cost tradeoff problem in the microarray field. Source code and binaries freely available for download at http://comonca.org.cn/fdca/resources/softwares/deg.zip.
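The iterative procedure described above can be sketched as follows: assume the background (non-differential) log-ratios are normally distributed, estimate the null mean and spread from the currently unflagged genes, flag genes beyond k standard deviations as DEG outliers, and repeat until the flagged set stabilizes. This is a minimal illustration of the idea; the threshold k and the data are assumptions, not the paper's exact robust estimator.

```python
# Hedged sketch of iterative robust null estimation with DEGs as outliers.
import statistics

def iterative_deg_outliers(log_ratios, k=2.0, max_iter=20):
    """Return indices of genes flagged as outliers relative to the
    iteratively re-estimated normal background of log-ratios."""
    flagged = set()
    for _ in range(max_iter):
        background = [x for i, x in enumerate(log_ratios) if i not in flagged]
        mu = statistics.mean(background)
        sd = statistics.stdev(background)
        new_flagged = {i for i, x in enumerate(log_ratios)
                       if abs(x - mu) > k * sd}
        if new_flagged == flagged:   # flagged set stabilized
            break
        flagged = new_flagged
    return sorted(flagged)

# Most genes share a small positive background shift; two are true DEGs.
log_ratios = [0.1, 0.2, 0.15, 0.25, 0.18, 0.22, 0.2, 0.19, 4.0, -3.0]
degs = iterative_deg_outliers(log_ratios)
```

The first pass misses the down-regulated gene because the outliers inflate the estimated spread; re-estimating after excluding flagged genes recovers it, which is exactly why the iteration matters.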
Carlson, Ruth I; Cattet, Marc R L; Sarauer, Bryan L; Nielsen, Scott E; Boulanger, John; Stenhouse, Gordon B; Janz, David M
2016-01-01
A novel antibody-based protein microarray was developed that simultaneously determines expression of 31 stress-associated proteins in skin samples collected from free-ranging grizzly bears (Ursus arctos) in Alberta, Canada. The microarray determines proteins belonging to four broad functional categories associated with stress physiology: hypothalamic-pituitary-adrenal axis proteins, apoptosis/cell cycle proteins, cellular stress/proteotoxicity proteins and oxidative stress/inflammation proteins. Small skin samples (50-100 mg) were collected from captured bears using biopsy punches. Proteins were isolated and labelled with fluorescent dyes, with labelled protein homogenates loaded onto microarrays to hybridize with antibodies. Relative protein expression was determined by comparison with a pooled standard skin sample. The assay was sensitive, requiring 80 µg of protein per sample to be run in triplicate on the microarray. Intra-array and inter-array coefficients of variation for individual proteins were generally <10 and <15%, respectively. With one exception, there were no significant differences in protein expression among skin samples collected from the neck, forelimb, hindlimb and ear in a subsample of n = 4 bears. This suggests that remotely delivered biopsy darts could be used in future sampling. Using generalized linear mixed models, certain proteins within each functional category demonstrated altered expression with respect to differences in year, season, geographical sampling location within Alberta and bear biological parameters, suggesting that these general variables may influence expression of specific proteins in the microarray. Our goal is to apply the protein microarray as a conservation physiology tool that can detect, evaluate and monitor physiological stress in grizzly bears and other species at risk over time in response to environmental change.
An ensemble of SVM classifiers based on gene pairs.
Tong, Muchenxuan; Liu, Kun-Hong; Xu, Chungui; Ju, Wenbin
2013-07-01
In this paper, a genetic algorithm (GA) based ensemble support vector machine (SVM) classifier built on gene pairs (GA-ESP) is proposed. The SVMs (base classifiers of the ensemble system) are trained on different informative gene pairs. These gene pairs are selected by the top scoring pair (TSP) criterion. Each of these pairs projects the original microarray expression onto a 2-D space. Extensive permutation of gene pairs may reveal more useful information and potentially lead to an ensemble classifier with satisfactory accuracy and interpretability. GA is further applied to select an optimized combination of base classifiers. The effectiveness of the GA-ESP classifier is evaluated on both binary-class and multi-class datasets. Copyright © 2013 Elsevier Ltd. All rights reserved.
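The top scoring pair (TSP) criterion that selects the gene pairs can be sketched concisely: a pair (i, j) scores highly when the relative ordering "gene i expressed below gene j" occurs with very different frequency in the two classes. The toy expression matrix below is invented for illustration.

```python
# Sketch of the top-scoring-pair (TSP) criterion used to pick informative
# gene pairs for the base classifiers. Data are hypothetical.
from itertools import combinations

def tsp_score(expr, labels, i, j):
    """expr: samples x genes matrix (list of lists); labels: 0/1 per sample.
    Score = |P(gene_i < gene_j | class 0) - P(gene_i < gene_j | class 1)|."""
    def frac(cls):
        rows = [s for s, y in zip(expr, labels) if y == cls]
        return sum(1 for s in rows if s[i] < s[j]) / len(rows)
    return abs(frac(0) - frac(1))

def top_pairs(expr, labels, n_pairs=1):
    """Rank all gene pairs by TSP score, highest first."""
    n_genes = len(expr[0])
    scored = sorted(combinations(range(n_genes), 2),
                    key=lambda p: tsp_score(expr, labels, *p), reverse=True)
    return scored[:n_pairs]

# Gene 0 is below gene 1 in class 0 and above it in class 1; gene 2 is noise.
expr = [[1, 2, 5], [1, 3, 0], [4, 1, 5], [5, 2, 0]]
labels = [0, 0, 1, 1]
pairs = top_pairs(expr, labels)
```

Each selected pair projects the expression data onto a 2-D space on which one base SVM of the ensemble would then be trained.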
Dairy cow culling strategies: making economical culling decisions.
Lehenbauer, T W; Oltjen, J W
1998-01-01
The purpose of this report was to examine important economic elements of culling decisions, to review progress in development of culling decision support systems, and to discern some of the potentially rewarding areas for future research on culling models. Culling decisions have an important influence on the economic performance of the dairy but are often made in a nonprogrammed fashion and based partly on the intuition of the decision maker. The computer technology that is available for dairy herd management has made feasible the use of economic models to support culling decisions. Financial components--including profit, cash flow, and risk--are major economic factors affecting culling decisions. Culling strategies are further influenced by short-term fluctuations in cow numbers as well as by planned herd expansion. Changes in herd size affect the opportunity cost for postponed replacement and may alter the relevance of optimization strategies that assume a fixed herd size. Improvements in model components related to biological factors affecting future cow performance, including milk production, reproductive status, and mastitis, appear to offer the greatest economic potential for enhancing culling decision support systems. The ultimate value of any culling decision support system for developing economic culling strategies will be determined by its results under field conditions.
7 CFR 900.64 - The Judge's decision.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false The Judge's decision. 900.64 Section 900.64... Judge's decision. (a) Corrections to and certification of transcript. (1) At such time as the judge may... order, based solely upon the evidence of record, and briefs in support thereof. (c) Judge's Decision...
75 FR 6689 - Sustainable Communities Planning Grant Program Advance Notice and Request for Comment
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-10
... greater and more broad-based support of community development and investment decisions. However, these... and to expand opportunities for stakeholders to engage in decision-making, HUD is seeking comments on... its partners to better understand how this Program can support cooperative regional planning efforts...
Evaluation of satellite-based, modeled-derived daily solar radiation data for the continental U.S.
USDA-ARS?s Scientific Manuscript database
Many applications of simulation models and related decision support tools for agriculture and natural resource management require daily meteorological data as inputs. Availability and quality of such data, however, often constrain research and decision support activities that require use of these to...
Using a Group Decision Support System for Creativity.
ERIC Educational Resources Information Center
Aiken, Milam; Riggs, Mary
1993-01-01
A computer-based group decision support system (GDSS) to increase collaborative group productivity and creativity is explained. Various roles for the computer are identified, and implementation of GDSS systems at the University of Mississippi and International Business Machines are described. The GDSS is seen as fostering productivity through…
This report summarizes the methodologies and findings of three regional assessments and considers the role of decision support in assisting adaptation to climate change. Background. In conjunction with the US Global Change Research Program’s (USGCRP’s) National Assessment of ...
Rusz, Orsolya; Papp, Orsolya; Vízkeleti, Laura; Molnár, Béla Ákos; Bende, Kristóf Csaba; Lotz, Gábor; Ács, Balázs; Kahán, Zsuzsanna; Székely, Tamás; Báthori, Ágnes; Szundi, Csilla; Kulka, Janina; Szállási, Zoltán; Tőkés, Anna-Mária
2018-05-16
To determine the associations between lysosomal-associated transmembrane protein 4b (LAPTM4B) gene copy number and response to different chemotherapy regimens in hormone receptor negative (HR-) primary breast carcinomas. Two cohorts were analyzed: (1) 69 core biopsies from HR- breast carcinomas treated with neoadjuvant chemotherapy (anthracycline based in 72.5% of patients and non-anthracycline based in 27.5% of patients). (2) Tissue microarray (TMA) of 74 HR- breast carcinomas treated with adjuvant therapy (77.0% of the patients received anthracycline, 17.6% of the patients non-anthracycline-based therapy, and in 5.4% of the cases, no treatment data are available). Interphase FISH technique was applied on pretreatment core biopsies (cohort I) and on TMAs (cohort II) using custom-made dual-labelled FISH probes (LAPTM4B/CEN8q FISH probe, Abnova Corp.). In the neoadjuvant cohort, in the anthracycline-treated group, we observed a significant difference (p = 0.029) in average LAPTM4B copy number between the non-responder and pathological complete responder groups (4.1 ± 1.1 vs. 2.6 ± 0.1). In the adjuvant setting, the anthracycline-treated group of metastatic breast carcinomas was characterized by a higher LAPTM4B copy number compared with the non-metastatic ones (p = 0.046). In contrast, in the non-anthracycline-treated group of patients, we did not find any LAPTM4B gene copy number differences between responder vs. non-responder groups or between metastatic vs. non-metastatic groups. Our results confirm the possible role of the LAPTM4B gene in anthracycline resistance in HR- breast cancer. Analyzing LAPTM4B copy number pattern may support future treatment decisions.
Linan, Margaret K; Sottara, Davide; Freimuth, Robert R
2015-01-01
Pharmacogenomics (PGx) guidelines contain drug-gene relationships, therapeutic and clinical recommendations from which clinical decision support (CDS) rules can be extracted, rendered and then delivered through clinical decision support systems (CDSS) to provide clinicians with just-in-time information at the point of care. Several tools exist that can be used to generate CDS rules that are based on computer interpretable guidelines (CIG), but none have been previously applied to the PGx domain. We utilized the Unified Modeling Language (UML), the Health Level 7 virtual medical record (HL7 vMR) model, and standard terminologies to represent the semantics and decision logic derived from a PGx guideline, which were then mapped to the Health eDecisions (HeD) schema. The modeling and extraction processes developed here demonstrate how structured knowledge representations can be used to support the creation of shareable CDS rules from PGx guidelines.
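The extraction idea, reducing a guideline's decision logic to structured condition/action data that a CDSS can evaluate, can be sketched very loosely as rules over drug-gene-phenotype triples. Everything below is an illustrative stand-in: the drug, gene, phenotype, and recommendation are hypothetical and the data shape is not the HeD schema or HL7 vMR model.

```python
# Loose sketch of PGx decision logic as condition/action rules.
# Drug, gene, phenotype, and recommendation strings are hypothetical.

RULES = [
    {"drug": "drugX", "gene": "GENE1",
     "if_phenotype": "poor metabolizer",
     "recommend": "consider alternative agent or reduced dose"},
]

def match_rules(drug, gene, phenotype, rules=RULES):
    """Return the recommendations of every rule whose conditions match."""
    return [r["recommend"] for r in rules
            if r["drug"] == drug and r["gene"] == gene
            and r["if_phenotype"] == phenotype]

recs = match_rules("drugX", "GENE1", "poor metabolizer")
```

Keeping the rules as data rather than code is what makes them shareable between CDS systems, which is the point of mapping them to a standard schema.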
DNA Microarray Wet Lab Simulation Brings Genomics into the High School Curriculum
Zanta, Carolyn A.; Heyer, Laurie J.; Kittinger, Ben; Gabric, Kathleen M.; Adler, Leslie
2006-01-01
We have developed a wet lab DNA microarray simulation as part of a complete DNA microarray module for high school students. The wet lab simulation has been field tested with high school students in Illinois and Maryland as well as in workshops with high school teachers from across the nation. Instead of using DNA, our simulation is based on pH indicators, which offer many ideal teaching characteristics. The simulation requires no specialized equipment, is very inexpensive, is very reliable, and takes very little preparation time. Student and teacher assessment data indicate the simulation is popular with both groups, and students show significant learning gains. We include many resources with this publication, including all prelab introductory materials (e.g., a paper microarray activity), the student handouts, teachers notes, and pre- and postassessment tools. We did not test the simulation on other student populations, but based on teacher feedback, the simulation also may fit well in community college and in introductory and nonmajors' college biology curricula. PMID:17146040
Study of hepatitis B virus gene mutations with enzymatic colorimetry-based DNA microarray.
Mao, Hailei; Wang, Huimin; Zhang, Donglei; Mao, Hongju; Zhao, Jianlong; Shi, Jian; Cui, Zhichu
2006-01-01
To establish a modified microarray method for detecting HBV gene mutations in the clinic. Site-specific oligonucleotide probes were immobilized to microarray slides and hybridized to biotin-labeled HBV gene fragments amplified from two-step PCR. Hybridized targets were transferred to nitrocellulose membranes, followed by intensity measurement using BCIP/NBT colorimetry. HBV genes from 99 Hepatitis B patients and 40 healthy blood donors were analyzed. Mutation frequencies of HBV pre-core/core and basic core promoter (BCP) regions were found to be significantly higher in the patient group (42%, 40% versus 2.5%, 5%, P < 0.01). Compared with a traditional fluorescence method, the colorimetry method exhibited the same level of sensitivity and reproducibility. An enzymatic colorimetry-based DNA microarray assay was successfully established to monitor HBV mutations. Pre-core/core and BCP mutations of HBV genes could be major causes of HBV infection in HBeAg-negative patients and could also be relevant to chronicity and aggravation of hepatitis B.
2011-01-01
Background: Cytogenetic evaluation is a key component of the diagnosis and prognosis of chronic lymphocytic leukemia (CLL). We performed oligonucleotide-based comparative genomic hybridization microarray analysis on 34 samples with CLL and known abnormal karyotypes previously determined by cytogenetics and/or fluorescence in situ hybridization (FISH). Results: Using a custom designed microarray that targets >1800 genes involved in hematologic disease and other malignancies, we identified additional cryptic aberrations and novel findings in 59% of cases. These included gains and losses of genes associated with cell cycle regulation, apoptosis and susceptibility loci on 3p21.31, 5q35.2q35.3, 10q23.31q23.33, 11q22.3, and 22q11.23. Conclusions: Our results show that microarray analysis will detect known aberrations, including microscopic and cryptic alterations. In addition, novel genomic changes will be uncovered that may become important prognostic predictors or treatment targets for CLL in the future. PMID:22087757
Clustering gene expression data based on predicted differential effects of GV interaction.
Pan, Hai-Yan; Zhu, Jun; Han, Dan-Fu
2005-02-01
Microarray has become a popular biotechnology in biological and medical research. However, systematic and stochastic variabilities in microarray data are expected and unavoidable, resulting in the problem that the raw measurements have inherent "noise" within microarray experiments. Currently, logarithmic ratios are usually analyzed by various clustering methods directly, which may introduce bias interpretation in identifying groups of genes or samples. In this paper, a statistical method based on mixed model approaches was proposed for microarray data cluster analysis. The underlying rationale of this method is to partition the observed total gene expression level into various variations caused by different factors using an ANOVA model, and to predict the differential effects of GV (gene by variety) interaction using the adjusted unbiased prediction (AUP) method. The predicted GV interaction effects can then be used as the inputs of cluster analysis. We illustrated the application of our method with a gene expression dataset and elucidated the utility of our approach using an external validation.
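The core decomposition can be sketched directly: remove gene and variety main effects with a two-way ANOVA partition and keep the remaining gene-by-variety (GV) interaction terms as the inputs to clustering. The sketch below uses plain least-squares interaction estimates rather than the paper's adjusted unbiased prediction (AUP), and the expression table is invented.

```python
# Hedged sketch: estimate GV interaction effects by subtracting gene and
# variety means (two-way ANOVA partition). The paper uses AUP predictors;
# here we show plain least-squares estimates on hypothetical data.

def gv_interaction(y):
    """y: dict (gene, variety) -> expression, one observation per cell.
    Returns interaction estimates y_gv - mean_g - mean_v + grand_mean."""
    genes = sorted({g for g, _ in y})
    varieties = sorted({v for _, v in y})
    grand = sum(y.values()) / len(y)
    gmean = {g: sum(y[(g, v)] for v in varieties) / len(varieties)
             for g in genes}
    vmean = {v: sum(y[(g, v)] for g in genes) / len(genes)
             for v in varieties}
    return {(g, v): y[(g, v)] - gmean[g] - vmean[v] + grand
            for g in genes for v in varieties}

# g2 responds to variety in the opposite direction to g1: a true interaction.
y = {("g1", "A"): 1.0, ("g1", "B"): 3.0,
     ("g2", "A"): 4.0, ("g2", "B"): 2.0}
eff = gv_interaction(y)
```

The interaction estimates, purged of additive "noise" from gene and variety main effects, are what would be fed to the clustering step in place of raw log-ratios.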
Emerging Use of Gene Expression Microarrays in Plant Physiology
Wullschleger, Stan D.; Difazio, Stephen P.
2003-01-01
Microarrays have become an important technology for the global analysis of gene expression in humans, animals, plants, and microbes. Implemented in the context of a well-designed experiment, cDNA and oligonucleotide arrays can provide high-throughput, simultaneous analysis of transcript abundance for hundreds, if not thousands, of genes. However, despite widespread acceptance, the use of microarrays as a tool to better understand processes of interest to the plant physiologist is still being explored. To help illustrate current uses of microarrays in the plant sciences, several case studies that we believe demonstrate the emerging application of gene expression arrays in plant physiology were selected from among the many posters and presentations at the 2003 Plant and Animal Genome XI Conference. Based on this survey, microarrays are being used to assess gene expression in plants exposed to the experimental manipulation of air temperature, soil water content and aluminium concentration in the root zone. Analysis often includes characterizing transcript profiles for multiple post-treatment sampling periods and categorizing genes with common patterns of response using hierarchical clustering techniques. In addition, microarrays are also providing insights into developmental changes in gene expression associated with fibre and root elongation in cotton and maize, respectively. Technical and analytical limitations of microarrays are discussed and projects attempting to advance areas of microarray design and data analysis are highlighted. Finally, although much work remains, we conclude that microarrays are a valuable tool for the plant physiologist interested in the characterization and identification of individual genes and gene families with potential application in the fields of agriculture, horticulture and forestry.
Effects of Using a Web-Based Individualized Education Program Decision Making Tutorial
ERIC Educational Resources Information Center
Shriner, James G.; Carty, Susan J.; Rose, Chad A.; Shogren, Karrie A.; Kim, Myungjin; Trach, John S.
2013-01-01
This study explored the effects of a web-based decision support system ("Tutorial") for writing standards-based Individualized Education Programs (IEPs). A total of 35 teachers and 154 students participated across two academic years. Participants were assigned to one of three intervention groups based on level of "Tutorial"…
Murphy, Matthew; MacCarthy, M Jayne; McAllister, Lynda; Gilbert, Robert
2014-12-05
Competency profiles for occupational clusters within Canada's substance abuse workforce (SAW) define the need for skill and knowledge in evidence-based practice (EBP) across all its members. Members of the Senior Management occupational cluster hold ultimate responsibility for decisions made within addiction services agencies and therefore must possess the highest level of proficiency in EBP. The objective of this study was to assess the knowledge of the principles of EBP, and use of the components of the evidence-based decision making (EBDM) process, in members of this occupational cluster from selected addiction services agencies in Nova Scotia. A convenience sampling method was used to recruit participants from addiction services agencies. Semi-structured qualitative interviews were conducted with eighteen members of Senior Management. The interviews were audio-recorded, transcribed verbatim and checked by the participants. Interview transcripts were coded and analyzed for themes using content analysis, assisted by qualitative data analysis software (NVivo 9.0). Data analysis revealed four main themes: 1) Senior Management believe that addictions services agencies are evidence-based; 2) Consensus-based decision making is the norm; 3) Senior Management understand the principles of EBP; and 4) Senior Management do not themselves use all components of the EBDM process when making decisions, oftentimes delegating components of this process to decision support staff. Senior Management possess an understanding of the principles of EBP; however, when making decisions they often delegate components of the EBDM process to decision support staff. Decision support staff are not defined as an occupational cluster in Canada's SAW and have not been ascribed a competency profile. As such, there is no guarantee that this group possesses competency in EBDM.
There is a need to advocate for the development of a defined occupational cluster and associated competency profile for this critical group.
Career exploration behavior of Korean medical students
2017-01-01
Purpose This study analyzes the effects of medical students' social support and career barriers on career exploration behavior, mediated by career decision-making self-efficacy. Methods We applied the t-test to investigate differences in the variables based on gender and admission type. We also performed path analysis to verify the effect of perceived career barriers and social support on career exploration behavior, with career decision-making self-efficacy as a mediator. Results First, we noted statistically significant gender and admission type differences in social support, career barriers and career exploration behaviors. Second, social support and career barriers were found to influence career exploration behavior, with career decision-making self-efficacy as a mediating variable. Conclusion Social support and career barriers as perceived by medical students influenced their career exploration behavior, with their decision-making self-efficacy serving as a full mediator. Therefore, this study has educational implications for career program development and educational training for career decision-making self-efficacy. PMID:28870020
Hollen, Patricia J; Gralla, Richard J; Jones, Randy A; Thomas, Christopher Y; Brenin, David R; Weiss, Geoffrey R; Schroen, Anneke T; Petroni, Gina R
2013-03-01
Appropriate utilization of treatment is a goal for all patients undergoing cancer treatment. Proper treatment maximizes benefit and limits exposure to unnecessary measures. This report describes findings on the feasibility and acceptability of implementing a short, clinic-based decision aid and presents an in-depth clinical profile of the participants. This descriptive study used a prospective, quantitative approach to assess the feasibility and acceptability of a decision aid (DecisionKEYS for Balancing Choices) for use in clinical settings. It combined results of trials of patients with three different common malignancies. All groups used the same decision aid series. Participants included 80 patients with solid tumors (22 with newly diagnosed breast cancer, 19 with advanced prostate cancer, and 39 with advanced lung cancer) and their 80 supporters as well as their physicians and nurses, for a total of 160 participants and 10 health professionals. The decision aid was highly acceptable to patient and supporter participants in all diagnostic groups. It was feasible for use in clinic settings; the overall value was rated highly. All six physicians found the interactive format, delivered with the help of the nurse, feasible and acceptable. Nurses also rated the decision aid favorably. This intervention provides the opportunity to enhance decision making about cancer treatment and warrants further study including larger and more diverse groups. Strengths of the study included a theoretical grounding, feasibility testing of a practical clinic-based intervention, and summative evaluation of acceptability of the intervention by patient and supporter pairs. Further research also is needed to test the effectiveness of the decision aid in diverse clinical settings and to determine if this intervention can decrease overall costs.
Frize, Monique; Yang, Lan; Walker, Robin C; O'Connor, Annette M
2005-06-01
This research is built on the belief that artificial intelligence estimations need to be integrated into clinical social context to create value for health-care decisions. In sophisticated neonatal intensive care units (NICUs), decisions to continue or discontinue aggressive treatment are an integral part of clinical practice. High-quality evidence supports clinical decision-making, and a decision-aid tool based on specific outcome information for individual NICU patients will provide significant support for parents and caregivers in making difficult "ethical" treatment decisions. In our approach, information on a newborn patient's likely outcomes is integrated with the physician's interpretation and parents' perspectives into codified knowledge. Context-sensitive content adaptation delivers personalized and customized information to a variety of users, from physicians to parents. The system provides structuralized knowledge translation and exchange between all participants in the decision, facilitating collaborative decision-making that involves parents at every stage on whether to initiate, continue, limit, or terminate intensive care for their infant.
Steger, Doris; Berry, David; Haider, Susanne; Horn, Matthias; Wagner, Michael; Stocker, Roman; Loy, Alexander
2011-01-01
The hybridization of nucleic acid targets with surface-immobilized probes is a widely used assay for the parallel detection of multiple targets in medical and biological research. Despite its widespread application, DNA microarray technology still suffers from several biases and lack of reproducibility, stemming in part from an incomplete understanding of the processes governing surface hybridization. In particular, non-random spatial variations within individual microarray hybridizations are often observed, but the mechanisms underpinning this positional bias remain incompletely explained. This study identifies and rationalizes a systematic spatial bias in the intensity of surface hybridization, characterized by markedly increased signal intensity of spots located at the boundaries of the spotted areas of the microarray slide. Combining observations from a simplified single-probe block array format with predictions from a mathematical model, the mechanism responsible for this bias is found to be a position-dependent variation in lateral diffusion of target molecules. Numerical simulations reveal a strong influence of microarray well geometry on the spatial bias. Reciprocal adjustment of the size of the microarray hybridization chamber to the area of surface-bound probes is a simple and effective measure to minimize or eliminate the diffusion-based bias, resulting in increased uniformity and accuracy of quantitative DNA microarray hybridization.
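The diffusion-limited edge effect described in the record above lends itself to a toy numerical illustration. The sketch below is not the authors' model; it is a minimal one-dimensional finite-difference simulation with made-up parameters (D, k, grid size), in which probe-covered bins capture target with first-order kinetics while bins at the edge of the spotted region border a probe-free reservoir:

```python
import numpy as np

def simulate_edge_bias(n_bins=100, probe_lo=30, probe_hi=70,
                       D=1.0, k=0.5, dx=1.0, steps=20000):
    """Toy 1-D hybridization chamber: diffusion plus first-order capture.

    Probe spots occupy bins [probe_lo, probe_hi); the rest of the domain
    acts as a target reservoir.  All parameters are illustrative only.
    Returns (captured target per bin, boolean probe mask)."""
    dt = 0.2 * dx * dx / D          # well under the explicit stability limit dx^2/(2D)
    c = np.ones(n_bins)             # uniform initial target concentration
    captured = np.zeros(n_bins)     # target bound per bin so far
    probe = np.zeros(n_bins, dtype=bool)
    probe[probe_lo:probe_hi] = True
    for _ in range(steps):
        lap = np.zeros(n_bins)      # discrete Laplacian, no-flux walls
        lap[1:-1] = c[:-2] - 2 * c[1:-1] + c[2:]
        lap[0] = c[1] - c[0]
        lap[-1] = c[-2] - c[-1]
        c = c + D * dt / dx**2 * lap
        bound = k * dt * c * probe  # first-order capture on probe bins only
        captured += bound
        c -= bound
    return captured, probe

captured, probe = simulate_edge_bias()
edge = captured[30]     # first spotted bin, bordering the reservoir
centre = captured[50]   # middle of the spotted block
```

Bins at the boundary of the spotted block end up with markedly more captured target than central bins, qualitatively reproducing the elevated signal of boundary spots; shrinking the reservoir (moving the chamber walls toward the spotted area) shrinks the bias, as the abstract's conclusion suggests.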
Evaluate the ability of clinical decision support systems (CDSSs) to improve clinical practice.
Ajami, Sima; Amini, Fatemeh
2013-01-01
Prevalence of new diseases, the advancement of medical science, and the increase in referrals to health care centers create conditions in which medical errors can grow. Errors can involve medicines, surgery, diagnosis, equipment, or lab reports. Medical errors can occur anywhere in the health care system: in hospitals, clinics, surgery centers, doctors' offices, nursing homes, pharmacies, and patients' homes. According to the Institute of Medicine (IOM), 98,000 people die every year from preventable medical errors. In 2010, among all medical error records referred to the Iran Legal Medicine Organization, 46.5% of physicians and medical team members were found to be at fault. One of the new technologies that can reduce medical errors is the clinical decision support system (CDSS). This was an unsystematic review study. The literature on the ability of clinical decision support systems to improve clinical practice was searched with the help of libraries, books, conference proceedings, databases, and search engines such as Google and Google Scholar. For our searches, we employed the following keywords and their combinations: medical error, clinical decision support systems, computer-based clinical decision support systems, information technology, information system, health care quality, and computer systems, in the search areas of title, keywords, abstract, and full text. In this study, more than 100 articles and reports were collected and 38 of them were selected based on their relevancy. CDSSs are computer programs designed to assist health care providers. As knowledge-based tools, these systems can help health care managers analyze, evaluate, improve, and select effective solutions in clinical decisions. Therefore, they have a major role in reducing medical errors. The aim of this study was to assess the ability of CDSSs to improve clinical practice.
Bouaud, J; Lamy, J-B
2013-01-01
To summarize excellent research and to select the best papers published in 2012 in the field of computer-based decision support in healthcare. A bibliographic search focused on clinical decision support systems (CDSSs) and computer provider order entry was performed, followed by a double-blind literature review. The review process yielded six papers, illustrating various aspects of clinical decision support. The first paper is a systematic review of CDSS intervention trials in real settings, and considers different types of possible outcomes. It emphasizes the heterogeneity of studies and confirms that CDSSs can improve process measures but that evidence is lacking for other types of outcomes, especially clinical or economic. Four other papers tackle the safety of drug prescribing and show that CDSSs can be efficient in reducing prescription errors. The sixth paper exemplifies the growing role of ontological resources, which can be used for several applications including decision support. CDSS research has to be continuously developed and assessed. The wide variety of systems and of interventions limits the understanding of the factors behind successful CDSS implementations. Standardization in the characterization of CDSSs and in intervention trial reporting will help to overcome this obstacle.
NASA Astrophysics Data System (ADS)
Spahr, K.; Hogue, T. S.
2016-12-01
Selecting the most appropriate green, gray, and/or hybrid system for stormwater treatment and conveyance can prove challenging to decision makers across all scales, from site managers to large municipalities. To help streamline the selection process, a multi-disciplinary team of academics and professionals is developing an industry standard for selecting and evaluating the most appropriate stormwater management technology for different regions. To make the tool more robust and comprehensive, life-cycle cost assessment and optimization modules will be included to evaluate non-monetized and ecosystem benefits of selected technologies. Initial work includes surveying advisory board members based in cities that use existing decision support tools in their infrastructure planning process. These surveys will qualify the decisions currently being made and identify challenges within the current planning process across a range of hydroclimatic regions and city sizes. Analysis of social and other non-technical barriers to adoption of the existing tools is also being performed, with identification of regional differences and institutional challenges. Surveys will also gauge the regional appropriateness of certain stormwater technologies based on experiences in implementing stormwater treatment and conveyance plans. In addition to compiling qualitative data on existing decision support tools, a technical review of the components of each decision support tool in use will be performed. Gaps in each tool's analysis, such as the lack of certain critical functionalities, will be identified, and ease of use will be evaluated. Conclusions drawn from both the qualitative and quantitative analyses will be used to inform the development of the new decision support tool and its eventual dissemination.
NASA Astrophysics Data System (ADS)
Bremer, Leah L.; Delevaux, Jade M. S.; Leary, James J. K.; J. Cox, Linda; Oleson, Kirsten L. L.
2015-04-01
Incorporating ecosystem services into management decisions is a promising means to link conservation and human well-being. Nonetheless, planning and management in Hawai`i, a state with highly valued natural capital, has yet to broadly utilize an ecosystem service approach. We conducted a stakeholder assessment, based on semi-structured interviews, with terrestrial ( n = 26) and marine ( n = 27) natural resource managers across the State of Hawai`i to understand the current use of ecosystem services (ES) knowledge and decision support tools and whether, how, and under what contexts, further development would potentially be useful. We found that ES knowledge and tools customized to Hawai`i could be useful for communication and outreach, justifying management decisions, and spatial planning. Greater incorporation of this approach is clearly desired and has a strong potential to contribute to more sustainable decision making and planning in Hawai`i and other oceanic island systems. However, the unique biophysical, socio-economic, and cultural context of Hawai`i, and other island systems, will require substantial adaptation of existing ES tools. Based on our findings, we identified four key opportunities for the use of ES knowledge and tools in Hawai`i: (1) linking native forest protection to watershed health; (2) supporting sustainable agriculture; (3) facilitating ridge-to-reef management; and (4) supporting statewide terrestrial and marine spatial planning. Given the interest expressed by natural resource managers, we envision broad adoption of ES knowledge and decision support tools if knowledge and tools are tailored to the Hawaiian context and coupled with adequate outreach and training.
Dual phase multiplex polymerase chain reaction
Pemov, Alexander [Charlottesville, VA; Bavykin, Sergei [Darien, IL
2008-10-07
Highly specific and sensitive methods were developed for multiplex amplification of nucleic acids on supports such as microarrays. Based on a specific primer design, the methods include five types of amplification that proceed in a reaction chamber simultaneously. These comprise four types of multiplex amplification of a target DNA on a solid support, directed by forward and reverse complex primers immobilized to the support, and a fifth type, pseudo-monoplex polymerase chain reaction (PCR) of multiple targets in solution, directed by a single pair of unbound universal primers. The addition of the universal primers in the reaction mixture increases the yield over the traditional "bridge" amplification on a solid support by approximately ten times. Methods that provide multitarget amplification and detection of as little as 0.45-4.5×10^-12 g (equivalent to 10^2-10^3 genomes) of a bacterial genomic DNA are disclosed.
Polyadenylation state microarray (PASTA) analysis.
Beilharz, Traude H; Preiss, Thomas
2011-01-01
Nearly all eukaryotic mRNAs terminate in a poly(A) tail that serves important roles in mRNA utilization. In the cytoplasm, the poly(A) tail promotes both mRNA stability and translation, and these functions are frequently regulated through changes in tail length. To identify the scope of poly(A) tail length control in a transcriptome, we developed the polyadenylation state microarray (PASTA) method. It involves the purification of mRNA based on poly(A) tail length using thermal elution from poly(U) sepharose, followed by microarray analysis of the resulting fractions. In this chapter we detail our PASTA approach and describe some methods for bulk and mRNA-specific poly(A) tail length measurements of use to monitor the procedure and independently verify the microarray data.
High-throughput screening in two dimensions: binding intensity and off-rate on a peptide microarray.
Greving, Matthew P; Belcher, Paul E; Cox, Conor D; Daniel, Douglas; Diehnelt, Chris W; Woodbury, Neal W
2010-07-01
We report a high-throughput two-dimensional microarray-based screen, incorporating both target binding intensity and off-rate, which can be used to analyze thousands of compounds in a single binding assay. Relative binding intensities and time-resolved dissociation are measured for labeled tumor necrosis factor alpha (TNF-alpha) bound to a peptide microarray. The time-resolved dissociation is fitted to a one-component exponential decay model, from which relative dissociation rates are determined for all peptides with binding intensities above background. We show that most peptides with the slowest off-rates on the microarray also have the slowest off-rates when measured by surface plasmon resonance (SPR).
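The one-component exponential decay model mentioned above is easy to fit once background is subtracted, because log-intensity is then linear in time. A minimal sketch (the function name, parameters, and synthetic trace are illustrative, not taken from the paper):

```python
import numpy as np

def fit_off_rate(t, intensity):
    """Estimate k_off by linear least squares on log(intensity).

    Assumes a one-component decay I(t) = I0 * exp(-k_off * t) with
    background already subtracted; returns (k_off, I0)."""
    slope, log_i0 = np.polyfit(t, np.log(intensity), 1)
    return -slope, np.exp(log_i0)

# Synthetic dissociation trace: k_off = 0.05 per second, I0 = 1000
t = np.linspace(0, 60, 13)
signal = 1000.0 * np.exp(-0.05 * t)
k_off, i0 = fit_off_rate(t, signal)
```

For noisy real data a nonlinear least-squares fit with an explicit offset term (e.g. via scipy.optimize.curve_fit) would be more robust; the log-linear version is shown only because it is self-contained.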
Current Directions in Adding Value to Earth Observation Products for Decision Support
NASA Astrophysics Data System (ADS)
Ryker, S. J.
2015-12-01
Natural resource managers and infrastructure planners face increasingly complex challenges, given competing demands for resources and changing conditions due to climate and land use change. These pressures create demand for high-quality, timely data; for both one-time decision support and long-term monitoring; and for techniques to articulate the value of resources in monetary and nonmonetary terms. To meet the need for data, the U.S. government invests several billion dollars per year in Earth observations collected from satellite, airborne, terrestrial, and ocean-based systems. Earth observation-based decision support is coming of age; user surveys show that these data are used in an increasing variety of analyses. For example, since the U.S. Department of the Interior/U.S. Geological Survey's (USGS) 2008 free and open data policy for the Landsat satellites, downloads from the USGS archive have increased from 20,000 Landsat scenes per year to 10 million per year and climbing, with strong growth in both research and decision support fields. However, Earth observation-based decision support still poses users a number of challenges. Many of those Landsat downloads support a specialized community of remote sensing scientists, though new technologies promise to increase the usability of remotely sensed data for the larger GIS community supporting planning and resource management. Serving this larger community also requires supporting the development of increasingly interpretive products, and of new approaches to host and update products. For example, automating updates will add value to new essential climate variable products such as surface water extent and wildfire burned area extent. Projections of future urbanization in the southeastern U.S. are most useful when long-term land cover trends are integrated with street-level community data and planning tools. 
The USGS assessment of biological carbon sequestration in vegetation and shallow soils required a significant research investment in satellite and in situ measurements and biogeochemical and climate modeling, and is already providing decision support at a variety of scales; once operationalized, it will be a tool for adaptive management from field-scale soil and wetland conservation projects to national-scale policy.
Addy, Nii Antiaye; Shaban-Nejad, Arash; Buckeridge, David L; Dubé, Laurette
2015-01-23
Multi-stakeholder partnerships (MSPs) have become a widespread means for deploying policies in a whole of society strategy to address the complex problem of childhood obesity. However, decision-making in MSPs is fraught with challenges, as decision-makers are faced with complexity, and have to reconcile disparate conceptualizations of knowledge across multiple sectors with diverse sets of indicators and data. These challenges can be addressed by supporting MSPs with innovative tools for obtaining, organizing and using data to inform decision-making. The purpose of this paper is to describe and analyze the development of a knowledge-based infrastructure to support MSP decision-making processes. The paper emerged from a study to define specifications for a knowledge-based infrastructure to provide decision support for community-level MSPs in the Canadian province of Quebec. As part of the study, a process assessment was conducted to understand the needs of communities as they collect, organize, and analyze data to make decisions about their priorities. The result of this process is a "portrait", which is an epidemiological profile of health and nutrition in their community. Portraits inform strategic planning and development of interventions, and are used to assess the impact of interventions. Our key findings indicate ambiguities and disagreement among MSP decision-makers regarding causal relationships between actions and outcomes, and the relevant data needed for making decisions. MSP decision-makers expressed a desire for easy-to-use tools that facilitate the collection, organization, synthesis, and analysis of data, to enable decision-making in a timely manner. Findings inform conceptual modeling and ontological analysis to capture the domain knowledge and specify relationships between actions and outcomes. This modeling and analysis provide the foundation for an ontology, encoded using OWL 2 Web Ontology Language. 
The ontology is developed to provide semantic support for the MSP process, defining objectives, strategies, actions, indicators, and data sources. In the future, software interacting with the ontology can facilitate interactive browsing by decision-makers in the MSP in the form of concepts, instances, relationships, and axioms. Our ontology also facilitates the integration and interpretation of community data, and can help in managing semantic interoperability between different knowledge sources. Future work will focus on defining specifications for the development of a database of indicators and an information system to help decision-makers to view, analyze and organize indicators for their community. This work should improve MSP decision-making in the development of interventions to address childhood obesity.
Decision Support Systems for Operational Level Command and Control
1990-04-30
business-based. These definitions still have applicability to military command and control - the business of military operations. A synthesis of the ... other hand, there are such studies that were conducted in business environments. An eight-week empirical study was conducted, and the groups with access to a decision support system made significantly more effective decisions in a business simulation
Shared decision-making and decision support: their role in obstetrics and gynecology.
Tucker Edmonds, Brownsyne
2014-12-01
To discuss the role for shared decision-making in obstetrics/gynecology and to review evidence on the impact of decision aids on reproductive health decision-making. Among the 155 studies included in a 2014 Cochrane review of decision aids, 31 (29%) addressed reproductive health decisions. Although the majority did not show evidence of an effect on treatment choice, there was a greater uptake of mammography in selected groups of women exposed to decision aids compared with usual care; and a statistically significant reduction in the uptake of hormone replacement therapy among detailed decision aid users compared with simple decision aid users. Studies also found an effect on patient-centered outcomes of care, such as medication adherence, quality-of-life measures, and anxiety scores. In maternity care, only decision analysis tools affected final treatment choice, and patient-directed aids yielded no difference in planned mode of birth after cesarean. There is untapped potential for obstetricians/gynecologists to optimize decision support for reproductive health decisions. Given the limited evidence-base guiding practice, the preference-sensitive nature of reproductive health decisions, and the increase in policy efforts and financial incentives to optimize patients' satisfaction, it is increasingly important for obstetricians/gynecologists to appreciate the role of shared decision-making and decision support in providing patient-centered reproductive healthcare.
NASA Technical Reports Server (NTRS)
Engelland, Shawn A.; Capps, Alan
2011-01-01
Current aircraft departure release times are based on manual estimates of aircraft takeoff times. Uncertainty in takeoff time estimates may result in missed opportunities to merge into constrained en route streams and lead to lost throughput. However, technology exists to improve takeoff time estimates by using the aircraft surface trajectory predictions that enable air traffic control tower (ATCT) decision support tools. NASA's Precision Departure Release Capability (PDRC) is designed to use automated surface trajectory-based takeoff time estimates to improve en route tactical departure scheduling. This is accomplished by integrating an ATCT decision support tool with an en route tactical departure scheduling decision support tool. The PDRC concept and prototype software have been developed, and an initial test was completed at air traffic control facilities in Dallas/Fort Worth. This paper describes the PDRC operational concept, system design, and initial observations.
RECOVERING FILTER-BASED MICROARRAY DATA FOR PATHWAYS ANALYSIS USING A MULTIPOINT ALIGNMENT STRATEGY
The use of commercial microarrays is rapidly becoming the method of choice for profiling gene expression and assessing various disease states. Research Genetics has provided a series of well-defined biological and software tools to the research community for these analyses. Th...
Use of Network Inference to Elucidate Common and Chemical-specific Effects on Steroidogenesis
Microarray data is a key source for modeling gene regulatory interactions. Regulatory network models based on multiple datasets are potentially more robust and can provide greater confidence. In this study, we used network modeling on microarray data generated by exposing the fat...
Galfalvy, Hanga C; Erraji-Benchekroun, Loubna; Smyrniotopoulos, Peggy; Pavlidis, Paul; Ellis, Steven P; Mann, J John; Sibille, Etienne; Arango, Victoria
2003-01-01
Background Genomic studies of complex tissues pose unique analytical challenges for assessment of data quality, performance of statistical methods used for data extraction, and detection of differentially expressed genes. Ideally, to assess the accuracy of gene expression analysis methods, one needs a set of genes which are known to be differentially expressed in the samples and which can be used as a "gold standard". We introduce the idea of using sex-chromosome genes as an alternative to spiked-in control genes or simulations for assessment of microarray data and analysis methods. Results Expression of sex-chromosome genes was used as a true internal biological control to compare alternate probe-level data extraction algorithms (Microarray Suite 5.0 [MAS5.0], Model Based Expression Index [MBEI] and Robust Multi-array Average [RMA]), to assess microarray data quality and to establish some statistical guidelines for analyzing large-scale gene expression. These approaches were implemented on a large new dataset of human brain samples. RMA-generated gene expression values were markedly less variable and more reliable than MAS5.0 and MBEI-derived values. A statistical technique controlling the false discovery rate was applied to adjust for multiple testing, as an alternative to the Bonferroni method, and showed no evidence of false negative results. Fourteen probesets, representing nine Y- and two X-chromosome linked genes, displayed significant sex differences in brain prefrontal cortex gene expression. Conclusion In this study, we have demonstrated the use of sex genes as true biological internal controls for genomic analysis of complex tissues, and suggested analytical guidelines for testing alternate oligonucleotide microarray data extraction protocols and for adjusting for multiple comparisons in the statistical analysis of differentially expressed genes. 
Our results also provided evidence for sex differences in gene expression in the brain prefrontal cortex, supporting the notion of a putative direct role of sex-chromosome genes in differentiation and maintenance of sexual dimorphism of the central nervous system. Importantly, these analytical approaches are applicable to all microarray studies that include male and female human or animal subjects. PMID:12962547
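The false discovery rate control described above is commonly implemented with the Benjamini-Hochberg step-up procedure; a minimal sketch follows. The gene p-values are invented for illustration, and the paper does not specify which FDR procedure was used, so this is an assumed instance of the technique.

```python
# Sketch of Benjamini-Hochberg FDR control, a less conservative
# alternative to the Bonferroni correction for multiple testing.
# The p-values below are illustrative, not taken from the study.

def benjamini_hochberg(pvalues, alpha=0.05):
    """Return a list of booleans: True where the hypothesis is
    rejected while controlling the false discovery rate at alpha."""
    m = len(pvalues)
    # Sort p-values ascending, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha ...
    max_k = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            max_k = rank
    # ... and reject the k hypotheses with the smallest p-values.
    reject = [False] * m
    for rank, idx in enumerate(order, start=1):
        if rank <= max_k:
            reject[idx] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, alpha=0.05))
```

A Bonferroni correction would compare every p-value against alpha/m; the step-up rule rejects more hypotheses at the same nominal level, which is why it is preferred when some false discoveries are tolerable.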
Wilk, S; Michalowski, W; O'Sullivan, D; Farion, K; Sayyad-Shirabad, J; Kuziemsky, C; Kukawka, B
2013-01-01
The purpose of this study was to create a task-based support architecture for developing clinical decision support systems (CDSSs) that assist physicians in making decisions at the point-of-care in the emergency department (ED). The backbone of the proposed architecture was established by a task-based emergency workflow model for a patient-physician encounter. The architecture was designed according to an agent-oriented paradigm. Specifically, we used the O-MaSE (Organization-based Multi-agent System Engineering) method that allows for iterative translation of functional requirements into architectural components (e.g., agents). The agent-oriented paradigm was extended with ontology-driven design to implement ontological models representing the knowledge required by specific agents to operate. The task-based architecture allows for the creation of a CDSS that is aligned with the task-based emergency workflow model. It facilitates decoupling of executable components (agents) from embedded domain knowledge (ontological models), thus supporting their interoperability, sharing, and reuse. The generic architecture was implemented as a pilot system, MET3-AE, a CDSS to help with the management of pediatric asthma exacerbation in the ED. The system was evaluated in a hospital ED. The architecture allows for the creation of a CDSS that integrates support for all tasks from the task-based emergency workflow model, and interacts with hospital information systems. The proposed architecture also allows for reusing and sharing system components and knowledge across disease-specific CDSSs.
Kalogeropoulos, Dimitris A; Carson, Ewart R; Collinson, Paul O
2003-09-01
Given that clinicians presented with identical clinical information will act in different ways, there is a need to introduce into routine clinical practice methods and tools to support the scientific homogeneity and accountability of healthcare decisions and actions. The benefits expected from such action include an overall reduction in cost, improved quality of care, and greater patient and public satisfaction. Computer-based medical data processing has yielded methods and tools for managing the task away from the hospital management level and closer to the desired disease and patient management level. To this end, advanced applications of information and disease process modelling technologies have already demonstrated an ability to significantly augment clinical decision making as a by-product. The widespread acceptance of evidence-based medicine as the basis of cost-conscious and concurrently quality-wise accountable clinical practice suffices as evidence supporting this claim. Electronic libraries are one step towards an online status of this key healthcare delivery quality control environment. Nonetheless, to date, the underlying information and knowledge management technologies have failed to be integrated into any form of pragmatic or marketable online and real-time clinical decision making tool. One of the main obstacles that need to be overcome is the development of systems that treat both information and knowledge as clinical objects with the same modelling requirements. This paper describes the development of such a system in the form of an intelligent clinical information management system: a system which, at the most fundamental level of clinical decision support, facilitates the organised acquisition of clinical information and knowledge and provides a test-bed for the development and evaluation of knowledge-based decision support functions.
Heuristic-based information acquisition and decision making among pilots.
Wiggins, Mark W; Bollwerk, Sandra
2006-01-01
This research was designed to examine the impact of heuristic-based approaches to the acquisition of task-related information on the selection of an optimal alternative during simulated in-flight decision making. The work integrated features of naturalistic and normative decision making and strategies of information acquisition within a computer-based, decision support framework. The study comprised two phases, the first of which involved familiarizing pilots with three different heuristic-based strategies of information acquisition: frequency, elimination by aspects, and majority of confirming decisions. The second stage enabled participants to choose one of the three strategies of information acquisition to resolve a fourth (choice) scenario. The results indicated that task-oriented experience, rather than the information acquisition strategies, predicted the selection of the optimal alternative. It was also evident that of the three strategies available, the elimination by aspects information acquisition strategy was preferred by most participants. It was concluded that task-oriented experience, rather than the process of information acquisition, predicted task accuracy during the decision-making task. It was also concluded that pilots have a preference for one particular approach to information acquisition. Applications of outcomes of this research include the development of decision support systems that adapt to the information-processing capabilities and preferences of users.
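The elimination-by-aspects strategy favored by most participants can be sketched as a simple screening loop: alternatives are compared one attribute at a time, in priority order, and any alternative failing the cutoff on the current attribute is discarded. The diversion-airport scenario, attribute names, and cutoffs below are invented for illustration, not taken from the study's scenarios.

```python
# Hypothetical sketch of an elimination-by-aspects information
# acquisition strategy. All scenario data are invented.

def eliminate_by_aspects(alternatives, aspects):
    """alternatives: dict of name -> dict of attribute scores.
    aspects: list of (attribute, minimum) cutoffs in priority order.
    Returns the sorted names of the surviving alternatives."""
    remaining = dict(alternatives)
    for attribute, minimum in aspects:
        survivors = {name: attrs for name, attrs in remaining.items()
                     if attrs.get(attribute, 0) >= minimum}
        if not survivors:        # never eliminate every option
            break
        remaining = survivors
        if len(remaining) == 1:  # a single alternative remains
            break
    return sorted(remaining)

# Invented in-flight diversion choice, scored 0-10 per attribute.
airports = {
    "ALPHA":   {"weather": 7, "runway": 9, "fuel": 4},
    "BRAVO":   {"weather": 8, "runway": 6, "fuel": 8},
    "CHARLIE": {"weather": 9, "runway": 8, "fuel": 7},
}
print(eliminate_by_aspects(airports, [("weather", 8), ("runway", 7)]))
```

Because the strategy examines attributes sequentially rather than exhaustively, it reduces the amount of information a pilot must acquire, which is the property that makes it attractive under time pressure.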
Samantra, Chitrasen; Datta, Saurav; Mahapatra, Siba Sankar
2017-03-01
In the context of the underground coal mining industry, the increased economic issues surrounding the implementation of additional safety measure systems, along with growing public awareness of the need to ensure a high level of worker safety, have put great pressure on managers to find the best solution for selecting a safe as well as economically viable alternative. A risk-based decision support system plays an important role in finding such solutions amongst candidate alternatives with respect to multiple decision criteria. Therefore, in this paper, a unified risk-based decision-making methodology is proposed for selecting an appropriate safety measure system for an underground coal mining operation with respect to multiple risk criteria such as financial risk, operating risk, and maintenance risk. The proposed methodology uses interval-valued fuzzy set theory to model vagueness and subjectivity in the estimates of fuzzy risk ratings for making an appropriate decision. The methodology is based on aggregative fuzzy risk analysis and multi-criteria decision making. The selection decisions are made within the context of understanding the total integrated risk that is likely to be incurred in adopting a particular safety system alternative. The effectiveness of the proposed methodology has been validated through a real-time case study, and the resulting final priority ranking appears fairly consistent.
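The aggregative risk analysis step can be illustrated with a minimal sketch: each criterion's risk rating is an interval, the intervals are combined with criterion weights, and a crisp score is derived for ranking. The weights, intervals, and the interval-midpoint ranking rule below are assumptions for illustration, not the paper's exact interval-valued fuzzy formulation.

```python
# Illustrative aggregation of interval-valued risk ratings across
# criteria (financial, operating, maintenance). All numbers invented.

def aggregate_risk(ratings, weights):
    """ratings: list of (low, high) interval risk ratings per criterion.
    weights: criterion weights summing to 1. Returns the weighted
    interval and its midpoint, used here as a crisp ranking score."""
    low = sum(w * lo for (lo, _), w in zip(ratings, weights))
    high = sum(w * hi for (_, hi), w in zip(ratings, weights))
    return (low, high), (low + high) / 2

# Invented example: one candidate safety system rated on a 0-10 scale.
intervals = [(3.0, 5.0), (4.0, 6.0), (2.0, 4.0)]
weights = [0.5, 0.3, 0.2]
interval, score = aggregate_risk(intervals, weights)
print(interval, score)
```

Ranking candidate safety systems by such a crisp score (lower aggregate risk first) is one simple way to obtain the final priority ordering the methodology produces; interval-valued fuzzy approaches typically use richer defuzzification rules than the midpoint used here.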
Emerencia, Ando C; Boonstra, Nynke; Wunderink, Lex; de Jonge, Peter; Sytema, Sjoerd
2013-01-01
Background Mental health policy makers encourage the development of electronic decision aids to increase patient participation in medical decision making. Evidence is needed to determine whether these decision aids are helpful in clinical practice and whether they lead to increased patient involvement and better outcomes. Objective This study reports the outcome of a randomized controlled trial and process evaluation of a Web-based intervention to facilitate shared decision making for people with psychotic disorders. Methods The study was carried out in a Dutch mental health institution. Patients were recruited from 2 outpatient teams for patients with psychosis (N=250). Patients in the intervention condition (n=124) were provided an account to access a Web-based information and decision tool aimed to support patients in acquiring an overview of their needs and appropriate treatment options provided by their mental health care organization. Patients were given the opportunity to use the Web-based tool either on their own (at their home computer or at a computer of the service) or with the support of an assistant. Patients in the control group received care as usual (n=126). Half of the patients in the sample were patients experiencing a first episode of psychosis; the other half were patients with a chronic psychosis. Primary outcome was patient-perceived involvement in medical decision making, measured with the Combined Outcome Measure for Risk Communication and Treatment Decision-making Effectiveness (COMRADE). Process evaluation consisted of questionnaire-based surveys, open interviews, and researcher observation. Results In all, 73 patients completed the follow-up measurement and were included in the final analysis (response rate 29.2%). More than one-third (48/124, 38.7%) of the patients who were provided access to the Web-based decision aid used it, and most used its full functionality. 
No differences were found between the intervention and control conditions on perceived involvement in medical decision making (COMRADE satisfaction with communication: F1,68=0.422, P=.52; COMRADE confidence in decision: F1,67=0.086, P=.77). In addition, results of the process evaluation suggest that the intervention did not optimally fit in with routine practice of the participating teams. Conclusions The development of electronic decision aids to facilitate shared medical decision making is encouraged and many people with a psychotic disorder can work with them. This holds for both first-episode patients and long-term care patients, although the latter group might need more assistance. However, results of this paper could not support the assumption that the use of electronic decision aids increases patient involvement in medical decision making. This may be because of weak implementation of the study protocol and a low response rate. Trial Registration Dutch Trial Register (NTR) trial number: 10340; http://www.trialregister.nl/trialreg/admin/rctsearch.asp?Term=10340 (Archived by WebCite at http://www.webcitation.org/6Jj5umAeS). PMID:24100091